Hi all,
I'm working on commissioning a security system for a client. It comprises a number of workstations custom built by myself, but despite following the guidelines of the software vendor whose application manages and displays up to 40 x H264 video streams per workstation, the two workstations running an 8-screen video wall setup are coming up short in compute muscle when decoding the required H264 streams.
The video wall workstation set up is as follows:
CPU: Intel Core i7 3770 3.4GHz (Ivy Bridge)
MB: Asus P8B WS C206
RAM: 8GB Corsair Vengeance DDR3 PC3-15000C9 1866MHz Dual Channel (running XMP1 profile).
GPU: AMD FirePro V7900
Both systems are stable, and since they are required to work 24/7 for the next 3-5 years, I made a conscious decision to stay away from K chips and from playing with voltages just to gain a marginal increase in performance.
The situation is that when each of the 4 x 55" 1080P screens is loaded with a 3x3 tile layout, I have 36 x H264 streams @ 3-5Mbps each being decoded exclusively by the CPU. Unfortunately the GPU is only used for scaling and rendering in this instance; no decoding workload is handed off by the video management software, and I'm seeing 70%+ CPU utilisation with peaks up to 90%+ during busy scenes.
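For a sense of scale, here's a back-of-envelope estimate of the aggregate decode workload. The per-stream resolution and frame rate are my assumptions (1080p @ 25fps per camera, not confirmed figures from the vendor):

```python
# Rough estimate of the aggregate H264 decode workload on one video wall
# workstation. Assumed: each of the 36 streams is 1080p at 25 fps.
streams = 36
width, height, fps = 1920, 1080, 25  # assumed per-stream format

pixels_per_sec = streams * width * height * fps
macroblocks_per_sec = pixels_per_sec // (16 * 16)  # H264 uses 16x16 macroblocks

print(f"Aggregate: {pixels_per_sec / 1e6:.0f} Mpixel/s decoded, "
      f"{macroblocks_per_sec / 1e6:.2f} M macroblocks/s")
```

That works out to roughly 1.87 Gpixel/s of decode throughput landing entirely on four Ivy Bridge cores, which goes some way to explaining the 70-90% utilisation.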
I'm currently at a crossroads trying to decide on the next step up in hardware. I really need to keep CPU utilisation below 70% so the workstations remain responsive when the end client modifies the video wall layouts; at 90%+ utilisation those changes become stuttery, and there's no margin left for peaks during busy times.
I am considering the value of moving to a dual-Xeon platform on LGA2011, but these chips are Sandy Bridge-EP based and mostly clocked below 3GHz, unless of course one is prepared to pay £2K plus per CPU.
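To weigh the lower clocks against the extra cores, here's a crude aggregate-throughput comparison. It assumes decode work scales roughly linearly with cores x clock (a big assumption, since per-stream H264 decode is largely single-threaded, so per-core clock still matters), and the 2.4GHz 8-core Xeon figure is an illustrative example, not a specific SKU recommendation:

```python
# Crude "core-GHz" comparison, assuming the decoder spreads 36 streams
# evenly across cores and throughput scales linearly with cores x clock.
def aggregate_ghz(sockets, cores_per_socket, base_ghz):
    return sockets * cores_per_socket * base_ghz

current_i7 = aggregate_ghz(1, 4, 3.4)  # i7 3770, current workstation
dual_xeon  = aggregate_ghz(2, 8, 2.4)  # e.g. a pair of 8-core SNB-EP parts

print(f"i7 3770: {current_i7:.1f} core-GHz, dual Xeon: {dual_xeon:.1f} core-GHz")
```

On that naive model a dual 8-core setup is nearly 3x the aggregate compute despite the ~1GHz clock deficit, though real-world gains depend heavily on how well the video management software threads across 16 cores.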
I'm not looking for a turnkey solution here, rather an open discussion and feedback on possible steps up in CPU number-crunching performance from my current position.
Any constructive feedback/opinions are welcome.
best regards
Humour