Thursday, May 21st 2015

Tech Source Releases Condor 4000 3U VPX Graphics Card for GPGPU Applications

Tech Source, Inc., an independent supplier of high-performance embedded video, graphics, and high-end computing solutions, has released the Condor 4000 3U VPX form factor graphics/video card. Designed for compute-intensive General Purpose Graphics Processing Unit (GPGPU) applications deployed in avionics and military technology, the Condor 4000 3U VPX graphics card delivers up to 768 GFLOPS of peak single-precision (48 GFLOPS double-precision) floating-point performance from the 640 shaders of the AMD Radeon E8860 GPU at its core.
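The quoted peak figures follow from the standard shaders × 2 FLOPs/cycle (one fused multiply-add) × clock formula. A quick sanity check; note the ~600 MHz engine clock is inferred from the quoted numbers, not stated in the release:

```python
# Peak GFLOPS = shaders * FLOPs per cycle * clock (GHz)
shaders = 640
flops_per_cycle = 2      # one fused multiply-add per shader per cycle
clock_ghz = 0.6          # ~600 MHz engine clock (inferred from the 768 GFLOPS figure)

peak_sp_gflops = shaders * flops_per_cycle * clock_ghz
print(peak_sp_gflops)    # 768.0

# The quoted double-precision figure corresponds to a 1/16 DP rate
peak_dp_gflops = peak_sp_gflops / 16
print(peak_dp_gflops)    # 48.0
```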

The Condor 4000 3U VPX card targets high-end graphics, parallel processing, and situational-awareness image and sensor processing applications such as radar, sonar, video streaming, and unmanned systems. The new card operates at higher speeds than an equivalent XMC form factor card because it occupies a dedicated slot with better cooling, which enables it to run at its full 45 W power.
Selwyn L. Henriques, president and CEO of Tech Source Inc., commented, "Our GPGPU customers want every ounce of performance they can get. So this 3U VPX graphics card is an attractive option as it delivers 60 percent better performance than the previous generation."

The Condor 4000 3U VPX board is fully conduction-cooled and has six digital video outputs (2 x DVI and 4 x DisplayPort) available from the rear VPX P2 connector on the card. It also features 2 GB of GDDR5 memory and supports current API versions, including OpenGL 4.2, DirectX 11.1, OpenCL 1.2, and DirectCompute 11, for GPGPU computing.

The Condor 4000 3U VPX card is available with Linux and Windows drivers by default, and other real-time operating systems, such as VxWorks, can be supported. Tech Source offers 15 years of product support and a board customization service for customers with specialized requirements.

8 Comments on Tech Source Releases Condor 4000 3U VPX Graphics Card for GPGPU Applications

#1
john_
Was anyone producing products like this in the past, or does this say something positive about AMD's GCN in the compute space?
#2
ShredBird
Generally, for compute-intensive tasks in portable applications (such as drones/robotics), FPGAs are preferred over CPUs/GPUs due to their lower latency, lower power usage, and ease of replacement.
#3
Steevo
This is the tech that has been missing from many unmanned systems: taking multiple inputs such as inertia, GPS, sonar (material identification/density), radar (location and environment mapping), and infrared, and combining them in real time for programmed logic and decision making. With that, autonomous machines can use the wider parallel processing to determine their exact location, map their surroundings, and decide where to go and what to avoid.

Let's say we wanted to map the oceans: how do you determine where you are when GPS doesn't work underwater? Get a fixed location, use a combination of sensors to pick a few points to triangulate a position (the same way a camera angle in games is used to determine distance to an item), then use them as reference points. The ability to use a programmable board with things like varying filters based on expected vs. actual feedback is new. Before, we had hardware filters built in; they may have worked in perfectly clear water, but add in debris and they became inaccurate, or thermal convection caused scintillation and distortion. So being able to run a Kalman filter on multiple systems and deterministically choose the most accurate one is new and a huge improvement.
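The "run a Kalman filter on multiple systems and pick the most accurate" idea above can be sketched in a few lines. This is purely illustrative (the 1-D filter, the noise values, and the selection-by-innovation rule are all assumptions for the example, not anything from the product): run several filters tuned for different conditions and keep the estimate whose predicted-vs-actual residual is smallest.

```python
import random

class Kalman1D:
    def __init__(self, r):
        self.x = 0.0       # state estimate (e.g. range to a reference point)
        self.p = 1.0       # estimate variance
        self.q = 0.01      # assumed process noise
        self.r = r         # measurement noise this filter expects
        self.innovation = 0.0

    def update(self, z):
        self.p += self.q                 # predict step
        self.innovation = z - self.x     # expected vs. actual feedback
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * self.innovation
        self.p *= 1.0 - k

# Three filters tuned for different conditions (clear water ... heavy debris)
filters = [Kalman1D(r) for r in (0.1, 1.0, 10.0)]

random.seed(42)
best = filters[0]
for _ in range(50):
    z = 5.0 + random.gauss(0.0, 0.3)    # noisy range reading, true value 5.0
    for f in filters:
        f.update(z)
    # deterministically pick the filter whose prediction best matched reality
    best = min(filters, key=lambda f: abs(f.innovation))

print(round(best.x, 2))    # estimate from the best-matched filter
```

Each filter runs independently, so on a GPU the bank of filters maps naturally onto parallel shader threads.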
#4
ShredBird
SteevoThis is the tech that has been missing from many unmanned systems: taking multiple inputs such as inertia, GPS, sonar (material identification/density), radar (location and environment mapping), and infrared, and combining them in real time for programmed logic and decision making. With that, autonomous machines can use the wider parallel processing to determine their exact location, map their surroundings, and decide where to go and what to avoid.

Let's say we wanted to map the oceans: how do you determine where you are when GPS doesn't work underwater? Get a fixed location, use a combination of sensors to pick a few points to triangulate a position (the same way a camera angle in games is used to determine distance to an item), then use them as reference points. The ability to use a programmable board with things like varying filters based on expected vs. actual feedback is new. Before, we had hardware filters built in; they may have worked in perfectly clear water, but add in debris and they became inaccurate, or thermal convection caused scintillation and distortion. So being able to run a Kalman filter on multiple systems and deterministically choose the most accurate one is new and a huge improvement.
As far as GPUs go, this one is pretty tame on the transistor count. Will it really offer computational advantages over a modern FPGA? The adaptive filtering does seem like a huge advantage, though; that is something software has over something not easily reconfigured on the fly like an FPGA. I'm just curious about the overall computational throughput.
#5
Caring1
ShredBirdAs far as GPUs go, this one is pretty tame on the transistor count. Will it really offer computational advantages over a modern FPGA? The adaptive filtering does seem like a huge advantage, though; that is something software has over something not easily reconfigured on the fly like an FPGA. I'm just curious about the overall computational throughput.
The whole point of an FPGA is the ability to reprogram it via a GUI and software.
#6
ShredBird
Caring1The whole point of an FPGA is the ability to reprogram it via a GUI and software.
I know that, but reprogramming gates takes far longer than just switching parameters in software. That's why I was curious whether there's an advantage to running a GPU over an FPGA in this case. I was under the impression that a modern FPGA has enough gates to accommodate multiple Kalman filters or adjust them on the fly. I could see a GPU being advantageous if that's not the case: if the FPGA needed to be reprogrammed while the robot is deployed in order to switch parameters, that's downtime during which you don't have localization or sensors.

I was just curious if in this specific application they were running into that problem with hardware based Kalman filters. If that makes sense.
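The software side of the trade-off being discussed here is easy to show. In a minimal sketch (the filter, readings, and noise values are all invented for illustration), retuning a filter on a software/GPU pipeline is just assigning a new parameter between two samples, with no reprogramming step and no sensor downtime:

```python
q = 0.01   # assumed process noise

def kf_step(x, p, z, r):
    """One 1-D Kalman update with measurement-noise assumption r."""
    p += q
    k = p / (p + r)
    return x + k * (z - x), p * (1.0 - k)

x, p = 0.0, 1.0

r = 0.1                          # parameters tuned for clear water
for z in (5.1, 4.9, 5.0):
    x, p = kf_step(x, p, z, r)

r = 10.0                         # debris detected: distrust measurements more,
                                 # applied instantly between two samples
for z in (7.5, 2.6, 5.2):        # noisier readings
    x, p = kf_step(x, p, z, r)

print(round(x, 1))               # estimate stays close to the true value 5.0
```

On an FPGA the equivalent change is cheap only if the parameter was designed in as a runtime register; otherwise it means a resynthesis/reconfiguration cycle, which is the downtime concern raised above.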
#7
Caring1
Straight over my head, and not afraid to admit it. :laugh:
The only experience I have is with mining using FPGAs, and the software was pre-written but could be easily modified.
#8
ShredBird
Caring1Straight over my head, and not afraid to admit it. :laugh:
The only experience I have is with mining using FPGAs, and the software was pre-written but could be easily modified.
No worries. I'm an engineer by profession, so I get a little carried away. I was hoping to get a dialogue going about the trade-offs. It's always good to know what's available out there so you can make informed decisions should you have to build a similar system, and talking to people with experience is the best way to learn.