Solving the Automotive Bandwidth Problem: Aquantia Partners with NVIDIA for 10GbE
by Ian Cutress on January 29, 2018 9:00 AM EST
Posted in:
- Automotive
- Networking
- SoCs
- NVIDIA
- Aquantia
- Xavier
- 10GbE
- Autonomous
One of the lesser-known challenges of fully autonomous vehicles is moving data around. There are usually two options: transport raw image and sensor data with very low latency but high bandwidth requirements, or use encoding tools and DSPs to send fewer bits at a higher latency. As we move into development of the first Level 4 (near-autonomous) and Level 5 (fully autonomous) vehicle systems, low latency has won out for safety and response time reasons. That means shifting data around, and a lot of it.
Bandwidth required, in Gbps, for raw video at a given resolution, frame rate, and color depth. E.g. 720p30 at 24-bit RGB (8 bits per color) is 0.66 Gbps
Raw camera data is big: a 1080p60 video with 8 bits of color per channel requires a bandwidth of 0.373 GB/s. That is gigabytes per second, or the equivalent of 2.99 gigabits per second, per camera. Now strap anywhere from 4 to 8 of these sensors onto a vehicle, add the switches needed to manage them and the redundancy required for autonomy to keep working if one element goes offline, and we hit a bandwidth problem. Gigabit Ethernet simply isn't enough.
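For readers who want to check the math, here is a quick back-of-the-envelope sketch (Python, purely illustrative; the helper name is ours) that reproduces the figures above:

```python
# Back-of-the-envelope raw (uncompressed) video bandwidth, matching
# the per-camera figures quoted in the article. Illustrative only.

def raw_video_gbps(width: int, height: int, fps: int, bpp: int) -> float:
    """Raw video bandwidth in gigabits per second."""
    return width * height * fps * bpp / 1e9

per_camera = raw_video_gbps(1920, 1080, 60, 24)   # 1080p60, 24-bit RGB
print(f"1080p60: {per_camera:.2f} Gbps = {per_camera / 8:.3f} GB/s")
# -> 1080p60: 2.99 Gbps = 0.373 GB/s

print(f"720p30: {raw_video_gbps(1280, 720, 30, 24):.2f} Gbps")  # -> 0.66 Gbps

# Eight such cameras comfortably swamp gigabit Ethernet
print(f"8 cameras: {8 * per_camera:.1f} Gbps")    # -> 23.9 Gbps
```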
Today's announcement is twofold. First, NVIDIA and Aquantia are announcing a partnership under which Aquantia-based network controllers and PHYs will be used inside NVIDIA's DrivePX Xavier platform, and subsequently the Pegasus platform as well. Second, Aquantia is announcing its new automotive product stack, AQcelerate, consisting of three chips depending on the automotive networking requirement.
Aquantia AQcelerate for Automotive

| Product | Type | Input | Output | Package Size (FCBGA) |
|---------|------|-------|--------|----------------------|
| AQC100 | PHY | 2500Base-X / USXGMII / XFI / KR | 10GbE / 5GbE / 2.5GbE | - |
| AQVC100 | MAC | XFI | PCIe 2/3 x2/x4 | 7x11 mm |
| AQVC107 | Both | PCIe 2/3 x1/x2/x4 | 10GbE / 5GbE / 2.5GbE | 12x14 mm |

Target use cases (all three parts): ADAS cameras, parking assist sensors, telematics, audio/video, infotainment.
Of the three new chips, one is a PHY, one is a PCIe network controller, and the third combines the two. The PHY can take standard camera inputs (2500BASE-X, USXGMII, and XFI) and send the data over multi-gigabit Ethernet as required. The controller can take standard XFI 10 Gb SerDes data and output directly to PCIe, while the combination chip acts as a regular MAC/PHY combo, converting Ethernet data to PCIe. All three chips are built on a 28nm process (Aquantia works with both TSMC and GlobalFoundries, but stated that for these products the fab is not being announced) and are qualified to the AEC-Q100 industry standard.
Block diagrams of the three AQcelerate chips
The benefit of using multi-gigabit Ethernet, as explained to us by Aquantia, is that it allows for a 2.5G connection using only a single standard twisted pair, 5G over two pairs, and up to 10G over four pairs. Current automotive networking systems are based on single-pair 100/1000Mbit technology, which is insufficient for the high-bandwidth, low-latency requirements that companies like NVIDIA put into their Level 4/5 systems.
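To illustrate that pairing of pair count and rate, here is a minimal sketch (Python; the option table and helper are our own illustration, not anything from Aquantia) that picks the lightest cable for a required data rate:

```python
# Illustrative cable selection based on the pair-count/rate pairing
# described above (1 pair -> 2.5G, 2 pairs -> 5G, 4 pairs -> 10G).

PAIR_OPTIONS = [
    (1, 2.5),   # single twisted pair -> 2.5G-class rate
    (2, 5.0),   # dual pair           -> 5G
    (4, 10.0),  # quad pair           -> 10G
]

def min_pairs_for(gbps_needed: float) -> tuple[int, float]:
    """Fewest twisted pairs whose link rate covers the required bandwidth."""
    for pairs, rate in PAIR_OPTIONS:
        if rate >= gbps_needed:
            return pairs, rate
    raise ValueError("rate exceeds a single 10G link; aggregate multiple links")

# One raw 1080p60 camera at ~2.99 Gbps needs a dual-pair 5G link
print(min_pairs_for(2.99))  # -> (2, 5.0)
```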
These chips were on Aquantia's roadmap before its collaboration with NVIDIA; NVIDIA approached Aquantia looking for something to work with, given Aquantia's progress on multi-gigabit Ethernet ahead of its rivals. We are told that the silicon does nothing special or specific to NVIDIA, allowing other companies keen on automotive technology to use Aquantia as well. With Aquantia's lead in the multi-gigabit Ethernet space over, say, Intel, Qualcomm, and Realtek, it seems that the only option at this point for wired connectivity, if you need to send raw data, is something like this. The lead time for the collaboration seems to be substantial, however: Aquantia stated that NVIDIA's Gary Shapiro recorded promotional material for them in the middle of last year, yet Xavier was announced in 2016, so it is likely that Aquantia and NVIDIA were looking at integration before then.
A quick side discussion on managing all this data: if there is 16 GB/s of sensor data flying around, the internal switches and SoCs have to be able to handle it. At CES, NVIDIA provided a base block diagram of the Xavier SoC, including some details about its custom ARM cores, its GPU, its DSPs, and its networking.
Image via CNX-Software
The slide shows that the silicon has gigabit and 10 gigabit Ethernet embedded (so it just needs a PHY to work), as well as 109 Gbps of total networking support. The video processor supports 1.8 gigapixel/s decode, which, if we plug in some numbers (1080p60 = ~124 Mpixel/s), allows for about a dozen or so cameras at 8-bit color, or a combination of 4K cameras and other sensors. The Xavier slide also shows the ISP, capable of 1.5 gigapixel/s.
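A quick sanity check on that headroom claim (Python, illustrative only):

```python
# How many raw camera streams does 1.8 gigapixel/s of decode cover?
# The 1.8 Gpixel/s figure comes from NVIDIA's CES slide.

DECODE_PPS = 1.8e9                        # pixels/s of decode on Xavier
pix_1080p60 = 1920 * 1080 * 60            # ~124.4 Mpixel/s per camera
pix_4k30 = 3840 * 2160 * 30               # ~248.8 Mpixel/s per camera

print(f"1080p60 streams: {DECODE_PPS / pix_1080p60:.1f}")  # ~14.5, "a dozen or so"
print(f"4K30 streams:    {DECODE_PPS / pix_4k30:.1f}")     # ~7.2
```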
A mockup example from Aquantia showed a potential Level 4/5 autonomous arrangement, with 10 RADAR/LIDAR/SONAR sensors, 8 cameras, and a total of 18 PHYs, two controllers, and three switches. Bearing in mind that there is a level of redundancy in these systems (cameras and sensors should connect to at least two switches, if one CPU fails then another can take over, etc.), this is a lot of networking silicon to go into a single car, and a large opportunity for anyone who can get multi-gigabit data transfer done right. The question then comes down to power, which is something Aquantia is not revealing at this time, preferring instead to let NVIDIA quote a system-wide power figure.
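To put rough numbers on that mockup, here is an illustrative aggregate-bandwidth tally (Python); the per-RADAR/LIDAR/SONAR rate is our own assumption, not an Aquantia figure:

```python
# Rough aggregate-bandwidth tally for the mockup topology above
# (8 cameras, 10 RADAR/LIDAR/SONAR sensors). CAMERA_GBPS is the raw
# 1080p60 figure from earlier; OTHER_SENSOR_GBPS is purely assumed.

CAMERA_GBPS = 2.99
OTHER_SENSOR_GBPS = 0.1    # assumed ballpark per RADAR/LIDAR/SONAR unit

total = 8 * CAMERA_GBPS + 10 * OTHER_SENSOR_GBPS
print(f"~{total:.1f} Gbps aggregate sensor traffic")        # ~24.9 Gbps
print(f"within Xavier's 109 Gbps budget: {total < 109}")    # True, pre-redundancy
```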
The image at the top is the setup shown to us by Aquantia at CES, demonstrating a switch using AQcelerate silicon capable of supporting various cables, including the vital 2.5 Gbps over a single pair.
Related Reading
- Aquantia Launches New 2.5G/5G Multi-Gigabit Network Controllers for PCs
- Aquantia Launch AQtion 5G/2.5G/1G Multi-Gigabit Ethernet Cards (NICs) for PCIe
- Lower Cost 10GBase-T Switches Coming: 4, 5 and 8-port Aquantia Solutions at ~$30/Port
- Dell Now Offers Aquantia AQtion AQN-108-Based 5 GbE Cards with Select PCs
Source: Aquantia
Comments
rhysiam - Monday, January 29, 2018 - link
The cameras need power too, so even if you went fibre for the data you'd still need to run copper anyway. That'd mean two cables and an extra point of failure.

Amandtec - Monday, January 29, 2018 - link
Optics may also shatter if there is an initial but not serious impact in an accident, rendering the car unable to function properly for the remainder of the 'accident process'.

mode_13h - Tuesday, January 30, 2018 - link
Yeah, kind of like how you've got little brains distributed all over your body....not! And while some processing does happen in the optic nerve, that's all about the high latency of nerve fibers.
Distributed processing would add cost to the sensor modules and potentially add cooling requirements or perhaps at least make them more bulky and difficult to mount. It also increases the set of potential failures they can have and increases overall system complexity.
If the processing can be handled in a centralized fashion, then why not? It simplifies a lot of problems, like fault tolerance. Another benefit is that you could upgrade everything by simply swapping out a single compute module.
Stochastic - Monday, January 29, 2018 - link
It's kind of interesting how the evolution of self-driving tech might mirror the evolution of our own nervous system in some respects. Our visual stream has similar latency/bandwidth/computation tradeoffs. For instance, only our foveal vision is high acuity, because the optic nerve simply doesn't have the bandwidth to transmit an entire visual field's worth of high fidelity data. There are many other insights about information processing that can be gleaned from the mammalian nervous system.

mode_13h - Tuesday, January 30, 2018 - link
Yeah, you're right. They should use foveal processing and then add motors to pan and tilt each of the car's cameras, like real eyeballs. That would surely be an improvement, plus it would sound cool to hear the car looking around it all the time, and it would keep mechanics employed replacing all of those motors.

And then, like real retinas, maybe they can add a blind spot to the sensors. It must be a good solution, because it's the one nature arrived at, and natural designs are always globally optimal and perfect, right? It's why humans are incapable of any perceptual or cognitive errors, so we should design computers to be exactly the same as us. The more limitations of biology we can faithfully reproduce in copper and silicon, the better they will surely be.
But why stop there? Maybe they could switch from 10 GbE to some sort of electro-chemical signalling mechanism, to make the car more natural and feel more alive. Increasing latency and decreasing bandwidth can only be a good thing, since it will make the car's network more like the nervous system of animals, which are the pinnacle of all design in the universe, regardless of biological and material limitations.
mlvols - Tuesday, January 30, 2018 - link
I think I sensed sarcasm, but maybe it's just my biological limitations playing tricks on me...

Baub - Monday, January 29, 2018 - link
Firstly, I agree in pushing for higher bandwidth in all infrastructure. I have gigabit internet, and I wish it was more widely adopted across the internet. Now, does it seem like 3Gbps is a bit high for 1080p30 at 8 bits per channel? It seems like 10Gbps isn't even a drop in the bucket if you had 8 or 10 cameras going at that bandwidth. How much compression versus latency are we talking about? Smartphones can record much lower bandwidth video in milliseconds; 30 FPS has a 33.3 millisecond wait period before the next frame has to be recorded. It just seems like the figures are inflated.

Billy Tallis - Monday, January 29, 2018 - link
The bit rates in that slide are for uncompressed video. PCs operate in the far right column of 24 bits per pixel, 8 each for red, green, and blue. If your computer vision system is monochromatic, then you may be able to get away with far fewer bits per pixel.

Alternatively, you can read that table as indicating the bit rate when compression results in an average of e.g. 8 bits per pixel, which would be 3:1 compression if starting with 24bpp RGB data.
N Zaljov - Tuesday, January 30, 2018 - link
This might be the stupidest question in this regard, but: is there a reason not to use MIPI C-PHY for the sensors instead of GbE, apart from it being a standard for embedded stuff (well, the sensors would actually count as embedded, but whatever...)?

I'm asking because there's not much info available (at least openly) about maximum wiring lengths, SNR, distortion in general, etc. that could turn C-PHY into a worthless sucker for autonomous driving solutions.
Kevin G - Tuesday, January 30, 2018 - link
If a manufacturer wants really low latency and high quality imaging, they'll skip Ethernet entirely and just use SDI. No conversion latency from a camera and no network overhead. 12 Gbit bandwidth on copper is possible today, with a planning group working on a 24 Gbit version.

There is also the potential to move to twinaxial cabling to further reliability. Everything automotive seems to be based on commodity specs but slightly modified (see HDMI Type-E).