Bosch and NVIDIA Team Up for Xavier-Based Self-Driving Systems for Mass Market Cars
by Anton Shilov on March 18, 2017 2:00 PM EST - Posted in
- SoCs
- Arm
- NVIDIA
- Volta
- Xavier
- Automotive
- Bosch
- Self-Driving Cars
Bosch and NVIDIA on Thursday announced plans to co-develop self-driving systems for mass-market vehicles. The solutions will use NVIDIA’s next-generation SoC, codenamed Xavier, as well as the company’s AI-related IP, while Bosch will contribute its expertise in automotive electronics and navigation.
Automakers typically mention self-driving cars in the context of premium and commercial vehicles, but it is fairly clear that, given the opportunity, autonomous driving will become part of the vast majority of cars sold over the next decade and beyond. Bosch and NVIDIA are working on an autopilot platform for mass-market vehicles that will not cost as much as people might think and can therefore be deployed widely. To build the systems, the two companies will use NVIDIA’s upcoming Drive PX platform based on the Xavier system-on-chip, a next-gen Tegra processor set to enter mass production sometime in 2018 or 2019.
Bosch and NVIDIA did not disclose many details about their upcoming self-driving systems, but indicated that they are targeting Level 4 autonomy, in which a car can drive on its own without any human intervention. To enable this, NVIDIA will offer its Xavier SoC, which features eight general-purpose custom ARMv8-A cores designed in-house, a GPU based on the Volta architecture with 512 stream processors, hardware encoders/decoders for video streams at up to 7680×4320 resolution, and various I/O capabilities.
From a performance point of view, Xavier is now expected to hit 30 Deep Learning Tera-Ops (DL TOPS), a metric for measuring 8-bit integer operations, which is 50% higher than NVIDIA’s Drive PX 2, the platform currently used by various automakers to build their autopilot systems (e.g., Tesla Motors uses the Drive PX 2 in its vehicles). NVIDIA’s goal is to deliver this at 30 W, for an efficiency ratio of 1 DL TOPS per watt. This is a rather low level of power consumption given that the chip is expected to be produced using TSMC’s 16 nm FinFET+ process technology, the same process used to make the Tegra (Parker) SoC at the heart of the Drive PX 2.
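As a quick sanity check on those numbers, the stated 30 DL TOPS at 30 W works out to exactly 1 DL TOPS per watt. The short sketch below is purely illustrative; the Drive PX 2 figure it derives is only what the "50% higher" claim implies, not an official NVIDIA specification.

```python
# Back-of-the-envelope check of the efficiency figure quoted above.
# The 30 DL TOPS and 30 W values are NVIDIA's stated targets; the
# Drive PX 2 number below is inferred from the "50% higher" claim.

xavier_dl_tops = 30.0   # deep-learning tera-ops (8-bit integer) per second
xavier_power_w = 30.0   # stated power target

efficiency = xavier_dl_tops / xavier_power_w
print(f"Xavier efficiency: {efficiency:.1f} DL TOPS per watt")      # -> 1.0

# Throughput implied for Drive PX 2 if Xavier is 50% higher:
implied_px2_dl_tops = xavier_dl_tops / 1.5
print(f"Implied Drive PX 2 throughput: {implied_px2_dl_tops:.0f} DL TOPS")  # -> 20
```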
The developers say that the next-gen Xavier-based Drive PX will be able to fuse data from multiple sensors (cameras, lidar, radar, ultrasonic, etc.), and that its compute performance will be sufficient to run deep neural nets that sense and understand the surroundings, predict the behavior and position of other objects, and ensure the safety of the driver, all in real time. Given that the upcoming Drive PX will be more powerful than the Drive PX 2, it should better satisfy the demands of automakers; and since we are talking about a completely autonomous self-driving system, the more compute efficiency NVIDIA can extract from Xavier, the better.
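Neither company has published an API for the system, but conceptually the perception stack described above follows a sense, fuse, predict, plan loop. The sketch below is a minimal, hypothetical illustration of that flow; every class and function name here is invented for the example and is not part of any NVIDIA or Bosch SDK.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    """A single object hypothesis produced by the perception stage."""
    kind: str             # e.g. "vehicle", "pedestrian", "cyclist"
    position_m: tuple     # (x, y) in the car's reference frame, metres
    velocity_mps: tuple   # estimated (vx, vy), metres per second

def fuse_sensors(camera_frames, lidar_points, radar_tracks) -> List[Detection]:
    """Combine raw sensor data into one list of object detections.
    In a real system this is where the deep neural networks would run
    on the SoC's GPU; here it is only a placeholder."""
    raise NotImplementedError("replace with actual DNN inference")

def predict(detections: List[Detection], horizon_s: float) -> List[Detection]:
    """Naive constant-velocity prediction of where each object will be."""
    return [
        Detection(d.kind,
                  (d.position_m[0] + d.velocity_mps[0] * horizon_s,
                   d.position_m[1] + d.velocity_mps[1] * horizon_s),
                  d.velocity_mps)
        for d in detections
    ]

def drive_step(sensors, controller, horizon_s: float = 2.0) -> None:
    """One iteration of the sense -> fuse -> predict -> plan -> act loop."""
    detections = fuse_sensors(*sensors.read())
    future = predict(detections, horizon_s)
    plan = controller.plan(detections, future)   # path planning is out of scope here
    controller.actuate(plan)
```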
Speaking of the SoC, it is highly likely that the combination of its performance, power consumption, and level of integration is what attracted Bosch to the platform. A single chip with moderate power consumption means that Bosch engineers will be able to design relatively compact and reasonably priced self-driving systems and then help automakers integrate them into their vehicles.
Unfortunately, we do not know which car brands will use the autopilot systems co-developed by Bosch and NVIDIA. Bosch supplies automotive electronics to many carmakers, including PSA, which owns the Peugeot, Citroën and Opel brands.
Neither Bosch nor NVIDIA gave any indication of when they expect actual cars featuring their autopilot systems to hit the roads. But since NVIDIA plans to start sampling Xavier in late 2017 and to mass produce it in 2018 or 2019, it is logical to expect the first commercial applications based on the SoC to become available sometime in the 2020s, after the (extensive) validation and certification period required for an automotive system.
Source: NVIDIA
43 Comments
jrs77 - Saturday, March 18, 2017 - link
Self-driving cars might work technologically, but they'll never be able to clear the philosophical hurdle necessary to be allowed on EU roads. What does the system do when it is driving down a road, a motorcycle is coming the other way, and suddenly a stroller with a child inside rolls onto the street with not enough road ahead to stop before hitting it?
A: hit the stroller with the child
B: swing to the left hitting the motorcycle
C: swing to the right into the people on the walkway
As long as that question isn't answered to the extent that it can be signed into law which life is worth less in that case and which option the car should take, such automated systems don't belong on our roads.
Such decisions have to be made by humans and not by machines.
Alistair - Saturday, March 18, 2017 - link
Actually that's an easy decision, made by people in a split second all the time. If you'll hit something else, you just try to stop and hit what is in your lane. You don't actually know what the consequences of moving off the road onto the walkway will be, and oncoming collisions are more fatal. Trying to stop and hitting what is in your lane is simple. That's what usually happens in real accidents anyway.
asmian - Saturday, March 18, 2017 - link
Agreed. And the assumption behind that problem is that the car can tell that the stroller contains a baby and is therefore something "worth" making an extra attempt to miss. I'm pretty sure all this car-driving AI isn't going to be connected to Star Trek sensors that can tell human lifeforms apart from mere objects that happen to be on collision courses. The logic isn't going to be differentiating between "is it a baby stroller or a shopping trolley?" but simply "can I stop in time?"
I'd suggest that a human in that situation, however traumatised by unavoidably hitting the child, would not be prosecuted if there was no possibility of averting the accident, which is the Kobayashi-Maru (no-win) scenario you are trying to set up. (You canna change the laws of physics - momentum of car, available braking force, likelihood of skid on road surface, distance to impact.) So why try to create a higher philosophical hurdle to pass for a machine that would have even better reaction times and control over the car than a human driver in that situation? The whole basis of the argument is nonsensical.
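(For reference, the "can I stop in time?" check described above is ordinary kinematics. The minimal sketch below uses assumed example values for speed, road friction and reaction time; nothing here comes from the Drive PX stack.)

```python
# Minimal "can I stop in time?" check using standard kinematics.
# All numeric values below are assumed examples, not measured data.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance_m(speed_mps: float, friction_mu: float,
                        reaction_time_s: float) -> float:
    """Distance covered during the reaction time plus braking distance
    v^2 / (2 * mu * g), assuming maximum braking on a flat road."""
    reaction = speed_mps * reaction_time_s
    braking = speed_mps ** 2 / (2 * friction_mu * G)
    return reaction + braking

# 50 km/h on dry asphalt (mu ~ 0.7); a computer reacts far faster than a human.
speed = 50 / 3.6
print(f"human   (1.5 s reaction): {stopping_distance_m(speed, 0.7, 1.5):.1f} m")
print(f"machine (0.1 s reaction): {stopping_distance_m(speed, 0.7, 0.1):.1f} m")
```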
ddriver - Sunday, March 19, 2017 - link
What's more worrying here is that despite all the marketing monikers, machine learning is not AI. It does fairly well in situations it has been allowed to make many mistakes in, over and over, but once it encounters a previously untested set of environmental factors, it cannot reason its way out; it can only fail.
Granted, if the hardware survives, that learned knowledge could be incorporated into future releases, getting better over time, but it will surely suck to be an early adopter and end up disabled or dead as part of the learning experience.
Having some actual AI would surely help prevent or at the very least severely mitigate the damage of learning on the go, with actual human beings inside, but then again, I doubt the industry will push for an actual AI any time soon, mostly because it will do and say things that make sense rather than do and say things that benefit the industry.
qap - Sunday, March 19, 2017 - link
Actually no. It is learning general rules of behavior (from specific instances) and it will apply them to any situation. In some situations those rules will fail, but that is no different from the way you or any other human approaches situations (a child will also always fail the "hot stove situation" the first time; that doesn't mean it's not intelligent). And as you pointed out, it can LEARN from those situations. Therefore it is AI.
There is a difference in how complex the rules it can learn are for now, and how quickly it learns them, but basically that's all. Obviously you can draw an arbitrary line for how complex the rules must be and how fast it must learn them to be called "true AI", but that's just an arbitrary line.
ddriver - Sunday, March 19, 2017 - link
Let me put it this way - it sucks at improvising. It could not possibly succeed without failing, usually many times over.
This is not how intellect works. I don't need to jump off a bridge once, much less multiple times, before I establish that it will cripple or even kill me. And I don't get to. That's what separates intellect from a programmed machine - the intellect can reason, the machine can only follow a static set of predefined rules.
Which is why I refuse to call machine learning AI, it doesn't reason, it merely automates the programming by means of failing until it succeeds. It is a purely statistical approach. There is no reasoning involved, there is no understanding either.
That does not diminish the applications of machine learning. For driving cars it will likely be much safer even now, in its infancy, in a traffic system with predominantly human drivers, and it will only get better in time, as the software matures and the number of human drivers decreases. It is just not AI. There is no intellect involved.
That being said, I wonder if we are going to see discrimination, much like we see with everything else. For example, David Rockefeller got his 7th heart transplant at 101 years old, having spent only 2 years with his last heart. Whereas regular people need to wait many years for a donor, and do not even qualify for a transplant if they are over 70. The list goes on for pretty much everything in life.
Which begs the question: will we see self-driving cars crashing to kill the average Joe but minimize infrastructure damage, while driving Mr. Fatcat into a soft group of pedestrians to cushion the impact and minimize his injuries?
ddriver - Sunday, March 19, 2017 - link
Also, machine learning cannot really learn how to learn. What it learns from and how it is applied pretty much has to be preprogrammed. Machine learning will not learn from something it wasn't set up to learn from. If lucky, developers would be able to identify what went wrong and apply that in a future version, but on its own, it cannot learn how to learn. Which separates it from intelligence. It needs human input; it needs to be told what factors to consider and how to apply the results. And you don't have an "intellect" without those prerequisites. You have automated programming that is set up by humans; there is no self-learning involved whatsoever. This is far from AI; even machine "learning" would be generous. What it truly is, is machine TRAINING.
But that just doesn't sound cool enough to the impressionable simpletons, so why not go with AI, because it is sooo much cooler ;)
ajp_anton - Sunday, March 19, 2017 - link
Human intelligence pretty much works the same way. We are pre-programmed by our genes (involving motivation and the way our neurons work), we have a lot more neurons than these machines, and we typically need many years of learning experience before we can be called intelligent.
So it's not really a fair comparison. The self-driving AI is general AI in the sense that it can learn and generalise from previous experience, but it will only live in the world of driving, and its capacity for learning is limited to that area. If you want to reserve the term AI for something truly general, you'll have to wait until WE learn everything there is to know and understand everything there is to understand about the universe; otherwise you'll never know if it's truly general.
ddriver - Sunday, March 19, 2017 - link
A self-driving car understands car driving about as much as a mechanical cookie extruder understands cookie making. It is just a tool, albeit more sophisticated and digital, but still just a mechanism.
Learning everything is not necessary; all that is necessary is the ability to learn, and I mean genuinely new things. Current "AI" is entirely incapable of that, because there is no understanding involved, only evaluation according to a preprogrammed, static, finite set of data based on our understanding, based on our intellect. Just because it follows rules as presented by our intellect does not imbue intellect into the machine.
Machines can already destroy us at a number of disciplines, but they all involve doing work we told them to do. This includes the so called "AI" as well. It is just number crunching, there is no abstract thought.
You could whip or treat a dog into playing a tune on the piano, which wouldn't be much different from "AI", but even then, that dog will not have learned music, it would be trained to play a specific tune, it will not understand music as music, and as a result it would not be able to play just about any music piece, much less compose music.
With machine training you could go further: you could easily program music theory into it, or you could even use existing music as training material to make it figure out music theory on a statistical level. You could have a machine that writes 10 hours of commercial-grade music per second... but it will still not understand music. It may contain all there is to know about music, far more than any human could possibly fit in his brain, yet it will not understand music. Its standards for music would be what we've composed and have considered to be good. It will never have its own taste in music; music will never be anything more than numbers to it.
As for our brains and our genes - the genes don't really contain any meaningful information. The human is born with a few basic instincts, many of which fade in time. I am actually following progress in neurology with great attention, and I can assure you, as much as we know about what the human brain is and how it works, what underlies our ability to learn is still a complete mystery. Labs have run large-scale brain simulations at sufficiently high speed, and there is no evidence that such simulations are capable of cognition, much less intellect, awareness or abstract thought. I don't want to sound spiritual, because I am really not, but I think it is obvious that there is more to it than the machine. It may not even be attainable; I mean, all that we can do is a product of our intellect, and we cannot bring our intellect into it any more than a computer game character can leap out into the real world. It may well be non-reproducible to us.
That being said, even if we cannot simulate and produce an actual artificial intellect, we can incorporate our set of understanding into one, which for most intents and purposes would qualify as an AI. While it is likely impossible to create an intellect that learns on its own, it is entirely doable to create one that learns as we do. That would be AI. Until then, it is, at best, machine TRAINING.
Meteor2 - Sunday, March 19, 2017 - link
Btw, the argument you (ddriver) are making is quite similar to the 'philosophical zombie', which postulates that you could build a machine which looks like a human and reacts like one, e.g. poke it and it goes 'ouch', and say it's really only a zombie because we built it. But that's easily countered: if it walks like a duck, swims like a duck, and quacks like a duck, it's a duck. There's no magic inside us (though some philosophers and scientists, 'dualists', think there is).