The Future of Tesla Autopilot – What Happens After Mobileye?

Usman Pirzada
Posted Jul 29, 2016

Almost a year ago, I did a very detailed write-up on how Tesla's Autopilot works and how it is built from technology supplied by various hardware and software manufacturers. One of those pieces was a chip provided by Mobileye: the EyeQ3 processor. People have been calling this little piece of silicon the 'brain' behind Tesla's Autopilot, and since the company has decided to part ways with Mobileye, the obvious question is: what comes next? Viewed in a vacuum, this development would probably come as a surprise and seem tied to the recent Model S crash. However, if you take a step back and look at some of the things Tesla has been doing along the way, you will find that this was a long time in the making, and dare I say it, inevitable.

The company has revealed that it will not be using Mobileye chips beyond the EyeQ3 implementation in its current lineup (meaning the upcoming EyeQ4 processor will not be featured in any Tesla vehicle). Calling the EyeQ3 the 'brain' of Tesla's Autopilot is an analogy that is more or less on point, but to make it a bit more accurate, I would say the EyeQ3 is part of the brain, specifically the visual cortex (the area of the brain that receives visual input from the eyes). The chip is responsible for receiving the feeds from the vehicle's sensors and processing them. This is where things get a bit technical, though.

The current implementation of Autopilot was never capable of fully autonomous driving

While the EyeQ3 chip is indeed the processor responsible for recognizing objects on the road, and in fact hosted the first DNN (deep neural network) ever deployed on the road, the system Tesla uses to "learn" is its own implementation and has nothing to do with Mobileye. In short, even though Tesla is losing the chip that receives and processes the visual information, all the 'learning' its fleet went through over these past few months of trial runs is not going anywhere, since it is safely stored in Tesla's very own neural network.

What will be going away is the static Mobileye DNN used to recognize objects on the road, which, depending on what succeeds it, could turn out to be an improvement. Before we go any further, let me point out some very interesting hires Tesla has made in the past few months.
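To make that division of labor a little more concrete, here is a deliberately simplified Python sketch. Every class and function name in it is hypothetical, invented purely for illustration, and has nothing to do with Tesla's or Mobileye's actual code. The point is only the separation: a frozen, vendor-supplied recognizer handles perception, while a separately owned model keeps accumulating fleet learning that survives a change of perception chip.

```python
import numpy as np

# Toy illustration of the split described above (hypothetical names, not Tesla's
# or Mobileye's actual software): a frozen, vendor-supplied perception model that
# only classifies what the cameras see, and a separately owned "learning" layer
# that keeps improving from fleet data and survives a change of perception chip.

rng = np.random.default_rng(0)

class FrozenPerception:
    """Stands in for a static, vendor-trained object recognizer (EyeQ3-style).
    Its weights never change on the customer's side."""
    def __init__(self):
        self.weights = rng.normal(size=(8, 3))  # 8 input features -> 3 classes

    def classify(self, features):
        scores = features @ self.weights
        return ["car", "pedestrian", "lane marking"][int(np.argmax(scores))]

class FleetLearner:
    """Stands in for the automaker's own model, updated from fleet experience.
    Swapping out FrozenPerception does not erase what this has learned."""
    def __init__(self):
        self.weights = np.zeros(4)

    def update(self, situation, correction, lr=0.1):
        # Simple gradient-style nudge toward the correction observed in the fleet.
        self.weights += lr * (correction - situation @ self.weights) * situation

    def steering_bias(self, situation):
        return float(situation @ self.weights)

perception = FrozenPerception()
learner = FleetLearner()

frame = rng.normal(size=8)                 # fake camera-derived features
print("Perception sees:", perception.classify(frame))

situation = rng.normal(size=4)             # fake driving-context features
learner.update(situation, correction=0.5)  # fleet feedback refines the model
print("Learned steering bias:", round(learner.steering_bias(situation), 3))
```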

Tesla has been building the ultimate dream team for Autopilot 2.0

The story begins with Jim Keller, a guy widely regarded as a legendary chip architect. Keller was the lead architect of AMD's successful K8 architecture and is also the man behind the upcoming Zen microarchitecture that is expected to put AMD back on the x86 processor map. If that doesn't impress you, Keller is also the same guy who headed Apple's A4 and A5 chip efforts as Chief Architect of Microprocessor Cores. That's right: if you are using an iPhone right now, chances are you are using a chip based on Jim Keller's designs. Approximately six months ago, Keller joined Tesla as the VP of Autopilot Hardware Engineering. So why would Tesla hire a genius processor architect? Well, it only gets better.


Jim Keller (far right) at an AMD conference before he left for Tesla.

In the following weeks, Tesla went on an aggressive hunt for talent that included poaching some high-profile engineers from other companies (mostly AMD). The complete list is given below; I have included Jim Keller as well for completeness' sake:

  • Jim Keller (legendary microprocessor architect, formerly of Apple and AMD) joined Tesla as VP of Autopilot Hardware Engineering.
  • Peter Bannon (formerly of Apple) followed Keller to Tesla Autopilot.
  • David Glasco (who previously worked at Intel, IBM and Nvidia) was Senior Director of Server SoC Architecture at AMD before joining Tesla.
  • Thaddeus Fortenberry (formerly Cloud Server Architect at AMD) also joined Tesla Autopilot in the following weeks.
  • Debjit Das Sarma (former AMD Fellow and CPU Lead Architect) joined Tesla soon after.
  • Keith Witek (formerly AMD Corporate Vice President of Strategy and Corporate Development) joined Tesla as Director of Autopilot Enablement and Associate General Counsel.
  • Junli Gu (formerly Member of Technical Staff, Machine Learning at AMD) joined Tesla as Tech Lead for Machine Learning, Autopilot.

Keeping the above in mind, if someone were to tell me that Tesla is dropping Mobileye because of a falling out after last month's Model S crash, I would just nod politely and smile to myself. The simple fact is that this is something Elon Musk has been angling towards for a long time. He famously offered George 'Geohot' Hotz a multi-million-dollar bonus to design a Mobileye-crushing Autopilot implementation, although he did rush to the company's defense at the time and firmly stated that Tesla would continue to use its EyeQ processors.

Going forward, Tesla has two choices:

Here is the kicker, though. It is a well-documented fact that the current implementation of Autopilot (the suite of hardware and software, including the Mobileye EyeQ3) was never sufficient for fully autonomous driving. This is one of the reasons why Tesla started its hiring frenzy after Elon Musk publicly declared Autopilot to be the company's highest priority.

Mobileye's implementation was tried and tested and played well with regulators, but an argument can be made that its offerings, which typically arrive years apart, aren't powerful or adaptable enough to support the fast-paced environment of a company operating on the bleeding edge of automotive tech. Mobileye already has a partnership in place with Intel Corporation and BMW that will mature in 2021 and will probably yield a fully autonomous car, but for Tesla's ambitions, they need something faster.

And this is where the team mentioned above comes in. Two things can happen going forward: either Tesla creates its own chip from scratch, which, considering it now has all the resources to do so, is certainly possible, or it implements a brand-new system based on a flexible architecture. Nvidia's GPGPU approach to autonomous driving comes to mind and would probably be Tesla's go-to, falling short of a completely in-house design. Both approaches have their pros and cons.


Tesla has all the necessary resources to design its own chips and potentially become an IHV for autonomous driving tech.

Designing and fabless manufacturing their own hardware suite

If Tesla were to design its own chip in house, it would gain unparalleled flexibility, but the costs of fabless manufacturing could be significant, not to mention that it would have to design an architecture completely from scratch (off-the-shelf ARM implementations may not be entirely suitable for these kinds of workloads). Considering the company has people like Jim Keller on board, this is almost certainly possible.

The architecture and silicon would have to be designed in house, with production outsourced to fabrication facilities like TSMC. This would allow seamless unification of the 'brain' of Autopilot with Tesla's ambitions and would be the ideal-case scenario. It could even allow the company to lead the autonomous driving initiative (even more so than it does right now!) and act as a primary supplier of autonomous tech to other car manufacturers. Becoming an IHV for autonomous driving tech is definitely on the table if Tesla chooses to go down this path, and that would play well with investors, considering it would mean a completely new income stream.

Getting into bed with Nvidia Corporation

Elon Musk appearing at Nvidia's GTC event to talk about the self-driving initiative.

Failing that, the company could look into a hybrid system that utilizes Nvidia's processors. Currently, Tesla does not use Nvidia's Drive PX or Drive CX kits and only uses the Tegra K1 VCM to power its dashboard cluster and infotainment system. That said, it isn't really a secret that Nvidia fancies itself as an autonomous driving chip maker, and Elon Musk and Nvidia CEO Jen-Hsun Huang are on very good terms, with Musk having regularly appeared at Nvidia's showcases of its self-driving tech.

Not to mention, one of the biggest advantages of having Nvidia as the autonomy partner is that its chips improve dramatically in performance (and features) almost every year. That might not sit well with regulators, but it appears to be a much better fit for a fast-paced company like Tesla than Mobileye's cadence. A GPGPU-based approach would also give Tesla much more flexibility to deliver updates over the air and to increase Autopilot's capabilities as time passes.
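To illustrate the over-the-air angle, here is a toy Python sketch. The file name and function names are hypothetical, invented for illustration, and have nothing to do with Tesla's or Nvidia's actual software. The idea it demonstrates is simply that on a programmable, GPGPU-style stack the perception model is just data, so an OTA update can swap in better weights without touching the hardware.

```python
import json
import numpy as np

# Rough sketch of why a programmable (GPGPU-style) stack suits over-the-air
# updates: the perception "program" is just data (weights) that can be replaced,
# whereas a fixed-function recognizer is frozen into silicon. All names here are
# hypothetical and purely illustrative, not Tesla's or Nvidia's actual software.

def run_perception(features, weights):
    """One programmable inference step: the same code gets smarter when
    better weights arrive, with no hardware change required."""
    return features @ weights

def apply_ota_update(path):
    """Pretend OTA payload: a new set of weights shipped as plain data."""
    with open(path) as f:
        return np.array(json.load(f))

# Version 1 of the 'network' (deliberately tiny: 4 features -> 1 score).
weights_v1 = np.array([0.1, 0.2, 0.3, 0.4])
frame = np.array([1.0, 0.5, -0.2, 0.8])
print("score with v1 weights:", run_perception(frame, weights_v1))

# An update arrives over the air as nothing more than new numbers on disk.
with open("ota_weights_v2.json", "w") as f:
    json.dump([0.15, 0.25, 0.35, 0.45], f)

weights_v2 = apply_ota_update("ota_weights_v2.json")
print("score with v2 weights:", run_perception(frame, weights_v2))
```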

It's also worth noting that the current implementation of Autopilot uses monocular cameras coupled with a radar. For fully autonomous driving, changes will have to be made in the sensing department as well, not just in the 'brain' of Autopilot. Whatever Tesla has planned for the future, we can say one thing for sure: the enormous amount of talent the company has acquired, coupled with the raw ambition of Elon Musk, sets expectations of great things to come.
