Making self-driving cars a reality


Elon Musk, Tesla’s CEO, predicts that by “November or December of this year, we should be able to go from a parking lot in California to a parking lot in New York, no controls touched at any point during the entire journey.”

Musk has since clarified these plans and now predicts that level 5 autonomy[1] is roughly two years away. Many of the other major automakers, such as Mercedes and Ford, are giving a 2020 to 2025 timeframe for level 4 autonomous driving, but Tesla is pushing hard for the end of 2018. Currently, Tesla’s Autopilot can be classified as advanced cruise control with lane assistance, but new cars rolling out of the California factory have up to eight cameras providing 360-degree vision, upgraded ultrasonic sensors around the perimeter of the car and a new on-board supercomputer, along with many other features, which should eventually enable level 5 autonomous driving in all conditions.

As it stands, every Tesla car being produced, and those already in customers’ possession, collects data and sends it back to the company’s headquarters, where engineers constantly analyse it and refine the system. In October 2016 Tesla announced that it planned to equip every model with autonomous driving hardware. However, these features run in the background, in what Tesla calls “shadow mode”: the car constantly records what the human driver does and compares it with the actions the car would have taken instead. Autonomous driving will not be enabled until the fleet has collectively racked up millions of kilometres of real-world driving and the data that comes with it. Tesla has stated that it is waiting only for software validation and regulatory approval, and that by the end of 2018 it could have a level 4 autonomous system enabled in second-generation cars, meaning that drivers could sit back and let the car drive, though not in all environments or conditions.
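
Tesla has not published how shadow mode works internally, but the idea can be sketched simply: at each moment, compare the human driver’s action with the action the autonomy stack would have chosen, and log the disagreements for offline analysis. The Python sketch below is a hypothetical illustration only; the class, field names and tolerances are invented for this example and are not Tesla’s actual system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    """A simplified driving command: steering angle (degrees) and acceleration (m/s^2)."""
    steering: float
    acceleration: float

def shadow_mode_step(human: Action, planned: Action,
                     steer_tol: float = 2.0, accel_tol: float = 0.5) -> Optional[dict]:
    """Compare what the human driver actually did with what the autonomy
    stack would have done. If they diverge beyond a tolerance, return a
    record that could be queued for upload and offline analysis."""
    steer_diff = abs(human.steering - planned.steering)
    accel_diff = abs(human.acceleration - planned.acceleration)
    if steer_diff > steer_tol or accel_diff > accel_tol:
        return {"human": human, "planned": planned,
                "steer_diff": steer_diff, "accel_diff": accel_diff}
    return None

# Example: the human brakes much harder than the planner would have.
event = shadow_mode_step(Action(steering=0.0, acceleration=-3.0),
                         Action(steering=0.0, acceleration=-1.0))
if event is not None:
    print("Disagreement logged:", event)
```

Collected over millions of kilometres, records like these show engineers exactly where the software still disagrees with competent human drivers.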

A current problem with autonomous driving is that the human brain is still vastly more capable than the car’s software, mainly because road infrastructure and signage were designed for humans. All the signs, warnings, cues and hints were created with the human driver in mind, not computers. A human is still more likely to notice a construction sign, spot a pothole, or anticipate children running into the street, because the human brain can fill in the gaps and make decisions from small bits of incomplete information, whereas the car’s artificial intelligence cannot.

Initially, the computer knows nothing. It must be taught to recognise pedestrians by being fed pictures of pedestrians. Millions of such pictures are needed, because no two pedestrians look alike; the more data the computer is fed, the larger its vocabulary becomes and the better it can recognise what a pedestrian is. The same must be done for cyclists, cars and trucks, at all times of day and in all weather conditions. “So essentially it has this infinite capability to build up a memory and understanding of what all of these different types of things it could encounter would look like,” says Danny Shapiro[2].
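
Shapiro is describing standard supervised image classification. The sketch below shows what that training loop looks like in practice, using PyTorch; it is a minimal illustration, not Tesla’s or Nvidia’s actual pipeline, and the folder name, model choice and hyperparameters are placeholder assumptions.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Labelled images organised as training_images/pedestrian/*.jpg,
# training_images/cyclist/*.jpg, and so on (directory name is a placeholder).
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("training_images/", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# A small off-the-shelf network; production systems use far larger models and datasets.
model = models.resnet18(num_classes=len(dataset.classes))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)  # how wrong the current predictions are
        loss.backward()                        # compute how to adjust the weights
        optimiser.step()                       # nudge the model towards fewer mistakes
```

Every additional labelled image, taken at a different time of day or in different weather, broadens the “vocabulary” the model can draw on when it encounters something new on the road.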

Another issue is that, as autonomous cars come closer and closer to driving on public roads, the industry needs to confront some moral dilemmas. In a crash, things happen so fast that a human driver often has no time to weigh the choices and reacts on gut instinct. For a computer, however, a fraction of a second is plenty of time to reason about ethics. Should the autonomous car jeopardise its passenger’s safety for the safety of another road user? What happens if there is more than one passenger, and which side of the car should bear the brunt of the accident? Would that choice change if the other vehicle were at fault? The answers are not as straightforward as they seem. It is still early days, but conversations like these need to happen, and be resolved, before production begins.
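
One way to see why a split second is “plenty of time” for a computer is that, in principle, the planner could score every feasible manoeuvre by a weighted estimate of expected harm and pick the lowest; the weights are exactly where the ethical choices hide. The sketch below is purely illustrative, with made-up probabilities and weights; no manufacturer has published such a policy.

```python
from typing import Dict

def expected_harm(outcome: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted sum of estimated injury risk per party. The weights encode the
    ethical policy, e.g. whether the occupant counts the same as a pedestrian."""
    return sum(weights[party] * risk for party, risk in outcome.items())

def choose_manoeuvre(options: Dict[str, Dict[str, float]],
                     weights: Dict[str, float]) -> str:
    """Pick the manoeuvre with the lowest weighted expected harm."""
    return min(options, key=lambda name: expected_harm(options[name], weights))

# Hypothetical injury probabilities for each party under each manoeuvre.
options = {
    "brake_straight": {"occupant": 0.3, "pedestrian": 0.6},
    "swerve_left":    {"occupant": 0.5, "pedestrian": 0.1},
}
equal_weights = {"occupant": 1.0, "pedestrian": 1.0}
print(choose_manoeuvre(options, equal_weights))  # -> "swerve_left"
```

Change the weights and the “right” manoeuvre changes with them, which is precisely why these questions have to be settled before the software is written, not after.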

Once the regulations have been met, the software approved and the ethics considered, Tesla could technically enable such a system in the markets that allow it. But a truly level 5 coast-to-coast drive, where someone could be asleep at the wheel or not even in the driver’s seat, is unlikely to arrive any sooner than Musk’s two-year timeline.


[1] Autonomous driving is ranked on a scale of one to five, with five being fully autonomous. Seventy-two percent of cars on the road today sit at level one, in which most functions are still controlled by the driver. Twelve percent of cars are at level two, where there are some automated functions like lane centring and hands-free parking. Currently, most car companies are aiming for level four, where cars can drive themselves under certain conditions, with humans retaining the ability to take the wheel.

[2] Senior Director of Nvidia’s automotive business unit (Tesla’s Autopilot system uses Nvidia’s Drive PX2).
