No conversation about the future of our society would be complete without mentioning self-driving cars. They always seem to be in the headlines, with major companies like Tesla and Google leading the charge. When most people think about self-driving cars, they picture the benefits they would enjoy if only their car could drive itself: eating, sleeping, or playing around on your phone while riding alone would all become possible as our boring commutes are transformed into whatever we want them to be.

Some people immediately focus on the safety hazards. Others are adamant that they would never trust a machine with their lives. Is there a legitimate concern? Perhaps, but as with other kinds of AI-powered technology, the benefits generally outweigh the risks. In this post we will describe exactly what self-driving cars are and how they work.


A simple definition of a self-driving car is one that can sense its environment and navigate with little to no human input. These vehicles carry sensors, in the form of cameras and motion-detection devices, along with a GPS system that together allow the agent to perceive the external world and operate the vehicle safely, effectively, and autonomously. Obstacles and road signs must be interpreted by sophisticated control systems that analyze the sensory input, and the rules of the road must be programmed into the vehicle's main computer to ensure safety.


Work on these self-drivers dates back to the early 2000s. Back then, autonomous vehicles existed only as an idea in the heads of those who sought to drive innovation forward. DARPA, an agency of the U.S. Department of Defense, held a challenge in 2004, inviting virtually anyone to build a self-driving car that could cross the Mojave Desert. The winner was to receive a $1 million prize. Computing at the time was far more primitive, and none of the vehicles completed the 142-mile course; most crashed or flipped over within view of the starting line.

Though no one completed the challenge, the idea that a fully autonomous self-driving car was possible still shone brightly in the minds of those who had failed. The following year the competition was held again, and five cars finished the course, proving it could be done. By 2007 most of the entries were able to follow traffic laws, park effectively, and make clean lane changes. Google launched its first self-driving project in 2009, and a few years later Tesla, under CEO Elon Musk, followed suit. Companies like Uber and Lyft have also attempted to break into the industry, knowing how profitable the technology could become. The self-driving car scene has seen mostly groundbreaking innovation and few significant roadblocks. The arms race has begun.


Google has recently been expanding its self-driving car testing programs.


We already briefly touched on the fact that these autonomous cars must be outfitted with a wide array of sensors and a sophisticated GPS system. The self-driver uses GPS to orient itself relative to its environment. It then uses the input from its sensors to build a three-dimensional representation of the surroundings and to refine its estimate of its future position based on how fast the vehicle is traveling. Most self-driving cars use this 3D model to make intelligent decisions: finding the most efficient route to the destination, minimizing safety risks by following all traffic rules, avoiding some obstacles and stopping entirely for others, and predicting what other drivers will do in order to steer clear of collisions.
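Refining a position estimate between GPS fixes can be sketched as simple dead reckoning, projecting the last fix forward using the vehicle's velocity. The function name and flat 2D coordinates here are illustrative assumptions, not any vendor's actual API:

```python
def predict_position(gps_fix: tuple, velocity: tuple, dt: float) -> tuple:
    """Project the last GPS fix forward by the vehicle's velocity over dt seconds.

    gps_fix and velocity are hypothetical (x, y) pairs in meters and m/s.
    """
    x, y = gps_fix
    vx, vy = velocity
    return (x + vx * dt, y + vy * dt)

# A car at (100, 50) moving 20 m/s along x is predicted 10 m ahead after 0.5 s.
print(predict_position((100.0, 50.0), (20.0, 0.0), 0.5))  # → (110.0, 50.0)
```

Real systems fuse many such predictions with fresh sensor data, but the core idea is the same: position plus velocity times elapsed time.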

That last function is by far the hardest to program into a computer. One big reason humans crash into each other is that human drivers are extremely unpredictable, and self-driving cars face the same dilemma. Most of the reported accidents involving self-driving cars are the fault of another human driver or a pedestrian doing something they shouldn't. In the next post in this series we will explore how incidents like these will slow the rollout of this technology, along with the challenges of regulating it.


Before the self-driving vehicle can decide where to travel, it must build an accurate 3D model of its external environment and orient itself precisely with respect to any obstacles it may encounter. The sensors usually employed for this purpose are laser rangefinders and cameras. A laser rangefinder scans the environment with sweeping laser beams, calculating the distance to nearby objects by measuring the time it takes each beam to travel to the object and back. It is a simple yet powerful way of ensuring the car knows exactly where it is at any given moment. The camera allows the self-driver to identify objects in its path using the same machine learning techniques behind facial- and image-recognition software.
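The rangefinder's distance calculation follows directly from the round-trip time of the laser pulse: the beam covers the distance twice at the speed of light. A minimal sketch, assuming an ideal pulse with no noise:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to an object: the pulse travels there and back, so halve the trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after 200 nanoseconds indicates an object roughly 30 m away.
print(round(range_from_time_of_flight(200e-9), 1))  # → 30.0
```

The tiny time scales involved are why these sensors need very precise clocks: a measurement error of just one nanosecond shifts the reported distance by about 15 cm.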

The vehicle's internal map gives the agent an accurate picture of where everything is, along with the predicted locations of moving objects in its view, such as other cars and pedestrians. These objects must be accurately categorized so the agent knows what it is looking at. For example, a two-wheeled vehicle moving at 40 mph would be interpreted as a motorcycle rather than a bicycle, and that interpretation affects how the agent responds if it has to avoid the vehicle for any reason. When approaching traffic lights or crosswalks, or when a jaywalker steps in front of the agent, it can decide intelligently whether to swerve around the obstacle or stop before it, based on how the object is classified.
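As a toy illustration of how observed speed might disambiguate a two-wheeled vehicle, here is a hypothetical rule-of-thumb classifier. Real systems use trained models over many features, and the 25 mph threshold is purely an assumption for this sketch:

```python
def classify_two_wheeler(speed_mph: float) -> str:
    """Guess a two-wheeled object's type from its speed alone (toy heuristic).

    Assumption: few cyclists sustain more than ~25 mph on flat ground.
    """
    return "motorcycle" if speed_mph > 25.0 else "bicycle"

# The example from the text: a two-wheeler at 40 mph reads as a motorcycle.
print(classify_two_wheeler(40.0))  # → motorcycle
print(classify_two_wheeler(12.0))  # → bicycle
```

The label matters downstream: a motorcycle can accelerate and merge like a car, while a bicycle is treated more like a vulnerable road user the car must give wide clearance.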


The self-driver's goal is to get from point A to point B by the most direct and efficient route, minimizing fuel consumption, while following all the rules of the road, traffic signals, and stop signs to maximize safety and avoid accidents. The specific route is usually handled by the GPS, which has its own mapping algorithms, similar to the default maps app on most smartphones. That long-range path is built from a series of short-range paths the vehicle is capable of completing given its current speed and the posted speed limits.
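Route selection over a road map is commonly posed as a shortest-path search. The sketch below uses Dijkstra's algorithm on a made-up road graph with edge weights in miles; the graph and node names are invented for illustration:

```python
import heapq

def shortest_route(graph: dict, start: str, goal: str):
    """Dijkstra's algorithm over a road graph: {node: [(neighbor, miles), ...]}."""
    queue = [(0.0, start, [start])]  # (total miles so far, node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, miles in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + miles, neighbor, path + [neighbor]))
    return float("inf"), []  # goal unreachable

# A toy road network: the cheapest A-to-D route detours through B and C.
roads = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.5)],
}
print(shortest_route(roads, "A", "D"))  # → (4.5, ['A', 'B', 'C', 'D'])
```

Production mapping services layer traffic data and turn costs on top of this, but the underlying search over a weighted graph is the same idea.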

The autonomous driver generates many possible paths but quickly eliminates any that veer off the road, collide with an obstacle, or are simply impossible to take for other reasons. For example, a vehicle traveling at 50 mph could not safely complete a right turn 5 meters ahead, so that path would be removed from the candidate set. This process repeats continuously to keep the self-driver on course along the long-range path generated by its internal GPS.
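The elimination step can be sketched as a feasibility check: can the vehicle brake from its current speed down to a safe turning speed within the available distance? The safe turning speed and deceleration values below are assumptions chosen for illustration:

```python
def turn_is_feasible(speed_mps: float, distance_m: float,
                     turn_speed_mps: float = 7.0, max_decel: float = 3.0) -> bool:
    """Check whether the car can slow to turn_speed_mps within distance_m.

    Uses the kinematic braking distance (v^2 - u^2) / (2a) under constant
    deceleration; the defaults are assumed comfortable limits, not real specs.
    """
    if speed_mps <= turn_speed_mps:
        return True
    needed = (speed_mps ** 2 - turn_speed_mps ** 2) / (2.0 * max_decel)
    return distance_m >= needed

# The example from the text: at 50 mph (~22.4 m/s), a right turn
# only 5 m ahead is infeasible, so that candidate path is discarded.
print(turn_is_feasible(22.4, 5.0))    # → False
print(turn_is_feasible(22.4, 100.0))  # → True
```

A planner would run a check like this against every candidate short-range path each planning cycle, keeping only the paths the vehicle can physically and legally execute.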


When will self-driving cars become mainstream? How will we ensure public safety? What will we do about jaywalkers, whose unpredictability makes it harder for self-drivers to navigate safely? What kinds of challenges arise when attempting to regulate this technology? We will analyze these problems and propose possible solutions in the upcoming part 2 of this series on self-driving cars. What do you think about all this? Leave your comments down below.
