By Dr. Lance Eliot, the AI Trends Insider
I was driving on a highway in Northern California recently when I saw up ahead of me a wind turbine off the side of the road. I’ve seen modern wind turbines before, but this one caught my eye. I liked the design of it and this one was cranking pretty fast, hopefully generating lots of electrical power. As I got closer to it, I realized that there were quite a number of them, all lined up across a vast valley. In fact, as I got even closer, I could see hundreds of them, maybe even more. It was an amazing sight to see. Wind turbine upon wind turbine, nearly as far as the eye could see. Some were turning rapidly in the prevailing wind. Some were moving slowly. Others were standing still. I suppose I should have been paying attention to the highway, but I admit that I was in visual awe of this enormous collection of wind turbines.
I wondered who put all those wind turbines there. Why are they there? Presumably this must be a very windy place. How much power do they generate? How long have the turbines been there? Is there a tour that one can take to see the wind turbines up close? How much maintenance do the wind turbines require? Do birds get whacked by the wind turbines? How do they protect wildlife from getting hurt? These and a zillion more questions entered my mind.
In today’s modern world, I could take a look at my GPS, let’s assume that I was using Google, and find out where I was. I could then likely have done a search to find out more about the place. Turns out, I did indeed do so, and I discovered that I had just driven through Altamont Pass. It houses the Altamont Pass Wind Farm, one of the first wind farms in the United States and considered one of the largest wind farms in the world. There are nearly 5,000 wind turbines there. The combined effort of the wind turbines produces nearly 600 megawatts of power. Amazing stuff!
What does this have to do with self-driving cars?
At the Cybernetic Self-Driving Car Institute, we are developing AI for self-driving cars that looks at the scenery around the self-driving car and will try to determine what is there. I realize that most self-driving cars are already doing this on a limited distance basis, namely that most self-driving cars are detecting nearby road signs, fire hydrants, sidewalks, walls, and the like. But, very few if any are looking beyond the immediate scenery.
In fact, we give this kind of AI the catchy name Extra-Scenery Perception (ESP2). It is “extra” in that it looks outside the norm of what most self-driving cars are scanning. It tries to “perceive” the surrounding area of the self-driving car at a distance of a football field’s length away or more.
Of course, there is the conventional ESP, Extra Sensory Perception, and we are admittedly having a bit of fun with that wording; thus, we call ours ESP2, appending the number two to distinguish it from the true use of the acronym ESP. We aren’t yet reading people’s minds with the AI of self-driving cars, though, as I’ve mentioned many times, we are working on BMI (Brain Machine Interfaces) for self-driving cars.
What would the ESP2 do for us? It would have detected the wind turbines and potentially let the occupant of the self-driving car know about the wind turbines. Besides detecting the wind turbines, it would have looked up information about them, and been able to tell the occupant the various facts that I’ve told you earlier in this article. Presumably, it could even have asked the occupant whether they would like to stop and take a tour, and possibly have connected with the online tour system of the wind farm and made sure that there were tickets available.
That’s an example of how the ESP2 could be beneficial to the occupants of the car, doing so on a somewhat touristy kind of approach. There are though more serious kinds of aspects to developing the ESP2. Allow me a moment to share an example.
Predicting a Dust Cloud
While I was driving past that wind farm, I also happened to notice that there were some tractors moving slowly on a dirt road that was adjacent to the wind farm. The busy tractors were a couple of miles up ahead of me and I could see that they were kicking up a lot of dirt. A large dust cloud was being created by the tractors. So what, you might ask? Well, I realized that eventually that dust cloud was going to be carried by the wind onto the highway. And, the timing looked like I might end up driving right into that thick dust cloud. It wasn’t covering the highway just yet, but given the pace of the wind it was a reasonable prediction that the dust would arrive at the highway at about the same time that I passed nearby to the tractors.
For most AI and self-driving cars, the sensors are narrowly focused to a short distance from the car. As such, they would not have realized that the dust cloud was forming. Only once the self-driving car pretty much entered into the dust cloud would the sensors realize that something was amiss. By then, the cameras of the self-driving car might be so covered with dust that they no longer would work properly to gauge the visuals needed to properly drive the self-driving car.
If the AI had Extra-Scenery Perception, it would have been able to predict the possibility of the dust cloud. As such, the AI could then either decide to route the vehicle a different way, trying to avoid the dust cloud. Or, the AI could have opted to slow down the self-driving car and try to pace the self-driving car such that the dust cloud would float over the highway prior to the arrival of the self-driving car, and be gone by the time the self-driving car reached that point of the roadway. Or, the AI could have decided that it would keep going, but then be prepared that the visual cameras of the self-driving car would become occluded. It could have even used special shutters over the cameras to protect them from the dust, being willing to have its eyes shut momentarily to ensure they did not get permanently damaged.
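To make the idea concrete, here is a toy sketch in Python of the timing judgment I made in my head: compare when the car and the wind-blown dust would reach the same stretch of highway, and pick one of the three responses described above. The function name, parameters, and numbers are my own illustrative assumptions, not anything from a production self-driving stack.

```python
# Toy sketch of the dust-cloud timing decision. All names and numbers
# are illustrative assumptions for this article, nothing more.

def plan_for_dust_cloud(car_speed_mps, car_distance_m,
                        wind_speed_mps, cloud_offset_m, cloud_width_m):
    """Compare when the car and the wind-blown dust reach the same
    stretch of highway, then pick one of the coarse responses from
    the text: proceed, slow down, or protect the cameras."""
    car_eta = car_distance_m / car_speed_mps                          # car reaches the spot
    cloud_arrives = cloud_offset_m / wind_speed_mps                   # leading edge hits road
    cloud_clears = (cloud_offset_m + cloud_width_m) / wind_speed_mps  # trailing edge passes
    if car_eta < cloud_arrives or car_eta > cloud_clears:
        return "proceed"  # car and cloud miss each other
    # They would meet: pace the car so the cloud clears the road first.
    slower_speed = car_distance_m / cloud_clears
    return f"slow to {slower_speed:.0f} m/s (or shutter the cameras)"
```

A real planner would of course reason probabilistically about the wind and the cloud's extent; the point of the sketch is only that the decision requires seeing the tractors well before the dust reaches the road.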
None of those anticipatory acts could be planned for, if the AI was not looking beyond the immediate scenery. Thus, the value of having Extra-Scenery Perception, if you will.
Currently, sadly, we aren’t arming most of the self-driving cars with sensory devices that can look that far ahead. Also, there is so much effort going into just keeping the self-driving car on the road and not hitting obstacles, the idea of doing a larger scenery analysis is at the back of the bus, so to speak. It’s a nice idea, some developers say, but they have bigger fish to fry right now. For us, the desire to get to a Level 5 self-driving car, which is one that can do anything a human driver could do, causes us to believe that it is important to be working on ESP2.
When you consider the nature of the Extra-Scenery Perception, you’d realize that there are two major kinds of aspects that it will be looking for: (1) enduring, and (2) emerging.
One aspect consists of things that are of an enduring nature. For example, the wind turbines are an enduring item. They are there now and will likely be there a week from now. The AI and ESP2 can build up a library of enduring items over time. Obviously, many of those items can be researched online too, such that via GPS you can have the AI figure out what you will be expecting to see up ahead. The description though of something such as a wind farm is not as precise as what you actually see when you drive nearby the item. Therefore, though the online research will be fruitful, there is nonetheless still a need to collect the local data about the enduring item as you drive past it.
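The enduring-item library described above can be sketched quite simply: coarse facts come from an online lookup keyed by GPS position, and the locally sensed details then refine that generic description. Everything here (the grid-cell bucketing, the field names) is a hypothetical illustration of the idea, not a real system.

```python
# Hypothetical sketch of an enduring-item library. Coarse facts come
# from online research keyed by GPS position; locally sensed details
# then refine the generic description. Names are assumptions.

enduring_items = {}  # coarse (lat, lon) grid cell -> known item

def grid_cell(lat, lon, cell_deg=0.01):
    # Bucket coordinates so nearby sightings land in the same cell.
    return (round(lat / cell_deg), round(lon / cell_deg))

def record_enduring(lat, lon, label, observed_details):
    entry = enduring_items.setdefault(grid_cell(lat, lon),
                                      {"label": label, "details": {}})
    entry["details"].update(observed_details)  # local data refines the lookup
    return entry
```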
With the advent of V2V (vehicle to vehicle) communications, we’ll eventually be able to have each self-driving car inform another self-driving car about what is up ahead. Suppose that someone else that had been a few miles ahead of me had the ESP2 on their self-driving car, it could have detected the wind farm prior to my being able to see it, and have passed along to my self-driving car that the wind farm was coming up.
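A V2V "what's up ahead" notice of that kind could look something like the sketch below. The field names and the JSON encoding are my own assumptions for illustration; actual V2V messaging would use standardized message sets rather than ad hoc JSON.

```python
import json
import time

# Rough sketch of a V2V "what's up ahead" message. The field names and
# the JSON encoding are illustrative assumptions, not a real V2V format.

def make_scenery_message(sender_id, item, lat, lon, kind):
    """kind is 'enduring' or 'emerging'; receivers should treat
    emerging items as stale much sooner than enduring ones."""
    return json.dumps({
        "sender": sender_id,
        "item": item,
        "lat": lat,
        "lon": lon,
        "kind": kind,
        "timestamp": time.time(),
    })
```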
What would have been even more helpful likely would be the passing along of information about an emerging item. For example, the dust cloud was an emerging item. It was not of an enduring nature, and instead was something of a momentary or temporary nature. Its detection in real-time was essential, as its impact was also in real-time. Emerging items are aspects that can have a more immediate impact on the self-driving car at the time of detection.
That’s not to say that an enduring item might not also have an emerging aspect too. Suppose that one of the blades on a wind turbine suddenly broke off and went flying into the air. This is an emerging aspect that it would be handy for the AI to detect, and therefore take evasive action for the self-driving car if needed to avoid the flying blade.
This is an important point because you might think that an enduring item only needs to be scanned once. In other words, if I drove past the wind farm again, you might say there’s no need to do a detection and analysis because I had already driven past it previously. That, though, doesn’t take into account that things change over time. We need to assume that even the enduring items will change over time. The second-time analysis will be faster to undertake, since the AI mainly needs to ascertain only the differences between what it determined before and what it determines now.
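That difference-only second pass amounts to a simple set comparison. Here is one way it might be sketched, assuming each pass of the scenery produces a dictionary of item identifiers and observed attributes (the identifiers and attributes shown are made up for illustration):

```python
# Sketch: a second-time analysis only reports what changed since the
# last pass, on the assumption that even enduring items drift over time.

def diff_scan(previous, current):
    """previous/current: dicts mapping item id -> observed attributes."""
    added   = {k: v for k, v in current.items() if k not in previous}
    removed = {k: v for k, v in previous.items() if k not in current}
    changed = {k: (previous[k], current[k])
               for k in previous.keys() & current.keys()
               if previous[k] != current[k]}
    return added, removed, changed
```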
Machine Learning an Essential Element of the ESP2
We are using Machine Learning (ML) as an essential element of the ESP2. For example, driving around farm areas it becomes apparent that there are likely chances of dust clouds. The ML portion tries to look for patterns in the extra-scenery objects and learn over time what they might mean for the efficacy of the self-driving car. This also would be placed into a shared database so that other self-driving cars could tap into it. Your individual self-driving car can benefit from the hundreds or ultimately thousands upon thousands of other self-driving cars that are doing ESP2 too.
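In its simplest form, the shared pattern-learning idea is just fleet-wide bookkeeping of how often a kind of scenery co-occurs with a hazard. The toy sketch below counts such co-occurrences; a real system would use proper machine learning models, and the context and hazard labels here are purely my own assumptions.

```python
from collections import defaultdict

# Toy sketch of fleet-shared pattern learning: count how often a kind
# of extra-scenery context (e.g. "farmland") co-occurs with a hazard
# (e.g. "dust cloud"). A real system would use actual ML models; the
# labels here are illustrative assumptions.

class SharedHazardStats:
    def __init__(self):
        self.seen = defaultdict(int)     # context -> times observed
        self.cooccur = defaultdict(int)  # (context, hazard) -> co-occurrences

    def report(self, context, hazards):
        """Each car reports what it saw; the database is shared fleet-wide."""
        self.seen[context] += 1
        for h in hazards:
            self.cooccur[(context, h)] += 1

    def hazard_rate(self, context, hazard):
        n = self.seen[context]
        return self.cooccur[(context, hazard)] / n if n else 0.0
```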
One question that I get asked when I describe the Extra-Scenery Perception is whether it makes sense to be detecting and analyzing everything. If the AI is trying to analyze everything around it, this would seem to be a rather large problem, involving lots of processing time and lots of memory. Shouldn’t instead the ESP2 only be invoked when needed?
That’s the classic chicken-and-the-egg kind of question. How will the AI know when it is appropriate to invoke the ESP2? If it was not invoking it when the tractors were kicking up dust, there would have been no way to learn from the dust cloud. If it was not invoking it when the wind farm appeared, it would not have learned about the wind farm aspects. You could argue that the human occupant inside the self-driving car should tell the AI to start doing an analysis; for instance, when I was looking out the window of the car, I could have told the AI of the self-driving car to pay attention to the dust cloud or to the wind farm.
Our view right now is that we’re having the ESP2 working all the time. We figure that it is like a child: once it has learned enough, the effort of scanning all the time will be greatly reduced. We’d rather that it learns as much as it can, now, rather than waiting until some random moment when it needs to be urged into action by a human occupant.
This does bring up the interactivity with the human occupants. To some degree, the ESP2 could be considered like a tour guide that is advising the occupants. It also has the safety factor in mind, such as the dust cloud issue described earlier. The degree of chattiness would depend upon the desires of the human occupant. In any case, the Extra-Scenery Perception is more than just an idle add-on, since there are lots of circumstances wherein a faraway aspect can potentially endanger the self-driving car, and the sooner that the self-driving car realizes that the threat exists, the more options the AI has to try and prevent harm to the self-driving car and the human occupants.
This column is originally posted on AI Trends.