Robots and humans on roadways
Lanes and signage – for human or computer visualisation?
We keep hearing a lot about connected cars, autonomous vehicles and driver assistance. The connectivity solutions fall under different monikers such as V2V, V2I, V2P – all of which may collectively be understood as V2X (Vehicle-to-Everything). What I have been missing in the discussion is the practical matter of introducing autonomous vehicles on roads that continue to carry non-autonomous vehicles steered by us humans, too.
The Franco-British Symposium on Intelligent Transport Systems last week (4–5 October) finally shed some light on the – shall we say – infrastructure aspects of the puzzle. Today’s roadways are built with human eyes in mind. The road markings tell us where to position the vehicle, whether we may cross into another lane and so on. The road signs have reflective surfaces that are easily detectable by the cone and rod cells of the retina, which convert light into neural signals for the brain.
The picture on the left shows a perfectly normal rural highway in the United Kingdom (by Bob Jones, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=12547074). For human perception the lane markings are clear, and the signage in the distance will be legible once we are closer. The question is how an autonomous vehicle – in effect a robot – should be taken into consideration. Do we take the legacy human approach of visual perception under visible light as the starting point, or do we consider other approaches? As far as I can see, there are multiple ways to think about roads for robots, falling somewhere between two extremes:
1) use the existing lane markings and signage
2) create own roadways for automated vehicles with their own “signage”
One topic of particular interest to me at the Franco-British Symposium was the concept of hybrid roads. This acknowledges the fact that we are unlikely to be able to dedicate separate roadways for autonomous vehicles to run on. Probably for decades to come there will be both human-operated and autonomous vehicles on the roads. On the other hand, requiring a robot to operate based on signs developed for human observation is hardly the optimal way of helping automated vehicles find their way around. Hybrid roads are about retaining the legacy “human” markings whilst adding other road “signage” for computer consumption.
What could these signs for autonomous vehicles be like? One thought that has been tossed about is signage that is reflective at infrared frequencies – containing different information for a computer to read. For example, the same sign we read as “London 36 miles” could carry a QR code visible only to an infrared detector. Similarly, lane markings could be made reflective at wavelengths outside the visible spectrum, making them easier for computer vision to detect. Another possibility we talked about at the symposium is the use of radio frequencies, say RFID tags on the road surface for a vehicle to detect. After all, visual detection is not robust enough when we consider the various factors outdoors that may obscure lane markings (snow, ice, dirt, sand etc.).
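To make the lane-marking idea concrete, here is a minimal sketch – my own illustration, not anything presented at the symposium – of how simple detection could become on a dedicated infrared channel. Paint that reflects strongly at the chosen wavelength stands out from dull asphalt with a mere threshold; the frame, reflectivity values and threshold below are all hypothetical:

```python
import numpy as np

def detect_lane_columns(ir_frame, reflectivity_threshold=0.8):
    """Return column indices where the infrared reflectivity exceeds
    the threshold in a majority of rows -- a crude stand-in for lane
    marking detection on an IR camera channel."""
    bright = ir_frame > reflectivity_threshold   # per-pixel mask
    column_hit_ratio = bright.mean(axis=0)       # fraction of lit rows per column
    return np.where(column_hit_ratio > 0.5)[0]

# Synthetic 8x12 IR frame: dull asphalt (~0.2 reflectivity) with two
# highly reflective lane stripes painted at columns 3 and 9.
frame = np.full((8, 12), 0.2)
frame[:, 3] = 0.95
frame[:, 9] = 0.95
print(detect_lane_columns(frame))   # -> [3 9]
```

A real system would of course need perspective handling, noise filtering and tracking over time; the point is only that purpose-built reflectivity turns detection into an almost trivial signal-processing task.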
Satellite positioning systems such as GPS or Galileo are familiar and extensively used as navigation aids. Their accuracy, however, is not sufficient for the centimetre-level granularity required to keep an autonomous vehicle exactly where it is supposed to be on a roadway. Other difficulties arise from shadowing and interference. Satellite positioning will certainly play a role in autonomous transportation, but more as an aid – much as it is for us humans today.
I learned at the symposium that the MIT Lincoln Laboratory has been working on using Localising Ground Penetrating Radar (LGPR) to provide real-time, accurate positioning information for autonomous vehicles. The concept is based on radio signals in the 100–400 MHz range that reflect from anomalies in underground features – the idea being that the reflections are sufficiently random to provide a unique signature for a location at centimetre accuracy. Given a prior mapping of the ground beneath a road (or anywhere, for that matter), an autonomous vehicle carrying this map can later position itself very accurately.
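The map-matching step can be illustrated with a toy one-dimensional sketch – my own simplification, not MIT Lincoln Laboratory’s implementation. Treat the prior survey as a vector of reflection samples, one per centimetre of road, and slide the vehicle’s live scan along it to find the best correlation; all the numbers here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical prior map: one radar reflection sample per centimetre
# along a stretch of road. Subsurface anomalies look random enough
# that every position has a near-unique signature.
road_map = rng.standard_normal(10_000)          # 100 m of road at 1 cm resolution

true_position_cm = 6_250                        # where the vehicle actually is
window = 200                                    # live radar sees 2 m of ground
live_scan = (road_map[true_position_cm:true_position_cm + window]
             + 0.1 * rng.standard_normal(window))   # measurement noise

def locate(scan, prior_map):
    """Slide the live scan along the prior map and return the offset
    (in centimetres) with the highest correlation."""
    scores = [np.dot(scan, prior_map[i:i + len(scan)])
              for i in range(len(prior_map) - len(scan) + 1)]
    return int(np.argmax(scores))

print(locate(live_scan, road_map))              # -> 6250
```

Because the subsurface reflections are effectively random, the correlation peaks sharply only at the true position – which is exactly the property the LGPR idea relies on.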
I would classify the various means of accurately locating an autonomous vehicle as either assisted or unassisted. A technique such as LGPR is unassisted, since nothing needs to be done to the road infrastructure. On the other hand, lane markings with e.g. infrared reflectivity, or RFID tags, require those features to be installed, so they constitute an assisted means of locating an autonomous vehicle. Another distinction is that LGPR consults a map – much like a satnav does – and determines the vehicle’s position by comparing measurement results against it, whereas lane markings that the vehicle reads whilst passing them require no map database at all.
Another dimension of the physical infrastructure that a vehicle relies on for locating itself is its longevity and stability. Weather is an obvious factor, changing several times during a calendar year. The built environment and nature also change in due course: trees may be cut down or new fences put up. All this encourages finding robust solutions for positioning autonomous vehicles. Simply replacing our eyeballs with computer cameras is not the sensible way forward.
This field is very interesting and I am looking forward to learning more about it.