By Mariana Meneses
Navigation systems provide information and guidance for spatial-positioning tasks that we may want to delegate or optimize.
There are various types of navigation systems that use different technologies and navigation methods — for example, satellite, inertial, acoustic, or robotic inputs. These systems have diverse applications, including transportation, communication, security, agriculture, mining, finance, and research.
For example, many of us use navigation tools like Google Maps to navigate unfamiliar roads or avoid traffic. These tools also have indirect effects on other aspects of our lives, such as the routes taken by taxi drivers, the flight paths of aircraft, or the purchases delivered to our doorsteps. However, navigation systems also face challenges and limitations, such as signal interference, accuracy errors, security risks, ethical issues, and environmental damage. Therefore, we need to be aware of the benefits and drawbacks of these systems and use them responsibly.
When errors occur, the consequences can be catastrophic.
Consider transportation navigation systems, which guide vehicles of all types – aircraft, ships, and cars – through potentially hazardous routes in pursuit of the quickest path. A malfunction in such a system can lead to very costly and life-threatening collisions.
For instance, according to Space.com, SpaceX’s Starlink satellites have had to maneuver over 50,000 times to avoid potential collisions since their first launch in 2019. The number of these maneuvers has been doubling every six months, raising concerns about the safety of operations in space. Between December 1, 2022, and May 31, 2023, the satellites had to maneuver more than 25,000 times to avoid dangerous approaches to other spacecraft and orbital debris. This rapid increase worries experts, as it could lead to many collisions if the trend continues.
Navigation systems, like the GPS in your phone or car, work by receiving signals from satellites that are orbiting the Earth.
Each satellite sends out a signal that includes the time the message was transmitted and the satellite’s location when it sent the message. Your navigation device receives these signals and calculates how long it took for each signal to arrive. Since the signals travel at the speed of light, this time tells your device how far away each satellite is. With the distance to at least four satellites known – the fourth is needed to correct the error in your device’s own clock – your device can figure out exactly where you are. This process is called trilateration.
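The geometry behind trilateration can be sketched in two dimensions. The function below is an illustrative example, not real GPS code: it assumes exact distances to three known anchor points, whereas a real receiver works in three dimensions with a fourth satellite to cancel its clock error. Subtracting the distance equation of the first anchor from the other two turns the problem into a pair of linear equations.

```python
import math

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Find (x, y) given three known anchor points and the distance to each.

    Each measurement defines a circle around its anchor. Subtracting the
    first circle's equation from the other two cancels the quadratic terms,
    leaving two linear equations solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Linear system: a1*x + b1*y = c1  and  a2*x + b2*y = c2
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("anchors are collinear; position is ambiguous")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Simulate: measure distances from a hidden point to three known anchors,
# then recover the point from the distances alone.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
hidden = (3.0, 4.0)
dists = [math.dist(hidden, a) for a in anchors]
x, y = trilaterate(anchors[0], dists[0], anchors[1], dists[1],
                   anchors[2], dists[2])
```

In real GPS the distances themselves carry clock and atmospheric errors, which is why receivers solve for four unknowns (three coordinates plus the clock offset) rather than this clean two-dimensional case.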
Autonomous vehicles, or self-driving cars, use navigation systems to track where they are and where they’re going. They use GPS signals, just like the type your phone receives, for a basic indication of their location.
But GPS isn’t always precise, so the cars also use other tools. They have sensors and cameras that can measure the road and surrounding objects. They use this information to create a detailed map of the surroundings. This map helps them to determine things like where the lanes are, the positions of other vehicles, and whether there are any obstacles in the way. The car’s computer uses all this information to make decisions about where to go and how to get there safely.
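The idea of combining an imprecise GPS fix with more precise on-board sensors can be illustrated with inverse-variance weighting, the measurement-update step of a one-dimensional Kalman filter. This is a deliberately simplified sketch – production vehicles fuse many sensors with full Kalman or particle filters – and the numbers below are made up for illustration.

```python
def fuse_estimates(mean_a, var_a, mean_b, var_b):
    """Combine two independent estimates of the same quantity.

    Each estimate is a (mean, variance) pair; the less uncertain one
    receives proportionally more weight, and the fused variance is
    always smaller than either input's.
    """
    w = var_b / (var_a + var_b)          # weight given to estimate A
    mean = w * mean_a + (1 - w) * mean_b
    var = (var_a * var_b) / (var_a + var_b)
    return mean, var

# Hypothetical example: a coarse GPS fix (uncertainty ~5 m) fused with
# a lidar-based lane position (uncertainty ~0.2 m).
gps_pos, gps_var = 104.0, 5.0 ** 2
lidar_pos, lidar_var = 101.5, 0.2 ** 2
fused_pos, fused_var = fuse_estimates(gps_pos, gps_var, lidar_pos, lidar_var)
```

The fused result lands very close to the lidar estimate, reflecting how a precise local sensor can dominate a noisy satellite fix while the GPS still anchors the car on the global map.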
According to the news channel KXAN Austin, a recent incident brought the issues of navigation systems to the forefront.
In Austin, Texas, around 20 self-driving cars operated by Cruise caused a major traffic jam, leaving local drivers furious. The autonomous cars got stuck on the streets due to heavy pedestrian and vehicle traffic. The company stated that their cars are designed to prioritize safety and use caution around pedestrians. However, this incident led to gridlock and inconvenience, for which the company apologized.
It is worth noting that about 125 autonomous cars are currently operating in Austin, and they are frequently seen stuck at crosswalks, traffic lights, and intersections, causing frustration among drivers.
The Guardian reports that, in San Francisco, autonomous taxi navigation systems have encountered a range of problems, particularly within the fleet operated by Cruise. Noteworthy issues include:
- Unpredictable stops and traffic delays: These self-driving taxis have been observed making sudden stops, resulting in traffic congestion. Consequently, this has disrupted the overall flow of city traffic, causing frustration among fellow road users.
- Interference with emergency services: Incidents have occurred where these autonomous taxis have impeded emergency response efforts. Instances include running over fire hoses and obstructing fire engines during critical situations.
- Challenges with pedestrians and cyclists: Autonomous taxis have struggled to navigate around pedestrians and cyclists, further contributing to transportation disruptions.
These problems have prompted numerous complaints from both city authorities and residents, and ongoing investigations are being conducted to address these concerns.
The incidents in Austin and San Francisco underscore the risks of relying solely on autonomous navigation systems without human intervention or oversight.
Autonomous vehicles are designed to follow algorithms and predefined rules, but they may struggle to adapt to exceptional or unforeseen circumstances. Designing navigation systems based solely on norms and typical scenarios may lead to a lack of preparedness for exceptional or unusual situations. It’s crucial to consider outliers during system development.
Human drivers often rely on their ability to make quick decisions and adapt to unexpected events. This adaptability is challenging to replicate in autonomous systems, especially in situations where ethical choices must be made.
Ensuring that autonomous systems can handle exceptional cases effectively and safely is a critical part of their development. This may involve improving sensor technology, enhancing decision-making algorithms, and incorporating fail-safe mechanisms.
A 2021 paper published in the journal Philosophy & Technology by Sven Ove Hansson and co-authors warned of many potential ethical issues around self-driving cars that go beyond rare and tricky situations, such as:
- Tolerance: If we expect zero accidents caused by self-driving cars, it might take a long time before they can be deployed at all, delaying improvements and their effective implementation. We must decide how safe is safe enough.
- Trade-offs: We’ll have to make choices about what’s more important on the road: safety, or other things like speed and convenience.
- Over-reliance: If we trust self-driving cars too much to avoid accidents, we might do risky things like walking in front of them, thinking they’ll always stop in time.
- Kids not following rules: Children traveling alone might not always follow safety rules like wearing seatbelts when there’s no human driver to remind them.
- Segregation: If “fast passage” is commercialized (i.e. a system or service that allows for quicker travel or transit, potentially through dedicated lanes or routes), it might lead to a division of road traffic based on the socio-economic status of drivers.
- Hacking and crimes: Criminals could hack into self-driving cars and make them crash or use them for harmful purposes.
In sum, both unexpected and anticipated problems and the risks associated with removing the human from the navigation equation emphasize the need for robust, adaptive, and context-aware navigation systems.
While autonomous systems offer many benefits, designers and policymakers must carefully consider the potential challenges and outliers to ensure the safety and effectiveness of these technologies.
Additionally, a balance must be struck between automating tasks and maintaining the ability for human intervention and oversight in exceptional situations. Comprehensive testing, continuous monitoring, and regulatory oversight are essential components to address these challenges and risks effectively.
Interested in exploring related topics? Discover these recommended TQR articles.
- Human and Machine Interact. What Could Go Wrong, Over Time?
- Imagination is a Key Difference Between the Neural Network in Your Head and in a Machine
- The Incredible Difficulty of Replicating the Human Power of Common Sense in a Machine
- Beyond the Binary: Can Machines Achieve Conscious Understanding?
- The Power We Have Given Our Machines to Judge Who Among Us is Human
- PLATO and the Quest to Give Machines the Intuition of a Newborn Human