Why Autonomous Systems Struggle With Informal Rules
Autonomous systems are often described as rule followers. They obey traffic laws, maintain safe distances, and optimize for collision avoidance. On paper, that sounds like the ideal driver. In practice, it exposes a deeper mismatch between how machines interpret the world and how humans actually navigate it.
A lot of everyday human behavior is governed by informal rules. They are rarely written down and never consistently enforced. They are negotiated in real time. Eye contact at an intersection. A slight wave to signal “you go first.” A shared understanding that someone merging aggressively probably has somewhere urgent to be, so we let them in. Humans are constantly reading between the lines and inferring intention.
Most autonomous systems are not built to do this. From a technical perspective, that makes sense. Planning and control pipelines rely on explicit state representations. Cost functions reward smooth trajectories, predictable behavior, and adherence to formal constraints. Uncertainty is treated as something to reduce, not interpret.
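To make that concrete, here is a deliberately minimal sketch, in Python with hypothetical weights and a made-up safety radius, of the kind of trajectory cost such a pipeline might minimize. Notice that nothing a human would read as intent (hesitation, eye contact, inching forward) has anywhere to live in it; uncertainty only ever adds cost.

```python
import numpy as np

def trajectory_cost(traj, obstacles, speed_limit,
                    w_smooth=1.0, w_rule=10.0, w_uncert=5.0):
    """Score one candidate trajectory; the planner keeps the lowest cost.

    traj: (T, 2) array of planned positions, one row per timestep.
    obstacles: list of (center, covariance) pairs from perception.
    The weights and the 2.0 m safety radius are illustrative assumptions.
    """
    # Reward smooth trajectories: penalize large accelerations
    # (second differences of position).
    accel = np.diff(traj, n=2, axis=0)
    cost = w_smooth * np.sum(accel ** 2)

    # Adhere to formal constraints: penalize exceeding the speed limit.
    step_speeds = np.linalg.norm(np.diff(traj, axis=0), axis=1)  # per timestep
    cost += w_rule * np.sum(np.maximum(step_speeds - speed_limit, 0.0))

    for center, cov in obstacles:
        dists = np.linalg.norm(traj - np.asarray(center), axis=1)
        # Hard penalty for entering a fixed safety radius around the obstacle.
        cost += w_rule * np.sum(dists < 2.0)
        # Treat uncertainty as something to reduce, not interpret: the closer
        # the plan passes to a high-covariance obstacle, the more it costs.
        cost += w_uncert * np.trace(cov) * np.sum(1.0 / (1.0 + dists))

    return cost
```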
In autonomous driving, this shows up clearly. Models are trained on large datasets of labeled behavior, often collected in structured environments. Simulators reinforce scenarios where rules are followed cleanly. Edge cases are treated as rare exceptions. But in many parts of the world, and even in many parts of the US, driving is not clean. It is cooperative, messy, and deeply social.
Consider a four-way stop where one driver arrives slightly earlier but pauses, making eye contact to yield to someone who looks unsure. A human immediately reads that hesitation and proceeds. An autonomous vehicle, following right-of-way rules and waiting for a fully unambiguous signal, may stall the interaction entirely. What humans resolve through subtle negotiation becomes a deadlock when intent is not explicitly encoded.
Or take dense pedestrian traffic in a city center. Humans often step into the street gradually, not to force a stop, but to test whether approaching cars are aware of them. Drivers respond by slowing just enough to acknowledge that presence. For an autonomous system, that same pedestrian can register as an unpredictable obstacle, triggering abrupt braking or excessive yielding.
I remember noticing this when I watched Waymo vehicles driving around San Francisco. In moments where human drivers would usually exchange a glance, hesitate briefly, or just go, the car often waited, as if it were looking for a clearer signal that never really came. What felt like a quick, unspoken negotiation between people stretched into hesitation. It didn’t look like the system was broken, just out of sync with the informal way humans move through shared space.
As someone who has driven in places where traffic flows through mutual understanding rather than strict enforcement, I find these limitations obvious. A pedestrian stepping slightly into the road is not just an obstacle. They are communicating intent. A car inching forward is negotiating space. These interactions are not violations so much as conversations.
Technically, informal rules are difficult because they are not consistent. They vary by region, culture, time of day, and even individual temperament. There is no single reward function that captures them cleanly. Training data tends to flatten this complexity, averaging behaviors into something that looks safe but lacks nuance.
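As a toy illustration of that flattening, with entirely made-up numbers: suppose drivers in different places accept very different gaps before merging. Pooling the data and learning a single threshold produces behavior that matches no real driving culture.

```python
# Hypothetical gap-acceptance thresholds (seconds) before drivers merge.
# The regions and numbers are invented purely to illustrate the averaging.
gap_thresholds = {
    "suburban_highway": 4.5,  # conservative, rule-bound merging
    "dense_city_core": 1.5,   # assertive, heavily negotiated merging
    "rural_two_lane": 3.0,
}

# Training on the pooled data and averaging yields one learned threshold...
learned = sum(gap_thresholds.values()) / len(gap_thresholds)
print(f"learned gap threshold: {learned:.1f} s")  # 3.0 s

# ...which is too timid for the city core (the car never finds a "safe" gap)
# and too aggressive for the suburb (the car surprises other drivers).
# The average looks safe in aggregate but matches no one's actual norms.
```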
The result is systems that hesitate in situations humans resolve intuitively. The autonomy stack does exactly what it was designed to do, but the design assumes that rules are explicit and behavior is standardized. Humans know that neither of those things is true.
This gap matters because autonomy does not exist in a vacuum. When systems fail to read informal rules, humans tend to adapt. People learn how to signal to machines and how to fit into the model’s expectations.
That shift is not evenly felt. People who already operate within dominant norms of behavior are easier for autonomous systems to model. Those whose environments rely more heavily on informal coordination are more likely to be treated as anomalies. What begins as a technical limitation quickly becomes a social one.
Still, this does not mean the gap is permanent. As autonomy research moves beyond rigid rule compliance toward interaction-aware models, there is room to design systems that treat uncertainty as information rather than noise. Systems that learn not just what humans do, but how humans negotiate. The future of autonomy may depend less on “perfect rule-following” and more on learning how to participate in the informal human conversations that make shared spaces work.
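One hedged sketch of what treating uncertainty as information could look like, using an invented function and illustrative thresholds rather than any production system's API: instead of waiting for the probability that a pedestrian will cross to clear a hard threshold, the planner reads how that estimate is changing, the inching forward from the earlier example, and slows proportionally.

```python
def target_speed(p_cross, p_cross_trend, max_speed=12.0):
    """Pick a target speed (m/s) from an estimated crossing probability.

    p_cross: current estimate in [0, 1] that the pedestrian intends to cross.
    p_cross_trend: change in that estimate over the last second.
    All names and thresholds here are illustrative assumptions.
    """
    # Rigid rule-following would wait for near-certainty, then brake hard:
    #   return 0.0 if p_cross > 0.95 else max_speed

    # Interaction-aware alternative: a rising estimate is itself a message
    # ("I am checking whether you see me"), so ease off early and smoothly,
    # acknowledging the pedestrian instead of flipping between full speed
    # and an emergency stop.
    base = max_speed * (1.0 - p_cross)
    if p_cross_trend > 0.1:   # intent is becoming clearer; yield more visibly
        return 0.5 * base
    return base
```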