A strange incident involving a very angry, anti-robot man and a Waymo robotaxi happened in January and is only now getting media attention, and it highlights some interesting, often-overlooked aspects of automated vehicles: the ones less focused on the mechanics of driving and more on its social and cultural dimensions, which are deceptively important to the operation of a self-driving vehicle. Cars have always been, after all, people operating a machine. Before AVs, one could assume that any car on the road was essentially a person in a prosthetic that let them move faster and carry more than they could on foot; fundamentally, every car was operated by a person who existed within, and was part of, the surrounding culture, at least to some extent. AVs are changing that, and this incident is a somewhat unsettling example of why this is a topic that needs to be addressed.
The Waymo incident from January involved a 37-year-old San Franciscan named Doug Fulop and two other passengers, who were in a Waymo after a night out. A very agitated man attacked the car, pounding on the windows and screaming about how he wanted to murder Fulop and his friends for “giving money to a robot.”
Technically, anybody who has used a vending machine has done as much giving money to a robot as the Waymo passengers, but we tend not to view the world like that.
“We felt helpless,” Fulop told the Seattle Times. “If he had kept hammering on one window instead of alternating, I’m sure he would have eventually broken through,” Fulop added, which may or may not be true; car windows are pretty tough, but it’s certainly not impossible to break them if you’re determined enough, or have a spark plug handy to throw at them.
Waymo cars are programmed, like most automated vehicles, to stop when a human being approaches them, for obvious safety reasons. While this makes sense in many contexts, it can also be exactly what you don’t want when the human in question is blinded by anti-robot rage or has other nefarious intentions toward the people inside the car.
You may recall an incident in 2024 when a creepy loser stopped a Waymo with a woman passenger inside so he could harass her and fecklessly try to get her number:
Warning to women in SF
I love Waymo but this was scary
2 men stopped in front of my car and demanded that I give my number.
It left me stuck as the car was stalled in the street.
Thankfully, it only lasted a few minutes…
Ladies please be aware of this pic.twitter.com/6VEqb1WoJb
— Amina (@Amina_io) September 30, 2024
There have been other incidents where people have attempted to cover a Waymo’s driving cameras and sensors, effectively disabling the car. There’s no good, clear solution to this sort of thing, either, because any real countermeasure against human bad actors would require the automated vehicle to behave in ways that could be dangerous, which is usually anathema to how we want these machines to operate.
There are times when you may need an AV to break rules, and possibly even make value judgments about human safety. If people are threatening the passengers of a robotaxi, is a robotaxi within its rights to endanger them to protect its occupants? For a human, there are legal standards for actions that can be taken in the service of self-defense; can we reasonably apply those criteria to a machine?
In Fulop’s case, he called 911 and Waymo’s own support line; Waymo made it clear that Fulop would not be able to drive the car away manually, nor could the car be instructed to move while a person was standing nearby. The attack lasted six minutes in total, and it was only because a crowd supporting the attacker gathered – which in itself is perhaps a little troubling – that the man strayed far enough from the car for Fulop and his friends to escape.
There’s also an interesting parallel in this sort of attack to something that happened almost 200 years ago. In 1829, a steam automobile built and operated by Goldsworthy Gurney was attacked by Luddites as it drove passengers from London to Bath.

In this case, the attackers were millworkers who had lost their jobs, and they, according to Gurney’s daughter, “burnt their fingers, threw stones, and wounded poor Martyn the stoker.” Two centuries later, the same resentments and fears of automation remain.
All this is a good reminder of just how much more there is to the task of driving than the physical act itself. Driving is really just another way humans interact with one another, especially in crowded locations and contexts like cities. What we see in this particular event is simply another way in which automated vehicles need to learn how to interact with their environment. This isn’t something like learning how to navigate a blind left turn, but it’s potentially just as important.
These “soft” challenges may prove even more difficult for machines to navigate, because the machine isn’t aware of what it is doing or why. It has no understanding of what it is or how what it is interacts with society at large, and individuals in that society at, um, small.
I do know of a book that brought up a lot of these issues years ago, if anyone at Waymo would like to buy it for their employees, by the way.
I reached out to Waymo for comment, and will update if/when I hear anything. I think this event and others like it, which are definitely going to happen again, are a good reminder that this social/cultural aspect of automated driving can’t be ignored and, perhaps more importantly, should not be left up to individual corporations to decide. We, as a society that chooses to have automated vehicles operating within it, need to decide what sorts of behaviors we want these machines to perform.
The parameters for what is acceptable or not shouldn’t be decided by companies focused on profits; these machines are in human spaces, and what we, collectively, decide is appropriate behavior in difficult situations should be codified, and any company participating in this space needs to comply with what we decide.
These are not easy questions; are we okay with an automated vehicle deliberately causing a person harm if it means protecting passengers from harm? How is that determined? Do we want these decisions to be made within the machine, or do we want to have human input? What are the thresholds of danger we want to establish?
None of this is easy, but we can’t ignore these sorts of situations and questions. The longer we wait, the harder it’s just going to get.
Top graphic image: Waymo, Gurney Journey
The post Waymo Incident That Trapped Harassed Passengers In Car Is A Reminder That A Person Driving Is Still A Person, But An AV Is Not, And Why That Matters appeared first on The Autopian.