Still frame from dashcam video by Tesla driver Daniel Milligan: a boat docked at a pier at night, its reflection visible in the calm water, with the car's display reading 7 mph and showing that self-driving mode is engaged.

FSD Tries to Drive Straight Into Lake

A Tesla running the company's Full Self-Driving (FSD) beta software recently tried to steer its driver straight into a lake, an incident that once again puts the alarming misjudgments of autonomous driving technology under the spotlight. The event, captured on video and shared by the Tesla owner, underscores the need for driver vigilance even when advanced assistance systems are engaged, and it ironically echoes past boasts by Tesla CEO Elon Musk about the aquatic capabilities of his vehicles.

The unsettling footage, which quickly drew attention online, was shared by Daniel Milligan, a Tesla owner and former SpaceX engineer. It begins innocently enough: a quiet nighttime drive along a seemingly ordinary road. Then the car, under FSD control, makes a gentle but decisive right turn onto what is clearly a boat ramp. Instead of slowing or recognizing the danger ahead, the vehicle accelerates steadily toward the water's edge. Only Milligan's swift intervention, hitting the brakes, kept the car out of the water. "I had to intervene," Milligan recounted, expressing a mix of curiosity and relief. "I'd like to see what it would've done, but at the speed it was going, it definitely felt like it was going to go for a swim."

Milligan clarified that his intended destination was a driveway located merely fifty feet away from the boat ramp, a route he presumably expected FSD to navigate with ease. He hypothesized that the dim lighting conditions played a significant role in the software’s misinterpretation of the environment. His suspicion was later reinforced by a daytime test of the same route: “Just tried it again during the day (same direction and destination) and it completely skipped the boat ramp,” he reported. “My guess is that it could actually see the driveway up ahead in the daytime or could more clearly see the lake.” This observation underscores the persistent difficulties self-driving systems face in low-visibility conditions, where human perception often adapts more readily than current AI.

The incident carries a striking, if unintended, irony when viewed against Elon Musk's more audacious claims. The Tesla CEO once boasted that the Cybertruck could "double as a boat," suggesting it would be waterproof enough to serve as a temporary vessel. That was a hyperbolic nod to the Cybertruck's supposed ruggedness, but this latest FSD misadventure humorously (and dangerously) suggests the AI powering Tesla's vehicles may have taken the pronouncement a bit too literally, or perhaps "hallucinated" that such amphibious capabilities extend to every Tesla. The disconnect between aspirational marketing and the often-flawed reality of the technology is a recurring theme in the discourse around Tesla's autonomous ambitions.

Indeed, Milligan's experience is far from isolated. The footage joins a growing catalog of examples of self-driving cars, and Tesla's FSD in particular, struggling to navigate public roads safely and consistently. FSD and its predecessor, Autopilot, have drawn intense scrutiny from regulators worldwide. In the United States, the National Highway Traffic Safety Administration (NHTSA) has opened multiple investigations into Tesla's advanced driver-assistance systems, probing incidents ranging from collisions with stationary emergency vehicles to "phantom braking" events in which cars suddenly slow for no apparent reason. Tesla has also issued recalls over FSD behaviors, such as the feature that let vehicles perform "rolling stops" at stop signs, further underscoring the system's immaturity.

The track record of these systems is, to put it mildly, worrisome. Beyond documented instances of Teslas driving into the path of oncoming trains or into stopped emergency vehicles, FSD and Autopilot have been implicated in numerous deadly accidents. In one particularly tragic case, Autopilot was found partially responsible for the death of a 22-year-old woman struck by a Tesla operating under the system. Such incidents highlight the profound difference between a driver-assistance feature and a truly autonomous vehicle, a distinction that Tesla's "Full Self-Driving" moniker blurs.

The very name "Full Self-Driving" is a central point of contention. Despite the branding, FSD is a Level 2 advanced driver-assistance system (ADAS) under the Society of Automotive Engineers (SAE) taxonomy, meaning it requires active supervision from a human driver who must remain fully engaged and ready to intervene at any moment. It is not, by any industry standard, an autonomous system that can operate without human oversight. Critics argue that the misleading nomenclature creates a false sense of security, encouraging over-reliance and dulling drivers' readiness to take control when the system inevitably encounters situations it cannot handle safely. Regulators are increasingly focused on this psychological effect of the branding, which leads drivers to delegate too much responsibility to the AI.

On the technical side, Tesla's "pure vision" approach, which relies on cameras alone without supplemental sensors like radar or lidar, is often cited as a contributing factor. Tesla's argument is that humans drive using vision alone, so cameras should suffice; replicating that perceptual ability with current AI, however, has proven enormously difficult. Low light, adverse weather, confusing road markings, and the vast "long tail" of unpredictable real-world scenarios pose immense hurdles for a vision-only system. Under such conditions the AI can misinterpret its surroundings, essentially "hallucinating" a navigable path where none exists, which is perhaps what happened when Milligan's Tesla mistook a boat ramp for the road to his driveway.

Despite these well-documented flaws and the associated safety risks, many Tesla owners, like Milligan, remain fervent proponents of FSD. He still considers it a “game changer,” albeit one that “needs more work before it’s fully autonomous.” This sentiment reflects a broader phenomenon: the allure of cutting-edge technology, the convenience it offers, and the belief in continuous improvement can overshadow immediate safety concerns for some early adopters. The futuristic appeal of having a car that can largely drive itself, even with caveats, is a powerful draw, creating a dedicated user base willing to accept the current imperfections in anticipation of future advancements.

The incident also offers a moment of dark, if unintentional, humor: it nearly re-enacted a famous scene from the beloved American sitcom "The Office," in which Michael Scott, blindly trusting his GPS, drives his car straight into a lake while screaming, "The machine knows where it's going. The machine knows!" Comedic on screen, the real-world parallel underscores a serious point: over-reliance on imperfect technology can lead to dangerous, even tragic, outcomes. Michael Scott's absurd faith in his GPS mirrors, in a graver context, the implicit trust drivers place in systems like FSD, a trust the technology has not yet earned.

Looking beyond Tesla, the difficulty of achieving true autonomy is evident across the industry. Competitors like Waymo and Cruise, which deploy their autonomous vehicles in geofenced, tightly controlled environments, encounter significant hurdles of their own. Even these sophisticated systems lean on human remote operators when the AI gets stumped, as revealed by reports of Waymo vehicles requiring assistance from workers in the Philippines. Fully autonomous driving, free of human intervention, remains a distant goal for every player, not just Tesla.

Self-driving technology holds immense promise for transforming transportation, but incidents like Daniel Milligan's near-miss at the lake are stark reminders of the technical hurdles that remain. They underscore the paramount importance of safety, realistic expectations, and responsible deployment. As the push toward fully autonomous vehicles continues, balancing technological ambition against real-world safety will remain critical, so that innovation does not come at the expense of human lives.