The core promise of autonomous vehicles has always hinged on the vision of a safer future, one where advanced AI and precision sensors eradicate the human error responsible for the vast majority of traffic accidents. Yet, recent revelations surrounding Tesla’s nascent "Robotaxi" service in Austin, Texas, paint a starkly contradictory picture, suggesting that the company’s autonomous fleet is not only failing to meet this fundamental safety objective but is performing significantly worse than human drivers and industry competitors. Updated filings from the National Highway Traffic Safety Administration (NHTSA), meticulously examined by Electrek, expose a disturbing pattern of collisions and a troubling lack of transparency, raising serious questions about Tesla’s approach to self-driving technology and its commitment to public safety.

Since its launch in June 2025, Tesla’s self-proclaimed Robotaxi service has accumulated 14 documented collisions in Austin. The latest batch of five crashes, submitted by Tesla last month and occurring between December 2025 and January 2026, spans a variety of scenarios: a collision with a fixed object at 17 mph, an impact with a bus while the Robotaxi was stationary, a crash with a truck at 4 mph, and two separate incidents in which the Tesla reversed into a pole or a tree. This diversity suggests deficiencies across a range of driving tasks, from obstacle detection to maneuvering in complex environments, rather than isolated anomalies.

However, a complete understanding of these incidents remains elusive due to Tesla’s heavy redaction of its crash reports. The company consistently censors crucial details, including the crash narrative, under the guise of protecting "confidential business information." While legally permissible, this practice starkly contrasts with other autonomous vehicle developers and severely impedes independent analysis and public oversight. No amount of black ink, as Electrek aptly notes, can obscure the alarming frequency of these accidents, which statistical analysis reveals to be substantially higher than industry benchmarks.

Based on mileage data shared in Tesla’s Q4 2025 earnings, Electrek estimates that the Robotaxi fleet had accumulated approximately 800,000 miles by mid-January. Dividing that mileage by the 14 reported crashes yields a rate of roughly one incident every 57,000 miles. This figure stands in stark contrast to Tesla’s own Vehicle Safety Report, which claims the average American driver experiences a minor collision every 229,000 miles. By Tesla’s own metrics, then, its Robotaxis are crashing at roughly four times the rate of human motorists. This discrepancy challenges the very premise of autonomous vehicles as inherently safer, especially when deployed by a company that has long championed its "Full Self-Driving" (FSD) capabilities as a revolutionary safety enhancement.

The comparison becomes even more unflattering when Tesla’s performance is stacked against that of Waymo, a pioneer in autonomous driving technology and a frequent target of Elon Musk’s public derision. Waymo, which operates a far larger and more mature fleet, averages roughly one accident every 98,000 miles across more than 127 million fully driverless miles. That gap is magnified by significant operational differences. Waymo’s fleet comprises over 2,000 robotaxis operating in several major US cities, including Phoenix, San Francisco, and Los Angeles, with plans for further expansion. These vehicles are fully driverless, operating at Level 4 autonomy: they handle all driving tasks under specific conditions without human intervention, though remote teleoperators may assist in complex or unusual situations.
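The back-of-envelope arithmetic behind these comparisons can be reproduced directly from the figures cited above. Note that the fleet mileage is Electrek's estimate, so the resulting rates are approximations, not precise statistics:

```python
# Approximate crash-rate comparison using the figures cited in this article.
# The Robotaxi mileage is Electrek's estimate derived from Tesla's Q4 2025
# earnings, so these are rough, order-of-magnitude comparisons only.

robotaxi_miles = 800_000          # estimated fleet miles by mid-January
robotaxi_crashes = 14             # NHTSA-reported collisions since launch

human_miles_per_crash = 229_000   # Tesla's own Vehicle Safety Report baseline
waymo_miles_per_crash = 98_000    # Waymo's reported average

robotaxi_miles_per_crash = robotaxi_miles / robotaxi_crashes

print(f"Robotaxi: one crash every ~{robotaxi_miles_per_crash:,.0f} miles")
print(f"vs. human baseline: {human_miles_per_crash / robotaxi_miles_per_crash:.1f}x the crash rate")
print(f"vs. Waymo: {waymo_miles_per_crash / robotaxi_miles_per_crash:.1f}x the crash rate")
```

Running this yields roughly one crash every 57,000 miles, about 4.0 times the human baseline and about 1.7 times Waymo's rate, matching the ratios discussed above.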

In contrast, Tesla’s "Robotaxi" endeavor in Austin is a much smaller-scale operation, reportedly involving fewer than fifty vehicles confined to a limited geographical area. Crucially, these vehicles are essentially running Tesla’s FSD Beta software in a commercial pilot: a system that, despite its ambitious name, is classified as a Level 2 advanced driver-assistance system and still requires constant, active human supervision. Reports have even indicated that Tesla resorted to having human-driven "tailing cars" follow its Robotaxis to fulfill Musk’s boast of supervisor-less rides, underscoring the experimental nature of the deployment compared to Waymo’s established Level 4 service. The distinction is critical: Waymo’s vehicles are designed for true driverless operation, while Tesla’s FSD Beta, even in a "Robotaxi" context, relies on an often opaque interplay between AI and human oversight that blurs the lines of responsibility and true autonomy.

Beyond the troubling crash statistics, Tesla’s reporting practices have drawn significant criticism and regulatory scrutiny. The updated NHTSA data revealed a particularly egregious example: a July 2025 crash initially described as causing "property damage only" was quietly revised in December to indicate "Minor W/Hospitalization." This nearly half-year delay in accurately reporting an injury underscores a pattern of delayed or inadequate disclosure that has previously prompted an NHTSA investigation. The agency has scrutinized Tesla for repeatedly failing to report crashes in a timely manner, sometimes months after incidents occurred. Such delays not only hinder regulatory oversight but also undermine public trust in the company’s commitment to safety and transparency.

The systematic censorship of crash narratives further exacerbates these concerns. While other autonomous vehicle companies, such as Waymo and Cruise (prior to its recent operational pause and subsequent scrutiny), provide more detailed, albeit anonymized, accounts of incidents in their public safety reports, Tesla stands alone in consistently redacting critical details from its NHTSA filings. This practice, framed as protecting "confidential business information," effectively shields the specifics of its autonomous system’s failures from public and independent scrutiny. Such a lack of transparency makes it exceedingly difficult for safety advocates, researchers, and the public to assess the true risks and identify recurring patterns or systemic flaws in Tesla’s FSD technology.

The broader implications of Tesla’s Robotaxi safety record extend beyond the company itself, potentially shaping public perception and acceptance of autonomous vehicle technology as a whole. For autonomous vehicles to achieve widespread adoption, the industry must demonstrate an unimpeachable safety record and foster durable public trust. Incidents like these, coupled with transparency issues, risk eroding that trust, making consumers hesitant to embrace a technology that promises safety but, in practice, appears to introduce new risks.

Tesla’s "Full Self-Driving" (FSD) Beta, which forms the technological backbone of its Robotaxi ambition, has long been a subject of controversy. Despite its name, FSD Beta is not truly "full self-driving" in the industry’s widely accepted Level 4 or Level 5 definitions. It frequently requires human intervention, has been associated with incidents of "phantom braking," misjudgments in traffic, and confusion in complex driving scenarios. The company has faced multiple recalls and software updates related to safety deficiencies in FSD Beta. Deploying such a system, even in a limited commercial pilot, as a "Robotaxi" service, particularly with a less robust safety record than human drivers or competitors, raises questions about the maturity and readiness of the technology for widespread commercialization.

In conclusion, the latest data from Austin paints a troubling picture for Tesla’s Robotaxi ambitions. The company’s autonomous vehicles are crashing at a rate significantly higher than both human drivers and leading competitors like Waymo. This safety record is compounded by a pattern of delayed incident reporting and systematic censorship of crash details, which severely undermines transparency and public confidence. While Tesla remains a formidable innovator, its current approach to autonomous vehicle deployment, characterized by an aggressive timeline and a lack of openness about safety performance, poses substantial technical, operational, and ethical challenges. For the promise of truly safer roads through autonomy to be realized, rigorous safety engineering, comprehensive testing, and unwavering transparency must take precedence over marketing claims and ambitious timelines. The journey to a fully autonomous future is complex, and safety, not speed, must remain the ultimate determinant of success.