Microsoft Added AI to Notepad, and the Bloated App Handed Hackers a Remote Code Execution Zero-Day.
Microsoft’s relentless pursuit of artificial intelligence integration across its core Windows operating system and essential software has led to a cascade of glaring issues, culminating in recent critical security vulnerabilities, including a remote code execution zero-day in the once-simple Notepad application. This aggressive strategy, championed by CEO Satya Nadella, who boasts that as much as 30% of Microsoft’s code is now AI-generated, aims to transform the platform into an “agentic OS”—a system designed to anticipate and proactively assist users. However, this vision is increasingly clashing with user expectations for reliability and security, leading to widespread frustration and the coining of the pejorative term “Microslop” by a growing number of disgruntled users. The company frames its AI push as innovation, but many perceive it as feature bloat that introduces instability and significant security risks into applications that historically served basic, robust functions.
The notion of an “agentic OS” suggests a future where the operating system actively manages tasks and anticipates user needs, moving beyond simple command execution to a more autonomous role. While this futuristic concept holds promise for enhancing productivity and streamlining complex workflows, its rushed implementation has demonstrably introduced more problems than solutions. The promise of an intelligent, self-optimizing system has, for many users, devolved into a cumbersome, bug-ridden experience. The sheer volume of AI-generated code, while potentially speeding up development, appears to be compromising the rigorous testing and quality assurance that users have come to expect from a foundational operating system. This shift has not only led to performance degradation but has also created fertile ground for new, unexpected vulnerabilities.
Indeed, the rise in software bugs and system instability has become alarmingly frequent, surpassing the usual occurrences associated with operating system updates. Just last month, Windows 11 enterprise users found themselves grappling with a critical flaw that caused their systems to become trapped in an endless shutdown loop. This wasn’t merely an inconvenience; it represented a significant security risk. Such a loop prevents systems from fully powering down or restarting correctly, leaving them in an indeterminate state that can hinder critical security updates, expose data to potential compromise if not properly managed, and severely impact business continuity. The incident highlighted how quickly a seemingly minor operational glitch can escalate into a major security and productivity nightmare in a professional environment, underscoring the delicate balance between new features and system stability.
Perhaps the most startling example of this “mission creep” and its perilous consequences comes from the transformation of Notepad. What was once a universally relied-upon, minimalist plain text editor—a paragon of simplicity and functional purity—has evolved into a bloated, AI-enhanced application laden with features that defy its original purpose. This expansion has inadvertently turned it into a significant security liability. Malware researchers from the collective vx-underground recently uncovered a “remote code execution zero-day” within the app. A zero-day vulnerability is a flaw unknown to the software’s own developers, making it particularly dangerous because no patch exists when attackers first begin exploiting it. Remote Code Execution (RCE) means an attacker can run malicious code on a victim’s computer from a distant location, effectively taking control.
Microsoft’s own documentation for this bug, identified as CVE-2026-20841, details an “improper neutralization of special elements used in a command (‘command injection’) in Windows Notepad App allows an unauthorized attacker to execute code over a network.” This technical jargon translates to a frighteningly simple attack vector: an attacker could craft a malicious link embedded within a Markdown file—a lightweight markup language for formatting plain text—and trick a user into opening it in Notepad. When clicked, the application would then launch “unverified protocols that load and execute remote files,” effectively allowing the attacker to run their own code on the user’s system. The absurdity of a basic text editor possessing the network functionality to facilitate such an attack is not lost on cybersecurity experts, who argue that such capabilities are entirely superfluous for its core function and only serve to expand the attack surface. While Microsoft eventually patched this vulnerability in its monthly security updates, its very existence in an application like Notepad signals a profound misdirection in software development priorities.
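Microsoft has not published exploit details beyond the advisory language quoted above, so the exact mechanics inside Notepad are not public. But the general mitigation the experts are pointing at—refusing to hand clicked links to arbitrary protocol handlers—is easy to illustrate. The sketch below is a hypothetical Python example (the scheme list and function name are assumptions, not Notepad internals) showing how an allowlist of URI schemes blocks the kind of “unverified protocol” launch described in the CVE:

```python
from urllib.parse import urlparse

# Hypothetical illustration, not Notepad's actual code: an editor that
# passes any clicked link straight to the OS will happily invoke
# protocol handlers (file:, ms-word:, custom app schemes) that can
# fetch and execute remote content. An explicit allowlist avoids this.

SAFE_SCHEMES = {"http", "https", "mailto"}  # assumed policy for this sketch

def open_link(url: str) -> bool:
    """Return True only if the link's scheme is on the allowlist.

    A real application would then hand the URL to the default browser;
    anything else (file shares, app-launch protocols) is refused.
    """
    scheme = urlparse(url).scheme.lower()
    return scheme in SAFE_SCHEMES
```

Under this policy, an ordinary web link in a Markdown file still works, while a link pointing at a network share or an application-launch protocol is simply never dispatched—exactly the “verification” step the CVE description says was missing.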
This Notepad incident is not an isolated case but rather symptomatic of a larger pattern of rushed AI integration leading to security compromises. A notable precedent is Microsoft’s AI “Recall” feature, introduced in late 2024. Designed to unobtrusively capture screenshots of users’ screens every few seconds, ostensibly to create a searchable photographic memory of their digital activity, Recall quickly unraveled into an enormous security nightmare. Cybersecurity experts and privacy advocates swiftly demonstrated how the feature created a treasure trove of sensitive personal data, stored locally in an easily exfiltratable format, making it a prime target for malicious actors. The backlash was so intense that the Windows team was forced to delay its widespread rollout and go back to the drawing board for a significant redesign. Even after being pushed to users in mid-2025 with supposed security enhancements, experts continue to issue stern warnings, branding it a “privacy nightmare” and “far too risky to be used,” emphasizing the inherent risks of constantly logging and storing such granular user activity.
Beyond security vulnerabilities, Microsoft’s AI endeavors are facing broader market and user acceptance challenges. A recent investigation by the *Wall Street Journal*, drawing on insights from current and former employees, revealed a significant struggle with confusing branding and a glaring lack of cohesion across Microsoft’s burgeoning AI product portfolio. This internal chaos has translated into external frustration, ultimately “frustrating and turning off users.” A critical indicator of this lukewarm reception is the extremely slim adoption rate of Copilot, Microsoft’s flagship AI chatbot baked directly into Windows 11. This lack of public enthusiasm for a feature positioned as central to the future of Windows suggests that users either don’t perceive the value, don’t trust the technology, or simply prefer simpler, more traditional computing experiences.
The sentiment among many technical professionals and everyday users is that Microsoft is engaging in “mission creep” on an unprecedented scale. As vx-underground eloquently put it in a tweet, “Hot take: text editors don’t need network functionality.” This simple statement encapsulates the core objection: fundamental tools should remain focused on their primary purpose without unnecessary additions that introduce complexity and risk. Secure.com echoed this, quipping, “Notepad [remote code execution] in 2026? We really out here weaponizing the .txt file because we just HAD to have AI in our basic editor.” They further warned, “If ur text editor has enough network functionality to trigger a remote shell, ur basically building a playground for attackers.” This highlights the absurdity of transforming a tool meant for basic text manipulation into a potential launchpad for cyberattacks.
Manel Rodero, a computer engineer at Polytechnic University of Catalonia, lamented, “Microsoft is turning Notepad into a slow, feature-heavy mess we don’t need. We just want something to open text files, not an AI-powered editor with security holes like this. Who the hell is in charge of this development?” His frustration encapsulates the sentiment of many who feel that core applications are being ruined in the name of innovation. IT systems engineer Nathan Kasco agreed, calling the Notepad vulnerability a “prime example of a solution in search of a problem,” arguing that while innovation is commendable, it must address genuine user needs rather than force unwanted features. Rodero further argued that Windows has “plenty of areas that need real improvement,” yet users “keep getting visual tweaks and AI gimmicks that most users will never touch.”
This pervasive dissatisfaction is quantifiable: as of late last year, hundreds of millions of users were reportedly still refusing to upgrade from Windows 10 to Windows 11. Their reluctance is often rooted in concerns about system stability, privacy implications of new features, and a general aversion to forced changes that offer little tangible benefit. The performance of many AI features has also left much to be desired. Programmer Ryan Fleury demonstrated how Windows 11’s AI-powered search bar, a fundamental component, struggled with basic queries, leading more netizens to adopt the “Microslop” moniker.
The consequences extend beyond individual user frustration, imposing significant burdens on system administrators and IT professionals. Rodero articulated this perfectly, lamenting, “All this does is make system admins spend countless hours stripping out nonsense just to deploy a clean, well-configured machine.” This “nonsense” refers to the unwanted, often intrusive AI features and telemetry that must be meticulously removed or configured to maintain system stability, security, and compliance in corporate environments. This hidden cost of Microsoft’s AI push translates into increased operational overhead, wasted time, and a less efficient IT infrastructure, all to compensate for a core product that is increasingly perceived as bloated and unreliable. The drive to infuse AI into every corner of its ecosystem, while perhaps strategically sound for Microsoft’s long-term vision, is creating immediate, tangible problems for its vast user base, compromising the very foundations of trust and functionality that define a robust operating system.

