Selig articulated this perspective during a recent appearance on The Pomp Podcast, hosted by Anthony Pompliano, a prominent figure in the crypto and tech sphere. The conversation delved into the complex implications of AI-generated content, particularly memes and images, within financial markets. Pompliano pressed Selig on whether the intent behind such content matters, or if it should face outright restriction. Selig’s response steered towards market-based, technological solutions rather than immediate blanket regulation, emphasizing the inherent capabilities of blockchain.
"The private markets have solutions — blockchain technology is a great one," Selig stated. He elaborated on the core functionalities that make blockchain suitable for this challenge: "If you can timestamp things and make sure there’s an identifier for each meme or AI generated posts, you can verify if it’s real or generated by AI… Having these technologies here in the US is critical."

This vision points to a future where every piece of digital content, especially those circulating in sensitive areas like financial discourse, could carry an immutable, cryptographically secured record of its origin and creation time. Such a system would enable users and platforms to quickly ascertain whether a piece of media was authentically human-created or if it originated from an AI model, providing a robust defense against deepfakes, synthetic news, and manipulative content. The timestamp serves as an undeniable proof of existence at a specific moment, while a unique identifier, often a cryptographic hash, acts as a digital fingerprint, ensuring the content’s integrity and linking it to its source. This approach mirrors existing efforts in digital watermarking but leverages the decentralized, tamper-proof nature of blockchain for enhanced security and transparency.
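The hash-plus-timestamp mechanism described above can be sketched in a few lines. This is a minimal illustration of the general technique, not a description of any system Selig or the CFTC has proposed; in practice the record would be anchored to a blockchain rather than held in memory, and the `source_id` field stands in for whatever identity scheme a real deployment would use.

```python
import hashlib
import time

def fingerprint(content: bytes, source_id: str) -> dict:
    """Create a verifiable record for a piece of content.

    The SHA-256 digest acts as a tamper-evident fingerprint:
    changing a single byte of the content yields a different hash.
    In a real deployment this record would be written on-chain,
    making the timestamp and digest immutable.
    """
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "source": source_id,
        "timestamp": int(time.time()),
    }

def verify(content: bytes, record: dict) -> bool:
    """Check that content matches the fingerprint in a stored record."""
    return hashlib.sha256(content).hexdigest() == record["sha256"]

meme = b"original meme bytes"
record = fingerprint(meme, source_id="creator-123")
assert verify(meme, record)                # unmodified content checks out
assert not verify(b"edited meme", record)  # any alteration is detected
```

The digest proves integrity (the content has not been altered since the record was made), while the on-chain timestamp would prove existence at a point in time; neither by itself proves *how* the content was created, which is why provenance schemes pair hashing with signed claims about the source.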
Beyond content verification, Selig’s remarks extended to the broader interplay between cutting-edge technologies. He made a striking assertion, stating, "you can’t have AI without blockchain." This bold claim suggests a deeply intertwined future for these two transformative technologies. Selig implied that for AI to operate effectively, particularly in critical sectors like finance, it requires the underlying trust, transparency, and data integrity that blockchain provides. Whether it’s for securing the datasets AI models are trained on, tracking the provenance of AI-generated outputs, or ensuring the accountability of autonomous AI agents, blockchain could serve as the immutable ledger that underpins the reliability and trustworthiness of AI systems. He further emphasized the importance of maintaining US leadership in crypto and AI innovation, suggesting that embracing these technologies domestically is crucial for national competitiveness and security.
The conversation also touched upon the regulatory approach to AI agents, particularly as autonomous trading mechanisms become more prevalent in financial markets. Regulators face the intricate task of distinguishing between automated tools, which typically operate under human supervision, and fully autonomous agents that make decisions independently. The question of how to regulate the latter presents a significant challenge. Selig advocated for a "minimum effective dose of regulation" approach, expressing concern that over-regulation could stifle technological advancement in the US. His philosophy centers on regulating "the actors" — the entities deploying and responsible for AI agents in financial transactions — rather than "the software developers" who build the tools. "The software developers are the ones building the tools, but they’re not actually engaging in the financial transactions," he clarified. This distinction aims to foster innovation by not burdening the creators of AI technologies with regulatory overhead typically applied to financial market participants, while still ensuring accountability for those who leverage these tools for financial activity. The CFTC, Selig confirmed, is actively assessing how AI models are being utilized in markets, with an emphasis on focusing enforcement efforts on participants directly engaged in financial activity, rather than broadly stifling technological development.
Selig’s comments are reflective of a broader, global push among policymakers, technologists, and developers to leverage blockchain and cryptographic solutions for content verification and provenance in the age of rampant artificial intelligence. The central challenge remains: how to effectively distinguish human-generated, authentic content from sophisticated synthetic media, often referred to as deepfakes, which can mimic human speech, appearance, and writing with alarming accuracy. These concerns are amplified in an era where misinformation can sway public opinion, impact financial markets, and even undermine democratic processes.

One prominent approach emerging to address this is the development of "proof-of-personhood" (PoP) systems. These systems aim to cryptographically confirm that an online account or action is indeed linked to a real, unique human being, rather than a bot or an AI. This is vital for combating the spread of AI-generated spam, fake reviews, and coordinated disinformation campaigns. The most widely recognized, and often controversial, example in this space is Sam Altman’s Worldcoin project. Worldcoin’s core offering, the World ID protocol, allows users to prove their humanity online without revealing their personal data, ostensibly protecting privacy while fighting against the proliferation of bots and AI-generated content.
Worldcoin’s mechanism relies on encrypted biometric iris scans, which are processed by a physical device called an "Orb." The resulting World ID is then stored on the user’s device, allowing them to verify their unique humanity in various online contexts. While the project champions privacy through zero-knowledge proofs and aims for decentralized identity, it has attracted significant criticism. Privacy advocates, including figures like Edward Snowden, have raised concerns over the collection of biometric data, the potential for centralized control despite decentralization claims, and the risks of data misuse or coercion, particularly in developing nations where the Orb is often deployed with financial incentives. The project walks a tightrope between its promise of universal human identity verification and the inherent privacy and surveillance risks associated with biometric data.
In March, Worldcoin expanded its utility with the launch of AgentKit, a toolkit designed to enable AI agents to prove their linkage to a verified human. This innovation seeks to bridge the gap between AI autonomy and human accountability. AgentKit integrates proof-of-personhood credentials with the x402 micropayments protocol, developed by Coinbase and Cloudflare. This integration allows AI agents to pay for access to services or information while simultaneously presenting cryptographic proof that they are backed by a human identity verified through World ID. This system introduces a layer of accountability, making it harder for malicious AI agents to operate anonymously and facilitating a framework where human oversight and responsibility can be traced back to AI actions, particularly in sensitive financial or social applications.
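The accountability pattern described above, a service admitting an agent's request only when it carries both a payment and a personhood credential bound to the same identity, can be sketched as follows. Every name here (`PersonhoodCredential`, `PaymentVoucher`, `ServiceGateway`) is a hypothetical illustration of the concept, not the real World ID or x402 API.

```python
from dataclasses import dataclass

@dataclass
class PersonhoodCredential:
    holder: str   # pseudonymous agent identifier
    proof: str    # stand-in for a zero-knowledge proof of unique humanity

@dataclass
class PaymentVoucher:
    payer: str
    amount_usd_cents: int

class ServiceGateway:
    """Toy gateway: serves an agent's request only if it presents both
    a payment and a credential, and both are bound to the same identity."""

    def handle(self, cred: PersonhoodCredential, pay: PaymentVoucher) -> str:
        if not cred.proof:
            return "rejected: no proof of human backing"
        if cred.holder != pay.payer:
            return "rejected: credential and payment identities differ"
        return f"ok: served agent {cred.holder}"

gw = ServiceGateway()
print(gw.handle(PersonhoodCredential("agent-7", "proof"), PaymentVoucher("agent-7", 5)))
print(gw.handle(PersonhoodCredential("agent-7", "proof"), PaymentVoucher("agent-9", 5)))
```

The design point is the binding: the payment alone shows the agent can spend, the credential alone shows a human exists somewhere, but requiring both under one identity is what makes an AI action traceable back to an accountable person.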
Beyond Worldcoin, other prominent figures in the crypto space have also explored blockchain-based solutions for content verification. Ethereum co-founder Vitalik Buterin, for instance, has repeatedly proposed using advanced cryptography and blockchain technology to enhance the verifiability of online systems. His suggestions include the extensive use of zero-knowledge proofs (ZKPs) and on-chain timestamps. ZKPs allow one party to prove the truth of a statement to another without revealing any additional information beyond the fact that the statement is true. In the context of content verification, this could mean proving that a piece of content was created by a specific, verified source without revealing the source’s actual identity, or proving that a certain timestamp is accurate without disclosing sensitive details about the content itself until necessary. Coupled with immutable on-chain timestamps, these technologies could significantly help validate how digital content is generated, modified, and distributed, creating a transparent and auditable trail without necessarily compromising user privacy.
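Full zero-knowledge proofs are far more general than anything shown here, but the simplest flavor of "proving without revealing" that Buterin's timestamping idea relies on is a salted hash commitment: publish a commitment now, disclose the content later, and anyone can check the two match. This is a toy sketch of that commit-reveal pattern, not a ZKP implementation.

```python
import hashlib
import secrets

def commit(content: bytes) -> tuple[str, bytes]:
    """Commit to content without revealing it.

    Returns a public commitment (which could be timestamped on-chain)
    and a secret salt kept by the author. The random salt prevents
    guessing attacks against low-entropy content.
    """
    salt = secrets.token_bytes(32)
    digest = hashlib.sha256(salt + content).hexdigest()
    return digest, salt

def reveal(content: bytes, salt: bytes, commitment: str) -> bool:
    """Later, the author discloses content and salt; anyone can confirm
    they match the earlier commitment, proving the content existed
    when the commitment was timestamped."""
    return hashlib.sha256(salt + content).hexdigest() == commitment

draft = b"article text, pre-publication"
c, s = commit(draft)
assert reveal(draft, s, c)                 # genuine content verifies
assert not reveal(b"tampered text", s, c)  # altered content fails
```

A true zero-knowledge proof would go further, letting the author prove a property of the content (for example, that it came from a verified source) without ever disclosing the content or the source's identity at all.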
These proposals from both regulators and prominent developers come amidst a broader push by global policymakers to establish comprehensive AI regulation. On March 20, the Trump administration released a national framework calling for a unified federal approach to AI regulation in the United States. The framework warned that a fragmented "patchwork of state laws" could impede innovation and undermine national competitiveness in the rapidly evolving AI landscape. This unified approach aims to provide clarity and consistency for AI developers and deployers, balancing the need for safety and ethical deployment with the imperative to foster technological advancement. The challenge lies in crafting regulations that are adaptable enough to keep pace with AI’s rapid evolution, yet robust enough to mitigate its risks, particularly in areas like misinformation and market manipulation.
The integration of blockchain for AI content verification, while promising, is not without its challenges. Widespread implementation requires significant technological hurdles to be overcome, including scalability issues for processing vast amounts of content, user adoption, and interoperability between different blockchain networks and existing digital platforms. Furthermore, the arms race between those creating synthetic media and those developing verification tools is continuous. As AI models become more sophisticated, so too must the methods of detection and verification. However, the fundamental properties of blockchain – its immutability, transparency, and decentralization – offer a powerful foundation for building trust in the digital realm. The convergence of AI and blockchain, as championed by figures like Michael Selig, suggests a future where these technologies don’t just exist side-by-side but are intrinsically linked, with blockchain serving as the critical infrastructure that enables AI to operate responsibly and verifiably, ultimately safeguarding the integrity of information in our increasingly digital world.

