
On-chain AI isn't the ultimate solution - why integrating AI with blockchain platforms matters, an op-ed.

Open-source AI, devoid of confirmable safeguards, is susceptible to manipulation. However, blockchain can deliver the trust foundation it requires.


Recent advances in AI are taking the limelight, with DeepSeek's R1 overtaking ChatGPT as the most downloaded application in the US Apple App Store. Unlike proprietary models such as ChatGPT, DeepSeek takes an open-source approach, making its code available for anyone to scrutinize, modify, and use.

DeepSeek's rise has stirred enthusiasm in the AI community and pushed the sector toward a more open environment. Late last month, Anthropic released Claude 3.7 Sonnet, a hybrid reasoning model, further fueling the conversation around accessible AI.

Though these advancements mark real progress, they expose a concerning misunderstanding: open-source AI is not intrinsically more secure than closed alternatives.

The Allure and the Achilles' Heel

Open-source AI heavyweights, such as DeepSeek's R1 and Replit's latest coding agents, demonstrate the power of democratized technology. DeepSeek reports that it trained its system for just $5.6 million, a fraction of the cost of Meta's Llama models. Similarly, Replit's Agent, powered by Claude 3.5 Sonnet, lets individuals - even non-coders - build software from natural language prompts.

The repercussions are substantial: organizations of any size now have the means to harness these models and build tailored AI applications, including AI agents, cheaply, quickly, and with unprecedented ease. This shift could inaugurate a new AI economy, with access to models as its currency.

Yet open-source brilliance comes with heightened visibility, and thus increased scrutiny. Freely available models such as DeepSeek's democratize innovation while also opening the gates to cyber risks: criminals could manipulate these models to create malware or exploit vulnerabilities before fixes are released.

Insecurity isn't innate to open-source AI. On the contrary, it builds on a legacy of transparency that has fortified technology for decades. Engineers once relied on "security through obscurity," hiding system details behind proprietary barriers. That approach failed: vulnerabilities emerged anyway, often discovered first by malicious actors. Open source shifted the paradigm, exposing source code to public examination and collaboration and creating resilience through communal effort. Yet neither open nor closed AI models can be reliably verified without additional measures.

The ethical implications are no less daunting. Like proprietary models, open-source AI can mirror biases or generate toxic outputs rooted in its training data. Such misconduct isn't inherent to openness, but it underscores the necessity of accountability. Transparency alone neither eradicates these risks nor fully prevents misuse. Open source does invite collective oversight, a strength proprietary models usually lack, but it still needs mechanisms to guarantee integrity.

The Quest for Verifiable AI

For open-source AI to earn trust, it must be scrutinized. Without verification, both open and closed models can be manipulated or abused, driving misinformation and swaying the automated decisions that increasingly shape our lives. It's not enough for models to be accessible; they must also be auditable, tamper-proof, and accountable.

Leveraging distributed networks, blockchains can verify that AI models remain unaltered, that their training data stays transparent, and that their outputs can be authenticated against established baselines. Unlike centralized verification, which depends on trusting a single entity, blockchain's decentralized, cryptographic approach resists tampering by bad actors. It also inverts third-party control, spreading oversight across a network and creating incentives for broader participation, unlike today, where underpaid contributors fuel trillion-token datasets without consent or reward, then pay to use the results.

A blockchain-powered verification system brings layers of security and transparency to open-source AI. Storing models, or their cryptographic fingerprints, on a blockchain ensures modifications are tracked openly, letting developers and users confirm they're running the genuine version.
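As a minimal sketch of the fingerprinting idea (not any specific protocol's mechanism; the registry here is a plain dictionary standing in for an on-chain record, and the model ID and weights are illustrative):

```python
import hashlib

def fingerprint(model_bytes: bytes) -> str:
    """Compute a SHA-256 fingerprint of serialized model weights."""
    return hashlib.sha256(model_bytes).hexdigest()

# Hypothetical registry; in practice this write would be an on-chain transaction.
registry: dict[str, str] = {}

def publish(model_id: str, model_bytes: bytes) -> None:
    """Record the model's fingerprint under its identifier."""
    registry[model_id] = fingerprint(model_bytes)

def verify(model_id: str, model_bytes: bytes) -> bool:
    """Check a downloaded model against the published fingerprint."""
    return registry.get(model_id) == fingerprint(model_bytes)

weights = b"layer1:0.12,layer2:-0.07"  # stand-in for real serialized weights
publish("r1-distill-v1", weights)
assert verify("r1-distill-v1", weights)            # genuine copy passes
assert not verify("r1-distill-v1", weights + b"!")  # tampered copy fails
```

Because the hash is recomputed locally from the bytes a user actually downloaded, even a single-bit change to the weights is detectable, while the registry itself never needs to hold the (potentially large) model.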

Documenting the origins of training data on a blockchain helps verify that models draw from trustworthy sources, diminishing the risk of hidden biases or manipulated inputs. Additionally, cryptographic techniques can validate outputs without exposing the personal data users divulge (often unprotected today), balancing privacy with trust as AI systems grow more powerful.
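One common way to anchor a large training corpus on-chain is a Merkle root: hash every record, then fold the hashes pairwise into a single 32-byte commitment. A minimal sketch (the record contents are illustrative; a production system would also need inclusion proofs):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records: list[bytes]) -> bytes:
    """Fold record hashes pairwise into a single provenance commitment."""
    level = [h(r) for r in records]
    while len(level) > 1:
        if len(level) % 2:           # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

corpus = [b"doc-001", b"doc-002", b"doc-003"]
root = merkle_root(corpus)  # this 32-byte root is what gets anchored on-chain
# Altering any record changes the root, exposing the tampering:
assert merkle_root([b"doc-001", b"doc-XXX", b"doc-003"]) != root
```

Only the root needs to live on-chain; anyone holding the corpus can recompute it, and a mismatch proves the dataset was modified after the commitment was made.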

Blockchain's transparent, tamper-resistant nature offers the accountability open-source AI desperately craves. While AI systems flourish on user data with little protection, blockchain can reward contributors and safeguard their inputs. By incorporating cryptographic evidence and decentralized governance, we can build an AI ecosystem that is open, secure, and less reliant on central powers.

AI's future hangs on trust... on-chain

Open-source AI is an integral component, but transparency is not the endgame. The future of AI will be built on trust, not only accessibility. And trust can't be open-sourced. It must be cultivated, verified, and fortified across the entire AI stack. Our industry needs to focus on the verification layer and the integration of secure AI. For now, bringing AI on-chain looks like our safest path toward a more trustworthy future.

David Pinger is the co-founder and CEO of Warden Protocol, a company dedicated to bringing safe AI to web3. Before founding Warden, he led R&D at Qredo Labs, driving web3 innovations such as stateless chains, WebAssembly, and zero-knowledge proofs. Before Qredo, he held roles in product, data analytics, and operations at Uber and Binance. David began his career as a financial analyst in venture capital and private equity, funding high-growth internet startups. He holds an MBA from Panthéon-Sorbonne University.

