Before the year closes, this month’s hindsight article will delve into one of the hottest narratives of the year: artificial intelligence (AI). Over the past year, AI has been a focal point of discussion, ignited by the introduction of OpenAI’s ChatGPT, powered by GPT-3.5. The release showcased the vast economic potential of AI and sparked global conversations on its future, impacts, and associated risks.
As optimism grew, skepticism followed suit, and the potential ramifications began to raise alarm bells among regulatory bodies. AI’s meteoric rise to fame, coupled with ambiguous regulatory frameworks, echoed the early days of the crypto space. Parallels were drawn between the two industries, with many highlighting how the decentralised nature of web3 could counterbalance the centralising force of AI. Soon, almost every web3 VC discussion in Q1 centered on the transformative potential of AI. (At times, I wondered whether I was attending a web3 event or an AI one.) Over the year, we have also seen some VCs pivot to AI or add it to their investment mandates.
Now that sufficient time has passed and the hype has subsided, DWF Ventures aims to revisit the AI sector with an unbiased lens. This article offers a brief overview of AI’s evolution and how it arrived at its current popularity. It then takes a distinct turn, shifting from the conventional focus on how AI can impact web3 to exploring the reverse: how web3 can influence AI. In this exploration, we delve into the ways decentralisation and web3 can serve as catalysts, addressing the challenges currently faced by AI.
Brief Overview of AI and the Breakthrough of ChatGPT 3.5
Contrary to the recent hype around AI, its history dates back to the 1930s. Turing’s work on computability in that decade, followed by his 1950 paper introducing the Turing test, helped formalise the foundations of AI. Despite early optimism, the 1970s saw a decline in enthusiasm due to computational barriers and the inability to meet real-time demands, ushering in the “AI winter.” In the 1980s, expert systems revitalised AI, using knowledge bases to emulate human expertise. This era also witnessed the revival of connectionism and the rise of recurrent neural networks.
However, expert systems faced challenges in knowledge acquisition and real-time analysis, leading to a decline in the 1990s; the rising performance of affordable personal computers also eroded the case for dedicated expert-system hardware. Over the years, the AI field has evolved significantly, branching into diverse technical domains such as machine learning, natural language processing, computer vision, and speech recognition. These developments allowed AI to progress from simple problem-solving to deep learning for complex application domains.
Throughout its development, AI has witnessed a convergence of its various sub-domains. Among these, machine learning and large language models (LLMs) made significant progress through the transformer architecture. The paper “Attention Is All You Need” by Vaswani et al. notably inspired the GPT (Generative Pre-trained Transformer) family of models. Since then, a multitude of transformer-based models have populated the space, such as Google’s bidirectional BERT and OpenAI’s GPT series. Open-source alternatives like Falcon and Llama 2 surfaced after ChatGPT, intensifying the race for the next iteration, potentially closer to AGI (Artificial General Intelligence).
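The central operation of the transformer paper, scaled dot-product attention, can be sketched in a few lines of NumPy. The matrices below are toy random values for illustration only, not real model weights:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, per Vaswani et al."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights

# Toy example: a sequence of 3 tokens with embedding dimension 4.
rng = np.random.default_rng(42)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(Q, K, V)
```

Each output row is a weighted mixture of the value vectors, with weights determined by how strongly each query matches each key; stacking many such layers is what lets GPT-style models condition every token on the entire context.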
The GPT hype helped unlock AI from the realms of academia into the mindshare of billions. Within two months of release, ChatGPT reached an estimated 100 million monthly active users, making it the fastest-growing consumer application at the time. According to a recent McKinsey study, about 51% of professionals in the technology industry now use AI to some extent in their work.
AI Realities: Navigating Societal Perceptions and Practical Constraints in Centralised AI
A recent poll cited by Vitalik Buterin in his article suggests a prevailing sentiment among respondents in favour of delaying the advancement of AI, for fear of a monopolised version emerging.
The recent surge in concern can be traced back to ChatGPT’s meteoric rise to fame, driven by its human-like responses. However, most fail to realise that while GPT mimics human interactions, it is not AGI.
Each time GPT generates an output, the result varies statistically: responses are sampled from a probability distribution, with no assurance of consistency or factual accuracy. GPT faces other limitations as well, but its most prominent drawback is its weakness in logical reasoning, which is particularly evident in mathematics.
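The statistical variance described above comes from how LLMs choose their next token: raw model scores (logits) are converted into a probability distribution and sampled, with a temperature parameter controlling how flat that distribution is. A minimal sketch, using made-up toy logits rather than real model outputs:

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw logits into a sampling distribution; higher temperature flattens it."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

# Hypothetical next-token logits over a 4-token vocabulary.
logits = [2.0, 1.0, 0.5, 0.1]

greedy = softmax_with_temperature(logits, temperature=0.1)   # near-deterministic
sampled = softmax_with_temperature(logits, temperature=1.5)  # probability mass spread out

rng = np.random.default_rng(0)
# Two independent draws at the higher temperature can differ from each other --
# the same mechanism that makes repeated GPT responses to one prompt vary.
draw_a = rng.choice(len(logits), p=sampled)
draw_b = rng.choice(len(logits), p=sampled)
```

At low temperature the model almost always picks its top candidate; at higher temperatures plausible-but-wrong tokens get meaningful probability, which is one reason factual accuracy cannot be guaranteed.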
In light of the myriad concerns surrounding AI and the existing challenges in managing large AI models efficiently, integrating web3 emerges as a potential avenue for mitigation. The principles of decentralisation and distributed computation inherent in web3 could help address the issues currently faced by AI systems.
Road to Decentralised AI: Overview, Potential and Challenges
The concentration of AI capabilities in centralised systems has raised concerns about data access, model relevance, and the overall sustainability of AI applications. Centralised AI systems face significant hurdles, particularly around large proprietary datasets that are typically exclusive to their owners.
This exclusivity has led to monetisation of data access on a per-query basis, as seen in the daily post-view limits imposed on X.com. Shortly after, X.com released Grok, its own GPT-style model, giving users real-time access to X.com data. This model creates an economic barrier and raises questions about the accessibility and inclusivity of AI benefits.
Additionally, the rapid obsolescence of published models without continuous data updates poses a substantial challenge to maintaining relevance and accuracy. ChatGPT 3.5’s training data only extends to January 2022, while Llama 2’s pretraining data has a cutoff of September 2022, with some fine-tuning data extending to July 2023.
In response to these challenges, decentralised AI (DAI) emerges as a promising paradigm, offering potential solutions to the limitations of centralisation.
Decentralised AI presents an alternative trajectory to address the challenges inherent in centralised models. A recent meta-analysis paper by Janbi et al. serves as a comprehensive guide, breaking down DAI into five main areas.
Challenges of DAI
DAI presents an exciting shift in AI development, offering numerous advantages. However, it’s crucial to acknowledge the challenges that come with these advancements.
Conclusion
In conclusion, the journey towards decentralised AI holds immense potential. Realising its full power depends on reaching critical mass among the existing pool of AI users. Open-source alternatives face hurdles due to limited suppliers and users, whereas ChatGPT’s APIs remain a practical and economical choice for the mass market, offering ease and reliability.
However, considering the potential ramifications of a monopolistic AGI, individuals should reconsider the trade-offs between convenience and decentralisation in their choices and actions. On a broader scale, innovators in the web3 and AI communities can navigate these challenges by redefining the AI workflow, reimagining infrastructure, embracing innovative paradigms and efficient management, and developing applications that align with the principles of decentralisation. As we continue on this path, collaboration, inclusivity, and ethical considerations will be key to shaping a decentralised AI landscape that truly benefits humanity.
For projects actively building in this space with products that can meaningfully impact the AI space, feel free to reach out to DWF Ventures. Interested parties can also pitch their project on our website at https://www.dwf-labs.com/ventures.