Episode 3: When hackers take on AI: Sci-fi – or the future?
In 2020, during the height of the pandemic, Connor Leahy, co-founder of EleutherAI and CEO at Conjecture, and other bored, self-described hackers and AI enthusiasts were chatting on Discord about how cool GPT-3 was and how fun it would be to build a machine learning model like that, but make it “Open Source”.
GPT-3 (Generative Pre-trained Transformer, third generation) is a massive machine learning model (175 billion parameters) developed by OpenAI using supercomputers and vast amounts of data scraped from the internet. GPT-3, exclusively licensed to Microsoft, can be used to generate almost any type of text as if a human had written it.
So Leahy was joking, really, when he said, “Hey guys, let’s give OpenAI a run for their money like the good ol’ days.” But the joke quickly took on a life of its own and became a bona fide, volunteer-led project called EleutherAI, dedicated to creating an alternative to GPT-3. Earlier this year, the group open sourced GPT-NeoX, a 20-billion-parameter deep learning model.
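Because the weights themselves are published rather than hidden behind an API, anyone with sufficient hardware can load the model and generate text locally. Here is a minimal sketch, assuming the Hugging Face transformers library and the publicly released EleutherAI/gpt-neox-20b checkpoint (the prompt and sampling settings are illustrative):

```python
# Minimal sketch: loading EleutherAI's open GPT-NeoX-20B weights locally.
# Assumes the Hugging Face `transformers` library is installed; note the
# checkpoint is roughly 40 GB, so substantial GPU memory (or CPU offloading)
# is needed in practice.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")

# Illustrative prompt and sampling parameters.
prompt = "Open source AI matters because"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```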
In this episode, Stefano Maffulli, executive director of the Open Source Initiative, and Connor Leahy chat about why EleutherAI’s work is so important. They discuss questions such as:
- Why was it important to create an Open Source alternative to GPT-3?
- How did a decentralized collective of volunteer researchers, engineers, and developers gather the resources needed to build such a massive model?
- Where did you get the expertise? Aren’t there only a few dozen people in the world with that expertise, all of them employed by large corporations?
- How and where did you get the massive amounts of text data needed to train the model?
- While other organizations keep their models in black boxes behind APIs, EleutherAI was determined to release its model to the world. Why?
- Some have criticized this as risky. Why is that, and what gives you confidence that GPT-NeoX, for instance, is safe to make open to the world?
- What challenges do we face if we want to ensure that the most powerful technology of our time is used to improve our lives rather than for nefarious purposes?
- We often think of AI models as black boxes: we don’t know how they work inside, so we can’t see or fix problems that may exist in the neural network, and that scares us. How does open sourcing language models, as EleutherAI has done, combat this problem?
- EleutherAI’s work has already furthered research into the safe use of AI. What is the most vital resource you need to continue this mission?
As Craig Smith wrote in his article about EleutherAI, “One of the most disconcerting things about artificial intelligence is that it transcends the power of nation states to control, contain, or regulate it.” This discussion underscores that we’re right to be concerned about the risks of AI and introduces an amazing organization of volunteers carrying the Open Source banner in their quest to promote AI safety.
At its core, this is a story about hero hackers out to save the world. Who can resist a story like that?
Subscribe: Apple Podcasts | Google Podcasts | PocketCasts | RSS | Spotify
In its Deep Dive: AI event, OSI is diving deep into the topics shaping the future of open source business, ethics, and practice. Our goal is to help OSI stakeholders frame a conversation to discover what’s acceptable for AI systems to be “Open Source.” A key component of Deep Dive: AI is our podcast series, in which we interview academics, legal experts, policy makers, developers of commercial applications, and non-profits.