Things I learned at Brussels to the Bay: AI governance in the world
Powerful Artificial Intelligence technology is raising more questions for Open Source and society every day. AI is contributing to important breakthroughs and we’re only beginning to see the potential for innovation it can bring to fields such as climate, health, agriculture and mobility (to name just a few). Recently I participated in Brussels to the Bay: AI governance in the world, a conference hosted by Berkeley Law’s Center for Law, Energy and the Environment (CLEE) and EU in the US, to learn more about the status of international policies on AI.
The conference gathered panels of experts to look further into the actions of the EU, the US, and California. Each of these entities is working on initiatives for responsible AI regulation:
- The AI Act, being advanced by the EU – the first attempt by a major jurisdiction globally to regulate the processes and applications of Artificial Intelligence through a binding, risk-based approach
- The (non-binding) Blueprint for an AI Bill of Rights, produced by the US Administration, and a voluntary AI Risk Management Framework, developed by the National Institute of Standards and Technology (NIST)
- Numerous AI bills being introduced by the California Legislature
Two panels were brought together to engage with important questions about these regulations:
- How will these initiatives help maximize the opportunities that AI, including Generative AI, offers and how will they minimize the risks?
- What are the similarities and where are the differences?
- How effective will these measures be?
- What can we learn from each other?
- What about the rest of the world?
Below is a summary of what I learned from each panel.
State of Play: AI governance tactics in the EU and in the US
Moderator: Brandie Nonnecke, founding director of the CITRIS Policy Lab, UC Berkeley
Panel Speakers:
- Lucilla Sioli, director for artificial intelligence and digital industry, European Commission, DG CONNECT
- Elham Tabassi, associate director for emerging technologies at the Information Technology Laboratory, NIST
- Rebecca Bauer-Kahan, California State Assembly member
- Gillian Hadfield, chair and director of the Schwartz Reisman Institute for Technology and Society, University of Toronto
Bauer-Kahan of California said her office is busy listening to the “industry and innovators.” Later I was able to ask whether academic researchers and non-profits are also engaged in this conversation. She clarified that the discussion began with academia and now includes the innovators creating the technology, as well as anyone interested in joining the conversation.
Open Source developers are not considered a separate category of actors to be heard. We have work to do on this front. She made it clear that her proposed legislation is strongly opposed by the whole of Silicon Valley: even if the likes of Zuckerberg, Pichai, Musk, and Altman say publicly that the industry wants AI to be regulated, when they go to Sacramento they sing another song.
All panelists agreed that balancing innovation and regulation is delicate. Hadfield highlighted that self-regulation doesn’t work and said she’s glad to see the EU taking the lead. But she also noted that the introduction of ChatGPT is forcing European regulators to revisit the draft of the EU AI Act.
The panel also broadly agreed on the need to drive investment into evaluating AI systems and establishing standards. Sioli, joining remotely, said that the EU Commission has yet to find tools to evaluate and control AI. Hadfield suggested setting up government sandboxes where developers can demonstrate their systems in a controlled environment: if they pass the tests, they get a “checkmark.” This sounded to me like the approach taken by the EU Commission for the Cyber Resilience Act… it sounds simple, but in practice we’re seeing it’s a messy proposition.
A Future Way: Opportunities and barriers to AI governance in a globalized world
Moderator: Brandie Nonnecke, founding director of the CITRIS Policy Lab, UC Berkeley
Panel Speakers:
- Kilian Gross, head of unit, Artificial Intelligence Policy Development and Coordination, European Commission, DG CONNECT
- Miriam Vogel, chair of the National AI Advisory Committee (NAIAC), president and CEO of EqualAI
- Pamela Samuelson, Richard M. Sherman distinguished professor of law, co-director of the Berkeley Center for Law & Technology, UC Berkeley
- Jen Gennai, founder & director of Responsible Innovation, Google
- Navrina Singh, founder and CEO of CredoAI
The first to speak was Kilian Gross, who explained the basic principles of the AI Act and how it takes a risk-based approach, defining prohibited and limited uses. For him the opportunity is for Europe to set the foundation for AI regulation worldwide, the same way the GDPR set the basic rules for all corporations around the world that want to do business in Europe.
Gross highlighted the lack of tools for governance on multiple occasions when talking about the AI Act, somewhat revealing how ambitious the legislation is. When asked, he used phrases like “we hope there will be tools for governments to oversee the quality of AI” or “there should be systems to certify…”, which don’t inspire confidence that the infrastructure necessary for the AI Act will be anywhere near ready within the two-year timeframe set by the European Commission.
For Samuelson, the biggest barrier for AI in a globalized world is arbitrage: small companies, startup innovators and grad students will simply choose not to operate in regulatory environments with too many hoops to jump through. Crafting regulation that aligns positive incentives for all actors in a fast-moving environment is very complicated, and there aren’t many examples to follow. For her, the six-month moratorium called for by the open letter is a wake-up call for the discussion of AI governance. She seemed very skeptical that regulation specific to AI is needed and suggested another approach: take existing laws and see whether they already cover AI use cases. For example, take the Americans with Disabilities Act, privacy and anti-discrimination laws, and copyright law, and check one by one which provisions don’t cover AI. This could be done in a collaborative workshop, with each group taking one law. I find this proposal brilliant.
The panelists talked about tools and frameworks for AI governance and monitoring, showing that building blocks already exist in this space. The two companies on stage, Google and CredoAI, talked about the AI Risk Management Framework developed by NIST and the self-regulation that they implement. The panelists drew a distinction between general purpose AI models like GPT-4 and more vertical systems for medical diagnoses or bank loan applications: the needs for detecting and measuring bias or testing effectiveness are different, and fewer tools are available for general purpose AI.
The moderator, Brandie Nonnecke, did a fantastic job keeping two brilliant panels moving at a great pace. I can’t wait to see her TV series Tech Hype!
If you’d like to watch these two panel discussions, you can register to be taken to the recording.