Operationalising the SAFE-D principles for Open Source AI

Part of the Deep Dive: AI Webinar Series

The SAFE-D principles (Leslie, 2019) were developed at the Alan Turing Institute, the UK’s national institute for data science and artificial intelligence. They have been operationalised within the Turing’s Research Ethics (TREx) institutional review process. In this panel we will argue that the definition of Open Source AI should include reflection on each of these principles, and we will present case studies of how AI projects are embedding these normative values in the delivery of their work.

The SAFE-D approach is anchored in the following five normative goals:

* **Safety and Sustainability** mean ensuring the responsible development, deployment, and use of a data-intensive system. From a technical perspective, this requires the system to be secure, robust, and reliable. From a social sustainability perspective, it requires the data practices behind the system’s production and use to be informed by ongoing consideration of the risk of exposing affected rights-holders to harms, continuous reflection on project context and impacts, ongoing stakeholder engagement and involvement, and change monitoring of the system from its deployment through to its retirement or deprovisioning.
* Our recommendation: Open source AI must be safe and sustainable, and open ways of working ensure that “many eyes make all bugs shallow”. Having a broad and engaged community involved throughout the AI workflow keeps infrastructure more secure and keeps the purpose of the work aligned with the needs of the impacted stakeholders.
* **Accountability** can include specific forms of process transparency (e.g., as enacted through process logs or external auditing) that may be necessary for mechanisms of redress, or broader processes of responsible governance that seek to establish clear roles of responsibility where transparency may be inappropriate (e.g., confidential projects).
* Our recommendation: Open source AI should have clear accountability documentation and processes for raising concerns. These are already common practice in open source communities, including through codes of conduct and requests for comment for extensions or breaking changes.
* **Fairness and Non-Discrimination** are inseparably connected with sociolegal conceptions of equity and justice, which may emphasize a variety of features such as equitable outcomes or procedural fairness through bias mitigation, but also social and economic equality, diversity, and inclusiveness.
* Our recommendation: Open source AI should clearly communicate how the AI model and workflow address equity and justice. We hope that the open source AI community will embed existing tools for bias reporting into an interoperable open source AI ecosystem (see the sketch after this list).
* **Explainability and Transparency** are key conditions for autonomous and informed decision-making in situations where data processing interacts with or influences human judgement and decision-making. Explainability goes beyond the ability to merely interpret the outcomes of a data-intensive system; it also depends on the ability to provide an accessible and relevant information base about the processes behind the outcome.
* Our recommendation: Open source AI should build on the strong history of transparency that is the foundation of the definition of open source: access to the source code, data, and documentation. We are confident that current open source ways of working will enhance transparency and explainability across the AI ecosystem.
* **Data quality, integrity, protection and privacy** must all be established so that we can be confident that data-intensive systems and models have been developed on secure grounds.
* Our recommendation: Even where data cannot be made openly available, there should be accountability and transparency around how the data is gathered and used.
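
To make the bias-reporting recommendation above more concrete, here is a minimal sketch (not taken from the webinar or the SAFE-D documentation) of the kind of fairness check an open source AI project might embed in its workflow. It computes a simple demographic parity difference over model predictions; the function names, example data, and the 0.2 threshold mentioned in the comments are illustrative assumptions, and real projects would likely rely on established fairness libraries and a broader set of metrics.

```python
# Minimal, hypothetical bias-reporting check that an open source AI project
# could run as part of its workflow (e.g. in continuous integration).
# The group labels, example data, and threshold below are illustrative
# assumptions, not part of the SAFE-D principles themselves.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rates between any two groups (0 = parity)."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap = demographic_parity_difference(preds, grps)
    print(f"Demographic parity difference: {gap:.2f}")
    # A project might fail the build, or require a documented justification,
    # if the gap exceeds an agreed threshold (e.g. 0.2).
```

Publishing the output of such a check alongside the model and data documentation is one way a project could make its equity and justice considerations visible to its community.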

The agenda for the session will be:

1. Prof David Leslie will give an overview of the SAFE-D principles.
2. Victoria Kwan will present how the SAFE-D principles have been operationalised for institutional review processes.
3. Dr Kirstie Whitaker will propose how the institutional process can be adapted for decentralised adoption through a shared definition of Open Source AI.

The final 20 minutes will be a panel responding to questions and comments from the audience.

Webinar summary

In this webinar, hosted by the Open Source Initiative as part of the “Deep Dive: Defining Open Source AI” series, Kirstie Whitaker, Professor David Leslie and Victoria Kwan of The Alan Turing Institute discuss the operationalisation of the SAFE-D principles in AI research and development within the context of open source practices. Leslie introduces the principles of safety, accountability, fairness, explainability, and data stewardship (SAFE-D) and explains their significance in ensuring responsible and trustworthy AI development. Kwan then demonstrates how these principles are integrated into the Turing’s Research Ethics (TREx) institutional review process, with an emphasis on stakeholder engagement, accountability, and fairness. Whitaker concludes by highlighting how open source practices align with the SAFE-D principles, underscoring the importance of transparency, accountability, diversity, and data stewardship within the open source AI community, and advocating for a more inclusive, accountable, and interconnected ecosystem.