Introduction: The Openness Challenge
A growing trend in the development of Generative AI models is “openness,” where developers make their technologies accessible to the public. While this transparency promotes knowledge sharing and competition, it also introduces significant risks: malicious actors can misuse openly available models to create harmful or illegal content. Adding to this complexity is the phenomenon of “open washing,” where companies present their models as open or transparent while withholding meaningful access. Policymakers are tasked with crafting balanced AI regulation that preserves the benefits of openness while minimizing its risks, without resorting to overly broad legal exemptions.

The Risks of Open Generative AI
Generative AI magnifies a range of online safety risks because it can produce highly realistic outputs at scale. These risks include AI-generated child sexual abuse material (AI-CSAM), non-consensual intimate deepfakes (NCIDs), online scams, and disinformation campaigns. Open model access can amplify these risks by potentially enabling:

  1. Loss of Developer Oversight: Developers cannot control how openly released models are used or misused.
  2. Removal of Safeguards: Models can be altered to bypass content filters or generate prohibited content.
  3. Fine-Tuning for Harm: Malicious actors can refine models to enhance harmful capabilities, such as creating customized disinformation or illegal content.

These factors show that increasing openness without protective measures could result in significant societal harm.

Degrees of Openness: A Spectrum, Not a Binary
Openness in Generative AI exists along a spectrum, encompassing varying degrees of access such as Query API, Modular API, and downloadable access. Fully open models provide unrestricted access to all components (architectures, weights, and training data), enabling full transparency. However, that same non-gated access grants malicious actors identical privileges, increasing their ability to misuse these technologies. A structured balance of partial openness, such as gated access or researcher-specific provisions, could mitigate risks while still facilitating innovation and safety research; the sketch below illustrates the spectrum.
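As a loose illustration only (this framing is ours, not the report's), the access tiers above can be thought of as an ordered scale, where each step grants strictly more capability to users, benign or malicious alike, and where developer oversight ends once weights become downloadable. A minimal Python sketch, with tier names and the oversight rule assumed for illustration:

    from enum import IntEnum

    class AccessLevel(IntEnum):
        """Illustrative ordering of the openness spectrum, most to least restricted."""
        QUERY_API = 1        # submit prompts and receive outputs only
        MODULAR_API = 2      # interact with individual components, e.g. fine-tuning endpoints
        GATED_DOWNLOAD = 3   # download weights after vetting and use agreements
        FULL_DOWNLOAD = 4    # unrestricted access to architecture, weights, and training data

    def developer_retains_oversight(level: AccessLevel) -> bool:
        """Hosted (API-based) access lets the developer monitor and revoke use;
        once weights leave the developer's servers, that oversight is lost."""
        return level <= AccessLevel.MODULAR_API

    print(developer_retains_oversight(AccessLevel.FULL_DOWNLOAD))  # False

Higher tiers subsume the capabilities of lower ones, which is why the loss-of-oversight and safeguard-removal risks listed earlier arise chiefly from the downloadable tiers onward.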

The Role of External Researchers
External researchers play a key role in mitigating the risks associated with Generative AI. By enabling external experts to test and evaluate AI models, developers can uncover vulnerabilities, identify flaws in safeguards, and assess societal impacts. External scrutiny adds transparency to development processes, widens perspectives on potential harms, and drives iterative safety improvements, all critical steps toward building safer and more accountable technologies.

Benefits of Controlled Openness
A structured approach to granting access, such as “structured access” provisions for independent researchers, allows policymakers to reap the benefits of openness without jeopardizing public safety. Controlled openness can:

  1. Enhance Safety: Researchers can analyze, test, and propose improvements to model safeguards.
  2. Democratize AI Development: Allowing diverse actors access promotes inclusivity and balanced governance.
  3. Advance Innovation: Open science fosters long-term advancements in AI safety norms and research methodologies.

Such systems help ensure that the advantages of collaboration and scrutiny outweigh the potential risks; the sketch below shows what a gated-access workflow might look like in practice.
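As a purely hypothetical sketch (the report does not prescribe an implementation, and every name below is invented for illustration), a structured-access gate might vet a researcher and issue a revocable, query-only credential, denying access by default:

    import secrets
    from dataclasses import dataclass

    @dataclass
    class Researcher:
        name: str
        affiliation: str
        vetted: bool           # passed some identity/institutional review (assumed process)
        agreed_to_terms: bool  # accepted acceptable-use and responsible-disclosure terms

    def grant_structured_access(researcher: Researcher) -> str | None:
        """Hypothetical gate: vetted researchers receive a scoped, revocable
        query-only token; everyone else is denied by default."""
        if researcher.vetted and researcher.agreed_to_terms:
            # A real system would store this token so it can be scoped, audited, and revoked.
            return f"query-only:{secrets.token_hex(16)}"
        return None

    alice = Researcher("Alice", "Example University", vetted=True, agreed_to_terms=True)
    print(grant_structured_access(alice))  # prints a query-only token

The design point is the default: access is the exception granted after vetting, not the rule, which preserves the safety benefits of external scrutiny without open-ended distribution.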

Challenges and Barriers
Despite these benefits, current openness initiatives face challenges. Developers often limit researcher access to narrowly defined areas, reducing the scope of scrutiny. Inadequate provisions, such as weak safe-harbor protections for independent researchers, further hamper their ability to identify risks effectively. Policymakers must address these barriers to ensure that the safety benefits of openness are fully realized.

Policy Recommendations: Striking the Right Balance
Policymakers in the EU, UK, and US have the opportunity to address openness in new or upcoming AI legislation. Proposed frameworks include pre-release evaluations, stricter vetting processes for external researchers, and controlled access to highly capable models. Rather than exempting open-source models from these laws wholesale, a nuanced policy approach focused on structured openness can ensure Generative AI is both innovative and safe.

Resource
Read more in this report.
