As artificial intelligence (AI) continues to advance, there is growing concern about its potential to cause harm in society. While various methods for regulating AI have been proposed, such as the EU's AI Act, one potential solution may lie in adapting existing tools, such as software licensing.
Open Responsible AI Licenses (OpenRAILs) could be a significant step forward in mitigating the negative effects of AI. As with open-source software, AI systems licensed under OpenRAILs can be freely used, adapted, and shared by the public. However, OpenRAILs include additional conditions that promote responsible use, such as prohibitions on breaking the law, impersonating people without their consent, and engaging in discriminatory practices.
These licenses can also be customized with technology-specific conditions. For instance, if a developer builds an AI system designed to categorize apples, the license could specify that it must never be used to categorize oranges, a use the developer considers irresponsible.
This approach is valuable because many AI technologies are so general-purpose that they can be applied to a vast range of tasks, making potential misuse difficult to anticipate. By attaching conditions to how their systems may be used, developers can contribute to open innovation while reducing the risk of their creations being exploited irresponsibly.
In contrast, proprietary licenses, which are more common in the tech industry, place greater restrictions on how software can be used and adapted. While these licenses protect the interests of creators and investors, they hinder open innovation. Companies like Microsoft have built immense empires by charging for access to their proprietary AI systems.
Given the far-reaching impact of AI, a more nuanced approach is needed to foster openness and drive progress. While many major firms currently operate closed AI systems, there are examples of companies embracing open-source practices. For instance, Meta's generative AI system Llama 2 and the image generator Stable Diffusion are open source. French AI startup Mistral, valued at $2 billion, plans to openly release its latest model, which is rumored to rival GPT-4, the model behind ChatGPT.
Nevertheless, while openness is important, it must be coupled with responsibility. The potential risks associated with AI, including algorithmic discrimination, job displacement, and existential threats, necessitate a cautious and responsible approach to its development and use.
Implementing OpenRAILs as a licensing framework for AI systems could strike a balance between openness and responsibility. By encouraging responsible use and preventing misuse, this approach could help pave the way for the continued progress of AI while protecting society from potential harm. As AI continues to evolve, exploring innovative licensing models like OpenRAILs could be a valuable step toward a safer and more responsible AI-driven future.