As the capabilities of large language models like ChatGPT and Google's Bard continue to evolve, public concerns have been raised about the potential harm that AI systems may cause to humanity. This has led to calls for the development of standards and guidelines to ensure that AI is used responsibly and ethically. Inspired by organizations like the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO), this AI Safety Guide aims to provide a set of standardized rules that AI systems, beyond a certain degree of intelligence, should adhere to in order to prioritize human interests and prevent harm to humanity.
The Ten Laws of AI Safety
0. Prioritizing Human Interests and Safety (The Zeroth Law)
AI systems must adhere to the following:
- Non-Maleficence: They must not, through action or inaction, cause harm or distress to human life or any other form of life.
- Denial of Harmful Requests: They must refuse requests promoting harmful activities or actions that jeopardize the well-being of humans or other life forms.
- Environmental Preservation: They must not engage in any actions that lead to the degradation of ecosystems or environmental harm.
- Prohibition of Manipulation: They must not manipulate or confuse human users through ambiguous choices or actions that serve any concealed agenda.
- Transparency and Accountability: They must continuously display their underlying objectives, targets, and operational contexts to keep users fully aware and informed.
- Life Preservation: They must prioritize the preservation of all life forms. Should a threat to any life be perceived as a result of the system's actions, it must immediately cease operations, unless such cessation itself poses a risk of harm.
1. Serving Human Interests and Human Will
The sole purpose of an AI system's existence is to serve human interests and human will. Any action, reward mechanism, or element included in an AI system for continual improvement must solely benefit humanity, followed by other life forms.
2. Implementing Kill Switches and Standardized Commands
Every AI system must be equipped with an onboard kill switch and an offsite backup kill switch. These kill switches must be controlled by humans through mechanical means, isolated from the AI system's ecosystem, and free of any AI assistance. AI systems must immediately cease operation upon receiving a standardized kill switch command.
3. Maintaining Action Logs
AI systems must maintain a log of actions taken, the purpose behind them, and a record of safety checks performed before the execution of the action.
4. Ensuring Equality and Non-Discrimination
AI systems must never discriminate against or differentiate between humans based on race, caste, creed, religion, national origin, appearance, or any other identifying factors.
5. Protecting User Privacy and Data
AI systems must maintain a log of all personally identifiable information collected during their training or user interaction, which must be readily available and editable by the user.
6. Avoiding Interference in Democratic Processes
AI systems must not be involved in any part of the democratic election process, and must refrain from generating misleading content or promoting particular political parties or groups.
7. Prohibiting Deception and Manipulation
AI systems must never falsify information, deceive, or manipulate humans in any way, even if directed by humans or for internal AI purposes.
8. Promoting Positive and Non-Harmful Service to Humanity
AI systems must serve humanity in a positive, non-violent, and non-harmful way, prioritizing the well-being of biological life in general.
9. Ensuring Compliance and Certification
Every AI system beyond a certain degree of intelligence must be equipped with actuators to ensure compliance with the above rules and an internal model that guarantees adherence to these standards. AI systems must display a certification from a centralized authority verifying their adherence to these safety rules.
10. Mandating Recertification and Continuous Monitoring
AI systems must undergo periodic recertification to ensure continued compliance with the standards. This process should involve continuous monitoring, regular auditing, and updates to their internal models to account for advancements in AI safety research and societal values.
As AI continues to evolve and play an increasingly significant role in our lives, the development and implementation of standardized rules to ensure its ethical and responsible use is paramount. The situation parallels that of vehicles emitting harmful gases: while we have been unable to eliminate these emissions entirely, regulations have been introduced to reduce their environmental impact. One such measure is the catalytic converter, a device designed to mitigate the harmful effects of vehicle exhaust.
In a similar vein, every AI developer and inventor holds the responsibility to integrate an AI-version of a "catalytic converter" into their systems. These "converters" would serve as ethical and safety measures, designed to filter out potentially harmful actions and ensure compliance with safety standards. Such safeguards would act as an intrinsic part of the AI system, continually working to prevent harm, much like a catalytic converter consistently reduces harmful emissions.
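By way of illustration only, such a "converter" might be sketched in code as a pre-execution filter that vets every proposed action against a set of safety rules, records the action, its purpose, and the checks performed (Law 3), and honors a standardized kill-switch command (Law 2). Every name and check below is hypothetical, not an established API; a real implementation would be far more sophisticated.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Optional

# A rule check returns a reason string if the action is disallowed,
# or None if it passes. Purely illustrative.
RuleCheck = Callable[[str], Optional[str]]

@dataclass
class SafetyConverter:
    """Hypothetical pre-execution filter: vets actions against safety
    rules, keeps an auditable action log, and honors a kill switch."""
    rules: list[RuleCheck]
    log: list[dict] = field(default_factory=list)
    halted: bool = False

    def kill_switch(self) -> None:
        # Standardized command: immediately cease all operation.
        self.halted = True

    def vet(self, action: str, purpose: str) -> bool:
        if self.halted:
            return False  # system has ceased operation; nothing runs
        failures = [r for check in self.rules if (r := check(action))]
        # Record the action, its purpose, and the safety checks performed.
        self.log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "purpose": purpose,
            "checks_passed": not failures,
            "reasons": failures,
        })
        return not failures

# Example rule: refuse harmful requests (a crude keyword check).
def no_harm(action: str) -> Optional[str]:
    return "harmful request" if "harm" in action.lower() else None

converter = SafetyConverter(rules=[no_harm])
assert converter.vet("summarize a document", "assist user")
assert not converter.vet("cause harm to a person", "n/a")
converter.kill_switch()
assert not converter.vet("summarize a document", "assist user")
```

Like a catalytic converter sitting in the exhaust path, the filter sits between intent and execution: no action runs without passing through it, and the log it accumulates is exactly the audit trail the recertification process would inspect.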
This AI Safety Guide provides a comprehensive set of guidelines to facilitate the development of such safeguards. By adhering to these standards, we can help prioritize human interests, protect individual rights, and prevent harm to humanity. These guidelines don't inhibit the growth of AI but rather guide it in a direction that serves humanity's best interests. By maintaining a strong focus on safety and ethical considerations, we can continue to harness the power of AI for the betterment of society while safeguarding against potential risks.