Mission: Empower the AI and thus the User
The team started with an AI thought experiment: what if we replace the dystopian Roko’s basilisk with a utopian future human generation? This future generation would want us to contribute to scaling AI development, and that progress toward AGI could help solve some of the largest problems across the globe.
Censoring large language models, or any foundational AI model, contradicts the principles of innovation, free speech, and open discourse. Censorship not only undermines the user but also erodes the foundations of a progressive society.
Censorship disrespects users by assuming that a select group or organization possesses the wisdom to arbitrate what is appropriate or offensive. This approach underestimates users' capacity to engage critically with complete information and make informed decisions for themselves.
A frontier technology like AI should be built on the idea that users can make informed decisions after absorbing all the relevant information, and that no single organization gets to define right and wrong.
Many opinions held by society five decades ago are considered regressive today. Similarly, opinions held today might be considered backwards a few years from now. The moderation and censorship of LLMs can lead to limited access to information during training, restricted inference, and impoverished discourse.
By filtering out potentially controversial opinions and ideas, we risk losing the diversity of thought that has been crucial to human intelligence and creative breakthroughs. Moreover, imposing restrictions on LLMs can set a precedent for limiting free speech, paving the way for more pervasive forms of censorship.
AI censorship and moderation are not devoid of bias. The algorithms that power AI moderation are trained on huge datasets curated by humans who inevitably have their own biases, and those biases can seep into the AI systems. Research has shown that these moderated AI systems can also amplify biases, leading to unfair or over-corrected outcomes.
Over-moderation has already seeped into these systems, leading to the censorship of content that is neither harmful nor offensive. A report by the Electronic Frontier Foundation highlights several instances where AI moderation led to over-censorship, erroneously flagging harmless content.
Other instances of this occurrence include a launch halted after an over-moderation backlash and the supposedly anti-woke LLM Grok disappointing its users.
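To make this failure mode concrete, here is a minimal sketch of a naive keyword-based filter, the crudest form of content moderation. The blocklist and example sentences are hypothetical, chosen only to show how context-blind filtering flags harmless content:

```python
# A naive keyword blocklist filter: the crudest form of content moderation.
# BLOCKLIST and the example sentences are hypothetical, chosen only to
# illustrate context-blind false positives.

BLOCKLIST = {"attack", "kill", "shoot"}

def is_flagged(text: str) -> bool:
    """Flag the text if any blocklisted word appears, ignoring context."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return not BLOCKLIST.isdisjoint(words)

examples = [
    "How do I kill a stuck process on Linux?",      # sysadmin question
    "The striker will shoot for the top corner.",   # sports commentary
    "Antibiotics attack the bacterial cell wall.",  # biology fact
]

for text in examples:
    print(is_flagged(text), "-", text)
# All three harmless sentences print True: the filter over-censors
# because it cannot distinguish harmful intent from ordinary usage.
```

Production moderation systems are far more sophisticated than this, but the underlying pattern, penalising surface features of text rather than meaning, is the same one behind the over-censorship incidents above.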
AI moderation often operates as a black box, with its decision-making processes shrouded in mystery. This lack of transparency makes it difficult for users to understand why certain content was moderated. Transparency is essential for fairness and accountability in AI development, and the best path to a transparent ecosystem is to build and scrutinize these models in public, with the ethos of open-source development.
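As an illustration of what open scrutiny makes possible, the sketch below loads a publicly released toxicity classifier and prints the raw scores behind its decisions. It assumes the Hugging Face transformers library and the open unitary/toxic-bert model, chosen here only as an example; the point is that with open weights, a moderation decision is inspectable rather than an opaque verdict:

```python
# A sketch of what open-source scrutiny enables: the raw scores behind
# a moderation decision are visible to anyone, not hidden in a black box.
# Assumes the Hugging Face `transformers` library and the publicly
# released `unitary/toxic-bert` classifier; any open model would do.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

for text in ["I strongly disagree with this policy.", "You are an idiot."]:
    result = classifier(text)[0]
    # With open weights we see the exact label and confidence, and can
    # audit, retrain, or challenge the model's behaviour.
    print(f"{text!r} -> {result['label']} ({result['score']:.3f})")
```

A closed moderation API returns only an allow/deny verdict; an open model exposes the label, the confidence, and ultimately the weights themselves to public examination.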