Further Reading

Acknowledgements

Providing credit where it is due: we are grateful for the extensive research and breakthroughs by the researchers and centralized AI behemoths that made foundationally powerful AI models accessible. It is on top of the work of these giants that we have been able to bring our vision of decentralized LLMs to fruition. To make informed decisions about the design of Monai, we have also taken a deep look at the major papers on AI moderation, bias management, and safety, which have ultimately strengthened our belief in the development of moderation-free AI models.

The following is the list of research papers that helped us build our initial model. We can confidently say that our model provides moderation-free inference to users, irrespective of the subject matter.

"Attention is All You Need"

"Language Models Today: Foundation Models"

"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"

"GPT-3: Language Models are Few-Shot Learners"

"Mistral 7B"

"XLNet: Generalized Autoregressive Pretraining for Language Understanding"

"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"

"Mitigating Unwanted Biases in Word Embeddings"

"Transparency Helps Reveal When Language Models Learn Meaning”

"Concrete Problems in AI Safety"

"AI Alignment"

"AI Safety via Debate"

"Sequence to Sequence Learning with Neural Networks"

"MapReduce: Simplified Data Processing on Large Clusters"

  • Authors: Jeffrey Dean, Sanjay Ghemawat

  • DOI: https://doi.org/10.1145/1327452.1327492
