
Further Reading


Acknowledgements

Providing credit where it's due: we are grateful for the extensive research and breakthroughs made by the researchers and centralized AI behemoths that made foundationally powerful AI models accessible. It is on top of the work of these giants that we have been able to bring our vision of decentralized LLMs to fruition. To make informed decisions about the design of Monai, we have also taken a deep look at the major papers on AI moderation, bias management, and safety, which ultimately strengthened our belief in developing moderation-free AI models.

Below is the list of research papers that helped us build our initial model. We can confidently say that our model provides moderation-free inference to users, irrespective of the subject matter.

"Attention is All You Need"
  • Authors: Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
  • DOI: https://doi.org/10.48550/arXiv.1706.03762

"Language Models Today: Foundation Models"
  • Authors: Aakanksha Chowdhery, Sharan Narang, Jacob Devlin et al.
  • DOI: https://doi.org/10.48550/arXiv.2302.07293

"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
  • Authors: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
  • DOI: https://doi.org/10.48550/arXiv.1810.04805

"GPT-3: Language Models are Few-Shot Learners"
  • Authors: OpenAI
  • DOI: https://doi.org/10.48550/arXiv.2005.14165

"Mistral 7B"
  • Authors: Albert Q. Jiang, Alexandre Sablayrolles, Devendra Singh Chaplot et al.
  • DOI: https://doi.org/10.48550/arXiv.2310.06825

"XLNet: Generalized Autoregressive Pretraining for Language Understanding"
  • Authors: Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le
  • DOI: https://doi.org/10.48550/arXiv.1906.08237

"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
  • Authors: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee et al.
  • DOI: https://doi.org/10.48550/arXiv.1910.10683

"Mitigating Unwanted Biases in Word Embeddings"
  • Authors: Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama
  • DOI: https://doi.org/10.48550/arXiv.1606.06121

"Transparency Helps Reveal When Language Models Learn Meaning"
  • Authors: Zhaofeng Wu, William Merrill, Hao Peng, Iz Beltagy, Noah A. Smith
  • DOI: https://doi.org/10.1162/tacl_a_00565

"Concrete Problems in AI Safety"
  • Authors: Dario Amodei, Chris Olah, Paul Christiano, John Schulman, Dan Mané
  • DOI: https://doi.org/10.48550/arXiv.1606.06565

"AI Alignment"
  • Authors: Stuart Armstrong
  • DOI: https://doi.org/10.48550/arXiv.2310.19852

"AI Safety via Debate"
  • Authors: Geoffrey Irving, Paul Christiano, Dario Amodei
  • DOI: https://doi.org/10.48550/arXiv.1805.00899

"Sequence to Sequence Learning with Neural Networks"
  • Authors: Ilya Sutskever, Oriol Vinyals, Quoc V. Le
  • DOI: https://doi.org/10.48550/arXiv.1409.3215

"MapReduce: Simplified Data Processing on Large Clusters"
  • Authors: Jeffrey Dean, Sanjay Ghemawat
  • DOI: https://doi.org/10.1145/1327452.1327492