List of some large language models (LLMs) and their sizes

  • megri
    Administrator
    • Mar 2004
    • 822

    Here's a list of some large language models (LLMs) and their sizes:
• Megatron-Turing NLG (530B parameters) - Developed jointly by Microsoft and NVIDIA, it is one of the largest dense LLMs currently available.
• WuDao 2.0 (1.75T parameters) - Developed by BAAI (Beijing Academy of Artificial Intelligence), at release it was the largest publicly known LLM.
• Jurassic-1 Jumbo (178B parameters) - Developed by AI21 Labs, it is a commercially available LLM.
• GShard (600B parameters) - Developed by Google AI, it is an experimental mixture-of-experts LLM that uses automatic sharding to train at massive scale.
• Switch Transformer (1.6T parameters) - Developed by Google AI, it is an LLM that uses a sparse mixture-of-experts ("switch") routing architecture to improve efficiency.
• PaLM (540B parameters) - Developed by Google AI, it is an LLM trained on a massive dataset of text and code.
    • BLOOM (176B parameters) - An open-source LLM developed by Hugging Face and a consortium of companies and organizations.
• WuDao 1.0 - Developed by BAAI (Beijing Academy of Artificial Intelligence), it was the smaller predecessor to WuDao 2.0.
    • T5-XXL (11B parameters) - A large LLM developed by Google AI, it is a versatile model that can be fine-tuned for a variety of tasks.
• Jurassic-1 Grande (17B parameters) - Developed by AI21 Labs, it is a commercially available LLM.
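For a sense of what these parameter counts mean in practice: just holding the weights takes roughly 2 bytes per parameter in fp16 (4 in fp32), before activations or optimizer state. A back-of-envelope sketch (the helper function is illustrative; the counts come from the list above):

```python
def model_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Rough memory needed just to store the weights (fp16 by default)."""
    return num_params * bytes_per_param / 1e9

# Parameter counts taken from the list above.
for name, params in [("BLOOM", 176e9), ("PaLM", 540e9), ("T5-XXL", 11e9)]:
    print(f"{name}: ~{model_memory_gb(params):.0f} GB in fp16")
# BLOOM: ~352 GB, PaLM: ~1080 GB, T5-XXL: ~22 GB
```

This is why the larger models on the list cannot run on a single GPU without quantization or model parallelism.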
    Parveen K - Forum Administrator
  • Anjali Kumari
    Junior Member
    • May 2024
    • 29

    #2
    Here are some of the large language models (LLMs) as of my last update:
    1. GPT-3 (Generative Pre-trained Transformer 3) by OpenAI
    2. BERT (Bidirectional Encoder Representations from Transformers) by Google
    3. T5 (Text-To-Text Transfer Transformer) by Google
    4. XLNet by Google/CMU
    5. RoBERTa (Robustly Optimized BERT Approach) by Facebook AI
6. ELECTRA (Efficiently Learning an Encoder that Classifies Token Replacements Accurately) by Google
    7. GPT-2 (Generative Pre-trained Transformer 2) by OpenAI
8. T-NLG (Turing Natural Language Generation) by Microsoft
    9. CTRL (Conditional Transformer Language Model) by Salesforce
10. MarianMT (machine translation models built on the Marian NMT framework) by Microsoft Translator


    • lisajohn
      Senior Member
      • May 2007
      • 263

      #3


      Sure, here are some of the prominent large language models (LLMs) as of my last update:
      1. GPT (Generative Pre-trained Transformer) series by OpenAI:
        • GPT-1
        • GPT-2
        • GPT-3
      2. BERT (Bidirectional Encoder Representations from Transformers) by Google.
      3. XLNet by Google/CMU.
      4. RoBERTa (Robustly optimized BERT approach) by Facebook AI.
      5. T5 (Text-To-Text Transfer Transformer) by Google.
      6. ELECTRA by Google.
      7. CTRL (Conditional Transformer Language Model) by Salesforce.
      8. BART (Bidirectional and Auto-Regressive Transformers) by Facebook AI.
      9. Turing-NLG by Microsoft.
10. ALBERT (A Lite BERT) by Google/Toyota Technological Institute at Chicago.
      11. DistilBERT by Hugging Face.
      12. CamemBERT by Inria/Facebook AI.
      13. Megatron by Nvidia.
      14. ProphetNet by Microsoft.
      15. ERNIE (Enhanced Representation through kNowledge Integration) by Baidu.
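Most of the models in these lists are variants of three Transformer layouts: encoder-only (BERT-style, suited to classification), decoder-only (GPT-style, suited to generation), and encoder-decoder (T5/BART-style, suited to sequence-to-sequence tasks). A minimal sketch grouping names from the thread by family (the grouping table itself is the only thing this code adds):

```python
# Architecture families for models mentioned in this thread.
FAMILIES = {
    "encoder-only": ["BERT", "RoBERTa", "ALBERT", "DistilBERT", "ELECTRA", "CamemBERT"],
    "decoder-only": ["GPT-2", "GPT-3", "CTRL", "Turing-NLG"],
    "encoder-decoder": ["T5", "BART", "MarianMT", "ProphetNet"],
}

def family_of(model: str) -> str:
    """Look up which Transformer layout a model uses."""
    for family, members in FAMILIES.items():
        if model in members:
            return family
    return "unknown"

print(family_of("RoBERTa"))  # encoder-only
print(family_of("GPT-3"))    # decoder-only
print(family_of("BART"))     # encoder-decoder
```

Knowing the family tells you what a model is good for: you would not pick an encoder-only model like RoBERTa for open-ended text generation, nor a decoder-only model like GPT-3 for cheap sentence classification.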
