InstructLM 500M by instruction-pretrain


Tags: Arxiv:2309.09530, Arxiv:2406.14491, Arxiv:2411.19930, Autotrain compatible, Dataset:instruction-pretrain/f..., Dataset:instruction-pretrain/g..., Dataset:tiiuae/falcon-refinedw..., En, Endpoints compatible, Instruct, Mistral, Pytorch, Region:us, Safetensors

InstructLM 500M Benchmarks

Benchmark scores (nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
InstructLM 500M (instruction-pretrain/InstructLM-500M)

InstructLM 500M Parameters and Internals

Model Type:
Language Model
Additional Notes:
The Instruction Pre-Training framework scalably augments massive raw corpora with instruction-response pairs to pre-train language models.
Training Details:
Data Sources:
tiiuae/falcon-refinedweb, instruction-pretrain/ft-instruction-synthesizer-collection, instruction-pretrain/general-instruction-augmented-corpora
Data Volume:
100B to 250B tokens
Methodology:
Instruction Pre-Training: supervised multitask pre-training on instruction-response pairs synthesized from the raw corpus.
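To make the methodology concrete, here is a minimal Python sketch of how a raw document might be combined with synthesized instruction-response pairs into a single pre-training example. The text, pairs, and "Question:/Answer:" template below are illustrative assumptions; the actual synthesis and formatting are handled by the authors' instruction-synthesizer.

```python
# Illustrative sketch only: the real instruction-response pairs and
# templates come from instruction-pretrain's instruction-synthesizer,
# not from this code.

raw_text = (
    "The Falcon RefinedWeb corpus is a large filtered and deduplicated "
    "web dataset used for pre-training language models."
)

# Pairs an instruction synthesizer might generate from the raw text
# (hypothetical examples, not real synthesizer output).
synthesized_pairs = [
    ("What is the Falcon RefinedWeb corpus?",
     "A large filtered and deduplicated web dataset for pre-training."),
]

def build_pretraining_example(text, pairs):
    """Concatenate raw text with instruction-response pairs, so a single
    training example covers both plain next-token prediction on the raw
    corpus and supervised multitask QA."""
    parts = [text]
    for instruction, response in pairs:
        parts.append(f"Question: {instruction}\nAnswer: {response}")
    return "\n\n".join(parts)

print(build_pretraining_example(raw_text, synthesized_pairs))
```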
Release Notes:
2024-09-20: Our paper was accepted to the EMNLP 2024 main conference.
2024-09-11: Updated the FAQ on continual pre-training from Llama 3.
2024-08-29: Updated guidelines on evaluating any Hugging Face model on domain-specific tasks.
2024-07-31: Updated pre-training suggestions in the Advanced Usage section of the instruction-synthesizer.
2024-07-15: Scaled up the pre-trained tokens from 100B to 250B, with the number of synthesized instruction-response pairs reaching 500M.
2024-06-21: Released the paper, code, and resources.
LLM Name: InstructLM 500M
Repository: https://huggingface.co/instruction-pretrain/InstructLM-500M
Model Size: 500M
Required VRAM: 2.3 GB
Updated: 2024-12-14
Maintainer: instruction-pretrain
Model Type: mistral
Instruction-Based: Yes
Model Files: 2.3 GB
Supported Languages: en
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.34.0.dev0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
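
Given the metadata above (MistralForCausalLM architecture, 2048-token context, float16 weights, LlamaTokenizer), the model can be loaded with the standard transformers API. A minimal sketch, assuming the transformers and torch packages are installed; the prompt format is an illustrative assumption, not a documented template for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "instruction-pretrain/InstructLM-500M"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer, 32000-token vocab
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the card's float16 weights (~2.3 GB)
)

# The "Question:/Answer:" prompt below is an assumption for illustration.
prompt = "Question: What does instruction pre-training do?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)  # stays within the 2048-token context
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```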



Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v20241124