InstructLM 1.3B by instruction-pretrain


  Arxiv:2309.09530   Arxiv:2406.14491   Autotrain compatible   Dataset:instruction-pretrain/ft-instruction-synthesizer-collection   Dataset:instruction-pretrain/general-instruction-augmented-corpora   Dataset:tiiuae/falcon-refinedweb   En   Endpoints compatible   Instruct   Mistral   Region:us   Safetensors   Sharded   Tensorflow

InstructLM 1.3B Benchmarks

Benchmark scores, where reported, are percentages relative to reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
InstructLM 1.3B (instruction-pretrain/InstructLM-1.3B)

InstructLM 1.3B Parameters and Internals

Model Type 
GPT, pre-trained, instruction-based
Use Cases 
Areas:
Research, Commercial Applications
Applications:
General Language Modeling, Domain-Specific Instruction Modeling
Primary Use Cases:
Domain adaptation in finance and biomedicine, Synthesizing instruction-response pairs
Limitations:
No finance-specific data is provided, due to ethical concerns
Additional Notes 
Demonstrates the effectiveness of supervised multitask pre-training using instruction-response pairs.
Supported Languages 
en (proficient)
Training Details 
Data Sources:
tiiuae/falcon-refinedweb, instruction-pretrain/ft-instruction-synthesizer-collection, instruction-pretrain/general-instruction-augmented-corpora
Data Volume:
100B tokens
Methodology:
Supervised multitask pre-training using instruction-response pairs (a sketch follows this section)
Model Architecture:
Instruction-based GPT model
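
To make the methodology concrete, here is a minimal sketch of how raw corpus text could be augmented with synthesized instruction-response pairs before pre-training. The record layout, field names, and the Q:/A: separators are illustrative assumptions, not the authors' exact format.

    # Illustrative sketch: concatenate raw corpus text with synthesized
    # instruction-response pairs into one pre-training sequence.
    # The Q:/A: template below is an assumption, not the paper's exact recipe.
    def build_pretraining_sequence(raw_text: str, pairs: list[dict]) -> str:
        parts = [raw_text]
        for pair in pairs:
            parts.append(f"Q: {pair['instruction']}")
            parts.append(f"A: {pair['response']}")
        return "\n".join(parts)

    sequence = build_pretraining_sequence(
        "A passage sampled from falcon-refinedweb ...",
        [{"instruction": "Summarize the passage.", "response": "..."}],
    )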
Release Notes
2024-09-20: Paper accepted at the EMNLP 2024 main conference.
2024-09-11: Updated FAQ on continual pre-training from Llama3.
2024-08-29: Updated guidelines on evaluating domain-specific tasks.
2024-07-31: Scaled up pre-trained tokens to 250B, with 500M instruction-response pairs.
2024-06-21: Released paper, code, and resources.
LLM Name: InstructLM 1.3B
Repository 🤗: https://huggingface.co/instruction-pretrain/InstructLM-1.3B
Model Size: 1.3B
Required VRAM: 5.5 GB
Updated: 2025-05-15
Maintainer: instruction-pretrain
Model Type: mistral
Instruction-Based: Yes
Model Files: 1.0 GB (1-of-6), 1.0 GB (2-of-6), 1.0 GB (3-of-6), 1.0 GB (4-of-6), 1.0 GB (5-of-6), 0.5 GB (6-of-6)
Supported Languages: en
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.34.0.dev0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
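
Given the fields above (MistralForCausalLM architecture, float16 weights, a 2048-token context window, and six sharded safetensors files), the model should load with a standard transformers call; the snippet below is a sketch under those assumptions, and the prompt template is illustrative.

    # Sketch: load InstructLM-1.3B with Hugging Face transformers.
    # from_pretrained resolves the six sharded weight files automatically.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "instruction-pretrain/InstructLM-1.3B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)    # LlamaTokenizer, 32000-token vocab
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

    prompt = "Q: What is instruction pre-training?\nA:"    # prompt format is an assumption
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128) # stay within the 2048-token context
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))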

Rank the InstructLM 1.3B Capabilities

Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227