Llama3 8B 1.58 100B Tokens by HF1BitLLM


Tags: Arxiv:2402.17764 · 8-bit · Autotrain compatible · Base model: meta-llama/Meta-Llama-3-8B-Instruct · Base model (quantized): meta-llama/Meta-Llama-3-8B-Instruct · Bitnet · Conversational · Endpoints compatible · Instruct · Llama · Region: us · Safetensors

Llama3 8B 1.58 100B Tokens Benchmarks

Scores are shown as percentages ("nn.n%") indicating how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Llama3 8B 1.58 100B Tokens (HF1BitLLM/Llama3-8B-1.58-100B-tokens)

Llama3 8B 1.58 100B Tokens Parameters and Internals

Model Type: large language model
Additional Notes: Fine-tuned from Llama 3 8B Instruct using the BitNet 1.58b architecture, which applies extreme (ternary) weight quantization; see the sketch below.
Training Details:
Data Sources: FineWeb-edu dataset
Data Volume: 100 billion tokens
Model Architecture: BitNet 1.58b
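
The "extreme quantization" noted above refers to the absmean ternary scheme described in the cited paper (arXiv:2402.17764), where each weight tensor is scaled by its mean absolute value and rounded to {-1, 0, +1}. The snippet below is a minimal illustrative sketch of that scheme under those assumptions; it is not code from the HF1BitLLM training pipeline.

```python
import torch

def absmean_ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    """Quantize a weight tensor to ternary values {-1, 0, +1} with a per-tensor
    scale, following the absmean scheme of BitNet b1.58 (arXiv:2402.17764)."""
    scale = w.abs().mean().clamp(min=eps)      # per-tensor absmean scale
    w_q = (w / scale).round().clamp(-1, 1)     # ternary weights in {-1, 0, +1}
    return w_q, scale                          # dequantize as w_q * scale

# Example: quantize a random weight matrix and check reconstruction error
w = torch.randn(4096, 4096)
w_q, scale = absmean_ternary_quantize(w)
print(w_q.unique())                      # tensor([-1., 0., 1.])
print((w_q * scale - w).abs().mean())    # mean absolute reconstruction error
```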
LLM Name: Llama3 8B 1.58 100B Tokens
Repository 🤗: https://huggingface.co/HF1BitLLM/Llama3-8B-1.58-100B-tokens
Base Model(s): Meta Llama 3 8B Instruct (meta-llama/Meta-Llama-3-8B-Instruct)
Model Size: 8b
Required VRAM: 3.9 GB
Updated: 2024-12-21
Maintainer: HF1BitLLM
Model Type: llama
Instruction-Based: Yes
Model Files: 3.9 GB
Model Architecture: LlamaForCausalLM
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.44.0.dev0
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: bfloat16
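
Given the repository and configuration listed above (LlamaForCausalLM, bfloat16, 8192-token context), the model can in principle be loaded with the Transformers library. The snippet below is a hedged sketch assuming a Transformers build with 1.58-bit/BitNet support (the listing shows 4.44.0.dev0); the exact requirements are documented on the model page.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HF1BitLLM/Llama3-8B-1.58-100B-tokens"

# Assumes a Transformers version with BitNet (1.58-bit) support,
# matching the 4.44.0.dev0 build listed above.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # Torch Data Type from the listing
    device_map="auto",
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```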

Best Alternatives to Llama3 8B 1.58 100B Tokens

Best Alternatives | Context / RAM | Downloads | Likes
...a 3 8B Instruct Gradient 1048K | 1024K / 16.1 GB | 5528 | 678
MrRoboto ProLong 8B V1a | 1024K / 16.1 GB | 107 | 0
MrRoboto ProLong 8B V2a | 1024K / 16.1 GB | 100 | 0
MrRoboto ProLong 8B V2f | 1024K / 16.1 GB | 51 | 0
MrRoboto ProLong 8B V1f | 1024K / 16.1 GB | 63 | 0
MrRoboto ProLong 8B V1l | 1024K / 16.1 GB | 60 | 0
8B Unaligned BASE V2b | 1024K / 16.1 GB | 93 | 0
MrRoboto ProLong 8B V1h | 1024K / 16.1 GB | 36 | 0
MrRoboto ProLong 8B V1d | 1024K / 16.1 GB | 34 | 0
MrRoboto ProLong 8B V1m | 1024K / 16.1 GB | 28 | 0
Note: a score highlighted in green (e.g. "73.2") in the original listing indicates that the alternative model scores better than HF1BitLLM/Llama3-8B-1.58-100B-tokens.

Rank the Llama3 8B 1.58 100B Tokens Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217