Prince Canuma Llama 3 6B V0.1 2 2bpw EXL2 by Zoyd


  Arxiv:2212.05055   Arxiv:2404.08634   6b   Autotrain compatible Base model:prince-canuma/llama... Base model:quantized:prince-ca...   Dataset:huggingfacefw/fineweb Dataset:prince-canuma/fineweb-...   En   Endpoints compatible   Exl2   Llama   Llama-3-6b   Quantized   Region:us   Safetensors

Prince Canuma Llama 3 6B V0.1 2 2bpw EXL2 Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Prince Canuma Llama 3 6B V0.1 2 2bpw EXL2 (Zoyd/prince-canuma_Llama-3-6B-v0.1-2_2bpw_exl2)

Prince Canuma Llama 3 6B V0.1 2 2bpw EXL2 Parameters and Internals

Model Type: Llama
Use Cases:
  Areas: coding assistant, RAG, function calling
  Primary Use Cases: instruct and chat
  Limitations: limited scope for coding and math; English-only pretraining
Supported Languages: en (high proficiency)
Training Details:
  Data Sources: Hugging Face's FineWeb CC-Main-2024-10
  Data Volume: 1 billion tokens
  Methodology: downcycling
  Context Length: 8192
  Hardware Used: 4x RTX 6000
  Model Architecture: 24 of 32 layers copied from Llama-3-8B
Input/Output:
  Accepted Modalities: text
  Output Format: text
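The downcycling methodology noted above (initializing a 6B model by copying 24 of Llama-3-8B's 32 transformer layers) can be sketched as a layer-selection step. The card does not state which layers were kept; the split below (keep the first and last 12, drop 8 middle layers) is purely an illustrative assumption, not the recipe actually used.

```python
# Hedged sketch of "downcycling": build a smaller model by copying a
# subset of a larger model's transformer layers. The index choice here
# (drop the middle of the stack) is an assumption for illustration only.

SOURCE_LAYERS = 32  # Llama-3-8B
TARGET_LAYERS = 24  # Llama-3-6B

def downcycle_indices(source: int, target: int) -> list[int]:
    """Keep the first and last halves of the target budget,
    dropping the source model's middle layers."""
    head = target // 2
    tail = target - head
    return list(range(head)) + list(range(source - tail, source))

kept = downcycle_indices(SOURCE_LAYERS, TARGET_LAYERS)
print(kept)  # layers 0-11 and 20-31
```

After copying the selected layers (plus embeddings and head), the resulting 6B model is then continued-pretrained — here on the 1B FineWeb tokens listed above.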
LLM Name: Prince Canuma Llama 3 6B V0.1 2 2bpw EXL2
Repository: 🤗 https://huggingface.co/Zoyd/prince-canuma_Llama-3-6B-v0.1-2_2bpw_exl2
Base Model(s): Llama 3 6B V0 (prince-canuma/Llama-3-6B-v0)
Model Size: 6b
Required VRAM: 2.9 GB
Updated: 2025-02-05
Maintainer: Zoyd
Model Type: llama
Model Files: 2.9 GB
Supported Languages: en
Quantization Type: exl2
Model Architecture: LlamaForCausalLM
License: llama3
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.1
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: float16
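As a rough sanity check on the VRAM figure above, the quantized weight footprint can be estimated from the parameter count and the average bits per weight. The 2.2 bpw average comes from the repo name; the ~6.0e9 parameter count is an illustrative assumption for a "6B"-class model. This is only a lower bound on the file size, since EXL2 stores some tensors at higher precision and runtime use adds KV-cache memory on top.

```python
# Back-of-the-envelope size of an EXL2-quantized model's weights.
# The 2.2 bpw average matches the repo name; the ~6e9 parameter
# count is an illustrative assumption for a "6B"-class model.

def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

weights_gb = quantized_size_gb(6.0e9, 2.2)
print(f"~{weights_gb:.2f} GB of quantized weights")  # ~1.65 GB
```

The gap between this estimate and the listed 2.9 GB is consistent with higher-precision components (e.g. embeddings and output head) plus format overhead in the EXL2 file.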

Best Alternatives to Prince Canuma Llama 3 6B V0.1 2 2bpw EXL2

Best Alternatives                      Context / RAM    Downloads  Likes
...i 6B 200K AEZAKMI V2 6bpw EXL2      195K / 4.9 GB    8          3
Yi 6B 200K 8.0bpw H8 EXL2              195K / 6.3 GB    8          1
Yi 6B 200K 6.0bpw H6 EXL2              195K / 4.9 GB    7          1
Yi 1.5 6B Bnb 4bit                     4K / 3.9 GB      126        2
Docllm Yi 6B                           4K / 13.5 GB     7          1
Yi 6B Bnb 4bit                         4K / 3.9 GB      76         1
Yi 6B Chat 6bpw H8 EXL2 Cnen           4K / 4.9 GB      8          1
Yi Ko 1.2                              2K / 24.6 GB     2295       0
Yi Ko 3 1 7                            2K / 24.6 GB     2291       0
Electus Yiko DPO                       2K / 12.4 GB     64         0
Note: a green score (e.g. "73.2") means the model is better than Zoyd/prince-canuma_Llama-3-6B-v0.1-2_2bpw_exl2.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227