Llama3 8B Oig Unsloth Merged by Davidcv18


Tags: 4bit · Autotrain compatible · Base model: unsloth/llama-3-8b-... · En · Endpoints compatible · License: apache-2.0 · Llama · Pytorch · Quantized · Region: us · Sft · Sharded · Trl · Unsloth
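The tags above (Sft, Trl, Unsloth, and a 4-bit Unsloth base model) point to the usual Unsloth workflow: LoRA fine-tuning on top of unsloth/llama-3-8b-bnb-4bit with TRL's SFTTrainer, then merging the adapter back into the base weights to produce a standalone "merged" checkpoint. The sketch below is a generic, hedged reconstruction of that workflow, not the maintainer's actual training code; the dataset (the "Oig" in the name hints at laion/OIG) and all hyperparameters are illustrative placeholders, and the SFTTrainer arguments follow the trl<=0.8-style API.

```python
# Hypothetical sketch of an Unsloth SFT + merge workflow; the dataset
# and hyperparameters are placeholders, not the maintainer's setup.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 8192  # matches the context length listed below

# Load the 4-bit quantized base model named in the tags.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules are common defaults,
# not values taken from this model card).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Placeholder dataset guess based on the "Oig" in the model name.
dataset = load_dataset("laion/OIG", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # trl<=0.8-style arguments
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Merge the LoRA adapter into the base weights and save in float16,
# yielding a standalone "merged" checkpoint like the one listed here.
model.save_pretrained_merged("llama3-8b-oig-unsloth-merged",
                             tokenizer, save_method="merged_16bit")
```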

Rank the Llama3 8B Oig Unsloth Merged Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Llama3 8B Oig Unsloth Merged (Davidcv18/llama3-8b-oig-unsloth-merged)

Best Alternatives to Llama3 8B Oig Unsloth Merged

| Best Alternatives | HF Rank | Context / RAM | Downloads | Likes |
|---|---|---|---|---|
| ...Antino 3 ANITA 8B Inst DPO ITA | 75.12 | 8K / 16.1 GB | 6597 | 18 |
| Llama 3 Merged Linear | 73.93 | 8K / 16.1 GB | 868 | 16 |
| ...ama 3 SauerkrautLM 8B Instruct | 73.74 | 8K / 16.1 GB | 29413 | 46 |
| Llama3s Merged Linear | 73.66 | 8K / 16.1 GB | 1337 | 0 |
| Llama 3 8B Ita | 73.65 | 8K / 16.1 GB | 21238 | 20 |
| Llama 3 Gutenberg 8B | 73.18 | 8K / 16.1 GB | 1617 | 6 |
| Llama 3 8B Instruct V0.8 | 73.17 | 8K / 16 GB | 2621 | 1 |
| Llama 3 Wissenschaft 8B V2 | 73.06 | 8K / 16.1 GB | 1339 | 1 |
| NeuralLLaMa 3 8B ORPO V0.3 | 72.66 | 8K / 16.1 GB | 4828 | 0 |
| NeuralLLaMa 3 8B DT V0.1 | 72.52 | 8K / 16.1 GB | 5446 | 1 |
Note: a green score (e.g. "73.2") indicates that the alternative model ranks better than Davidcv18/llama3-8b-oig-unsloth-merged.

Llama3 8B Oig Unsloth Merged Parameters and Internals

LLM Name: Llama3 8B Oig Unsloth Merged
Repository: Open on 🤗 Hugging Face
Base Model(s): unsloth/llama-3-8b-bnb-4bit (Llama 3 8B Bnb 4bit)
Model Size: 8b
Required VRAM: 16.1 GB
Updated: 2024-06-23
Maintainer: Davidcv18
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Supported Languages: en
Quantization Type: 4bit
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.41.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|reserved_special_token_250|>
Vocabulary Size: 128256
Initializer Range: 0.02
Torch Data Type: float16
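Given the internals above (a LlamaForCausalLM checkpoint in float16, sharded across four files, with an 8192-token context), a standard transformers load should work. A minimal sketch, assuming the repo id from this listing, a recent transformers install with accelerate, and roughly 16.1 GB of VRAM:

```python
# Minimal loading sketch based on the internals listed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Davidcv18/llama3-8b-oig-unsloth-merged"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # PreTrainedTokenizerFast
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the listed torch data type
    device_map="auto",          # places the 4 shards across available devices
)

prompt = "Explain what a merged LoRA checkpoint is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)  # context window is 8192 tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If 16 GB of VRAM is not available, the same checkpoint can also be loaded in 4-bit via a bitsandbytes BitsAndBytesConfig, consistent with the 4bit quantization tag above.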


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801