LLaMa 3 Stheno V3.2 15B by PJMixers


  Merged Model   Arxiv:2212.04089   Autotrain compatible   Base model:elinas/llama-3-15b-...   Base model:pjmixers/llama-3-st...   Conversational   Endpoints compatible   Instruct   Llama   Region:us   Safetensors   Sharded   Tensorflow
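The tags mark this as a merged model and cite arXiv:2212.04089 ("Editing Models with Task Arithmetic"). As a rough illustration of that method, here is a minimal Python sketch: each fine-tune's weight delta against a shared base is treated as a task vector, and scaled copies of the task vectors are added back onto the base. The repo IDs come from the Base Model(s) row further down; which checkpoint serves as the common base and the 0.5 scaling weights are illustrative assumptions, not the maintainer's actual recipe.

```python
# Task-arithmetic merge sketch (arXiv:2212.04089): merged = base + sum_i(w_i * (ft_i - base)).
# Base/donor roles and the 0.5 weights are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM

BASE_ID = "elinas/Llama-3-15B-Instruct-zeroed"            # assumed common base
DONORS = [
    ("elinas/Llama-3-15B-Instruct-ft-v2", 0.5),           # assumed weight
    ("PJMixers/LLaMa-3-Stheno-v3.2-Zeroed-15B", 0.5),     # assumed weight
]

base = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.bfloat16)
base_state = base.state_dict()
merged_state = {k: v.clone() for k, v in base_state.items()}

for donor_id, weight in DONORS:
    donor = AutoModelForCausalLM.from_pretrained(donor_id, torch_dtype=torch.bfloat16)
    for k, v in donor.state_dict().items():
        # Task vector = fine-tuned weights minus base weights.
        merged_state[k] += weight * (v - base_state[k])
    del donor  # free ~30 GB before loading the next checkpoint

base.load_state_dict(merged_state)
base.save_pretrained("LLaMa-3-Stheno-v3.2-15B-merged")
```

In practice, merges like this are usually produced with a dedicated tool such as mergekit, which streams shards from disk instead of holding several 15B checkpoints in memory at once.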

Rank the LLaMa 3 Stheno V3.2 15B Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
LLaMa 3 Stheno V3.2 15B (PJMixers/LLaMa-3-Stheno-v3.2-15B)

Best Alternatives to LLaMa 3 Stheno V3.2 15B

| Best Alternatives | HF Rank | Context / RAM | Downloads | Likes |
|---|---|---|---|---|
| Llama 3 15B Instruct Zeroed Ft | 67.93 | 8K / 30.1 GB | 164 | 41 |
| Llama 3 15B Instruct Zeroed | 67.75 | 8K / 30 GB | 173 | 41 |
| Meta Llama 3 15B Instruct | n/a | 8K / 23.9 GB | 115 | 21 |
| Llama 3 15B Instruct Ft V2 | n/a | 8K / 30.1 GB | 144 | 53 |
| L3 NA Aethora 15B | n/a | 8K / 30.1 GB | 11 | 3 |
| Llama 3 15B Instruct NSFW ORPO | n/a | 8K / 30.9 GB | 1 | 1 |
| Llama 3 15B Instruct Ft V2 AWQ | n/a | 8K / 9.4 GB | 13 | 0 |
| WizardCoder 15B V1.1 4bit | n/a | 2K / 7.3 GB | 14 | 2 |
Note: on the original page, a green score (e.g. "73.2") indicates a model that outperforms PJMixers/LLaMa-3-Stheno-v3.2-15B.

LLaMa 3 Stheno V3.2 15B Parameters and Internals

LLM Name: LLaMa 3 Stheno V3.2 15B
Repository: PJMixers/LLaMa-3-Stheno-v3.2-15B (Hugging Face 🤗)
Base Model(s): elinas/Llama-3-15B-Instruct-zeroed, elinas/Llama-3-15B-Instruct-ft-v2, PJMixers/LLaMa-3-Stheno-v3.2-Zeroed-15B
Merged Model: Yes
Model Size: 15b
Required VRAM: 30 GB
Updated: 2024-06-24
Maintainer: PJMixers
Model Type: llama
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-7), 5.0 GB (2-of-7), 4.9 GB (3-of-7), 4.9 GB (4-of-7), 5.0 GB (5-of-7), 4.9 GB (6-of-7), 0.3 GB (7-of-7)
Model Architecture: LlamaForCausalLM
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.41.0
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Initializer Range: 0.02
Torch Data Type: bfloat16
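
Given these internals (LlamaForCausalLM, bfloat16 weights in seven safetensors shards, an 8192-token context), a minimal transformers loading sketch might look like the following; the prompt and generation settings are illustrative, not tuned recommendations.

```python
# Minimal loading sketch based on the internals listed above.
# Expect roughly 30 GB of VRAM for the full bfloat16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "PJMixers/LLaMa-3-Stheno-v3.2-15B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,   # matches the listed torch data type
    device_map="auto",            # spreads the 7 shards across available devices (needs accelerate)
)

prompt = "Write a short scene set in a rain-soaked city."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)  # stays well under the 8192-token context
print(tokenizer.decode(output[0], skip_special_tokens=True))
```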

Original data from Hugging Face, OpenCompass, and various public Git repos.
Release v2024042801