Poppy Porpoise 0.72 L3 8B by ChaoticNeutrals


Tags: Autotrain compatible · Base model: Nitral-AI/PP_0.71a · Base model: Nitral-AI/PP_0.71b · Conversational · Endpoints compatible · License: other · Llama · Merge · Mergekit · MoE · Region: us · Safetensors · Sharded · TensorFlow

Poppy Porpoise 0.72 L3 8B Benchmarks

Rank the Poppy Porpoise 0.72 L3 8B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Poppy Porpoise 0.72 L3 8B (Nitral-AI/Poppy_Porpoise-0.72-L3-8B)

Quantized Models of the Poppy Porpoise 0.72 L3 8B

Quantized Model | Likes | Downloads | VRAM
Poppy Porpoise 0.72 L3 8B AWQ | 0 | 8 | 5 GB
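For constrained GPUs, the AWQ build can be loaded through transformers, which reads AWQ checkpoints directly when the `autoawq` package is installed. A minimal sketch; the repo id below is a hypothetical placeholder for the quantized upload listed above, not a confirmed Hub path:

```python
# Sketch: loading an AWQ-quantized build of Poppy Porpoise.
# Assumes `transformers` and `autoawq` are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

awq_repo = "Nitral-AI/Poppy_Porpoise-0.72-L3-8B-AWQ"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(awq_repo)
# The quantization config stored with the checkpoint tells transformers
# to use the AWQ kernels; no extra arguments are needed.
model = AutoModelForCausalLM.from_pretrained(awq_repo, device_map="auto")
```

At roughly 5 GB, the AWQ weights fit on a single consumer GPU, versus the 16 GB needed for the bfloat16 originals.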

Best Alternatives to Poppy Porpoise 0.72 L3 8B

Best Alternatives | HF Rank | Context / VRAM | Downloads | Likes
...Antino 3 ANITA 8B Inst DPO ITA | 75.12 | 8K / 16.1 GB | 6597 | 19
Llama 3 Merged Linear | 73.93 | 8K / 16.1 GB | 862 | 16
...ama 3 SauerkrautLM 8B Instruct | 73.74 | 8K / 16.1 GB | 29413 | 47
Llama3s Merged Linear | 73.66 | 8K / 16.1 GB | 1337 | 0
Llama 3 8B Ita | 73.65 | 8K / 16.1 GB | 21236 | 20
Llama 3 Gutenberg 8B | 73.18 | 8K / 16.1 GB | 1617 | 6
Llama 3 8B Instruct V0.8 | 73.17 | 8K / 16 GB | 2865 | 1
Llama 3 Wissenschaft 8B V2 | 73.06 | 8K / 16.1 GB | 1339 | 1
NeuralLLaMa 3 8B ORPO V0.3 | 72.66 | 8K / 16.1 GB | 5004 | 0
NeuralLLaMa 3 8B DT V0.1 | 72.52 | 8K / 16.1 GB | 5403 | 1
Note: a green score (e.g., "73.2") on the source page indicates that the alternative outperforms Nitral-AI/Poppy_Porpoise-0.72-L3-8B.
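Download and like counts drift quickly; they can be re-queried live from the Hugging Face Hub with the `huggingface_hub` package (an assumed extra dependency, not part of this listing). A minimal sketch:

```python
# Sketch: refreshing the download/like counts shown in the tables above.
# Assumes `huggingface_hub` is installed; abbreviated repo ids in the
# alternatives table are omitted here.
from huggingface_hub import model_info

info = model_info("Nitral-AI/Poppy_Porpoise-0.72-L3-8B")
print(f"downloads={info.downloads}, likes={info.likes}")
```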

Poppy Porpoise 0.72 L3 8B Parameters and Internals

LLM Name: Poppy Porpoise 0.72 L3 8B
Repository: Nitral-AI/Poppy_Porpoise-0.72-L3-8B (open on 🤗)
Base Model(s): Nitral-AI/PP_0.71a, Nitral-AI/PP_0.71b
Model Size: 8B
Required VRAM: 16 GB
Model Type: llama
Model Files: 1.9 GB (1-of-9), 2.0 GB (2-of-9), 1.9 GB (3-of-9), 1.1 GB (4-of-9), 1.9 GB (5-of-9), 2.0 GB (6-of-9), 2.0 GB (7-of-9), 1.9 GB (8-of-9), 1.3 GB (9-of-9)
Model Architecture: LlamaForCausalLM
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.39.3
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Initializer Range: 0.02
Torch Data Type: bfloat16
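These fields map directly onto a standard transformers load. A minimal sketch, assuming `transformers` (≥ 4.39.3, per the listing) and `torch` are installed and roughly 16 GB of VRAM is available; the prompt is purely illustrative:

```python
# Sketch: loading Poppy Porpoise 0.72 L3 8B with the settings listed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Nitral-AI/Poppy_Porpoise-0.72-L3-8B"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # PreTrainedTokenizerFast, 128256-token vocab
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches "Torch Data Type: bfloat16"
    device_map="auto",           # spreads the nine safetensors shards across devices
)

# Keep prompt plus generation within the 8192-token context window.
prompt = "Write a short greeting from a porpoise."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```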


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801