Psyonic Rose 20B Higher Quality by v000000


Tags: arXiv:2203.05482 · AutoTrain compatible · Base model: davidau/psyonic-cet... · Base model: tavtav/rose-20b · Endpoints compatible · Llama · Merge · Mergekit · Region: us · Safetensors · Sharded · TensorFlow
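The Merge/Mergekit tags and the arXiv:2203.05482 reference (linear weight averaging, "model soups") suggest this model was produced by merging its two base models with mergekit. A minimal config sketch is shown below; the merge method, weights, and dtype are assumptions for illustration, not values taken from the model card:

```yaml
# Hypothetical mergekit config (merge_method and weights are assumed, not published)
models:
  - model: tavtav/Rose-20B
    parameters:
      weight: 0.5
  - model: DavidAU/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32
    parameters:
      weight: 0.5
merge_method: linear
dtype: float32
```

With mergekit installed, such a config would typically be run via `mergekit-yaml config.yml ./output-dir`.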

Rank the Psyonic Rose 20B Higher Quality Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Psyonic Rose 20B Higher Quality (v000000/Psyonic-Rose-20B-Higher-Quality)

Best Alternatives to Psyonic Rose 20B Higher Quality

Best Alternatives                  Context / RAM      Downloads/Likes
Internlm2 20B Llama                32K / 39.6 GB      114719
Bagel 20B V04 Llama                32K / 39.6 GB      57
Internlm2 Chat 20B Llama Old       32K / 39.6 GB      5013
Internlm2 Base 20B Llama           32K / 39.6 GB      73
Bagel DPO 20B V04 Llama            32K / 39.6 GB      33
Internlm2 Limarp Chat 20B          32K / 39.6 GB      32
Internlm2 20B Llama                32K / 39.6 GB      21
Internlm2 Chat 20B Sft Llama       32K / 39.6 GB      3080
Internlm2 Base 20B Llama           32K / 39.6 GB      70
Internlm2 Chat 20B Llama           32K / 39.8 GB      25

Psyonic Rose 20B Higher Quality Parameters and Internals

LLM Name: Psyonic Rose 20B Higher Quality
Repository: Open on 🤗
Base Model(s): tavtav/Rose-20B, DavidAU/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32
Model Size: 20b
Required VRAM: 80.3 GB
Updated: 2024-07-13
Maintainer: v000000
Model Type: llama
Model Files: 4.9 GB (1-of-17), 5.0 GB (2-of-17), 5.0 GB (3-of-17), 4.8 GB (4-of-17), 4.8 GB (5-of-17), 4.8 GB (6-of-17), 5.0 GB (7-of-17), 5.0 GB (8-of-17), 5.0 GB (9-of-17), 5.0 GB (10-of-17), 4.8 GB (11-of-17), 4.8 GB (12-of-17), 4.8 GB (13-of-17), 5.0 GB (14-of-17), 5.0 GB (15-of-17), 5.0 GB (16-of-17), 1.6 GB (17-of-17)
Model Architecture: LlamaForCausalLM
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.41.0
Tokenizer Class: LlamaTokenizer
Padding Token: [PAD]
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: float32
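The 17 safetensors shards listed above account exactly for the stated 80.3 GB VRAM requirement, which is also what a 20B-parameter model stored in float32 (4 bytes per parameter) implies. A quick sanity check:

```python
# Shard sizes (GB) as listed under "Model Files" for the 17 safetensors shards.
shard_sizes_gb = [4.9, 5.0, 5.0, 4.8, 4.8, 4.8, 5.0, 5.0, 5.0, 5.0,
                  4.8, 4.8, 4.8, 5.0, 5.0, 5.0, 1.6]

total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # 80.3 -- matches the "Required VRAM" figure above

# A 20B-parameter model in float32 needs about 4 bytes per parameter:
approx_gb = 20e9 * 4 / 1e9
print(approx_gb)  # 80.0 -- consistent with the shard total
```

This is why float32 checkpoints of 20B models are roughly twice the size of the float16/bfloat16 variants (about 40 GB, as seen in the alternatives table).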


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801