POLAR 14B DPO V1.02 by x2bee


Tags: Arxiv:1910.09700 · 8-bit · Autotrain compatible · Dataset: we-want-gpu/yi-ko-dpo-... · Endpoints compatible · License: apache-2.0 · Llama · Region: us · Safetensors · Sharded · Tensorflow
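
The listing tags this checkpoint as 8-bit-loadable, sharded, and stored as safetensors. A minimal sketch of 8-bit loading, assuming the bitsandbytes package is installed; the quantization settings here are illustrative assumptions, not part of the model card:

```python
# Hedged 8-bit loading sketch: the "8-bit" tag above suggests this path,
# but the exact settings are illustrative, not from the model card.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "x2bee/POLAR-14B-DPO-v1.02",  # repository listed on this page
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",            # place shards across available devices
)
```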

Rank the POLAR 14B DPO V1.02 Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
POLAR 14B DPO V1.02 (x2bee/POLAR-14B-DPO-v1.02)

Best Alternatives to POLAR 14B DPO V1.02

Best Alternatives | Context / RAM | Downloads | Likes
14B | 8K / 28.4 GB | 6679 | 290
14B DPO Alpha | 8K / 28.4 GB | 5138 | 110
Qwen 14B Chat LLaMAfied | 8K / 28.4 GB | 332 | 78
Qwen 14B Llamafied | 8K / 28.4 GB | 561 | 25
CausalLM Platypus 14B | 8K / 28.4 GB | 346 | 81
Qwen 14B Llamafied | 8K / 28.4 GB | 4 | 1
SkunkApe 14B | 8K / 28.5 GB | 9 | 3
JerseyDevil 14B | 8K / 28.5 GB | 14 | 2
Docsgpt 14B | 4K / 25.9 GB | 256 | 19
IA 14B | 4K / 28.4 GB | 4310 | 2

POLAR 14B DPO V1.02 Parameters and Internals

LLM Name: POLAR 14B DPO V1.02
Repository: x2bee/POLAR-14B-DPO-v1.02 (open on 🤗 Hugging Face)
Model Size: 14B
Required VRAM: 14.6 GB
Updated: 2024-05-22
Maintainer: x2bee
Model Type: llama
Model Files: 5.0 GB (1-of-3), 5.0 GB (2-of-3), 4.6 GB (3-of-3)
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.38.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: float16
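
Given the internals above (LlamaForCausalLM architecture, float16 weights, 4096-token context, Transformers 4.38.2), a minimal loading-and-generation sketch might look like the following; the prompt, device placement, and generation settings are illustrative assumptions, not values from the model card:

```python
# Minimal sketch, assuming transformers >= 4.38.2 and enough memory for
# the ~14.6 GB of float16 weights spread across the three sharded files.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "x2bee/POLAR-14B-DPO-v1.02"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer, 32000-token vocab
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the listed torch data type
    device_map="auto",          # illustrative; shards weights across available devices
)

# Illustrative prompt; keep prompt plus output within the 4096-token context.
inputs = tokenizer("Hello! Please introduce yourself.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```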

What open-source LLMs or SLMs are you in search of? 35,549 models are indexed in total.

Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024042801