C4ai Command R Plus EXL2 2.25bpw by Dracones


Tags: Ar, Autotrain compatible, Cohere, De, En, Endpoints compatible, Es, Exl2, Fr, It, Ja, Ko, Pt, Quantized, Region:us, Safetensors, Sharded, Tensorflow, Zh

C4ai Command R Plus EXL2 2.25bpw Benchmarks

The nn.n% scores show how the model compares to the reference models Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
C4ai Command R Plus EXL2 2.25bpw (Dracones/c4ai-command-r-plus_exl2_2.25bpw)

C4ai Command R Plus EXL2 2.25bpw Parameters and Internals

LLM Name: C4ai Command R Plus EXL2 2.25bpw
Repository: https://huggingface.co/Dracones/c4ai-command-r-plus_exl2_2.25bpw
Required VRAM: 38 GB
Updated: 2025-02-22
Maintainer: Dracones
Model Type: cohere
Model Files: 8.6 GB (1-of-5), 8.5 GB (2-of-5), 8.5 GB (3-of-5), 8.6 GB (4-of-5), 3.8 GB (5-of-5)
Supported Languages: en, fr, de, es, it, pt, ja, ko, zh, ar
Quantization Type: exl2
Model Architecture: CohereForCausalLM
License: cc-by-nc-4.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.0.dev0
Tokenizer Class: CohereTokenizerFast
Padding Token: <PAD>
Vocabulary Size: 256000
Torch Data Type: float16
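
For orientation, the sketch below shows one way to fetch and run this quant with the huggingface_hub and exllamav2 Python packages. It is a minimal sketch, not part of the listing: the local path, sampler values, and prompt are illustrative assumptions, and roughly 38 GB of free VRAM plus KV-cache headroom is needed.

```python
# Minimal sketch (assumptions: huggingface_hub and exllamav2 are installed,
# ~38 GB of disk and VRAM are available; paths and sampler values are made up).
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# 1. Fetch the five safetensors shards (~38 GB total) listed above.
model_dir = snapshot_download(
    repo_id="Dracones/c4ai-command-r-plus_exl2_2.25bpw",
    local_dir="./c4ai-command-r-plus_exl2_2.25bpw",  # hypothetical target directory
)

# 2. Load the EXL2 weights, splitting layers across the available GPUs.
config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()
config.max_seq_len = 8192                  # matches the context length above

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # cache is allocated as layers load
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# 3. Generate with simple sampling settings (values are illustrative only).
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

print(generator.generate_simple("Summarize what EXL2 quantization does.", settings, 128))
```

For chat-style use, the CohereTokenizerFast tokenizer shipped with the repo normally carries Cohere's turn-token chat template, which transformers' apply_chat_template can apply to build the prompt before generation.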

Best Alternatives to C4ai Command R Plus EXL2 2.25bpw

Best Alternatives                  | Context / RAM  | Downloads | Likes
...ai Command R Plus EXL2 2.75bpw  | 8K / 44.6 GB   | 11        | 3
...4ai Command R Plus EXL2 5.5bpw  | 8K / 80.2 GB   | 9         | 1
...4ai Command R Plus EXL2 8.0bpw  | 8K / 109.8 GB  | 5         | 2
C4ai Command R V01 EXL2 6.0bpw     | 8K / 32.1 GB   | 14        | 3
...4ai Command R Plus EXL2 6.5bpw  | 8K / 93.1 GB   | 5         | 0
...ommand R V01 EXL2 3.5bpw Rpcal  | 8K / 21.2 GB   | 7         | 2
C4ai Command R V01 EXL2 5.0bpw     | 8K / 27.8 GB   | 10        | 1
C4ai Command R V01 EXL2 3.5bpw     | 8K / 21.1 GB   | 6         | 1
...4ai Command R Plus 8.0bpw EXL2  | 8K / 110 GB    | 3         | 1
...4ai Command R Plus 6.0bpw EXL2  | 8K / 86.8 GB   | 5         | 1
Note: a green score (e.g. "73.2") means that the alternative model is better than Dracones/c4ai-command-r-plus_exl2_2.25bpw.
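
As a rough sanity check on the sizes above, the weight footprint grows approximately linearly with bits per weight. The sketch below assumes ~104B parameters for Command R Plus and a fixed ~8 GB overhead for embeddings and quantization metadata; both figures are approximations rather than numbers from this listing.

```python
# Back-of-envelope size estimate: parameters * bpw / 8 bits-per-byte, plus a
# fixed overhead term (assumed ~8 GB for embeddings and quantization metadata).
def approx_size_gb(n_params: float, bpw: float, overhead_gb: float = 8.0) -> float:
    """Approximate on-disk / VRAM weight footprint in GB."""
    return n_params * bpw / 8 / 1e9 + overhead_gb

N_PARAMS = 104e9  # approximate parameter count of Command R Plus (assumption)
for bpw in (2.25, 2.75, 5.5, 6.0, 6.5, 8.0):
    print(f"{bpw:>4} bpw -> ~{approx_size_gb(N_PARAMS, bpw):.1f} GB")
```

The estimates land within a few GB of the listed file sizes; actual EXL2 outputs also vary with the measurement-driven per-layer bit allocation.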

Rank the C4ai Command R Plus EXL2 2.25bpw Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227