Calme 2.1 Qwen2 72B by MaziyarPanahi


Tags: autotrain-compatible, base_model:Qwen/Qwen2-72B-Instruct (finetune), chat, chatml, conversational, en, finetuned, instruct, model-index, qwen, qwen2, region:us, safetensors, sharded, tensorflow

Calme 2.1 Qwen2 72B Benchmarks

Benchmark overview: Calme 2.1 Qwen2 72B (MaziyarPanahi/calme-2.1-qwen2-72b). Scores (nn.n%) indicate how the model compares to the reference models Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Calme 2.1 Qwen2 72B Parameters and Internals

Model Type: text-generation
Use Cases / Applications: advanced question-answering systems, intelligent chatbots and virtual assistants, content generation and summarization, code generation and analysis, complex problem-solving and decision support
Additional Notes: The model performs well across a wide range of benchmarks and real-world applications.
Responsible AI Considerations / Mitigation Strategies: As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.
Input Format: ChatML
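Prompts follow the standard ChatML layout used by Qwen2 instruct models; a minimal sketch of the expected format (the system and user messages here are illustrative only, not mandated by the model card):

    <|im_start|>system
    You are a helpful assistant.<|im_end|>
    <|im_start|>user
    Summarize the following paragraph in one sentence.<|im_end|>
    <|im_start|>assistant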
LLM Name: Calme 2.1 Qwen2 72B
Repository: 🤗 https://huggingface.co/MaziyarPanahi/calme-2.1-qwen2-72b
Model Name: MaziyarPanahi/calme-2.1-qwen2-72b
Model Creator: MaziyarPanahi
Base Model(s): Qwen2 72B Instruct (Qwen/Qwen2-72B-Instruct)
Model Size: 72B
Required VRAM: 146 GB
Updated: 2025-03-18
Maintainer: MaziyarPanahi
Model Type: qwen2
Instruction-Based: Yes
Model Files: 4.5 GB: 1-of-31, 5.0 GB: 2-of-31, 4.8 GB: 3-of-31, 4.8 GB: 4-of-31, 4.8 GB: 5-of-31, 5.0 GB: 6-of-31, 4.8 GB: 7-of-31, 4.8 GB: 8-of-31, 4.8 GB: 9-of-31, 5.0 GB: 10-of-31, 4.8 GB: 11-of-31, 4.8 GB: 12-of-31, 4.8 GB: 13-of-31, 5.0 GB: 14-of-31, 4.8 GB: 15-of-31, 4.8 GB: 16-of-31, 4.8 GB: 17-of-31, 5.0 GB: 18-of-31, 4.8 GB: 19-of-31, 4.8 GB: 20-of-31, 4.8 GB: 21-of-31, 5.0 GB: 22-of-31, 4.8 GB: 23-of-31, 4.8 GB: 24-of-31, 4.8 GB: 25-of-31, 5.0 GB: 26-of-31, 4.8 GB: 27-of-31, 4.8 GB: 28-of-31, 4.8 GB: 29-of-31, 3.2 GB: 30-of-31, 2.5 GB: 31-of-31
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: other
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.41.0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151646
Torch Data Type: bfloat16
Errors: replace
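A minimal loading sketch with Hugging Face transformers, assuming transformers >= 4.41.0 and enough GPU memory for the ~146 GB of bfloat16 weights (in practice the shards are spread across several GPUs with device_map="auto"); the prompt and generation settings below are illustrative only:

    # Minimal sketch: load MaziyarPanahi/calme-2.1-qwen2-72b with transformers.
    # Assumes enough GPU memory for ~146 GB of bfloat16 weights (multi-GPU
    # sharding via device_map="auto" is usually required).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "MaziyarPanahi/calme-2.1-qwen2-72b"

    tokenizer = AutoTokenizer.from_pretrained(model_id)  # Qwen2Tokenizer with a ChatML chat template
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # matches the published weights
        device_map="auto",           # spread the 31 safetensors shards across available devices
    )

    # Build a ChatML prompt through the tokenizer's chat template (example messages only).
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a 32768-token context window allows."},
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output_ids = model.generate(input_ids, max_new_tokens=256)
    print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))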

Best Alternatives to Calme 2.1 Qwen2 72B

Best Alternatives | Context / RAM | Downloads | Likes
EVA Qwen2.5 72B V0.2 | 128K / 146 GB | 910 | 17
AceInstruct 72B | 128K / 146 GB | 197 | 14
...n2.5 72B 2x Instruct TIES V1.0 | 128K / 146.1 GB | 23 | 1
...A Abliterated TIES Qwen2.5 72B | 128K / 146.1 GB | 28 | 0
72B Qwen2.5 Kunou V1 | 128K / 146 GB | 67 | 21
EVA Qwen2.5 72B V0.1 | 128K / 146 GB | 281 | 6
EVA Qwen2.5 72B V0.0 | 128K / 146 GB | 15 | 5
Qwen2.5 72B Instruct | 32K / 145.5 GB | 297830 | 771
Athene V2 Chat | 32K / 146 GB | 6181 | 287
Qwen2 72B Instruct | 32K / 145.5 GB | 36440 | 713

Rank the Calme 2.1 Qwen2 72B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227