Magnum V4 27B by anthracite-org


Tags: Autotrain compatible, Chat, Conversational, En, Endpoints compatible, Gemma2, Instruct, Model-index, Region:us, Safetensors, Sharded, Tensorflow
Dataset tags: anthracite-org/c2 logs..., anthracite-org/kalo-op..., anthracite-org/kalo mi..., anthracite-org/kalo op..., anthracite-org/nopm cl..., epiculous/synthrp-gens..., epiculous/synthstruct-..., lodrick-the-lafted/kal..., neweden/claude-instruc...

Magnum V4 27B Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Magnum V4 27B (anthracite-org/magnum-v4-27b)

Magnum V4 27B Parameters and Internals

Model Type: text-generation
Training Details:
- Data Sources: anthracite-org/c2_logs_16k_llama_v1.1, NewEden/Claude-Instruct-5K, anthracite-org/kalo-opus-instruct-22k-no-refusal, Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned, lodrick-the-lafted/kalo-opus-instruct-3k-filtered, anthracite-org/nopm_claude_writing_fixed, Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned, anthracite-org/kalo_opus_misc_240827, anthracite-org/kalo_misc_part2
- Context Length: 8192
- Hardware Used: 8x Nvidia H100 GPUs
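The 8192-token context window covers both the prompt and the generated completion. Below is a minimal pre-flight sketch for checking whether a prompt likely fits; the ~4 characters-per-token ratio is a rough heuristic (an assumption — exact counts require running the actual Gemma tokenizer):

```python
# Rough check that a prompt fits the model's 8192-token context window.
# chars_per_token=4.0 is a heuristic assumption, not an exact tokenizer count.
CONTEXT_LENGTH = 8192

def fits_in_context(text: str, reserved_for_output: int = 512,
                    chars_per_token: float = 4.0) -> bool:
    """Estimate token count and leave room for the model's reply."""
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens + reserved_for_output <= CONTEXT_LENGTH

print(fits_in_context("hello world. " * 100))  # short prompt, fits
```

For production use, replace the heuristic with `len(tokenizer(text).input_ids)` from the model's own `GemmaTokenizer`.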
LLM Name: Magnum V4 27B
Repository: https://huggingface.co/anthracite-org/magnum-v4-27b
Model Size: 27B
Required VRAM: 54.7 GB
Updated: 2025-02-22
Maintainer: anthracite-org
Model Type: gemma2
Instruction-Based: Yes
Model Files: 12 safetensors shards — 4.7 GB (1-of-12), 4.9 GB (2-of-12), 4.9 GB (3-of-12), 5.0 GB (4-of-12), 4.9 GB (5-of-12), 4.9 GB (6-of-12), 5.0 GB (7-of-12), 4.9 GB (8-of-12), 4.9 GB (9-of-12), 5.0 GB (10-of-12), 4.9 GB (11-of-12), 0.7 GB (12-of-12)
Supported Languages: en
Model Architecture: Gemma2ForCausalLM
License: gemma
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.45.0.dev0
Tokenizer Class: GemmaTokenizer
Padding Token: <pad>
Vocabulary Size: 256000
Torch Data Type: bfloat16
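As a sanity check, the twelve shard sizes listed above sum to the stated 54.7 GB Required VRAM figure, which is consistent with roughly 27B parameters stored at 2 bytes each in bfloat16. A quick sketch:

```python
# Sum the safetensors shard sizes from the file list and compare against
# the stated "Required VRAM: 54.7 GB" (weights in bfloat16, 2 bytes/param).
shard_sizes_gb = [4.7, 4.9, 4.9, 5.0, 4.9, 4.9,
                  5.0, 4.9, 4.9, 5.0, 4.9, 0.7]  # 12 shards
total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # -> 54.7

# Back-of-envelope cross-check: 27e9 params * 2 bytes ≈ 54 GB
print(round(27e9 * 2 / 1e9))  # -> 54
```

Note that this covers weights only; actual inference needs additional VRAM for the KV cache and activations, which grows with batch size and context length.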

Rank the Magnum V4 27B Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227