Opus V1.4 70B Llama3 EXL2 6.0bpw H6 by dreamgen


  6-bit   Autotrain compatible   Axolotl   Conversational   En   Endpoints compatible   Exl2   Instruct   Llama   Pytorch   Quantized   Region:us   Sharded   Tensorflow   Unsloth

Opus V1.4 70B Llama3 EXL2 6.0bpw H6 Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Opus V1.4 70B Llama3 EXL2 6.0bpw H6 (dreamgen/opus-v1.4-70b-llama3-exl2-6.0bpw-h6)

Opus V1.4 70B Llama3 EXL2 6.0bpw H6 Parameters and Internals

Model Type 
text-generation
Use Cases 
Primary Use Cases:
story-writing, role-playing
Additional Notes 
Make sure the prompt is formatted according to the Opus V1 prompting standards.
Training Details 
Data Sources:
steerable story-writing, role-playing, writing-assistant, general-assistant examples
Data Volume:
>100M tokens
Context Length:
8192
Input Output 
Input Format:
Llama 3 extended template with 'writer' role
Accepted Modalities:
text
Output Format:
text
Performance Tips:
Min P sampling is recommended, with min_p in the range [0.01, 0.1] and temperature in the range [0.5, 1.5].
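The input format above (Llama 3 extended template with a 'writer' role) can be sketched as a prompt builder. This is a hypothetical illustration only: the special tokens follow the standard Llama 3 chat template, the extra 'writer' role comes from this card, and the function name `format_opus_prompt` and the example role/message contents are assumptions; the exact template should be checked against dreamgen's Opus V1 prompting guide.

```python
# Hypothetical sketch of a Llama 3 style prompt with the extra 'writer' role
# used by Opus V1 models. Standard Llama 3 special tokens are assumed; the
# authoritative format is dreamgen's Opus V1 prompting guide.

def format_opus_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Build a Llama 3 style prompt; 'writer' is the story-writing role."""
    parts = ["<|begin_of_text|>"]
    parts.append(f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>")
    for role, text in turns:
        parts.append(f"<|start_header_id|>{role}<|end_header_id|>\n\n{text}<|eot_id|>")
    # Leave the prompt open on the 'writer' role so the model continues from it.
    parts.append("<|start_header_id|>writer<|end_header_id|>\n\n")
    return "".join(parts)

# Sampling settings within the ranges recommended above:
# min_p in [0.01, 0.1], temperature in [0.5, 1.5].
sampling = {"min_p": 0.05, "temperature": 0.9}

prompt = format_opus_prompt(
    "You are a co-writer continuing the story.",
    [("user", "Write the opening scene of a mystery set in Prague.")],
)
```

The prompt is left open on the 'writer' header so generation continues in the story-writing role rather than a generic assistant turn.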
LLM Name: Opus V1.4 70B Llama3 EXL2 6.0bpw H6
Repository: https://huggingface.co/dreamgen/opus-v1.4-70b-llama3-exl2-6.0bpw-h6
Model Size: 70b
Required VRAM: 54.2 GB
Updated: 2025-02-22
Maintainer: dreamgen
Model Type: llama
Instruction-Based: Yes
Model Files: 8.4 GB (1-of-7), 8.6 GB (2-of-7), 8.4 GB (3-of-7), 8.5 GB (4-of-7), 8.5 GB (5-of-7), 8.6 GB (6-of-7), 3.2 GB (7-of-7)
Supported Languages: en
Quantization Type: exl2
Model Architecture: LlamaForCausalLM
License: cc-by-nc-nd-4.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
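As a quick sanity check, the seven shard sizes listed above account for the "Required VRAM" figure: the model weights must fit in GPU memory, so the download total is a lower bound on VRAM use (before cache and activation overhead).

```python
# Shard sizes in GB for the 7 EXL2 model files listed above.
shard_sizes_gb = [8.4, 8.6, 8.4, 8.5, 8.5, 8.6, 3.2]

# Total weight size; matches the listed "Required VRAM: 54.2 GB".
total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # 54.2
```

Note this excludes the KV cache, which grows with context length, so real-world usage at the full 8192-token context will sit somewhat above 54.2 GB.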

Best Alternatives to Opus V1.4 70B Llama3 EXL2 6.0bpw H6

Best Alternatives                         Context / RAM      Downloads  Likes
...B Instruct Gradient 1048K 8bit         1024K / 75 GB      7          5
...B Instruct Gradient 1048K 4bit         1024K / 39.7 GB    6          3
...B Instruct Gradient 1048K 2bit         1024K / 21.9 GB    5          2
...0B Instruct Gradient 262K 4bit         256K / 39.7 GB     9          3
...0B Instruct Gradient 262K 8bit         256K / 75 GB       7          2
...0B Instruct Gradient 262K 2bit         256K / 21.9 GB     5          1
... Gradient 262K 2.25bpw H6 EXL2         256K / 22.2 GB     5          0
...t Gradient 262K 4.0bpw H6 EXL2         256K / 37.2 GB     8          1
...t Gradient 262K 3.5bpw H6 EXL2         256K / 32.9 GB     7          0
...t Gradient 262K 2.4bpw H6 EXL2         256K / 23.5 GB     6          0

Rank the Opus V1.4 70B Llama3 EXL2 6.0bpw H6 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227