Llama 30B Supercot SuperHOT 8K Fp16 by TheBloke


Tags: Autotrain compatible, Custom code, Ext 8k, Fp16, Llama, Pytorch, Quantized, Region:us, Sharded


Llama 30B Supercot SuperHOT 8K Fp16 Parameters and Internals

Model Type: text generation
Additional Notes: `config.json` sets the sequence length to 8192, but it can be lowered to 4096.
Training Details:
- Data Sources: huggyllama/llama-30b, kaiokendev/SuperCOT-LoRA
- Methodology: LangChain prompting with SuperHOT extensions; trained with the LoRA technique
Input Format: USER: [prompt] ASSISTANT: (see the prompt-building sketch below)
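A minimal sketch of the documented prompt template in Python; the `build_prompt` helper is illustrative and not part of the repository:

```python
# Hypothetical helper that wraps a user message in the card's
# documented "USER: [prompt] ASSISTANT:" template.
def build_prompt(user_message: str) -> str:
    return f"USER: {user_message} ASSISTANT:"

print(build_prompt("List three uses of chain-of-thought prompting."))
# USER: List three uses of chain-of-thought prompting. ASSISTANT:
```
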
LLM Name: Llama 30B Supercot SuperHOT 8K Fp16
Repository (Hugging Face): https://huggingface.co/TheBloke/llama-30b-supercot-SuperHOT-8K-fp16
Model Size: 30b
Required VRAM: 65.2 GB
Updated: 2025-02-22
Maintainer: TheBloke
Model Type: llama
Model Files: 9.8 GB (1-of-7), 10.0 GB (2-of-7), 9.9 GB (3-of-7), 9.9 GB (4-of-7), 9.9 GB (5-of-7), 10.0 GB (6-of-7), 5.7 GB (7-of-7)
Context Length: 8k
Quantization Type: fp16
Model Architecture: LlamaForCausalLM
License: other
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.30.0.dev0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
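
A minimal loading sketch, assuming the standard `transformers` API. The prompt text is illustrative, and `trust_remote_code=True` is an assumption based on the card's "Custom code" tag (SuperHOT repos typically ship RoPE-scaling code); per the note above, `config.json` defaults to 8192 tokens but can be lowered to 4096.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/llama-30b-supercot-SuperHOT-8K-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer, vocab size 32000
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 weights: ~65.2 GB over 7 shards
    device_map="auto",          # requires `accelerate`; spreads layers across devices
    trust_remote_code=True,     # assumption: "Custom code" tag implies custom RoPE-scaling code
)

prompt = "USER: Explain the SuperHOT 8K context extension briefly. ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```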

Best Alternatives to Llama 30B Supercot SuperHOT 8K Fp16

Best Alternatives                    Context / RAM    Downloads  Likes
Tenebra 30B Alpha01 FP16             16K / 65 GB      1792       8
Tenebra 30B Alpha01 4BIT             16K / 19.4 GB    1896       1
...nebra 30B Alpha01 EXL2 2 80bpw    16K / 11.9 GB    12         1
Tenebra 30B Alpha01 EXL2 3bpw        16K / 12.7 GB    11         0
...nebra 30B Alpha01 EXL2 2 50bpw    16K / 10.7 GB    6          0
Tenebra 30B Alpha01 3BIT             16K / 12.9 GB    9          0
Platypus 30B SuperHOT 8K Fp16        8K / 65.2 GB     2071       2
GPlatty 30B SuperHOT 8K Fp16         8K / 65.2 GB     2061       1
Tulu 30B Fp16                        2K / 65.2 GB     2057       5
WizardLM 30B Fp16                    2K / 65.2 GB     2058       10



Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v20241227