Daredevil 8B Abliterated GPTQ by aspirina765


Tags: arxiv:1910.09700 · 4-bit · autoquant · autotrain compatible · conversational · endpoints compatible · gptq · llama · quantized · region:us · safetensors · sharded · tensorflow


Daredevil 8B Abliterated GPTQ Parameters and Internals

Additional Notes 
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. The card was generated automatically; more information is needed in several sections, such as the model's type, training data, and evaluation metrics.
LLM Name: Daredevil 8B Abliterated GPTQ
Repository: https://huggingface.co/aspirina765/Daredevil-8B-abliterated-GPTQ
Base Model(s): Daredevil 8B Abliterated (mlabonne/Daredevil-8B-abliterated)
Model Size: 8B
Required VRAM: 5.8 GB
Updated: 2025-03-14
Maintainer: aspirina765
Model Type: llama
Model Files: 4.7 GB (1 of 2), 1.1 GB (2 of 2)
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: LlamaForCausalLM
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.41.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128256
Torch Data Type: float16
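Given the card's repository id and 8192-token context length, a minimal sketch of loading this GPTQ checkpoint with the Hugging Face transformers library might look as follows. Only the repo id and context length come from the card above; the device map, the deferred import, and the `fits_in_context` helper are illustrative assumptions, not part of the published model card.

```python
REPO_ID = "aspirina765/Daredevil-8B-abliterated-GPTQ"  # from the card above
CONTEXT_LEN = 8192  # "Context Length" from the card


def fits_in_context(prompt_tokens: int, max_new_tokens: int,
                    context_len: int = CONTEXT_LEN) -> bool:
    """Check that the prompt plus the generation budget stays within the window."""
    return prompt_tokens + max_new_tokens <= context_len


def load_model():
    # Assumption: a CUDA GPU plus the GPTQ extras (optimum / gptqmodel) are
    # installed, so transformers can dispatch the quantized weights.
    # Import is deferred so the helper above stays usable without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    model = AutoModelForCausalLM.from_pretrained(REPO_ID, device_map="auto")
    return tokenizer, model


if __name__ == "__main__":
    # An 8000-token prompt leaves at most 192 tokens of generation headroom.
    print(fits_in_context(8000, 256))  # → False
```

At 5.8 GB of required VRAM, the 4-bit GPTQ shards should fit comfortably on a single consumer GPU with 8 GB or more.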

Best Alternatives to Daredevil 8B Abliterated GPTQ

Model (truncated name)             Context / RAM    Downloads / Likes
...a 3 8B Instruct 262K 4bit GPTQ     256K / 5.8 GB    171
... 8B Instruct 262K 4bit GPTQ 02     256K / 5.7 GB    120
...lama 3.1 8B Instruct GPTQ INT4     128K / 5.8 GB    1246022
...Instruct 80K Qlora Merged GPTQ      80K / 5.8 GB    100
Llama3 German 8B 32K GPTQ              64K / 5.7 GB    90
...oLeo Instruct 8B 32K V0.1 GPTQ      64K / 5.7 GB    80
Tsukasa Llama 3 8B Qlora Gptq          32K / 5.8 GB    110
Llama 3 Soliloquy 8B V2 GPTQ           24K / 5.7 GB    101
Llama 3 Soliloquy 8B GPTQ              16K / 5.7 GB    105
Meta Llama 3 8B Instruct GPTQ           8K / 5.7 GB    25343
Note: a green score (e.g. "73.2") means that model outperforms aspirina765/Daredevil-8B-abliterated-GPTQ.

Rank the Daredevil 8B Abliterated GPTQ Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227