Codestral 22B V0.1 Abliterated V3.8.0bpw H8 EXL2 by FuturisticVibes



Codestral 22B V0.1 Abliterated V3.8.0bpw H8 EXL2 Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Codestral 22B V0.1 Abliterated V3.8.0bpw H8 EXL2 (FuturisticVibes/Codestral-22B-v0.1-abliterated-v3-8.0bpw-h8-exl2)

Codestral 22B V0.1 Abliterated V3.8.0bpw H8 EXL2 Parameters and Internals

Model Type 
code generation, fill-in-the-middle
Use Cases 
Areas:
Code generation, Software development, Research applications
Applications:
Simplifying code tasks, Education in programming, Integration with development tools like VS Code
Primary Use Cases:
Generating code snippets, Code explanation and documentation, Predicting code completions using FIM
Limitations:
No moderation mechanisms, Possible untested quirks from the new abliteration methodology
Considerations:
Intended to increase compliance but may have unexpected outcomes due to methodology.
Additional Notes 
The model may exhibit quirks due to the experimental application of a new methodology.
Supported Languages 
80+ programming languages
Training Details 
Data Sources:
Diverse dataset of 80+ programming languages
Methodology:
Orthogonalization/Ablation to inhibit refusal expression
Hardware Used:
RunPod H100 for some stages
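The orthogonalization/ablation step named under Methodology can be pictured as removing a single "refusal" direction from the rows of a weight matrix, i.e. projecting each row onto the orthogonal complement of that direction. A minimal pure-Python sketch of the idea (a toy illustration, not the exact abliteration recipe used for this model):

```python
def ablate_direction(rows, v):
    """Project each row of a weight matrix onto the orthogonal
    complement of direction v, removing that direction entirely.
    Toy illustration of the orthogonalization ("abliteration")
    idea, not the published recipe."""
    norm = sum(x * x for x in v) ** 0.5
    vhat = [x / norm for x in v]  # unit refusal direction
    out = []
    for row in rows:
        proj = sum(r * x for r, x in zip(row, vhat))  # row . vhat
        out.append([r - proj * x for r, x in zip(row, vhat)])
    return out

# After ablation, no row has any component along v.
W = [[1.0, 2.0], [3.0, -1.0]]
v = [1.0, 1.0]
W_abl = ablate_direction(W, v)
```

Because the refusal direction is removed from the weights themselves rather than trained away, the technique leaves the rest of the model's behavior largely intact, which is why it is described here as complementary to fine-tuning.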
Responsible AI Considerations 
Fairness:
The model is uncensored; refusal behavior has been ablated
Transparency:
Orthogonalization method used for ablation
Input Output 
Input Format:
Instruct format, fill-in-the-middle (FIM)
Accepted Modalities:
text
Output Format:
Code snippets, documentation, explanations
Performance Tips:
Appropriate prompt engineering may enhance outputs; the orthogonalization method is complementary to fine-tuning
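For the fill-in-the-middle input format, a minimal sketch of assembling a FIM prompt. It assumes the Mistral/Codestral FIM layout in which the suffix precedes the prefix and the model generates the span that belongs between them; the literal token spellings are assumptions, so verify them against the tokenizer's special tokens before use:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle (FIM) prompt.

    Assumes the Mistral/Codestral layout: suffix first, then
    prefix; the model completes the middle. Token spellings
    ([SUFFIX], [PREFIX]) are assumptions -- check the tokenizer.
    """
    return f"[SUFFIX]{suffix}[PREFIX]{prefix}"

# The model would be asked to generate the body between the
# function header (prefix) and the call site (suffix).
prompt = build_fim_prompt(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(2, 3))",
)
```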
LLM Name: Codestral 22B V0.1 Abliterated V3.8.0bpw H8 EXL2
Repository 🤗: https://huggingface.co/FuturisticVibes/Codestral-22B-v0.1-abliterated-v3-8.0bpw-h8-exl2
Model Size: 22B
Required VRAM: 21.2 GB
Updated: 2025-02-05
Maintainer: FuturisticVibes
Model Type: mistral
Model Files: 8.5 GB (1 of 3), 8.5 GB (2 of 3), 4.2 GB (3 of 3)
Supported Languages: code
Quantization Type: exl2
Model Architecture: MistralForCausalLM
License: MNPL-0.1
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.40.1
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32768
Torch Data Type: bfloat16
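As a sanity check on the VRAM figure above, the size of a quantized checkpoint scales with parameter count times bits per weight. A back-of-the-envelope sketch (the flat 22B count and uniform 8.0 bpw are simplifications):

```python
def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Naive size of the quantized weights in decimal gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# Codestral is nominally 22B parameters; at 8.0 bits per weight the
# naive figure is ~22 GB. The actual checkpoint (21.2 GB across three
# shards) lands slightly under, since EXL2 measures precision per
# layer rather than applying a flat 8 bits everywhere.
size = quantized_size_gb(22e9, 8.0)
```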

Best Alternatives to Codestral 22B V0.1 Abliterated V3.8.0bpw H8 EXL2

Best Alternatives                        Context / RAM     Downloads / Likes
MwM 22B Instruct                         128K / 44.7 GB    150
... Instruct 0.2 Chkpt 200 16 Bit        128K / 44.7 GB    201
...l 22B V0.1 Hf FIM Fix Bnb 4bit        32K / 13.1 GB     51
Text2cypher Codestral 16bit              32K / 44.7 GB     133
Codestral 22B V0.1 EXL2 8.0bpw           32K / 20.9 GB     93
...estral 22B V0.1 Hf 2 2bpw EXL2        32K / 6.6 GB      81
Codestral 22B V0.1 EXL2 6.0bpw           32K / 16.9 GB     71
Codestral 22B V0.1 EXL2 5.0bpw           32K / 14.2 GB     41
MS Schisandra 22B V0.2                   128K / 44.7 GB    188
...ntheon RP Pure 1.6.2 22B Small        128K / 44.7 GB    6326
Note: green Score (e.g. "73.2") means that the model is better than FuturisticVibes/Codestral-22B-v0.1-abliterated-v3-8.0bpw-h8-exl2.

Rank the Codestral 22B V0.1 Abliterated V3.8.0bpw H8 EXL2 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227