Aurbliterated Qwen2 7B by jeiku


Tags: Merged Model · arXiv:2403.19522 · Autotrain compatible · Conversational · Endpoints compatible · Instruct · LoRA · Qwen2 · Region: US · Safetensors · Sharded · TensorFlow
Base model tags: natong19/Qwen2-7B-Instruct-abliterated · ResplendentAI/Qwen_Sissification_LoRA_128 · ResplendentAI/Qwen_Soul_LoRA_128 · ResplendentAI/Qwen_jeiku_LoRA_128

Aurbliterated Qwen2 7B Benchmarks

Benchmark scores are reported as percentages relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Model evaluated: Aurbliterated Qwen2 7B (jeiku/Aurbliterated_Qwen2_7B)

Aurbliterated Qwen2 7B Parameters and Internals

Model Type: text generation
Training Details:
  Methodology: Model Stock merge method (arXiv:2403.19522)
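Since this is a Model Stock merge of one abliterated base combined with three LoRA adapters, a merge of this shape can be sketched with mergekit, which supports a model_stock merge method and a base+lora syntax for applying adapters. The config below is a hypothetical reconstruction inferred from the metadata on this page, not the author's actual recipe; the file name and output directory are illustrative.

```python
# Hypothetical mergekit config for a Model Stock merge of the listed base
# models. A sketch inferred from this page's metadata, not jeiku's recipe.
config = """
merge_method: model_stock   # Model Stock (arXiv:2403.19522)
base_model: natong19/Qwen2-7B-Instruct-abliterated
models:
  - model: natong19/Qwen2-7B-Instruct-abliterated+ResplendentAI/Qwen_Sissification_LoRA_128
  - model: natong19/Qwen2-7B-Instruct-abliterated+ResplendentAI/Qwen_Soul_LoRA_128
  - model: natong19/Qwen2-7B-Instruct-abliterated+ResplendentAI/Qwen_jeiku_LoRA_128
dtype: float16              # matches the Torch Data Type listed below
"""

with open("model_stock.yml", "w") as f:
    f.write(config)

# Then run the mergekit CLI:
#   mergekit-yaml model_stock.yml ./Aurbliterated_Qwen2_7B
```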
LLM Name: Aurbliterated Qwen2 7B
Repository: https://huggingface.co/jeiku/Aurbliterated_Qwen2_7B
Base Model(s): natong19/Qwen2-7B-Instruct-abliterated (base, applied with each LoRA below), ResplendentAI/Qwen_Sissification_LoRA_128, ResplendentAI/Qwen_Soul_LoRA_128, ResplendentAI/Qwen_jeiku_LoRA_128
Merged Model: Yes
Model Size: 7B
Required VRAM: 15.2 GB
Updated: 2025-02-22
Maintainer: jeiku
Model Type: qwen2
Instruction-Based: Yes
Model Files: 4 safetensors shards (1-of-4: 5.0 GB, 2-of-4: 4.9 GB, 3-of-4: 5.0 GB, 4-of-4: 0.3 GB)
Model Architecture: Qwen2ForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.41.0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
LoRA Model: Yes
Torch Data Type: float16
Tokenizer Decoding Errors: replace
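
Given the architecture and tokenizer details above, the model loads with the standard transformers API. The snippet below is a minimal sketch: the prompt and generation settings are illustrative, the float16 dtype mirrors the Torch Data Type listed above, and it assumes the repository ships a chat template, as Qwen2 instruct models typically do.

```python
# Minimal sketch: load jeiku/Aurbliterated_Qwen2_7B with transformers.
# Assumes transformers >= 4.41.0 (the version listed above) and roughly
# 15.2 GB of VRAM for the float16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "jeiku/Aurbliterated_Qwen2_7B"

tokenizer = AutoTokenizer.from_pretrained(repo)  # Qwen2Tokenizer, 152064-token vocab
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the listed Torch Data Type
    device_map="auto",
)

# Instruction-tuned model, so format the prompt with the chat template
# (context window: 32768 tokens).
messages = [{"role": "user", "content": "Summarize the Model Stock merge method."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```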

Best Alternatives to Aurbliterated Qwen2 7B

Best Alternatives                Context / RAM     Downloads  Likes
Qwen2.5 7B Instruct 1M           986K / 15.4 GB       289038    236
Qwen2.5 7B RRP 1M                986K / 15.2 GB          294      4
Qwen2.5 7B CelestialHarmony 1M   986K / 14.8 GB          153      5
COCO 7B Instruct 1M              986K / 15.2 GB          105      9
Q2.5 Instruct 1M Harmony         986K / 15.2 GB           61      1
Impish QWEN 7B 1M                986K / 15.2 GB           70      1
Qwen2.5 7B DeepSeek R1 1M        986K / 15.2 GB          881      0
Qwen2.5 7B Sky R1 Mini           986K / 15.2 GB           25      0
Qwen2.5 7B Instruct 1M           986K / 15.2 GB          633      2
MwM 7B CoT Merge1                986K / 15.2 GB           43      2

Note: a green score (e.g. "73.2") means the model outperforms jeiku/Aurbliterated_Qwen2_7B.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227