Qwen2 7B FocusMix by Nelathan


Merged Model · Autotrain compatible · Base model: arcee-ai/Arcee-Spark · Base model: MaziyarPanahi/calme-2.8-qwen2-7b · Base model: Weyaxi/Einstein-v7-Qwen2-7B · Conversational · Endpoints compatible · Qwen2 · Region: us · Safetensors · Sharded · Tensorflow

Qwen2 7B FocusMix Benchmarks

Scores ("nn.n%") indicate how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Qwen2 7B FocusMix (Nelathan/Qwen2-7B-FocusMix)

Qwen2 7B FocusMix Parameters and Internals

Model Type: text generation, multimodal
Use Cases:
Primary Use Cases: task-specific instructions, complex reasoning, diverse knowledge domains (illustrative prompts below)
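
A minimal prompting sketch, assuming the Hugging Face transformers library; it only renders prompts with the model's bundled chat template, and the example instructions are illustrative (one per primary use case above), not taken from the model card:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Nelathan/Qwen2-7B-FocusMix")

# One illustrative prompt per primary use case.
examples = [
    "Rewrite this changelog entry in formal English: 'fixed the thing that broke uploads'",  # task-specific instruction
    "If every A is a B, and some B are C, does it follow that some A are C?",                # complex reasoning
    "Summarize the difference between TCP and UDP in two sentences.",                        # diverse knowledge domains
]

for instruction in examples:
    # Render the Qwen2 chat template without tokenizing, to inspect the prompt text.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": instruction}],
        tokenize=False,
        add_generation_prompt=True,
    )
    print(prompt)
```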
Training Details:
Methodology: The model was created by merging four language models: Replete-AI/Replete-LLM-Qwen2-7b, arcee-ai/Arcee-Spark, Weyaxi/Einstein-v7-Qwen2-7B, and MaziyarPanahi/calme-2.8-qwen2-7b. A sketch of such a merge follows.
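
The card names the four source checkpoints but not the merge algorithm or its weights. As a minimal sketch, the snippet below performs the simplest possibility, a uniform linear average of the four checkpoints' parameters; community merges of this kind are usually produced with mergekit, and the output path here is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM

# All four sources are Qwen2-7B fine-tunes, so tensor shapes should match.
SOURCES = [
    "Replete-AI/Replete-LLM-Qwen2-7b",
    "arcee-ai/Arcee-Spark",
    "Weyaxi/Einstein-v7-Qwen2-7B",
    "MaziyarPanahi/calme-2.8-qwen2-7b",
]

# Load the first checkpoint as the skeleton; accumulate in float32 to
# avoid bfloat16 rounding during the sum. Note this holds each model in
# RAM one at a time, which is still memory-hungry for 7B checkpoints.
merged = AutoModelForCausalLM.from_pretrained(SOURCES[0], torch_dtype=torch.float32)
state = merged.state_dict()

for name in SOURCES[1:]:
    other = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float32)
    for key, value in other.state_dict().items():
        if value.is_floating_point():  # skip integer buffers, if any
            state[key] += value
    del other

for key, value in state.items():
    if value.is_floating_point():
        state[key] = value / len(SOURCES)

merged.load_state_dict(state)
merged.to(torch.bfloat16).save_pretrained("Qwen2-7B-FocusMix-linear-sketch")
```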
LLM Name: Qwen2 7B FocusMix
Repository 🤗: https://huggingface.co/Nelathan/Qwen2-7B-FocusMix
Base Model(s): Replete-AI/Replete-LLM-Qwen2-7b, arcee-ai/Arcee-Spark, Weyaxi/Einstein-v7-Qwen2-7B, MaziyarPanahi/calme-2.8-qwen2-7b
Merged Model: Yes
Model Size: 7b
Required VRAM: 15.2 GB
Updated: 2025-02-05
Maintainer: Nelathan
Model Type: qwen2
Model Files: 5.0 GB (1-of-4), 4.9 GB (2-of-4), 5.0 GB (3-of-4), 0.3 GB (4-of-4)
Model Architecture: Qwen2ForCausalLM
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.44.0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151646
Torch Data Type: bfloat16
Errors: replace
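
The Required VRAM row is consistent with the dtype and shard sizes: roughly 7.6B parameters × 2 bytes (bfloat16) ≈ 15.2 GB. Below is a minimal load-and-generate sketch using the Hugging Face transformers library, assuming transformers ≥ 4.44 (as pinned above) and about 16 GB of accelerator memory; the prompt is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nelathan/Qwen2-7B-FocusMix"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type row
    device_map="auto",           # place layers on available devices
)

# Qwen2 tokenizers ship a chat template; render a single-turn prompt.
messages = [{"role": "user", "content": "Explain beam search in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=128,
    pad_token_id=tokenizer.pad_token_id,  # <|endoftext|> per the card
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```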

Best Alternatives to Qwen2 7B FocusMix

Best Alternatives                Context / RAM     Downloads  Likes
Qwen2.5 7B Instruct 1M           986K / 15.4 GB    250441     75
Qwen2.5 7B RRP 1M                986K / 15.2 GB    97         4
COCO 7B Instruct 1M              986K / 15.2 GB    73         8
Q2.5 Instruct 1M Harmony         986K / 15.2 GB    35         0
Impish QWEN 7B 1M                986K / 15.2 GB    39         1
Qwen2.5 7B DeepSeek R1 1M        986K / 15.2 GB    50         8
MwM 7B CoT Merge1                986K / 15.2 GB    22         2
Mergekit Della Linear Vmeykci    986K / 16.2 GB    11         0
SakalFusion 7B Beta              986K / 15.2 GB    16         0
SJT 7B V1.1                      986K / 14.8 GB    2          1
Note: a green score (e.g. "73.2") means the model is better than Nelathan/Qwen2-7B-FocusMix.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227