Qwen2 7B Matter 0.1 Slim A by munish0838


Tags: 4bit, Autotrain compatible, Conversational, English, Endpoints compatible, PyTorch, Quantized, Qwen2, Region: US, SFT, Sharded, TRL, Unsloth

Qwen2 7B Matter 0.1 Slim A Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Qwen2 7B Matter 0.1 Slim A (munish0838/Qwen2-7b-Matter-0.1-Slim-A)

Qwen2 7B Matter 0.1 Slim A Parameters and Internals

Model Type: text-generation
Additional Notes: The model was fine-tuned with Unsloth together with Hugging Face's TRL library, which is reported to give roughly 2x faster training (see the sketch after this list).
Supported Languages: en (English)
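For context, here is a minimal sketch of what an Unsloth + TRL supervised fine-tuning setup like the one described above typically looks like. The dataset file, LoRA settings, and hyperparameters are illustrative placeholders, not the maintainer's actual configuration.

from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the 4-bit base model this checkpoint is derived from (see Base Model(s) below).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2-7b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset; the actual SFT data used for this model is not documented here.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()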
LLM Name: Qwen2 7B Matter 0.1 Slim A
Repository 🤗: https://huggingface.co/munish0838/Qwen2-7b-Matter-0.1-Slim-A
Base Model(s): unsloth/Qwen2-7b-bnb-4bit
Model Size: 7b
Required VRAM: 15.2 GB
Updated: 2025-02-22
Maintainer: munish0838
Model Type: qwen2
Model Files: 4.9 GB (1-of-4), 4.9 GB (2-of-4), 4.3 GB (3-of-4), 1.1 GB (4-of-4)
Supported Languages: en
Quantization Type: 4bit
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.41.2
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|PAD_TOKEN|>
Vocabulary Size: 152064
Torch Data Type: float16
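Given the metadata above (float16 weights in four shards, Qwen2Tokenizer, 131072-token context), loading the checkpoint with plain transformers looks roughly like the sketch below; the prompt and generation settings are illustrative.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "munish0838/Qwen2-7b-Matter-0.1-Slim-A"

tokenizer = AutoTokenizer.from_pretrained(repo)  # resolves to Qwen2Tokenizer
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the float16 weights (~15.2 GB of VRAM)
    device_map="auto",          # the four shards (1-of-4 .. 4-of-4) load automatically
)

inputs = tokenizer("Briefly explain supervised fine-tuning.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))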

Best Alternatives to Qwen2 7B Matter 0.1 Slim A

Model | Context / RAM | Downloads | Likes
Qwen2.5 7B Instruct 1M 4bit | 986K / 4.3 GB | 730 | 6
...B Instruct 1M Unsloth Bnb 4bit | 986K / 7.5 GB | 354 | 1
...still Qwen 7B Unsloth Bnb 4bit | 128K / 8.5 GB | 52980 | 9
...ek R1 Distill Qwen 7B Bnb 4bit | 128K / 5.5 GB | 2682 | 3
Mini Pathfinder | 128K / 15.2 GB | 15 | 0
CogitoDistil | 128K / 15.2 GB | 28 | 0
A1 V002 | 128K / 15.2 GB | 30 | 0
A1 V0.0.1 | 128K / 15.2 GB | 7 | 0
Qwen2.5 7B Bnb 4bit | 128K / 5.5 GB | 19167 | 4
Atlas Flash 7B Preview | 128K / 15.2 GB | 117 | 3

Note: a green score (e.g., "73.2") means the model is better than munish0838/Qwen2-7b-Matter-0.1-Slim-A.
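Several of the alternatives above are bnb-4bit variants that fit in roughly 4-8 GB rather than the 15.2 GB this fp16 checkpoint requires. A similar footprint can likely be achieved by quantizing this model to 4-bit at load time with bitsandbytes; the NF4 settings below are common defaults, not something this repository prescribes.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumed configuration: standard NF4 4-bit quantization via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # store weights in NF4, compute in fp16
)

model = AutoModelForCausalLM.from_pretrained(
    "munish0838/Qwen2-7b-Matter-0.1-Slim-A",
    quantization_config=bnb_config,
    device_map="auto",
)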

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227