Samantha Qwen 2 7B by macadeliccc


Tags: autotrain-compatible · base model (finetune): Qwen/Qwen2-7B · conversational · datasets: HuggingFaceH4/ultrachat_200k, macadeliccc/opus_samantha, Sao10K/Claude-3-Opus-Instruct-15K, teknium/OpenHermes-2.5 · en · endpoints-compatible · instruct · qwen2 · region: us · safetensors · sharded · tensorflow · zh

Samantha Qwen 2 7B Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Subject: Samantha Qwen 2 7B (macadeliccc/Samantha-Qwen-2-7B)

Samantha Qwen 2 7B Parameters and Internals

Supported Languages: en (English), zh (Chinese)
Training Details
Data Sources: macadeliccc/opus_samantha, HuggingFaceH4/ultrachat_200k, teknium/OpenHermes-2.5, Sao10K/Claude-3-Opus-Instruct-15K
Methodology: trained on 2x RTX 4090 using QLoRA and FSDP; the LoRA adapter is published separately at https://huggingface.co/macadeliccc/Samantha-Qwen2-7B-LoRa. A hypothetical sketch of this setup follows below.
Context Length: 2048 (at training time; the released model itself supports the 131072-token limit listed further down)
Hardware Used: 2x NVIDIA RTX 4090
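The training script itself is not part of this card; the following is a minimal, hypothetical sketch of the QLoRA setup described above, using transformers, peft, bitsandbytes, and datasets. The LoRA rank, hyperparameters, output path, and the dataset's "text" column are assumptions, not the author's actual recipe, and the FSDP sharding across the two GPUs would be configured separately (e.g., via `accelerate config`) rather than inside this script.

```python
# Hypothetical QLoRA fine-tuning sketch; hyperparameters and dataset
# field names are illustrative assumptions, not the author's recipe.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE = "Qwen/Qwen2-7B"

# 4-bit NF4 quantization of the frozen base weights: the "Q" in QLoRA.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, quantization_config=bnb)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention projections (rank/alpha are guesses).
model = get_peft_model(
    model,
    LoraConfig(
        r=16,
        lora_alpha=32,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    ),
)

# One of the four listed datasets; the "text" column is an assumption.
data = load_dataset("macadeliccc/opus_samantha", split="train")
data = data.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=data.column_names,  # keep only tokenized fields
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="samantha-qwen2-qlora",  # illustrative path
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-4,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Launched under an accelerate/FSDP configuration, each 4090 holds a shard of the model while only the small adapter weights receive gradients, which is what lets a 7B fine-tune fit on two 24 GB consumer cards.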
Input/Output
Accepted Modalities: text
LLM Name: Samantha Qwen 2 7B
Repository (🤗): https://huggingface.co/macadeliccc/Samantha-Qwen-2-7B
Base Model(s): Qwen2 7B (Qwen/Qwen2-7B)
Model Size: 7B
Required VRAM: 15.2 GB
Updated: 2025-02-05
Maintainer: macadeliccc
Model Type: qwen2
Instruction-Based: Yes
Model Files: 4.9 GB (1-of-4), 4.9 GB (2-of-4), 4.3 GB (3-of-4), 1.1 GB (4-of-4)
Supported Languages: en, zh
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.41.1
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: float16
Errors: replace
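Given those settings, loading the published checkpoint is straightforward. A minimal inference sketch, assuming the repo ships a chat template (it is an instruction-tuned model); the prompt and generation parameters are illustrative:

```python
# Minimal inference sketch using the card's settings: float16 weights,
# Qwen2Tokenizer with <|endoftext|> padding, ~15.2 GB of VRAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO = "macadeliccc/Samantha-Qwen-2-7B"

tokenizer = AutoTokenizer.from_pretrained(REPO)  # resolves to Qwen2Tokenizer
model = AutoModelForCausalLM.from_pretrained(
    REPO,
    torch_dtype=torch.float16,  # matches "Torch Data Type: float16"
    device_map="auto",
)

# Assumes the repo includes a chat template; the prompt is illustrative.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

`device_map="auto"` places the float16 weights across available devices; the footprint roughly matches the 15.2 GB VRAM requirement listed above.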

Best Alternatives to Samantha Qwen 2 7B

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Qwen2.5 7B Instruct 1M | 986K / 15.4 GB | 25044 | 175 |
| Qwen2.5 7B RRP 1M | 986K / 15.2 GB | 97 | 4 |
| COCO 7B Instruct 1M | 986K / 15.2 GB | 73 | 8 |
| Q2.5 Instruct 1M Harmony | 986K / 15.2 GB | 35 | 0 |
| Impish QWEN 7B 1M | 986K / 15.2 GB | 39 | 1 |
| Qwen2.5 7B DeepSeek R1 1M | 986K / 15.2 GB | 50 | 8 |
| MwM 7B CoT Merge1 | 986K / 15.2 GB | 22 | 2 |
| Mergekit Della Linear Vmeykci | 986K / 16.2 GB | 11 | 0 |
| SJT 7B 1M | 986K / 15.2 GB | 13 | 1 |
| ....5 7B DeepSeek BunnyHarmony 1M | 986K / 14.8 GB | 0 | 1 |
Note: a green score (e.g., "73.2") indicates that the model outperforms macadeliccc/Samantha-Qwen-2-7B.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227