EVA Qwen2.5 7B V0.1 by EVA-UNIT-01


Base Model: Qwen/Qwen2.5-7B (finetune)
Datasets: allura-org/celeste-1.x..., allura-org/shortstorie..., anthracite-org/kalo-op..., epiculous/synthrp-gens..., epiculous/synthstruct-..., gryphe/chatgpt-4o-writ..., gryphe/sonnet3.5-charc..., gryphe/sonnet3.5-slimo..., nopm/opus writingstruc..., nothingiisreal/reddit-...
Tags: Instruct, Qwen2, Region:us, Safetensors, Sharded, Tensorflow

EVA Qwen2.5 7B V0.1 Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
EVA Qwen2.5 7B V0.1 (EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1)

EVA Qwen2.5 7B V0.1 Parameters and Internals

Model Type 
RP/storywriting specialist
Use Cases 
Areas:
Research applications, Commercial applications
Additional Notes 
The model uses the Celeste 70B 0.1 data mixture, greatly expanded to improve versatility, creativity, and "flavor".
Training Details 
Data Sources:
Celeste 70B 0.1 data mixture, Kalomaze's Opus_Instruct_25k dataset, ChatGPT-4o-WritingPrompts by Gryphe, Sonnet3.5-Charcards-Roleplay by Gryphe, shortstories_synthlabels by Auri, Synthstruct and SynthRP datasets by Epiculous
Methodology:
Full-parameter finetune on a mixture of synthetic and natural data
Training Time:
2 days
Hardware Used:
4x3090Ti
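
For illustration, here is a minimal sketch of what a full-parameter finetune along these lines could look like with Hugging Face transformers. The dataset, sequence length, and hyperparameters are placeholders, not the actual EVA-UNIT-01 recipe (the card only states that the learning rate was adjusted between versions):

```python
# Hypothetical full-parameter SFT sketch; all hyperparameters are illustrative.
import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B", torch_dtype=torch.bfloat16)

# Toy stand-in for the synthetic/natural RP mixture listed above.
samples = [{"text": "<|im_start|>user\nWrite a short story.<|im_end|>\n"
                    "<|im_start|>assistant\nOnce upon a time...<|im_end|>"}]
dataset = Dataset.from_list(samples).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=8192),
    remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="eva-qwen2.5-7b-sft",
        per_device_train_batch_size=1,  # a real 4x3090Ti run would also need
        gradient_accumulation_steps=8,  # optimizer sharding (DeepSpeed/FSDP)
        learning_rate=1e-5,             # placeholder value
        num_train_epochs=2,
        bf16=True,                      # matches the shipped bfloat16 weights
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```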
Input Output 
Input Format:
ChatML
Performance Tips:
The model appears to prefer lower temperatures (0.9 and below). Min-P sampling also seems to work now, as shown in the sketch below.
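
Since the card names ChatML as the input format, a short usage sketch may help. Qwen2.5 tokenizers ship a chat template that emits ChatML, and the sampling values below simply follow the tips above; this is an illustration, not an official snippet:

```python
# Illustrative inference with ChatML via the tokenizer's chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a creative storywriting assistant."},
    {"role": "user", "content": "Continue the scene: the lighthouse went dark."},
]
# apply_chat_template wraps the turns in <|im_start|>/<|im_end|> ChatML tags.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.8,  # per the tip above: 0.9 or lower
    min_p=0.05,       # min_p requires a reasonably recent transformers release
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```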
Release Notes 
Version:
0.1
Notes:
The dataset was deduped and cleaned relative to version 0.0, and the learning rate was adjusted. The resulting model seems more stable, and the 0.0 problems with handling short inputs and min_p sampling seem to be mostly gone. It will be retrained once more due to a training crash.
LLM Name: EVA Qwen2.5 7B V0.1
Repository: https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1
Base Model(s): Qwen/Qwen2.5-7B
Model Size: 7B
Required VRAM: 15.2 GB
Updated: 2025-02-22
Maintainer: EVA-UNIT-01
Model Type: qwen2
Instruction-Based: Yes
Model Files: 4.9 GB (1-of-4), 4.9 GB (2-of-4), 4.3 GB (3-of-4), 1.1 GB (4-of-4), 0.0 GB
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.45.1
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
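
The 15.2 GB VRAM figure is consistent with pure bfloat16 weights. A quick back-of-the-envelope check (parameter count approximated, decimal gigabytes as in the table above):

```python
# Sanity check on the table above: shard sizes vs. bf16 weight footprint.
shards_gb = [4.9, 4.9, 4.3, 1.1]  # the four safetensors shards listed
print(sum(shards_gb))             # 15.2 -> matches "Required VRAM"
n_params = 7.6e9                  # Qwen2.5-7B, approximate
print(n_params * 2 / 1e9)         # 2 bytes/param in bfloat16 -> ~15.2
```

Note that actual generation needs additional VRAM for the KV cache, especially near the 131072-token context limit.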

Best Alternatives to EVA Qwen2.5 7B V0.1

Model | Context / RAM | Downloads | Likes
Qwen2.5 7B Instruct 1M | 986K / 15.4 GB | 289038 | 236
Qwen2.5 7B RRP 1M | 986K / 15.2 GB | 294 | 4
Qwen2.5 7B CelestialHarmony 1M | 986K / 14.8 GB | 153 | 5
COCO 7B Instruct 1M | 986K / 15.2 GB | 105 | 9
Q2.5 Instruct 1M Harmony | 986K / 15.2 GB | 61 | 1
Impish QWEN 7B 1M | 986K / 15.2 GB | 70 | 1
Qwen2.5 7B DeepSeek R1 1M | 986K / 15.2 GB | 88 | 10
Qwen2.5 7B Sky R1 Mini | 986K / 15.2 GB | 25 | 0
Qwen2.5 7B Instruct 1M | 986K / 15.2 GB | 633 | 2
MwM 7B CoT Merge1 | 986K / 15.2 GB | 43 | 2
Note: a green score (e.g. "73.2") means that the listed model outperforms EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227