Einstein V4 Qwen 1.5 32B Adapter Checkpoints by Weyaxi



Einstein V4 Qwen 1.5 32B Adapter Checkpoints Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Einstein V4 Qwen 1.5 32B Adapter Checkpoints (Weyaxi/Einstein-v4-Qwen-1.5-32B-adapter-checkpoints)

Einstein V4 Qwen 1.5 32B Adapter Checkpoints Parameters and Internals

Model Type: instruct, finetune, chatml, gpt4, science, synthetic data
Training Details
Data Sources: allenai/ai2_arc, camel-ai/physics, camel-ai/chemistry, camel-ai/biology, camel-ai/math, metaeval/reclor, openbookqa, mandyyyyii/scibench, derek-thomas/ScienceQA, TIGER-Lab/ScienceEval, jondurbin/airoboros-3.2, LDJnr/Capybara, Cot-Alpaca-GPT4-From-OpenHermes-2.5, STEM-AI-mtl/Electrical-engineering, knowrohit07/saraswati-stem, sablo/oasst2_curated, glaiveai/glaive-code-assistant, lmsys/lmsys-chat-1m, TIGER-Lab/MathInstruct, bigbio/med_qa, meta-math/MetaMathQA-40K, piqa, scibench, sciq, Open-Orca/SlimOrca, migtissera/Synthia-v1.3
LLM Name: Einstein V4 Qwen 1.5 32B Adapter Checkpoints
Repository: 🤗 https://huggingface.co/Weyaxi/Einstein-v4-Qwen-1.5-32B-adapter-checkpoints
Base Model(s): Qwen/Qwen1.5-32B
Model Size: 32b
Required VRAM: 4.2 GB
Updated: 2025-06-01
Maintainer: Weyaxi
Model Files: 4.2 GB, 0.0 GB
Supported Languages: en
Model Architecture: AutoModelForCausalLM
License: other
Model Max Length: 32768
Is Biased: none
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: k_proj|up_proj|v_proj|q_proj|gate_proj|down_proj|o_proj
LoRA Alpha: 32
LoRA Dropout: 0.05
R Param: 64
Errors: replace
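The LoRA hyperparameters above (R Param = 64, LoRA Alpha = 32) determine how strongly the adapter's low-rank update is mixed into the frozen base weights: the rank-r update B·A is scaled by alpha/r, here 32/64 = 0.5. A minimal pure-Python sketch of that arithmetic, using hypothetical scalar stand-ins for the A and B matrices (no framework dependencies):

```python
def lora_scaling(alpha: int, r: int) -> float:
    """LoRA scales the low-rank update B @ A by alpha / r."""
    return alpha / r

def lora_forward(x: float, w: float, a: float, b: float,
                 alpha: int = 32, r: int = 64) -> float:
    """Toy 1-D LoRA forward pass: y = w*x + (alpha/r) * b * (a * x).

    `w` stands in for a frozen base weight (e.g. one of the targeted
    q_proj/k_proj/... projections); `a` and `b` stand in for the
    trainable rank-r A and B matrices.
    """
    base = w * x                                  # frozen base path
    delta = b * (a * x)                           # low-rank adapter path
    return base + lora_scaling(alpha, r) * delta  # scaled merge

print(lora_scaling(32, 64))  # 0.5
```

In practice, this checkpoint would typically be loaded on top of Qwen/Qwen1.5-32B with the 🤗 PEFT library's `PeftModel.from_pretrained`, which applies exactly this scaled update to each module listed under PEFT Target Modules.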



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227