Einstein V7 Qwen2 7B by Weyaxi



Einstein V7 Qwen2 7B Benchmarks

nn.n% — How the model compares to the reference models: Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o") or GPT-4 ("gpt4").

Einstein V7 Qwen2 7B Parameters and Internals

Model Type 
text-generation
Additional Notes 
The model is fully fine-tuned for 2 epochs (500 total steps).
Training Details 
Data Sources:
allenai/ai2_arc, camel-ai/physics, camel-ai/chemistry, camel-ai/biology, camel-ai/math, metaeval/reclor, openbookqa, mandyyyyii/scibench, derek-thomas/ScienceQA, TIGER-Lab/ScienceEval, jondurbin/airoboros-3.2, LDJnr/Capybara, Cot-Alpaca-GPT4-From-OpenHermes-2.5, STEM-AI-mtl/Electrical-engineering, knowrohit07/saraswati-stem, sablo/oasst2_curated, lmsys/lmsys-chat-1m, TIGER-Lab/MathInstruct, bigbio/med_qa, meta-math/MetaMathQA-40K, piqa, scibench, sciq, Open-Orca/SlimOrca, migtissera/Synthia-v1.3, allenai/WildChat, microsoft/orca-math-word-problems-200k, openchat/openchat_sharegpt4_dataset, teknium/GPTeacher-General-Instruct, m-a-p/CodeFeedback-Filtered-Instruction, totally-not-an-llm/EverythingLM-data-V3, HuggingFaceH4/no_robots, OpenAssistant/oasst_top1_2023-08-25, WizardLM/WizardLM_evol_instruct_70k, abacusai/SystemChat-1.1, H-D-T/Buzz-V1.2
Context Length:
8192
Hardware Used:
8xMI300X
Input Output 
Input Format:
ChatML prompt template
Accepted Modalities:
text
Output Format:
text
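The card states that inputs use the ChatML prompt template. A minimal sketch of building such a prompt as a plain string, assuming the standard ChatML special tokens (`<|im_start|>`, `<|im_end|>`); the system and user messages are illustrative placeholders:

```python
# Sketch of the ChatML prompt format the model card specifies.
# Assumes the standard ChatML delimiter tokens; in practice the
# tokenizer's chat template should be preferred if available.
def chatml_prompt(system: str, user: str) -> str:
    """Build a single-turn ChatML prompt ending with an open assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are a helpful science assistant.",
    "State Newton's second law.",
)
print(prompt)
```

The trailing open `<|im_start|>assistant\n` turn is what cues the model to generate its reply.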
LLM Name: Einstein V7 Qwen2 7B
Repository: 🤗 https://huggingface.co/Weyaxi/Einstein-v7-Qwen2-7B
Base Model(s): Qwen2 7B (Qwen/Qwen2-7B)
Model Size: 7B
Required VRAM: 15.2 GB
Updated: 2024-11-21
Maintainer: Weyaxi
Model Type: qwen2
Instruction-Based: Yes
Model Files: 4.9 GB (1-of-4), 4.9 GB (2-of-4), 4.3 GB (3-of-4), 1.1 GB (4-of-4), 0.0 GB
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: other
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.40.0.dev0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|end_of_text|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
Einstein V7 Qwen2 7B (Weyaxi/Einstein-v7-Qwen2-7B)
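The listed "Required VRAM: 15.2 GB" figure is consistent with storing the weights in bfloat16 (2 bytes per parameter). A back-of-the-envelope sketch, assuming a parameter count of roughly 7.6B inferred from the reported shard sizes (4.9 + 4.9 + 4.3 + 1.1 ≈ 15.2 GB); KV cache and activation memory are excluded:

```python
# Rough weight-memory estimate: parameters * bytes per parameter.
# 7.6e9 parameters is an assumption inferred from the shard sizes;
# bfloat16 uses 2 bytes per parameter.
def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory needed to hold the model weights, in GB (1e9 bytes)."""
    return n_params * bytes_per_param / 1e9

print(round(weight_memory_gb(7.6e9), 1))  # prints 15.2
```

Actual VRAM at inference time will be higher once the KV cache grows with context length.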

Best Alternatives to Einstein V7 Qwen2 7B

Best Alternatives                        Context / RAM     Downloads  Likes
Gte Qwen2 7B Instruct                    128K / 30.5 GB    37719      217
Qwen2 7B Instruct V0.1                   128K / 15.2 GB    40968      1
Qwen2 7B Instruct V0.8                   128K / 15.2 GB    40990      3
...nity Instruct 3M 0625 Qwen2 7B        128K / 15.2 GB    4920       8
Samantha Qwen 2 7B                       128K / 15.2 GB    4882       2
Cybertron V4 Qw7B MGS                    128K / 15.2 GB    1018       11
Rombos LLM V2.5 Qwen 7B                  128K / 15.2 GB    782        14
Meraj Mini                               128K / 15.2 GB    885        11
Qwen2 7B Instruct V0.4                   128K / 15.2 GB    41008      1
Qwen2 7B Instruct V0.5                   128K / 15.2 GB    41019      1
Note: green Score (e.g. "73.2") means that the model is better than Weyaxi/Einstein-v7-Qwen2-7B.

Rank the Einstein V7 Qwen2 7B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241110