Einstein V6 7B by Weyaxi


Model Card on HF 🤗: https://huggingface.co/Weyaxi/Einstein-v6-7B

Einstein V6 7B Benchmarks

Einstein V6 7B (Weyaxi/Einstein-v6-7B)

Einstein V6 7B Parameters and Internals

Model Type: text-generation
Additional Notes: The model was trained with the Axolotl framework; training was sponsored by sablo.ai.
Training Details
Data Sources: allenai/ai2_arc, camel-ai/physics, camel-ai/chemistry, camel-ai/biology, camel-ai/math, metaeval/reclor, openbookqa, mandyyyyii/scibench, derek-thomas/ScienceQA, TIGER-Lab/ScienceEval, jondurbin/airoboros-3.2, LDJnr/Capybara, Cot-Alpaca-GPT4-From-OpenHermes-2.5, STEM-AI-mtl/Electrical-engineering, knowrohit07/saraswati-stem, sablo/oasst2_curated, lmsys/lmsys-chat-1m, TIGER-Lab/MathInstruct, bigbio/med_qa, meta-math/MetaMathQA-40K, piqa, scibench, sciq, Open-Orca/SlimOrca, migtissera/Synthia-v1.3, allenai/WildChat, microsoft/orca-math-word-problems-200k, openchat/openchat_sharegpt4_dataset, teknium/GPTeacher-General-Instruct, m-a-p/CodeFeedback-Filtered-Instruction, totally-not-an-llm/EverythingLM-data-V3, HuggingFaceH4/no_robots, OpenAssistant/oasst_top1_2023-08-25, WizardLM/WizardLM_evol_instruct_70k
Context Length: 8192
Training Time: 2 epochs
Hardware Used: 8× RTX 3090, 1× RTX A6000
Input Output
Input Format: ChatML
Accepted Modalities: text
Output Format: text
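Since the model expects ChatML-formatted input, a prompt can be assembled as follows. This is a minimal sketch using the standard ChatML delimiters (`<|im_start|>` / `<|im_end|>`); the system message and `build_chatml_prompt` helper are illustrative, not part of the model card.

```python
# Minimal sketch: assembling a ChatML prompt for a ChatML-tuned model.
# The delimiter tokens follow the standard ChatML convention; the helper
# name and system message below are hypothetical examples.
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Explain entropy in one sentence.",
)
```

The trailing `<|im_start|>assistant\n` leaves the prompt open for the model to generate the assistant turn.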
Release Notes
Version: v6
Notes: Full fine-tuning of the alpindale/Mistral-7B-v0.2-hf model using updated datasets and configurations as described. Sponsored by sablo.ai.
LLM Name: Einstein V6 7B
Repository 🤗: https://huggingface.co/Weyaxi/Einstein-v6-7B
Base Model(s): alpindale/Mistral-7B-v0.2-hf
Model Size: 7B
Required VRAM: 14.4 GB
Updated: 2025-02-22
Maintainer: Weyaxi
Model Type: mistral
Instruction-Based: Yes
Model Files: 4.9 GB (1-of-3), 5.0 GB (2-of-3), 4.5 GB (3-of-3), 0.0 GB
Supported Languages: en
Model Architecture: MistralForCausalLM
License: other
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.38.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32002
Torch Data Type: bfloat16
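The listed 14.4 GB VRAM figure follows directly from storing the weights in bfloat16 (2 bytes per parameter). A rough back-of-the-envelope check, assuming a Mistral-7B-class parameter count of about 7.24 billion (the exact count is an assumption, not stated on this page):

```python
# Rough sketch: bfloat16 weight footprint of a ~7B-parameter model.
# bfloat16 stores each parameter in 2 bytes; 7.24e9 is an assumed
# parameter count for Mistral-7B-class models.
def bf16_weight_gb(n_params: float) -> float:
    return n_params * 2 / 1e9  # decimal GB, matching the sharded file sizes

print(round(bf16_weight_gb(7.24e9), 1))  # prints 14.5, close to the listed 14.4 GB
```

This covers weights only; KV cache and activations at the full 32768-token context require additional memory on top.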

Best Alternatives to Einstein V6 7B

Best Alternatives | Context / RAM | Downloads | Likes
...Nemo Instruct 2407 Abliterated | 1000K / 24.5 GB | 4620 | 11
SpydazWeb AI HumanAI RP | 512K / 14.4 GB | 12 | 1
SpydazWeb AI HumanAI 002 | 512K / 14.4 GB | 18 | 1
...daz Web AI ChatML 512K Project | 512K / 14.5 GB | 12 | 0
... Summarize 64K QLoRANET Merged | 128K / 4.1 GB | 12 | 0
...1 Summarize 64K LoRANET Merged | 128K / 14.4 GB | 11 | 0
Mistral 7B Instruct V0.2 | 32K / 14.4 GB | 3316095 | 2630
Mistral 7B Instruct V0.1 | 32K / 14.4 GB | 191630 | 1573
...ity Instruct 7M Gen Mistral 7B | 32K / 14.4 GB | 3750 | 5
...ty Instruct 3M 0625 Mistral 7B | 32K / 14.4 GB | 3706 | 3
Note: a green score (e.g. "73.2") indicates that the model outperforms Weyaxi/Einstein-v6-7B.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227