Mistral Ft Optimized 1218 by OpenPipe


Tags: Autotrain compatible · En · Endpoints compatible · Mistral · Region: us · Safetensors · Sharded · Tensorflow

Mistral Ft Optimized 1218 (OpenPipe/mistral-ft-optimized-1218)

Mistral Ft Optimized 1218 Parameters and Internals

Model Type: text generation
Use Cases:
  Areas: downstream fine-tuning
  Primary Use Cases: strong base for downstream tasks
Additional Notes: Built from Weyaxi and Q-bert model slices for improved base capabilities.
Supported Languages: en (full proficiency)
Training Details:
  Data Sources: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp, Q-bert/MetaMath-Cybertron-Starling
  Methodology: Created with Mergekit using the 'slerp' merge method at bfloat16 precision. Optimized for fine-tuning.
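To give a sense of what the 'slerp' merge method does, here is a minimal spherical-linear-interpolation sketch over flattened weight vectors in pure Python. This is an illustration of the general technique, not Mergekit's actual implementation, which operates tensor-by-tensor and supports per-layer interpolation schedules:

```python
import math

def slerp(t, v0, v1):
    # Spherical linear interpolation: blend two weight vectors along the
    # great-circle arc between them instead of a straight line, preserving
    # the "magnitude" characteristics of the parents better than plain lerp.
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    cos_omega = max(-1.0, min(1.0, dot / (norm0 * norm1)))
    omega = math.acos(cos_omega)
    if omega < 1e-6:
        # Near-parallel vectors: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

At t = 0 the result is the first parent, at t = 1 the second; intermediate t values trace the arc between them, which is how a merge like this one combines its two source models.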
Release Notes:
  Version: 12/27/2023
  Date: 12/27/2023
  Notes: An updated version with similar performance and a more permissive license has since been released; it is recommended over this model for most users.
LLM Name: Mistral Ft Optimized 1218
Repository: https://huggingface.co/OpenPipe/mistral-ft-optimized-1218
Model Size: 7B
Required VRAM: 14.4 GB
Updated: 2025-02-10
Maintainer: OpenPipe
Model Type: mistral
Model Files: 9.9 GB (1-of-2), 4.5 GB (2-of-2)
Supported Languages: en
Model Architecture: MistralForCausalLM
License: cc-by-nc-4.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: bfloat16
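The 14.4 GB VRAM figure follows directly from the parameter count and data type: bfloat16 stores each parameter in 2 bytes, so a roughly 7.2-billion-parameter model needs about 14.4 GB for the weights alone (the ~7.2e9 count here is an approximation for illustration, not taken from the model card):

```python
def weight_size_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate in-VRAM / on-disk size of the raw weights, in decimal GB."""
    return n_params * bytes_per_param / 1e9

# bfloat16 = 2 bytes per parameter; ~7.2e9 params is an assumed round figure
print(f"{weight_size_gb(7.2e9, 2):.1f} GB")  # -> 14.4 GB
```

Note this covers only the weights; activations, KV cache (especially at the full 32768-token context), and framework overhead add to the real requirement.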

Quantized Models of the Mistral Ft Optimized 1218

Model | Likes | Downloads | VRAM
Mistral Ft Optimized 1218 GGUF | 0 | 103 | 2 GB
Mistral Ft Optimized 1218 GGUF | 22 | 165 | 3 GB
Mistral Ft Optimized 1218 GPTQ | 4 | 22 | 4 GB
Mistral Ft Optimized 1218 AWQ | 0 | 8 | 4 GB
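The quantized sizes above can be sanity-checked with the same kind of back-of-the-envelope arithmetic: file size is roughly parameters × bits-per-weight / 8. A sketch, where the bit widths and the ~7.2e9 parameter count are illustrative assumptions rather than values read from the actual quant configs:

```python
def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    # Total bits across all weights, converted to bytes, then decimal GB.
    # Real GGUF/GPTQ/AWQ files mix bit widths per tensor and add metadata,
    # so treat this as a rough lower bound.
    return n_params * bits_per_weight / 8 / 1e9

for name, bits in [("~2-bit quant", 2.5), ("~4-bit quant", 4.5)]:
    print(f"{name}: {quantized_size_gb(7.2e9, bits):.1f} GB")
```

The results land near the 2-4 GB range shown in the table, which is why 4-bit quants of 7B models fit comfortably on consumer GPUs.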

Best Alternatives to Mistral Ft Optimized 1218

Best Alternatives | Context / RAM | Downloads | Likes
...Nemo Instruct 2407 Abliterated | 1000K / 24.5 GB | 5501 | 10
MegaBeam Mistral 7B 512K | 512K / 14.4 GB | 6702 | 50
SpydazWeb AI HumanAI RP | 512K / 14.4 GB | 9 | 1
SpydazWeb AI HumanAI 002 | 512K / 14.4 GB | 18 | 1
...daz Web AI ChatML 512K Project | 512K / 14.5 GB | 12 | 0
MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 6623 | 16
Hebrew Mistral 7B 200K | 256K / 30 GB | 9523 | 15
Astral 256K 7B | 250K / 14.4 GB | 7 | 0
Astral 256K 7B V2 | 250K / 14.4 GB | 6 | 0
Test001 | 128K / 14.5 GB | 9 | 0
Note: a green score (e.g. "73.2") means the model outperforms OpenPipe/mistral-ft-optimized-1218.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227