MSH Lite 7B V1 Bielik V2.3 Instruct Llama Prune by meditsolutions


Tags: conversational, en, pl, instruct, medit-lite, mistral, model-pruning, region:us, safetensors, sharded, tensorflow

MSH Lite 7B V1 Bielik V2.3 Instruct Llama Prune Benchmarks

Scores (nn.n%) show how meditsolutions/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

MSH Lite 7B V1 Bielik V2.3 Instruct Llama Prune Parameters and Internals

LLM Name: MSH Lite 7B V1 Bielik V2.3 Instruct Llama Prune
Repository 🤗: https://huggingface.co/meditsolutions/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune
Base Model(s): meditsolutions/MSH-v1-Bielik-v2.3-Instruct-MedIT-merge, speakleash/Bielik-11B-v2.3-Instruct
Model Size: 11b
Required VRAM: 15.4 GB
Updated: 2024-12-22
Maintainer: meditsolutions
Model Type: mistral
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 5.0 GB (3-of-4), 0.4 GB (4-of-4)
Supported Languages: pl, en
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.46.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32128
Torch Data Type: bfloat16
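
The listing above contains everything needed to load the checkpoint with the Hugging Face transformers library (MistralForCausalLM architecture, LlamaTokenizer, bfloat16 weights, 32K context, sharded safetensors). Below is a minimal loading sketch, assuming transformers >= 4.46 with accelerate installed and roughly 16 GB of free GPU memory; the chat-template call is an assumption based on the model being instruction-tuned and is not taken from the model card.

```python
# Minimal loading sketch for meditsolutions/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune.
# Assumes transformers >= 4.46, accelerate, and a GPU with ~16 GB free memory
# (the sharded safetensors total 15.4 GB in bfloat16).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meditsolutions/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to the listed LlamaTokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the listed Torch data type
    device_map="auto",            # spreads the 4 shards across available devices
)

# The model is instruction-tuned (pl/en). Using the tokenizer's chat template is an
# assumption; fall back to a plain prompt string if the repo does not ship one.
messages = [{"role": "user", "content": "Wyjaśnij krótko, czym jest pruning modeli językowych."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```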

Best Alternatives to MSH Lite 7B V1 Bielik V2.3 Instruct Llama Prune

Best Alternatives | Context / RAM | Downloads / Likes
...elik V2.3 Instruct MedIT Merge | 32K / 22.3 GB | 19121
Mistral 11B Instruct V0.1 | 32K / 42.9 GB | 810
...Alpha Mistral 7B Instruct V0.1 | 32K / 14.5 GB | 241
...Mistral 7B Instruct V0.2 Slerp | 32K / 14.4 GB | 281
...Mistral 7B Instruct V0.2 Slerp | 32K / 14.4 GB | 302
Mistral 11B Instruct V0.2 | 32K / 21.4 GB | 63
Bielik 11B V2.2 Instruct FP8 | 32K / 11.4 GB | 4933
Bielik 11B V2.2 Instruct W8A8 | 32K / 11.5 GB | 3093
... 11B V2.2 Instruct Quanto 8bit | 32K / 12 GB | 243
... 11B V2.2 Instruct EXL2 4.5bit | 32K / 6.5 GB | 333
Note: a green score (e.g. "73.2") means the model is better than meditsolutions/MSH-Lite-7B-v1-Bielik-v2.3-Instruct-Llama-Prune.

Rank the MSH Lite 7B V1 Bielik V2.3 Instruct Llama Prune Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v20241217