MSH V1 Bielik V2.3 Instruct MedIT Merge by meditsolutions


Base model (finetune): speakleash... · Base model: speakleash/bielik-1...
Tags: Conversational · En · Instruct · Medit-merge · Mistral · Pl · Region:us · Safetensors · Sharded · Tensorflow

MSH V1 Bielik V2.3 Instruct MedIT Merge Benchmarks

Benchmark scores (nn.n%) indicate how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
MSH V1 Bielik V2.3 Instruct MedIT Merge (meditsolutions/MSH-v1-Bielik-v2.3-Instruct-MedIT-merge)

MSH V1 Bielik V2.3 Instruct MedIT Merge Parameters and Internals

LLM Name: MSH V1 Bielik V2.3 Instruct MedIT Merge
Repository: https://huggingface.co/meditsolutions/MSH-v1-Bielik-v2.3-Instruct-MedIT-merge
Base Model(s): speakleash/Bielik-11B-v2.3-Instruct
Model Size: 11B
Required VRAM: 22.3 GB
Updated: 2024-12-22
Maintainer: meditsolutions
Model Type: mistral
Instruction-Based: Yes
Model Files: 4.9 GB (1-of-5), 5.0 GB (2-of-5), 4.9 GB (3-of-5), 4.9 GB (4-of-5), 2.6 GB (5-of-5)
Supported Languages: pl, en
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.46.0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32128
Torch Data Type: bfloat16
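The listed "Required VRAM" of 22.3 GB follows directly from the parameter count and data type: a bfloat16 weight takes 2 bytes, so an ~11B-parameter model needs roughly params × 2 bytes for the weights alone. A minimal sketch of that arithmetic (the exact 11.15e9 parameter count is an assumption inferred from the 22.3 GB figure, not stated in the listing):

```python
# Rough size of the model weights alone, assuming bfloat16 storage
# (2 bytes per parameter, as listed in "Torch Data Type" above).
def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight footprint in decimal GB."""
    return num_params * bytes_per_param / 1e9

# Shard sizes from the "Model Files" entry above.
shards_gb = [4.9, 5.0, 4.9, 4.9, 2.6]

print(round(weight_memory_gb(11.15e9), 1))  # ~22.3 GB at bf16
print(round(sum(shards_gb), 1))             # shard sizes sum to the same 22.3 GB
```

Note this estimate covers weights only; the KV cache and activations add further memory on top, especially at the full 32768-token context.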

Best Alternatives to MSH V1 Bielik V2.3 Instruct MedIT Merge

Best Alternatives | Context / RAM | Downloads | Likes
...elik V2.3 Instruct Llama Prune | 32K / 15.4 GB | 2145 | 0
Mistral 11B Instruct V0.1 | 32K / 42.9 GB | 81 | 0
...Alpha Mistral 7B Instruct V0.1 | 32K / 14.5 GB | 24 | 1
...Mistral 7B Instruct V0.2 Slerp | 32K / 14.4 GB | 28 | 1
...Mistral 7B Instruct V0.2 Slerp | 32K / 14.4 GB | 30 | 2
Mistral 11B Instruct V0.2 | 32K / 21.4 GB | 6 | 3
Bielik 11B V2.2 Instruct FP8 | 32K / 11.4 GB | 493 | 3
Bielik 11B V2.2 Instruct W8A8 | 32K / 11.5 GB | 309 | 3
... 11B V2.2 Instruct Quanto 8bit | 32K / 12 GB | 24 | 3
... 11B V2.2 Instruct EXL2 4.5bit | 32K / 6.5 GB | 33 | 3
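The quantized Bielik variants in the list above illustrate the usual size rule of thumb: weight size ≈ parameters × bits-per-weight / 8. A hedged sketch of that calculation (the ~11.15B parameter count is an assumption backed out of the 22.3 GB bf16 footprint; real checkpoints carry some extra overhead, which is why the listed FP8 and EXL2 files are slightly larger than the estimate):

```python
# Approximate weight sizes for an ~11B-parameter model at the precisions
# seen in the alternatives list: bf16 (16-bit), FP8/W8A8 (8-bit), EXL2 4.5-bit.
def quantized_size_gb(num_params: float, bits_per_param: float) -> float:
    return num_params * bits_per_param / 8 / 1e9

PARAMS = 11.15e9  # assumed count, inferred from the 22.3 GB bf16 footprint

for name, bits in [("bfloat16", 16), ("FP8 / W8A8", 8), ("EXL2 4.5bit", 4.5)]:
    print(f"{name}: ~{quantized_size_gb(PARAMS, bits):.1f} GB")
```

The estimates (~22.3, ~11.2, and ~6.3 GB) line up with the 22.3, 11.4–11.5, and 6.5 GB figures in the tables above.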



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217