Hebrew Mistral 7B 200K by yam-peleg


Tags: Autotrain compatible, En, Endpoints compatible, He, Mistral, Region: us, Safetensors, Sharded, Tensorflow

Hebrew Mistral 7B 200K Benchmarks

Hebrew Mistral 7B 200K (yam-peleg/Hebrew-Mistral-7B-200K)

Hebrew Mistral 7B 200K Parameters and Internals

Model Type: causal language model
Additional Notes: The model focuses on Hebrew language understanding and generation. It has no built-in moderation mechanisms.
Supported Languages: en (full proficiency), he (full proficiency)
Training Details:
Context Length: 200,000
Input Output:
Input Format: Text inputs, tokenized with AutoTokenizer from the Transformers library.
Accepted Modalities: text
Output Format: Generated text, decoded from the model outputs (a minimal usage sketch follows below).
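
A minimal sketch of the input/output flow described above, assuming the transformers, torch, and accelerate packages are installed; the dtype, device placement, sampling settings, and the Hebrew prompt are illustrative choices rather than values from this card:

# Load the tokenizer and model, then generate a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yam-peleg/Hebrew-Mistral-7B-200K"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: the checkpoint ships in float32; bf16 roughly halves memory
    device_map="auto",           # requires the accelerate package
)

prompt = "שלום, ספר לי על ירושלים."  # Hebrew: "Hello, tell me about Jerusalem."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Greedy decoding (do_sample=False) works just as well; the sampling settings here are only an example.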
LLM Name: Hebrew Mistral 7B 200K
Repository: https://huggingface.co/yam-peleg/Hebrew-Mistral-7B-200K
Model Size: 7B
Required VRAM: 30 GB
Updated: 2024-12-21
Maintainer: yam-peleg
Model Type: mistral
Model Files: 4.9 GB (1 of 7), 5.0 GB (2 of 7), 4.8 GB (3 of 7), 5.0 GB (4 of 7), 5.0 GB (5 of 7), 4.2 GB (6 of 7), 1.1 GB (7 of 7)
Supported Languages: en, he
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 262144
Model Max Length: 262144
Transformers Version: 4.40.1
Tokenizer Class: LlamaTokenizer
Padding Token: [PAD]
Vocabulary Size: 64001
Torch Data Type: float32
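
Several of the values listed above (architecture, maximum context, vocabulary size, tokenizer class, padding token) can be checked against the Hub configuration without downloading the weight shards. The following sketch assumes only the transformers package; the values in the comments are the ones from this card:

from transformers import AutoConfig, AutoTokenizer

model_id = "yam-peleg/Hebrew-Mistral-7B-200K"

config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.architectures)            # listed above as MistralForCausalLM
print(config.max_position_embeddings)  # listed above as 262144
print(config.vocab_size)               # listed above as 64001
print(type(tokenizer).__name__)        # listed above as LlamaTokenizer (a fast variant may load instead)
print(tokenizer.pad_token)             # listed above as [PAD]

Note that the 30 GB VRAM figure corresponds to the float32 shards listed above; loading in bfloat16, as in the earlier sketch, roughly halves that footprint.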

Best Alternatives to Hebrew Mistral 7B 200K

Best Alternatives                      Context / RAM      Downloads   Likes
...Nemo Instruct 2407 Abliterated      1000K / 24.5 GB    2699        9
MegaBeam Mistral 7B 512K               512K / 14.4 GB     7156        46
SpydazWeb AI HumanAI RP                512K / 14.4 GB     33          1
SpydazWeb AI HumanAI 002               512K / 14.4 GB     20          1
...daz Web AI ChatML 512K Project      512K / 14.5 GB     12          0
MegaBeam Mistral 7B 300K               282K / 14.4 GB     3308        15
Astral 256K 7B V2                      250K / 14.4 GB     19          0
Astral 256K 7B                         250K / 14.4 GB     16          0
Boptruth Agatha 7B                     128K / 14.4 GB     498         0
Test001                                128K / 14.5 GB     9           0

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217