Zephyr Python Ru Merged by MexIvanov


Tags: Arxiv:2409.09353 · Autotrain compatible · Base model (finetune): HuggingFaceH4/zephyr-7b-beta · Conversational · Datasets: MexIvanov/CodeExercise-Python-27k-ru, MexIvanov/Vezora-Tested-22k-Python-Alpaca-ru, zelkame/ru-stackoverflow-py · En · Endpoints compatible · Mistral · Region: us · Ru · Safetensors · Sharded · Tensorflow

Zephyr Python Ru Merged Benchmarks

Zephyr Python Ru Merged (MexIvanov/zephyr-python-ru-merged)

Zephyr Python Ru Merged Parameters and Internals

Model Type 
Base model
Use Cases 
Areas:
research
Primary Use Cases:
Instruction-based Python coding, with instructions written in English or Russian
Considerations:
This adapter model is intended primarily (but not exclusively) for research use. It does not have any moderation mechanisms.
Additional Notes 
Users should be aware of the risks, biases, and limitations of the model.
Supported Languages 
ru (NLP), en (NLP)
Training Details 
Data Sources:
MexIvanov/Vezora-Tested-22k-Python-Alpaca-ru, MexIvanov/CodeExercise-Python-27k-ru, zelkame/ru-stackoverflow-py
Methodology:
Created by merging the LoRA (PEFT) adapter model MexIvanov/zephyr-python-ru, trained on a mix of publicly available data and machine-translated synthetic Python coding datasets, into the base model.
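Merging a LoRA adapter folds its low-rank update back into the frozen base weights, after which the adapter can be discarded. The repository does not publish its merge script; a minimal numeric sketch of the underlying arithmetic, with hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 8, 2, 4   # hypothetical sizes; real LoRA ranks vary
W = rng.normal(size=(d_out, d_in))   # frozen base weight matrix
A = rng.normal(size=(r, d_in))       # LoRA down-projection
B = rng.normal(size=(d_out, r))      # LoRA up-projection

# Merge: the low-rank update B @ A is scaled by alpha/r and added into W.
W_merged = W + (alpha / r) * (B @ A)

# The merged weight reproduces the base-plus-adapter forward pass exactly.
x = rng.normal(size=(d_in,))
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * (B @ (A @ x)))
```

In practice this is typically done with PEFT's `merge_and_unload()`, which applies the same per-layer arithmetic and returns a plain base-architecture model, which is presumably how this merged checkpoint was produced.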
LLM Name: Zephyr Python Ru Merged
Repository: 🤗 https://huggingface.co/MexIvanov/zephyr-python-ru-merged
Base Model(s): Zephyr 7B Beta (HuggingFaceH4/zephyr-7b-beta)
Model Size: 7B
Required VRAM: 14.4 GB
Updated: 2025-04-24
Maintainer: MexIvanov
Model Type: mistral
Model Files: 1.9 GB (1-of-8), 1.9 GB (2-of-8), 2.0 GB (3-of-8), 1.9 GB (4-of-8), 2.0 GB (5-of-8), 1.9 GB (6-of-8), 2.0 GB (7-of-8), 0.8 GB (8-of-8)
Supported Languages: en, ru
Model Architecture: MistralForCausalLM
License: MIT
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.1
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: float16
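Since the model is a merge of an adapter into HuggingFaceH4/zephyr-7b-beta, prompts presumably follow Zephyr's <|system|>/<|user|>/<|assistant|> chat layout, with </s> as the end-of-turn (and padding) token noted above. A sketch of assembling such a prompt by hand, without downloading the 14.4 GB of weights (the layout is an assumption inherited from the base model; with the real tokenizer you would use `tokenizer.apply_chat_template` instead):

```python
EOS = "</s>"  # also the padding token, per the spec above

def build_zephyr_prompt(system: str, user: str) -> str:
    """Assemble a prompt in the Zephyr chat layout, assumed to be
    inherited unchanged from HuggingFaceH4/zephyr-7b-beta."""
    return (
        f"<|system|>\n{system}{EOS}\n"
        f"<|user|>\n{user}{EOS}\n"
        f"<|assistant|>\n"
    )

prompt = build_zephyr_prompt(
    "You are a helpful Python coding assistant.",
    # Russian instructions are a stated primary use case:
    "Напиши функцию, которая переворачивает строку.",
)
print(prompt)
```

For actual inference, the spec above suggests loading with transformers >= 4.36.1, e.g. `AutoModelForCausalLM.from_pretrained("MexIvanov/zephyr-python-ru-merged", torch_dtype=torch.float16)`, which at float16 needs roughly the listed 14.4 GB of VRAM.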

Best Alternatives to Zephyr Python Ru Merged

Best Alternatives | Context / RAM | Downloads | Likes
...Nemo Instruct 2407 Abliterated | 1000K / 24.5 GB | 2183 | 16
MegaBeam Mistral 7B 512K | 512K / 14.4 GB | 2173 | 50
SpydazWeb AI HumanAI RP | 512K / 14.4 GB | 0 | 1
SpydazWeb AI HumanAI 002 | 512K / 14.4 GB | 18 | 1
...daz Web AI ChatML 512K Project | 512K / 14.5 GB | 12 | 0
MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 3779 | 16
MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 1802 | 16
Hebrew Mistral 7B 200K | 256K / 30 GB | 14402 | 15
Astral 256K 7B V2 | 250K / 14.4 GB | 22 | 0
Astral 256K 7B | 250K / 14.4 GB | 20 | 0
Note: green Score (e.g. "73.2") means that the model is better than MexIvanov/zephyr-python-ru-merged.

Rank the Zephyr Python Ru Merged Capabilities

🆘 Have you tried this model? Rate its performance. This feedback will help the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227