Zephyr Python Ru by MexIvanov


Tags: Arxiv:2409.09353 · Adapter · Base model (finetune): HuggingFaceH4/zephyr-7b-beta · Conversational · Datasets: MexIvanov/CodeExercise-Python-27k-ru, MexIvanov/Vezora-Tested-22k-Python-Alpaca-ru, zelkame/ru-stackoverflow-py · En · Finetuned · LoRA · Region: us · Ru · Safetensors

Zephyr Python Ru Benchmarks

Zephyr Python Ru (MexIvanov/zephyr-python-ru)

Zephyr Python Ru Parameters and Internals

Model Type 
LoRA adapter (PEFT)
Use Cases 
Primary Use Cases:
Instruction-based coding in Python, based on instructions written in English or Russian
Limitations:
The model is not aligned to human preferences for safety and has no moderation mechanisms. It was trained on code-based instructions, so it may produce problematic outputs without filtering.
Considerations:
Users should be aware of risks, biases, and limitations of the model.
Additional Notes 
This adapter model was trained using `bitsandbytes` quantization config: 4-bit load, nf4 quant type, float16 compute dtype.
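The quantization setup described in the note above can be sketched with the `BitsAndBytesConfig` class from `transformers`. Only the three quantization settings come from the note; everything else here is an illustrative assumption.

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit loading config matching the note above:
# 4-bit load, nf4 quant type, float16 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
```

Such a config would typically be passed as `quantization_config=` when loading the base model that the adapter is applied to.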
Supported Languages 
ru (high), en (high), Python (high)
Training Details 
Data Sources:
zelkame/ru-stackoverflow-py, MexIvanov/Vezora-Tested-22k-Python-Alpaca-ru, MexIvanov/CodeExercise-Python-27k-ru
Methodology:
LoRA (PEFT) fine-tuning on top of the 4-bit-quantized base model, using instruction-style Python coding data in English and Russian.
Model Architecture:
A LoRA (Peft) adapter model trained on a mix of publicly available data and machine-translated synthetic python coding datasets.
Responsible AI Considerations 
Mitigation Strategies:
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Input Output 
Input Format:
<|system|>
{system}</s>
<|user|>
{prompt}</s>
<|assistant|>
Accepted Modalities:
text
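The template above can be assembled with a small helper. `build_prompt` is a hypothetical name, and the `</s>` turn terminator follows the standard Zephyr chat format (it also appears as the padding token in the table below).

```python
def build_prompt(user_message: str, system_message: str = "") -> str:
    """Assemble a Zephyr-style chat prompt (assumed format)."""
    return (
        f"<|system|>\n{system_message}</s>\n"
        f"<|user|>\n{user_message}</s>\n"
        f"<|assistant|>\n"
    )
```

The resulting string is fed to the model as-is; generation then continues after the `<|assistant|>` marker.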
LLM Name: Zephyr Python Ru
Repository 🤗: https://huggingface.co/MexIvanov/zephyr-python-ru
Base Model(s): Zephyr 7B Beta (HuggingFaceH4/zephyr-7b-beta)
Model Size: 7B
Required VRAM: 0.1 GB
Updated: 2025-02-22
Maintainer: MexIvanov
Model Files: 0.1 GB, 0.1 GB
Supported Languages: en, ru
Model Architecture: Adapter
License: mit
Model Max Length: 1024
Is Biased: none
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: v_proj|q_proj
LoRA Alpha: 16
LoRA Dropout: 0.05
R Param: 64
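Assuming the PEFT hyperparameters in the table map one-to-one onto `peft.LoraConfig`, the adapter configuration can be sketched as follows. The `task_type` is an assumption (causal LM is the usual choice for a Zephyr-style chat model); the other values come from the table.

```python
from peft import LoraConfig

# Values taken from the table above; task_type is assumed.
lora_config = LoraConfig(
    r=64,                                 # R Param
    lora_alpha=16,                        # LoRA Alpha
    lora_dropout=0.05,                    # LoRA Dropout
    target_modules=["q_proj", "v_proj"],  # PEFT Target Modules
    bias="none",                          # Is Biased: none
    task_type="CAUSAL_LM",
)
```

With alpha 16 and r 64, the adapter updates are scaled by alpha/r = 0.25 relative to the base weights, which matches the small 0.1 GB adapter footprint listed above.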

Best Alternatives to Zephyr Python Ru

Best Alternatives | Context / RAM | Downloads | Likes
Qwen Megumin | 0K / 0.1 GB | 13 | 0
...s 25 Mistral 7B Irca DPO Pairs | 0K / 0.1 GB | 5 | 0
Qwen1.5 7B Chat Sa V0.1 | 0K / 0 GB | 9 | 0
Zephyr 7B Ipo 0K 15K I1 | 0K / 0.7 GB | 9 | 0
Deepthink Reasoning Adapter | 0K / 0.2 GB | 27 | 8
Deepseek Llm 7B Chat Sa V0.1 | 0K / 0 GB | 5 | 0
... Days Of Sodom LoRA Mistral 7B | 0K / 0.2 GB | 5 | 0
Mistral 7B Instruct Sa V0.1 | 0K / 0 GB | 6 | 0
CodeAstra 7B | 0K / 0 GB | 639 | 10
...eze Embed Tokens Q V Proj Lora | 0K / 0.1 GB | 4 | 1
Note: green Score (e.g. "73.2") means that the model is better than MexIvanov/zephyr-python-ru.

Rank the Zephyr Python Ru Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 43470 in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227