Luca MN Bf16 by rAIfle


Tags: 4bit, Autotrain compatible, Base model:finetune:unsloth/mi..., Base model:unsloth/mistral-nem..., Conversational, Endpoints compatible, Mistral, Quantized, Region:us, Safetensors, Sft, Sharded, Tensorflow, Trl, Unsloth
Model Card on HF 🤗: https://huggingface.co/rAIfle/Luca-MN-bf16


Luca MN Bf16 Parameters and Internals

Training Details 
Data Sources:
some RP data, c2-data, jondurbin/gutenberg-dpo-v0.1
Methodology:
A high-r LoRA pass over Nemo-Base for 2 epochs on some RP data, then a low-r pass for 0.5 epochs on the c2 data, then 3 epochs of DPO using jondurbin/gutenberg-dpo-v0.1.
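The three-stage recipe above can be summarized as a configuration sketch. The concrete rank values below are illustrative assumptions; the card only says "high-r" and "low-r", not specific numbers.

```python
# Illustrative summary of the three-stage recipe described above.
# Rank values are hypothetical placeholders ("high-r" / "low-r" in the card).
STAGES = [
    {"stage": "sft-1", "method": "LoRA", "rank": "high (e.g. 256)",
     "epochs": 2.0, "data": "RP data"},
    {"stage": "sft-2", "method": "LoRA", "rank": "low (e.g. 32)",
     "epochs": 0.5, "data": "c2-data"},
    {"stage": "dpo", "method": "DPO", "rank": None,
     "epochs": 3.0, "data": "jondurbin/gutenberg-dpo-v0.1"},
]

total_epochs = sum(s["epochs"] for s in STAGES)
print(f"{len(STAGES)} stages, {total_epochs} epochs total")  # 3 stages, 5.5 epochs total
```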
Input Output 
Input Format:
Use the Mistral V3-Tekken context and instruct templates.
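As a rough sketch of what a V3-Tekken-style instruct prompt looks like, the helper below builds one from user/assistant turns. The exact spacing and system-prompt placement are assumptions; prefer your frontend's built-in Mistral V3-Tekken template when available.

```python
# Minimal sketch of a Mistral V3-Tekken style instruct prompt.
# Spacing and system-prompt handling are assumptions, not the official spec.
BOS = "<s>"
EOS = "</s>"

def build_prompt(turns, system=""):
    """turns: list of (user, assistant) pairs; the final assistant may be None
    to leave the prompt open for generation."""
    prompt = BOS
    first = True
    for user, assistant in turns:
        # Assumed convention: prepend the system prompt to the first user turn.
        content = f"{system}\n\n{user}" if (system and first) else user
        first = False
        prompt += f"[INST]{content}[/INST]"
        if assistant is not None:
            prompt += assistant + EOS
    return prompt

print(build_prompt([("Hello!", None)]))  # <s>[INST]Hello![/INST]
```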
Performance Tips:
A temperature of about 1.25 seems to be the sweet spot, combined with either MinP at 0.05 or TopP at 0.9. Add DRY, smoothing, etc. to taste.
LLM Name: Luca MN Bf16
Repository 🤗: https://huggingface.co/rAIfle/Luca-MN-bf16
Base Model(s): ...istral Nemo Base 2407 Bnb 4bit (unsloth/Mistral-Nemo-Base-2407-bnb-4bit)
Model Size: 12.2b
Required VRAM: 24.5 GB
Updated: 2024-12-22
Maintainer: rAIfle
Model Type: mistral
Model Files: 4.9 GB (1-of-5), 4.9 GB (2-of-5), 4.9 GB (3-of-5), 4.9 GB (4-of-5), 4.9 GB (5-of-5)
Quantization Type: 4bit
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.44.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <pad>
Vocabulary Size: 131072
Torch Data Type: bfloat16

Best Alternatives to Luca MN Bf16

| Best Alternatives | Context / RAM | Downloads / Likes |
|---|---|---|
| NaturalLM | 1000K / 24.5 GB | 1956 |
| NaturalLM | 1000K / 24.5 GB | 1636 |
| ...d 1.0 Nemo Base 2407 Ita 16bit | 1000K / 24.5 GB | 6300 |
| Nemo Carpmuscle V0.1 | 1000K / 24.5 GB | 241 |
| ... Mistral Nemo Base 2407 Sft V1 | 1000K / 24.5 GB | 491 |
| ...all MathMistral Nemo Base 2407 | 1000K / 24.5 GB | 231 |
| Violet Twilight V0.2 | 1000K / 24.5 GB | 110619 |
| ...ish Mistral Nemo Instruct 2407 | 1000K / 24.5 GB | 772 |
| Mistral Nemo Kurdish Instruct | 1000K / 24.5 GB | 2582 |
| Mistral Nemo Kurdish | 1000K / 24.5 GB | 632 |


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217