GEITje 7B Ultra Sft by BramVanroy


  Arxiv:2412.04092   Alignment-handbook   Autotrain compatible   Base model:finetune:Rijgersberg/GEITje-7B   Base model:Rijgersberg/GEITje-7B   Conversational   Dataset:BramVanroy/alpaca-cleaned-dutch   Dataset:BramVanroy/dolly-15k-dutch   Dataset:BramVanroy/no_robots_dutch   Dataset:BramVanroy/stackoverflow-chat-dutch   Dataset:BramVanroy/ultrachat_200k_dutch   Endpoints compatible   Geitje   Mistral   Nl   Region:us   Safetensors   Sft   Sharded   Tensorflow   Trl

GEITje 7B Ultra Sft Benchmarks

Score (nn.n%) — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
GEITje 7B Ultra Sft (BramVanroy/GEITje-7B-ultra-sft)

GEITje 7B Ultra Sft Parameters and Internals

Model Type 
text-generation, conversational
Use Cases 
Areas:
research
Limitations:
Cannot be used for commercial purposes (CC BY-NC 4.0 license); not specifically aligned through reinforcement learning techniques (SFT only)
Additional Notes 
Training uses synthetic datasets generated with GPT-3.5-turbo and GPT-4-turbo, covering both multi-turn and single-turn conversations as well as code.
Supported Languages 
nl (fluent)
Training Details 
Data Sources:
BramVanroy/ultrachat_200k_dutch, BramVanroy/stackoverflow-chat-dutch, BramVanroy/alpaca-cleaned-dutch, BramVanroy/dolly-15k-dutch, BramVanroy/no_robots_dutch
Data Volume:
240M tokens
Methodology:
Supervised fine-tuning (SFT) following the alignment techniques described in the Hugging Face alignment handbook, using synthetic data
Context Length:
8192
Training Time:
around 2.5 hours
Hardware Used:
Two nodes of four A100 80GB GPUs
Model Architecture:
Based on Mistral 7B
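
The tags above list TRL and the alignment-handbook, so a minimal SFT sketch under those assumptions is shown below. The hyperparameters, dataset split name, and trainer arguments are illustrative assumptions, not the exact recipe used for this model.

# Minimal SFT sketch assuming the TRL SFTTrainer API (argument names vary by TRL
# version); hyperparameters are illustrative, not the exact GEITje-7B-ultra-sft recipe.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

base_model = "Rijgersberg/GEITje-7B"  # base model listed on this page

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype="bfloat16")

# One of the Dutch data sources listed above; the split name is an assumption,
# check the dataset card for the actual split names.
train_dataset = load_dataset("BramVanroy/ultrachat_200k_dutch", split="train_sft")

config = SFTConfig(
    output_dir="geitje-7b-ultra-sft",
    max_seq_length=8192,             # training context length reported on the card
    bf16=True,
    per_device_train_batch_size=1,   # illustrative; the card reports 2 nodes of 4x A100 80GB
    gradient_accumulation_steps=16,  # illustrative
    num_train_epochs=1,              # illustrative
)

trainer = SFTTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,             # newer TRL versions use processing_class= instead
)
trainer.train()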
Input Output 
Accepted Modalities:
text
Output Format:
text
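
As noted above, the model accepts and produces text in a conversational format. Below is a minimal generation sketch with Hugging Face transformers; the chat-template call and generation settings are assumptions for illustration, so check the repository's model card for the recommended usage.

# Minimal inference sketch using Hugging Face transformers; generation settings
# are illustrative, not the model card's recommended values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BramVanroy/GEITje-7B-ultra-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Dutch prompt, since the model is fluent in nl.
messages = [{"role": "user", "content": "Leg in twee zinnen uit wat fotosynthese is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))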
LLM Name: GEITje 7B Ultra Sft
Repository: 🤗 https://huggingface.co/BramVanroy/GEITje-7B-ultra-sft
Base Model(s): GEITje 7B (Rijgersberg/GEITje-7B)
Model Size: 7b
Required VRAM: 14.4 GB
Updated: 2025-05-21
Maintainer: BramVanroy
Model Type: mistral
Model Files: 4.9 GB (1-of-3), 5.0 GB (2-of-3), 4.5 GB (3-of-3), 0.0 GB
Supported Languages: nl
Model Architecture: MistralForCausalLM
License: cc-by-nc-4.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: bfloat16
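
The 14.4 GB VRAM figure is consistent with roughly 7.2B parameters stored as bfloat16 (2 bytes per weight, about 14.5 GB), matching the sharded safetensors sizes listed above. The short sketch below shows one way to confirm the configuration and tokenizer details above (context length, vocabulary size, padding token, dtype) from the repository; the printed values in the comments are taken from this page.

# Sketch: read the published config/tokenizer to confirm the values listed above.
from transformers import AutoConfig, AutoTokenizer

model_id = "BramVanroy/GEITje-7B-ultra-sft"

config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.model_type)               # "mistral"
print(config.max_position_embeddings)  # 32768
print(config.vocab_size)               # 32000
print(config.torch_dtype)              # torch.bfloat16
print(tokenizer.__class__.__name__)    # LlamaTokenizer / LlamaTokenizerFast
print(tokenizer.pad_token)             # "</s>"
print(tokenizer.model_max_length)      # 32768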

Best Alternatives to GEITje 7B Ultra Sft

Best Alternatives                    Context / RAM      Downloads   Likes
...Nemo Instruct 2407 Abliterated    1000K / 24.5 GB    1669        16
MegaBeam Mistral 7B 512K             512K / 14.4 GB     1169        50
SpydazWeb AI HumanAI RP              512K / 14.4 GB     9           1
SpydazWeb AI HumanAI 002             512K / 14.4 GB     18          1
...daz Web AI ChatML 512K Project    512K / 14.5 GB     12          0
MegaBeam Mistral 7B 300K             282K / 14.4 GB     3779        16
MegaBeam Mistral 7B 300K             282K / 14.4 GB     1044        16
Hebrew Mistral 7B 200K               256K / 30 GB       2396        15
Astral 256K 7B V2                    250K / 14.4 GB     5           0
Astral 256K 7B                       250K / 14.4 GB     5           0
Note: a green score (e.g. "73.2") in the table above indicates that the alternative model is better than BramVanroy/GEITje-7B-ultra-sft.

Rank the GEITje 7B Ultra Sft Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227