Merak 7B V4 by Ichsan2895


Tags: Arxiv:2305.14314, Arxiv:2306.02707, Autotrain compatible, Conversational, Dataset:ichsan2895/alpaca-gpt4..., Dataset:ichsan2895/oasst top1..., Dataset:wikipedia, En, Endpoints compatible, Id, Mistral, Pytorch, Region:us, Sharded
Model Card on HF 🤗: https://huggingface.co/Ichsan2895/Merak-7B-v4

Merak 7B V4 Benchmarks

Benchmark scores for Merak 7B V4 (Ichsan2895/Merak-7B-v4) are reported relative to reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Merak 7B V4 Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
research, AI development
Applications:
text generation, chatbots for Indonesian language
Primary Use Cases:
Text-based AI Applications
Limitations:
does not work optimally with BitsAndBytes 4-bit quantization
Additional Notes 
Created with help from the Axolotl fine-tuning tool.
Supported Languages 
id (high proficiency), en (medium proficiency)
Training Details 
Data Sources:
wikipedia, Ichsan2895/OASST_Top1_Indonesian, Ichsan2895/alpaca-gpt4-indonesian
Data Volume:
200k to 600k ID Wikipedia articles
Methodology:
Fine-tuning with QLoRA and full-parameter fine-tuning (FFT); see the sketch after this list
Training Time:
1 epoch with QLoRA, 4 epochs with FFT
Hardware Used:
utilizes BitsAndBytes and can run with 16 GB VRAM
Model Architecture:
Fine-tuned on top of Mistral-7B-OpenOrca using QLoRA
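
A minimal sketch of the QLoRA setup described above, assuming the transformers, peft, and bitsandbytes libraries; the base repository ID, LoRA rank, and target modules are illustrative assumptions, not the exact recipe used to train Merak 7B V4 (which was produced with Axolotl):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Base model named on this card; the exact repository ID is an assumption.
base_id = "Open-Orca/Mistral-7B-OpenOrca"

# 4-bit NF4 quantization as described in the QLoRA paper (arXiv:2305.14314).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Illustrative LoRA hyperparameters; not the values used for Merak 7B V4.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable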
Input Output 
Input Format:
Chat format using roles (system, user)
Accepted Modalities:
text
Output Format:
Text response
Performance Tips:
For better performance, avoid using BitsAndBytes 4-bit quantization
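
A minimal inference sketch following the input format above, assuming the standard Hugging Face transformers API; the system/user messages and generation settings are illustrative. Per the performance tip, the model is loaded in half precision rather than with 4-bit quantization:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ichsan2895/Merak-7B-v4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Half precision instead of BitsAndBytes 4-bit, per the performance tip above.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "Anda adalah asisten yang membantu."},
    {"role": "user", "content": "Jelaskan secara singkat apa itu fotosintesis."},
]
# apply_chat_template assumes the repository ships a chat template; if it does
# not, fall back to the prompt format documented on the model card.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))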
Release Notes 
Version:
v4
Notes:
Used Mistral-7B-OpenOrca instead of Llama-2-Chat-HF; FFT applied with extra resources.
Version:
v3
Notes:
Fine-tuning with additional datasets.
Version:
v2
Notes:
Fine-tuning with 600k ID Wikipedia articles.
Version:
v1
Notes:
Initial model with 200k selected and cleaned ID Wikipedia articles.
LLM Name: Merak 7B V4
Repository 🤗: https://huggingface.co/Ichsan2895/Merak-7B-v4
Model Size: 7b
Required VRAM: 28.9 GB
Updated: 2025-02-22
Maintainer: Ichsan2895
Model Type: mistral
Model Files: 5.0 GB (1-of-6), 4.9 GB (2-of-6), 5.0 GB (3-of-6), 5.0 GB (4-of-6), 4.8 GB (5-of-6), 4.2 GB (6-of-6)
Supported Languages: id, en
Model Architecture: MistralForCausalLM
License: cc-by-nc-sa-4.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.34.1
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32002
Torch Data Type: float32
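
The configuration values listed above can be checked directly against the repository; a quick sanity-check sketch using the standard transformers API (no assumptions beyond the repository ID on this card):

from transformers import AutoConfig, AutoTokenizer

repo_id = "Ichsan2895/Merak-7B-v4"
config = AutoConfig.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

print(config.model_type)               # expected "mistral"
print(config.max_position_embeddings)  # expected 32768 (context length)
print(config.vocab_size)               # expected 32002
print(tokenizer.pad_token)             # expected "</s>"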

Quantized Models of the Merak 7B V4

Model                       | Likes | Downloads | VRAM
USK Mistral 7B Unsloth GGUF | 0     | 28        | 4 GB
Merak 7B V4 GPTQ            | 2     | 11        | 4 GB
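
A minimal sketch for running the roughly 4 GB GPTQ build instead of the full float32 checkpoint; the repository ID below is a hypothetical name inferred from the table above, and loading GPTQ weights through transformers requires the optimum and auto-gptq packages:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository ID for the GPTQ build listed above.
gptq_id = "Ichsan2895/Merak-7B-v4-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(gptq_id)
# The weights stay quantized in memory, so the model fits in roughly 4 GB of VRAM.
model = AutoModelForCausalLM.from_pretrained(gptq_id, device_map="auto")

prompt = "Apa ibu kota Indonesia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))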

Best Alternatives to Merak 7B V4

Best Alternatives                 | Context / RAM   | Downloads | Likes
...Nemo Instruct 2407 Abliterated | 1000K / 24.5 GB | 4620      | 11
MegaBeam Mistral 7B 512K          | 512K / 14.4 GB  | 5681      | 50
SpydazWeb AI HumanAI RP           | 512K / 14.4 GB  | 12        | 1
SpydazWeb AI HumanAI 002          | 512K / 14.4 GB  | 18        | 1
...daz Web AI ChatML 512K Project | 512K / 14.5 GB  | 12        | 0
MegaBeam Mistral 7B 300K          | 282K / 14.4 GB  | 5633      | 16
Hebrew Mistral 7B 200K            | 256K / 30 GB    | 14619     | 15
Astral 256K 7B V2                 | 250K / 14.4 GB  | 7         | 0
Astral 256K 7B                    | 250K / 14.4 GB  | 5         | 0
Test001                           | 128K / 14.5 GB  | 9         | 0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227