Merak 7B V2 GGML by asyafiqe


Tags: arxiv:2305.14314, arxiv:2307.09288, autotrain-compatible, dataset:wikipedia, en, id, facebook, ggml, llama, llama2, meta, pytorch, quantized, region:us

Merak 7B V2 GGML Benchmarks

Merak 7B V2 GGML (asyafiqe/Merak-7B-v2-GGML)

Merak 7B V2 GGML Parameters and Internals

Model Type: text-generation
Use Cases:
Areas: research, commercial applications
Primary Use Cases: storytelling, AI development, text generation
Limitations: not suitable for tasks requiring extensive resources or high inference speed
Supported Languages: id (proficient), en (proficient)
Training Details:
Data Sources: Indonesian Wikipedia
Data Volume: 200k Indonesian Wikipedia articles
Methodology: finetuning with QLoRA
Model Architecture: based on Meta Llama-2-7B-Chat-HF
Input Output:
Input Format: <|prompt|>{question}\n<|answer|>
Accepted Modalities: text
Output Format: generated response text
Performance Tips: for better performance, avoid BitsandBytes 4-bit quantization
Release Notes:
Version v1: the first Merak-7B model, trained on 200k Indonesian Wikipedia articles.
Version v2: finetuned version with an adjusted prompt style.
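The input format above can be applied with a small helper. This is an illustrative sketch (the function name is not part of the model card); it only wraps a question in the documented template:

```python
def build_prompt(question: str) -> str:
    """Wrap a user question in the Merak-7B-v2 prompt template
    documented on the model card: <|prompt|>{question}\n<|answer|>."""
    return f"<|prompt|>{question}\n<|answer|>"

# Example (Indonesian: "What is the capital of Indonesia?")
print(build_prompt("Apa ibu kota Indonesia?"))
```

The resulting string can then be passed to any GGML-capable runtime (e.g. llama.cpp or ctransformers) loading one of the quantized model files listed below.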
LLM Name: Merak 7B V2 GGML
Repository: https://huggingface.co/asyafiqe/Merak-7B-v2-GGML
Model Size: 7b
Required VRAM: 2.9 GB
Updated: 2024-12-29
Maintainer: asyafiqe
Model Type: llama
Model Files: 2.9 GB, 3.3 GB, 3.6 GB, 3.3 GB, 3.0 GB, 3.8 GB, 4.2 GB, 4.1 GB, 4.1 GB, 3.8 GB, 4.7 GB, 5.1 GB, 4.8 GB, 4.8 GB, 4.7 GB, 5.5 GB, 7.1 GB
Supported Languages: id, en
GGML Quantization: Yes
Quantization Type: ggml
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.32.0.dev0
Vocabulary Size: 32000
Torch Data Type: float16

Best Alternatives to Merak 7B V2 GGML

Best Alternatives              Context / RAM     Downloads   Likes
Cria Llama2 7B V1.3 GGML       4K / 3.8 GB       680         0
Nb Sau 7B 8K Step100k          4K / 13.5 GB      14          1
Ennodata 7B                    2K / 13.5 GB      1437        0
Smaugv0.1 6.0bpw H6 EXL2       195K / 26.4 GB    15          4
Smaugv0.1 5.0bpw H6 EXL2       195K / 22.3 GB    15          3
Smaugv0.1 4.0bpw H6 EXL2       195K / 18 GB      19          1
Smaugv0.1 3.0bpw H6 EXL2       195K / 13.9 GB    16          1
Smaugv0.1 8.0bpw H8 EXL2       195K / 34.9 GB    16          1
Smaugv0.1 4.65bpw H6 EXL2      195K / 20.8 GB    15          1
Chef Ai                        32K / 14.4 GB     21          0



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227