CataLlama V0.1 Base by catallama


Tags: Autotrain compatible · Base model (finetune): meta-llama/Meta-Llama-3-8B · Catalan (ca) · English (en) · Conversational · Dataset: catallama/Catalan-Raw-Text · Endpoints compatible · Llama · Llama-3 · Region: US · Safetensors · Sharded · Tensorflow

CataLlama V0.1 Base Benchmarks

Benchmark scores (nn.n%) indicate how CataLlama V0.1 Base (catallama/CataLlama-v0.1-Base) compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

CataLlama V0.1 Base Parameters and Internals

Model Type 
text-generation
Use Cases 
Areas:
Commercial, Research
Applications:
Instruction-tuned models for assistant-like chat, Pretrained models for natural language generation tasks
Limitations:
Use in any manner that violates applicable laws or regulations, Use in languages other than English
Considerations:
Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
Additional Notes 
This model is not intended to beat benchmarks, but to demonstrate techniques for augmenting LLMs with new languages and to help preserve rare languages as part of our world heritage.
Supported Languages 
ca (high), en (high)
Training Details 
Data Sources:
catallama/Catalan-Raw-Text
Methodology:
Supervised fine-tuning and direct preference optimization (DPO) to align the model with human preferences for helpfulness and safety (a sketch of the DPO objective follows this section)
Hardware Used:
6x A100 80GB GPUs
Model Architecture:
Auto-regressive language model that uses an optimized transformer architecture
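The DPO stage referenced above trains the policy to prefer the chosen response over the rejected one relative to a frozen reference (SFT) model. The snippet below is a minimal PyTorch sketch of that objective only; it is not the project's training code, and the tensor names, batch size, and beta value are illustrative.

```python
# Minimal sketch of the Direct Preference Optimization (DPO) objective.
# Assumes per-sequence log-probabilities have already been computed for the
# policy and the frozen reference model on chosen/rejected completion pairs.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each argument is a 1-D tensor of summed log-probs, one entry per pair.
    beta controls how far the policy is allowed to drift from the reference."""
    # Log-ratios of policy vs. reference for preferred and dispreferred answers.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Logistic loss on the margin: push the chosen response above the rejected one.
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()

# Toy call with random log-probabilities for 4 preference pairs (illustrative only).
torch.manual_seed(0)
print(dpo_loss(*(torch.randn(4) for _ in range(4))).item())
```

In practice the per-sequence log-probabilities are token log-probs summed over each completion, and the training loop is usually handled by a preference-optimization library such as TRL rather than written by hand.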
LLM Name: CataLlama V0.1 Base
Repository 🤗: https://huggingface.co/catallama/CataLlama-v0.1-Base
Base Model(s): Meta Llama 3 8B (meta-llama/Meta-Llama-3-8B)
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-02-22
Maintainer: catallama
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4), 0.0 GB
Supported Languages: ca, en
Model Architecture: LlamaForCausalLM
License: llama3
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.38.1
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|pad|>
Vocabulary Size: 128257
Torch Data Type: bfloat16
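A few of these values fit together: the 16.1 GB of required VRAM is essentially the 8B parameters stored as bfloat16 (2 bytes per parameter), and the vocabulary size of 128257 is the stock Llama 3 vocabulary (128256) plus the added <|pad|> padding token. Below is a minimal inference sketch using the standard Hugging Face transformers API with the configuration listed above; the prompt and generation settings are illustrative and not taken from the model card.

```python
# Minimal inference sketch for the base (non-instruct) checkpoint.
# Assumes transformers (>= 4.38, per the listed version), torch, and accelerate
# are installed; the prompt and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "catallama/CataLlama-v0.1-Base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the listed Torch data type (~16 GB of weights)
    device_map="auto",           # requires the accelerate package
)

# Base model: plain Catalan text continuation rather than chat-style prompting.
prompt = "La intel·ligència artificial és"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For assistant-style chat, the instruction-tuned sibling of this checkpoint is the intended choice; the base model targets plain natural language generation tasks.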

Best Alternatives to CataLlama V0.1 Base

Best Alternatives | Context / RAM | Downloads / Likes
...a 3 8B Instruct Gradient 1048K | 1024K / 16.1 GB | 3927680
MrRoboto ProLong 8B V4i | 1024K / 16.1 GB | 661
...o ProLongBASE Pt8 Unaligned 8B | 1024K / 16.1 GB | 240
MrRoboto BASE V2 Unholy 8B 64K | 1024K / 16.1 GB | 271
Mpasila Viking 8B | 1024K / 16.1 GB | 840
Thor V1.4 8B DARK FICTION | 1024K / 16.1 GB | 9412
4 | 1024K / 16.1 GB | 3220
Hel V2 8B DARK FICTION | 1024K / 16.1 GB | 220
16 | 1024K / 16.1 GB | 1690
...di95 LewdStorytellerMix 8B 64K | 1024K / 16.1 GB | 692
Note: a green score (e.g. "73.2") means the alternative performs better than catallama/CataLlama-v0.1-Base.

Rank the CataLlama V0.1 Base Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 43,470 models are indexed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227