Latxa 70B V1.2 by HiTZ


Tags: arXiv:1910.09700, arXiv:2112.10668, arXiv:2307.09288, arXiv:2308.16884, arXiv:2403.20266, autotrain-compatible, dataset:HiTZ/latxa-corpus-v1.1, en, endpoints-compatible, eu, llama, model-index, pytorch, region:us, sharded
Model Card on HF 🤗: https://huggingface.co/HiTZ/latxa-70b-v1.2


Latxa 70B V1.2 Parameters and Internals

Model Type 
language model
Use Cases 
Areas:
Basque language technology and research
Primary Use Cases:
Pre-trained LLM for downstream tasks, used as-is or further fine-tuned for specific use cases (see the sketch after this block).
Limitations:
Not fine-tuned to follow instructions or work as a chat assistant.
Considerations:
Use with Basque data; performance in other languages is not guaranteed.
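The primary use case above mentions further fine-tuning. The model card does not prescribe a recipe, but as an illustrative sketch, one common approach is to attach LoRA adapters with the peft library; the hyperparameters below (r, lora_alpha, target modules) are hypothetical placeholders, and a real run would still need a training loop and a Basque dataset:

```python
# Hypothetical LoRA fine-tuning setup (not from the model card).
# Requires: pip install transformers peft accelerate
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "HiTZ/latxa-70b-v1.2",
    torch_dtype=torch.bfloat16,   # matches the checkpoint's dtype
    device_map="auto",            # shard across available GPUs
)

# Illustrative adapter config; tune these for the actual task.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # Llama attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapters are trainable
```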
Additional Notes 
Latxa models range from 7 to 70 billion parameters; the evaluation suite and datasets are publicly available under open licenses.
Supported Languages 
Basque (eu), English (en)
Training Details 
Data Sources:
HiTZ/latxa-corpus-v1.1, EleutherAI/pile
Data Volume:
4.17B tokens
Methodology:
High-quality data sources were prioritized, with thorough deduplication and filtering (see the sketch after this section). Trained using the GPT-NeoX library on HPC infrastructure.
Context Length:
4096
Training Time:
10k steps with 20B total tokens, around 4 epochs
Hardware Used:
CINECA HPC Leonardo computing cluster: 3456 nodes, each with 4x custom NVIDIA A100 64 GB GPUs
Model Architecture:
Follows Meta's Llama 2 architecture; the base model is further pretrained on a Basque corpus.
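The methodology above mentions deduplication and filtering without detailing the pipeline. Purely as a sketch of the simplest variant of that idea, and not the actual Latxa tooling, the snippet below drops exact duplicates by hashing whitespace-normalized documents:

```python
import hashlib

def dedup_documents(documents):
    """Drop exact duplicates by hashing whitespace-normalized text.

    Illustrative only: real corpus pipelines (including Latxa's) also do
    quality filtering and near-duplicate detection.
    """
    seen, unique = set(), []
    for doc in documents:
        # Normalize whitespace so trivially reformatted copies collide.
        key = hashlib.sha256(" ".join(doc.split()).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

corpus = ["Kaixo mundua!", "Kaixo   mundua!", "Beste dokumentu bat."]
print(dedup_documents(corpus))  # ['Kaixo mundua!', 'Beste dokumentu bat.']
```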
Responsible AI Considerations 
Fairness:
Trained on carefully selected and processed data to minimize disturbing or harmful content.
Mitigation Strategies:
Thorough deduplication and filtering process applied on training data.
Input Output 
Accepted Modalities:
text
Output Format:
Generated text
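Since the checkpoint is a standard LlamaForCausalLM with a LlamaTokenizer (see the details table below), it should load with Hugging Face transformers (4.31.0 or newer) like any other Llama 2 model. A minimal inference sketch, assuming enough GPU memory for the roughly 138 GB of bfloat16 weights and the accelerate package for device placement:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HiTZ/latxa-70b-v1.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer, 32000-token vocab
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # checkpoint dtype per the details table
    device_map="auto",           # shard the ~138 GB of weights across GPUs
)

# Base (non-instruct) model: use plain continuation prompts, not chat turns.
prompt = "Euskal Herriko hiriburuak dira"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)  # context window is 4096 tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the card notes the model is not instruction-tuned, continuation-style prompting like this is the appropriate interface.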
Release Notes 
Version:
1.1
LLM Name: Latxa 70B V1.2
Repository 🤗: https://huggingface.co/HiTZ/latxa-70b-v1.2
Model Size: 70B
Required VRAM: 138 GB
Updated: 2025-01-17
Maintainer: HiTZ
Model Type: llama
Model Files (15 shards): 9.8 GB (1-of-15), 9.8 GB (2-of-15), 10.0 GB (3-of-15), 9.8 GB (4-of-15), 9.8 GB (5-of-15), 9.8 GB (6-of-15), 10.0 GB (7-of-15), 9.8 GB (8-of-15), 9.8 GB (9-of-15), 9.8 GB (10-of-15), 10.0 GB (11-of-15), 9.8 GB (12-of-15), 9.8 GB (13-of-15), 9.5 GB (14-of-15), 0.5 GB (15-of-15)
Supported Languages: eu, en
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.31.0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: bfloat16
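The required-VRAM figure is consistent with the weights alone: 70 billion parameters at 2 bytes each (bfloat16) come to roughly 140 GB, and the 15 shard files listed above sum to exactly 138 GB. A quick sanity check:

```python
# Back-of-the-envelope memory check for the bfloat16 weights.
params = 70e9                 # nominal parameter count
bytes_per_param = 2           # bfloat16 is 16 bits
print(f"{params * bytes_per_param / 1e9:.0f} GB")  # ~140 GB

# Sum of the 15 shard sizes from the model-files row above.
shards = [9.8, 9.8, 10.0, 9.8, 9.8, 9.8, 10.0, 9.8, 9.8, 9.8,
          10.0, 9.8, 9.8, 9.5, 0.5]
print(f"{sum(shards):.1f} GB")  # 138.0 GB, matching the required-VRAM entry
```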

Best Alternatives to Latxa 70B V1.2

Best Alternatives                   | Context / RAM    | Downloads | Likes
... Chat 1048K Chinese Llama3 70B   | 1024K / 141.9 GB | 3019      | 5
... 3 70B Instruct Gradient 1048K   | 1024K / 141.9 GB | 366       | 121
Llama3 Function Calling 1048K       | 1024K / 141.9 GB | 18        | 1
...a 3 70B Instruct Gradient 524K   | 512K / 141.9 GB  | 65        | 23
...a 3 70B Instruct Gradient 262K   | 256K / 141.9 GB  | 80        | 55
...ama 3 70B Arimas Story RP V2.0   | 256K / 141.1 GB  | 41        | 3
...ama 3 70B Arimas Story RP V1.6   | 256K / 141.2 GB  | 21        | 0
...ama 3 70B Arimas Story RP V1.5   | 256K / 141.2 GB  | 25        | 2
Yi 70B 200K RPMerge Franken         | 195K / 142.4 GB  | 12        | 1
...a 3.1 Nemotron 70B Instruct HF   | 128K / 141.9 GB  | 390382    | 1996



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227