Llama 3.1 8B EZO 1.1 It by HODACHI


Tags: Autotrain compatible, Conversational, En, Endpoints compatible, Instruct, Ja, Japanese, Llama, Region: us, Safetensors, Sharded, Tensorflow

Llama 3.1 8B EZO 1.1 It Benchmarks

nn.n%: how the model compares to the reference models Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Llama 3.1 8B EZO 1.1 It (AXCXEPT/Llama-3.1-8B-EZO-1.1-it)

Llama 3.1 8B EZO 1.1 It Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
Japanese language performance, Various global tasks
Limitations:
Unpredictable outputs, need for safety testing before deployment, limited usability in unsupported languages, risks inherent to a new technology, need for continuous improvement
Considerations:
Developers and users should be aware of limitations and strive for responsible use. See Llama 3.1 Responsible Use Guide.
Additional Notes 
Based on Meta AI's Llama 3.1 with significant Japanese language performance improvements.
Supported Languages 
ja (Proficient), en (Basic)
Training Details 
Data Sources:
https://huggingface.co/datasets/legacy-datasets/wikipedia, https://huggingface.co/datasets/HuggingFaceFW/fineweb
Methodology:
Plain instruction tuning method + QLoRA
Hardware Used:
H100 × 1
Model Architecture:
Llama-based architecture with fine-tuning for Japanese tasks
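The stated methodology (plain instruction tuning plus QLoRA on a single H100) can be sketched as a configuration. All hyperparameter values below are illustrative assumptions, not the authors' published settings; only the 4-bit-quantized-base-plus-LoRA-adapters structure and the bfloat16 compute dtype come from the card.

```python
# Hedged sketch of a QLoRA fine-tuning configuration for an 8B Llama model.
# Every numeric value here is an illustrative assumption; the card does not
# publish the actual hyperparameters.
qlora_config = {
    # Core of QLoRA: the frozen base model is quantized to 4-bit NF4,
    # while small LoRA adapters are trained in higher precision.
    "load_in_4bit": True,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_compute_dtype": "bfloat16",  # matches the card's torch dtype
    # Assumed LoRA adapter settings on the attention projections.
    "lora_r": 16,
    "lora_alpha": 32,
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
}

# Trainable-parameter estimate for the assumed adapter: a LoRA update on a
# d_in -> d_out projection adds r * (d_in + d_out) parameters.
hidden = 4096   # Llama 3.1 8B hidden size
kv_dim = 1024   # k_proj / v_proj output dim (8 KV heads x 128 head dim)
r = qlora_config["lora_r"]
lora_params_per_layer = (
    r * (hidden + hidden)    # q_proj: 4096 -> 4096
    + r * (hidden + kv_dim)  # k_proj: 4096 -> 1024
    + r * (hidden + kv_dim)  # v_proj: 4096 -> 1024
    + r * (hidden + hidden)  # o_proj: 4096 -> 4096
)
print(lora_params_per_layer * 32)  # 32 transformer layers -> 13631488
```

At the assumed rank 16, the adapters total about 13.6M trainable parameters, a fraction of a percent of the 8B base, which is what makes single-GPU training on one H100 plausible.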
Responsible Ai Considerations 
Fairness:
Same limitations and ethical considerations as Llama 3.1.
LLM Name: Llama 3.1 8B EZO 1.1 It
Repository: https://huggingface.co/AXCXEPT/Llama-3.1-8B-EZO-1.1-it
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-05-20
Maintainer: HODACHI
Model Type: llama
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Supported Languages: ja, en
Model Architecture: LlamaForCausalLM
License: llama3.1
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.43.3
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: bfloat16
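The VRAM figure, shard sizes, and dtype listed above are mutually consistent: an ~8B-parameter model stored in bfloat16 (2 bytes per weight) needs roughly 16 GB for the weights alone, and the four safetensors shards sum to exactly the stated requirement. A quick sanity check (the 8.03B parameter count is the commonly cited figure for Llama 3.1 8B, not a number from this card):

```python
# The four safetensors shards listed on the card should sum to the stated
# "Required VRAM" of 16.1 GB.
shard_sizes_gb = [5.0, 5.0, 4.9, 1.2]  # shards 1-of-4 through 4-of-4
total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # 16.1

# Rough cross-check against the parameter count: bfloat16 uses 2 bytes
# per weight, so ~8.03e9 params x 2 bytes ≈ 16.1 GB.
approx_gb = round(8.03e9 * 2 / 1e9, 1)
print(approx_gb)  # 16.1
```

Note this covers weights only; actual inference needs additional VRAM for the KV cache, which grows with context length (up to 131072 tokens here).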

Best Alternatives to Llama 3.1 8B EZO 1.1 It

Model | Context / RAM | Downloads/Likes
...otron 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 4677105
UltraLong Thinking | 4192K / 16.1 GB | 2482
...a 3.1 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 17624
...a 3.1 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 8759
...otron 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 22615
Zero Llama 3.1 8B Beta6 | 1048K / 16.1 GB | 5271
...otron 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 306941
...a 3.1 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 138729
...dger Nu Llama 3.1 8B UltraLong | 1048K / 16.2 GB | 432
....1 1million Ctx Dark Planet 8B | 1048K / 32.3 GB | 322
Note: green Score (e.g. "73.2") means that the model is better than AXCXEPT/Llama-3.1-8B-EZO-1.1-it.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227