Aeonium V1 Base 4B by aeonium


Tags: Aeonium · Autotrain compatible · Endpoints compatible · Llama · Region: US · RU · Safetensors · Sharded · Tensorflow

Aeonium V1 Base 4B Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Aeonium V1 Base 4B (aeonium/Aeonium-v1-Base-4B)

Aeonium V1 Base 4B Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
Russian language processing
Applications:
general text generation
Considerations:
Users should filter generated content for biases and offensive language.
Additional Notes 
Outputs may be harmful; the model card carries a content warning and acknowledges potential biases.
Supported Languages 
ru (fluent)
Training Details 
Data Sources:
public web pages in Russian
Hardware Used:
TPU v4-256 node
Responsible AI Considerations 
Fairness:
Aeonium v1 may generate text that contains biases. Users should apply careful filtering.
Mitigation Strategies:
Caution is advised when using generated text for sensitive applications.
LLM Name: Aeonium V1 Base 4B
Repository: 🤗 https://huggingface.co/aeonium/Aeonium-v1-Base-4B
Model Size: 4B
Required VRAM: 16.2 GB
Updated: 2025-02-22
Maintainer: aeonium
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.6 GB (3-of-4), 1.6 GB (4-of-4)
Supported Languages: ru
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.39.3
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128000
Torch Data Type: float32
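The listed sizes are mutually consistent: a model with roughly 4 billion parameters stored as float32 (4 bytes per weight) needs about 15 GiB for the weights alone, and the four shard files sum to the quoted 16.2 GB. A quick back-of-the-envelope check (the exact parameter count is an assumption; the listing only says "4B"):

```python
# Rough VRAM estimate for Aeonium V1 Base 4B in full precision.
# Assumption: "4B" means ~4 billion parameters; the listed Torch Data Type
# is float32, i.e. 4 bytes per parameter.

PARAMS = 4_000_000_000   # approximate parameter count (not stated exactly)
BYTES_PER_PARAM = 4      # float32

weights_gib = PARAMS * BYTES_PER_PARAM / 1024**3
print(f"~{weights_gib:.1f} GiB for weights alone")

# The four shard files sum to the quoted Required VRAM figure:
shards_gb = 5.0 + 5.0 + 4.6 + 1.6
print(f"shard total: {shards_gb:.1f} GB")
```

Loading the checkpoint in float16/bfloat16 instead would roughly halve the footprint, to about 8 GB.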

Best Alternatives to Aeonium V1 Base 4B

Best Alternatives | Context / RAM | Downloads | Likes
SJT 4B | 146K / 7.6 GB | 17 | 0
Loxa 4B | 128K / 16 GB | 64 | 0
Aura 4B | 128K / 9 GB | 271 | 0
...ama 3.1 Minitron 4B Depth Base | 128K / 9.1 GB | 2853 | 21
Nemotron W 4b MagLight 0.1 | 128K / 9.2 GB | 16 | 2
...ama 3.1 Minitron 4B Width Base | 128K / 9 GB | 3382 | 188
....5 MINI 4B SFTxORPO HESSIAN AI | 128K / 7.7 GB | 16 | 0
....5 MINI 4B ORPOxSFT HESSIAN AI | 128K / 7.7 GB | 15 | 0
....5 MINI 4B ORPOxSFT HESSIAN AI | 128K / 7.7 GB | 15 | 0
....5 MINI 4B SFTxORPO HESSIAN AI | 128K / 7.7 GB | 13 | 0
Note: green Score (e.g. "73.2") means that the model is better than aeonium/Aeonium-v1-Base-4B.

Rank the Aeonium V1 Base 4B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 43,470 models in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227