L3 Aethora 15B V2 EXL2 6.0bpw by bullerwins



L3 Aethora 15B V2 EXL2 6.0bpw Benchmarks

Benchmark scores show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

L3 Aethora 15B V2 EXL2 6.0bpw Parameters and Internals

Model Type
text generation, instruction following
Use Cases
Primary Use Cases:
Creative Writing and Storytelling, General Intelligence, Instructional and Educational Content, Reasoning and Problem-Solving, Contextual Understanding and Adaptability
Additional Notes: The model employs state-of-the-art training techniques and a curated dataset to deliver enhanced performance across a wide range of tasks.
Training Details
Data Sources:
TheSkullery/Aether-Lite-v1.8.1
Data Volume: 125k samples
Methodology: LoRA (Low-Rank Adaptation)
Context Length: 8192
Training Time: 17.5 hours
Hardware Used:
4 x A100 GPUs
Model Architecture: Llama 3-based
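The LoRA methodology noted above trains only small low-rank adapter matrices instead of the full weights. A minimal sketch of the parameter savings follows; the rank and layer dimensions used here are illustrative assumptions, not published settings for this training run:

```python
# Illustrative estimate of LoRA trainable-parameter savings.
# For a weight matrix W of shape (d, k), LoRA learns two low-rank
# factors A (d x r) and B (r x k), adding only r * (d + k) parameters
# while W itself stays frozen.

def lora_params(d: int, k: int, r: int) -> int:
    """Trainable parameters for one LoRA-adapted (d x k) matrix."""
    return r * (d + k)

def full_params(d: int, k: int) -> int:
    """Parameters of the frozen full matrix."""
    return d * k

# Assumed example: a 4096 x 4096 attention projection with rank 16.
d = k = 4096
r = 16
print(full_params(d, k))     # 16777216 frozen weights
print(lora_params(d, k, r))  # 131072 trainable LoRA weights
```

At this (assumed) rank, the adapter is under 1% of the size of the matrix it adapts, which is why a 15B model can be tuned on 4 x A100 GPUs in 17.5 hours.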
LLM Name: L3 Aethora 15B V2 EXL2 6.0bpw
Repository 🤗: https://huggingface.co/bullerwins/L3-Aethora-15B-V2-exl2_6.0bpw
Base Model(s): Llama 3 15B Instruct Zeroed (elinas/Llama-3-15B-Instruct-zeroed)
Model Size: 15b
Required VRAM: 11.9 GB
Updated: 2024-11-13
Maintainer: bullerwins
Model Type: llama
Instruction-Based: Yes
Model Files: 8.6 GB (1-of-2), 3.3 GB (2-of-2)
Supported Languages: en
Quantization Type: exl2
Model Architecture: LlamaForCausalLM
License: cc-by-sa-4.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
L3 Aethora 15B V2 EXL2 6.0bpw (bullerwins/L3-Aethora-15B-V2-exl2_6.0bpw)
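The Required VRAM figure (11.9 GB) is close to what a simple bits-per-weight estimate gives. A minimal sketch follows; the formula counts weight storage only, so the gap to the listed figure is assumed to come from KV-cache and runtime overhead rather than any published breakdown:

```python
def estimated_weight_gb(n_params: float, bpw: float) -> float:
    """Approximate memory footprint of quantized weights in GB:
    parameters * bits-per-weight / 8 bits-per-byte / 1e9 bytes-per-GB."""
    return n_params * bpw / 8 / 1e9

# 15B parameters at 6.0 bits per weight (this model's quantization):
weights_gb = estimated_weight_gb(15e9, 6.0)
print(round(weights_gb, 2))  # 11.25
```

The listed 11.9 GB is slightly above the 11.25 GB weight estimate, consistent with cache and buffer overhead on top of the weights themselves.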

Best Alternatives to L3 Aethora 15B V2 EXL2 6.0bpw

Best Alternatives                  Context / RAM    Downloads  Likes
L3 Aethora 15B V2 EXL2 8.0bpw      8K / 12.4 GB     11         1
L3 Aethora 15B V2 EXL2 4.0bpw      8K / 8.4 GB      10         1
L3 Aethora 15B V2 EXL2 5.0bpw      8K / 10.2 GB     7          1
L3 Aethora 15B V2                  8K / 30.1 GB     293        9
OpenCrystal 15B L3 V3              8K / 30 GB       92         8
OpenCrystal L3 15B V2.1            8K / 30 GB       8          5
OpenCrystal 15B L3 V2              8K / 30 GB       13         4
Meta Llama 3 15B Instruct          8K / 23.9 GB     10         1
Llama 3 15B Instruct Ft V2 AWQ     8K / 9.4 GB      5          0
Llama 3 15B Instruct Ft V2         8K / 30.1 GB     11         4
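One practical way to use the alternatives above is to pick the highest-bpw EXL2 quant whose weights fit a given VRAM budget. A minimal sketch using the sizes listed above; the 1 GB headroom default is an assumed allowance for the KV cache, not a measured value:

```python
# Sizes (GB) of the EXL2 quants listed above, keyed by bits per weight.
exl2_quants = {4.0: 8.4, 5.0: 10.2, 6.0: 11.9, 8.0: 12.4}

def best_quant(vram_gb: float, headroom_gb: float = 1.0):
    """Return the highest bpw whose size plus headroom fits in vram_gb,
    or None if no listed quant fits."""
    fitting = [bpw for bpw, gb in exl2_quants.items()
               if gb + headroom_gb <= vram_gb]
    return max(fitting) if fitting else None

print(best_quant(12.0))  # 5.0 (the 6.0bpw quant needs 11.9 + 1.0 GB)
print(best_quant(16.0))  # 8.0
```

Higher bpw generally means less quantization loss, so this selects the best quality that fits the hardware.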
Note: a green score (e.g. "73.2") means that model is better than bullerwins/L3-Aethora-15B-V2-exl2_6.0bpw.

Rank the L3 Aethora 15B V2 EXL2 6.0bpw Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241110