Luminia 13B V3 by Nekochu


Tags: Alpaca, Base model:adapter:meta-llama/..., Base model:meta-llama/llama-2-..., Conversational, Dataset:gair/lima, Dataset:garage-baind/open-plat..., Dataset:glaiveai/glaive-functi..., Dataset:nekochu/discord-unstab..., Dataset:open-orca/slimorca, Dataset:sahil2801/codealpaca-2..., Dataset:tiger-lab/mathinstruct, En, Endpoints compatible, Finetuned, Generated from trainer, Gguf, Gpt4, Instruct, Llama, Llama-factory, Llama2, Lora, Peft, Q3, Quantized, Region:us, Safetensors, Sharded, Stable diffusion, Synthetic data, Tensorflow

Luminia 13B V3 Benchmarks

Luminia 13B V3 Parameters and Internals

Model Type: llama2
Use Cases
Areas: text-generation
Applications: Stable Diffusion prompt enhancement
Primary Use Cases: question-answering, text2text-generation, conversational
Additional Notes: LoRA adapter included; various quantized versions available
Supported Languages: en (proficient)
Training Details
Data Sources: Nekochu/discord-unstable-diffusion-SD-prompts, glaiveai/glaive-function-calling-v2, TIGER-Lab/MathInstruct, Open-Orca/SlimOrca, GAIR/lima, sahil2801/CodeAlpaca-20k, garage-bAInd/Open-Platypus
Methodology: QLoRA fine-tuning with 4-bit quantization
Context Length: 4096
Hardware Used: 24 GB VRAM GPU, CUDA 12.1, Windows
Model Architecture: Llama
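The card is tagged "Alpaca", i.e. the fine-tune expects the Alpaca instruction template. A minimal sketch of building such a prompt for the model's headline use case (Stable Diffusion prompt enhancement) is shown below; the template wording is the standard Alpaca one and the instruction text is illustrative, not taken from the model card, so verify both against the repository before relying on them.

```python
def build_alpaca_prompt(instruction: str, user_input: str = "") -> str:
    """Format a request in the standard Alpaca instruction template.

    The exact preamble this fine-tune was trained on is an assumption
    (standard Alpaca wording); check the model card for the canonical form.
    """
    if user_input:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{user_input}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )


# Hypothetical usage: enhance a short description into SD metadata.
prompt = build_alpaca_prompt(
    "Create a detailed Stable Diffusion prompt from the given description.",
    "magical forest at dusk",
)
print(prompt)
```

The completion is then generated from everything after the final `### Response:` marker.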
LLM Name: Luminia 13B V3
Repository: 🤗 https://huggingface.co/Nekochu/Luminia-13B-v3
Model Name: Luminia 13B v3
Model Creator: Nekochu
Base Model(s): meta-llama/Llama-2-13b-chat-hf (Llama 2 13B Chat HF)
Model Size: 13b
Required VRAM: 26 GB
Updated: 2024-11-16
Maintainer: Nekochu
Model Type: llama2
Model Files: 7.4 GB, 6.3 GB, 7.9 GB, 7.4 GB, 9.9 GB (1-of-3), 9.9 GB (2-of-3), 6.2 GB (3-of-3)
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: q3|gguf|q4_k
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.38.1
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: float16
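The headline numbers above are consistent with simple back-of-the-envelope arithmetic: 13B parameters at float16 (2 bytes per weight) come to roughly 26 GB, matching the listed VRAM requirement, and a ~7.4 GB q4_k GGUF file works out to about 4.5 bits per weight. A sketch of that arithmetic (using decimal gigabytes; real files also carry tokenizer/metadata overhead, so treat these as estimates):

```python
PARAMS = 13e9  # nominal parameter count of a 13B model

# float16 footprint: 2 bytes per weight -> matches "Required VRAM: 26 GB"
fp16_gb = PARAMS * 2 / 1e9

# effective bits per weight of the ~7.4 GB q4_k GGUF shard
q4k_bits = 7.4e9 * 8 / PARAMS

print(round(fp16_gb, 1))   # 26.0
print(round(q4k_bits, 2))  # 4.55
```

The same arithmetic explains why the q3 files are smaller still: fewer bits per weight scales the file size almost linearly.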
Luminia 13B V3 (Nekochu/Luminia-13B-v3)

Best Alternatives to Luminia 13B V3

Best Alternatives | Context / RAM | Downloads | Likes
Llm Compiler 13B Ftd GGUF | 16K / 4.8 GB | 260 | 0
Llm Compiler 13B GGUF | 16K / 4.8 GB | 87 | 0
Llm Compiler 13B Ftd GGUF | 16K / 4.8 GB | 71 | 0
Llm Compiler 13B GGUF | 16K / 4.8 GB | 27 | 0
CodeLlama 13B Instruct GGUF | 16K / 5.4 GB | 181 | 2
Mythomax L2 13B Q4 K M GGUF | 4K / 8.1 GB | 11173 | 1
DiarizationLM 13B Fisher V1 | 4K / 26 GB | 378 | 9
HyperLlama2Test | 4K / 26 GB | 8 | 0
...V2 13B L2 BetaTest Q4 K M GGUF | 4K / 7.9 GB | 16 | 0
AppleSauce L2 13B | 4K / 26.7 GB | 1431 | 1
Note: a green score (e.g. "73.2") indicates that the model outperforms Nekochu/Luminia-13B-v3.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241110