Qllama .5B Base Wiki Chat RAG by Josephgflowers


Tags: Autotrain compatible, Conversational, Endpoints compatible, Llama, Region: us, Safetensors


Qllama .5B Base Wiki Chat RAG Parameters and Internals

Model Type: text generation, Q&A

Use Cases:
- Areas: research, commercial applications
- Limitations: verbose, due to training on Wikipedia Q&A
- Considerations: fine-tuning on RAG, function-calling, programming, or assistant datasets is recommended for best performance.

Additional Notes: The next model will focus on RAG; this model is currently only adequate at RAG.

Training Details:
- Data Sources: Wikipedia Q&A, Tiny-textbooks, Cosmopedia 100k, Cinder, general RAG datasets, a medical RAG dataset, math chat datasets, and conversation datasets such as Hermes 1, FastChat, Synthia, Capybara, and Puffin
- Methodology: fine-tuned on wiki, math, science, and chat datasets
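The card does not document the model's exact chat or RAG template, so the layout below is a hypothetical sketch of how a retrieval-augmented prompt for a small chat model might be assembled (the `build_rag_prompt` helper and its format are illustrative assumptions, not the model's documented template):

```python
def build_rag_prompt(question, passages):
    """Assemble a simple retrieval-augmented prompt: retrieved
    passages first as context, then the user's question."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Use the following context to answer the question.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_rag_prompt(
    "What is the capital of France?",
    ["Paris is the capital and largest city of France."],
)
```

In practice the retrieved passages would come from whatever retriever the application uses; the point is only that context precedes the question so the model answers from it.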
LLM Name: Qllama .5B Base Wiki Chat RAG
Repository: https://huggingface.co/Josephgflowers/Qllama-.5B-Base-Wiki-Chat-RAG
Model Size: 464M
Required VRAM: 0 GB
Updated: 2025-02-22
Maintainer: Josephgflowers
Model Type: llama
Model Files: 0.9 GB, 0.0 GB, 0.0 GB, 0.0 GB
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.41.0.dev0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: float16
Errors: replace
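The 4096-token context length listed above bounds how much retrieved text plus generated output fits in a single pass, which matters for RAG prompts stuffed with passages. A minimal sketch of budgeting generation room inside that window (the `generation_budget` helper is an illustrative assumption, not part of the model card):

```python
CONTEXT_LENGTH = 4096  # context length from the spec table above

def generation_budget(prompt_tokens, context_length=CONTEXT_LENGTH):
    """Return how many new tokens can still be generated once the
    prompt occupies `prompt_tokens` of the context window."""
    return max(0, context_length - prompt_tokens)

# A 3000-token RAG prompt leaves room for 1096 new tokens;
# a prompt at or beyond the window leaves none.
budget = generation_budget(3000)
```

A value like this would typically be passed as `max_new_tokens` when generating, so long RAG contexts do not silently truncate the answer.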

Best Alternatives to Qllama .5B Base Wiki Chat RAG

Best Alternatives              Context / RAM   Downloads   Likes
Qllama Tiny .5B Test 1         4K / 0 GB       65          0
Qllama .5B RAG 1               4K / 0 GB       79          2
Core1 Base 464M C4             4K / 0.9 GB     177         0
Core1 Base 464M Redpajama      4K / 0.9 GB     169         1


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227