Tinyllama Alpaca Cthulhu Small by theprint


Tags: 4bit, Base model:quantized:unsloth/t..., Base model:unsloth/tinyllama-b..., Conversational, En, Endpoints compatible, Gguf, Llama, Lora, Pytorch, Quantized, Region:us, Safetensors, Sft, Trl, Unsloth

Tinyllama Alpaca Cthulhu Small Benchmarks

Scores shown as nn.n% indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Tinyllama Alpaca Cthulhu Small (theprint/tinyllama_alpaca_cthulhu_small)

Tinyllama Alpaca Cthulhu Small Parameters and Internals

Model Type: text-generation
Additional Notes: This model serves as a proof of concept for a larger model to be trained later.
Training Details:
Data Sources: alpaca-cleaned, modified to a Cthulhu context
Data Volume: 10k entries
Methodology: Fine-tuned with Unsloth and Hugging Face's TRL library; a sketch of this setup is shown below
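
The methodology above can be illustrated with a minimal Unsloth + TRL fine-tuning sketch. The base model, maximum sequence length, and LoRA settings below are taken from this page; the dataset id, prompt template, and training hyperparameters are placeholders (the card only describes the data as alpaca-cleaned modified to a Cthulhu context), and exact argument names vary across Unsloth/TRL versions.

```python
# A minimal sketch of the described fine-tune, not the author's published script.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

max_seq_length = 4096  # "Model Max Length" listed for this model

# 1. Load the 4-bit TinyLlama base named on this page.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# 2. Attach LoRA adapters; r, alpha, dropout and target modules mirror the
#    PEFT settings reported further down this page.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=32,
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# 3. Placeholder dataset: the card only says "alpaca-cleaned, modified to
#    Cthulhu context" (~10k entries); the actual modified dataset is not named.
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:10000]")

def to_prompt(example):
    # Assumed Alpaca-style template; the real training template is not documented.
    text = f"### Instruction:\n{example['instruction']}\n\n"
    if example["input"]:
        text += f"### Input:\n{example['input']}\n\n"
    text += f"### Response:\n{example['output']}" + tokenizer.eos_token
    return {"text": text}

dataset = dataset.map(to_prompt)

# 4. Supervised fine-tuning with TRL (argument names follow the 2024-era API).
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
model.save_pretrained("tinyllama_alpaca_cthulhu_small")  # saves the LoRA adapter
```
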
Release Notes:
Version: 5/3/24 update
Notes: Additional training was done and GGUF files were uploaded.
LLM Name: Tinyllama Alpaca Cthulhu Small
Repository: https://huggingface.co/theprint/tinyllama_alpaca_cthulhu_small
Base Model(s): Tinyllama Bnb 4bit (unsloth/tinyllama-bnb-4bit)
Required VRAM: 0.1 GB
Updated: 2025-02-22
Maintainer: theprint
Model Files: 0.1 GB, 2.2 GB, 2.2 GB, 0.4 GB, 0.6 GB, 0.5 GB, 0.7 GB, 0.9 GB, 1.2 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf|4bit
Model Architecture: AutoModel
License: apache-2.0
Model Max Length: 4096
Is Biased: none
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: q_proj|o_proj|k_proj|v_proj|gate_proj|up_proj|down_proj
LoRA Alpha: 32
LoRA Dropout: 0
R Param: 32
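
Given the adapter metadata above, a minimal inference sketch would look like the following. It assumes the small (0.1 GB) safetensors file in the repository is the LoRA adapter and that it loads cleanly on top of the listed 4-bit base; the prompt template is an Alpaca-style guess, and the GGUF files in the repository would instead be run with llama.cpp-compatible tooling rather than this code.

```python
# A minimal sketch, not the author's published recipe: load the LoRA adapter on
# top of the 4-bit TinyLlama base listed on this page (requires transformers,
# peft, bitsandbytes and accelerate).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/tinyllama-bnb-4bit"                  # Base Model(s) above
adapter_id = "theprint/tinyllama_alpaca_cthulhu_small"  # this repository

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Alpaca-style prompt; the exact template used in training is not documented here.
prompt = "### Instruction:\nDescribe the sunken city of R'lyeh.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
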

Best Alternatives to Tinyllama Alpaca Cthulhu Small

Best Alternatives | Context / RAM | Downloads / Likes
ComicBot V.2 Gguf | 32K / 5 GB | 1520
Phi 2 GGUF | 0K / 1.2 GB | 3703103197
Marco O1 GGUF | 0K / 3 GB | 75312
...ixtral 8x7B Instruct V0.1 GGUF | 0K / 15.6 GB | 31035611
Mixtral 8x7B V0.1 GGUF | 0K / 15.6 GB | 6904424
Futfut By Zephyr7b Gguf | 0K / 5.1 GB | 8312
Dolphin 2.5 Mixtral 8x7b GGUF | 0K / 15.6 GB | 8203301
Dolphin 2.7 Mixtral 8x7b GGUF | 0K / 15.6 GB | 10762138
GOAT Llama3.1 V0.1 | 0K / 0.2 GB | 163
Phi 3 Mini 4K Instruct GGUF | 0K / 1.4 GB | 96116
Note: a green score (e.g. "73.2") indicates that the model is better than theprint/tinyllama_alpaca_cthulhu_small.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227