Tinyllama Ja Wikipedia Databricks Dolly V0.1 by yuiseki


Tags: Arxiv:1910.09700, Autotrain compatible, Dataset:izumi-lab/wikipedia-ja..., Dataset:kunishou/databricks-do..., Endpoints compatible, F16, Ggml, Gguf, Ja, Llama, Q4, Quantized, Region:us, Safetensors

Tinyllama Ja Wikipedia Databricks Dolly V0.1 Benchmarks

Benchmark scores for Tinyllama Ja Wikipedia Databricks Dolly V0.1 (yuiseki/tinyllama-ja-wikipedia-databricks-dolly-v0.1) are reported as percentages (nn.n%) relative to the reference models Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Tinyllama Ja Wikipedia Databricks Dolly V0.1 Parameters and Internals

LLM Name: Tinyllama Ja Wikipedia Databricks Dolly V0.1
Repository: https://huggingface.co/yuiseki/tinyllama-ja-wikipedia-databricks-dolly-v0.1
Model Size: 1.1b
Required VRAM: 0.7 GB
Updated: 2025-02-22
Maintainer: yuiseki
Model Type: llama
Model Files: 2.2 GB, 2.2 GB, 0.7 GB
Supported Languages: ja
GGML Quantization: Yes
GGUF Quantization: Yes
Quantization Type: gguf|ggml|q4|q4_k
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.39.1
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: float16
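
The listed internals (LlamaForCausalLM architecture, LlamaTokenizer, float16 weights, 2048-token context) map directly onto the standard Hugging Face transformers loading path. Below is a minimal sketch assuming the repository's safetensors weights load with AutoModelForCausalLM; the prompt and generation settings are illustrative, not values published by the maintainer.

    # Minimal sketch: load the fp16 weights with the standard transformers API.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "yuiseki/tinyllama-ja-wikipedia-databricks-dolly-v0.1"

    tokenizer = AutoTokenizer.from_pretrained(repo_id)   # LlamaTokenizer, 32000-token vocabulary
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        torch_dtype=torch.float16,                       # matches the listed Torch Data Type
    )

    prompt = "日本の首都はどこですか？"                     # illustrative Japanese instruction
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)   # stays well under the 2048-token context
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

If the full fp16 weights are more than you need, the 0.7 GB q4 GGUF file listed above can instead be run with a llama.cpp-compatible runtime.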

Best Alternatives to Tinyllama Ja Wikipedia Databricks Dolly V0.1

Best Alternatives                          Context / RAM    Downloads   Likes
TinyLlama 1.1B Chat V0.6                   2K / 0.6 GB      163499      7
Medical Chatbot                            2K / 0.7 GB      111         0
...u Chatvector Mlx Lm Chatalpaca          2K / 0.9 GB      105         0
...on Ja Wikipedia Amenokaku V0.1          2K / 0.7 GB      202         0
...nyllama Coder Wizardlm En V0.1          2K / 0.7 GB      176         1
...a Coder Math Ja Wikipedia V0.1          2K / 0.7 GB      167         1
...inyllama Coder Dolphin En V0.1          2K / 0.7 GB      162         1
...ma Coder Python En Alpaca V0.1          2K / 0.7 GB      158         0
...Coder Python Ja Amenokaku V0.1          2K / 0.7 GB      142         0
...ama Sentiment Analyzer En V0.1          2K / 0.7 GB      139         0

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227