Taiwan ELM 270M Instruct by liswei


Tags: Autotrain compatible, Base model: apple/openelm-270m, Base model: finetune:apple/open..., Conversational, Custom code, Datasets: liswei/promptpair-tw, liswei/taiwan-text-exc..., yentinglin/taiwanchat, Instruct, OpenELM, Region: us, Safetensors, TensorBoard, zh

Taiwan ELM 270M Instruct Benchmarks

Taiwan ELM 270M Instruct (liswei/Taiwan-ELM-270M-Instruct)

Taiwan ELM 270M Instruct Parameters and Internals

Model Type: text-generation
Use Cases:
- Areas: Research
- Applications: Text Generation
Additional Notes: The model is notable for its efficiency, making it feasible for researchers with limited computing resources.
Supported Languages: zh (Traditional Chinese)
Training Details:
- Data Sources: liswei/Taiwan-Text-Excellence-2B, liswei/PromptPair-TW, yentinglin/TaiwanChat
- Data Volume: 2B Traditional Chinese tokens and 500K instruction samples
- Methodology: Custom fork of LLaMA-Factory
LLM Name: Taiwan ELM 270M Instruct
Repository 🤗: https://huggingface.co/liswei/Taiwan-ELM-270M-Instruct
Base Model(s): liswei/Taiwan-ELM-270M (Taiwan ELM 270M), apple/OpenELM-270M (OpenELM 270M)
Model Size: 270M
Required VRAM: 1.2 GB
Updated: 2025-02-05
Maintainer: liswei
Model Type: openelm
Instruction-Based: Yes
Model Files: 1.2 GB, 0.0 GB
Supported Languages: zh
Model Architecture: OpenELMForCausalLM
License: apache-2.0
Transformers Version: 4.40.1
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 61758
Torch Data Type: float32
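Given the details above (OpenELMForCausalLM architecture, "Custom code" tag, LlamaTokenizer), a minimal, hypothetical loading sketch with Hugging Face transformers follows. The model id comes from the repository link; `trust_remote_code=True` is assumed to be needed because the card lists custom code, and the bare prompt wrapper below is an illustrative placeholder, not the model's documented chat format.

```python
# Sketch of loading liswei/Taiwan-ELM-270M-Instruct with Hugging Face
# transformers. trust_remote_code=True is assumed to be required since
# the card lists "Custom code" for the OpenELM architecture; generation
# settings are illustrative, not taken from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "liswei/Taiwan-ELM-270M-Instruct"

def build_prompt(instruction: str) -> str:
    # Hypothetical plain-text prompt wrapper; if the tokenizer ships a
    # chat template, tokenizer.apply_chat_template should be preferred.
    return f"{instruction}\n"

def generate(instruction: str, max_new_tokens: int = 128) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

At float32 (per the table above) the 270M-parameter weights occupy roughly 1.2 GB, so CPU-only inference is also practical for experimentation.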

Best Alternatives to Taiwan ELM 270M Instruct

Best Alternatives                      Context / RAM   Downloads   Likes
OpenELM 270M Instruct                  0K / 0.5 GB     11521       37
OpenELM 270M Instruct                  0K / 1.1 GB     131         0
...enELM270M DPO FINAL PREFERENCE      0K / 0.5 GB     132         0
M2 Oelm270m                            0K / 1.1 GB     131         0
...M 270M Instruct With Tokenizer      0K / 0.5 GB     132         0
Mlnp Project Base                      0K / 0.5 GB     133         0
Note: a green score (e.g., "73.2") means the model outperforms liswei/Taiwan-ELM-270M-Instruct.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227