Zephyr Quiklang 3B by Walmart-the-bag


Tags: causal-lm · conversational · custom code · feature-extraction · PyTorch · Safetensors · StableLM Epoch · region: us
Base model (finetune): stabilityai/stablelm-zephyr-3b
Datasets: teknium/openhermes, unalignment/toxic-dpo-v0.1

Zephyr Quiklang 3B Benchmarks

Scores ("nn.n%") show how the model compares to the reference models: Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4"). No benchmark scores are listed for Zephyr Quiklang 3B (Walmart-the-bag/zephyr-quiklang-3b).

Zephyr Quiklang 3B Parameters and Internals

Model Type: causal_lm

Training Details:
- Data sources: teknium/openhermes, unalignment/toxic-dpo-v0.1
- Data volume: 10,000 samples
- Context length: 1024 tokens
- Hardware used: 1x NVIDIA A6000 (48 GB)
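
As a rough illustration, the sketch below pulls the two listed data sources from the Hugging Face Hub and draws a subset matching the stated 10,000-sample data volume. The per-dataset split, shuffling seed, and any preprocessing are assumptions, not documented details of this model.

```python
from datasets import load_dataset

# The two data sources listed on the card.
sft = load_dataset("teknium/openhermes", split="train")          # instruction data
dpo = load_dataset("unalignment/toxic-dpo-v0.1", split="train")  # preference pairs

# The card reports 10,000 samples in total; how they were divided between
# the two sources is undocumented, so the split below is a guess.
sft_subset = sft.shuffle(seed=42).select(range(10_000 - len(dpo)))
print(f"{len(sft_subset)} SFT samples + {len(dpo)} DPO samples")
```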
LLM Name: Zephyr Quiklang 3B
Repository: https://huggingface.co/Walmart-the-bag/zephyr-quiklang-3b
Base Model(s): StableLM Zephyr 3B (stabilityai/stablelm-zephyr-3b)
Model Size: 3B
Required VRAM: 5.9 GB
Updated: 2025-02-05
Maintainer: Walmart-the-bag
Model Type: stablelm_epoch
Model Files: 5.9 GB, 5.9 GB, 0.0 GB
Model Architecture: StableLMEpochForCausalLM
License: other
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.34.1
Tokenizer Class: GPTNeoXTokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 50304
Torch Data Type: float16
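
Because the architecture is the custom StableLMEpochForCausalLM (the repo ships its own modeling code), loading through transformers requires trust_remote_code=True. Below is a minimal loading sketch; the Zephyr-style chat prompt format is assumed from the stablelm-zephyr-3b base model, so verify it against the repo's tokenizer config before relying on it.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Walmart-the-bag/zephyr-quiklang-3b"

# Custom StableLM-Epoch modeling code requires trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the float16 weights (~5.9 GB VRAM)
    trust_remote_code=True,
    device_map="auto",
)

# Zephyr-style chat prompt (assumed from the base model's format).
prompt = "<|user|>\nExplain context length in one sentence.<|endoftext|>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)  # well under the 4096-token limit
print(tokenizer.decode(out[0], skip_special_tokens=True))
```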

Quantized Models of the Zephyr Quiklang 3B

Model                      Likes   Downloads   VRAM
Zephyr Quiklang 3B GGUF    4       57          1 GB
Zephyr Quiklang 3B GPTQ    1       17          1 GB
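
The GGUF build can be run with llama-cpp-python, as in the sketch below. The exact .gguf filename inside the quantized repo is an assumption here; check the repo's file listing for the actual quantization variants.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# The .gguf filename is an assumption; pick a real file from the GGUF repo.
llm = Llama(
    model_path="zephyr-quiklang-3b.Q4_K_M.gguf",
    n_ctx=4096,  # model max length from the table above
)
out = llm("Summarize what a causal language model is.", max_tokens=64)
print(out["choices"][0]["text"])
```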

Best Alternatives to Zephyr Quiklang 3B

Best Alternatives          Context / RAM    Downloads   Likes
Stable Code 3B Mlx         16K / 5.6 GB     4           1
Aura 3B                    4K / 5.6 GB      4           2
Slim Tags 3B               4K / 5.6 GB      245         4
Slim Extract               4K / 5.6 GB      134         12
Slim Sa Ner                4K / 5.6 GB      140         6
Slim Xsum                  4K / 5.6 GB      142         6
Slim Boolean               4K / 5.6 GB      8           4
Slim Summary               4K / 5.6 GB      25          7
Salami 3B                  4K / 5.6 GB      119         1
Memphis Scribe 3B Alpha    4K / 5.6 GB      118         2
Note: a green score (e.g. "73.2") means the alternative outperforms Walmart-the-bag/zephyr-quiklang-3b.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227