Stablelm Tuned Alpha 7B by stabilityai


Autotrain compatible   Dataset: Dahoas/full-hh-rlhf   Dataset: dmayhem93/ChatCombined   Dataset: HuggingFaceH4/databricks_dolly_15k   Dataset: jeffwan/sharegpt_vicuna   Dataset: nomic-ai/gpt4all_prompt_generations   Dataset: tatsu-lab/alpaca   En   Endpoints compatible   GPT-NeoX   PyTorch   Region: US   Sharded

Stablelm Tuned Alpha 7B Benchmarks

Stablelm Tuned Alpha 7B (stabilityai/stablelm-tuned-alpha-7b)
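The metadata shown on this page can also be pulled live from the Hugging Face Hub. A minimal sketch using the huggingface_hub client (assumes a recent huggingface_hub release and network access; live counts will differ from this snapshot):

    # Query live model metadata from the Hugging Face Hub.
    from huggingface_hub import model_info

    info = model_info("stabilityai/stablelm-tuned-alpha-7b")
    print(info.tags)       # dataset/license/library tags, as listed above
    print(info.downloads)  # live download count (differs from this snapshot)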

Stablelm Tuned Alpha 7B Parameters and Internals

Model Type
causal-lm
Use Cases
Areas: research, commercial applications
Limitations: not all biases and toxicity can be mitigated; the model should not substitute for human judgment
Considerations: use must comply with the CC BY-NC-SA-4.0 license
Additional Notes
The model is designed to be helpful and harmless, supporting users while adhering to safety guidelines.
Supported Languages
en (fluent)
Training Details
Data Sources: dmayhem93/ChatCombined, tatsu-lab/alpaca, nomic-ai/gpt4all_prompt_generations, Dahoas/full-hh-rlhf, jeffwan/sharegpt_vicuna, HuggingFaceH4/databricks_dolly_15k
Context Length: 4096
Model Architecture: GPT-NeoX transformer
Input Output
Accepted Modalities: text
Output Format: text
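As a concrete illustration of the text-in/text-out interface, here is a minimal generation sketch with Hugging Face Transformers. It assumes the <|USER|>/<|ASSISTANT|> turn markers documented in the upstream model card, and loads the weights in half precision to reduce the float32 footprint listed below; device_map="auto" additionally requires the accelerate package.

    # Minimal sketch: load the tuned checkpoint and generate one reply.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "stabilityai/stablelm-tuned-alpha-7b"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(
        repo, torch_dtype=torch.float16, device_map="auto"  # needs accelerate
    )

    # Turn format from the upstream model card: <|USER|> ... <|ASSISTANT|>
    prompt = "<|USER|>Write a haiku about open-source language models.<|ASSISTANT|>"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))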
LLM Name: Stablelm Tuned Alpha 7B
Repository: https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b
Model Size: 7B
Required VRAM: 31.9 GB
Updated: 2025-02-22
Maintainer: stabilityai
Model Type: gpt_neox
Model Files: 9.8 GB (1-of-4), 9.8 GB (2-of-4), 9.8 GB (3-of-4), 2.5 GB (4-of-4)
Supported Languages: en
Model Architecture: GPTNeoXForCausalLM
License: cc-by-nc-sa-4.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.28.1
Tokenizer Class: GPTNeoXTokenizer
Vocabulary Size: 50432
Torch Data Type: float32
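The 31.9 GB VRAM requirement is consistent with storing roughly 7B float32 parameters. A back-of-the-envelope check (the 7e9 parameter count is an assumption, not a figure from this page):

    # Rough weight-memory estimate; excludes activations and KV cache.
    n_params = 7e9  # assumed; exact parameter count is not listed here
    for dtype, bytes_per_param in (("float32", 4), ("float16", 2)):
        print(f"{dtype}: ~{n_params * bytes_per_param / 1e9:.0f} GB")
    # float32: ~28 GB, plus buffers/overhead -> the 31.9 GB listed above
    # float16: ~14 GB -> compare the 16 GB quoted for the 16-bit variant below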

Quantized Models of the Stablelm Tuned Alpha 7B

Model | Likes | Downloads | VRAM
Stablelm Tuned Alpha 7B 16bit | 5 | 12 | 16 GB

Best Alternatives to Stablelm Tuned Alpha 7B

Best Alternatives | Context / RAM | Downloads | Likes
Literature 7B 16384 | 16K / 36 GB | 17 | 14
RedPajama 7B 16384 | 16K / 36 GB | 14 | 4
Stablelm Base Alpha 7B | 4K / 31.9 GB | 2613 | 209
Stablelm 7B Sft V7 Epoch 3 | 4K / 32.4 GB | 2025 | 67
StableLManticore 7B | 4K / 16 GB | 3 | 1
Pythia 6.9B Deduped 4K | 4K / 27.2 GB | 91 | 0
Stablelm 7B | 4K / 31.9 GB | 7 | 2
Sarashina1 7B | 2K / 13.9 GB | 338 | 0
Dolly V2 7B | 2K / 13.8 GB | 11366 | 149
Open Calm 7B | 2K / 13.9 GB | 3471 | 206



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227