SD3 Prompt Llama 8B by matrixglitch


Tags: Autoprompt, Autotrain compatible, Conversational, Endpoints compatible, Instruct, Llama, Llama 3, Prompt enhance, Region: US, Safetensors, SD3, Sharded, TensorFlow


SD3 Prompt Llama 8B Parameters and Internals

Additional Notes 
The model was trained for 1,500 steps; training for longer yielded diminishing returns because of the small batch size. Due to limited compute, it was not trained on the full dataset, but the maintainer reports strong results even so.
Training Details
Data Volume: 90k+ original prompts, 220k modified prompts
Training Time: 2.5 hours
Hardware Used: RTX 3090
LLM Name: SD3 Prompt Llama 8B
Repository: https://huggingface.co/matrixglitch/SD3_prompt-llama_8b
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-02-05
Maintainer: matrixglitch
Model Type: llama
Instruction-Based: Yes
Model Files: 2.0 GB (1-of-9), 1.9 GB (2-of-9), 2.0 GB (3-of-9), 1.9 GB (4-of-9), 2.0 GB (5-of-9), 1.9 GB (6-of-9), 2.0 GB (7-of-9), 1.3 GB (8-of-9), 1.1 GB (9-of-9)
Model Architecture: LlamaForCausalLM
License: llama3
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.41.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|eot_id|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
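The figures above admit a quick consistency check: the nine shards should sum to the stated 16.1 GB of required VRAM, which matches roughly 8B parameters stored in bfloat16 (2 bytes each). The sketch below verifies that arithmetic and builds a request in the standard Llama 3 instruct format, consistent with the card's llama3 license, instruction-based flag, and `<|eot_id|>` padding token. The system message and the commented-out loading snippet are illustrative assumptions, not documented usage from the maintainer.

```python
def build_llama3_prompt(
    user_message: str,
    # Hypothetical system message; the maintainer does not document one.
    system_message: str = "Expand the user's idea into a detailed Stable Diffusion 3 prompt.",
) -> str:
    """Format a request using the standard Llama 3 instruct template."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_message}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Shard sizes as listed on the card; they should total the 16.1 GB of
# required VRAM (~8B parameters * 2 bytes in bfloat16).
SHARD_SIZES_GB = [2.0, 1.9, 2.0, 1.9, 2.0, 1.9, 2.0, 1.3, 1.1]
TOTAL_GB = round(sum(SHARD_SIZES_GB), 1)  # 16.1

if __name__ == "__main__":
    print(build_llama3_prompt("a cat in a spacesuit"))
    print(f"Checkpoint size: {TOTAL_GB} GB")
    # Actual generation (needs ~16.1 GB VRAM plus the transformers library):
    # from transformers import AutoModelForCausalLM, AutoTokenizer
    # tok = AutoTokenizer.from_pretrained("matrixglitch/SD3_prompt-llama_8b")
    # model = AutoModelForCausalLM.from_pretrained(
    #     "matrixglitch/SD3_prompt-llama_8b", torch_dtype="bfloat16")
```

With the 8192-token context length listed above, both the instruction and the expanded prompt fit comfortably in a single request.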

Best Alternatives to SD3 Prompt Llama 8B

Best Alternatives                     | Context | RAM     | Downloads | Likes
...a 3 8B Instruct Gradient 1048K     | 1024K   | 16.1 GB | 6623      | 678
Mpasila Viking 8B                     | 1024K   | 16.1 GB | 59        | 0
16                                    | 1024K   | 16.1 GB | 169       | 0
Because Im Bored Nsfw1                | 1024K   | 16.1 GB | 66        | 1
12                                    | 1024K   | 16.1 GB | 60        | 0
MrRoboto ProLong 8B V4b               | 1024K   | 16.1 GB | 107       | 0
MrRoboto ProLong 8B V1a               | 1024K   | 16.1 GB | 108       | 0
MrRoboto ProLong 8B V2a               | 1024K   | 16.1 GB | 102       | 0
MrRoboto ProLong 8B V4c               | 1024K   | 16.1 GB | 87        | 0
8B Unaligned BASE V2b                 | 1024K   | 16.1 GB | 98        | 0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227