Alpaca 13B by chavinlo


  Autotrain compatible   Endpoints compatible   Llama   Pytorch   Region:us   Sharded
Model Card on HF 🤗: https://huggingface.co/chavinlo/alpaca-13b

Alpaca 13B Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Alpaca 13B (chavinlo/alpaca-13b)

Alpaca 13B Parameters and Internals

Model Type 
text generation, instruction-following
Use Cases 
Areas:
research, education
Applications:
instruction following tasks, text completion
Primary Use Cases:
Converting instructions to formal commands, summarizing lengthy texts
Limitations:
May not reliably handle nuanced or ethical dilemmas; should not be used for high-stakes decisions without human oversight
Considerations:
Ensure outputs are verified by a human for critical applications.
Additional Notes 
Trained without the LoRA tuning strategy (native full fine-tuning), which simplifies deployment.
Training Details 
Data Sources:
instruction-following data generated with the OpenAI API (the Stanford Alpaca self-instruct dataset); see the illustrative records after this section
Methodology:
Fine-tuning using instruction-following data
Context Length:
2048
Hardware Used:
NVIDIA A100 GPUs
Model Architecture:
Transformer
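For reference, records in the Stanford Alpaca dataset are JSON objects with instruction, input, and output fields. Below is a minimal sketch of that layout; the field names match the Alpaca format, but the contents are invented for illustration:

```python
# Alpaca-style instruction-following records. Field names
# (instruction/input/output) follow the Stanford Alpaca dataset format;
# the contents are made up for illustration.
alpaca_records = [
    {
        "instruction": "Summarize the following paragraph in one sentence.",
        "input": "Large language models are neural networks trained on large text corpora ...",
        "output": "Large language models learn language patterns from large text corpora.",
    },
    {
        "instruction": "List three everyday uses of machine translation.",
        "input": "",  # records without extra context leave "input" empty
        "output": "1. Translating websites\n2. Subtitling videos\n3. Reading foreign-language menus",
    },
]
```

During fine-tuning, each record is rendered into a single training sequence using the prompt template described in the Input Output section below.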
Safety Evaluation 
Methodologies:
red teaming, bias evaluations
Risk Categories:
bias, misinformation
Ethical Considerations:
Intended to reduce bias and ensure safe outputs
Responsible AI Considerations 
Fairness:
The model aims to mitigate biases present in the data.
Transparency:
The model weights and code are openly available for audit.
Accountability:
Stanford University is accountable for developing and releasing the model.
Mitigation Strategies:
Continuous monitoring, with updates to evaluation thresholds and training datasets to improve fairness.
Input Output 
Input Format:
Text prompt following the Alpaca instruction template (see the sketch below)
Accepted Modalities:
text
Output Format:
Text completions or responses that follow the instruction
Performance Tips:
Tailor prompts to reduce ambiguity and elicit coherent responses.
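The standard Alpaca template wraps the instruction (and an optional input) in a fixed preamble with "### Instruction:" and "### Response:" markers. Here is a minimal sketch of that template; whether chavinlo/alpaca-13b was trained on exactly this wording is an assumption worth verifying against the repository:

```python
def build_alpaca_prompt(instruction: str, context: str = "") -> str:
    """Build a prompt in the stock Stanford Alpaca instruction format.

    Assumption: chavinlo/alpaca-13b expects the standard Alpaca template;
    check the model card before relying on this exact wording.
    """
    if context:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )


print(build_alpaca_prompt("Summarize the paragraph below in one sentence.",
                          "Large language models are ..."))
```

Keeping the template identical to the one used in training generally yields more coherent completions than free-form prompting.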
Release Notes 
Version:
0.1.0
Date:
2023-10-01
Notes:
Initial release without LoRA adaptation. Focuses on efficiency improvements.
LLM Name: Alpaca 13B
Repository 🤗: https://huggingface.co/chavinlo/alpaca-13b
Model Size: 13b
Required VRAM: 52.1 GB
Updated: 2024-12-14
Maintainer: chavinlo
Model Type: llama
Model Files: 10.0 GB (1-of-6), 9.9 GB (2-of-6), 9.9 GB (3-of-6), 9.9 GB (4-of-6), 9.9 GB (5-of-6), 2.5 GB (6-of-6), 0.0 GB
Model Architecture: LLaMAForCausalLM
Model Max Length: 512
Transformers Version: 4.27.0.dev0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32001
Torch Data Type: float32
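The float32 storage explains the 52.1 GB VRAM figure: roughly 13e9 parameters x 4 bytes is about 52 GB, spread over the six shards listed above. Below is a minimal loading sketch, assuming a recent transformers release (the card lists 4.27.0.dev0, but newer versions also load this checkpoint) and accelerate installed for device_map:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "chavinlo/alpaca-13b"

tokenizer = AutoTokenizer.from_pretrained(repo)
# Downcasting the fp32 shards to fp16 at load time roughly halves memory:
# 13e9 params x 2 bytes ~ 26 GB instead of ~52 GB in float32.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate; spreads layers across GPUs
)

# Prompt in the Alpaca instruction format shown earlier.
prompt = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\nName three uses of a 13B instruction model.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```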

Best Alternatives to Alpaca 13B

Best Alternatives | Context / RAM | Downloads | Likes
Llm Jp 13B V2.0 | 4K / 27.4 GB | 259 | 14
Decapoda Research Llama 13B | 0K / 41 GB | 37 | 0
LIMA 13B | 0K / 42 GB | 766 | 1
Llama 13B | 0K / 42 GB | 11 | 1
Vicuna2 | 0K / 0 GB | 61 | 6
Llama 13B | 0K / 42 GB | 12 | 3
... X Alpaca 13B Native 4bit 128g | 0K / 7.9 GB | 992 | 735
... X Alpaca 13B Native 4bit 128g | 0K / 8.1 GB | 15 | 2
Llama 13B 4bit Hf | 0K / 7 GB | 23 | 2
Llama 13B 4bit Gr128 | 0K / 7.5 GB | 21 | 2
Note: a green score (e.g. "73.2") means the model performs better than chavinlo/alpaca-13b.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124