Llama 13B by nonlinearshimada


Autotrain compatible · Endpoints compatible · Llama · PyTorch · Region: us · Sharded

Llama 13B Benchmarks

nn.n% shows how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Llama 13B (nonlinearshimada/llama-13b)

Llama 13B Parameters and Internals

Model Type 
auto-regressive language model, transformer architecture
Use Cases 
Areas:
research, NLP exploratory tasks
Applications:
question answering, reading comprehension, natural language understanding
Primary Use Cases:
research on large language models, exploring potential applications
Limitations:
has not been trained with human feedback; can thus generate toxic or offensive content
Considerations:
Foundation model; it should not be used in downstream applications without further risk evaluation and mitigation.
Supported Languages 
primary (English), others (Spanish, French, German, Dutch, Italian, Portuguese, Russian, Chinese, etc.)
Training Details 
Data Sources:
CCNet, C4, GitHub, Wikipedia, Books, ArXiv, Stack Exchange
Data Volume:
Approximately 1T tokens for smaller models, 1.4T tokens for larger models
Model Architecture:
Transformer
Responsible AI Considerations 
Fairness:
Expected to reflect biases from its web-sourced training data; evaluated on RAI datasets for various biases.
Mitigation Strategies:
Web data was filtered for proximity to Wikipedia text using a Kneser-Ney language model and a fastText linear classifier (see the sketch below).
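
The mitigation step above can be illustrated with a small sketch. This is not the original LLaMA preprocessing code: the training-file name, labels, and threshold are hypothetical, and only the general technique, a fastText linear classifier scoring crawled documents for Wikipedia-likeness, follows the description.

# Hypothetical sketch of Wikipedia-proximity filtering with a fastText
# linear classifier; not the original LLaMA data pipeline.
import fasttext

# Assumed training file: one document per line, prefixed with
# "__label__wiki" (Wikipedia-like text) or "__label__web" (raw crawl text).
model = fasttext.train_supervised(input="wiki_vs_web.txt", epoch=5, wordNgrams=2)

def keep_document(text: str, threshold: float = 0.5) -> bool:
    # fastText's predict() rejects newlines, so collapse them first.
    labels, probs = model.predict(text.replace("\n", " "))
    return labels[0] == "__label__wiki" and probs[0] >= threshold

corpus = ["Example crawled paragraph one.", "Example crawled paragraph two."]
filtered = [doc for doc in corpus if keep_document(doc)]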
LLM Name: Llama 13B
Repository: 🤗 https://huggingface.co/nonlinearshimada/llama-13b
Model Size: 13b
Required VRAM: 42 GB
Updated: 2025-01-20
Maintainer: nonlinearshimada
Model Type: llama
Model Files: 42 sharded PyTorch files of 1.0 GB each (0-of-41 through 41-of-41)
Model Architecture: LLaMAForCausalLM
License: other
Transformers Version: 4.27.0.dev0
Vocabulary Size: 32000
Torch Data Type: float16
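
Given the metadata above (LLaMAForCausalLM architecture, sharded PyTorch files, float16 weights), the checkpoint should load with the standard Hugging Face transformers LLaMA classes. A minimal sketch, assuming the repository follows the usual converted-LLaMA layout; conversions from this era sometimes keep the pre-release class names (LLaMAForCausalLM / LLaMATokenizer) in their config and may need a matching transformers version such as the 4.27.0.dev0 listed above. The prompt is illustrative only.

# Minimal load-and-generate sketch for nonlinearshimada/llama-13b;
# assumes a standard converted-LLaMA repository layout (not verified).
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

repo = "nonlinearshimada/llama-13b"
tokenizer = LlamaTokenizer.from_pretrained(repo)
model = LlamaForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the float16 data type listed above
    device_map="auto",          # spreads the ~42 GB of shards across available GPUs (needs accelerate)
)

prompt = "Question: What is the capital of France?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))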

Best Alternatives to Llama 13B

Best Alternatives                   | Context / RAM | Downloads | Likes
Llm Jp 13B V2.0                     | 4K / 27.4 GB  | 270       | 15
Decapoda Research Llama 13B         | 0K / 41 GB    | 10        | 0
LIMA 13B                            | 0K / 42 GB    | 58        | 31
Alpaca 13B                          | 0K / 52.1 GB  | 1136      | 108
Vicuna                              | 2K / 0 GB     | 16        | 16
Llama 13B                           | 0K / 42 GB    | 16        | 3
... X Alpaca 13B Native 4bit 128g   | 0K / 7.9 GB   | 748       | 736
... X Alpaca 13B Native 4bit 128g   | 0K / 8.1 GB   | 10        | 2
Llama 13B 4bit Hf                   | 0K / 7 GB     | 16        | 2
Llama 13B 4bit Gr128                | 0K / 7.5 GB   | 9         | 2
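
A rough way to read the RAM column above and the Required VRAM figure earlier: raw weight size is parameter count times bytes per weight, and actual requirements add overhead for activations, the KV cache, and how the shards are stored. A back-of-the-envelope helper (the numbers are rough estimates, not measurements):

def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    # Approximate size of the raw weights in GB (1 GB = 1e9 bytes).
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(weights_gb(13, 16))  # ~26 GB of float16 weights for a 13B model
print(weights_gb(13, 4))   # ~6.5 GB at 4-bit, in line with the 7-8 GB entries above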

Rank the Llama 13B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227