Jambalpaca V0.1 by mlabonne


Tags: 4-bit, Adapter, Base model: ai21labs/Jamba-v0.1, Custom code, Finetuned, Generated from trainer, Jamba, License: apache-2.0, LoRA, PEFT, Region: us, Safetensors, Sharded, Tensorflow
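
Given the tags above (4-bit, Adapter, Base model: ai21labs/Jamba-v0.1, Custom code), the adapter is meant to be loaded on top of the base Jamba model. A minimal loading sketch using the standard transformers + peft APIs follows; the 4-bit BitsAndBytes settings, compute dtype, and device map are illustrative assumptions, not values published with this card.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import PeftModel

    # 4-bit quantization (assumption: typical BitsAndBytes settings for a 51.6B model)
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    # The "Custom code" tag implies Jamba requires trust_remote_code=True
    base = AutoModelForCausalLM.from_pretrained(
        "ai21labs/Jamba-v0.1",
        quantization_config=bnb_config,
        trust_remote_code=True,
        device_map="auto",
    )

    # Attach the LoRA adapter weights from this repository
    model = PeftModel.from_pretrained(base, "mlabonne/Jambalpaca-v0.1")
    tokenizer = AutoTokenizer.from_pretrained("mlabonne/Jambalpaca-v0.1")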

Jambalpaca V0.1 (mlabonne/Jambalpaca-v0.1)

Jambalpaca V0.1 Parameters and Internals

LLM Name: Jambalpaca V0.1
Repository: mlabonne/Jambalpaca-v0.1 (🤗 Hugging Face)
Base Model(s): ai21labs/Jamba-v0.1
Model Size: 51.6B parameters
Required VRAM: 103.4 GB (roughly 51.6B parameters at 2 bytes each in 16-bit precision)
Model Files: 21 safetensors shards (1-of-21 through 21-of-21, 4.7–5.0 GB each), plus auxiliary files of 0.5 GB, 1.1 GB, 0.5 GB, and two under 0.1 GB
Model Architecture: Adapter
Bias: none
Tokenizer Class: LlamaTokenizer
Padding Token: <|pad|>
LoRA Model: Yes
PEFT Target Modules: k_proj | dt_proj | router | o_proj | gate_proj | down_proj | in_proj | out_proj | x_proj | q_proj | v_proj | up_proj
LoRA Alpha: 32
LoRA Dropout: 0.05
LoRA Rank (r): 16
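
The PEFT entries above map one-to-one onto a peft LoraConfig. A reconstruction sketch is shown below; the values come straight from the table, while task_type is an assumption for a standard causal-LM fine-tune and is not stated in the card.

    from peft import LoraConfig

    lora_config = LoraConfig(
        r=16,                # LoRA Rank (r)
        lora_alpha=32,       # LoRA Alpha
        lora_dropout=0.05,   # LoRA Dropout
        bias="none",         # Bias: none
        target_modules=[     # PEFT Target Modules, split on "|"
            "k_proj", "dt_proj", "router", "o_proj",
            "gate_proj", "down_proj", "in_proj", "out_proj",
            "x_proj", "q_proj", "v_proj", "up_proj",
        ],
        task_type="CAUSAL_LM",  # assumption: not stated in the card
    )

The mix of attention projections (q/k/v/o), MLP projections (gate/up/down), Mamba-specific modules (dt_proj, in_proj, out_proj, x_proj), and the MoE router reflects Jamba's hybrid Transformer-Mamba mixture-of-experts architecture.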


Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024040901