Llama CBT 500 by MandTDigital


Tags: 4bit · Autotrain compatible · Endpoints compatible · GGUF · License: MIT · Llama · Quantized · Region: US · Safetensors · Sharded · Tensorflow

Rank the Llama CBT 500 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Llama CBT 500 (MandTDigital/Llama_CBT_500)

Best Alternatives to Llama CBT 500

Best Alternatives                  | Context / VRAM | HF Rank
...truct Gradient 1048K IMat GGUF  | 1024K / 2 GB   | 4966
...B Instruct Gradient 1048K GGUF  | 1024K / 3.2 GB | 3443
Unhinged Llama3 8B 524K            | 512K / 26.5 GB | 500
Llama 3 8B Instruct 262K GGUF      | 256K / 3.2 GB  | 2932
Alpha R S V2 Q8 0 GGUF             | 39K / 8.5 GB   | 500
Model                              | 8K / 1.2 GB    | 380
OpenBioLLM Llama3 8B GGUF          | 8K / 3.2 GB    | 386932
Hermes 2 Pro Llama 3 8B GGUF       | 8K / 3.2 GB    | 17622
Llama 3 Instruct 8B SimPO GGUF     | 8K / 3.2 GB    | 2830
Llama 3 Instruct 8B SimPO GGUF     | 8K / 3.2 GB    | 2360

Llama CBT 500 Parameters and Internals

LLM Name: Llama CBT 500
Repository: Open on 🤗
Model Size: 8b
Required VRAM: 1.2 GB
Model Type: llama
Model Files: 5.0 GB (1-of-4) · 5.0 GB (2-of-4) · 4.9 GB (3-of-4) · 1.2 GB (4-of-4) · 8.5 GB
GGUF Quantization: Yes
Quantization Type: gguf|4bit
Model Architecture: LlamaForCausalLM
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.42.3
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|reserved_special_token_250|>
Vocabulary Size: 128256
Initializer Range: 0.02
Torch Data Type: bfloat16
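The Model Files row lists four safetensors shards plus one standalone 8.5 GB file. A minimal sanity check, sketched in Python, that the shard total is consistent with the rest of the card: an 8b-parameter model stored in bfloat16 (2 bytes per parameter, per the Torch Data Type row) should occupy roughly 16 GB on disk. The file sizes below are copied from the table; everything else is arithmetic.

```python
# Shard sizes from the "Model Files" row of the card (assumed accurate).
safetensors_shards_gb = [5.0, 5.0, 4.9, 1.2]

# Full-precision checkpoint footprint: sum of the four shards.
total_safetensors_gb = sum(safetensors_shards_gb)
print(f"safetensors total: {total_safetensors_gb:.1f} GB")

# Expected size for 8e9 parameters at 2 bytes each (bfloat16).
approx_bf16_gb = 8e9 * 2 / 1e9
print(f"expected bf16:     {approx_bf16_gb:.1f} GB")
```

The shard total (16.1 GB) lines up with the ~16 GB expected for an 8B bfloat16 checkpoint, which suggests the shards are the unquantized weights and the separate 8.5 GB file is a different artifact (the card does not label it).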


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801