RAFT Llama3 Quantized 5epochs 24 06 04 by Vipinap


Tags: Arxiv:1910.09700, 4-bit, Autotrain compatible, Bitsandbytes, Conversational, Endpoints compatible, Llama, Region:us, Safetensors, Sharded, Tensorflow, Unsloth

RAFT Llama3 Quantized 5epochs 24 06 04 Benchmarks

Benchmark scores ("nn.n%") show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Model evaluated: RAFT Llama3 Quantized 5epochs 24 06 04 (Vipinap/RAFT_llama3_quantized_5epochs_24_06_04)

RAFT Llama3 Quantized 5epochs 24 06 04 Parameters and Internals

Additional Notes
This is the model card of a 🤗 transformers model that has been pushed to the Hub. The model card was generated automatically; the model description and other specific sections still need to be filled in.
LLM Name: RAFT Llama3 Quantized 5epochs 24 06 04
Repository 🤗: https://huggingface.co/Vipinap/RAFT_llama3_quantized_5epochs_24_06_04
Model Size: 4.7b
Required VRAM: 5.8 GB
Updated: 2025-02-05
Maintainer: Vipinap
Model Type: llama
Model Files: 4.7 GB (1-of-2), 1.1 GB (2-of-2)
Model Architecture: LlamaForCausalLM
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.39.3
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|eot_id|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
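
Given the listed architecture (LlamaForCausalLM), 4-bit bitsandbytes quantization, bfloat16 data type, and 8192-token context, the checkpoint should load like any other quantized Llama 3 repository on the Hub. The snippet below is a minimal sketch rather than an official usage example from the maintainer: the repository id comes from the card above, while the quantization settings and the sample prompt are assumptions. If the repository already embeds its bitsandbytes quantization config, the explicit BitsAndBytesConfig can be dropped.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "Vipinap/RAFT_llama3_quantized_5epochs_24_06_04"

# Assumed quantization settings, based on the "4-bit" / "Bitsandbytes" tags
# and the listed bfloat16 torch dtype; not confirmed by the model card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=bnb_config,
    device_map="auto",  # place the ~5.8 GB of weights on the available GPU(s)
)

# Hypothetical prompt, only to show a round trip through tokenizer and model.
prompt = "Summarize the idea behind retrieval-augmented fine-tuning (RAFT)."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))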

Best Alternatives to RAFT Llama3 Quantized 5epochs 24 06 04

Best Alternatives | Context / RAM | Downloads | Likes
Hola | 8K / 5.8 GB | 77 | 0
Llama SciQ 4bits | 8K / 5.8 GB | 76 | 0
RAFT Llama3 Version 24.06.01 | 8K / 5.8 GB | 77 | 0
Llama 3 Merged Linear | 8K / 5.8 GB | 124 | 0
Llama3.1 Merged | 128K / 5.8 GB | 75 | 0
EPFL TA Meister 4bit | 8K / 5.8 GB | 95 | 0
Book 4bitV5 | 8K / 5.8 GB | 77 | 0
Book 4bit | 8K / 5.8 GB | 5 | 0
Llama3 Ko 4bit | 8K / 5.8 GB | 77 | 0
Tenebra PreAlpha 128g 4BIT | 2K / 17.6 GB | 9 | 0
Note: a green score (e.g. "73.2") means the model performs better than Vipinap/RAFT_llama3_quantized_5epochs_24_06_04.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227