Llama 3 8B Instruct Gradient 1048K GGUF by second-state


Tags: Autotrain compatible, Base model: gradientai/llama-3-..., Base model (quantized): gradienta..., Conversational, En, GGUF, Instruct, Llama, Llama-3, Meta, Q2, Quantized, Region: us

Llama 3 8B Instruct Gradient 1048K GGUF Benchmarks

nn.n% indicates how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Llama 3 8B Instruct Gradient 1048K GGUF Parameters and Internals

Model Type: llama
Additional Notes: Quantized with llama.cpp b2734 by Second State Inc.
Supported Languages: en (Full)
Input/Output
Input Format:
<|begin_of_text|><|start_header_id|>system<|end_header_id|> {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Accepted Modalities: text
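As a practical illustration, the sketch below shows one way to run a downloaded quant of this model locally with llama-cpp-python using the Llama 3 prompt format shown above. The file name, context size, and generation settings are assumptions for the example, not values taken from this page; pick the quant and context window that fit your hardware.

# Minimal sketch (assumptions noted): run a GGUF quant of
# Llama-3-8B-Instruct-Gradient-1048k with llama-cpp-python.
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-8B-Instruct-Gradient-1048k-Q4_K_M.gguf",  # hypothetical local file
    n_ctx=8192,        # the model accepts up to 1,048,576 tokens, but long contexts need far more memory
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
    chat_format="llama-3",  # apply the Llama 3 chat template shown above
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a 1M-token context window is useful for."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])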
LLM Name: Llama 3 8B Instruct Gradient 1048K GGUF
Repository: https://huggingface.co/second-state/Llama-3-8B-Instruct-Gradient-1048k-GGUF
Model Name: Llama-3-8B-Instruct-Gradient-1048k
Model Creator: gradient.ai
Base Model(s): gradientai/Llama-3-8B-Instruct-Gradient-1048k
Model Size: 8B
Required VRAM: 3.2 GB
Updated: 2024-12-22
Maintainer: second-state
Model Type: llama
Instruction-Based: Yes
Model Files: 3.2 GB, 4.3 GB, 4.0 GB, 3.7 GB, 4.7 GB, 4.9 GB, 4.7 GB, 5.6 GB, 5.7 GB, 5.6 GB, 6.6 GB, 8.5 GB, 16.1 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf|q2|q4_k|q5_k
Model Architecture: LlamaForCausalLM
License: other
Context Length: 1048576
Model Max Length: 1048576
Transformers Version: 4.39.1
Vocabulary Size: 128256
Torch Data Type: bfloat16
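The repository listed above ships multiple GGUF quants (the file sizes range from 3.2 GB to 16.1 GB). A hedged sketch of fetching one of them with the huggingface_hub client follows; the exact file name is an assumption, so list the repository files first and pick the quant that fits your VRAM.

# Minimal download sketch using huggingface_hub (pip install huggingface_hub).
# The GGUF file name below is hypothetical; confirm it against the repo listing.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "second-state/Llama-3-8B-Instruct-Gradient-1048k-GGUF"

# See which quant files the repository actually contains.
for name in list_repo_files(repo_id):
    print(name)

# Download one quant (assumed name); the path of the cached file is returned.
local_path = hf_hub_download(
    repo_id=repo_id,
    filename="Llama-3-8B-Instruct-Gradient-1048k-Q4_K_M.gguf",
)
print("Downloaded to", local_path)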

Best Alternatives to Llama 3 8B Instruct Gradient 1048K GGUF

Best Alternatives | Context / RAM | Downloads / Likes
...truct Gradient 1048K IMat GGUF | 1024K / 2 GB | 2886
Llama 3 8B Instruct 262K GGUF | 256K / 3.2 GB | 1682
... 8B Instruct Reasoner 1o1 V0.3 | 128K / 16.1 GB | 2247
...lama 3.1 Cantonese 8B Instruct | 128K / 16.1 GB | 6385
ProductLlama V2 | 128K / 16.1 GB | 210
Alpha R S V2 Q8 0 GGUF | 39K / 8.5 GB | 90
SmolTulu 1.7B Instruct | 8K / 3.4 GB | 32413
Llama 3 Cantonese 8B Instruct | 8K / 16.1 GB | 8435
...ama3 8B Chinese Chat GGUF 8bit | 8K / 8.5 GB | 699166
...lama3 8B Chinese Chat GGUF F16 | 8K / 16.1 GB | 466627
Note: A green score (e.g. "73.2") means the alternative is better than second-state/Llama-3-8B-Instruct-Gradient-1048k-GGUF.

Rank the Llama 3 8B Instruct Gradient 1048K GGUF Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v20241217