Merge Llama3 Adapter Shan by NorHsangPha


Tags: Merged Model, Autotrain compatible, Conversational, Dataset:norhsangpha/oasst1 sha..., Endpoints compatible, F16, Facebook, Ggml, Gguf, Instruct, License:other, Llama, Llama-3, Meta, Pytorch, Q4, Quantized, Region:us, Safetensors, Sharded, Shn, Tensorflow


Best Alternatives to Merge Llama3 Adapter Shan

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...3 Empower Functions Small Gguf | 8K / 4.9 GB | 150 | 0 |
| Llama 8B Gguf | 8K / 4.9 GB | 33 | 0 |
| Llama 3 8B Instruct Chinese | 8K / 16.1 GB | 1107 | 28 |
| Llama 3 8B Instruct | 8K / 16.1 GB | 465 | 38 |
| Rag Tge Pl Llama 3 8B | 8K / 16.1 GB | 38 | 0 |
| ...truct Gradient 1048K IMat GGUF | 1024K / 2 GB | 496 | 6 |
| ...B Instruct Gradient 1048K GGUF | 1024K / 3.2 GB | 344 | 3 |
| Llama 3 8B Instruct 262K GGUF | 256K / 3.2 GB | 293 | 2 |
| Alpha R S V2 Q8 0 GGUF | 39K / 8.5 GB | 50 | 0 |
| Llama 3 Instruct 8B SimPO GGUF | 8K / 3.2 GB | 283 | 0 |

Merge Llama3 Adapter Shan Parameters and Internals

| Parameter | Value |
|---|---|
| LLM Name | Merge Llama3 Adapter Shan |
| Repository | NorHsangPha/merge_llama3_adapter_Shan (open on 🤗 Hugging Face) |
| Merged Model | Yes |
| Model Size | 8b |
| Required VRAM | 16.1 GB |
| Updated | 2024-07-13 |
| Maintainer | NorHsangPha |
| Model Type | llama |
| Instruction-Based | Yes |
| Model Files | 4.9 GB; 16.1 GB; 5.0 GB (1-of-4); 5.0 GB (2-of-4); 4.9 GB (3-of-4); 1.2 GB (4-of-4) |
| GGML Quantization | Yes |
| GGUF Quantization | Yes |
| Quantization Type | ggml, q4, gguf, q4_k |
| Model Architecture | LlamaForCausalLM |
| License | other |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.42.3 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | ! |
| Vocabulary Size | 128256 |
| Initializer Range | 0.02 |
| Torch Data Type | bfloat16 |
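
Given the LlamaForCausalLM architecture, bfloat16 weights, and the 8192-token context recorded above, the full-precision checkpoint should load with a stock Transformers pipeline. A minimal sketch, assuming the repo id NorHsangPha/merge_llama3_adapter_Shan from this listing, that the repo ships a Llama-3-style chat template, and enough memory (~16.1 GB of VRAM) for the 8B weights:

```python
# Minimal loading sketch for the full-precision (bfloat16) weights.
# Assumes transformers >= 4.42.3 (the version recorded in this listing)
# and the repo id shown above; the prompt is illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "NorHsangPha/merge_llama3_adapter_Shan"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # Torch Data Type from the table above
    device_map="auto",           # requires the `accelerate` package
)

# Chat-style prompt via the tokenizer's bundled chat template
# (assumed to follow the standard Llama-3 instruct format).
messages = [{"role": "user", "content": "Introduce yourself."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```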
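
For the GGML/GGUF quantizations flagged above (quantization type q4_k, the 4.9 GB file), llama-cpp-python is one common way to run the model on CPU or with partial GPU offload. A sketch under the assumption that the GGUF file has been downloaded locally; the exact filename is not shown in this listing, so the path below is a hypothetical placeholder:

```python
# Sketch for running the q4_k GGUF quantization with llama-cpp-python.
# The model_path is a placeholder: the listing confirms a q4_k GGUF
# file exists but does not give its exact filename.
from llama_cpp import Llama

llm = Llama(
    model_path="./merge_llama3_adapter_Shan.Q4_K.gguf",  # hypothetical filename
    n_ctx=8192,       # Context Length from the table above
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```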


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801