Llama2 7B Chat Uncensored by georgesung

Tags: Autotrain compatible, Dataset: georgesung/wizard vicu..., Endpoints compatible, Fp16, License: other, Llama, Pytorch, Quantized, Region: us, Safetensors, Sharded, Tensorboard, Tensorflow, Uncensored

Rank the Llama2 7B Chat Uncensored Capabilities

Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Llama2 7B Chat Uncensored (georgesung/llama2_7b_chat_uncensored)

Quantized Models of the Llama2 7B Chat Uncensored

Model                             Likes   Downloads   VRAM
Llama2 7b Chat Uncensored GGML       11         416   2 GB
Llama2 7b Chat Uncensored GPTQ       65         137   3 GB
Llama2 7b Chat Uncensored GGUF       25        7097   2 GB
Llama2 7b Chat Uncensored AWQ         5       16630   3 GB
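
The GGUF and AWQ builds listed above fit in roughly 2 to 3 GB of VRAM, so local inference on a single consumer GPU, or even on CPU, is practical. The sketch below shows one way to run such a GGUF file with llama-cpp-python; the file name, quantization level, and generation settings are illustrative assumptions rather than values from this page, and the prompt format should follow the model's own card.

```python
# Minimal sketch, assuming a GGUF quant of Llama2 7B Chat Uncensored has been
# downloaded locally and llama-cpp-python is installed.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama2_7b_chat_uncensored.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=2048,       # matches the model's 2048-token context length
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

result = llm(
    "Summarize the trade-offs of 4-bit quantization in two sentences.",
    max_tokens=128,
    temperature=0.7,
)
print(result["choices"][0]["text"])
```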

Best Alternatives to Llama2 7B Chat Uncensored

Best Alternatives             HF Rank   Context/RAM     Downloads   Likes
Vicuna 7B V1.5                  55.27   4K / 13.5 GB       711532     237
Llama2 Chinese 7B Chat          54.23   4K / 13.5 GB         7153     211
Vicuna 7B V1.5 16K              54.02   4K / 13.5 GB         3983     183
Deepseek Llm 7B Base            52.33   4K / 13.9 GB         3626      29
Deepseek Llm 7B Chat            49.55   4K / 13.9 GB         8695      65
Vicuna 7B V1.5 GPTQ             48.5    4K / 3.9 GB          1446      15
Vicuna 7B V1.5 AWQ              48.5    4K / 3.9 GB           128       3
Vicuna 7B V1.5 16K GPTQ         47.4    4K / 3.9 GB         38180      10
Vicuna 7B V1.5 16K AWQ          47.4    4K / 3.9 GB            41       1
Vicuna 7B V1.5 16K Gptq         47.4    4K / 3.9 GB            15       0
Note: a green score (e.g. "73.2") indicates that the alternative outperforms georgesung/llama2_7b_chat_uncensored.

Llama2 7B Chat Uncensored Parameters and Internals

LLM Name: Llama2 7b Chat Uncensored
Repository: georgesung/llama2_7b_chat_uncensored (Hugging Face)
Model Size: 7b
Required VRAM: 27 GB
Updated: 2024-05-20
Maintainer: georgesung
Model Type: llama
Model Files: 9.9 GB (1-of-3), 9.9 GB (2-of-3), 7.2 GB (3-of-3)
Quantization Type: fp16
Model Architecture: LlamaForCausalLM
License: other
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.30.2
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: float32
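
The three shards listed under Model Files add up to the 27 GB of required VRAM (9.9 + 9.9 + 7.2 GB), which reflects the float32 checkpoint; loading in float16 roughly halves that to about 13.5 GB. The sketch below shows one plausible way to load the repository with Hugging Face transformers; the prompt and generation settings are illustrative assumptions, and the exact prompt template should be taken from the model card.

```python
# Minimal sketch, assuming transformers, accelerate, and torch are installed
# and enough memory is available for ~13.5 GB of fp16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "georgesung/llama2_7b_chat_uncensored"

tokenizer = AutoTokenizer.from_pretrained(model_id)   # LlamaTokenizer, 32,000-token vocabulary
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # load in fp16 instead of the stored float32
    device_map="auto",          # spread layers across available devices
)

prompt = "Explain the difference between GGUF and GPTQ quantization."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)  # stays within the 2048-token context
print(tokenizer.decode(output[0], skip_special_tokens=True))
```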

Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801