EverythingLM 13B 16K GPTQ by TheBloke


Tags: 4-bit, Autotrain compatible, GPTQ, Llama, Quantized, Safetensors, Region: us
Base model (quantized): totally-not-an-llm/EverythingLM-13b-16k
Dataset: totally-not-an-llm/eve...

EverythingLM 13B 16K GPTQ Benchmarks (TheBloke/EverythingLM-13B-16K-GPTQ)

EverythingLM 13B 16K GPTQ Parameters and Internals

Model Type: llama
Additional Notes: This model is an early test of the EverythingLM dataset and some new experimental principles, so don't consider it SOTA.
Training Details:
  Data Sources: EverythingLM dataset
  Methodology: QLoRA
  Context Length: 16000
  Training Time: 1 hour
  Hardware Used: 1x A100
  Model Architecture: Llama-2 based
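
The training recipe above (QLoRA at a 16,000-token context, about one hour on a single A100) matches the standard peft + bitsandbytes workflow. Below is a minimal sketch of that setup rather than the creator's actual script: the Llama-2 base repository, LoRA rank/alpha, and target modules are assumptions for illustration, since the card only states "llama-2 based" and "QLoRA".

```python
# Minimal QLoRA setup sketch (assumed hyperparameters, not the creator's actual config).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "meta-llama/Llama-2-13b-hf"  # assumed base repo; the card only says "llama-2 based"

# QLoRA loads the frozen base model in 4-bit NF4 and trains low-rank adapters on top.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb_config, device_map="auto")
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters to the attention projections (rank/alpha/dropout are illustrative).
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

A standard Trainer or SFT loop over the EverythingLM dataset would then run on this model; the GPTQ conversion that produced this repository is a separate post-training quantization step applied to the merged fine-tuned weights.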
LLM Name: EverythingLM 13B 16K GPTQ
Repository (Hugging Face): https://huggingface.co/TheBloke/EverythingLM-13B-16K-GPTQ
Model Name: EverythingLM 13B 16K
Model Creator: Kai Howard
Base Model(s): EverythingLM 13B 16K (totally-not-an-llm/EverythingLM-13b-16k)
Model Size: 13b
Required VRAM: 7.3 GB
Updated: 2024-12-22
Maintainer: TheBloke
Model Type: llama
Model Files: 7.3 GB
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.31.0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
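
Given the metadata above (4-bit GPTQ weights in safetensors, LlamaForCausalLM architecture, LlamaTokenizer, 16,384-token context), the checkpoint can be loaded with Hugging Face transformers plus a GPTQ backend such as auto-gptq with optimum. The following is a minimal sketch under those assumptions; the prompt template and generation settings are illustrative, so check the model card for the exact format.

```python
# Minimal inference sketch for the GPTQ checkpoint.
# Assumes transformers plus a GPTQ backend (e.g. auto-gptq + optimum) are installed;
# prompt format and sampling settings below are illustrative, not prescribed by the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TheBloke/EverythingLM-13B-16K-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo)  # LlamaTokenizer, 32000-token vocab
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # ~7.3 GB of quantized weights

prompt = "You are a helpful AI assistant.\n\nUSER: Explain GPTQ quantization in two sentences.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Print only the newly generated tokens
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Because the quantized weights are about 7.3 GB, the model fits on a single 12-16 GB GPU at short prompt lengths; pushing toward the full 16,384-token context requires additional memory for the KV cache.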

Best Alternatives to EverythingLM 13B 16K GPTQ

Best Alternatives                      Context / RAM    Downloads  Likes
Yarn Llama 2 13B 128K GPTQ             128K / 7.3 GB    28         16
LongAlign 13B 64K GPTQ                 64K / 7.3 GB     29         1
...boros L2 13B 2 1 YaRN 64K GPTQ      64K / 7.3 GB     20         3
Yarn Llama 2 13B 64K GPTQ              64K / 7.3 GB     24         1
OrcaMaid V3 13B 32K GPTQ               32K / 7.3 GB     26         3
OrcaMaid V2 FIX 13B 32K GPTQ           32K / 7.3 GB     35         4
Tinybra 13B GPTQ 4BIT                  16K / 7 GB       28         0
Tinybra 13B GPTQ 32g 4BIT              16K / 8 GB       17         1
WhiteRabbitNeo 13B GPTQ                16K / 7.3 GB     28         4
NexusRaven V2 13B GPTQ                 16K / 7.3 GB     24         3


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217