EverythingLM 13B 16K AWQ by TheBloke


Tags: 4-bit · Autotrain compatible · AWQ · Base model (quantized): totally-not-an-llm/EverythingLM-13b-16k · Dataset: totally-not-an-llm/eve... · Llama · Quantized · Region: us · Safetensors

EverythingLM 13B 16K AWQ Benchmarks

nn.n% — how the model compares to the reference models: Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
EverythingLM 13B 16K AWQ (TheBloke/EverythingLM-13B-16K-AWQ)

EverythingLM 13B 16K AWQ Parameters and Internals

Model Type: llama
Use Cases:
  Limitations: occasionally falls into repetition.
Additional Notes: This model is an early test of the EverythingLM dataset and some new experimental principles.
Training Details:
  Data Sources: EverythingLM dataset
  Methodology: QLoRA fine-tune; training took about 1 hour on 1x A100.
  Context Length: 16000
  Training Time: ~1 hour on 1x A100
  Hardware Used: 1x A100
Model Architecture: Llama-2-based, general-purpose 13B model with 16K context thanks to LlongMa.
Input Output:
  Input Format: modified Vicuna format (see the prompt sketch just after this list)
  Accepted Modalities: text
  Output Format: verbose and detailed replies
  Performance Tips: works better with more detailed prompts.
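
The "modified Vicuna format" is, per the upstream EverythingLM card, a Vicuna-short style template. A minimal sketch of building such a prompt; the exact system line follows the upstream card and is assumed to carry over to this quantization:

# Sketch of the modified Vicuna (Vicuna-short) prompt this model expects.
# The system line is taken from the upstream EverythingLM card; adjust it
# if the card you are working from differs.
def build_prompt(user_message: str) -> str:
    return (
        "You are a helpful AI assistant.\n\n"
        f"USER: {user_message}\n"
        "ASSISTANT:"
    )

print(build_prompt("Write a detailed guide to brewing cold-brew coffee."))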
LLM Name: EverythingLM 13B 16K AWQ
Repository 🤗: https://huggingface.co/TheBloke/EverythingLM-13B-16K-AWQ
Model Name: EverythingLM 13B 16K
Model Creator: Kai Howard
Base Model(s): EverythingLM 13B 16K (totally-not-an-llm/EverythingLM-13b-16k)
Model Size: 13b
Required VRAM: 7.2 GB
Updated: 2024-12-22
Maintainer: TheBloke
Model Type: llama
Model Files: 7.2 GB
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.31.0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
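
Given the table above (AWQ weights, LlamaForCausalLM, 16,384-token context), here is a minimal loading sketch using the AutoAWQ package, the route TheBloke's AWQ cards generally document. The fuse_layers and safetensors flags are assumptions rather than values from this page; a CUDA GPU and the autoawq package are required:

# Minimal sketch: load and run TheBloke/EverythingLM-13B-16K-AWQ via AutoAWQ.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "TheBloke/EverythingLM-13B-16K-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoAWQForCausalLM.from_quantized(
    model_id,
    fuse_layers=True,   # fuse attention/MLP kernels for faster decoding
    safetensors=True,   # the repo ships safetensors weights (see table)
)

prompt = (
    "You are a helpful AI assistant.\n\n"
    "USER: Explain what AWQ quantization does.\n"
    "ASSISTANT:"
)
tokens = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
output = model.generate(tokens, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))

With ~7.2 GB of 4-bit weights, a single 24 GB GPU should leave enough headroom for the KV cache even near the full 16,384-token context.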

Best Alternatives to EverythingLM 13B 16K AWQ

Best Alternatives                  Context / RAM   Downloads  Likes
Yarn Llama 2 13B 128K AWQ          128K / 7.2 GB          18      2
LongAlign 13B 64K AWQ               64K / 7.2 GB          27      2
...oboros L2 13B 2 1 YaRN 64K AWQ   64K / 7.2 GB          21      2
OrcaMaid V3 13B 32K AWQ             32K / 7.2 GB          23      4
OrcaMaid V2 FIX 13B 32K AWQ         32K / 7.2 GB          19      1
NexusRaven V2 13B AWQ               16K / 7.2 GB          14      3
NexusRaven V2 13B AWQ               16K / 7.2 GB          11      3
...th CodeLlama 13B Python Hf AWQ   16K / 7.5 GB           7      0
WhiteRabbitNeo 13B AWQ              16K / 7.2 GB          16      4
NexusRaven V2 13B AWQ               16K / 7.2 GB          22      1

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217