Swallow 13B GPTQ by TheBloke


Tags: 4-bit, AutoTrain compatible, Base model: quantized: tokyotech..., Base model: tokyotech-llm/swall..., en, gptq, ja, llama, quantized, region:us, safetensors

Swallow 13B GPTQ Benchmarks

Benchmark scores (nn.n%) show how the model compares to the reference models: Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Model evaluated: Swallow 13B GPTQ (TheBloke/Swallow-13B-GPTQ)

Swallow 13B GPTQ Parameters and Internals

Model Type: text-generation
Use Cases:
Areas: Research, Text generation
Limitations: The model has not been fully aligned with human intent and safety considerations.
Considerations: Early stage of research. Outputs should be reviewed for alignment with safety standards.
Supported Languages: English (Fluent), Japanese (Fluent)
Training Details:
Data Sources: Japanese Wikipedia, RefinedWeb, Swallow Corpus, The Pile
Model Architecture: Llama 2 architecture, adjusted for Japanese language data
Input/Output:
Accepted Modalities: text
Output Format: text
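The listing above describes a plain text-in, text-out generation model that handles both English and Japanese prompts. A minimal usage sketch follows, assuming the GPTQ checkpoint is loaded through `transformers` with the `optimum`, `auto-gptq`, and `accelerate` packages installed; the prompt and sampling settings are illustrative, not taken from the model card.

```python
# Minimal sketch: text generation with the GPTQ checkpoint via transformers.
# Assumes `transformers`, `optimum`, `auto-gptq`, and `accelerate` are installed
# so the 4-bit weights can be loaded; generation parameters are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Swallow-13B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # the ~7.5 GB of quantized weights should fit on a single GPU
)

# The model is bilingual (en/ja); a Japanese prompt works as well as an English one.
prompt = "東京工業大学の主なキャンパスは、"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```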
LLM Name: Swallow 13B GPTQ
Repository: 🤗 https://huggingface.co/TheBloke/Swallow-13B-GPTQ
Model Name: Swallow 13B
Model Creator: tokyotech-llm
Base Model(s): Swallow 13B Hf (tokyotech-llm/Swallow-13b-hf)
Model Size: 13b
Required VRAM: 7.5 GB
Updated: 2025-02-05
Maintainer: TheBloke
Model Type: llama
Model Files: 7.5 GB
Supported Languages: en, ja
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 43176
Torch Data Type: bfloat16
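The metadata listed above (architecture, context length, vocabulary size, special tokens) can be cross-checked without downloading the full weights. A small sketch, assuming the repository's config.json and tokenizer files are reachable; the expected values in the comments are the ones from this listing.

```python
# Sketch: cross-check the listed metadata against the repository's config.json
# and tokenizer files. Only small files are fetched; the 7.5 GB of weights are not.
from transformers import AutoConfig, AutoTokenizer

model_id = "TheBloke/Swallow-13B-GPTQ"

config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.architectures)            # expected: ["LlamaForCausalLM"]
print(config.max_position_embeddings)  # expected: 4096 (context length / model max length)
print(config.vocab_size)               # expected: 43176 (vocabulary extended for Japanese)
print(config.torch_dtype)              # expected: torch.bfloat16
print(type(tokenizer).__name__)        # LlamaTokenizer, or LlamaTokenizerFast if the fast class is used
print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.unk_token)  # <s> </s> <unk>
```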

Best Alternatives to Swallow 13B GPTQ

Best Alternatives | Context / RAM | Downloads | Likes
ChristGPT 13B GPTQ | 0K / 7.3 GB | 10 | 0
... X Alpaca 13B Native 4bit 128g | 0K / 7.9 GB | 1498 | 735
... X Alpaca 13B Native 4bit 128g | 0K / 8.1 GB | 4 | 2
Llama 13B 4bit Hf | 0K / 7 GB | 6 | 2
Llama 13B 4bit Gr128 | 0K / 7.5 GB | 5 | 2
Llama 13B 3bit Gr128 | 0K / 5.9 GB | 6 | 1
... X Alpaca 13B Native 4bit 128g | 0K / 7.9 GB | 5 | 3
Llama 13B 4bit Gr128 | 0K / 7.5 GB | 4 | 5
Llama 13B 3bit Gr128 | 0K / 5.9 GB | 6 | 3
Llm Jp 13B V2.0 | 4K / 27.4 GB | 688 | 15
Note: a green score (e.g. "73.2") means that the model is better than TheBloke/Swallow-13B-GPTQ.

Rank the Swallow 13B GPTQ Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227