Mistral Trismegistus 7B GPTQ by TheBloke


Tags: 4-bit, Autotrain compatible, Base model:quantized:teknium/m..., Base model:teknium/mistral-tri..., Distillation, En, Finetuned, Gpt4, Gptq, Instruct, Mistral, Mistral-7b, Quantized, Region:us, Safetensors, Synthetic data

Mistral Trismegistus 7B GPTQ Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Mistral Trismegistus 7B GPTQ (TheBloke/Mistral-Trismegistus-7B-GPTQ)

Mistral Trismegistus 7B GPTQ Parameters and Internals

Model Type: text generation

Use Cases
- Areas: esoteric, occult, spiritual
- Applications: chatbots for spiritual guidance, hypnosis simulations
- Primary Use Cases: answering questions about occult artifacts, playing the role of a hypnotist
- Limitations: limited to esoteric and spiritual topics

Additional Notes: This model is tuned for creative and esoteric tasks, emphasizing depth and richness in the occult.

Supported Languages: en (fluent)

Training Details
- Data Sources: GPT-4-generated dataset
- Data Volume: 35,000 examples
- Methodology: synthetic data generation, fine-tuning
- Model Architecture: Mistral

Input/Output
- Input Format: USER: {prompt} ASSISTANT:
- Accepted Modalities: text
- Output Format: text
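The USER/ASSISTANT template above can be applied with a small helper; `build_prompt` is a hypothetical name used here for illustration, not part of the model's tooling:

```python
def build_prompt(user_message: str) -> str:
    """Wrap a message in the USER/ASSISTANT template this model was trained on."""
    return f"USER: {user_message} ASSISTANT:"

prompt = build_prompt("What is the significance of the Emerald Tablet?")
```

The model's completion is expected to follow the trailing "ASSISTANT:" marker, so generation can be stopped at the next "USER:" turn if the conversation continues.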
LLM Name: Mistral Trismegistus 7B GPTQ
Repository 🤗: https://huggingface.co/TheBloke/Mistral-Trismegistus-7B-GPTQ
Model Name: Mistral Trismegistus 7B
Model Creator: Teknium
Base Model(s): Mistral Trismegistus 7B (teknium/Mistral-Trismegistus-7B)
Model Size: 7B
Required VRAM: 4.2 GB
Updated: 2025-02-05
Maintainer: TheBloke
Model Type: mistral
Model Files: 4.2 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.34.0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
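The listed 4.2 GB file size is consistent with a back-of-the-envelope estimate for 4-bit GPTQ weights. The sketch below assumes roughly 7.24B parameters, a group size of 128, and about 0.5 GB of layers (embeddings, lm_head) kept in fp16; these are typical values for Mistral-7B GPTQ releases, not figures stated on this page:

```python
def gptq_size_gb(n_params: float, bits: int = 4, group_size: int = 128,
                 unquantized_bytes: float = 0.5e9) -> float:
    """Rough GPTQ checkpoint size: packed low-bit weights, plus per-group
    fp16 scales/zero-points, plus layers commonly left in fp16."""
    weight_bytes = n_params * bits / 8            # 4-bit packed weights
    group_overhead = n_params / group_size * 4    # ~2 bytes scale + ~2 bytes zero per group
    return (weight_bytes + group_overhead + unquantized_bytes) / 1e9

print(round(gptq_size_gb(7.24e9), 1))  # in the ballpark of the listed 4.2 GB
```

This is only a sanity check on the listed size; exact checkpoint sizes depend on which layers the quantization config excludes and on safetensors metadata.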

Best Alternatives to Mistral Trismegistus 7B GPTQ

| Best Alternatives | Context / RAM | Downloads / Likes |
|---|---|---|
| Mistral 7B Instruct V0.2 GPTQ | 32K / 4.2 GB | 39163850 |
| Mistral 7B Instruct V0.3 GPTQ | 32K / 4.2 GB | 86380 |
| ...ral 7B Instruct V0.3 GPTQ 4bit | 32K / 4.2 GB | 189618 |
| ...ephyr 7B Beta Channelwise Gptq | 32K / 4 GB | 99220 |
| NeuralBeagle14 7B GPTQ | 32K / 4.2 GB | 169205 |
| ...baraHermes 2.5 Mistral 7B GPTQ | 32K / 4.2 GB | 370856 |
| ...istral 7B Pruned50 GPTQ Marlin | 32K / 4 GB | 760 |
| ...phyr 7B Beta Assistant V1 Gptq | 32K / 4.2 GB | 791 |
| ...l Neural Chat 7B V3.8 Bit Gptq | 32K / 7.7 GB | 770 |
| ...lai Mistral 7B V0.1 4 Bit Gptq | 32K / 4.2 GB | 790 |
Note: a green score (e.g. "73.2") indicates the model performs better than TheBloke/Mistral-Trismegistus-7B-GPTQ.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227