Midnight Miqu 70B V1.5 by sophosympatheia


Tags: Merged Model · Arxiv:2311.03099 · Autotrain compatible · Base model: migtissera/tess-70b... · Base model: sophosympatheia/mid... · Conversational · Endpoints compatible · Llama · Model-index · Region: us · Safetensors · Sharded · Tensorflow


Midnight Miqu 70B V1.5 Parameters and Internals

Model Type: mergekit

Use Cases
- Areas: personal
- Applications: roleplaying, storytelling
- Primary Use Cases: AI creative-writing partner, roleplaying support
- Limitations: may not be suitable for other tasks due to a lack of testing
- Considerations: engage with care given the model's uncensored state and the legal considerations that come with it

Additional Notes: This model requires "warming up" at the start of a new chat to align with user preferences.
Training Details
- Methodology: DARE linear merge (sketched below)
- Context Length: 32764 tokens
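The merge methodology comes from DARE (arXiv:2311.03099, cited in the tags above). Below is a minimal sketch of a DARE-linear merge on a single weight tensor, assuming PyTorch; the released model was actually built with mergekit, and the drop rate and weights here are illustrative rather than the recipe used for v1.5. The Base Model(s) listed further down would be the merge inputs.

```python
import torch

def dare_linear(base: torch.Tensor, finetuned: list[torch.Tensor],
                weights: list[float], drop_rate: float = 0.9) -> torch.Tensor:
    """DARE-linear on one tensor (arXiv:2311.03099): randomly drop a fraction
    of each model's delta from the base, rescale the survivors by 1/(1 - p),
    then add the weighted deltas back onto the base weights."""
    merged = base.clone()
    for ft, w in zip(finetuned, weights):
        delta = ft - base                                 # task vector
        keep = torch.bernoulli(torch.full_like(delta, 1.0 - drop_rate))
        merged += w * (delta * keep) / (1.0 - drop_rate)  # drop and rescale
    return merged
```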
Input / Output
- Input Format: Vicuna, Mistral, or ChatML instruct formats
- Accepted Modalities: text
- Output Format: text responses for storytelling and roleplaying
- Performance Tips: quadratic sampling is recommended for creative work; experiment with sampler settings such as Min-P and smoothing factor for best results (see the sketch below)
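A minimal sketch of one accepted prompt format plus a sampler preset. The Vicuna template is one of the three listed formats; the sampler keys follow text-generation-webui naming (its smoothing_factor knob enables quadratic sampling), and all values are illustrative starting points, not settings prescribed by the model card.

```python
def build_vicuna_prompt(system: str, user: str) -> str:
    # Vicuna-style instruct format: system preamble, then USER/ASSISTANT turns.
    return f"{system}\n\nUSER: {user}\nASSISTANT:"

# Illustrative sampler preset (text-generation-webui naming); tune to taste.
sampler_settings = {
    "temperature": 1.0,
    "min_p": 0.05,            # Min-P: drop tokens under 5% of the top token's probability
    "smoothing_factor": 0.3,  # quadratic sampling strength
    "repetition_penalty": 1.05,
}

prompt = build_vicuna_prompt(
    "You are a thoughtful creative-writing partner.",
    "Pick up the story where we left off, at the gates of the midnight library.",
)
```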
LLM Name: Midnight Miqu 70B V1.5
Repository: 🤗 https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5
Base Model(s): sophosympatheia/Midnight-Miqu-70B-v1.0, migtissera/Tess-70B-v1.6
Merged Model: Yes
Model Size: 70B
Required VRAM: 138 GB
Updated: 2025-04-22
Maintainer: sophosympatheia
Model Type: llama
Model Files: 15 shards of 9.6, 9.8, 9.6, 9.8, 9.9, 9.7, 10.0, 9.9, 9.7, 9.6, 10.0, 9.8, 9.6, 9.9, and 1.1 GB (parts 1-15)
Model Architecture: LlamaForCausalLM
License: other
Context Length: 32764
Model Max Length: 32764
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
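Given the fields above (LlamaForCausalLM architecture, float16 shards totaling roughly 138 GB), a minimal transformers loading sketch follows; device_map="auto" spreads the shards across whatever GPUs (and, if needed, CPU memory) are available.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sophosympatheia/Midnight-Miqu-70B-v1.5"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to LlamaTokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the repo's float16 weights (~138 GB)
    device_map="auto",          # shard the 15 files across available devices
)

prompt = "USER: Write the opening line of a gothic story.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=1.0)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```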

Quantized Models of the Midnight Miqu 70B V1.5

Model                          Likes   Downloads   VRAM
Midnight Miqu 70B V1.5 4bit    32      21444       36 GB
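The entry above is a prequantized 4-bit repo. A different route to a similar ~36 GB footprint is quantizing the full-precision weights at load time with bitsandbytes, sketched here on the assumption that a CUDA GPU with enough memory is available.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# On-the-fly 4-bit quantization of the full-precision repo via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # 4-bit NormalFloat storage
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16
)

model = AutoModelForCausalLM.from_pretrained(
    "sophosympatheia/Midnight-Miqu-70B-v1.5",
    quantization_config=bnb_config,
    device_map="auto",
)
```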

Best Alternatives to Midnight Miqu 70B V1.5

Best Alternatives                    Context / RAM       Downloads   Likes
... Chat 1048K Chinese Llama3 70B    1024K / 141.9 GB    4035        5
... 3 70B Instruct Gradient 1048K    1024K / 141.9 GB    51          121
... Chat 1048K Chinese Llama3 70B    1024K / 141.9 GB    24          5
Llama3 Function Calling 1048K        1024K / 141.9 GB    13          1
...a 3 70B Instruct Gradient 524K    512K / 141.9 GB     77          23
...a 3 70B Instruct Gradient 262K    256K / 141.9 GB     92          55
...ama 3 70B Arimas Story RP V2.0    256K / 141.1 GB     17          3
...ama 3 70B Arimas Story RP V1.6    256K / 141.2 GB     5           0
...ama 3 70B Arimas Story RP V1.5    256K / 141.2 GB     34          2
Yi 70B 200K RPMerge Franken          195K / 142.4 GB     6           1

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227