Midnight Miqu 70B V1.0 GGUF by ooooz


Tags: Autotrain compatible, Conversational, Endpoints compatible, GGUF, Llama, Quantized, Region:us

Midnight Miqu 70B V1.0 GGUF Benchmarks

Benchmark scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Midnight Miqu 70B V1.0 GGUF (ooooz/midnight-miqu-70b-v1.0-GGUF)

Midnight Miqu 70B V1.0 GGUF Parameters and Internals

Model Type: text generation, multimodal

Use Cases
Areas: Research, Commercial Applications
Applications: Interactive AI, Content Generation
Primary Use Cases: AI Assistants, Text generation tools
Limitations: Not suitable for critical decision-making tasks
Considerations: Ensure usage aligns with ethical guidelines.

Additional Notes: Excels in creative tasks and information synthesis.

Supported Languages: English (high proficiency)

Training Details
Data Sources: Diverse internet datasets
Data Volume: Thousands of GBs
Methodology: Standard pre-training with fine-tuning on specific tasks
Context Length: 4096
Training Time: Several months on dedicated HPC
Hardware Used: A100 GPUs
Model Architecture: Transformer-based architecture

Safety Evaluation
Methodologies: Red teaming
Findings: Reduced propensity for harmful outputs
Risk Categories: Bias, Misinformation
Ethical Considerations: The model should not be used in applications where inaccurate responses could result in harm.

Responsible AI Considerations
Fairness: Training datasets include diverse data to reduce bias.
Transparency: Full architectural details are available in the accompanying paper.
Accountability: Developers are accountable for the training data and initial release.
Mitigation Strategies: Continuous monitoring and updates for harmful data.

Input / Output
Input Format: Text
Accepted Modalities: text
Output Format: Text
Performance Tips: Prefer GPU deployment for real-time inference (see the loading sketch after the release notes).

Release Notes
Version: 1.0
Date: 2023-10-15
Notes: Initial release, introducing superior efficiency in text generation tasks.
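
The performance tip above recommends GPU deployment for real-time inference. Below is a minimal sketch of what that could look like with llama-cpp-python; the runtime choice, file name, context size, and prompt are assumptions for illustration only, not part of the original card.

```python
# Minimal sketch, assuming llama-cpp-python is installed with GPU support
# and one of this repo's GGUF files has already been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="./midnight-miqu-70b-v1.0.Q3_K_S.gguf",  # placeholder path; use a real file from the repo
    n_ctx=4096,        # a smaller window keeps KV-cache memory manageable on a single GPU
    n_gpu_layers=-1,   # offload all layers to the GPU, per the real-time inference tip above
)

result = llm(
    "Write a short scene set in a rain-soaked city at midnight.",
    max_tokens=200,
    temperature=0.8,
)
print(result["choices"][0]["text"])
```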
LLM Name: Midnight Miqu 70B V1.0 GGUF
Repository 🤗: https://huggingface.co/ooooz/midnight-miqu-70b-v1.0-GGUF
Base Model(s): sophosympatheia/Midnight-Miqu-70B-v1.0
Model Size: 70b
Required VRAM: 29.9 GB
Updated: 2024-12-22
Maintainer: ooooz
Model Type: llama
Model Files: 33.3 GB, 29.9 GB, 41.4 GB, 39.2 GB, 48.8 GB, 47.5 GB
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: LlamaForCausalLM
Context Length: 32764
Model Max Length: 32764
Transformers Version: 4.36.2
Vocabulary Size: 32000
Torch Data Type: float16
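
The repository ships several GGUF quantizations (file sizes from 29.9 GB to 48.8 GB, listed under Model Files). A hedged sketch of fetching one of them with huggingface_hub follows; the exact filename is an assumption and should be taken from the repository's file list.

```python
# Sketch only: the filename below is a placeholder following common GGUF naming;
# check https://huggingface.co/ooooz/midnight-miqu-70b-v1.0-GGUF for the real file names.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="ooooz/midnight-miqu-70b-v1.0-GGUF",
    filename="midnight-miqu-70b-v1.0.Q3_K_S.gguf",  # placeholder; pick the quant whose size fits your hardware
)
print("GGUF file saved to:", local_path)
```

The 29.9 GB "Required VRAM" figure above appears to correspond to the smallest file listed; larger quantizations trade memory for output quality.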

Best Alternatives to Midnight Miqu 70B V1.0 GGUF

Best Alternatives | Context / RAM | Downloads / Likes
Reflection Llama 3.1 70B Bf16 | 128K / 141.9 GB | 2836
Reflection Llama 3.1 70B GGUF | 128K / 26.4 GB | 1494
...Horizon AI Korean Advanced 70B | 128K / 141.9 GB | 510
...qu 1 70B 24GB VRAM IQ2 XS SOTA | 31K / 20.3 GB | 100
...ma3 70B Chinese Chat GGUF 4bit | 8K / 40 GB | 93218
Llama 3 70B Quantised | 8K / 48.7 GB | 122
...3 Mega Dolphin 2.9.1 120b GGUF | 8K / 18.4 GB | 51
Meta Llama 3 70B Instruct | 8K / 40.3 GB | 101
CodeLlama 70B Instruct Hf GGUF | 4K / 25.5 GB | 1742
Openthaigpt 1.0.0 70B Chat | 2K / 138.4 GB | 49811

Note: a green score (e.g. "73.2") means that the model is better than ooooz/midnight-miqu-70b-v1.0-GGUF.

Rank the Midnight Miqu 70B V1.0 GGUF Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217