LLM Name | Meditron 70B GGUF |
Repository 🤗 | https://huggingface.co/TheBloke/meditron-70B-GGUF |
Model Name | Meditron 70B |
Model Creator | EPFL LLM Team |
Base Model(s) | |
Model Size | 70b |
Required VRAM | 29.3 GB |
Updated | 2024-09-16 |
Maintainer | TheBloke |
Model Type | llama |
Model Files | |
Supported Languages | en |
GGUF Quantization | Yes |
Quantization Type | gguf |
Model Architecture | AutoModel |
License | llama2 |
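Since the card lists GGUF quantization, one common way to run a file from this repository is through llama.cpp bindings such as llama-cpp-python. The sketch below is only an illustration: the quant filename, context size, and GPU offload settings are assumptions and should be checked against the repository's file list and your available VRAM (29.3 GB is listed for the largest quant shown here).

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from the repository.
# The filename below is an assumed example; pick the quant that fits your VRAM.
model_path = hf_hub_download(
    repo_id="TheBloke/meditron-70B-GGUF",
    filename="meditron-70b.Q4_K_M.gguf",  # assumed filename, not confirmed by the card
)

# Load the model; n_gpu_layers=-1 offloads all layers to the GPU if memory allows.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Simple completion call.
output = llm("What are the first-line treatments for hypertension?", max_tokens=256)
print(output["choices"][0]["text"])
```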
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
CodeLlama 70B Instruct GGUF | 0K / 25.5 GB | 2641 | 56 |
...gekit Passthrough Yqhuxcv GGUF | 0K / 16.9 GB | 91 | 0 |
CodeLlama 70B Hf GGUF | 0K / 25.5 GB | 2300 | 43 |
CodeLlama 70B Python GGUF | 0K / 25.5 GB | 1835 | 34 |
Meta Llama 3 70B Instruct GGUF | 0K / 26.4 GB | 210 | 3 |
KafkaLM 70B German V0.1 GGUF | 0K / 25.5 GB | 1200 | 16 |
DAD Model V2 70B Q4 | 0K / 42.5 GB | 11 | 0 |
Llama 2 70B Chat GGUF | 0K / 29.3 GB | 9025 | 120 |
Llama 2 70B Guanaco QLoRA GGUF | 0K / 29.3 GB | 56 | 0 |
Aurora Nights 70B V1.0 GGUF | 0K / 29.3 GB | 105 | 8 |