LLM Explorer: A Curated Large Language Model Directory and Analytics

Meditron 70B AWQ by TheBloke

Tags: Arxiv:2311.16079, 4-bit, Autotrain compatible, Awq, Base model:epfl-llm/meditron-7..., Dataset:bigbio/med qa, Dataset:bigbio/pubmed qa, Dataset:epfl-llm/guidelines, Dataset:medmcqa, En, Health, License:llama2, Llama, Llama2, Medical, Quantized, Region:us, Safetensors, Sharded, Tensorflow

Rank the Meditron 70B AWQ Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Meditron 70B AWQ (TheBloke/meditron-70B-AWQ)

Best Alternatives to Meditron 70B AWQ

Best Alternatives               HF Rank   Context / RAM     Downloads   Likes
XuanYuan 70B                    79.55     8K / 138.3 GB     690         42
Tigerbot 70B Chat V2            77.92     4K / 139.2 GB     294         9
QuartetAnemoi 70B T0.0001       76.86     31K / 137.8 GB    509         12
Miqu 70B Alpaca DPO             76.6      31K / 138.7 GB    645         3
BoreanGale 70B                  76.48     32K / 137.8 GB    907         4
Tigerbot 70B Chat V4            75.95     4K / 139.2 GB     709         1
OrcaHermes Mistral 70B Miqu     75.51     31K / 138 GB      67          1
Senku 70B Full                  75.36     31K / 138.7 GB    1677        106
Tulu 2 DPO 70B                  73.77     8K / 138 GB       3347        134
Aurora Nights 70B V1.0          73.77     4K / 137.8 GB     1690        13
Note: green Score (e.g. "73.2") means that the model is better than TheBloke/meditron-70B-AWQ.

Meditron 70B AWQ Parameters and Internals

LLM Name: Meditron 70B AWQ
Repository: Open on 🤗 Hugging Face
Model Name: Meditron 70B
Model Creator: EPFL LLM Team
Base Model(s): Meditron 70B (epfl-llm/meditron-70b)
Model Size: 70b
Required VRAM: 36.6 GB
Updated: 2024-02-21
Maintainer: TheBloke
Model Type: llama
Model Files: 9.9 GB (1 of 4), 9.9 GB (2 of 4), 9.9 GB (3 of 4), 6.9 GB (4 of 4)
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: bfloat16
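
Because the card above lists a 4-bit AWQ quantization of a LlamaForCausalLM model with a 4096-token context and roughly 36.6 GB of sharded safetensors, the most direct way to try it is to load it through 🤗 Transformers. The following is only a minimal sketch, not an official recipe: it assumes transformers >= 4.35 and the autoawq package are installed, that enough GPU memory is available for the shards, and it uses a plain, hypothetical prompt because this page does not list the model's prompt template.

    # Minimal loading sketch (assumptions: transformers >= 4.35 and autoawq installed,
    # roughly 40 GB of GPU memory free for the ~36.6 GB of AWQ shards).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "TheBloke/meditron-70B-AWQ"

    tokenizer = AutoTokenizer.from_pretrained(model_id)   # LlamaTokenizer, vocab size 32000
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,   # AWQ inference kernels expect fp16 activations
        device_map="auto",           # spread the 4 safetensors shards across available devices
        low_cpu_mem_usage=True,
    )

    # Keep the prompt plus generated tokens within the 4096-token context listed above.
    prompt = "List common symptoms of iron-deficiency anemia."   # hypothetical example prompt
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
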
Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024022003