MMedLM2 by Henrychur


  Arxiv:2402.13963   Custom code   Dataset:henrychur/mmedc   En   Es   Feature-extraction   Fr   Internlm   Ja   Medical   Pytorch   Region:us   Ru   Sharded   Zh
Model Card on HF 🤗: https://huggingface.co/Henrychur/MMedLM2


MMedLM2 Parameters and Internals

Model Type: foundation model, multilingual medical model
Additional Notes: Further pre-trained on a comprehensive multilingual medical corpus (MMedC).
Supported Languages: en (high), zh (high), ja (high), fr (high), ru (high), es (high)
Training Details:
  Data Sources: MMedC
  Data Volume: 25.5 billion tokens
  Context Length: 2048
Input Output:
  Input Format: multilingual text
  Accepted Modalities: text
  Output Format: multilingual text
  Performance Tips: Loading with torch_dtype=torch.float16 improves performance.
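The float16 tip above can be sketched with Hugging Face transformers. This is a minimal loading sketch, not the authors' own code: `load_mmedlm2` and its `device` argument are illustrative names, and `trust_remote_code=True` is assumed because the listing carries a "Custom code" tag (the repo ships its own InternLM modeling code).

```python
MODEL_ID = "Henrychur/MMedLM2"

def load_mmedlm2(device: str = "cuda"):
    """Load MMedLM2 in float16, following the model card's performance tip."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # trust_remote_code=True lets transformers run the repo's custom InternLM code.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # stored weights are float32; halving precision cuts memory
        trust_remote_code=True,
    ).to(device)
    model.eval()
    return tokenizer, model
```

With roughly 30.9 GB of float32 shards on disk, loading in float16 brings the in-memory footprint to around half that, which is what makes single-GPU inference practical.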
Release Notes:
  2024.2.21: Pre-print paper released; dive into our findings.
  2024.2.20: Released MMedLM and MMedLM 2.
LLM Name: MMedLM2
Repository 🤗: https://huggingface.co/Henrychur/MMedLM2
Required VRAM: 30.9 GB
Updated: 2025-02-22
Maintainer: Henrychur
Model Type: internlm
Model Files: 9.8 GB (1-of-4), 9.8 GB (2-of-4), 9.8 GB (3-of-4), 1.5 GB (4-of-4)
Supported Languages: en zh ja fr ru es
Model Architecture: InternLM2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.28.1
Is Biased: 0
Tokenizer Class: InternLMTokenizer
Padding Token: </s>
Vocabulary Size: 92544
Torch Data Type: float32

Best Alternatives to MMedLM2

Best Alternatives            Context / RAM    Downloads  Likes
ShareCaptioner Video         32K / 17.2 GB    340        17
Songcomposer Sft             32K / 16.7 GB    357        11
Songcomposer Pretrain        32K / 16.7 GB    173        4
Internlm2 5 Step Prover      8K / 15.4 GB     387        4
Internlm2 Step Prover        8K / 15.4 GB     445        21


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227