Mbilal755 Radiology Bart 8bits by RichardErkhov


Tags: Arxiv:2204.03905 · 8-bit · Autotrain compatible · Bart · Bitsandbytes · Clinical · Endpoints compatible · Medical · Radiology · Radiology reports · Region:us · Safetensors · Summarization


Mbilal755 Radiology Bart 8bits Parameters and Internals

Model Type: Sequence-to-sequence
Use Cases:
  Areas: Medical, Clinical
  Applications: Radiology summarization
  Primary Use Cases: Summarizing radiology findings into impressions
  Limitations: Not intended for clinical decision-making or for other NLP tasks
Additional Notes: Sequence-to-sequence model built on the BioBART architecture for biomedical text, fine-tuned on radiology reports to generate summarized impressions.
Supported Languages: English (fluent)
Training Details:
  Data Sources: Deidentified radiology reports, PubMed
  Data Volume: 70,000 reports
  Methodology: Fine-tuning of the pretrained BioBART-v2-base model on radiology reports
  Model Architecture: Encoder-decoder structure with attention mechanism
Input/Output:
  Input Format: Radiology findings text (see the usage sketch below)
  Accepted Modalities: Text
  Output Format: Summarized impression text
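
A minimal usage sketch for turning findings into an impression is shown below. It assumes the checkpoint loads as a standard BART sequence-to-sequence model via transformers; note that the catalog metadata further down lists BartForCausalLM as the registered architecture, so the auto class may need adjusting for this particular 8-bit conversion, and the example input text is purely illustrative.

```python
# Sketch: summarize radiology findings into an impression.
# Assumes the checkpoint behaves as a standard BART seq2seq model;
# the 8-bit conversion may additionally require bitsandbytes and a CUDA GPU.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "RichardErkhov/Mbilal755_-_Radiology_Bart-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo, device_map="auto")

# Illustrative findings text (not from the model's training data).
findings = (
    "Heart size is normal. Lungs are clear without focal consolidation. "
    "No pleural effusion or pneumothorax."
)
inputs = tokenizer(
    findings, return_tensors="pt", truncation=True, max_length=1024  # 1024 = model context length
).to(model.device)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```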
LLM Name: Mbilal755 Radiology Bart 8bits
Repository 🤗: https://huggingface.co/RichardErkhov/Mbilal755_-_Radiology_Bart-8bits
Model Size: 290.2M parameters
Required VRAM: 0.4 GB
Updated: 2025-02-22
Maintainer: RichardErkhov
Model Type: bart
Model Files: 0.4 GB
Supported Languages: en
Model Architecture: BartForCausalLM
Context Length: 1024
Model Max Length: 1024
Transformers Version: 4.40.2
Tokenizer Class: BartTokenizer
Padding Token: <pad>
Vocabulary Size: 85401
Torch Data Type: float16
Activation Function: gelu
Errors: replace
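
Given the 8-bit and Bitsandbytes tags and the BartForCausalLM architecture listed above, loading the quantized weights would look roughly like the sketch below. This is an assumption-based example, not verified against this repository: it requires a CUDA GPU plus the bitsandbytes and accelerate packages, and if the repository already embeds a quantization config the explicit BitsAndBytesConfig may be redundant.

```python
# Sketch: load the 8-bit quantized checkpoint with bitsandbytes.
# AutoModelForCausalLM matches the BartForCausalLM architecture listed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "RichardErkhov/Mbilal755_-_Radiology_Bart-8bits"
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(repo)  # BartTokenizer with <pad> padding token
model = AutoModelForCausalLM.from_pretrained(
    repo,
    quantization_config=quant_config,
    torch_dtype=torch.float16,  # matches the float16 torch dtype listed above
    device_map="auto",
)
print(model.get_memory_footprint())  # expected to be on the order of 0.4 GB
```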


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227