Slim Summary by llmware


Model Card on HF 🤗: https://huggingface.co/llmware/slim-summary

Slim Summary Benchmarks

Benchmark scores compare the model against reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Slim Summary Parameters and Internals

Model Type:
text summarization
Use Cases:
Primary Use Cases:
Summarize text passages via a function call, generating output as a Python list of distinct summary points.
Additional Notes:
The model has an experimental feature in which an optional list size can be passed in the parameters, guiding the model to generate a specific number of summary points.
Training Details:
Methodology:
The model is fine-tuned on top of llmware/bling-stable-lm-3b-4e1t-v0, which is itself a fine-tune of stabilityai/stablelm-3b-4e1t.
Input Output:
Input Format:
Text passage
Accepted Modalities:
text
Output Format:
A list of the form: ['summary_point1', 'summary_point2', 'summary_point3']
Performance Tips:
Use the quantized "tool" version for fast inference. Use llmware's automatic conversion handler to convert the output string into a Python list.
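Since the model emits its summary as a string shaped like a Python list, the output must be parsed before use. As a minimal stdlib sketch (the llmware conversion handler mentioned above does this for you; `parse_summary_output` and the sample output string here are illustrative, not part of the llmware API):

```python
import ast

def parse_summary_output(raw: str) -> list:
    """Safely parse a model output string like "['point1', 'point2']"
    into a Python list of strings; return [] if the text is malformed."""
    try:
        result = ast.literal_eval(raw.strip())
        if isinstance(result, list):
            return [str(item) for item in result]
    except (ValueError, SyntaxError):
        pass  # fall through on malformed or non-literal output
    return []

# Hypothetical raw model output for illustration
raw = "['Revenue grew year over year', 'Margins compressed slightly']"
points = parse_summary_output(raw)
print(points)
```

`ast.literal_eval` only evaluates Python literals, so unlike `eval` it cannot execute arbitrary code embedded in model output.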
LLM Name: Slim Summary
Repository 🤗: https://huggingface.co/llmware/slim-summary
Model Size: 3B
Required VRAM: 5.6 GB
Updated: 2025-02-05
Maintainer: llmware
Model Type: stablelm_epoch
Model Files: 5.6 GB
Model Architecture: StableLMEpochForCausalLM
License: cc-by-sa-4.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.33.2
Tokenizer Class: GPTNeoXTokenizer
Vocabulary Size: 50304
Torch Data Type: bfloat16

Best Alternatives to Slim Summary

| Best Alternatives | Context / RAM | Downloads · Likes |
| --- | --- | --- |
| Stable Code 3B Mlx | 16K / 5.6 GB | 41 |
| Aura 3B | 4K / 5.6 GB | 42 |
| Slim Tags 3B | 4K / 5.6 GB | 2454 |
| Slim Extract | 4K / 5.6 GB | 13412 |
| Slim Sa Ner | 4K / 5.6 GB | 1406 |
| Slim Xsum | 4K / 5.6 GB | 1426 |
| Slim Boolean | 4K / 5.6 GB | 84 |
| Salami 3B | 4K / 5.6 GB | 1191 |
| Memphis Scribe 3B Alpha | 4K / 5.6 GB | 1182 |
| Memphis CoT 3B | 4K / 5.6 GB | 2229 |
Note: a green score (e.g. "73.2") means that the model is better than llmware/slim-summary.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227