MetaMath Cybertron Starling by Q-bert


Tags: autotrain-compatible, base models (merge): Q-bert/MetaMath-Cybertron and berkeley-nest/Starling-LM-7B-alpha, dataset: meta-math/MetaMathQA, en, endpoints-compatible, math, merge, mistral, region: us, safetensors, sharded, tensorflow

MetaMath Cybertron Starling Benchmarks

MetaMath Cybertron Starling Parameters and Internals

LLM Name: MetaMath Cybertron Starling
Repository: 🤗 https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling
Base Model(s): MetaMath Cybertron (Q-bert/MetaMath-Cybertron), Starling LM 7B Alpha (berkeley-nest/Starling-LM-7B-alpha)
Model Size: 7B
Required VRAM: 14.4 GB
Updated: 2024-10-18
Maintainer: Q-bert
Model Type: mistral
Model Files: 9.9 GB (1 of 2), 4.5 GB (2 of 2)
Supported Languages: en
Model Architecture: MistralForCausalLM
License: cc-by-nc-4.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: bfloat16
MetaMath Cybertron Starling (Q-bert/MetaMath-Cybertron-Starling)
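Given the details above (MistralForCausalLM, bfloat16 sharded safetensors, a 32768-token context, and roughly 14.4 GB of VRAM), a minimal loading sketch with the Hugging Face transformers library might look like this; the prompt and generation settings are illustrative, not part of the model card.

```python
# Minimal sketch for loading the model as described on this card.
# Only the repository id and dtype come from the card; the prompt and
# generation parameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Q-bert/MetaMath-Cybertron-Starling"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer, 32000-token vocab
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches "Torch Data Type: bfloat16"
    device_map="auto",           # ~14.4 GB VRAM at bf16 for the 7B weights
)

prompt = "What is the derivative of x^2 + 3x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```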

Quantized Models of the MetaMath Cybertron Starling

Model | Likes | Downloads | VRAM
...taMath Cybertron Starling GGUF | 0 | 234 | 2 GB
...taMath Cybertron Starling GGUF | 12 | 514 | 3 GB
...taMath Cybertron Starling GPTQ | 1 | 6 | 4 GB
...etaMath Cybertron Starling AWQ | 1 | 7 | 4 GB
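
For the GGUF quantizations in the table above, the model can be run through llama.cpp bindings instead of transformers. A minimal sketch with the llama-cpp-python package follows; the local file name is a placeholder (the exact repo and file names are truncated above), and n_ctx mirrors the 32768 context length listed on this card.

```python
# Sketch: running a GGUF quantization of MetaMath-Cybertron-Starling with
# llama-cpp-python. The model_path is a hypothetical local file name; download
# whichever GGUF file you want from one of the quantized repos listed above.
from llama_cpp import Llama

llm = Llama(
    model_path="./metamath-cybertron-starling.Q4_K_M.gguf",  # placeholder path
    n_ctx=32768,       # matches the card's context length
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm("Solve: if 3x + 5 = 20, what is x?", max_tokens=128)
print(out["choices"][0]["text"])
```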

Best Alternatives to MetaMath Cybertron Starling

Best Alternatives | Context / RAM | Downloads | Likes
...Nemo Instruct 2407 Abliterated | 1000K / 24.5 GB | 2175 | 6
MegaBeam Mistral 7B 512K | 512K / 14.4 GB | 2526 | 41
...daz Web AI ChatML 512K Project | 512K / 14.5 GB | 12 | 0
MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 1061 | 15
Hebrew Mistral 7B 200K | 256K / 30 GB | 2677 | 15
Astral 256K 7B V2 | 250K / 14.4 GB | 8 | 0
Astral 256K 7B | 250K / 14.4 GB | 8 | 0
Boptruth Agatha 7B | 128K / 14.4 GB | 608 | 0
Buddhi 128K Chat 7B | 128K / 14.4 GB | 1301 | 15
Test001 | 128K / 14.5 GB | 5 | 0


Original data from HuggingFace, OpenCompass and various public git repos.