Matter 0.2 32B by 0-hero


Tags: Autotrain compatible · Conversational · Dataset: 0-hero/matter-0.2-alph... · En · Endpoints compatible · License: apache-2.0 · PyTorch · Qwen2 · Region: US · Safetensors · Sharded · TensorFlow

Rank the Matter 0.2 32B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Matter 0.2 32B (0-hero/Matter-0.2-32B)

Best Alternatives to Matter 0.2 32B

Model                              Context/RAM    Downloads  Likes
SauerkrautLM Qwen 32B              32K / 64.6 GB        216     54
...penbuddy Qwen1.5 32B V21.2 32K  32K / 64.6 GB       2289      3
...penbuddy Qwen1.5 32B V21.1 32K  32K / 64.6 GB       1983      3
Einstein V4 Qwen 1.5 32B           32K / 64.6 GB         59      2
Deita 32B                          32K / 64.6 GB        861      1
Blossom V5 32B                     32K / 64.8 GB        878      4
Qwen1.5 32B Chat                   32K / 65.5 GB      42897    102
Qwen1.5 32B                        32K / 65.5 GB       9091     75
...wen1.5 32B Chat 3.0bpw H6 EXL2  32K / 13.7 GB          1      1
Qwen1.5 32B Chat Quip 3bit         32K / 14.8 GB          2      1

Matter 0.2 32B Parameters and Internals

LLM Name: Matter 0.2 32B
Repository: 0-hero/Matter-0.2-32B (open on 🤗 Hugging Face)
Model Size: 32b
Required VRAM: 64.6 GB
Updated: 2024-07-01
Maintainer: 0-hero
Model Type: qwen2
Model Files: 4.9 GB: 1-of-14 · 4.8 GB: 2-of-14 · 4.8 GB: 3-of-14 · 4.8 GB: 4-of-14 · 4.8 GB: 5-of-14 · 4.8 GB: 6-of-14 · 4.8 GB: 7-of-14 · 4.8 GB: 8-of-14 · 4.8 GB: 9-of-14 · 4.8 GB: 10-of-14 · 4.8 GB: 11-of-14 · 4.8 GB: 12-of-14 · 4.8 GB: 13-of-14 · 2.1 GB: 14-of-14 (see the shard-size check after this list)
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.40.0.dev0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Initializer Range: 0.02
Torch Data Type: bfloat16
Errors: replace
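
As a quick sanity check on the figures above, the 14 safetensors shard sizes in the Model Files row add up to the Required VRAM value. A minimal sketch in Python, with the shard list copied from that row:

```python
# Sum the 14 shard sizes listed in the Model Files row above.
shards_gb = [4.9] + [4.8] * 12 + [2.1]  # shards 1-of-14 through 14-of-14
total_gb = sum(shards_gb)
print(f"{len(shards_gb)} shards, {total_gb:.1f} GB total")  # -> 14 shards, 64.6 GB
```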
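Given the internals above (Qwen2ForCausalLM, bfloat16 weights, a 32768-token context, transformers 4.40), loading the checkpoint follows the standard transformers flow. Below is a minimal sketch, assuming enough GPU memory for the ~64.6 GB of weights; the prompt string is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "0-hero/Matter-0.2-32B"

# Qwen2Tokenizer is resolved automatically; the pad token is <|endoftext|>.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Qwen2ForCausalLM stored in bfloat16; device_map="auto" spreads the
# 14 shards across available GPUs (requires `accelerate`).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Explain the difference between supervised and unsupervised learning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For single-GPU setups without ~65 GB of memory, the quantized builds in the alternatives table above (the 3.0bpw EXL2 at 13.7 GB or the QuIP 3-bit at 14.8 GB) are the more realistic option.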


Original data from Hugging Face, OpenCompass, and various public git repos.
Release v2024042801