LLM Explorer: A Curated Large Language Model Directory and Analytics

Mintaka Mistral 7B Instruct V0.2 by msalnikov



Tags: Arxiv:1910.09700 · Autotrain compatible · Conversational · Endpoints compatible · Instruct · Mistral · Region:us · Safetensors · Sharded · Tensorflow
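The Instruct and Conversational tags indicate the model is meant to be prompted through a chat template rather than with raw text. A minimal sketch of running it with transformers, assuming the repository's tokenizer ships a chat template (standard for Mistral-7B-Instruct-v0.2 derivatives); the question and generation settings are illustrative only:

    # Minimal chat sketch; assumes the tokenizer provides a chat template.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "msalnikov/Mintaka-Mistral-7B-Instruct-v0.2"  # repo id from this page
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # halves the 28.9 GB float32 footprint listed below
        device_map="auto",          # requires the accelerate package
    )

    messages = [{"role": "user", "content": "Which country is Mount Everest in?"}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=64)
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))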

Rank the Mintaka Mistral 7B Instruct V0.2 Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Model under review: Mintaka Mistral 7B Instruct V0.2 (msalnikov/Mintaka-Mistral-7B-Instruct-v0.2)

Best Alternatives to Mintaka Mistral 7B Instruct V0.2

Best Alternatives                  HF Rank  Context / RAM   Downloads  Likes
Metis Chat Instruct 7B             74.66    32K / 14.4 GB   1353       0
SirUkrainian                       70.5     32K / 14.4 GB   275        0
Mistral 7B Merge 14 V0.3           69.66    32K / 14.5 GB   2384       6
Sour Marcoro 12.5B                 69.23    32K / 25 GB     765        0
Mistral CatMacaroni Slerp 7B       69.08    32K / 14.5 GB   1284       0
Neural Una Cybertron 7B            69.05    32K / 14.4 GB   1688       2
Pic 7B Mistral Full V0.2           68.72    32K / 15 GB     2243       9
Instruct V0.2 Seraph 7B            68.48    32K / 14.4 GB   2361       1
...lphin 2.6 Mistral 7B DPO Laser  67.43    32K / 14.4 GB   1922       6
StarMix 7B Slerp                   67.41    32K / 14.5 GB   2185       1
Note: a green score (e.g., "73.2") marks a model that outperforms msalnikov/Mintaka-Mistral-7B-Instruct-v0.2.

Mintaka Mistral 7B Instruct V0.2 Parameters and Internals

LLM Name: Mintaka Mistral 7B Instruct V0.2
Repository: msalnikov/Mintaka-Mistral-7B-Instruct-v0.2 (open on 🤗 Hugging Face)
Model Size: 7B
Required VRAM: 28.9 GB
Updated: 2024-02-21
Maintainer: msalnikov
Model Type: mistral
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-6), 4.9 GB (2-of-6), 5.0 GB (3-of-6), 5.0 GB (4-of-6), 4.8 GB (5-of-6), 4.2 GB (6-of-6)
Model Architecture: MistralForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.1
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: float32
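The 28.9 GB VRAM figure follows from the float32 data type: the six shard files above sum to 28.9 GB, which at 4 bytes per parameter implies roughly 7.2B parameters. A minimal sketch of checking these fields programmatically, using only the standard transformers AutoConfig API:

    from transformers import AutoConfig

    model_id = "msalnikov/Mintaka-Mistral-7B-Instruct-v0.2"
    config = AutoConfig.from_pretrained(model_id)

    print(config.model_type)               # "mistral"
    print(config.max_position_embeddings)  # 32768 (the context length above)
    print(config.vocab_size)               # 32000

    # Rough weight-memory estimate: parameter count x bytes per parameter.
    # 28.9 GB of float32 shards implies ~7.2B parameters (28.9e9 / 4).
    n_params = 28.9e9 / 4
    for dtype, nbytes in [("float32", 4), ("float16", 2)]:
        print(f"{dtype}: ~{n_params * nbytes / 1e9:.1f} GB")  # ~28.9 GB vs ~14.5 GB

Loading the weights in float16 instead of the stored float32 roughly halves the memory needed, which is why the alternatives table above lists comparable 7B models at around 14.4 GB.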
Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v2024022003