Yam Jom 7B Dare by mayacinka


Tags: Merged Model · AutoTrain compatible · Endpoints compatible · Base model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2 · Base model: yam-peleg/Experiment26-7B · License: apache-2.0 · Mistral · Region: us · Safetensors · Sharded · TensorFlow

Rank the Yam Jom 7B Dare Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Yam Jom 7B Dare (mayacinka/yam-jom-7B-dare)
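
If you want to try the model before rating it, the sketch below loads it with Hugging Face transformers. This is a minimal example, assuming transformers >= 4.38.1 and accelerate are installed and a GPU with roughly 15 GB of free VRAM is available for the bfloat16 weights; the prompt is purely illustrative.

```python
# Minimal sketch: load mayacinka/yam-jom-7B-dare and generate a completion.
# Assumes transformers >= 4.38.1 (the version the checkpoint was saved with),
# accelerate for device_map="auto", and ~15 GB of free GPU VRAM for bf16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mayacinka/yam-jom-7B-dare"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # checkpoint is stored in bfloat16
    device_map="auto",
)

prompt = "Briefly explain what a model merge is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)  # context window is 32768 tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```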

Best Alternatives to Yam Jom 7B Dare

Model | Score (HF Rank) | Context / VRAM | Downloads | Likes
KAI 7B V0.1 | 74.45 | 32K / 14.4 GB | 37 | 10
Dolphin 2.2.1 Mistral 7B | 73.17 | 32K / 14.4 GB | 23457 | 184
Mistral 7B V0.1 | 68.53 | 32K / 14.4 GB | 2020231 | 3099
Mistral 7B Instruct V0.2 | 62.23 | 32K / 14.4 GB | 2981740 | 1932
Notus 7B V1 | 60.15 | 32K / 14.4 GB | 5906 | 111
...t 3.5 0106 128K 3.0bpw H6 EXL2 | 60.1 | 128K / 3 GB | 10 | 0
...t 3.5 0106 128K 4.0bpw H6 EXL2 | 60.1 | 128K / 3.9 GB | 13 | 1
...t 3.5 0106 128K 5.0bpw H6 EXL2 | 60.1 | 128K / 4.7 GB | 5 | 0
...t 3.5 0106 128K 6.0bpw H6 EXL2 | 60.1 | 128K / 5.6 GB | 7 | 0
...t 3.5 0106 128K 8.0bpw H8 EXL2 | 60.1 | 128K / 7.4 GB | 12 | 1
Note: a score shown in green on the original page (e.g. "73.2") marks a model that ranks better than mayacinka/yam-jom-7B-dare.

Yam Jom 7B Dare Parameters and Internals

LLM Name: Yam Jom 7B Dare
Repository: mayacinka/yam-jom-7B-dare (open on 🤗 Hugging Face)
Base Model(s): eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2, yam-peleg/Experiment26-7B
Merged Model: Yes
Model Size: 7b
Required VRAM: 42.8 GB (the sum of all listed shard files; the two-shard set alone totals 14.4 GB)
Model Type: mistral
Model Files: 9.9 GB (1-of-2), 4.5 GB (2-of-2); 9.9 GB (1-of-3), 9.9 GB (2-of-3), 8.6 GB (3-of-3)
Model Architecture: MistralForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.38.1
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: bfloat16
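
The internals above can be cross-checked against the repository in a few lines. Below is a minimal sketch, assuming transformers (>= 4.38.1) is installed and the Hugging Face Hub is reachable; the expected values in the comments simply mirror the table above.

```python
# Minimal sketch: fetch the model's config and tokenizer and compare them
# against the internals listed on this page. Assumes transformers >= 4.38.1.
from transformers import AutoConfig, AutoTokenizer

model_id = "mayacinka/yam-jom-7B-dare"

config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.architectures)            # expected: ['MistralForCausalLM']
print(config.model_type)               # expected: 'mistral'
print(config.max_position_embeddings)  # expected: 32768
print(config.vocab_size)               # expected: 32000
print(config.initializer_range)        # expected: 0.02
print(config.torch_dtype)              # expected: bfloat16
print(tokenizer.__class__.__name__)    # expected: LlamaTokenizer (or its fast variant)
print(tokenizer.model_max_length)      # expected: 32768
print(tokenizer.pad_token)             # expected: '<unk>'
```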


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024040901