LLM Explorer: A Curated Large Language Model Directory and Analytics

Blockchainlabs Joe Bez Seminar by alnrg2arg



Tags: Merged Model · Autotrain compatible · Endpoints compatible · Flemmingmiguel/mbx-7b-v3 · License: apache-2.0 · Mistral · Region: us · Safetensors · Sharded · Tensorflow · Vanillaovo/supermario v4
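Since the tags describe a sharded safetensors checkpoint with a Mistral architecture hosted on Hugging Face, loading it should follow the standard transformers pattern. Below is a minimal sketch, assuming the repo id alnrg2arg/blockchainlabs_joe_bez_seminar shown on this page and a GPU with at least the listed 14.5 GB of VRAM; it is not an official usage example from the maintainer.

```python
# Minimal loading sketch for the model described on this page.
# Assumes the Hugging Face repo id shown here; not an official example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "alnrg2arg/blockchainlabs_joe_bez_seminar"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the listed Torch Data Type
    device_map="auto",           # requires `accelerate`; places shards on available devices
)

prompt = "Explain what a merged language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```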

Blockchainlabs Joe Bez Seminar Benchmarks


Best Alternatives to Blockchainlabs Joe Bez Seminar

Best Alternatives | Score | Context / RAM | Downloads | Likes
Ogno Monarch Jaskier Merge 7B | 76.43 | 32K / 14.5 GB | 108 | 2
Jaskier 7B DPO V5.6 | 76.41 | 32K / 14.4 GB | 302 | 13
MonaTrix V4 | 76.38 | 32K / 14.5 GB | 0 | 1
Jaskier 7B DPO V6.1 | 76.36 | 32K / 14.4 GB | 21 | 4
OGNO 7B | 76.34 | 32K / 14.4 GB | 1036 | 16
Omningotex 7B Slerp | 76.33 | 32K / 14.4 GB | 612 | 3
StrangeMerges 25 7B Dare Ties | 76.33 | 32K / 14.4 GB | 120 | 0
DPO Binarized NeutrixOmnibe 7B | 76.31 | 32K / 14.4 GB | 534 | 2
OgnoMonarch 7B | 76.3 | 32K / 14.5 GB | 168 | 0
StrangeMerges 21 7B Slerp | 76.29 | 32K / 14.4 GB | 658 | 0
Note: a green Score (e.g. "73.2") means the model outperforms alnrg2arg/blockchainlabs_joe_bez_seminar.
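The Downloads and Likes columns mirror live repository statistics on the Hugging Face Hub, so they drift over time. Here is a small sketch of how to re-check them with the public huggingface_hub client, using the one repo id this page states explicitly (the alternatives' repo ids are not given here, so they are left out):

```python
# Sketch: re-fetch the Hub statistics behind the Downloads/Likes columns.
# Uses only the repo id stated on this page.
from huggingface_hub import HfApi

api = HfApi()
info = api.model_info("alnrg2arg/blockchainlabs_joe_bez_seminar")
print(f"{info.id}: downloads={info.downloads}, likes={info.likes}")
```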

Blockchainlabs Joe Bez Seminar Parameters and Internals

LLM Name: Blockchainlabs Joe Bez Seminar
Repository: alnrg2arg/blockchainlabs_joe_bez_seminar (Open on 🤗)
Merged Model: Yes
Model Size: 7b
Required VRAM: 14.5 GB
Updated: 2024-02-29
Maintainer: alnrg2arg
Model Type: mistral
Model Files: 1.9 GB (1-of-8), 2.0 GB (2-of-8), 1.9 GB (3-of-8), 2.0 GB (4-of-8), 1.9 GB (5-of-8), 1.9 GB (6-of-8), 2.0 GB (7-of-8), 0.9 GB (8-of-8)
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: bfloat16
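Most of the fields above (architecture, context length, vocabulary size, initializer range) come straight from the repository's config.json, so they can be verified without pulling the 14.5 GB of weights; that size itself is consistent with a roughly 7.2 B-parameter model stored at 2 bytes per parameter in bfloat16. A hedged sketch using transformers:

```python
# Sketch: verify the listed internals from the repo's config and tokenizer,
# without downloading the sharded model weights themselves.
from transformers import AutoConfig, AutoTokenizer

repo_id = "alnrg2arg/blockchainlabs_joe_bez_seminar"

config = AutoConfig.from_pretrained(repo_id)

assert config.model_type == "mistral"             # Model Type
assert config.max_position_embeddings == 32768    # Context Length / Model Max Length
assert config.vocab_size == 32000                 # Vocabulary Size
assert config.initializer_range == 0.02           # Initializer Range

tokenizer = AutoTokenizer.from_pretrained(repo_id)
print(type(tokenizer).__name__)  # expected: LlamaTokenizer (or its fast variant)
```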
Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024022003