Experiment28 7B by yam-peleg


Tags: Autotrain compatible, Chat, En, Endpoints compatible, Mistral, Region:us, Safetensors, Sharded, Tensorflow


Experiment28 7B Parameters and Internals

Model Type: text-generation
Additional Notes: An experiment for testing and refining a specific training and evaluation pipeline research framework. The experiment aims to identify potential optimizations, focusing on data engineering, architecture efficiency, and evaluation performance. The goal is to evaluate the effectiveness of a new training/evaluation pipeline for LLMs, exploring adjustments in data preprocessing, model training algorithms, and evaluation metrics. More details will be shared in future experiments.
LLM Name: Experiment28 7B
Repository: 🤗 https://huggingface.co/yam-peleg/Experiment28-7B
Model Size: 7B
Required VRAM: 14.4 GB
Updated: 2024-12-22
Maintainer: yam-peleg
Model Type: mistral
Model Files: 4.9 GB (1 of 3), 5.0 GB (2 of 3), 4.5 GB (3 of 3)
Supported Languages: en
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.38.1
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
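
Given the MistralForCausalLM architecture, LlamaTokenizer, and float16 weights listed above, the model can be loaded with the standard Hugging Face transformers API. The snippet below is a minimal sketch, not taken from the model card itself; the prompt and generation settings are illustrative assumptions.

```python
# Minimal loading sketch for yam-peleg/Experiment28-7B.
# Assumes roughly 14.4 GB of accelerator memory for the float16 weights,
# per the Required VRAM figure in the table above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yam-peleg/Experiment28-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to LlamaTokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the listed Torch Data Type
    device_map="auto",          # places the three safetensors shards automatically
)

prompt = "Explain the difference between pretraining and fine-tuning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# max_new_tokens is an illustrative choice; the 32768-token context length
# listed above is the hard ceiling on prompt plus generated tokens.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```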

Best Alternatives to Experiment28 7B

Best Alternatives                      Context / RAM      Downloads  Likes
...Nemo Instruct 2407 Abliterated      1000K / 24.5 GB    2668       9
MegaBeam Mistral 7B 512K               512K / 14.4 GB     7113       46
SpydazWeb AI HumanAI RP                512K / 14.4 GB     26         1
SpydazWeb AI HumanAI 002               512K / 14.4 GB     19         1
...daz Web AI ChatML 512K Project      512K / 14.5 GB     12         0
MegaBeam Mistral 7B 300K               282K / 14.4 GB     3282       15
Hebrew Mistral 7B 200K                 256K / 30 GB       3193       15
Astral 256K 7B V2                      250K / 14.4 GB     19         0
Astral 256K 7B                         250K / 14.4 GB     16         0
Boptruth Agatha 7B                     128K / 14.4 GB     475        0

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217