MPT 7B Instruct QLora 8Bits Peft Train Eli5 1 Epoch V7 by NickyNicky


Tags: Adapter, Finetuned, Instruct, LoRA, PEFT, Region: us

MPT 7B Instruct QLora 8Bits Peft Train Eli5 1 Epoch V7 Benchmarks

nn.n% — how the model compares to the reference models: Anthropic's Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

MPT 7B Instruct QLora 8Bits Peft Train Eli5 1 Epoch V7 Parameters and Internals

LLM Name: MPT 7B Instruct QLora 8Bits Peft Train Eli5 1 Epoch V7
Repository 🤗: https://huggingface.co/NickyNicky/MPT-7b-instruct-QLora-8Bits-Peft-train_eli5-1_Epoch-V7
Model Size: 7b
Required VRAM: 0 GB
Updated: 2024-08-15
Maintainer: NickyNicky
Instruction-Based: Yes
Model Files: 0.0 GB
Model Architecture: Adapter
Is Biased: none
Tokenizer Class: GPTNeoXTokenizer
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: Wqkv
LoRA Alpha: 32
LoRA Dropout: 0.05
R Param: 8
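The LoRA settings above (r = 8, alpha = 32, dropout = 0.05, targeting only the fused Wqkv projection) imply a very small trainable footprint. A back-of-envelope sketch, assuming the public MPT-7B dimensions (d_model = 4096, 32 transformer blocks — these figures are not stated on this page):

```python
# Estimate trainable LoRA parameters for this adapter.
# Assumptions (not from this page): MPT-7B has d_model = 4096 and 32 layers,
# and the fused Wqkv weight has shape (3 * d_model, d_model).

def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    """LoRA adds A (r x d_in) and B (d_out x r) per adapted matrix."""
    return r * d_in + d_out * r

d_model, n_layers, r = 4096, 32, 8
per_layer = lora_param_count(d_model, 3 * d_model, r)  # Wqkv only
total = n_layers * per_layer
print(per_layer, total)  # 131072 4194304
```

Roughly 4.2M trainable parameters, i.e. well under 0.1% of the ~6.7B-parameter base model, which is consistent with the tiny "Model Files" size listed above.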

Quantized Models of the MPT 7B Instruct QLora 8Bits Peft Train Eli5 1 Epoch V7

Model                  Likes   Downloads   VRAM
MPT 7B Instruct GGML   296     7           3 GB
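The VRAM figure for a quantized build follows directly from bit width. A rough sketch, assuming ~6.7e9 parameters for MPT-7B (a public spec, not stated on this page); real GGML files add some overhead for quantization scales and metadata:

```python
# Rough size of a quantized checkpoint: params * bits / 8 bytes.
# The 6.7e9 parameter count is an assumption based on MPT-7B's public specs.

def quantized_size_gb(n_params: float, bits: int) -> float:
    """Approximate checkpoint size in GB for a given bit width."""
    return n_params * bits / 8 / 1e9

for bits in (4, 8, 16):
    print(bits, round(quantized_size_gb(6.7e9, bits), 2))
```

The ~3 GB listed for the GGML build is consistent with a roughly 4-bit quantization (≈3.35 GB before format overhead).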

Best Alternatives to MPT 7B Instruct QLora 8Bits Peft Train Eli5 1 Epoch V7

Best Alternatives                    Context / RAM   Downloads   Likes
...Sql Flash Attention 2 Dataeval    0K / 1.9 GB     239         3
...82 6142 45d8 9455 Bc68ca4866eb    0K / 1.2 GB     6           0
...al 7B Instruct V0.3 1719301256    0K / 0.9 GB     9           0
Text To Rule Mistral 2               0K / 0.3 GB     6           0
...al 7B Instruct V0.3 1719246505    0K / 0 GB       9           0
...al 7B Instruct V0.3 1719297750    0K / 0.4 GB     6           0
Text To Rule Mistral                 0K / 0.4 GB     6           0
...al 7B Instruct V0.3 1719341325    0K / 1.7 GB     6           0
...al 7B Instruct V0.3 1719350528    0K / 0.9 GB     5           0
...al 7B Instruct V0.3 1719344430    0K / 3.5 GB     5           0
Note: a green score (e.g. "73.2") means that the model outperforms NickyNicky/MPT-7b-instruct-QLora-8Bits-Peft-train_eli5-1_Epoch-V7.

Rank the MPT 7B Instruct QLora 8Bits Peft Train Eli5 1 Epoch V7 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024072803