NeuralMaxime 7B Slerp by Kukedlc


Tags: Merged Model · AutoTrain compatible · Base model: mlabonne/AlphaMonarch-7B · Base model: mlabonne/NeuralMonarch-7B · Conversational · Endpoints compatible · License: apache-2.0 · LoRA · Mistral · model-index · Region: us · Safetensors · Sharded · TensorFlow
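The tags above identify this as a SLERP merge of mlabonne/AlphaMonarch-7B and mlabonne/NeuralMonarch-7B. As a rough illustration only, a mergekit configuration for this kind of merge might look like the sketch below; the layer ranges, per-block interpolation weights, and dtype are assumptions for illustration, not the recipe actually used for this model:

```yaml
slices:
  - sources:
      - model: mlabonne/AlphaMonarch-7B
        layer_range: [0, 32]
      - model: mlabonne/NeuralMonarch-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/AlphaMonarch-7B
parameters:
  t:
    # Interpolation factor t per layer group; 0 = first model, 1 = second.
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```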

NeuralMaxime 7B Slerp Benchmarks

Rank the NeuralMaxime 7B Slerp Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
NeuralMaxime 7B Slerp (Kukedlc/NeuralMaxime-7B-slerp)

Best Alternatives to NeuralMaxime 7B Slerp

Best Alternatives | HF Rank | Context / VRAM | Downloads | Likes
RasGulla1 7B | 73 | 0K / 14.4 GB | 2760 | 2
Medilora Mistral 7B | 64.4 | 10K / 14.4 GB | 112 | 4
Mistral 7B V0.1 Hitl | 55 | 0K / 14.4 GB | 23 | 0
...rix Philosophy Mistral 7B LoRA | 53.9 | 0K / 14.4 GB | 399 | 1
V3 | 50.5 | 0K / 0 GB | 12 | 0
Mistral Alpaca Lora Full | – | 32K / 4.1 GB | 18 | 0
Full V4 Astromistral Final | – | 32K / 4.5 GB | 12 | 0
Llama 2 7B Pruned50 Retrained | – | 4K / 27.1 GB | 353 | 0
Llama 2 7B Pruned70 Retrained | – | 4K / 27.1 GB | 22 | 0
Llama 7B Hf Prompt Answering | – | 0K / 0 GB | 16 | 3

NeuralMaxime 7B Slerp Parameters and Internals

LLM Name: NeuralMaxime 7B Slerp
Repository: Kukedlc/NeuralMaxime-7B-slerp (open on 🤗)
Base Model(s): mlabonne/AlphaMonarch-7B, mlabonne/NeuralMonarch-7B
Merged Model: Yes
Model Size: 7B
Required VRAM: 14.4 GB
Model Files: 0.0 GB; 2.0 GB (1-of-8); 1.9 GB (2-of-8); 2.0 GB (3-of-8); 2.0 GB (4-of-8); 1.9 GB (5-of-8); 1.9 GB (6-of-8); 1.9 GB (7-of-8); 0.8 GB (8-of-8)
Model Architecture: AutoModelForCausalLM
Model Max Length: 8192
Is Biased: none
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
LoRA Model: Yes
PEFT Target Modules: model.layers.31.self_attn.q_proj, model.layers.31.self_attn.k_proj, model.layers.31.self_attn.v_proj, model.layers.31.self_attn.o_proj, model.layers.31.mlp.gate_proj, model.layers.31.mlp.up_proj, model.layers.31.mlp.down_proj
LoRA Alpha: 32
LoRA Dropout: 0.05
R Param: 8
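The "Slerp" in the model's name refers to spherical linear interpolation: instead of averaging the two base models' weights linearly (which shrinks their norm), SLERP moves along the arc between them, preserving magnitude. A minimal pure-Python sketch of the formula, operating on flat lists rather than real model tensors (tools like mergekit apply this per weight tensor):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors."""
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    # Cosine of the angle between the two vectors, clamped for safety.
    cos_omega = max(-1.0, min(1.0, dot / (norm0 * norm1)))
    # Nearly parallel vectors: fall back to plain linear interpolation.
    if 1.0 - abs(cos_omega) < eps:
        return [(1.0 - t) * a + t * b for a, b in zip(v0, v1)]
    omega = math.acos(cos_omega)
    sin_omega = math.sin(omega)
    s0 = math.sin((1.0 - t) * omega) / sin_omega
    s1 = math.sin(t * omega) / sin_omega
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# Halfway between two orthogonal unit vectors stays on the unit circle
# (both components ≈ 0.7071), whereas a plain average would have norm ≈ 0.707.
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))
```

At t=0 or t=1 the result reduces to one of the two inputs, so t plays the same role as the per-layer interpolation weights a merge tool would expose.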


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20240042001