NeuralMaxime 7B Slerp by Kukedlc


Tags: Merged Model · AutoTrain compatible · Base model: mlabonne/AlphaMonarch-7B · Base model: mlabonne/NeuralMonarch-7B · Conversational · Endpoints compatible · LoRA · Mistral · mlabonne/alphamonarch-7b · mlabonne/neuralmonarch-7b · Model-index · Region: us · Safetensors · Sharded · TensorFlow

NeuralMaxime 7B Slerp Benchmarks

NeuralMaxime 7B Slerp (Kukedlc/NeuralMaxime-7B-slerp)

NeuralMaxime 7B Slerp Parameters and Internals

Model Type: text-generation
Training Details:
Methodology: SLERP merge using LazyMergekit (a representative merge config is sketched below)
LLM Name: NeuralMaxime 7B Slerp
Repository: https://huggingface.co/Kukedlc/NeuralMaxime-7B-slerp
Base Model(s): mlabonne/AlphaMonarch-7B, mlabonne/NeuralMonarch-7B
Merged Model: Yes
Model Size: 7B
Required VRAM: 14.4 GB
Updated: 2025-02-22
Maintainer: Kukedlc
Model Files: 0.0 GB; 2.0 GB (1-of-8); 1.9 GB (2-of-8); 2.0 GB (3-of-8); 2.0 GB (4-of-8); 1.9 GB (5-of-8); 1.9 GB (6-of-8); 1.9 GB (7-of-8); 0.8 GB (8-of-8)
Model Architecture: AutoModelForCausalLM (see the loading sketch below)
License: apache-2.0
Model Max Length: 8192
Is Biased: none
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
PEFT Type: LORA (see the LoraConfig sketch below)
LoRA Model: Yes
PEFT Target Modules: model.layers.31.mlp.gate_proj, model.layers.31.self_attn.o_proj, model.layers.31.mlp.up_proj, model.layers.31.self_attn.k_proj, model.layers.31.self_attn.q_proj, model.layers.31.self_attn.v_proj, model.layers.31.mlp.down_proj
LoRA Alpha: 32
LoRA Dropout: 0.05
R Param (LoRA rank): 8
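
The training details above say only that the model is a SLERP merge of mlabonne/AlphaMonarch-7B and mlabonne/NeuralMonarch-7B produced with LazyMergekit. As a rough illustration, the sketch below writes a LazyMergekit-style SLERP config and hands it to mergekit's CLI; the layer ranges, interpolation schedule (`t`), and dtype are the common template defaults, not the actual values used for this merge.

```python
# Hypothetical reconstruction of a LazyMergekit-style SLERP merge for the two
# base models listed above. The interpolation schedule is the usual LazyMergekit
# template, NOT the exact configuration Kukedlc used.
import subprocess
import yaml  # pip install pyyaml mergekit

merge_config = {
    "slices": [{
        "sources": [
            {"model": "mlabonne/AlphaMonarch-7B", "layer_range": [0, 32]},
            {"model": "mlabonne/NeuralMonarch-7B", "layer_range": [0, 32]},
        ],
    }],
    "merge_method": "slerp",
    "base_model": "mlabonne/AlphaMonarch-7B",
    "parameters": {
        "t": [
            {"filter": "self_attn", "value": [0, 0.5, 0.3, 0.7, 1]},
            {"filter": "mlp", "value": [1, 0.5, 0.7, 0.3, 0]},
            {"value": 0.5},  # default mix for all remaining tensors
        ],
    },
    "dtype": "bfloat16",
}

with open("slerp-config.yaml", "w") as f:
    yaml.safe_dump(merge_config, f, sort_keys=False)

# mergekit's CLI performs the actual merge (this is what the LazyMergekit notebook runs).
subprocess.run(
    ["mergekit-yaml", "slerp-config.yaml", "./NeuralMaxime-7B-slerp", "--copy-tokenizer"],
    check=True,
)
```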
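
Since the repository ships the full merged weights (Model Architecture: AutoModelForCausalLM, about 14.4 GB across the shards listed above, 8192-token max length), it loads like any other Mistral-style causal LM. A minimal usage sketch, assuming the standard transformers/accelerate stack and enough GPU memory for fp16 weights:

```python
# Minimal sketch for loading and prompting the merged model.
# Assumes `transformers`, `accelerate`, and a GPU with ~16 GB of memory for fp16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kukedlc/NeuralMaxime-7B-slerp"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer, pad token </s>
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # matches the listed ~14.4 GB VRAM requirement
    device_map="auto",
)

prompt = "Explain what a SLERP model merge is in one short paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```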
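
The PEFT fields above describe a LoRA adapter (rank 8, alpha 32, dropout 0.05, bias none) that targets only the layer-31 attention and MLP projections. The sketch below reconstructs the equivalent peft.LoraConfig from those listed values alone; the training script that would have used it is not shown on this page.

```python
# LoraConfig mirroring the adapter metadata listed above (r=8, alpha=32,
# dropout=0.05, bias="none", layer-31 projections). This reconstructs the
# configuration shape only; it does not reproduce any actual training run.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "model.layers.31.self_attn.q_proj",
        "model.layers.31.self_attn.k_proj",
        "model.layers.31.self_attn.v_proj",
        "model.layers.31.self_attn.o_proj",
        "model.layers.31.mlp.gate_proj",
        "model.layers.31.mlp.up_proj",
        "model.layers.31.mlp.down_proj",
    ],
)
```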

Best Alternatives to NeuralMaxime 7B Slerp

Best Alternatives                       Context / RAM     Downloads   Likes
Mistral 7B Instruct V0.3                32K / 14.5 GB     566357      1388
Mistral 7B V0.3                         32K / 14.5 GB     179293      438
Mistral 7B V0.3                         32K / 14.5 GB     4736        5
Mistral 7B Instruct V0.3                32K / 14.5 GB     4052        5
Mistral 7B Instruct V0.3                32K / 14.5 GB     1045        3
...unoichi Lemon Royale V3 32K 7B       32K / 14.5 GB     26          4
Mistralai Mistral 7B V0.3               32K / 14.5 GB     25          3
...ralai Mistral 7B Instruct V0.3       32K / 14.5 GB     27          1
Mistral 7B Instruct V0.3                32K / 14.5 GB     5           0
Mistral 7B V0.2                         32K / 14.5 GB     20          1



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227