NeuralBeagle14 7B GGUF by MaziyarPanahi


Tags: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, Autotrain compatible, Base model:mlabonne/beagle14-7..., Base model:mlabonne/neuralbeag..., Dpo, Endpoints compatible, Gguf, Has space, Lazymergekit, License:apache-2.0, License:cc-by-nc-4.0, Merge, Mergekit, Mistral, Quantized, Region:us, Rlhf, Safetensors
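The repository ships the same weights at 2-bit through 8-bit GGUF quantization levels, so you only need to fetch the single file that fits your hardware. Below is a minimal sketch of downloading one quant with huggingface_hub; the filename is an assumption (it follows the usual llama.cpp naming pattern) and should be checked against the repo's actual file list.

    # Minimal sketch: fetch one GGUF quant from the repo with huggingface_hub.
    # The filename below is an assumption -- verify the exact quant names in the
    # repo's "Files" tab before running.
    from huggingface_hub import hf_hub_download

    repo_id = "MaziyarPanahi/NeuralBeagle14-7B-GGUF"
    filename = "NeuralBeagle14-7B.Q4_K_M.gguf"  # assumed name for a 4-bit medium quant

    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    print(f"Downloaded to {local_path}")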

Rank the NeuralBeagle14 7B GGUF Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
NeuralBeagle14 7B GGUF (MaziyarPanahi/NeuralBeagle14-7B-GGUF)

Best Alternatives to NeuralBeagle14 7B GGUF

Best Alternatives             | HF Rank | Context/RAM | Downloads | Likes
Mistral 7B V0.1 GGUF          | 60.1    | 0K / 2.7 GB | 1245      | 0
Mistral 7B V0.1 GGUF          | 60.1    | 0K / 3.1 GB | 8242      | 231
Mistral 7B Instruct V0.2 GGUF | 54.6    | 0K / 2.7 GB | 549       | 1
Mistral 7B Instruct V0.2 GGUF | 54.6    | 0K / 3.1 GB | 93538     | 298
Mistral 7B Instruct V0.2 GGUF | 54.6    | 0K / 3.1 GB | 232       | 0
Mistral 7B Instruct V0.2 GGUF | 54.6    | 0K / 4.4 GB | 20        | 0
Notus 7B V1 GGUF              | 52.8    | 0K / 2.7 GB | 1446      | 0
Notus 7B V1 GGUF              | 52.8    | 0K / 3.1 GB | 280       | 23
Vicuna 7B V1.5 GGUF           | 48.5    | 0K / 2.8 GB | 812       | 14
Vicuna 7B V1.5 GGML           | 48.5    | 0K / 2.9 GB | 3         | 14
Note: a Score shown in green (e.g. "73.2") means that model is better than MaziyarPanahi/NeuralBeagle14-7B-GGUF.

NeuralBeagle14 7B GGUF Parameters and Internals

LLM Name: NeuralBeagle14 7B GGUF
Repository: MaziyarPanahi/NeuralBeagle14-7B-GGUF (open on 🤗 Hugging Face)
Model Name: NeuralBeagle14-7B-GGUF
Model Creator: mlabonne
Base Model(s): NeuralBeagle14 7B (mlabonne/NeuralBeagle14-7B)
Model Size: 7b
Required VRAM: 2.7 GB
Updated: 2024-04-13
Maintainer: MaziyarPanahi
Model Type: mistral
Model Files: 2.7 GB, 3.8 GB, 3.5 GB, 3.2 GB, 4.4 GB, 4.1 GB, 5.1 GB, 5.0 GB, 5.9 GB, 7.7 GB
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: apache-2.0
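
Because these are GGUF quants of a Mistral-architecture model, any llama.cpp-compatible runtime can load them. Below is a minimal sketch using llama-cpp-python with the smallest (2.7 GB) file listed above; the filename, context size, and example prompt are assumptions for illustration, not values taken from the model card.

    # Minimal sketch: run a downloaded quant with llama-cpp-python
    # (pip install llama-cpp-python). Adjust model_path to the file you
    # actually downloaded; the name below is an assumed 2-bit quant.
    from llama_cpp import Llama

    llm = Llama(
        model_path="NeuralBeagle14-7B.Q2_K.gguf",  # assumed name of the 2.7 GB file
        n_ctx=4096,       # context window to allocate
        n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU-only
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])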


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024040901