Cognitivecomputations Samantha Mistral 7B HQQ 1bit Smashed by PrunaAI


Tags: 1bit, Autotrain compatible, Endpoints compatible, Mistral, Pruna-ai, Quantized, Region: us

Cognitivecomputations Samantha Mistral 7B HQQ 1bit Smashed Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Cognitivecomputations Samantha Mistral 7B HQQ 1bit Smashed (PrunaAI/cognitivecomputations-samantha-mistral-7b-HQQ-1bit-smashed)

Cognitivecomputations Samantha Mistral 7B HQQ 1bit Smashed Parameters and Internals

Model Type: compressed, turbo, tiny, green

Use Cases
Areas: general AI applications that benefit from model compression

Additional Notes
The model can be run from safetensors weights.

Training Details
Data Sources: WikiText
Methodology: The model is compressed with HQQ, using WikiText as calibration data where the compression method requires it.
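As a rough illustration of that methodology, the sketch below shows how a base checkpoint could be quantized to 1-bit HQQ weights through the HqqConfig integration in transformers. It is a minimal sketch, not PrunaAI's actual compression pipeline: nbits=1 is taken from this listing, while group_size=64 and device_map="auto" are assumptions for illustration; it also needs the hqq package and a transformers release with HQQ support (newer than the 4.40.0 pinned below).

```python
# Minimal sketch (not PrunaAI's pipeline): quantize the base model to 1-bit HQQ
# weights while loading it. nbits=1 comes from this listing; group_size=64 and
# device_map="auto" are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, HqqConfig

base_id = "cognitivecomputations/samantha-mistral-7b"

quant_config = HqqConfig(nbits=1, group_size=64)  # 1-bit weights, assumed group size

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype="float16",             # matches the torch dtype listed below
    device_map="auto",
    quantization_config=quant_config,  # linear layers are quantized as they load
)
```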
Input / Output
Accepted Modalities: text
Performance Tips: Measure efficiency gains directly on your own use-cases, since the benefit varies with hardware, batch size, and prompt and generation length.
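A hedged sketch of that tip, assuming the `model` and `tokenizer` objects from the quantization sketch above: time a handful of generate() calls on prompts that resemble your workload, then repeat the same measurement with the uncompressed base model and compare. The prompts and token budget below are placeholders.

```python
# Rough latency check (placeholder prompts); run the same loop against the
# uncompressed base model to see the actual efficiency gain on your hardware.
import time
import torch

prompts = ["Hello, my name is Samantha.", "Summarize why model compression matters."]

def mean_generation_latency(model, tokenizer, prompts, max_new_tokens=64):
    latencies = []
    for prompt in prompts:
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        start = time.perf_counter()
        model.generate(**inputs, max_new_tokens=max_new_tokens)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        latencies.append(time.perf_counter() - start)
    return sum(latencies) / len(latencies)

print(f"Average generation latency: {mean_generation_latency(model, tokenizer, prompts):.2f} s")
```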
LLM Name: Cognitivecomputations Samantha Mistral 7B HQQ 1bit Smashed
Repository: 🤗 https://huggingface.co/PrunaAI/cognitivecomputations-samantha-mistral-7b-HQQ-1bit-smashed
Base Model(s): Samantha Mistral 7B (cognitivecomputations/samantha-mistral-7b)
Model Size: 7B
Required VRAM: 1.6 GB
Updated: 2025-01-20
Maintainer: PrunaAI
Model Type: mistral
Model Files: 1.6 GB
Quantization Type: 1bit
Model Architecture: MistralForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.40.0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
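Several of the figures above (architecture, context length, vocabulary size, tokenizer class) can be checked without downloading any weights, as in the small sketch below; the repository id is the one listed above, and the expected values in the comments are taken from this table.

```python
# Lightweight spec check against the repo's config; no model weights are loaded.
from transformers import AutoConfig, AutoTokenizer

repo_id = "PrunaAI/cognitivecomputations-samantha-mistral-7b-HQQ-1bit-smashed"

config = AutoConfig.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

print(config.model_type)               # expected: mistral
print(config.architectures)            # expected: ['MistralForCausalLM']
print(config.max_position_embeddings)  # expected: 32768
print(config.vocab_size)               # expected: 32000
print(type(tokenizer).__name__)        # expected: LlamaTokenizer (or its fast variant)
```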

Best Alternatives to Cognitivecomputations Samantha Mistral 7B HQQ 1bit Smashed

Best Alternatives | Context / RAM | Downloads | Likes
...al Nemo Instruct 2407 Bnb 4bit | 1000K / 8.3 GB | 12864 | 25
...istral Nemo Base 2407 Bnb 4bit | 1000K / 8.3 GB | 7094 | 13
...t 3.5 0106 128K 8.0bpw H8 EXL2 | 128K / 7.4 GB | 18 | 1
...t 3.5 0106 128K 4.0bpw H6 EXL2 | 128K / 3.9 GB | 13 | 1
...tral 7B Instruct V0.3 Bnb 4bit | 32K / 4.1 GB | 151485 | 17
Mistral 7B Sci Pretrain | 32K / 4.1 GB | 226 | 0
Mistral 7B V0.3 Bnb 4bit | 32K / 4.1 GB | 41678 | 14
User23ContinuedFine | 32K / 14.5 GB | 857 | 0
Mistral 7B Instruct V0.2 Fp16 | 32K / 14.4 GB | 32 | 0
Mistral 7B Instruct V0.2 4bit | 32K / 4.3 GB | 229 | 1
Note: a green score (e.g. "73.2") means that the model performs better than PrunaAI/cognitivecomputations-samantha-mistral-7b-HQQ-1bit-smashed.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227