CollectiveCognition V1.1 Mistral 7B 5.0bpw H6 EXL2 by LoneStriker


Tags: autotrain compatible, base model:finetune:nousresear..., base model:nousresearch/llama-..., dataset:collectivecognition/ch..., distillation, en, endpoints compatible, exl2, finetuned, gpt4, instruct, mistral, mistral-7b, pytorch, quantized, region:us, sharegpt, synthetic data

CollectiveCognition V1.1 Mistral 7B 5.0bpw H6 EXL2 Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
CollectiveCognition V1.1 Mistral 7B 5.0bpw H6 EXL2 (LoneStriker/CollectiveCognition-v1.1-Mistral-7B-5.0bpw-h6-exl2)

CollectiveCognition V1.1 Mistral 7B 5.0bpw H6 EXL2 Parameters and Internals

Model Type 
instruct, finetune
Use Cases 
Areas:
research, commercial applications
Applications:
TruthfulQA assessments
Primary Use Cases:
Understanding and rectifying misconceptions
Additional Notes 
Trained on synthetic data, fast training time
Supported Languages 
en (Unknown proficiency)
Training Details 
Data Sources:
CollectiveCognition/chats-data-2023-09-27
Data Volume:
100 data points
Methodology:
Mistral approach, qlora
Training Time:
3 minutes
Hardware Used:
single 4090
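The card reports a QLoRA fine-tune finishing in about 3 minutes on a single 4090. The arithmetic below is a hedged sketch of why that is plausible: with LoRA, only small low-rank adapter matrices are trained while the quantized base weights stay frozen. The shapes (d_model=4096, 32 layers, rank 16, adapting two projections per layer) are illustrative assumptions, not values stated on this card.

```python
# Hedged sketch: rough arithmetic for why a QLoRA fine-tune of a ~7B model
# can be so light. All shape choices below are illustrative assumptions.

def lora_trainable_params(d_model: int, n_layers: int, rank: int,
                          matrices_per_layer: int = 2) -> int:
    """Trainable parameters for LoRA adapters.

    Each adapted d_model x d_model weight matrix gets two low-rank
    factors: A (rank x d_model) and B (d_model x rank).
    """
    per_matrix = 2 * rank * d_model
    return n_layers * matrices_per_layer * per_matrix

# Assumed Mistral-7B-like shapes: d_model=4096, 32 layers, rank 16,
# adapting only the attention q/v projections.
trainable = lora_trainable_params(d_model=4096, n_layers=32, rank=16)
total = 7_000_000_000  # ~7B frozen, 4-bit-quantized base parameters

print(f"trainable adapter params: {trainable:,}")
print(f"fraction of base model:   {trainable / total:.4%}")
```

Under these assumptions only about 0.12% of the parameters are trained, which, combined with the tiny dataset (100 data points), makes a minutes-long run on one GPU unsurprising.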
Input Output 
Input Format:
USER: ASSISTANT:
Accepted Modalities:
text
Output Format:
text
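The card lists the input format as `USER: ASSISTANT:`. The sketch below shows one plausible way to assemble a prompt in that style; the helper name and the multi-turn layout are assumptions, so check the model's actual chat template before relying on them.

```python
# Minimal sketch of building a prompt in the "USER: ASSISTANT:" format
# listed on this card. The function name and multi-turn handling are
# assumptions, not the model's documented template.

def build_prompt(turns: list[tuple[str, str]], next_user_msg: str) -> str:
    """Assemble a plain-text prompt from (user, assistant) history."""
    parts = []
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg} ASSISTANT: {assistant_msg}")
    # Trailing "ASSISTANT:" cues the model to generate the next reply.
    parts.append(f"USER: {next_user_msg} ASSISTANT:")
    return "\n".join(parts)

prompt = build_prompt(
    turns=[("What is EXL2?", "EXL2 is a quantization format for ExLlamaV2.")],
    next_user_msg="What does 5.0bpw mean?",
)
print(prompt)
```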
LLM Name: CollectiveCognition V1.1 Mistral 7B 5.0bpw H6 EXL2
Repository 🤗: https://huggingface.co/LoneStriker/CollectiveCognition-v1.1-Mistral-7B-5.0bpw-h6-exl2
Base Model(s): Llama 2 13B Hf (NousResearch/Llama-2-13b-hf)
Model Size: 13b
Required VRAM: 4.7 GB
Updated: 2024-12-22
Maintainer: LoneStriker
Model Type: mistral
Model Files: 4.7 GB
Supported Languages: en
Quantization Type: exl2
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.34.0.dev0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
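The quant name encodes its settings: 5.0 bits per weight (bpw) for the main weights, with a 6-bit output head ("H6"). A back-of-envelope size estimate from bpw is sketched below; the parameter count (~7.24B for Mistral 7B) is an assumption, and the listed 4.7 GB also covers the higher-precision head and metadata, so the estimate is only approximate.

```python
# Back-of-envelope sketch: estimating on-disk size of an EXL2 quant from
# bits-per-weight (bpw). The ~7.24B parameter count is an assumption;
# the 4.7 GB listed on this card also includes the 6-bit output head
# ("H6") and file metadata, so expect the estimate to come in low.

def estimate_size_gb(n_params: float, bpw: float) -> float:
    """Approximate quantized model size in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bpw / 8 / 1e9

size = estimate_size_gb(n_params=7.24e9, bpw=5.0)
print(f"estimated size at 5.0 bpw: {size:.2f} GB")  # vs 4.7 GB listed
```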

Best Alternatives to CollectiveCognition V1.1 Mistral 7B 5.0bpw H6 EXL2

Best Alternatives | Context / RAM | Downloads | Likes
...inRP 13B 128K V0.5 6 5bpw EXL2 | 128K / 10.3 GB | 7 | 0
...vot Mistral 13B 8.0bpw H8 EXL2 | 8K / 13.5 GB | 13 | 1
...vot Mistral 13B 4.0bpw H6 EXL2 | 8K / 6.9 GB | 12 | 1
LuminRP 13B 128K V0.5 | 128K / 25.1 GB | 19 | 2
Breeze 13B 32K Instruct V1.0 | 32K / 25.6 GB | 17 | 0
Power WizardLM 2 13B | 32K / 25 GB | 15 | 2
Mistral V0.3 13B 32K Base V1 | 32K / 25.2 GB | 16 | 0
Breeze 13B 32K Base V1.0 | 32K / 25.6 GB | 14 | 0
Pandora 13B V1 | 32K / 24.9 GB | 1097 | 0
Pandora V1 13B GPTQ | 32K / 6.9 GB | 17 | 3
Note: a green score (e.g. "73.2") means that the model is better than LoneStriker/CollectiveCognition-v1.1-Mistral-7B-5.0bpw-h6-exl2.

Rank the CollectiveCognition V1.1 Mistral 7B 5.0bpw H6 EXL2 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Looking for a specific open-source LLM or SLM? We list 40066 in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217