Experimental Mistral 1B V00 by NickyNicky


Tags: Autotrain compatible, Conversational, Endpoints compatible, Mistral, PyTorch, Region: us


Experimental Mistral 1B V00 Parameters and Internals

Model Type: Text Classification, Natural Language Processing

Use Cases
Areas: Research, Commercial Applications
Applications: Sentiment Analysis, Text Classification
Primary Use Cases: Customer sentiment analysis, Content moderation
Limitations: Not suitable for real-time applications; limited support for non-English languages
Considerations: Ensure data privacy when deploying.
Additional Notes: Ongoing updates are planned to improve language coverage.
Supported Languages: English (high proficiency)
Training Details
Data Sources: Common Crawl, Wikipedia
Data Volume: 600 GB of text data
Methodology: Self-supervised pre-training followed by supervised fine-tuning
Context Length: 512 (see the truncation sketch after this block)
Training Time: 2 weeks
Hardware Used: 8x NVIDIA A100 GPUs
Model Architecture: Transformer-based
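
A practical implication of the 512-token training context (the architecture table further below lists a 32768-token max length): inputs can be truncated so they stay within the context the model was actually trained on. A minimal sketch, assuming the tokenizer from the repository listed below:

    # Sketch: cap inputs at the 512-token training context.
    # Assumes the tokenizer from NickyNicky/experimental-Mistral-1b-V00 (listed below).
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("NickyNicky/experimental-Mistral-1b-V00")
    enc = tokenizer(
        "a long document ...",
        truncation=True,
        max_length=512,  # training context length from the details above
        return_tensors="pt",
    )
    print(enc["input_ids"].shape)  # at most (1, 512)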
Safety Evaluation
Methodologies: Adversarial testing, evaluation against known biases
Findings: Reduced bias in minority languages; some limitations in detecting nuanced contexts
Risk Categories: Bias, misinformation
Ethical Considerations: Focus on minimizing bias in model responses
Responsible AI Considerations
Fairness: The model is trained on diverse datasets to mitigate bias.
Transparency: Model weights and training data sources are available.
Accountability: ZeroAI Labs
Mitigation Strategies: Continuous monitoring and updates based on feedback.
Input Output
Input Format: JSON or plain text
Accepted Modalities: Text
Output Format: JSON with structured sentiment or classification labels (see the sketch after this block)
Performance Tips: Ensure input data is clean and pre-processed for best performance.
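
The listing describes the I/O shape but not an exact schema, so the field names below ("text", "label", "score") are hypothetical; a minimal sketch of what a request and response of that shape could look like:

    # Illustrative only: the listing specifies JSON (or plain-text) input and
    # JSON output with sentiment/classification labels, but no exact schema.
    # The field names "text", "label", and "score" are hypothetical.
    import json

    request = json.dumps({"text": "The service was quick and friendly."})

    # A response of the described shape might be:
    raw_response = '{"label": "positive", "score": 0.97}'
    response = json.loads(raw_response)
    print(response["label"], response["score"])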
Release Notes
Version: 1.0.0
Date: 2023-09-01
Notes: Initial release with support for sentiment analysis and classification tasks.
LLM Name: Experimental Mistral 1B V00
Repository 🤗: https://huggingface.co/NickyNicky/experimental-Mistral-1b-V00
Model Size: 1b
Required VRAM: 3.5 GB
Updated: 2025-02-22
Maintainer: NickyNicky
Model Type: mistral
Model Files: 3.5 GB
Model Architecture: MistralForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.34.0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32002
Torch Data Type: float32
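
The listed 3.5 GB of float32 weights works out to roughly 0.9B parameters (at about 4 bytes per parameter), consistent with the 1b size class. A minimal loading sketch based on the specs above, assuming transformers >= 4.34.0 and enough memory for the float32 weights:

    # Minimal sketch: load the model with the specs listed above.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "NickyNicky/experimental-Mistral-1b-V00"

    tokenizer = AutoTokenizer.from_pretrained(repo_id)  # LlamaTokenizer, vocab size 32002
    model = AutoModelForCausalLM.from_pretrained(       # MistralForCausalLM
        repo_id,
        torch_dtype=torch.float32,  # matches the listed Torch Data Type
    )

    inputs = tokenizer("Hello, world!", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))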

Best Alternatives to Experimental Mistral 1B V00

Best Alternatives                 Context / RAM    Downloads  Likes
Ganga 1B                          128K / 4 GB      278        15
MD Judge V0.1                     32K / 14.4 GB    569        14
Nucleus 1B Alpha 1                32K / 2.3 GB     173        12
Mallam 1.1B 20K Instructions      32K / 2.2 GB     72         1
Lmlab Mistral 1B Untrained        32K / 4.5 GB     35         7
Empty Phi 1B                      3K / 5.8 GB      10         0
Phi Filtered 500M C1              2K / 0.7 GB      169        0
Nucleus 1B GPTQ                   32K / 1 GB       66         1
Mistral 1B GPTQ                   32K / 1 GB       66         0

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227