Fireball Llama 3.1 8B Philos Reflection by EpistemeAI2

Tags: Autotrain compatible, Base model: epistemeai2/firebal..., Base model: finetune: epistemeai..., Conversational, En, Endpoints compatible, Llama, Model-index, Pytorch, Region: us, Safetensors, Sharded, Tensorflow, Trl, Unsloth

Fireball Llama 3.1 8B Philos Reflection Benchmarks

Benchmark percentages compare Fireball Llama 3.1 8B Philos Reflection (EpistemeAI2/Fireball-Llama-3.1-8B-Philos-Reflection) against the reference models Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Fireball Llama 3.1 8B Philos Reflection Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
commercial applications, research
Applications:
chatbots, text generation, sensitivity analysis, multilingual assistance
Primary Use Cases:
assistant-like chat, natural language generation tasks
Limitations:
Use in languages beyond those explicitly referenced as supported is out of scope without additional fine-tuning.
Considerations:
Developers may fine-tune models for unsupported languages while ensuring safe and responsible use.
Additional Notes 
Llama 3.1 models are not designed to be deployed in isolation and require additional safety guardrails when integrated into AI systems.
Supported Languages 
English (high), German (high), French (high), Italian (high), Portuguese (high), Hindi (high), Spanish (high), Thai (high)
Training Details 
Data Sources:
publicly available online data
Data Volume:
15T+ tokens
Context Length:
128000
Model Architecture:
Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Safety Evaluation 
Methodologies:
fine-tuning, adversarial testing, red teaming, multi-faceted data collection
Findings:
Model refusals of benign prompts, as well as refusal tone, have been an area of focus. Adversarial prompts and comprehensive safety-data responses have been incorporated.
Risk Categories:
CBRNE helpfulness, Child Safety, Cyber attack enablement
Ethical Considerations:
Llama 3.1 addresses users and their needs without imposing unnecessary judgment or normativity, focusing on the values of free thought and expression.
Responsible Ai Considerations 
Fairness:
The model is designed to be accessible to people across different backgrounds and experiences.
Transparency:
Includes transparency tools for safety and content evaluations.
Accountability:
Llama models should be part of an overall AI system with additional safety guardrails deployed by developers.
Mitigation Strategies:
Strategies include a three-pronged approach to managing trust & safety risks, developer guidance, and community engagement.
Input Output 
Input Format:
ChatML prompt template or Alpaca prompt template
Accepted Modalities:
text
Output Format:
text
Performance Tips:
Use one of the specific prompt templates (ChatML or Alpaca) for better performance; a sketch follows below.
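The Input Output fields above list ChatML and Alpaca prompt templates as accepted input formats. As a reference, here is a minimal sketch of the widely used Alpaca-style template; whether this particular fine-tune expects exactly this wording is an assumption and should be verified against the repository's tokenizer and chat-template configuration.

# Alpaca-style prompt sketch (assumed wording; verify against the repository).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Summarize the main ideas of Stoic ethics in three sentences."
)
print(prompt)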
Release Notes 
Version:
3.1
Date:
July 23, 2024
Notes:
Introduces new capabilities including longer context window and multilingual inputs.
LLM Name: Fireball Llama 3.1 8B Philos Reflection
Repository 🤗: https://huggingface.co/EpistemeAI2/Fireball-Llama-3.1-8B-Philos-Reflection
Base Model(s): EpistemeAI2/Fireball-Alpaca-Llama3.1.08-8B-Philos-C-R1
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2024-10-22
Maintainer: EpistemeAI2
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Supported Languages: en
Gated Model: Yes
Model Architecture: LlamaForCausalLM
License: proprietary
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.44.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|finetune_right_pad_id|>
Vocabulary Size: 128256
Torch Data Type: float16
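Given the fields above (gated repository, sharded float16 safetensors totaling about 16.1 GB, LlamaForCausalLM architecture, Transformers 4.44.2), a minimal loading-and-generation sketch with the Hugging Face transformers library could look as follows. The access-token placeholder "hf_..." and the example prompt are hypothetical; the model id is the repository listed above.

# Minimal sketch: load the gated, sharded float16 checkpoint and generate text.
# Assumes transformers >= 4.44 and an access token with permission to the gated repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI2/Fireball-Llama-3.1-8B-Philos-Reflection"
hf_token = "hf_..."  # hypothetical placeholder for a Hugging Face access token

tokenizer = AutoTokenizer.from_pretrained(model_id, token=hf_token)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the card lists float16 weights (~16.1 GB VRAM)
    device_map="auto",          # place shards on the available GPU(s)/CPU automatically
    token=hf_token,
)

prompt = "### Instruction:\nExplain the difference between knowledge and belief.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))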

Best Alternatives to Fireball Llama 3.1 8B Philos Reflection

Best Alternatives | Context / RAM | Downloads | Likes
...a 3 8B Instruct Gradient 1048K | 1024K / 16.1 GB | 6623 | 678
MrRoboto ProLong 8B V4i | 1024K / 16.1 GB | 66 | 1
...o ProLongBASE Pt8 Unaligned 8B | 1024K / 16.1 GB | 24 | 0
Mpasila Viking 8B | 1024K / 16.1 GB | 59 | 0
4 | 1024K / 16.1 GB | 322 | 0
Thor V1.4 8B DARK FICTION | 1024K / 16.1 GB | 941 | 2
16 | 1024K / 16.1 GB | 169 | 0
Because Im Bored Nsfw1 | 1024K / 16.1 GB | 66 | 1
11 | 1024K / 16.1 GB | 113 | 0
NBeerbower Narrative 8B 64K | 1024K / 16.1 GB | 32 | 1

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227