Llama 3 SEC Base by arcee-ai


Tags: Merged Model, Autotrain compatible, Continual pre-training, Dataset: SEC filings, English, Endpoints compatible, Finance, Large language model, Llama, Model merging, Region: US, Safetensors, SEC data, Sharded, TensorFlow


Llama 3 SEC Base Parameters and Internals

Model Type: large language model, financial analysis
Use Cases
Areas: financial sector, investment analysis, risk management, regulatory compliance, corporate governance, market research
Applications: investment analysis, risk assessment, regulatory compliance, corporate governance, market research
Primary Use Cases: financial insights, regulatory analysis, investment decision support
Additional Notes: The model retains strong conversational abilities while incorporating domain-specific SEC knowledge. It was fine-tuned on SEC data using the Llama 3 chat template, so prompts should follow that format (see the sketch below).
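Because fine-tuning used the Llama 3 chat template, inference prompts should be built through the tokenizer's chat template rather than concatenated by hand. A minimal sketch, assuming a transformers setup with enough GPU memory for the 70B bfloat16 weights (the example question is illustrative):

```python
# Minimal inference sketch; assumes sufficient GPU memory for 70B bf16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Llama-3-SEC-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build the prompt with the Llama 3 chat template the card says was used.
messages = [
    {"role": "user", "content": "Summarize the main risk factors disclosed in a 10-K filing."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```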
Supported Languages: English (high proficiency)
Training Details
Data Sources: SEC filings, Together AI's RedPajama dataset
Data Volume: 20B tokens seen so far, with a target of 72B tokens
Methodology: continual pre-training (CPT) followed by model merging
Hardware Used: AWS SageMaker HyperPod cluster with 4 nodes, each with 32 H100 GPUs
Training Framework: Megatron-Core, with the resulting checkpoint combined via the TIES merging technique (see the sketch below)
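TIES merging resolves interference between checkpoints in three steps: trim each task vector (the parameter delta from the base model) to its highest-magnitude entries, elect a per-parameter sign by majority, and average only the deltas that agree with that sign. A pure-PyTorch sketch of that logic follows; the actual merge was presumably done with dedicated tooling such as mergekit, and the `density` value here is an illustrative assumption:

```python
# Illustrative TIES merge (Yadav et al., 2023); not Arcee's actual pipeline.
import torch

def ties_merge(base, tuned, density=0.2):
    """base / tuned[i]: dicts of parameter name -> tensor. Returns merged dict."""
    merged = {}
    for name, base_w in base.items():
        # Task vectors: how each fine-tuned checkpoint moved away from the base.
        deltas = [m[name] - base_w for m in tuned]

        # 1) Trim: keep only the top-`density` fraction of entries by magnitude.
        trimmed = []
        for d in deltas:
            k = max(1, int(density * d.numel()))
            thresh = d.abs().flatten().kthvalue(d.numel() - k + 1).values
            trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))

        # 2) Elect sign: per-entry majority sign, weighted by magnitude.
        sign = torch.sign(torch.stack(trimmed).sum(dim=0))

        # 3) Disjoint merge: average only entries that agree with the elected sign.
        agree = [torch.where(torch.sign(t) == sign, t, torch.zeros_like(t))
                 for t in trimmed]
        counts = torch.stack([(a != 0).float() for a in agree]).sum(dim=0).clamp(min=1)
        merged[name] = base_w + torch.stack(agree).sum(dim=0) / counts
    return merged
```

With identically-shaped state dicts, `ties_merge(base_sd, [cpt_sd, instruct_sd])` would return the merged weights.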
Release Notes
Version: initial checkpoint
Notes: trained on 20B tokens of SEC data
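CPT runs like this typically interleave domain text with general-domain data (here, RedPajama) so the model does not lose its base capabilities. A hedged sketch of such a mixture with Hugging Face `datasets`; the file names and the 90/10 ratio are assumptions for illustration, not documented values:

```python
# Hedged sketch of a CPT data mixture; file names and ratio are assumptions.
from datasets import load_dataset, interleave_datasets

# Hypothetical local corpora, one {"text": ...} JSON record per line.
sec = load_dataset("json", data_files="sec_filings.jsonl",
                   split="train", streaming=True)
general = load_dataset("json", data_files="redpajama_subset.jsonl",
                       split="train", streaming=True)

# Sample ~90% SEC filings, ~10% general text (assumed, not documented).
mixture = interleave_datasets([sec, general], probabilities=[0.9, 0.1], seed=42)

for example in mixture.take(3):
    print(example["text"][:80])
```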
LLM Name: Llama 3 SEC Base
Repository: https://huggingface.co/arcee-ai/Llama-3-SEC-Base
Merged Model: Yes
Model Size: 70B
Required VRAM: 141.9 GB
Updated: 2025-02-22
Maintainer: arcee-ai
Model Type: llama
Model Files (30 safetensors shards): 4.7 GB: 1-of-30, 4.7 GB: 2-of-30, 5.0 GB: 3-of-30, 5.0 GB: 4-of-30, 4.7 GB: 5-of-30, 4.7 GB: 6-of-30, 4.7 GB: 7-of-30, 5.0 GB: 8-of-30, 5.0 GB: 9-of-30, 4.7 GB: 10-of-30, 4.7 GB: 11-of-30, 4.7 GB: 12-of-30, 5.0 GB: 13-of-30, 5.0 GB: 14-of-30, 4.7 GB: 15-of-30, 4.7 GB: 16-of-30, 4.7 GB: 17-of-30, 5.0 GB: 18-of-30, 5.0 GB: 19-of-30, 4.7 GB: 20-of-30, 4.7 GB: 21-of-30, 4.7 GB: 22-of-30, 5.0 GB: 23-of-30, 5.0 GB: 24-of-30, 4.7 GB: 25-of-30, 4.7 GB: 26-of-30, 4.7 GB: 27-of-30, 5.0 GB: 28-of-30, 5.0 GB: 29-of-30, 2.0 GB: 30-of-30
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: llama3
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.41.2
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: bfloat16
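The 141.9 GB VRAM figure is essentially the bfloat16 weight footprint: 2 bytes per parameter over roughly 70.6B parameters (the exact count is an approximation here), which also matches the sum of the shard sizes above:

```python
# Back-of-envelope weight footprint for a ~70B model stored in bfloat16.
n_params = 70.6e9    # approximate Llama 3 70B parameter count (assumption)
print(f"{n_params * 2 / 1e9:.1f} GB")  # 2 bytes/param -> ~141.2 GB, near the 141.9 GB listed
```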

Best Alternatives to Llama 3 SEC Base

Best Alternatives | Context / RAM | Downloads | Likes
... Chat 1048K Chinese Llama3 70B | 1024K / 141.9 GB | 4035 | 5
... Chat 1048K Chinese Llama3 70B | 1024K / 141.9 GB | 2524 | 5
... 3 70B Instruct Gradient 1048K | 1024K / 141.9 GB | 199 | 121
Llama3 Function Calling 1048K | 1024K / 141.9 GB | 6 | 1
...a 3 70B Instruct Gradient 524K | 512K / 141.9 GB | 233 | 23
...a 3 70B Instruct Gradient 262K | 256K / 141.9 GB | 186 | 55
...ama 3 70B Arimas Story RP V1.5 | 256K / 141.2 GB | 321 | 2
...ama 3 70B Arimas Story RP V2.0 | 256K / 141.1 GB | 65 | 3
...ama 3 70B Arimas Story RP V1.6 | 256K / 141.2 GB | 12 | 0
Yi 70B 200K RPMerge Franken | 195K / 142.4 GB | 10 | 1


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227