FinOPT Washington by MayaPH


Autotrain compatible · Endpoints compatible · OPT · PyTorch · Region: US · Safetensors

FinOPT Washington Benchmarks

FinOPT Washington (MayaPH/FinOPT-Washington)

FinOPT Washington Parameters and Internals

Model Type
text-generation

Use Cases
Areas: Research, commercial applications
Applications: Banking queries, investment advice, general financial inquiries
Primary use cases: Financial question-answering tasks
Limitations: Domain-specific focus (may not perform well outside the financial domain); potential bias; lack of fact-checking capabilities
Considerations: Verify information against reliable sources.

Additional Notes
Consult financial professionals or reliable sources for specific financial advice.

Training Details
Data sources: Online sources, financial forums
Model architecture: OPT-125M

Responsible AI Considerations
Fairness: Be aware of potential biases and evaluate responses critically.
Transparency: The model operates as a predictive text generator based on learned patterns.
Accountability: Users should take responsibility for financial decisions and not rely solely on the model.
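Since the card lists an OPTForCausalLM architecture with a GPT2Tokenizer, the checkpoint should load with the standard transformers Auto classes. A minimal sketch, assuming the repository id shown on this page; the helper name and prompt are illustrative, and the heavy imports are deferred into the function so the sketch itself stays importable:

```python
# Hypothetical usage sketch for MayaPH/FinOPT-Washington.
# Assumes transformers (>= 4.29.2, per the card) and torch are installed.
repo_id = "MayaPH/FinOPT-Washington"

def ask(prompt: str, max_new_tokens: int = 64) -> str:
    """Download the ~0.5 GB checkpoint and generate a completion."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float32)
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Example call: `print(ask("What is compound interest?"))` — per the limitations above, treat any answer as unverified text, not financial advice.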
LLM Name: FinOPT Washington
Repository: 🤗 https://huggingface.co/MayaPH/FinOPT-Washington
Model Size: 125.2M parameters
Required VRAM: 0.5 GB
Updated: 2025-02-22
Maintainer: MayaPH
Model Type: opt
Model Files: 0.5 GB
Model Architecture: OPTForCausalLM
License: cc-by-sa-4.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.29.2
Tokenizer Class: GPT2Tokenizer
Beginning of Sentence Token: </s>
End of Sentence Token: </s>
Unk Token: </s>
Vocabulary Size: 50272
Torch Data Type: float32
Activation Function: relu
Errors: replace
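The "Required VRAM" figure follows directly from the parameter count and data type listed above: 125.2M float32 parameters at 4 bytes each come to roughly 0.5 GB for the weights alone (activations and the KV cache add more at inference time). A quick check:

```python
# Weight-memory estimate from the card's own numbers.
params = 125.2e6        # parameter count (125.2M)
bytes_per_param = 4     # float32 = 4 bytes
weight_gb = params * bytes_per_param / 1e9
print(round(weight_gb, 2))  # → 0.5
```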

Best Alternatives to FinOPT Washington

Best Alternatives       Context / RAM    Downloads    Likes
Galactica Finetuned     2K / 0.5 GB      54           0
Note: a green score (e.g. "73.2") indicates the model is better than MayaPH/FinOPT-Washington.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227