| LLM Name | Galactica Finetuned |
|---|---|
| Repository 🤗 | https://huggingface.co/mariatager/galactica_finetuned |
| Model Size | 125.2M |
| Required VRAM | 0.5 GB |
| Updated | 2025-02-22 |
| Maintainer | mariatager |
| Model Type | opt |
| Model Files | |
| Model Architecture | OPTForCausalLM |
| Context Length | 2048 |
| Model Max Length | 2048 |
| Transformers Version | 4.41.2 |
| Is Biased | none |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | [PAD] |
| Vocabulary Size | 50272 |
| PEFT Type | LORA |
| LoRA Model | Yes |
| LoRA Alpha | 16 |
| LoRA Dropout | 0 |
| Torch Data Type | float32 |
| Activation Function | relu |
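Since the table lists PEFT Type LORA, the repository most likely hosts a LoRA adapter rather than full model weights, so loading it requires attaching the adapter to a base model. The 0.5 GB VRAM figure is consistent with 125.2M parameters × 4 bytes per parameter (float32) ≈ 0.5 GB. Below is a minimal loading sketch using 🤗 Transformers and PEFT; the base model ID `facebook/galactica-125m` is an assumption inferred from the model size and type, and should be verified against the adapter's `adapter_config.json`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

adapter_id = "mariatager/galactica_finetuned"
base_id = "facebook/galactica-125m"  # assumed base model; verify in adapter_config.json

# The adapter repo ships a PreTrainedTokenizerFast with a [PAD] token (per the table)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Load the base weights in float32 (the dtype listed above), then attach the LoRA adapter
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

# Quick smoke test within the 2048-token context window
inputs = tokenizer("The Transformer architecture", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```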
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| FinOPT Washington | 2K / 0.5 GB | 2125 | 3 |