Trained Galactica by mariatager


Tags: Arxiv:1910.09700 · Adapter · Base model:adapter:facebook/ga... · Base model:facebook/galactica-... · Finetuned · Lora · Peft · Region:us · Safetensors

Trained Galactica Parameters and Internals

LLM Name: Trained Galactica
Repository: Open on 🤗
Base Model(s): Galactica 125M (facebook/galactica-125m)
Model Size: 125M
Required VRAM: 0 GB
Updated: 2024-07-27
Maintainer: mariatager
Model Files: 0.3 GB, 0.0 GB, 0.0 GB, 0.0 GB
Model Architecture: Adapter
Is Biased: none
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: [PAD]
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: k_proj|v_proj|down_proj|q_proj|o_proj|gate_proj|up_proj
LoRA Alpha: 16
LoRA Dropout: 0
R Param: 16
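The PEFT fields above map directly onto a LoRA configuration. A minimal sketch, assuming the page's pipe-separated target-module string is display formatting (PEFT expects a list); a plain dict is built here so the sketch runs without the `peft` library installed, and `task_type` is an assumption based on Galactica being a causal LM:

```python
# Reconstruct the adapter's LoRA hyperparameters exactly as listed on this page.
# Keys mirror peft.LoraConfig fields.

# The page shows target modules as a pipe-separated string; split into a list.
target_modules = "k_proj|v_proj|down_proj|q_proj|o_proj|gate_proj|up_proj".split("|")

lora_config_kwargs = {
    "r": 16,                   # "R Param"
    "lora_alpha": 16,          # "LoRA Alpha"
    "lora_dropout": 0.0,       # "LoRA Dropout"
    "bias": "none",            # "Is Biased"
    "target_modules": target_modules,
    "task_type": "CAUSAL_LM",  # assumption: Galactica is a causal LM
}

print(sorted(lora_config_kwargs["target_modules"]))
```

With `peft` installed, `peft.LoraConfig(**lora_config_kwargs)` would yield an equivalent config object.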
Trained Galactica (mariatager/trained_galactica)
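Since the repository ships only adapter weights (about 0.3 GB of safetensors), they must be applied on top of the base model at load time. A hedged sketch using `transformers` and `peft`, assuming the repo id `mariatager/trained_galactica` shown above; imports are deferred into the function so it can be defined without those libraries installed:

```python
def load_trained_galactica(repo_id: str = "mariatager/trained_galactica"):
    """Load facebook/galactica-125m and attach the LoRA adapter from repo_id.

    Requires `transformers` and `peft`; downloads weights from the Hugging Face Hub.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("facebook/galactica-125m")
    tokenizer = AutoTokenizer.from_pretrained(repo_id)  # PreTrainedTokenizerFast with [PAD]
    model = PeftModel.from_pretrained(base, repo_id)    # wraps the base model with the adapter
    return model, tokenizer
```

Usage: `model, tok = load_trained_galactica()`, then generate as with any causal LM; calling `model.merge_and_unload()` afterwards would fold the LoRA weights into the base model for adapter-free inference.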

Best Alternatives to Trained Galactica

Best Alternatives | HF Rank | Context / RAM | Downloads | Likes
—                 | —       | 0.2K / 0 GB   | 6         | 0



Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024072501