Trained Galactica by mariatager


Tags: Arxiv:1910.09700 · Adapter · Base model:adapter:facebook/ga... · Base model:facebook/galactica-... · Finetuned · Lora · Peft · Region:us · Safetensors

Trained Galactica Benchmarks

nn.n% — the model's score relative to one of the reference models: Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Trained Galactica Parameters and Internals

LLM Name: Trained Galactica
Repository: 🤗 https://huggingface.co/mariatager/trained_galactica
Base Model(s): Galactica 125M (facebook/galactica-125m)
Model Size: 125M
Required VRAM: 0 GB
Updated: 2024-08-15
Maintainer: mariatager
Model Files: 0.3 GB, 0.0 GB, 0.0 GB, 0.0 GB
Model Architecture: Adapter
Is Biased: none
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: [PAD]
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: k_proj | v_proj | down_proj | q_proj | o_proj | gate_proj | up_proj
LoRA Alpha: 16
LoRA Dropout: 0
R Param: 16
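The PEFT settings listed above (LoRA with rank r = 16, alpha 16, dropout 0, applied to seven projection matrices) determine how few parameters the adapter trains. A minimal sketch of that arithmetic, using a hypothetical hidden size of 768 purely for illustration (the card does not state the base model's dimensions):

```python
# LoRA (the PEFT type listed above) freezes the base weights and learns two
# low-rank factors per target module: A with shape (r, d_in) and B with shape
# (d_out, r). Trainable parameters per module = r * (d_in + d_out).

def lora_param_count(d_in: int, d_out: int, r: int = 16) -> int:
    """Trainable parameters LoRA adds to one d_in -> d_out linear layer."""
    return r * (d_in + d_out)

# Hypothetical hidden size, for illustration only (not from the model card):
hidden = 768

# The four attention projections in the target list (q, k, v, o) are
# square hidden x hidden maps:
per_projection = lora_param_count(hidden, hidden)  # 16 * (768 + 768) = 24576
attention_total = 4 * per_projection               # per transformer layer
print(per_projection, attention_total)
```

This is why the adapter's file sizes are a small fraction of the 0.3 GB base model: only the low-rank factors are stored and trained.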

Best Alternatives to Trained Galactica

Best Alternatives | Context / RAM | Downloads | Likes
Results Modified | 0K / 0 GB | 6 | 0

Rank the Trained Galactica Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution makes a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024072803