Hp Lat Llama PCA Epsilon6.0 Pgd Layer12 Def Layer13 14 15 Wikitext Fullrank 86 by PhillipGuo


Tags: Arxiv:1910.09700 · Autotrain compatible · Endpoints compatible · Llama · Region:us · Safetensors · Sharded · Tensorflow

Hp Lat Llama PCA Epsilon6.0 Pgd Layer12 Def Layer13 14 15 Wikitext Fullrank 86 Benchmarks

nn.n% indicates how the model's benchmark score compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Hp Lat Llama PCA Epsilon6.0 Pgd Layer12 Def Layer13 14 15 Wikitext Fullrank 86 Parameters and Internals

LLM Name: Hp Lat Llama PCA Epsilon6.0 Pgd Layer12 Def Layer13 14 15 Wikitext Fullrank 86
Repository 🤗: https://huggingface.co/PhillipGuo/hp-lat-llama-PCA-epsilon6.0-pgd_layer12-def_layer13_14_15-wikitext-fullrank-86
Model Size: 7b
Required VRAM: 13.5 GB
Updated: 2025-06-01
Maintainer: PhillipGuo
Model Type: llama
Model Files: 4.9 GB (1-of-3), 5.0 GB (2-of-3), 3.6 GB (3-of-3)
Model Architecture: LlamaForCausalLM
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.38.0
Vocabulary Size: 32000
Torch Data Type: bfloat16
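
Given the listed configuration (LlamaForCausalLM, three sharded safetensors files, bfloat16), the checkpoint can be loaded with the standard transformers API. The sketch below is illustrative and not part of the original listing; the 13.5 GB VRAM requirement is consistent with 7B parameters at 2 bytes each in bfloat16.

```python
# Minimal loading sketch, assuming transformers >= 4.38.0 (the listed version)
# and a GPU with roughly 13.5 GB of free VRAM (7B params x 2 bytes ≈ 13.5 GB).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "PhillipGuo/hp-lat-llama-PCA-epsilon6.0-pgd_layer12-def_layer13_14_15-wikitext-fullrank-86"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the listed Torch Data Type
    device_map="auto",           # needs `accelerate`; places weights on available devices
)

# Context Length and Model Max Length are both 4096 tokens.
inputs = tokenizer("Harry Potter is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```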

Best Alternatives to Hp Lat Llama PCA Epsilon6.0 Pgd Layer12 Def Layer13 14 15 Wikitext Fullrank 86

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| A6 L | 1024K / 16.1 GB | 201 | 0 |
| M | 1024K / 16.1 GB | 127 | 0 |
| 157 | 1024K / 16.1 GB | 101 | 0 |
| 124 | 1024K / 16.1 GB | 93 | 0 |
| A3.4 | 1024K / 16.1 GB | 13 | 0 |
| A5.4 | 1024K / 16.1 GB | 12 | 0 |
| A2.4 | 1024K / 16.1 GB | 12 | 0 |
| 2 Very Sci Fi | 1024K / 16.1 GB | 317 | 0 |
| 162 | 1024K / 16.1 GB | 60 | 0 |
| 118 | 1024K / 16.1 GB | 15 | 0 |
Note: a green score (e.g. "73.2") means that alternative outperforms PhillipGuo/hp-lat-llama-PCA-epsilon6.0-pgd_layer12-def_layer13_14_15-wikitext-fullrank-86.
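
The download and like counts in these tables mirror Hugging Face Hub metadata. As a hedged illustration (not the site's actual pipeline), the same figures can be queried with the official huggingface_hub client:

```python
# Sketch: fetch the metadata shown above (downloads, likes, safetensors shards)
# via huggingface_hub; the API calls are standard, the usage is illustrative.
from huggingface_hub import HfApi

api = HfApi()
info = api.model_info(
    "PhillipGuo/hp-lat-llama-PCA-epsilon6.0-pgd_layer12-def_layer13_14_15-wikitext-fullrank-86"
)

print(info.downloads, info.likes)  # the popularity counters tabulated above
for f in info.siblings:            # repo files, including the three weight shards
    if f.rfilename.endswith(".safetensors"):
        print(f.rfilename)
```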

Rank the Hp Lat Llama PCA Epsilon6.0 Pgd Layer12 Def Layer13 14 15 Wikitext Fullrank 86 Capabilities

Have you tried this model? Rate its performance: this feedback helps the ML community identify the most suitable models for their needs.

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227