RedPJ ProX 0.7B by gair-prox


Arxiv: 2409.17115 | Dataset: gair-prox/redpajama-pr... | En | Llama | Pytorch | Region: us | Safetensors

RedPJ ProX 0.7B Benchmarks

Benchmark scores (nn.n%) show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
RedPJ ProX 0.7B (gair-prox/RedPJ-ProX-0.7B)

RedPJ ProX 0.7B Parameters and Internals

Model Type: language model
Supported Languages: en (fluent)
Training Details:
Data Sources: RedPajama-V2-pro
Data Volume: 25 billion tokens
LLM Name: RedPJ ProX 0.7B
Repository: 🤗 https://huggingface.co/gair-prox/RedPJ-ProX-0.7B
Model Size: 0.7b
Required VRAM: 3 GB
Updated: 2025-02-22
Maintainer: gair-prox
Model Type: llama
Model Files: 3.0 GB
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.42.4
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float32
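
The listing above maps directly onto a standard Hugging Face transformers loading path. Below is a minimal sketch, assuming only the repo id, LlamaForCausalLM architecture, LlamaTokenizer class, float32 weights, and 2048-token context from the table; the prompt text and generation settings are illustrative and are not part of the model card.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "gair-prox/RedPJ-ProX-0.7B"  # Repository listed above

# Tokenizer Class is LlamaTokenizer; AutoTokenizer resolves it from the repo config.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Torch Data Type is float32, consistent with the ~3 GB of model files / required VRAM.
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float32)
model.eval()

# Plain text completion, truncated to the 2048-token context length.
prompt = "The RedPajama corpus is"  # illustrative prompt, not from the model card
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=2048)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=64,   # illustrative generation settings
        do_sample=True,
        temperature=0.8,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Since the listing shows no chat template and describes a 0.7b language model trained on 25 billion RedPajama-V2-pro tokens, it is presumably best used as a base text-completion model rather than an instruction-following assistant.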

Rank the RedPJ ProX 0.7B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227