| LLM Name | Everyone Coder 33B V2 Base |
|---|---|
| Repository 🤗 | https://huggingface.co/rombodawg/Everyone-Coder-33b-v2-Base |
| Model Size | 33b |
| Required VRAM | 66.5 GB |
| Updated | 2025-06-02 |
| Maintainer | rombodawg |
| Model Type | llama |
| Model Files |  |
| Generates Code | Yes |
| Model Architecture | LlamaForCausalLM |
| License | other |
| Context Length | 16384 |
| Model Max Length | 16384 |
| Transformers Version | 4.36.2 |
| Tokenizer Class | LlamaTokenizerFast |
| Beginning of Sentence Token | <\|begin▁of▁sentence\|> |
| End of Sentence Token | <\|end▁of▁sentence\|> |
| Vocabulary Size | 32256 |
| Torch Data Type | float16 |
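Loading the model follows the standard transformers workflow implied by the fields above. Below is a minimal sketch, not taken from the model card itself, assuming a recent transformers install (the card lists 4.36.2) and enough GPU memory for the float16 weights; the 66.5 GB VRAM figure follows directly from 33B parameters at 2 bytes each in float16. Since this is a base model rather than an instruct tune, prompting it as plain code completion is the natural usage.

```python
# Minimal loading sketch for Everyone Coder 33B V2 Base
# (assumption: standard transformers usage, not confirmed by the card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rombodawg/Everyone-Coder-33b-v2-Base"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizerFast
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the Torch Data Type above (~66.5 GB)
    device_map="auto",          # shard across available GPUs via accelerate
)

# Plain completion prompt; the 16384-token context length applies to
# the prompt plus generated tokens combined.
prompt = "def quicksort(arr):\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```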
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ReflectionCoder DS 33B | 16K / 67 GB | 3921 | 4 |
| Deepseek Wizard 33B Slerp | 16K / 35.3 GB | 9 | 0 |
| Deepseek Coder 33B Instruct | 16K / 66.5 GB | 7015 | 519 |
| ValidateAI 3 33B Ties | 16K / 66.5 GB | 13 | 0 |
| ValidateAI 2 33B AT | 16K / 66.5 GB | 15 | 0 |
| WhiteRabbitNeo 33B V1 | 16K / 67 GB | 201 | 87 |
| Everyone Coder 33B Base | 16K / 66.5 GB | 24 | 20 |
| Fortran2Cpp | 16K / 67.3 GB | 24 | 4 |
| F2C Translator | 16K / 67.3 GB | 12 | 1 |
| Llm4decompile 33B | 16K / 66.5 GB | 19 | 8 |
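The download and like figures above are point-in-time snapshots from the Hugging Face Hub. If current numbers matter, they can be fetched programmatically; a small sketch assuming the official huggingface_hub client is installed:

```python
# Fetch live download/like counts for the listed model
# (assumption: huggingface_hub client; this snippet is not part of the card).
from huggingface_hub import HfApi

info = HfApi().model_info("rombodawg/Everyone-Coder-33b-v2-Base")
print(f"downloads={info.downloads}, likes={info.likes}")
```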