LLM Name | Yi Bagel 2x34b |
Repository 🤗 | https://huggingface.co/NLPinas/yi-bagel-2x34b |
Base Model(s) | |
Model Size | 34.4b |
Required VRAM | 68.8 GB |
Updated | 2025-02-22 |
Maintainer | NLPinas |
Model Type | llama |
Model Files | |
Model Architecture | LlamaForCausalLM |
License | other |
Context Length | 200000 |
Model Max Length | 200000 |
Transformers Version | 4.36.2 |
Tokenizer Class | LlamaTokenizer |
Padding Token | <unk> |
Vocabulary Size | 64000 |
Torch Data Type | float16 |
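
As a reference, below is a minimal loading sketch based only on the fields above (repository id, LlamaForCausalLM architecture, float16 dtype, LlamaTokenizer); it assumes the standard Hugging Face `transformers` API (version 4.36.2 or later per the listing) and `accelerate` for device placement. The prompt is an arbitrary placeholder, not a format prescribed by this listing.

```python
# Minimal sketch, assuming the repo id from the listing and the standard transformers API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "NLPinas/yi-bagel-2x34b"

# Rough memory check: ~34.4B params * 2 bytes (float16) ~= 68.8 GB, matching the Required VRAM row.
tokenizer = AutoTokenizer.from_pretrained(repo_id)  # LlamaTokenizer, 64,000-token vocabulary
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the listed Torch Data Type
    device_map="auto",          # shards across available GPUs; needs `accelerate` installed
)

prompt = "Summarize the key features of the Yi model family."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```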
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
Pallas 0.5 LASER 0.5 | 195K / 68.9 GB | 1998 | 0 |
Pallas 0.5 LASER 0.4 | 195K / 68.9 GB | 1998 | 1 |
Pallas 0.5 LASER 0.3 | 195K / 68.9 GB | 1995 | 0 |
Pallas 0.5 LASER 0.2 | 195K / 68.9 GB | 1998 | 0 |
Pallas 0.5 LASER 0.6 | 195K / 68.9 GB | 38 | 5 |
Pallas 0.5 LASER Exp2 0.1 | 195K / 68.9 GB | 16 | 0 |
AnFeng V3.1 Avocet | 128K / 69.2 GB | 5579 | 0 |
UNA 34Beagles 32K Bf16 V1 | 32K / 69.2 GB | 1779 | 10 |
UNA 34BeagleSimpleMath 32K V1 | 32K / 69.2 GB | 32 | 6 |
PiVoT SUS RP | 8K / 69.2 GB | 1746 | 5 |