| LLM Name | Xiangxin 2XL Chat 1048K Chinese Llama3 70B |
|---|---|
| Repository 🤗 | https://huggingface.co/xiangxinai/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B |
| Model Size | 70b |
| Required VRAM | 141.9 GB |
| Updated | 2024-12-14 |
| Maintainer | xiangxinai |
| Model Type | llama |
| Instruction-Based | Yes |
| Model Files | |
| Supported Languages | zh, en |
| Model Architecture | LlamaForCausalLM |
| License | llama3 |
| Context Length | 1048576 |
| Model Max Length | 1048576 |
| Transformers Version | 4.40.2 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Vocabulary Size | 128256 |
| Torch Data Type | float16 |
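Given the `LlamaForCausalLM` architecture, float16 weights, and instruction-tuned chat format listed above, the model can presumably be loaded with the standard `transformers` auto classes. The sketch below is a minimal example, assuming the repository ID above loads as-is and that the tokenizer ships with a chat template (neither is confirmed by this card); the 141.9 GB of float16 weights require multiple large GPUs or offloading.

```python
# Minimal usage sketch (assumptions: standard transformers AutoModel API,
# chat template bundled with the tokenizer, enough GPU memory for ~142 GB
# of float16 weights — adjust device_map or add quantization as needed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xiangxinai/Xiangxin-2XL-Chat-1048k-Chinese-Llama3-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the Torch Data Type listed above
    device_map="auto",          # shard across available GPUs
)

# Example chat prompt (the card lists zh/en support); asks the model to
# briefly introduce itself in Chinese.
messages = [{"role": "user", "content": "请用中文简要介绍一下你自己。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Note that the advertised 1048576-token context length far exceeds what fits in memory on most setups; in practice, generation should be attempted with much shorter prompts unless the serving stack is sized for long-context inference.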
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ... 3 70B Instruct Gradient 1048K | 1024K / 141.9 GB | 1247 | 121 |
| Llama3 Function Calling 1048K | 1024K / 141.9 GB | 1 | 1 |
| ...a 3 70B Instruct Gradient 524K | 512K / 141.9 GB | 138 | 23 |
| ...a 3 70B Instruct Gradient 262K | 256K / 141.9 GB | 276 | 55 |
| ...ama 3 70B Arimas Story RP V2.0 | 256K / 141.1 GB | 54 | 3 |
| ...ama 3 70B Arimas Story RP V1.5 | 256K / 141.2 GB | 51 | 2 |
| ...ama 3 70B Arimas Story RP V1.6 | 256K / 141.2 GB | 10 | 0 |
| Meta Llama 3.1 70B Instruct | 128K / 141.9 GB | 521937 | 513 |
| ...a 3.1 Nemotron 70B Instruct HF | 128K / 141.9 GB | 147182 | 1874 |
| Llama 3.3 70B Instruct | 128K / 141.9 GB | 161233 | 22 |