| LLM Name | CognitiveFusion2 4x7B BF16 |
|---|---|
| Repository 🤗 | https://huggingface.co/Kquant03/CognitiveFusion2-4x7B-BF16 |
| Model Size | 24.2B |
| Required VRAM | 48.3 GB |
| Updated | 2025-01-13 |
| Maintainer | Kquant03 |
| Model Type | mixtral |
| Supported Languages | en |
| Model Architecture | MixtralForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.38.2 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | `<s>` |
| Vocabulary Size | 32000 |
| Torch Data Type | bfloat16 |
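
Given the metadata above (a MixtralForCausalLM checkpoint stored in bfloat16 with a 32768-token context), a minimal loading sketch with the Hugging Face `transformers` library might look like the following. The prompt and generation settings are illustrative only, and loading the full bfloat16 weights on GPU assumes roughly the listed 48.3 GB of VRAM.

```python
# Minimal sketch: load CognitiveFusion2-4x7B-BF16 via the standard Auto classes.
# Assumes transformers >= 4.38.2 (per the table) and sufficient GPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kquant03/CognitiveFusion2-4x7B-BF16"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to LlamaTokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the checkpoint's Torch Data Type
    device_map="auto",           # spread the MoE weights across available devices
)

# Illustrative generation call; prompt and max_new_tokens are arbitrary choices.
prompt = "Explain mixture-of-experts routing in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```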
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Dzakwan MoE 4x7b Beta | 32K / 48.4 GB | 5848 | 0 |
| Beyonder 4x7B V3 | 32K / 48.3 GB | 6082 | 58 |
| Calme 4x7B MoE V0.2 | 32K / 48.3 GB | 6597 | 2 |
| Calme 4x7B MoE V0.1 | 32K / 48.3 GB | 5938 | 2 |
| Mera Mix 4x7B | 32K / 48.3 GB | 4226 | 18 |
| Proto Athena 4x7B | 32K / 48.4 GB | 5 | 0 |
| Proto Athena V0.2 4x7B | 32K / 48.4 GB | 5 | 0 |
| MixtureofMerges MoE 4x7b V5 | 32K / 48.3 GB | 3992 | 1 |
| MixtureofMerges MoE 4x7b V4 | 32K / 48.3 GB | 3983 | 4 |
| NeuralStar FusionWriter 4x7b | 32K / 48.3 GB | 29 | 5 |