Text2cypher Codestral 16bit Gguf by tomasonjo


16bit · Base model: mistralai/codestral... · Dataset: tomasonjo/text2cypher-... · En · Endpoints compatible · Gguf · License: apache-2.0 · Mistral · Quantized · Region: us · Unsloth

Rank the Text2cypher Codestral 16bit Gguf Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Text2cypher Codestral 16bit Gguf (tomasonjo/text2cypher-codestral-16bit-gguf)

Best Alternatives to Text2cypher Codestral 16bit Gguf

Best Alternatives | Context / VRAM | HF Rank
Codestral 22B V0.1 GGUF | 0K / 4.8 GB | 947927
Llama2 22B GPLATTY GGUF | 0K / 9.1 GB | 6223
Nucleus 22B Token 500B GGUF | 0K / 9.1 GB | 2843
Huginn 22B Prototype GGUF | 0K / 9.1 GB | 6712
Llama2 22B Daydreamer V3 GGUF | 0K / 9.1 GB | 5042
Llama2 22B Daydreamer V2 GGUF | 0K / 9.1 GB | 5191
Qwen1.5 22B Chat Merge GGUF | 0K / 12.6 GB | 1601
...t2cypher Codestral Q4 K M Gguf | 0K / 13.3 GB | 502
Llama2 22B GPLATTY GGML | 0K / 9.2 GB | 17
Llama2 22B Daydreamer V3 GGML | 0K / 9.2 GB | 26

Text2cypher Codestral 16bit Gguf Parameters and Internals

LLM Name: Text2cypher Codestral 16bit Gguf
Repository: tomasonjo/text2cypher-codestral-16bit-gguf (open on 🤗)
Base Model(s): mistralai/Codestral-22B-v0.1
Model Size: 22B
Required VRAM: 44.5 GB
Model Type: mistral
Model Files: 44.5 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf|16bit
Model Architecture: AutoModel
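The 44.5 GB VRAM figure is consistent with simple back-of-the-envelope arithmetic: a 22B-parameter model stored at 16-bit precision needs roughly 2 bytes per parameter for the weights alone. A minimal sketch (the helper name and the 22.2B parameter count are illustrative assumptions, not values from this card):

```python
def estimate_vram_gb(n_params: float, bits_per_param: int) -> float:
    """Rough weight-memory estimate: parameters x bytes per parameter.

    Ignores KV cache and activation overhead, which add more at runtime.
    """
    bytes_total = n_params * bits_per_param / 8
    return bytes_total / 1e9  # decimal GB, as model listings usually report

# Assumed parameter count for a "22B" model (illustrative).
N_PARAMS = 22.2e9

print(round(estimate_vram_gb(N_PARAMS, 16), 1))  # 16-bit: 44.4
print(round(estimate_vram_gb(N_PARAMS, 4), 1))   # 4-bit quant: 11.1
```

The 16-bit estimate lands close to the 44.5 GB listed above; the 4-bit figure is lower than the ~13.3 GB of the Q4 K M variant in the alternatives table because real GGUF quants mix bit widths and carry format overhead.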


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801