Speechless Tora Code 7B V1.0 AWQ by TheBloke



Speechless Tora Code 7B V1.0 AWQ Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Speechless Tora Code 7B V1.0 AWQ (TheBloke/speechless-tora-code-7B-v1.0-AWQ)

Speechless Tora Code 7B V1.0 AWQ Parameters and Internals

Model Type: llama
Additional Notes: AWQ and GPTQ quantized versions are available for efficient inference.
Supported Languages: en (fluent)
Training Details:
- Data Sources: jondurbin/airoboros-2.2, Open-Orca/OpenOrca, garage-bAInd/Open-Platypus, WizardLM/WizardLM_evol_instruct_V2_196k, TokenBender/python_eval_instruct_51k
- Data Volume: 201,981 samples
- Context Length: 4096
- Training Time: 19:24:49.43
- Hardware Used: 2 x A800-80G
- Model Architecture: based on llama
Input/Output:
- Accepted Modalities: text
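The AWQ checkpoint can be loaded for local inference. A minimal sketch using the AutoAWQ library is shown below; the package name, class, and `from_quantized()` arguments follow AutoAWQ's public conventions and are not taken from this card, so treat them as assumptions to verify against the installed version.

```python
# Hypothetical loading sketch for the AWQ checkpoint (AutoAWQ + transformers).
# The AutoAWQ API used here is an assumption, not documented on this card.
MODEL_ID = "TheBloke/speechless-tora-code-7B-v1.0-AWQ"

def load_awq_model(model_id=MODEL_ID):
    # Imports are deferred so the sketch can be read (and the function defined)
    # without autoawq installed; calling it requires autoawq and a CUDA GPU.
    from awq import AutoAWQForCausalLM
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoAWQForCausalLM.from_quantized(
        model_id,
        fuse_layers=True,   # fuse attention/MLP layers for faster decoding
        safetensors=True,   # the repo ships safetensors shards
    )
    return model, tokenizer
```

Generation then follows the usual `tokenizer(...)` / `model.generate(...)` pattern; with ~3.9 GB of weights the model fits comfortably on a single consumer GPU.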
LLM Name: Speechless Tora Code 7B V1.0 AWQ
Repository: https://huggingface.co/TheBloke/speechless-tora-code-7B-v1.0-AWQ
Model Name: Speechless Tora Code 7B v1.0
Model Creator: Jiangwen Su
Base Model(s): uukuguy/speechless-tora-code-7b-v1.0
Model Size: 7b
Required VRAM: 3.9 GB
Updated: 2024-12-22
Maintainer: TheBloke
Model Type: llama
Instruction-Based: Yes
Model Files: 3.9 GB
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.34.0
Tokenizer Class: LlamaTokenizer
Padding Token: <pad>
Vocabulary Size: 32001
Torch Data Type: float16
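The 3.9 GB figure is consistent with 4-bit AWQ weights plus quantization metadata and fp16 embeddings. A back-of-envelope check follows; the group size of 128 and the assumption that `embed_tokens` and `lm_head` stay in fp16 are typical AWQ conventions, not values stated on this card.

```python
# Back-of-envelope check of the 3.9 GB "Required VRAM" / file-size figure.
# Assumptions: AWQ group size 128, ~32 extra bits of scale/zero metadata per
# group, embeddings and lm_head kept in fp16 (all typical, none card-stated).
total_params = 6.738e9                      # Llama-2-7B parameter count (approx.)
vocab, hidden = 32001, 4096                 # vocabulary size from the card
embed_params = 2 * vocab * hidden           # embed_tokens + lm_head, kept fp16
quant_params = total_params - embed_params  # weights stored at 4 bits

group_size = 128                            # assumed AWQ group size
bits_per_weight = 4 + 32 / group_size       # 4-bit weight + per-group metadata

size_bytes = quant_params * bits_per_weight / 8 + embed_params * 2
print(f"~{size_bytes / 1e9:.2f} GB")        # close to the 3.9 GB on the card
```

The estimate lands near 3.9-4.0 GB, matching the checkpoint size; actual runtime VRAM is somewhat higher once the KV cache and activations are counted.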

Best Alternatives to Speechless Tora Code 7B V1.0 AWQ

Best Alternatives                       Context / RAM    Downloads  Likes
Llama 2 7B 32K Instruct AWQ             32K / 3.9 GB     22         2
CodeLlama 7B Instruct AWQ               16K / 3.9 GB     176        4
...ama 7B Instruct Hf W4 G128 AWQ       16K / 3.9 GB     22         0
CausalLM 7B AWQ                         8K / 5.8 GB      31         3
...essianai 7B Chat Bilingual AWQ       8K / 3.9 GB      23         2
Leo Hessianai 7B Chat AWQ               8K / 3.9 GB      21         1
...epseek Math 7B Instruct AWQ Q4       4K / 4.8 GB      12         0
Llama 2 7B Ft Instruct Es AWQ           4K / 3.9 GB      17         1
Swallow 7B Instruct AWQ                 4K / 4.1 GB      19         1
...i 7B Sft Qa Context Jaqket AWQ       4K / 3.9 GB      24         1
Note: a green score (e.g. "73.2") means the model outperforms TheBloke/speechless-tora-code-7B-v1.0-AWQ.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217