Japanese Stablelm Instruct Gamma 7B GPTQ by TheBloke


Tags: Arxiv:2310.06825, 4-bit, Autotrain compatible, Base model: stabilityai/japanese-stablelm-instruct-gamma-7b, GPTQ, Instruct, Ja, Japanese-stablelm, License: apache-2.0, Mistral, Quantized, Region: us, Safetensors

Rank the Japanese Stablelm Instruct Gamma 7B GPTQ Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Japanese Stablelm Instruct Gamma 7B GPTQ (TheBloke/japanese-stablelm-instruct-gamma-7B-GPTQ)

Best Alternatives to Japanese Stablelm Instruct Gamma 7B GPTQ

Best Alternatives        | HF Rank | Context / RAM  | Downloads | Likes
Maxine 7B 0401 Stock     | 76.73   | 32K / 14.4 GB  | 891       | 1
CalmExperiment 7B Slerp  | 76.67   | 32K / 14.4 GB  | 902       | 0
Myriad 7B Slerp          | 76.66   | 32K / 14.4 GB  | 916       | 0
Versatile 7B             | 76.66   | 32K / 14.5 GB  | 778       | 0
Calme 7B Instruct V0.9   | 76.62   | 32K / 14.4 GB  | 1483      | 9
Calme 7B Instruct V0.2   | 76.61   | 32K / 14.5 GB  | 1128      | 11
Calme 7B Instruct V0.3   | 76.5    | 32K / 14.4 GB  | 846       | 5
Calme 7B Instruct V0.1.1 | 76.49   | 32K / 14.4 GB  | 761       | 0
Calme 7B Instruct V0.5   | 76.05   | 32K / 14.4 GB  | 917       | 11
Metis Chat Instruct 7B   | 74.66   | 32K / 14.4 GB  | 918       | 0
Note: a green score (e.g. "73.2") means that the model is better than TheBloke/japanese-stablelm-instruct-gamma-7B-GPTQ.

Japanese Stablelm Instruct Gamma 7B GPTQ Parameters and Internals

LLM Name: Japanese Stablelm Instruct Gamma 7B GPTQ
Repository: TheBloke/japanese-stablelm-instruct-gamma-7B-GPTQ (open on 🤗 Hugging Face)
Model Name: Japanese StableLM Instruct Gamma 7B
Model Creator: Stability AI
Base Model(s): stabilityai/japanese-stablelm-instruct-gamma-7b
Model Size: 7b
Required VRAM: 4.2 GB
Updated: 2024-07-01
Maintainer: TheBloke
Model Type: mistral
Instruction-Based: Yes
Model Files: 4.2 GB
Supported Languages: ja
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.34.1
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: float16
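
For reference, here is a minimal loading sketch based on the parameters above (MistralForCausalLM, 4-bit GPTQ, ~4.2 GB of weights). It assumes a recent transformers release with the accelerate and optimum/auto-gptq packages installed and a CUDA GPU; the Japanese prompt string is a hypothetical placeholder, so check the repository README for the exact instruction template the model was trained with.

```python
# Minimal loading sketch for the 4-bit GPTQ build described above.
# Assumes: pip install transformers accelerate optimum auto-gptq, plus a CUDA GPU
# with roughly 6 GB free (the quantized weights alone are ~4.2 GB).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/japanese-stablelm-instruct-gamma-7B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer, 32000-token vocab
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # let accelerate place the quantized weights on the GPU
    revision="main",     # TheBloke repos may also ship alternative GPTQ branches
)

# Hypothetical Japanese instruction prompt; see the repository README for the
# exact template expected by the instruct model.
prompt = "### 指示:\n日本の首都はどこですか?\n\n### 応答:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since Context Length and Model Max Length are both 32768, longer Japanese documents can be passed in a single prompt, subject to available VRAM.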


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024042801