Japanese Stablelm Instruct Beta 70B GGUF by TheBloke


Tags: Base model (quantized): stabilityai/japanese-stablelm-instruct-beta-70b · Datasets: kunishou/databricks-dolly-15k-ja, kunishou/hh-rlhf-49k-ja, kunishou/oasst1-89k-ja · Gguf · Instruct · Ja · Japanese-stablelm · Llama · Quantized · Region:us

Japanese Stablelm Instruct Beta 70B GGUF Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Japanese Stablelm Instruct Beta 70B GGUF (TheBloke/japanese-stablelm-instruct-beta-70B-GGUF)

Japanese Stablelm Instruct Beta 70B GGUF Parameters and Internals

Model Type 
llama, text-generation, causal-lm
Use Cases 
Areas:
research, commercial applications
Limitations:
Model may generate offensive or inappropriate content
Considerations:
The pre-trained dataset has known limitations in terms of potential biases and content quality.
Additional Notes 
Quantized models are available on the Hugging Face repository under different formats for various use cases and hardware adaptations.
Supported Languages 
Japanese (fluent)
Training Details 
Data Sources:
kunishou/hh-rlhf-49k-ja, kunishou/databricks-dolly-15k-ja, kunishou/oasst1-89k-ja
Model Architecture:
decoder-only
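As a quick way to inspect one of the instruction datasets listed above, here is a minimal sketch using the Hugging Face datasets library (the "train" split name is an assumption; field names vary between these datasets, so the first record is printed as-is):

```python
# Minimal sketch: peek at one of the Japanese instruction datasets listed above.
# The "train" split name is assumed; field names differ between the datasets,
# so the first record is printed without assuming a particular schema.
from datasets import load_dataset

ds = load_dataset("kunishou/databricks-dolly-15k-ja", split="train")
print(len(ds), "examples")
print(ds[0])  # typically instruction / input / output style fields
```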
Safety Evaluation 
Ethical Considerations:
Pre-training dataset may contain inappropriate content; exercise caution in production.
Input Output 
Input Format:
[INST] <<SYS>>\nあなたは役立つアシスタントです。\n<</SYS>>\n\n{prompt} [/INST]
Accepted Modalities:
text
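A minimal sketch of how this template (the system message translates to "You are a helpful assistant.") might be filled in and run against one of the GGUF files with llama-cpp-python; the local file name, context length, and sampling settings below are assumptions, not values from this card:

```python
# Sketch: build the Llama-2-style prompt used by this model and generate with
# llama-cpp-python. Model file name and sampling parameters are illustrative.
from llama_cpp import Llama

SYSTEM = "あなたは役立つアシスタントです。"  # "You are a helpful assistant."

def build_prompt(user_message: str) -> str:
    # Llama-2 instruct template with the Japanese system prompt shown above.
    return f"[INST] <<SYS>>\n{SYSTEM}\n<</SYS>>\n\n{user_message} [/INST]"

llm = Llama(
    model_path="japanese-stablelm-instruct-beta-70b.Q4_K_M.gguf",  # assumed local file
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm(build_prompt("日本の首都はどこですか？"), max_tokens=256, temperature=0.7)
print(out["choices"][0]["text"])
```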
LLM Name: Japanese Stablelm Instruct Beta 70B GGUF
Repository 🤗: https://huggingface.co/TheBloke/japanese-stablelm-instruct-beta-70B-GGUF
Model Name: Japanese StableLM Instruct Beta 70B
Model Creator: Stability AI
Base Model(s): stabilityai/japanese-stablelm-instruct-beta-70b
Model Size: 70b
Required VRAM: 29.3 GB
Updated: 2024-12-22
Maintainer: TheBloke
Model Type: llama
Instruction-Based: Yes
Model Files: 29.3 GB, 36.1 GB, 33.2 GB, 29.9 GB, 38.9 GB, 41.4 GB, 39.1 GB, 47.5 GB, 48.8 GB, 47.5 GB
Supported Languages: ja
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: llama2
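The repository ships several quantization levels (the sizes listed under Model Files correspond to them), so a hedged way to grab just one variant with huggingface_hub is sketched below; the exact .gguf file name is an assumption, which is why the available files are listed first:

```python
# Sketch: list the GGUF quants in the repo and download only the one you want,
# rather than cloning every file. The chosen file name below is an assumption.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "TheBloke/japanese-stablelm-instruct-beta-70B-GGUF"

# Show the available quantized variants (Q2_K, Q4_K_M, Q5_K_M, ...).
for name in list_repo_files(repo_id):
    if name.endswith(".gguf"):
        print(name)

# Fetch one variant; sizes run from roughly 29 GB to 49 GB depending on the quant.
path = hf_hub_download(
    repo_id=repo_id,
    filename="japanese-stablelm-instruct-beta-70b.Q4_K_M.gguf",  # assumed name
)
print("Saved to:", path)
```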

Best Alternatives to Japanese Stablelm Instruct Beta 70B GGUF

Best Alternatives | Context / RAM | Downloads | Likes
CodeLlama 70B Instruct GGUF | 0K / 25.5 GB | 4114 | 56
Meta Llama 3 70B Instruct GGUF | 0K / 26.4 GB | 246 | 3
DAD Model V2 70B Q4 | 0K / 42.5 GB | 5 | 0
Swallow 70B Instruct GGUF | 0K / 29.4 GB | 86 | 8
Leo Hessianai 70B Chat GGUF | 0K / 29.3 GB | 114 | 1
Dolphin 2.2 70B GGUF | 0K / 29.3 GB | 214 | 18
Llama 2 70B Orca 200K GGUF | 0K / 29.3 GB | 753 | 22
...e Llama 2 70B Instruct V2 GGUF | 0K / 29.3 GB | 288 | 10
Platypus2 70B Instruct GGUF | 0K / 29.3 GB | 116 | 11
... 70B Instruct Abliterated LORA | 0K / 1.7 GB | 0 | 7
Note: a green score (e.g. "73.2") means that the model is better than TheBloke/japanese-stablelm-instruct-beta-70B-GGUF.

Rank the Japanese Stablelm Instruct Beta 70B GGUF Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 40,066 models are indexed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217