GemOmniscien by Warit2


  Arxiv:1910.09700   4bit   Autotrain compatible   Conversational   Endpoints compatible   Gemma   Pytorch   Quantized   Region:us   Sft   Sharded   Trl   Unsloth

Rank the GemOmniscien Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
GemOmniscien (Warit2/GemOmniscien)

Best Alternatives to GemOmniscien

Best Alternatives              | Context / RAM | Downloads / Likes
Gemma 2B AQLM 2Bit 2x8 Hf      | 8K / 1.6 GB   | 214
Gemma 2B AQLM 2Bit 1x16 Hf     | 8K / 1.7 GB   | 483
Gemma 2B It Bnb 4bit           | 8K / 2.1 GB   | 161629
Gemma 2B Bnb 4bit              | 8K / 2.1 GB   | 92897
Gemma 1.1 2B It Bnb 4bit       | 8K / 2.1 GB   | 4831
Codegemma 2B Bnb 4bit          | 8K / 2.1 GB   | 380
Gemma 2B Math                  | 8K / 2.1 GB   | 250
Gemma 2B Python 4bit           | 8K / 2.1 GB   | 170
Gemma2b Code Javascript 4bit   | 8K / 2.1 GB   | 170
Gemma 2b Code Python 4bit      | 8K / 2.1 GB   | 160

GemOmniscien Parameters and Internals

LLM Name: GemOmniscien
Repository: Warit2/GemOmniscien on Hugging Face
Model Size: 2B
Required VRAM: 5.1 GB
Updated: 2024-04-18
Maintainer: Warit2
Model Type: gemma
Model Files: 5.0 GB (1-of-2), 0.1 GB (2-of-2)
Quantization Type: 4bit
Model Architecture: GemmaForCausalLM
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.38.2
Tokenizer Class: GemmaTokenizer
Padding Token: <pad>
Vocabulary Size: 256000
Initializer Range: 0.02
Torch Data Type: float16
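
The internals above describe a 2B-parameter Gemma checkpoint: GemmaForCausalLM architecture, 8192-token context, GemmaTokenizer, float16 weights, and a 4bit quantization tag alongside Unsloth/TRL/SFT tags. The sketch below shows one plausible way to load and query it with Hugging Face Transformers. The repo id comes from this listing; the 4-bit bitsandbytes loading path and the chat-style prompt are assumptions, since the maintainer has not documented an intended usage.

```python
# Minimal loading sketch (not from the model card). Settings are inferred from
# the listed internals: 4-bit quantization, float16 compute dtype, 8192 context.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Warit2/GemOmniscien"  # repo id from this listing

# Quantize on load with bitsandbytes, matching the "Quantization Type: 4bit" entry.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # "Torch Data Type: float16"
)

tokenizer = AutoTokenizer.from_pretrained(model_id)  # GemmaTokenizer under the hood
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Chat-style generation; the model is tagged "Conversational" and "Sft".
messages = [{"role": "user", "content": "What is a 4-bit quantized Gemma 2B model useful for?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Loading in 4-bit should keep the weights well under the listed 5.1 GB figure, which corresponds to the sharded float16 files; loading without the quantization config would need roughly that much VRAM instead.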


Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024040901