Gemma 7B by Kota123


Tags: Arxiv:1705.03551, Arxiv:1804.06876, Arxiv:1804.09301, Arxiv:1809.02789, Arxiv:1811.00937, Arxiv:1904.09728, Arxiv:1905.07830, Arxiv:1905.10044, Arxiv:1907.10641, Arxiv:1911.01547, Arxiv:1911.11641, Arxiv:2009.03300, Arxiv:2009.11462, Arxiv:2101.11718, Arxiv:2107.03374, Arxiv:2108.07732, Arxiv:2109.07958, Arxiv:2110.08193, Arxiv:2110.14168, Arxiv:2203.09509, Arxiv:2206.04615, Arxiv:2304.06364, Arxiv:2305.14314, Arxiv:2312.11805, Autotrain compatible, Endpoints compatible, Gemma, Gguf, Quantized, Region:us, Safetensors, Sharded, Tensorflow
Model Card on HF 🤗: https://huggingface.co/Kota123/gemma-7b

Gemma 7B Benchmarks

Benchmark scores for Gemma 7B (Kota123/gemma-7b) are reported as percentages relative to reference models: Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Gemma 7B Parameters and Internals

Model Type 
text-to-text, decoder-only, large language model
Use Cases 
Areas:
Content Creation and Communication, Research and Education
Applications:
Text Generation, Chatbots and Conversational AI, Text Summarization
Primary Use Cases:
Content Creation, Knowledge Exploration
Limitations:
Biases or gaps in the training data can limit model responses, lack of common-sense reasoning, may generate incorrect or outdated factual statements
Considerations:
Requires clear prompts and instructions for optimal task performance; a prompting sketch follows below.
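
The prompting guidance above is easiest to see in code. Below is a minimal sketch (not from the card) that sends an explicit, task-specific summarization prompt through the `transformers` pipeline; the prompt wording and generation settings are illustrative assumptions.

```python
# A minimal prompting sketch -- the prompt text and generation settings are
# illustrative, not from the model card. Requires `transformers`, `torch`,
# and `accelerate` (for device_map="auto").
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Kota123/gemma-7b",      # repository named in this card
    torch_dtype=torch.bfloat16,    # matches the card's bfloat16 weights
    device_map="auto",
)

# A clear prompt states the task, the input, and the expected output format.
prompt = (
    "Summarize the following paragraph in two sentences.\n\n"
    "Paragraph: <your text here>\n\nSummary:"
)
result = generator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```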
Additional Notes 
Compared with similarly sized models, these models were designed from the ground up for responsible AI development.
Supported Languages 
English (fluent)
Training Details 
Data Sources:
Web Documents, Code, Mathematics
Data Volume:
6 trillion tokens
Context Length:
8192
Hardware Used:
TPUv5e
Model Architecture:
text-to-text, decoder-only
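
The training details above can be cross-checked against the checkpoint itself. A minimal sketch, assuming the repository ships a standard `config.json`:

```python
# A minimal sketch (assumes the repo ships a standard config.json) that
# reads the checkpoint configuration to confirm values listed in this card.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Kota123/gemma-7b")
print(config.architectures)            # expected per the card: ['GemmaForCausalLM']
print(config.max_position_embeddings)  # expected per the card: 8192
print(config.vocab_size)               # expected per the card: 256000
```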
Safety Evaluation 
Methodologies:
Red-Teaming, Human Evaluation, Automated Evaluation
Findings:
Results within acceptable thresholds for categories such as child safety, content safety, representational harms, memorization, and large-scale harms
Risk Categories:
Text-to-Text Content Safety, Representational Harms, Memorization, Large-scale Harms
Responsible AI Considerations 
Fairness:
These models underwent careful scrutiny; input data pre-processing is described and posterior evaluations are reported in this card.
Transparency and Accountability:
This model card summarizes details of the models' architecture, capabilities, limitations, and evaluation processes.
Mitigation Strategies:
Continuous monitoring and exploration of de-biasing techniques; guidelines for content safety and a prohibited-uses policy.
Input Output 
Input Format:
Text string, such as a question, a prompt, or a document to be summarized.
Accepted Modalities:
text
Output Format:
Generated English-language text in response to the input.
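
The text-in / text-out interface described above maps directly onto the standard `transformers` generation loop. A minimal sketch, with the example question and token budget as illustrative assumptions:

```python
# A minimal text-in / text-out sketch. The example question and token budget
# are illustrative assumptions; the model name and dtype come from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Kota123/gemma-7b")  # GemmaTokenizer
model = AutoModelForCausalLM.from_pretrained(
    "Kota123/gemma-7b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("Explain what a decoder-only model is.", return_tensors="pt")
inputs = inputs.to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```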
LLM Name: Gemma 7B
Repository 🤗: https://huggingface.co/Kota123/gemma-7b
Base Model(s): Gemma Ei Oc Structured Train (Holmeister/Gemma_ei_oc_structured_train)
Model Size: 7b
Required VRAM: 17.1 GB
Updated: 2024-12-21
Maintainer: Kota123
Model Type: gemma
Model Files: 34.2 GB total; shards: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 5.0 GB (3-of-4), 2.1 GB (4-of-4)
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: GemmaForCausalLM
License: gemma
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.38.0.dev0
Tokenizer Class: GemmaTokenizer
Padding Token: <pad>
Vocabulary Size: 256000
Torch Data Type: bfloat16
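
Since the table lists a GGUF quantization, the checkpoint can also run on CPU through `llama-cpp-python`. A minimal sketch; the local file name below is a hypothetical placeholder, so check the repository's file listing for the actual GGUF artifact:

```python
# A minimal GGUF inference sketch using llama-cpp-python. The file name is a
# hypothetical placeholder; download the actual GGUF file from the repo first.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-7b.gguf",  # hypothetical local file name
    n_ctx=8192,                  # matches the card's 8192-token context length
)
result = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(result["choices"][0]["text"])
```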

Best Alternatives to Gemma 7B

Best Alternatives | Context / RAM | Downloads | Likes
Gemma 7B It | 8K / 17.1 GB | 331554 | 1143
Gemma 7B | 8K / 17.1 GB | 59491 | 3076
Gemma 1.1 7B It GGUF | 8K / 5.3 GB | 24 | 1
Train06 | 8K / 9.1 GB | 16 | 0
Llama2 Kazakh 7B GGUF | 8K / 4.1 GB | 16 | 0
Gemma 7B Translator 0.4 | 8K / 17.1 GB | 35 | 0
Gemma 7B Translator 0.3 | 8K / 17.1 GB | 23 | 0
Gemma7B Konosuba | 8K / 17.1 GB | 23 | 0
Gemma 7B It GGUF | 8K / 5.3 GB | 16 | 1
Gemma 7B It GGUF | 8K / 3.1 GB | 1731 | 1



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217