SauerkrotLM 1.5B Wikiqa Singleturn Adapter 3000steps Merge by avemio-digital


Tags: Arxiv:1910.09700, Autotrain compatible, Conversational, Endpoints compatible, Qwen2, Region:us, Safetensors, Sft, Trl, Unsloth
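
The Sft, Trl, and Unsloth tags, together with the "adapter ... 3000steps ... merge" naming, suggest a LoRA adapter trained by supervised fine-tuning and then merged back into its base weights. Below is a minimal sketch of such a merge using PEFT; the base checkpoint and adapter path are placeholders, since the card does not name them.

```python
# Hypothetical adapter merge with PEFT. BASE_MODEL and ADAPTER_DIR are placeholders;
# the card does not state which base checkpoint or adapter directory was actually used.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "org/base-qwen2-1.5b-checkpoint"          # placeholder, not from the card
ADAPTER_DIR = "./wikiqa_singleturn_adapter_3000steps"  # placeholder, not from the card

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, ADAPTER_DIR)   # attach the trained LoRA adapter
merged = model.merge_and_unload()                      # fold adapter deltas into the base weights

merged.save_pretrained("SauerkrotLM_1.5B_wikiqa_singleturn_adapter_3000steps_merge")
AutoTokenizer.from_pretrained(BASE_MODEL).save_pretrained(
    "SauerkrotLM_1.5B_wikiqa_singleturn_adapter_3000steps_merge"
)
```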

SauerkrotLM 1.5B Wikiqa Singleturn Adapter 3000steps Merge Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
SauerkrotLM 1.5B Wikiqa Singleturn Adapter 3000steps Merge (avemio-digital/SauerkrotLM_1.5B_wikiqa_singleturn_adapter_3000steps_merge)

SauerkrotLM 1.5B Wikiqa Singleturn Adapter 3000steps Merge Parameters and Internals

LLM Name: SauerkrotLM 1.5B Wikiqa Singleturn Adapter 3000steps Merge
Repository: https://huggingface.co/avemio-digital/SauerkrotLM_1.5B_wikiqa_singleturn_adapter_3000steps_merge
Model Size: 1.5b
Required VRAM: 3.1 GB
Updated: 2024-12-20
Maintainer: avemio-digital
Model Type: qwen2
Model Files: 3.1 GB
Model Architecture: Qwen2ForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.41.2
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
Errors: replace
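
Given the architecture, tokenizer, and dtype listed above, the checkpoint loads through the standard Qwen2 path in Transformers. Here is a minimal sketch; the prompt and generation settings are illustrative and not taken from the model card.

```python
# Minimal loading sketch for the merged checkpoint (Transformers >= 4.41).
# Repo id, dtype, and tokenizer class come from the table above; everything else is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "avemio-digital/SauerkrotLM_1.5B_wikiqa_singleturn_adapter_3000steps_merge"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # Qwen2Tokenizer, <|endoftext|> padding
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the listed torch data type
    device_map="auto",           # ~3.1 GB of weights; requires accelerate
)

# Single-turn question, in line with the wikiqa_singleturn naming.
messages = [{"role": "user", "content": "Who wrote 'Faust'?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```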

Best Alternatives to SauerkrotLM 1.5B Wikiqa Singleturn Adapter 3000steps Merge

Best Alternatives                      Context / RAM    Downloads   Likes
ReaderLM V2                            500K / 3.5 GB    26440       512
Reader Lm 1.5B                         250K / 3.1 GB    1295        589
DeepSeek R1 Distill Qwen 1.5B          128K / 3.5 GB    1075846     904
DeepScaleR 1.5B Preview                128K / 7.1 GB    14543       462
...Seek R1 Distill Qwen 1.5B ONNX      128K / N/A       46386       51
Qwen2.5 1.5B                           128K / 3.1 GB    505611      78
AceInstruct 1.5B                       128K / 3.5 GB    592         17
DeepSeek R1 Distill Qwen 1.5B          128K / 3.5 GB    15506       10
Qwen2 1.5B                             128K / 3.1 GB    104122      88
Stella En 1.5B V5                      128K / 6.2 GB    581890      211
Note: a green score (e.g. "73.2") means that the model is better than avemio-digital/SauerkrotLM_1.5B_wikiqa_singleturn_adapter_3000steps_merge.

Rank the SauerkrotLM 1.5B Wikiqa Singleturn Adapter 3000steps Merge Capabilities

Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227