Danube2 Upscale 1.7 by Lambent


Arxiv: 2203.05482   AutoTrain compatible   Endpoints compatible   Merge   Mergekit   Mistral   Region: US   Safetensors   Sharded   Tensorflow
Base models (merge): Lambent/danube2-upscale-1.53lisa, Lambent/danube2-upscale-1.51galore, Lambent/danube2-upscale-1.531qlora, Lambent/danube2-upscale-1.51qlora
Datasets: HuggingFaceTB/cosmopedia-100k, nampdn-ai/tiny-bridgedict, Severian/Internal-Knowledge-Map, Severian/Internal-Knowledge-Map-StoryWriter-RolePlaying, sordonia/redpajama-sample_from_valid_all, teknium/GPTeacher-General-Instruct, Vezora/Tested-22k-Python-Alpaca

Danube2 Upscale 1.7 Benchmarks

Danube2 Upscale 1.7 (Lambent/danube2-upscale-1.7)

Danube2 Upscale 1.7 Parameters and Internals

Model Type 
Pre-trained language model
Additional Notes 
Training methodology is experimental and not easy to replicate.
Training Details 
Data Sources:
HuggingFaceTB/cosmopedia-100k, Vezora/Tested-22k-Python-Alpaca, sordonia/redpajama-sample_from_valid_all, nampdn-ai/tiny-bridgedict, teknium/GPTeacher-General-Instruct, Severian/Internal-Knowledge-Map, Severian/Internal-Knowledge-Map-StoryWriter-RolePlaying
Methodology:
Linear merge of layers, duplicating layers 16-21, with various repair methods attempted; merge quality was assessed with the EQ-Bench benchmark.
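For orientation, here is a minimal sketch of the layer-duplication step, done the way such depth up-scaling is commonly performed (a mergekit passthrough-style slice, reproduced here directly in Transformers). The underlying checkpoint name (h2oai/h2o-danube2-1.8b-base), its 24-layer depth, and the exact slice boundaries are assumptions for illustration, not the documented recipe.

```python
# Hedged sketch: depth up-scaling by duplicating decoder layers 16-21.
# The base checkpoint and its layer count are assumptions for illustration.
import copy
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2o-danube2-1.8b-base",   # assumed 24-layer MistralForCausalLM
    torch_dtype=torch.float16,
)

layers = base.model.layers                                # decoder stack
dup = [copy.deepcopy(layers[i]) for i in range(16, 22)]   # copies of layers 16-21

# Keep layers 0-21, insert the duplicated block, then append the remaining tail.
new_stack = list(layers[:22]) + dup + list(layers[22:])
base.model.layers = torch.nn.ModuleList(new_stack)
base.config.num_hidden_layers = len(new_stack)

# Re-index the attention modules so per-layer KV caching stays consistent.
for idx, layer in enumerate(new_stack):
    layer.self_attn.layer_idx = idx

base.save_pretrained("danube2-upscaled-sketch")
```

The linear-merge part, combining the repaired base model variants listed below, would typically be a plain weight average in the spirit of arXiv:2203.05482 (e.g. mergekit's linear method); the exact weighting used here is not documented.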
LLM Name: Danube2 Upscale 1.7
Repository: https://huggingface.co/Lambent/danube2-upscale-1.7
Base Model(s): Lambent/danube2-upscale-1.53lisa, Lambent/danube2-upscale-1.51galore, Lambent/danube2-upscale-1.531qlora, Lambent/danube2-upscale-1.51qlora
Model Size: 2.2b
Required VRAM: 4.5 GB
Updated: 2025-02-05
Maintainer: Lambent
Model Type: mistral
Model Files: 4.5 GB (1 of 1)
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: float16
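As a quick usage check, here is a minimal loading and generation sketch consistent with the configuration above (MistralForCausalLM architecture, LlamaTokenizer, float16 weights, 8192-token context). The prompt and sampling settings are placeholders, not recommendations from the model card.

```python
# Minimal sketch: load Lambent/danube2-upscale-1.7 and generate a short completion.
# Prompt and sampling parameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Lambent/danube2-upscale-1.7"
tokenizer = AutoTokenizer.from_pretrained(repo)   # LlamaTokenizer under the hood
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,   # ~4.5 GB of weights
    device_map="auto",           # requires `accelerate`; use .to("cuda") otherwise
)

prompt = "Briefly explain what depth up-scaling of a language model means."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is a pre-trained base model rather than an instruction-tuned chat model, plain-text completion prompts like the one above are the appropriate usage pattern.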

Best Alternatives to Danube2 Upscale 1.7

Best Alternatives                  Context / RAM    Downloads    Likes
...
Phi 3 Medium AWQ 4bit Smashed      4K / 7.8 GB      78           0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227