L3 Bluuwhale SAO MIX 8B V1 Fp32 Merge Calc by Casual-Autopsy


Tags: Merged Model · Autotrain compatible · Conversational · Endpoints compatible · Llama · Region: US · Safetensors · Sharded · Tensorflow
Base models: bluuwhale/L3-SAO-MIX-8B-V1, Sao10K/L3-8B-Lunaris-v1, Sao10K/L3-8B-Niitama-v1, Sao10K/L3-8B-Stheno-v3.2, Sao10K/L3-8B-Tamamo-v1

L3 Bluuwhale SAO MIX 8B V1 Fp32 Merge Calc Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
L3 Bluuwhale SAO MIX 8B V1 Fp32 Merge Calc (Casual-Autopsy/L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc)

L3 Bluuwhale SAO MIX 8B V1 Fp32 Merge Calc Parameters and Internals

LLM Name: L3 Bluuwhale SAO MIX 8B V1 Fp32 Merge Calc
Repository: 🤗 https://huggingface.co/Casual-Autopsy/L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc
Base Model(s): bluuwhale/L3-SAO-MIX-8B-V1, Sao10K/L3-8B-Niitama-v1, Sao10K/L3-8B-Lunaris-v1, Sao10K/L3-8B-Tamamo-v1, Sao10K/L3-8B-Stheno-v3.2
Merged Model: Yes
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2024-12-22
Maintainer: Casual-Autopsy
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Model Architecture: LlamaForCausalLM
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.42.3
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: bfloat16
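
The repo is a merge of five Llama-3-8B models, and the "_fp32-merge-calc" suffix suggests the merge arithmetic was done in float32 before the bfloat16 weights were saved. Below is a minimal, hypothetical mergekit-style config sketch: the model list and the float32 dtype come from this card, while the merge method ("linear") and the equal weights are placeholders, since the card does not state how the merge was actually computed.

```python
# Hypothetical mergekit config for a merge like this one. Only the five
# source models and the float32 merge dtype are taken from the card; the
# merge method and weights below are assumptions.
from pathlib import Path
from textwrap import dedent

config = dedent("""\
    merge_method: linear          # placeholder; the actual method is not stated
    dtype: float32                # matches the "_fp32-merge-calc" suffix
    models:
      - model: bluuwhale/L3-SAO-MIX-8B-V1
        parameters: {weight: 0.2}   # equal weights are an assumption
      - model: Sao10K/L3-8B-Niitama-v1
        parameters: {weight: 0.2}
      - model: Sao10K/L3-8B-Lunaris-v1
        parameters: {weight: 0.2}
      - model: Sao10K/L3-8B-Tamamo-v1
        parameters: {weight: 0.2}
      - model: Sao10K/L3-8B-Stheno-v3.2
        parameters: {weight: 0.2}
    """)

Path("merge-config.yaml").write_text(config)
# With mergekit installed, run: mergekit-yaml merge-config.yaml ./merged-model
```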

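Given the internals above (LlamaForCausalLM, 8192-token context, bfloat16 weights, transformers 4.42.3), the model should load with the standard transformers API; the following is a sketch, not maintainer-documented usage. The 16.1 GB VRAM figure is consistent with the weights alone: roughly 8B parameters × 2 bytes (bfloat16) ≈ 16 GB, before activations and KV cache.

```python
# Minimal loading sketch based on the internals listed above; assumes a
# standard transformers setup (>= 4.42.3 per the card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Casual-Autopsy/L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc"

tokenizer = AutoTokenizer.from_pretrained(repo)  # PreTrainedTokenizerFast, vocab 128256
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",           # ~16.1 GB of weights; expect a 24 GB-class GPU or offloading
)

# 8192-token context per the card; keep prompt + generation within it.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
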
Best Alternatives to L3 Bluuwhale SAO MIX 8B V1 Fp32 Merge Calc

Best Alternatives                       Context / RAM      Downloads   Likes
...a 3 8B Instruct Gradient 1048K       1024K / 16.1 GB    4850        678
Thor V1.4 8B DARK FICTION               1024K / 16.1 GB    941         2
MrRoboto ProLong 8B V2b                 1024K / 16.1 GB    177         0
MrRoboto ProLong 8B V4b                 1024K / 16.1 GB    94          0
MrRoboto ProLong 8B V1n                 1024K / 16.1 GB    156         0
MrRoboto ProLong 8B V4c                 1024K / 16.1 GB    79          0
MrRoboto ProLong 8B V4e                 1024K / 16.1 GB    43          0
MrRoboto ProLong 8B V1a                 1024K / 16.1 GB    107         0
HEL V0.8 8B LONG DARK                   1024K / 16.1 GB    200         0
MrRoboto ProLong 8B V2a                 1024K / 16.1 GB    101         0
Note: a green score (e.g. "73.2") means the model is better than Casual-Autopsy/L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc.

Rank the L3 Bluuwhale SAO MIX 8B V1 Fp32 Merge Calc Capabilities

🆘 Have you tried this model? Rate its performance. This feedback will help the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217