UNA 34Beagles 32K Bf16 V1 by one-man-army


Tags: Autotrain compatible, Conversational, Endpoints compatible, Llama, Model-index, Region:us, Safetensors, Sharded, Tensorflow
Datasets: allenai/ai2_arc, allenai/ultrafeedback..., boolq, cais/mmlu, cakiki/rosetta-code, codeparrot/apps, datasets/winogrande, drop, facebook/belebele, intel/orca_dpo_pairs, jondurbin/cinematika-v..., jondurbin/truthy-dpo-v..., julielab/emobank, kingbri/pippa-sharegpt, ldjnr/capybara, lmsys/lmsys-chat-1m, migtissera/synthia-v1...., muennighoff/natural-in..., nvidia/helpsteer, open-orca/slimorca, openbookqa, piqa, spider, squad_v2, squish42/bluemoon-fand..., tiger-lab/mathinstruct, unalignment/spicy-3.1, unalignment/toxic-dpo-..., vezora/tested-22k-pyth...

UNA 34Beagles 32K Bf16 V1 Benchmarks

UNA 34Beagles 32K Bf16 V1 (one-man-army/UNA-34Beagles-32K-bf16-v1)

UNA 34Beagles 32K Bf16 V1 Parameters and Internals

Model Type: text generation
Additional Notes: The training data includes a toxic-DPO dataset, so the model is less censored than comparable models.
Training Details:
Data Sources: ai2_arc, unalignment/spicy-3.1, codeparrot/apps, facebook/belebele, Squish42/bluemoon-fandom-1-1-rp-cleaned, boolq, LDJnr/Capybara, jondurbin/cinematika-v0.1, drop, JULIELab/EmoBank, lmsys/lmsys-chat-1m, TIGER-Lab/MathInstruct, cais/mmlu, Muennighoff/natural-instructions, openbookqa, Vezora/Tested-22k-Python-Alpaca, cakiki/rosetta-code, Open-Orca/SlimOrca, spider, squad_v2, migtissera/Synthia-v1.3, datasets/winogrande
LLM Name: UNA 34Beagles 32K Bf16 V1
Repository: 🤗 https://huggingface.co/one-man-army/UNA-34Beagles-32K-bf16-v1
Model Size: 34.4B
Required VRAM: 69.2 GB
Updated: 2025-01-15
Maintainer: one-man-army
Model Type: llama
Model Files: 4.8 GB (1-of-15), 4.8 GB (2-of-15), 5.0 GB (3-of-15), 4.8 GB (4-of-15), 4.8 GB (5-of-15), 5.0 GB (6-of-15), 4.8 GB (7-of-15), 4.8 GB (8-of-15), 5.0 GB (9-of-15), 4.8 GB (10-of-15), 4.8 GB (11-of-15), 5.0 GB (12-of-15), 4.8 GB (13-of-15), 4.8 GB (14-of-15), 1.2 GB (15-of-15)
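The shard sizes listed above can be cross-checked against the stated 69.2 GB VRAM requirement with a quick sum (a sketch; the figures are copied from the file list):

```python
# Safetensors shard sizes in GB, as listed in the model file table above.
shard_sizes_gb = [4.8, 4.8, 5.0, 4.8, 4.8, 5.0, 4.8, 4.8, 5.0,
                  4.8, 4.8, 5.0, 4.8, 4.8, 1.2]

total_gb = round(sum(shard_sizes_gb), 1)
print(f"{len(shard_sizes_gb)} shards, {total_gb} GB total")  # 15 shards, 69.2 GB total
```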
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.35.0
Tokenizer Class: LlamaTokenizer
Padding Token: <s>
Vocabulary Size: 64000
Torch Data Type: bfloat16
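Given the bfloat16 weights and the 34.4B parameter count, the VRAM figure above follows from simple arithmetic (2 bytes per parameter). The sketch below shows that estimate plus a hypothetical loading helper using Hugging Face `transformers`; `device_map="auto"` additionally requires `accelerate`, and the full download is roughly 69 GB, so the helper is defined but not called here:

```python
def bf16_weight_gb(n_params: float) -> float:
    """Estimate weight memory in GB for bfloat16 (2 bytes per parameter)."""
    return n_params * 2 / 1e9

# 34.4B parameters * 2 bytes ≈ 68.8 GB, in line with the ~69.2 GB requirement above.
print(f"estimated weight memory: {bf16_weight_gb(34.4e9):.1f} GB")

def load_model():
    """Download and load the model (~69 GB; needs large GPUs or CPU offload)."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "one-man-army/UNA-34Beagles-32K-bf16-v1"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(
        repo,
        torch_dtype=torch.bfloat16,  # matches the stored Torch data type
        device_map="auto",           # shard across available devices (needs accelerate)
    )
    return tokenizer, model
```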

Quantized Models of the UNA 34Beagles 32K Bf16 V1

Model | Likes | Downloads | VRAM
UNA 34Beagles 32K Bf16 V1 GPTQ | 3 | 33 | 19 GB
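The 19 GB GPTQ build implies roughly 4-bit weights. A back-of-envelope check (an illustration, assuming the listed sizes are dominated by weight storage):

```python
# Effective bits per weight for the GPTQ build, from the sizes listed above.
bf16_gb = 69.2   # full-precision (16-bit) checkpoint
gptq_gb = 19.0   # quantized checkpoint
bits_per_weight = 16 * gptq_gb / bf16_gb
print(f"~{bits_per_weight:.1f} bits/weight")  # ~4.4 (4-bit plus quantization overhead)
```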

Best Alternatives to UNA 34Beagles 32K Bf16 V1

Best Alternatives | Context / RAM | Downloads | Likes
Pallas 0.5 LASER 0.5 | 195K / 68.9 GB | 743 | 0
Pallas 0.5 LASER 0.4 | 195K / 68.9 GB | 757 | 1
Pallas 0.5 LASER 0.3 | 195K / 68.9 GB | 746 | 0
Pallas 0.5 LASER 0.2 | 195K / 68.9 GB | 757 | 0
Pallas 0.5 LASER 0.6 | 195K / 68.9 GB | 16 | 5
Pallas 0.5 LASER Exp2 0.1 | 195K / 68.9 GB | 14 | 0
Yi Bagel 2x34b | 195K / 68.8 GB | 82 | 2
AnFeng V3.1 Avocet | 128K / 69.2 GB | 5265 | 0
UNA 34BeagleSimpleMath 32K V1 | 32K / 69.2 GB | 68 | 6
PiVoT SUS RP | 8K / 69.2 GB | 710 | 5



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227