Llama 2 70B AQLM 2Bit QLoRA Function Calling by hiyouga


Tags: 2bit, Adapter, Base model (adapter): ista-daslab..., Base model: ista-daslab/llama-2..., Conversational, Dataset: glaiveai/glaive-functi..., Dataset: hiyouga/glaive-functio..., Dataset: vicgalle/alpaca-gpt4, En, Finetuned, Generated from trainer, Llama-factory, Lora, Peft, Quantized, Region: us, Safetensors

Llama 2 70B AQLM 2Bit QLoRA Function Calling Benchmarks

Benchmark scores (nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Llama 2 70B AQLM 2Bit QLoRA Function Calling (hiyouga/Llama-2-70b-AQLM-2Bit-QLoRA-function-calling)

Llama 2 70B AQLM 2Bit QLoRA Function Calling Parameters and Internals

Model Type: text-generation
Use Cases / Areas: research, tool use, conversation
Supported Languages: en (proficient)
Training Details:
Data Sources: vicgalle/alpaca-gpt4, glaiveai/glaive-function-calling-v2, hiyouga/glaive-function-calling-v2-sharegpt
Data Volume: 2,000 examples
Methodology: fine-tuning with LLaMA Factory
Hardware Used: maximum GPU usage of 24 GB
Model Architecture: 2-bit AQLM-quantized base with a QLoRA adapter for function calling
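As a rough illustration of how such a checkpoint is typically used, here is a minimal sketch that loads the AQLM 2-bit base model listed below and attaches this LoRA adapter with Transformers and PEFT. It assumes recent versions of transformers (with AQLM support), peft, accelerate, and the aqlm package; the prompt and generation settings are placeholders, not taken from the model card.

```python
# Minimal sketch (not from the model card): load the 2-bit AQLM base model
# and attach the function-calling LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf"               # base model listed below
ADAPTER = "hiyouga/Llama-2-70b-AQLM-2Bit-QLoRA-function-calling"  # this repository

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE,
    torch_dtype=torch.float16,
    device_map="auto",   # the card reports a maximum GPU usage of 24 GB
)
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the QLoRA adapter
model.eval()

# Placeholder prompt; the function-calling template used during fine-tuning
# with LLaMA Factory is not reproduced here.
prompt = "List the tools you can call and when you would use them."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```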
LLM Name: Llama 2 70B AQLM 2Bit QLoRA Function Calling
Repository 🤗: https://huggingface.co/hiyouga/Llama-2-70b-AQLM-2Bit-QLoRA-function-calling
Base Model(s): BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf
Model Size: 70b
Required VRAM: 0.1 GB
Updated: 2024-12-23
Maintainer: hiyouga
Model Files: 0.1 GB, 0.0 GB
Supported Languages: en
Quantization Type: 2bit
Model Architecture: Adapter
License: llama2
Is Biased: none
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: v_proj, q_proj
LoRA Alpha: 16
LoRA Dropout: 0.1
R Param: 8
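The adapter hyperparameters above map directly onto a PEFT LoraConfig. The sketch below mirrors the listed values for reference; it reconstructs the reported configuration and is not the author's original training code.

```python
# Minimal sketch: a PEFT LoraConfig mirroring the adapter settings listed
# above (r=8, alpha=16, dropout=0.1, q_proj/v_proj targets, no bias).
from peft import LoraConfig

lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=8,                                  # R Param
    lora_alpha=16,                        # LoRA Alpha
    lora_dropout=0.1,                     # LoRA Dropout
    target_modules=["q_proj", "v_proj"],  # PEFT Target Modules
    bias="none",                          # Is Biased: none
)
```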

Best Alternatives to Llama 2 70B AQLM 2Bit QLoRA Function Calling

Best Alternatives                        Context / RAM      Downloads   Likes
Llama3 70b Ft                            0K / 39.9 GB       6           0
Llama 2 70B Chat 4bit Japanese           0K / 3.3 GB        2           5
...ma 2 70B Chat 4bit Japanese V1        0K / 3.3 GB        3           4
...aiga Llama3 70b Sft M1 D5 Lora        0K / 5.9 GB        0           1
Llama 3 70B Instruct Spider              0K / 141.9 GB      6           0
Airoboros 70B 3.3 Peft                   0K / 0.4 GB        0           2
Llama3v1                                 0K / 0.1 GB        5           0
Xwin LM 70B V0.1 LORA                    0K / 1.7 GB        0           1
Euryale 1.3 L2 70B LORA                  0K / 1.7 GB        3           1
Miqu 1 70B Hermes2.5 Qlora               0K / 4.8 GB        0           4
Note: a green score (e.g., "73.2") indicates that the model outperforms hiyouga/Llama-2-70b-AQLM-2Bit-QLoRA-function-calling.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217