Llama 2 70B Instruct V0.1 by dfurman


  Arxiv:2002.05202   Arxiv:2104.09864   Arxiv:2305.13245   Adapter Base model:adapter:meta-llama/... Base model:meta-llama/llama-2-...   Dataset:ehartford/dolphin   Finetuned   Instruct   Llama2   Lora   Model-index   Peft   Region:us   Safetensors

Llama 2 70B Instruct V0.1 Benchmarks

Scores indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Llama 2 70B Instruct V0.1 (dfurman/Llama-2-70B-Instruct-v0.1)

Llama 2 70B Instruct V0.1 Parameters and Internals

Model Type: text generation
Use Cases:
- Areas: research
- Limitations: the model can produce factually incorrect output that may include offensive material.
- Considerations: should not be relied upon for factually accurate information.
Additional Notes: example prompts and specific usage scenarios are provided in the documentation.
Training Details:
- Data Sources: ehartford/dolphin
- Methodology: parameter-efficient QLoRA finetuning
- Context Length: 4096
- Training Time: roughly 17 hours
- Hardware Used: single H100 (80 GB PCIe)
Model Architecture: standard decoder-only transformer modified with SwiGLU activation, rotary positional embeddings (RoPE), and grouped-query attention (GQA), among other changes.
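As a rough sanity check on the single-GPU setup above: QLoRA keeps the base weights frozen and quantized (typically to 4-bit NF4), so the dominant memory cost can be estimated directly. A minimal sketch, assuming 4-bit quantization (the usual QLoRA setting, not explicitly stated on this card):

```python
# Back-of-envelope memory estimate for QLoRA finetuning of a 70B model.
# Assumption (not from the card): the frozen base weights are quantized
# to 4 bits, as in the standard QLoRA recipe.

def base_weights_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate footprint of the frozen base weights, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

print(base_weights_gb(70e9, 4))   # 35.0 -> ~35 GB in 4-bit
print(base_weights_gb(70e9, 16))  # 140.0 -> ~140 GB in fp16, too large for one GPU
```

Under this assumption, the quantized base weights occupy roughly 35 GB, leaving headroom on an 80 GB H100 for the LoRA adapters, optimizer state, and activations; the same model in fp16 would not fit on a single card.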
Responsible AI Considerations:
- Fairness: the model may produce biased outputs.
- Transparency: evaluation results are published on the Open LLM Leaderboard.
- Accountability: the model was fine-tuned by Daniel Furman.
Input/Output:
- Input Format: standard decoder-only transformer input, processed via tokenization.
- Accepted Modalities: text
- Output Format: generated text appropriate for instruction-following tasks.
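For illustration, the tokenization step can be sketched as a plain-string operation using the LlamaTokenizer's sentence markers listed in the details below (`<s>` and `</s>`). This is a sketch only; the real tokenizer emits token IDs, not text:

```python
# Minimal sketch of how a Llama-style tokenizer frames one sequence with
# beginning/end-of-sentence markers. Plain strings are used here purely
# for illustration.
BOS, EOS = "<s>", "</s>"

def frame(text: str) -> str:
    """Wrap a single sequence in BOS/EOS markers."""
    return f"{BOS}{text}{EOS}"

print(frame("Write a haiku about autumn."))
# <s>Write a haiku about autumn.</s>
```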
Release Notes:
- Version: v0.1
- Date: 2023-07-23
- Notes: This model, fine-tuned by Daniel Furman, is intended primarily for research purposes and ranks 6th on the Open LLM Leaderboard.
LLM Name: Llama 2 70B Instruct V0.1
Repository 🤗: https://huggingface.co/dfurman/Llama-2-70B-Instruct-v0.1
Base Model(s): Llama 2 70B Hf (meta-llama/Llama-2-70b-hf)
Model Size: 70b
Required VRAM: 1.1 GB
Updated: 2024-12-22
Maintainer: dfurman
Instruction-Based: Yes
Model Files: 1.1 GB
Model Architecture: Adapter
License: llama2
Is Biased: none
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: q_proj|k_proj|v_proj|o_proj
LoRA Alpha: 16
LoRA Dropout: 0.1
R Param: 64
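The LoRA settings above imply a concrete trainable-parameter count, which can be checked against the listed ~1.1 GB adapter size. A sketch of that arithmetic, assuming the published Llama-2-70B dimensions (80 layers, hidden size 8192, 1024-wide K/V projections under grouped-query attention), which come from the base model's config rather than this card:

```python
# Estimate the trainable parameters implied by r=64 and target modules
# q_proj|k_proj|v_proj|o_proj. Llama-2-70B dimensions below are assumptions
# taken from the base model's config, not values stated on this card.

def lora_params(in_dim: int, out_dim: int, r: int) -> int:
    """A LoRA adapter adds two matrices: A (r x in_dim) and B (out_dim x r)."""
    return r * in_dim + out_dim * r

hidden, kv_dim, r, layers = 8192, 1024, 64, 80

per_layer = (
    lora_params(hidden, hidden, r)    # q_proj
    + lora_params(hidden, kv_dim, r)  # k_proj (narrower output under GQA)
    + lora_params(hidden, kv_dim, r)  # v_proj (narrower output under GQA)
    + lora_params(hidden, hidden, r)  # o_proj
)
total = per_layer * layers
print(total)            # 262144000 trainable parameters
print(total * 4 / 1e9)  # ~1.05 GB in fp32
```

At fp32 this comes to roughly 1.05 GB, consistent with the ~1.1 GB adapter files and "Required VRAM" figures listed above.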

Best Alternatives to Llama 2 70B Instruct V0.1

Best Alternatives              Context / RAM      Downloads   Likes
Llama 3 70B Instruct Spider    0K / 141.9 GB      6           0
Llama3v1                       0K / 0.1 GB        5           0
LLaMA 2 Wizard 70B QLoRA       0K / 1.7 GB        0           4
Saiga2 70b Lora                0K / 0.3 GB        0           13
Note: a green score (e.g. "73.2") means that the model is better than dfurman/Llama-2-70B-Instruct-v0.1.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241217