LLM Explorer: A Curated Large Language Model Directory and Analytics

Giraffe V2 70B 32K by abacusai

What open-source LLMs or SLMs are you in search of? 18,870 models in total.


Tags: Arxiv:2308.10882, Autotrain compatible, Conversational, Endpoints compatible, Llama, Llama2, Region:us, Safetensors, Sharded, Tensorflow

Rank the Giraffe V2 70B 32K Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  
Giraffe V2 70B 32K (abacusai/Giraffe-v2-70b-32k)

Best Alternatives to Giraffe V2 70B 32K

Best Alternatives              HF Rank   Context / RAM    Downloads   Likes
XuanYuan 70B                   79.55     8K / 138.3 GB        874      42
Tigerbot 70B Chat V2           77.92     4K / 139.2 GB        324       9
QuartetAnemoi 70B T0.0001      76.86     31K / 137.8 GB       509      18
Miqu 70B Alpaca DPO            76.6      31K / 138.7 GB       645       5
Miqu 1 70B Sf                  76.59     31K / 138.7 GB     18738     187
BoreanGale 70B                 76.48     32K / 137.8 GB       907       4
Tigerbot 70B Chat V4           75.95     4K / 139.2 GB        900       1
OrcaHermes Mistral 70B Miqu    75.51     31K / 138 GB          67       1
Senku 70B Full                 75.36     31K / 138.7 GB      1677     111
Tulu 2 DPO 70B                 73.77     8K / 138 GB         3847     137
Note: a score shown in green (e.g. "73.2") means that the model outperforms abacusai/Giraffe-v2-70b-32k.
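The alternatives table above can also be queried programmatically. The sketch below (plain Python, no dependencies) loads a subset of the rows as records and filters by minimum context window; the `long_context` helper and the record layout are illustrative choices, not part of the site's API.

```python
# A subset of the alternatives table as (name, hf_rank, context, ram_gb,
# downloads, likes) records, so you can filter by your own constraints.
alternatives = [
    ("XuanYuan 70B",              79.55, "8K",  138.3,   874,  42),
    ("Tigerbot 70B Chat V2",      77.92, "4K",  139.2,   324,   9),
    ("QuartetAnemoi 70B T0.0001", 76.86, "31K", 137.8,   509,  18),
    ("Miqu 70B Alpaca DPO",       76.6,  "31K", 138.7,   645,   5),
    ("Miqu 1 70B Sf",             76.59, "31K", 138.7, 18738, 187),
    ("BoreanGale 70B",            76.48, "32K", 137.8,   907,   4),
]

def long_context(rows, min_k=31):
    """Keep model names whose context window is at least `min_k` thousand tokens."""
    return [name for (name, _, ctx, *_rest) in rows if int(ctx.rstrip("K")) >= min_k]

print(long_context(alternatives))
```

For example, with `min_k=31` only the QuartetAnemoi, Miqu, and BoreanGale variants remain, since XuanYuan and Tigerbot list 8K and 4K context windows.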

Giraffe V2 70B 32K Parameters and Internals

LLM Name: Giraffe V2 70B 32K
Repository: Open on 🤗
Model Size: 70b
Required VRAM: 138 GB
Updated: 2024-02-29
Maintainer: abacusai
Model Type: llama
Model Files: 1-of-15: 9.8 GB; 2-of-15: 9.8 GB; 3-of-15: 10.0 GB; 4-of-15: 9.8 GB; 5-of-15: 9.8 GB; 6-of-15: 9.8 GB; 7-of-15: 10.0 GB; 8-of-15: 9.8 GB; 9-of-15: 9.8 GB; 10-of-15: 9.8 GB; 11-of-15: 10.0 GB; 12-of-15: 9.8 GB; 13-of-15: 9.8 GB; 14-of-15: 9.5 GB; 15-of-15: 0.5 GB
Model Architecture: LlamaForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.31.0
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Initializer Range: 0.02
Torch Data Type: float16
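The listed float16 data type, 70b model size, and ~138 GB of sharded files are mutually consistent: weights alone need roughly parameter count times bytes per parameter. A quick back-of-the-envelope check (plain Python, no external dependencies; the helper name is ours):

```python
def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory to hold just the weights, in GB (1 GB = 10**9 bytes).

    float16 stores each parameter in 2 bytes, hence the default.
    """
    return n_params * bytes_per_param / 1e9

# 70B parameters in float16, as listed in the parameters table above
estimate = weight_memory_gb(70e9)

# Cross-check against the 15 safetensors shard sizes from the model files list
shards = [9.8, 9.8, 10.0, 9.8, 9.8, 9.8, 10.0, 9.8,
          9.8, 9.8, 10.0, 9.8, 9.8, 9.5, 0.5]
shard_total = sum(shards)

print(f"estimated: {estimate:.1f} GB, shard total: {shard_total:.1f} GB")
```

The estimate (140 GB) slightly exceeds the shard total (138 GB) because the nominal "70b" size rounds up the exact parameter count; actual serving also needs extra memory for activations and the KV cache, which grows with the 32768-token context.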
Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024022003