Infinity Instruct 3M 0625 Qwen2 7B by BAAI


Tags: Autotrain compatible · Conversational · Dataset: baai/infinity-instruct · En · Endpoints compatible · Instruct · Qwen2 · Region: us · Safetensors · Sharded · Tensorflow

Infinity Instruct 3M 0625 Qwen2 7B Benchmarks

Benchmark scores (shown as percentages) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Infinity Instruct 3M 0625 Qwen2 7B Parameters and Internals

Model Type: supervised instruction tuning, chatbot
Use Cases
Areas: academic research
Primary Use Cases: chat applications with instruction tuning
Limitations: Accuracy and reliability may vary due to uncontrollable variables.
Considerations: The accuracy of the output cannot be guaranteed, and no legal liability is accepted for generated content.
Additional Notes: Efforts focus on risk assessment and data generation, with a target of 10M instructions finished by late June.
Supported Languages: en (proficient)
Training Details
Data Sources: BAAI/Infinity-Instruct
Data Volume: 10 million instructions (planned)
Methodology: Supervised instruction tuning without RLHF
Training Acceleration: FlagScale acceleration techniques
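The methodology above (supervised instruction tuning without RLHF) typically relies on label masking: the cross-entropy loss is computed only on response tokens, while prompt tokens are masked out. A minimal sketch with made-up token IDs:

```python
# Sketch of label masking as commonly used in supervised instruction tuning
# (SFT). The token IDs below are illustrative, not from a real tokenizer.
IGNORE_INDEX = -100  # positions with this label are ignored by cross-entropy loss

def build_labels(prompt_ids, response_ids):
    """Concatenate prompt and response; mask prompt positions in the labels."""
    input_ids = prompt_ids + response_ids
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(response_ids)
    return input_ids, labels

input_ids, labels = build_labels([101, 7, 42], [9, 13, 2])
print(input_ids)  # [101, 7, 42, 9, 13, 2]
print(labels)     # [-100, -100, -100, 9, 13, 2]
```

This way the model is optimized to produce the response given the instruction, rather than to reproduce the instruction itself.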
Input Output
Input Format: Chat template sequence
Accepted Modalities: text
Output Format: Textual responses
Performance Tips: Concatenate multiple training samples to optimize throughput, and use FlagScale for diverse acceleration techniques.
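Since the input format is a chat template sequence, a prompt can be assembled with the ChatML-style template used by Qwen2-family models. This is a sketch under that assumption; the authoritative template ships with the checkpoint's tokenizer, so in practice `tokenizer.apply_chat_template` from Hugging Face transformers should be preferred:

```python
# Hand-rolled ChatML-style prompt, as used by Qwen2-family models.
# The exact template for this checkpoint may differ; prefer
# tokenizer.apply_chat_template(..., add_generation_prompt=True) in practice.
def format_chat(messages):
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")  # generation prompt for the reply
    return "".join(parts)

prompt = format_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```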
Release Notes
Version: 0625
Date: 2024-07-09
Notes: Model weights released for InfInstruct-Mistral-7B 0625, InfInstruct-Qwen2-7B 0625, and others.
Version: 0613
Date: 2024-06-28
Notes: Model weights for InfInstruct-Llama3-70B 0613 released; favorable results on AlpacaEval 2.0.
LLM Name: Infinity Instruct 3M 0625 Qwen2 7B
Repository 🤗: https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Qwen2-7B
Model Size: 7b
Required VRAM: 15.2 GB
Updated: 2024-11-13
Maintainer: BAAI
Model Type: qwen2
Instruction-Based: Yes
Model Files: 4.9 GB (1-of-4), 4.9 GB (2-of-4), 4.3 GB (3-of-4), 1.1 GB (4-of-4)
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.37.2
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
Errors: replace
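The listed 15.2 GB VRAM requirement matches the bfloat16 weights alone. A back-of-the-envelope check, assuming roughly 7.6B parameters (the "7B" name rounds down) at 2 bytes per parameter, and summing the four safetensors shards listed above:

```python
# bfloat16 stores 2 bytes per parameter; ~7.6B parameters is an assumption.
params = 7.6e9
bytes_per_param = 2
weights_gb = params * bytes_per_param / 1e9
print(round(weights_gb, 1))  # 15.2

# The four model file shards listed above sum to the same figure.
shards_gb = [4.9, 4.9, 4.3, 1.1]
print(round(sum(shards_gb), 1))  # 15.2
```

Note this covers only the weights; activations and KV cache (especially at the 131072-token context length) require additional memory on top.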

Best Alternatives to Infinity Instruct 3M 0625 Qwen2 7B

Best Alternatives | Context / RAM | Downloads | Likes
Gte Qwen2 7B Instruct | 128K / 30.5 GB | 35028 | 205
Qwen2 7B Instruct V0.1 | 128K / 15.2 GB | 40968 | 1
Qwen2 7B Instruct V0.8 | 128K / 15.2 GB | 40990 | 3
Einstein V7 Qwen2 7B | 128K / 15.2 GB | 39083 | 4
Samantha Qwen 2 7B | 128K / 15.2 GB | 4088 | 2
Cybertron V4 Qw7B MGS | 128K / 15.2 GB | 800 | 9
Meraj Mini | 128K / 15.2 GB | 985 | 11
Rombos LLM V2.5 Qwen 7B | 128K / 15.2 GB | 569 | 13
Qwen2 7B Instruct V0.5 | 128K / 15.2 GB | 41019 | 1
Qwen2 7B Instruct V0.4 | 128K / 15.2 GB | 41008 | 1
Note: green Score (e.g. "73.2") means that the model is better than BAAI/Infinity-Instruct-3M-0625-Qwen2-7B.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241110