LLaMA2 7B Dcot by haritzpuerto


Tags: arXiv:2407.03181 · Adapter · Base model (adapter): meta-llama/Llama-2-7b-hf · Datasets: allenai/ai2_arc, allenai/quartz, ChilleD/LastLetterConcat, ConditionalQA, hotpotqa/hotpot_qa, openai/gsm8k, skrishna/coin_flip, tasksource/Boardgame-QA, tasksource/strategy-qa · en · Finetuned · LoRA · PEFT · Region: us · Safetensors

LLaMA2 7B Dcot Benchmarks

Benchmark scores (nn.n%) show how the model compares to the reference models: Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
LLaMA2 7B Dcot (haritzpuerto/LLaMA2-7B-dcot)

LLaMA2 7B Dcot Parameters and Internals

Model Type 
text-generation
Additional Notes 
Note that not all input fields (such as [Context] and [Options]) are mandatory; which fields apply depends on the task being performed (e.g., multiple-choice QA vs. span-extraction QA).
Supported Languages 
en (unknown)
Training Details 
Data Sources:
allenai/ai2_arc, tasksource/Boardgame-QA, skrishna/coin_flip, openai/gsm8k, hotpotqa/hotpot_qa, ChilleD/LastLetterConcat, allenai/quartz, tasksource/strategy-qa, ConditionalQA
Methodology:
Fine-tuning with Divergent Chains of Thought (DCoT) for self-correction in reasoning.
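The data sources listed above are public Hugging Face datasets. As a minimal sketch (for illustration only, not taken from this card), two of them can be loaded with the `datasets` library; the config names "ARC-Challenge" and "distractor" are those datasets' standard configurations, assumed here:

```python
# Sketch: loading two of the listed training datasets with the Hugging Face
# `datasets` library. The config names ("ARC-Challenge", "distractor") are
# the datasets' standard configs, assumed here for illustration.
from datasets import load_dataset

arc = load_dataset("allenai/ai2_arc", "ARC-Challenge", split="train")
hotpot = load_dataset("hotpotqa/hotpot_qa", "distractor", split="train")

print(arc[0]["question"])     # multiple-choice science QA
print(hotpot[0]["question"])  # multi-hop QA
```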
Input Output 
Input Format:
[Question] {question} [Context] {document} [Options] {answer_options} [Number of answers] {k}
Accepted Modalities:
text
Output Format:
[Answer 1]CoT_1 [Answer 2]CoT_2 ... [Final answer] answer
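As an illustration of the input/output format above, here is a minimal, hedged sketch of loading the adapter with `transformers` + `peft` and assembling a prompt; the `build_prompt` helper, the example question, and the generation settings are illustrative assumptions, not part of this model card:

```python
# Minimal sketch (assumes transformers + peft installed and access to the
# gated meta-llama/Llama-2-7b-hf base model). Helper and generation settings
# are illustrative assumptions, not the author's code.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "haritzpuerto/LLaMA2-7B-dcot"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

def build_prompt(question, k, context=None, options=None):
    """Assemble the documented input format; [Context]/[Options] are optional."""
    parts = [f"[Question] {question}"]
    if context:
        parts.append(f"[Context] {context}")
    if options:
        parts.append(f"[Options] {options}")
    parts.append(f"[Number of answers] {k}")
    return " ".join(parts)

prompt = build_prompt("Is 17 a prime number?", k=2)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
text = tokenizer.decode(out[0], skip_special_tokens=True)

# Expected output shape: "[Answer 1] CoT_1 [Answer 2] CoT_2 ... [Final answer] answer".
final_answer = text.split("[Final answer]")[-1].strip()
print(final_answer)
```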
LLM Name: LLaMA2 7B Dcot
Repository: 🤗 https://huggingface.co/haritzpuerto/LLaMA2-7B-dcot
Base Model(s): Llama 2 7B Hf (meta-llama/Llama-2-7b-hf)
Model Size: 7b
Required VRAM: 0.1 GB
Updated: 2025-02-22
Maintainer: haritzpuerto
Model Files: 0.1 GB
Supported Languages: en
Model Architecture: Adapter
License: apache-2.0
Is Biased: none
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: q_proj|v_proj
LoRA Alpha: 16
LoRA Dropout: 0.1
R Param: 64
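The adapter hyperparameters above map directly onto a `peft` `LoraConfig`. The following is a hedged reconstruction from the listed values, not the author's actual training configuration:

```python
# Reconstruction of the adapter configuration from the card's listed values
# (r=64, alpha=16, dropout=0.1, targets q_proj/v_proj, bias none). This is
# an assumption-based sketch, not the author's training script.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                                 # "R Param" (LoRA rank)
    lora_alpha=16,                        # "LoRA Alpha"
    lora_dropout=0.1,                     # "LoRA Dropout"
    target_modules=["q_proj", "v_proj"],  # "PEFT Target Modules"
    bias="none",                          # "Is Biased: none"
    task_type="CAUSAL_LM",                # matches the text-generation model type
)
```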

Best Alternatives to LLaMA2 7B Dcot

Best Alternatives | Context / RAM | Downloads | Likes
Qwen Megumin | 0K / 0.1 GB | 13 | 0
...s 25 Mistral 7B Irca DPO Pairs | 0K / 0.1 GB | 5 | 0
Qwen1.5 7B Chat Sa V0.1 | 0K / 0 GB | 9 | 0
Zephyr 7B Ipo 0K 15K I1 | 0K / 0.7 GB | 9 | 0
Deepthink Reasoning Adapter | 0K / 0.2 GB | 27 | 8
Deepseek Llm 7B Chat Sa V0.1 | 0K / 0 GB | 5 | 0
... Days Of Sodom LoRA Mistral 7B | 0K / 0.2 GB | 5 | 0
Mistral 7B Instruct Sa V0.1 | 0K / 0 GB | 6 | 0
CodeAstra 7B | 0K / 0 GB | 639 | 10
...eze Embed Tokens Q V Proj Lora | 0K / 0.1 GB | 4 | 1
Note: a green Score (e.g. "73.2") means that the model is better than haritzpuerto/LLaMA2-7B-dcot.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227