Instruct En Vi 7500 1epoch TheBloke Mistralic 7B 1 GPTQ LORA CAUSAL LM by 1TuanPham


Tags: Adapter, Finetuned, GPTQ, Instruct, LoRA, PEFT, Quantized, Region: us

Instruct En Vi 7500 1epoch TheBloke Mistralic 7B 1 GPTQ LORA CAUSAL LM Benchmarks

Benchmark scores ("nn.n%") show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Instruct En Vi 7500 1epoch TheBloke Mistralic 7B 1 GPTQ LORA CAUSAL LM Parameters and Internals

Training Details 
Methodology:
The following GPTQ quantization config was used during training:
- quant_method: gptq
- bits: 4
- tokenizer: None
- dataset: None
- group_size: 128
- damp_percent: 0.1
- desc_act: True
- sym: True
- true_sequential: True
- use_cuda_fp16: True
- model_seqlen: 4096
- block_name_to_quantize: model.layers
- module_name_preceding_first_block: ['model.embed_tokens']
- batch_size: 1
- pad_token_id: None
- disable_exllama: False
- max_input_length: None
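For reference, here is a minimal sketch of how that configuration could be expressed with the `transformers` library's `GPTQConfig`. This is an illustration built solely from the fields listed above, not the maintainer's published code:

```python
from transformers import GPTQConfig

# Quantization config mirroring the fields listed in the card above
# (a sketch; the maintainer's actual quantization script is not published).
gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    damp_percent=0.1,
    desc_act=True,
    sym=True,
    true_sequential=True,
    use_cuda_fp16=True,
    model_seqlen=4096,
    block_name_to_quantize="model.layers",
    module_name_preceding_first_block=["model.embed_tokens"],
    batch_size=1,
)
```

Note that when loading an already-quantized checkpoint, `from_pretrained` picks this configuration up from the saved `quantization_config` automatically; constructing it by hand is only needed when quantizing a model from scratch.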
LLM Name: Instruct En Vi 7500 1epoch TheBloke Mistralic 7B 1 GPTQ LORA CAUSAL LM
Repository 🤗: https://huggingface.co/1TuanPham/Instruct_en-vi_7500_1epoch_TheBloke_Mistralic-7B-1-GPTQ_LORA_CAUSAL_LM
Model Size: 7B
Required VRAM: 0.3 GB
Updated: 2024-11-21
Maintainer: 1TuanPham
Instruction-Based: Yes
Model Files: 0.3 GB
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: Adapter
Is Biased: lora_only
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: k_proj|v_proj|q_proj|c_proj
LoRA Alpha: 32
LoRA Dropout: 0.05
R Param (rank): 128
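The adapter hyperparameters listed above map directly onto a `peft` `LoraConfig`, and the adapter can be attached to a GPTQ base model for inference. A minimal sketch, assuming the base checkpoint is TheBloke/Mistralic-7B-1-GPTQ (inferred from the adapter's name; the card does not state it explicitly):

```python
from peft import LoraConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "TheBloke/Mistralic-7B-1-GPTQ"  # assumed base model, inferred from the adapter name
ADAPTER = "1TuanPham/Instruct_en-vi_7500_1epoch_TheBloke_Mistralic-7B-1-GPTQ_LORA_CAUSAL_LM"

# The LoRA hyperparameters from the card, shown for reference only;
# PeftModel.from_pretrained reads the saved adapter_config.json itself.
lora_config = LoraConfig(
    r=128,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["k_proj", "v_proj", "q_proj", "c_proj"],
    bias="lora_only",
    task_type="CAUSAL_LM",
)

# Inference: load the 4-bit GPTQ base (requires accelerate plus a GPTQ
# backend such as auto-gptq/optimum), then attach the ~0.3 GB LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
model = PeftModel.from_pretrained(model, ADAPTER)
```

With alpha = 32 and r = 128, the effective LoRA scaling factor (alpha / r) is 0.25.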

Best Alternatives to Instruct En Vi 7500 1epoch TheBloke Mistralic 7B 1 GPTQ LORA CAUSAL LM

Best Alternatives | Context / RAM | Downloads | Likes
Term Qwen2 7 Json Lora | 0K / 0.1 GB | 13 | 0
Selfbot 256 Mistral | 0K / 0.4 GB | 5 | 0
Futadom Mistral | 0K / 0.2 GB | 3 | 1
Humiliation Mistral | 0K / 0.2 GB | 2 | 1
Mistral Finetuned DialogSumm | 0K / 0 GB | 16 | 1
Mistral Numericnlg FV | 0K / 0.3 GB | 5 | 0
Mistral Wikitable FV | 0K / 0.3 GB | 5 | 0
Mistral Charttotext FV | 0K / 0.3 GB | 5 | 0
Zephyr 7B Beta Agent Instruct | 0K / 0.3 GB | 3 | 1
Falcon 7B Instruct 4bit Lora | 0K / 0 GB | 0 | 1
Note: a green score (e.g. "73.2") means the model performs better than 1TuanPham/Instruct_en-vi_7500_1epoch_TheBloke_Mistralic-7B-1-GPTQ_LORA_CAUSAL_LM.

Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v20241110