Deepmoney 34B 200K Base GPTQ by TheBloke


4-bit · Autotrain compatible · Base model:quantized:triadpart... · Base model:triadparty/deepmone... · En · Finance · Gptq · Invest · Llama · Quantized · Region:us · Safetensors · Zh

Deepmoney 34B 200K Base GPTQ Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Deepmoney 34B 200K Base GPTQ (TheBloke/deepmoney-34b-200k-base-GPTQ)

Deepmoney 34B 200K Base GPTQ Parameters and Internals

Model Type 
financial analysis, investment decision-making
Use Cases 
Areas:
investment analysis, financial forecasting
Applications:
research reports analysis, quantitative and qualitative analysis
Additional Notes 
The model is part of a series named after the Seven Deadly Sins, focused on integrating qualitative and quantitative analysis methods for financial markets. Multiple quantized versions are published in separate repositories to suit different hardware and needs.
Supported Languages 
en (proficient), zh (proficient)
Training Details 
Data Sources:
college textbooks, professional books, research reports from 2019 to December 2023
Data Volume:
Not specified
Methodology:
Raw text full parameter training
Context Length:
8192
Hardware Used:
Hardware provided by Massed Compute
Model Architecture:
Yi-34b-200k
Input Output 
Input Format:
{prompt}
Accepted Modalities:
text
Output Format:
Model output: {response}
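The template above is literally just the raw prompt: this is a base (non-chat) model, so no instruction wrapper or special tokens are applied and the model simply continues the text. A minimal sketch (the example prompt text is made up):

```python
# Minimal sketch of the raw prompt template listed above ("{prompt}").
# Base models do plain text continuation, so no chat formatting is added.
TEMPLATE = "{prompt}"

def build_prompt(user_text: str) -> str:
    """Fill the raw template; the model continues this text verbatim."""
    return TEMPLATE.format(prompt=user_text)

print(build_prompt("Q3 semiconductor sector research summary: "))
```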
LLM Name: Deepmoney 34B 200K Base GPTQ
Repository: 🤗 https://huggingface.co/TheBloke/deepmoney-34b-200k-base-GPTQ
Model Name: Deepmoney 34B 200K Base
Model Creator: TriadParty
Base Model(s): Deepmoney 34B 200K Base (TriadParty/deepmoney-34b-200k-base)
Model Size: 34b
Required VRAM: 18.6 GB
Updated: 2025-01-13
Maintainer: TheBloke
Model Type: llama
Model Files: 18.6 GB
Supported Languages: en, zh
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 200000
Model Max Length: 200000
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 64000
Torch Data Type: float16
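The 18.6 GB file size is consistent with 4-bit GPTQ weights plus per-group quantization metadata. A rough back-of-the-envelope check (the ~34.4B parameter count for a Yi-34B-class model and the ~4.25 effective bits per weight for group-wise scales and zero points are assumptions, not figures from this card):

```python
# Rough checkpoint-size estimate for a 4-bit GPTQ quantization.
# Assumptions (not from the card): ~34.4e9 parameters, and ~4.25
# effective bits per weight once group-wise quantization metadata
# (scales and zero points) is included.
n_params = 34.4e9
effective_bits = 4.25  # 4-bit weights + metadata overhead

est_gb = n_params * effective_bits / 8 / 1e9  # bits -> bytes -> decimal GB
print(f"estimated checkpoint size: {est_gb:.1f} GB")  # within ~2% of the listed 18.6 GB
```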

Best Alternatives to Deepmoney 34B 200K Base GPTQ

Best Alternatives | Context / RAM | Downloads / Likes
Smaug 34B V0.1 GPTQ | 195K / 21.2 GB | 111
Yi 34B 200K RPMerge GPTQ | 195K / 21.2 GB | 73
Tess 34B V1.5B GPTQ | 195K / 18.6 GB | 317
...4B 200K DARE Megamerge V8 GPTQ | 195K / 18.6 GB | 243
...y 34B 200K Chat Evaluator GPTQ | 195K / 18.6 GB | 133
...ous Capybara Limarpv3 34B GPTQ | 195K / 18.6 GB | 154
Bagel 34B V0.2 GPTQ | 195K / 18.6 GB | 332
Nontoxic Bagel 34B V0.2 GPTQ | 195K / 18.6 GB | 351
Bagel DPO 34B V0.2 GPTQ | 195K / 18.6 GB | 302
Yi 34B 200K AEZAKMI V2 GPTQ | 195K / 18.6 GB | 262
Note: green Score (e.g. "73.2") means that the model is better than TheBloke/deepmoney-34b-200k-base-GPTQ.

Rank the Deepmoney 34B 200K Base GPTQ Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Looking for other open-source LLMs or SLMs? 41301 models are indexed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227