Bloom 560M Finetuned Fraud by jslin09


  Arxiv:2406.04202   Autotrain compatible   Bloom Dataset:jslin09/fraud case ver...   Endpoints compatible   Finetuned   Legal   Pytorch   Region:us   Safetensors   Zh

Bloom 560M Finetuned Fraud Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Bloom 560M Finetuned Fraud (jslin09/bloom-560m-finetuned-fraud)

Bloom 560M Finetuned Fraud Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
Legal research, Drafting legal arguments, Training AI on Chinese-language legal corpus
Applications:
Generating legal document drafts in Chinese, Fraud case text augmentation
Primary Use Cases:
Automated drafting of criminal fact paragraphs in fraud and theft cases
Limitations:
Limited applicability outside the specified legal context; performance on broader legal-text tasks is not guaranteed
Considerations:
Use generated drafts responsibly and in conjunction with human legal expertise; careful validation is required.
Additional Notes 
Designed to handle legal texts written in Chinese and grounded in Chinese-language legal systems.
Supported Languages 
zh (high)
Training Details 
Data Sources:
jslin09/Fraud_Case_Verdicts
Data Volume:
74,823 judgments and rulings (判決、裁定)
Methodology:
Fine-tuned the BLOOM 560m model using public fraud case verdicts
Model Architecture:
BLOOM-based architecture, fine-tuned for specific legal text generation tasks
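The fine-tuning recipe above (BLOOM-560m on public fraud-case verdicts) can be sketched with the Hugging Face Trainer API. This is a hypothetical outline, not the authors' actual training script: the `"text"` column name, block size, and hyperparameters are all assumptions; only the base model and dataset ids come from this card.

```python
def chunk_token_ids(ids, block_size=512):
    """Group a long token stream into fixed-size blocks for causal-LM training;
    any trailing remainder shorter than block_size is dropped."""
    usable = len(ids) - len(ids) % block_size
    return [ids[i:i + block_size] for i in range(0, usable, block_size)]

def fine_tune():
    # Imports are deferred so the helper above stays dependency-free.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
    model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
    corpus = load_dataset("jslin09/Fraud_Case_Verdicts", split="train")

    def to_blocks(batch):  # the "text" column name is an assumption
        ids = sum(tokenizer(batch["text"])["input_ids"], [])
        return {"input_ids": chunk_token_ids(ids)}

    train_set = corpus.map(to_blocks, batched=True,
                           remove_columns=corpus.column_names)
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="bloom-560m-finetuned-fraud",
                               per_device_train_batch_size=4,
                               num_train_epochs=1),
        train_dataset=train_set,
        # mlm=False gives the shifted-label causal-LM objective
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
```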
Responsible Ai Considerations 
Fairness:
The model is trained on public legal-document datasets; no dedicated fairness or bias analysis is reported.
Transparency:
Open model with source and research details provided.
Accountability:
The developers provide a disclaimer because generated text may carry legal consequences.
Mitigation Strategies:
The model is documented with usage guidance, but no specific risk-mitigation strategies are implemented.
Input Output 
Input Format:
Legal incident summary or case fact text in Mandarin
Accepted Modalities:
text
Output Format:
Continued legal case drafts and crime facts
Performance Tips:
Write clear prompts that follow legal-domain phrasing to improve output quality.
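The input/output pattern above can be sketched with the Transformers library. This is a hypothetical usage example, not a documented interface: the `build_prompt` prefix and the generation settings are assumptions; only the repository id comes from this card.

```python
MODEL_ID = "jslin09/bloom-560m-finetuned-fraud"  # repository id from this card

def build_prompt(fact_summary: str) -> str:
    """Frame the input as a case-fact paragraph.
    The '犯罪事實:' (criminal facts) prefix is an assumption, not a documented format."""
    return f"犯罪事實:{fact_summary}"

def generate_draft(fact_summary: str, max_new_tokens: int = 200) -> str:
    # Imports are deferred so build_prompt stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(build_prompt(fact_summary), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens,
                            do_sample=True, top_p=0.9)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

As the card notes, any draft produced this way still needs human legal review.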
LLM Name: Bloom 560M Finetuned Fraud
Repository 🤗: https://huggingface.co/jslin09/bloom-560m-finetuned-fraud
Model Size: 560m
Required VRAM: 2.2 GB
Updated: 2025-02-22
Maintainer: jslin09
Model Type: bloom
Model Files: 2.2 GB, 2.2 GB, 0.0 GB
Supported Languages: zh
Model Architecture: BloomForCausalLM
License: bigscience-bloom-rail-1.0
Transformers Version: 4.26.1
Tokenizer Class: BloomTokenizer
Padding Token: <pad>
Vocabulary Size: 250880
Torch Data Type: float32
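The Required VRAM figure above follows directly from the parameter count and the float32 storage type. A back-of-envelope check (weights only, ignoring activations and runtime overhead):

```python
def model_memory_gib(n_params: float, bytes_per_param: int) -> float:
    """Approximate weight-storage footprint in GiB (weights only)."""
    return n_params * bytes_per_param / 1024**3

fp32 = model_memory_gib(560e6, 4)  # float32, as listed on this card
fp16 = model_memory_gib(560e6, 2)  # loading in float16 would roughly halve it
print(f"fp32: {fp32:.1f} GiB, fp16: {fp16:.1f} GiB")
```

560M parameters at 4 bytes each come to about 2.24 GB (2.1 GiB), matching the ~2.2 GB figure listed above; loading the weights in half precision would cut that roughly in half.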

Best Alternatives to Bloom 560M Finetuned Fraud

Best Alternatives | Context / RAM | Downloads | Likes
Train Test Bloom560 | 0K / 2.2 GB | 153 | 0
Product Description Fr | 0K / 2.2 GB | 114 | 0
Guitester | 0K / 2.2 GB | 112 | 0
Bloomz 560M Sft Chat | 0K / 1.1 GB | 1842 | 10
Train Test | 0K / 2.2 GB | 30 | 0
Promt Generator | 0K / 2.2 GB | 3632 | 4
Bloom 560M RLHF | 0K / 1.1 GB | 2095 | 1
Bloom 560M RLHF V2 | 0K / 1.1 GB | 1998 | 3
ModeloAJustadoBloom1 | 0K / 2.2 GB | 68 | 0
Bloomz 560M | 0K / 1.1 GB | 374852 | 117
Note: green Score (e.g. "73.2") means that the model is better than jslin09/bloom-560m-finetuned-fraud.

Rank the Bloom 560M Finetuned Fraud Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227