Lince Zero by clibrain


Tags: arXiv:1910.09700, AutoTrain compatible, custom code, es, Falcon, PyTorch, region:us, sharded
Model Card on HF 🤗: https://huggingface.co/clibrain/lince-zero

Lince Zero Parameters and Internals

Model Type 
Language model, instruction model, causal decoder-only
Use Cases 
Areas:
Research, Commercial applications
Applications:
Virtual assistants, Content generation
Primary Use Cases:
Following natural language instructions in Spanish
Limitations:
Not ideal for further fine-tuning without domain-specific data; should not be used in production without a prior risk assessment
Supported Languages 
es (NLP)
Training Details 
Data Sources:
Falcon-7B (base model); an 80k-example proprietary instruction dataset inspired by Alpaca and Dolly
Methodology:
Fine-tuned
Training Time:
8h
Hardware Used:
1x A100 (40 GB)
Model Architecture:
Causal decoder-only model based on the Falcon-7B architecture, which adapts the design described in the GPT-3 paper
Responsible AI Considerations 
Mitigation Strategies:
Conduct a comprehensive assessment to address potential biases and ensure compliance with legal and ethical standards
Input Output 
Accepted Modalities:
Text
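
Because the model is tuned on an Alpaca/Dolly-inspired dataset to follow natural-language instructions in Spanish, prompts are typically assembled from an instruction template. The sketch below shows one plausible Alpaca-style Spanish template; the exact wording is an assumption rather than a documented format, so check the official model card for the canonical template.

```python
# Sketch of an Alpaca-style Spanish prompt builder. The template wording is an
# assumption based on the Alpaca/Dolly-inspired training data, not a documented
# format for this model.
def build_prompt(instruction: str, context: str = "") -> str:
    """Format a Spanish instruction (and optional context) into a single prompt string."""
    if context:
        return (
            "A continuación hay una instrucción que describe una tarea, junto con una "
            "entrada que proporciona más contexto. Escribe una respuesta que complete "
            "adecuadamente la petición.\n\n"
            f"### Instrucción:\n{instruction}\n\n"
            f"### Entrada:\n{context}\n\n"
            "### Respuesta:\n"
        )
    return (
        "A continuación hay una instrucción que describe una tarea. Escribe una "
        "respuesta que complete adecuadamente la petición.\n\n"
        f"### Instrucción:\n{instruction}\n\n"
        "### Respuesta:\n"
    )

print(build_prompt("Dame una lista de sitios a visitar en España."))
```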
LLM Name: Lince Zero
Repository 🤗: https://huggingface.co/clibrain/lince-zero
Required VRAM: 13.8 GB
Updated: 2025-02-22
Maintainer: clibrain
Model Type: falcon
Model Files: 9.9 GB (1-of-2), 3.9 GB (2-of-2)
Supported Languages: es
Model Architecture: FalconForCausalLM
License: apache-2.0
Model Max Length: 2048
Transformers Version: 4.27.4
Is Biased: 0
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 65024
Torch Data Type: bfloat16
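
Given the specs above (FalconForCausalLM architecture with custom code, bfloat16 weights, roughly 13.8 GB of VRAM required), a minimal Transformers inference sketch might look like the following. It assumes a recent transformers release plus the accelerate package and a suitable GPU; the generation settings are illustrative, not maintainer recommendations.

```python
# Minimal inference sketch for clibrain/lince-zero with Hugging Face Transformers.
# Assumes a CUDA GPU with enough memory for the bfloat16 weights (~14 GB) and that
# executing the repository's custom modeling code (trust_remote_code=True) is acceptable.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "clibrain/lince-zero"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the listed Torch data type
    trust_remote_code=True,      # the repo is tagged as shipping custom code
    device_map="auto",           # requires the accelerate package
)

# Alpaca-style Spanish prompt (see the template sketch earlier on this page).
prompt = "### Instrucción:\nDame una lista de sitios a visitar en España.\n\n### Respuesta:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.3)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```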

Quantized Models of Lince Zero

Model | Likes | Downloads | VRAM
Lince Zero GGUF | 2 | 47 | 4 GB
Lince Zero GPTQ | 1 | 9 | 4 GB
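
For machines without ~14 GB of VRAM, the GGUF quantization listed above (~4 GB) can be run on CPU or a small GPU through llama.cpp bindings. This is a hypothetical sketch using llama-cpp-python; the quantized file name, context size, and generation settings are assumptions, so substitute the actual file from the GGUF repository.

```python
# Hypothetical sketch: running a GGUF quantization of Lince Zero with llama-cpp-python
# (pip install llama-cpp-python). The model_path value is an assumed local file name.
from llama_cpp import Llama

llm = Llama(
    model_path="lince-zero.Q4_K_M.gguf",  # assumption: use the real file from the GGUF repo
    n_ctx=2048,                           # matches the model's max length
)

prompt = "### Instrucción:\nResume en una frase qué es Lince Zero.\n\n### Respuesta:\n"
result = llm(prompt, max_tokens=128, temperature=0.3, stop=["### Instrucción:"])
print(result["choices"][0]["text"])
```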

Best Alternatives to Lince Zero

Best Alternatives | Context / RAM | Downloads | Likes
Really Tiny Falcon Testing | 2K / 0 GB | 47 | 1
Tiny Random FalconForCausalLM | 0.5K / 0 GB | 15795 | 0
Try2 Deploy Falcon | 0K / 13.8 GB | 5 | 0
ChatLM | 0K / 5.2 GB | 232 | 2



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227