Memepp Llama 512v 6l 8h 256e by mrsteyk


Tags: Arxiv:1910.09700, Autotrain compatible, En, Endpoints compatible, Llama, Meme++, PyTorch, Region: US, Safetensors, Tiny, W++

Memepp Llama 512v 6l 8h 256e Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4"). No benchmark scores are listed for this model.
Memepp Llama 512v 6l 8h 256e (mrsteyk/memepp-llama-512v-6l-8h-256e)

Memepp Llama 512v 6l 8h 256e Parameters and Internals

Model Type: text-generation
Use Cases:
  Areas: meme++ character card generation
  Applications: random meme++ card generation
  Limitations: CSAM-related content
  Considerations: Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.
Additional Notes: Several sections of the original model card are still marked "more information needed."
Supported Languages: English
Training Details:
  Data Sources: Meme++ character definitions taken from the internet
  Data Volume: 253,952,000 tokens
  Hardware Used: NVIDIA GTX 1050 Ti Mobile
LLM Name: Memepp Llama 512v 6l 8h 256e
Repository: 🤗 https://huggingface.co/mrsteyk/memepp-llama-512v-6l-8h-256e
Model Size: 5.4M
Required VRAM: 0 GB
Updated: 2025-02-23
Maintainer: mrsteyk
Model Type: llama
Model Files: 0.0 GB, 0.0 GB
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: WTFPL
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.28.1
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 512
Torch Data Type: float32
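
The specs above (LlamaForCausalLM architecture, LlamaTokenizer, 2048-token context, float32 weights) map onto the standard Hugging Face transformers loading path. The following is a minimal sketch of how a model with these specs could be loaded and sampled for random meme++ card generation; the prompt and the sampling parameters are illustrative assumptions, not values taken from the model card.

# Minimal sketch: load mrsteyk/memepp-llama-512v-6l-8h-256e with transformers.
# Architecture (LlamaForCausalLM), tokenizer class (LlamaTokenizer), context
# length (2048) and dtype (float32) come from the listing above; the prompt and
# sampling settings below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "mrsteyk/memepp-llama-512v-6l-8h-256e"

tokenizer = AutoTokenizer.from_pretrained(repo_id)      # resolves to LlamaTokenizer
model = AutoModelForCausalLM.from_pretrained(           # resolves to LlamaForCausalLM
    repo_id,
    torch_dtype=torch.float32,
)
model.eval()

# Start generation from the BOS token to sample a "random" meme++ card.
inputs = tokenizer("<s>", return_tensors="pt", add_special_tokens=False)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,   # well within the 2048-token context
        do_sample=True,       # random sampling, matching the stated use case
        temperature=1.0,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))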

Best Alternatives to Memepp Llama 512v 6l 8h 256e

Best Alternatives | Context / RAM | Downloads | Likes
Pallas 0.5 LASER 0.6 AWQ | 195K / 19.3 GB | 11 | 1
Pallas 0.5 AWQ | 195K / 19.3 GB | 8 | 1
Pallas 0.3 AWQ | 195K / 19.3 GB | 11 | 1
Pallas 0.4 AWQ | 195K / 19.3 GB | 10 | 1
Tess M V1.3 AWQ | 195K / 19.3 GB | 12 | 1
Tess M V1.2 AWQ | 195K / 19.3 GB | 8 | 1
Tess M V1.1 AWQ | 195K / 19.3 GB | 6 | 1
Tess M Creative V1.0 AWQ | 195K / 19.3 GB | 15 | 1
UNA 34Beagles 32K Bf16 V1 GPTQ | 32K / 19.2 GB | 38 | 3
PiVoT SUS RP AWQ | 8K / 19.3 GB | 6 | 1

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227