GPT 7B Nordic Prerelease by HPLT


Tags: autotrain-compatible, endpoints-compatible, llama, safetensors, sharded, tensorflow, region:us. Languages: da, en, fi, is, nn, no, sv

GPT 7B Nordic Prerelease Benchmarks

nn.n% indicates how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
GPT 7B Nordic Prerelease (HPLT/gpt-7b-nordic-prerelease)

GPT 7B Nordic Prerelease Parameters and Internals

Model Type 
decoder-only transformer
Additional Notes 
These are research checkpoints and the model is not fully trained yet. Care should be taken when using outputs.
Supported Languages 
fi (fluent), nn (fluent), en (fluent), no (fluent), da (fluent), sv (fluent), is (fluent)
Training Details 
Data Sources:
Finnish, English, Swedish, Danish, Norwegian, Icelandic, code
Data Volume:
2 trillion tokens in total (1.3 trillion as of this release)
Hardware Used:
LUMI supercomputer
Model Architecture:
decoder-only transformer
LLM Name: GPT 7B Nordic Prerelease
Repository: https://huggingface.co/HPLT/gpt-7b-nordic-prerelease
Model Size: 7b
Required VRAM: 15.1 GB
Updated: 2025-02-22
Maintainer: HPLT
Model Type: llama
Model Files: 4.9 GB (1-of-4), 5.0 GB (2-of-4), 4.1 GB (3-of-4), 1.1 GB (4-of-4)
Supported Languages: fi, nn, en, no, da, sv, is
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.37.2
Tokenizer Class: BloomTokenizer
Padding Token: <pad>
Vocabulary Size: 131072
Torch Data Type: bfloat16
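
Given the metadata above (LlamaForCausalLM architecture, bfloat16 weights, 4096-token context), the checkpoint should load with the standard Hugging Face transformers auto classes. The snippet below is a minimal sketch assuming that API; the Swedish prompt and the generation settings are illustrative assumptions, not values documented by HPLT.

# Minimal loading sketch for HPLT/gpt-7b-nordic-prerelease using the
# standard transformers auto classes. Prompt and sampling settings are
# illustrative assumptions only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HPLT/gpt-7b-nordic-prerelease"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type listed above
    device_map="auto",           # requires the accelerate package
)

# Swedish example prompt; sv is listed among the fluent languages above.
prompt = "Stockholm är huvudstaden i"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Since these are prerelease research checkpoints, outputs should be treated as experimental, as noted above.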

Quantized Models of the GPT 7B Nordic Prerelease

Model | Likes | Downloads | VRAM
...7B Nordic Prerelease EXL2 4bpw | 2 | 5 | 4 GB
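
The roughly 4 GB figure for the 4-bits-per-weight EXL2 quant is consistent with a simple weights-only estimate. The arithmetic below is a back-of-the-envelope sketch under assumed numbers (approximately 7e9 parameters, about 10% overhead); it ignores activation and KV-cache memory.

# Weights-only VRAM estimate for a 4bpw quantization of a ~7B model.
params = 7e9             # approximate parameter count (assumption)
bits_per_weight = 4.0    # EXL2 4bpw
overhead = 1.10          # assumed ~10% overhead for scales and buffers

weight_bytes = params * bits_per_weight / 8
estimate_gib = weight_bytes * overhead / 1024**3
print(f"~{estimate_gib:.1f} GiB of weights")  # about 3.6 GiB, in line with the listed 4 GB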

Best Alternatives to GPT 7B Nordic Prerelease

Best Alternatives | Context / RAM | Downloads | Likes
2 Very Sci Fi | 1024K / 16.1 GB | 317 | 0
...1M 1000000ctx AEZAKMI 3 1 1702 | 1024K / 13.5 GB | 23 | 1
... Qwen2.5llamaify 7B V23.1 200K | 195K / 15.2 GB | 3943 | 3
LlamaStock 8B | 128K / 16.1 GB | 11 | 1
SuperNeuralDreadDevil 8B | 128K / 16.1 GB | 54 | 1
Yarn Llama 2 7B 128K | 128K / 13.5 GB | 6422 | 39
LLaMA 7B PoSE YaRN 128K | 128K / 13.5 GB | 7 | 3
LLaMA 7B PoSE Linear 96K | 96K / 27 GB | 9 | 2
LLaMA 7B PoSE YaRN 96K | 96K / 13.5 GB | 11 | 1
Chat Llama2 7B 80K | 80K / 13.8 GB | 8 | 0
Note: a green score (e.g. "73.2") means that the alternative performs better than HPLT/gpt-7b-nordic-prerelease.

Rank the GPT 7B Nordic Prerelease Capabilities

Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241227