Model Type | |
Use Cases |
Areas: | |
Primary Use Cases: | |
Limitations: | This is an LLM, not a knowledge model. It will generally perform better on tasks involving summarization, question answering, and chat than on tasks requiring domain-specific knowledge. The data used for training is machine translated and may contain grammatical and other errors. |
|
Considerations: | This model requires prompt tuning to achieve optimal results. |
|
|
Additional Notes | This is a 4-bit GPTQ model, intended to be less computationally intensive at the cost of reduced accuracy compared to the full-precision model. |
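To illustrate the memory/accuracy trade-off behind 4-bit quantization, here is a minimal sketch in plain Python. This is NOT the actual GPTQ algorithm (which quantizes layer by layer while minimizing output error); it only shows why storing weights in 4 bits saves memory but loses precision. All names below are illustrative.

```python
# Simplified sketch of 4-bit weight quantization (not the real GPTQ
# algorithm). Each float weight is mapped to an integer in [-8, 7]
# with a shared scale, then mapped back with a small rounding error.

def quantize_4bit(weights):
    """Map float weights to 4-bit integers in [-8, 7] with a shared scale."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float weights from 4-bit integers."""
    return [x * scale for x in q]

weights = [0.12, -0.53, 0.97, -0.08]
q, scale = quantize_4bit(weights)
restored = dequantize_4bit(q, scale)
# Each restored weight is close to, but not exactly, the original:
errors = [abs(a - b) for a, b in zip(weights, restored)]
```

The real GPTQ method is considerably more sophisticated (per-group scales, error compensation across columns), but the storage saving is the same idea: 4 bits per weight instead of 16.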
|
Supported Languages | Norwegian (native), English (secondary) |
|
Training Details |
Data Sources: | NbAiLab/norwegian-alpaca, RuterNorway/OpenOrcaNo-15k |
|
Data Volume: | 15,000 samples of machine-translated data plus a small set of custom-made instructional data |
|
Methodology: | Fine-tuned on Norwegian datasets |
|
Training Time: | |
Model Architecture: | GPTQ-quantized version of Llama 2 13B |
|
|
Safety Evaluation |
Methodologies: | |
Ethical Considerations: | Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. |
|
|
Input Output |
Input Format: | Special prompt format for Llama 2 Chat |
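The Llama 2 Chat convention wraps the system prompt in `<<SYS>>` tags inside an `[INST]` block. The sketch below builds a single-turn prompt under that assumption; the exact template should be verified against the model's own tokenizer configuration, and the Norwegian example strings are illustrative.

```python
# Build a single-turn Llama 2 Chat prompt. The [INST]/<<SYS>> markers
# follow the published Llama 2 chat convention; verify against this
# model's tokenizer template before relying on the exact string.

B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(user_message, system_prompt=""):
    """Format one user turn, optionally prefixed by a system prompt."""
    if system_prompt:
        user_message = B_SYS + system_prompt + E_SYS + user_message
    return f"{B_INST} {user_message.strip()} {E_INST}"

prompt = build_prompt(
    "Hva er hovedstaden i Norge?",
    system_prompt="Du er en hjelpsom assistent.",
)
```

Multi-turn conversations repeat the `[INST] ... [/INST]` pattern, appending each model reply between the blocks.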
|
Accepted Modalities: | |
Output Format: | |
Performance Tips: | Requires prompt tuning for better results. |
|
|