
Model Card: bert-base-multilingual-cased-finetuned-norsk-ner (Fine-Tuned with WikiANN)

Overview

  • Model Name: bert-base-multilingual-cased-finetuned-norsk-ner
  • Model Type: Named Entity Recognition (NER)
  • Language: Multilingual with focus on Norwegian (Norsk)
  • Fine-Tuned with: WikiANN dataset

Description

bert-base-multilingual-cased-finetuned-norsk-ner is a multilingual BERT (Bidirectional Encoder Representations from Transformers) model fine-tuned for Named Entity Recognition (NER) in Norwegian (Norsk). It was fine-tuned on the WikiANN dataset, which contains annotated named entities for many languages, including Norwegian.

Named Entity Recognition is the task of identifying and classifying named entities in text, such as persons, organizations, locations, and more. This model can be used to extract that kind of structured information from Norwegian text.
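A minimal illustration of the BIO tagging scheme this kind of model produces, on a hypothetical Norwegian sentence (the sentence and tags below are illustrative, not actual model output):

# Hypothetical example: tokens paired with BIO tags for
# "Jens Stoltenberg bor i Oslo" ("Jens Stoltenberg lives in Oslo").
example = [
    ("Jens", "B-PER"),         # beginning of a person's name
    ("Stoltenberg", "I-PER"),  # inside (continuation) of the person's name
    ("bor", "O"),              # not part of any entity
    ("i", "O"),
    ("Oslo", "B-LOC"),         # beginning of a location name
]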

Intended Use

The bert-base-multilingual-cased-finetuned-norsk-ner model, fine-tuned on the WikiANN dataset, is designed for Named Entity Recognition (NER) in Norwegian text. It identifies and classifies named entities in the following categories, using labels that distinguish the beginning (B-) and inside (I-) of multi-token names, as listed under Labels below (see the sketch after this list):

  • Persons (PER): individuals' names.
  • Organizations (ORG): names of companies, institutions, and other organizations.
  • Locations (LOC): place names such as cities, countries, and regions.
  • Miscellaneous (MISC): other named entities that do not fit the categories above.
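
A short sketch of filtering the pipeline output by category; it uses the model id from this card, and the input string is a placeholder to replace with your own Norwegian text:

from transformers import pipeline

# Build an NER pipeline directly from the Hub repository id;
# aggregation groups word pieces into whole entities.
ner = pipeline(
    "ner",
    model="Kushtrim/bert-base-multilingual-cased-finetuned-norsk-ner",
    aggregation_strategy="first",
)

results = ner("Sett inn tekst her")  # placeholder text, as in the Usage example below

# After aggregation each result carries an "entity_group" such as PER, ORG, LOC or MISC.
locations = [r["word"] for r in results if r["entity_group"] == "LOC"]
print(locations)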

Labels

Label   Description
MISC    Miscellaneous entities or categories.
B-PER   Beginning of a person's name.
I-PER   Inside of a person's name.
B-ORG   Beginning of an organization name.
I-ORG   Inside of an organization name.
B-LOC   Beginning of a location name.
I-LOC   Inside of a location name.
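
The label set above can be checked against the model configuration itself; a minimal sketch, assuming the mapping stored in the repository matches the table:

from transformers import AutoConfig

# Load only the configuration to inspect the label mapping without downloading the full weights.
config = AutoConfig.from_pretrained("Kushtrim/bert-base-multilingual-cased-finetuned-norsk-ner")
print(config.id2label)  # maps label ids to the tag names listed in the table above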

Usage

from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
import pandas as pd

# Load the fine-tuned tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("Kushtrim/bert-base-multilingual-cased-finetuned-norsk-ner")
model = AutoModelForTokenClassification.from_pretrained("Kushtrim/bert-base-multilingual-cased-finetuned-norsk-ner")

# Build an NER pipeline; aggregation_strategy="first" merges word pieces into whole entities.
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="first")

text = "Sett inn tekst her"  # "Insert text here" - replace with your own Norwegian text

results = ner(text)

# View the recognized entities as a table.
print(pd.DataFrame.from_records(results))
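
The WikiANN data used for fine-tuning can also be loaded for evaluation or further training; a minimal sketch, assuming the Norwegian ("no") configuration of the wikiann dataset on the Hugging Face Hub corresponds to the training data described above:

from datasets import load_dataset

# "no" is the Norwegian configuration of WikiANN; it provides train/validation/test splits.
wikiann_no = load_dataset("wikiann", "no")
print(wikiann_no["train"][0])  # tokens together with their ner_tags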
