# araelectra-base-discriminator-89540-pretrain

**Quran Passage Retrieval Model**

This model was fine-tuned on Arabic passage retrieval data and was used for Task A of the Quran QA 2023 shared task.

## Model Description

This model was fine-tuned to perform text classification on an Arabic dataset. The task is to identify passages from the Quran that are relevant to a given question, with a focus on retrieval quality.

  • Base model: AraELECTRA (`araelectra-base-discriminator`).
  • Task: Passage retrieval, cast as text classification.
  • Dataset: The Quran QA 2023 shared-task dataset.

## Intended Use

  • Language: Arabic
  • Task: Passage retrieval for Quran QA
  • Usage: Rank and retrieve relevant passages from a corpus of Arabic text, primarily for question answering (see the ranking sketch under How to Use).

## Evaluation Results

  • Evaluation results are reported in the papers listed under Citation.

## How to Use

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model = AutoModelForSequenceClassification.from_pretrained("mohammed-elkomy/quran-qa")
tokenizer = AutoTokenizer.from_pretrained("mohammed-elkomy/quran-qa")

# Tokenize the input and run a forward pass; `outputs.logits` holds the
# classification scores.
inputs = tokenizer("Your input text", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
```
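
To retrieve passages, a common pattern is to score every candidate against the question and sort by score. The sketch below is illustrative only: it assumes the checkpoint scores (question, passage) pairs cross-encoder style and that index 1 of the logits corresponds to the "relevant" class; verify both assumptions against the training setup before relying on the ranking. The question and passage strings are placeholders.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "mohammed-elkomy/quran-qa"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.eval()

question = "..."           # an Arabic question
passages = ["...", "..."]  # candidate Quranic passages (placeholders)

# Assumption: the model scores (question, passage) pairs, with the
# "relevant" class at index 1 of the logits.
inputs = tokenizer(
    [question] * len(passages), passages,
    return_tensors="pt", padding=True, truncation=True,
)
with torch.no_grad():
    logits = model(**inputs).logits
scores = logits.softmax(dim=-1)[:, 1]

# Rank passages from most to least relevant.
ranked = sorted(zip(passages, scores.tolist()), key=lambda x: x[1], reverse=True)
for passage, score in ranked:
    print(f"{score:.3f}\t{passage}")
```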

## Citation

If you use this model, please cite the following:

```bibtex
@inproceedings{elkomy2023quran,
  title={TCE at Qur'an QA 2023 Shared Task: Low Resource Enhanced Transformer-based Ensemble Approach for Qur'anic QA},
  author={Mohammed ElKomy and Amany Sarhan},
  year={2023},
  url={https://github.com/mohammed-elkomy/quran-qa/},
}

@inproceedings{elkomy2022quran,
  title={TCE at Qur'an QA 2022: Arabic Language Question Answering Over Holy Qur'an Using a Post-Processed Ensemble of BERT-based Models},
  author={Mohammed ElKomy and Amany Sarhan},
  year={2022},
  url={https://github.com/mohammed-elkomy/quran-qa/},
}
```


    