BART model fine-tuned to generate the title of a scientific paper given its highlights and abstract.
This model is the result of fine-tuning sshleifer/distilbart-cnn-12-6. We fine-tuned for one epoch on CSPubSumm (Ed Collins, et al., "A supervised approach to extractive summarisation of scientific papers."), BIOPubSumm, and AIPubSumm (L. Cagliero, M. La Quatra, "Extracting highlights of scientific articles: A supervised summarization approach.").
You can find more details in the GitHub repo.
## Usage
This checkpoint should be loaded into `BartForConditionalGeneration.from_pretrained`. See the BART docs for more information.
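A minimal loading sketch. The checkpoint id below is a placeholder (replace it with this model's Hub id), and the input format, highlights followed by the abstract, is an assumption; see the GitHub repo for the exact preprocessing used during fine-tuning.

```python
def build_input(highlights: str, abstract: str) -> str:
    """Concatenate highlights and abstract into one source text.

    NOTE: this concatenation order is an assumption; check the
    GitHub repo for the format actually used in training.
    """
    return highlights.strip() + " " + abstract.strip()


def generate_title(checkpoint: str, highlights: str, abstract: str) -> str:
    """Load the checkpoint and generate a title with beam search."""
    # Imported lazily so build_input stays dependency-free.
    from transformers import BartForConditionalGeneration, BartTokenizer

    tokenizer = BartTokenizer.from_pretrained(checkpoint)
    model = BartForConditionalGeneration.from_pretrained(checkpoint)
    inputs = tokenizer(
        build_input(highlights, abstract),
        return_tensors="pt",
        truncation=True,
        max_length=1024,
    )
    output_ids = model.generate(
        **inputs, num_beams=4, max_length=32, early_stopping=True
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

For example, `generate_title("<hub-id>", highlights_text, abstract_text)` returns the generated title as a plain string.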
## Metrics
We have tested the model on all three test sets, with the following results:
| Dataset | ROUGE-1 F1 | ROUGE-2 F1 | ROUGE-L F1 | BERTScore F1 |
|---|---|---|---|---|
| AIPubSumm | 0.42713 | 0.21781 | 0.35251 | 0.90391 |
| BIOPubSumm | 0.45758 | 0.25219 | 0.39350 | 0.90205 |
| CSPubSumm | 0.51502 | 0.33377 | 0.45760 | 0.91703 |