## Dataset Summary
A dataset for benchmarking keyphrase extraction and generation techniques on long English scientific papers. For more details about the dataset, please refer to the original paper - []().

Data source - []()
## Dataset Structure
### Data Fields
- **id**: unique identifier of the document.
- **sections**: list of the names of all the sections present in the document.
- **sec_text**: list of lists, where each inner list contains the whitespace-separated words of one section.
- **sec_bio_tags**: list of lists of BIO tags, one tag per word in the corresponding section of **sec_text**.
- **extractive_keyphrases**: list of all the present keyphrases, i.e., keyphrases that appear verbatim in the document.
- **abstractive_keyphrases**: list of all the absent keyphrases, i.e., keyphrases that do not appear verbatim in the document.
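For illustration, a single datapoint has roughly the following shape. This is a hypothetical sketch: the field layout follows the descriptions above, but the section names, tokens, tags, and keyphrases shown here are invented.

```python
sample = {
    "id": "some-document-id",          # hypothetical identifier
    "sections": ["title", "abstract", "introduction"],
    "sec_text": [                      # one token list per section
        ["keyphrase", "extraction", "from", "long", "documents"],
        ["we", "study", "keyphrase", "extraction", "..."],
        ["..."],
    ],
    "sec_bio_tags": [                  # one tag per token, aligned with sec_text
        ["B", "I", "O", "O", "O"],
        ["O", "O", "B", "I", "O"],
        ["O"],
    ],
    "extractive_keyphrases": ["keyphrase extraction"],  # appear in the text
    "abstractive_keyphrases": ["scholarly documents"],  # do not appear in the text
}
```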
### Data Splits
| Split | # Datapoints |
|--|--|
| Train-Small | 20,000 |
| Train-Medium | 50,000 |
| Train-Large | 1,296,613 |
| Test | 10,000 |
| Validation | 10,000 |
## Usage
### Small Dataset
```python
from datasets import load_dataset
# get small dataset
dataset = load_dataset("midas/ldkp10k", "small")
def order_sections(sample):
    """
    Corrects the order in which different sections appear in the document.
    The resulting order is: title, abstract, other sections in the body.
    """
    sections = []
    sec_text = []
    sec_bio_tags = []
    # move the title section (with its words and tags) to the front
    if "title" in sample["sections"]:
        title_idx = sample["sections"].index("title")
        sections.append(sample["sections"].pop(title_idx))
        sec_text.append(sample["sec_text"].pop(title_idx))
        sec_bio_tags.append(sample["sec_bio_tags"].pop(title_idx))
    # then the abstract
    if "abstract" in sample["sections"]:
        abstract_idx = sample["sections"].index("abstract")
        sections.append(sample["sections"].pop(abstract_idx))
        sec_text.append(sample["sec_text"].pop(abstract_idx))
        sec_bio_tags.append(sample["sec_bio_tags"].pop(abstract_idx))
    # finally, the remaining body sections in their original order
    sections += sample["sections"]
    sec_text += sample["sec_text"]
    sec_bio_tags += sample["sec_bio_tags"]
    return sections, sec_text, sec_bio_tags
# print a sample from each data split
for split in ["train", "validation", "test"]:
    print(f"Sample from {split} data split")
    sample = dataset[split][0]
    sections, sec_text, sec_bio_tags = order_sections(sample)
    print("Fields in the sample: ", list(sample.keys()))
    print("Section names: ", sections)
    print("Tokenized Document: ", sec_text)
    print("Document BIO Tags: ", sec_bio_tags)
    print("Extractive/present Keyphrases: ", sample["extractive_keyphrases"])
    print("Abstractive/absent Keyphrases: ", sample["abstractive_keyphrases"])
    print("\n-----------\n")
```
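Since present keyphrases are annotated with token-level BIO tags, a common first step is decoding the tags back into phrases. Below is a minimal sketch of such a decoder; it assumes the tag strings are plain `B`, `I`, and `O`, so verify the actual tag vocabulary in the loaded data before relying on it.

```python
from datasets import load_dataset

def decode_bio(tokens, tags):
    """Group consecutive B/I-tagged tokens into phrases; O-tagged tokens are skipped."""
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                # a new keyphrase starts here
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:  # continuation of the current keyphrase
            current.append(token)
        else:                         # O (or a stray I): close any open phrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

dataset = load_dataset("midas/ldkp10k", "small")
sample = dataset["train"][0]
# phrases decoded from the first section should overlap with the
# document-level extractive_keyphrases list
print(decode_bio(sample["sec_text"][0], sample["sec_bio_tags"][0]))
print(sample["extractive_keyphrases"])
```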
### Medium Dataset
```python
from datasets import load_dataset
# get medium dataset
dataset = load_dataset("midas/ldkp10k", "medium")
```
### Large Dataset
```python
from datasets import load_dataset
# get large dataset
dataset = load_dataset("midas/ldkp10k", "large")
```
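The training split of the large configuration contains roughly 1.3M documents, so it can be convenient to stream it rather than download everything up front. Streaming via `streaming=True` is standard `datasets` functionality, though whether it works here depends on how this dataset is hosted, so treat this as a sketch.

```python
from datasets import load_dataset

# iterate over the large training split without materializing it locally
dataset = load_dataset("midas/ldkp10k", "large", split="train", streaming=True)
for sample in dataset:
    print(sample["id"], sample["extractive_keyphrases"])
    break  # inspect just the first document
```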
## Citation Information
Please cite the following works if you use this dataset.
```
```
## Contributions
Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax), [@UmaGunturi](https://github.com/UmaGunturi) and [@ad6398](https://github.com/ad6398) for adding this dataset.