Abstract
---
Manually creating datasets with human annotators is a laborious task that can lead to biased and inhomogeneous labels. We propose a flexible, semi-automatic framework for labeling data for relation extraction. Furthermore, we provide a dataset of preprocessed sentences from the requirements engineering domain, including a set of automatically created as well as hand-crafted labels. In our case study, we compare the human and automatic labels and show that there is a substantial overlap between both annotations.
Year | Venue | DocType
---|---|---
2021 | BUCC@RANLP | Conference

Citations | PageRank | References
---|---|---
0 | 0.34 | 0
Authors (5)
---

Name | Order | Citations | PageRank
---|---|---|---
Jeremias Bohn | 1 | 0 | 0.34 |
Jannik Fischbach | 2 | 0 | 0.68 |
Martin Schmitt | 3 | 0 | 1.35 |
Hinrich Schütze | 4 | 2113 | 362.21 |
Andreas Vogelsang | 5 | 83 | 31.23 |