Abstract
The Ontology Alignment Evaluation Initiative (OAEI) is a set of benchmarks for evaluating the performance of ontology alignment systems. In this paper we re-examine the Conference track of the OAEI, focusing on the degree of agreement between the track's reference alignments and the opinion of experts. We propose a new version of this benchmark that more closely reflects expert opinion and confidence on the matches. The performance of top alignment systems is compared on both versions of the benchmark. Additionally, a general method for crowdsourcing the development of more benchmarks of this type using Amazon's Mechanical Turk is introduced and shown to be scalable and cost-effective, and to agree well with expert opinion.
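The uncertain benchmark described above attaches an expert confidence score to each reference correspondence. One simple way such a reference can be used to score a matcher is a confidence-weighted precision and recall, sketched below. This is an illustration under that assumption, not the paper's exact evaluation measure, and the entity names in the toy example are hypothetical.

```python
# Sketch: scoring a system alignment against an *uncertain* reference
# alignment, where each reference correspondence carries an expert
# confidence in [0, 1].

def fuzzy_precision_recall(system, reference):
    """system: set of (entity1, entity2) pairs produced by a matcher.
    reference: dict mapping (entity1, entity2) -> expert confidence."""
    if not system or not reference:
        return 0.0, 0.0
    # Credit each system match by the confidence experts assigned to it
    # (0 if the correspondence is absent from the reference).
    credit = sum(reference.get(pair, 0.0) for pair in system)
    precision = credit / len(system)
    recall = credit / sum(reference.values())
    return precision, recall

# Toy example: one unanimous match, one disputed match, one false positive,
# and one reference correspondence the system misses entirely.
ref = {("cmt#Paper", "conf#Paper"): 1.0,
       ("cmt#Author", "conf#Contributor"): 0.6,
       ("cmt#Chair", "conf#Chair"): 1.0}
sys_out = {("cmt#Paper", "conf#Paper"),
           ("cmt#Author", "conf#Contributor"),
           ("cmt#Review", "conf#Document")}
p, r = fuzzy_precision_recall(sys_out, ref)
```

Under this weighting, a system is only partially penalized for producing a correspondence that experts found plausible but not certain, which is the intuition behind evaluating against an uncertain rather than a crisp reference.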
© 2014 Springer International Publishing Switzerland
Cite this paper
Cheatham, M., Hitzler, P. (2014). Conference v2.0: An Uncertain Version of the OAEI Conference Benchmark. In: Mika, P., et al. (eds.) The Semantic Web – ISWC 2014. Lecture Notes in Computer Science, vol. 8797. Springer, Cham. https://doi.org/10.1007/978-3-319-11915-1_3
Print ISBN: 978-3-319-11914-4
Online ISBN: 978-3-319-11915-1