Abstract
Although deep learning has shown strong performance in many fields, it still lacks a basic form of human intelligence: the ability to draw inferences about other cases from a single instance. How to endow models with logical reasoning ability has therefore received much attention. We propose neural predicate networks, a model that combines deep learning methods with first-order logic. It converts visual tasks into first-order logic problems by deconstructing them into objects, concepts, and relations. It then makes first-order logic differentiable by learning logical predicates as neural networks. Finally, the differentiable model can be trained by backpropagation to simulate the formation of concepts in the human brain and solve the problem. Experimental results on two image concept classification datasets demonstrate the effectiveness and advantages of our approach.
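The core idea described above (logical predicates learned as neural networks, combined with differentiable logic connectives so the whole formula can be trained by backpropagation) can be illustrated with a minimal sketch. The architecture, feature dimensions, and the choice of the product t-norm for the connectives are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

np.random.seed(0)

class NeuralPredicate:
    """A predicate P(x) realized as a tiny MLP whose sigmoid output in
    (0, 1) is read as a soft truth value (hypothetical architecture)."""
    def __init__(self, in_dim, hidden=8):
        self.w1 = np.random.randn(in_dim, hidden) * 0.1
        self.w2 = np.random.randn(hidden, 1) * 0.1

    def __call__(self, x):
        h = np.tanh(x @ self.w1)
        return 1.0 / (1.0 + np.exp(-(h @ self.w2)))  # sigmoid -> soft truth

# Differentiable logic connectives (product t-norm and its co-norm);
# each is smooth, so gradients flow through logical formulas.
def AND(a, b): return a * b
def OR(a, b):  return a + b - a * b
def NOT(a):    return 1.0 - a

# Example: soft truth of "red(x) AND circle(x)" for an object feature
# vector x (hypothetical 4-dimensional features).
red, circle = NeuralPredicate(4), NeuralPredicate(4)
x = np.random.randn(1, 4)
truth = AND(red(x), circle(x))  # differentiable, trainable by backprop
print(float(truth))
```

Because every operation is differentiable, a loss on the formula's truth value propagates gradients back into the predicate networks, which is what lets concept formation be learned end to end.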
Acknowledgements
This work is supported in part by the Natural Science Foundation of China (61906066), in part by the Zhejiang Provincial Education Department Scientific Research Project (Y202044192), and in part by the Huzhou University Research and Innovation Fund (2022KYCX43).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Chen, B., Wu, M., Zheng, B., Zhu, S., Peng, W. (2022). Predicate Logic Network: Vision Concept Formation. In: Chen, Y., Zhang, S. (eds) Artificial Intelligence Logic and Applications. AILA 2022. Communications in Computer and Information Science, vol 1657. Springer, Singapore. https://doi.org/10.1007/978-981-19-7510-3_3
DOI: https://doi.org/10.1007/978-981-19-7510-3_3
Print ISBN: 978-981-19-7509-7
Online ISBN: 978-981-19-7510-3