Abstract
Symbolic optimization methods have been used to solve a variety of challenging and relevant problems, such as symbolic regression and neural architecture search. However, the current state of the art typically learns each problem from scratch and is unable to leverage the pre-existing knowledge and datasets available for many applications. Inspired by the similarity between the sequence representations learned in natural language processing and the formulation of symbolic optimization as a discrete sequence optimization problem, we propose language model-accelerated deep symbolic optimization (LA-DSO), a method that leverages language models to learn symbolic optimization solutions more efficiently. We demonstrate LA-DSO on two tasks: symbolic regression, which allows extensive experimentation due to its low computational requirements, and computational antibody optimization, which shows that our proposal accelerates learning in challenging real-world problems.
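The abstract frames symbolic optimization as discrete sequence optimization: candidate expressions are sampled token by token in prefix notation and scored against data. The sketch below illustrates that formulation for symbolic regression. The token library, the uniform sampling distribution, and the reward are illustrative stand-ins, not the authors' implementation; in DSO the sampling distribution is a learned policy (e.g., an RNN) trained with risk-seeking policy gradients.

```python
import numpy as np

# Simplified token library: operators and a terminal, with their arities.
TOKENS = ["add", "mul", "sin", "x"]
ARITY = {"add": 2, "mul": 2, "sin": 1, "x": 0}

def sample_expression(probs, rng, max_len=12):
    """Sample a prefix-notation expression token by token.

    `probs` is a categorical distribution over TOKENS; a learned policy
    would condition this distribution on the partial sequence so far."""
    tokens, open_slots = [], 1  # one slot for the root token
    while open_slots > 0 and len(tokens) < max_len:
        tok = rng.choice(TOKENS, p=probs)
        tokens.append(tok)
        open_slots += ARITY[tok] - 1  # token fills one slot, opens `arity` more
    tokens.extend(["x"] * open_slots)  # close any remaining slots with terminals
    return tokens

def evaluate(tokens, x):
    """Recursively evaluate a prefix expression on the input array x."""
    def rec(i):
        tok = tokens[i]
        if tok == "x":
            return x, i + 1
        if tok == "sin":
            v, j = rec(i + 1)
            return np.sin(v), j
        a, j = rec(i + 1)
        b, k = rec(j)
        return (a + b, k) if tok == "add" else (a * b, k)
    value, _ = rec(0)
    return value

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y_true = x * x + np.sin(x)  # ground-truth function to recover

# Uniform probabilities as a stand-in for the learned policy; training
# would shift this mass toward tokens appearing in high-reward expressions.
probs = np.full(len(TOKENS), 1.0 / len(TOKENS))
best = max(
    (sample_expression(probs, rng) for _ in range(200)),
    key=lambda t: -np.sqrt(np.mean((evaluate(t, x) - y_true) ** 2)),
)
print(best)
```

The key design point is that arities make every sampled sequence a syntactically valid expression tree, so the reward is always computable.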
(Figures 1–9 of the article appear here in the original publication.)
Acknowledgements
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 with Lawrence Livermore National Security, LLC (LLNL-JRNL-840809). The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
Evaluation of simple-LM baseline
Figures 10 and 11 compare the best and average performance of expressions found by simple-LM against the DSO baseline and LA-DSO. Unsurprisingly, simple-LM greatly underperforms on the best expression found (the main metric) across all benchmarks. This happens because it can only sample tokens according to how probable they are in equations "in general," without regard to the specific problem being solved. The result is a decent average sampling reward, but the algorithm fails to identify correct expressions for the problem at hand. Simple-LM is therefore not useful for our applications of interest.
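The failure mode described above can be sketched by contrasting the two sampling schemes: a fixed, unconditional token distribution versus a policy whose logits are updated from task rewards. The token set, probabilities, and function names here are illustrative assumptions, not code from the paper.

```python
import numpy as np

# Hypothetical corpus-level token frequencies, standing in for what a
# pretrained language model assigns to equation tokens "in general."
TOKENS = ["add", "mul", "sin", "x", "const"]
lm_probs = np.array([0.30, 0.25, 0.15, 0.20, 0.10])

def simple_lm_sample(rng, length=8):
    """simple-LM: sample tokens from fixed corpus-level probabilities.

    This distribution never sees the reward signal or the dataset, so
    samples look like plausible equations on average but are not steered
    toward the specific regression problem."""
    return [TOKENS[i] for i in rng.choice(len(TOKENS), size=length, p=lm_probs)]

def policy_sample(rng, logits, length=8):
    """Task-conditioned policy (as in DSO/LA-DSO): the logits are updated
    from rewards of previously sampled expressions, shifting probability
    mass toward tokens that help on *this* problem."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return [TOKENS[i] for i in rng.choice(len(TOKENS), size=length, p=p)]

rng = np.random.default_rng(1)
print(simple_lm_sample(rng))  # fixed distribution: never improves with feedback
print(policy_sample(rng, np.array([0.0, 2.0, -1.0, 1.0, 0.0])))
```

Because simple-LM's distribution is static, its average reward reflects generic equation plausibility, while the best-expression metric requires the problem-specific concentration that only the trained policy provides.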
About this article
Cite this article
da Silva, F.L., Goncalves, A., Nguyen, S. et al. Language model-accelerated deep symbolic optimization. Neural Comput & Applic (2023). https://doi.org/10.1007/s00521-023-08802-8