Abstract
Electronic music artists and sound designers have unique workflow practices that necessitate specialized approaches for developing music information retrieval and creativity support tools. Furthermore, electronic music instruments, such as modular synthesizers, have near-infinite possibilities for sound creation and can be combined to create unique and complex audio paths. The process of discovering interesting sounds is often serendipitous and impossible to replicate. For this reason, many musicians in electronic genres record audio output at all times while they work in the studio. Consequently, it is difficult for artists to rediscover, among thousands of hours of recordings, audio segments that might be suitable for use in their compositions. In this paper, we describe LyricJam Sonic, a creative tool for musicians to rediscover their previous recordings, re-contextualize them with other recordings, and create original live music compositions in real time. A bi-modal AI-driven approach uses generated lyric lines to find compatible audio clips from the artist’s past studio recordings, then uses those clips to generate new lyric lines, which in turn retrieve further clips, creating a continuous and evolving stream of music and lyrics. The intent is to keep artists in a state of creative flow conducive to music creation, rather than in the analytical, critical state of deliberately searching for past audio segments. The system can run either in a fully autonomous mode without user input, or in a live performance mode, where the artist plays live music while the system “listens” and creates a continuous stream of music and lyrics in response. (LyricJam Sonic: https://lyricjam.ai. Demo videos: https://sites.google.com/view/supplementary-material-for-evo/home.)
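To make the bi-modal loop concrete, the following minimal Python sketch illustrates the alternation described in the abstract. It is an illustration under stated assumptions, not the authors' implementation: `retrieve_clip` and `generate_lyric_line` are hypothetical placeholders standing in for the system's audio-retrieval and lyric-generation models.

```python
import random

# Hypothetical placeholder components; the real system uses neural
# lyric-generation and audio-retrieval models over the artist's archive.
def retrieve_clip(lyric: str, archive: list[str]) -> str:
    """Find a past studio clip compatible with the current lyric line."""
    return random.choice(archive)

def generate_lyric_line(clip: str) -> str:
    """Generate a new lyric line conditioned on the retrieved clip."""
    return f"a line evoked by {clip}"

def bimodal_stream(archive: list[str], seed_lyric: str, steps: int = 4):
    """Alternate lyric -> clip -> lyric, yielding an evolving stream."""
    lyric = seed_lyric
    for _ in range(steps):
        clip = retrieve_clip(lyric, archive)
        lyric = generate_lyric_line(clip)
        yield clip, lyric

if __name__ == "__main__":
    archive = ["clip_001.wav", "clip_002.wav", "clip_003.wav"]
    for clip, lyric in bimodal_stream(archive, seed_lyric="night air hums"):
        print(f"{clip} -> {lyric}")
```

In the full system this loop runs continuously in real time; in live performance mode, the artist's incoming audio would additionally condition the stream rather than only the previously generated lyric line.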
Notes
- 2. Superscripts \(^{(s)}\) and \(^{(t)}\) in our notation refer to spectrogram and text, respectively.
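As an illustration of how these superscripts attach to symbols (a sketch only; the latent vectors \(\mathbf{z}\) and encoder names below are assumed, not quoted from the paper):

```latex
\[
  \mathbf{z}^{(s)} = \mathrm{Enc}_{\text{audio}}(\text{spectrogram}),
  \qquad
  \mathbf{z}^{(t)} = \mathrm{Enc}_{\text{text}}(\text{lyric line})
\]
```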
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Vechtomova, O., Sahu, G. (2023). LyricJam Sonic: A Generative System for Real-Time Composition and Musical Improvisation. In: Johnson, C., Rodríguez-Fernández, N., Rebelo, S.M. (eds) Artificial Intelligence in Music, Sound, Art and Design. EvoMUSART 2023. Lecture Notes in Computer Science, vol 13988. Springer, Cham. https://doi.org/10.1007/978-3-031-29956-8_19
DOI: https://doi.org/10.1007/978-3-031-29956-8_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-29955-1
Online ISBN: 978-3-031-29956-8
eBook Packages: Computer Science, Computer Science (R0)