Investigating complementarities in subscription software usage using advertising experiments

Quantitative Marketing and Economics

Abstract

In this study, we examine complementarities in usage across a set of related software products from a multi-product firm. We employ a novel experimental approach to causally estimate complementarities, leveraging rich usage data and advertising experiments that directly affect the usage of only one product at a time to measure complementarities based on consumption rather than purchase. Our approach is particularly useful as digital contexts are characterized by the simultaneous presence of both substitutability and complementarity between products. They also have scant price variation, bundled pricing plans, and infrequent purchase or subscription renewal decisions, often making typical cross-price elasticity measures for complementarities infeasible. We apply our approach to data from a software company with a suite of related products and find evidence for varying degrees of complementarity across both user groups and products. We show that accounting for complementarities significantly affects the measurement of ad effectiveness and may impact ad targeting decisions by the firm. We explore heterogeneity in complementarities, finding that they are larger for users who have used the products heavily in the past, but small or zero for those who have not. Ours is one of the first studies to causally examine complementarity in usage in the context of subscription products, and our identification strategy can be applied to a variety of contexts.

Notes

  1. Across all pairs of campaigns, the median percentage overlap of users is 0.3%, and 78% of campaign pairs have less than 5% user overlap.

  2. If the overall time investment were affected, this would represent a violation of the exclusion restriction. We examine alternative accounts in Section 6.

  3. These additional results are available from the authors upon request.

  4. Exceptions include Hong et al. (2016) and Lee et al. (2013) in the grocery context and Liu et al. (2018) in the context of durable goods.

  5. We test for differences across campaigns by running a pooled 2SLS regression with 1,000 users sampled from each campaign and an interaction between usage (instrumented by email campaigns) and an indicator for each campaign in the second stage. We use an F-test to evaluate the restriction that all campaign-specific coefficients are zero and reject the null hypothesis for all eight combinations of independent and dependent usage variables. This implies that complementarity varies across campaign populations. The full regression results are in the supplementary analysis (separate from the appendix) in Tables 1 and 2. (A sketch of this pooled specification appears after these notes.)

  6. We report the coefficients and standard errors of the cross-product ad effects in Table 10 in the appendix. Of the 108 regressions, 48 are significant. Of these 48 significant cross-product elasticities, 43 have the same sign as, and a smaller magnitude than, the own-product elasticity for the same campaign. The remaining five have the same sign but larger magnitudes; three differ by less than 10%, while two are substantially larger (28% and 212% increases). Across the significant results, the average difference between the non-targeted and targeted application elasticities is -53%, and the median is -66%. (A sketch of the relative-difference calculation appears after these notes.)

  7. Because we are interested in cross-sectional effects across users of different activity levels, and these activity levels are collinear with the individual fixed effects, we remove individual fixed effects for these specifications.

  8. The full set of analyses is available from the authors upon request.
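
The following is a minimal sketch of the pooled test described in Note 5 for one pair of usage variables, assuming a user-period data frame with placeholder column names (usage_own, usage_other, email_treated, campaign); the paper's actual specification (sampling, controls, fixed effects, clustering) is richer than what is shown, and the interactions here enter as deviations from a common usage coefficient.

```python
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.iv import IV2SLS

def campaign_heterogeneity_test(df: pd.DataFrame):
    """Pooled 2SLS with campaign-interacted (instrumented) usage; joint test on the interactions."""
    # Campaign dummies (one base campaign dropped).
    camp = pd.get_dummies(df["campaign"], prefix="camp", drop_first=True, dtype=float)

    # Exogenous block: intercept plus campaign main effects (the real spec adds controls / fixed effects).
    exog = camp.copy()
    exog.insert(0, "const", 1.0)

    # Endogenous block: own-app usage and its campaign interactions.
    endog = camp.mul(df["usage_own"], axis=0).add_prefix("use_x_")
    endog.insert(0, "usage_own", df["usage_own"].astype(float))

    # Instruments: email-campaign assignment and its campaign interactions.
    inst = camp.mul(df["email_treated"], axis=0).add_prefix("z_x_")
    inst.insert(0, "email_treated", df["email_treated"].astype(float))

    res = IV2SLS(df["usage_other"].astype(float), exog, endog, inst).fit(cov_type="robust")

    # Joint Wald test that all campaign-specific deviations are zero
    # (i.e., equal complementarity across campaigns); chi-square analogue of the F-test.
    names = [c for c in endog.columns if c.startswith("use_x_")]
    b = res.params[names].to_numpy()
    V = res.cov.loc[names, names].to_numpy()
    wald = float(b @ np.linalg.solve(V, b))
    return wald, float(stats.chi2.sf(wald, df=len(names)))
```

Running this for each of the eight combinations of independent and dependent usage variables corresponds to the set of tests described in Note 5.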
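
The percentage comparisons in Note 6 can be read as relative differences of the form below, taking the own-product (targeted-application) elasticity as the base; the exact base is not spelled out in the note, so this is an illustrative reading rather than a definitive formula:

\[
\Delta_{\%} \;=\; \frac{\lvert \hat{\beta}_{\text{cross}} \rvert - \lvert \hat{\beta}_{\text{own}} \rvert}{\lvert \hat{\beta}_{\text{own}} \rvert} \times 100\%.
\]

For example, with hypothetical values, a cross-product elasticity of 0.017 against an own-product elasticity of 0.050 gives \((0.017 - 0.050)/0.050 = -66\%\).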

References

  • Ainslie, A., & Rossi, P. E. (1998). Similarities in choice behavior across product categories. Marketing Science.

  • Arora, A., Forman, C., & Yoon, J. W. (2010). Complementarity and information technology adoption: Local area networks and the Internet. Information Economics and Policy, 22, 228–242.

  • Ascarza, E., & Hardie, B. G. (2013). A joint model of usage and churn in contractual settings. Marketing Science.

  • Becker, G. S. (1965). A theory of the allocation of time. The Economic Journal.

  • Berry, S., Khwaja, A., Kumar, V., Musalem, A., Wilbur, K. C., Allenby, G., Anand, B., Chintagunta, P., Hanemann, W. M., Jeziorski, P., & Mele, A. (2014). Structural models of complementary choices. Marketing Letters, 25, 245–256.

  • Bhat, C. R., Castro, M., & Pinjari, A. R. (2015). Allowing for complementarity and rich substitution patterns in multiple discrete-continuous models. Transportation Research Part B: Methodological.

  • Crawford, G. S., & Yurukoglu, A. (2012). The welfare effects of bundling in multichannel television markets. American Economic Review, 102, 643–685.

  • Deaton, A., & Muellbauer, J. (1980). Economics and consumer behavior.

  • Dube, J.-P. H. (2018). Microeconometric models of consumer demand. SSRN Electronic Journal.

  • Erdem, T. (1998). An empirical analysis of umbrella branding. Journal of Marketing Research, 35, 339–351.

  • Fox, J. T., & Lazzati, N. (2017). A note on identification of discrete choice models for bundles and binary games. Quantitative Economics.

  • Gentzkow, M. (2007). Valuing new goods in a model with complementarity: Online newspapers. American Economic Review, 97, 713–744.

  • Hanemann, W. M. (1984). Discrete/Continuous models of consumer demand. Econometrica.

  • Hong, S., Misra, K., & Vilcassim, N. J. (2016). The perils of category management: The effect of product assortment on multicategory purchase incidence. Journal of Marketing, 80, 34–52.

  • Kosyakova, T., Otter, T., Misra, S., & Neuerburg, C. (2020). Exact MCMC for choices from menus—measuring substitution and complementarity among menu items. Marketing Science.

  • Kumar, V., & Chou, C. (2020). Can willingness to pay be identified without price variation? What big data on usage tracking can (and cannot) tell us.

  • Lee, S., & Allenby, G. M. (2011). A direct utility model for market basket data. SSRN Electronic Journal.

  • Lee, S., Kim, J., & Allenby, G. M. (2013). A direct utility model for asymmetric complements. Marketing Science, 32, 454–470.

  • Lewbel, A. (1985). Bundling of substitutes or complements. International Journal of Industrial Organization, 3, 101–107.

  • Lewis, R. A., & Reiley, D. H. (2014). Online ads and offline sales: Measuring the effect of retail advertising via a controlled experiment on Yahoo!. Quantitative Marketing and Economics.

  • Liu, H., Chintagunta, P. K., & Zhu, T. (2010). Complementarities and the demand for home broadband internet services. Marketing Science, 29, 701–720.

  • Liu, X., Derdenger, T., & Sun, B. (2018). An empirical analysis of consumer purchase behavior of base products and add-ons given compatibility constraints. Marketing Science, 37, 569–591.

  • Manchanda, P., Ansari, A., & Gupta, S. (1999). A model for multicategory purchase incidence decisions. Marketing Science, 18, 95–114.

  • Mehta, N., & Ma, Y. (2012). A multicategory model of consumers’ purchase incidence, quantity, and brand choice decisions: Methodological issues and implications on promotional decisions. Journal of Marketing Research, 49, 435–451.

  • Milgrom, P., & Roberts, J. (1990). The economics of modern manufacturing: Technology, strategy, and organization. American Economic Review.

  • Milgrom, P., & Shannon, C. (1994). Monotone comparative statics. Econometrica.

  • Nair, H. S., Manchanda, P., & Bhatia, T. (2010). Asymmetric social interactions in physician prescription behavior: The role of opinion leaders. Journal of Marketing Research, 47, 883–895.

  • Nedungadi, P. (1990). Recall and consumer consideration sets: Influencing choice without altering brand evaluations. Journal of Consumer Research.

  • Nevskaya, Y., & Albuquerque, P. (2019). How should firms manage excessive product use? A continuous-time demand model to test reward schedules, notifications, and time limits. Journal of Marketing Research.

  • Ruiz, F. J., Athey, S., & Blei, D. M. (2020). Shopper: A probabilistic model of consumer choice with substitutes and complements. Annals of Applied Statistics.

  • Runge, J., Nair, H., & Levav, J. (2021). Price promotions for “Freemium” app monetization.

  • Sahni, N. S. (2016). Advertising spillovers: Evidence from online field experiments and implications for returns on advertising. Journal of Marketing Research.

  • Sahni, N. S., Zou, D., & Chintagunta, P. K. (2017). Do targeted discount offers serve as advertising? Evidence from 70 field experiments. Management Science, 63, 2688–2705.

  • Samuelson, P. A. (1974). Complementarity: An essay on the 40th anniversary of the Hicks-Allen revolution in demand theory. Journal of Economic Literature, 12, 1255–1289.

  • Song, I., & Chintagunta, P. K. (2007). A discrete-continuous model for multicategory purchase behavior of households. Journal of Marketing Research, 44, 595–612.

  • Sridhar, S., & Sriram, S. (2015). Is online newspaper advertising cannibalizing print advertising? Quantitative Marketing and Economics.

  • Sriram, S., Chintagunta, P. K., & Agarwal, M. K. (2010). Investigating consumer purchase behavior in related technology product categories. Marketing Science.

  • Stourm, L., Iyengar, R., & Bradlow, E. T. (2020). A flexible demand model for complements using household production theory. Marketing Science.

  • Topkis, D. M. (1978). Minimizing a submodular function on a lattice. Operations Research.

  • Topkis, D. M. (1998). Supermodularity and complementarity. Frontiers of Economic Research, Princeton University Press.

  • Train, K. E., McFadden, D. L., & Ben-Akiva, M. (1987). The demand for local telephone service: A fully discrete model of residential calling patterns and service choices. The RAND Journal of Economics, 18, 109.

  • Venkatesh, R., & Kamakura, W. (2003). Optimal bundling and pricing under a monopoly: Contrasting complements and substitutes from independently valued products. The Journal of Business, 76, 211–231.


Author information

Corresponding author

Correspondence to Jon Zeller.

Ethics declarations

Competing Interests

Author A received a paid internship at the company that supplied the data for this project. This manuscript was subject to legal review by the company and any potentially identifying material was removed. The company reserves the right to review and remove any material which would reveal its identity. Author B has no funding or association with the company to report.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (tex 53 KB)

A. Appendix

The appendix contains the following sets of additional analyses:

  1. Full Campaign Details & Descriptions, Tables 4 and 5: statistics and descriptions of the full set of campaigns for both app A and app B, taken from the firm’s campaign management platform.

  2. Randomization Checks, Table 6: randomization checks on a subset of demographic variables for the users in each campaign. The variables are user account tenure, number of previously-canceled subscriptions, and the percentage of users on promotions. We only include the p-value for a difference in means/proportions test to protect confidentiality; other demographic variables were deemed too firm-specific to be made public. 4 of 144 tests (2.8%) are significant at the 1% level, and 13 of 144 (9%) at the 5% level, indicating slight randomization failures, but we do not view this as a major problem. A minimal sketch of such a check appears after this list.

  3. App Usage Across Campaigns, Figs. 8 and 9.

  4. Individual App A Campaigns, Binary Specification, campaigns with non-significant first-stage results, Table 7.

  5. Robustness Checks: Individual App B Campaigns, Binary Specification, Tables 8 and 9, Figs. 10 and 11.

  6. Robustness Checks: Cross-product Ad Elasticity Measures, Table 10.

  7. Robustness Checks: Individual App A Campaigns, Session Count Specification, Tables 11 and 12, Figs. 12 and 13.

  8. Robustness Checks: Volume Specification Active User Subpopulations, App A, Tables 13, 14, and 15.
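
The following is a minimal sketch of the per-campaign balance check described in item 2, assuming a data frame with a treatment indicator and the three variables named above; the column names are placeholders rather than the firm's actual field names.

```python
import pandas as pd
from scipy.stats import ttest_ind
from statsmodels.stats.proportion import proportions_ztest

def randomization_checks(df: pd.DataFrame) -> dict:
    """Return p-values for treatment/control balance within one campaign."""
    treated = df[df["email_treated"] == 1]
    control = df[df["email_treated"] == 0]
    pvals = {}
    # Difference-in-means (Welch) tests for the continuous variables.
    for col in ["account_tenure", "n_prior_cancellations"]:
        pvals[col] = ttest_ind(treated[col], control[col], equal_var=False).pvalue
    # Difference-in-proportions test for the share of users on a promotion.
    counts = [treated["on_promotion"].sum(), control["on_promotion"].sum()]
    nobs = [len(treated), len(control)]
    _, pvals["on_promotion"] = proportions_ztest(counts, nobs)
    return pvals
```

Repeating this for every campaign produces the grid of p-values summarized in Table 6.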

Table 4 Campaign Statistics: All Campaigns

The supplementary analysis, available separately from this appendix, contains additional analyses omitted here for brevity; the authors intend to host it on their own websites. It includes the following:

  1. Full Pooled Regression Results with Campaign Dummies, Tables 1 and 2.

  2. App B Session Count Regressions, Tables 3 and 4.

  3. App B Volume Specification Active User Subpopulations Session Count Regressions, Tables 5 and 6.

  4. Heterogeneity with Product Experience Regressions, Tables 7–18.

Table 5 Campaign Descriptions: All App A Campaigns
Table 6 Randomization checks: demographic variables
Fig. 8 This plot and Fig. 9 show the average daily usage likelihood across apps for users in the app A and B campaigns. Usage patterns are heterogeneous, indicating that a variety of user groups are contained in these populations.

Fig. 9 See Fig. 8.

Table 7 Campaign Regression Comparison: Individual App A Campaigns, 7-day Window (Part 2/2)
Table 8 Campaign Regression Comparison: Individual App B Campaigns, 7-day Window (Part 1/2)
Table 9 Campaign Regression Comparison: Individual App B Campaigns, 7-day Window (Part 2/2)
Fig. 10 App B first-stage results, with coefficients on the x-axis and log-transformed F-statistics on the y-axis. Points above the blue line at \(\log(10)\) have a strong first stage. Over 40% of campaigns have a strong first stage, and all but one have positive effects on application usage.
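
The following is a minimal sketch of the per-campaign first-stage strength check plotted here, assuming a binary used/did-not-use outcome for the advertised app and a single email-treatment instrument; variable names are placeholders, and the paper's own first stage includes additional terms (such as the fixed effects discussed in the notes) not shown here.

```python
import statsmodels.api as sm

def first_stage_strength(df):
    """Regress own-app usage on the email-treatment indicator for one campaign."""
    X = sm.add_constant(df[["email_treated"]].astype(float))
    fit = sm.OLS(df["used_app"].astype(float), X).fit(cov_type="HC1")
    coef = fit.params["email_treated"]
    # With a single excluded instrument, the first-stage F equals the squared t-statistic.
    fstat = float(fit.tvalues["email_treated"] ** 2)
    return coef, fstat, fstat > 10  # "strong" by the usual F > 10 rule of thumb
```

Plotting the coefficient against the log F-statistic for each campaign, with a reference line at \(\log(10)\), mirrors the layout described in the caption.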

Fig. 11 Second-stage app B results: each row is one campaign, and the colored bands represent 95% confidence intervals (Bonferroni adjusted) for each app. Campaigns are ordered by first-stage F-statistic, and campaigns above the dashed line have an F-statistic exceeding 10. App B usage rarely leads to increased usage of other applications.
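
The following is a minimal sketch of how a Bonferroni-adjusted 95% interval of the kind shown in these bands can be formed from a coefficient and its standard error, assuming normal critical values; the number of simultaneous tests is a placeholder.

```python
from scipy.stats import norm

def bonferroni_ci(estimate: float, std_err: float, n_tests: int, alpha: float = 0.05):
    """Adjusted interval: split alpha evenly across the n_tests simultaneous comparisons."""
    z = norm.ppf(1 - alpha / (2 * n_tests))
    return estimate - z * std_err, estimate + z * std_err

# Hypothetical numbers: a second-stage estimate of 0.10 with standard error 0.04, adjusted for 5 apps.
print(bonferroni_ci(0.10, 0.04, n_tests=5))
```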

Table 10 Cross-product Ad elasticity results: App A campaigns
Table 11 Campaign Regression Comparison: Individual App A Campaigns, 7-day Window, Session Count (Part 1/2)
Table 12 Campaign Regression Comparison: Individual App A Campaigns, 7-day Window, Session Count (Part 2/2)
Fig. 12 App A volume specification first-stage results: slightly fewer campaigns are significant in the first stage, which is expected because advertising is more likely to remind inactive users than to increase the intensity of usage.

Fig. 13 App A volume specification second-stage results, Bonferroni adjusted. Campaigns are ordered by first-stage F-statistic, and campaigns above the dashed line have an F-statistic exceeding 10. We see significant results in the second stage roughly as frequently, indicating the presence of complementarity net of reminder-type effects.

Table 13 Campaign Regression Comparison: Volume Specification Analysis - App A Campaigns (Part 1/3)
Table 14 Campaign Regression Comparison: Volume Specification Analysis - App A Campaigns (Part 2/3)
Table 15 Campaign Regression Comparison: Volume Specification Analysis - App A Campaigns (Part 3/3)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zeller, J., Narayanan, S. Investigating complementarities in subscription software usage using advertising experiments. Quant Mark Econ (2024). https://doi.org/10.1007/s11129-024-09282-3
