Partial-monotone adaptive submodular maximization

Published in: Journal of Combinatorial Optimization

Abstract

Many AI and machine learning problems require adaptively selecting a sequence of items; each selected item may provide feedback that is valuable for making better selections in the future, with the goal of maximizing an adaptive submodular function. Most existing studies in this field focus on either the monotone case or the non-monotone case. Specifically, if the utility function is monotone and adaptive submodular, Golovin and Krause (J Artif Intell Res 42:427–486, 2011) developed a \((1-1/e)\)-approximation solution subject to a cardinality constraint. For the cardinality-constrained non-monotone case, Tang (Theor Comput Sci 850:249–261, 2021) showed that a random greedy policy attains an approximation ratio of \(1/e\). In this work, we generalize the above-mentioned results by studying the partial-monotone adaptive submodular maximization problem. To this end, we introduce the notion of the adaptive monotonicity ratio \(m\in [0,1]\), which measures the degree of monotonicity of a function. Our main result shows that, under a cardinality constraint, if the utility function is adaptive submodular with adaptive monotonicity ratio m, then a random greedy policy attains an approximation ratio of \(m(1-1/e)+(1-m)(1/e)\). Notably, this result recovers the aforementioned \((1-1/e)\) and \(1/e\) approximation ratios when \(m = 1\) and \(m = 0\), respectively. We further extend our results to a knapsack constraint and develop a \((m+1)/10\)-approximation solution for this general case. One important implication of our results is that even for a non-monotone utility function, we can still attain an approximation ratio close to \((1-1/e)\) if the function is “close” to a monotone function. This leads to improved performance bounds for many machine learning applications whose utility functions are nearly adaptive monotone.
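The guarantee above can be sketched in code. The following is a minimal illustration, not the paper's algorithm verbatim: `approx_ratio` evaluates the stated bound \(m(1-1/e)+(1-m)(1/e)\), and `random_greedy` is a hedged sketch of a random greedy policy for a cardinality constraint, in which each round picks uniformly at random among the top-\(k\) items by expected marginal gain (dummy items pad the pool so the policy can skip a round, which is how such policies handle non-monotone objectives). The callbacks `marginal_gain` and `observe` are hypothetical interfaces standing in for the conditional expected marginal benefit and the item's observed state.

```python
import math
import random

def approx_ratio(m):
    """Bound from the paper for the cardinality-constrained case:
    m*(1-1/e) + (1-m)*(1/e), for adaptive monotonicity ratio m in [0, 1]."""
    return m * (1 - 1 / math.e) + (1 - m) * (1 / math.e)

def random_greedy(items, marginal_gain, observe, k):
    """Sketch of a random greedy policy (illustrative, not the paper's
    exact pseudocode). In each of k rounds, rank the remaining items by
    expected marginal gain given the partial realization seen so far,
    pad the top-k pool with dummies, and pick one uniformly at random."""
    picked, partial = [], {}
    for _ in range(k):
        remaining = [x for x in items if x not in picked]
        remaining.sort(key=lambda x: marginal_gain(x, partial), reverse=True)
        # Pad with None (a zero-gain dummy) so the pool always has k entries.
        pool = remaining[:k] + [None] * max(0, k - len(remaining))
        choice = random.choice(pool)
        if choice is None or marginal_gain(choice, partial) <= 0:
            continue  # a dummy or non-positive gain: skip this round
        picked.append(choice)
        partial[choice] = observe(choice)  # feedback guides later rounds
    return picked
```

At the extremes, `approx_ratio(1.0)` equals \(1-1/e\) and `approx_ratio(0.0)` equals \(1/e\), matching the monotone and non-monotone special cases recovered in the abstract.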


References

  • Amanatidis G, Fusco F, Lazos P, Leonardi S, Reiffenhäuser R (2020) Fast adaptive non-monotone submodular maximization subject to a knapsack constraint. In: Advances in neural information processing systems

  • Chen Y, Krause A (2013) Near-optimal batch mode active learning and adaptive submodular optimization. In: International conference on machine learning, pp 160–168

  • Fujii K, Sakaue S (2019) Beyond adaptive submodularity: approximation guarantees of greedy policy with adaptive submodularity ratio. In: International conference on machine learning, pp 2042–2051

  • Golovin D, Krause A (2011) Adaptive submodularity: theory and applications in active learning and stochastic optimization. J Artif Intell Res 42:427–486

  • Gotovos A, Karbasi A, Krause A (2015) Non-monotone adaptive submodular maximization. In: Twenty-fourth international joint conference on artificial intelligence

  • Iyer RK (2015) Submodular optimization and machine learning: Theoretical results, unifying and scalable algorithms, and applications. Ph.D. thesis

  • Mualem L, Feldman M (2022) Using partial monotonicity in submodular maximization. Adv Neural Inform Process Syst

  • Tang S (2020) Price of dependence: stochastic submodular maximization with dependent items. J Comb Optim 39(2):305–314

  • Tang S (2021) Beyond pointwise submodularity: non-monotone adaptive submodular maximization in linear time. Theor Comput Sci 850:249–261

  • Tang S (2021) Beyond pointwise submodularity: Non-monotone adaptive submodular maximization subject to knapsack and k-system constraints. In: International conference on modelling, computation and optimization in information systems and management sciences. Springer, pp 16–27

  • Tang S (2022) Robust adaptive submodular maximization. INFORMS J Comput

  • Tang S, Yuan J (2020) Influence maximization with partial feedback. Oper Res Lett 48(1):24–28

  • Tang S, Yuan J (2021) Adaptive regularized submodular maximization. In: 32nd international symposium on algorithms and computation (ISAAC 2021). Schloss Dagstuhl-Leibniz-Zentrum für Informatik

  • Tang S, Yuan J (2021) Non-monotone adaptive submodular meta-learning. In: SIAM conference on applied and computational discrete algorithms (ACDA21). SIAM, pp 57–65

  • Tang S, Yuan J (2021) Partial-adaptive submodular maximization. arXiv:2111.00986

  • Tang S, Yuan J (2022) Group equality in adaptive submodular maximization. arXiv:2207.03364

  • Tang S, Yuan J (2022) Optimal sampling gaps for adaptive submodular maximization. In: AAAI

  • Yuan J, Tang SJ (2017) Adaptive discount allocation in social networks. In: Proceedings of the 18th ACM international symposium on mobile ad hoc networking and computing, pp 1–10

Author information

Corresponding author: Shaojie Tang.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Tang, S., Yuan, J. Partial-monotone adaptive submodular maximization. J Comb Optim 45, 35 (2023). https://doi.org/10.1007/s10878-022-00965-9
