A game theoretic framework for distributed computing with dynamic set of agents

Abstract

We consider a distributed computing setting wherein a central entity seeks power from computational providers by offering a certain reward in return. The computational providers are classified into long-term stakeholders, which invest a constant amount of power over time, and players, which can strategize on their computational investment. In this paper, we model and analyze a stochastic game in such a distributed computing setting, wherein players arrive and depart over time. While our model is formulated with a focus on volunteer computing, it applies equally to certain other distributed computing applications such as mining in blockchain. We prove that, in Markov perfect equilibrium, the only players who invest in a given state are those whose cost parameters lie in a relatively low range and collectively satisfy a certain constraint. We infer that players need not have knowledge of the system state and the other players’ parameters if the total power received by the central entity is communicated to the players as part of the system’s protocol. If players are homogeneous and the system consists of a reasonably large number of players, we observe that the total power received by the central entity is proportional to the offered reward and does not vary significantly despite the players’ arrivals and departures, resulting in a robust and reliable system. We then study, by way of simulations and a mean field approximation, how the players’ utilities are influenced by their arrival and departure rates as well as by system parameters such as the reward’s amount and dispensing rate. We observe that the players’ expected utilities are maximized when their arrival and departure rates are such that the average number of players present in the system is typically between 1 and 2, since this leads to the system being in the least competitive condition with high probability. Further, their expected utilities increase almost linearly with the offered reward and converge to a constant value with respect to its dispensing rate. We conclude by studying a Stackelberg game, where the central entity decides the amount of reward to offer, and the computational providers decide how much power to invest based on the offered reward.

References

  • Abraham, I., Dolev, D., Gonen, R., & Halpern, J. (2006). Distributed computing meets game theory: Robust mechanisms for rational secret sharing and multiparty computation. In ACM symposium on principles of distributed computing (pp. 53–62). ACM.

  • Al Ridhawi, I., Aloqaily, M., & Jararweh, Y. (2021). An incentive-based mechanism for volunteer computing using blockchain. ACM Transactions on Internet Technology (TOIT), 21(4), 1–22.

  • Altman, E. (1996). Non zero-sum stochastic games in admission, service and routing control in queueing systems. Queueing Systems, 23(1–4), 259–279.

  • Altman, E., & Shimkin, N. (1998). Individual equilibrium and learning in processor sharing systems. Operations Research, 46(6), 776–784.

  • Altman, E., Reiffers, A., Menasché, D. S., Touati, C., & El-Azouzi, R. (2020). Blockchain competition between miners: A game theoretic perspective. Frontiers in Blockchain, 2, 26.

  • Alves, B. (2022). Average retail price of electricity to the residential sector in the United States in selected years from 1975 to 2021. Statista. https://www.statista.com/statistics/200199/residential-sector-electricity-prices-in-the-us-since-1975/.

  • Anderson, D. P., & Fedak, G. (2006). The computational and storage potential of volunteer computing. In Sixth IEEE international symposium on cluster computing and the grid (CCGRID’06) (pp. 73–80). IEEE.

  • Bellomo, N. (2008). Modeling complex living systems: A kinetic theory and stochastic game approach. Berlin: Springer.

  • Biais, B., Bisiere, C., Bouvard, M., & Casamatta, C. (2019). The blockchain folk theorem. The Review of Financial Studies, 32(5), 1662–1715.

  • Bitpanda. (2021). What is the purpose of mining pools and how do they work? Bitpanda. https://www.bitpanda.com/academy/en/lessons/what-is-the-purpose-of-mining-pools-and-how-do-they-work/.

  • Bowling, M., & Veloso, M. (2000). An analysis of stochastic game theory for multiagent reinforcement learning. Technical Report CMU-CS-00-165, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

  • Cohen, J. (1957). The generalized Engset formulae. Philips Telecommunication Review, 18(4), 158–170.

  • Conway, L. (2022). What is Bitcoin halving? Definition, how it works, why it matters. Investopedia. https://www.investopedia.com/bitcoin-halving-4843769.

  • Dimitri, N. (2017). Bitcoin mining as a contest. Ledger, 2, 31–37.

  • Eyal, I. (2015). The miner’s dilemma. In 2015 IEEE symposium on security and privacy (pp. 89–103). IEEE.

  • Eyal, I., & Sirer, E. G. (2014). Majority is not enough: Bitcoin mining is vulnerable. In International conference on financial cryptography and data security (pp. 436–454). Springer.

  • Fu, F., & Kozat, U. C. (2013). Stochastic game for wireless network virtualization. IEEE/ACM Transactions on Networking, 21(1), 84–97.

  • Goeree, J. K., & Holt, C. A. (1999). Stochastic game theory: For playing games, not just for doing theory. Proceedings of the National Academy of Sciences, 96(19), 10564–10567.

  • Grunspan, C., & Pérez-Marco, R. (2020). The mathematics of Bitcoin. EMS Newsletter, 115, 31–37.

  • Hassin, R., & Haviv, M. (2002). Nash equilibrium and subgame perfection in observable queues. Annals of Operations Research, 113(1–4), 15–26.

  • Hu, J., & Wellman, M. (2003). Nash Q-learning for general-sum stochastic games. Journal of Machine Learning Research, 4(Nov), 1039–1069.

  • Hubbard, J. H., & Hubbard, B. B. (2015). Vector calculus, linear algebra, and differential forms: A unified approach. Matrix Editions.

  • Jiang, C., Chen, Y., Yang, Y. H., Wang, C. Y., & Liu, K. R. (2014). Dynamic Chinese restaurant game: Theory and application to cognitive radio networks. IEEE Transactions on Wireless Communications, 13(4), 1960–1973.

  • Kwok, Y. K., Song, S., & Hwang, K. (2005). Selfish grid computing: Game-theoretic modeling and NAS performance results. In IEEE international symposium on cluster computing and the grid (pp 1143–1150). IEEE.

  • Kwon, Y., Kim, D., Son, Y., Vasserman, E., & Kim, Y. (2017). Be selfish and avoid dilemmas: Fork after withholding (FAW) attacks on Bitcoin. In Proceedings of the 2017 ACM SIGSAC conference on computer and communications security (pp. 195–209).

  • Lewenberg, Y., Bachrach, Y., Sompolinsky, Y., Zohar, A., & Rosenschein, J. S. (2015). Bitcoin mining pools: A cooperative game theoretic analysis. In International conference on autonomous agents and multiagent systems (AAMAS) (pp. 919–927). IFAAMAS.

  • Liu, Z., Luong, N. C., Wang, W., Niyato, D., Wang, P., Liang, Y. C., & Kim, D. I. (2019). A survey on blockchain: A game theoretical perspective. IEEE Access, 7, 47615–47643.

  • Maskin, E., & Tirole, J. (2001). Markov perfect equilibrium: I. observable actions. Journal of Economic Theory, 100(2), 191–219.

  • Mengistu, T. M., & Che, D. (2019). Survey and taxonomy of volunteer computing. ACM Computing Surveys (CSUR), 52(3), 1–35.

  • Murata, Y., Inaba, T., Takizawa, H., & Kobayashi, H. (2008). Implementation and evaluation of a distributed and cooperative load-balancing mechanism for dependable volunteer computing. In 2008 IEEE international conference on dependable systems and networks with FTCS and DCC (DSN) (pp. 316–325). IEEE.

  • Nahir, A., Orda, A., & Raz, D. (2012). Workload factoring with the cloud: A game-theoretic perspective. In IEEE international conference on computer communications (INFOCOM) (pp. 2566–2570). IEEE.

  • Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system. Bitcoin.org. https://bitcoin.org/bitcoin.pdf.

  • Sapirshtein, A., Sompolinsky, Y., & Zohar, A. (2016). Optimal selfish mining strategies in Bitcoin. In International conference on financial cryptography and data security (pp. 515–532). Springer.

  • Sarmenta, L. F. (2001). Sabotage-tolerance mechanisms for volunteer computing systems. In Proceedings first IEEE/ACM international symposium on cluster computing and the grid (pp. 337–346). IEEE.

  • Sarmenta, L. F. G. (2001). Volunteer computing. Ph.D. thesis, Massachusetts Institute of Technology.

  • Shapley, L. S. (1953). Stochastic games. Proceedings of the National Academy of Sciences, 39(10), 1095–1100.

  • Wang, C. Y., Chen, Y., & Liu, K. R. (2018). Game-theoretic cross social media analytic: How Yelp ratings affect deal selection on Groupon? IEEE Transactions on Knowledge and Data Engineering, 30(5), 908–921.

  • Wang, J., & Zhang, F. (2013). Strategic joining in M/M/1 retrial queues. European Journal of Operational Research, 230(1), 76–87.

  • Watanabe, K., Fukushi, M., & Horiguchi, S. (2009). Collusion-resistant sabotage-tolerance mechanisms for volunteer computing systems. In 2009 IEEE international conference on e-business engineering (pp. 213–218). IEEE.

  • Wong, P., & McArdle, P. (2020). Average U.S. residential monthly electricity price, consumption, and bill (2009–2019). U.S. Energy Information Administration. https://www.eia.gov/todayinenergy/detail.php?id=46276.

  • Zeng, Y., & Zuo, S. (2019). The Matthew effect in computation contests: High difficulty may lead to 51% dominance? In The world wide web conference (pp. 2281–2289). ACM.

  • Zheng, Z., & Xie, S. (2018). Blockchain challenges and opportunities: A survey. International Journal of Web and Grid Services, 14(4), 352–375.

Acknowledgements

This work was partly supported by CEFIPRA grant No. IFC/DST-Inria-2016-01/448 “Machine Learning for Network Analytics”. The contribution of Eitan Altman was supported by LION “Learning In Operations and Networks”, a joint Indo-French research team between INRIA and IIT Bombay. We are thankful to the anonymous reviewers for their comments which led to improvements across all sections of the paper.

Author information

Corresponding author

Correspondence to Swapnil Dhamal.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Other applications of the proposed model

We now briefly discuss the applicability of our proposed model and the utility function given by Eq. (1) to distributed computing applications apart from volunteer computing, such as mining in blockchain. Mining relies on a proof-of-work procedure (Nakamoto, 2008) wherein players (termed miners) collect data that is to be encapsulated in a block and repeatedly compute hashes on potential solutions from a very large search space. A player is said to have mined a block, and is rewarded a monetary amount, say r, if it is the first to find one of the solutions that generates a hash value satisfying certain constraints. The algorithms employed for finding such a solution are typically based on randomized search over an exponentially large search space. The time required to find a solution is hence independent of the portion of the search space explored thus far, making the search practically memoryless. Now, if a continuous random variable has the memoryless property over the set of non-negative reals, it is necessarily exponentially distributed. Hence, the time required to find a solution, and thereby mine a given block, is exponentially distributed; its rate parameter can be taken to be \(\beta \) (i.e., the expected time is \(\frac{1}{\beta }\)). In Bitcoin mining, the difficulty is adjusted so that the expected time to mine a block is 10 minutes.
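As a quick illustration of the memoryless block time, the following minimal sketch (added here for illustration; the rate \(\beta \) and the sample size are arbitrary assumptions, not values from the paper) simulates exponentially distributed solution times and checks the memoryless property empirically:

```python
import numpy as np

# Minimal sketch (illustrative assumptions): a memoryless search time is
# Exponential(beta), so simulated block times have mean ~ 1/beta and satisfy
# P(T > s + t | T > s) = P(T > t).
rng = np.random.default_rng(0)

beta = 1.0 / 10.0                           # e.g., an expected block time of 10 minutes
times = rng.exponential(scale=1.0 / beta, size=200_000)

print(times.mean())                         # close to 10
s, t = 5.0, 7.0
lhs = (times > s + t).mean() / (times > s).mean()   # empirical P(T > s+t | T > s)
rhs = (times > t).mean()                             # empirical P(T > t)
print(lhs, rhs)                             # the two should be approximately equal
```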

Now, consider that a player i invests computational power \(x_i^{(S)}\) to mine when the system is in state S, and let \(\ell \) be the amount of power invested by large mining firms over a long period of time (i.e., irrespective of the system state). It is known that if the number of solutions is \(\xi \), then the probability of a player finding a solution before the others deviates from being proportional to the player’s invested power by at most \(\tilde{O}(1/\xi )\) (Zeng and Zuo, 2019). As \(\xi \) is typically large in mining, the probability of a player being the first to mine a block at any given time is effectively proportional to its invested power at that time. Hence, the probability of player i being the first to mine the block in state S, and thus winning the reward r, is \(\frac{x_i^{(S)}}{\sum _{j\in S} x_j^{(S)} +\ell }\).

The possible events that can occur in state S are similar to those discussed for volunteer computing, namely, the current block getting mined (in place of the current segment ending) with rate \(\beta \), a player \(j \notin S\) arriving with rate \(\lambda _j\), and a player \(j \in S\) departing with rate \(\mu _j\). In the event of the block getting mined, player i receives a reward of \(\frac{x_i^{(S)}}{\sum _{j\in S} x_j^{(S)} +\ell } r\) in expectation, and the system stays in the same state S for the next block, for which i’s expected utility is perceived as \(\delta R_i^{(S,\textbf{x})}\). Since the expected sojourn time in state S for the current block is \(\frac{1}{B^{(S)}} = (\beta + \sum _{j \notin S} \lambda _j + \sum _{j \in S} \mu _j)^{-1}\), the corresponding expected cost incurred is \(\frac{c_i x_i^{(S)}}{B^{(S)} }\). Hence, player i’s expected utility as computed in state S is:

$$\begin{aligned} R_i^{(S,\textbf{x})} :=&\frac{\beta }{B^{(S)}} \!\cdot \! \left( \frac{x_i^{(S)}}{\sum _{j\in S} x_j^{(S)} +\ell } r + \delta R_i^{(S,\textbf{x})} \right) - \frac{c_i x_i^{(S)}}{B^{(S)}} \\&+\! \sum _{j \notin S} \frac{\lambda _j}{B^{(S)}} \!\cdot \! R_i^{(S \cup \{j\},\textbf{x})} \!+\! \sum _{j \in S } \frac{\mu _j}{B^{(S)}} \!\cdot \! R_i^{(S \setminus \{j\},\textbf{x})} \end{aligned}$$

It is worth highlighting that the mathematical form of \(R_i^{(S,\textbf{x})}\) is the same as that of Eq. (1), and so our analysis and results also hold for such applications.
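To make the recursion concrete, the following minimal sketch (added for illustration; the two-player universe, rates, costs, and the fixed policy \(x_i^{(S)}\) are all hypothetical assumptions, not values or code from the paper) evaluates \(R_i^{(S,\textbf{x})}\) in every state by fixed-point iteration:

```python
import itertools

# Illustrative sketch of the recursion for R_i^{(S,x)} above. All parameter
# values and the fixed policy x(i, S) are hypothetical, chosen only to make
# the example runnable; they are not taken from the paper.
U = [0, 1]                        # universe of strategic players
beta, delta, r, ell = 6.0, 0.9, 100.0, 10.0
lam = {0: 1.0, 1: 0.5}            # arrival rates lambda_j
mu  = {0: 0.8, 1: 0.4}            # departure rates mu_j
c   = {0: 0.40, 1: 0.50}          # cost parameters c_i

def x(i, S):                      # hypothetical stationary policy: invest 2 units when present
    return 2.0 if i in S else 0.0

states = [frozenset(s) for k in range(len(U) + 1)
          for s in itertools.combinations(U, k)]

def B(S):                         # total rate out of state S (block mined, arrivals, departures)
    return beta + sum(lam[j] for j in U if j not in S) + sum(mu[j] for j in S)

def expected_utilities(i, iters=5000):
    R = {S: 0.0 for S in states}
    for _ in range(iters):        # fixed-point iteration; convergence is shown in Appendix B
        R_new = {}
        for S in states:
            win = x(i, S) / (sum(x(j, S) for j in S) + ell)   # P(i mines the block first)
            val = (beta / B(S)) * (win * r + delta * R[S]) - c[i] * x(i, S) / B(S)
            val += sum(lam[j] / B(S) * R[S | {j}] for j in U if j not in S)
            val += sum(mu[j] / B(S) * R[S - {j}] for j in S)
            R_new[S] = val
        R = R_new
    return R

print(expected_utilities(0))      # expected utility of player 0 in each state
```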

Apart from the above application of block mining at the individual level, our model can be applied to mining pools as well, especially those which offer pay-per-share payouts. More generally, since most distributed computing applications share the same underlying concepts, our model is applicable to a wide variety of settings.

Appendix B: Convergence of expected utility

Let \(\textbf{M}\) be the transition matrix over the states, where each state corresponds to the set of strategic players present in the system. In what follows, instead of writing \(M(\mathcal {O}(S),\mathcal {O}(S'))\), we simply write \(M(S,S')\) since this introduces no ambiguity. The elements of \(\textbf{M}\) are:

$$\begin{aligned}&M(S,S) = \frac{\delta \beta }{B^{(S)}} \;, \\ \hbox { for}\ j \!\notin \! S:\,&M(S,S \cup \{j\}) = \frac{\lambda _j}{B^{(S)}} \;, \\ \hbox { for}\ j \!\in \! S:\,&M(S,S \setminus \{j\}) = \frac{\mu _j}{B^{(S)}} \;, \\ \hbox {and all }&\text {other elements of}\, \textbf{M}\, \hbox {are}\, 0. \end{aligned}$$

Here, \(B^{(S)} = \beta + \sum _{j \notin S} \lambda _j + \sum _{j \in S} \mu _j\). Since \(\delta < 1\) and \(\beta > 0\), we have \(\delta \beta < \beta \), so the sum of the elements in each row of \(\textbf{M}\), namely \(\frac{\delta \beta + \sum _{j \notin S} \lambda _j + \sum _{j \in S} \mu _j}{B^{(S)}}\), is strictly less than 1. Hence, \(\,\textbf{M}\,\) is strictly substochastic.

Let \(\textbf{F}_i^{(\textbf{x})}\) be the vector whose component \(\mathcal {O}(S)\) is \(F_i^{(S,\textbf{x})}\!\), where

$$\begin{aligned}&F_i^{(S,\textbf{x})} \!=\! \Bigg ( \frac{r \beta }{\sum _{j\in S} x_j^{(S)}+\ell } - c_i \Bigg )\frac{x_i^{(S)}}{B^{(S)}} \end{aligned}$$

We now provide a proof of Lemma 1, which states that the recursive equation for \(\textbf{R}_i^{(\textbf{x})}\), Eq. (1), converges for any policy profile \(\textbf{x}\).

Proof

Let \(\textbf{R}_{i\langle t \rangle }^{(\textbf{x})} = (R_{i \langle t \rangle }^{(1,\textbf{x})},\ldots ,R_{i \langle t \rangle }^{(2^{\vert \mathcal {U} \vert },\textbf{x})})^T\), where t is the iteration number and \((\cdot )^T\) stands for matrix transpose. The iteration for \(\textbf{R}_{i\langle t \rangle }^{(\textbf{x})}\) starts at \(t=0\); we examine whether it converges as \(t \rightarrow \infty \). Writing the expected utilities in all states in matrix form and unrolling the recursion gives

$$\begin{aligned} \textbf{R}_{i \langle t \rangle }^{(\textbf{x})}&= \textbf{M} \textbf{R}_{i \langle t-1 \rangle }^{(\textbf{x})} + \textbf{F}_i^{(\textbf{x})} = \left( \textbf{M}\right) ^t \textbf{R}_{i \langle 0 \rangle }^{(\textbf{x})} + \Bigg ( \sum _{\eta =0}^{t-1} \left( \textbf{M}\right) ^\eta \Bigg ) \textbf{F}_i^{(\textbf{x})} . \end{aligned}$$

Now, since \(\textbf{M}\) is strictly substochastic, its spectral radius is less than 1. Hence \(\lim _{t \rightarrow \infty } (\textbf{M})^t\! = \textbf{0}\), and since \(\textbf{R}_{i \langle 0 \rangle }^{(\textbf{x})}\) is a finite constant vector, \(\lim _{t \rightarrow \infty } (\textbf{M})^t \textbf{R}_{i \langle 0 \rangle }^{(\textbf{x})} = \textbf{0}\). Further, \(\lim _{t \rightarrow \infty } \sum _{\eta =0}^{t-1} (\textbf{M})^\eta = (\textbf{I}-\textbf{M})^{-1}\) (Hubbard and Hubbard, 2015); in particular, \((\textbf{I}-\textbf{M})\) is invertible. Hence,

$$\begin{aligned} \lim _{t \rightarrow \infty } \textbf{R}_{i \langle t \rangle }^{(\textbf{x})}&= \lim _{t \rightarrow \infty } \left( \textbf{M}\right) ^t \textbf{R}_{i \langle 0 \rangle }^{(\textbf{x})} + \Bigg ( \sum _{\eta =0}^{\infty } \left( \textbf{M}\right) ^\eta \Bigg ) \textbf{F}_i^{(\textbf{x})} \\&= \textbf{0} + (\textbf{I}-\textbf{M})^{-1} \textbf{F}_i^{(\textbf{x})} . \end{aligned}$$

\(\square \)

Note also that Proposition 1 can be proved alternatively along the same lines as the above proof of Lemma 1, by using \(\textbf{W}\) in place of \(\textbf{M}\) and \(\textbf{Z}_i^{(\textbf{x})}\) in place of \(\textbf{F}_i^{(\textbf{x})}\).
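The argument above can also be checked numerically on a toy instance. The sketch below (added for illustration; the universe, rates, costs, and fixed policy are arbitrary assumptions) builds \(\textbf{M}\) and \(\textbf{F}_i^{(\textbf{x})}\), verifies that every row of \(\textbf{M}\) sums to less than 1, and confirms that the iterated values converge to \((\textbf{I}-\textbf{M})^{-1} \textbf{F}_i^{(\textbf{x})}\):

```python
import itertools
import numpy as np

# Toy check of Lemma 1's proof: M is strictly substochastic, and iterating
# R <- M R + F_i converges to (I - M)^{-1} F_i. Parameters are hypothetical.
U = [0, 1]
beta, delta, r, ell = 6.0, 0.9, 100.0, 10.0
lam = {0: 1.0, 1: 0.5}
mu  = {0: 0.8, 1: 0.4}
c_i = 0.40                                   # cost parameter of the tagged player i = 0
x = lambda j, S: 2.0 if j in S else 0.0      # hypothetical fixed policy

states = [frozenset(s) for k in range(len(U) + 1)
          for s in itertools.combinations(U, k)]
idx = {S: n for n, S in enumerate(states)}
B = lambda S: beta + sum(lam[j] for j in U if j not in S) + sum(mu[j] for j in S)

M = np.zeros((len(states), len(states)))
F = np.zeros(len(states))
for S in states:
    n = idx[S]
    M[n, n] = delta * beta / B(S)            # block mined, system stays in state S
    for j in U:
        if j not in S:
            M[n, idx[S | {j}]] = lam[j] / B(S)   # arrival of player j
        else:
            M[n, idx[S - {j}]] = mu[j] / B(S)    # departure of player j
    F[n] = (r * beta / (sum(x(j, S) for j in S) + ell) - c_i) * x(0, S) / B(S)

assert M.sum(axis=1).max() < 1.0             # strictly substochastic rows
R_direct = np.linalg.solve(np.eye(len(states)) - M, F)

R_iter = np.zeros(len(states))
for _ in range(5000):
    R_iter = M @ R_iter + F
print(np.allclose(R_iter, R_direct))         # True: the recursion converges to (I - M)^{-1} F_i
```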

Appendix C: Effect of offered reward on total power received by the center in a state

Here, we discuss the general case where players could have different cost parameters. Note from Proposition 2 that since the set of investing players in any given state could change with r, it is not even clear whether the total power received by the center in a state would increase monotonically with r. In particular, we need to inspect whether the total power received by the center could decrease when the set of investing players expands owing to the increased reward. We show the following result.

Proposition 3

If players invest as per Proposition 2, the total power received by the center in any given state is a monotone increasing continuous function of the reward parameter.

Proof

Recall that in a state S, \({\psi }^{(S)} = r \beta \frac{ \vert \hat{S} \vert -1 + \sqrt{ ( \vert \hat{S} \vert -1)^2 + \frac{4\ell }{r \beta } \sum _{j \in \hat{S}} c_j }}{2 \sum _{j \in \hat{S}} c_j} \), where \(\hat{S} \subseteq S\) is the set of investing players. It is clear that for a given set of investors \(\hat{S}\), \(\psi ^{(S)}\) increases monotonically with r. As r varies, set \(\hat{S}\) may change, thus changing the values of \( \vert \hat{S} \vert \) as well as \(\sum _{j \in \hat{S}} c_j\). In order to show a monotonic increase of \(\psi ^{(S)}\) with r despite any changes in set \(\hat{S}\), we need to show that at any value of r where players get added to \(\hat{S}\), the value of \(\psi ^{(S)}\) does not decrease (i.e., either increases or stays the same). Without loss of generality, consider that only one player gets added at any such value of r. In what follows, we show continuity at values of r where the set of investing players changes.

Consider a value of r such that the set of investing players is \(\hat{S}\setminus \{i\}\) when the reward parameter is infinitesimally lower than r, while it is \(\hat{S}\) (i.e., player i gets added to the set of investing players) when the reward parameter is infinitesimally higher than r. At this value of r, let \(\underline{\psi }^{(S)}\) be the limit of \({\psi }^{(S)}\) from the left and \(\overline{\psi }^{(S)}\) be its limit from the right. We will now show that \(\underline{\psi }^{(S)} = \overline{\psi }^{(S)}\).

Since player i barely satisfies the cost constraint at this value of r, we have (the following equality holds in the limit): \(c_i = \frac{2 \sum _{j \in \hat{S}} c_j}{ \vert \hat{S} \vert -1 + \sqrt{ ( \vert \hat{S} \vert -1)^2 + \frac{4\ell }{r \beta } \sum _{j \in \hat{S}} c_j }}\). So, the limit of \({\psi }^{(S)}\) from the right is

$$\begin{aligned}&\overline{\psi }^{(S)} = r \beta \frac{ \vert \hat{S} \vert -1 + \sqrt{ ( \vert \hat{S} \vert -1)^2 + \frac{4\ell }{r \beta } \sum _{j \in \hat{S}} c_j }}{2 \sum _{j \in \hat{S}} c_j} = \frac{r \beta }{c_i} . \end{aligned}$$
(C1)

Now, \(c_i = \frac{2 \sum _{j \in \hat{S}} c_j}{ \vert \hat{S} \vert -1 + \sqrt{ ( \vert \hat{S} \vert -1)^2 + \frac{4\ell }{r \beta } \sum _{j \in \hat{S}} c_j }} \) is equivalent to

$$\begin{aligned} r = \frac{\ell }{\beta } \cdot \frac{c_i^2 }{\sum _{j \in \hat{S}\setminus \{i\}} c_j - c_i ( \vert \hat{S} \vert -2)} . \end{aligned}$$
(C2)

This gives us an expression for r at which the set of investing players expands from \(\hat{S}\setminus \{i\}\) to \(\hat{S}\).
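For completeness (this routine manipulation is not spelled out above), the equivalence follows by isolating the square root, squaring, and simplifying:

$$\begin{aligned} \sqrt{ ( \vert \hat{S} \vert -1)^2 + \frac{4\ell }{r \beta } \sum _{j \in \hat{S}} c_j } = \frac{2 \sum _{j \in \hat{S}} c_j}{c_i} - ( \vert \hat{S} \vert -1) \;\;&\Longleftrightarrow \;\; \frac{\ell }{r \beta } = \frac{\sum _{j \in \hat{S}} c_j}{c_i^2} - \frac{ \vert \hat{S} \vert -1}{c_i} \\&\Longleftrightarrow \;\; \frac{\ell }{r \beta } = \frac{\sum _{j \in \hat{S}\setminus \{i\}} c_j - c_i ( \vert \hat{S} \vert -2)}{c_i^2} , \end{aligned}$$

which rearranges to Eq. (C2).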

Now, the limit of \({\psi }^{(S)}\) from the left is \(\underline{\psi }^{(S)} = r \beta \frac{ \vert \hat{S} \vert -2 + \sqrt{ ( \vert \hat{S} \vert -2)^2 + \frac{4\ell }{r \beta } \sum _{j \in \hat{S}\setminus \{i\}} c_j }}{2 \sum _{j \in \hat{S}\setminus \{i\}} c_j} \).

Let \(\underline{\psi }^{(S)} = r \beta y\), where \(y = \frac{ \vert \hat{S} \vert -2 + \sqrt{ ( \vert \hat{S} \vert -2)^2 + \frac{4\ell }{r \beta } \sum _{j \in \hat{S}\setminus \{i\}} c_j }}{2 \sum _{j \in \hat{S}\setminus \{i\}} c_j}\). This, in conjunction with Eq. (C2), gives

$$\begin{aligned} y^2 \! \sum _{j \in \hat{S}\setminus \{i\}} c_j - y ( \vert \hat{S} \vert -2) = \left( \frac{1}{c_i}\right) ^2 \! \sum _{j \in \hat{S}\setminus \{i\}} c_j - \frac{1}{c_i} ( \vert \hat{S} \vert -2) . \end{aligned}$$
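For completeness (a routine algebraic check we add here), writing \(A = \sum _{j \in \hat{S}\setminus \{i\}} c_j\), the above equation can be rearranged and factorized as

$$\begin{aligned} A \left( y^2 - \frac{1}{c_i^2} \right) = ( \vert \hat{S} \vert -2) \left( y - \frac{1}{c_i} \right) \;\;\Longleftrightarrow \;\; \left( y - \frac{1}{c_i} \right) \left[ A \left( y + \frac{1}{c_i} \right) - ( \vert \hat{S} \vert -2) \right] = 0 , \end{aligned}$$

so its roots are \(y = \frac{1}{c_i}\) and \(y = \frac{ \vert \hat{S} \vert -2}{A} - \frac{1}{c_i}\); the latter is negative because Eq. (C2) with \(r > 0\) requires \(A > c_i ( \vert \hat{S} \vert -2)\).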

The above equation is therefore satisfied when \(y = \frac{1}{c_i}\) (the other root identified above being negative), and since y is positive by its definition, we must have \(y = \frac{1}{c_i}\). Hence, from the above and Eq. (C1), we have \(\underline{\psi }^{(S)} = r \beta y = \frac{r \beta }{c_i} = \overline{\psi }^{(S)}\). This completes the proof. \(\square \)

Fig. 14  Effect of the reward parameter r on the total power received by the center in a state

Figure 14 presents representative plots showing the effect of the reward parameter r on the total power received by the center in a given state S. We consider the following values for the purpose of visualization (the plots for other values exhibit similar behavior): \(\beta = 6, \ell = 10^6, \vert S \vert = 5, \text { and } \{c_i\}_{i \in S} = \{0.40,0.45,0.50,0.55,0.60\}\). We vary the value of r from 0 up to \(10^6\) with a resolution of \(10^3\). As r increases, the set of investing players expands (which is intuitive and can also be seen from the proof of Proposition 3). In the plots, the points at which a previously non-investing player turns into an investing player are marked by red dots. It can be seen that, as r increases, the total power grows in a manner resembling a piecewise-linear ramp function.

Recall that in a state S, the investing players \(i \in \hat{S}\) collectively satisfy: \(c_i \!<\! \frac{2 \sum _{j \in \hat{S}} c_j}{ \vert \hat{S} \vert -1 + \sqrt{ ( \vert \hat{S} \vert -1)^2 + \frac{4\ell }{r \beta }\! \sum _{j \in \hat{S}} c_j }} \). For low values of r, the threshold is too low for the players’ cost parameters to satisfy; hence no strategic players invest and the total power equals \(\ell \) (this is the base of the ramp function). For values of r which attract investments, the term \(\frac{4\ell }{r \beta } \sum _{j \in \hat{S}} c_j \) is of a similar order as \( \vert \hat{S} \vert \) or lower (this can be seen from the critical value of r derived in Eq. (C2), which consequently results in \(\frac{4\ell }{r \beta } \sum _{j \in \hat{S}} c_j \) being upper bounded by \(4 \vert \hat{S} \vert \)). From Proposition 2, the total power received by the center in state S is \( \psi ^{(S)} = r \beta \frac{ \vert \hat{S} \vert -1 + \sqrt{ ( \vert \hat{S} \vert -1)^2 + \frac{4\ell }{r \beta }\! \sum _{j \in \hat{S}} c_j }}{2 \sum _{j \in \hat{S}} c_j}\). It can be seen that within any range of r wherein \(\hat{S}\) does not change, \(\psi ^{(S)}\) increases as a concave function of r. It can also be seen that in general, \(\psi ^{(S)}\) is not differentiable with respect to r at the breakpoints where set \(\hat{S}\) changes. Thus, \(\psi ^{(S)}\) increases in r as a piecewise-concave function. Moreover, due to the suppressed nature of the term \(\frac{4\ell }{r \beta } \sum _{j \in \hat{S}} c_j \) in \(\psi ^{(S)}\) for values of r which attract investments, the increase in \(\psi ^{(S)}\) with r is close to being linear within any range of r wherein \(\hat{S}\) does not change. Hence, the increase in \(\psi ^{(S)}\) with respect to r would be typically close to a piecewise-linear ramp function.
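The behavior described above can be reproduced numerically. The sketch below (added for illustration) uses the same parameter values as Figure 14; the rule for selecting the investing set \(\hat{S}\), namely taking the largest low-cost prefix of players that satisfies the stated condition, is our reading of Proposition 2 rather than code from the paper:

```python
import numpy as np

# Total power received by the center in a state, as a function of the reward r.
# Parameters mirror the visualization above; the support-selection rule is an
# assumed reading of Proposition 2.
beta, ell = 6.0, 1e6
costs = np.array([0.40, 0.45, 0.50, 0.55, 0.60])    # cost parameters, sorted ascending

def total_power(r):
    if r <= 0:
        return ell                                   # no reward: only the constant power ell
    for k in range(len(costs), 0, -1):               # try the largest candidate supports first
        c_hat = costs[:k]                            # k cheapest players as candidate S_hat
        sum_c = c_hat.sum()
        root = np.sqrt((k - 1) ** 2 + 4.0 * ell * sum_c / (r * beta))
        threshold = 2.0 * sum_c / ((k - 1) + root)
        if c_hat[-1] < threshold:                    # the costliest member of S_hat must pass
            return r * beta * ((k - 1) + root) / (2.0 * sum_c)   # psi^(S) from Proposition 2
    return ell                                       # no strategic player invests

rs = np.arange(0.0, 1_000_001.0, 1000.0)
powers = np.array([total_power(r) for r in rs])
# Per Proposition 3, powers should be monotone increasing in r, with kinks at
# the values of r where a new player starts investing (a ramp-like shape).
print(np.all(np.diff(powers) >= 0), powers[::200])
```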

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Dhamal, S., Ben-Ameur, W., Chahed, T. et al. A game theoretic framework for distributed computing with dynamic set of agents. Ann Oper Res 336, 1871–1904 (2024). https://doi.org/10.1007/s10479-023-05231-7
