
1 Introduction

In this paper, we study a variant of a hat guessing game. In these types of games, there are some entities—players, pirates, sages, or, as in our case, bears. A bear sits on each vertex of a graph G. There is an adversary (a demon in our case) that puts a colored hat on the head of each bear. A bear on a vertex v sees only the hats of the bears on the neighboring vertices of v, but he does not know the color of his own hat. To defeat the demon, the bears should correctly guess the colors of their hats. However, the bears can only discuss their strategy before they are given the hats. After they get them, no communication is allowed; each bear can only guess his hat color. The variants of the game differ in the bears’ winning condition.

The first variant was introduced by Ebert [8]. In this version, each bear gets a red or blue hat (chosen uniformly and independently) and can either guess a color or pass. The bears see each other, i.e., they sit on the vertices of a clique. They win if at least one bear guesses his color correctly and no bear guesses a wrong color. The question is what the highest winning probability achievable by some strategy is. The game soon became quite popular and was even mentioned in The New York Times [26].

Winkler [29] studied a variant where the bears cannot pass and the objective is to maximize the number of bears that guess their hat color correctly. A generalization of this variant to more than two colors was studied by Feige [11] and Aggarwal [1]. Butler et al. [6] studied a variant where the bears sit on the vertices of a general graph, not only a clique. For a survey of various hat guessing games, we refer to the theses of Farnik [10] and Krzywkowski [23].

In this paper, we study a variant of the game introduced by Farnik [10], where each bear has to guess and the bears win if at least one of them guesses correctly. He introduced the hat guessing number HG of a graph G (also called the hat chromatic number and denoted \(\mu \) in later works), defined as the maximum h such that the bears win the game with h hat colors. We study a variant where each bear can guess multiple times, and a bear guesses correctly if the color of his hat is included in his guesses. We introduce a new parameter, the fractional hat chromatic number \(\hat{\mu }\) of a graph G, which we define as the supremum of \(\frac{h}{g}\) such that each bear has g guesses and the bears win the game with h hat colors.

Although the hat guessing game looks like a recreational puzzle, connections to more “serious” areas of mathematics and computer science have been shown—like coding theory [9, 19], network coding [14, 25], auctions [1], finite dynamical systems [12], and circuits [30]. In this paper, we exhibit a connection between the hat guessing game and the independence polynomial of graphs, which is our main result. This connection allows us to compute the optimal strategy of the bears (and thus the value of \(\hat{\mu }\)) for an arbitrary chordal graph in polynomial time. We also prove that the fractional hat chromatic number \(\hat{\mu }\) is asymptotically equal, up to a logarithmic factor, to the maximum degree of the graph. Finally, we compute the exact value of \(\hat{\mu }\) for graphs from some classes, such as paths, cycles, and cliques.

We would like to point out that the existence of the algorithm computing \(\hat{\mu }\) of a chordal graph is far from obvious. Butler et al. [6] asked how hard it is to compute \(\mu (G)\) and the optimal strategy for the bears. Note that a trivial non-deterministic algorithm for computing the optimal strategy (or just the value of \(\mu (G)\) or \(\hat{\mu }(G)\)) needs exponential time, because a strategy of a bear on v is a function of the hat colors of the bears on the neighbors of v (we formally define strategies in Sect. 2). It is not clear whether the existence of a strategy for the bears would imply a strategy in which each bear computes his guesses by some efficiently computable function (linear, computable by a polynomial circuit, etc.). This would allow us to place the problem of computing \(\mu \) in some level of the polynomial hierarchy, as noted by Butler et al. [6]. However, we are not aware of any hardness results for hat guessing games. Likewise, the maximum degree bound for \(\hat{\mu }\) does not by itself yield an exact efficient algorithm computing \(\hat{\mu }(G)\). This phenomenon can be illustrated by the edge chromatic number \(\chi '\) of graphs: by Vizing’s theorem [7, Chapter 5], every graph G satisfies \(\varDelta (G) \le \chi '(G) \le \varDelta (G) + 1\), yet it is NP-hard to distinguish between these two cases [18].

Organization of the Paper. We finish this section with a summary of results about the variant of the hat guessing game we study. In the next section, we introduce the notions used in this paper and formally define the hat guessing game. In Sect. 3, we formally define the fractional hat chromatic number \(\hat{\mu }\) and compare it to \(\mu \). In Sect. 4, we generalize some previous results to the multi-guess setting. We use these tools to prove our main result in Sect. 5, including the polynomial-time algorithm that computes \(\hat{\mu }\) for chordal graphs. The maximum degree bound for \(\hat{\mu }\) and the computation of exact values for paths and cycles are provided in Sect. 6.

1.1 Related Works

As mentioned above, Farnik [10] introduced a hat chromatic number \(\mu (G)\) of a graph G as the maximum number of colors h such that the bears win the hat guessing game with h colors and played on G. He proved that \(\mu (G) \le O\bigl (\varDelta (G)\bigr )\) where \(\varDelta (G)\) is the maximum degree of G.

Since then, the parameter \(\mu (G)\) has been studied extensively. The parameter \(\mu \) for multipartite graphs was studied by Gadouleau and Georgiu [13] and by Alon et al. [2]. Szczechla [28] proved that \(\mu \) of a cycle equals 3 if and only if the length of the cycle is 4 or divisible by 3 (otherwise it is 2). Bosek et al. [5] gave bounds on \(\mu \) for some graphs, like trees and cliques. They also provided connections between \(\mu (G)\) and other parameters like the chromatic number and degeneracy. They conjectured that \(\mu (G)\) is bounded by some function of the degeneracy d(G) of the graph G. They showed that such a function has to be at least exponential, as they presented a graph G with \(\mu (G) \ge 2^{d(G)}\). This result was improved by He and Li [16], who showed that there is a graph G such that \(\mu (G) \ge 2^{2^{d(G) - 1}}\). In contrast, since \(\hat{\mu }(G) \in \varOmega \bigl (\varDelta (G)/\log \varDelta (G)\bigr )\) (see Proposition 3), the parameter \(\hat{\mu }\) cannot be bounded by any function of the degeneracy, as there are graph classes of unbounded maximum degree and bounded degeneracy (e.g. trees or planar graphs). Recently, Kokhas et al. [21, 22] studied a non-uniform version of the game, i.e., for each bear, there can be a different number of hat colors. They considered cliques and almost cliques. They also provided a technique to build a strategy for a graph G whenever G is obtained by combining \(G_1\) and \(G_2\) with known strategies. We generalize some of their results and use them as “basic blocks” for our main result.

2 Preliminaries

We use standard notions of graph theory. For an introduction to this topic, we refer to the book by Diestel [7]. We denote a clique by \(K_n\), a cycle by \(C_n\), and a path by \(P_n\), each on n vertices. The maximum degree of a graph G is denoted by \(\varDelta (G)\), shortened to \(\varDelta \) if the graph G is clear from the context. The neighbors of a vertex v are denoted by N(v). We use \(N^+(v)\) to denote the closed neighborhood of v, i.e. \(N^+(v) = N(v) \cup \{v\}\). For a set U of vertices of a graph G, we denote by \(G \setminus U\) the subgraph of G induced by \(V(G) \setminus U\), i.e., the graph arising from G by removing the vertices in U.

A hat guessing game is a triple \(\mathcal {H}= (G, h, g)\) where

  • \(G=(V,E)\) is an undirected graph, called the visibility graph,

  • \(h \in \mathbb {N}\) is a hatness that determines the number of different possible hat colors for each bear, and

  • \(g \in \mathbb {N}\) is a guessing number that determines the number of guesses each bear is allowed to make.

The rules of the game are defined as follows. On each vertex of G sits a bear. The demon puts a hat on the head of each bear; each hat has one of h colors. We would like to point out that bears on adjacent vertices may receive hats of the same color. The only information the bear on a vertex v has are the colors of the hats of the bears sitting on the neighbors of v. Based on this information only, the bear has to guess a set of g colors according to a deterministic strategy agreed upon in advance. We say a bear guesses correctly if the color of his hat is among his guesses. The bears win if at least one bear guesses correctly.

Formally, we associate the colors with natural numbers and say that each bear can receive a hat colored by a color from the set \(S = [h] = \{0, \ldots , h-1\}\). A hats arrangement is a function \(\varphi : V \rightarrow S\). A strategy of a bear on v is a function \(\varGamma _v: {S}^{|N(v)|} \rightarrow \left( {\begin{array}{c}S\\ g\end{array}}\right) \), and a strategy for \(\mathcal {H}\) is a collection of strategies for all vertices, i.e. \((\varGamma _v)_{v\in V}\). We say that a strategy is winning if for any possible hats arrangement \(\varphi : V \rightarrow S\) there exists at least one vertex v such that \(\varphi (v)\) is contained in the image of \(\varGamma _v\) on \(\varphi \), i.e., \(\varphi (v) \in \varGamma _v \bigl ( (\varphi (u))_{u \in N(v)} \bigr )\). Finally, the game \(\mathcal {H}\) is winning if there exists a winning strategy of the bears.

As a classical example, we describe a winning strategy for the hat guessing game \((K_3, 3, 1)\). Let us denote the vertices of \(K_3\) by \(v_0\), \(v_1\), and \(v_2\) and fix a hats arrangement \(\varphi \). For every \(i \in [3]\), the bear on the vertex \(v_i\) assumes that the sum \(\sum _{j \in [3]} \varphi (v_j)\) is equal to i modulo 3 and computes his guess accordingly. It follows that for any hat arrangement \(\varphi \) there is always exactly one bear that guesses correctly, namely the bear on the vertex \(v_i\) for \(i = \sum _j \varphi (v_j) \pmod {3}\).
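This strategy can be checked exhaustively. The following sketch (in Python, with the vertices indexed 0, 1, 2) enumerates all \(3^3\) hat arrangements and verifies that exactly one bear guesses correctly each time.

```python
from itertools import product

def guess(i, others_sum):
    # Bear i assumes the total sum of all three hats is i (mod 3)
    # and solves for his own color.
    return (i - others_sum) % 3

# Enumerate all 27 hat arrangements and count the correct guessers.
wins = []
for phi in product(range(3), repeat=3):
    correct = [i for i in range(3) if phi[i] == guess(i, sum(phi) - phi[i])]
    wins.append(len(correct))

assert len(wins) == 27
assert all(w == 1 for w in wins)  # exactly one bear is right every time
```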

Some of our results are stated for a non-uniform variant of the hat guessing game. A non-uniform game is a triple \(\bigl (G = (V,E), \mathbf {h}, \mathbf {g}\bigr )\) where \(\mathbf {h}= (h_v)_{v \in V}\) and \(\mathbf {g}= (g_v)_{v \in V}\) are vectors of natural numbers indexed by the vertices of G and a bear on v gets a hat of one of \(h_v\) colors and is allowed to guess exactly \(g_v\) colors. Other rules are the same as in the standard hat guessing game. To distinguish between the uniform and non-uniform games, we always use plain letters h and g for the hatness and the guessing number, respectively, and bold letters (e.g. \(\mathbf {h},\mathbf {g}\)) for vectors indexed by the vertices of G.

3 Fractional Hat Chromatic Number

From the hat guessing games, we can derive parameters of the underlying visibility graph G. Namely, the hat chromatic number \(\mu (G)\) is the maximum integer h for which the hat guessing game (G, h, 1) is winning, i.e., each bear gets a hat colored by one of h colors and each bear has only one guess—we call such a game a single-guessing game. In this paper, we study the fractional hat chromatic number \(\hat{\mu }(G)\), a parameter arising from the hat multi-guessing game and defined as

$$ \hat{\mu }(G) = \sup \left\{ \frac{h}{g} \,\middle |\, h, g \in \mathbb {N} \text { and the game } (G, h, g) \text { is winning} \right\} . $$

Observe that \(\mu (G) \le \hat{\mu }(G)\). Farnik [10] and Bosek et al. [5] also studied multi-guessing games. They considered a parameter \(\mu _g(G)\) that is the maximum number of colors h such that the bears win the game (G, h, g). The difference between \(\mu _g\) and \(\hat{\mu }\) is the following. If \(\mu _g(G) \ge k\), then the bears win the game (G, k, g) and \(\hat{\mu }(G) \ge \frac{k}{g}\). If \(\hat{\mu }(G) \ge \frac{p}{q}\), then there are \(h,g \in \mathbb {N}\) such that \(\frac{p}{q} = \frac{h}{g}\) and the bears win the game (G, h, g). However, this does not imply that the bears win the game (G, p, q). It is easy to prove that if the bears win the game (G, h, g), then they win the game (G, kh, kg) for any constant \(k \in \mathbb {N}\) (see the full version [4] for details). The opposite implication does not hold; we discuss a counterexample at the end of this section. Unfortunately, this property prevents us from using our algorithm, which computes \(\hat{\mu }\), to compute also \(\mu \) of chordal graphs.

Moreover, by definition, the parameter \(\hat{\mu }\) does not even have to be a rational number. In such a case, for each \(p,q \in \mathbb {N}\), it holds that

  • If \(\frac{p}{q} < \hat{\mu }(G)\) then there are \(h,g \in \mathbb {N}\) such that \(\frac{p}{q} = \frac{h}{g}\) and the bears win the game (G, h, g).

  • If \(\frac{p}{q} > \hat{\mu }(G)\) then the demon wins the game (G, p, q).

For example, the fractional hat chromatic number \(\hat{\mu }(P_3)\) of the path \(P_3\) is irrational. We discuss the path \(P_3\) in the full version [4]. In the case of an irrational \(\hat{\mu }(G)\), our algorithm computing the value of \(\hat{\mu }\) of chordal graphs outputs an estimate of \(\hat{\mu }(G)\) with arbitrary precision. The next lemma states that the multi-guessing game is monotone in a certain sense. The proof is in the full version [4].

Lemma 1

Let \(\bigl (G = (V,E) ,h,g \bigr )\) be a winning hat guessing game. Let \(r'\) be a rational number such that \(r' \le h/g\). Then, there exist numbers \(h', g' \in \mathbb {N}\) such that \(h'/g' = r'\) and the hat guessing game \((G,h',g')\) is winning.

It is straightforward to prove a generalization of Lemma 1 for non-uniform games; however, for simplicity, we state it only for uniform games. By the proof of the previous lemma, we know that we can use a strategy for (G, h, g) to create a strategy for a game \((G, k\cdot h, k \cdot g + \ell )\) for arbitrary \(k,\ell \in \mathbb {N}\). A natural question is whether we can do this in general: can we derive a winning strategy whenever we decrease the fraction h/g, even if the hatness h and the guessing number g change arbitrarily? This is true for cliques: we show in Sect. 4 that the bears win the game \((K_n, h, g)\) if and only if \(h/g \le n\). However, it is not true in general. For example, for n large enough it holds that \(\hat{\mu }(P_n) \ge 3\), as we show in Sect. 6 that \(\hat{\mu }(P_n)\) converges to 4 when n goes to infinity. However, Butler et al. [6] proved that \(\mu (T) = 2\) for any tree T. Thus, the bears lose the game \((P_n, 3, 1)\).

4 Basic Blocks

In this section, we generalize some results of Kokhas et al. [21, 22] about cliques and strategies for graph products, which we use for proving our main result. The single-guessing version of the next theorem (without the algorithmic consequences) was proved by Kokhas et al. [21, 22]. The proof of the following theorem is stated in the full version [4].

Theorem 1

Bears win a game \(\bigl (K_n = (V,E), \mathbf {h}, \mathbf {g}\bigr )\) if and only if

$$ \sum _{v \in V} \frac{g_v}{h_v} \ge 1. $$

Moreover, if there is a winning strategy, then there is a winning strategy \((\varGamma _v)_{v \in V}\) such that each \(\varGamma _v\) can be described by two linear inequalities whose coefficients can be computed in linear time.
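One way to realize such a strategy (a hypothetical sketch consistent with the “two linear inequalities” remark, though not necessarily the authors’ exact construction) is via intervals: map the hats to points of [0, 1), give each bear v a half-open interval \(I_v\) of length \(g_v/h_v\), and let him guess every color that would place the total sum (mod 1) inside \(I_v\). The sketch below assumes the intervals exactly tile [0, 1), i.e. \(\sum _v g_v/h_v = 1\), and verifies the strategy by brute force on a non-uniform example.

```python
from fractions import Fraction
from itertools import product

# Hypothetical non-uniform clique game on K_3 with
# g_v / h_v summing to exactly 1:  1/2 + 1/3 + 1/6 = 1.
h = [2, 3, 6]
g = [1, 1, 1]
n = len(h)

# Bear v owns the half-open interval I_v = [a[v], a[v] + g[v]/h[v]);
# together the intervals tile [0, 1).
a = [sum(Fraction(g[j], h[j]) for j in range(k)) for k in range(n)]

def guess(v, others):
    # Bear v sees the other hats and guesses every color c that would
    # put s = sum_u c_u / h_u (mod 1) inside his interval I_v.
    base = sum(Fraction(c, h[u]) for u, c in others) % 1
    return [c for c in range(h[v])
            if a[v] <= (base + Fraction(c, h[v])) % 1 < a[v] + Fraction(g[v], h[v])]

for phi in product(*(range(hv) for hv in h)):
    views = [[(u, phi[u]) for u in range(n) if u != v] for v in range(n)]
    # Each bear uses exactly g_v guesses, and some bear is always right.
    assert all(len(guess(v, views[v])) == g[v] for v in range(n))
    assert any(phi[v] in guess(v, views[v]) for v in range(n))
```

Since the total sum always lands in exactly one interval, exactly one bear is guaranteed to be correct; the interval membership test is precisely a pair of linear inequalities in the hat colors.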

By Theorem 1, we can conclude the following corollary.

Corollary 1

For each \(n \in \mathbb {N}\), it holds that \(\hat{\mu }(K_n) = n\).

Further, we generalize a result of Kokhas and Latyshev [21]. In particular, we provide a new way to combine two hat guessing games on graphs \(G_1\) and \(G_2\) into a hat guessing game on a graph obtained by gluing \(G_1\) and \(G_2\) together in a specific way.

Let \(G_1 = (V_1, E_1)\) and \(G_2 = (V_2, E_2)\) be graphs, let \(S \subseteq V_1\) be a set of vertices inducing a clique in \(G_1\), and let \(v \in V_2\) be an arbitrary vertex of \(G_2\). The clique join of graphs \(G_1\) and \(G_2\) with respect to S and v is the graph \(G = (V,E)\) such that \(V = V_1 \cup V_2 \setminus \{v\}\); and E contains all the edges of \(E_1\), all the edges of \(E_2\) that do not contain v, and an edge between every \(w \in S\) and every neighbor of v in \(G_2\). See Fig. 1 for an example of a clique join and the application of the following lemma.

Lemma 2

Let \(\mathcal {H}_1 = \bigl (G_1 =(V_1, E_1), \mathbf {h}^1, \mathbf {g}^1\bigr )\) and \(\mathcal {H}_2 = \bigl (G_2 = (V_2, E_2), \mathbf {h}^2, \mathbf {g}^2\bigr )\) be two hat guessing games and let \(S \subseteq V_1\) be a set inducing a clique in \(G_1\) and \(v\in V_2\). Set G to be the clique join of graphs \(G_1\) and \(G_2\) with respect to S and v. If the bears win the games \(\mathcal {H}_1\) and \(\mathcal {H}_2\), then they also win the game \(\mathcal {H}= (G, \mathbf {h}, \mathbf {g})\) where

$$\begin{aligned} h_u = {\left\{ \begin{array}{ll} h^1_u &{}u\in V_1\setminus S\\ h^2_u &{}u\in V_2\setminus \{v\}\\ h^1_u\cdot h^2_v &{}u \in S \text {, and} \end{array}\right. }\qquad g_u = {\left\{ \begin{array}{ll} g^1_u &{}u\in V_1\setminus S\\ g^2_u &{}u\in V_2\setminus \{v\}\\ g^1_u\cdot g^2_v &{}u \in S. \end{array}\right. } \end{aligned}$$

Proof Idea. For every bear \(u \in S\), we interpret his color as a tuple \((c^1_u, c^2_u)\) where \(c^1_u \in [h^1_u]\) and \(c^2_u \in [h^2_v]\). The bears in \(G_1 \setminus S\) or \(G_2 \setminus \{v\}\) use the strategies for \(\mathcal {H}_1\) or \(\mathcal {H}_2\), respectively. The bears in S combine the winning strategies for \(\mathcal {H}_1\) and \(\mathcal {H}_2\). The full proof is in the full version [4].    \(\Box \)

We remark that Lemma 2 generalizes Theorem 3.1 and Theorem 3.5 of [21] not only by introducing multiple guesses but also by allowing for more general ways to glue two graphs together. Thus, it provides new constructions of winning games even for single-guessing games.

Fig. 1.

Applying Lemma 2 to the winning hat guessing games \((C_4, 3, 1)\) (see [28]) and \((K_3, 3, 1)\), we obtain a winning hat guessing game \((G, \mathbf {h}, 1)\), where G is the result of identifying an edge of \(C_4\) with an edge of \(K_4\), and \(\mathbf {h}\) is given in the picture.

5 Independence Polynomial

The multivariate independence polynomial of a graph \(G = (V,E)\) on variables \(\mathbf {x} = (x_v)_{v\in V}\) is

$$ P_G(\mathbf {x}) = \sum _{\substack{I \subseteq V \\ I \text { independent}}} \; \prod _{v \in I} x_v, $$

where the sum ranges over all independent sets I of G, including the empty set.

First, we informally describe the connection between the multi-guessing game and the independence polynomial. Consider the game (G, h, g) and fix a strategy of the bears. Suppose that the demon puts on the head of each bear a hat of a random color (chosen uniformly and independently). Let \(A_v\) be the event that the bear on the vertex v guesses correctly. Then, the probability of \(A_v\) is exactly g/h. Moreover, for any independent set I and any \(v \in I\), the event \(A_v\) is independent of all events \(A_w\) for \(w \in I, w \ne v\). Thus, we can use the inclusion-exclusion principle to compute the probability that \(A_v\) occurs for at least one \(v \in I\), i.e., that at least one bear sitting on a vertex of I guesses correctly.

Assume that no two bears on adjacent vertices guess their hat colors correctly at once; it turns out that if we plug \(-g/h\) into all variables of the non-constant terms of \(-P_G\), then we get exactly the fraction of all hat arrangements on which the bears win. The non-constant terms of \(P_G\) correspond (up to sign) to the terms of the formula from the inclusion-exclusion principle. This is why we plug \(-g/h\) into the polynomial \(P_G\).

To avoid confusion with the negative fraction \(-g/h\), we define the signed independence polynomial as \(Z_G(\mathbf {x}) = P_G(-\mathbf {x})\), i.e.,

$$ Z_G(\mathbf {x}) = \sum _{\substack{I \subseteq V \\ I \text { independent}}} (-1)^{|I|} \prod _{v \in I} x_v. $$

We also introduce the monovariate signed independence polynomial \(U_G(x)\) obtained by plugging x for each variable \(x_v\) of \(Z_G\).
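For small graphs, \(U_G\) can be evaluated directly from the definition by enumerating independent sets. The following brute-force sketch (for illustration only) checks the value on \(P_3\), whose independent sets are \(\emptyset \), the three singletons, and the pair of endpoints, giving \(U_{P_3}(x) = 1 - 3x + x^2\).

```python
from itertools import combinations

def U(vertices, edges, x):
    # U_G(x) = sum over independent sets I of (-x)^|I|,
    # computed by brute force over all vertex subsets.
    total = 0.0
    for k in range(len(vertices) + 1):
        for I in combinations(vertices, k):
            if all(not (u in I and w in I) for (u, w) in edges):
                total += (-x) ** k
    return total

# P_3 = a-b-c: U(x) = 1 - 3x + x^2.
assert abs(U("abc", [("a", "b"), ("b", "c")], 0.5) - (1 - 1.5 + 0.25)) < 1e-12
```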

Note that the constant term of any independence polynomial \(P_G(\mathbf {x})\) equals 1, arising from taking \(I = \emptyset \) in the sum from the definition of \(P_G\). When \(U_G(g/h) = 0\) and no two adjacent bears guess correctly at the same time, the bears win the game (G, h, g), because the fraction of all hat arrangements on which at least one bear guesses correctly is exactly 1; the proof, however, is far from trivial.

Slightly abusing the notation, we use \(Z_{G'}(\mathbf {x})\) to denote the independence polynomial of an induced subgraph \(G'\) with variables \(\mathbf {x}\) restricted to the vertices of \(G'\). The independence polynomial \(P_G\) can be expanded according to a vertex \(v \in V\) in the following way.

$$ P_G(\mathbf {x}) = P_{G\setminus \{v\}}(\mathbf {x}) + x_v P_{G \setminus N^+(v)}(\mathbf {x}) $$

The analogous expansions hold for the polynomials \(Z_G\) and \(U_G\) as well. This expansion follows from the fact that for any independent set I of G, it holds that either v is not in I (the first term of the expansion), or v is in I but in that case, no neighbor of v is in I (the second term). The formal proof of this expansion of \(P_G\) was provided by Hoede and Li [17].
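The expansion can be sketched for the monovariate polynomial \(U_G\) as follows; note that the sign of the second term flips because \(Z_G(\mathbf {x}) = P_G(-\mathbf {x})\).

```python
def U_rec(vertices, edges, x):
    # Expansion according to a vertex v (signs adjusted for Z/U):
    #   U_G(x) = U_{G \ v}(x) - x * U_{G \ N+(v)}(x)
    if not vertices:
        return 1.0
    v = vertices[0]
    closed_nbrs = {v} | {w for e in edges for w in e if v in e and w != v}
    def induced(keep):
        return ([u for u in vertices if u in keep],
                [e for e in edges if e[0] in keep and e[1] in keep])
    without_v = induced(set(vertices) - {v})
    without_nbrs = induced(set(vertices) - closed_nbrs)
    return U_rec(*without_v, x) - x * U_rec(*without_nbrs, x)

# P_3: U(x) = 1 - 3x + x^2;  K_3: U(x) = 1 - 3x.
assert abs(U_rec(["a", "b", "c"], [("a", "b"), ("b", "c")], 0.25) - 0.3125) < 1e-12
assert abs(U_rec(["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")], 0.2) - 0.4) < 1e-12
```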

For a graph G, we let \(\mathcal {R}(G)\) denote the set of all vectors \(\mathbf {r}\in [0,\infty )^V\) such that \(Z_G(\mathbf {w}) > 0\) for all \(0 \le \mathbf {w}\le \mathbf {r}\), where the comparison is done entry-wise. For the monovariate independence polynomial \(U_G\), an analogous set to \(\mathcal {R}(G)\) would be exactly the real interval [0, r) where r is the smallest positive root of \(U_G\). (Note that \(Z_G(\mathbf{0})=1\) and \(U_G(0)=1\).)

Our first connection of the independence polynomial to the hat guessing game comes in the shape of a sufficient condition for bears to lose. Consider the following beautiful connection between Lovász Local Lemma and independence polynomial due to Scott and Sokal [27].

Theorem 2

([27] Theorem 4.1). Let \(G =(V,E)\) be a graph and let \((A_v)_{v \in V}\) be a family of events on some probability space such that for every v, the event \(A_v\) is independent of \(\{A_w \mid w \not \in N^+(v)\}\). Suppose that \(\mathbf {p}\in [0,1]^V\) is a vector of real numbers such that for each v we have \(P(A_v) \le p_v\) and \(\mathbf {p}\in \mathcal {R}(G)\). Then

$$\begin{aligned} P\bigl (\bigcap _{v \in V}\bar{A_v}\bigr ) \ge Z_G(\mathbf {p}) > 0. \end{aligned}$$

The full proofs omitted in this section are stated in the full version [4].

Proposition 1

A hat guessing game \(\mathcal {H}= (G=(V,E),\mathbf {h},\mathbf {g})\) is losing whenever \(\mathbf {r}\in \mathcal {R}(G)\) where \(\mathbf {r}= (g_v/h_v)_{v \in V}\).

Proof Idea. We let the demon assign a hat to each bear uniformly at random and independently from the other bears. Let \(A_v\) be the event that the bear on the vertex v guesses correctly. Applying Theorem 2 to G and the events \(A_v\), we conclude that the bears lose (no event \(A_v\) occurs) with a non-zero probability.    \(\Box \)

A strategy for a hat guessing game \(\mathcal {H}\) is perfect if it is winning and in every hat arrangement, no two bears that guess correctly are on adjacent vertices. We remark that perfect strategies exist, for example the strategy for a single-guessing game on a clique \(K_n\) and exactly n colors [20], or for a multi-guessing game on a clique \(K_n\) and \(h / g = n\) (Corollary 1). The following proposition shows that a perfect strategy can occur only when \(\mathbf {r}= (g_v/h_v)_{v \in V}\) lies in some sense just outside of \(\mathcal {R}(G)\).

Proposition 2

If there is a perfect strategy for the hat guessing game \((G,\mathbf {h},\mathbf {g})\) then for \(\mathbf {r}= (g_v/h_v)_{v \in V}\) we have that \(Z_G(\mathbf {r}) = 0\) and \(Z_G(\mathbf {w}) \ge 0\) for every \(0 \le \mathbf {w}\le \mathbf {r}\).

Proof Idea. We fix a perfect strategy and show that if we plug the vector \(\mathbf {r}\) into \(Z_G\), then the non-constant terms of \(Z_G\) compute exactly the negative fraction of hat arrangements for which at least one bear guesses his hat color correctly. We point out that the assumption of a perfect strategy is crucial; this step would not hold without it. Since the constant term of \(Z_G\) always equals 1, it follows that \(Z_G(\mathbf {r}) = 0\).    \(\Box \)

Scott and Sokal [27, Corollary 2.20] proved that \(Z_G(\mathbf {w}) \ge 0\) for every \(0 \le \mathbf {w}\le \mathbf {r}\) if and only if \(\mathbf {r}\) lies in the closure of \(\mathcal {R}(G)\). Therefore, Proposition 2 further implies that if a perfect strategy for game \((G,\mathbf {h},\mathbf {g})\) exists, then \(\mathbf {r}= (g_v/h_v)_{v \in V}\) lies in the closure of \(\mathcal {R}(G)\). And since \(\mathbf {r}\) cannot lie inside \(\mathcal {R}(G)\) due to Proposition 1, it must belong to the boundary of the set \(\mathcal {R}(G)\).
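As a concrete sanity check (a sketch, not part of the proof), the mod-3 strategy for \((K_3, 3, 1)\) from Sect. 2 is perfect, and the corresponding vector \(\mathbf {r}= (1/3, 1/3, 1/3)\) is indeed a zero of \(Z_{K_3}\):

```python
from itertools import product
from fractions import Fraction

# On K_3, exactly one bear guesses correctly in every arrangement, so
# in particular no two (adjacent) bears are ever correct at once:
# the mod-3 strategy is perfect.
for phi in product(range(3), repeat=3):
    correct = [i for i in range(3)
               if phi[i] == (i - (sum(phi) - phi[i])) % 3]
    assert len(correct) == 1

# The independent sets of K_3 are the empty set and the 3 singletons,
# so Z_{K_3}(r, r, r) = 1 - 3r, which vanishes at r = g/h = 1/3.
assert 1 - 3 * Fraction(1, 3) == 0
```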

The natural question is what happens outside of the closure of \(\mathcal {R}(G)\). We proceed to answer this question for chordal graphs.

A graph G is chordal if every cycle of length at least 4 has a chord. For our purposes, it is more convenient to work with a different, equivalent definition of chordal graphs. For a graph \(G = (V,E)\), a clique tree of G is a tree T whose vertices are precisely the subsets of V that induce maximal cliques in G, such that for each \(v \in V\), the vertices of T containing v induce a connected subtree. Gavril [15] showed that G is chordal if and only if there exists a clique tree of G.

Theorem 3

Let \(G = (V,E)\) be a chordal graph and let \(\mathbf {r}= (r_v)_{v \in V}\) be a vector of rational numbers from the interval [0, 1]. If \(\mathbf {r}\not \in \mathcal {R}(G)\) then there are vectors \(\mathbf {g},\mathbf {h}\in \mathbb {N}^V\) such that \(g_v/h_v \le r_v\) for every \(v \in V\) and the hat guessing game \((G,\mathbf {h},\mathbf {g})\) is winning.

Proof Idea. The proof is done by induction over the vertices of a clique tree T of G. We take a leaf of T, which represents a clique C of G. If the vector \(\mathbf {r}\) is such that the bears win on C by Theorem 1, then we are done. Otherwise, let \(G'\) be a graph arising from G by removing the vertices that are only in C and no other maximal clique. We define new vectors \(\mathbf {g}_1, \mathbf {g}_2, \mathbf {h}_1\), and \(\mathbf {h}_2\) arising from \(\mathbf {g}\) and \(\mathbf {h}\) in such a way that the bears win the game \(\mathcal {H}_1 = (C,\mathbf {h}_1,\mathbf {g}_1)\) by Theorem 1 and the game \(\mathcal {H}_2 = (G',\mathbf {h}_2,\mathbf {g}_2)\) by the induction hypothesis. We use the winning strategies for \(\mathcal {H}_1\) and \(\mathcal {H}_2\) and combine them into a winning strategy for the game \((G,\mathbf {h},\mathbf {g})\) using Lemma 2. See Fig. 2 for an illustration of the proof.    \(\Box \)

Fig. 2.

Application of Theorem 3 to a chordal graph G with a vector \(\mathbf {r}\not \in \mathcal {R}(G)\). In each step, we highlight the clique S and vertex w that are used in Lemma 2 to inductively build a strategy for G from the strategies on cliques given by Theorem 1.

Theorem 3 applied with a uniform vector \(\mathbf {r}\) (i.e., to the monovariate polynomial \(U_G\)) immediately gives us the following corollary.

Corollary 2

For any chordal graph G, the fractional hat chromatic number \(\hat{\mu }(G)\) is equal to 1/r where r is the smallest positive root of \(U_G(x)\).

Proof

Theorem 3 implies that \(\hat{\mu }(G) \ge 1/r\). For the other direction, let \((w_i)_{i \in \mathbb {N}}\) be a sequence of rational numbers such that \(w_i < r\) for every i and \(\lim _{i \rightarrow \infty } w_i = r\). Set \(\mathbf {w}_i = (w_i)_{v\in V}\). Scott and Sokal [27, Theorem 2.10] proved that \(\mathbf {r}\in \mathcal {R}(G)\) if and only if there is a path in \([0, \infty )^V\) connecting \(\mathbf{0}\) and \(\mathbf {r}\) such that \(Z_G(\mathbf {p}) > 0\) for any \(\mathbf {p}\) on the path. Taking the path \(\{\lambda \mathbf {w}_i \mid \lambda \in [0,1]\}\), we see that \(Z_G(\lambda \mathbf {w}_i) = U_G(\lambda \cdot w_i) > 0\) and thus \(\mathbf {w}_i \in \mathcal {R}(G)\) for every i. Therefore, by Proposition 1, the hat guessing game (G, h, g) is losing for any h, g such that \(g/h = w_i\), and thus \(\hat{\mu }(G) \le 1/w_i\) for every i. It follows that \(\hat{\mu }(G) \le 1/r\).    \(\Box \)
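As a small numeric illustration of Corollary 2 (a sketch; the closed form also follows by solving the quadratic), consider \(P_3\) with \(U_{P_3}(x) = 1 - 3x + x^2\). Since \(U_{P_3}\) is decreasing on [0, 1], a simple bisection recovers the smallest positive root \(r = (3-\sqrt{5})/2\), hence \(\hat{\mu }(P_3) = (3+\sqrt{5})/2 \approx 2.618\), which is irrational, consistently with the remark in Sect. 3.

```python
def U_P3(x):
    # Signed independence polynomial of P_3: the independent sets
    # {}, {a}, {b}, {c}, {a, c} give 1 - 3x + x^2.
    return 1 - 3 * x + x * x

# U_P3(0) = 1 > 0 and U_P3(1) = -1 < 0, so bisection on [0, 1]
# converges to the smallest positive root r.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if U_P3(mid) > 0 else (lo, mid)
r = (lo + hi) / 2

# Corollary 2: mu_hat(P_3) = 1/r = (3 + sqrt(5))/2.
assert abs(1 / r - (3 + 5 ** 0.5) / 2) < 1e-9
```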

We would like to remark that the proof of Theorem 3 (and also of Theorem 1) is constructive, in the sense that given a graph G and a vector \(\mathbf {r}\), it either greedily finds vectors \(\mathbf {g}, \mathbf {h}\in \mathbb {N}^V\) such that \(g_v/h_v \le r_v\), together with a succinct representation of a winning strategy on \((G,\mathbf {h},\mathbf {g})\), or it reaches a contradiction if \(\mathbf {r}\in \mathcal {R}(G)\). Moreover, it is easy to see that it can be implemented to run in polynomial time if a clique tree of G is provided. Combining this with the well-known fact that a clique tree of a chordal graph can be computed in polynomial time (see Blair and Peyton [3]), we get the following corollary.

Corollary 3

There is a polynomial-time algorithm that for a chordal graph \(G = (V,E)\) and vector \(\mathbf {r}\) decides whether \(\mathbf {r}\in \mathcal {R}(G)\). Moreover, if \(\mathbf {r}\not \in \mathcal {R}(G)\) it outputs vectors \(\mathbf {h}, \mathbf {g}\in \mathbb {N}^V\) such that \(g_v/h_v \le r_v\) for every \(v \in V\), together with a polynomial-size representation of a winning strategy for the hat guessing game \((G,\mathbf {h},\mathbf {g})\).

This result is consistent with the fact that chordal graphs are in general well-behaved with respect to the Lovász Local Lemma—Pegden [24] showed that for a chordal graph G, we can decide in polynomial time whether a given vector \(\mathbf {r}\) belongs to \(\mathcal {R}(G)\). We finish this section by presenting an algorithm that computes the fractional hat chromatic number of chordal graphs.

Theorem 4

There is an algorithm \(\mathcal {A}\) such that given a chordal graph G as an input, it approximates \(\hat{\mu }(G)\) up to an additive error \(1/2^k\). The running time of \(\mathcal {A}\) is \(2k\cdot \textit{poly}(n)\), where n is the number of vertices of G. Moreover, if \(\hat{\mu }(G)\) is rational, then the algorithm \(\mathcal {A}\) outputs the exact value of \(\hat{\mu }(G)\).

Proof Idea. We start with an interval \(I_0 = [0,1]\). We repeatedly use the algorithm given by Corollary 3 to produce intervals \(I_j\) such that \(1/\hat{\mu }(G)\) is in \(I_j\). We gradually decrease the length of the intervals \(I_j\) until it is small enough to determine \(\hat{\mu }(G)\) with the sought precision \(1/2^k\).    \(\Box \)

6 Applications

In this section, we present applications of the relation between the hat guessing game and independence polynomials which was presented in the previous section.

First, we prove that \(\hat{\mu }(G)\) is asymptotically equal to \(\varDelta (G)\) up to a logarithmic factor. Since every graph G contains a star with a central vertex of degree \(\varDelta (G)\) as a subgraph, the bears can use a strategy for trees on this star; we thus deduce the lower bound stated as Proposition 3. The formal proof is in the full version [4].

Proposition 3

The fractional hat chromatic number of any graph \(G = (V,E)\) is at least \(\varOmega (\varDelta / \log \varDelta )\).

Farnik [10] proved that \(\mu _g(G) \in O\bigl (g\cdot \varDelta (G)\bigr )\), from which we can deduce that \(\hat{\mu }(G) \in O\bigl (\varDelta (G)\bigr )\). Together with Proposition 3, this gives the following corollary stating that \(\hat{\mu }(G)\) is almost linear in \(\varDelta (G)\).

Corollary 4

For any graph G, it holds that \(\hat{\mu }(G) \in \varOmega (\varDelta / \log \varDelta )\) and \(\hat{\mu }(G) \in O(\varDelta )\).

It follows from Corollary 4 that \(\hat{\mu }(P_n)\) and \(\hat{\mu }(C_n)\) are bounded by a constant. In the full version [4], we prove the following proposition stating that the fractional hat chromatic number of paths and cycles tends to 4 as their length increases.

Proposition 4

\( \lim _{n \rightarrow \infty } \hat{\mu }(P_n) = \lim _{n \rightarrow \infty } \hat{\mu }(C_n) = 4 \)

We remark that Proposition 4 also follows from the results of Scott and Sokal [27], as they proved that the smallest positive roots of \(U_{P_n}\) and \(U_{C_n}\) tend to 1/4 as n goes to infinity. However, their proof is purely algebraic, whereas we provide a combinatorial one.
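Numerically, the convergence is easy to observe. The sketch below uses the path expansion \(U_{P_n}(x) = U_{P_{n-1}}(x) - x\, U_{P_{n-2}}(x)\) (with \(U_{P_0} = 1\) and \(U_{P_1} = 1 - x\)), locates the smallest positive root by a coarse scan followed by bisection, and, by Corollary 2, reports \(\hat{\mu }(P_n)\) as the reciprocal of that root.

```python
def U_path(n, x):
    # Recurrence from expanding at an endpoint of the path:
    #   U_{P_n}(x) = U_{P_{n-1}}(x) - x * U_{P_{n-2}}(x),
    # with U_{P_0} = 1 and U_{P_1} = 1 - x.
    prev, cur = 1.0, 1.0 - x
    for _ in range(n - 1):
        prev, cur = cur, cur - x * prev
    return cur if n >= 1 else prev

def smallest_root(n, step=1e-4):
    # U_{P_n}(0) = 1 > 0; scan for the first sign change,
    # then refine by bisection.
    x = 0.0
    while U_path(n, x + step) > 0:
        x += step
    lo, hi = x, x + step
    for _ in range(50):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if U_path(n, mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# mu_hat(P_n) = 1/r_n increases towards 4 while staying below 4.
vals = [1 / smallest_root(n) for n in (3, 10, 20)]
assert vals[0] < vals[1] < vals[2] < 4
assert vals[2] > 3.9
```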