Abstract
The hesitant fuzzy set has been an important tool for addressing problems of decision making. Several improved hesitant fuzzy sets have been proposed, such as the dual hesitant fuzzy set, the hesitant interval-valued fuzzy set, and the intuitionistic hesitant fuzzy set; however, none of these improved fuzzy sets can reflect the attitude characteristics of decision makers on time-sequences. In reality, the time-sequence is an important factor in reflecting hesitant situations, as decision makers might have different knowledge of the same alternative at different moments. To perfect the description of such hesitant situations and obtain more reasonable results of decision making, we define a new kind of hesitant fuzzy set, namely, the time-sequential hesitant fuzzy set. Meanwhile, its corresponding basic operators, score function and distance measures are proposed. We also propose the concept of fluctuated hesitant information to describe the hesitant degrees of decision makers on time-sequences. By comprehensively utilizing the score function, fluctuated hesitant information and distance measures under the time-sequential hesitant fuzzy set, a synthetic decision model is proposed. Two illustrative examples and one real application are utilized to demonstrate the effectiveness and advantages of the synthetic decision model under the time-sequential hesitant fuzzy set.
Introduction
Decision making [1] plays an important role in scientific research [2, 3], energy management and investment [4, 5], manufacturing [6, 7], construction management [8,
The remainder of the paper is organized as follows. “Preliminaries” introduces basic theories of the traditional hesitant fuzzy set. The concept, basic operators, aggregation operators and distance measures of TSHFS, together with the fluctuated hesitant information, are proposed in “Time-sequential hesitant fuzzy set”. “The proposed model for MADM” presents the proposed synthetic decision model. In “Experiments”, to demonstrate the effectiveness and advantages of the synthetic decision model in dealing with problems of decision making, we apply it to two illustrative examples of MADM and one real application under the TSHFS environment. “Conclusions” concludes our work and prospects future research.
Preliminaries
For each element in an HFS, a set of membership values is utilized to represent its characteristic properties. In this section, the definition of HFS is reviewed, and then several basic operational laws and information measure methods are shown.
Definition 2.1
[20, 21] If X is a reference set, a HFS on X can be represented by a function \(\hbar\) which returns a subset of values in [0,1],
As an ordinary fuzzy set is a special case of HFS, an HFS can also be obtained from a family of fuzzy sets. Xia et al. [34] introduced the following mathematical representation of HFS,
where \(h_{E} (x)\) is a set of values in [0,1] which denotes the possible membership degrees of the element \(x \in X\) to the set E; \(h = h_{E} (x)\) is also called a hesitant fuzzy element (HFE).
Definition 2.2
[20, 21] Given an HFE h, its lower and upper bounds are,
Definition 2.3
[20, 21] Let h be an HFE, its complement is defined as,
Definition 2.4
[20, 21] Let h1 and h2 be two HFEs, the union between them is defined as,
Definition 2.5
[20, 21] Let h1 and h2 be two HFEs, the intersection between them is defined as,
To make comparisons between HFEs, Xia et al. [34] introduced a comparison law by defining a score function.
Definition 2.6
[35] Let h be an HFE, the score function of h is defined as follows,
where \(l(h)\) is the length of h.
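The equation of Definition 2.6 is not reproduced in this excerpt; assuming the standard Xia–Xu form (the arithmetic mean of the values in an HFE), the score function can be sketched as:

```python
def hfe_score(h):
    """Score of an HFE: arithmetic mean of its membership values
    (assumed standard Xia-Xu form; the paper's equation is not shown here)."""
    return sum(h) / len(h)

print(round(hfe_score([0.2, 0.3, 0.6, 0.8, 0.9]), 2))  # → 0.56
```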
Distance and similarity measures play important roles in processes of decision making, and a series of corresponding methods has been introduced in “Introduction”. In this section, we first give the axioms of distance and similarity measures, and then classical measure methods are given as research foundations.
Definition 2.7
Assume that a mapping \(s:{\text{HFS}}(X) \times {\text{HFS}}(X) \to [0,1]\) is given and P, Q are two HFSs on the reference set; \((P,Q) \to s(P,Q)\) is called the similarity measure between P and Q if it satisfies that,
As s(P,Q) = 1 − d(P,Q), a similarity measure can be obtained from the corresponding distance measure. Therefore, we mainly discuss distance measure methods in this paper. A general hesitant weighted distance [33] is shown as follows,
where \(l_{{x_{i} }}\) denotes the length of \(h(x_{i})\), and \(\sum\nolimits_{i = 1}^{n} {w_{i} } = 1\). When λ = 1 and λ = 2, the well-known Hamming distance and Euclidean distance are obtained, respectively.
It is noted that \(h_{P} (x_{i} )\) and \(h_{Q} (x_{i} )\) should have the same length. When they have different lengths, the shorter one should be extended by adding corresponding values [33]. If decision makers are optimistic about the evaluated objects, the maximum among the possible values of the HFE is repeated and added; if decision makers are pessimistic, the minimum is repeated and added.
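The length-equalization rule and the general hesitant weighted distance above can be sketched together; the function names are illustrative, and for λ = 1 and λ = 2 the Hamming and Euclidean forms follow.

```python
def extend_hfe(h, target_len, optimistic=True):
    """Equalize HFE lengths: repeat the maximum value (optimistic
    decision maker) or the minimum (pessimistic) until lengths match."""
    pad = max(h) if optimistic else min(h)
    return list(h) + [pad] * (target_len - len(h))

def hesitant_weighted_distance(P, Q, w, lam=1):
    """Generalized hesitant weighted distance between HFSs P and Q,
    given as lists of equal-length HFEs with weights w summing to 1;
    values are compared in sorted order, as is usual for HFE distances."""
    total = 0.0
    for hp, hq, wi in zip(P, Q, w):
        total += wi * sum(abs(a - b) ** lam
                          for a, b in zip(sorted(hp), sorted(hq))) / len(hp)
    return total ** (1 / lam)

h_short = extend_hfe([0.2, 0.4], 3)  # optimistic: [0.2, 0.4, 0.4]
print(round(hesitant_weighted_distance([[0.1, 0.3]], [[0.2, 0.4]], [1.0]), 4))  # → 0.1
```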
Time-sequential hesitant fuzzy set
To perfect the description of decision makers' hesitant attitudes when they face problems of decision making, and to obtain more reasonable decision results, we define the time-sequential hesitant fuzzy set as follows, on the assumption that the membership degrees describing the attitudes of decision makers are given one by one on time-sequences.
Definition 3.1.
Assume X is a reference set, a time-sequential hesitant fuzzy set (TSHFS) on X is defined as follows,
where \(\overrightarrow {{t_{A} }}\) denotes the set of membership degrees in [0,1], whose order is fixed according to the moments at which the respective membership degrees are given, and \(\overrightarrow {{t_{A} }} = \left\{ {\gamma_{1}^{(1)} ,\gamma_{2}^{(2)} , \cdots ,\gamma_{m}^{(m)} } \right\}\) is called a time-sequential hesitant fuzzy set element (TSHFSE). In the membership degree \(\gamma_{i}^{(i)}\), (i) denotes the sequence number and \(\gamma_{i}\) represents the value of the membership degree.
Example 3.1.
If \(\overrightarrow {t} ,\) \(\overrightarrow {{t_{1} }}\) and \(\overrightarrow {{t_{2} }}\) are three TSHFSEs of a TSHFS on reference set X, they can be given as in the following examples,
Though \(\overrightarrow {{t_{1} }}\) and \(\overrightarrow {{t_{2} }}\) have the same membership degrees, their fluctuations on time-sequences are different. Under the TSHFS environment, \(\overrightarrow {{t_{1} }}\) and \(\overrightarrow {{t_{2} }}\) are different TSHFSEs, which carry more information than traditional HFEs under the HFS environment.
Basic operators under TSHFS
Definition 3.2.
Given a TSHFSE \(\overrightarrow {t}\) of a TSHFS on reference set X, its lower and upper bounds are defined as,
where m denotes the length of \(\overrightarrow {t}\).
Example 3.2.
Given a TSHFSE \(\overrightarrow {t} = \left\{ {0.2^{(1)} ,0.7^{(2)} ,0.4^{(3)} ,0.8^{(4)} } \right\}\), its lower and upper bounds are,
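Reading the bounds as the minimum and maximum over a TSHFSE's values (an assumed reading consistent with Example 3.2, since the printed formulas are omitted in this excerpt), a minimal sketch:

```python
def ts_bounds(t):
    """Lower and upper bounds of a TSHFSE: the min and max of its
    membership values, irrespective of their time positions
    (assumed reading; the paper's formulas are not shown here)."""
    return min(t), max(t)

print(ts_bounds([0.2, 0.7, 0.4, 0.8]))  # → (0.2, 0.8)
```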
Definition 3.3.
Given three TSHFSEs \(\overrightarrow {t} ,\) \(\overrightarrow {{t_{1} }} ,\) \(\overrightarrow {{t_{2} }}\), where \(\overrightarrow {{t_{1} }}\) and \(\overrightarrow {{t_{2} }}\) have the same length, the basic operators are defined as,
(1) Union operator:
(2) Intersection operator:
(3) Complement operator:
(4) Exponentiation operator:
(5) Number multiplication operator:
(6) Probability sum operator:
(7) Probability product operator:
where \(\lambda > 0\). The arrows over the operators indicate that the operators are executed according to the time-sequences of the TSHFSEs, and it is easy to verify that the results calculated through the above operators are again TSHFSEs.
Example 3.3.
Given the TSHFSEs \(\overrightarrow {t} = \left\{ {0.2^{(1)} ,0.7^{(2)} ,0.4^{(3)} ,0.8^{(4)} } \right\}\), \(\overrightarrow {{t_{1} }} = \left\{ {0.9^{(1)} ,0.5^{(2)} ,0.3^{(3)} ,0.7^{(4)} } \right\}\) and \(\overrightarrow {{t_{2} }} = \left\{ {0.5^{(1)} ,0.7^{(2)} ,0.9^{(3)} ,0.3^{(4)} } \right\}\), then,
(1) on union operator:
(2) on intersection operator:
(3) on complement operator:
(4) on exponentiation operator (λ = 2):
\(\begin{aligned}\overrightarrow {t}^{\lambda } = &\mathop \cup \limits_{{\gamma_{{}}^{(i)} \in \overrightarrow {t} }}^{ \to } \left\{ {(\gamma_{{}}^{2} )^{(i)} } \right\} \\ =& \left\{ {0.04^{(1)} ,0.49^{(2)} ,0.16^{(3)} ,0.64^{(4)} }\right\}\end{aligned}\),
(5) on number multiplication operation (λ = 2):
(6) on probability sum operation:
(7) on probability product operation:
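The seven operators act position by position on equal-length TSHFSEs. Union/intersection (max/min) and probabilistic sum/product follow the forms used in the proof of Property 3.3, and the exponentiation form matches Example 3.3(4); the complement (1 − γ) and the number-multiplication form (1 − (1 − γ)^λ) are the standard hesitant-fuzzy ones and are assumptions here.

```python
# Position-wise operators on equal-length TSHFSEs (Definition 3.3).
def ts_union(t1, t2):        return [max(a, b) for a, b in zip(t1, t2)]
def ts_intersection(t1, t2): return [min(a, b) for a, b in zip(t1, t2)]
def ts_complement(t):        return [1 - g for g in t]            # assumed form
def ts_power(t, lam):        return [g ** lam for g in t]
def ts_scalar_mult(t, lam):  return [1 - (1 - g) ** lam for g in t]  # assumed form
def ts_prob_sum(t1, t2):     return [a + b - a * b for a, b in zip(t1, t2)]
def ts_prob_product(t1, t2): return [a * b for a, b in zip(t1, t2)]

t = [0.2, 0.7, 0.4, 0.8]
print([round(v, 2) for v in ts_power(t, 2)])  # → [0.04, 0.49, 0.16, 0.64]
```

The printed result reproduces Example 3.3(4) above.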
The calculated results above are also illustrated in Fig. 2. In panel (a) of Fig. 2, \(\overrightarrow {t}\) is represented by purple dots and curves, green dots and curves denote \(\overrightarrow {{t_{1} }}\), and blue dots and curves represent the TSHFSE \(\overrightarrow {{t_{2} }}\) of Example 3.3. Compared with \(\overrightarrow { \cap }\) and \(\overrightarrow { \cup } ,\) the calculated results of \(\overrightarrow { \otimes }\) and \(\overrightarrow { \oplus }\) fluctuate less, respectively. Meanwhile, \(\overrightarrow {t}\) and its complement have opposite volatilities.
Property 3.1.
Given three TSHFSEs \(\overrightarrow {t} ,\) \(\overrightarrow {{t_{1} }} ,\) \(\overrightarrow {{t_{2} }}\) with the same length, the operators \(\overrightarrow { \cup } ,\) \(\overrightarrow { \cap }\) satisfy the following properties,
(1) commutativity:
(2) associativity:
(3) distributivity:
The proofs are trivial.
Property 3.2.
Given three TSHFSEs \(\overrightarrow {t} ,\) \(\overrightarrow {{t_{1} }} ,\) \(\overrightarrow {{t_{2} }}\) with the same length, the operators \(\overrightarrow { \oplus } ,\) \(\overrightarrow { \otimes }\) satisfy the following properties,
(1) commutativity:
(2) associativity:
(3) distributivity:
The proofs are trivial.
Property 3.3.
Given two TSHFSEs \(\overrightarrow {{t_{1} }} = \{ \alpha_{1}^{(1)} ,\alpha_{2}^{(2)} , \cdots ,\alpha_{m}^{(m)} \}\) and \(\overrightarrow {{t_{2} }} = \{ \beta_{1}^{(1)} ,\beta_{2}^{(2)} , \cdots ,\beta_{m}^{(m)} \}\) with the same length, the operators \(\overrightarrow { \cup } ,\) \(\overrightarrow { \cap }\), \(\overrightarrow { \oplus } ,\) \(\overrightarrow { \otimes }\) satisfy the following properties,
Proof.
First, we verify that \(\overrightarrow {{t_{1} }} \mathop \oplus \limits^{ \to } \overrightarrow {{t_{2} }} \ge \overrightarrow {{t_{1} }} \mathop \cup \limits^{ \to } \overrightarrow {{t_{2} }} .\) As \(\overrightarrow {{t_{1} }} \mathop \oplus \limits^{ \to } \overrightarrow {{t_{2} }} = \mathop \cup \limits_{{\alpha_{i}^{(i)} \in \overrightarrow {{t_{1} }} ,\beta_{i}^{(i)} \in \overrightarrow {{t_{2} }} }}^{ \to } \{ \alpha_{i}^{(i)} + \beta_{i}^{(i)} - (\alpha_{i} \beta_{i} )^{(i)} \} ,\) \(\overrightarrow {{t_{1} }} \mathop \cup \limits^{ \to } \overrightarrow {{t_{2} }} = \mathop \cup \limits_{{\alpha_{i}^{(i)} \in \overrightarrow {{t_{1} }} ,\beta_{i}^{(i)} \in \overrightarrow {{t_{2} }} }}^{ \to } \max \{ \alpha_{i}^{(i)} ,\beta_{i}^{(i)} \} ,\) \(0 \le \alpha_{i}^{(i)} \le 1,0 \le \beta_{i}^{(i)} \le 1,\) then,
By combining the above two inequalities,
It is obvious that \(\overrightarrow {{t_{1} }} \mathop \cup \limits^{ \to } \overrightarrow {{t_{2} }} \ge \overrightarrow {{t_{1} }} \mathop \cap \limits^{ \to } \overrightarrow {{t_{2} }}\), and \(\overrightarrow {{t_{1} }} \mathop \cap \limits^{ \to } \overrightarrow {{t_{2} }} \ge \overrightarrow {{t_{1} }} \mathop \otimes \limits^{ \to } \overrightarrow {{t_{2} }}\) can be obtained by using the same method above.
The proof is completed.
Theorem 3.1.
Given three TSHFSEs \(\overrightarrow {t} ,\) \(\overrightarrow {{t_{1} }}\), \(\overrightarrow {{t_{2} }}\) with the same length, the operators of Definition 3.3 satisfy the following properties:
(1)
(2)
(3)
(4)
(5)
(6)
Proof.
(1).
(2)
(3)
(4)
(5)
(6)
Score function and fluctuated hesitant information under TSHFS
Definition 3.4.
Given a TSHFSE \(\overrightarrow {t}\), to compare different TSHFSEs, the score function of \(\overrightarrow {t}\) is defined as,
where \(n(\overrightarrow {t} )\) denotes the length of \(\overrightarrow {t}\), and it is easy to verify that \(K(\overrightarrow {t} ) \in [0,1].\)
Remark 3.1.
Assume that there are two TSHFSEs \(\overrightarrow {{t_{1} }} ,\) \(\overrightarrow {{t_{2} }} .\) If \(K(\overrightarrow {{t_{1} }} )\) > \(K(\overrightarrow {{t_{2} }} )\), then \(\overrightarrow {{t_{1} }}\) > \(\overrightarrow {{t_{2} }}\); if \(K(\overrightarrow {{t_{1} }} )\) < \(K(\overrightarrow {{t_{2} }} )\), then \(\overrightarrow {{t_{1} }}\) < \(\overrightarrow {{t_{2} }}\); and if \(K(\overrightarrow {{t_{1} }} )\) = \(K(\overrightarrow {{t_{2} }} )\), then \(\overrightarrow {{t_{1} }}\) = \(\overrightarrow {{t_{2} }}\).
Example 3.4.
Given TSHFSEs \(\overrightarrow {{t_{1} }} = \Big\{ 0.9^{(1)} ,0.5^{(2)} ,0.3^{(3)} ,0.7^{(4)} \Big\}\), \(\overrightarrow {{t_{2} }} = \Big\{ 0.5^{(1)} ,0.7^{(2)} ,0.9^{(3)} ,0.3^{(4)} \Big\}\), we compare \(\overrightarrow {{t_{1} }}\) and \(\overrightarrow {{t_{2} }}\) by using the score function above,
As \(K(\overrightarrow {{t_{1} }} )\) < \(K(\overrightarrow {{t_{2} }} )\), it is derived that \(\overrightarrow {{t_{1} }}\) < \(\overrightarrow {{t_{2} }}\). However, if the influence of the sequence of membership degrees is not considered, then according to Definition 2.6, \(\overrightarrow {{t_{1} }}\) is equal to \(\overrightarrow {{t_{2} }}\) under the HFS environment. The score function of Definition 3.4 not only reflects attitudes as is usually done under the traditional HFS environment, but also implies that decision makers gain a deeper understanding of the alternatives as time goes forward, which conforms to common sense.
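The printed equation for K is not reproduced in this excerpt; the following sequence-weighted mean, with weight i/(1 + 2 + ⋯ + m) on the i-th moment, is an assumed form offered because it reproduces both values of Example 3.4:

```python
def ts_score(t):
    """Sequence-weighted score of a TSHFSE: later moments weigh more,
    with weight i / (1 + 2 + ... + m) on the i-th value.
    This is an assumed form that reproduces Example 3.4."""
    m = len(t)
    denom = m * (m + 1) / 2
    return sum((i + 1) * g for i, g in enumerate(t)) / denom

print(round(ts_score([0.9, 0.5, 0.3, 0.7]), 2))  # → 0.56, as in Example 3.4
print(round(ts_score([0.5, 0.7, 0.9, 0.3]), 2))  # → 0.58, as in Example 3.4
```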
The property of reflecting hesitant attitudes through fluctuated information is the core of TSHFS, and the following fluctuated hesitant information is defined to describe the hesitance reflected on time-sequences.
Definition 3.5.
Assume there is a TSHFSE \(\overrightarrow {t} = \Big\{ \gamma_{1}^{(1)} ,\gamma_{2}^{(2)} , \cdots ,\gamma_{m}^{(m)} \Big\}\). It can be transformed into \(\overrightarrow {t^{\prime}} = \Big\{ (\gamma_{i}^{(i)} )^{(1)} ,(\gamma_{j}^{(j)} )^{(2)} , \ldots ,(\gamma_{k}^{(k)} )^{(m - 1)} ,(\gamma_{g}^{(g)} )^{(m)} \,\Big|\, \gamma_{i}^{(i)} \le \gamma_{j}^{(j)} \le \cdots \le \gamma_{k}^{(k)} \le \gamma_{g}^{(g)} ,\; i,j, \ldots ,k,g \le m \Big\}\) by arranging the membership degrees of \(\overrightarrow {t}\) from the smallest to the largest, and into \(\overrightarrow {t^{\prime\prime}} = \Big\{ (\gamma_{g}^{(g)} )^{(1)} ,(\gamma_{k}^{(k)} )^{(2)} , \ldots ,(\gamma_{j}^{(j)} )^{(m - 1)} ,(\gamma_{i}^{(i)} )^{(m)} \,\Big|\, \gamma_{g}^{(g)} \ge \gamma_{k}^{(k)} \ge \cdots \ge \gamma_{j}^{(j)} \ge \gamma_{i}^{(i)} ,\; i,j, \ldots ,k,g \le m \Big\}\) by arranging the membership degrees of \(\overrightarrow {t}\) from the largest to the smallest. To describe hesitant degrees induced by fluctuations of membership degrees, the fluctuated hesitant information (FI) of \(\overrightarrow {t}\) is defined as,
Furthermore, it is easy to verify that \({\text{FI}}(\overrightarrow {t} ) \in [0,1].\)
Example 3.5.
For two TSHFSEs \(\overrightarrow {{t_{1} }} = \{ 0.9^{(1)} ,0.5^{(2)} ,0.3^{(3)} ,0.7^{(4)} \} ,\) \(\overrightarrow {{t_{2} }} = \{ 0.5^{(1)} ,0.7^{(2)} ,0.9^{(3)} ,0.3^{(4)} \} ,\) according to Definition 3.5,
Though \(\overrightarrow {{t_{1} }}\) and \(\overrightarrow {{t_{2} }}\) have the same membership degrees, they have different fluctuations on time-sequences, which reflect important and distinct kinds of hesitant attitudes of decision makers.
Example 3.6.
Let the HFEs h1 = {0.2,0.3,0.6,0.8,0.9} and h2 = {0.2,0.8,0.6,0.3,0.9} from “Introduction” be transformed into TSHFSEs, namely, \(\overrightarrow {{h_{1} }} = \{ 0.2^{(1)} ,0.3^{(2)} ,0.6^{(3)} ,0.8^{(4)} ,0.9^{(5)} \} ,\) \(\overrightarrow {{h_{2} }} = \{ 0.2^{(1)} ,0.8^{(2)} ,0.6^{(3)} ,0.3^{(4)} ,0.9^{(5)} \} .\) Then \(FI(\overrightarrow {{h_{1} }} ) = 0.2538\) and \(FI(\overrightarrow {{h_{2} }} ) = 0.4644,\) which correspond correctly to the fluctuations of \(h_{1}\) and \(h_{2}\) with fixed sequences in Fig. 1.
By combining the score function of Definition 3.4 with the fluctuated hesitant information above, the following method compares TSHFSEs more reasonably.
Remark 3.2.
Given two TSHFSEs \(\overrightarrow {{t_{1} }}\), \(\overrightarrow {{t_{2} }}\) with the same length, according to Definitions 3.4 and 3.5,
1. if \(K(\overrightarrow {{t_{1} }} )^{{{\text{FI}}(\overrightarrow {{t_{1} }} ) + 1}} < K(\overrightarrow {{t_{2} }} )^{{{\text{FI}}(\overrightarrow {{t_{2} }} ) + 1}} ,\) then \(\overrightarrow {{t_{1} }} < \overrightarrow {{t_{2} }} ;\)
2. if \(K(\overrightarrow {{t_{1} }} )^{{{\text{FI}}(\overrightarrow {{t_{1} }} ) + 1}} > K(\overrightarrow {{t_{2} }} )^{{{\text{FI}}(\overrightarrow {{t_{2} }} ) + 1}} ,\) then \(\overrightarrow {{t_{1} }} > \overrightarrow {{t_{2} }} ;\)
3. if \(K(\overrightarrow {{t_{1} }} )^{{{\text{FI}}(\overrightarrow {{t_{1} }} ) + 1}} = K(\overrightarrow {{t_{2} }} )^{{{\text{FI}}(\overrightarrow {{t_{2} }} ) + 1}} ,\) then \(\overrightarrow {{t_{1} }} = \overrightarrow {{t_{2} }} .\)
Specifying \(\overrightarrow {{t_{1} }}\), \(\overrightarrow {{t_{2} }}\) as in Examples 3.4 and 3.5, we have \(K(\overrightarrow {{t_{1} }} )^{{{\text{FI}}(\overrightarrow {{t_{1} }} ) + 1}} = 0.56^{1.5058} = 0.4177 < K(\overrightarrow {{t_{2} }} )^{{{\text{FI}}(\overrightarrow {{t_{2} }} ) + 1}} = 0.58^{1.4954} = 0.4428,\) and it is derived that \(\overrightarrow {{t_{1} }} < \overrightarrow {{t_{2} }} .\) Moreover, the following relationship can be acquired.
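Using the K and FI values already reported in Examples 3.4 and 3.5 (the FI formula itself is not re-derived here), the comparison law of Remark 3.2 works out as:

```python
# Comparison law of Remark 3.2 with the K and FI values from the text.
K1, FI1 = 0.56, 0.5058   # for t1 (Examples 3.4 and 3.5)
K2, FI2 = 0.58, 0.4954   # for t2

s1 = K1 ** (FI1 + 1)     # ≈ 0.4177
s2 = K2 ** (FI2 + 1)     # ≈ 0.4428
print(s1 < s2)           # True, hence t1 < t2
```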
Theorem 3.2.
Assume that two TSHFSEs \(\overrightarrow {{t_{1} }}\), \(\overrightarrow {{t_{2} }}\) have the same length; according to Definitions 3.4 and 3.5, \(K(\overrightarrow {{t_{1} }} ),\) \(K(\overrightarrow {{t_{2} }} ),\) \({\text{FI(}}\overrightarrow {{t_{1} }} )\) and \({\text{FI}}(\overrightarrow {{t_{2} }} )\) are derived. If \(K(\overrightarrow {{t_{1} }} ) \le K(\overrightarrow {{t_{2} }} )\) and \({\text{FI}}(\overrightarrow {{t_{1} }} ) \ge {\text{FI}}(\overrightarrow {{t_{2} }} ),\) it holds that,
Proof.
It is easy to obtain that \(K(\overrightarrow {{t_{1} }} ) \in [0,1],\) \(K(\overrightarrow {{t_{2} }} ) \in [0,1],\) \({\text{FI}}(\overrightarrow {{t_{1} }} ) \in [0,1],\) and \({\text{FI}}(\overrightarrow {{t_{2} }} ) \in [0,1],\) then,
Namely, \(K(\overrightarrow {{t_{1} }} ) - K(\overrightarrow {{t_{2} }} ) \le K(\overrightarrow {{t_{1} }} )^{{{\text{FI}}(\overrightarrow {{t_{1} }} ) + 1}} - K(\overrightarrow {{t_{2} }} )^{{{\text{FI}}(\overrightarrow {{t_{2} }} ) + 1}}\), the proof is finished.
Theorem 3.2 shows that, after introducing the fluctuated hesitant information into the score function, the resulting comparison has a better ability to differentiate among TSHFSEs.
Aggregation operators under TSHFS
Definition 3.6.
Assume that there are TSHFSEs \(\{ \overrightarrow {{t_{1} }} ,\overrightarrow {{t_{2} }} , \ldots ,\overrightarrow {{t_{m} }} \}\) with the same length and with respective weights \(\left\{ {w_{j} \left| {0 \le w_{j} \le 1,\sum\nolimits_{1 \le j \le m} {w_{j} } = 1} \right.} \right\}\) that need to be aggregated; we define the weighted averaging operator under TSHFS as follows,
Example 3.7.
Assume that there are two TSHFSEs \(\overrightarrow {{t_{1} }} = \left\{ {0.9^{(1)} ,0.5^{(2)} ,0.3^{(3)} ,0.7^{(4)} } \right\}\),\(\overrightarrow {{t_{2} }} = \left\{ {0.5^{(1)} ,0.7^{(2)} ,0.9^{(3)} ,0.3^{(4)} } \right\}\), with respective weights {0.5,0.5}, then,
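Eq. (3.34) is not reproduced in this excerpt; assuming the usual hesitant weighted-averaging form applied moment by moment (an assumption, not the paper's verified formula), TSWA and Example 3.7 can be sketched as:

```python
from math import prod

def tswa(ts, w):
    """Weighted averaging of equal-length TSHFSEs, position-wise:
    1 - prod_j (1 - g_j[i])^w_j at moment i (assumed HFWA-style form)."""
    m = len(ts[0])
    return [1 - prod((1 - t[i]) ** wj for t, wj in zip(ts, w))
            for i in range(m)]

t1 = [0.9, 0.5, 0.3, 0.7]
t2 = [0.5, 0.7, 0.9, 0.3]
print([round(v, 4) for v in tswa([t1, t2], [0.5, 0.5])])
# → [0.7764, 0.6127, 0.7354, 0.5417] under this assumed form
```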
Definition 3.7.
Assume that there are TSHFSEs \(\{ \overrightarrow {{t_{1} }} ,\overrightarrow {{t_{2} }} , \cdots ,\overrightarrow {{t_{m} }} \}\) with the same length and with respective weights \(\left\{ {w_{i} \left| {0 \le w_{i} \le 1,\sum\nolimits_{1 \le i \le m} {w_{i} } = 1} \right.} \right\}\) that need to be aggregated; the geometric averaging operator under TSHFS is defined as follows,
Example 3.8.
Assume that there are two TSHFSEs \(\overrightarrow {{t_{1} }} = \left\{ {0.9^{(1)} ,0.5^{(2)} ,0.3^{(3)} ,0.7^{(4)} } \right\}\), \(\overrightarrow {{t_{2} }} = \left\{ {0.5^{(1)} ,0.7^{(2)} ,0.9^{(3)} ,0.3^{(4)} } \right\}\) with equal weights {0.5, 0.5}; then,
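Analogously to the weighted averaging case, and again as an assumption since Eq. (3.35) is not reproduced here, a position-wise geometric-averaging sketch of Example 3.8 is:

```python
from math import prod

def tswg(ts, w):
    """Geometric averaging of equal-length TSHFSEs, position-wise:
    prod_j g_j[i]^w_j at moment i (assumed HFWG-style form)."""
    m = len(ts[0])
    return [prod(t[i] ** wj for t, wj in zip(ts, w)) for i in range(m)]

print([round(v, 4) for v in tswg([[0.9, 0.5, 0.3, 0.7],
                                  [0.5, 0.7, 0.9, 0.3]], [0.5, 0.5])])
# → [0.6708, 0.5916, 0.5196, 0.4583] under this assumed form
```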
Theorem 3.3.
Assume that there are TSHFSEs \(\{ \overrightarrow {{t_{1} }} ,\overrightarrow {{t_{2} }} , \ldots ,\overrightarrow {{t_{m} }} \}\) of a TSHFS on the reference set with respective weights \(\left\{ {w_{j} \left| {0 \le w_{j} \le 1,\sum\nolimits_{1 \le j \le m} {w_{j} } = 1} \right.} \right\}\) that need to be aggregated; then,
According to Lemma 1 in Xia et al. [34], the result of Theorem 3.3 is easily derived.
Distance measures under TSHFS
Definition 3.8
Given a mapping \(d:{\text{TSHFS}}(X) \times {\text{TSHFS}}(X) \to [0,1]\) and two TSHFSs T, P, \((T,P) \to d(T,P)\) is called a distance measure between T and P if it satisfies the following conditions for any TSHFS G,
Definition 3.9.
For two TSHFSs T, P on the reference set, \(T = \{ x,\overrightarrow {{t_{1} }} ,\overrightarrow {{t_{2} }} , \ldots ,\overrightarrow {{t_{n} }} \} ,\)\(P = \{ x,\overrightarrow {{p_{1} }} ,\overrightarrow {{p_{2} }} , \ldots ,\overrightarrow {{p_{n} }} \} ,\) \(\{ \overrightarrow {{t_{j} }} |\overrightarrow {{t_{j} }} = \{ \alpha_{j1}^{(1)} ,\alpha_{j2}^{(2)} , \ldots ,\alpha_{jm}^{(m)} \} ,j \in [1,n]\} ,\)\(\{ \overrightarrow {{p_{j} }} |\overrightarrow {{p_{j} }} = \{ \beta_{j1}^{(1)} ,\beta_{j2}^{(2)} , \cdots ,\beta_{jm}^{(m)} \} ,j \in [1,n]\} ,\) according to Definition 3.8, we define distance measures between T, P as follows,
where \(\left\{ {w_{j} \left| {0 \le w_{j} \le 1,\sum\nolimits_{j = 1}^{n} {w_{j} } = 1} \right.} \right\}\) are the weights of the TSHFSEs, \(\lambda \ge 1,\) and m denotes the length of the TSHFSEs.
Example 3.9.
Given two TSHFSs \(T_{1} = \{ x,\{ 0.9^{(1)} ,0.5^{(2)} ,0.3^{(3)} ,0.7^{(4)} \} \} ,\) \(P_{1} = \{ x,\{ 0.5^{(1)} ,0.7^{(2)} ,0.9^{(3)} ,0.3^{(4)} \} \} ,\) according to the Eq. (3.37), the distance between T1 and P1 is,
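Since Eq. (3.37) itself is not reproduced in this excerpt, the following position-wise weighted Hamming form is offered as a sketch of d1: unlike HFS distances, values are compared at the same time position and never re-sorted.

```python
def ts_d1(T, P, w):
    """Hamming-type distance between TSHFSs, given as lists of
    equal-length TSHFSEs with weights w; values are compared at the
    same time position and never re-sorted (a sketch of d1; the exact
    Eq. (3.37) is not reproduced in this excerpt)."""
    return sum(wj * sum(abs(a - b) for a, b in zip(tj, pj)) / len(tj)
               for tj, pj, wj in zip(T, P, w))

T1 = [[0.9, 0.5, 0.3, 0.7]]
P1 = [[0.5, 0.7, 0.9, 0.3]]
print(round(ts_d1(T1, P1, [1.0]), 4))  # → 0.4 under this sketch
```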
Theorem 3.4.
The distance measure d1 defined in Definition 3.9 satisfies all conditions of Definition 3.8, while the distance measures d2 and dλ satisfy only the first three conditions of Definition 3.8.
Proof.
It is easy to verify that d1, d2 and dλ satisfy the first three conditions of Definition 3.8; we now only prove that d1 satisfies the fourth condition. Assume that there are three TSHFSs T, P, G on the reference set, \(T = \{ x,\overrightarrow {{t_{1} }} ,\overrightarrow {{t_{2} }} , \ldots ,\overrightarrow {{t_{n} }} \} ,\)\(P = \{ x,\overrightarrow {{p_{1} }} ,\overrightarrow {{p_{2} }} , \ldots ,\overrightarrow {{p_{n} }} \} ,\)\(G = \{ x,\overrightarrow {{g_{1} }} ,\overrightarrow {{g_{2} }} , \ldots ,\overrightarrow {{g_{n} }} \} ,\)\(\{ \overrightarrow {{t_{j} }} |\overrightarrow {{t_{j} }} = \{ \alpha_{j1}^{(1)} ,\alpha_{j2}^{(2)} , \ldots ,\alpha_{jm}^{(m)} \} ,j \in [1,n]\} ,\)\(\{ \overrightarrow {{p_{j} }} |\overrightarrow {{p_{j} }} = \{ \beta_{j1}^{(1)} ,\beta_{j2}^{(2)} , \ldots ,\beta_{jm}^{(m)} \} ,j \in [1,n]\} ,\) \(\{ \overrightarrow {{g_{j} }} |\overrightarrow {{g_{j} }} = \{ \gamma_{j1}^{(1)} ,\gamma_{j2}^{(2)} , \ldots ,\gamma_{jm}^{(m)} \} ,j \in [1,n]\} .\) According to Definitions 3.8 and 3.9,
And,
namely, \(d_{1} (T,P) + d_{1} (P,G) \ge d_{1} (T,G);\) the other cases can be proved by the same method. The proof is finished.
Definition 3.10.
For two TSHFSs T, P on the reference set X, \(T = \{ x,\overrightarrow {{t_{1} }} ,\overrightarrow {{t_{2} }} , \ldots ,\overrightarrow {{t_{n} }} \} ,\)\(P = \{ x,\overrightarrow {{p_{1} }} ,\overrightarrow {{p_{2} }} , \ldots ,\overrightarrow {{p_{n} }} \} ,\) \(\{ \overrightarrow {{t_{j} }} |\overrightarrow {{t_{j} }} = \{ \alpha_{j1}^{(1)} ,\alpha_{j2}^{(2)} , \ldots ,\alpha_{jm}^{(m)} \} ,j \in [1,n]\} ,\) \(\{ \overrightarrow {{p_{j} }} |\overrightarrow {{p_{j} }} = \{ \beta_{j1}^{(1)} ,\beta_{j2}^{(2)} , \ldots ,\beta_{jm}^{(m)} \} ,j \in [1,n]\} .\) By adding the fluctuated hesitant information, the following distance measure is defined,
where \(\left\{ {w_{j} \left| {0 \le w_{j} \le 1,\sum\nolimits_{1 \le j \le n} {w_{j} } = 1} \right.} \right\}\) are the weights of the TSHFSEs, and \(\lambda \ge 1\). When \(\lambda = 1\) and \(\lambda = 2\), the form of the Hamming distance and the form of the Euclidean distance are obtained, respectively.
The proposed model for MADM
As usual, to solve problems of MADM with several kinds of necessary attributes under the traditional HFS environment, a series of alternatives \(\{ A_{1} ,A_{2} , \ldots ,A_{h} \}\) needs to be ranked. Each alternative has n attributes \(\{ t_{i1} ,t_{i2} , \ldots ,t_{in} \}\) (i denotes the i-th alternative) with weights \(\left\{ {w_{j} \left| {0 \le w_{j} } \right. \le 1,\;\sum\nolimits_{j = 1}^{n} {w_{j} = 1} } \right\}\). Under TSHFS, the attributes take the form \(\{ \overrightarrow {{t_{i1} }} ,\overrightarrow {{t_{i2} }} , \ldots ,\overrightarrow {{t_{in} }} \}\) (i denotes the i-th alternative). We propose the following five steps to solve problems of MADM, and the final results will be sorted from the best alternative to the worst one, as also illustrated in Fig. 3.
Assume that alternatives are represented by several attributes under TSHFS as,
The ideal alternative is \(A_{I} = \{ \overrightarrow {{t_{I1} }} ,\overrightarrow {{t_{I2} }} , \ldots ,\overrightarrow {{t_{In} }} \} = \{ \{ \alpha_{I1}^{(1)} ,\alpha_{I2}^{(2)} , \ldots ,\alpha_{Im}^{(m)} \} ,\{ \beta_{I1}^{(1)} ,\beta_{I2}^{(2)} , \ldots ,\beta_{Im}^{(m)} \} , \ldots ,\{ \gamma_{I1}^{(1)} ,\gamma_{I2}^{(2)} , \ldots ,\gamma_{Im}^{(m)} \} \} .\)
Step1: Utilize TSWA of the Eq. (3.34),
\(\overrightarrow {{t_{1} }} = {\text{TSWA(}}\overrightarrow {{t_{11} }} ,\overrightarrow {{t_{12} }} , \ldots ,\overrightarrow {{t_{1m} }} ),\) \(\overrightarrow {{t_{2} }} = {\text{TSWA}}(\overrightarrow {{t_{21} }} ,\overrightarrow {{t_{22} }} , \ldots ,\overrightarrow {{t_{2m} }} ), \ldots ,\) \(\overrightarrow {{t_{h} }} = {\text{TSWA}}(\overrightarrow {{t_{h1} }} ,\overrightarrow {{t_{h2} }} , \ldots ,\overrightarrow {{t_{hm} }} ),\) \(\overrightarrow {{t_{I} }} = {\text{TSWA}}(\overrightarrow {{t_{I1} }} ,\overrightarrow {{t_{I2} }} , \ldots ,\overrightarrow {{t_{Im} }} ).\)
Step 2: Utilize the score function of the Eq. (3.29) and the fluctuated hesitant information of Eq. (3.30–3.32),
Step 3: According to d1 of the Eq. (3.37),
\(D(A_{1} ) = d_{1} (\overrightarrow {{t_{1} }} ,\overrightarrow {{t_{I} }} ),\) \(D(A_{2} ) = d_{1} (\overrightarrow {{t_{2} }} ,\overrightarrow {{t_{I} }} ), \ldots ,\) \(D(A_{h} ) = d_{1} (\overrightarrow {{t_{h} }} ,\overrightarrow {{t_{I} }} ).\)
As d1 satisfies all conditions of the definition of a distance measure under TSHFS, we choose it in the proposed model. If \(D(A_{i} ) = 0,\) Ai would be the best alternative.
Step 4: The synthetic values can be achieved,
\(\mathop {A_{1} }\limits^{ \cdot } = \frac{{S(A_{1} )}}{{D(A_{1} ) + 10^{ - 5} }},\) \(\mathop {A_{2} }\limits^{ \cdot } = \frac{{S(A_{2} )}}{{D(A_{2} ) + 10^{ - 5} }} \cdots ,\) \(\mathop {A_{h} }\limits^{ \cdot } = \frac{{S(A_{h} )}}{{D(A_{h} ) + 10^{ - 5} }}.\)
Step 5: According to the corresponding synthetic values \(\mathop {A_{1} }\limits^{ \cdot } ,\mathop {A_{2} }\limits^{ \cdot } , \ldots ,\mathop {A_{h} }\limits^{ \cdot }\), arrange \(\{ A_{1} ,A_{2} , \cdots ,A_{h} \}\) from the best one to the worst one.
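Steps 4 and 5 reduce to a ratio and a sort; with hypothetical S and D values (placeholders, not taken from the paper's cases), they can be sketched as:

```python
# Steps 4-5 of the model: synthetic value = S / (D + 1e-5), then rank
# from best to worst. The S and D values below are hypothetical.
S = {"A1": 0.46, "A2": 0.52, "A3": 0.38}   # synthetic scores (hypothetical)
D = {"A1": 0.13, "A2": 0.05, "A3": 0.12}   # distances to the ideal (hypothetical)

synthetic = {a: S[a] / (D[a] + 1e-5) for a in S}
ranking = sorted(synthetic, key=synthetic.get, reverse=True)
print(ranking)  # → ['A2', 'A1', 'A3']
```

The small constant 1e-5 in the denominator keeps the ratio finite when an alternative coincides with the ideal one (D = 0).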
Experiments
Illustrated examples
Case 1. MADM of data association [34]. Data association is an important research topic in multiple-target tracking under complex flight environments. Assume that four different targets (A1, A2, A3, A4) are tracked simultaneously, with four categories of attribute information sampled from four different types of sensors (ti1, ti2, ti3, ti4, where i denotes the i-th target) under TSHFS. To improve the tracking accuracy, certain trajectory points sampled from the complex flight environment need to be assigned to the trajectories of the four different targets. Given that the sampled information of a trajectory point is AI = {{0.4(1),0.6(2),0.5(3)}, {0.2(1),0.4(2)}, {0.5(1),0.7(2),0.6(3)}, {0.8(1),0.9(2)}} and the weight vector of the four sensors is w = (0.25, 0.25, 0.25, 0.25)T, the corresponding decision matrix is shown in Table 2.
To make sure that the TSHFSEs have the same length while avoiding changing the “attitudes” of the sensors, we insert “0” at the front of the shorter TSHFSEs throughout the paper. The TSHFSEs in Table 2 can then be formed as follows,
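The zero-padding step described above is straightforward to state as code:

```python
def pad_front(t, m):
    """Equalize TSHFSE lengths by inserting 0 at the front of shorter
    sequences, so earlier (absent) moments carry zero membership and
    the recorded attitudes of the sensors are left unchanged."""
    return [0.0] * (m - len(t)) + list(t)

print(pad_front([0.2, 0.4], 3))  # → [0.0, 0.2, 0.4]
```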
Step 1: Utilize TSWA of the Eq. (3.34),
Step 2: Utilize the score function of the Eq. (3.29) and the fluctuated hesitant information of Eqs. (3.30–3.32),
Step 3: According to d1 of the Eq. (3.37),
\(D(A_{1} ) = d_{1} (\overrightarrow {{t_{1} }} ,\overrightarrow {{t_{I} }} ) = 0.1283,\) \(D(A_{2} ) = d_{1} (\overrightarrow {{t_{2} }} ,\overrightarrow {{t_{I} }} ) = 0.1433,\) \(D(A_{3} ) = d_{1} (\overrightarrow {{t_{3} }} ,\overrightarrow {{t_{I} }} ) = 0.1183,\) \(D(A_{4} ) = d_{1} (\overrightarrow {{t_{4} }} ,\overrightarrow {{t_{I} }} ) = 0.0050.\)
Step 4: The synthetic values can be achieved,
\(\mathop {A_{1} }\limits^{ \cdot } = \frac{{S(A_{1} )}}{{D(A_{1} ) + 10^{ - 5} }} = 3.6231,\) \(\mathop {A_{2} }\limits^{ \cdot } = \frac{{S(A_{2} )}}{{D(A_{2} ) + 10^{ - 5} }} = 3.3784,\) \(\mathop {A_{3} }\limits^{ \cdot } = \frac{{S(A_{3} )}}{{D(A_{3} ) + 10^{ - 5} }} = 3.9092,\) \(\mathop {A_{4} }\limits^{ \cdot } = \frac{{S(A_{4} )}}{{D(A_{4} ) + 10^{ - 5} }} = 69.1182.\)
Step 5: According to the corresponding synthetic values \(\mathop {A_{1} }\limits^{ \cdot } ,\mathop {A_{2} }\limits^{ \cdot } ,\mathop {A_{3} }\limits^{ \cdot } \mathop {,A_{4} }\limits^{ \cdot } ,\) arrange A1, A2, A3, A4 from the best one to the worst one (Table 3).
Common sense says that the ideal alternative is most similar to A4, and all ranking results can identify the best alternative, though their arrangements differ to some extent. If the decision matrix of Case 1 in Table 2 is treated under the HFS environment, the similar ranking result \(A_{4} \succ A_{2} \succ A_{3} \succ A_{1}\) can be obtained by the distance measure \(d_{n}\) of Xu et al. [33], which is the same as that of dλ(λ = 2). The influence of the parameter λ in dλ and dkfλ on the ranking results is shown in Tables 4 and 5. When λ is larger than 2, the ranking results of dλ and dkfλ become steady. As d1 satisfies the triangle inequality, the ranking result of d1 is more reliable than that of dλ when λ is larger than 2. By comprehensively combining the score function, the fluctuated hesitant information and the distance measure d1, the ranking result of the proposed decision model is the most reasonable. What is more, from the calculated values of the alternatives in the above tables, the proposed model differentiates the alternatives better.
Case 2. MADM of supplier selection [48]. An enterprise's board of directors with five members is to make a plan for the development of forthcoming large projects. Four possible large projects (A1, A2, A3, A4) are the alternatives, each with four categories of attributes (ti1: financial perspective, ti2: customer satisfaction, ti3: internal business process perspective, ti4: learning and growth perspective; all of them are maximization types, and i denotes the i-th alternative). To select the most appropriate one, the directors should compare the alternatives and rank them. The weight vector of the attributes is supposed to be w = (0.25, 0.25, 0.25, 0.25)T. The corresponding decision matrix is shown in Table 6, and the ideal alternative is represented as AI = {{0.4(1),0.6(2),0.8(3)}, {0.1(1),0.2(2),0.4(3),0.6(4)}, {0.3(1),0.4(2),0.6(3),0.8(4),0.9(5)}, {0.1(1), 0.3(2),0.9(3)}} under TSHFS.
To ensure that the TSHFEs are of the same length without changing the attitudes of decision makers, we insert "0" at the front of the shorter TSHFEs throughout the paper. The TSHFEs in Table 6 can then be formed as follows,
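The zero-insertion rule above can be sketched in Python. This is a minimal illustration only: the list-of-lists encoding of TSHFEs is our own assumption, not the paper's notation.

```python
def pad_tshfes(tshfes):
    """Equalize TSHFE lengths by inserting 0s at the front of shorter ones.

    Each TSHFE is modeled as a list of membership degrees ordered by time
    index. Prepending 0 (rather than repeating the largest or smallest
    membership degree, as in the HFE extension of Xu et al. [33]) avoids
    altering the decision makers' recorded attitudes at later moments.
    """
    target = max(len(e) for e in tshfes)
    return [[0.0] * (target - len(e)) + list(e) for e in tshfes]
```

For example, padding the pair {0.4, 0.6} and {0.1, 0.2, 0.4, 0.6} turns the first element into {0, 0, 0.4, 0.6} while the second is left unchanged.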
Step 1: Utilize the TSWA operator of Eq. (3.34),
Step 2: Utilize the score function of Eq. (3.29) and the fluctuated hesitant information of Eqs. (3.30)–(3.32),
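Since Eqs. (3.29)–(3.32) are not reproduced in this excerpt, the following sketch only illustrates plausible forms of the two quantities: the score as the mean membership degree over the time-sequence, and the fluctuated hesitant information as the accumulated absolute change between consecutive moments. Both forms are assumptions and may differ from the paper's exact definitions.

```python
def score(tshfe):
    # Assumed form of the score function: arithmetic mean of the
    # membership degrees over the time-sequence (Eq. (3.29) may differ).
    return sum(tshfe) / len(tshfe)

def fluctuation(tshfe):
    # Assumed form of the fluctuated hesitant information: total absolute
    # change between consecutive moments, capturing how strongly the
    # decision maker's attitude varies over time (Eqs. (3.30)-(3.32) may
    # normalize or weight this differently).
    return sum(abs(b - a) for a, b in zip(tshfe, tshfe[1:]))
```

Under these assumed forms, a TSHFE whose membership degrees swing back and forth yields a larger fluctuation value than a monotone one with the same score.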
Step 3: According to d1 of Eq. (3.27),
\(D(A_{1} ) = d_{1} (\overrightarrow {{t_{1} }} ,\overrightarrow {{t_{I} }} ) = 0.1317,\) \(D(A_{2} ) = d_{1} (\overrightarrow {{t_{2} }} ,\overrightarrow {{t_{I} }} ) = 0.0500,\) \(D(A_{3} ) = d_{1} (\overrightarrow {{t_{3} }} ,\overrightarrow {{t_{I} }} ) = 0.1217,\) \(D(A_{4} ) = d_{1} (\overrightarrow {{t_{4} }} ,\overrightarrow {{t_{I} }} ) = 0.1033.\)
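As Eq. (3.27) is not reproduced in this excerpt, the sketch below assumes d1 is the λ = 1 case of a normalized Minkowski-type distance between equal-length TSHFEs; the paper's definition may contain additional weighting terms, so the values above are not necessarily reproduced exactly by this form.

```python
def d_lambda(t1, t2, lam=1):
    # Assumed generalized (Minkowski-type) distance between two
    # equal-length TSHFEs; lam=1 gives a d1-style distance, which
    # satisfies the triangle inequality, while lam=2 gives a
    # Euclidean-style distance.
    assert len(t1) == len(t2), "TSHFEs must be padded to equal length"
    n = len(t1)
    return (sum(abs(a - b) ** lam for a, b in zip(t1, t2)) / n) ** (1 / lam)
```

For instance, with λ = 1 the distance between {0.9, 0.8, 0.9} and the ideal {1, 1, 1} is (0.1 + 0.2 + 0.1)/3.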
Step 4: The synthetic values can be obtained,
\(\mathop {A_{1} }\limits^{ \cdot } = \frac{{S(A_{1} )}}{{D(A_{1} ) + 10^{ - 5} }} = 2.4902,\) \(\mathop {A_{2} }\limits^{ \cdot } = \frac{{S(A_{2} )}}{{D(A_{2} ) + 10^{ - 5} }} = 7.6812,\) \(\mathop {A_{3} }\limits^{ \cdot } = \frac{{S(A_{3} )}}{{D(A_{3} ) + 10^{ - 5} }} = 2.9557,\) \(\mathop {A_{4} }\limits^{ \cdot } = \frac{{S(A_{4} )}}{{D(A_{4} ) + 10^{ - 5} }} = 3.4009.\)
Step 5: According to the corresponding synthetic values \(\mathop {A_{1} }\limits^{ \cdot } ,\mathop {A_{2} }\limits^{ \cdot } ,\mathop {A_{3} }\limits^{ \cdot } ,\mathop {A_{4} }\limits^{ \cdot } ,\) rank A1, A2, A3, A4 from the best to the worst,
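Steps 4 and 5 can be illustrated as follows. The helper `synthetic_value` is a hypothetical name for the ratio used by the model, and the dictionary simply reuses the synthetic values reported in Step 4 of Case 2.

```python
def synthetic_value(score_val, dist, eps=1e-5):
    # Synthetic value of the proposed model: score divided by the distance
    # to the ideal alternative; eps = 10**-5 guards against a zero
    # denominator when an alternative coincides with the ideal one.
    return score_val / (dist + eps)

# Step 5, using the synthetic values reported for Case 2:
synthetic = {"A1": 2.4902, "A2": 7.6812, "A3": 2.9557, "A4": 3.4009}
ranking = sorted(synthetic, key=synthetic.get, reverse=True)  # best first
```

A larger synthetic value means a higher score and/or a smaller distance to the ideal alternative, so sorting in descending order yields the ranking from best to worst.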
The ranking results of Case 2 are shown in Table 7. As the distance measure d1 satisfies the triangle inequality, we choose it as the measure in Case 2. Based on the initially stable ranking results of Table 5, λ is set to 3 for dkfλ. As common sense suggests, A2 is the most similar to the ideal alternative; all methods proposed in the paper identify the best alternative, though they differ slightly in the arrangements of the remaining ones. However, if the decision matrix in Table 6 is treated as being under the HFS environment, \(A_{1} \succ A_{2} \succ A_{4} \succ A_{3}\) is obtained by the distance measure dn of Xu et al. [33], which fails to single out the best alternative. The reason why dn of Xu et al. [33] misses the best alternative under HFS is that, when HFEs are of different lengths, the shorter ones are extended in [33] by repeating the biggest or the smallest membership degree, an operation that risks changing the attitudes of decision makers. Faced with the same situation, we instead insert 0 at the front of the shorter elements of the TSHFS decision matrix, which avoids this risk.
Real application to ranking evaluation of world universities
A reasonable evaluation of the running level of a university can not only guide candidates applying for universities, but also help universities recognize their own advantages and the disadvantages that need to be improved. Every year, well-known agencies rank and evaluate global universities according to several attributes, usually including Arts and Humanities (A–H), Life Sciences and Medicine (L–M), Engineering and Technology (ENG), Natural Sciences and Mathematics (SCI), and Social Sciences (SOC). Seven top universities are evaluated: Stanford University, Harvard University, Oxford University, Cambridge University, Berkeley University, Princeton University and Yale University. The corresponding decision matrix, adopted from [49, 50] according to the five attributes above, is shown in Table 8, with weights w = (0.2, 0.2, 0.2, 0.2, 0.2)T. The ideal university is represented as AI = {{1(1),1(2),1(3)}, {1(1),1(2),1(3)}, {1(1),1(2),1(3)}, {1(1),1(2),1(3)}, {1(1),1(2),1(3)}}.
Step 1: Utilize the TSWA operator of Eq. (3.34),
\({\text{Stanford}}:\overrightarrow {{t_{1} }} = \{ 0.9002^{(1)} ,0.8326^{(2)} ,0.9060^{(3)} \} ,\) \({\text{Harvard}}:\overrightarrow {{t_{2} }} = \{ 0.8988^{(1)} ,0.9248^{(2)} ,0.9244^{(3)} \} ,\)
\({\text{Oxford}}:\overrightarrow {{t_{3} }} = \{ 0.8878^{(1)} ,0.6438^{(2)} ,0.9242^{(3)} \} ,\) \({\text{Cambridge}}:\overrightarrow {{t_{4} }} = \{ 0.8750^{(1)} ,0.7550^{(2)} ,0.9280^{(3)} \} ,\)
\({\text{Berkeley}}:\overrightarrow {{t_{5} }} = \{ 0.8608^{(1)} ,0.8018^{(2)} ,0.8874^{(3)} \} ,\) \({\text{Princeton}}:\overrightarrow {{t_{6} }} = \{ 0.7906^{(1)} ,0.6650^{(2)} ,0.8316^{(3)} \} ,\)
\({\text{Yale}}:\overrightarrow {{t_{7} }} = \{ 0.8462^{(1)} ,0.6238^{(2)} ,0.8490^{(3)} \} ,\) \(A_{I} :\overrightarrow {{t_{I} }} = \{ 1^{(1)} ,1^{(2)} ,1^{(3)} \} .\)
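For illustration, the TSWA step can be sketched as a position-wise weighted average of the attribute TSHFEs. This simple arithmetic form is an assumption: Eq. (3.34) is not reproduced in this excerpt, and HFWA-style operators in the literature often use the algebraic-product form \(1 - \prod (1 - t)^{w}\) instead.

```python
def tswa(attribute_tshfes, weights):
    # Assumed TSWA sketch: at each time index k, take the weighted
    # arithmetic mean of the k-th membership degrees across all attributes
    # (requires the TSHFEs to be padded to equal length beforehand).
    length = len(attribute_tshfes[0])
    return [sum(w * e[k] for w, e in zip(weights, attribute_tshfes))
            for k in range(length)]
```

With equal weights, this reduces to the plain position-wise average of the attribute values at each moment of the time-sequence.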
Step 2: Utilize the score function of Eq. (3.29) and the fluctuated hesitant information of Eqs. (3.30)–(3.32),
Step 3: According to d1 of Eq. (3.27),
\(D({\text{Stanford}}) = d_{1} (\overrightarrow {{t_{1} }} ,\overrightarrow {{t_{I} }} ) = 0.1194,\) \(D({\text{Harvard}}) = d_{1} (\overrightarrow {{t_{2} }} ,\overrightarrow {{t_{I} }} ) = 0.0797,\) \(D({\text{Oxford}}) = d_{1} (\overrightarrow {{t_{3} }} ,\overrightarrow {{t_{I} }} ) = 0.1753,\) \(D({\text{Cambridge}}) = d_{1} (\overrightarrow {{t_{4} }} ,\overrightarrow {{t_{I} }} ) = 0.1385,\) \(D({\text{Berkeley}}) = d_{1} (\overrightarrow {{t_{5} }} ,\overrightarrow {{t_{I} }} ) = 0.1456,\)
\(D({\text{Princeton}}) = d_{1} (\overrightarrow {{t_{6} }} ,\overrightarrow {{t_{I} }} ) = 0.2308,\) \(D({\text{Yale}}) = d_{1} (\overrightarrow {{t_{7} }} ,\overrightarrow {{t_{I} }} ) = 0.2265.\)
Step 4: The synthetic values can be obtained,
\(\mathop {A_{1} }\limits^{ \cdot } = \frac{{S({\text{Stanford}})}}{{D({\text{Stanford}}) + 10^{ - 5} }} = 6.7931,\) \(\mathop {A_{2} }\limits^{ \cdot } = \frac{{{\text{S}}({\text{Harvard}})}}{{D({\text{Harvard}}) + 10^{ - 5} }} = 10.9128,\) \(\mathop {A_{3} }\limits^{ \cdot } = \frac{{{\text{S}}({\text{Oxford}})}}{{D({\text{Oxford}}) + 10^{ - 5} }} = 4.1949,\) \(\mathop {A_{4} }\limits^{ \cdot } = \frac{{{\text{S}}({\text{Cambridge}})}}{{D({\text{Cambridge}}) + 10^{ - 5} }} = 5.6704,\) \(\mathop {A_{5} }\limits^{ \cdot } = \frac{{{\text{S}}({\text{Berkeley}})}}{{D({\text{Berkeley}}) + 10^{ - 5} }} = 5.3232,\) \(\mathop {A_{6} }\limits^{ \cdot } = \frac{{{\text{S}}({\text{Princeton}})}}{{D({\text{Princeton}}) + 10^{ - 5} }} = 2.8820,\)
\(\mathop {A_{7} }\limits^{ \cdot } = \frac{{{\text{S}}({\text{Yale}})}}{{D({\text{Yale}}) + 10^{ - 5} }} = 2.9554.\)
Step 5: According to the corresponding synthetic values \(\mathop {A_{1} }\limits^{ \cdot } ,\mathop {A_{2} }\limits^{ \cdot } ,\mathop {A_{3} }\limits^{ \cdot } ,\mathop {A_{4} }\limits^{ \cdot } ,\mathop {A_{5} }\limits^{ \cdot } ,\mathop {A_{6} }\limits^{ \cdot } ,\mathop {A_{7} }\limits^{ \cdot } ,\) rank the seven universities from the best to the worst,
The ranking results of Alcantud et al. [49], Farhadinia et al. [50] and the proposed model are shown in Table 9. All three methods obtain the same ranking result, which demonstrates the effectiveness of the proposed model under the TSHFS environment in this real application. Moreover, the same ranking result is also obtained by the proposed d1 and dkfλ(λ = 3) under the TSHFS environment. From the values of the three methods in Table 9, the proposed model offers higher resolution among the alternatives, as it integrates several kinds of information: the distance measure, the score and the fluctuated hesitant information.
Conclusions
In this paper, by taking fluctuating hesitance on time-sequences into account, we have proposed a new type of hesitant fuzzy set, namely the time-sequential hesitant fuzzy set (TSHFS). We have also defined its basic operators, score function, aggregation operators and a series of distance measures. Moreover, we have proposed the definition of fluctuated hesitant information to measure the hesitant degrees brought by fluctuations of membership degrees on time-sequences. By integrating the proposed score function, fluctuated hesitant information and distance measure into a synthetic decision model under the TSHFS environment, we have applied the model to three different decision-making problems and obtained reasonable best alternatives and ranking results.
We put forward TSHFS on the assumption that the membership degrees of TSHFEs are given one by one on time-sequences. If the membership degrees are given almost simultaneously, or time-sequences are neglected, TSHFS reduces to the classical HFS. In the proposed decision model, the distance measure is used as the denominator; to avoid the case in which the distance is 0, \(10^{-5}\) is added to the denominator, and the influence of replacing \(10^{-5}\) with other values on the final results should be studied in the future. Moreover, the ranking results of the proposed model do not always coincide with those of d1, which acts as the denominator of the model. We consider the ranking results of the proposed model to be the more accurate ones, because the model integrates not only distance information but also score information and fluctuated hesitant information. Compared with other types of HFS, some important concepts are not discussed in this paper, such as entropy measures, cross-entropy measures and correlation coefficients. As another way of reflecting hesitant information through fluctuations of membership degrees on time-sequences, extending TSHFS to other types of hesitant fuzzy set, such as DHFS and IVHFS, would be an interesting study. To dig out further potential of TSHFS, future works will mainly focus on the topics mentioned above.