
1 Introduction

Worldwide, the population is aging. Recent developments in sensor and networking technologies, particularly the Internet of Things (IoT), have made it possible to develop a wide range of valuable applications such as remote health monitoring and intervention. These applications offer the potential to enhance the quality of life for the elderly, afford them a greater sense of security, and facilitate independent living [2, 10, 18]. As an example, in the following we depict an assistive service that tracks the completeness and consistency of daily routines performed by a person with dementia and determines when the resident needs assistance.

Scenario 1

Thomas is a 77-year-old frail man living alone in an apartment. He is making a cup of tea and his kitchen knows it. Tiny sensors monitor his every move and track each tea-making step. If he pauses for too long, a nearby computer reminds him what to do next. Later that day, Thomas's daughter accesses a secure website and scans a checklist created by the computer in her father's apartment. She finds that her father took his medicine on schedule, ate normally, and continued to manage his daily activities on his own. This puts her mind at ease.

There have been significant research efforts on building intelligent ambient environments such as smart homes, including House_n at MIT [5], CASAS [12], and the Adaptive House [11]. However, these efforts focus on people's direct interaction with the technologies. Most of them either rely heavily on people's involvement, such as wearing battery-powered sensors, or lack a systematic method of exploiting ubiquitously available sensor data. To overcome these issues, on the one hand, we design an unobtrusive radio frequency-based smart home monitoring system, in which we infer people's activities by analyzing the received signal strength fluctuations of sensors [4, 21]. On the other hand, we adopt the recently emerging Internet of Things (IoT) to make smart home interactivity a reality. By providing the infrastructure for transparently and seamlessly gluing together heterogeneous resources and services, IoT enables affordable smart homes that are accessible and interoperable from end-user devices (e.g., smartphones and tablets) and cloud services based on the Web architecture.

In this paper, we present the design and development of a prototype system called WITSCare, which enables a smart home monitoring process built upon our previous work on Web-based management of the Internet of Things [20]. WITSCare can automatically learn the contexts in a smart home environment (such as daily human activities and object usage) by analyzing the radio signal fluctuations caused by human mobility and interactions with objects, helping elderly residents be aware of their surroundings and make better decisions. Sensed contexts are exposed as universal services (i.e., context as a service) for higher-level process integration and customization via a dedicated real-time Web-based interface. The proposed system allows users to interact with IoT-enabled devices, objects, and contextual events (such as human activity and location) to create personal rules and customize services through graphical icons and visual indicators, as opposed to specifying them textually. With this first-hand exploration of smart home automation in the context of the Internet of Things, the development of new IoT applications can benefit from the architecture and implementation of our system. In a nutshell, the main contributions are as follows:

  • We develop a holistic view of smart home process management with Internet of Things support, consolidating the resource and service management all in one place. In particular, we present a layered monitoring architecture based on IoT and cloud, which provides the infrastructure to transparently access sensors, processors, and actuators using standardized protocols. Our architecture can achieve better interoperability, scalability, and flexibility in terms of resource management and personalized care management processes.

  • We analyze and identify the IoT-enabled devices and contexts (e.g., resident activities, locations, and their interactions with home appliances), which are abstracted and managed as Context as a Service under the Internet of Things architecture. We develop a series of machine learning approaches to acquire contexts by learning the signal fluctuations of the deployed sensors in the IoT-enabled smart home environment. The obtained real-time contexts are automatically wrapped up and exposed as resources.

  • We develop an easy-to-use Web interface for visual service customization, allowing end users to visually specify and manage complex rules over the semantic information of these resources by manipulating graphical icons of contexts as building blocks. Our system supports drag-and-drop interaction with no programming effort.

The remainder of this paper is organized as follows. Section 2 presents the layered architecture of the WITSCare system. Section 3 provides detailed processes and techniques for detecting the main contextual information in WITSCare. Section 4 describes our design of visual process management in WITSCare. Section 5 reports the implementation and evaluation of WITSCare. Finally, Sect. 6 reviews relevant literature and Sect. 7 offers some concluding remarks.

2 WITSCare Overview

In this section, we first overview the architecture of our system, followed by a brief description of its key components. WITSCare consists of hybrid pervasive sensor networks deployed in the house and an intelligent in-home monitoring system running over the Internet of Things framework. It enables seamless information access, exchange, and manipulation between the digital and physical worlds. WITSCare offers an integrated Web-based interface to manage (that is, connect, monitor, control, mash up, and visualize) things in an IoT environment, which helps people be aware of their surroundings and thereby make better decisions.

As depicted in Fig. 1, the system provides a layered architecture for managing and sharing the information produced by physical things. It is developed using the Microsoft .NET framework and SQL Server 2012. The bottom layer, Data Access and the Sensor Hive, takes care of: (i) managing the sensors associated with the physical things, (ii) collecting and processing raw sensor data streams, and (iii) providing an API for internal or external programs to retrieve the status and abstractions of things. Virtual Things, Localization, and Activity Recognition sit on the same layer, which is responsible for generating and learning contextual events based on the collected sensor streams. The Event Management layer focuses on event processing that automatically extracts and aggregates localization, activity, and object use events based on the data feeds from the Virtual Things layer in a pipelined fashion. Such contextual events can be directly used by high-level applications or further processed by other modules (e.g., the event processing unit). We give more technical details on context acquisition in Sect. 3. The Service layer provides flexible, full-spectrum support for process management and customization, which can be harnessed to enable visual service composition (Sect. 4.1) for more enchanting Web-based applications via a user-friendly Web interface (Sect. 4.2).

Fig. 1. Overview of the WITSCare system

Fig. 2. Illustration of the system setup

Fig. 3. (Top) Sensor signal strength distributions for two activities, sitting leaning left and sitting leaning right; (bottom) sensor signal strength distributions for two rooms, bedroom and living room

3 Context Acquisition

In this section, we focus on contextual information recognition techniques that are important for the care management process in the WITSCare system, including activity recognition, localization and object use detection.

Figure 2 depicts a typical system setup in a room, where a sensor array (passive RFID tags in this work) is deployed on one wall and a sensor transmitter (an RFID reader) is placed on the opposite side, facing the tags. When a person performs different activities or is present in the room, the system collects signal strength variations, e.g., the Received Signal Strength Indicator (RSSI), and discovers fluctuation patterns. The motivation of our work arises from the following observations. Although RSSI exhibits uncertainty and non-linearly distributed patterns, we find that the variations of RSSI values allow us to distinguish among different activities and locations. Figure 3 (top) shows the distinctive changes in the RSSI values of the tag array according to the different postures a subject performs. Figure 3 (bottom) shows the RSSI distributions in different rooms. From these observations, we believe that RSSI signals embody different patterns for different activities and locations, which can be further exploited for activity recognition and localization.
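
To make the data flow concrete, the following minimal sketch shows one plausible way to turn a stream of raw tag readings into the per-window RSSI feature vectors that the recognizers below consume. The reading format, window length, and sentinel value are our illustrative assumptions, not the exact WITSCare pipeline.

```python
import numpy as np

def window_features(readings, tag_ids, t_start, t_end, sentinel=-80.0):
    """Build one feature vector for a time window from a stream of
    (timestamp, tag_id, rssi) reader reports: mean RSSI per tag, with a
    sentinel for tags not read at all (e.g., blocked by the body)."""
    per_tag = {tid: [] for tid in tag_ids}
    for ts, tid, rssi in readings:
        if t_start <= ts < t_end and tid in per_tag:
            per_tag[tid].append(rssi)
    return np.array([np.mean(per_tag[tid]) if per_tag[tid] else sentinel
                     for tid in tag_ids])
```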

3.1 Activity Recognition

We propose a sparse representation based approach to recognize human activities by investigating signal fluctuations. We learn a single dictionary for each activity, formed by a set of basis vectors obtained by solving a sparse optimization problem. Each basis vector captures part of the key structural information of the training data for its activity. Assuming K predefined types of activities, we construct K dictionaries (one per activity). A new signal strength vector is then measured by its coefficients under the K dictionaries; the dictionary yielding the largest absolute coefficient indicates that the new testing sample fits that activity better than the others. Let \(\mathbf {O}^{k} = \{\mathbf {o}^{k}_{1}, \mathbf {o}^{k}_{2},...,\mathbf {o}^{k}_{N}\}\) be the training samples from activity class \(\mathcal {C}^k\), where \(\mathbf {o}^{k}_{i}\in \mathbb {R}^{m}\) and m is the feature dimension. To learn and encode the information of samples belonging to a particular activity class, we construct an overcomplete dictionary \(\mathcal {D}^{k}\) for each class \(\mathcal {C}^k\): we seek a dictionary matrix \(\mathbf {D}^{k} \in \mathbb {R}^{m\times p}\) with \(p\) \((p > m)\) atoms \(\{\mathbf {d}^{k}_1,...,\mathbf {d}^{k}_{p}\}\), over which \(\mathbf {O}^{k}\) has a sparse representation \(\mathbf {X}^{k} = \{\mathbf {x}^{k}_1,...,\mathbf {x}^{k}_N\}\), where \(\mathbf {x}^{k}_{i}\in \mathbb {R}^{p}\). In this case, the original training matrix \(\mathbf {O}^{k}\) can be represented as a linear combination of no more than \(\tau _{0}^{k}\) \((\tau _{0}^{k} \ll p)\) dictionary atoms. The optimization problem can be formalized as:

$$\begin{aligned} \min _{\mathbf {D}^{k},\mathbf {X}^{k}} ||\mathbf {O}^{k} - \mathbf {D}^{k}\mathbf {X}^{k}||_{2}^{2}, \, \, \text {s.t.}\, \, ||\mathbf {x}_{i}^{k}||_{0}\le \tau _{0}^{k} \end{aligned}$$
(1)

After profiling the dictionary for each activity, given a query feature vector of signal samples \(\mathbf {o}^{*}\), the activity label is assigned according to the dictionary whose sparse code \(\mathbf {X}^{k}\) contains the largest absolute coefficient:

$$\begin{aligned} l_{\mathbf {o}^{*}} = l(\max _{i}\mathbf {|X|}_{i}^{k}) \end{aligned}$$
(2)
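
To illustrate Eqs. (1) and (2), the sketch below profiles one dictionary per activity class with scikit-learn's sparse dictionary learner and classifies a query by its largest absolute sparse coefficient. The solver choice (OMP for sparse coding), atom count, and sparsity budget are illustrative assumptions rather than the settings used in our experiments.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def train_dictionaries(samples_by_class, n_atoms=64, tau=5):
    """samples_by_class: {activity_label: array of shape (N_k, m)}."""
    dicts = {}
    for label, O_k in samples_by_class.items():
        learner = DictionaryLearning(
            n_components=n_atoms,           # p atoms, p > m (overcomplete)
            transform_algorithm="omp",      # sparse coding via OMP
            transform_n_nonzero_coefs=tau)  # sparsity budget tau_0^k
        learner.fit(O_k)                    # learns D^k as in Eq. (1)
        dicts[label] = learner
    return dicts

def classify(dicts, o_query):
    """Eq. (2): pick the class whose dictionary yields the largest
    absolute sparse coefficient for the query vector o*."""
    best_label, best_peak = None, -np.inf
    for label, learner in dicts.items():
        x = learner.transform(o_query.reshape(1, -1))  # sparse code x^k
        peak = np.abs(x).max()
        if peak > best_peak:
            best_label, best_peak = label, peak
    return best_label
```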

3.2 Presence Detection

Localization is critical for tracking and locating people in indoor environments. Monitoring the movements of an elderly person inside the home is especially important for spotting abnormal behaviors, e.g., staying in the toilet for over 30 min. Such contextual information is also important for the system to perform commonsense reasoning. For example, when an older person is detected lying down somewhere other than the bedroom, an alarm should be raised. Our system provides coarse-grained location support, i.e., identifying which room a person is in.

We decompose the continuous signal stream collected from each room into 30-s segments. Since the sampling period is 0.5 s in this work, we obtain 60 time frames in each interval. Each interval therefore forms a signal-vs-time matrix \(\mathbf {O}^{k} \in \mathbb {R}^{m^k \times t}\), where \(m^{k}\) is the dimension of the features extracted from the hybrid sensors in room k, and t is the number of samples in each 30-s window (\(t=60\)). Each resulting feature matrix contains the signal values over 30 s. Such a coupled feature matrix is more informative and robust than a one-dimensional feature vector. Since our objective is to identify whether a person is present in the room, our room-level localization problem can be formulated as follows: given N samples \(\{\mathbf {O}_i,y_i\}_{i=1}^{N}\), where \(\mathbf {O}_i \in \mathbb {R}^{m\times n}\) is the coupled feature matrix of the \(i^{th}\) 30-s window and \(y_{i}\in \{-1,+1\}\) indicates whether the person is in the room, identify whether a person is present. We adopt the bilinear classifier, which can be formulated as:

$$\begin{aligned} \hat{y} = tr(\mathbf {W}_{m}^{T}\mathbf {O}\mathbf {W}_{n}) + b \end{aligned}$$
(3)

where \(\mathbf {W} = \mathbf {W}_{m}\mathbf {W}_{n}^{T} \in \mathbb {R}^{m \times n}\), \(\mathbf {W}_{m} \in \mathbb {R}^{m \times d}\), \(\mathbf {W}_{n}\in \mathbb {R}^{n\times d}\), and \(d \le \min \{m,n\}\). We solve this problem under the maximum margin framework by measuring the margin of the bilinear classifier in Eq. 3 with the matrix trace norm, i.e., the sum of singular values, which minimizes the matrix rank, resulting in the following optimization problem:

$$\begin{aligned} \min _{\mathbf {W},b}\dfrac{1}{2}tr(\mathbf {W}^{T}\mathbf {W})+ C\sum _{i=1}^{N}\max \Big (0,1 - y_{i}\{tr(\mathbf {W}^{T}\mathbf {O}_{i}) + b\}\Big ) \end{aligned}$$
(4)
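
One plausible way to optimize this formulation is to alternate between the two factors: fixing \(\mathbf {W}_{n}\) makes \(tr(\mathbf {W}_{m}^{T}\mathbf {O}\mathbf {W}_{n}) + b\) linear in \(\mathbf {W}_{m}\), so each half-step reduces to a standard linear SVM, and symmetrically for \(\mathbf {W}_{n}\). The sketch below follows this alternating scheme; it is an illustrative reading of Eqs. (3) and (4), not necessarily the exact solver used in our implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_bilinear(O, y, d=3, n_iters=5, C=1.0):
    """O: array (N, m, n) of coupled feature matrices; y: labels in {-1,+1}."""
    N, m, n = O.shape
    rng = np.random.default_rng(0)
    W_n = rng.standard_normal((n, d))   # random init of one factor
    for _ in range(n_iters):
        # Fix W_n: tr(W_m^T O W_n) = vec(W_m) . vec(O W_n), linear in W_m.
        F = (O @ W_n).reshape(N, m * d)
        svm = LinearSVC(C=C).fit(F, y)
        W_m = svm.coef_.reshape(m, d)
        # Fix W_m: the symmetric step for W_n uses O^T W_m.
        F = (O.transpose(0, 2, 1) @ W_m).reshape(N, n * d)
        svm = LinearSVC(C=C).fit(F, y)
        W_n = svm.coef_.reshape(n, d)
        b = svm.intercept_[0]
    return W_m, W_n, b

def predict(O_i, W_m, W_n, b):
    """Eq. (3): y_hat = sign(tr(W_m^T O_i W_n) + b)."""
    return np.sign(np.trace(W_m.T @ O_i @ W_n) + b)
```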

3.3 Object Use Detection

Object usage is another important piece of contextual information for our elderly care management process. For example, an interaction with a pill bottle may indicate that the older person is taking her medication. We design two ways to detect usage events of things: a sensor-based approach that detects state changes, and an RFID-based approach that detects mobility. In sensor-based detection, an object's usage is learned from changes in the status of the attached sensors. For example, the status of a microwave oven moves from idle to busy when it is being used. In RFID-based detection, an object's movement indicates that the object is being used. For example, if a coffee mug is moving, it is likely that the mug is being used. In this case, we adopt a generic method that compares the descriptive statistics of signal variations in consecutive sliding windows. The statistics obtained from two consecutive windows are expected to differ significantly when an object is moved.
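
A minimal version of this consecutive-window test is sketched below: it flags movement when the RSSI statistics of two adjacent windows differ significantly. The use of Welch's t-test and the variance-jump heuristic are illustrative assumptions; any comparison of descriptive statistics would fit the generic method described above.

```python
import numpy as np
from scipy import stats

def object_in_use(prev_window, curr_window, alpha=0.01):
    """Compare descriptive statistics of an object's tag RSSI in two
    consecutive sliding windows; a significant difference suggests the
    object is being moved."""
    # Welch's t-test on the window means.
    _, p_value = stats.ttest_ind(prev_window, curr_window, equal_var=False)
    # Also flag a large variance jump (tag orientation changes while moved).
    var_jump = np.var(curr_window) > 2.0 * np.var(prev_window)
    return p_value < alpha or var_jump
```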

4 Context Recognition Services and Smart Home Processes

Based on the captured contexts, our WITSCare system automatically wraps the real-time contextual events and exposes them as services in the form of RESTful APIs, and further represents these APIs as graphical icons in the Visual Process Management interface (Fig. 5). In this section, we introduce how contexts work as services for building enchanting IoT-aware smart home business processes through visual process management and real-time Web presence.
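
As an illustration of the context-as-a-service idea, the sketch below exposes the latest recognized activity as a RESTful resource. We use Flask here purely for brevity; the actual WITSCare services are implemented on the .NET stack described in Sect. 2, and the resource URI is an illustrative assumption.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# In the real system this state would be fed by the recognition modules.
latest_activity = {"person": "Thomas", "activity": "making tea",
                   "location": "kitchen"}

@app.route("/api/contexts/activity", methods=["GET"])
def get_activity():
    # Each context type maps to a resource URI; the Web UI renders it
    # as a draggable icon backed by this endpoint.
    return jsonify(latest_activity)

if __name__ == "__main__":
    app.run(port=8080)
```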

4.1 Visual Smart Home Process Definition and Customization

The key challenges for advanced IoT applications (e.g., smart homes) are to create customizable services in an effective and user-friendly manner, and to integrate an easy-to-use interface for information access, exchange, and manipulation with the applications. Service customization aims at integrating and adapting dynamic information to contextual situations and personal needs by specifying business process patterns and rules.

Fig. 4. An illustrative example of a business process for fall detection

Most current systems that support customizable rules are either too difficult to use or only support simple patterns from limited context sources. In contrast, WITSCare provides a visual and intuitive interface for service customization and supports higher-level composition by aggregating the inferred contexts. In particular, complex rules can be specified and managed in a drag-and-drop fashion without any programming effort. A rule engine works behind the visual interface: it receives the string expressions of rules, analyzes and annotates each statement based on a state machine, and translates the string expression into a list of annotated objects. The rule parser is implemented based on the shunting-yard algorithm: it first compiles each part of the input sequence into a .NET Expression object, and then combines these objects into a complex Expression Tree, which is compiled into a Lambda expression. This Lambda expression object is kept in memory while the system is running and can be invoked whenever a status changes or time elapses. If the Lambda expression returns true, the corresponding action is called.
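
The sketch below mirrors this parsing pipeline in Python: it converts a rule string to postfix form with the shunting-yard algorithm and compiles it into a callable predicate, playing the role of the compiled Lambda expression. The rule grammar (operator set and whitespace tokenization) is a simplified assumption.

```python
# Minimal sketch (not the WITSCare .NET implementation): compile a rule
# string such as "activity == fall AND location == bathroom" into a
# callable predicate using the shunting-yard algorithm.
PRECEDENCE = {"OR": 1, "AND": 2, "==": 3, "!=": 3}

def to_postfix(tokens):
    output, ops = [], []
    for tok in tokens:
        if tok in PRECEDENCE:
            while ops and ops[-1] != "(" and PRECEDENCE[ops[-1]] >= PRECEDENCE[tok]:
                output.append(ops.pop())
            ops.append(tok)
        elif tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":
                output.append(ops.pop())
            ops.pop()
        else:
            output.append(tok)  # operand: context name or literal
    while ops:
        output.append(ops.pop())
    return output

def compile_rule(rule):
    postfix = to_postfix(rule.split())
    def predicate(ctx):  # ctx: dict of current context values
        stack = []
        for tok in postfix:
            if tok == "==":
                b, a = stack.pop(), stack.pop(); stack.append(a == b)
            elif tok == "!=":
                b, a = stack.pop(), stack.pop(); stack.append(a != b)
            elif tok == "AND":
                b, a = stack.pop(), stack.pop(); stack.append(a and b)
            elif tok == "OR":
                b, a = stack.pop(), stack.pop(); stack.append(a or b)
            else:
                stack.append(ctx.get(tok, tok))  # resolve name or keep literal
        return stack.pop()
    return predicate

# Usage: the action fires when the rule evaluates to True on a status change.
rule = compile_rule("activity == fall AND location == bathroom")
print(rule({"activity": "fall", "location": "bathroom"}))  # True
```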

End users can monitor and remotely control the smart home process, while the system uses cloud publish/subscribe to push events to subscribers. For example, the caregivers of older people can access personal data such as pulse and blood pressure, whether the older person takes medicine on time, and whether they have fallen. Events can also be subscribed to from other public service providers, such as a WeatherForecast service or a TrafficCoordination system. The whole fall detection process is illustrated in Fig. 4. First, the activities learned from signal strength variations are wrapped up as events and sent to the Event Bus in real time. Once an abnormal event such as a fall is captured, an alarm (e.g., an SMS) is sent to Thomas's daughter and the caregiver via the cloud platform for medical assistance. This personalized care management process can easily be set up in WITSCare via its Web interface, as shown in Fig. 5.
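
A minimal sketch of this event pipeline is shown below: recognized activities are published to an in-process event bus, and a subscriber raises the caregiver alarm. The `notify_caregiver` handler stands in for the cloud SMS gateway and is hypothetical.

```python
from collections import defaultdict

class EventBus:
    """Toy stand-in for the Event Bus of Fig. 4."""
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)
    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

def notify_caregiver(event):
    # Placeholder for the cloud SMS notification described in the text.
    print(f"SMS alert: {event['person']} fell in the {event['location']}")

bus = EventBus()
bus.subscribe("activity.fall", notify_caregiver)
bus.publish("activity.fall", {"person": "Thomas", "location": "kitchen"})
```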

Fig. 5. Visual process management in WITSCare. For example, to create a rule such as sending an alarm when a person falls down, the user only needs to drag the person and fall icons to the conditions subpanel and the alarm icon to the actions subpanel, then perform some simple adjustments (e.g., adjusting the clock slider to set the time period).

4.2 Real-Time Web Presence

Our system offers two kinds of user-centric real-time Web presence. We build a case wall to simplify the integration of disparate services and to effectively build customized processes [14]. A case in the context of in-home aged care consists of Entities, Events, and Tasks. Entities describe who has access to the case, especially the person with the privilege to edit the case; the activity recognition and presence detection services are also included as entities. A case wall informs caseworkers about task progress so that participants can uniformly and collectively react, interact, and collaborate on the relevant case.

Cases can contain automated tasks; e.g., we can build an interactive case (Fig. 6) that lists CaseWalls for medical assistance related to fall detection. The left side of the figure shows the entities involved in this case, such as a caregiver, the activity recognition service, and the rule engine. The middle part shows the events that inform users of the actions to take (if any). The right side shows the sequence of actions, which may interact with external software such as a cloud publish/subscribe platform, or involve users performing manual tasks (e.g., a caregiver deciding whether to offer medical assistance after receiving a confirmation from Thomas's daughter), in addition to automated tasks (e.g., fall detection). Besides automating tasks, CaseWalls also automates the notification process, making it simpler for participants to identify their tasks.

Fig. 6. An illustration of a real-time CaseWall with the interactive behavior of medical assistance

We also develop a Web-based interface (Web UI) that offers a 3D scene in Web browsers. In particular, we adopt the Web Graphics Library (WebGL) in HTML5 to enable 3D scene recreation. The 3D models are stored as Digital Asset Exchange (DAE) files, and imported and rendered using three.js (footnote 1) with plugins. Things are visualized and managed by device plugins: each visualized thing is a device plugin containing one or more 3D models or animation settings. For instance, the kettle shows steam when it is boiling water; we use the ShaderParticleEngine plugin (footnote 2) for three.js to create the steam effect for the kettle (see circled 1 in Fig. 7). Each device plugin also provides a series of APIs (i.e., Web APIs) to communicate with the service layer about status changes of the corresponding things, and to reflect such changes in the Web browser. All control and data flow can be manipulated through this lightweight Web interface, which provides an administrative portal for thing management and activity reasoning (e.g., connecting and disconnecting things, and viewing event logs).

5 Evaluation

In this section, we report studies on the performance of the main modules of the WITSCare prototype, including activity recognition and human presence detection, to evaluate our approach.

Fig. 7. 3D scene of the real-world environment in the Web browser. The microwave oven is highlighted (yellow) while it is being used, and steam is shown on the kettle icon when the real kettle in the kitchen is boiling water (circled 1); real-time location tracking (circled 2); visual service composition interface (circled 3); social network notification in Twitter (circled 4). (Color figure online)

We deployed our system using one Alien 9900+ RFID reader, four circular antennas (one per room), and multiple Squiggle inlay passive RFID tags. The tags were placed along the walls of a bathroom, a kitchen, and several bedrooms in a grid whose cells are roughly 0.8 m \(\times \) 0.8 m. The antennas were mounted at heights between approximately 1.3 m and 1.6 m, angled at approximately \(70^\circ \). The hardware and setup are shown in Fig. 8.

Fig. 8. Experimental setup: (a) kitchen setup and (b) bedroom setup

To evaluate the usability of our WITSCare system, we conducted a survey with 20 participants from diverse backgrounds, asking them to use the system and answer a questionnaire. Almost all participants reported that the system is fun and easy to use.

To test the consistency of its performance, we evaluated the system continuously over 7 days. Figure 9 shows an example of the sequence of activities performed by the subjects. The recognition accuracy is depicted in Fig. 10(b), where we observe stable and consistent performance, with only minor misclassifications during activity transitions.

Fig. 9. An example of an activity sequence

Fig. 10. (a) Accuracy of room-level person presence detection; (b) activity recognition performance consistency over the 7-day evaluation

We also studied localization performance by detecting a person's presence in a room. We collected signal data for both the empty status (no one in the room) and a person being present in the room. Then, following a person-independent strategy, we validated our presence detection approach. The results in Fig. 10(a) show that the bilinear classifier is more accurate than the other methods, since it models the streaming signal data as the coupling of signal and time dimensions, which captures the dependencies among data factors better than the one-dimensional signal vectors used by the other classifiers. Figure 11 visualizes a subject's traces while moving between four rooms, showing that the bilinear classifier remains discriminative when the subject switches rooms.

Fig. 11. Traces of a subject moving between four rooms

We conclude this section with a brief discussion of latency handling in WITSCare. Fast detection and notification are critical, particularly for aged care applications. For example, for fall detection, an alert should be sent to caregivers as quickly as possible after a fall happens so that medical assistance can be offered. Our system has a recognition latency of \(4 \sim 4.5\) s, which stems from three main sources:

  • Our system evaluates the subject's postures every 0.5 s using the latest 2 s of the signal stream. In other words, if the current system time is at timestamp t, our system produces the predicted postures for the \([t-2,t-1]\) interval, while the \([t-1,t]\) interval is used for a backtrack check of whether the predicted label complies with predefined rules. For instance, if the label is estimated as lying in bed for the \([t-2,t-1]\) interval but the predicted label for \([t-1,t]\) is nobody, our system determines that the posture is still lying in bed (see the sketch after this list).

  • The signal collector is programmed with a timer to poll the signal variations in a predefined transmission order, which takes around 1 s to complete a new measurement, with no workaround.

  • Our system is integrated into a Web-based interface, which sends AJAX requests to the services for the latest results; the services then look up the database, return the data to the Web interface, and update the corresponding DOM (Document Object Model) elements. Completing such a query normally takes 300 ms to 500 ms.
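
As referenced in the first item above, the backtrack check can be sketched as a small label-smoothing step over consecutive 0.5-s predictions; the set of implausible transitions below is an illustrative assumption.

```python
# Hypothetical sketch of the backtrack check: the newer window's label
# is only accepted if it does not form an implausible transition with
# the previous stable posture (e.g., lying in bed -> nobody).
IMPLAUSIBLE = {("lying in bed", "nobody")}

def backtrack_smooth(prev_label, curr_label):
    if (prev_label, curr_label) in IMPLAUSIBLE:
        return prev_label  # keep the stable posture
    return curr_label

# Usage: a "nobody" prediction right after "lying in bed" is kept as
# "lying in bed", matching the example in the text.
print(backtrack_smooth("lying in bed", "nobody"))  # lying in bed
```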

6 Related Work

With billions of things interconnected and presented over the Web, there are significant challenges in developing IoT applications due to their unique and inherent characteristics. The SENSEI project (footnote 3) proposes an architectural framework that focuses on addressing scalability issues in wireless sensor and actuator networks. SemSorGrid4Env (footnote 4) develops a service-oriented architecture and a middleware that assist developers in building large-scale semantic-based sensor network applications. Both projects, however, deal with the connectivity issues of IoT, i.e., how to connect heterogeneous things to the Web, rather than how to describe and model things. The recent research and development activities at CSIRO [16] offer some interesting experience in applying IoT in a number of application domains such as smart farming. An ontology-enabled architecture has been developed in which sensor observations are published as a linked data cube for long-term data analysis and sharing at the national scale. The system, however, does not provide sufficiently suitable integrated abstractions for things.

Researchers from the University of Washington developed an IoT application that, unfortunately, only focuses on managing the collected RFID data [19]. The work by Tranquillini et al. [17] focuses on wireless sensor networks (WSNs) and proposes an extension of the Business Process Modeling Notation (BPMN) along with a compiler that transforms the extended BPMN models into WSN-specific code, distributing process execution over both a WSN and a standard business process engine. Paraimpu (footnote 5) provides a social platform for people to connect, compose, and share things, but it is unclear how this platform is realized. In the Hyperpipe project (footnote 6), things are represented as Web services and connected using pipes so that users can easily compose them. However, things are mostly resource-constrained, and traditional SOA standards such as SOAP and BPEL may not be applicable. Many research projects are actively tackling these challenges; one notable effort is the IoT6 project (footnote 7), which investigates IPv6 and related standards (e.g., 6LoWPAN, CoAP) to overcome the current fragmentation of the IoT. Compared to existing work, ours provides a framework that seamlessly integrates IoT, specific context recognition techniques, and process management. We also present an intuitive visual environment that allows ordinary end users to easily and efficiently create personalized processes.

The goal of activity recognition is to detect human physical activities from the data collected from various sensors. There are generally two main approaches: (i) instrumenting people, where sensors and RFID tags are attached to people, and (ii) instrumenting the environment, where sensors are deployed inside the environment. Wearable sensors such as accelerometers and gyroscopes are commonly used for recognizing activities. For example, the authors in [6] design a network of three-axis accelerometers distributed over a user's body; the user's activities can then be inferred from the data these accelerometers provide about the orientation and movement of the corresponding body parts. However, such approaches have obvious disadvantages, including the discomfort of wires attached to the body and the irritation of wearing sensors for long durations. More work related to indoor location tracking and activity sensing includes [1, 7].

Recently, researchers have been exploring smartphones equipped with accelerometers and gyroscopes to recognize activities and gesture patterns [9]. In very recent work, Krishnan et al. propose an activity inference approach based on motion sensors installed in a home environment [8]. Apart from sensors, RFID has been increasingly explored for human activity recognition. Some research efforts realize human activity recognition by combining passive RFID tags with traditional sensors (e.g., accelerometers), while others are dedicated to exploiting the potential of "pure" RFID techniques for activity recognition [22]. There is also research on advanced human-computer interaction; for example, both [3, 13] focus on leveraging sensor techniques to support better living for the elderly. The concept of the smart home is proposed in [15] to theorize this research domain.

However, such solutions either require people to carry RFID tags or even readers (e.g., wearing a bracelet), or only focus on localization and tracking. There is little work studying fine-grained activity recognition. To our knowledge, the work by Hong and Ohtsuki [4] is the only such effort; it proposes a device-free activity recognition solution using a sensor array.

7 Conclusion

In this paper, we have presented the design and development of a smart home system that leverages the emerging Internet of Things (IoT) to provide personalized, context-aware services that help older people stay in their homes safely. Our IoT-based system seamlessly integrates the digital and physical worlds by managing things of interest and accessing their corresponding services over the Web. In particular, the system realizes care management processes for elderly people who live alone, in which both people's locations and activities are monitored by learning the signal strength fluctuations collected from passive RFID tags. We implemented the system and conducted extensive experiments to validate it. The first-hand practical experience from this system will be useful for building complex IoT applications.