Nowadays, we experience an abundance of Internet of Things middleware solutions that enable sensors and actuators to connect to the Internet. These solutions, referred to as platforms, have to meet the expectations of different players in the IoT ecosystem, including devices [1], in order to gain widespread adoption. Low-cost devices, from handhelds to coffee machines, are easily able to connect wirelessly to the Internet, forming what is known as the Internet of Things (IoT). This research describes the methodology and the development process of creating an IoT platform, and presents the platform's architecture and implementation. The goal of this research is to develop an analytics engine that can gather sensor data from different devices and provide the ability to extract meaningful information from IoT data and act on it using machine learning algorithms. The proposed system introduces the use of a messaging system to improve the overall system performance as well as to provide easy scalability.
With the enormous improvement in technology nowadays, there are billions of devices producing data continuously. Examples of such devices are temperature sensors, motion detectors, humidity sensors or even the luminosity sensor in a smart phone. Due to the vast number of sensors that exist, the volume of data produced every second is difficult to organize in a good and easy way. There have been many attempts to create platforms that allow users to register their sensors and actuators and visualize the data produced by these sensors/actuators, such as Xively [
The IoT is estimated to consist of almost 50 billion devices by 2020 [
In this section, we present two of the most used IoT platforms as shown in
Reference architectures for integrating sensors with cloud services and IoT platforms have been discussed in the literature [
Platform | Protocol | Capabilities | Architecture | Open Source |
---|---|---|---|---|
Thing Speak [ | HTTP | ・ Visualization ・ Data Analytics ・ Store Data ・ Integrate with Matlab | Centralized/Cloud Based | Yes |
Xively [ | HTTP | ・ Visualization ・ Data Analytics ・ Integrate with Sales Force | Cloud Based | Yes |
Sense Egypt (proposed in this paper) | MQTT | ・ Visualization ・ Data analytics ・ Send SMS/Email alerts ・ Send commands to actuators based on analytics results ・ Store raw and analyzed sensor data | Cloud Based | Yes |
We adopt the architecture of a typical IoT system proposed in [
1) Devices layer
2) Communication layer
3) Messaging layer
4) Real-time Data analytics layer
5) Data Storage layer
6) Visualization layer
Each layer of the proposed Sense Egypt architecture is described below, as shown in
We consider a device to be a set of sensors and actuators. The diagram below shows the IoT devices and their connection to the Internet.
The diagram shown in
1) Devices that do not have an operating system, like the Netduino and Arduino.
2) Devices that may run an operating system such as Linux or another suitable operating system. These devices may be used as a gateway for sensors and small devices, e.g. a wearable sensor that connects via Bluetooth to a smart phone or a Raspberry Pi, which then enables the sensor to connect to the Internet.
This layer provides the connectivity of the devices and the IoT gateway to the rest of the IoT platform pipeline. The gateway is the interface between the sensors and the rest of the IoT pipeline. The role of the IoT gateway is to abstract and encapsulate the sensor platform, aggregate data from sensors and then send the sensor data to the rest of the IoT pipeline.
There are different communication models between IoT devices, IoT gateway and the Internet:
1) Direct Wi-Fi or Ethernet connectivity via UDP or TCP/IP.
2) Connectivity through IoT Gateway.
There are different protocols for communication between IoT devices, the IoT gateway and the Internet. The most well-known candidate protocols are:
・ HTTP/HTTPS (and RESTful approaches on top of those) [
・ Universal Plug and Play (UPnP) [
・ Constrained application protocol (COAP) [
・ MQTT (MQTT official website) [
・ Extensible Messaging and Presence Protocol (XMPP) [
1) Hypertext Transfer Protocol (HTTP):
HTTP has become much more than navigation between pages on the Internet; today, it is also used in the Internet of Things, among other things. So much is done on the Internet today using the HTTP protocol because it is easily accessible and easy to relate to. HTTP is a stateless request/response protocol where clients request information from a server and the server responds to these requests accordingly, as shown in
2) Universal Plug and Play Protocol (UPnP)
UPnP is a protocol, or rather an architecture that uses multiple protocols, that helps devices in ad hoc IP networks discover each other, detect services hosted by each device and report events. Ad hoc networks are networks with no predefined topology or configuration: devices can discover each other and adapt to the surrounding environment. UPnP is used by almost all network-enabled consumer electronics products found in homes and offices, and as such, it is a vital part of the Digital Living Network Alliance (DLNA). UPnP is largely based on HTTP, where both clients and servers are participants. HTTP is, however, extended so that it can be used over TCP as well as UDP, using both unicast addressing (HTTPU) and multicast addressing (HTTPMU) [
3) Constrained Application Protocol (CoAP):
CoAP is a very lightweight protocol based on HTTP, but the main difference between CoAP and HTTPU is that CoAP replaces the text headers used in HTTPU with more compact binary headers, and furthermore, it reduces the number of options available in the header. This makes it much easier to encode and parse CoAP messages. CoAP also reduces the set of methods that can be used; it allows four methods: GET, POST, PUT, and DELETE. In CoAP, method calls can be made using confirmable and non-confirmable message services. When a confirmable message is received, the receiver always returns an acknowledgement; the sender can, in turn, resend messages if an acknowledgement is not returned within the given time period. The response codes have also been reduced to make implementation simpler.
4) Message Queue Telemetry Transport (MQTT):
The MQTT protocol is based on the publish/subscribe pattern, as opposed to the request/response in the previous protocols. The publish/subscribe pattern has three types of actors:
・ Publisher (MQTT Client): The role of the publisher is to connect to the message broker and publish the content.
・ Subscriber (MQTT client): They connect to the same message broker and subscribe to content that they are interested in.
・ Message broker: makes sure that the published content is delivered to interested subscribers.
Content is identified by topic. When publishing content, the publisher can choose whether the content should be retained by the server or not. If retained, each subscriber will receive the latest published value directly when subscribing. Furthermore, topics are ordered into a tree structure of topics, much like a file system.
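The topic-tree semantics described above can be sketched as a minimal matcher. This is a simplification of the full MQTT topic-filter rules, and the topic names are illustrative:

```python
def topic_matches(filter_str, topic):
    """Simplified MQTT topic matching: '+' matches one level, '#' matches the remainder."""
    f_parts = filter_str.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                      # multi-level wildcard: matches everything below
            return True
        if i >= len(t_parts):
            return False
        if f != "+" and f != t_parts[i]:  # '+' matches exactly one topic level
            return False
    return len(f_parts) == len(t_parts)

# Topics form a tree, much like a file system:
print(topic_matches("home/+/temperature", "home/kitchen/temperature"))  # matches
print(topic_matches("home/#", "home/kitchen/humidity"))                 # matches
print(topic_matches("home/+/temperature", "home/kitchen/humidity"))     # does not match
```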
5) Extensible Messaging and Presence Protocol (XMPP)
The XMPP [
Data generated by the many sensors and devices of an IoT system typically needs to be delivered to the storage and analytics systems using (HTTP - UPnP - COAP - MQTT - XMPP) protocols as discussed in the previous section.
This layer aggregates and brokers communications. It is a very important layer for the following reasons:
1) It supports an MQ Telemetry Transport (MQTT) broker and an HTTP server in order to connect IoT devices to the Internet.
2) It can mediate and route communications between different devices in the system that may be connected via an IoT gateway.
Big Data generated by IoT devices is categorized into volume, velocity, and variety of the data [
There are several requirements for real-time analytics systems to be able to handle the data velocity, volume and variety, as follows:
1) A real-time data analytics system should collect the data produced by IoT devices coming in at a rate of thousands to millions of events/second [
2) Real-time analytics system should support parallel processing for collected data.
3) The real-time system should be a low-latency and fault-tolerant distributed system [
Objectives of Real-time Data analytics
1) Process data produced by IoT devices in real time or near real-time.
2) Extract meaningful information from data produced by IoT devices by performing event correlation using CEP (Complex Event Processing).
3) Provide predictive analytics for data produced by IoT devices.
4) Take actions based on results of analysis like sending SMS and Email alerts or sending commands to actuators registered in system.
The data produced by IoT devices needs to be stored at each processing phase: the raw data produced by the IoT devices, the pre-processed data, and the analytics results. Storing data makes it possible to perform additional analytics later using the tool of your choice.
Visualization is critical for IoT application as this allows interaction of the user with the environment. This layer presents the raw data produced by sensors and the analysis to the end users of the platform.
The implementation and components of the IoT platform depend on the communication protocol between the IoT devices, the IoT gateway and the Internet. If HTTP is used as the communication protocol between the IoT devices and the Internet, the IoT devices act as HTTP clients and the IoT analytics platform acts as an HTTP server, so the IoT devices (HTTP clients) emit their data to the IoT analytics platform (HTTP server). In the Sense Egypt IoT platform (proposed in this paper) we selected MQTT as the communication protocol because, as described for the communication and connectivity layer in the proposed architecture section, MQTT, compared with other protocols such as HTTP and CoAP, is designed mainly for devices and is lightweight on the wire, which enables low-cost device communication. MQTT keeps the bandwidth at an absolute minimum and can deal with unreliable networks without the need for complex error handling and a huge implementation effort. It was designed to keep a steady line to the devices at minimal cost, to support real push notifications and real-time communication. So, to connect IoT devices to the Sense Egypt IoT platform for real-time analysis of their data, the IoT devices act as MQTT clients that publish their data periodically to the MQTT broker, which forwards the data received from the IoT devices to the rest of the IoT pipeline for real-time analysis and processing. The MQTT broker then sends the data to the Apache Storm framework, a real-time analytics engine whose role is to analyze the data generated by the IoT devices in real time and extract the meaningful information that helps in taking decisions. After the real-time analysis is completed, we visualize the results and take actions accordingly, such as sending SMS and email alerts as notifications to the owners of the IoT devices or sending commands to actuators registered in the Sense Egypt IoT platform. The raw data gathered from the IoT devices and the analyzed data are stored in the Apache Cassandra database.
The proposed structure of the Sense Egypt IoT platform is shown in
Every IoT system consists of a set of devices, and each device is a set of sensors and actuators. IoT devices such as the Netduino, Arduino, Intel Galileo and Raspberry Pi act as MQTT clients. The role of any of the mentioned devices is to read inputs from sensors (e.g., a temperature sensor, light sensor or motion detection sensor) and turn them into outputs through actuators (e.g., turning on a motor or an LED).
The process of capturing IoT device data is shown in
1) An IoT device, which is a set of sensors and actuators, should be registered on the Sense Egypt platform portal. The IoT device acts as an MQTT client that connects to the MQTT broker. The MQTT client is responsible for collecting information from telemetry devices and publishing the readings to the MQTT broker. It can also subscribe to topics, receive messages, and use this information to control the telemetry devices, so it can be a publisher and a subscriber to the MQTT broker at the same time. MQTT clients implement the published MQTT v3 protocol [
2) The Sense Egypt platform generates a unique topic (MQTT Id) for each sensor and actuator registered in the system.
3) The IoT device sensors should publish events (readings) to MQTT broker using the generated topic from the previous step.
4) The IoT device actuators should subscribe to MQTT broker to receive commands using the topic generated from step no 2.
5) The MQTT broker will emit the received sensor data to the IoT platform for advanced analytics, as will be shown in the next sections.
MQTT client libraries are available in many different programming languages, such as Java, .NET, PHP, C#, JavaScript, Node.js, C++, C and Arduino [
Structure of MQTT client application:
1) Create a client object
2) Set the options to connect to an MQTT server
3) Set up callback functions
4) Connect the client to an MQTT server
5) Subscribe to any topics the client needs to receive
6) Repeat until finished:
i) Publish any messages the client needs to
ii) Handle any incoming messages
7) Disconnect the client
8) Free any memory being used by the client
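The client structure above can be sketched in Python, assuming the open-source paho-mqtt library is installed; the broker address, topic names and payload format are illustrative, not part of the platform:

```python
import json
import time

def build_reading(sensor_id, value):
    """Helper: package a sensor reading as a JSON payload (format is illustrative)."""
    return json.dumps({"sensor": sensor_id, "value": value, "ts": time.time()})

def run_client(broker_host="broker.example.com", broker_port=1883):
    # Assumes the paho-mqtt package is installed (pip install paho-mqtt).
    import paho.mqtt.client as mqtt

    client = mqtt.Client()                      # 1) create a client object

    def on_message(cl, userdata, msg):          # 3) set up callback functions
        print("command on", msg.topic, msg.payload)
    client.on_message = on_message

    client.connect(broker_host, broker_port)    # 2+4) set options and connect
    client.subscribe("devices/actuator1/cmd")   # 5) subscribe to needed topics
    client.loop_start()
    for i in range(3):                          # 6) repeat until finished:
        # i) publish any messages the client needs to
        client.publish("devices/sensor1/reading", build_reading("sensor1", 20 + i))
        time.sleep(1)                           # ii) incoming messages handled by callback
    client.loop_stop()
    client.disconnect()                         # 7) disconnect; 8) memory freed by GC
```

Calling `run_client()` requires a reachable MQTT broker; the helper `build_reading` can be exercised on its own.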
The MQTT broker provides the ability to connect devices over the Internet. It delivers messages in real time and guarantees message delivery, and it can connect thousands of devices to the platform; where MQTT truly excels is in sending instant updates and broadcasting push notifications. There are different implementations of the message broker, for example Hive MQ and Mosquitto. Hive MQ [
1) High performance MQTT broker
2) Open Source Plugin System
3) Native Web sockets Support
4) Cluster functionality
5) Embeddable.
The MQTT broker does not provide any buffering mechanism and is not scalable on its own. When a large amount of data is coming in from multiple different sources, both of these features are necessary, so a system like Apache Kafka should be used as an intermediate messaging system. Using an intermediate messaging system between the MQTT broker and the rest of the IoT pipeline helps to improve the overall system performance as well as providing easy scalability. Apache Kafka is an open-source publish/subscribe messaging system. A message broker is a programming module that translates messages from the sender's messaging protocol to the receiver's messaging protocol. Kafka is a publish/subscribe messaging system which can handle huge amounts of reads and writes per second from thousands of clients [
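A minimal sketch of forwarding MQTT-received events into Kafka for buffering, assuming the kafka-python package; the broker address, topic name and payload format are illustrative:

```python
import json

def encode_event(sensor_id, payload):
    """Serialize a sensor event for Kafka: key and value must be bytes."""
    return sensor_id.encode("utf-8"), json.dumps(payload).encode("utf-8")

def forward_to_kafka(events, bootstrap="localhost:9092", kafka_topic="iot-readings"):
    """Buffer a batch of (sensor_id, payload) events in Kafka for downstream consumers."""
    # Assumes the kafka-python package is installed; broker address is illustrative.
    from kafka import KafkaProducer
    producer = KafkaProducer(bootstrap_servers=bootstrap)
    for sensor_id, payload in events:
        key, value = encode_event(sensor_id, payload)
        producer.send(kafka_topic, key=key, value=value)  # buffered, asynchronous send
    producer.flush()  # block until all buffered messages are delivered
```

Keying messages by sensor id keeps each sensor's readings ordered within a Kafka partition.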
The real-time analytics engine is the brain of the IoT platform, as the raw data received from IoT devices through the MQTT broker in real time needs further processing. MQTT is already supported by Apache Kafka, which makes integration effortless. The data received from the MQTT broker is sent by Apache Kafka to different consumers. For example, one Kafka consumer could be used to send data to Apache Storm for data analysis and another Kafka consumer could be used to send raw data to a database. The data received from Apache Kafka needs further processing, such as adding a time stamp (if not already present), estimating missing readings, filtering, analysis, predictions, etc. We used Apache Storm [
Storm Cluster as shown in
・ Nimbus node:
Executes uploaded computations
Responsible for code distribution across the cluster
Starts workers across the cluster
Monitors computations and reallocates workers as required
・ Zookeeper nodes: coordinate between the Nimbus and supervisor nodes in the Storm cluster
・ Supervisor nodes: start and stop worker processes
In
bolts are responsible for performing computations and processing on the data received from spouts or from other bolts.
In a Storm cluster you run topologies. The stream is the core abstraction in the Apache Storm framework; a stream consists of an unbounded sequence of tuples. Apache Storm has three high-level entities that actually run topologies in a Storm cluster [
1) Worker Process
2) Executors
3) Task
A machine in a Storm cluster may run one or more worker processes for one or more topologies. Each worker process runs executors for a specific topology and has its own JVM. One or more executors may run within a single worker process.
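Storm topologies are normally written in Java; the following pure-Python analogue is only meant to illustrate the spout → bolt data flow described above, not the real Storm API. The sample values are illustrative:

```python
class Spout:
    """Emits a stream of tuples (unbounded in Storm; finite here for illustration)."""
    def __init__(self, readings):
        self.readings = readings
    def stream(self):
        for r in self.readings:
            yield r

class FilterBolt:
    """Bolt analogue: drops out-of-range readings from the incoming stream."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def process(self, stream):
        for value in stream:
            if self.lo <= value <= self.hi:
                yield value

class AverageBolt:
    """Bolt analogue: aggregates the filtered stream into a single value."""
    def process(self, stream):
        values = list(stream)
        return sum(values) / len(values) if values else None

# Wire the "topology": spout -> filter bolt -> aggregation bolt
spout = Spout([21.0, 22.5, 999.0, 23.5])   # 999.0 stands in for a faulty reading
result = AverageBolt().process(FilterBolt(0, 60).process(spout.stream()))
print(result)
```

In real Storm, each bolt would run as tasks inside executors, potentially on different worker processes.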
Storm-Based Sensor Data Analytics in Sense Egypt
In the following section, we introduce a general workflow [
The phases of extracting meaningful information from the raw data are as follows:
1) Pre-processing
2) Dimensionality reduction
3) Features Extraction
4) Classification
5) Visualization
The components of the data analytics layer are shown in
1) Kafka Consumer Spout:
Storm kafka spout [
2) Preprocessing Bolt:
IoT devices generate data in a raw form which is not necessarily suited directly for the analytics engine implemented using the Apache Storm framework. The generated data may have missing values, require an enrichment step or additional preparation, or the representation of values may need transformation (such as adding a time stamp to sensor readings); we achieve this by applying any of the following pre-processing techniques.
i) Mathematical/Statistical Methods
a) Z-Normalization Algorithm
b) Min, Max Algorithms
c) Mean, Median Algorithms
d) Variance and Standard Deviation
e) Correlation and Integration techniques.
ii) Signal Processing Methods
a) Low Pass Filter
b) High Pass Filter
c) Band Pass Filter
Pre-processing consists of the following phases:
i) Data Cleaning Phase:
In a real-time dynamic environment, faulty or missing sensor readings may occur due to a bad communication channel or loss of service, so we propose the following steps for the data cleaning phase:
a) Filtering out-of-range value:
To filter the sensor data values that are out of specific range we can use the Bandpass Filter. A Bandpass Filter has two cutoff frequencies, the lower and the upper frequencies and will only pass the signal in between.
b) Filling out missing values:
Missing values can be filled with the mean value of the sensor over some time window, or with the last recorded value; this can be done using the mean/median algorithm.
ii) Data Transformation Phase:
The second phase of pre-processing is data transformation; it involves transforming the data into the form that is optimal for the machine learning process, which can be done using Z-normalization.
Z-Normalization features:
i) Allows direct comparison of one time series with another.
ii) Simplifies the algorithm and reduces its complexity.
The transformation formula is z_i = (x_i − μ)/σ: the time series mean μ is subtracted from the original values first, and the difference is then divided by the standard deviation σ.
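The cleaning and transformation steps above can be sketched as follows; the valid range, imputation policy (window mean) and sample values are illustrative:

```python
import statistics

def clean(readings, lo, hi):
    """Data cleaning: replace out-of-range or missing (None) values with the
    mean of the valid readings in the window (other policies are possible)."""
    valid = [r for r in readings if r is not None and lo <= r <= hi]
    fill = statistics.mean(valid)
    return [r if r is not None and lo <= r <= hi else fill for r in readings]

def z_normalize(series):
    """Data transformation: z_i = (x_i - mean) / stdev."""
    mu = statistics.mean(series)
    sigma = statistics.pstdev(series)
    return [(x - mu) / sigma for x in series]

raw = [20.0, None, 22.0, 180.0, 24.0]  # one missing and one out-of-range value
cleaned = clean(raw, lo=-40, hi=85)    # both replaced by the window mean
print(z_normalize(cleaned))            # zero mean, unit variance
```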
3) Analytics Bolt:
In the pre-processing bolt we implemented the pre-processing technique that is the first phase of the proposed workflow to extract meaningful information from the raw data. In the analytics bolt we implement the remaining phases of the workflow.
i) Dimensionality Reduction Phase
After the raw data has passed through the pre-processing phase, we pass the pre-processed data to the dimensionality reduction phase to reduce the data size. Several variants of aggregation techniques are used in order to reduce the data size without any loss of information. PAA and an extended version of PAA called SAX are the most commonly used aggregation techniques in IoT for data reduction [
We will use Piecewise Aggregation Approximation (PAA) algorithm for the dimension reduction phase.
Piecewise Aggregation [
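A minimal sketch of PAA, following the standard formulation: the series is split into equal-sized frames and each frame is represented by its mean (the integer index arithmetic also handles series lengths that do not divide evenly):

```python
def paa(series, segments):
    """Piecewise Aggregate Approximation: split the series into `segments`
    frames and represent each frame by its mean value."""
    n = len(series)
    out = []
    for s in range(segments):
        start = s * n // segments
        end = (s + 1) * n // segments
        frame = series[start:end]
        out.append(sum(frame) / len(frame))
    return out

# An 8-point series reduced to 4 values (2 points per frame):
print(paa([1, 3, 2, 4, 6, 8, 7, 9], 4))
```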
ii) Features Extraction Phase
Feature extraction is another technique used widely for data reduction when the number of features of the data is large and the features are mostly correlated with each other. Feature extraction makes it possible to extract the most relevant and uncorrelated features in order to perform optimal analysis [
There are many techniques developed for Feature Extraction as shown in
iii) Data Classifications Phase
After the raw data has passed through dimensionality reduction and the features of the data produced by the IoT devices have been extracted [
There are many techniques developed for IoT data classification as shown in
The two main advantages which give SVM an edge over other classifiers are:
a) Its ability to generate nonlinear decision boundaries using kernel methods.
b) It gives a large margin boundary classifier.
We can conclude the following workflow for the IoT data analytics process and the selected algorithms for each phase, as shown in
All the above algorithms and techniques can be implemented using Apache Mahout Library [
iv) Storage Bolt:
Storage Bolt is used to interact with various databases. To store the generated raw data from IoT devices and the data generated from the preprocessing bolt, Apache Cassandra DB, Apache Couch DB or Mongo DB are good alternatives.
v) Alerts Bolt:
When any of the matching rules and thresholds are met, the appropriate action handlers in the alerts bolt are executed, such as sending an SMS or email to users. For example, if we have a temperature sensor and we set rules and thresholds for it, such as a minimum threshold of 10 and a maximum threshold of 60, and we selected SMS as the action, then if the sensor reading falls below the minimum threshold or rises above the maximum threshold, an SMS is sent to the configured mobile phone number.
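The threshold logic of the alerts bolt can be sketched as follows; the SMS function is a stand-in for a real gateway call, and the phone number is illustrative:

```python
def send_sms(phone, text):
    """Stand-in for a real SMS gateway call."""
    print(f"SMS to {phone}: {text}")

def check_thresholds(sensor, reading, lo, hi, phone=None):
    """Alerts-bolt logic: fire the configured action handler when a
    threshold rule matches; returns the list of triggered rules."""
    triggered = []
    if reading < lo:
        triggered.append("below_min")
    if reading > hi:
        triggered.append("above_max")
    if triggered and phone:
        send_sms(phone, f"{sensor} reading {reading} violated {triggered}")
    return triggered

# Temperature sensor with minimum threshold 10 and maximum threshold 60:
check_thresholds("temp1", 75, lo=10, hi=60, phone="+20100000000")
```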
vi) Visualization Bolt:
This bolt simply sends the analytics results to Apache Kafka, which sends them to the MQTT broker so that the results are visualized to the system users in a dashboard, where they can act on them.
Apache Cassandra™ [
・ It has the fastest writes amongst its peers such as HBase and so on.
・ No single point of failure.
・ Read and write requests can be handled without impacting each other’s performance.
・ Handles search queries comprising millions of transactions at lightning-fast speeds.
・ Fail-safe and highly available with replication factors in place.
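A minimal sketch of the storage bolt's interaction with Cassandra, assuming the DataStax cassandra-driver package; the keyspace, table and column names are illustrative:

```python
INSERT_CQL = ("INSERT INTO sensor_data (sensor_id, ts, raw_value, analyzed_value) "
              "VALUES (%s, %s, %s, %s)")  # table and columns are illustrative

def row_params(sensor_id, ts, raw_value, analyzed_value):
    """Helper: order the bound parameters for the insert statement."""
    return (sensor_id, ts, raw_value, analyzed_value)

def store(rows, hosts=("127.0.0.1",), keyspace="iot"):
    """Persist raw and analyzed readings; requires a reachable Cassandra cluster."""
    # Assumes the DataStax cassandra-driver package is installed.
    from cassandra.cluster import Cluster
    session = Cluster(list(hosts)).connect(keyspace)
    for row in rows:
        session.execute(INSERT_CQL, row)
```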
For the visualization of sensor data, we have developed a simple dashboard that displays charts of the raw and analyzed data received from sensors through a message broker (Hive MQ). The dashboard is a simple Node.js [
Real-Time Analysis of Sensors Data:
After discussing all the components and the structure of the Sense Egypt platform, we now summarize how the real-time analysis of sensor data is done.
The Sense Egypt IoT platform handles the real-time analysis of the sensor data as shown in
1) The IoT devices act as MQTT clients to the MQTT broker, so the data generated by the sensors is published periodically to the MQTT broker.
2) The MQTT broker (Hive MQ) receives the sensor data and sends it to the messaging system (Apache Kafka) for buffering.
3) The data received from the MQTT broker will be sent by Apache Kafka to different consumers. For example, one Kafka consumer could be used to send data to Apache Storm for data analysis and another Kafka consumer could be used to send raw data to a database.
4) Apache Kafka sends the received data to Apache Storm through the Kafka consumer spout. The data received from Apache Kafka needs further processing, such as adding a time stamp (if not already present), estimating missing readings, filtering, analysis, predictions, etc. We used Apache Storm to achieve these goals.
5) Apache Storm is responsible for the real-time analytics of sensor data, and the proposed system uses machine learning algorithms for real-time analytics of the data, as shown above. The analyzed data is stored in the Apache Cassandra DB through the Apache Storm storage bolt. Actions are also taken based on the analytics results, so that SMS/Email alerts can be sent to users and commands can be sent to the subscribed actuators (IoT devices) through the MQTT broker to take an action.
6) The results of real-time analytics of sensor data are visualized and displayed as charts in a simple dashboard developed using the D3 library in the Node.js framework.
The main objective of this research is to build a platform for real-time analysis of IoT data streams. The Web portal consists of the following pages:
The IoT platform main page, from which the user can sign in, sign up and open the channels page, which enables the user to enroll sensors and actuators in the system.
The first step for users is to create an account from the Sign up page in order to add sensors and actuators to their channels. After creating an account, the user can log in from the Sign in page and add new sensors and actuators to the account. To add a new device to a channel, the user clicks the New Device button on the channel page; a form is displayed in which the following data should be entered: device type, device name, device description, mobile number, device latitude and longitude, minimum and maximum thresholds, the triggered actuator (if any), the SMS check box (to send an SMS message to the mobile number entered above if any of the threshold rules is matched) and the Email check box (to send an email to the address entered during account registration if any of the threshold rules applies). The user then clicks the Save button.
After the user clicks on the Save button in the device registration page
Device Registration Sequence Diagram:
The below sequence diagram in
The sensors' interaction with the IoT platform is shown in the sequence diagram in
The MQTT broker (Hive MQ) is one of the main components in the system that enables sensors and actuators to connect to the rest of the IoT pipeline, so we need to evaluate its performance as follows. All tests were executed on Amazon Web Services (AWS), a cloud infrastructure provider.
The Hive MQ server instance hardware specs are shown below in
1) Latency Test:
This test shows the latency of Hive MQ for different Quality of Service levels for different amounts of MQTT clients and high throughput.
Latency is key for IoT systems at high scale where responsiveness and the real time experience are key acceptance factors of end users or downstream systems. The following benchmark shows how Hive MQ performs in an end-to-end scenario with real network round trip for latencies.
QoS 1 Results:
This benchmark tests the end-to-end latency of MQTT messages with QoS 1 guarantees. This means that Hive MQ uses disk persistence for every outgoing MQTT message due to the at-least-once semantics of QoS 1. No messages were lost in this test, since the TCP connection was stable the whole time and the QoS 1 guarantees were in place, as shown in
Discussion
This test shows that the throughput and latency were stable for the whole measurement time (45 minutes) of every individual test. With an increasing number of clients and messages per second, the latency did not increase significantly. The average round-trip time was always in the lower one-digit milliseconds. Even with linearly increasing throughput and number of subscriptions, all measured latencies remained very low. Every single message was persisted to disk before delivery, so the additional latency compared to QoS 0 messages is a result of the additional disk I/O overhead. This benchmark demonstrated that Hive MQ delivers very high QoS 1 message throughput (>15,000 messages per second) with a one-digit latency on average, while complying with the QoS 1 at-least-once guarantees as
2) Telemetry Test:
Name | Value |
---|---|
Instance Type | c4.2xlarge |
RAM | 15 GiB (~16 GB) |
vCPU | 8 |
Physical Processor | Intel Xeon E5-2666 v3 |
Clock Speed (GHz) | 2.9 |
Latency (ms) | 2500 msg/s | 5000 msg/s | 7500 msg/s | 10,000 msg/s | 12,500 msg/s | 15,000 msg/s |
---|---|---|---|---|---|---|
Mean | 0.415047152 | 0.604632525 | 0.767690498 | 1.062635311 | 1.163355852 | 2.032116374 |
75th | 0.358394 | 0.389408 | 0.461854 | 0.525717875 | 0.675208875 | 0.857987 |
95th | 1.17566645 | 1.2139728 | 1.374251375 | 1.3377911 | 2.4430323 | 3.134619025 |
98th | 2.64368922 | 3.07982922 | 2.95271823 | 3.6996623 | 7.28728491 | 13.49425474 |
99th | 2.8415169 | 5.4699688 | 7.403215115 | 12.88660823 | 19.44312396 | 32.44365795 |
Median | 0.3032895 | 0.328127 | 0.3717255 | 0.40715475 | 0.481312 | 0.55862825 |
Std Dev | 0.506549184 | 2.889852539 | 5.889325998 | 7.383906539 | 5.198095769 | 12.02608314 |
MQTT brokers are often deployed in environments where it is key to collect data from a huge number of devices while only a few subscribers process the data published by those devices. A typical use case is a telemetry scenario where the MQTT broker needs to process a very high incoming MQTT message rate. The following benchmarks focus on the throughput of Hive MQ in such a scenario. In order to understand the runtime behavior of Hive MQ in a telemetry scenario, all relevant runtime statistics, such as CPU usage, RAM and bandwidth used, are measured. This benchmark is therefore focused on the resource consumption of Hive MQ while delivering constant message throughput.
QoS 1 Results
This benchmark tests the resource consumption of Hive MQ with incoming QoS 1 messages. As discussed in the Benchmark Setup section, the subscribing clients subscribe with QoS 0. The following measurements were executed during the test executions: Average CPU utilization as shown in
incoming and outgoing traffic per minute as shown in
Discussion
Increasing the total number of QoS 1 messages per second linearly results in a linear bandwidth increase, while CPU and RAM usage grow at a predictable level. A notable observation is that while bandwidth usage increases linearly with the number of messages/second, CPU and RAM usage do not increase linearly. Hive MQ delivers constant and predictable results until the CPU limits of the EC2 instance are reached.
RAM is negligible in this test, since RAM usage never exceeded 3 GB, although the machine was configured to reserve up to 10 GB of RAM for Hive MQ. The limiting factor in this test is clearly the CPU, and even higher throughput can be expected on machines with more computing power. The multithreaded nature of Hive MQ allows it to scale with the number of CPUs.
In this paper, we proposed a platform for real-time IoT data analytics using the MQTT protocol to support the delivery of large volumes of data. An architecture is presented for the IoT data analytics platform, along with the implementation of each layer in the proposed architecture. We also presented the open-source technologies that can be used in the messaging layer (Apache Kafka, Hive MQ), the analytics layer (Apache Storm), the storage layer (Apache Cassandra) and the visualization layer (Node.js framework). In addition, for the analytics layer, we presented a workflow to extract meaningful, human- and/or machine-understandable information from the raw data generated by sensors, together with the algorithms that should be applied at each workflow stage. A dashboard is also implemented to visualize sensor data and send commands to actuators registered in the platform.
Future research will focus on extending the platform with new analytics techniques to work with high performance computing and Big Data analytics tools such as Hadoop [
Rozik, A.S., Tolba, A.S. and El-Dosuky, M.A. (2016) Design and Implementation of the Sense Egypt Platform for Real-Time Analysis of IoT Data Streams. Advances in Internet of Things, 6, 65-91. http://dx.doi.org/10.4236/ait.2016.64005