Radio frequency identification (RFID) technology is increasingly used to identify and track objects in enterprises and institutions. In addition, cloud computing, whether public or private, is increasingly adopted to process and store the data from these objects. In this context, the literature does not present an initiative that looks into the network in enterprise-cloud interactions, thus neglecting network performance and congestion information when transmitting data to the cloud. Therefore, we present a model named ACMA (Automatic Control and Management of Assets). ACMA employs context awareness to control and monitor corporate assets in companies with multiple units. ACMA provides a centralized point of access in the cloud through which interested actors can obtain online data about each corporate asset. In particular, our scientific contribution consists of considering network congestion to dynamically control the interval for updating data from the sensors to the cloud. The idea is to ensure reliability and integrity of operations, without losing or corrupting data when updating information in the cloud. This article describes the ACMA model, its architecture, algorithms and features. In addition, we describe the evaluation methodology and the results obtained through experiments and simulations based on the developed prototype.
In recent years, there has been exponential growth in the use of radio frequency identification (RFID) sensors [
In addition to RFID, another technology that has emerged in recent years is cloud computing [
Together, RFID and cloud computing can be used for the control and management of assets. Initially, asset management was a support activity, constituting an auxiliary control system; nowadays, this management is less auxiliary and more active. When a proprietary asset management system is built in a broader context, with speed and reliability in the information obtained at each stage, the company's management becomes simpler and more rational. This benefits the company in strategic and operational decisions, and contributes to the sustainability of the business. Moreover, the traditional mode of asset management with bar codes is a complex task that requires more team effort, since it is performed by a person who collects the information. In this case, the accuracy of the collected data cannot be guaranteed, and the reconciliation still demands considerable time [
In this field of business, one may combine the advantages of RFID sensor technology with the benefits of the cloud. However, there must be a concern with the potentially massive volume of data generated by the sensors. This is a gap still open in the literature and, at the same time, a very important one, since the data generated by sensors can cause network congestion. This congestion can slow the network, cause packet loss and negatively impact other activities important to the business [
Considering the aforementioned context, this article has the following objective: to develop a context-aware solution using RFID for managing and monitoring corporate assets, considering network congestion to dynamically control the update interval of the data sent from the sensors to the cloud. For this purpose, we present the ACMA model, whose main idea is to act on network congestion control for communications from the RFID readers to the centralized administration point in the cloud. In this sense, it aims to offer greater availability and reliability when updating the sensor data coming from the companies. To this end, ACMA provides an adaptive algorithm that controls the update frequency of such data transparently to users. A prototype was developed and evaluated in a company that meets the requirements defined in the model. The qualitative and quantitative results demonstrate the feasibility of the proposed model and the benefits of using adaptation for network data traffic.
To present and evaluate the ACMA model, this paper is organized in seven sections: (i) in Section 2 we examine the literature and the state of the art in related works; (ii) the ACMA model is detailed in Section 3, exploring architectural details, features and actors; (iii) Section 4 presents design decisions for the implementation of the prototype; (iv) Section 5 describes the evaluation methodology and the scenarios for evaluating the prototype; (v) in Section 6 the results are discussed; (vi) finally, Section 7 presents the relevant contributions and suggests opportunities for future work.
In this section, we present four solutions related to the research topic. We carried out searches with the following keywords: RFID, cloud computing, network congestion and patrimonial assets control, in scientific databases such as IEEE and ACM. The selection criteria were similarity with the model and with the problem in question.
Jing and Tang [
Chattopadhyay et al. [
McGreen and Xie [
Dubey and Sinha [
For a better understanding,
It is observed that most of the related works propose the use of RFID technology to monitor corporate assets, but they do not address reliability and integrity issues in the transmitted data. This leaves an important gap concerning the network congestion that arises from the high volume of data generated by the sensors. For these reasons, we developed an algorithm that estimates current network usage and automatically reconfigures the communication intervals between the sensors and the database in a transparent way. Our algorithm reduces the impact that high network usage can generate at peak times, avoiding a negative impact on business-critical applications and helping to maintain integrity and reliability by avoiding delays and packet loss.

| Feature | Research and design of the intelligent inventory management system based on RFID | Web based RFID asset management solution established on cloud services | Driving new insights in asset utilisation by utilising RFID technology to deliver asset status updates in real-time | Congestion control for self similar traffic in wireless sensor network |
|---|---|---|---|---|
| Identification technology | RFID | RFID | RFID | Wireless sensor network |
| Movement log | Yes | Yes | Yes | No |
| History movement log | No | No | No | No |
| Location tracking | No | Yes | Yes | No |
| Architecture modeling | Four local centralized servers | Two local centralized servers and one cloud database | One local centralized server | No |
| Active readers monitor | No | No | No | No |
| Network bandwidth monitor | No | No | No | Yes |
| Adaptive communication with database | No | No | No | No |
Another important gap in the related works is the modeling and development of a resource capable of monitoring the sensors in order to detect signs of unavailability. Such functionality would help those in charge of information technology to act as quickly as possible in situations such as network failure or a defective sensor.
In this section, we present the ACMA model. ACMA is a context-aware system model for controlling and managing assets in companies with multiple units. In this model, we seek to exploit the capabilities of RFID sensors while relying on the benefits that the cloud has to offer. In addition, a network congestion awareness feature for the communication between the sensors and the centralized point in the cloud is a differential. In general, this feature estimates the current network usage and reconfigures the sensors' communication intervals with the cloud database dynamically and with complete transparency to the user. Thus, it avoids congestion and network overload at high-demand times, providing greater data integrity by avoiding delays and packet loss, and does not impact other, more vital activities. Hence, it contributes to the sustainability of the business.
As a basis for the ACMA model, we chose RFID technology for data capture. RFID is used as the system input due to its ability to provide automatic and accurate identification, reducing the possibility of errors in asset control. When used in the monitoring and control of corporate assets, RFID can deliver benefits such as: (i) reduced asset reconciliation time; (ii) increased safety and accuracy of the collection operations; (iii) greater assurance of compliance with tax obligations.
To elicit system requirements that address a real company need, we conducted a case study based on the needs of a company from Porto Alegre, Brazil. The company name is kept confidential due to the company's security policy, as a way to protect strategic information. This company is large and composed of multiple units, totaling approximately 3800 employees. In this study, we focused on the inventory area, its employees and assets, comprising the technical area of the company responsible for the entire asset monitoring and management process.
For data storage, we chose a cloud database as the centralized access point. Thus, data can be accessed globally and all branches can obtain information about the assets. Regarding tags, we opted for passive UHF tags, chosen for their low cost, small size and long life. Moreover, RFID sensors can detect these tags from a few meters away, satisfying the detection requirements of the proposed model.
The model assumes that there are RFID readers distributed inside and outside all enterprise environments. This way, the system can identify the objects that cross each sensor's range. Every asset subject to accounting in the asset management process must have an RFID tag attached to it, which must be maintained throughout its life cycle. In the system, these assets must be registered with the tag code, the cost center and the business unit to which they belong. As the object moves between the different environments of the company, the system is able to track its current location, along with other attributes relevant to the business context, such as sector and cost center.
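For illustration, the registration record described above could be modeled as a simple data structure. This is only a sketch: the field names are assumptions, and the paper's actual implementation is in C# with a SQL Server schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Asset:
    """Registration record for a tagged corporate asset (illustrative fields)."""
    tag_code: str          # code of the attached RFID tag
    cost_center: str       # cost center the asset is charged to
    business_unit: str     # company unit the asset belongs to
    current_location: Optional[str] = None  # updated as the tag crosses sensor ranges
```

The `current_location` field starts empty and is filled in by the detection flow as the tag is read by sensors in different environments.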
In this flow, when an asset with a tag attached crosses a sensor's range in an environment (1), the RFID sensor sends its IP address and the tag code to a server through reliable communication (2). In turn, the capture server interprets, filters and processes this data (3), in addition to persisting the information in the centralized database in the cloud (4, 5). After that, the server responsible for the user interface (7) searches such data (6) and makes it available to the stakeholders (8).
In order to fill the gaps left open by the related works, a specific module was developed for monitoring the sensors and another for avoiding network congestion. The sensor monitoring module aims to monitor all sensors and generate alerts, when necessary, to the information technology leaders about problematic events caused by reader hardware, power failure or network problems. The module for avoiding network congestion is also a distinction from the related works. This module considers the different usage demands over the course of the day due to the large volume of data flowing from the sensors to the cloud database. For this operation, the algorithm generates an estimate of current network usage and automatically adjusts the communication interval between the database and the RFID sensors. Thus, it generates a smaller impact on the network during communications and does not interfere with other business activities.
The system has three distinct types of actors: IT analyst, manager and employee. Actors are only allowed to access features that meet their interests and match the functions they perform in the company. The IT analyst is responsible for tasks related to infrastructure and maintenance of the system; he/she handles the settings of the RFID sensors and is responsible for the system's operational integrity, performing corrections and necessary maintenance when prompted by the sensor monitoring module. The manager is responsible for the approval of new cost centers and units, as well as for the disposal of equipment whose life cycle has expired. Lastly, the employee performs the everyday tasks, making him/her the main user of the system. The employee must register tags on newly acquired assets, recording all the information required by the system. This professional also handles the operations related to cost centers, asset tracking and reporting for audits.
The system has many features designed to meet the requirements observed in the questionnaire, in order to effectively monitor and manage corporate assets. The features were defined after an analysis of the collected requirements and based on input from the manager and the control analyst who work at the previously mentioned company. Thus, it was possible to match the system functionality more accurately to the real needs of the scenario.
To develop the system, we modeled and classified the tasks into the following feature packs: (i) asset detection, which unites the functionality necessary to detect and capture assets moved within the premises; (ii) sensor monitoring, which includes the features required for real-time monitoring of the sensors connected to the network in order to find problems; (iii) network congestion monitoring, which gathers the functionality for generating current network usage estimates and reconfiguring the communication interval between the sensors and the database in the cloud; (iv) management application, which provides the resources needed to present the data to the interested actors through a graphical interface.
The model proposes an architecture consisting of four high-level layers: (i) the Physical Layer, which deals with the physical readers; (ii) the Business Layer, which filters, processes and interprets data according to business rules; (iii) the Data Access Layer, which encompasses the repository logic for data persistence; (iv) the Presentation Layer, which implements, hosts and manages the final interface and user interaction. Each of these layers is composed of modules, totaling six modules. The modules have well-defined responsibilities, each with a specific function that receives input information and generates a result as output for the next module. Thus, together, the modules are responsible for the entire information processing flow, from its capture at the RFID sensor to the final result made available to interested actors through a Web application.
The main operation flow starts in the capture module (CaptureMOD). CaptureMOD captures information from the RFID sensors and stores it in memory as raw data. This data is read by the DistributionMOD, which distributes and sorts it into the appropriate system tables. From this point, two distinct streams are formed. The first is the ReaderMonitoringMOD, which processes data about the network sensors in order to detect downtime and alert those responsible. The second stream is performed by the NetworkMOD, which generates estimates of current network usage and adjusts the communication metrics between the sensors and the cloud database. Finally, the data is made available for manipulation and visualization to the interested actors through the WebApplicationMOD. In
The CaptureMOD has the responsibility to capture and interpret the data from the sensors. It must perform quickly and efficiently to avoid bottlenecks due to the large volume of data received. To achieve this goal, this operation implements a producer-consumer pattern for the reception and processing of data, with a critical-section algorithm for access to shared memory. The communication between the sensors and the server occurs through reliable communication. In this scenario, when CaptureMOD receives a captured-tag event, it processes and stores the information in memory. This stored data consists of the tag code, the reader's IP address, and the date and time.
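The producer-consumer scheme described above can be sketched as follows. This is a minimal illustration rather than the ACMA implementation (which is in C#): all names are assumptions, and the lock-protected buffer stands in for the module's shared memory and critical section.

```python
import threading
from collections import deque
from datetime import datetime

# Shared memory buffer; the lock/condition pair implements the critical section.
buffer = deque()
buffer_lock = threading.Lock()
data_ready = threading.Condition(buffer_lock)

def producer(tag_code, reader_ip):
    """Producer side: called once per captured-tag event; stores the raw data."""
    event = {"tag": tag_code, "ip": reader_ip, "ts": datetime.now()}
    with data_ready:            # enter the critical section
        buffer.append(event)
        data_ready.notify()     # wake the consumer

def consumer(process, stop):
    """Consumer side: drains the shared buffer and hands events to processing."""
    while not stop.is_set():
        with data_ready:        # enter the critical section
            while not buffer and not stop.is_set():
                data_ready.wait(timeout=0.1)
            events = list(buffer)
            buffer.clear()
        for event in events:    # heavy processing happens outside the lock
            process(event)
```

Keeping the processing outside the critical section is what lets the capture side sustain a high event rate, which is the point the paragraph above makes.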
The distribution module (DistributionMOD) has the responsibility to fetch from memory the data stored by the CaptureMOD. At this stage, the data is processed under a critical-section algorithm. The information is processed, distributed and, at the right time, persisted in the database. This processing includes searching for and updating information important to the business context, such as the unit, the cost center and the people responsible for the asset.
The reader monitoring module (ReaderMonitoringMOD) has the responsibility to monitor all readers on the network, searching for signs of unavailability. This unavailability may occur due to several factors, such as power failure, defective sensor hardware or packet loss. When facing anomalies, this module alerts those responsible, informing them of the event that occurred.
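One simple way to detect such unavailability is sketched below. The paper does not specify the detection mechanism, so this sketch assumes a last-seen heuristic: a reader is flagged when no event has been received from it within a timeout window. All names and the timeout value are illustrative.

```python
from datetime import datetime, timedelta

def find_unavailable_readers(last_seen, now=None, timeout_s=60):
    """Return the readers whose last captured event is older than timeout_s.

    last_seen: dict mapping reader IP -> datetime of the last received event.
    Readers returned here would trigger an alert to the IT analyst.
    """
    now = now or datetime.now()
    threshold = now - timedelta(seconds=timeout_s)
    return sorted(ip for ip, ts in last_seen.items() if ts < threshold)
```

This heuristic cannot distinguish a power failure from a hardware defect or packet loss; it only signals that the reader needs attention, which matches the alerting role described above.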
The network monitoring module (NetworkMOD) has the responsibility to generate estimates of current network usage and to adapt the communication interval used to update the sensor data to the database. When network usage demand increases, the update interval is lengthened, so data is sent less frequently; conversely, when demand decreases, the update interval is shortened and data is sent more often. In this way, we seek to avoid network congestion caused by the high volume of data generated by the sensors, which could impact other vital business activities. For this task, an adaptive algorithm was developed based on the concepts of the widely used congestion control algorithm of the TCP protocol [
The algorithm is divided into two main stages. The first phase is the generation of estimates of network usage. A connection to the network interface is made, and a calculation based on the number of packets sent and received classifies the current state of the network as OK, warning or critical. Each of these states is bounded by a maximum percentage threshold that can be parameterized, making the algorithm adaptable to each specific need. The second phase sets the communication interval with the database. The algorithm starts in an exponential decrement phase of the communication interval, very similar to the exponential slow start of the TCP protocol. Upon reaching the warning threshold, it enters a linear decrement phase, very similar to TCP's congestion avoidance phase. Finally, when it reaches the critical threshold, it enters a multiplicative increment state in which the communication interval doubles, and the process starts again from the beginning. In
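The two phases can be sketched as follows. This is an illustrative reading of the algorithm, not the ACMA source: the function names, the linear step and the minimum interval are assumptions, while the thresholds follow the parameterization used later in the evaluation scenario (OK up to 75%, warning up to 90%, critical above that).

```python
# Thresholds are parameterizable percentages; these values follow the
# evaluation scenario described in the paper.
OK_MAX, WARNING_MAX = 75.0, 90.0

def classify(usage_pct):
    """Phase 1: classify the current network usage into a state."""
    if usage_pct <= OK_MAX:
        return "ok"
    if usage_pct <= WARNING_MAX:
        return "warning"
    return "critical"

def next_interval(interval_s, usage_pct, min_interval_s=1.0, linear_step_s=1.0):
    """Phase 2: adapt the sensor-to-cloud update interval (TCP-like).

    ok       -> exponential decrement: halve the interval (like slow start)
    warning  -> linear decrement (like congestion avoidance)
    critical -> multiplicative increment: double the interval (back off)
    """
    state = classify(usage_pct)
    if state == "ok":
        interval_s /= 2.0
    elif state == "warning":
        interval_s -= linear_step_s
    else:
        interval_s *= 2.0
    return max(interval_s, min_interval_s)
```

Note the inversion relative to TCP: where TCP grows its congestion window while the network is free, this algorithm shrinks the update interval (sends more often), and on the critical threshold it backs off by doubling the interval instead of halving a window.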
The web application module (WebApplicationMOD) has the responsibility to display the information processed by the other modules. Moreover, it is the means of inserting data into the system through human intervention. This module supports access from heterogeneous devices that have a rendering engine for HTML5 and CSS. To this end, the interface follows a responsive design, enabling visualization in different ways and contexts and adapting optimally to different screen sizes and resolutions. Thus, it provides a good user experience when accessing the application through smartphones, tablets or notebooks [
The necessary infrastructure for the ACMA model assumes the existence of RFID sensors connected to the network. These sensors communicate with a server through a device capable of connecting them to the network, such as a router or switch. This inference server is responsible for the entire flow and management of the local data generated by one company unit. It also communicates with the external network through a security firewall, and replicates the information generated in the unit to the central point of the units in the cloud. All described modules run on the inference server.
The actors involved only have access to their own features, previously defined in the model. Access is performed through a web browser, and requests for information can come from the company's internal environment (internal network) or from an external environment (external network), upon presentation of credentials. In
Among the various Database Management Systems (DBMS), the one chosen for the implementation was SQL Server. We chose SQL Server because it is a DBMS that implements the concepts of atomicity, consistency, isolation and durability (ACID) in its transactions, ensuring the integrity and reliability of data [
The cloud database is made available through SQL Azure, which runs on the Windows Azure platform. It is a set of services that provides processing power and relational data storage in the cloud. SQL Azure was chosen because the application can take advantage of the various resources available, such as centralized data persistence and the generation of performance reports. In addition, by opting for this cloud database, the focus of development shifts to the application, since there is no need to worry about activities related to environment infrastructure.
The language chosen for developing the applications is C#, a strongly typed, object-oriented language designed to run on the .NET platform. With this choice, the application stack is easily integrated, because all of the technologies come from Microsoft environments: the database (SQL Server), the development language (C#) and the database server in the cloud (Windows Azure). The system was developed in a modular fashion, reinforcing concepts and best practices related to object orientation. It features low coupling and high cohesion, remains open to new implementations in the future, and provides easy maintenance and the capacity to integrate new plugins into the system.
This section details the research method used in this article and its stages. It is applied research, since it generates knowledge for practical application aimed at solving specific problems; the acquired knowledge can be applied within a real context.
We used two different procedures: bibliographic research and a case study. In the bibliographic research, the work builds on previously published resources, such as books and scientific journals. It is also a case study because it handles the variety of evidence that can be identified, such as interviews and observations, bringing the researcher closer to a real context [
Regarding data collection techniques, we used two different research methods: (i) qualitative, based on the company scenario that meets the conditions set out in the model, using questionnaires as the direct source of data collection with the team responsible for the asset control activity; (ii) quantitative, which involves numbers and statistics obtained through performance and load tests [
For the qualitative data, we applied a questionnaire in the same company where we surveyed the real needs when eliciting requirements for the asset monitoring application. Before applying the questionnaire, the ACMA system was presented to a manager and an analyst responsible for the asset tracking activity in the company. Then we applied a questionnaire with eight questions, classified into the following groups: usability (questions 1, 2 and 7); recommendation (questions 4 and 8); performance (question 6); features (question 5); quality (question 3).
The scale used was the Likert scale, in which the interviewees specify their level of agreement with certain statements.
For the quantitative evaluation of the developed prototype, it was necessary to configure three distinct environments: (i) the first is in the cloud, where the display module (WebApplicationMOD) is hosted on IIS 7.0, with support for MVC 5 and the .NET Framework 4.5; the database is also hosted in the cloud through SQL Azure, with the basic option that includes five DTUs; (ii) the second environment is a local server running the .NET Framework platform, the basis for C# applications, with an Intel Core i5 processor, 8 GB of 1600 MHz memory, a 256 GB hard drive and the Windows Server operating system; the CaptureMOD, DistributionMOD, NetworkMOD and ReaderMonitoringMOD run on this server; (iii) the last is the environment for issuing information from the RFID sensors, for which we used the Rifidi RFID simulation platform to simulate GID-96 GEN2 tags and Alien ALR 9800 readers.
The first proposed test scenario checks the system's features. In the first part of the test, we used Rifidi based on a theoretical plan. This plan is a theoretical business unit in which each area is represented by an environment with a linked RFID sensor. A sequence of random paths from a starting point to an end point was generated. In Rifidi, we simulated tags that traverse these sequences of trajectories, and at the end of the process we analyzed the final results made available to the user through the web application, observing the consistency and integrity of the information. In the second part of the test, we compared the current network usage generated by the NetworkMOD of ACMA with NetSpeed Monitor and NetStress. Both tools are accepted in the market and provide traffic analysis features with monitoring of network traffic information.
In a second stage, we performed performance testing through a gradual increase, in steps of 10 tags, of the tags sent from an RFID reader simulated in the Rifidi software to the ACMA system. We analyzed the capture module (CaptureMOD) to identify the maximum number of items that can be captured within a one-second interval. To find the system bottlenecks, we ran tag processing tests from their origin in the capture module to their persistence and access by the final user in the web application. In this way, we measured the processing time of each module to observe the bottlenecks in the system that may receive performance improvements in the future.
Lastly, we ran a test with and without the network congestion monitoring algorithm (NetworkMOD). We analyzed the gains in terms of packet loss reduction, current network usage and the time to make the data available to the user. For this, batches of tags simulated by Rifidi were processed in an environment where the network data traffic increased and decreased randomly during processing.
| Number of readers | Number of tags for processing | Time interval for readers monitor module execution | Time interval for network module execution |
|---|---|---|---|
| 50 | 1-50-100-200-300-400 | 20 seconds | 20 seconds |
| Num. of readers | Number of tags for processing | Time interval for readers monitor module execution | Time interval for network module execution | Initial time for sending data from sensors to the cloud database | Percentage of network utilization for network states | Percentage of network utilization during the execution period |
|---|---|---|---|---|---|---|
| 1 | 20 transmissions with 50 tags and interval of 5 sec. between transmissions | 20 sec. | 20 sec. | 10 sec. | Ok: 1% to 75%; Warning: 76% to 90%; Critical: more than 90% | 1 s → 100 s: 65%; 101 s → 220 s: 80%; 221 s → 340 s: 95%; 341 s → 460 s: 85%; 461 s → 520 s: 95%; 521 s → 700 s: 70%; 701 s → 780 s: 80% |
This section presents the qualitative and quantitative assessments as well as an analysis of the results.
In this evaluation, it was found that the ACMA system brings innovation to the management and monitoring of assets. It streamlines mobile asset inventory and provides automated control, with features that meet the identified needs. Moreover, the interviewees stated that they would like to use the system frequently and would recommend its use in the company. However, for an effective solution deployment, a more in-depth and detailed cost analysis is necessary. It is important to point out that, even though the data obtained in the evaluation is encouraging, it is not sufficient to completely validate the model due to the low number of answered questionnaires. In
This section presents the quantitative assessments of the ACMA model. It is divided into two main areas: (i) functional tests, which verify the consistency and behavior of the proposed resources; (ii) performance tests, which evaluate the system's performance under different data loads.
In the first system functionality test, we analyzed through the web application the traceability of each tag along the paths taken in the proposed theoretical plan. All paths were captured and stored properly, confirming the consistency of the data processed in this test. After that, while executing tests to verify the accuracy of the current usage estimates of the NetworkMOD module, we collected the data generated by the NetworkMOD, NetSpeed Monitor and NetStress. The estimates from the three tools were very close, and the greatest difference between measurements was within 3%. Thus, this system functionality is considered valid with regard to the accuracy of the current network usage estimates. The full results of this test can be seen in
To analyze the application's performance, we captured the total execution time, in seconds, as the number of received tags gradually increased. The fastest module is CaptureMOD; in this module there is a real need for high-performance processing to provide the highest possible throughput for the received data. On the other hand, the formatting process ended up being costlier. The main factor for its longer processing time is the need to query, process, wait for the right moment and distribute the data across several system tables, after which the data is ready to be accessed by the web application. The results of these tests can be seen in
We also conducted performance tests to verify the application's processing capacity within a given time interval. From them, we can estimate the maximum number of tags that can be captured by a particular group of readers in the same interval. We verified that the system is capable of capturing up to 900 tags within one second in the communication between a group of readers and the application.
Finally, in the test of data capture from the sensors to persistence in the database, with and without the features of NetworkMOD, the processing with NetworkMOD was 6% longer than without it. This difference stems from NetworkMOD's congestion control capabilities: when network utilization increases, the communication interval with the database increases, prolonging the time to update the data. As a benefit, congestion is avoided and, as a result, miscommunication, delays and packet loss are reduced. The full results of this test can be seen in
Based on the tests and simulations performed on the ACMA model, it is possible to observe evidence of its ability to monitor corporate assets. In the functional tests, tag paths were simulated in certain scenarios and the traceability results were successfully captured, with consistency and integrity in the collected data. In addition, we obtained satisfactory results in the system evaluation questionnaire at the company used in the study. However, it is important to point out that, even though the data obtained in the assessment is encouraging, it cannot be generalized.
As for performance, the results show a processing time slightly above a linear curve as the number of captured tags progressively increases, which suggests good system performance. In addition, the module with the highest processing cost is DistributionMOD, which is responsible for formatting the data captured in memory and distributing it to the corresponding tables. Its higher processing time compared to the other modules is due to the need for multiple queries to the database in the cloud. On the other hand, the lowest-cost module is CaptureMOD, which requires a high-performance operation to absorb the large amount of data coming from the sensors.
Finally, the NetworkMOD obtained satisfactory performance compared to the same process without its use: the difference between the processing times in the proposed scenario was only 6%. This difference is due to its concern with network congestion; when network usage increases, the communication interval with the database is lengthened, prolonging the time to update the data. Besides, with NetworkMOD we obtain better quality of network usage, avoiding a negative impact on other applications running in parallel on the same network, since delay and packet loss are reduced, contributing to data reliability and integrity.
The adoption of an RFID system for the control and monitoring of corporate assets has a positive impact on business. It contributes to the achievement of objectives, increasing the company's competitiveness, improving organizational efficiency and contributing to the sustainability of the business. In the literature, we studied concepts and characteristics of RFID technology in the context of the IoT and identified the possibility of using RFID sensors for the control and monitoring of property assets. Thus, it is possible to improve asset management by increasing the speed of operations, as well as the accuracy and reliability of the collected information.
Different from the work proposed in [
Both the quantitative and qualitative evaluations showed positive feedback. Despite the fact that the qualitative assessment cannot be generalized due to the low number of questionnaires, we found encouraging results in which the ACMA system was recommended and provided the features necessary for correct and effective monitoring of corporate assets. As for the performance tests, we obtained processing times slightly above the linear curve, with capacity to process up to 900 tags per second, suggesting sufficient capacity for asset control. Using the proposed congestion prevention feature, no congestion occurred and, consequently, we avoided delay and packet loss during the process. ACMA is structured in well-defined modules, with high cohesion and low coupling, leaving the system open to future implementations by inserting new features or plugins. Finally, the web application was developed with a responsive design, thus aiming at a better experience for users, who can access it from devices with different screen sizes and resolutions.
Regarding future work, it is possible to use other metrics besides communication in the adaptation algorithm, so that the model reacts to processing overloads and to the network simultaneously. It is also possible to consider a distributed database to address the fault tolerance, scalability and bottleneck issues caused by the centralized point in the cloud. Finally, we intend to implement the ACMA system in a real business environment to collect data and assess its functioning over time.
This work was partially supported by the following Brazilian agencies: CNPq, FAPERGS and CAPES.
Andrioli, L., da Rosa Righi, R., da Costa, C.A. and Graebin, L. (2017) Observing Network Performance and Congestion on Managing Assets with RFID and Cloud Computing. Journal of Computer and Communications, 5, 43-66. https://doi.org/10.4236/jcc.2017.59004