Nowadays, the usage of mobile devices is progressively increasing, and delay-sensitive applications (e.g., augmented reality, online banking, and 3D games) require low delay when executed on a mobile device. Mobile cloud computing provides a resource-rich environment for resource-constrained mobile devices to run such applications, but the long distance between the mobile user's application and the cloud server introduces hybrid delay (i.e., network delay and processing delay). To cope with the hybrid delay of delay-sensitive applications in mobile cloud computing, we propose a novel hybrid delay workload assignment (HDWA) algorithm. The primary objective of HDWA is to run the application on the cloud server in an efficient way that minimizes the application's response time. Simulation results show that the proposed HDWA performs better than the baseline approaches.
The number of mobile cloud applications is growing day by day. Cloud computing provides a rich environment for executing the computational workload of these applications; thousands of running applications are managed by resource-rich cloud servers, and each individual application is unique, with different resource requirements. Nevertheless, because cloud services are a long WAN distance away from UE devices, offloading incurs hybrid latency (i.e., transmission delay and processing delay), whereas the central requirement of a latency-bound application is a low response time. The new mobile edge computing (MEC) paradigm has been proposed by the European Telecommunications Standards Institute [
In this paper, we study the response time minimization problem of latency-bound applications together with task scheduling over a hybrid computing infrastructure (cloudlet and remote cloud) [
This paper makes the following contributions:
・ We propose a dynamic application task scheduling framework that decides which task components of an application are executed locally and which are offloaded to the cloudlet server, in order to minimize the average execution time.
・ We propose a machine learning (reinforcement learning) based algorithm that chooses an efficient, optimized network route with low transmission delay for computational task offloading.
・ We propose the HDWA heuristic, which always allocates the optimal cloudlet to an offloaded task so as to minimize its response time, including the hybrid latency.
The rest of the paper is organized as follows. Section 2 elaborates on related work, and Section 3 describes and formalizes the problem under study. Section 4 presents the proposed heuristic and its steps. Section 5 evaluates the approach through simulation. Section 6 concludes the paper.
Moreover, offloading cost is an important concern for today's mobile cloud applications. Existing research has proposed frameworks to improve the offloading cost in the mobile runtime environment.
In [
In this work, the scenario is restricted to two cloudlets, as shown in
A set of cloudlet servers $CK = \{ck_1, ck_2, \cdots, ck_n\}$ is given, where $ck \in CK$ and each cloudlet has the same service rate and capacity $C_p$. A cloudlet is a collection of homogeneous virtual machines deployed at cloudlet data centers; resources (i.e., memory, CPU processing, storage, and bandwidth) are identical across all cloudlets. If the requested workload exceeds the cloudlet capacity, some tasks are forwarded to the remote cloud for further execution.
The response time of an application is the round-trip time from the submission of the user's input to the return of the result. Hybrid latency (transmission delay and processing delay) has a strong influence on task offloading. The term latency covers several kinds of delay, such as transmission delay, processing delay (queueing and execution delay), and propagation delay between servers [
Transmission delay has a strong influence on mobile data offloading; network congestion arises from many factors, such as noise, interference, and network traffic [
$$x_{u,ck} = \begin{cases} 1, & \text{if user } u \text{ offloads its task to cloudlet } ck \\ 0, & \text{otherwise} \end{cases} \quad (1)$$
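For concreteness, the binary offloading decision of Eq. (1) can be held in a simple assignment matrix. The following Python sketch is ours, not the paper's; the function and variable names are illustrative assumptions.

```python
# Minimal sketch of the binary offloading decision x[u][ck] from Eq. (1).
# Function and variable names are illustrative, not from the paper.

def make_assignment(num_users, num_cloudlets, chosen):
    """chosen[u] = index of the cloudlet serving user u, or None for local/remote."""
    x = [[0] * num_cloudlets for _ in range(num_users)]
    for u, ck in enumerate(chosen):
        if ck is not None:
            x[u][ck] = 1
    return x

x = make_assignment(3, 2, [0, 1, None])
# Each row contains at most one 1: a task is offloaded to a single cloudlet.
```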
Finally, the average transmission delay is given by:
$$\sum_{u \in U} \sum_{ck \in CK} \left( \ell_{U,CK} + \ell_{CK,RC} \right) \quad (2)$$
The offloading system takes randomly generated input with different arrival rates into the I/O system; after being processed by the I/O system, the result is dispatched as a departure [
$$y_{u,ck} \in [0, 1] \quad (3)$$
By using a queuing model [
$$C_{ck}^{u} = \frac{y_{u,ck}\, cm_u\, \lambda_u}{\mu_{ck} - \sum_{u=1}^{U} y_{u,ck}\, cm_u\, \lambda_u} \quad (4)$$
In this equation, $y_{u,ck}\, cm_u\, \lambda_u$ is the computational workload offloaded to the cloudlet server, whereas $1 / \left(\mu_{ck} - \sum_{u=1}^{U} y_{u,ck}\, cm_u\, \lambda_u\right)$ is the processing and queueing time in the system. Furthermore, the average service rate of the cloudlet must satisfy the latency bound, i.e., $\mu_{ck} - \sum_{u=1}^{U} y_{u,ck}\, cm_u\, \lambda_u > 0$. If the requested workload exceeds the cloudlet capacity $C_p$, some tasks migrate to the remote cloud for further execution, as shown below:
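The M/M/1-style processing delay of Eq. (4) can be sketched as a small Python function; the parameter names mirror the paper's symbols, but the packaging into a single function is our assumption.

```python
# Hedged sketch of the cloudlet processing delay in Eq. (4): the offered load
# y*cm*lam divided by the residual service rate, valid only when the cloudlet
# is stable (denominator > 0), matching the paper's stability condition.

def cloudlet_delay(y, cm, lam, mu, offered_load):
    """y: fraction offloaded, cm: workload per task, lam: arrival rate,
    mu: cloudlet service rate, offered_load: sum of y_u*cm_u*lam_u over users."""
    residual = mu - offered_load
    if residual <= 0:
        raise ValueError("cloudlet overloaded: stability condition violated")
    return (y * cm * lam) / residual

# Example: full offloading (y=1) of a 2-unit task at rate 3/s on a cloudlet
# with service rate 10 already carrying an offered load of 6.
delay = cloudlet_delay(1.0, 2.0, 3.0, 10.0, 6.0)  # 6 / 4 = 1.5
```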
$$RC_{u,ck} = \frac{\left(1 - \sum_{ck \in CK} x_{u,ck}\right) cm_u\, \lambda_u}{\mu_{RC} - \left(1 - \sum_{ck \in CK} x_{u,ck}\right) \lambda_u}, \quad (5)$$
Here, $\left(1 - \sum_{ck \in CK} x_{u,ck}\right) cm_u\, \lambda_u$ is the workload allocated for service at the remote cloud, and it must satisfy $\left(1 - \sum_{ck \in CK} x_{u,ck}\right) cm_u\, \lambda_u > 0$. In addition, $\theta_u$ is the processing capacity of the local device, which can execute some small tasks locally. When a UE's workload is offloaded to the closest cloudlet server, the unit transmission delay follows:
$$T_{U,CK} = \sum_{ck \in CK} \ell_{U,CK} \left( y_{u,ck}\, cm_u + dw_u \right), \quad (6)$$
The local cloudlet is located at the network edge, whereas the remote cloud sits across a long WAN distance; therefore, the unit propagation delay is described as follows:
$$T_{CK,RC} = \sum_{ck \in CK} \ell_{CK,RC} \left( 1 - x_{u,ck} \right) dw_u, \quad (7)$$
The final application response time is the combination of transmission delay and processing delay, and is given by:
$$T = \sum_{u \in U} \left( C_{ck}^{u} + RC_{u,ck} + T_{U,CK} + T_{CK,RC} \right). \quad (8)$$
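Summing the four delay components over all users, as in Eq. (8), can be sketched directly; the dictionary layout and the sample values below are illustrative assumptions, not the paper's data.

```python
# Sketch of Eq. (8): per-user response time is the sum of cloudlet processing
# delay, remote-cloud delay, and the transmission/propagation terms of
# Eqs. (6)-(7); the total T sums over the user set U. Values are placeholders.

def response_time(users):
    """users: list of dicts holding the four delay components of one user."""
    return sum(u["c_ck"] + u["rc"] + u["t_u_ck"] + u["t_ck_rc"] for u in users)

T = response_time([
    {"c_ck": 1.5, "rc": 0.0, "t_u_ck": 0.4, "t_ck_rc": 0.1},  # fully at cloudlet
    {"c_ck": 0.8, "rc": 0.5, "t_u_ck": 0.3, "t_ck_rc": 0.2},  # partly migrated
])
```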
$$\lambda(x) = \lambda_0\, \mathbf{1}\{x \le T\} \quad (9)$$
The response time minimization problem has decision variables and a set of constraints; a feasible solution follows the integer linear program below:
$$\text{Minimize } T = \sum_{u \in U} \left( C_{ck}^{u} + RC_{u,ck} + T_{U,CK} + T_{CK,RC} \right) \quad (10)$$
$$\text{s.t. } \sum_{u=1}^{U} dw_u\, x_{u,ck}\, cm_u \le C_p, \quad \forall ck \in CK, \forall u \in U \quad (11)$$
$$\mu_{ck} - \sum_{u=1}^{U} y_{u,ck}\, \lambda_u > 0, \quad \forall ck \in CK, \forall u \in U \quad (12)$$
$$x_{u,ck} \in \{0, 1\}, \quad \forall ck \in CK, \forall u \in U \quad (13)$$
$$\sum_{ck \in CK} x_{u,ck} = 1, \quad \forall u \in U \quad (14)$$
Constraints (11) and (12) state that the arrival rate of the UEs' computational workload must be less than the service rate and capacity of the cloudlet, so that the system remains stable, whereas constraints (13) and (14) ensure that a single workload is allocated to only a single cloudlet, minimizing the chance of system overhead. This minimization problem is well known to be NP-hard.
To satisfy constraints (11)-(14), we have designed the efficient HDWA algorithm to solve the problem. The primary objective of HDWA is to iteratively choose the optimal network path and cloudlet and to allocate the user's computational task according to the given deadline. In Algorithm 1, steps 2-3 initialize the end-user workload assignment Z to the cloudlets; step 5 iteratively selects the optimal cloudlet that is not disk-bound and allocates the user to it, in order to reduce the average response time of computational tasks; steps 7-8 allocate resources to offloaded tasks at the server with minimum lateness that can complete them within the deadline; step 9 terminates the loop once every requesting user has been allocated to its respective cloudlet.
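The greedy assignment loop described above can be sketched as follows. This is our reading of the heuristic, not the paper's exact pseudocode: the data layout, the fixed remote-cloud delay, and the capacity bookkeeping are illustrative assumptions.

```python
# Greedy sketch of the HDWA idea: assign each task to the cloudlet with the
# lowest response time that still has spare capacity and meets the task's
# deadline; otherwise fall back to the remote cloud. REMOTE_DELAY and the
# dict-based cloudlet state are assumptions for illustration.

REMOTE_DELAY = 5.0  # assumed fixed remote-cloud response time

def hdwa_assign(tasks, cloudlets):
    """tasks: list of (demand, deadline); cloudlets: list of
    {'cap': remaining capacity, 'delay': response time} (mutated in place).
    Returns, per task, a cloudlet index or the string 'remote'."""
    plan = []
    for demand, deadline in tasks:
        best, best_delay = None, REMOTE_DELAY
        for i, ck in enumerate(cloudlets):
            # Candidate must fit, meet the deadline, and beat the current best.
            if ck["cap"] >= demand and ck["delay"] <= deadline and ck["delay"] < best_delay:
                best, best_delay = i, ck["delay"]
        if best is None:
            plan.append("remote")          # migrate to the remote cloud
        else:
            cloudlets[best]["cap"] -= demand  # consume cloudlet capacity
            plan.append(best)
    return plan

plan = hdwa_assign([(2, 3.0), (2, 3.0), (2, 1.0)],
                   [{"cap": 2, "delay": 1.5}, {"cap": 2, "delay": 2.0}])
# Third task misses both cloudlets (no capacity left) and goes remote.
```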
The time complexity of the algorithm is $O(I \log n \cdot K)$, where $I$ is the number of user workloads and $K$ is the number of iterations required until all user workloads are allocated.
The application response time T must be less than the application deadline b.
In the results analysis, we compare the performance of the proposed Hybrid Delay Workload Assignment (HDWA) algorithm with the conventional approach [
We have designed a simulation framework based on the SimPy Python API [
Notation | Explanation |
---|---|
λ_i | [1, 20] per second (arrival rate) |
Mobile users (MUs) | [100, 1000] |
ψ_{U_i} | [0.2, 1.5] × 10^2 |
C_i | [1024, 5120] KB |
d_i | [1024, 10240] KB |
ℓ_{U,CK} | 2 ms/MB (augmented reality transmission latency) |
ℓ_{CK,RC} | 4 ms/MB (propagation latency) |
μ_k | [2, 3.5] × 10^4 |
Bandwidth (B) ξ_k | [50, 100] Mbps |
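A minimal version of this queueing simulation can be reproduced without SimPy in pure Python; the single-server FIFO model below and the arrival/service rates are placeholders loosely drawn from the table's ranges, not the paper's full framework.

```python
import random

# Minimal single-server FIFO queue simulation approximating the paper's
# SimPy-based setup. lam (arrival rate) and mu (service rate) are placeholder
# values; this is a sketch of the measurement loop, not the full framework.

def simulate_queue(lam, mu, n_tasks, seed=42):
    """Average response time (waiting + service) for n_tasks Poisson arrivals."""
    rng = random.Random(seed)
    clock, server_free_at, total = 0.0, 0.0, 0.0
    for _ in range(n_tasks):
        clock += rng.expovariate(lam)        # next task arrives
        start = max(clock, server_free_at)   # wait if the server is busy
        service = rng.expovariate(mu)
        server_free_at = start + service
        total += server_free_at - clock      # this task's response time
    return total / n_tasks

avg = simulate_queue(lam=5.0, mu=10.0, n_tasks=10000)
# For a stable M/M/1 queue the average should be near 1 / (mu - lam) = 0.2 s
```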
A comparison of the proposed HDWA algorithm with existing approaches follows.
The conventional approach uses a basic offloading scheme that either executes the computation task locally on the device or offloads it to the remote cloud for additional processing. This approach does not handle end-to-end latency, because cloud services are multiple hops away from the mobile device [
The baseline approach brings computing resources to the mobile network edge; its application scheduler randomly decides whether a computation task is executed on the mobile device or offloaded to the server. This approach reduces the application response time to some extent, but when the requested computation workload exceeds the server's capacity, it leads to lateness in the system, i.e., queueing and processing delay.
The proposed approach takes advantage of both local task computation on the device and the local cloudlet server. If the requested computation tasks exceed the limit of the cloudlet, it dynamically migrates some computation tasks to the remote server for further execution.
As a result, the proposed strategy achieves its objective and performs better than the existing approaches.
In this study, we have examined the response time minimization problem with heterogeneous latency (transmission and processing delay), focusing on supporting latency-sensitive applications with a low response time. We divided the minimization problem into three sub-problems and solved each of them efficiently with minimum response time; the overall problem is NP-complete, which we prove via its integer linear programming formulation, and our proposed solution always yields a feasible solution. Simulation results show that the proposed methods and framework outperform existing methods and frameworks. Future work includes a detailed study of the above latency-aware computation offloading problem under additional constraints (e.g., load-balancing latency, communication energy, and transmission energy) over a geographically distributed cloudlet network.
The team members Abdul Rasheed Mahesar, Abdullah Lakhan, Dileep Kumar Sajnani, and Irfan Ali Jamali worked hard to complete this paper and proposed a novel idea compared to existing offloading schemes. We hope to achieve many more milestones in the future.
The authors declare no conflicts of interest regarding the publication of this paper.
Mahesar, A.R., Lakhan, A., Sajnani, D.K. and Jamali, I.A. (2018) Hybrid Delay Optimization and Workload Assignment in Mobile Edge Cloud Networks. Open Access Library Journal, 5: e4854. https://doi.org/10.4236/oalib.1104854