In this paper, we use the distributed mean value analysis (DMVA) technique, together with the random observer property (ROP) and Palm probabilities, to improve the throughput of a network queuing system. In such networks, finding a complete communication path from source to destination is difficult, especially when the communicating nodes are not in the same region. We therefore develop an algorithm for single-server and multi-server centers which gives interesting and successful results. The network is modeled as a closed queuing network, and mean value analysis is used to determine the network throughput (β) for its different values. For certain chosen values of the model parameters, we found that the maximum network throughput for β ≥ 0.7 remains consistent in the single-server case, while in the multi-server case, for β ≥ 0.5, the throughput surpasses that of the Markov chain queuing system.
Networks in which a complete communication path between nodes does not always exist are referred to as delay-tolerant networks [
In practice, additional statistics derived from these distributions, such as the mean queue size, mean waiting time, and throughput, are needed. In the framework of conventional algorithms, these properties are obtained via normalizing constants. The algorithm proposed in this paper computes the required statistics directly. Its complexity is asymptotically almost equal to that of the existing algorithms, but its implementation is much simpler.
Choosing the right queuing discipline and an adequate queue length (how long a packet may reside in a queue) can be a difficult task, especially if your network is unique, with distinctive traffic patterns. Monitoring the network determines which queuing discipline is adequate for it; it is equally important to select a queue length suitable for your environment. Configuring a queue length that is too shallow could release traffic into the network faster than the network can accept it, resulting in discarded packets. If the queue length is too long, you could introduce an unacceptable amount of latency and round-trip time (RTT) jitter: application sessions could fail, and end-to-end transport protocols such as TCP could time out.
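The trade-off above can be illustrated with a toy simulation (a minimal sketch: the arrival probability, service probability, and queue limits below are invented for illustration, not measured values). A shallow tail-drop queue discards more packets, while a deep queue converts the same overload into waiting time.

```python
import random

def simulate(queue_limit, arrival_p=0.6, service_p=0.5, steps=100_000, seed=1):
    """Toy slotted-time FIFO queue with tail drop (illustrative parameters)."""
    rng = random.Random(seed)
    q = drops = delay_sum = served = 0
    for _ in range(steps):
        if rng.random() < arrival_p:        # a packet arrives this slot
            if q < queue_limit:
                q += 1
            else:
                drops += 1                  # queue full: packet discarded
        if q and rng.random() < service_p:  # a packet departs this slot
            delay_sum += q                  # queue depth as a delay proxy
            served += 1
            q -= 1
    return drops / steps, delay_sum / max(served, 1)

shallow_drop, shallow_delay = simulate(queue_limit=5)
deep_drop, deep_delay = simulate(queue_limit=500)
```

Since the offered load here exceeds the service rate, the shallow queue sheds the excess as drops, while the deep queue accumulates delay, which is the dilemma described above.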
Because queue management is one of the fundamental techniques in differentiating traffic and supporting QoS functionality, choosing the correct implementation can contribute to your network operating optimally.
Included in the Quality of Service Internetwork architecture is a discipline sometimes called queue management. Queuing is a technique used in internetwork devices such as routers or switches during periods of congestion. Packets are held in the queues for subsequent processing. After being processed by the router, the packets are then sent to their destination based on priority.
In queuing networks, the traditional solution approach uses the characteristics of a continuous-time Markov chain to formulate a system of balance equations for the joint probability distribution of the system state. For certain classes of networks, such as Jackson networks and Gordon-Newell networks, the solution of the balance equations takes the form of a product of simple terms, see [
By the arrival theorem, a job moving from queue i to queue j in a closed queuing network containing K jobs will find, on average,
Mean value analysis depends on the mean queue size and mean waiting time. Applying this equation to each routing chain, and separately to each service center, furnishes a set of equations that is easily solved numerically. The proposed algorithm is simple and avoids the overflow and underflow problems that may arise with traditional algorithms. All mean values in the algorithm are calculated in a parallel manner. The memory requirement is thus higher than that of the previous algorithms, but the new mechanism is relatively faster in multi-server scenarios.
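For a single routing chain, the recursion just described can be sketched as follows (a minimal illustration of classical MVA, not the paper's DMVA algorithm; the service demands and population are invented values):

```python
def mva(demands, K):
    """Classical single-chain MVA: demands[i] is the service demand at
    center i, K the closed-network population (illustrative inputs)."""
    M = len(demands)
    Q = [0.0] * M                                       # mean queue sizes
    for k in range(1, K + 1):
        # Arrival theorem: an arriving customer sees the network as it
        # would be with one fewer customer in it.
        W = [demands[i] * (1.0 + Q[i]) for i in range(M)]   # mean waiting times
        X = k / sum(W)                                      # throughput (Little's law)
        Q = [X * W[i] for i in range(M)]                    # updated mean queue sizes
    return X, W, Q

X, W, Q = mva([0.2, 0.4, 0.1], K=5)
```

Note that overflow and underflow cannot occur here: unlike normalizing-constant methods, every quantity in the recursion is a bounded mean value.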
We consider a closed multi-chain queuing system that has a product-form solution. Suppose C is a routing chain and S is a service center. Each chain contains a fixed number of customers, which are processed through a subset of the service centers using the Markov chain technique, while the service centers adopt one of the following mechanisms.
1) FIFO: customers are serviced in order of arrival; multiple servers may be used.
2) Priority Queuing: customers are serviced according to the traffic categorization.
3) WFQ (Weighted Fair Queuing): gives low-volume traffic flows preferential treatment and allows higher-volume traffic flows to obtain equity in the remaining amount of queuing capacity. WFQ tries to sort and interleave traffic by flow and then queues the traffic according to the volume of traffic in the flow.
4) PS (Processor Sharing): customers are served in parallel by a single server.
5) LCFSPR (Last Come, First Served, Preemptive Resume): customers are served in reverse order of arrival by a single server.
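Disciplines 1) and 2) can be contrasted in a few lines (a sketch with invented packet names and priority classes; `heapq` orders by the priority number, lower meaning more urgent):

```python
from collections import deque
import heapq

# FIFO: packets leave in exactly the order they arrived.
fifo = deque()
for pkt in ["a", "b", "c"]:
    fifo.append(pkt)
fifo_order = [fifo.popleft() for _ in range(3)]

# Priority queuing: packets leave according to their traffic class,
# regardless of arrival order.
pq = []
for prio, pkt in [(2, "bulk"), (0, "voice"), (1, "video")]:
    heapq.heappush(pq, (prio, pkt))
prio_order = [heapq.heappop(pq)[1] for _ in range(3)]
```

Here the "voice" packet, although it arrived second, is dequeued first under priority queuing, while FIFO preserves arrival order.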
We now assume that all servers have constant service rates at multiple FCFS service centers, and start from the following consequence, which relates the mean waiting time
where
The above equations are applicable to the recursive analysis of mean queue size, mean waiting time, and system throughput. The initial point can be set as,
Putting the substitution in algorithmic form, we have
Our model is defined on a one-dimensional closed system consisting of M cells i.e.
Being a single-lane model, each vehicle moves to the next cell if it is empty, or waits and then moves when the vehicle ahead vacates the cell. Thus there are only two configurations for a particular vehicle: the cell ahead is either empty or occupied. We say a vehicle is "in service" when the cell ahead is empty and "waiting" when it is occupied.
In effect, each empty cell acts as a server, and at any point in time there are always
The model we claim can be mapped onto a cyclic Jackson network with
The results of a cyclic Jackson network are well known. A state is indicated by
and
while,
In this case, we consider the service rate and the probability at each node and stage to be the same and equal to the inverse of the number of ways of selecting
here throughput
where
For the corresponding network number of queues
After scaling, throughput becomes
For
The algorithm starts with an empty network (zero customers), then increases the number of customers by 1 until it reaches the desired number of customers of chain
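With several chains, this build-up visits every intermediate population vector. One way to enumerate them so that each vector's one-customer-smaller predecessors are processed first is sketched below (the target chain populations are illustrative values):

```python
from itertools import product

# Target populations of the routing chains (invented for illustration).
N = (2, 3)

# All intermediate population vectors (n1, n2) with 0 <= nc <= Nc,
# ordered by total population so that every vector is visited only
# after all vectors with one fewer customer.
populations = sorted(product(*(range(n + 1) for n in N)), key=sum)
```

The recursion of the algorithm then evaluates the mean values at each vector from those already computed at its predecessors, exactly as in the single-chain case.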
The average waiting time in this closed queuing network and the average response time per visit are given by the following formulas:
where,
where,
We now implement our model in single-server and multi-server scenarios to calculate the throughput and mean waiting time.
a) SINGLE SERVER CASE
Initialize
If customers are delayed independently at the service centers,
then we have Little's equation for chains and service centers as,
The operations count for this algorithm is bounded by
We now proceed to extend the computational procedure to handle FCFS service centers with multiple constant unit-rate servers. The mean value Equation (14) for such a center can be written as
where
where
From Equations (17) and (18), we obtain the mean number of idle servers as
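Combining the multi-server waiting-time expression with a marginal-probability recursion gives an algorithm along the following lines (a hedged sketch in the spirit of exact MVA for multi-server FCFS centers; the demands, server counts, and population are invented, and the code is not a verbatim transcription of Equations (14)-(18)):

```python
def multiserver_mva(D, c, K):
    """Single-chain MVA with FCFS centers having c[i] identical unit-rate
    servers; D[i] is the per-server service demand (illustrative inputs)."""
    M = len(D)
    Q = [0.0] * M                                    # mean queue sizes
    # P[i][j] = marginal probability of j customers at center i, j < c[i]
    P = [[1.0] + [0.0] * (c[i] - 1) for i in range(M)]
    for k in range(1, K + 1):
        W = []
        for i in range(M):
            if c[i] == 1:
                W.append(D[i] * (1.0 + Q[i]))
            else:
                # correction: mean number of idle servers seen on arrival
                corr = sum((c[i] - 1 - j) * P[i][j] for j in range(c[i] - 1))
                W.append((D[i] / c[i]) * (1.0 + Q[i] + corr))
        X = k / sum(W)                               # throughput
        Q = [X * W[i] for i in range(M)]             # mean queue sizes
        for i in range(M):                           # update marginals
            if c[i] > 1:
                old = P[i][:]
                for j in range(1, c[i]):
                    P[i][j] = (X * D[i] / j) * old[j - 1]
                P[i][0] = 1.0 - (X * D[i] + sum((c[i] - j) * P[i][j]
                                 for j in range(1, c[i]))) / c[i]
    return X, W, Q

X, W, Q = multiserver_mva([0.2, 0.4], [1, 2], K=4)
```

Adding a second server at the heavier center raises the throughput relative to the purely single-server configuration, which is the behavior the multi-server case above is designed to capture.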
b) MULTISERVER CASE
Step 1: Parameters Initialization
Step 2: Main Loop, same as in single server case.
Step 3: Additional Corollary for Multi-servers.
For
Step 4: Little’s equation for chains having
Step 5: Little’s equation for service centers having
Step 6: Additional step, within the main loop, for calculating the marginal queue size at each multi-server FCFS service center
Per multi-server service center and per recursive step, the evaluation is of the order
c) Queuing Theory Limitations
The assumptions of classical queuing theory may be too restrictive to be able to model real-world situations exactly. The complexity of production lines with product-specific characteristics cannot be handled with those models. Therefore specialized tools have been developed to simulate, analyze, visualize and optimize time dynamic queuing line behavior.
For example, the mathematical models often assume infinite numbers of customers, infinite queue capacity, or no bounds on inter-arrival or service times, when it is quite apparent that these bounds must exist in reality. Often, although the bounds do exist, they can safely be ignored, because the differences between the real world and theory are not statistically significant: the probability that such boundary situations occur is remote compared to the expected normal situation. Furthermore, several studies [
Alternative means of analysis have thus been devised in order to provide some insight into problems that do not fall under the scope of queuing theory, although they are often scenario-specific because they generally consist of computer simulations or analysis of experimental data.
The network bottleneck is the fast server. For
The network throughput for different values of β is determined recursively. Every job arriving at a server begins service immediately (FCFS).
Reducing the mean waiting time is thus one of the main objectives of our future work: to define and construct a model in which the mean waiting time decreases as the number of customers increases, by implementing grid computing functionalities. We also plan to develop a queuing network model for multi-hop wireless ad hoc networks with the same objectives, using a diffusion approximation to evaluate the average delay and the maximum achievable per-node throughput, and to extend the analysis to the many-to-one case, taking deterministic routing into account.
This research work was partially supported by the NSF of China under Grant No. 61003247. The authors would also like to thank the anonymous reviewers and the editors for their insightful comments and suggestions.