For the verification and validation of microscopic simulation models of pedestrian flow, we have performed experiments for different kinds of facilities and sites where most conflicts and congestion happen, e.g. corridors, narrow passages, and crosswalks. To assess the validity of a model, the simulation results should be compared with video recordings carried out under the same real-life conditions, e.g. with respect to pedestrian flux and density distributions. The aim of this strategy is to achieve the accuracy required of the simulation model, and the method is well suited to detecting the critical points in pedestrian walking areas. For the calibration of suitable models we use the results obtained from analysing the video recordings of Hajj 2009; these results can also be used to check the design of sections of pedestrian facilities and exits. As a practical example, we present the simulation of pilgrim streams on the Jamarat bridge (see Figure 5). The objectives of this study are twofold: first, to show through verification and validation that simulation tools can be used to reproduce realistic scenarios, and second, to gather data for accurate predictions for designers and decision makers.
In this paper we explore methods that can be used to make the results produced by a software or simulation tool more authentic and believable. A set of statistical data taken from real-life experience can be used to check the output values created by the simulation tool and thereby validate the simulation model. This approach is referred to as the statistical technique method and can be applied to simulation models depending on which real-life data are available (van Dijkum et al., 1999). In general, a lack of empirical data makes the validation of any simulation model a complicated task.
If the real data are not available, the data produced by the simulation tools can still be checked against statistical theory and the probability distributions underlying the design of experiments (Knepell & Arangno, 1993).
If only output data are available, the values produced by the simulation model can be compared with well-known statistical data (Knepell & Arangno, 1993).
If data can be collected on both system input and output, trace-driven simulation becomes possible: the model can be validated by comparing the collected output data with the simulation results. In trace-driven simulation, the simulation input data are determined by trace data collected with a variety of instruments and methods (Knepell & Arangno, 1993).
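As a concrete illustration of such a statistical check, the following sketch compares a measured quantity (e.g. walking speeds extracted from video) with the corresponding simulation output using a two-sample Kolmogorov-Smirnov test. The numbers below are synthetic placeholders, not Hajj data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical observed walking speeds (m/s), e.g. extracted from video recordings.
observed = rng.normal(loc=1.34, scale=0.26, size=500)

# Hypothetical walking speeds produced by a simulation run of the same scenario.
simulated = rng.normal(loc=1.30, scale=0.30, size=500)

# Two-sample Kolmogorov-Smirnov test: do the two samples plausibly
# come from the same underlying distribution?
statistic, p_value = ks_2samp(observed, simulated)

# A large p-value means the test finds no significant deviation between
# the empirical and the simulated speed distributions.
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3f}")
```

Which test is appropriate depends on the quantity being compared; for flow counts over time, for example, a chi-square test on binned counts would serve the same purpose.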
What, however, does “validation” mean? The term is used to refer to various processes. Validation refers to the process of examining the acceptability and credibility of the conceptual model, i.e. whether it accurately represents the actual system being analysed. Validation helps to develop the right model. Verification is the process of checking the simulation output for acceptability and of controlling whether the results produced by the computer program are compatible with the real data collected about the same system (Knepell & Arangno, 1993).
Many books could be written about the philosophical and practical aspects involved in validation (see, e.g., the monograph by Knepell & Arangno (1993)). For this reason, we identify validation as the systematic examination of whether the simulation model displays or illustrates the real world in a reasonable time, as well as a procedure to check the correctness and meaningfulness of the resulting data. In short, validation is a process to check the ability of the model to reproduce the real system. In the next sections we will concentrate on validation that uses mathematical statistics and comparison with video recordings of real situations (see
Since modelling and simulation have become very important in many domains of modern science, much literature on verification and validation of simulation models has appeared: see the web (http://manta.cs.vt.edu/biblio/), and the detailed surveys in Beck et al. (1997); Kleijnen (1995); Sargent (1996). Important work concerning the choice of statistical tests to validate a model was done by Kleijnen (1999).
As a first step we compare our video recordings with the simulation results. Many phenomena (like lane formation, the oscillation effect, and the edge effect) can be observed to check whether our simulation reproduces part of reality. For this investigation we need to design scenarios for the next video observation in the Great Mosque in Mecca.
In this section we present a variety of techniques used to calibrate and validate the PedFlow simulation model. PedFlow is a microscopic simulation model developed by Löhner Simulation Technologies International, Inc. (LSTI) (Löhner, 2010). For verification and validation, data was provided by the Institute of Hajj Research and the Ministry of Hajj, consisting of layout information, pilgrim numbers, and Hajj schedules. We augmented this data with camera-based observations at several stairways, gates, and the piazza inside and outside the Great Mosque in Mecca. The collected data can be used as input parameters of the simulation and improves the acceptability and accuracy of the data produced by the simulation.
PedFlow must model all processes related to pedestrians inside and outside the Haram, both at normal times and at the busiest rush hours of the Hajj events, such as walking, performing activities, and route choice. In order to validate pedestrian flow modelling in PedFlow and to study pedestrian traffic flow during the Hajj in detail, observations were collected at the Haram in Mecca during Hajj 2009. These observations concerned the Tawaf, the Sa’y, (individual) walking times, and other sites such as the Haram gates before and after each prayer. They provide the data used to verify our simulation tool PedFlow: the numbers of pilgrims going in and out of the Haram, individual walking times, and densities of pedestrians on the Mataf. Finally, a comparison is made between the observations and the modelling results of PedFlow, in order to check the validity of PedFlow with respect to pedestrian traffic flow operations.
Since this investigation is concerned with the safety and fluidity of large-scale pilgrim flows at the pilgrimage places in Mecca, the validation of the simulation tool concentrates on pedestrian traffic flow at the holy places. The main variables to be observed and compared with the model predictions are the walking speeds on the stairs (upward and downward directions), on the piazza, and on the Mataf of the Haram; the densities over time and space in the video recordings; and the fundamental diagrams of Predtechenski and Milinski (1971), as well as the layout information, which provides the boundary conditions and environmental information.
The credibility of the data produced by a pedestrian microscopic simulation model can be assessed by comparison with results obtained by other models with the same characteristics, although we note that such a comparison is necessary for the acceptance of the data but not sufficient. Different results (e.g. outputs) of the PedFlow simulation model being validated are compared with results of other models. For example, emergent lane formation is reproduced by many simulation models, e.g. by Blue (2001), who used a cellular automaton model. Lane formation in bidirectional flow and clogging effects at bottlenecks in emergency situations were reproduced by Helbing et al. (2000), who use a social force model. First, a simple case of the simulation model is compared with known results of analytic models (Predtechenski & Milinski, 1971); second, the simulation model is compared with other simulation models that have been validated, such as social force models (see (Helbing et al., 2002) and the references therein) and cellular automata, e.g. (Fukui & Ishibashi, 1999; Muramatsu & Nagatani, 2000).
Our first set of simulations consisted of pedestrian flow through a hallway with a narrow passage (see
This experiment is realized with constant influx, meaning that when a pedestrian has passed through the passage, he is replaced by a new pedestrian at a random starting location at the entrance of the hallway, keeping the number of people in the hallway constant. The mean velocities were measured in the passage area for different pedestrian influxes, and the results are illustrated in
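The constant-influx boundary condition can be sketched as follows; the geometry, the fixed walking speed, and the straight-line motion are simplifying assumptions for illustration and are not taken from the PedFlow implementation:

```python
import random

HALL_LENGTH = 20.0   # hallway length in metres (assumed)
HALL_WIDTH = 4.0     # hallway width in metres (assumed)

def step(agents, dt=0.1, speed=1.3):
    """Advance all agents and re-insert those that passed through the exit."""
    for agent in agents:
        agent["x"] += speed * dt
        if agent["x"] >= HALL_LENGTH:   # pedestrian has passed through the passage...
            agent["x"] = 0.0            # ...so a replacement enters the hallway
            agent["y"] = random.uniform(0.0, HALL_WIDTH)
    return agents

agents = [{"x": random.uniform(0.0, HALL_LENGTH),
           "y": random.uniform(0.0, HALL_WIDTH)} for _ in range(50)]
for _ in range(1000):
    step(agents)

# The number of people in the hallway stays constant by construction.
print(len(agents))
```

In a full model the advance step would of course come from the pedestrian dynamics (interaction forces, route choice); only the replacement bookkeeping is the point here.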
Pedestrian motion in passages is one of the few cases where reliable empirical data exists. In order to assess the validity of the proposed pedestrian motion model, a typical passage flow was selected. The geometry of the problem is shown in
The resulting data of the simulation are shown as crosses in
In this test we verify the simulation response by running a simplified version of the simulation program for which an analytical result is known. If the output data of the simulation model do not exhibit a significant deviation from the known analytical result, this can then be used to validate the model.
Through the simulation of pedestrian flow on the well known geometry of the Mataf in the Haram Mosque (see
average density for many runs was 5 to 7 persons/m2.
The simulation of high-density pedestrian flow streaming through the Jamarat area during the rush hour of the Hajj period has driven great technical progress in the modelling, simulation, and understanding of how large crowds behave. In the past many fatal accidents happened in this extremely dangerous area, where a large number of pilgrims stream through the site and try to stone the pillars in a relatively short period of time. Since the movement of pilgrims is very slow, an accumulation effect arises on both sides of the pillars. This leads to physical jamming, the trampling of pilgrims, and in the worst case to the death of pedestrians underfoot. To ensure the safety of the millions of pilgrims walking through this overcrowded area every year, and for better fluidity of pedestrian flow near the pillars, the proposal was made to build a bridge with a 5-level structure to ease the performance of this ritual. The bridge was designed to satisfy international standard criteria of pedestrian safety, especially during overcrowding; the central concept was to conduct the pilgrim flow in one direction without any counter flow. The Saudi government designated Professor Dr. Saad A. AlGadhi (expert in transportation management and design) and Dr. G. Keith Still (crowd dynamics expert) to evaluate the conceptual design with crowd dynamics software tools (AlGadhi & Still, 2003). This study produced a lot of data and information about:
・ Sufficient arrival capacity
・ Sufficient throwing area
・ Sufficient space (density ≤ 4 Hajjis per square meter)
・ Sufficient passing area
・ Sufficient egress capacity
in the Jamarat bridge area that can be used to validate other pedestrian simulation tools. For example, published data about the Jamarat bridge capacity, in-flux, and out-flux demonstrate that the total available ingress width must be greater than 28 meters to allow 125,000 pilgrims per hour. This is a minimum requirement; provisions for security forces/civil defence, bi-directional/counter flow, and hesitation (pilgrims stopping to rest) mean that the longer ingress ramps have additional width requirements (AlGadhi & Still, 2003).
Another set of data about the Jamarat bridge was published by Helbing after the crush of 2006 in his paper “The Dynamics of Crowd Disasters: An Empirical Study” (Helbing et al., 2007). His video analysis revealed a lot of data and information about the average local speeds, flows, and densities in the Jamarat area before and after the deadly crush. It was found that the pedestrian density near the pillars can reach a value as high as 9 persons/m2.
To assess the validity of the PedFlow simulation model and to improve the resulting data, we apply analytical and comparative tests. These tests compare the simulation output with the output from other simulation tools, e.g. Simulex/Myriad (Still, 2003). Compared to other models, PedFlow is better suited to predicting high-density crowd dynamics. The simulation result is shown in
The aim of most procedures and methods for testing model validity is to determine the similarity between the results produced by the conceptual model and the collected data. In general, the more closely the simulation output resembles the output from the real system, the better. Animation and visualization of the simulation output are necessary to establish the credibility of the system; moreover, this test is very important for examining how close the data is to the real world.
A literature survey reveals several investigations and animation methods that have been proposed to provide more realism in conceptual models simulating large-scale pedestrian motion. Treuille et al. (2006) and Shao and Terzopoulos (2007) suggested methods that increase the degree of accuracy and realism of crowd simulations.
For example, realistic human-like characters play an essential role in the animation of high-density crowd simulations. They illustrate the interactions between the individuals within the crowd, and between the individuals and their environment. This yields a better picture of the density distribution of pedestrians at a given site.
In the context of pedestrian animation we considered the technique of motion graphs (Kovar et al., 2002) in order to provide advanced behavioural human characters. We attempted to modify the motion graph approach to assemble an existing database of short MoCap (Motion Capture) animations into a longer clip of continuous motion. In our approach the pedestrian movements are expressed as paths or trajectories of the character extracted from unlabelled motion capture data. This technique modifies the character’s position and orientation for the entire animation clip. The trajectory and orientation of a character in a BVH (Biovision Hierarchical data) MoCap animation are interconnected, and one cannot be modified without influencing the other, which would render the animation unrealistic. Our technique allowed us to produce a continuous and longer sequence of animation using an existing database of MoCap animations and joining the animations together. Behaviour is closely related to the corresponding animation, and it is this binding between behaviour and animation that we intended to utilize to validate our model.
This paragraph presents the processes used to describe the animated movement of one or more objects or persons. From the tabular values produced by the microscopic simulation, the paths and velocity vectors of the agents are determined: the coordinates and velocity of every pedestrian at any time are given by the simulation data. The animation of the characters is designed in two steps: first we attach a polygon to every coordinate, and next we attach every polygon to a human character, (see
help the decision maker to manage huge crowds and detect critical points in a closed area, and help architects and designers to establish the number of fire exits required for a building. This animation can contribute to the validation of the simulation tools.
For the validation of a simulation model, it is necessary to compare the simulation output with real-world data, such as video recordings representing the same circumstances as the simulation. This method can reveal many effects and behaviours that appear in crowds.
Through observation of pilgrim flow we attempt to validate and verify the crowd dynamics tool PedFlow. The real data presented in (Dridi, 2015) is used to verify a microscopic crowd dynamics model developed to solve complicated problems concerning high-density crowd behaviour. The crowd dynamics model attempts to simulate the global movement of each individual as influenced by the temporal circumstances and the surrounding crowd. A good agreement between predictions and observations will validate the prediction model.
In recent years optical flow has come to be considered one of the most important techniques in image processing and computer vision. Computing optical flow vectors from consecutive image sequences is achieved in two different ways: gradient methods and correlation methods. Many studies show that optical flow techniques can successfully identify or recognize moving objects, e.g. moving cars or walking persons (Ricquebourg & Bouthemy, 2000). Compared with other models, this approach operates with relatively low computational expenditure and modest visibility requirements on a diversity of entities, permitting the reconstruction of object trajectories from video recordings with high accuracy. Movement can be detected from different sets of image sequences by considering the differences between two images, since this is more accurate in computation and prediction (Masoud & Papanikolopoulos, 2001). The difference in image brightness can then be analysed further to extract movement vectors that describe the motion of the entities captured in the respective images. This method is based on video segmentation and position identification, rather than on motion recognition by analysing frame-by-frame sequences of ordered images.
Over the last decades, computer scientists have worked in different ways to reconstruct the trajectories of moving objects. Many studies in different scientific fields attempt to compute the optical flow given by a sequence of images (see the comprehensive surveys (Barron et al., 1994; Beauchemin & Barron, 1995)). The gradient and correlation methods are the most widely used techniques for computing optical flow. In addition, there are statistical methods able to estimate the motion parameters (Fan et al., 1996) and methods using phase information (Fleet & Jepson, 1990). The approach proposed by Hayton establishes a relationship between optical flow and image registration techniques (Hayton et al., 1999).
Gradient-based techniques start from the image brightness I(x, y, t), which can be expanded in a Taylor series neglecting higher order terms (Horn & Schunck, 1981). In general, gradient-based techniques are accurate only when the intensity is preserved, and the Taylor series approximation stays reasonable when frame-to-frame displacements due to subject motion are a fraction of a pixel. To reduce the errors resulting from this technique and to compute flow vectors over a larger image region, an iterative method is deployed.
Correlation-based techniques are useful if the image sequences do not meet the conditions required for gradient-based techniques, i.e. the brightness intensity is not preserved, for example in cloud (Wu, 1995) and combustion (Sun et al., 1996) images. Such techniques try to establish correspondences between invariant features across frames. Typical features are blobs, corners, and edges (Clocksin, 2000).
As already mentioned, optical flow is a method to estimate object motion through brightness intensity changes in sequences of consecutively ordered images. The variation of brightness intensity in a region, relative to the average pixel intensity of each image in a sequence of crowd images, is used to estimate the pedestrian density distribution at various sites.
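A minimal sketch of this idea: pixels whose brightness deviates strongly from a person-free background frame are counted as occupied, giving a crude density proxy per region. The frames here are synthetic arrays, not Hajj footage, and the threshold is an assumed parameter:

```python
import numpy as np

def occupancy_ratio(frame, background, threshold=30):
    """Fraction of pixels whose brightness deviates noticeably from the background."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return float((diff > threshold).mean())

# Synthetic example: a uniform background and a frame with a darker "crowd" patch.
background = np.full((100, 100), 200, dtype=np.uint8)
frame = background.copy()
frame[20:60, 30:80] = 80  # darker region standing in for pedestrians

print(occupancy_ratio(frame, background))  # → 0.2 (the patch covers 2000 of 10000 pixels)
```

A real density estimate would additionally need a calibration from occupied area to persons per square metre, which depends on the camera geometry.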
Techniques for capturing pedestrian movement, e.g. extracting information about pedestrian speed from video footage obtained from CCTV surveillance of urban crowd movement, can be traced back to Velastin et al. (1994, 1993), who use algorithms operating on pixel intensities under certain conditions (such as a high frame rate) (Johnston, 2004). Other techniques and methods to compute the optical flow from changes in pixel intensities in a series of image sequences were developed by (Seki et al., 2000; Vannoorenberghe et al., 1996; Masoud & Papanikolopoulos, 2001; Yonemoto et al., 2003).
Optical flow is defined as an apparent motion of image brightness. Let I(x, y, t) be the image brightness that changes in time to provide an image sequence. Two main assumptions can be made:
・ Brightness I(x, y, t) depends smoothly on the coordinates x, y in the greater part of the image.
・ Brightness of every point of a moving or static object does not change in time.
Let some object in the image, or some point of an object, move; after time dt the object displacement is (dx, dy). Using a Taylor series for the brightness,

I(x + dx, y + dy, t + dt) = I(x, y, t) + (∂I/∂x)dx + (∂I/∂y)dy + (∂I/∂t)dt + higher order terms.

According to the second assumption the brightness of the moving point does not change, I(x + dx, y + dy, t + dt) = I(x, y, t), and therefore

(∂I/∂x)dx + (∂I/∂y)dy + (∂I/∂t)dt = 0. (3)

Dividing (3) by dt results in

(∂I/∂x)u + (∂I/∂y)v + ∂I/∂t = 0, (6)

usually called the optical flow constraint equation, where u = dx/dt and v = dy/dt are the components of the optical flow field.
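The constraint equation can be checked numerically: for a smooth brightness pattern translating with velocity (u, v), the combination (∂I/∂x)u + (∂I/∂y)v + ∂I/∂t evaluated with finite differences should be close to zero. The sinusoidal pattern below is an illustrative assumption:

```python
import numpy as np

# A smooth brightness pattern translating with velocity (u, v) = (1.0, 0.5) px/frame.
u_true, v_true = 1.0, 0.5
x, y = np.meshgrid(np.arange(64, dtype=float), np.arange(64, dtype=float), indexing="xy")

def I(t):
    return np.sin(0.2 * (x - u_true * t)) + np.cos(0.15 * (y - v_true * t))

I0, I1 = I(0.0), I(1.0)

# Finite-difference approximations of the partial derivatives.
Ix = np.gradient(I0, axis=1)   # ∂I/∂x
Iy = np.gradient(I0, axis=0)   # ∂I/∂y
It = I1 - I0                   # ∂I/∂t (dt = 1 frame)

# Residual of the optical flow constraint equation Ix*u + Iy*v + It = 0.
residual = Ix * u_true + Iy * v_true + It
print(float(np.abs(residual).mean()))  # small compared to the typical size of |It|
```

The residual does not vanish exactly because the derivatives are approximated by finite differences; it shrinks as the displacement per frame becomes a smaller fraction of the pattern wavelength, mirroring the sub-pixel condition stated above for gradient-based techniques.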
The movement recognition in this work is based on the optical flow method, extracting data from image sequences using the Lucas-Kanade algorithm (Lucas & Kanade, 1981). It considers a group of adjacent pixels and assumes that all of them have the same velocity. It finds an approximate solution of the above Equation (6) by the least-squares method, solving a system of linear equations. The equations are usually weighted: a window function gives the pixels near the centre of the neighbourhood more influence than those at the edges, and the resulting 2 × 2 linear system is solved for the flow vector (u, v).
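A minimal, unweighted version of this least-squares step (window function W ≡ 1 over one patch) can be written directly in NumPy; the translating test pattern is an assumed illustration, not PedFlow data:

```python
import numpy as np

def lucas_kanade_patch(I0, I1):
    """Estimate one flow vector (u, v) for a whole patch, assuming a single
    shared velocity, by least squares on the optical flow constraint equation."""
    Ix = np.gradient(I0, axis=1).ravel()
    Iy = np.gradient(I0, axis=0).ravel()
    It = (I1 - I0).ravel()
    A = np.stack([Ix, Iy], axis=1)        # one constraint equation per pixel
    # Solve A @ [u, v] = -It in the least-squares sense.
    (u, v), *_ = np.linalg.lstsq(A, -It, rcond=None)
    return u, v

# Synthetic patch translating by (1.0, 0.5) pixels between frames.
x, y = np.meshgrid(np.arange(32, dtype=float), np.arange(32, dtype=float))
def I(t):
    return np.sin(0.3 * (x - 1.0 * t)) + np.sin(0.25 * (y - 0.5 * t))

u, v = lucas_kanade_patch(I(0.0), I(1.0))
print(round(u, 2), round(v, 2))  # close to the true displacement (1.0, 0.5)
```

The practical algorithm additionally weights the equations with the window function and applies the solve in a sliding window over the image, but the 2 × 2 least-squares core is the same.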
To determine pedestrian dynamics in the mosque of Mecca, with millions of people performing their rituals, we chose the OpenCV tools from Intel. This section describes the structure, operation, and functions of the open source computer vision library (OpenCV) for the Intel Corporation architecture (Bradski, 2000). The OpenCV library is mainly used for real-time computer vision. Some example areas are human-computer interaction (HCI); object identification, segmentation, and recognition; face recognition; gesture recognition; motion tracking, ego-motion, and motion understanding; structure from motion (SFM); and mobile robotics.
Image sequences were obtained from videos collected by HD cameras during Hajj 2009.
The flow fields in
The above mentioned techniques can be helpful for the validation and verification of simulation tools but are not sufficient, since several effects limit the credibility of this method, for example ambiguity, aliasing, and the aperture effect. One of these effects, the “aperture problem”, has been extensively discussed in the optical flow literature (Horn & Schunck, 1981). However, the other two shortcomings (ambiguity, aliasing) are discussed to a lesser extent. Computer scientists and algorithm developers are working to resolve these problems so that programs can take all three effects into account.
“Adobe After Effects” is a digital motion graphics and compositing software published by Adobe Systems, used in the post-production process of film and television to create motion graphics and visual effects.
After Effects helps us to understand the fluidity of the pedestrian flow and the density waves observed in the video recordings during the rush hour at the Haram. These density waves are generated by huge pedestrian forces that propagate through the crowd via body contact.
In
We have used a multi-stage validation method. One of the most important parameters was verified, the pedestrian density distribution on the Mataf area as a function of the position
From observation of the Mataf it is well known that the area marking the beginning and the end of the Tawaf is the most congested and the most crowded with pilgrims, (see
The second step of the multi-stage verification was to compare the velocity-density diagram produced by the simulation with the well-known fundamental diagrams. According to Predtechenski and Milinski, the average walking speed depends on the walking facility and the local density, which can reach 9 persons/m2 (Predtechenski & Milinski, 1971). In
crowd. This phenomenon was clearly demonstrated in the statistical results shown in
Comparing the results of PedFlow with the results of other models in the simulation of special cases like the Jamarat bridge (see
For people working in software development and the evolution of simulation programs, the verification and validation of the model is a vital procedure to make sure that the tools correspond to reality. The validation of the simulation program assures users and decision makers that the simulation results are credible and applicable in the development of the project. Moreover, Turing and face validity tests contribute to the progressive optimization of the program. The Turing test is a successful method for comparing the real world with the simulation output: the output data obtained by the simulation is presented to people working on the same project with the same tools and knowledgeable about the system, in exactly the same format as the system data. If the experts can distinguish between the simulation and the system outputs, the discussion about the deviation can be helpful for validating the program, and their explanation of how they distinguished the two should improve the model.
The opinion of project members and model users on the development, progress, and verification of the simulation tools is very important. This method is referred to as face validation. Face validation is used to judge the behaviour of the simulation system under the given simulation conditions. From a preliminary examination of the model one can conclude that this method is useful and necessary, but not sufficient.
In this paper we have discussed verification and validation of microscopic simulation models. Different approaches and methods for deciding verification and validation of the model development process have been presented, as have been various validation techniques.
As a practical example, the Haram Mosque in Mecca and the Jamarat Bridge in Saudi Arabia were used for high-density crowd simulation: the huge number of pilgrims cramming the bridge during the pilgrimage to Mecca gave rise to serious pedestrian disasters in the nineties. Moreover, the analytical and numerical study of the qualitative behaviour of human individuals in a high-density crowd can improve traditional socio-biological investigation methods.
To obtain empirical data, different methods were used, both automatic and manual. We have analysed video recordings of the crowd movement during the Tawaf in the Mosque in Mecca during the Hajj on the 27th of November, 2009. We have evaluated unique video recordings of the 105 × 154 m Mataf area taken from the roof of the Mosque, where up to 3 million Muslims perform the Tawaf and Sa’y rituals within 24 hours.
For the validation and calibration of the simulation tools, different methods were used.
・ Comparison of the simulation result with the video recording.
・ Comparison with other models: different results (e.g., outputs) of the simulation model being validated were compared with the results of other models. For example: 1) simple cases of the simulation model were compared with well-known results of analytic models; and 2) the simulation model was compared with other simulation models that have been validated.
・ Parameter Variability - Sensitivity Analysis: applying this technique, one can determine the behaviour of the model or the simulation output using different input values.
・ A comparison with Optical Flow results was also carried out.
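The sensitivity-analysis idea in the list above can be sketched with a toy speed-density model; the linear model and all parameter values are illustrative assumptions, not PedFlow output:

```python
import numpy as np

# Toy sensitivity analysis: how does the peak pedestrian flow of a simple
# linear speed-density model react to the assumed free walking speed v0?
# q(rho) = rho * v0 * (1 - rho / rho_max) stands in for a simulation run.

RHO_MAX = 9.0  # jam density in persons/m^2 (cf. Predtechenski & Milinski)

def peak_flow(v0):
    rho = np.linspace(0.0, RHO_MAX, 1000)
    q = rho * v0 * (1.0 - rho / RHO_MAX)
    return q.max()

for v0 in (1.0, 1.2, 1.4):
    print(f"v0 = {v0:.1f} m/s -> peak flow = {peak_flow(v0):.2f} persons/(m*s)")
```

Sweeping an input parameter and recording the change in an output quantity in this way shows which inputs the model is most sensitive to, and hence which must be calibrated most carefully.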
At medium to high pedestrian densities, the techniques used in PedFlow can produce realistic crowd motion, with pedestrians moving at different speeds and under different circumstances, following believable trails and taking sensible avoidance action.
I would like to express my sincerest thanks and gratitude to Prof. Dr. G. Wunner for a critical reading of the manuscript and for his important comments and suggestions to improve it. Many thanks to Dr. H. Cartarius for his support during the writing of this work.