This research presents a comparative analytical study of two computational intelligence paradigms, tightly related to the modeling of Neural and Non-Neural systems. The first paradigm is concerned with psycho-learning behavioral results obtained from modeling three animals' neural learning: Pavlov's and Thorndike's experimental work, together with a third model concerned with the optimal solution of a reconstruction problem reached by a mouse moving inside a Figure-8 maze. Conversely, the second paradigm originates from activities observed in a Non-Neural bio-inspired model, namely the Ant Colony System (ACS); these results were obtained as the ACS attained an optimal solution of the Traveling Salesman Problem (TSP). Interestingly, the effect of increasing the number of agents (either neurons or ants) on learning performance is shown to be similar for both introduced systems. Finally, the performance of both intelligent learning paradigms is shown to agree with the learning-convergence process of the least mean square (LMS) error algorithm, as applied to the training of Artificial Neural Network (ANN) models. Accordingly, ANN modeling is a relevant and realistic tool for investigating observations and analyzing performance of both selected computational intelligence (biological behavioral learning) systems.

This research work introduces a systematic investigational analysis of two naturally diverse adaptive learning paradigms. These paradigms consider two typical behavioral learning algorithms of non-human creatures, biologically classified as Neural (animals) and Non-Neural (ant colonies) systems' modeling [

The first paradigm is associated with adaptive neural behavioral learning inside three animals' brains: a dog, a cat, and a mouse. The second belongs to the analysis of bio-inspired behavioral learning associated with ant colony optimization, an observed swarm-intelligence phenomenon aimed at obtaining an optimal solution of the Traveling Salesman Problem (TSP), based on realistic simulation of the foraging behavior observed in a real Ant Colony System. Analysis and evaluation of this interdisciplinary learning issue are carried out using a neural networks conceptual approach. Herein, this paper presents analytical details of both intelligent behavioral approaches, considered as two folds. Firstly, on one hand, autonomous inferences and perceptions are performed in nature by non-human brains (dogs, cats, and mice). Secondly, on the other hand, the second paradigm is inspired by ant colony optimization, which originates from the intelligent foraging behavior observed in real ant colonies in their natural environment; this behavior is exploited in artificial ant colonies to search for approximate solutions to optimization problems, namely the TSP.

More specifically, the first behavioral algorithmic paradigm considers three non-human models. All three neural creatures' models are inspired by behavioral psycho-learning performance observed in the natural real world. Two of the introduced models are based on Pavlov's and Thorndike's experimental work. In some detail, Pavlov's dog learns how to associate two input sensory stimuli (audible and visual signals), while Thorndike's cat learns how to get out of a cage in order to reach food placed outside it. Both behavioral learning models improve their performance by trying to minimize the response-time period. The third model is concerned with the behavioral learning of a mouse performing trials to get out from inside a Figure-8 maze.

The second algorithmic paradigm is concerned with searching for the optimal solution of the TSP using a non-neural system, namely the Ant Colony System (ACS). That model simulates a swarm (ant) intelligence system used for solving the TSP optimally. Briefly, the ACS algorithm is inspired by the foraging behavior of ants, specifically the pheromone-mediated communication between ants regarding a good path between the colony and a food source in the environment; this mechanism is called stigmergy. Interestingly, that mechanism is performed as the ants bring food from different food sources to store (in cycles) at the nest. Interestingly, all of the models presented herein are shown to behave analogously, in agreement with the least mean square (LMS) algorithm previously suggested for ANN learning.
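As a concrete illustration of the ACS idea described above, the following is a minimal runnable sketch of an ant-colony search for a small symmetric TSP instance. It is our illustrative reconstruction, not the paper's implementation: parameter names (`beta`, `rho`, `q0`) and the specific update rules follow common ACO conventions.

```python
# Minimal Ant Colony System sketch for a small symmetric TSP.
# Illustrative only: parameters and update rules follow common ACO
# conventions, not the exact formulation used in the paper.
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def acs_tsp(coords, n_ants=10, n_iters=50, beta=2.0, rho=0.1, q0=0.9, seed=0):
    rng = random.Random(seed)
    n = len(coords)
    dist = [[math.dist(a, b) for b in coords] for a in coords]
    tau = [[1.0] * n for _ in range(n)]               # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                r = tour[-1]
                scores = {u: tau[r][u] * (1.0 / dist[r][u]) ** beta for u in unvisited}
                if rng.random() < q0:                  # exploitation: greedy choice
                    nxt = max(scores, key=scores.get)
                else:                                  # biased exploration
                    total = sum(scores.values())
                    pick, acc, nxt = rng.random() * total, 0.0, None
                    for u, s in scores.items():
                        acc += s
                        if acc >= pick:
                            nxt = u
                            break
                    if nxt is None:                    # floating-point safety net
                        nxt = u
                tour.append(nxt)
                unvisited.remove(nxt)
            L = tour_length(tour, dist)
            if L < best_len:
                best_tour, best_len = tour[:], L
        # global pheromone update on the best tour so far (stigmergy)
        for i in range(n):
            a, b = best_tour[i], best_tour[(i + 1) % n]
            tau[a][b] = tau[b][a] = (1 - rho) * tau[a][b] + rho / best_len
    return best_tour, best_len
```

On a small instance the pheromone reinforcement quickly concentrates the ants on short tours, which is the cooperative effect the paper analyzes.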

Principles of biological information processing concerned with learning convergence for both bio-systems have been published in [

Briefly, analysis of the results obtained by this recent research work leads to the discovery of some interesting analogous relations between both behavioral learning paradigms. These concern the observed resulting errors, time responses, learning-rate values, and gain-factor values versus the number of trials, training dataset vectors, intercommunication among ants, and the number of neurons as basic processing elements [

The rest of this paper is organized as follows. In the next section, a simple interactive learning model is presented, along with a generalized ANN block diagram simulating the learning process. Thorndike's, Pavlov's, and the mouse's behavioral learning are revised briefly in the third section. The fourth section is dedicated to illustrating the learning algorithm of the ACS.

The simulation results obtained, compared with the experimental results, are given in the fifth section. Finally, in the sixth and last section, some conclusions and valuable discussions are introduced.

Referring to

The presented model given in

Referring to above

where

Moreover, the following four equations are deduced:

where X is the input vector, W is the weight vector, φ is the activation function, and Y is the output.
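The four equations themselves did not survive extraction. A plausible reconstruction of the standard single-neuron form, consistent with the symbols X, W, φ, and Y defined here and with the learning rate η used below (the desired output d and the iteration index n are assumed notation), is:

```latex
V(n) = \sum_{j} W_{j}(n)\,X_{j}(n) = W^{T}(n)\,X(n)
\qquad
Y(n) = \varphi\big(V(n)\big)
```

```latex
e(n) = d(n) - Y(n)
\qquad
W(n+1) = W(n) + \eta\, e(n)\, X(n)
```

The last equation is the LMS-style weight update to which the paper repeatedly refers.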

where η is the learning-rate value during the learning process for both learning paradigms. In this case of supervised learning, the instructor shapes the child's behavior by positive/negative reinforcement. Also, the teacher presents the information, and the students then demonstrate that they understand the material; at the end of this learning paradigm, assessment of the students' achievement is obtained primarily through testing results. However, for the unsupervised paradigm, the dynamical change of the weight-vector value is given by:

Noting that
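A minimal runnable sketch of the LMS-style weight update W ← W + ηeX discussed above, on synthetic supervised data. The true weights, η, and epoch count are arbitrary illustrative choices, not values from the paper.

```python
# LMS (least mean square) weight-update sketch: W <- W + eta * e * X.
# Illustrative only; target weights, eta, and epochs are arbitrary choices.
def lms_train(samples, eta=0.05, epochs=300):
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, d in samples:
            y = w[0] * x[0] + w[1] * x[1]          # linear output (phi = identity)
            e = d - y                               # instantaneous error
            w = [w[i] + eta * e * x[i] for i in range(2)]
    return w

# synthetic supervised data generated by "true" weights [2.0, -1.0]
data = [((x1, x2), 2.0 * x1 - 1.0 * x2)
        for x1 in (0.0, 0.5, 1.0) for x2 in (0.0, 0.5, 1.0)]
w = lms_train(data)
```

Because the data are noise-free and the step size is small, the weights converge to the generating values, which is the "learning convergence" behavior the paper attributes to all four models.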

Pavlov's psycho-experimental work is known for classical conditioning. It is characterized by the following two aspects: a spontaneous reaction occurs automatically to a particular stimulus, and altering the "natural" relationship between a stimulus and a reaction response was viewed as a major breakthrough in the study of behavior [

where α and β are arbitrary positive constants fulfilling a curve fit to a set of points, as shown by the graphical relation illustrated in
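The fitted relation itself is missing here. Given the paper's later remark that Pavlov's curves are characterized by hyperbolic decay, a plausible assumed form for the response latency T after trial number n is:

```latex
T(n) = \beta + \frac{\alpha}{n}
```

so that T decreases hyperbolically toward the asymptote β as the number of trials grows; this form is our assumption, with α and β the positive fitting constants named above.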

Referring to the behaviorism learning theory presented in [

・ Present the information to be learned in small behaviorally defined steps.

・ Give rapid feedback to pupils regarding the accuracy of their learning. (Learning being indicated by overt pupil responses).

・ Allow pupils to learn at their own pace.

Furthermore, building on these principles he proposed an alternative teaching technique called programmed learning/instruction, and also a teaching machine that could present programmed material. Initially, the cat's trials result in random outputs. Over sequential trials, the errors are observed to become minimized as the number of training (learning) cycles increases. Referring to

number of trials. Furthermore, referring to the original Thorndike experimental results given in

In general, the principle of the adaptive learning process (observed during creatures' interaction with the environment) is illustrated originally in [

Referring to

Referring to [, we want to estimate the value of x using the Bayes rule for conditional probability:

Assuming independent Poisson spike statistics, the final formula reads:

where k is a normalization constant, P(x) is the prior probability, and f_{i}(x) is the measured tuning function, i.e. the average firing rate of neuron i for each variable value x. The most probable value of x can thus be obtained by finding the x that maximizes
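The formula itself did not survive extraction. The standard form of this reconstruction rule under independent Poisson spike statistics, consistent with the symbols defined above (with n_i the spike count of neuron i in a time window of length τ, assumed notation), is:

```latex
P(x \mid n_1,\dots,n_N) \;=\; k\,P(x)\left(\prod_{i=1}^{N} f_i(x)^{\,n_i}\right)\exp\!\left(-\tau\sum_{i=1}^{N} f_i(x)\right)
```

Maximizing this posterior over x within each window yields the most probable value, as stated in the text.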

By sliding the time window forward, the entire time course of x can be reconstructed from the time-varying activity of the neural population. This illustrates well the results for solving the reconstruction (pattern-recognition) problem solved by a mouse in

1) Referring to [

2) The hippocampus is said to be involved in “navigation” and “memory” as if these were distinct functions [

3) Recent studies have reported the existence of hippocampal “time cells,” neurons that fire at particular moments during periods when behavior and location are relatively constant as introduced at [

According to the following

Noting that the value of the mean error converges (as the number of cells increases) to some limit, known as the Cramér-Rao bound. That limiting bound is based on Fisher's information, given as the tabulated results above and derived from [

Furthermore, it is noticed that the algorithmic learning-performance curve referred to

Referring to

| No. of neuron cells | 10 | 14 | 18 | 22 | 26 | 30 |
|---|---|---|---|---|---|---|
| Mean error (cm) | 9 | 6.6 | 5.4 | 5 | 4.5 | 4 |
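As a quick check on the tabulated values above, one can fit a power law (mean error ≈ c·N^p) by least squares in log-log space. This analysis is ours, added for illustration; it is not performed in the paper.

```python
# Fit a power law  error ~= c * N**p  to the tabulated mean errors
# via least squares in log-log space (illustrative analysis of the table).
import math

cells  = [10, 14, 18, 22, 26, 30]
errors = [9.0, 6.6, 5.4, 5.0, 4.5, 4.0]

xs = [math.log(n) for n in cells]
ys = [math.log(e) for e in errors]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
p = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
c = math.exp(my - p * mx)   # error ~= c * N**p
```

The fitted exponent comes out near -0.7, somewhat steeper than the p = -1/2 of a pure 1/√N (Cramér-Rao-style) scaling, consistent with the text's observation that the error converges toward a limiting bound.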

path that reconnects a broken line after the sudden appearance of an unexpected obstacle has interrupted the initial path (

The paradigm consists of two dominant sub-fields: 1) Ant Colony Optimization, which investigates probabilistic algorithms inspired by the foraging behavior of ants [

Referring to two more recent research works [

the ACS optimization process versus the mouse's reconstruction problem. Finally, the relation between the cooperative process in the ACS and activity in the hippocampus of the mouse brain is illustrated well in two recently published works [

Cooperative learning by the Ant Colony System for solving the TSP, referring to

Referring to

In other words, under different levels of cooperation (communication among ants), the optimum solution is reached after a CPU time τ placed somewhere between the above two limits, 300 - 650 ms. Referring to [

In a natural learning environment, the signal-to-noise ratio (S/N) is observed to be directly proportional to the learning-rate parameter in self-organized ANN models. That means a less noisy (clearer) learning environment results in better learning performance, as given in more detail in [

where τ(r,u) is the amount of pheromone trail on edge (r,u), η(r,u) is a heuristic function, chosen here as the inverse of the distance between cities r and u, β is a parameter which weighs the relative importance of the pheromone trail versus closeness, q is a value chosen randomly with uniform probability in [0, 1], q_{0} (0 ≤ q_{0} ≤ 1) is a parameter, M_{k} is the memory storing ant k's activities, and S is a random variable selected according to some probability distribution [
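The transition rule itself was lost in extraction. The standard ACS pseudo-random-proportional rule, which matches the symbols defined above term by term (J_k(r), the set of cities not yet visited by ant k according to its memory M_k, is assumed notation), reads:

```latex
s =
\begin{cases}
\arg\max\limits_{u \in J_k(r)} \left\{ \tau(r,u)\,[\eta(r,u)]^{\beta} \right\}, & \text{if } q \le q_0 \quad \text{(exploitation)} \\[4pt]
S, & \text{otherwise} \quad \text{(biased exploration)}
\end{cases}
```

With probability q_0 the ant greedily exploits the best pheromone/distance edge; otherwise it explores by sampling S from the pheromone-biased distribution.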

where α is an amplification factor representing the asymptotic value of the maximum average speed of reaching optimized solutions, and λ is a gain factor changing in accordance with the communication between ants. By this mathematical formulation of the model's normalized behavior, it is shown that changing the communication level (represented by λ) changes the speed of reaching the optimum solution. More precisely, the slope (gain factor) of the suggested sigmoid function is a direct measure of the intercommunication level among ants in the ACS; in other words, the slope λ is directly proportional to the pheromone-trail-mediated communication among the agents of the ACS. Consequently, the ACS global performance becomes nearly parallel (slope λ = 0) to the X-axis (number of ants) regardless of any increase in the number of ants comprising the tested colony; that is the case when no intercommunication between ants exists.
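A small numerical sketch of the normalized sigmoid model described above. The exact functional form speed(n) = α / (1 + e^(−λn)) is an assumption inferred from the text, with α the asymptotic speed and λ the communication-dependent gain.

```python
# Sketch of the normalized speed-vs-colony-size sigmoid described above.
# The functional form and parameter placement are assumptions from the text.
import math

def avg_speed(n_ants, alpha=1.0, lam=0.5):
    return alpha / (1.0 + math.exp(-lam * n_ants))

flat   = [avg_speed(n, lam=0.0) for n in range(1, 8)]   # no intercommunication
rising = [avg_speed(n, lam=0.5) for n in range(1, 8)]   # pheromone-coupled ants
```

With λ = 0 (no intercommunication) the curve is flat, i.e. parallel to the number-of-ants axis as the text states, while λ > 0 gives a rising curve that saturates at α.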

In the given

where λ_{i} represents one of the gain factors (slopes) of the sigmoid function.

By referring to

At the

According to the above animal learning experiments (dogs, cats, and mice), and their analysis and evaluation by ANN modeling, all of them agree well with the ACS optimization process. Also, the performance of both (ants and animals) is similar in that latency time is minimized by increasing the number of trials. Referring to

Pavlov's results are commonly characterized by their hyperbolic decay; both also obey the generalized LMS algorithm for error minimization by learning convergence.

In this context, the algorithm agrees with the behavior of the brainier (genetically modified) mouse, as illustrated in [

In some detail, artificial neural network models perform computation either on an analogue-signaling basis or on a pulsed-spike decoding criterion; both lead to learning convergence following the LMS error algorithm. It is noted that the reconstruction method following the Bayesian rule is bounded by the Cramér-Rao limit. This limit is analogous to the minimum response time in Pavlov's experiment, and in Thorndike's work as well. Similarly, for the ACS, the optimization processes follow the LMS error algorithm when solving the TSP. Additionally, the adaptation equations for all three systems run in agreement with each other's dynamic behavior, and the learning

algorithms for the presented four models are close to each other, with similar iterative steps (either explicit or implicit). Finally, it is worth noting that the rate of increase of salivation drops is analogous to the rate of reaching the optimum average speed in the ACS optimization process. Similarly, this rate is also analogous to the speed of the cat getting out of the cage in Thorndike's experiment. It is noted that the increase in the number of artificial ants is analogous to the number of trials in Pavlov's work.

Mustafa, H.M.H., Tourkia, F.B. and Ramadan, R.M. (2016) On Analysis and Evaluation of Comparative Performance for Selected Behavioral Neural Learning Models versus One Bio-Inspired Non-Neural Clever Model (Neural Networks Approach). Open Access Library Journal, 3: e2933. http://dx.doi.org/10.4236/oalib.1102933