Swarm Intelligence – Chapter 7

The Particle Swarm

The chapter starts by revisiting concepts covered in the first half of the book and explaining why they make the particle swarm a natural solution. The area covered in the greatest depth is the adaptive culture model. Here the authors discuss the three primary concepts of the sociocognitive underpinnings (evaluate, compare, and imitate), and then examine each of these aspects in turn.

The next area of focus is the binary decision model. The authors begin by explaining what a binary decision model is, then discuss the two types of neighborhoods used in binary models: lbest (local) and gbest (global). They go into depth on each of these methods, and then explain how binary strings are evaluated. There is then discussion of the reasoned action model, which is brought in when trying to figure out how to improve cognitive fitness. From there, they develop mathematical models for determining the probability of an individual deciding yes or no. After discussing the mathematical underpinnings, they give an example algorithm for optimizing goodness.
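The probabilistic yes/no decision can be sketched in a few lines. This is a minimal illustration rather than the book's exact algorithm: it assumes the bit-update probability comes from squashing the particle's velocity through a sigmoid, and the function names are invented here.

```python
import math
import random

def sigmoid(v):
    """Squash a velocity into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def update_bit(velocity):
    """Decide yes (1) or no (0) with probability sigmoid(velocity).

    In a binary particle swarm, each bit tends toward 1 with a
    probability that grows with the particle's velocity on that bit.
    """
    return 1 if random.random() < sigmoid(velocity) else 0
```

A strongly positive velocity makes a "yes" nearly certain; a velocity of zero leaves the bit at a coin flip.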

The next area of discussion is the testing of the binary algorithm on the De Jong test suite. In this section they look at several different functions, their dimensionality, and the performance of the binary swarm on each. Some bit string examples are included to help the reader understand the work done by the binary particle swarm on the functions.
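The simplest member of the De Jong suite, usually labeled f1, is the sphere function, whose global minimum is zero at the origin. A sketch of the function and one common way a bit string might be decoded into a real value (the [-5.12, 5.12] range follows the usual De Jong convention; the helper names are mine, not the book's):

```python
def sphere(xs):
    """De Jong's f1 (sphere function): sum of squares, minimized at the origin."""
    return sum(x * x for x in xs)

def decode(bits, lo=-5.12, hi=5.12):
    """Map a bit string (list of 0/1) to a real value in [lo, hi].

    Interprets the bits as an unsigned integer and scales it linearly;
    one common encoding, not necessarily the book's exact scheme.
    """
    value = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * value / (2 ** len(bits) - 1)
```

With this decoding, an all-zero string maps to the lower bound and an all-one string to the upper bound.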

As in any field, there is some controversy regarding the evaluation of an algorithm. The discussion attributes most of this strife to David Wolpert and William Macready, who state that the performance of all algorithms is the same when averaged over all possible problems (or cost functions). Their "no free lunch" theorem shows how all algorithms can be considered equal when averaged across the problem space. The section describes the NFL result and then defends the particle swarm against the argument, stating that you can still determine which algorithm is better at finding a particular goal, even if it is not necessarily good at finding answers to questions no one would ever ask.

The next area of discussion is multimodality. In this context the authors are referring to problems that have more than one solution (or global optimum). After discussing what a multimodal problem is, they explain why these problems are difficult for genetic algorithms to handle. They examine three forms of genetic algorithms (mutation and crossover, crossover only, and mutation only) and how they perform on various problems, plotting peak fitness against the number of evaluations. They then show how the particle swarm can handle these problems better, illustrating the PS performance against the GA's performance.

“Minds as parallel constraint satisfaction networks” is the next focus of interest. The authors start by talking about Hopfield and his contributions to the field, then discuss binary and continuous Hopfield networks. They describe having set up a binary particle swarm to optimize the network structure proposed by Hutchins, and then walk through an example of how the PS optimized Hutchins’ problem.

The next section deals with particle swarms that handle continuous numbers. It begins by explaining how this is the “real” particle swarm, and sets up for the explanation. The first area discussed is the particle swarm in real-number space. Essentially, particles in a real-number space are connected to topological neighbors, and neighbors tend to cluster in the same regions of the search space. After that discussion comes some mathematical background, followed by pseudocode for particle swarm optimization over continuous numbers. The next area addressed is the set of issues associated with implementing this version of the particle swarm. Having set up the foundation, they give an example of particle swarm optimization of neural net weights. The section continues with a discussion of real-world applications of the particle swarm; in this case they are referring to PSO for “training” neural nets rather than using backpropagation.
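The pseudocode described above corresponds roughly to the following sketch of a continuous PSO. This version assumes a gbest topology and an inertia weight; the parameter values (w, c1, c2, population size, search range) are illustrative choices, not the book's settings.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal gbest particle swarm optimizer (minimization).

    Each particle is pulled toward its own best position (pbest) and
    the swarm's best position (gbest), with random weighting.
    """
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f
```

On a smooth test function like the sphere, this sketch converges close to the optimum within a few hundred iterations.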

The next section focuses on the hybrid particle swarm. The authors begin by briefly explaining what a hybrid system would consist of, and then why one might want to implement such a system. They use a system that would diagnose various abdominal diseases, and explain how some features are more easily computed using the binary PS, whereas more complex symptoms are computed using the real-valued PS. The presentation is merely hypothetical, and the hybrid system is still considered an ongoing research area.

The following section takes a look at science as a collaborative search. It discusses null hypothesis testing and confirmation bias, as well as the difference between truth and certainty. The authors then talk about the establishment of paradigms in the scientific community, and about mistakes made in human social search through problem spaces, which they attribute to the tendency of individuals to move toward self- and social confirmation of hypotheses. This process, though its premise is logically invalid, results in excellent information-processing capabilities.

The final section takes a quick look at emergent culture and immergent intelligence. The authors begin by discussing trends that develop over multiple iterations within the PS population. They then talk about polarization, and about optimal solutions either consuming lesser solutions or compromising and thus moving away from an optimum. Next they describe the emergence of cultures within the programs, which are not hard-coded and are difficult to predict, and then the process of the immergence of cognitive adaptation among the individuals. The section ends with their perspective on the importance of the emulation of cognitive positions in allowing individuals to adapt.

Swarm Intelligence – Chapter 6

Thinking is Social

The chapter begins with the story of the blind men and the elephant. The point of the story here is to show that societies are able to benefit from the sharing of individuals' partial knowledge, which results in a large body of knowledge that allows the group to develop strategies no individual could formulate. The authors then discuss the three levels of adaptation, the point of which is the development of optimal processes in a group. The adaptive culture model is the next topic of interest. First they briefly review their earlier discussion of Axelrod and his contribution to evolutionary computing, then talk about speculations that have been made regarding the development of optima through cognitive optimization. Axelrod's recent simulations are touched on before they address the next topic.

Axelrod’s culture model is the next topic of interest in the chapter. It begins by talking about how similarities between individuals can be used to spread culture. In the ACM, individuals stochastically adopt non-matching features from their neighbors. The authors then describe how simulations are run by repeated iteration until regions of the matrix contain matching patterns. The following sections detail various experiments that were conducted using the ACM.
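The interaction rule just described can be sketched as follows. This is a minimal reading of the model with invented function names; the neighborhood shape and update details in the book's experiments may differ.

```python
import random

def make_grid(rows, cols, n_features, n_traits, rng=random):
    """A matrix of individuals, each a list of feature values (traits)."""
    return [[[rng.randrange(n_traits) for _ in range(n_features)]
             for _ in range(cols)] for _ in range(rows)]

def acm_step(grid, rows, cols, n_features, rng=random):
    """One interaction of a minimal Axelrod culture model.

    Pick a random site and a random neighbor; with probability equal
    to their cultural similarity, the site copies one feature on
    which the two differ.
    """
    r, c = rng.randrange(rows), rng.randrange(cols)
    neighbors = [(r + dr, c + dc)
                 for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= r + dr < rows and 0 <= c + dc < cols]
    nr, nc = rng.choice(neighbors)
    me, other = grid[r][c], grid[nr][nc]
    similarity = sum(a == b for a, b in zip(me, other)) / n_features
    if rng.random() < similarity:
        differing = [i for i in range(n_features) if me[i] != other[i]]
        if differing:
            me[rng.choice(differing)] = other[rng.choice(differing)] if False else other[differing[rng.randrange(len(differing))]]
```

Iterating `acm_step` many times tends to produce regions of matching patterns, as the chapter describes.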

The first experiment deals with Axelrod's theory that similarity is a precondition for social interaction and the subsequent exchange of cultural features. There is discussion of the "birds of a feather flock together" idea, where self-similar individuals group, and of the related idea that people are more interested in grouping with people who share the same ideas. In this experiment the effect of similarity as a causal influence was removed. The result of the experiment was unanimity. Thus, it appears that the effect of causal similarity in the ACM is polarization.

In the second experiment they substituted a simple arbitrary function for the similarity test previously used. The rule was “if (the neighbor’s sum is larger than the targeted individual’s sum) then interact.” In this experiment the population converged on the global optimum every time, though the number “9999” was never included in the initial population.

The task of the third experiment was to find a set of five numbers, representing the features of an individual, in which the sum of the first three numbers equaled the sum of the last two. The authors discuss why this is interesting and relevant, then explain the details of the experiment. The result was that all the individuals solved the problem, and parts of the solution were distributed through definite regions of the matrix. They believe the point of this experiment is to show the spread of features throughout a culture.

The fourth experiment deals with "hard" problems, also known as NP problems. In this particular experiment, the authors looked at the traveling salesman problem. They discuss the details of setting up the TSP to work within the simulation, and then make some observations about the results they received across multiple tests. It seems that at best half of the population would find an optimal path, but across the simulations they found five different paths that all yielded the shortest distance.

Parallel constraint satisfaction was the focus of the fifth experiment. The section begins by discussing how features of the ACM can be used to represent constraint satisfaction networks, then moves to parallel constraint satisfaction networks in particular. A discussion of the advantages and disadvantages of various aspects of these networks follows. An example and the setup for the experiment come next: the authors detail how these networks were encoded for the sample, and then discuss their observations from the experiment.

The sixth experiment focuses on symbol processing. There is discussion of traditional AI and navigation through symbolic nodes. Then there is a more detailed discussion of how a network of nodes is transformed into a hierarchical tree, from which the authors examine the properties used in this experiment. There is some discussion at the end about the relevance of the experiment.

The chapter ends with a discussion of the ACM and important questions related to it. The authors then discuss the relative insignificance of the individual in the system, and finish by attempting a global comparison to human thinking and cognition.

Swarm Intelligence – Chapter 5

Humans Actual, Imagined, and Implied

This chapter begins by taking a look at the study of minds. The work of Claude Shannon is examined first, as he proposed that information could be conceptualized mathematically as a kind of inverse function of probability. The chapter goes through some of his studies dealing with randomly selecting letters from a page based on their probable conjunction with other letters. Since he worked before modern computers, the authors expand on his work with a computer model to show how the output makes more and more sense the further out you branch.
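Shannon's "inverse function of probability" idea has a standard form: the information carried by an event of probability p is -log2(p) bits, so rarer events carry more information. A one-line illustration:

```python
import math

def information(p):
    """Self-information in bits: rarer events (smaller p) carry more bits."""
    return -math.log2(p)

# A fair coin flip (p = 0.5) carries exactly 1 bit; picking one letter
# uniformly from 26 (p = 1/26) carries about 4.7 bits.
```

This is why Shannon's letter-selection experiments are informative: a letter that is highly predictable from its neighbors carries very little information.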

The next area addressed is the fall of the behaviorist paradigm. Here the authors discuss the traditional aspects of psychological study based on behavior, explain the basis of the behaviorist perspective, and look at the problems that plagued it. This is immediately followed by the cognitive revolution, the paradigm shift in which minds began to be thought of in terms of cognition rather than behavior. They also talk about the impact this had on the development of AI systems at the time. These discussions lead up to Bandura's social learning paradigm, the system that provided the foundation for social psychology, that is, looking at the learning and abilities of individuals in terms of the group they belong to.

Social psychology is the focus of the next section. The authors begin by discussing the history and development of the discipline's proponents. Initially they focus on the principles of Gestalt psychology, brought over by German researchers in the mid-twentieth century. The main point here is that the mind attempts to make sense of objects in a coherent manner regardless of their actual nature. The next discussion takes a look at Lewin's field theory, his proposed "life space," and how it was portrayed at the time; his work is considered to have set the stage for modern complex systems theory in psychology. Social influence is examined as the first prime example of social psychology. Here the authors look into how and why humans establish norms and conform, and review various studies showing how powerfully the influence of others can affect an individual. The next area covered is sociocognition, with examples of how people can share a collective understanding in which each individual knows only a fragment of the whole picture.

Pulling this all together into a more useful form, the next section begins to talk about simulating social influence. Here the authors examine what exactly the difference is between a simulated mind and a real mind. Though there is no definitive answer, they seem to put forth the idea that a simulated mind is an actual mind. They then discuss figuring out how various modes of thought are conducted in order to model them properly in simulation. The next area of discussion is the shift from traditional AI to evolutionary methods. They start by recounting the history of bad blood between the two camps, then discuss the strengths and weaknesses of both; it boils down to the fact that traditional AI is not good at handling simple tasks that evolutionary methods excel at. From here they discuss the evolution of cooperation: through the prisoner's dilemma they examine how cooperation between individuals can greatly influence the rewards each member receives. The next area of discussion is an attempted explanation of the need for coherence. From there they talk about networks in groups, looking at how groups of people are able to communicate and thus come to some kind of consensus. To do this they used constraint satisfaction networks (which were explained in chapter 2).
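The prisoner's dilemma can be made concrete with the canonical payoff matrix. The specific payoffs (5/3/1/0) are the conventional textbook choice, an assumption here, and the two strategies shown are standard examples rather than ones named in the chapter.

```python
# Canonical prisoner's dilemma payoffs, satisfying T > R > P > S.
PAYOFF = {
    ("C", "C"): (3, 3),   # mutual cooperation: reward R
    ("C", "D"): (0, 5),   # sucker's payoff S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),   # mutual defection: punishment P
}

def play(strategy_a, strategy_b, rounds=10):
    """Score an iterated prisoner's dilemma between two strategies.

    Each strategy is a function of the opponent's previous move
    (None on the first round) returning "C" or "D".
    """
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        a, b = strategy_a(last_b), strategy_b(last_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = a, b
    return score_a, score_b

tit_for_tat = lambda last: "C" if last is None else last
always_defect = lambda last: "D"
```

Two cooperators do better over repeated rounds than a defector exploiting a cooperator, which is the core of the evolution-of-cooperation argument.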

In the following section, there is discussion of culture in theory and practice. The section begins by trying to establish whether culture is a thing that actually exists or merely an abstraction. Eventually the authors decide it is something, and begin to pursue principles based on that conjecture. First they illustrate how computer simulations can represent the behavior of individuals in a social context, using the prisoner's dilemma as their example. The El Farol problem (the Irish drinking night) is then examined; this showed simulated interaction between agents that developed sophisticated negotiation techniques in order to trick other agents. It is basically a situation where you think you know what the other knows, but they know what you know, and so on. Sugarscape is discussed next, where the authors tried to grow artificial societies; the most interesting note here is the emergence of sophisticated immune systems in the agents. The next system is Tesfatsion's ACE, which models complex economic systems based on a modified version of the prisoner's dilemma. The following example is Picker's competing-norms model, which tried to show how some behaviors might remain dominant even when better behaviors are available in the environment. Latané's dynamic social impact theory is discussed next; it describes how each additional person yields less marginal impact on the system, and also shows that phenomena such as minority attitudes in populations are maintained at steady state. The evolutionary culture model established by Boyd and Richerson is then discussed; it concerns the link between cultural and biological evolution. To further this discussion the authors follow up with memetics (first discussed in chapter 1) and its relevance, and then with memetic algorithms, a good example being the one created by Burke and Smith. Advancing another step in scale, they discuss cultural algorithms.
They end the section by talking about the convergence between social scientists and computer simulations.

The chapter wraps up by taking a look at what life might be like without culture. The authors examine multiple instances of feral children found in the wild, and discuss their tendencies and inability to become civilized. The picture painted is very dismal, as the authors argue that advanced intelligence comes from culture rather than from the individual.

Swarm Intelligence – Chapter 4

Evolutionary Computation Theory and Paradigms

The chapter begins by exploring some of the history of evolutionary computation. The authors take a look at the four areas of evolutionary computation (genetic algorithms, evolutionary programming, evolution strategies, and genetic programming) in terms of people rather than technologies. First they discuss genetic algorithms and the contributions made by A. S. Fraser, then work their way to John Holland, who is the main focus. From Holland they move to his students Bagley, K. A. De Jong, and David E. Goldberg, and additionally discuss the contributions of Steve Smith and John Grefenstette (both students of De Jong). The next discussion is of evolutionary programming, examining the work of Larry J. Fogel and his colleagues; work by Don Dearholt and his students is also mentioned. Ingo Rechenberg and Hans-Paul Schwefel are presented as the pioneers of evolution strategies, and that section focuses only on their contributions. The genetic programming section highlights the work of Friedberg, Dunham, and North, which John Koza later extended. The section ends by discussing emerging trends toward the unification of the four fields.

The following section provides an overview of evolutionary computing. The authors first discuss the three primary features present in all four types, then the attributes of the evolutionary computing paradigm, including how it relates to other existing search paradigms. This discussion provides the foundation for implementation concepts, where they explore the five implementation steps found in all the systems. From this point they look at specific implementations in the four areas of evolutionary computing.

Genetic algorithms are the most practiced, and perhaps the most relevant to swarm intelligence, so they are addressed at length. The section begins with a quick primer on the various terms used with GAs, followed by an overview in which the five implementation steps are tailored to GAs; the main difference is the central importance of crossover. A simple GA problem is then examined to illustrate the key points: optimizing a value of x based on a sine function, with simple implementations of crossover and mutation. After the example the authors review GAs in detail, discussing the representation of variables, population sizes, population initialization, fitness calculation, roulette wheel selection, crossover, and mutation. Having established the basics, they look at schemata and the schema theorem. First they discuss what exactly a schema is and how to represent one and perform various operations (such as crossover) on it. Then they present the schema theorem, which predicts the number of times a specific schema will appear in the next generation of a GA, given the fitness of the population members in the schema. The section ends with the authors' thoughts on GAs as a whole.
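The operators reviewed here (roulette wheel selection, one-point crossover, bitwise mutation) can be sketched as follows. The function names and the default mutation rate are illustrative assumptions, not the book's code.

```python
import random

def roulette_select(population, fitnesses, rng=random):
    """Fitness-proportionate (roulette wheel) selection of one individual."""
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]

def one_point_crossover(a, b, rng=random):
    """Swap the tails of two equal-length bit strings at a random cut point."""
    point = rng.randrange(1, len(a))
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(bits, rate=0.01, rng=random):
    """Flip each bit independently with probability `rate`."""
    return [1 - bit if rng.random() < rate else bit for bit in bits]
```

A full GA loop repeatedly selects two parents, crosses them over, mutates the children, and inserts them into the next generation.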

Evolutionary programming is examined in the next section. The section begins with a primer on evolutionary programming, then explains how the five implementation steps are tailored for EP. Since EP is essentially a top-down approach, the authors describe it in terms of the evolution of a finite state machine. They give design-level implementation concepts for dealing with problems such as the addition and removal of states, and discuss the five types of mutation that can occur. Another type of problem that EP is used for is function optimization, and from this they illustrate how one might use it to solve the prisoner's dilemma. As in the previous section, this one ends with their thoughts on EP as a whole.

Evolution strategies, an expansion of evolutionary programming, are examined in the following section. Once again, the section begins with an explanation of what exactly evolution strategies are. The authors then explain mutation in terms of an entire population, and from the biological explanation move on to the foundations of how this is implemented. The next issue addressed is recombination. After establishing how alterations are made, they discuss how selection is performed. The section ends with an explanation of the implementation procedure and a brief summary of ES.
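Mutation and selection in an evolution strategy can be sketched as a (1, lambda) loop: each generation, lambda offspring are created by adding Gaussian noise to the parent, and the best offspring replaces it. Holding the step size sigma fixed is a simplification for illustration; real ES adapt it over time.

```python
import random

def es_step(parent, sigma, f, lam=10, rng=random):
    """One (1, lambda) evolution strategy generation (minimizing f).

    Gaussian mutation creates `lam` offspring from the parent; the
    best offspring becomes the next parent. sigma is held fixed here.
    """
    offspring = [[x + rng.gauss(0.0, sigma) for x in parent]
                 for _ in range(lam)]
    return min(offspring, key=f)
```

Iterating this step drives the parent downhill on a smooth objective, even though no gradient is ever computed.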

The final section examines genetic programming, which is often thought of as a subset of genetic algorithms. The section begins by explaining that genetic programming focuses on tree structures. From this the authors explain the five preparatory steps of implementation; after those are completed, the program continues to the "real" implementation steps. They talk about the different methods used to make the tree "grow." The section ends with a summary of GP and the problems one is most likely to encounter when trying to implement it.

Swarm Intelligence – Chapter 3

On Our Nonexistence as Entities: The Social Organism

This chapter begins by taking a look at various perspectives on evolution. First the authors address the conflict between creationism and evolution in schools, and then discuss how evolution on Earth may have taken place. They then talk about life on different scales. The first scale examined is the macro scale. This perspective, called Gaia, focuses on the Earth as a life form itself; they talk about how life forms on a planet primarily serve to keep the planet stable in some way. In a simulation called Daisyworld, they examine how different colors of daisies can help regulate the surface temperature of a planet.

They then examine differential selection, by which it is believed that evolution selects against animals that reproduce too often, preventing overpopulation (and thus the elimination of a food source). Next they discuss attempts to understand behavior that seems to contradict the idea of self-preservation: the process of inclusive fitness, whereby individuals try to protect others with similar genetics. The underlying point throughout the section is that scientists need to stop looking for individual selection and focus on group selection.

The smallest level of interest lies within cells. The authors look at organelles and put forth the suggestion that the only purpose of humans is to preserve the lives of, and facilitate, these "tiny masters." This discussion appears to be included mainly to show the importance of finding the right scope: the book puts forth the belief that you should not look at individuals, but rather at societies, or "superorganisms."

Self-organization and flocking behavior are the focus of the following section. Here the authors examine swarm (flocking) behavior as a certain form of optimization. They talk about how it becomes easier for individuals to survive in a group rather than alone, and about the social supports established to facilitate basic functions such as raising young and finding food. The discussion begins with bacteria, examines insects, and finishes with animals. The first discussion of potential optimization is with ants, where they look at the traveling salesman problem. With animals, they look at the impact individual agents have on the group, and try to establish some fundamental rules for flocking behavior.
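The fundamental flocking rules usually cited are Reynolds' three: separation, alignment, and cohesion. A minimal 2-D sketch follows; the weights and the tuple representation are illustrative assumptions, and real implementations typically use a different perception radius for each rule.

```python
def flocking_velocity(me, neighbors, w_sep=1.0, w_ali=0.5, w_coh=0.5):
    """Combine Reynolds' three flocking rules for one 2-D agent.

    `me` and each neighbor are (position, velocity) pairs of 2-D tuples.
    Returns a steering vector: separation away from the neighbors'
    center, alignment with their average velocity, cohesion toward
    their center.
    """
    (px, py), (vx, vy) = me
    n = len(neighbors)
    if n == 0:
        return (0.0, 0.0)
    cx = sum(p[0] for p, _ in neighbors) / n   # neighbors' centroid
    cy = sum(p[1] for p, _ in neighbors) / n
    ax = sum(v[0] for _, v in neighbors) / n   # average velocity
    ay = sum(v[1] for _, v in neighbors) / n
    sep = (px - cx, py - cy)   # steer away from crowding
    ali = (ax - vx, ay - vy)   # match the flock's heading
    coh = (cx - px, cy - py)   # steer toward the flock's center
    return (w_sep * sep[0] + w_ali * ali[0] + w_coh * coh[0],
            w_sep * sep[1] + w_ali * ali[1] + w_coh * coh[1])
```

An agent sitting at its neighbors' centroid with a matching velocity receives no steering, which is what makes stable flocks possible from purely local rules.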

The culmination of the previous section provides the baseline for the next: robot societies. Here the authors first talk about the old paradigm established at MIT, "Good Old-Fashioned Artificial Intelligence." GOFAI is a symbol-processing system, whereby the AI is intended to understand its surroundings and then act accordingly. The new style of AI is based on subsumption architecture; these robots' intelligences are built from the bottom up, following a simple set of rules that allows them to appear more purposeful than they really are. From this the authors discuss the nature of the mind and how it is separated from the brain. They also consider whether it is worthwhile to view the mind as a society of agents (which they deem it is not), and talk about how swarms are more nearly a sum of their parts. There is then discussion of using small robots to complete various tasks, from cleaning a television screen to taking readings from a volcano. Virtual robots are considered in order to simulate actions that defy the laws of physics. Kerstin Dautenhahn's research is examined regarding social intelligence and social robotics.

The following section talks about another aspect of AI: shallow understanding. There is discussion of deep processing, which is performed through operations on symbolic representations within the computer's native mode. Things such as processing complex databases and proving theorems seem "deep." However, programs that simply talk with the user and don't seem to do anything important are called "shallow," though they are ironically more difficult to code. Using ELIZA as an example, the authors point out that a machine can make convincing chatter from a dataset of pre-formulated responses keyed on particular words, without actually understanding any of the content it receives.
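The kind of shallow keyword matching described can be sketched in a few lines. The keywords and canned responses below are invented examples, not ELIZA's actual script.

```python
import random

# Hypothetical keyword table: nothing here is understood, only matched.
RESPONSES = {
    "mother": ["Tell me more about your family."],
    "sad": ["Why do you feel sad?"],
}
DEFAULT = ["Please go on.", "I see."]

def reply(utterance, rng=random):
    """Shallow chatter in the style of ELIZA.

    Match the first known keyword in the input and emit a canned
    response; fall back to a content-free prompt otherwise.
    """
    words = utterance.lower().split()
    for keyword, responses in RESPONSES.items():
        if keyword in words:
            return rng.choice(responses)
    return rng.choice(DEFAULT)
```

The illusion of conversation comes entirely from the user reading meaning into the canned text; the program never represents what was said.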

The final section looks into what agency means. The authors examine various people's research, such as that of Stan Franklin and Art Graesser, to determine what exactly an agent is. Essentially they determine that it is an entity that acts according to its environment, and that people often anthropomorphize the actions of agents into more meaningful things than they really are. Then, coming full circle, they return to evolutionary concepts, suggesting that speech may have evolved from primate grooming behaviors.