Swarm Intelligence – Chapter 2

Symbols, Connections, and Optimization by Trial and Error

This chapter begins by looking at symbols in trees and networks. The first of the early philosophies of intelligence covered is symbol processing, which was also the first approach used for artificial intelligence. This methodology relies on symbols being clearly categorized, and the authors also discuss the grounding problem that comes with tree structures. The second idea they look into is fuzzy logic, in which connections are not simply true or false but instead hold to some degree. Another method discussed is the constraint-satisfaction model.
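As a minimal sketch of the fuzzy-logic idea, here is a membership function for "tall" where truth comes in degrees; the 160/190 cm thresholds are made up purely for illustration:

```python
def tall_membership(height_cm):
    """Degree (0.0 to 1.0) to which a height counts as 'tall'.
    The 160 and 190 cm cutoffs are arbitrary, chosen for illustration."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / (190 - 160)

# Rather than a crisp true/false, membership comes in degrees.
print(tall_membership(175))  # 0.5
```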

Commonly in traditional symbol processing, symbols are arranged in tree structures, where each step leads to another decision. However, since fuzzy logic does not rely on clear-cut decisions, the resulting structure may look erratic, strong in some places and weak in others. Trees cannot depict feedback, so network matrices are used to depict feedback relationships. There is further discussion about representing networks using matrices (including mapping trees onto matrices), as well as the graph patterns that give rise to feedback. Paul Smolensky provides an equation for finding the optimal state of such networks, framed as maximizing the network's harmony, and there is a related discussion of a method of optimization formulated by John Hopfield.
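As a rough sketch of the Hopfield-style network optimization mentioned above, here is a tiny energy calculation over a symmetric weight matrix, with lower energy corresponding to higher harmony; the weights and states are invented for illustration and are not from the book:

```python
import numpy as np

# Symmetric weight matrix for a 3-node network (values are illustrative).
W = np.array([[ 0.0,  1.0, -0.5],
              [ 1.0,  0.0,  0.3],
              [-0.5,  0.3,  0.0]])

def energy(state, weights):
    """Hopfield-style energy: lower energy ~ higher harmony."""
    return -0.5 * state @ weights @ state

def settle(state, weights, steps=10):
    """Asynchronously set each unit to the value that lowers energy."""
    s = state.copy()
    for _ in range(steps):
        for i in range(len(s)):
            s[i] = 1 if weights[i] @ s > 0 else -1
    return s

s0 = np.array([1, -1, 1])
s1 = settle(s0, W)
print(energy(s0, W), "->", energy(s1, W))  # energy drops as the network settles
```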

Next comes the discussion of discovering patterns of connections among elements, i.e., learning. The authors begin with what it means for two things to be correlated and how that is implemented using correlation matrices. The two types of organization they cover are symmetrical and asymmetrical connections. They then show how complex logical interactions can be combined to achieve different results, focusing on feedforward networks and using them to implement logical propositions. Finally, interpretation is discussed; the sigmoid function is used to squash values into a manageable range.
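Here is a small sketch of the feedforward-plus-sigmoid idea: a single unit whose hand-picked weights (my own choice, not the book's) approximate a logical AND:

```python
import math

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def and_unit(a, b):
    """Feedforward unit: weighted sum of two binary inputs, then sigmoid.
    Weights and bias are chosen so the output is high only when both inputs are 1."""
    return sigmoid(10 * a + 10 * b - 15)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(and_unit(a, b), 3))
```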

Neural networks are then discussed. These networks are used to support a theory of cognition referred to as connectionism, which assumes that cognitive elements exist as patterns of activation distributed across a network of connections. This model more closely resembles human thinking, since it does not require elements to be cleanly separated into categories.

The next section takes a look at problem solving and optimization. The authors begin by establishing that the characteristics of a problem can be used to assign a degree of goodness to any candidate answer; as an example, they look at methods for selecting a solution that satisfies an algebra equation. They then describe the three spaces of optimization. The first is parameter space, which contains all legal values of the problem's elements. Function space contains the results of applying the function to those parameters. Fitness space contains the degree of success of each pattern of parameters, measured as a degree of goodness or badness. They then discuss evaluating solutions by establishing a fitness landscape; the idea is to reach the highest point on the landscape (which can be multidimensional).
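As a toy illustration of the three spaces, the sketch below scores candidate values of x against the made-up equation x^2 + x = 12, with fitness defined as the negative of the error:

```python
def fitness(x):
    """Fitness = how close x comes to satisfying x**2 + x == 12.
    Zero error is the peak of this one-dimensional landscape."""
    error = abs(x**2 + x - 12)
    return -error  # higher (closer to 0) is better

# A few candidate points in parameter space.
candidates = [0.0, 1.5, 3.0, 4.0]
for x in candidates:
    print(x, fitness(x))

print(max(candidates, key=fitness))  # 3.0 solves the equation exactly
```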

High-dimensional cognitive space and word meanings are examined in the next section, which begins with computer algorithms used for representing word meaning. The first approach is the semantic differential (Osgood, Suci, and Tannenbaum, 1957). They gave people words along with extensive questionnaires about their feelings toward each word, and linked words into groups by a "halo effect," which yields the categories evaluation, potency, and activity. Another group (Colorado, 1990) created a database of word relations based on Usenet samplings in order to remove tester bias. They organized word associations into a matrix so they could establish further relationships, such as Euclidean distance. The authors note how closely this resembles the way humans process word definitions: by understanding context rather than consulting a dictionary.
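A minimal sketch of the word-association-matrix idea, using tiny made-up co-occurrence counts (not real Usenet data) compared by Euclidean distance:

```python
import math

# Made-up co-occurrence counts: each word is a row of counts
# against the same set of context words.
vectors = {
    "dog": [10, 2, 7],
    "cat": [9, 3, 6],
    "car": [1, 12, 0],
}

def euclidean(u, v):
    """Straight-line distance between two word vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

print(euclidean(vectors["dog"], vectors["cat"]))  # small: similar contexts
print(euclidean(vectors["dog"], vectors["car"]))  # large: different contexts
```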

NK landscapes and the factors of complexity are discussed in the following section, which deals with interdependent variables and problem space. The foundations of complexity lie in N, the size of the problem, and K, the degree of interconnectedness among the variables (Stuart Kauffman). Increasing N results in combinatorial explosion, and increasing K results in epistasis. There are extensive examples using bit strings to illustrate problems of varying complexity.
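Here is a rough sketch of an NK-style fitness function, where each of the N bits contributes a value that depends on itself and its K neighbors; the random contribution tables make this illustrative rather than Kauffman's exact construction:

```python
import random

def make_nk_landscape(N, K, seed=0):
    """Build a fitness function where each bit's contribution depends on
    itself plus its K right-hand neighbors (wrapping around the string)."""
    rng = random.Random(seed)
    tables = [{} for _ in range(N)]  # lazily filled random lookup tables

    def fitness(bits):
        total = 0.0
        for i in range(N):
            # Neighborhood pattern for bit i: itself plus K neighbors.
            pattern = tuple(bits[(i + j) % N] for j in range(K + 1))
            if pattern not in tables[i]:
                tables[i][pattern] = rng.random()
            total += tables[i][pattern]
        return total / N

    return fitness

f = make_nk_landscape(N=8, K=2)
print(f([0, 1, 1, 0, 1, 0, 0, 1]))
print(f([1, 1, 1, 1, 1, 1, 1, 1]))
```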

Combinatorial optimization is the focal point of the next section. The idea here is to either minimize or maximize a result. Simple permutations are discussed first, followed by breadth-first and depth-first search of simple permutation diagrams (trees). Heuristics are used as shortcuts to reduce the search space; the whole idea boils down to making educated guesses about where the right answer is most likely to be found.
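The sketch below contrasts exhaustive search over permutations with a greedy nearest-neighbor heuristic, using a made-up distance matrix for a four-city tour:

```python
from itertools import permutations

# Made-up symmetric distances between four cities (0..3).
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]

def tour_length(order):
    """Total length of a closed tour visiting cities in the given order."""
    return sum(D[order[i]][order[(i + 1) % len(order)]] for i in range(len(order)))

# Exhaustive search over all permutations: guaranteed best, but it explodes with size.
best = min(permutations(range(4)), key=tour_length)
print(best, tour_length(best))

# Greedy heuristic: always hop to the nearest unvisited city.
tour, unvisited = [0], set(range(1, 4))
while unvisited:
    nxt = min(unvisited, key=lambda c: D[tour[-1]][c])
    tour.append(nxt)
    unvisited.remove(nxt)
print(tour, tour_length(tour))
```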

The next section deals with binary optimization. The authors first discuss how various things can be encoded in binary so that binary optimization methods can be applied. They then note that the binary search space doubles with every additional bit, show how binary strings can be represented in various dimensions, and explain the meaning of Hamming distance. Since binary problems can become intractable, it becomes necessary to determine which bits matter most in order to narrow the search space. They then discuss various searches such as random search, greedy search, hill climbing, and simulated annealing, along with implementation concerns such as binary versus Gray coding and the choice of step size and granularity.
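A minimal hill-climbing sketch over bit strings, where neighbors are one Hamming step apart and the fitness function (just counting ones) is a stand-in for any real objective:

```python
def hamming(a, b):
    """Number of bit positions where two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def fitness(bits):
    """Placeholder objective: just count the ones."""
    return sum(bits)

def hill_climb(bits):
    """Repeatedly flip a single bit whenever doing so improves fitness."""
    bits = list(bits)
    improved = True
    while improved:
        improved = False
        for i in range(len(bits)):
            neighbor = bits[:]      # one Hamming step away
            neighbor[i] ^= 1
            if fitness(neighbor) > fitness(bits):
                bits, improved = neighbor, True
    return bits

start = [0, 1, 0, 0, 1, 0]
end = hill_climb(start)
print(end, hamming(start, end))
```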

The final section is a brief overview of optimization with real numbers. These problems are essentially similar to combinatorial and binary optimization, except that distance (including step size) is no longer measured as Hamming distance; it is typically measured as Euclidean distance in n-dimensional space.
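A quick sketch of measuring a step in real-valued parameter space with Euclidean distance:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two points in n-dimensional space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

x = [1.0, 2.0, 3.0]
step = [0.1, -0.2, 0.05]                  # a candidate move in parameter space
x_new = [a + b for a, b in zip(x, step)]
print(euclidean(x, x_new))                # the step size is its Euclidean length
```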

Swarm Intelligence – Chapter 1

Models and Concepts of Life and Intelligence

This chapter begins with a section that examines theories regarding the mechanics of life and thought. The authors start with how people have historically defined what is alive and what is not, and how people have always considered themselves to be both made of living matter and continuous with inanimate matter. They then try to establish a working definition of what is required for an entity to be alive, along with our reluctance to accept things we have created as being alive. Ultimately, what they allude to is adaptation.

Next, they examine what it really means to be random. Much of the foundation of self-organized systems relies on stochastic adaptation, so they try to determine whether anything is ever really "random." They go through multiple examples of events we consider random, such as computer random number generators; events like these, which we know are deterministic, they label "quasirandom." More complex events, where we cannot observe all of the variables that produce an outcome, are treated as random. Ultimately, they decide that "random" only means "unexpected outcome," and that nothing truly happens without cause.

The following section examines what Gregory Bateson called the "two great stochastic systems": evolution and mind. The section works through the interconnections between the two, particularly an attempt to explain thought in evolutionary terms via so-called "memes," which act like immaterial genes and behave in a similar manner. They do draw a distinction between the two systems, stating that evolution removes less-fit members from the population, while the mind adapts by changing the states of persisting members.

The Game of Life is then examined, as it illustrates a simple form of emergence. The Game of Life is a "game" set up on a grid; each cell can be either "alive" or "dead," and a set of rules dictates its behavior based on the cells around it. The authors then tackle the slippery issue of what exactly emergence means, discussing how complex behavior "emerges" from a collection of relatively simple rules. Emergence is generally considered a characteristic of complex or dynamic systems.
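Here is a small sketch of one Game of Life update using the standard rules (a live cell survives with two or three live neighbors, a dead cell is born with exactly three), run on a tiny wrapping grid:

```python
def step(grid):
    """Apply one Game of Life update: a live cell survives with 2 or 3
    live neighbors; a dead cell becomes alive with exactly 3."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neighbors = sum(grid[(r + dr) % rows][(c + dc) % cols]
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                            if (dr, dc) != (0, 0))
            new[r][c] = 1 if neighbors == 3 or (grid[r][c] and neighbors == 2) else 0
    return new

# A "blinker": three live cells in a row oscillate between two states.
grid = [[0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0]]
for row in step(grid):
    print(row)
```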

Cellular automata (CA) provided the foundation for the Game of Life mentioned in the last section. Most cellular automata discussed are one-dimensional and binary. The book illustrates a simple example whereby a center cell's next state depends on its own state and those of its neighbors, which gives eight possible neighborhood configurations for the rule table. The different classes of cellular automata are discussed: evolution leads to a homogeneous state, to simple stable or periodic structures, to chaotic patterns, or to complex localized structures. The fourth class is the one of most interest; it has been theorized that it can be manipulated in such a way as to perform any kind of computation.
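A sketch of a one-dimensional binary CA: the eight possible (left, center, right) neighborhoods index into a rule table; rule 110, used here, belongs to the complex class often cited as computationally rich:

```python
def step(cells, rule=110):
    """One update of an elementary CA. Each cell's next state is looked up
    from the rule number using its (left, center, right) neighborhood --
    eight possible neighborhoods in all."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right   # 0..7
        nxt.append((rule >> index) & 1)
    return nxt

cells = [0] * 15 + [1] + [0] * 15
for _ in range(8):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```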

The following section examines artificial life as it develops within computer programs. The authors assert that something need not behave like any "real" life to be living; in fact, it may have a set of characteristics completely unlike anything we have seen on Earth. They use CAs as their "breeding stock" in a few examples, introducing "random" mutations by flipping bits in the rule table. The change in the rules results in a change in the system's behavior, which is likened to the difference between genotype and phenotype. They then go on to further examples such as biomorphs and Sims' "seed" creatures.
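As a tiny sketch of the "flip a bit in the rule table" idea, the rule number plays the role of genotype and the behavior it produces when the CA is run is the phenotype:

```python
import random

def mutate_rule(rule, bits=8, seed=None):
    """Flip one randomly chosen bit of an elementary CA rule number.
    The rule number is the 'genotype'; the pattern the CA produces when
    run with that rule is the 'phenotype'."""
    rng = random.Random(seed)
    return rule ^ (1 << rng.randrange(bits))

parent = 110
child = mutate_rule(parent, seed=42)
print(parent, "->", child)   # a one-bit change in the genotype
```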

The final section in this chapter examines intelligence, first in people and then in machines. Much of what is considered human intelligence rests on a premise attributed to the psychologist Boring: that human intelligence is whatever an intelligence test measures. This is somewhat ironic, considering that current computers can be set up to complete IQ tests with near or perfect accuracy. Turing created his test to determine computer intelligence: in order to be considered intelligent, a computer has to fool a human into thinking it is communicating with another human. David Fogel contends that intelligence should be measured the same way for humans and computers, defining it as the "ability of a system to adapt its behavior to meet its goals in a range of environments."

Axim X3i

Dell has finally worn me down. I received my Axim X3i today. In fact, I am writing this post on it. It seems fine for short entries, but I cannot imagine writing one of my regular posts like this; it is taxing, to say the least. The main reason I am writing this is to decide whether I want to put an AIM client on here. I suppose it would be alright if I used lame abbreviations like b4 and u, for example "how r u," but I cannot see myself doing that :-/. Oh well, I think this is enough for now. It will be enough of a trial attempting to upload this to the site.

Emergence: The Connected Lives of Ants, Brains, Cities, and Software

By Steven Johnson

This book serves as a decent introduction to self-organizing systems. Johnson uses a broad range of examples, from ants to video games. Much of the text is well researched, covering Resnick's slime mold simulation, Gordon's studies on ants, and many more, even reaching back as far as Turing in the twilight of his career; the bibliography itself makes up a substantial chunk of the book. However, he does have a tendency to make assumptions and to let his personal bias show, many times to a fault, as those claims don't seem to be based on adequate research.

He seemed to focus on four key areas when discussing self-organizing systems: neighborhood interaction, pattern recognition, feedback, and indirect control. Within each section he used a broad variety of examples to try to illustrate his point. Initially, it seems somewhat eclectic, but you get used to it as you go along.

The section on neighborhood interaction seemed to be the basis for self-organizing systems. Without individual elements reacting to and communicating with other elements, they would just be completely autonomous pieces. The interaction between the individuals is what forms the foundation for these systems.

He continued with pattern recognition, which basically dealt with the ability of multi-agent, self-organized systems to recognize patterns that are difficult for top-down, centralized entities to recognize. He leaned heavily on the way the human brain works to illustrate his point here.

He split feedback into two distinct kinds: positive and negative. Positive feedback systems feed on themselves, propelling themselves onward faster and faster; the key example here was the modern media. Negative feedback is the counterexample: when a system receives negative feedback, it must make changes and adapt appropriately. The major example he used here was Slashdot's community feedback system.

The final section of the second part of the book dealt with indirect control. My understanding is that this concerns the appearance of centralized behavior emerging from multi-agent systems. He focused a lot on video games in this section, particularly The Sims and its variations.

The third and final section of the book dealt with his speculations and assessments of what it all means. Unfortunately, the ideas expressed here do not seem substantial enough to take at face value. It is fairly obvious that he is following lines of thought that are insufficiently researched and heavily colored by his opinions.

All things considered, it was not the greatest book of its kind that I have read, but it certainly wasn't the worst either. It does provide a good background and underlying conceptual framework for multi-agent, self-organized systems; it is just laced with a few inaccuracies and biases.