“The art of progress is to preserve order amid change and to preserve change amid order.”
— Alfred North Whitehead
By
Dr. Robert Finkelstein
President
Robotic Technology Inc.
"A mind becomes a detriment when it acquires more intelligence than its integrity can handle." ---- Cullen Hightower
The swarm, the colony, the school, the nest, the flock, the herd, and the pack are aggregates of animals: insects, fish, reptiles, birds, and mammals. Animal groups, whether predator or prey, offer advantages in survival over lone individuals. Humans need the collective as well – from family to tribe to nation – for their own survival.
The technology is ripe for developing combat robots which function as part of a group. The social insect paradigm is a good place to start, where the approach might not be to see how intelligent one can make a robot bug or critter, but how stupid it can be and still accomplish its mission. Robots can be extremely useful well short of having human intelligence. And human intelligence in robots, especially combat robots, might not always be desirable even if it were achievable.
Our pragmatic definition of intelligence is: “an intelligent system is a system with the ability to act appropriately (or make an appropriate choice or decision) in an uncertain environment.” An appropriate action (or choice) is that which maximizes the probability of successfully achieving the mission goals (or the purpose of the system). For example, the “purpose” of an organism is to survive to reproduce, while the purpose of a robot might be to provide nursing assistance in a hospital or defeat insurgents. Intelligence need not be at the human level, just sufficient for the system’s purpose. The level of a robot’s intelligence depends on the user’s requirements and the technical, operational, and economic feasibility of achieving the desired level of intelligence.
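To make the definition concrete, here is a minimal sketch in Python of appropriate-action selection. The candidate actions and probability figures are invented for illustration; in practice the probability estimates would come from the robot's sensing and modeling of its uncertain environment:

```python
# Minimal sketch of the definition above: an "appropriate" action is the one
# that maximizes the estimated probability of achieving the mission goal.
# The actions and probability estimates here are hypothetical placeholders.

def choose_action(actions, p_success):
    """Return the action with the highest estimated probability of success.

    actions   : list of candidate actions
    p_success : function mapping an action to an estimated probability of
                achieving the mission goal in the current environment
    """
    return max(actions, key=p_success)

# Example: a patrol robot weighing three candidate actions.
estimates = {"advance": 0.45, "hold_position": 0.70, "withdraw": 0.55}
best = choose_action(list(estimates), estimates.get)
print(best)  # hold_position
```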
In one view, intelligence, natural or artificial, is an emergent property – an epiphenomenon – of collective (cellular and individual) communication. In the human brain there is communication among 100 billion neurons, and there is communication in human society among 6 billion humans. In an ant brain there is communication among 100 thousand neurons, and there is communication in an ant colony among as many as a million ants or more. The 100 billion aggregated neurons in an ant colony do not equal the cognitive performance of 100 billion neurons in one human skull because quantity alone does not equal intelligence. The architecture of the neurons and the means of communication among them, as well as their numbers, contribute to the characteristics of intelligence. Communication among neurons is relatively fast compared with communication among individuals, whether human or insect. Communication among robots, electromagnetically linked, could be almost as fast as communication among an individual robot's internal processors, endowing them with the equivalent of extrasensory perception (ESP).
While communication among individuals is slow, it can be effective: ants, bees, and other groups have managed to survive and prosper (i.e., to accomplish their "mission") over tens of millions of years in a harsh world. Problem solving and goal achieving can be accomplished beyond the scope and cognition of the individual. Insects achieve complex behavior without centralized control, global world model, or direct communication. Beyond insects, robots could communicate as if they had ESP, sharing total memory, experience, and sensory transfer among themselves. But total information transfer among robots is not necessary if less information will do the job at lower cost.
The Subsumption Architecture was developed at the Massachusetts Institute of Technology Artificial Intelligence Laboratory for achieving intelligent behavior in robots. While this approach looks very promising for achieving insect (or perhaps, eventually, rodent) level intelligence, it is not clear to me how human cognition could ever be replicated with this approach. The Subsumption Architecture depends on reactive control of the robot, using simple processing elements (finite state machines) connected to one another but functioning independently. It is oriented around behavior, rather than cognition, with layers of sensory inputs leading to behavioral outputs. There is no task decomposition, world model, or planning. Yet purposive behavior has been demonstrated, including avoidance of, or attraction to, sensory stimuli (like a roach scurrying away from the light or towards food). Lower-level responses are suppressed or subsumed by higher-level responses without any centralized control.
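The flavor of the approach can be conveyed in a short Python sketch. The sensors, thresholds, and motor commands below are invented for illustration, and the original implementations used networks of augmented finite state machines rather than functions:

```python
# Illustrative subsumption-style controller: behaviors are ordered layers,
# each mapping raw sensor readings directly to a motor command. A higher
# layer, when triggered, suppresses (subsumes) every layer below it. There
# is no world model, planner, or central executive. Sensor names and
# commands are hypothetical.

def avoid_obstacle(sensors):          # highest-priority layer
    if sensors["range_cm"] < 30:
        return "turn_away"
    return None                        # not triggered; defer to lower layers

def seek_light(sensors):              # middle layer
    if sensors["light_level"] > 0.5:
        return "steer_toward_light"
    return None

def wander(sensors):                  # lowest layer; always triggered
    return "drive_forward"

LAYERS = [avoid_obstacle, seek_light, wander]  # highest priority first

def control_step(sensors):
    """One sense-act cycle: the first (highest) triggered layer wins."""
    for behavior in LAYERS:
        command = behavior(sensors)
        if command is not None:
            return command

print(control_step({"range_cm": 120, "light_level": 0.8}))  # steer_toward_light
print(control_step({"range_cm": 12,  "light_level": 0.8}))  # turn_away
```

Note that conflicts between behaviors are resolved by priority at the output stage, not by fusing the sensor data, which anticipates the point about sensor fission below.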
The rationale for this approach is that traditional robotics is unable to deliver the goods: real-time performance in a dynamic world. Traditional robotics modularized perception, world modeling, planning, and execution; it is concerned with central symbolic representations and the decomposition of control into functional modules (i.e., perception, modeling, planning, task execution, and motor control). All of this computation and manipulation takes time and the robot runs into a ditch. The subsumption approach tightly couples sensing to action with broad, but shallow, computational elements: interaction with the real world and its dynamics is the essence of developing robots. Proponents of the architecture claim it is the most robust, successful approach for controlling the low-level behavior of mobile robots. It offers graceful degradation of performance, real-time control, simple and inexpensive processing elements, and it can use biological models for developing control techniques. The technique allows the use of sensor fission, where different sensors trigger different behaviors, and conflicting behaviors are arbitrated at the actuator stage rather than the sensor stage. This is easier and less computationally expensive than the conventional attempt at sensor fusion.
On the other hand, the Subsumption Architecture has no cognitive level processes, cannot plan or follow a plan given to it, and it cannot predict the consequences of actions or events. The subsumption robot could not, therefore, behave tactically like a human combatant even within a limited domain. It could not bring to bear human expertise to solve a problem. If the control were hardware-based, the robot's behavior could not be changed readily. Also, the robot could not learn from experience, or explain itself, or respond appropriately to novel experiences.
When there are multiple interacting subsumption robots, the complexity of the conventional centralized planner is replaced by the complexity of inter-robot and inter-behavior dynamics. There are many research issues. Little is known about how the individual robots will interact, or what the relationships will be among individual behaviors, communications among individuals, and the resulting patterns of group behavior. The form and specification of cooperative behavior needs to be determined. We cannot now predict accurately the global behavior of the multitude from the programmed behavior of the individuals. It would be useful to be able to derive programs automatically for group behavior. We need to determine how the group's behavior depends on the population density of individuals, and how many individuals over what duration are needed to accomplish goals. And what is the relationship between the amount of communication among individuals and the performance of the group? Some of these issues apply to human, as well as robot, groups, but perhaps we have the beginnings of the field of robot sociology (sociorobotology?).
Collective behaviors demonstrated with reactive architecture include: avoid collisions, disperse, aggregate, follow, home, flock, forage, collect objects, sort, and construct. Robust global behaviors are based on simple local rules, where the basic, individual behaviors are designed to be sufficient (i.e., stand-alone) and additive (i.e., serve as building blocks for more complex collective behaviors). Emergent behaviors are not programmed, but arise from local interactions among the group's individuals - behavior is generated bottom-up, not top-down.
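Two of these behaviors, disperse and aggregate, can be sketched as a purely local rule. The sensing radius, gain, and point-robot abstraction below are illustrative assumptions, not a particular published controller:

```python
# Sketch of two of the local rules listed above -- "disperse" and
# "aggregate" -- for point robots on a plane. Each robot reacts only to
# neighbors within its sensing radius; there is no global controller or
# shared map. All parameters are illustrative.
import math

def step(position, neighbors, sense_radius=5.0, gain=0.1, disperse=True):
    """Move away from (disperse) or toward (aggregate) nearby neighbors."""
    x, y = position
    dx = dy = 0.0
    for nx, ny in neighbors:
        dist = math.hypot(nx - x, ny - y)
        if 0 < dist < sense_radius:
            # Vector from neighbor to self, weighted more for closer neighbors.
            w = (sense_radius - dist) / dist
            dx += (x - nx) * w
            dy += (y - ny) * w
    sign = 1.0 if disperse else -1.0
    return (x + sign * gain * dx, y + sign * gain * dy)

print(step((0.0, 0.0), [(1.0, 0.0), (0.0, 2.0)]))        # moves away
print(step((0.0, 0.0), [(1.0, 0.0)], disperse=False))    # moves closer
```

Run over many robots and many time steps, a rule of this kind spreads the group out (or draws it together) without any robot ever computing a global plan.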
There is a subfield of artificial intelligence which is central to the development of cooperating robots: distributed artificial intelligence (DAI). DAI is concerned with concurrency in AI computations at many levels, and it is equally relevant to robots or computational nodes in networks. The primary areas of DAI research include: how to divide problem solving and share knowledge among individual agents; and how to coordinate intelligent behavior, knowledge, skills, goals, and plans among autonomous intelligent individuals so that they can take action or solve problems. The rationale for DAI is that such systems are more adaptable, more reliable, and less costly than conventional systems. The fundamental issue of DAI is how to achieve cooperation among independent agents (such as robots). Indications are that the necessary cooperation could be achieved through communications and social order.
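One classic DAI mechanism for dividing problem solving among agents is the contract-net style auction, sketched below. The task names, robots, and cost figures are hypothetical:

```python
# Hedged sketch of a one-round, contract-net style auction in which
# autonomous agents divide a set of tasks among themselves. Each task goes
# to the agent that bids the lowest cost for it.

def allocate(tasks, agents, cost):
    """Assign each task to the agent bidding the lowest cost for it.

    tasks  : list of task identifiers
    agents : list of agent identifiers
    cost   : function (agent, task) -> that agent's estimated cost
    """
    assignment = {}
    for task in tasks:
        bids = {agent: cost(agent, task) for agent in agents}  # collect bids
        assignment[task] = min(bids, key=bids.get)             # award task
    return assignment

costs = {("r1", "scout"): 3, ("r1", "carry"): 9,
         ("r2", "scout"): 7, ("r2", "carry"): 2}
print(allocate(["scout", "carry"], ["r1", "r2"], lambda a, t: costs[(a, t)]))
# {'scout': 'r1', 'carry': 'r2'}
```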
Insect-brained robots could make do with primitive communications skills, using a finite set of fixed signals having fixed interpretations. Coordination would be limited and used, primarily, to avoid conflict between sequential processes. Without a syntax of signals with which to construct complex actions, sophisticated cooperation among the robots would not be possible.
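Such a fixed signal repertoire might look like the following sketch (the signal names and responses are invented). Note that there is no grammar: only a lookup from signal to fixed response.

```python
# A finite set of fixed signals with fixed interpretations, as described
# above. There is no syntax for composing signals into complex messages,
# only a table mapping each signal to a predetermined response.
from enum import Enum

class Signal(Enum):
    OBSTACLE_AHEAD = 1
    TARGET_FOUND = 2
    NEED_HELP = 3

RESPONSE = {
    Signal.OBSTACLE_AHEAD: "alter_course",
    Signal.TARGET_FOUND: "converge_on_sender",
    Signal.NEED_HELP: "dispatch_nearest_idle_robot",
}

def react(signal: Signal) -> str:
    return RESPONSE[signal]  # fixed interpretation, no parsing or inference

print(react(Signal.TARGET_FOUND))  # converge_on_sender
```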
But sophistication may not be necessary to accomplish many missions. Cooperation requires, at a fundamental level, information concerning the intentions of each robot, and information which modifies the internal state of each robot. Also needed is sensory information, or information about the state of the world. Distributed intelligent systems can arise through the integration of existing intelligent systems after the imposition of a supervisory intelligent control system. Or they can arise from chance encounters of self-organizing intelligent systems, or through the creation of intelligent systems designed to cooperate (such as robots).
As we have discussed in detail elsewhere: while there are many approaches to designing control system architectures for complex systems, our approach, the 4D/RCS, is more advanced than other intelligent control system architectures – and it has been demonstrated and proven in a multiplicity of applications and test-beds. (The “4D” represents the four dimensions of space and time, while the “RCS” is an abbreviation for Real-time Control System).
The 4D/RCS is a framework in which sensors, sensor processing, databases, computer models, and machine controls may be linked and operated such that the system behaves as if it were intelligent. It can provide a system with several types of intelligence (where intelligence is the ability to make an appropriate choice or decision), including reactive, deliberative, and creative intelligence.
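In the published architecture, each 4D/RCS node couples sensory processing, world modeling, value judgment, and behavior generation, and nodes are stacked in a hierarchy operating over different ranges of space and time. The following is a deliberately simplified abstraction of one such node, not NIST's implementation:

```python
# Highly simplified sketch of a single 4D/RCS-style control node with its
# four coupled processes: sensory processing (SP), world modeling (WM),
# value judgment (VJ), and behavior generation (BG). Everything below is an
# illustrative abstraction with hypothetical sensors and plans.

class RCSNode:
    def __init__(self, sp, wm, vj, bg):
        self.sp, self.wm, self.vj, self.bg = sp, wm, vj, bg

    def cycle(self, observations, goal):
        percepts = self.sp(observations)          # sensory processing
        state = self.wm(percepts)                 # update world model
        plans = self.bg(state, goal)              # generate candidate plans
        return max(plans, key=lambda p: self.vj(state, p))  # value judgment

node = RCSNode(
    sp=lambda obs: obs,
    wm=lambda percepts: {"clear": percepts["range_cm"] > 50},
    vj=lambda state, plan: 1.0 if (plan == "advance") == state["clear"] else 0.0,
    bg=lambda state, goal: ["advance", "halt"],
)
print(node.cycle({"range_cm": 80}, goal="reach_waypoint"))  # advance
```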
The 4D/RCS was developed over the last 30 years by the Intelligent Systems Division (ISD) of the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, and more than $125 million was invested in it by the U.S. government. The 4D/RCS had its origins in early work by Dr. James Albus on neuro-physiological models and adaptive neural networks, and it was originally designed to control manufacturing facilities. It has since been modified and adapted for robotic vehicles, including autonomous underwater vehicles and robotic ground vehicles. It served as the reference model architecture for the NASA space station and for the Army’s Demo I, II, and III Programs, and it was successfully demonstrated autonomously driving robotic ground vehicles on roads and cross-country. With additional funding of $250 million, the 4D/RCS was specified as the intelligent control architecture in the Autonomous Navigation System (ANS) Program for the Army’s Future Combat System (FCS).
We are experimenting with a unique approach to achieving swarm behavior and distributed artificial intelligence by partitioning the 4D/RCS among individuals in the collective. In the concept of the Cognitive Collective, the 4D/RCS is partitioned among multiple robotic vehicles and then reassembled across the collective. This allows robots which are individually reactive, with limited intelligence, to become deliberative and cognitive within the collective; or robots which are individually deliberative to gain greater intelligence and efficacy within the collective.
For example, each reactive robot might have a portion of a world model and a portion of a planner. These would be useless to the individual robot. But a collective in which each robot possessed a distinct portion of a world model and planner would be able to reconstitute the entire world model and planner, enabling the collective to achieve deliberative intelligence. This ability would allow small, inexpensive reactive robots to exhibit higher intelligence – and greater abilities – when operating in a swarm. This would be like a swarm of insects having the intelligence of a mouse – or a person.
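A toy sketch of the reassembly step follows. The grid-map shard format is a hypothetical stand-in for a real world model, and a real system would also have to partition the planner and keep the shards consistent:

```python
# Sketch of the partitioning idea: each reactive robot carries one shard of
# the collective's world model -- useless alone, but the collective can
# reassemble the whole by pooling shards over its communication links.

def reassemble(shards):
    """Merge per-robot world-model shards into one shared model.

    shards : iterable of dicts mapping map cells (x, y) -> contents
    """
    world_model = {}
    for shard in shards:
        world_model.update(shard)   # later observations overwrite earlier ones
    return world_model

robot_a = {(0, 0): "clear", (0, 1): "obstacle"}   # shard held by robot A
robot_b = {(1, 0): "clear", (1, 1): "target"}     # shard held by robot B
print(reassemble([robot_a, robot_b]))
# {(0, 0): 'clear', (0, 1): 'obstacle', (1, 0): 'clear', (1, 1): 'target'}
```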
Likewise, robots which are individually deliberative and already have world models and the ability to plan can become more intelligent when those individual world models and planners are aggregated with others in the collective (when, for example, the individual world models are designed to be enlarged and integrated with others).
The number of individuals in the collective needed to comprise an aggregated world model will differ, depending on the type of individual robot and the mission. The robots in a collective may encompass any medium of operation, including air, ground, and water. Some level of redundancy will ensure that attrition of individual robots will not prevent the formation of a cognitive collective from a surviving subset.
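The redundancy argument can be sketched as follows. The shard labels and replication factor are illustrative, and the assignment assumes more robots than the replication factor:

```python
# Sketch of the redundancy argument: replicate each world-model shard on k
# robots so the collective can still reassemble the full model after
# attrition, as long as at least one holder of every shard survives.
from itertools import cycle

def assign_shards(shard_ids, robot_ids, k=2):
    """Round-robin each shard onto k distinct robots (assumes k <= robots)."""
    holders = {s: set() for s in shard_ids}
    robots = cycle(robot_ids)
    for s in shard_ids:
        while len(holders[s]) < k:
            holders[s].add(next(robots))
    return holders

def recoverable(holders, survivors):
    """True if every shard is still held by at least one surviving robot."""
    return all(holders[s] & survivors for s in holders)

holders = assign_shards(["north", "south", "east"], ["r1", "r2", "r3", "r4"])
print(recoverable(holders, survivors={"r2", "r4"}))  # True
```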
While cognitive collectives may be fearsome, they can behave ethically if they are programmed or instructed properly. The 4D/RCS has always included a module with value-driven logic to inform the robot of its priorities in various circumstances. While the values are often operational (e.g., stealth is more important than speed for this mission), they can also be ethical and humane (e.g., do not harm civilians).
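In the simplest sketch, a value-driven module of this kind might treat ethical values as hard constraints that veto candidate actions, and operational values as soft weights that rank whatever remains. Every predicate and weight below is hypothetical:

```python
# Sketch of a value-driven check in the spirit of the value-judgment logic
# described above: hard ethical constraints veto a candidate action
# outright; soft operational values (e.g., stealth over speed) rank the
# remaining candidates.

ETHICAL_CONSTRAINTS = [
    lambda action: not action.get("endangers_civilians", False),
]

OPERATIONAL_WEIGHTS = {"stealth": 0.7, "speed": 0.3}  # mission-specific

def permissible(action):
    return all(rule(action) for rule in ETHICAL_CONSTRAINTS)

def score(action):
    return sum(w * action.get(v, 0.0) for v, w in OPERATIONAL_WEIGHTS.items())

def select(actions):
    allowed = [a for a in actions if permissible(a)]   # ethics first
    return max(allowed, key=score) if allowed else None

candidates = [
    {"name": "night_infiltration", "stealth": 0.9, "speed": 0.2},
    {"name": "frontal_rush", "stealth": 0.1, "speed": 0.9,
     "endangers_civilians": True},
]
print(select(candidates)["name"])  # night_infiltration
```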