
One of the most difficult problems in creating intelligent machines – especially at the edge – is taking behaviors or functionality that were designed or trained in one environment and getting them to work in another. Your robot controller, vision system, or neural network can work perfectly until the temperature, light level, or radiation exposure changes, and then quickly degrade or fail. As early as the 1950s, researchers realized that the same process that had made life so successful – evolution – could be used to optimize all kinds of engineered systems.

With the growing momentum behind building intelligent machines, there has been a rise in evolutionary research in this field. The important point is that the focus is not on finding the most efficient solutions, but the most robust ones: stable against noise, variability, and damage in the hardware on which they will be implemented. This property will be critical to the success of many artificial intelligence (AI) technologies, especially those used in hostile environments such as space, and those built on emerging analog technologies such as memristors.

In engineering and computation, the concept of evolution is much the same as in biology. In essence, a set of initial configurations – potential solutions to the problem at hand – is defined within a set of constraints (what components can be used, how they can be connected to each other, and so on). They are set to perform a task, such as controlling a robot with a sensor so that it avoids obstacles. The success of each solution is measured by some fitness function, and the worst performers are eliminated.

Every solution – good and bad – is represented by a genetic code that determines its shape, wiring, structure, and anything else that is allowed to change through evolution. The more successful ones are either bred (their genetic codes are combined in some way), mutated (part of the code changes randomly), or both. This is repeated many times, effectively searching the state space for ever more successful configurations. It all happens without the need for a designer’s insight. One advantage of this approach is that, as in nature, seemingly insignificant traits of poorly performing solutions can become major advantages later.
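To make that loop concrete, here is a minimal sketch in Python of the process just described, applied to a toy bit-string problem. The genome representation, fitness function, and all parameter values are illustrative placeholders rather than anything taken from the projects discussed below.

```python
import random

# Toy problem: evolve a bit string that maximizes the number of 1s.
# The "genetic code" is a list of bits; fitness is simply the count of 1s.
GENOME_LEN = 32
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.02

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    return sum(genome)  # stand-in for "how well did the robot avoid obstacles"

def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)   # single-point crossover
    return a[:point] + b[point:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # Rank by fitness and eliminate the worst half.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Breed and mutate survivors to refill the population.
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness:", fitness(max(population, key=fitness)))
```

Real systems differ only in what the genome encodes (network weights, circuit topologies, body plans) and in how expensive the fitness evaluation is; the select–breed–mutate loop stays essentially the same.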

In silico

This is not new, even in robotics. One of the most compelling examples of evolving AI was created in 1994 by Karl Sims. Sims was working at Thinking Machines at the time, which gave him access to one of the most powerful supercomputers of the day.

In a simulated environment, he evolved virtual creatures (including their body morphology, sensors, and controllers) that learned – through survival of the fittest – to swim, walk, and compete to grab an object (see Figure 1).

Figure 1: Still from a contest between two evolved virtual creatures competing to capture the green block. (Source: Karl Sims, 1994)

Watch a video of the evolved creatures below. Although this project was entirely virtual, it showed the potential of using the approach to develop not only algorithms but also hardware.

A year later, at the University of Sussex, Adrian Thompson and colleagues showed that the evolutionary approach could be applied to the then relatively new reconfigurable field-programmable gate arrays (FPGAs).

This work was interesting in three ways. First, it produced the first robot controller evolved entirely in hardware. Second, it showed how evolution can exploit subtle features of the underlying substrate to accomplish its goal as efficiently as possible, but such solutions (inevitably) depend on that particular piece of hardware. This means they either fail, or work poorly, when copied to other seemingly identical devices. Third, as the team demonstrated a few years later, evolution can be the solution to its own problem, as long as hardware variability is built into the process.

More recently, researchers from the same group have resumed work in this area. From digital FPGAs in the 1990s (albeit run unclocked, which gave them continuous dynamics), they moved in 2020 to evolving controllers on a 16 × 16 fully analog field-programmable transistor array. By incorporating enough noise and variability into the simulators in which the controllers evolved, they managed to give a low-spec robot with poor sensors sophisticated obstacle-avoidance behavior.
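The essence of that trick can be sketched in a few lines: evaluate each candidate controller across many randomly perturbed copies of the simulator and score it on its worst case, so that only configurations tolerant of noisy sensors and wayward hardware survive. The `simulate` model and all numbers below are stand-in assumptions, not the Sussex group’s actual simulator.

```python
import random

def simulate(controller_gain, sensor_noise, actuator_offset):
    """Placeholder robot model: returns a score for how well the controller behaves.
    In practice this would be a full physics and sensor simulation."""
    ideal = 1.0
    response = controller_gain * (ideal + random.gauss(0, sensor_noise)) + actuator_offset
    return -abs(response - ideal)   # closer to the ideal response = higher score

def robust_fitness(controller_gain, trials=20):
    """Score a candidate across many perturbed simulator instances.
    Scoring the worst case pushes evolution toward robustness,
    not just average-case performance."""
    scores = []
    for _ in range(trials):
        sensor_noise = random.uniform(0.0, 0.3)       # bad sensors
        actuator_offset = random.uniform(-0.1, 0.1)   # hardware variability
        scores.append(simulate(controller_gain, sensor_noise, actuator_offset))
    return min(scores)

# Simple evolutionary loop over a single controller parameter, for illustration only.
population = [random.uniform(0.0, 2.0) for _ in range(30)]
for _ in range(50):
    population.sort(key=robust_fitness, reverse=True)
    parents = population[:10]
    population = parents + [p + random.gauss(0, 0.05) for p in parents for _ in range(2)]

print("evolved gain:", round(population[0], 3))
```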

Neuromorphic evolution

Within the neuromorphic engineering community, Katie Schuman and her colleagues at Oak Ridge National Laboratory and the University of Tennessee have been working for years on evolving optimized neural networks. In 2020 they published the paper “Evolutionary Optimization for Neuromorphic Systems,” which shows how they can create systems that work within real hardware constraints, such as limited weight resolution or delays in synapses and neurons. They also pointed out that – with further development – this kind of result could “… be used as part of a neuromorphic hardware co-design process in the development of new hardware implementations.”
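In practice, “working within hardware constraints” can be as simple as applying the constraint – here, quantizing weights to the few bits of resolution a chip supports – inside the fitness evaluation, so that evolution only ever sees hardware-realizable networks. The sketch below illustrates the idea on a toy task; the network, task, and parameters are assumptions for illustration, not the Oak Ridge team’s actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(weights, bits=4, w_max=1.0):
    """Snap weights to the limited resolution a neuromorphic chip supports."""
    levels = 2 ** bits - 1
    clipped = np.clip(weights, -w_max, w_max)
    return np.round((clipped + w_max) / (2 * w_max) * levels) / levels * (2 * w_max) - w_max

# Toy task: a 2-input, 1-output network that should reproduce XOR-like behavior.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(weights, x):
    w1 = weights[:6].reshape(2, 3)   # input -> 3 hidden neurons
    w2 = weights[6:9]                # hidden -> output
    h = np.tanh(x @ w1)
    return 1 / (1 + np.exp(-(h @ w2)))

def fitness(weights):
    hw_weights = quantize(weights)   # constraint applied before scoring
    preds = np.array([forward(hw_weights, x) for x in X])
    return -np.mean((preds - y) ** 2)

population = [rng.normal(0, 1, 9) for _ in range(60)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]
    population = parents + [p + rng.normal(0, 0.1, 9) for p in parents for _ in range(2)]

print("best (quantized) fitness:", round(float(fitness(population[0])), 4))
```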

Olga Krestinskaya and her colleagues are working on exactly that, with a specific focus on analog chips. Their co-design process takes into account not only the known limitations of a particular technology, but also the inherent (yet unknown) variability of the underlying devices. The team has focused in particular on memristors, a technology that will never have the intrinsic uniformity of digital memory (see Figure 2).

Figure 2: Memristors – resistors whose value changes depending on how current has flowed through them in the past – could be a key component of future neuromorphic systems. Unfortunately, they can show many kinds of variation, including in (a) endurance and (b) aging over time. In addition, some devices will be stuck on or off after fabrication (c). Krestinskaya and her colleagues have shown that chip design can be automated (d) while the evolutionary process is used to mitigate these problems. (Source: Advanced Intelligent Systems)
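One way such variability might be folded into the evolutionary loop is sketched below: each candidate weight pattern is scored as the average over many sampled “chips,” each with its own conductance spread and stuck devices, so designs that tolerate imperfect memristors win out. The device model, task, and probabilities are illustrative assumptions, not Krestinskaya’s published model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: a 4x4 weight matrix should map each input pattern to a target pattern.
X = np.eye(4)
Y = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]], dtype=float)

def apply_memristor_model(weights, spread=0.1, p_stuck=0.02):
    """Perturb an ideal weight matrix the way a memristor crossbar might:
    multiplicative device-to-device variation plus stuck-off/stuck-on cells."""
    w = weights * rng.normal(1.0, spread, weights.shape)        # conductance spread
    w[rng.random(weights.shape) < p_stuck] = 0.0                # stuck-off devices
    w[rng.random(weights.shape) < p_stuck] = np.abs(weights).max()  # stuck-on devices
    return w

def fitness(weights, samples=10):
    """Average error over many sampled 'chips' built from the same design,
    so evolution favors weight patterns that tolerate device variation."""
    errs = [np.mean((X @ apply_memristor_model(weights) - Y) ** 2) for _ in range(samples)]
    return -np.mean(errs)

population = [rng.normal(0, 0.5, (4, 4)) for _ in range(40)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [p + rng.normal(0, 0.05, (4, 4)) for p in parents for _ in range(3)]

print("variability-aware fitness of best design:", round(float(fitness(population[0])), 4))
```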

A few months ago, Žiga Rojec and his colleagues at the University of Ljubljana in Slovenia showed that you can go even further by accounting not only for imperfections and variability, but also for outright failure. One of the standout applications for early neuromorphic systems, especially analog ones, could be satellites: size, weight, and power are critical, but price is not. Such systems must also be tolerant enough of radiation and of the huge temperature swings in space to work well there. Rojec’s research shows that, through evolution, an analog chip can be designed to deliver satisfactory results even in the presence of short-circuit faults.
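The same pattern extends to outright failures. The toy example below evolves resistor values for a simple divider while injecting every possible single short-circuit into the fitness evaluation and scoring the worst case. It is only an illustration of the principle under assumed values; Rojec’s actual circuit-synthesis method evolves far richer topologies with a real circuit simulator.

```python
import random

# Toy analog design problem: pick four series resistor values (ohms) for a divider
# whose output is tapped after the second resistor. Fitness demands the output stay
# as close as possible to 2.5 V from a 5 V supply even when any single resistor is
# short-circuited. Purely illustrative; a real flow would call a circuit simulator.
V_IN, V_TARGET = 5.0, 2.5

def output_voltage(resistors, shorted=None):
    r = list(resistors)
    if shorted is not None:
        r[shorted] = 1e-3                       # inject a short-circuit fault
    top, bottom = r[0] + r[1], r[2] + r[3]      # series chain, tap in the middle
    return V_IN * bottom / (top + bottom)

def fault_tolerant_fitness(resistors):
    # Worst-case error over the fault-free circuit and every single-short fault.
    cases = [None, 0, 1, 2, 3]
    return -max(abs(output_voltage(resistors, c) - V_TARGET) for c in cases)

population = [[random.uniform(100, 10000) for _ in range(4)] for _ in range(60)]
for _ in range(300):
    population.sort(key=fault_tolerant_fitness, reverse=True)
    parents = population[:15]
    population = parents + [[max(1.0, r * random.gauss(1.0, 0.1)) for r in p]
                            for p in parents for _ in range(3)]

best = population[0]
print("resistors:", [round(r) for r in best],
      "worst-case error (V):", round(-fault_tolerant_fitness(best), 3))
```

Evolution converges toward splitting each leg of the divider across roughly equal resistors, because that redundancy limits how much any single short can shift the output – a small-scale echo of designing for damage tolerance.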

Perhaps it is inevitable that bio-inspired technology should find its way forward enabled by bio-inspired optimization techniques. Time will tell.



https://www.eetimes.com/intelligent-hardware-evolves-to-overcome-its-limitations/
