Reconfigurable computing



Reconfigurable computing is the ability to modify a computer system's hardware architecture in real time. Although the idea was originally proposed in the late 1960s by a researcher at UCLA, reconfigurable computing is a relatively new field of study; the decades-long delay had mostly to do with a lack of acceptable reconfigurable hardware. Interest in the field was first triggered late in 2002, when a small Silicon Valley start-up called QuickSilver Technologies announced what it called the Adaptive Computing Machine (ACM), a new class of digital integrated circuit that can be embedded directly into a mobile device and lets hardware be programmed almost as if it were software. For example, take three common operations that the average mobile phone performs seamlessly: searching for a local cell, verifying that the number belongs to an authorized user, and making the connection. Today these three operations are performed by three different chips inside the handset. With the new adaptive technology, a single chip can be reconfigured by a software instruction to assume different hardware functions and to perform all three operations one after another.
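The handset example above can be sketched in software. This is only a toy model, assuming a hypothetical `AdaptiveChip` class that, like the ACM described here, holds a single hardware function at a time and swaps it out under software control:

```python
class AdaptiveChip:
    """Toy stand-in for an adaptive IC: holds one hardware function at a time."""

    def __init__(self):
        self.function = None

    def reconfigure(self, function):
        # A single "software instruction" swaps the chip's hardware role.
        self.function = function

    def run(self, *args):
        return self.function(*args)


AUTHORIZED = {"555-0100"}  # made-up subscriber list, purely for illustration
chip = AdaptiveChip()

chip.reconfigure(lambda: "cell-found")                 # step 1: find a local cell
signal = chip.run()
chip.reconfigure(lambda number: number in AUTHORIZED)  # step 2: verify the user
ok = chip.run("555-0100")
chip.reconfigure(lambda ok: "connected" if ok else "rejected")  # step 3: connect
status = chip.run(ok)  # -> "connected"
```

One object plays all three roles in sequence, just as one adaptive chip would replace three fixed-function chips.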

The earliest reconfigurable computing systems predate even digital computers. Before digital logic, scientific and engineering computations were done on programmable analog computers: big banks of op-amps, comparators, multipliers, and passive components interconnected via a plug board and patch cords. By connecting components together, a clever user could implement a network whose nodes obeyed a set of differential equations, in effect a differential-equation solver with deployment-time reconfigurability. Toward the end of its era, the analog computer was combined with relay banks, and later with digital computers, to form hybrids. These machines could reconfigure themselves between execution sequences, providing an early form of yet another category of configurability. Some hybrid-computer programmers became experts at juggling configurations while holding data in sample-and-hold circuits to extend the range of these systems.

The first moves toward really fluid reconfigurability came with the advent of embeddable digital computers. With the characteristics of a system defined by software in RAM, nothing could be simpler: changing the operation of the system at installation, in response to changing data, or even on the fly is a matter of loading a different application. Variants on this theme included tightly coupled networks of computers in which the network topology could adapt to changing data flows, and even computers that could change their instruction sets in response to changing application demands.

But the first explorations into what most people today mean by the term reconfigurable computing came after the development of large SRAM-based FPGAs. These devices provided a fabric of logic cells and interconnects that could be altered, albeit with some difficulty, to create just about any logic netlist that would fit into the chip. Researchers quickly seized upon the parts and began experimenting with deployment-time reconfiguration, creating hardwired digital networks designed for specific algorithms.

Experiments with reconfigurability in FPGAs identified two promising advantages: reduction in the size or power consumption of the hardware, and increases in performance. Often the two kinds of advantage came together rather than separately. The advantages, it turned out, came from only a few quite specific techniques. One of these was simple: reuse of hardware. If you organize a system so that it has several distinct, non-overlapping operating modes, then you can save hardware by configuring a programmable fabric to execute in one mode, stopping, and then configuring it to operate in another mode.
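A minimal sketch of this reuse idea, assuming a hypothetical `Fabric` class whose loadable configurations are ordinary Python functions standing in for bitstreams:

```python
class Fabric:
    """Toy model of a programmable fabric that holds one configuration at a time."""

    def __init__(self):
        self.config = None

    def load(self, config):
        # Loading a new configuration overwrites the old one: the same
        # silicon is reused, which is where the area/power saving comes from.
        self.config = config

    def run(self, data):
        return self.config(data)


# Two non-overlapping operating modes, expressed here as plain functions
# standing in for hardware netlists.
def acquire_mode(samples):   # mode 1: scale the raw samples
    return [s * 2 for s in samples]


def analyze_mode(samples):   # mode 2: reduce the scaled samples
    return sum(samples)


fabric = Fabric()
fabric.load(acquire_mode)          # configure for mode 1
scaled = fabric.run([1, 2, 3])     # -> [2, 4, 6]
fabric.load(analyze_mode)          # reconfigure the *same* fabric for mode 2
total = fabric.run(scaled)         # -> 12
```

Because the modes never run at the same time, one fabric does the work that two fixed circuits would otherwise require.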

A number of companies are currently working in this area. Most of the big players in the conventional DSP/ASIC arena (Texas Instruments, IBM, Motorola, Intel) are known to be working overtime to come up with reconfigurable designs of their own.


Current computers are fixed hardware systems based upon microprocessors. As powerful as the microprocessor is, it must handle far more functions than just the application at hand. With each new generation of microprocessors, application performance increases only incrementally; in many cases the application must be rewritten to achieve even this incremental enhancement. Traditional fixed hardware may be classified into three categories: logic (gate arrays, PALs, etc.), embedded control (controllers, e.g. ASICs and custom VLSI devices), and computers (microprocessors, e.g. x86, 68000, PowerPC).

Reconfigurable computing systems are computing platforms whose architecture can be modified by software to suit the application at hand. To get maximum throughput, an algorithm must be placed in hardware (e.g. an ASIC or DSP); dramatic performance gains are obtained through the 'hardwiring' of the algorithm. In a reconfigurable computing system, this hardwiring takes place on a function-by-function basis as the application executes.
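The function-by-function hardwiring can be sketched the same way. This toy model assumes a hypothetical `ReconfigurableUnit` that reloads its configuration only when the requested function changes, since reconfiguration in real devices is far from free:

```python
class ReconfigurableUnit:
    """Toy model of a fabric that is re-'hardwired' per function, on demand."""

    def __init__(self):
        self.loaded = None
        self.impl = None
        self.reconfig_count = 0

    def execute(self, name, impl, *args):
        # "Hardwire" the function only if the fabric doesn't already hold it;
        # counting reloads makes the reconfiguration cost visible.
        if self.loaded != name:
            self.loaded, self.impl = name, impl
            self.reconfig_count += 1
        return self.impl(*args)


unit = ReconfigurableUnit()
# As the application runs, each function is mapped onto the fabric in turn.
a = unit.execute("mac", lambda x, y, acc: x * y + acc, 3, 4, 10)  # -> 22
b = unit.execute("mac", lambda x, y, acc: x * y + acc, 2, 5, a)   # -> 32, no reload
c = unit.execute("xor", lambda x, y: x ^ y, b, 0xFF)              # -> 223, reloads
```

Only two reconfigurations occur for the three calls, because the second call reuses the configuration already in place.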


Posted: Sun, 19/12/2010 - 19:47