From: MarkH@i2.co.uk
Date: Wed Oct 04 2000 - 05:14:11 PDT
I'm not sure if this represents a potential software revolution or a
potential hardware revolution, but it is quite exciting.
We don't really think about it much, but the code generated by our compilers
ends up going all around the city just to get next door. This is because our
algorithm has to be chopped up, twisted and screwed back together in a way
that causes a general-purpose engine (the von Neumann architecture:
CPU+code/data+buses) to carry it out. All those pushes, pops, loads, stores
etc. add up to an incredibly wasteful amount of electron shuffling and
silicon real estate.
Enter Handel-C, which takes a C-based expression of your algorithm and
implements it by wiring up the gates of an ASIC (application-specific
integrated circuit).
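I'm working from memory here, so treat this as a sketch of the flavour rather
than checked Handel-C syntax, but the idea is that you write more or less
ordinary C, give your variables explicit bit widths, and wrap statements in a
par block when you want them carried out in the same clock cycle. The compiler
then turns each statement into its own piece of logic rather than into
instructions:

    // Flavour only - my recollection, not checked against a manual
    unsigned int 4 a;      // widths are explicit: a 4-bit register, no more
    unsigned int 4 b;
    unsigned int 4 sum;

    while (1)
    {
        par                // both assignments complete in the same clock cycle
        {
            sum = a + b;   // becomes a small adder wired between registers
            a = b;         // becomes a register-to-register path beside it
        }                  // no fetch/decode/execute anywhere in sight
    }

Sequential statements take a clock cycle each; anything inside a par happens
side by side. That seems to be the whole trick - the structure of your C
becomes the structure of the circuit.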
Example...
The Handel-C researchers implemented Conway's Life in hardware in an
afternoon (remember, they are just writing C here), and on a 30MHz FPGA
(field-programmable gate array) it still ran many times faster than on a
300MHz Pentium (my figures need checking). This is because the solution was
non-von Neumann - no CPU/program/data and associated bottlenecks - just
gates wired up in a way that implemented the algorithm. If you wanted, you
could of course take the FPGA design and implement it on faster silicon and
boost this performance by another factor of 10 to 100. This has exciting
consequences.
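To make the contrast concrete, here is the guts of Life in plain, ordinary C -
not their code, just the textbook rule - which is roughly what you would hand
to Handel-C:

    /* One generation of Life on an N x N wrap-around grid.  On a CPU
       this is a doubly nested loop visiting one cell at a time. */
    #define N 64

    void life_step(const unsigned char old[N][N], unsigned char next[N][N])
    {
        int x, y, dx, dy;
        for (y = 0; y < N; y++) {
            for (x = 0; x < N; x++) {
                int neighbours = 0;
                for (dy = -1; dy <= 1; dy++)
                    for (dx = -1; dx <= 1; dx++)
                        if (dx || dy)
                            neighbours += old[(y + dy + N) % N][(x + dx + N) % N];
                /* the whole rule: survive on 2 or 3 neighbours, born on 3 */
                next[y][x] = (neighbours == 3) || (neighbours == 2 && old[y][x]);
            }
        }
    }

On the Pentium that is thousands of instructions per generation, executed one
after another. In the gate version there is (as I understand it) no loop at
all: every cell gets its own copy of that little rule and the whole grid
updates at once on each clock - which is how a 30MHz part ends up embarrassing
a 300MHz one.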
Consequences...
One is that the end result is in theory better - quicker and smaller - than
conventional code on a von Neumann machine. Another is the death of hardware
design - or at least another (potentially massive) encroachment on it. By
shifting to a software design model, hardware designers (maybe we need to
think of a new name for them) are liberated from a number of constraints
that currently stifle innovation and creativity, and which make ASIC
development very risky, time-consuming and expensive.
We could be talking here about the death of Intel, AMD and the like. Consider
this...
...Traditional hardware design is time-consuming and intricate, with long
prototyping and product proving cycles. This stifles creativity and leads to
low risk, incremental, design strategies based on proven building blocks.
This is at its most extreme in the PC CPU market where design costs are
astronomical, and the design cycle is measured in years. This is one reason
why the Pentium-III architecture has changed remarkably little since the
8086. Even small mistakes are very costly indeed, and the cost of making a
fundamental mistake would be astronomical - if Intel fell behind AMD by a
generation their business would be decimated.
Adopting a software design model for ASIC design brings rapid prototyping,
design experimentation, creativity, and the ability to explore more of the
"solution space", in less time and at lower cost. I'm not saying this allows
AMD or Intel to cast away their chains, but it does look like being a big
deal.
From a softie's point of view I'm not sure how it fits into the scheme of
things (NEST or anything else) but am forcing myself to re-read an article
about it to make sure it sticks. My interest is at least partly cos I was a
hardware geek in the days before PCs - when we had to build our own uP-based
single board thingies and software was measured in bytes - but also because
I sniff something that may well change things radically.
I once worked on a project that aimed to capture a requirement in an
abstract way, and provide a continuum of ever more "designy" layers down to
a definition of the solution to that requirement, including the partitioning
of the implementations between hardware and software functions. The purpose
was to manage extremely costly, risky (complex, high-performance)
developments, and it rings sweet bells here.
The bells say that software and hardware may be blurring even more. (Once
writing software was a matter of plugging wires into an array of sockets in
the right combination). Can we imagine our HTTP/WAP/P2P solutions ending up
embodying bits of harder stuff around the edges? Hit the compile button and
spit out not just something that gets loaded into a CPU, but something that
rewires an ASIC to do the same job faster and with less power? Could it make
things possible that we don't even consider because, even as softies, we
unconsciously accept and factor in the limitations of conventional hardware
approaches (e.g. CPU+code+memory+i/o) in our target platforms? Who knows?
More info...
I'm trying to find stuff about it online but failing so far - doesn't build
confidence in the marketing ability of these guys! It is being touted around
by an Oxford academic called Page from a company called ESL (an Oxford Uni
spin-off I think - out of the parallel recipe kitchens of Prof Tony Hoare
(Communicating Sequential Processes) and Inmos's David May).
I'll post a follow up with details of the article and any online refs I can
find. If anyone knows of anything please email me and I'll include it in the
follow up.
mark
--
Mark Hughes
Agile HTML Editor
Agilic Corporation
http://www.agilic.com