G2000's (G2K) - faster than light

Tim Byars (tbyars@earthlink.net)
Mon, 3 Aug 1998 10:15:34 -0700


http://www.mackido.com/Hardware/G2000.html

AIM (Apple-IBM-Motorola) has never been specific about anything regarding
Project 2000 (or G2K / G2000). So all numbers are based on loose
references by IBM and Motorola (mentioned in presentations over the last
few years), combined with liberal speculation. Furthermore, times change.
Now that Motorola and IBM are diverging a little, their plans may be
changing as well.

When guessing the future, the farther we go the more hypothetical things
become, the more variables enter the equation -- and the larger our misses
could be. So with that in mind, I will play Carnac the prognosticator
(looking 2 to 5 years ahead).

I'm guessing that these chips will be numbered something like the 900's
or 2000's (2K's) -- but will probably be referred to as G2K's. Guessing
the next generation's product number is bad enough; two generations into
the future, all bets are off.

When?

Project 2000 (or the G2000's) was originally scheduled to be released in
'99 or early 2000 -- but little more was said than that. In some ways, the
AIM goals have been slipping a little (AIM seems to have slipped about one
quarter for every year or so ahead they planned). I would not be surprised
if this processor hit its later date and is released in early to mid 2000.

This is not bad for schedules defined in late '95 or early '96. Contrast
this to the P7/Merced, which was originally targeted for late '96 - '97,
and now looks to be delivered some time around 2001. In fact, the Merced
has been perpetually 3-4 years away, ever since 1993 (I'm not kidding --
look at old periodicals).

New Core and Instruction Set

G2000 (Project 2000) was going to have a totally "new" core (processor
design) and changes to the instruction set. Basically, everything in the
design was opened up to potential change in order to guarantee that the
PowerPC evolves and stays on the cutting edge.

If the instruction set is going to change (and it seems likely), then the
obvious question is, "How is the instruction set going to change?"

I strongly suspect that certain techniques -- like predication, preload
information, hints, and other processor "tricks" -- will be added to the
instruction set to make this "new" PPC instruction set significantly
faster than the current one (and competitive with new processor designs
like EPIC) for a single thread of execution. Yet it will still likely
contain things like the AltiVec unit and multiple cores (in at least some
flavors), as a way to guarantee a big overall performance advantage, and
an advantage on parallel thread execution. In other words, faster and
faster.
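
To make "predication" concrete, here is a rough C sketch of my own (not
anything AIM has published): the first version forces the processor to
guess which way the branch will go, while the second is the branch-free
shape that a predicated or conditional-select instruction can execute
directly, so a bad guess never flushes the pipeline.

    /* Illustration only -- a generic compiler example, not an AIM design. */

    /* Branchy version: the CPU has to predict the "if". */
    int clamp_branch(int x, int limit)
    {
        if (x > limit)
            x = limit;
        return x;
    }

    /* Branch-free version: the compare and the select can issue back to
       back -- the kind of sequence a predicated instruction set can turn
       into a single conditional operation. */
    int clamp_select(int x, int limit)
    {
        int over = (x > limit);    /* 1 or 0 */
        return over ? limit : x;   /* selects, rather than branches */
    }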

Despite changes to the instruction set, I strongly suspect that
compatibility with the current PowerPC instruction set will be a
requirement. Fortunately, the PowerPC architecture is not even close to
outdated -- so I expect the design to be basically a few tweaks and
additions, rather than a complete rework.

They can easily add a few new instructions, conditional execution bits,
some more hints (for loading and branching), and some other features --
all without breaking current executables or requiring code to be
recompiled. This means I doubt there will be "two modes", as in an "old
PPC emulator" and a "new PPC instruction set" -- instead, I expect a few
"complementary" instructions will be added to increase performance. This
will be a major advantage in adoption, compared to a complete
instruction-set replacement (like Merced and IA64).
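
As a small illustration of what one of those "complementary" hint
instructions looks like in practice (my example, using a modern GCC
builtin as a stand-in for an ISA-level hint such as PowerPC's dcbt cache
touch): the hint is harmless on a chip that ignores it, so the same
binary runs everywhere and only the newer chip gets the speedup.

    /* Sketch only: a prefetch hint that a newer core can honor and an
       older core can simply treat as a no-op. */
    long sum(const long *a, int n)
    {
        long total = 0;
        for (int i = 0; i < n; i++) {
            if (i + 16 < n)
                __builtin_prefetch(&a[i + 16]);  /* hint: wanted soon */
            total += a[i];
        }
        return total;
    }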

There will probably be a few older (deprecated) instructions that will
not be as powerful (fast) as the new ones -- but the baggage of carrying
these few instructions (for compatibility) will be almost no load at all
(in space or performance). So the PowerPC's anachronistic baggage in the
future will be far less than that of Merced (and the IA32/x86 instruction
set). In other words -- even the least efficient way to add instructions
to the PowerPC-2000 will be superior (in space and performance) to the
most efficient way to drag around the 20-30 year old instruction set of
the x86.

Originally, the design goal for the G4 was for it to include a new 64 bit
flavor of the PowerPC. We haven't yet seen if that will materialize. The
128 bit AltiVec unit definitely alleviated the need for 64 bit integers
(for data size) but there are still questions about 64 bit address space.
Big databases and servers want 64 bit addressing (current PowerPC's are
only 32 bit addressing). So if 64 bit addressing slips out of the design
goals for the G4, then it will probably end up in the G2000. But 64 bit
support should not be too hard to add to the PowerPC Architecture --
since the PowerPC Architecture was designed from the start to allow for
64 bit implementations. Many IBM Power implementations are already 64 bit
(like the PowerPC/AS's). So adding these extra instructions will not be
much work, and will not be a kludge -- nor have to be a paged-mode hack
(1).

(1) Yet, while a paged-mode was a hacky pain-in-the-butt and a slight
drag on performance in the Intel x86 world (because it could only address
64 Kilobyte pages) -- it is far, far less painful when you have 4
Gigabyte pages. I suspect that the 64 bit flavor of PowerPC will support
both 32 bit (in a paging mode) and 64 bit direct addressing.
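
For a sense of scale (my arithmetic, nothing processor-specific): a 16
bit offset reaches 2^16 bytes (the old 64 KB x86 segments), a 32 bit
address reaches 2^32 bytes (the 4 GB that current PowerPC's can see),
and a 64 bit address space is 2^32 -- about four billion -- times larger
again, which is why the big databases want it, and why a
32-bit-inside-64-bit paging mode is no great hardship.

    /* Back-of-the-envelope address-space arithmetic. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long seg16  = 1ULL << 16;  /* old x86 segment size */
        unsigned long long addr32 = 1ULL << 32;  /* 32 bit address limit */

        printf("16-bit offset : %llu bytes (64 KB)\n", seg16);
        printf("32-bit address: %llu bytes (4 GB)\n", addr32);
        /* 2^64 / 2^32 = 2^32: a 64 bit space is ~4 billion times larger. */
        printf("64-bit address: %llu times the 32-bit space\n", addr32);
        return 0;
    }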

Size / Power

It is likely that the G2000 was supposed to be a new "mega-processor": a
fast monster that uses lots of transistors and is the ultimate
powerhouse. But AIM hasn't done that well in the super-processor
category. The PowerPC does a great job of being as fast (or faster) with
a much smaller chip -- the PowerPC's are far smaller than their Intel
brethren. One is left to wonder what a PowerPC the same size as a
PentiumII or Merced would be capable of.

Internally, IBM has done great with their PowerPC/AS (a 64 bit PowerPC
with other additions), and Power Architecture chip sets (like Power3). But
these large chip technologies have not really been used as much in the AIM
camp.

So while the PowerPC's have done well compared to Intel, they have just
not beaten the high-end SPARCs and DEC Alphas. AIM missed their goals for
their mega-chip, the 620. Later chips, like a G3 with a huge on-chip L2
cache, also got dropped (2). So while AIM is certainly capable of doing
the big "mondo" chip -- they haven't yet done so.

(2) The G3 was originally supposed to go up to 30 million transistors,
but is not likely to go beyond 10 million. Those extra 20 million
transistors were probably going to be used for on-chip L2-cache. It turns
out that adding all that on-chip cache just restricted versatility,
didn't increase performance, and didn't significantly lower costs -- at
least not enough to be worth it. In fact, the results of large on-package
cache technology would be considered a flop for Intel too -- except that
the reason Intel is putting the L2 cache on package (for Pentium Pro's,
PentiumII's and Xeon's) is as a way to destroy the competition in the
cache market, by leveraging their proprietary processor technology to
force out the cache competitors. Typical Intel.

The trend on the current G3's is to make things more efficient (in size
and cost) -- and not to waste lots of space trying to make instructions a
little faster (3). What all this means is that while processors in
general are using more transistors (which can let designers optimize the
processor and make things go faster), the G3 has been shrinking and
becoming more efficient rather than gaining many new transistors.

(3) Subsequent versions of the G3, like Motorola's die-shrunk G3 and
IBM's new Copper G3, are both ways to reduce cost, power, and heat, and
to increase speed -- but not to spend a lot of space (transistors) making
the processor (instruction set) go much faster at the same clock rate.

This trend towards smaller and more efficient makes a lot of sense when
you are in an embedded market (which IBM and Motorola have both declared as
their goal). It also makes sense for more portable computers, or to use
computers in more devices. Yet despite that "efficiency first" trend,
AltiVec and multiple-core PPC's will allow the G4's to grow substantially
in performance per MHz, and they will probably be far better performers
than the current generation (at the same clock speed). Both of these
concepts were possibly pushed from the G3 project into the G4's -- so the
G4 may return (a little) toward the trend of larger and larger (more and
more powerful) processors. The G2000 too will probably go with more
parallelism than the current generation.

So you can see that there has been a bit of a fork in the road. IBM and
Motorola were originally targeting workstations and the high-end market
for the PowerPC -- but then they took a detour towards being faster (to
market) and smaller (processors). In the process of getting fast and
small, they opened up a larger market for themselves (embedded
controllers), and that small, efficient core makes it easier for them to
keep scaling up by adding more and more processors onto a single chip.

Intel is shut out of the embedded market because they are bad at making
things small and efficient -- the x86s (Pentiums) are anything but that.
So Intel keeps going for bigger and bigger. They keep bolting on more and
more features, new instruction sets, and extra modes -- like
Frankenstein's Processor.

What does all this mean for the future?

That the PowerPC's superior efficiency has made it a ruler of the
embedded market. That money guarantees further development and further
improvements. That efficiency also means that while the processor can
easily compete at the high end of the market, it is making an even bigger
difference in the portable market. That advantage is only likely to grow
in the future.

Low-end and Portables

Intel's processors have a big weakness: heat and size. This hurts their
capabilities in portables and low-end machines -- and that disadvantage
is only getting worse (and will continue to do so). Merced doesn't cure
the problem, being a big, hot monster -- and it is likely the last nail
in the coffin for efficient x86 portables. CISC processors are dying in
portable markets first -- and if EPIC can't cure the problem (which it
can't), then what will? The PowerPC's are going to try to dominate these
markets -- and have a good chance of doing so. The PowerPC's lower power
(and higher efficiency) means that it is easier (cheaper) to make
higher-performing portables that have a longer battery life. Intel just
can't beat the laws of physics.

I don't see these advantages disappearing. The big surprise may come in
"all-in-one" processors, or "system-on-a-chip". The PowerPC's are
becoming so small and efficient that it is quite likely that in a couple
of years (or less), Apple can have Motorola or IBM making complete
PowerMacs (sans memory) on a single chip -- especially if that chip only
needs to have FireWire and USB on board (for most of its I/O needs). This
system-on-a-chip solution reduces the chip count, reduces the price,
reduces the size, reduces the complexity, reduces the power required,
increases reliability, increases battery life, and increases the options
for hardware systems (what kind of cases you can put them in, or what you
can use them for). System-on-a-chip for the future is probably a key
reason for a recent Motorola-AMD alliance, and some other alliances that
Motorola has been making.

Intel just can't compete in that area either -- and there is no doubt
that if computers become smaller and faster, then more people are going
to use portables and smaller, disposable computers. Which looks more and
more to mean "not Intel". I think AMD, Cyrix and the other x86 clone
makers are going to make a play for that market in the PC world -- but it
is still very hard to drag around the x86 baggage and be efficient at the
same time (or to the same degree).

Remember, all this lower-power stuff doesn't ONLY apply to portables.
The less power your processor uses, the less it costs to make a desktop
system as well -- a smaller power supply, possibly eliminating the fan,
and other engineering wins. In fact, it is very difficult to make certain
types of machines (like a set-top-box type computer) if they have a fan
-- and who wants the noise? So lower power has advantages in many areas
of computing -- it just matters most for portables and low-end machines.

High-end

Intel's processors have a big weakness: heat (power) and size. The more
power a processor uses (and the more heat it makes), the more it costs to
build a system around it and keep it cool. On $5,000 workstations this
isn't that big a deal -- if you only have single processors. But the
industry is preparing to shift to more multiprocessor systems, and there
the problems get worse. So, on the high end, the cheap and efficient
cores of the G4 (and beyond) will mean that you can put many more
processors in the same box for the same or less cost. While it is
possible to put 2 or 4 Xeon processors in a box, it is expensive to cool
them and power them. In fact, that box could easily cost $10,000 -
$20,000 for just two Xeon's -- and they haven't even gotten all the bugs
worked out of that chip yet. The scalability of the more efficient
PowerPCs will likely make their advantages over Intel even larger than in
the current generation.

In the future, I would not be surprised to find that for the price of a
dual or quad top end x86 workstation, you may be able to buy a PowerPC
workstation with quad-G2000's (with 6 or 8 sub-processors in each G2000).
Maybe even a 4:1, 6:1, or even 8:1 advantage for PowerPC's, with far more
versatility in system design. How is Intel going to compete with that?
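
To spell out the arithmetic behind those ratios (using my own guesses for
the sub-processor counts from the paragraph above):

    4 G2000's x 8 sub-processors each = 32 PowerPC cores
    4 G2000's x 6 sub-processors each = 24 PowerPC cores
    quad x86 workstation              =  4 x86 processors

    32 : 4 = 8:1          24 : 4 = 6:1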

Remember, the only way Intel has been getting its performance for the
past few generations is by tying a level-2 cache into the package. This
increases the heat and price of the processors. Intel can't keep
responding with new single-processor chips that are bigger and hotter
than previous versions, and simultaneously get better and better
parallel-processor characteristics. (Actually, they have to try to, but
they are fighting the laws of nature again.)

Conclusion

The PowerPC (RISC) is doing what it promised to do -- deliver more
performance at lower cost and lower power budgets. It has taken a little
longer to gain momentum than early predictions said it would -- but the
momentum is building.

The bottom end of the market is going to be a tough fight for PC's.
Computers are becoming more and more commoditized (and appliance-like).
PC's have to make a choice -- cheap OR fast. PowerPC's are doing both --
going after cheaper and cheaper markets while keeping their performance
high. Users are becoming familiar with disposable computers (just use
them and hand them down or throw them away). There is going to be a hard
press made in that area of the market by PowerPC's -- and the customers
are wising up to the value of this paradigm.

On the high end of the market, the PowerPC's efficiency is going to give
it an advantage on scalability and cost. The Uniprocessor may be on its
last legs in the high end of the market before Merced ever comes out. AIM
gets two or three more generations of chips before the Merced comes out
(more if Merced's schedule keeps slipping) -- G2000's are probably
going to be out a year before Merced, and might be going into their second
generation before Merced comes out. Then the PowerPC gets another two
generations (3-4 years) before software that can support Merced well will
be widely available -- according to Intel's own (over optimistic)
predictions. Merced may be a has-been before it ever gets released. The
only thing Intel has to fight with is more FUD (misinformation and hype) --
in an industry that is finally (after 20 years) getting wise to the ways of
Intel.

So Apple and AIM are using a martial-arts (and business) strategy: find
where the opponent is weak, and then keep pounding on that weakness until
they give up. Ouch, that's gotta hurt. I'm not sure that Intel can ever
cover that opening either. At best, they will probably have to wait and
try to buy someone who covers it for them (their savior is going to be
size and money).

The advantages of the PowerPC are starting to really shine -- and I think
it is going to get so bright that it will be hard to see. I hope Intel can
keep up enough to at least keep pressure on the PowerPC camp -- but
technologically, I'm not sure they will be able to. Intel seems to be
running very quickly towards the wrong goal.
-

The idea that Bill Gates has appeared like a knight in shining
armour to lead all customers out of a mire of technological
chaos neatly ignores the fact that it was he who, by peddling
second-rate technology, led them into it in the first place.
-Douglas Adams, on Windows '95

<> tbyars@earthlink.net <>