http://www.networkcomputing.com/605/605moskowitz.html
CORPORATE VIEW / ROBERT MOSKOWITZ
The High Cost Of Application Waste
Some time ago, we were analyzing why an Open Database Connectivity
(ODBC) application was performing so poorly for remote access users.
While sitting there with our packet sniffers between the SQL servers
and Point-to-Point Protocol (PPP) access servers, we watched in
dismay as data we had seen cross the link a few packets earlier
went across again.
Our first reaction was that these were retransmissions, but no:
other information in the packets showed that these were distinct
packets. Later we learned that ODBC does not maintain state, so it
is not uncommon to see some information re-requested. ODBC seemed to
assume that bandwidth was free, or at least freer than the logic
needed to avoid asking for information the client already had.
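To make the cost concrete, here is a minimal sketch in Python (my
illustration, not anything ODBC specifies) of the client-side caching
that would have kept those repeat requests off the link; run_query()
is a hypothetical stand-in for the call that actually ships SQL over
the wire.

    import functools

    def run_query(sql):
        # Hypothetical stand-in for the call that actually sends the
        # statement over the PPP link; it fabricates a row here.
        print("sent over link:", sql)
        return (("row",),)

    @functools.lru_cache(maxsize=256)
    def cached_query(sql):
        # Repeats of a statement are answered from local memory, so
        # the same data never crosses the slow link twice.
        return run_query(sql)

    cached_query("SELECT name FROM parts")  # crosses the link
    cached_query("SELECT name FROM parts")  # served locally

A client that keeps even this much state stops paying for the same
rows twice.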
Since then I've become very concerned about the waste that is
occurring over our networks. Protocol writers seem to think that
everyone has a clear 10 Mbps between themselves and the server. After
all, that is what they have in their labs. I had thought that if I
could just keep modules like dynamic link libraries (DLLs) local I
would be controlling the abuse of our LAN resources (see Corporate
View, March 1994, page 43). Have I been rudely awakened!
Network administrators, rise up and take note! Are you prepared to
rip out your Type 3 twisted-pair wire for Type 5 so you can run
100-Mbps LANs? Perhaps you would prefer switching hubs so each node
has a dedicated 10 Mbps. All this because they keep telling us
bandwidth is cheap.
Interestingly, the waste comes from all quarters. We know that LAN
mail applications like cc:Mail consume considerable bandwidth and
that Notes is an even heavier hitter. But surprisingly, even
supposed lightweights like HTTP can eat up the bandwidth. This has
occurred with the advent of a new class of Web browsers exemplified
by Netscape (winner of Network Computing's Well-Connected award).
We have known since the beginning of the Internet that two parallel
FTP sessions will transfer two files faster than one FTP session will
transfer the same two files. Netscape used this principle to its
advantage by requesting multiple embedded images via parallel HTTP
connections. The users see snappier response, but the routers see
bursts of greater congestion and the LAN sees burstier peak loads.
Some consider this easier than replacing large GIF files with
smaller JPEG or PostScript files. Either way, it can exceed the
bandwidth you designed your network for just last year.
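Here is a minimal sketch, in Python for illustration, of that same
parallel-fetch technique; the image URLs are hypothetical, and four
workers stand in for the handful of simultaneous connections a
browser might open.

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    # Hypothetical embedded images on a page being rendered.
    images = [
        "http://www.example.com/banner.gif",
        "http://www.example.com/logo.gif",
        "http://www.example.com/photo.gif",
    ]

    def fetch(url):
        with urlopen(url) as resp:
            return url, len(resp.read())

    # Up to four connections at once: snappier for the user, but the
    # router sees one sharp burst instead of a gentle series.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for url, size in pool.map(fetch, images):
            print(url, size, "bytes")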
Disappearing Bandwidth
In a recent discussion on the End2End Interest list on the Internet,
one researcher argued that Remote Procedure Calls (RPCs) that shield
programmers from the network are bad, because they have produced
these (and other) applications that consume bandwidth that is just
not there.
This is a little extreme in the other direction for me, but it makes
a point. The network and protocol architects have done their job too
well, in the sense that programmers are now finding new ways, even in
the client/server paradigm, to rapidly consume all of our network
bandwidth.
Why is this a concern to my fellow networkologists? The most
apparent reason is the cost of upgrading our LANs and WANs to
accommodate rapidly increasing bandwidth demand, even when more
traditional high-bandwidth applications, like imaging, are not part
of the traffic mix.
A second, less obvious reason arises when the traffic mix includes a
timing-sensitive application like video conferencing or Data Link
Switching (DLSw). High bandwidth consumption results in router
congestion, which means dropped or delayed packets. For video
conferencing, this can mean choppy images; for DLSw, it can mean
session time-outs.
An even more subtle problem is the inability to support distributed
users over slow links. These include dial-up PPP users and remote
LANs on slow WAN links like 56-Kbps frame-relay clouds. Dial-up
users either have to put up with multiminute response times, or the
application has to be proxied on some system on the LAN with only
screen updates sent over the link. This is the pcANYWHERE approach,
which client/server applications should not need. The remote LANs end
up getting distributed servers that would otherwise not be needed,
which considerably raises hardware, software and support costs.
Conserving Bandwidth
There are some things you can do to conserve your bandwidth for real
use. Set up a test net in your facility. This test net consists of a
LAN segment off a router that is connected to your main net via a
serial link.
Use a null-modem device whose speed you can vary from T1 down to
9,600 bps for that serial link. This way you can directly see how
applications perform over links of different speeds. Watch the
interaction over the link between the client and the server for
tell-tale signs of excessive packets or high packet bursts that could
result in router congestion.
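Even before the test net is built, a back-of-the-envelope calculation
shows what is at stake. This Python sketch (the 480-KB transaction
size is an assumed example, not a measurement) converts a
transaction's byte count into transfer time at each link speed:

    # Bytes moved by one application transaction -- an assumed
    # figure; substitute what your sniffer actually counts.
    transaction_bytes = 480 * 1024

    link_speeds_bps = {
        "T1 (1.544 Mbps)": 1544000,
        "56 Kbps frame relay": 56000,
        "28.8 Kbps modem": 28800,
        "9,600 bps": 9600,
    }

    for name, bps in link_speeds_bps.items():
        # Ignores protocol overhead, so real times will be worse.
        seconds = transaction_bytes * 8 / bps
        print(f"{name:22s} {seconds:8.1f} seconds")

A transaction that takes a couple of seconds over a T1 becomes an
almost seven-minute wait at 9,600 bps.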
Avoid known stateless protocols like NFS and ODBC, or fix them. Both
can be fixed in a DCE environment: DFS replaces NFS, and Open
Horizon's Open Vision can run ODBC over a secure RPC connection that
maintains state along the way. If your
network will support multicast routing, use multicast for group
transmissions. Protocols like TN3270 are surprisingly frugal in their
use of bandwidth, particularly when compared with DLSw.
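On the multicast point, here is a minimal Python sketch of a group
send; the group address and port are illustrative, and it assumes
your routers are configured to forward multicast.

    import socket

    # Illustrative group address and port, not prescribed values.
    GROUP, PORT = "224.1.1.1", 5007

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL 1 keeps the datagram on the local segment; raise it to let
    # multicast-capable routers carry it further.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    # One datagram reaches every listener in the group; the sender
    # does not repeat the payload once per recipient as a unicast
    # mailer would.
    sock.sendto(b"status update", (GROUP, PORT))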
Adjust your TCP window. Most TCP/IP stacks are configured to
acknowledge every packet. Widening the window to cover at least every
other packet, and up to every fifth, can significantly improve the
performance and bandwidth usage of batch applications like FTP,
Gopher and HTTP (a sketch follows below). Finally, either be willing
to change the way an application works or be willing to pay for the
resulting bandwidth requirements.
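As a sketch of that window advice, here is what the adjustment looks
like from a Python program; the stack negotiates the window itself,
but the receive buffer the program requests bounds it, and the 32-KB
figure is an assumption, not a recommendation.

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # A larger receive buffer lets the stack advertise a larger
    # window, so the far end can keep several packets in flight per
    # acknowledgment instead of stopping after each one.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 32 * 1024)
    print("receive buffer:", sock.getsockopt(socket.SOL_SOCKET,
                                             socket.SO_RCVBUF))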
Robert Moskowitz is a software systems specialist at Chrysler Corp.,
Detroit, Mich. He can be reached on MCI Mail at 385-8921.