From: Jeff Bone (jbone@jump.net)
Date: Mon Feb 28 2000 - 15:38:50 PST
> It seems however, that Napster suffers from a few design flaws:
> centralism (there is a central database, right?); it seems to produce
> cleartext traffic in certain patterns on a certain port (otherwise the
> protocol wouldn't have been reverse engineered so quickly and it would
> not be so easily detectable/blockable, as recently happening in
> certain university networks striving to conserve bandwidth). Is this
> correct?
>
Yes --- on both counts.
>
> It would have been nice to to be able to run the global title index
> via a distributed database (no single point of failure, e.g. due to
> unfriendly legal action),
Well, the cat's out of the bag on Napster anyway. All the open-source
attention to their protocols guarantees that if somebody kills Napster per
se, a bunch more "central authorities" will pop up. The game will become
hunter-killer, with the undesirable side effect of fragmenting the
community in the short term, w/o distributed / partition-tolerant
algorithms.
> which establishes connects through
> addresspace searches of nearby (in terms of number of hops and/or
> bandwidth) nodes, and sending something very much resembling a SSL
> connection as created by a vanilla secure browser session, both in in
> regards to used protocol and traffic pattern. Scanning for/blocking
> Napster traffic would so become much more difficult. An obvious
> problem is how to avoid collision with vanilla https (other than
> intercepting/redirecting https://aaa.bbb.ccc.ddd/napster/
> traffic. Perhaps just using https on a different port would be a
> smarter idea, though usage of a nonstandard port is bound to draw
> closer scrutiny).
>
Building good peer-to-peer protocols that can piggyback HTTP *and* work
bidirectionally through firewalls is tough business, at least IMO from
buddy list experience. There ought to be a "standard" way to do this, but
there isn't as yet.
Side note: I don't know too much about distributed search algorithms on
partitioned indices. Dr. Ernie, if you have any particular pointers to
that space I'd love to chase them.
>
> A moderate connectivity (say, ~10), with each node acting as a relay,
> would allow each node to reach any other node within just a few hops
> over the virtual network. Caching the index of titles stored directly
> on these nodes could probably reduce the traffic quite a bit. A
> top-100 list and related-list (users who downloaded this title also
> downloaded the following titles, ranked by total downloads) would seem
> to greatly increase program functionality.
>
> It would seem to be necessary to limit the scope of visibility to
> direct neighbours only, to prevent malicious users from obtaining
> extensive lists of other Napster users' addresses. To increase
> obfuscation, introducing some traffic mixing should be contemplated,
> injecting pseudorandom garbage traffic between direct neighbours to
> foil statistical traffic attacks.</paranoia>
>
> The program should be capable of serving arbitrary types of digital
> content.
>
Let's see... a distributed, peer-to-peer, disguisable, fault-tolerant
network that can serve out arbitrary types of multimedia content,
presumably through HTTP, but with firewall characteristics? Sounds like a
winner to me. :-) Probably need to throw in some kind of opportunistic
hide-and-seek distributed caching protocol to get around the
single-point-of-failure for a given server's content.
Oh, wait, doesn't NNTP sort of already do most of that?
;-)
>
> The rationale is to sneak code into a widely used utility which would
> create an infrastructure for secure anonymous peer to peer
> communication, content sharing and digital payments (let's call it
> CryptNet) on top of insecure, increasingly monitored/filtered public
> networks.
>
> Comments?
I'm there. Lead the way.
>
>
> -- Eugene
This archive was generated by hypermail 2b29 : Mon Feb 28 2000 - 15:56:16 PST