Gerald Oskoboiny <gerald@impressive.net> writes:
> On Fri, Mar 09, 2001 at 12:42:35PM +0100, Eugene Leitl wrote:
>
> > I guess this means I need a logging proxy which doesn't interfere too
> > badly. Do you know a good one, preferably in Perl or Python? Perhaps
> > one which also strips the cookies and the ad business?
I've put up a small list of links I've collected over time at
<http://viii.dclxvi.org/bookmarks/tech/proxy> -- some usable tools,
some discussion and overviews. We're finally getting somewhere;
there wasn't much out there before.
Please don't hit viii too hard, or publicize it widely yet; it's
currently served from a severely underpowered and underoptimized box
(Zope & PCGI, and not cache-friendly) that I just threw together.
> I have an archiving proxy/cache [1] on my desktop that uses Squid
> with a short perl script to copy files out of its cache into a
> persistent archive.
Excellent -- I'm sick of grepping my squid logs.
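If you'd rather not poke at squid's on-disk cache format, one lazy
variant (sketch only -- the log path, proxy port, and native-format
field layout are assumptions about a stock install) is to pull URLs
back out of access.log and re-request them through squid, so the
second fetch is a cache hit headed straight for the archive:

#!/usr/bin/env python3
"""Sketch: read URLs from squid's access.log and re-request them
*through* squid, so the repeat fetch is served from cache and can
be written to a persistent archive.  LOGFILE, the proxy port, and
the log field layout (URL in field 7) are assumed defaults."""

import os
import urllib.request
from urllib.parse import quote

LOGFILE = "/var/log/squid/access.log"        # hypothetical location
PROXY = {"http": "http://localhost:3128"}    # squid's default port
ARCHIVE = os.path.expanduser("~/.webarchive")

# Route our own fetches through squid.
opener = urllib.request.build_opener(urllib.request.ProxyHandler(PROXY))

def logged_urls(logfile):
    """Yield request URLs from a native-format squid access.log."""
    with open(logfile) as log:
        for line in log:
            fields = line.split()
            if len(fields) > 6 and fields[6].startswith("http://"):
                yield fields[6]

def archive(url):
    """Save the response body under a filename derived from the URL."""
    dest = os.path.join(ARCHIVE, quote(url, safe=""))
    if os.path.exists(dest):
        return                           # already archived
    body = opener.open(url).read()
    with open(dest, "wb") as f:
        f.write(body)

if __name__ == "__main__":
    os.makedirs(ARCHIVE, exist_ok=True)
    for url in logged_urls(LOGFILE):
        try:
            archive(url)
        except OSError:                  # dead link, timeout, etc.
            pass

Crude, and it trusts the cache to still hold the object, but it keeps
the spirit of the short perl script in pure python.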
Anyone know of a configurable proxy chainer? I use junkbuster and
squid, so at the moment I can choose any subset of the two by
switching proxy ports, but that stops working once I add a third
proxy to the chain; also, navigating to the proxy config window in
Netscape sucks.
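One blunt workaround I've been mulling over (just a sketch; both port
numbers below are hypothetical): point Netscape at a dumb TCP relay
once, then rewire the chain by editing the relay's upstream instead
of the browser config:

#!/usr/bin/env python3
"""Not a real chainer, just a dumb TCP relay.  The browser keeps one
fixed proxy setting (RELAY_PORT); changing UPSTREAM changes which
proxy chain it talks to.  Port numbers are made up."""

import socket
import threading

RELAY_PORT = 8888                  # the browser's one fixed proxy port
UPSTREAM = ("localhost", 8000)     # first hop in the chain, e.g. junkbuster

def pump(src, dst):
    """Copy bytes one way until the source closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def handle(client):
    """Splice the browser's connection to the upstream proxy."""
    upstream = socket.create_connection(UPSTREAM)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("localhost", RELAY_PORT))
    server.listen(5)
    while True:
        conn, _ = server.accept()
        handle(conn)

if __name__ == "__main__":
    main()

Each proxy in the chain still has to forward to the next one through
its own config; the relay just gives the browser one stable port so
you never have to touch Netscape's preferences again.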
> I would really like to replace it with a simple http proxy
> cache/archive written in perl or python. (if it's in python,
> Medusa [2] sounds like it would be useful.)
Well, since Medusa is in Python, catching & caching what it fetches
is a trivial exercise :) Hmm, using the ZODB would make it easy to
associate metadata with pages, but it'd be nice to just have it all
on the filesystem so you could look at it with grep & find.
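For instance (a sketch of one possible layout, not anything Medusa
ships with): store each body under a URL-derived path and put the
response headers in a ".meta" sidecar file, so the metadata stays
grep-able too:

#!/usr/bin/env python3
"""One way to get metadata without the ZODB: body under a URL-derived
path, headers in a plain-text sidecar next to it.  The ARCHIVE root
and the layout are assumptions for illustration."""

import os
from urllib.parse import urlparse

ARCHIVE = os.path.expanduser("~/.webarchive")   # hypothetical root

def archive_path(url):
    """Map http://host/some/page to ~/.webarchive/host/some/page."""
    parts = urlparse(url)
    path = parts.path
    if not path or path.endswith("/"):
        path += "index.html"                    # directory URLs need a leaf name
    return os.path.join(ARCHIVE, parts.netloc, path.lstrip("/"))

def store(url, body, headers):
    """Write the body, plus a grep-able sidecar of response headers."""
    dest = archive_path(url)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with open(dest, "wb") as f:
        f.write(body)
    with open(dest + ".meta", "w") as f:
        for name, value in headers.items():
            f.write("%s: %s\n" % (name, value))

if __name__ == "__main__":
    store("http://www.monkey.org/~kra/", b"<html>...</html>",
          {"Content-Type": "text/html", "Last-Modified": "Fri, 09 Mar 2001"})

Metadata stays a plain text file next to the page, so find + grep
keep working; the ZODB would only earn its keep once you wanted
richer queries.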
--
Karl Anderson   kra@monkey.org   http://www.monkey.org/~kra/