From: Dave Long (dl@silcom.com)
Date: Fri May 18 2001 - 18:52:28 PDT
> Could very well be
> that any rational ethical system that is consistent cannot be complete.
Is a rational ethical system one in which we can,
based on suitable axioms, take the shades of grey
in which real situations are encoded and map them
onto the truth values of black or white?
As black and white are discrete, and the shades
of grey are continuous, it seems unlikely that we
can find a smooth mapping that is both consistent
and complete. (can we prove that smooth mappings
between the greys must have a fixed point?)
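(A sketch of an answer to the parenthetical,
assuming the greys can be modeled as the unit
interval -- my reading, not necessarily what was
meant -- and smoothness isn't even needed:)

Let $f\colon[0,1]\to[0,1]$ be continuous and put
$g(x) = f(x) - x$.  Then $g(0) = f(0) \ge 0$ and
$g(1) = f(1) - 1 \le 0$, so the intermediate
value theorem gives some $x^{*}$ with
$g(x^{*}) = 0$, i.e. $f(x^{*}) = x^{*}$: every
continuous self-map of the greys fixes at least
one grey.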
Note, however, that Tom doesn't face this problem:
he is willing to let (insists that?) the greys be
greys. There is nothing inconsistent in saying
both that life can be defined simply and that
right and wrong can't be evaluated simply,
because he hasn't assumed that there is a simple
function generating right-values given the
life-values.
I suppose that's what judges are for in legal
systems -- law and precedent give a dithering
of blacks and whites, against which the judge
can compare the grey of the current situation.
Law is, after all, about justice, and justice
is rarely a matter of logical either/or.
-Dave
while I'm faux math geeking:
> [4] Small Worlds: The Dynamics of Networks Between Order and Randomness by
> Watts
Upon reading this it struck me that we might
profitably consider a kind of curvature of
the network. Consider the number of unique
nodes that are encountered at increasing
numbers of hops away from an initial node;
in an ordered network this value increases
polynomially (according to the dimension of
the network), and in a random network it
increases exponentially. In a "small world"
network, while we can see exponential growth
on global scales, we find that on a local
scale (close to zero hops), the exponential
looks awfully like a flat polynomial. Is
there a "rule of 72" for small world nets?
:: :: :: :: ::
As I understand it so far, an algebra gives a
bunch of structures and some rules, where we
assume the structures are all distinct and the
rules tell us which ones are equivalent; a
coalgebra also gives a bunch of structures and
some rules, but there we assume the structures
are all equivalent and the rules tell us which
ones are distinct. Is this correct?
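Here is a toy of my reading of the coalgebraic
half, in Python (the little output machine and
its states are made up for illustration): start
by lumping every state together, and let the
observations split them apart until nothing can
tell the survivors apart.

# state -> (output observed now, next state)
machine = {
    "a": (0, "b"),
    "b": (1, "a"),
    "c": (0, "d"),
    "d": (1, "c"),
    "e": (1, "e"),
}

def refine(machine):
    # coalgebraic starting point: assume every state is
    # equivalent to every other (one big block, numbered 0)
    block_of = {s: 0 for s in machine}
    while True:
        # a state's "signature" is what we can observe of it in
        # one step: its output plus the block of its successor
        signature = {s: (machine[s][0], block_of[machine[s][1]])
                     for s in machine}
        blocks = {}
        new_block_of = {s: blocks.setdefault(sig, len(blocks))
                        for s, sig in signature.items()}
        if new_block_of == block_of:   # nothing left to split
            return new_block_of
        block_of = new_block_of

print(refine(machine))   # a ~ c, b ~ d, and e off by itself

The algebraic direction would run the other way:
keep all the terms distinct and let the equations
merge them.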
> Because this definition has had a lot of historical practice,
> it's easy to recognize the downside. It divides humanity
> into equivalence classes that don't always view
> the others as entirely human.
Should we try a humanity coalgebra? Assume we're all
human, unless we can show otherwise? (It seems like
this just results in the same downside, though -- in
general mathematics seems a poor tool with which to
tackle social problems.)
- - -
Traditional program verification seems to be
algebra-based: take a program P, and formally
prove that some other program P' is equivalent
to it. Is "extreme programming" in some sense
coalgebraic, in that it assumes that any two
programs P and P' are equivalent unless they
show different behavior against the test cases?
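A toy of that reading, with made-up functions and
test cases: the two sorts below count as "the
same program" right up until an observation (a
test case) tells them apart.

def sort_v1(xs):
    return sorted(xs)

def sort_v2(xs):          # a careless "refactoring"
    return sorted(set(xs))

tests = [[3, 1, 2], []]   # the observations we bother to make

def distinguishing_test(p, q, tests):
    """First test case on which the two programs' behavior differs."""
    for case in tests:
        if p(list(case)) != q(list(case)):
            return case
    return None

print(distinguishing_test(sort_v1, sort_v2, tests))                # -> None
print(distinguishing_test(sort_v1, sort_v2, tests + [[5, 5, 1]]))  # -> [5, 5, 1]

Until the distinguishing case joins the suite,
the refactoring "is" the original, in exactly the
observations-decide sense above.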