> > From: Alex Williams <thantos@decatl.alf.dec.com>
> > Date: Thursday, January 02, 1997 9:29 PM
[CLIP]
> > Interestingly, this means that spreading activation architectures can
> > work toward /conflicting goals in the same network/, sometimes with
> > the same competency modules. All this without explicitly defining
> > conflict, but simply by defining required states and changed states of
> > the organism/entity. Could this be a situation that helps us
> > understand the idea of `meme' in a better or more complete light? It
> > does hint that memes needn't have other memes referring to their
> > interaction in order for that interaction to be rectified; that
> > rectification can occur as a direct result of simpler rules.
>
> I'm not sure why you call these memes. It sounds more like you are
> referring to agents as described in Minsky's Society of Mind, or in
> Rodney Brooks's papers on subsumption architecture.
Perhaps they're *instances* of memes. I'm not really aware of any
operational differences between agents and instantiated memes,
especially when I'm trying to defy a currently executing meme in order
to get something *else* done that is contrary to it.
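For the archive, the quoted idea can be sketched in code. This is a minimal,
hypothetical illustration (all names and numeric constants are my own
assumptions, not from the original post) of a spreading-activation network in
the spirit of Maes-style behavior networks: each competency module declares
its required states and the states it changes, goals inject activation into
modules whose effects satisfy them, and modules pass activation backward to
whatever can establish their unmet preconditions. Two conflicting goals
coexist in the same network with no explicit representation of the conflict;
it is resolved simply by where activation accumulates.

```python
# Sketch of a spreading-activation network with conflicting goals.
# Assumed design throughout: modules, constants, and state names are
# illustrative, not taken from any particular published architecture.
from dataclasses import dataclass


@dataclass
class Module:
    name: str
    requires: set       # required states (preconditions)
    adds: set           # changed states (effects)
    activation: float = 0.0


def spread(modules, goals, state, rounds=3, goal_energy=1.0, decay=0.9):
    """Inject activation from goals, let it flow backward through
    preconditions, and pick the strongest executable module."""
    for _ in range(rounds):
        for m in modules:
            m.activation *= decay
            # Goals feed modules whose effects would achieve them.
            m.activation += goal_energy * len(m.adds & goals)
        for m in modules:
            # A module passes activation backward to any module that
            # could supply one of its unmet required states.
            for need in m.requires - state:
                for pred in modules:
                    if need in pred.adds:
                        pred.activation += 0.5 * m.activation
    # Only modules whose required states currently hold may execute.
    runnable = [m for m in modules if m.requires <= state]
    return max(runnable, key=lambda m: m.activation) if runnable else None


# Two goals that compete for the organism's time: staying safe vs. getting
# fed. The conflict is never declared anywhere; "eat" simply cannot run
# (no food in the current state), so "flee" out-activates "find_food".
flee = Module("flee", requires={"threat"}, adds={"safe"})
eat = Module("eat", requires={"food"}, adds={"fed"})
find_food = Module("find_food", requires=set(), adds={"food"})

winner = spread([flee, eat, find_food], goals={"safe", "fed"},
                state={"threat"})
```

Note that nothing in the code mentions conflict: the "defy a
currently-executing meme" situation above would just be a second goal
injecting enough activation to out-compete the module currently winning.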
//////////////////////////////////////////////////////////////////////////
/ Towards the conversion of data into information....
/
/ Kenneth Boyd
//////////////////////////////////////////////////////////////////////////