[FRIAM] Abduction and Introspection

Frank Wimberly wimberly3 at gmail.com
Sun Jan 26 10:03:58 EST 2020


Introspection:  it would be possible to insert a causal inference module
into an agent-based modeling program.  The ABM could examine the causal
conclusions emerging from the data it is generating.  That is close to
what "introspection" means to me.
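
Here is a minimal sketch of what I have in mind.  The toy model and the
hand-rolled least-squares edge test are purely illustrative (invented
names, plain standard library), not a real causal-inference package:

    # Illustrative only: an ABM that "introspects" by running a causal
    # check over the data it has itself generated.  Because 'effort' is
    # randomized, its least-squares coefficient on 'output' is causal here.
    import random

    def run_abm(steps=1000):
        """Toy model: each step, a randomized 'effort' drives 'output'."""
        log = []
        for _ in range(steps):
            effort = random.random()           # exogenous, randomized
            output = 2.0 * effort + random.gauss(0.0, 0.1)
            log.append((effort, output))
        return log

    def infer_causal_edge(log):
        """Crude causal-inference module: estimate effort -> output."""
        n = len(log)
        mx = sum(x for x, _ in log) / n
        my = sum(y for _, y in log) / n
        cov = sum((x - mx) * (y - my) for x, y in log) / n
        var = sum((x - mx) ** 2 for x, _ in log) / n
        return cov / var                       # should come out near 2.0

    log = run_abm()
    print("the model's conclusion about itself:", infer_causal_edge(log))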

Frank
-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918

On Sun, Jan 26, 2020, 7:56 AM Frank Wimberly <wimberly3 at gmail.com> wrote:

> Validation:  Bill Reynolds wrote a paper about inferring causal models
> from observational data to validate, for example, agent-based models.  My
> original idea was that if the same causal edges emerge as are observed in
> the modeled system, then that helps validate the model.  Bill thought it
> was better to compare the causal model with experts' opinions regarding
> causation in the modeled system.  Good enough.
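>
> A sketch of the comparison I had in mind (the edge sets here are made up
> purely for illustration):
>
>     # Illustrative only: compare causal edges inferred from model output
>     # with edges asserted by experts (or observed in the real system).
>     edges_from_model = {("rainfall", "yield"), ("price", "planting")}
>     edges_from_experts = {("rainfall", "yield"), ("subsidy", "planting")}
>
>     agreed = edges_from_model & edges_from_experts
>     disputed = edges_from_model ^ edges_from_experts
>     overlap = len(agreed) / len(edges_from_model | edges_from_experts)
>
>     print("agreed edges:", agreed)
>     print("disputed edges:", disputed)
>     print("overlap score:", overlap)   # 1.0 would be perfect agreement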
>
> Reynolds, W. N., and Wimberly, F. C., "Simulation Validation Using
> Causal Inference Theory with Morphological Constraints," Proceedings of
> the 2011 Winter Simulation Conference, Arizona Grand Resort, December
> 2011.
> I mentioned this here recently and Glen asked for a copy so I sent it to
> him.  I look forward to his comments.
>
> Frank
>
> -----------------------------------
> Frank Wimberly
>
> My memoir:
> https://www.amazon.com/author/frankwimberly
>
> My scientific publications:
> https://www.researchgate.net/profile/Frank_Wimberly2
>
> Phone (505) 670-9918
>
> On Sat, Jan 25, 2020, 11:48 PM Pieter Steenekamp <
> pieters at randcontrols.co.za> wrote:
>
>> I would go along with Joshua Epstein's "if you did not grow it, you did
>> not explain it".  Keep in mind that this motto applies to problems
>> involving emergence.  So what I'm saying is that in many cases it's
>> futile to apply logical reasoning to find answers - and I refer to the
>> emergent properties of the human brain as well as to ABM (agent-based
>> modeling) software.  But even if the problem involves emergence, it's
>> easy for both humans and computers to apply validation logic.  As with
>> the P=NP problem*, it's difficult to find the solution but easy to
>> verify it.
>>
>> So my answer to "As software engineers, what conditions would a program
>> have to fulfill to say that a computer was monitoring 'itself'?" is
>> simply: explicitly verify the results.  There are many approaches to
>> doing this verification: applying logic, checking against actual
>> measured data, checking for violations of physics, etc.
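>>
>> For instance, a minimal sketch of the check-against-measured-data
>> approach (the function name and tolerance are placeholders of my own):
>>
>>     # Illustrative only: flag any model prediction that strays too far
>>     # from an independently measured value.
>>     def verify_against_measurements(predicted, measured, rel_tol=0.05):
>>         failures = [
>>             (p, m) for p, m in zip(predicted, measured)
>>             if abs(p - m) > rel_tol * max(abs(m), 1e-12)
>>         ]
>>         if failures:
>>             raise AssertionError(f"{len(failures)} predictions fail")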
>>
>> *I know you all know it, just a refresher: the P=NP problem is one of
>> the biggest unsolved problems in computer science.  There is a class of
>> problems that are easy to solve (P) and a class whose solutions are easy
>> to verify (NP).  The P=NP question asks the following: if a problem is
>> difficult to solve but easy to verify, is it nevertheless possible to
>> find an algorithm that solves it reasonably easily?  "Reasonably easy"
>> is defined as solvable in polynomial time.  For many such problems the
>> best known algorithms take exponential time, and even for a moderately
>> sized instance that means more time than the age of the universe on a
>> supercomputer.
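>>
>> A toy illustration of "hard to solve, easy to verify" (subset-sum, with
>> names invented for the example):
>>
>>     # Finding a subset of numbers summing to a target may take
>>     # exponential search, but checking a proposed subset (the
>>     # certificate) takes only polynomial time.
>>     from collections import Counter
>>
>>     def verify_subset_sum(numbers, target, certificate):
>>         drawn_from = not (Counter(certificate) - Counter(numbers))
>>         return drawn_from and sum(certificate) == target
>>
>>     print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [4, 5]))  # True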
>>
>> Pieter
>>
>> On Sat, 25 Jan 2020 at 23:04, Marcus Daniels <marcus at snoutfarm.com>
>> wrote:
>>
>>> I would say the problem of debugging (or introspection, if you insist)
>>> is like finding yourself in some random place, never seen before, where
>>> the task is to develop a map and learn the local language and customs.
>>> If one is given the job of law enforcement (debugging violations of
>>> law), it is necessary to collect quite a bit of information, e.g. the
>>> laws of the jurisdiction, the sensitivities and conflicts in the area,
>>> and detailed geography.  In haphazardly-developed software, learning
>>> about one part of a city teaches you nothing about another part of the
>>> city.  In well-designed software, one can orient oneself quickly because
>>> there are many easily-learnable conventions to follow.  I would say this
>>> distinction between the modeler and the modeled is not that helpful.  To
>>> really avoid bugs, one wants metaphorical citizens that are genetically
>>> incapable of breaking laws.  Privileged access is kind of beside the
>>> point because in practice software is often far too big to fully
>>> rationalize.
>>>
>>>
>>>
>>> From: Friam <friam-bounces at redfish.com> on behalf of
>>> "thompnickson2 at gmail.com" <thompnickson2 at gmail.com>
>>> Reply-To: The Friday Morning Applied Complexity Coffee Group
>>> <friam at redfish.com>
>>> Date: Saturday, January 25, 2020 at 11:57 AM
>>> To: 'The Friday Morning Applied Complexity Coffee Group'
>>> <friam at redfish.com>
>>> Subject: Re: [FRIAM] Abduction and Introspection
>>>
>>>
>>>
>>> Thanks, Marcus,
>>>
>>>
>>>
>>> Am I correct that all of your examples fall within this frame?
>>>
>>> [attached image (image001.png) scrubbed from the archive]
>>>
>>> I keep expecting you guys to scream at me, “Of course, you idiot,
>>> self-perception is partial and subject to error!  HTF could it be
>>> otherwise?”   I would love that.  I would record it and put it on loop for
>>> half my colleagues in psychology departments around the world.
>>>
>>>
>>>
>>> Nick
>>>
>>> Nicholas Thompson
>>>
>>> Emeritus Professor of Ethology and Psychology
>>>
>>> Clark University
>>>
>>> ThompNickSon2 at gmail.com
>>>
>>> https://wordpress.clarku.edu/nthompson/
>>>
>>>
>>>
>>>
>>>
>>> From: Friam <friam-bounces at redfish.com> On Behalf Of Marcus Daniels
>>> Sent: Saturday, January 25, 2020 12:16 PM
>>> To: The Friday Morning Applied Complexity Coffee Group
>>> <friam at redfish.com>
>>> Subject: Re: [FRIAM] Abduction and Introspection
>>>
>>>
>>>
>>> Nick writes:
>>>
>>>
>>>
>>>  As software engineers, what conditions would a program have to fulfill
>>> to say that a computer was monitoring "itself"?
>>>
>>>
>>>
>>> It is common for codes that calculate things to periodically test
>>> invariants that should hold.  For example, a physics code might test for
>>> conservation of mass or energy.  A conversion from a data structure
>>> with one index scheme to another is often followed by a check to ensure
>>> the total number of records did not change, or, if it did change, that
>>> it changed by an expected amount.  It is also possible, but less common,
>>> to write a code so that proofs are constructed by virtue of the code
>>> being compilable against a set of types.  The types describe all of the
>>> conditions that must hold regarding the behavior of a function.  In that
>>> case it is not necessary to detect whether something goes haywire at
>>> runtime, because it is simply not possible for something to go haywire.
>>> (A computer could still miscalculate due to a cosmic ray or some other
>>> physical interruption, but assuming that did not happen, a complete
>>> proof-carrying code would not fail within its specifications.)
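>>>
>>> A minimal sketch of the record-count invariant (function and field
>>> names are invented for the example):
>>>
>>>     # Illustrative only: convert row-indexed records to column-indexed
>>>     # arrays, then assert the total number of records is unchanged.
>>>     def rows_to_columns(rows):
>>>         cols = {}
>>>         for row in rows:
>>>             for key, value in row.items():
>>>                 cols.setdefault(key, []).append(value)
>>>         for key, values in cols.items():
>>>             assert len(values) == len(rows), f"records lost in {key!r}"
>>>         return cols
>>>
>>>     rows_to_columns([{"mass": 1.0, "v": 2.0}, {"mass": 3.0, "v": 0.5}])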
>>>
>>> A weaker form of self-monitoring is to periodically check memory or
>>> disk usage and to raise an alarm if either is unexpectedly high or low.
>>> Such an alarm might trigger cleanup of old results otherwise kept
>>> around for convenience.
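>>>
>>> A sketch of that weaker form using only the standard library (the path,
>>> threshold, and cleanup policy are placeholders):
>>>
>>>     # Illustrative only: raise an alarm on high disk usage and reclaim
>>>     # space by deleting the oldest files kept around for convenience.
>>>     import os, shutil
>>>
>>>     def check_disk(path="/tmp/results", high_water=0.90):
>>>         usage = shutil.disk_usage(path)
>>>         if usage.used / usage.total > high_water:
>>>             entries = [e for e in os.scandir(path) if e.is_file()]
>>>             entries.sort(key=lambda e: e.stat().st_mtime)
>>>             for entry in entries[: len(entries) // 2]:
>>>                 os.remove(entry.path)      # drop the oldest half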
>>>
>>>
>>>
>>> Marcus
>>>
>>>
>

