[FRIAM] WAS: ∄ meaning, only text: IS: Simulations as constructed metaphors

Frank Wimberly wimberly3 at gmail.com
Mon Aug 10 15:09:48 EDT 2020


When I was a grad student at Pitt in about 1974, the project I was working
on (Project Solo, NSF funded, to get high school students to write programs
to solve problems in many application areas, thereby requiring that they
understand those areas, be they physics and chemistry or set design) was
given a high-quality flight simulator based on ball/disk integrators.  I'm
wondering whether it was obtuse.  I had a private pilot's license, and the
simulator was certified by the FAA in a way that allowed me to log time
toward my Instrument Rating by using it.  I used it for dozens of hours,
but I didn't log the time.  I did become very confident about controlling
an airplane at night or in instrument conditions (IMC, if I recall
correctly).  Would you call that a naturfact?

(I did find myself in an unforecast cloud on a night flight back from
Williamsburg to Pittsburgh.  I called the control tower and filed an IFR
flight plan to descend through the clouds, which I did with no problem.)
---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

On Mon, Aug 10, 2020, 12:46 PM uǝlƃ ↙↙↙ <gepropella at gmail.com> wrote:

> I see 2 questions: 1) are obtuse simulations useful? And 2) are all
> simulations naturfacts?
>
> My answers are Yes and Yes. The easiest way to see (1) is that
> incomprehensible simulations are useful if they speed up time, shrink
> space, and/or predict accurately. In essence, what makes them useful in
> these circumstances is *manipulationist*. Even if you don't understand what
> you're manipulating, having something *to* manipulate helps. The canonical
> ALife example is that we only have 1 fundamental type of life. So, we have
> limited ways in which we can manipulate it. Simulated life helps us think
> clearly about *how* we might manipulate living systems that both do and do
> not exist.
>
> Re: (2) - It should be clear that *all* simulations are part artifact and
> part natural object. But because it's so obvious to me, I'm having trouble
> coming up with any way in which one might have a *pure* artifact. I suppose
> the closest we can get is a virtual machine, an emulator for another piece
> of hardware/software. Then whatever's executing inside the VM might be said
> to be purely artificial. But any simulation that runs on "bare metal" is a
> naturfact already. Then go a step further and argue that any simulation
> must, somehow, *simulate* its referent. And that means the behavior of the
> computation will be artifice made to look like some (presumably natural)
> referent. I.e. the requirements for the computation are inferred from
> behavior in the world. If we regard behavior as natural, then any such
> simulation will be a naturfact. This fields the question re: behavioral
> analogy.
>
> But your question is more about structural analogy. To what extent must
> the structure of a computation mirror the structure of a referent for us to
> call it a naturfact? And it's that question that distinguishes mechanistic
> modeling from predictive modeling. I'm agnostic on this [⛧]. Although I'm a
> mechanistic modeler, I'm perfectly happy with pure behavioral analogies
> where the structure is unrelated to that of its referent.
>
>
> [⛧] Well, I'm actually very opinionated on it. But those fine-grained
> opinions are irrelevant at this point. If/when we start arguing about
> "levels", then my fine-grained opinions will burst out like so many ants
> from a kicked bed.
>
>
> On 8/10/20 11:20 AM, thompnickson2 at gmail.com wrote:
> > The article had an additional lesson for me.  To the extent that you
> > folks will permit me to think of simulations as contrived metaphors, as
> > opposed to natural metaphors (i.e., objects built solely for the purpose
> > of being metaphors, as opposed to objects found in the world and
> > appropriated for that purpose), I am reminded of a book by Evelyn Fox
> > Keller which argues that a model (i.e., a scientific metaphor) can only
> > be useful if it is more easily understood than the thing it models.
> > Don’t use chimpanzees as models if you are interested in mice.
> >
> > Simulations would seem to me to have the same obligation.  If you write
> > a simulation of a process that you don’t understand any better than the
> > thing you are simulating, then you have gotten nowhere, right?  So if
> > you are publishing papers in which you investigate what your AI is
> > doing, has not the contrivance process gone astray?
> >
> > What further interested me about these models that the AI provided was
> > that they were in part natural and in part contrived.  The contrived
> > part is where the investigators mimicked the hierarchical construction
> > of the visual system in setting up the AI; the natural part is the focus
> > on texture by the resulting simulation.  So, in the end, the metaphor
> > generated by the AI turned out to be a bad one: heuristic, perhaps, but
> > not apt.
>
>
> --
> ↙↙↙ uǝlƃ
>
> - .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
> FRIAM Applied Complexity Group listserv
> Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
> un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> archives: http://friam.471366.n2.nabble.com/
> FRIAM-COMIC http://friam-comic.blogspot.com/
>