[FRIAM] WAS: ∄ meaning, only text: IS: Simulations as constructed metaphors
uǝlƃ ↙↙↙
gepropella at gmail.com
Mon Aug 10 14:46:21 EDT 2020
I see two questions: (1) are obtuse simulations useful? and (2) are all simulations naturfacts?
My answers are Yes and Yes. The easiest way to see (1) is that incomprehensible simulations are useful if they speed up time, shrink space, and/or predict accurately. In essence, what makes them useful in these circumstances is *manipulationist*: even if you don't understand what you're manipulating, having something *to* manipulate helps. The canonical ALife example is that we have only one fundamental type of life, so we have limited ways in which we can manipulate it. Simulated life helps us think clearly about *how* we might manipulate living systems that both do and do not exist.
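To make the manipulationist point concrete, here's a minimal sketch of a toy replicator population (plain Python; the model and all names are mine, invented for illustration). You don't need a theory of why diversity emerges in order to turn the mutation knob and watch what happens:

import random

def step(population, mutation_rate, capacity=200):
    """One generation: each genome copies itself, sometimes with a bit flip."""
    offspring = []
    for genome in population:
        child = list(genome)
        if random.random() < mutation_rate:
            i = random.randrange(len(child))
            child[i] ^= 1  # flip one bit
        offspring.append(tuple(child))
    pool = population + offspring
    return random.sample(pool, min(capacity, len(pool)))  # random culling

for rate in (0.01, 0.5):  # the experimental knob we get to turn
    pop = [(0,) * 8] * 50
    for _ in range(100):
        pop = step(pop, rate)
    print(f"mutation_rate={rate}: {len(set(pop))} distinct genomes")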
Re: (2) - It should be clear that *all* simulations are part artifact and part natural object. But because it's so obvious to me, I'm having trouble coming up with any way in which one might have a *pure* artifact. I suppose the closest we can get is a virtual machine, an emulator for another piece of hardware/software. Then whatever's executing inside the VM might be said to be purely artificial. But any simulation that runs on "bare metal" is already a naturfact. Go a step further and argue that any simulation must, somehow, *simulate* its referent. That means the behavior of the computation will be artifice made to look like some (presumably natural) referent, i.e., the requirements for the computation are inferred from behavior in the world. If we regard behavior as natural, then any such simulation will be a naturfact. This fields the question re: behavioral analogy.
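As a sketch of the VM point (again a toy of my own, not anything from the thread): a few lines of Python suffice to build a stack machine whose guest programs are pure artifact relative to the machine, even though the interpreter itself still bottoms out on real hardware:

def run(program):
    """A toy stack machine: a 'VM' in miniature."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            print(stack[-1])
    return stack

# The guest program is purely artificial relative to the VM, but the
# Python process executing run() is already a naturfact.
run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)])  # prints 5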
But your question is more about structural analogy. To what extent must the structure of a computation mirror the structure of a referent for us to call it a naturfact? And it's that question that distinguishes mechanistic modeling from predictive modeling. I'm agnostic on this [⛧]. Although I'm a mechanistic modeler, I'm perfectly happy with pure behavioral analogies where the structure is unrelated to that of its referent.
[⛧] Well, I'm actually very opinionated on it. But those fine-grained opinions are irrelevant at this point. If/when we start arguing about "levels", then my fine-grained opinions will burst out like so many ants from a kicked ant bed.
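To illustrate the structural-vs-behavioral distinction (a hypothetical sketch, with made-up parameters): both models below track the same logistic "referent" about equally well, but only in the first do the terms map onto posited mechanisms:

import math

# "Observed" behavior of the referent: logistic growth, N0=10, r=0.5, K=100.
data = [100 * math.exp(0.5 * t) / (9 + math.exp(0.5 * t)) for t in range(20)]

def mechanistic(n0=10.0, r=0.5, K=100.0, steps=20):
    """Structural analogy: r and K map onto posited mechanisms
    (per-capita growth, crowding)."""
    n, out = n0, []
    for _ in range(steps):
        out.append(n)
        n += r * n * (1 - n / K)  # Euler step of dN/dt = rN(1 - N/K)
    return out

def behavioral(mid=4.4, scale=2.0, steps=20):
    """Behavioral analogy: a generic sigmoid tuned to track the data;
    mid and scale map onto nothing in the referent."""
    return [100 / (1 + math.exp(-(t - mid) / scale)) for t in range(steps)]

for name, model in (("mechanistic", mechanistic()), ("behavioral", behavioral())):
    err = sum((m - d) ** 2 for m, d in zip(model, data))
    print(f"{name}: sum-squared error = {err:.2f}")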
On 8/10/20 11:20 AM, thompnickson2 at gmail.com wrote:
> The article had an additional lesson for me. To the extent that you-folks will permit me to think of simulations as contrived metaphors, as opposed to natural metaphors (i.e., objects built solely for the purpose of being metaphors, as opposed to objects found in the world and appropriated for that purpose), it reminds me of a book by Evelyn Fox Keller which argues that a model (i.e., a scientific metaphor) can only be useful if it is more easily understood than the thing it models. Don't use chimpanzees as models if you are interested in mice.
>
> Simulations would seem to me to have the same obligation. If you write a simulation of a process that you don't understand any better than the thing you are simulating, then you have gotten nowhere, right? So if you are publishing papers in which you investigate what your AI is doing, has not the contrivance process gone astray?
>
> What further interested me about the models the AI provided was that they were part natural and part contrived. The contrived part is where the investigators mimicked the hierarchical construction of the visual system in setting up the AI; the natural part is the resulting simulation's focus on texture. So, in the end, the metaphor generated by the AI turned out to be a bad one: heuristic, perhaps, but not apt.
--
↙↙↙ uǝlƃ