[FRIAM] hot time in town tonight

Marcus Daniels marcus at snoutfarm.com
Thu Sep 24 18:36:12 EDT 2020


Hmm.  Here's how I solve a problem:  I stare at the ceiling or sit in the yard.  Maybe I walk the dog.  Once I can kind of see how it comes together, I start to write it down as a program.  The encoding is a way to become aware of the loose ends I failed to anticipate.  If I can make it type check and can run tests in isolation, then I have some confidence it's basically sound, and I can go back to blue-sky thinking.  Literacy is very important not to guide imagination but to quickly bolt it down and to bring things into (and out of) focus.  Like a blackboard that won't tolerate mistakes being written on it.  The preference for a modeling/programming language is mostly about what works best to grasp how information can be transformed.  "Theorems for free," per a slogan in the FP community.  I want to offload as much of the consistency checking to the computer as I can.
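
A minimal Rust sketch of the kind of consistency checking that can be offloaded to the compiler; the unit types here are invented purely for illustration:

    // Distances and times get their own types, so a whole class of
    // mix-ups becomes a type error instead of a loose end found later.
    #[derive(Debug, Clone, Copy)]
    struct Meters(f64);

    #[derive(Debug, Clone, Copy)]
    struct Seconds(f64);

    #[derive(Debug, Clone, Copy)]
    struct MetersPerSecond(f64);

    // The only way to build a velocity is distance divided by time.
    fn velocity(d: Meters, t: Seconds) -> MetersPerSecond {
        MetersPerSecond(d.0 / t.0)
    }

    fn main() {
        let v = velocity(Meters(100.0), Seconds(9.6));
        println!("{:?}", v);
        // velocity(Seconds(9.6), Meters(100.0)); // rejected at compile time
    }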

-----Original Message-----
From: Friam <friam-bounces at redfish.com> On Behalf Of glen ep ropella
Sent: Thursday, September 24, 2020 10:23 AM
To: FriAM <friam at redfish.com>
Subject: Re: [FRIAM] hot time in town tonight

I don't really know where the topic ended up re: Sapir-Whorf vs. McWhorter. But it dovetails nicely with this sub-thread. As a reminder, here is the last post on the Sapir-Whorf vs. McWhorter sub-thread: http://friam.471366.n2.nabble.com/ethnography-and-information-systems-tp519925p7598205.html wherein Dave accuses McWhorter of strawmanning Sapir-Whorf.

I know I *should* do my homework and read up on Sapir-Whorf and McWhorter's objection to it. But I'd like to simply bookmark what I think now so that I can double back and be embarrassed later. To me, languages, including programming languages, don't control or channel "thought" in any way. But the scare quotes are there on purpose. Performative language, like any action, *does* feed back onto thought. And the feedback loop is critical. It doesn't happen inside one's head. It happens when you *express* in a language. So imagine a "mind" that has never said a single word expressing a thought in some language. They either hear/see themselves doing the expressing or they don't. If they do, then that hearing/seeing modifies their "mind". If they express the thought again in the same language, they may (or may not) do it differently. Again, that modifies their "mind". Etc. It seems obvious that, over time, the framework of the expression(s) influences the machine doing the expressing.

Of course, there are different types of thinkers. I'm mostly algebraic. But I've known lots of people way smarter than me who are visual thinkers. A visual person expressing in text may be like the above novel expressor not hearing/seeing themselves doing the expression. And an algebraic person expressing visually may not hear/see their expression. So, a machine's performative expressions may have more or less feedback, again depending on the language, the framework in which the expression is made.

This is the heart of the argument I make to my clients. I don't argue that, e.g., physics-based modelers can only think in differential equations (DEs). I make the argument that if you *start* with a language that's limited to, or optimized for DEs, then each *iteration* of *that* model will be more DE-like. It will become difficult to generalize that model into something non-DE ... which provides a fulcrum for criticism of *some* hybrid cyber-physical systems, BTW.



On 9/23/20 9:22 AM, jon zingale wrote:
> I think that I agree with you, and add that "particularity not 
> mattering" in a model is a modding-out of the thing modeled. I can 
> imagine chains of quotients describing coarser and coarser models, 
> agents to ODEs to networks of weighted edges, say. Comparisons, then 
> of things modeled inducing comparisons across chains, maybe with some 
> exactness or torsion criteria for measuring degrees of satisfaction.
>
> On 9/23/20 8:21 AM, glen ep ropella wrote:
>> A common problem I have when arguing that "mechanistic models" are qualitatively different from "descriptive models" is describing what it is about "mechanism" that's being modeled. I see it as a spectrum. Compartment models provide a good example. Some ODE contains a term that homogenizes all the stuff that happens inside cells versus, say, the intercellular matrix. Because there are 2 compartments, identifiable by terms in the equations, you can say it's "mechanistic" ... funging a bit on the "-istic" suffix. If I make some claim like: "Any one cell might behave differently from any other cell based on its history", then we could create another compartment, cells of type 1 and type 2. We can do that progressively until there's a compartment for each particular cell (and each particular extra-cellular space engineered by the actions of the cell).
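
To make the two-compartment picture concrete, here is a minimal Rust sketch with made-up rate constants; the "mechanism" is nothing more than the two coupled exchange terms, one homogenized state variable per compartment:

    fn main() {
        // Hypothetical exchange rates between an "intracellular" pool and
        // an "extracellular" pool; the numbers are invented.
        let (k_in, k_out) = (0.3_f64, 0.1_f64);
        let (mut intra, mut extra) = (0.0_f64, 10.0_f64); // initial amounts
        let dt = 0.01;

        for step in 0..=1000 {
            // Each compartment is a single homogenized state variable:
            // no per-cell particularity anywhere in the model.
            let d_intra = k_in * extra - k_out * intra;
            let d_extra = -k_in * extra + k_out * intra;
            intra += d_intra * dt;
            extra += d_extra * dt;
            if step % 200 == 0 {
                println!("t = {:>5.2}  intra = {:.3}  extra = {:.3}",
                         step as f64 * dt, intra, extra);
            }
        }
    }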
>> 
>> In this sense, FP is similar to OOP in its particularity, and they contrast with homogenizing paradigms like systems dynamics models. What I'd *like* to do is find a way to emphatically ask my clients: "Does particularity matter?" Chemistry seems to say "no" for the most part. Microbiology seems to waffle a bit between small and large molecules. Medical scale biology is decidedly in the "yes" category, what with individualized treatment and "no average person" problems. Social systems are like inverted microbiology, where at smaller scopes, the answer is "yes", but at huge scopes the answer becomes "no" again. I'm too ignorant of quantum theory to say, but it seems like decoherence implies it may waffle a bit too.
>> 
>> The answer to that question *should* help me choose the paradigm(s) for the analogs I build. Until I have a competent way to emphatically ask the question, though, my pluralism facilitates agile analogies. I argue for multi-models ... integrationist analogs that facilitate the composition of different models of computation. Reliance on any one computational paradigm *before* having a competent estimate for the analog's requirements is dangerous.
>> 
>> I guess it doesn't much matter how pure Rust is. It seems well situated for integrationism, which is the only reason I haven't given my friend an answer, yet. If I do "join", I'll probably do it as 1099 for now so I can treat him like a client instead of a boss.
>> 
>> 
>> On 9/22/20 7:32 PM, Marcus Daniels wrote:
>>> I think linear/affine types as in Rust are cool.  For one thing, they seem plausible for physical analogues to computation, like your infinitely-long expressions.  In a biochemical system it often wouldn't make sense to `share' a variable across several expressions.  A `physical' function would consume its inputs.  Similarly, linear types are like the no-cloning theorem for quantum states.  It's a small change for a person used to writing functional programs to get in the habit of using linear types.  It's similar to Swarm's notion of switching phases, but the switching of the method sets is understood by the compiler and can be enforced.  Even aside from the physical intuition, linear types provide a low-overhead way to manage memory, as is the norm for complex stack-allocated objects in C++.
>> 
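
A minimal Rust sketch of the consume-your-inputs point, using ordinary move semantics and invented type names; re-using a moved value is rejected by the compiler, loosely analogous to the no-cloning restriction:

    struct Molecule {
        energy: f64,
    }

    // Taking the arguments by value moves them in: the caller gives them
    // up, the way reactants are used up in a reaction.
    fn react(a: Molecule, b: Molecule) -> Molecule {
        Molecule { energy: a.energy + b.energy }
    }

    fn main() {
        let a = Molecule { energy: 1.0 };
        let b = Molecule { energy: 2.0 };
        let c = react(a, b);
        println!("{}", c.energy);
        // let d = react(a, b); // error[E0382]: use of moved value
    }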



--
glen ep ropella 971-599-3737
- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 


