[FRIAM] To repeat is rational, but to wander is transcendent

Marcus Daniels marcus at snoutfarm.com
Mon Apr 4 14:37:25 EDT 2022


Yeah, a dilemma for the next few decades.   Models like GPT-3 or GLaM [1] will start to look a lot like general AI, but it will be very difficult to understand their mistakes.
Automated science will become routine, and effective in practice whether it is for pharma or materials design.   I think there will be a coevolution of tools to gain insights from things that work, but things that work will come before the understanding of why they work.

Fareed Zakaria had an interview with Billy Joel on his new CNN+ show.    Joel claimed he would dream a song and spend the day trying to find it again.   I could sort of see that for future implanted systems.   Record the dream and then start disassembling it through automated mechanisms.   How are disassembly and reassembly any different from theory?

[1] https://arxiv.org/pdf/2112.06905.pdf

From: Friam <friam-bounces at redfish.com> On Behalf Of Prof David West
Sent: Monday, April 4, 2022 10:54 AM
To: friam at redfish.com
Subject: Re: [FRIAM] To repeat is rational, but to wander is transcendent

"I want to see a full round-trip"

The claim was made, and "proof of claim" was demonstrated, that Rational's (now IBM's) Rose provided exactly that. You could start with undocumented code and Rose would produce accurate and complete UML diagrams and templates. The diagrams could be altered and Rose would then generate 'correct' and executable code that incorporated the changes. Or you could create a complete UML specification (models and templates) and Rose would generate 'correct' and executable code. You could make changes to the code, and Rose would regenerate the UML so you could verify the 'correctness' of the altered code.

The problem was that it only worked on a certain kind of program—one directed at a formally describable system, like a device driver or a low-level OS component. And even within this category of program, it did not scale beyond programs of tens to a few hundred lines of code.

It was totally parallel to the massive effort expended on formal proof of programs, which likewise could not scale beyond programs of 100 LOC or so.
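
To make the round-trip claim concrete, here is a minimal sketch of the abstraction half only (code in, diagram out), using Python's ast module to emit PlantUML-style text. The classes and the output format are invented for the illustration; nothing here is Rose's actual machinery.

    # Toy "abstraction" pass: walk Python class definitions and print a
    # PlantUML-style class diagram.  Illustrative only; not Rose.
    import ast
    import textwrap

    SOURCE = textwrap.dedent("""
        class Sensor:
            def read(self): ...

        class Thermostat(Sensor):
            def setpoint(self, value): ...
    """)

    def to_plantuml(source: str) -> str:
        tree = ast.parse(source)
        lines = ["@startuml"]
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef):
                lines.append(f"class {node.name} {{")
                for item in node.body:
                    if isinstance(item, ast.FunctionDef):
                        lines.append(f"  +{item.name}()")
                lines.append("}")
                for base in node.bases:
                    if isinstance(base, ast.Name):
                        lines.append(f"{base.id} <|-- {node.name}")  # inheritance edge
        lines.append("@enduml")
        return "\n".join(lines)

    print(to_plantuml(SOURCE))

The other half of the loop, regenerating compilable code from an edited diagram, is exactly where the scaling problems described above set in.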

In roughly the same time frame, massive effort was invested in AI-assisted natural language translation. It was deemed extremely difficult to translate from NL to NL, because a computer could not understand or deal with NL of any kind—too much ambiguity, context sensitivity, and metaphor. So an effort was made to find a "perfect language" (PL) that could be implemented in a computer and then translate from NL1 to PL to NL2.

Two candidate PLs were proposed: Sanskrit and Aymara. This effort broke down because it proved impossible to map any NL to either candidate PL beyond some very specific, somewhat formally defined NL subsets—a business letter, for example, or a technical paper.

[I have no clue how Google Translate does such a good job at translation. Nor have I tried it on things like poetry, which is extremely difficult even for humans to translate.]

In the realm of software development the round-trip is not just UML-->code-->UML; it is the Domain Expert's model, to the Business Analyst's model, to the Architect's model, to the UML model, to code, to "limbo." Limbo, because no effort exists to close the loop back to the Domain Expert's model. The models, and the language employed, at each step of this round-trip are idiosyncratic to that step and, although similar in some minor ways of definition and syntax, require some fairly sophisticated translation. This translation is never made explicit, and the result is massive miscommunication (a total absence of communication) across the silos.

One more complication: the idiosyncratic variation among human individuals within a 'silo'—glen's example of the senior and junior programmer, for instance.

There is a reason that the multi-billion-dollar CASE effort failed. The problem is too complex and too reliant on human abilities to deal with ambiguity, incompleteness, and metaphor—abilities that a computer and AI will, IMO, never be able to duplicate. My elementary-level understanding of current AI efforts has yet to reveal an idea that seems novel or promising enough to merit a full-blown investigation of the technology, and my opinion about the possibility of significant automation of the software development effort remains pretty much intact.

While I have focused on the narrow domain of software and the round-trip, there is a general argument to be made. Beginning with glen's restatement of my question, "how far can our formal structures go *toward* that ambiguity?" I would not deny that significant progress is possible, but I would assert that it will always parallel the efforts to apply perfect circles and spheres to planetary orbits—a never ending recursive application of more and more sophisticated epicycles.

There will always be a loss of 'information' when reality is filtered by any formalism. Formalists of every stripe address this fact with nothing more than "whistling while passing the graveyard," i.e., self-comforting hand-waving while ignoring the issue.

davew



On Mon, Apr 4, 2022, at 9:41 AM, glen wrote:
> I think what we're seeing there is simply that we're getting close
> enough with big data to constrain the space of concretization an
> automated system can arrive at. To see the context I'm trying to lay
> out, consider one junior programmer telling another junior programmer
> what to implement, declaratively. Then consider a junior programmer
> telling a senior programmer what to implement. Then consider, say, a
> literature or history buff explaining to a senior programmer what to
> implement.
>
> With each case, the space of possible programs the implementer might
> implement will change. Requirements flow and satisfaction is
> concretization. And with examples like these, we're showing vast
> improvement on such automation.
>
> But the preemptive nature is still there. The difference between a
> junior and senior programmer (as implementors) should be obvious. It'll
> carry things like "Well, I've been using the Singleton Pattern for years. So
> there will be a Singleton in there somewhere!" And "Well, the only
> framework I know is NetLogo. So I guess I'll use NetLogo."
>
> A more interesting problem, I think, is abstraction. Automatic
> *reading* of programs. We've seen a lot of progress there, too, of
> course, perhaps the kerfuffle between xML and iML being fairly tightly
> focused. But what I want to see is a full round-trip. I don't
> particularly care which side it starts on, whether abstracting a
> concrete thing, then reconcretizing the abstraction or vice versa. But
> comparing and contrasting the initial thing with the iterated thing is
> the interesting part ... and targets, say, EricS' conception of
> identity as temporal inter-subjectivity (or "diachronicity" or
> "narrativity").
>
>
> On 4/4/22 08:54, Marcus Daniels wrote:
>> Is that natural language that is more contextualized?   When I look at, say, OpenAI Codex, I start to see the beginnings of systematic mappings between vague languages and precise ones.
>>
>> https://youtu.be/Zm9B-DvwOgw
>>
>> -----Original Message-----
>> From: Friam <friam-bounces at redfish.com> On Behalf Of glen
>> Sent: Monday, April 4, 2022 7:53 AM
>> To: friam at redfish.com
>> Subject: Re: [FRIAM] To repeat is rational, but to wander is transcendent
>>
>> But this is the point, right? That cultural language retains its power at least in part *because* of its ambiguity, its facilitating role in [re]abstracting and [re]concretizing. Jon's pointing out that we can design formal structures that *approach* such ambiguity (wandering vs periodic domains - or here with programming language design) targets that ambiguity. Nick's targeted it by questioning intelim rules in natural deduction. I guess we've all targeted it at some point.
>>
>> Under the paradigm that cultural language follows/reflects something about the human animal, which follows/reflects something about the world, then the question Dave asks, I think, is how far can our formal structures go *toward* that ambiguity? Can our ideal/formal/rigorous structures follow the world and "jump over" the cultural language and human animal connection ... i.e. follow the world *better*? Or do our formal structures need the animal for a high fidelity following?
>>
>> We've seen this same question in many other forms (e.g. Penrose's suggestion that human mathematicians do something computer mathematicians can't do, Rosen's suggestion that math/logic/algorithms can't realize a "largest model", Chalmers's "hard problem", etc.). So, perhaps it's old hat. But in the spirit of parallax, rewording it helps those of us who (think they) have solved it in their pet domain communicate their solution to those of us struggling in other domains.
>>
>> On 4/2/22 13:50, David Eric Smith wrote:
>>> It’s nice having Marcus's answer and Frank’s juxtaposed.
>>>
>>> Conflating essences and attributes is logically and structurally incoherent in software design.
>>>
>>> Whatever process of ratification leads to human language conventions, these assignments get made and may even seem rigid within languages (do not confuse j'ai fini and je suis fini in French, though the ambiguity in English is important to Daniel Day Lewis’s line in the bowling alley in There Will Be Blood); yet the semantic field is ambiguous enough to the process of language generation that the verb scopes get drawn differently in different lineages.
>>>
>>> Hmm.
>>>
>>> Eric
>>>
>>>
>>>> On Apr 3, 2022, at 12:31 AM, Marcus Daniels <marcus at snoutfarm.com> wrote:
>>>>
>>>> Mixing up is-a and has-a is a fundamental software design error.  It
>>>> is so consequential that some languages don’t even allow subtyping of
>>>> concrete types.   Now it seems essential, but just wait…
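
A minimal sketch of the is-a/has-a distinction in Marcus's remark, with invented class names (composition versus subtyping a concrete class):

    # has-a: Car is composed of an Engine; the Engine can be swapped or mocked.
    class Engine:
        def start(self) -> str:
            return "vroom"

    class Car:
        def __init__(self, engine: Engine):
            self.engine = engine          # composition (has-a)

        def drive(self) -> str:
            return self.engine.start()

    # is-a: subtyping the concrete Car couples ElectricCar to all of Car's
    # behavior, and overriding drive() silently changes it for every caller.
    class ElectricCar(Car):
        def drive(self) -> str:
            return "hum"

    print(Car(Engine()).drive())          # vroom
    print(ElectricCar(Engine()).drive())  # hum: substitution changed behavior
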
>>>>
>>>>> On Apr 1, 2022, at 9:42 PM, David Eric Smith <desmith at santafe.edu> wrote:
>>>>>
>>>>> 
>>>>>> On Mar 31, 2022, at 3:24 AM, Roger Critchlow <rec at elf.org> wrote:
>>>>>>
>>>>>> And I suppose the back side of the trap is that we have an innate essentialist heuristic which we use for organizing essentially everything we encounter in the world.
>>>>>
>>>>> You know, in reading this now, I am suddenly put back in mind of
>>>>> Vygotsky’s Thought and Language, and a category distinction he makes
>>>>> between “family resemblances” and “predicates”.  (Not that there is
>>>>> any privilege to this reference; only that I have seen so few things
>>>>> that I often come back to it.)
>>>>>
>>>>> If “family resemblance” and “predicate” are even categories, maybe one should add to them “essences” as a third category.
>>>>>
>>>>> What would any of those categories be?  Postures toward perceiving?  Or “experiencing”?
>>>>>
>>>>> The language that philosophers seem to like that “it is like” something “to be alive” (or whatever; to be a bat) — I have sympathy for their need to have some verbal locution, though I have no impulse to follow them in using that one — seems to have an origin something like the origin of terms like “qualia”. Or to be off somehow in the same general quadrant.  So okay, we can do a lot, and we want verbal conventions for signals to put each other into various states of mind or frames of reference.  The language isn’t analytic in any sense, but if people think they mostly agree on it, maybe it does whatever we put weight on language to do, to some degree.  Cues to coordinate states of mind.
>>>>>
>>>>> When Vygotsky uses the term “predicate”, he doesn’t mean it only (or maybe at all) in the logician’s sense of existence of a partition of a collection into non-overlapping classes.  He is referring to something somehow perceptual, so that from an early “family resemblance”, where maybe most of the blocks in a set are the red ones, or maybe most of them are the triangles, etc., we settle on taking “red” as a “property” on the basis of which to assign set membership.  Somehow it is that investing of things with properties or aspects, as a cognitive-developmental horizon, that he means as the shift to assigning them predicates.
>>>>>
>>>>> Is it a mode of perception?  A cast of mind, among many in which
>>>>> perceptions might take place or be enacted?  Is “experiencing something” as “being of some essence” then, in any similar sense, a mode of perception or an orientation, disposition, or posture toward things that partly forms what we take from the interaction with them?  That is my attempt to re-say REC’s “innate essentialist heuristic”.  Is perceiving-in-essences a distinct disposition from perceiving-as-having-properties?  Or are they two names for the same thing?  Linguistically, we seem to use them differently.  At least in English, one is strongly attached to the verb “is” and the other to the verb “has”, and there seem to be few instances in which we would regard the two as substitutes.  (Though I can think of constructions involving deictics and existential where apparently the same usage can shift which verb carries it across languages.)  Would the linguistic form strongly prejudice the semantic domain, or does it entrain on a semantic domain that is mostly language- and culture-invariant?
>>>>>
>>>>> I guess the psychologists and the philosophers have all this worked out.  Or maybe the contemplatives have systems for it.  But those are literatures I don’t cover.
>>>>>
>>>>> Eric
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> So in certain contexts -- mechanics, chemistry, thermodynamics, electronics, computation -- we have refined our naive essentialism into categories and operations which essentially solve or are in the process of solving the context. And in other contexts, we have lots of enthusiastic application of naive essentialist theories, lots of ritualistic imitations of the procedures employed in the contexts which are succeeding, and lots of proposals of ways that the unresolved contexts might be reduced to instances of the solved.
>>>>>>
>>>>>> EricS's dimensional analysis in a nutshell, which is an essential description of a successful essential analysis of a context, leaves a lot of problems for the reader to work out if taken as a recipe for action.   How do you identify the units of aggregation?   What are the rules for forming larger aggregates from smaller and vice versa?  What is entropy, anyway, and what is the correct entropy (*dynamic potential) in this context?
>>>>>>
>>>>>> Thermodynamic state functions as derivatives with respect to entropy are all over JW Gibbs's On the Equilibrium of Heterogeneous Substances.  It is the point.  PW Bridgman's Dimensional Analysis essentially summarizes all of physics up to 1922 as a problem of combining and factoring units of measurement, one of my favorite library discoveries as an undergraduate.  Both are available in the Internet Archive.
>>>>>>
>>>>>> -- rec --
>>>>>>
>>>>>>
>>>>>> On Wed, Mar 30, 2022 at 12:12 PM Marcus Daniels <marcus at snoutfarm.com> wrote:
>>>>>>
>>>>>>      Here is a situation I frequently experience with software development where I try to adopt some code, even my own.  I stare at the code and..
>>>>>>
>>>>>>      1) It becomes clear how to assemble it into what I want
>>>>>>
>>>>>>      2) I become confused or frustrated.   As a ritual, I remove it from my sight and open a blank editor window to start over.  Sometimes I must walk away from the screen to think, until I want to type.
>>>>>>
>>>>>>      I think the reason I dwell in #2 space is that I believe in #1.   That is, when I have just the right combinator library, things just snap into place.   I seem to spend a lot of time trying to convince myself of why it can't work, and whether it is a bad fit or something that needs to be fixed in the platform.  What is important, in this value system, is that platforms are good, not that this or that problem gets solved.   I think it is basically the Computer Science value system, in contrast to the Computational Science value system.
>>>>>>
>>>>>>      To [re]abstract and [re]concretize can be expensive and those who don't do it have a productivity advantage, as well as the benefit of having particulars to work from.   I don’t think it is a case of confusing the sign for the object.   It is a question of what kind of problem one wants to solve.
>>>>>>
>>>>>>      In contrast, I have met several very good computational people who hate abstraction and indirection.  They want code to be greppable even if that means it is baroque and good for nothing else.
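
A toy contrast of the two styles being described here (the word-counting task and names are invented for illustration):

    from collections import Counter
    from functools import reduce

    docs = ["to wander is transcendent", "to repeat is rational"]

    # "combinator" style: compose small generic pieces; compact, but indirect.
    combinator_counts = reduce(lambda acc, d: acc + Counter(d.split()), docs, Counter())

    # "greppable" style: explicit loops, single-purpose, every step visible
    # and searchable.
    greppable_counts = {}
    for d in docs:
        for w in d.split():
            greppable_counts[w] = greppable_counts.get(w, 0) + 1

    assert combinator_counts == Counter(greppable_counts)
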
>>>>>>
>>>>>>      -----Original Message-----
>>>>>>      From: Friam <friam-bounces at redfish.com> On Behalf Of glen
>>>>>>      Sent: Wednesday, March 30, 2022 8:40 AM
>>>>>>      To: friam at redfish.com
>>>>>>      Subject: Re: [FRIAM] To repeat is rational, but to wander is
>>>>>> transcendent
>>>>>>
>>>>>>      Of all the words being bandied about (quality, property, composition, domain, continuity, intensity, general, special, iteration, etc.) EricC's "contextless" stands out and reflects EricS' initial target of dimension analysis. The conversation seems to be about essentialism. Maybe that's a nice reflection that we're sticking to the OG topic "analytic idealism". But maybe it's Yet-Another example of our pareidolia to see patterns in noise and then to *reify* those patterns. [Re]Abstracting and [re]concretizing heuristics across contexts may well be what separates us from other life forms. But attributions of the "unreasonable effectiveness" of any body of heuristics is the most dangerous form of reification. The superhero ability to [re]abstract and [re]concretize your pet heuristics convinces you they are "properties" or "qualities" of the world, rather than of your anatomy and physiology. Arguing with myself, perhaps Dave's accusation is right. Maybe this is an
>>>>>>      example of swapping the sign for the object, or, reworded, prioritizing the description over the referent, confusing the structure of the observer with the structure of the observed.
>>>>>>
>>>>>>      Those of us with less ability tend to attribute (whatever haphazard heuristics they've landed on) to the world *early*. Those of us with more ability continue the hunt for Truth, delaying attribution to the world until we get too old to play that infinite game any more.
>>>>>>
>>>>>>      I think Possible Worlds helps, here, too: https://plato.stanford.edu/entries/possible-worlds/  Patterns are simply (non-degenerate) quantifiers over possible worlds.
>>>>>>
>>>>>>      Regardless, I'd like to ask whether the formulation of intensive properties as derivatives of entropy w.r.t. extensive properties is formalized somewhere? If so, I'd be grateful for pointers. I'm used to the idea that the intensives divide out the extensives. But I haven't seen them formulated as higher order derivations from entropy.
>>>>>>
>>>>>>      Thanks.
>>>>>>      -glen
>>>>>>
>>>>>>      On 3/29/22 14:37, David Eric Smith wrote:
>>>>>>      > [snip]
>>>>>>      > 1. One first has to have a notion of a macrostate; all these terms
>>>>>>      > only come into existence with respect to it. (They are predicates of
>>>>>>      > what are called “state variables” — the intensive ones and the
>>>>>>      > extensive ones — and that is what the “state” refers to.)
>>>>>>      >
>>>>>>      > 2. One needs some criterion for what is likely, or stable, which in general terms is an entropy (extending considerably beyond the Gibbs equilibrium entropy, but still to be constructed from specific principles), and on the macrostates _only_, the entropy function (which may be defined on many other states besides macrostates as well) becomes a _state function_.
>>>>>>      >
>>>>>>      > 3. Then (actually, all along since the beginning of the construction)
>>>>>>      > one needs to talk about what kind of aggregation operator we can apply
>>>>>>      > to systems, and quantities that do accumulate under aggregation become
>>>>>>      > the arguments of the state-function entropy, and the extensive state
>>>>>>      > variables.  (I say “accumulate” in favor of the more restrictive word
>>>>>>      > “add”, because what we really require is that they are what are termed
>>>>>>      > “scale factors” in large-deviation language, and we can admit a
>>>>>>      > somewhat wider class of kinds of accumulation than just addition,
>>>>>>      > though addition is the extremely common one.)
>>>>>>      >
>>>>>>      > 4. Once one has that, the derivatives of the entropy with respect to the extensive variables are the intensive state variables.  It is precisely the duality — that one is the derivative of a function with respect to the other, which is the argument of that function — that makes it not bizarre that both exist and that they are different.  But as EricC rightly says, if one just uses phenomenological descriptions, why any of this should exist, and why it should arrange itself into such dual systems, much less dual systems with always the same pair-wise relations, seems incomprehensible.  For some of the analogistic applications, there may not be any notions of state, or of a function doing what the entropy does, or of aggregation, or an associated accumulation operation, or gradients, or any of it.  Some of the phenomenology may seem to kinda-sorta go through, but whether one wants to pin oneself down to narrow terms is less clear.
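
For the equilibrium special case, the textbook instance of point 4 (stated here only as a sketch, not as the more general construction EricS outlines): with entropy S(U, V, N) a function of the extensive variables,

    \[
      dS = \frac{1}{T}\,dU + \frac{P}{T}\,dV - \frac{\mu}{T}\,dN ,
      \qquad
      \frac{1}{T} = \left(\frac{\partial S}{\partial U}\right)_{V,N}, \quad
      \frac{P}{T} = \left(\frac{\partial S}{\partial V}\right)_{U,N}, \quad
      -\frac{\mu}{T} = \left(\frac{\partial S}{\partial N}\right)_{U,V} ,
    \]

so the intensive variables T, P, and mu appear as (combinations of) partial derivatives of the entropy with respect to the extensive variables U, V, and N; this is the "entropy representation" found in standard thermodynamics texts.
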
>>>>>>      >
>>>>>>      > [snip]
>>>>>>      >
>>>>>>      >> On Mar 30, 2022, at 5:04 AM, Eric Charles <eric.phillip.charles at gmail.com> wrote:
>>>>>>      >>
>>>>>>      >> That is a bizarre distinction, that can only be maintained within some sort of odd, contextless discussion. If you tell me the number of atoms of a particular substance that you have smushed within a given space, we can, with reasonable accuracy, tell you the density, and hence the "state of matter". When we change the quantity of matter within that space, we can also calculate the expected change in temperature.
>>>>>>      >>
>
> --
> Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙
>
> .-- .- -. - / .- -.-. - .. --- -. ..--.. / -.-. --- -. .--- ..- --. .- - .
> FRIAM Applied Complexity Group listserv
> Zoom Fridays 9:30a-12p Mtn UTC-6  bit.ly/virtualfriam
> un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/
> archives:
>  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
>  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/