[FRIAM] are we how we behave?

Steven A Smith sasmyth at swcp.com
Tue Mar 5 20:42:40 EST 2019


Glen -

What a great (continued) riff on the (general) topic, in spite of the
thread wandering more than a little (no kinks but far from straight and
smooth).

I would like to contrast "learning" with "problem solving" as I think
the latter is the key point of what might allow "general intelligence"
(if such exists, as you say) to distinguish itself.  Many may disagree,
but I find the essence of "problem solving" at its best to be the art
and science of "asking the right question".  Once that has been
achieved, the "answer" becomes self-evident and, as Feynman liked to
say, "QED!"  Contemporary machine learning seems to confront the
definitions of both "self-evident" and "quite easily".

The 1970's computer-proof of the 4-color problem is a good (on that
boundary) example.  Perhaps we could say that the program written to do
the search of the axiom/logic space is a prime (if obscure) example of
"asking the right question" and (though it is a bit of a stretch) the
halting/solution of the program represents the "self-evident" answer
(nQED?).  At the very least, this is how I take the idea of "elegant"
solutions to be (though the complexity of the 4-color problem-solution
would seem to be a far stretch for what one would call "elegant").

Contemporary machine (deep?) learning techniques (even those emerging in
the late 80s such as evolutionary algorithms) seem to demonstrate that a
suitably "framed" question is as good as a well "stated" question with
the right amount/type of computation.  EA, GA, Neural Nets, etc. are
all "meta-heuristics".
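To make the "framed vs. stated" distinction concrete, here is a minimal sketch of one such meta-heuristic, a toy genetic algorithm in Python. All of the names (`evolve`, the parameters, the OneMax-style fitness) are hypothetical illustrations, not anything from a particular library: the point is only that the "question" is framed as a fitness function over bit-strings, and nothing about *how* to find a good answer is ever stated explicitly.

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=60, mut_rate=0.05):
    """Toy genetic algorithm: the problem is *framed* as a fitness
    function over bit-strings; no solution method is stated."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)     # one-point crossover
            child = a[:cut] + b[cut:]
            # flip each bit with probability mut_rate
            child = [bit ^ (random.random() < mut_rate) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# "Framing" the question: reward ones; say nothing about finding them.
best = evolve(fitness=sum)
print(sum(best))
```

The same loop solves any problem you can encode this way just by swapping in a different `fitness` function, which is what makes these techniques feel "more general" than a hand-stated algorithm, while still being nothing like a well-posed question in the mathematician's sense.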

I am not sure I can call applications of these techniques, even in their
best form, "general intelligence" but I think I would be tempted to call
them "more general" intelligence.   I would *also* characterize a LOT of
human problem-solving as NO MORE general, and the problem of "the
expert" seems to frame that even more strongly... it often appears that
an "expert" is distinguished from others with familiarity with a topic
by *at least* the very same kind of "supervised learning" that advanced
algorithms are capable of.   Some experts seem to be very narrow, and
ultimately not more than a very well populated/trained associative
memory, while others seem very general and are *also* capable of
reframing complex and/or poorly formed questions into an appropriate and
well-formed enough question for the answer to emerge (with or without
significant computation in between) as "self-evident". 

There are plenty of folks with more practical and theoretical knowledge
of these techniques than I have here, but I felt it was worth trying to
characterize the question this way.  

Another important way of looking at the question of what can be
automated might be an extension/parallel to the point of "if you have a
hammer, then everything looks like a nail".   It seems that our
socio-political-economic milieu is evolving to "meet the problem of
being human" halfway, by providing a sufficiently complex set
(spectrum?) of choices of "how to live" to satisfy (most) everyone. 
This does not mean that our system entirely meets the needs of humanity,
but rather that it does so at a granularity/structure such that many (if
not most) people can fit themselves into one of its many
compartments/slots in a matrix of solutions.

Social Justice and Welfare systems exist to try to help people fit into
these slots as well as presumably influencing the cultural and legal
norms that establish and maintain those slots.   The emergence of ideas
such as Neurodiversity and this-n-that-spectrum diagnoses seem to help
deal with the outliers and those falling between the cracks, but this
is, once again, an example (I think) of force-fitting the real phenomenon
(individuals in their arbitrary complexity) to the model
(socio-political-economic-??? models).

Mumble,

 - Steve


On 3/5/19 2:40 PM, uǝlƃ ☣ wrote:
> I can't help but tie these maunderings to the modern epithets of "snowflake" and "privilege" (shared by opposite but similar ideologues).  I have to wonder what it means to "learn" something.  The question of whether a robot will take one's job cuts nicely to the chase, I think.  How much of what any of us do/know is uniquely (or best) doable by a general intelligence (if such exists) versus specific intelligence?  While I'm slightly fluent in a handful of programming languages, I cannot (anymore) just sit down and write a program in any one of them.  I was pretty embarrassed at a recent interview where they asked me to code my solution to their interview question on the whiteboard.  After I was done I noticed sugar from 3 different languages in the code I "wrote" ... all mixed together for convenience.  They said they didn't mind.  But who knows?  Which is better?  Being able to coherently code in one language, with nearly compilable code off the bat?  Or the [dis]ability of changing languages on a regular basis in order to express a relatively portable algorithm?  Which one would be easier for a robot?  I honestly have no idea.
>
> But the idea that the arbitrary persnickety sugar I learned yesterday *should* be useful today seems like a bit of a snowflake/privileged way to think (even ignoring the "problem of induction" we often talk about on this list).  Is what it means to "learn" something fundamentally different from one era to the next?  Do the practical elements of "learning" evolve over time?  Does it really ... really? ... help to know how a motor works in order to drive a car?  ... to reliably drive a car so that one's future is more predictable?  ... to reduce the total cost of ownership of one's car?  Or is there a logical layer of abstraction below which the Eloi really don't need to go?
>
> On 3/5/19 11:04 AM, Steven A Smith wrote:
>> Interesting to see the "new bar" set so low as age 30.  Reminds me of my
>> own youth when the "Hippie generation" was saying "don't trust anyone
>> over 30!".  Later I got to know a lot of folks from the "Beat"
>> generation who were probably in their 30's by that time and rather put
>> out that they couldn't keep their "hip" going amongst the new youth culture.
>>
>> ...
>> My mules are named Fortran/Prolog/APL/C/PERL and  VMS/BSD/Solaris/NeXT
>> and IBM/CDC/CRAY/DEC and GL/OpenGL/VRPN/VRML.   I barely know the names
>> of the new
>> tractors/combines/cropdusters/satellite-imaging/laser-leveling/???
>> technology.
>>
>> Always to be counted on for nostalgic maunderings,



