[FRIAM] Future of humans and artificial intelligence

Frank Wimberly wimberly3 at gmail.com
Wed Aug 9 09:35:01 EDT 2017


Right.  Then you use gradient ascent.  But what if you are scheduling a job
shop for throughput when there are thousands of variables, most of which
have discrete values?
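
To make that concrete, here is a rough sketch (in Python, with made-up
job counts and processing times) of the kind of discrete stochastic
search you fall back on when there is no gradient to ascend:

import random

# Toy instance: assign N jobs with known processing times to M machines so
# as to minimize makespan.  The variables are discrete (which machine gets
# which job), so gradient ascent does not apply; a stochastic local search
# still makes steady progress.
random.seed(0)
N_JOBS, N_MACHINES = 1000, 20
times = [random.randint(1, 100) for _ in range(N_JOBS)]

def makespan(assign):
    load = [0] * N_MACHINES
    for job, machine in enumerate(assign):
        load[machine] += times[job]
    return max(load)

# Start from a random assignment and keep any single-job move that helps.
assign = [random.randrange(N_MACHINES) for _ in range(N_JOBS)]
best = makespan(assign)
for _ in range(20000):
    job = random.randrange(N_JOBS)
    old = assign[job]
    assign[job] = random.randrange(N_MACHINES)
    cost = makespan(assign)
    if cost <= best:
        best = cost
    else:
        assign[job] = old  # revert the move
print("approximate makespan:", best)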

Frank

Frank Wimberly
Phone (505) 670-9918

On Aug 8, 2017 10:41 PM, "Marcus Daniels" <marcus at snoutfarm.com> wrote:

> Frank writes:
>
>
> "My point was that depth-first and breadth-first can probably serve only
> as a straw-man (straw-men?)."
>
>
> Unless there is a robust meta-rule (not a heuristic) or a single deterministic
> search algorithm to rule them all, wouldn't those other suggestions
> be straw-men too?   If I knew that there was no noise and the domain
> was continuous and convex, then I wouldn't use a stochastic approach.
>
>
> Marcus
> ------------------------------
> *From:* Friam <friam-bounces at redfish.com> on behalf of Frank Wimberly <
> wimberly3 at gmail.com>
> *Sent:* Tuesday, August 8, 2017 10:15:05 PM
> *To:* The Friday Morning Applied Complexity Coffee Group
> *Subject:* Re: [FRIAM] Future of humans and artificial intelligence
>
> My point was that depth-first and breadth-first can probably serve only as
> a straw-man (straw-men?).
>
> Frank Wimberly
> Phone (505) 670-9918
>
> On Aug 8, 2017 10:11 PM, "Marcus Daniels" <marcus at snoutfarm.com> wrote:
>
>> Frank writes:
>>
>>
>> "Then there's best-first search, B*, C*, constraint-directed search,
>> etc.  And these are just classical search methods."
>>
>>
>> Connecting this back to evolutionary / stochastic techniques, genetic
>> programming is one way to get the best of both approaches, at least in
>> principle.   One can expose these human-designed algorithms as predefined
>> library functions.  Typically in genetic programming the vocabulary
>> consists of simple routines (e.g. arithmetic), conditionals, and recursion.
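>>
>> As a toy illustration of that vocabulary in plain Python (the primitive
>> names, the stub "library" routine, and the tree encoding below are all
>> invented for the example):
>>
>> import operator
>> import random
>>
>> def best_first_stub(x, y):
>>     # Stand-in for exposing a human-designed algorithm (say, a classical
>>     # search routine) as a callable primitive.
>>     return min(x, y)
>>
>> PRIMITIVES = [(operator.add, 2), (operator.mul, 2), (best_first_stub, 2)]
>> TERMINALS = [0.0, 1.0, "x"]
>>
>> def random_tree(depth=3):
>>     # Grow a random expression tree over the vocabulary.
>>     if depth == 0 or random.random() < 0.3:
>>         return random.choice(TERMINALS)
>>     fn, arity = random.choice(PRIMITIVES)
>>     return (fn, [random_tree(depth - 1) for _ in range(arity)])
>>
>> def evaluate(tree, x):
>>     if tree == "x":
>>         return x
>>     if isinstance(tree, float):
>>         return tree
>>     fn, args = tree
>>     return fn(*(evaluate(a, x) for a in args))
>>
>> print(evaluate(random_tree(), 2.0))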
>>
>>
>> In practice, this kind of seeding of the solution space can collapse
>> diversity.   It is a drag to see tons of compute time spent on a million
>> little refinements around an already good solution.  (Yes, I know that
>> solution!)  More fun to see a set of clumsy solutions turn into
>> decent-performing but weird solutions.  I find my attention is drawn to
>> properties of sub-populations and how I can keep the historically good
>> performers _out_.  Not a pure GA, but a GA where communities also have
>> fitness functions matching my heavy hand of justice.  (If I prove that
>> conservatism just doesn't work, I'll be sure to pass it along.)
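>>
>> Roughly what I have in mind, as a sketch (the bit-string encoding, the
>> toy fitness, and the penalty weight are placeholders, not a real
>> problem):
>>
>> import random
>>
>> L, POP, ISLANDS, GENS = 40, 30, 4, 50
>> archive = []  # historically good performers, to be kept _out_
>>
>> def raw_fitness(ind):
>>     return sum(ind)  # toy objective: maximize the number of 1s
>>
>> def penalty(ind):
>>     # Community-level pressure: similarity to anything in the archive.
>>     if not archive:
>>         return 0
>>     return max(sum(a == b for a, b in zip(ind, old)) for old in archive)
>>
>> def fitness(ind):
>>     return raw_fitness(ind) - 0.5 * penalty(ind)
>>
>> def mutate(ind):
>>     j = random.randrange(L)
>>     return ind[:j] + [1 - ind[j]] + ind[j + 1:]
>>
>> islands = [[[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
>>            for _ in range(ISLANDS)]
>> for gen in range(GENS):
>>     for k, pop in enumerate(islands):
>>         pop.sort(key=fitness, reverse=True)
>>         archive.append(list(pop[0]))  # remember each community's best
>>         survivors = pop[:POP // 2]
>>         islands[k] = survivors + [mutate(random.choice(survivors))
>>                                   for _ in range(POP - len(survivors))]
>>
>> print("best raw fitness:", max(raw_fitness(i) for p in islands for i in p))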
>>
>>
>> Marcus
>>
>>
>> ------------------------------
>> *From:* Friam <friam-bounces at redfish.com> on behalf of Frank Wimberly <
>> wimberly3 at gmail.com>
>> *Sent:* Tuesday, August 8, 2017 7:57:06 PM
>> *To:* The Friday Morning Applied Complexity Coffee Group
>> *Subject:* Re: [FRIAM] Future of humans and artificial intelligence
>>
>> Then there's best-first search, B*, C*, constraint-directed search, etc.
>> And these are just classical search methods.
>>
>> Frank
>>
>> Frank Wimberly
>> Phone (505) 670-9918
>>
>> On Aug 8, 2017 7:20 PM, "Marcus Daniels" <marcus at snoutfarm.com> wrote:
>>
>>> "But one problem is that breadth-first and depth-first search are just
>>> fast ways to find answers."
>>>
>>>
>>> Just _not_ fast, that is -- general but not efficient.   [My dog was demanding
>>> attention!]
>>> ------------------------------
>>> *From:* Friam <friam-bounces at redfish.com> on behalf of Marcus Daniels <
>>> marcus at snoutfarm.com>
>>> *Sent:* Tuesday, August 8, 2017 6:43:40 PM
>>> *To:* The Friday Morning Applied Complexity Coffee Group; glen ☣
>>> *Subject:* Re: [FRIAM] Future of humans and artificial intelligence
>>>
>>>
>>> Grant writes:
>>>
>>>
>>> "On the other hand... evolution *is* stochastic. (You actually did not
>>> disagree with me on that. You only said that the reason I was right was
>>> another one.) "
>>>
>>>
>>> I think of logic programming systems as a traditional tool of AI
>>> research (e.g. Prolog, now Curry, similar capabilities implemented in Lisp)
>>> from the age before the AI winter.  These systems provide a very flexible
>>> way to pose constraint problems.  But one problem is that breadth-first and
>>> depth-first search are just fast ways to find answers.  Recent work seems
>>> to have shifted to SMT solvers and specialized constraint solving
>>> algorithms, but these have somewhat less expressiveness as programming
>>> languages.  Meanwhile, machine learning has come on the scene in a big way
>>> and tasks traditionally associated with old-school AI, like natural
>>> language processing, are now matched or even dominated by neural nets
>>> (LSTMs).  I find the range of capabilities provided by groups like
>>> nlp.stanford.edu really impressive -- there are examples there of both
>>> approaches (logic programming and machine learning), and they don't need
>>> to be mutually exclusive.
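>>>
>>> For a flavor of the SMT side, here is a tiny constraint problem posed
>>> through the z3 Python bindings (assuming z3 is installed; the constraints
>>> themselves are arbitrary):
>>>
>>> from z3 import Ints, Solver, Distinct, sat
>>>
>>> x, y, z = Ints('x y z')
>>> s = Solver()
>>> # Declarative, like a logic program, but without the surrounding
>>> # general-purpose language.
>>> s.add(Distinct(x, y, z), x + y + z == 10, x > 0, y > 0, z > 0)
>>> if s.check() == sat:
>>>     print(s.model())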
>>>
>>>
>>> Quantum annealing is one area where the two may increasingly come
>>> together by using physical phenomena to accelerate the rate at which high
>>> dimensional discrete systems can be solved, without relying on fragile or
>>> domain-specific heuristics.
>>>
>>>
>>> I often use evolutionary algorithms for hard optimization problems.
>>> Genetic algorithms, for example, are robust to noise (or, if you like,
>>> ambiguity) in fitness functions, and they are trivial to parallelize.
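>>>
>>> A minimal sketch of both properties together (the sphere objective, the
>>> noise level, and the population sizes are all arbitrary choices):
>>>
>>> import random
>>> from multiprocessing import Pool
>>>
>>> def noisy_fitness(ind):
>>>     true_value = -sum(x * x for x in ind)     # maximize; peak at zero
>>>     return true_value + random.gauss(0, 0.1)  # deliberate noise
>>>
>>> def mutate(ind):
>>>     return [x + random.gauss(0, 0.05) for x in ind]
>>>
>>> if __name__ == "__main__":
>>>     pop = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(64)]
>>>     with Pool() as pool:
>>>         for gen in range(30):
>>>             # Fitness evaluations are independent, so they parallelize
>>>             # trivially across a process pool.
>>>             scores = pool.map(noisy_fitness, pop)
>>>             ranked = [ind for _, ind in
>>>                       sorted(zip(scores, pop), reverse=True)]
>>>             parents = ranked[:16]
>>>             pop = parents + [mutate(random.choice(parents))
>>>                              for _ in range(64 - 16)]
>>>     print("best noisy score this generation:", max(scores))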
>>>
>>>
>>> Marcus
>>> ------------------------------
>>> *From:* Friam <friam-bounces at redfish.com> on behalf of Grant Holland <
>>> grant.holland.sf at gmail.com>
>>> *Sent:* Tuesday, August 8, 2017 4:51:18 PM
>>> *To:* The Friday Morning Applied Complexity Coffee Group; glen ☣
>>> *Subject:* Re: [FRIAM] Future of humans and artificial intelligence
>>>
>>>
>>> Thanks for throwing in on this one, Glen. Your thoughts are
>>> ever-insightful. And ever-entertaining!
>>>
>>> For example, I did not know that von Neumann put forth a set theory.
>>>
>>> On the other hand... evolution *is* stochastic. (You actually did not
>>> disagree with me on that. You only said that the reason I was right was
>>> another one.) A good book on the stochasticity of evolution is "Chance and
>>> Necessity" by Jacques Monod. (I just finished rereading it for the second
>>> time. And that proved quite fruitful.)
>>>
>>> G.
>>>
>>> On 8/8/17 12:44 PM, glen ☣ wrote:
>>>
>>>
>>> I'm not sure how Asimov intended them.  But the three laws is a trope that clearly shows the inadequacy of deontological ethics.  Rules are fine as far as they go.  But they don't go very far.  We can see this even in the foundations of mathematics, the unification of physics, and polyphenism/robustness in biology.  Von Neumann (Burks) said it best when he said: "But in the complicated parts of formal logic it is always one order of magnitude harder to tell what an object can do than to produce the object."  Or, if you don't like that, you can see the same perspective in his iterative construction of sets as an alternative to the classical conception.
>>>
>>> The point being that reality, traditionally, has shown more expressiveness than any of our rule sets.
>>>
>>> There are ways to handle the mismatch in expressivity between reality and our rule sets.  Stochasticity is the measure of the extent to which a rule set matches a set of patterns.  But Grant's right to qualify that with evolution, not because of the way evolution is stochastic, but because evolution requires a unit to regularly (or sporadically) sync with its environment.
>>>
>>> An AI (or a rule-obsessed human) that sprouts fully formed from Zeus' head will *always* fail.  It's guaranteed to fail because syncing with the environment isn't *built in*.  The sync isn't part of the AI's onto- or phylo-geny.
>>>
>>>
>>>
>>>
>>
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
>