<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>PS. One of the comments on the Daily Beast article suggested
that deep-fakes could be used for signal compression... I have
experienced two compelling examples of this: <br>
</p>
<ol>
<li>Text/search completion/anticipation, where it feels as if the
system already knows what I am likely to want to type next (I
sometimes think it is training *mildly* on my own text, not just
on a larger corpus).</li>
<li>Hand/eye/body tracking. Early VR was badly hampered by
tracking lag caused by both transmission/queuing and processing
delays, so one workaround was to build an anticipatory model that
extrapolated the trajectories of (for example) hand motions...
when overdone, it gave you the sense that your hand was attached
elastically to the VR rendering of a pointing device with
inertia, which was still better than the raw *lag* experience.
Head tracking was even more critical because of cybersickness,
but in some ways easier because it was more constrained (the 6DOF
of neck/body movement is more constrained than the nominal
hip/torso/shoulder/elbow/wrist DOF chain of the hand). I always
wondered whether a properly coupled system (in the inter-reality
(Gintautas) sense) would lead to entrainment similar to what (I
think) martial artists, gymnasts, and dancers experience when
they train. This is probably a very real issue/experience in
robotic exoskeleton development.<br>
</li>
</ol>
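<p>The anticipatory-tracking workaround in item 2 amounts to dead
reckoning: estimate a velocity from recent pose samples and project
the pose forward by the expected latency. A minimal Python sketch of
that idea (the function name, the constant-velocity model, and the
<code>gain</code> knob are my illustrative assumptions, not any
particular VR system's algorithm):</p>

```python
import numpy as np

def predict_pose(positions, timestamps, latency, gain=1.0):
    """Dead-reckoning predictor: estimate velocity from the last two
    samples and extrapolate the pose `latency` seconds ahead.

    gain=1.0 compensates the latency exactly (under the constant-velocity
    assumption); gain > 1.0 over-anticipates, which is one way to get
    the "elastic" rubber-band feel described above.
    """
    p = np.asarray(positions, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    v = (p[-1] - p[-2]) / (t[-1] - t[-2])  # finite-difference velocity
    return p[-1] + gain * v * latency

# Hand moving +x at 10 units/s, last sampled at t=0.1 s;
# predict where it will be when the frame renders 50 ms later.
predicted = predict_pose([[0.0, 0.0], [1.0, 0.0]], [0.0, 0.1], latency=0.05)
```

<p>In practice the velocity estimate would be filtered (e.g. an
alpha-beta or Kalman filter) rather than taken from raw finite
differences; the raw version overshoots on every direction change,
which is exactly the overdone-anticipation artifact described
above.</p>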
<div class="moz-cite-prefix">On 6/24/22 7:50 AM, Steve Smith wrote:<br>
</div>
<blockquote type="cite"
cite="mid:b4d41216-87a4-f164-b47d-fdbb89279060@swcp.com"><a class="moz-txt-link-freetext" href="https://www.thedailybeast.com/amazon-unveils-new-ai-tool-for-alexa-to-turn-a-speech-snippet-into-a-voice">https://www.thedailybeast.com/amazon-unveils-new-ai-tool-for-alexa-to-turn-a-speech-snippet-into-a-voice</a>
<br>
<br>
Marcus confronted me a month or more ago with something along the
lines of "who says learning isn't just imitation?", and it
*really* hit home. Play seems to be a lot about mock-work and
mock-fighting, and it seems widely accepted that a great deal of
learning happens through emulation of the expertise of others.
<br>
<br>
When I consider those who have passed in my life, I think I much
prefer to hear-through-memory their voices (and visages) over
recordings. Synthesizing voices or visages seems to compound the
uncanny-valley effect. It is bad enough to get it eerily-nearly
right with something less visceral than the affect of a loved one
since passed.
<br>
<br>
During the first gulf war (1991) I worked with a voice/speech
researcher who "went dark" for about a year. He remained cagey
about this work even after he came out of that period of silence
and misdirection, but eventually the general nature of the work
was declassified (though not details) and it was about
synthesizing voices by modeling the vocal tract, and the
application that made it so sketchy was synthesizing Saddam
Hussein's voice for battlefield communications. This was still in
an era of analog
communication with various analog scrambling techniques standing
in for what we do today with digital encryption and
authentication. I have no idea if it was ever deployed or
operationally effective, but the examples he used for
demonstration were with well-known voices. I doubt anyone could
argue that this speech synthesizer in any way "thought" like
Saddam Hussein, but the larger organizational unit (the Iraqi
army) might well "act as if" the intention injected by the
synthesized voice were part of their collective psyche.
<br>
<br>
Back to the original premise of whether a well-enough
practiced/trained learning classifier system has any emergent
properties that are parallel to animal/human cognition. This
seems entirely up in the air to me, but Glen's recent rant/rave
about "what it's like" and splitters v. smooshers was mildly
compelling to me.
<br>
<br>
A big shift in my own perception of self/change was when someone I
respected offered the idea of "trying it on" and "acting as if" as
a model for personal transformation. In retrospect it seems
obvious that by emulating some *one* (real or idealized) one might
actually become *more like* that person (real or idealized). Of
course, this is about a change of affect/self/consciousness, not
emerging from whole cloth (sand?).
<br>
<br>
<br>
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
<br>
FRIAM Applied Complexity Group listserv
<br>
Fridays 9a-12p Friday St. Johns Cafe / Thursdays 9a-12p Zoom
<a class="moz-txt-link-freetext" href="https://bit.ly/virtualfriam">https://bit.ly/virtualfriam</a>
<br>
to (un)subscribe
<a class="moz-txt-link-freetext" href="http://redfish.com/mailman/listinfo/friam_redfish.com">http://redfish.com/mailman/listinfo/friam_redfish.com</a>
<br>
FRIAM-COMIC <a class="moz-txt-link-freetext" href="http://friam-comic.blogspot.com/">http://friam-comic.blogspot.com/</a>
<br>
archives: 5/2017 thru present
<a class="moz-txt-link-freetext" href="https://redfish.com/pipermail/friam_redfish.com/">https://redfish.com/pipermail/friam_redfish.com/</a>
<br>
1/2003 thru 6/2021 <a class="moz-txt-link-freetext" href="http://friam.383.s1.nabble.com/">http://friam.383.s1.nabble.com/</a>
<br>
<br>
</blockquote>
</body>
</html>