[FRIAM] Warring Darwinians for Glen, Steve

Steven A Smith sasmyth at swcp.com
Tue May 5 20:28:24 EDT 2020


On 5/5/20 5:40 PM, uǝlƃ ☣ wrote:
> So, in state space reconstruction, it seems, we're attempting to infer a structure that is *as* expressive as the data we feed it, similar I guess to deep learning or genetic programming. All the sci-fi movies focus on the brain (or maybe even the CNS). But what I'd like to see is something like FPGA where the dynamically programmable "sleeve" (as in Altered Carbon) is hooked to one end of a harness and the "original" is hooked to the other end ... like a motion capture suit but with way more pathways. Then the "sleeve" is re-configured/programmed over some relatively short period. Like the original has to wear the suit for a 24 hour period ... paying bills, making coffee, having sex, etc. Then, at the end of the process, the sleeve is configured to be as expressive as the original ... at  least in so far as the data taken during that period.
>
> We couldn't use such things for long-term duplication. But it would be great for, say, giving speeches. We could take a Donald Trump sleeve, *program* it with Barack Obama, and abracadabra we have a real president who can get through a 30 minute speech without screwing it up.

Yeh... but what about his long-form Birth Certificate, huh? 
<crude-reference trigger-alert> And I won't even go into what Melania
would get out of the deal!  Even Stormy might give the money back and
ask for "do-overs"? </crude-reference>

<anecdotal self-aggrandizing technical discursion>

My last major failed project was an "omnistereoscopic" camera: I was
designing it to fit FPGAs between all 52 cameras (200%+ coverage over
the 4Pi steradians) to do realtime lens-correction/stitching, with the
intention of being able to *resample* the implied sampled lightfield for
myriad purposes: military, industrial, and entertainment.   The FPGA
inter-camera fabric was the most efficient for the purpose for lots of
reasons, but it was also quite a bit more processing power than was
needed.   Not only could it have included an array (even denser?) of
microphones, but there would have been leftover power to do semantic
segmentation up to some level.   Another layer of "deep learning" behind
the semantic segmentation and voilà!
Petavision! <https://petavision.github.io/petavision.html>
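For scale, a back-of-envelope on those coverage numbers (the 52 cameras and the 200%+ figure are from above; the circular-cone field-of-view model is my own simplification, not anything from the Micoy design):

```python
# Back-of-envelope: 52 cameras covering 200%+ of the full 4*pi steradian
# sphere implies an average solid angle per camera, which (modeling each
# camera's FOV as a circular cone) gives an equivalent cone half-angle
# via Omega = 2*pi*(1 - cos(theta)).
import math

N_CAMERAS = 52      # from the post
COVERAGE = 2.0      # "200%+" overlap factor

full_sphere = 4 * math.pi                            # ~12.566 sr
omega_per_cam = COVERAGE * full_sphere / N_CAMERAS   # avg solid angle per camera
# Invert Omega = 2*pi*(1 - cos(theta)) for the cone half-angle:
theta = math.acos(1 - omega_per_cam / (2 * math.pi))

print(f"solid angle per camera: {omega_per_cam:.3f} sr")
print(f"equivalent cone half-angle: {math.degrees(theta):.1f} deg")
```

which works out to roughly half a steradian per camera, i.e. each camera only needs a fairly narrow (~45 degree) field of view to hit 200% overlap.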

The patent holder, Micoy, that I was working with got bought by Digital Domain
<https://www.digitaldomain.com/news/digital-domain-acquires-micoy-portfolio-patents-interactive-entertainment-technology/>,
leaving my accounts receivable unsurprisingly in line behind the
friends-and-family investors, partly because I wasn't willing to sign on
to their non-competes and was fixated for my own reasons on going
forward with open SW/HW solutions like the Elphel 393
<https://www.elphel.com/>, which NOW comes with a dual-core ARM and FPGA
integrated on each camera board and GigE for inter-camera comms...   I
can't tell what DD has done with the patents, but they do have a big
*claim* in the VR
<https://www.digitaldomain.com/virtual_reality/end-to-end-vr/> space,
including (visual) streaming/capture.

so... "Assume a spherical sensory sleeve"

Bumperball Plastic Bubble Suit | HiConsumption

meets

just need to add temperature, chemical (smell/taste), and pressure
sensors to the surface and voilà, glen's "sleeve"!

</anecdotal self-aggrandizing technical discursion>

- Steve 



