<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>Re Reversible Computing and Fredkin/Toffoli gates. I'm
fascinated with the apparent lack of progress in this general
field. </p>
<p>When they spoke on this topic in 1983 (and I think Feynman
referenced it in an update to his "Plenty of Room at the Bottom"
(ca. 1959) talk), it was in the context of speculating as to
whether "molecular computing" might be a good candidate for this
style of reversibility. Every time it comes up (every decade?) I
kick myself for not paying closer attention, but the lack of
progress suggests it is "before its time" or perhaps a total
"wild goose chase".<br>
</p>
<p>I don't remember if they referenced "adiabatic" computing and I
have not followed up the references to understand that
trade-space... it feels like a TANSTAAFL argument which only
gains traction in edge/corner cases, though computing in biologic
and (other) molecular-scale contexts might well make that trade
(to avoid thermal problems), whether via Universal Assembler NT
or biologic self-assembly "circuits".</p>
<p>What *little* update I've been able to obtain specifically on
Toffoli and Fredkin gate based reversible circuits suggests that
the space/time cost is on the order of 4X to 8X in combined
increased real-estate and latency? Intuition suggests to me that
such a trade is worth it in these new giga-scale AI training
contexts, if the thermal gains are as significant as suggested.
Theoretically the reversibility and its thermodynamic
implications might be absolute, but practically perhaps not (at
least in electronic circuits, maybe not in photonic?).</p>
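<p>For anyone who hasn't played with these gates, here is a minimal
sketch (my own toy illustration, not from the survey) of what makes
them "reversible": each gate is its own inverse, so the input is
always recoverable from the output, and in principle no bit need be
erased (Landauer's kT&middot;ln2 per erased bit is the thermal stake):</p>
<pre>
def toffoli(a, b, c):
    """Toffoli (CCNOT): flips c iff both controls a and b are 1."""
    return a, b, c ^ (a &amp; b)

def fredkin(c, x, y):
    """Fredkin (CSWAP): swaps x and y iff control c is 1."""
    return (c, y, x) if c else (c, x, y)

from itertools import product

# Each gate is its own inverse: applying it twice restores the input,
# so the computation discards no information.
for bits in product((0, 1), repeat=3):
    assert toffoli(*toffoli(*bits)) == bits
    assert fredkin(*fredkin(*bits)) == bits
    # Fredkin additionally conserves the number of 1-bits --
    # the "billiard ball" / conservative-logic property.
    assert sum(fredkin(*bits)) == sum(bits)
</pre>
<p>(Both gates are universal for classical logic, which is why the
4X-8X overhead figure is quoted for whole circuits rather than per
gate.)</p>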
<p>The most recent survey paper I found was 2013 and it already
seems too dense for me, so maybe I will stall in this
quest/reflection. <br>
</p>
<blockquote>
<p><a href="https://arxiv.org/pdf/1309.1264"
class="moz-txt-link-freetext">https://arxiv.org/pdf/1309.1264</a><br>
</p>
</blockquote>
<p>I understand that Quantum Computing (in some forms?) is
reversible so some/many of the issues are likely shared? Our
resident Quantum Alchemist (or other CS/EECE wizards here) might
be able to shed some light?</p>
<p>- Steve<br>
</p>
<p><br>
</p>
<div class="moz-cite-prefix">On 1/11/25 8:38 AM, steve smith wrote:<br>
</div>
<blockquote type="cite"
cite="mid:80d59f56-6cd0-4f64-904e-d28fd69f8fd6@swcp.com">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<br>
<blockquote type="cite"
cite="mid:MN0PR11MB598585F70659843B02924FB9C51C2@MN0PR11MB5985.namprd11.prod.outlook.com">
<div class="WordSection1">
<p class="MsoNormal"><span style="font-size:11.0pt">SFI had
one of these for a while. (As far as I know it just sat
there.)</span></p>
<p class="MsoNormal"><span style="font-size:11.0pt"><a
href="http://www.ai.mit.edu/projects/im/cam8/"
moz-do-not-send="true" class="moz-txt-link-freetext">http://www.ai.mit.edu/projects/im/cam8/</a><br>
<br>
Nowadays GPUs are used for Lattice Boltzmann.</span></p>
</div>
</blockquote>
<p><br>
</p>
<p>Such a blast from the past with the awesome 90's-stylized web page
and the pics of the SUN (and Apollo?) workstations!</p>
<p>CAM8 is clearly the legacy of Margolus' work (MIT). At the
time (1983) I remember handwired/soldered breadboards and I
think banks of memory chips wired through logic gates and
such... I think this was pre-SUN days (I had an M68000 Wicat
Unix box on my desktop which sported a massive 5MB hard drive
with a pruned-down BSD variant installed on it). In fact, that
was where I ran the MT simulations (tuning rules until I got
"interesting" activity, running parameter sweeps, etc).</p>
<p>When GPUs first rose up (SGI) they seemed hyper-appropriate to
the purpose but alas, I had no spare cycles at that point in my
career to look into it. Just a few years ago when I was working
on the Micoy Omnistereoscopic "camera ball" (you mentioned it
looked a bit like a coronavirus particle) I had specced out an
FPGA fabric solution (with a dedicated FPGA wired directly
between every adjacent overlapping camera pair - 52 cameras) to
do realtime image de-distortion/stitching with the special
considerations which stereo vision adds. I never became a VHDL
programmer but I did become familiar with the paradigm... I
think I tried to engage Roger at a Wedtech on the topic when he
was (also) investigating FPGAs (circa 2016?). <br>
</p>
<p>At that time, my fascination with CA had evolved into
variations on Gosper's Hashlife... so GPU and FPGA fabric
didn't seem as apt, though TPUs do seem (more) apt for the
implicit data structures (hashed quad-trees).</p>
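<p>For the curious, the core trick is hash-consing: a toy sketch
(mine, not Gosper's full algorithm, which also memoizes the
time-evolution of each node) of why identical regions of the
universe cost nothing extra to store:</p>
<pre>
from functools import lru_cache

# Hash-consed quadtree nodes: structurally identical subtrees are
# constructed once and shared, which is what makes Hashlife's
# memoized time-stepping over hashed quad-trees possible.
@lru_cache(maxsize=None)
def node(nw, ne, sw, se):
    return (nw, ne, sw, se)

# Two all-dead 2x2 blocks are the *same object*, not merely equal,
# so a mostly-empty universe collapses to a handful of shared nodes:
dead = node(0, 0, 0, 0)
assert node(dead, dead, dead, dead) is node(dead, dead, dead, dead)
</pre>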
<p><br>
</p>
<p>The new nVidia DGX concentrated TPU system for $3k is
fascinating and triggers my thoughts (not very coherent) about
the tradeoffs between power and entropy and "complexity".</p>
<p>A dive down this 1983/4 rabbit hole led me (also) to the <i>Toffoli</i>
and <i>Fredkin Gates</i> and <i>Reversible Computing</i>. More
on that in a few billion more neural/GPT cycles...</p>
<br>
</blockquote>
</body>
</html>