[FRIAM] Fredkin/Toffoli, Reversibility and Adiabatic Computing.
steve smith
sasmyth at swcp.com
Sat Jan 11 18:13:35 EST 2025
As usual (often) I am humbled... in complement to your (EricS's) "if I
were smart and I had time" I counter "if I were smart and I had
focus"... this stuff just seems too hard (intellectually) for me to stay
focused on long enough to carefully puzzle it out, but/so I really
appreciate you and Marcus (and any latent laggards to the party)
engaging with the question.
I have imaginated that the value of reversibility in energy consumption
is that to "clear a computation" (dispose of the slag) the obvious answer
is to simply "uncompute" the computation... thereby (only?) *doubling*
the computational time? Of course the "readout" of the state of the
"halted" computation is its own bit-burning exercise... But if in
fact, the answer really is 42 as predicted, then a mere 6 bits
suffices? What was the question again?
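The "uncompute" idea above is essentially Bennett's compute-copy-uncompute trick, and it can be sketched in a few lines. The toy circuit below (my own made-up example, not any particular machine) runs a reversible AND forward into a work register, XOR-copies the answer out, then runs the circuit backward; the work register returns to zero, so nothing is erased, at the cost of roughly doubling the gate count:

```python
# Sketch of compute / copy / uncompute with reversible gates.

def toffoli(bits, a, b, c):
    # Controlled-controlled-NOT: flips bit c iff bits a and b are both 1.
    if bits[a] and bits[b]:
        bits[c] ^= 1

def cnot(bits, a, c):
    # Controlled-NOT: flips bit c iff bit a is 1.
    if bits[a]:
        bits[c] ^= 1

def compute(bits):
    # Forward pass: bit 2 (work ancilla) ends up holding AND(bit0, bit1).
    toffoli(bits, 0, 1, 2)

def uncompute(bits):
    # The Toffoli gate is its own inverse, so running it again undoes it.
    toffoli(bits, 0, 1, 2)

def bennett(x, y):
    # Registers: [input x, input y, work ancilla, output]
    bits = [x, y, 0, 0]
    compute(bits)      # ancilla now holds the answer
    cnot(bits, 2, 3)   # XOR-copy the answer into the output register
    uncompute(bits)    # ancilla restored to 0 -- no slag left behind
    return bits

print(bennett(1, 1))   # [1, 1, 0, 1]: inputs intact, ancilla clean, answer out
```

The "readout" step (the CNOT into the output register) is exactly the part that eventually has to leave the reversible world, which is where the bit-burning comes back in.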
I'm guessing I've got something really fundamentally wrong here though.
My ideation is squarely in the "free lunch" regime, I fear, whilst the
current methods *seem* to produce more waste heat than might be
absolutely necessary if more "skillful means" were applied?
Your reference to "slag" was excellent and very apropos of the other
tangent I referenced of Gosper's Hashlife where every computation is
hashed for potential re-reference. 2D nearest neighbor CA is "easy" to
quadtree down and memoise in such a way as to avoid recomputing any
repeating patterns... so much of the GofL game is in various states of
translation and reflection that most evolutions might degenerate to
translations/reflections of existing patterns with any *new and novel*
computation having to, by definition, happen at a larger and larger
scale? Another as-yet-unfulfilled exercise was to re-implement
Hashlife with a less efficient but more thorough kernel (N-1xN-1
recursive decomposition instead of N/2xN/2) and instrumented so as to
identify where "novel" computation was going on... where it *wasn't*
degenerating to simply hash collisions and look ups at scale.
Regarding slag: all slag is re-used or recognized for being unique (and
therefore acutely interesting?).
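The memoise-and-instrument idea can be sketched without the full quadtree machinery (this is not Gosper's Hashlife, just the caching kernel of it): evolve small Life patches, cache each patch's successor keyed by its contents, and count hits vs. misses to see where "novel" computation is actually happening. The hit/miss counters are my own addition for the instrumentation:

```python
# Memoised one-step evolution of Conway Life patches, with hit/miss counters.

def step_cell(grid, r, c):
    # Standard Conway rule for one interior cell.
    n = sum(grid[r + dr][c + dc]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0))
    return 1 if (n == 3 or (n == 2 and grid[r][c])) else 0

hits = misses = 0
cache = {}

def step_patch(patch):
    # patch: tuple of row-tuples. Returns the interior after one step, so
    # an NxN patch yields an (N-2)x(N-2) core; repeated patterns
    # (translated blinkers, blocks, ...) degenerate to pure lookups.
    global hits, misses
    if patch in cache:
        hits += 1
        return cache[patch]
    misses += 1
    n = len(patch)
    core = tuple(tuple(step_cell(patch, r, c) for c in range(1, n - 1))
                 for r in range(1, n - 1))
    cache[patch] = core
    return core

blinker = ((0, 0, 0, 0, 0),
           (0, 0, 0, 0, 0),
           (0, 1, 1, 1, 0),
           (0, 0, 0, 0, 0),
           (0, 0, 0, 0, 0))
step_patch(blinker)   # novel: actually computed
step_patch(blinker)   # repeat: pure hash lookup
print(hits, misses)   # 1 1
```

A miss is exactly "novel computation"; in a full quadtree version the same counters, kept per scale, would show where the evolution stops degenerating to lookups.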
Thanks to all for enduring my half-gassed speculations here.
- Steve
On 1/11/25 2:56 PM, Santafe wrote:
> It seems like there are two separate questions here.
>
> Steve talked about reversible gates, and suggested them as solutions
> to heat wastage. But I think that doesn’t go through. I too thought
> of Marcus’s point about unitary quantum gates as the obvious case of
> reversibility (needed for them to function at all for what they are).
> But quantum or Toffoli or Fredkin, the point of the Landauer relation
> et al. is that you can move around where the dissipation happens (out
> of the gate and into somewhere else), but reversibility itself isn’t
> obviating dissipation. (If it is to be obviated, that is a different
> question; I’ll come back in a moment to say this more carefully.)
>
> The different matter of superconducting or other less-wasteful gates
> seems to be about _excess_ dissipation that can be prevented without
> changing the inherent constraints from the definition of what the
> computational program is.
>
>
> So back to explaining the first para more carefully: As I understand
> it (so offering the place to tell me I have it wrong), the point is
> that we use a model in which the state space is a kind of
> product-space of two ensembles. One we call the data ensemble, and
> its real estate we have to build and then populate with one or another
> set of data values. The other is the thermal ensemble which gets
> populated with states that we don’t individually control with boundary
> conditions, but control only in distribution by the energy made
> available to them.
>
> Then what is the premise of computation? It is that every statement
> of a well-formed question already contains its answer; the problem is
> just that the answer is hard to see because it is distributed among
> the bits of the question statement, along with other things that
> aren’t the answer. If we maximally compress all this, what a
> computation is doing is shuffling the bits in a one-to-one mapping, so
> that the bits constituting the answer are in a known set of registers,
> and all the slag that wasn’t the answer is in the remaining registers.
> In a reversible architecture, that can be done in isolation from the
> thermal bath, so no entropy “production” takes place at all.
>
> But the slag is now still consuming real estate that you have to
> re-use to do the next computation, and even the answer has to get
> moved somewhere off-computer to re-use that part of the real estate.
> If the slag is really slag, and you just want to get rid of it, then
> you are still going to offload entropy to somewhere in doing so. Not
> in the gates that did the computation, maybe, but in the washer that
> returns clean sheets for the next day. If we stay within the
> representational abstraction that we have only the two ensembles (data
> and thermal), then every version of that dissipates to heat.
>
>
> The reason I said that whether dissipation is unavoidable or not is “a
> different question”, rather than “already known”, is that it is not
> obvious to me that one _must_ “dispose” of the slag that wasn't the
> “answer-part” of your first question. Maybe it isn’t true slag, but
> part of the articulation of other questions that request other answers.
> One might imagine (to employ the metaphor of the moment),
> “sustainable” computation, whereby all slag gets reversibly recycled
> to the Source of Questions, to be re-used by some green questioner
> another day.
>
> That’s a fun new problem articulation, but I think the imagination
> that it solves anything just displaces the naivete to another place.
> One might make computation “more sustainable” by realizing that there
> will be new questions later, and saving input bits in case those
> become useful. But there is no “totally sustainable computation”
> unless we are sure to ask all possible questions, so that every bit
> from the Source is the answer to something that eventually gets used.
> No Free Lunch kind of assertion. This is Dan Dennett World where
> volition is modeled by deterministic automata. But Dennett world is
> not our world: everything we do works because we are tiny and care
> about only a few things, with which we interact stochastically, and
> the world tolerates us in doing so. In that world, returning the slag
> to the Source of Questions should create a kind of chemical potential
> for interesting questions, in which, like ores that become more and
> more rarified, finding the interesting questions among the slag that
> one won’t dispose of gets harder and harder. So there should be
> Carnot-type limits that tell asymptotically what the minimal total
> waste could be to extract all the questions we will ever care about
> from the Source of Questions, returning as much slag as possible over
> the whole course, and dissipating only that part that defines the
> boundaries of our interest. That Carnot limit could be considerably
> less wasteful than our non-look-ahead Landauer bound, but it isn’t
> zero. And the Maxwell Demon cost of the look-ahead needed to recycle
> the slag in an optimal manner presumably also diverges, by a
> block-coding kind of scaling argument.
>
> Could be a delightful academic exercise, to work out any tiny model to
> illustrate this concept. If I were smart and had time, I would want
> to do it. But then those with social urgency would chop my head off
> too, in the next French Revolution, for having wasted time doing
> academic things when I should have been providing a more useful
> service. (Sorry, between meetings and the incoming emails over the
> past few days, I have been spending lots of time with those who think
> that the reason there are still problems in the world is that we let
> go of the Struggle Sessions too early. I can’t argue that they are
> wrong, so I am keeping my head down in public.)