[FRIAM] naive question

glen gepropella at gmail.com
Fri Oct 21 11:56:12 EDT 2022


Ha! Hey, don't judge. I was going to show my demonstration of the Euler identity in Rust ... but ... Cargo. Pfft. I guess I could have simply iterated numbers in Bash.

Here it is without running Zoom:
gepr at elric:~/lang/js$ time for (( i = 0 ; i < 999999 ; i = i + 1 )); do false ; done

real	0m2.373s
user	0m2.372s
sys	0m0.000s
gepr at elric:~/lang/js$ time for (( i = 0 ; i < 999999 ; i = i + 1 )); do false ; done

real	0m2.341s
user	0m2.340s
sys	0m0.000s

And here it is while running Zoom:
gepr at elric:~/lang/js$ time for (( i = 0 ; i < 999999 ; i = i + 1 )); do false ; done

real	0m3.772s
user	0m3.734s
sys	0m0.000s
gepr at elric:~/lang/js$ time for (( i = 0 ; i < 999999 ; i = i + 1 )); do false ; done

real	0m2.528s
user	0m2.508s
sys	0m0.000s
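
As for the Euler identity demo that Cargo ate: here's a minimal JS stand-in (a sketch, not the Rust version), expanding e^(i*pi) as cos(pi) + i*sin(pi) since JS has no complex type:

#!/usr/bin/env node
// Euler's identity: e^(i*pi) + 1 = 0, via e^(i*x) = cos(x) + i*sin(x).
const re = Math.cos(Math.PI) + 1; // real part of e^(i*pi) + 1
const im = Math.sin(Math.PI);     // imaginary part
console.log(re, im);              // 0 1.2246467991473532e-16

Note the imaginary part: doubles give ~1.22e-16 rather than 0, the same precision story as the fib.js output quoted below.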


On 10/21/22 08:47, Marcus Daniels wrote:
> Numerics in JavaScript.  Messed-up.
> 
> -----Original Message-----
> From: Friam <friam-bounces at redfish.com> On Behalf Of glen
> Sent: Friday, October 21, 2022 8:43 AM
> To: friam at redfish.com
> Subject: Re: [FRIAM] naive question
> 
> Just for example: I run my fib.js twice wrapped by time and get:
> 
> gepr at elric:~/lang/js$ time ./fib.js 100
> 354224848179262000000
> 
> real	0m0.148s
> user	0m0.137s
> sys	0m0.013s
> 
> gepr at elric:~/lang/js$ time ./fib.js 100
> 354224848179262000000
> 
> real	0m0.134s
> user	0m0.116s
> sys	0m0.020s
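> 
> [fib.js isn't reproduced here; a minimal Number-based sketch consistent with that output would be:
> 
> #!/usr/bin/env node
> // Iterative Fibonacci on plain Numbers (IEEE-754 doubles). Doubles
> // hold integers exactly only up to 2^53, so the result gets rounded
> // along the way: this prints 354224848179262000000 for fib(100);
> // the exact value is 354224848179261915075.
> const n = Number(process.argv[2]);
> let [a, b] = [0, 1];
> for (let i = 0; i < n; i++) [a, b] = [b, a + b];
> console.log(a);
> 
> Swapping in BigInt (0n, 1n, and BigInt(process.argv[2])) prints the exact value.]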
> 
> strace also shows quite a bit of difference:
> 
> gepr at elric:~/lang/js$ strace ./fib.js 100 2> fibout1
> 354224848179262000000
> gepr at elric:~/lang/js$ strace ./fib.js 100 2> fibout2
> 354224848179262000000
> gepr at elric:~/lang/js$ diff -C0 fibout?|wc -l
> 1331
> 
> 
> On 10/21/22 08:16, glen wrote:
>> No, not even from the user's perspective. The time program helps show how much real, user, and system time the same script consumes on different systems. This is especially important on multi-user systems, but any system that allows multiple processes will show differences. I suggest you run your script, wrapped by time or strace, on the different systems and examine the output.
>>
>> Now, *you* as the user, may not have noticed the differences in execution time or resource use. But *you* would not be the canonical Unix user, if that's the case. >8^D Hell, even on VMS, we were plagued with differences between successive executions of various scripts. That you didn't notice such is interesting.
>>
>> On 10/21/22 08:08, Frank Wimberly wrote:
>>> Yes.  From the user's perspective they ran identically.  Those workstations didn't even have the same instruction sets.
>>>
>>> ---
>>> Frank C. Wimberly
>>> 140 Calle Ojo Feliz,
>>> Santa Fe, NM 87505
>>>
>>> 505 670-9918
>>> Santa Fe, NM
>>>
>>> On Fri, Oct 21, 2022, 8:24 AM glen <gepropella at gmail.com <mailto:gepropella at gmail.com>> wrote:
>>>
>>>      By "ran identically", you actually mean "produced identical outputs". They didn't run identically. Simple ways to see this are system and process monitors, top, strace, etc.
>>>
>>>      On 10/20/22 17:19, Frank Wimberly wrote:
>>>       > Back in the 80s I wrote many Unix shell scripts.  For my purposes they ran identically on various workstations, whether Sun, SGI, or, eventually, VAX (running Unix).  The software existed in my mind/brain, in files in the various filesystems, or on paper listings.  What's wrong with my thinking?
>>>       >
>>>       > Frank
>>>       >
>>>       > ---
>>>       > Frank C. Wimberly
>>>       > 140 Calle Ojo Feliz,
>>>       > Santa Fe, NM 87505
>>>       >
>>>       > 505 670-9918
>>>       > Santa Fe, NM
>>>       >
>>>       > On Thu, Oct 20, 2022, 3:52 PM glen <gepropella at gmail.com <mailto:gepropella at gmail.com>> wrote:
>>>       >
>>>       >     I can't speak for anyone else. I'm a simulationist. Everything I do is in terms of analogy [⛧]. But there is no such thing as a fully transparent or opaque box. And there is no such thing as "software". All processes are executed by some material mechanism. So if by "computational metaphor", you mean the tossing out of the differences between any 2 machines executing the same code, then I'm right there with you in rejecting it. No 2 machines can execute the same (identical) code. But if you define an analogy well, then you can replace one machine with another machine, up to some similarity criterion. Equivalence is defined by that similarity criterion. By your use of the qualifier "merely" in "merely the equivalent", I infer you think there's something *other* than equivalence, something other than simulation. I reject that. It's all equivalence, just some tighter and some looser.
>>>       >
>>>       >     [⛧] Everyone's welcome to replace "analogy" with "metaphor" if they so choose. But, to me, "metaphor" tends to wipe away or purposefully ignore the pragmatics involved in distinguishing any 2 members of an equivalence class. The literary concept of "metaphor" has it right. It's a rhetorical, manipulative trick to help us ignore actual difference, whereas "analogy" helps us remember and account for differences and similarities. "Metaphor" is an evil word, a crucial tool in the toolkit for manipulators and gaslighters.
>>>       >
>>>       >
>>>       >     On 10/20/22 13:27, Prof David West wrote:
>>>       >      >
>>>       >      > Marcus and glen (and others on occasion) have posted frequently on the "algorithmic "equivalent" of [some feature] of consciousness, human emotion, etc.
>>>       >      >
>>>       >      > I am always confronted with the question of "how equivalent?" I am almost certain that they are not saying anything close to absolute equivalence - i.e., that the brain/mind is executing the same algorithm, albeit, perhaps, in a different programming language. But are their assertions meant to be "analogous to," "a metaphor for," or some other semi/pseudo equivalence?
>>>       >      >
>>>       >      > Perhaps all that is being said is that we have two black boxes into which we put the same inputs and from which we get the same outputs. Voila! We expose the contents of one black box: an algorithm executing on silicon. From that we conclude it does not matter what is happening inside the other black box; whatever it is, our now-white box is an 'equivalent'.
>>>       >      >
>>>       >      > Put another way: suppose I have two objects, A and B, each with an (ir)regular edge. In this case the irregular edge of A is an inverse match to that of B: when put together, there are no gaps between the two edges. They "fit."
>>>       >      >
>>>       >      > Assume that A and B have some means to detect if they "fit" together. I can think of algorithms that could determine fit, from a simplistic iteration across all points to see if there is a gap between each point and its neighbor, to some kind of collision detection.
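>>>       >      >
>>>       >      > [A minimal sketch of that simplistic iteration, assuming each edge is sampled as an array of heights at matching points and that A's edge is the inverse of B's:
>>>       >      >
>>>       >      > // Hypothetical point-by-point fit test: the gap at sample i is
>>>       >      > // edgeA[i] + edgeB[i]; the edges "fit" when every gap is within
>>>       >      > // a tolerance. Names and representation are illustrative only.
>>>       >      > function edgesFit(edgeA, edgeB, epsilon = 1e-6) {
>>>       >      >   if (edgeA.length !== edgeB.length) return false;
>>>       >      >   return edgeA.every((a, i) => Math.abs(a + edgeB[i]) < epsilon);
>>>       >      > }
>>>       >      > // edgesFit([0.2, -0.5, 1.0], [-0.2, 0.5, -1.0]) -> true]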
>>>       >      >
>>>       >      > Is it the case that whatever means A and B use to detect fit, it is *merely* the equivalent of such an algorithm?
>>>       >      >
>>>       >      > The roots of this question go back to my first two published papers, in _AI Magazine_ (then the 'journal of record' for AI research): one critical of the computational metaphor, the second a set of alternative metaphors of mind. An excerpt relevant to the above example of fit:
>>>       >      >
>>>       >      > Tactilizing Processor
>>>       >      >
>>>       >      > Conrad draws his inspiration from the ability of an enzyme to combine with a substrate on the basis of the physical congruency of their respective shapes (topography). This is a generalized version of the lock-and-key mechanism, as is the hormone-receptor matching discussed by Bergland. When the topographic shape of an enzyme (hormone) matches that of a substrate (receptor), a simple recognize-by-touch mechanism (like two pieces of a puzzle fitting together) allows a simple decision, binary state change, or process to take place, hence the label “tactilizing processor.”
>>>       >      >
>>>       >      > Hormones and enzymes, probably/possibly, lack the ability to compute (execute algorithms), so, at most, the black box equivalence might be used here.
>>>       >      >
>>>       >      > [BTW, tactilizing processors were built, but were extremely slow (the speed of chemical reactions), though they had some advantages derived from parallelism. Similar 'shape matching' computation was explored in DNA computing as well.]
>>>       >      >
>>>       >      > My interest in the issue is the (naive) question of whether our understanding of mind/consciousness is fatally impeded by putting all our research eggs into the simplistic 'algorithm box'.
>>>       >      >
>>>       >      > It seems to me that we have the CS/AI/ML equivalent of the quantum physics world where everyone is told to "shut up and compute" instead of actually trying to understand the domain and the theory.
>>>       >      >
>>>       >      > davew
>>
>>
> 

-- 
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ


