[FRIAM] privacy games

uǝlƃ ☣ gepropella at gmail.com
Tue May 26 10:25:22 EDT 2020


So, to recap:

The "holographic" principle of [non]privacy: All valid questions about one's inner world can be properly asked as questions about one's interaction with the outer world. (Or for those triggered by "inside" and "outside": All valid questions about processes beyond a boundary can be properly asked as questions about the surface of the boundary.)

1st order privacy: There's a combinatorial explosion of possible ways to decode the surface.
2nd order privacy: The map from encoder to decoder is many-to-many.
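As an illustrative sketch only (the inner states, encoders, and decoders below are my own toy inventions, not anything from the thread): a few lines of Python can show both orders at once, with one surface compatible with many inner states (1st order) and every encoder pairable with every decoder (2nd order).

```python
# Toy sketch of 1st and 2nd order privacy (all names hypothetical).
# Inner states get encoded to an observable "surface"; decoders map a
# surface back to a set of candidate inner states.
from itertools import product

INNER_STATES = ["angry", "anxious", "amused", "bored"]

# Encoders: inner state -> surface.
encoders = {
    "initial": lambda s: s[0],
    "length":  lambda s: str(len(s)),
}

# Decoders: surface -> candidate inner states.
decoders = {
    "by_initial": lambda surf: {s for s in INNER_STATES if s[0] == surf},
    "by_length":  lambda surf: {s for s in INNER_STATES if str(len(s)) == surf},
}

# 1st order: one surface is compatible with many inner states.
print(decoders["by_initial"](encoders["initial"]("angry")))
# -> {'angry', 'anxious', 'amused'}

# 2nd order: the observer can't tell which encoder produced the surface,
# so every (encoder, decoder) pairing is live -- a many-to-many map.
for (ename, enc), (dname, dec) in product(encoders.items(), decoders.items()):
    print(ename, "->", dname, ":", dec(enc("angry")))
```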

Feel free to continue to criticize [†] those. In the meantime, I'll just keep plugging along. >8^D My candidate for 3rd order privacy is *gaming*. We've talked (mostly SteveS, I think, though Jon's mentioned "adversarial" too) about instantaneous, planned obfuscation of one's encoder choices in order to defeat the decoder choice(s). Lying, manipulation, plausible deniability (Trump's "perfect phone call"), etc. all fall into this category. I think more positive things like non-linear prose (I'm thinking Joyce or Moorcock, maybe) and poetry might qualify as well, depending on the author's intentions. But the critical distinction between 2nd order and 3rd is the purposeful gaming of the encode-decode map, not the mere accident that the map is many-to-many. We've also talked a little about non-instantaneous co-evolution: the boundary is a dynamic thing, and the encoders and decoders chosen can feed off one another, with both historicity and layering/depth.
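What "purposeful gaming" of the map might look like, in a minimal sketch (the prefix encoding and candidate-set machinery are my own illustrative assumptions): an encoder that knows the decoder's strategy and deliberately picks the surface that leaves the decoder with the largest candidate set.

```python
# Hypothetical sketch of 3rd order gaming: the encoder knows the decoder
# and chooses its surface to maximize the decoder's ambiguity.

INNER_STATES = ["angry", "anxious", "amused", "bored"]

def decode(surface):
    # The decoder guesses every inner state consistent with the surface.
    return {s for s in INNER_STATES if s.startswith(surface)}

def honest_encode(inner):
    return inner[:3]  # a narrow, informative surface: little privacy

def gaming_encode(inner):
    # Among all truthful prefixes of the inner state, pick the one that
    # leaves the decoder with the *largest* candidate set.
    prefixes = [inner[:k] for k in range(1, len(inner) + 1)]
    return max(prefixes, key=lambda p: len(decode(p)))

print(decode(honest_encode("angry")))  # {'angry'} -- the decoder wins
print(decode(gaming_encode("angry")))  # 3 candidates -- ambiguity preserved
```

Note the gaming encoder never lies here; it just exploits its knowledge of the decoder, which is what distinguishes it from the mere many-to-many accident of 2nd order.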

At the root, the 3rd order is about "agency" and perhaps "algorithmicity" (thanks to both Jon and Nick for helping isolate these). It's difficult to imagine a purely mechanical homunculus being a formidable adversary in a privacy game. Sure, computer chess (or Watson) can win well-formulated games because they have access to computational powers a human opponent doesn't have. But if we could (somehow) ensure symmetry between the opponents, I expect we'd lose any *nonrandom* outcomes to any games they constructed. A lurking lemma somewhere in here is the idea that if humans are, ultimately, mechanical, then any game devised/played by 2 identical humans would reduce to tic-tac-toe (or tit-for-tat in the iterated prisoner's dilemma).
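That lurking lemma can be made concrete with a toy iterated prisoner's dilemma (my own minimal setup, not anything from the thread): two identical, deterministic tit-for-tat players collapse to a single fixed trajectory, with nothing left to game.

```python
# Toy IPD between two *identical, deterministic* tit-for-tat players.
# With perfect symmetry the whole game collapses to one fixed trajectory --
# the "reduces to tic-tac-toe" lemma in miniature.

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def play(rounds=10):
    h1, h2 = [], []
    for _ in range(rounds):
        m1, m2 = tit_for_tat(h2), tit_for_tat(h1)
        h1.append(m1)
        h2.append(m2)
    return h1, h2

h1, h2 = play()
print(h1 == h2 == ["C"] * 10)  # True: one deterministic, nonrandom outcome
```

Any asymmetry (noise, differing memory, a redefinition of the game itself) is exactly where the interesting, loopy behavior re-enters.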

But in any practical, real game we might find in the world we know, there seems to be something loopy therein ... some kind of meta-game, i.e. the game that includes [re]defining the game on top of playing the game. And *that's* the candidate for 3rd order privacy: meta-games, supergames, or hypergames. It's important to distinguish between well-founded and non-well-founded meta-games. But I wouldn't want to hinge the conception of this "holographic" principle on metaphysical choices of which axioms we should [not] include, even if it is tempting to distinguish Frank from EricC/Nick by their tendency to adopt one set of axioms over another. >8^D I think we can play meta-games regardless of whether our fundamental metaphysics is well-founded or not.

So, to sum up: Even *if* all valid questions about one's inside can be properly formed as questions about the surface, when the inside and the outside are allowed to *game* each other, we can meet even stronger privacy criteria.



[†] By "criticism", I mean the type of playing along, steelmanning, empathetic listening, constructive criticism to which I've tried to allude ... not sophist nit-picking about jargonal definitions of words, or appeals to authority requiring one first get a PhD in Peirce or the old dead phenomenologists, or pointing out that this principle is blatantly ridiculous in the first place. Etc. If I wanted peer-review, I wouldn't be posting this to a mailing list.

-- 
☣ uǝlƃ


