[FRIAM] by any means necessary

Frank Wimberly wimberly3 at gmail.com
Tue Feb 15 15:51:15 EST 2022


Oh good.  Let's replace universities with trucker blockades.

The broadly focused activities at UC Davis consist of many narrowly focused
projects.

Non sequitur:  when I graduated from high school I applied to only two
universities:  UC Davis and one other.


---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

On Tue, Feb 15, 2022, 1:42 PM glen <gepropella at gmail.com> wrote:

> Hm. Marcus' layout was pretty clear, I thought. If we take the utilitarian
> seriously, there is some calculus for more good, less bad. But that
> calculus isn't simple. And even if we can simplify it, there's no reason to
> believe it'll be useful in the end. What the Neuralink ⇔ UCDavis kerfuffle
> demonstrates is that capital[ism] is ethic[al|s]. It's similar to the
> conclusion that technology is not agnostic to ethics/values.
>
> But, as Marcus has pointed out in past threads (and as I've agreed),
> some problems (may) *require* the consolidation of funds. Whether
> consolidation happens through celebrity or taxation may be irrelevant to
> the fact that large pools of money are necessary ... Big Science, Big Tech,
> whatever. So, in the end, in a capitalist society, we pool our assets via
> private ownership of things (or the "means of production" ... whatever that
> means). But our universities, largely socialist in the past, pool their
> money through taxation and award funding.
>
> We're seeing the death of the university. But we're also seeing the birth
> of some things like "community organizing", trucker blockades, Consilience
> Projects, etc. The ethical [ab]use of animals is just one small part of the
> ethical calculus we use to reason over all these things. But it's an
> important one, not merely because of our technological progress in AI (if
> not AGI) but because of our progress in consciousness studies.
>
> Can the world remain funded through private ownership? Or, asked in
> another way, should broadly purposed research universities like UC Davis
> accept funding from narrowly purposed corporations? If so, why? If not, why
> not?
>
> On 2/15/22 12:26, Steve Smith wrote:
> > Thanks for having this conversation in front of us. I'm pretty invested
> in these kinds of issues, and they are rarely discussed openly, IMO.
> >
> > Perhaps you can unpack for me a little (or say it another way so I can
> gain my own parallax):
> >
> >     In our capitalist society, is it reasonable for Neuralink to be
> less susceptible to the flattening you describe by aggregating (not summing
> over) all subjects' projections from a high-dimensional construct?
> >
> > On 2/15/22 12:56 PM, glen wrote:
> >> Excellent! Thanks. However, it's also important to note that the
> lawsuit is against UC Davis, not Neuralink. So, to whatever extent
> Neuralink funding, mixed with taxpayer funding, drives university research
> (and possibly other things like overhead, or paying a percentage of salary
> for some with teaching loads, etc.), those backseating costs can deeply
> impact whatever it is we call a research university.
> >>
> >> I'm about halfway into my "evaluation" of
> https://consilienceproject.org/. What I've seen so far has a healthy
> plating (I was going to say veneer, but that's too thin) of pretty words.
> But those pretty words sound a tiny bit like Neuralink's corporatized
> strawman/response to these accusations. I bring up Consilience because it
> sits between a for-profit company and a research university. On
> Consilience's About page, you see two ethical commitments:
> >>
> >> • collective attribution of authorship, and
> >> • transparency in methodology
> >>
> >> These may seem a bit contradictory to some observers. My guess is that,
> given some time and effort (maybe even semi-automated NLP computation), I
> could ferret out who wrote which featured article. What I'd like made
> transparent, though, is who contributes what to each article. (This is a
> professional task I have, to some extent, with my clients ... so it's not a
> mere hobby.)
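
A rough sketch of what that semi-automated attribution might look like, in
Python. The function-word list and the article texts are stand-ins, and real
stylometry would need far more care than this; it only illustrates the idea
of comparing topic-independent word habits:

    from collections import Counter
    import math

    # Frequencies of common function words are a classic, if crude,
    # authorship signal: largely topic-independent but habit-dependent.
    FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that",
                      "is", "but", "however", "which", "we"]

    def profile(text):
        """Relative frequency of each function word in a text."""
        words = text.lower().split()
        total = max(len(words), 1)
        counts = Counter(words)
        return [counts[w] / total for w in FUNCTION_WORDS]

    def cosine(u, v):
        """Cosine similarity between two frequency profiles."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    # Hypothetical featured articles; similar profiles hint at a
    # shared (or heavily overlapping) set of authors.
    article_a = "the text of one featured article would go here"
    article_b = "and the text of another featured article here"
    print(cosine(profile(article_a), profile(article_b)))
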
> >>
> >> Going back to the lawsuit against UC Davis and the three-example spectrum
> (and perhaps even the political tangent SteveS raised), where does
> Neuralink end and UC Davis begin? In our capitalist society, is it
> reasonable for Neuralink to be less susceptible to the flattening you
> describe by aggregating (not summing over) all subjects' projections from a
> high-dimensional construct?
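
One possible reading of "aggregating (not summing over)", sketched in Python
with entirely invented data: keep the indexed collection of per-subject
projections intact instead of collapsing it to one vector, so the flattening
never happens and disagreement between subjects stays visible:

    import random

    random.seed(0)
    DIM = 8  # dimensionality of the hypothetical construct

    # Each subject projects the high-dimensional construct into their
    # own view; here, just a random vector per hypothetical subject.
    projections = {f"subject_{i}": [random.gauss(0.0, 1.0) for _ in range(DIM)]
                   for i in range(3)}

    # Summing over subjects flattens: who saw what is lost.
    flattened = [sum(vec[d] for vec in projections.values())
                 for d in range(DIM)]

    # Aggregating keeps the collection as-is, indexed by subject.
    aggregated = projections

    print(flattened)
    print(sorted(aggregated))
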
> >>
> >> We see a similar thread in the "academic free speech" rhetoric the
> alt-right is pushing these days (though there are lefty exceptions) ... aka
> when is an academic not talking as an academic? And in the Barrett and
> Gorsuch exhortations that they're not partisan hacks ... even when talking
> at a partisan event.
> >>
> >> [sigh] I know these fluffy issues aren't interesting to most people.
> It's way easier to shut up and calculate. But not only are they interesting
> to me, I think they're necessary, then, now, and later.
> >>
> >> On 2/15/22 11:30, Marcus Daniels wrote:
> >>> For some activity there will be a mesh of consequences that, perhaps
> with enough transparency, debate, and observation, could be quantified as a
> large graph of the facts of the matter.  Across this graph, one could apply
> a subject's utility function to each of those consequences.   If some of
> the consequences are both illegal and observable, and a node represented a
> risk to the subject doing the assessment of the graph, then that node would
> probably yield a negative utility for most subjects, and perhaps it would
> overwhelm the positive evaluations across other nodes.   One could perform
> the same procedure across all possible subjects.   The sum would be a
> social evaluation of the mesh of consequences.  I think it would not be
> very useful, and it would not even address externalized costs.  Throughout
> this procedure the subjects' utility functions would all be subject to
> advertising, propaganda, religion, blood sugar, and hormones.  Measure
> twice and you could get a different
> >>> answer.
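
A minimal sketch of that procedure, in Python. The consequence nodes, the
subjects, and the utility numbers are all invented placeholders, not anything
from the thread; it only illustrates the node-wise, then subject-wise, sum
described above:

    from typing import Dict

    # Only node-wise utilities enter the evaluation described above,
    # so the graph's edges are omitted in this sketch.
    CONSEQUENCES = ["animal_harm", "medical_advance", "legal_risk", "pr_value"]

    # One utility function per subject, here just a lookup table.
    # Every name and number is an invented placeholder.
    SUBJECT_UTILS: Dict[str, Dict[str, float]] = {
        "subject_a": {"animal_harm": -5.0, "medical_advance": 3.0,
                      "legal_risk": -8.0, "pr_value": 0.5},
        "subject_b": {"animal_harm": -1.0, "medical_advance": 6.0,
                      "legal_risk": -8.0, "pr_value": 2.0},
    }

    def subject_evaluation(utils: Dict[str, float]) -> float:
        # An illegal, observable node ("legal_risk") carries a large
        # negative utility that can overwhelm the positive nodes.
        return sum(utils[c] for c in CONSEQUENCES)

    # The same procedure across all subjects; the sum is the
    # "social evaluation of the mesh of consequences."
    social = sum(subject_evaluation(u) for u in SUBJECT_UTILS.values())
    print(social)  # says nothing about externalized costs

Perturb any subject's table (advertising, hormones, blood sugar) and the sum
moves, which is the "measure twice" point.
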
> >>>
> >>> If there are externalized costs that need to be recognized for the
> survival of humans, then humans will have to create laws with large risks
> for those that don't comply with them. (Case-by-case harassment,
> vigilantism, or terrorism wouldn't scale as well.)   My guess in this
> Neuralink case is that, if there were any deviations from best practices,
> they will be aware of that risk in the future.   In the cynical view of it
> being propaganda, well, yes, they'll be motivated to make the best kind
> they can and to set things up to compartmentalize the most sensitive or
> emotionally charged information.
> >>
>
> --
> glen
> When elephants fight, it is the grass that suffers.
>
> .-- .- -. - / .- -.-. - .. --- -. ..--.. / -.-. --- -. .--- ..- --. .- - .
> FRIAM Applied Complexity Group listserv
> Zoom Fridays 9:30a-12p Mtn UTC-6  bit.ly/virtualfriam
> un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/
> archives:
>  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
>  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
>

