<div dir="ltr"><div dir="ltr">In response to the very valuable feedback I got, including from Rus in this group but also from others outside of this group, I reworked my deep learning training material very significantly. For if anybody is interested it's available from <a href="https://www.dropbox.com/s/y5fkp10ar5ze67n/deep%20learning%20training%20rev%2013012023.zip?dl=0">https://www.dropbox.com/s/y5fkp10ar5ze67n/deep%20learning%20training%20rev%2013012023.zip?dl=0</a><br><br>Just to make sure you guys are clear about the objective of the workshop, below is a quote from the training document. <br><br>But before I give the quote, just an answer to why am I doing this workshop?<br>a) There is plenty of very good material on the internet for free to go into deep learning for people with software skills. But none (that I know of) for those without any software expertise wanting to know and use deep learning <br>b) AI is making a splash in the world right now and there are (I think) many people without software skills yearning to know more about it.<br>c) It is really very easy (I think) for a professional without software skills to acquire what's necessary to apply deep learning for applications where tabular data is available to train the deep learning. This workshop intends to do exactly that. <br><br>Now for the quote from the training material:<br><p class="MsoNormal" style="margin:0pt 0pt 0.0001pt;font-family:Calibri">"This workshop aims to provide a broad overview of deep learning principles, rather than focusing on technical details. The goal is to give you a sense of how deep learning can be used to make predictions on practical problems using labeled data in spreadsheet form. By the end of the workshop, you will have the skills to apply deep learning to address challenges in your area of expertise.<br><br>During this workshop, we will be utilizing R or Python programming languages as a means of configuring and making predictions with labeled data in spreadsheet form. However, it is crucial to note that the primary focus of the workshop is not on programming itself, but rather on the fundamental concepts and techniques of deep learning. We will provide template programs that can be used with minimal modifications, and the main emphasis will be on guidance and exercises to solidify participants' understanding of the material. For those with little to no prior programming experience, we have included an optional section to introduce basic concepts of R or Python, but it is important to note that this section is only intended to provide a basic understanding and will not make you an expert in programming."</p><p class="MsoNormal" style="margin:0pt 0pt 0.0001pt;font-family:Calibri"><br></p><p class="MsoNormal" style="margin:0pt 0pt 0.0001pt;font-family:Calibri">Pieter</p></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, 9 Jan 2023 at 20:32, Steve Smith <<a href="mailto:sasmyth@swcp.com">sasmyth@swcp.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<p>The "celebration of the hand" project (coordinated with UNM's
Maxwell museum of anthropology) is nearly 15 years defunct now.
We made a tiny bit of progress on some LANL
small-business-assistance time from an ML researcher, a small bit
of seed-funding and matching time on my own part. The key to
this was the newly available prosumer-grade laser-scanners of the
time, as well as emerging photogrammetric techniques.<br>
</p>
<p>The point of the project was to augment what humans were already
doing... both pointing to similarities too subtle for a human to
notice and drawing on dimensions a human expert wouldn't be able to
identify directly. I haven't touched base on where that field
has gone in most of the intervening time. My
paleontological/archaeological partner in that endeavor went his
own way (he was more interested in developing a lifestyle business
for himself than in actually applying advanced techniques to the
problem at "hand" (pun acknowledged)), and a few years later
he died, which is part of the reason for not (re)visiting it
myself... <br>
</p>
<p>The issue relevant to the linked article(s) is mostly that we
were building modelless models (model-free learning) from the data
with the hope (assumption?) that *some* of the features that
emerged as good correlates of related/identical "hands" might
be ones that humans could detect or make sense of themselves.
The state of the art at the time was definitely an "art" and was
based on the expertise of contemporary flint knappers trying to
reproduce the patterns found in non-contemporary artifacts. We
did not do any actual work (just speculation-guided background
research) on pottery and textiles... the Maxwell Museum
researcher we worked with was also "overcome by events", simply
needing to develop the exhibits and not having the time/focus to
attend to the more researchy aspects... her interest was more in
textiles than in ceramics or lithics... which seemed to be the area
where we likely had the least advantage over human experts.</p>
<p>My niece works at the U of Az Archaeology dept cataloging the
accelerating number of artifacts coming in. At some point I can
imagine an automated classification system taking over much of the
"mundane" aspects of this work... like a self-driving car that
at least provides collision avoidance and lane following...<br>
</p>
<p><br>
</p>
<div>On 1/9/23 10:29 AM, Pieter Steenekamp
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Thanks for the references, I've briefly looked at
them and am looking forward to perusing them more closely. The
interpretation of ML is a big thing of course. The machine gives
you the results and it would sometimes be nice if it can be
accompanied by some explanation of how it achieved it.<br>
<br>
What you describe about your work sounds very interesting
indeed. My gut feel is that, at least with the current
generation of AI, human ingenuity and judgement would be far
superior to AI in recognising similarities in the "hand" of the
artisans involved. But AI could well be a powerful tool in
assisting the humans? </div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Mon, 9 Jan 2023 at 18:50,
Steve Smith <<a href="mailto:sasmyth@swcp.com" target="_blank">sasmyth@swcp.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<p>I was hoping this article would have more meat to it, but
the main point seems highly relevant to a
practical/introductory workshop such as the one you are
developing:<br>
</p>
<p><a href="https://dl.acm.org/doi/10.1145/3544903.3544905" target="_blank">The Need for
Interpretable Features: Taxonomy and Motivation <br>
</a></p>
<p><a href="https://scitechdaily.com/mit-taxonomy-helps-build-explainability-into-the-components-of-machine-learning-models/" target="_blank">https://scitechdaily.com/mit-taxonomy-helps-build-explainability-into-the-components-of-machine-learning-models/</a></p>
<p>My limited experience with ML is that the
convo/invo-lutions that developers go through to make
their learning models work well tend to obscure the
interpretability of the results. Some of this seems
unavoidable (inevitable?), particularly the highly
technical reserved terminology that often exists in a
given domain, but this team purports to provide
guidelines for minimizing the consequences...</p>
<p>My main experience with this domain involved an attempt
to find distance measures between the flake patterns in 3D
scanned archaeological artifacts, starting with lithics
but also aspiring to work with pottery and textiles...
essentially trying to recognize similarities in the "hand"
of the artisans involved.<br>
</p>
<p><br>
</p>
<div>On 1/9/23 1:29 AM, Pieter Steenekamp wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr"><a class="gmail_plusreply" id="m_4420140071018442964m_8371091482889110803plusReplyChip-0">@Russ, </a>
<div><a class="gmail_plusreply"><br>
</a>a) I appreciate the suggestion to include a simple
neural network that can make predictions based on
inputs and be trained using steepest descent to
optimize its weights. This would be a valuable
addition to my training material, as it provides a
foundational understanding of how neural networks
work. <br>
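<br>
For illustration, a minimal sketch of the kind of network this could be (the data and names below are made up for illustration, not taken from the training material): a single linear neuron trained by steepest descent on a squared-error loss.<br>
<pre>
# Sketch: one linear neuron y = w.x + b, trained by steepest
# (gradient) descent on mean squared error.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))         # 200 input pairs
y = X @ np.array([2.0, -3.0]) + 0.5      # targets from a known rule

w, b, lr = np.zeros(2), 0.0, 0.1         # weights, bias, learning rate
for _ in range(500):
    err = X @ w + b - y                  # prediction errors
    w -= lr * (2 / len(X)) * (X.T @ err) # dL/dw for L = mean(err**2)
    b -= lr * (2 / len(X)) * err.sum()   # dL/db

print(w, b)  # converges toward [2, -3] and 0.5
</pre>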
<br>
b) My focus is on providing practical training for
professionals with deep domain knowledge but limited
software experience, who are looking to utilize deep
learning as a tool in their respective fields. In
contrast, it seems that your focus is on understanding
the inner workings of deep learning. Both approaches
have their own merits, and it is important to cater to
the needs and goals of different learners.<br>
<br>
Pieter</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Sun, 8 Jan 2023 at
21:20, Marcus Daniels <<a href="mailto:marcus@snoutfarm.com" target="_blank">marcus@snoutfarm.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="auto"> The main defects of both R and Python
are a lack of a typing system and high performance
compilation. I find R still follows (is used by)
the statistics research community more than Python.
Common Lisp was always better than either.<br>
<br>
<div dir="ltr">Sent from my iPhone</div>
<div dir="ltr"><br>
<blockquote type="cite">On Jan 8, 2023, at 11:03
AM, Russ Abbott <<a href="mailto:russ.abbott@gmail.com" target="_blank">russ.abbott@gmail.com</a>>
wrote:<br>
<br>
</blockquote>
</div>
<blockquote type="cite">
<div dir="ltr">
<div dir="ltr">
<div style="font-family:arial,helvetica,sans-serif;font-size:small;color:rgb(0,0,0)">As
indicated in my original reply, my interest
in this project grows from my relative
ignorance of Deep Learning. My career has
focussed exclusively on symbolic computing.
I've worked with and taught (a) functional
programming, logic programming, and related
issues in advanced Python; (b) complex
systems, agent-based modeling, genetic
algorithms, and related evolutionary
processes, (c) a bit of constraint
programming, especially in MiniZinc, and (d)
reinforcement learning as Q-learning, which
is reinforcement learning without neural
nets. I've always avoided neural nets--and
more generally numerical programming of any
sort. </div>
<div style="font-family:arial,helvetica,sans-serif;font-size:small;color:rgb(0,0,0)"><br>
</div>
<div style="font-family:arial,helvetica,sans-serif;font-size:small;color:rgb(0,0,0)">Deep
learning has produced so many impressive
results that I've decided to devote much of
my retirement life to learning about it. I
retired at the end of Spring 2022 and (after
a break) am now devoting much of my time to
learning more about Deep Neural Nets. So
far, I've dipped my brain into it at various
points. I think I've learned a fair amount.
For example, </div>
<div style="font-family:arial,helvetica,sans-serif;font-size:small;color:rgb(0,0,0)">
<ul>
<li>I now know how to build a neural net
(NN) that adds two numbers using a
single layer with a single neuron. It's
really quite simple and is, I think, a
beautiful example of how NNs work. If I
were to teach an intro to NNs I'd start
with this. (A sketch appears after this
list.)</li>
<li>I've gone through the Kaggle Deep
Learning sequence mentioned earlier. </li>
<li>I found a paper that shows how you can
approximate any continuous function
to any degree of accuracy with a NN
with a single hidden layer (the
universal approximation theorem). (This
is a very nice result, although I
believe it's not used explicitly in
building serious Deep NN systems.)</li>
<li>From what I've seen so far, most
serious DNNs are built using Keras
rather than PyTorch.</li>
<li>I've looked at Jeremy Howard's <a href="http://fast.ai" target="_blank">fast.ai</a>
material. I was going to go through the
course but stopped when I found that it
uses PyTorch. Also, it seems to be built
on <a href="http://fast.ai" target="_blank">fast.ai</a>
libraries that do a lot of the work for
you without explanation. And it seems
to focus almost exclusively on
Convolutional NNs. </li>
<li>My impression of DNNs is that to a
great extent they are <i>ad hoc</i>.
There is no good way to determine the
best architecture to use for a given
problem. By architecture, I mean the
number of layers, the number of neurons
in each layer, the types of layers, the
activation functions to use, etc. </li>
<li>All DNNs that I've seen use Python as
code glue rather than R or some other
language. I like Python--so I'm pleased
with that.</li>
<li>To build serious NNs one should learn
the Python libraries NumPy (array
manipulation) and Pandas (data
processing). NumPy especially seems to
be used for virtually all DNNs that I've
seen. </li>
<li>Keras and probably PyTorch include a
number of special-purpose neurons and
layers that can be included in one's
DNN. These include: a DropOut layer,
LSTM (long short-term memory) neurons,
convolutional layers, recurrent neural
net (RNN) layers, and more recently
transformers, which get credit for
ChatGPT and related programs. My
impression is that these special-purpose
layers are <i>ad hoc</i> in the same
sense that functions or libraries that
one finds useful in a programming
language are <i>ad hoc</i>. They have
been very important for the success of
DNNs, but they came into existence
because people invented them in the same
way that people invented useful
functions and libraries. </li>
<li>NN libraries also include a menagerie
of activation functions. An activation
function acts as the final control on
the output of a layer. Different
activation functions are used for
different purposes. To be successful in
building a DNN, one must understand
what those activation functions do for
you and which ones to use. (A sketch
appears after this list.)</li>
<li>I'm especially interested in DNNs that
use reinforcement learning. That's
because the first DNN work that
impressed me was DeepMind's DNNs that
learned to play Atari games--and then
Go, etc. An important advantage of
Reinforcement Learning (RL) is that it
doesn't depend on mountains of labeled
data. </li>
<li>I find RL systems more interesting
than image recognition systems. One of
the striking features of many image
recognition systems is that they can be
thrown off by changing a small number of
pixels in an image. The changed image
would look to a human observer just like
the original, but it might fool a
trained NN into labeling the image as a
banana rather than, say, an automobile,
which is what it really is. To address
this problem people have developed
adversarial training, which attempts
to find such weaknesses in
a neural net during training and then to
train the NN not to have those
weaknesses. This is a fascinating
result, but as far as I can tell, it
mainly shows how fragile some NNs are
and doesn't add much conceptual depth to
one's understanding of how NNs work. </li>
</ul>
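<div>A minimal sketch of the single-neuron adder mentioned in the list above (an illustration assuming Keras; the exact construction could differ):</div>
<pre>
# Sketch: a single Dense neuron with a linear activation can learn
# addition exactly (weights -> [1, 1], bias -> 0).
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, (10000, 2))   # pairs (a, b)
y = X.sum(axis=1)                      # targets a + b

model = keras.Sequential([
    keras.Input(shape=(2,)),
    keras.layers.Dense(1, activation="linear"),  # one layer, one neuron
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=25, verbose=0)

print(model.predict(np.array([[3.0, 4.0]]), verbose=0))  # ~[[7.0]]
print(model.get_weights())  # weights near [[1], [1]], bias near [0]
</pre>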
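<div>And for the point about activation functions above, a small sketch of the conventional matches between output activation and task (again illustrative, assuming Keras):</div>
<pre>
# Sketch: conventional activation choices. ReLU is a common default
# for hidden layers; the output activation is matched to the task.
from tensorflow import keras

# Binary classification: sigmoid squashes the output into (0, 1) so it
# reads as a probability; paired with binary crossentropy.
binary = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
binary.compile(optimizer="adam", loss="binary_crossentropy")

# Multiclass classification: softmax yields a probability distribution
# over the classes; paired with categorical crossentropy.
multiclass = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(5, activation="softmax"),
])
multiclass.compile(optimizer="adam", loss="categorical_crossentropy")

# Regression: a linear output leaves predictions unbounded; paired
# with mean squared error.
regression = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="linear"),
])
regression.compile(optimizer="adam", loss="mse")
</pre>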
<div>I'm impressed with this list of things
I sort of know. If you had asked me before
I started writing this email I wouldn't
have thought I had learned as much as I
have. Even so, I feel like I don't
understand much of it beyond a superficial
level. </div>
<div><br>
</div>
<div>
<div>So far I've done all my exploration
using Google Colab (Google's Python
notebook implementation) and Kaggle's
similar Python notebook implementation.
(I prefer Colab to Kaggle.) Using either
one, it's super nice not to have to
download and install anything!</div>
<div><br>
</div>
</div>
<div style="color:rgb(34,34,34);font-family:Arial,Helvetica,sans-serif">
<div dir="ltr">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div>I'm continuing my journey to learn more
about DNNs. I'd be happy to have company
and to help develop materials to teach
about DNNs. (Developing teaching materials
always helps me learn the subject being
covered.)</div>
<div><br>
</div>
</div>
<div dir="ltr">--
Russ Abbott
<br>
Professor Emeritus, Computer Science<br>
California State University, Los Angeles<br>
</div>
<br>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Sun, Jan
8, 2023 at 1:48 AM glen <<a href="mailto:gepropella@gmail.com" target="_blank">gepropella@gmail.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"> Yes, the
money/expertise bar is still pretty high.
But TANSTAAFL still applies. And the
overwhelming evidence is coming in that
specific models do better than those trained
up on diverse data sets, "better" meaning
less prone to subtle bullsh¡t. What I find
fascinating is that tools like OpenAI's
*facilitate* trespassing. We have a
wonderful bloom of non-experts claiming they
understand things like "deep learning". But
do they? An old internet meme is brought to
mind: "Do you even Linear Algebra, bro?"
>8^D<br>
<br>
On 1/8/23 01:06, Jochen Fromm wrote:<br>
> I have finished a number of Coursera
courses recently, including "Deep Learning
&amp; Neural Networks with Keras", which was
ok but not great. The problems with deep
learning are<br>
> <br>
> * to achieve impressive results like
ChatGPT from OpenAI or LaMDA from Google you
need to spend millions on hardware<br>
> * only big organisations can afford to
create such expensive models<br>
> * the resulting network is a black box
and it is unclear why it works the way it
does<br>
> <br>
> In the end it is just the same old
backpropagation that has been known for
decades, just on more computers and trained
on more data. Peter Norvig calls it "The
unreasonable effectiveness of data"<br>
> <a href="https://research.google.com/pubs/archive/35179.pdf" rel="noreferrer" target="_blank">
https://research.google.com/pubs/archive/35179.pdf</a><br>
> <br>
> -J.<br>
> <br>
> <br>
> -------- Original message --------<br>
> From: Russ Abbott <<a href="mailto:russ.abbott@gmail.com" target="_blank">russ.abbott@gmail.com</a>><br>
> Date: 1/8/23 12:20 AM (GMT+01:00)<br>
> To: The Friday Morning Applied
Complexity Coffee Group <<a href="mailto:friam@redfish.com" target="_blank">friam@redfish.com</a>><br>
> Subject: Re: [FRIAM] Deep learning
training material<br>
> <br>
> Hi Pieter,<br>
> <br>
> A few comments.<br>
> <br>
> * Much of the actual deep learning
material looks like it came from the Kaggle
"Deep Learning <<a href="https://www.kaggle.com/learn/intro-to-deep-learning" rel="noreferrer" target="_blank">https://www.kaggle.com/learn/intro-to-deep-learning</a>>"
sequence.<br>
> * In my opinion, R is an ugly and /ad
hoc/ language. I'd stick to Python.<br>
> * More importantly, I would put the
How-to-use-Python stuff into a preliminary
class. Assume your audience knows how to use
Python and focus on Deep Learning. Given
that, there is only a minimal amount of
information about Deep Learning in the
write-up. If I were to attend the workshop
and thought I would be learning about Deep
Learning, I would be disappointed--at least
with what's covered in the write-up.<br>
> <br>
> I say this because I've been
looking for a good intro to Deep Learning.
Even though I taught Computer Science for
many years, and am now retired, I avoided
Deep Learning because it was so
non-symbolic. My focus has always been on
symbolic computing. But Deep Learning has
produced so many extraordinarily impressive
results, I decided I should learn more about
it. I haven't found any really good
material. If you are interested, I'd be more
than happy to work with you on developing
some introductory Deep Learning material. <br>
> <br>
> -- Russ Abbott<br>
> Professor Emeritus, Computer Science<br>
> California State University, Los
Angeles<br>
> <br>
> <br>
> On Thu, Jan 5, 2023 at 11:31 AM Pieter
Steenekamp <<a href="mailto:pieters@randcontrols.co.za" target="_blank">pieters@randcontrols.co.za</a>
<mailto:<a href="mailto:pieters@randcontrols.co.za" target="_blank">pieters@randcontrols.co.za</a>>>
wrote:<br>
> <br>
> Thanks to the kind support of
OpenAI's ChatGPT, I am in the process of
gathering materials for a comprehensive and
hands-on deep learning workshop. Although it
is still a work in progress, I welcome any
interested parties to take a look and
provide their valuable input. Thank you!<br>
> <br>
> You can get it from:<br>
> <a href="https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0" rel="noreferrer" target="_blank">https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0</a>
<<a href="https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0" rel="noreferrer" target="_blank">https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0</a>><br>
> <br>
<br>
-- <br>
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ<br>
<br>
</blockquote>
</div>
</div>
</blockquote>
</div>
</blockquote>
</div>
<br>
</blockquote>
</div>
</blockquote>
</div>
<br>
</blockquote>
</div>
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .<br>
FRIAM Applied Complexity Group listserv<br>
Fridays 9a-12p Friday St. Johns Cafe / Thursdays 9a-12p Zoom <a href="https://bit.ly/virtualfriam" rel="noreferrer" target="_blank">https://bit.ly/virtualfriam</a><br>
to (un)subscribe <a href="http://redfish.com/mailman/listinfo/friam_redfish.com" rel="noreferrer" target="_blank">http://redfish.com/mailman/listinfo/friam_redfish.com</a><br>
FRIAM-COMIC <a href="http://friam-comic.blogspot.com/" rel="noreferrer" target="_blank">http://friam-comic.blogspot.com/</a><br>
archives: 5/2017 thru present <a href="https://redfish.com/pipermail/friam_redfish.com/" rel="noreferrer" target="_blank">https://redfish.com/pipermail/friam_redfish.com/</a><br>
1/2003 thru 6/2021 <a href="http://friam.383.s1.nabble.com/" rel="noreferrer" target="_blank">http://friam.383.s1.nabble.com/</a><br>
</blockquote></div></div>