[FRIAM] TheoremDep

uǝlƃ ☣ gepropella at gmail.com
Fri Mar 29 12:56:53 EDT 2019



On 3/29/19 9:29 AM, Steven A Smith wrote:
> All I think Frank and Joe did was make a jump from TheoremDep the tool to imagining things one might do with the underlying ConceptDAG data structure...   "proof generation" I suppose.

Yes, I can see that.  The answer(s) to the question, though, are still interesting.  I tried to allude to it by offering the 3 alternatives: chain, tree, mesh from the foundations up to "sophisticated" conclusions.  It seems to me that if one reacts to something like TheoremDep with a sense of "concatenation", then that implies they think about this sort of reasoning as a chain or, at least, a tree.

For context, I have the same problem with what we've (I've?) come to call "physics-based biological modelers".  Their focus is on physically realistic simulation engines, upon which they implement chemically realistic reactions, upon which they implement biologically realistic mechanisms.  I (being the soft-X type of person I am) tend to focus on *relational* modeling, which can be usefully applied at any layer of the heterarchy.  Or, at least, I prefer a "middle out" method, starting at the layer where you understand/care the most.  The physics-based people tend to think in terms of chains and trees, whereas the relational people tend to think in terms of networks.  I call the physics-based people "Grand Unified Modelers" because they're seeking the One True Model.  ... very Monistic.

> The old fascination with why mathematics (as a pure abstraction) seems to line up so well with physical reality comes up again.   I think there is some kind of anthropic principle involved?   Do you have any parallax on this?

I tend to believe it boils down to *attention*, intentionality, purposeful behavior.  It strikes me that math can unambiguously describe *anything*, real or fantasy.  And where non-ambiguity is required, math will be successful.  Bands like Tool argue against me in some sense, because they use a lot of math in the creative composition of their tunes and lyrics.  But, then again, they're nerds and have high expectations for how tightly a performance hangs together ... which is, again, a requirement for non-ambiguity.  My guess is math would be less successful if we tried to use it to, say, foster creativity in children ... at least for now.

> I would *also* like to try to lay out the ConceptDAG data structure with enough interactivity to do as you suggest ... removing a node and seeing what "breaks"...  or possibly adjusting weights on the edges?  I'm not clear there is a meaningful semantic to that, but as usual I'm interested in trying to get a "big picture".  I did a project like this with the Gene Ontology in 2004 or so...

Very nice!  Thanks for including the graphs.

> The Gene Ontology is a partially ordered set which provided more constraints (and opportunities)... in this example, we didn't delete individual nodes, but could adjust the force-directed layout parameters as well as grab any node and drag it far from its equilibrium position.  This was useful for untangling local energy minima but also for inferring dependencies.

The author of TheoremDep cites Metacademy (here's a good example: https://metacademy.org/graphs/concepts/incompleteness_of_set_theory#focus=mh2y5zs9&mode=explore).  That strikes me as a system amenable to edge weight adjustment.  E.g. How important is it to understand Russell's Paradox, really, for understanding how ZF fits into this "lineage of reasoning"?
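The "remove a node and see what breaks" idea, combined with weighted edges, can be sketched in a few lines.  This is a toy illustration, not TheoremDep's actual data structure; the theorem names and weights are invented placeholders.

```python
# Hypothetical sketch of a ConceptDAG-style dependency graph: nodes are
# theorems, weighted edges are "depends on" links.  Removing a node then
# reveals everything that transitively rested on it.
from collections import defaultdict

class ConceptDAG:
    def __init__(self):
        self.deps = defaultdict(dict)  # node -> {dependency: weight}

    def add_dep(self, node, dependency, weight=1.0):
        self.deps[node][dependency] = weight

    def broken_by(self, removed):
        """Return every node that (transitively) depends on `removed`."""
        broken = set()
        changed = True
        while changed:
            changed = False
            for node, ds in self.deps.items():
                if node in broken:
                    continue
                if removed in ds or broken & ds.keys():
                    broken.add(node)
                    changed = True
        return broken

dag = ConceptDAG()
# Invented weights: a low weight might encode "nice to know but not essential".
dag.add_dep("ZF", "Russell's Paradox", weight=0.3)
dag.add_dep("Incompleteness", "ZF", weight=0.9)
print(sorted(dag.broken_by("Russell's Paradox")))  # ['Incompleteness', 'ZF']
```

Adjusting the weights wouldn't change what "breaks" in this naive version; a more interesting variant might only propagate breakage along edges above some importance threshold, which is one way to make the edge-weight semantics meaningful.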

In the kind of graph I'm imagining, it may be better to imagine the lemmas as edges and the nodes as something else (conceptions of reality, maybe?).  Lemmas would, then, be transitions from one state to another.
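That dual view (lemmas as edges, conceptions as nodes) amounts to a labeled transition system.  A minimal sketch, with entirely invented state and lemma names:

```python
# Toy rendering of the "lemmas as edges" idea: nodes are conceptions
# (states of understanding), and each lemma is a labeled transition
# between them.  All names here are placeholders for illustration.
transitions = {
    ("naive sets", "lemma: unrestricted comprehension fails"): "paradox-aware",
    ("paradox-aware", "lemma: separation schema"): "ZF-style sets",
}

def apply_lemma(state, lemma):
    """Follow a lemma-edge out of the current conception, if one exists."""
    return transitions.get((state, lemma), state)

s = apply_lemma("naive sets", "lemma: unrestricted comprehension fails")
s = apply_lemma(s, "lemma: separation schema")
print(s)  # prints "ZF-style sets"
```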

-- 
☣ uǝlƃ
