The Book of Why

Judea Pearl’s interventions for exploring cause and effect can be discussed in terms of families of related Bayesian networks, so IMO they should definitely be implemented in future versions of Edward.

Judea Pearl thinks current AI is mere curve fitting; human-like AI is much richer. Reproducing it with software like Edward will require implementing his interventions. Some R packages and the commercial software BayesiaLab (no affiliation with me) already support some basic Pearl interventions (the do-calculus). I hope Edward does too, someday soon.
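
To make the “families of related Bayesian networks” view above concrete, here is a minimal sketch in plain NumPy (not Edward’s API, and not how any particular package implements the do-calculus; all variables and probabilities below are made up). An intervention do(X = x) corresponds to a related network in which the mechanism that generates X is replaced by the constant x, while every other mechanism is kept:

```python
# Toy network Z -> X -> Y with Z -> Y (Z confounds X and Y).
# do(X = x) swaps out only the mechanism that generates X.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def sample(do_x=None):
    z = rng.binomial(1, 0.5, size=n)                     # confounder
    if do_x is None:
        x = rng.binomial(1, np.where(z == 1, 0.8, 0.2))  # observational mechanism
    else:
        x = np.full(n, do_x)                             # intervened: mechanism replaced
    y = rng.binomial(1, 0.2 + 0.3 * x + 0.4 * z)         # outcome mechanism unchanged
    return z, x, y

# Observational P(Y=1 | X=1) vs interventional P(Y=1 | do(X=1)).
_, x_obs, y_obs = sample()
p_cond = y_obs[x_obs == 1].mean()
_, _, y_do = sample(do_x=1)
p_do = y_do.mean()
print(round(p_cond, 3), round(p_do, 3))
```

Because Z confounds X and Y, the observational conditional and the interventional probability come out different, which is exactly the distinction the intervention machinery is meant to capture.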

Finally, if you think, as I do, that classical Bayesian networks are a super useful way of thinking about probability and statistics, I suggest that their quantum generalization, quantum Bayesian networks, is a vast and at present mostly unexplored and untapped frontier. Quantum mechanics is, after all, an incredibly successful statistical theory, the basis of most new physics for the past century. So combine Bayesian networks with quantum mechanics and quantum computing, and you are bound to get something super cool and useful.

Thanks for sharing, @rrtucci.

Any generative model is a causal model. (More formally, any generative process can be rewritten in terms of structural equations, where sampling corresponds to uniformly distributed nuisance variables and the functional mechanisms are inverse CDFs.) So Edward already supports causality. Causality is really about guaranteeing the right assumptions about your data so that you can go from probabilistic inference to causal inference.
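
The inverse-CDF rewriting can be made concrete with a minimal sketch in plain NumPy/SciPy (not Edward’s API): the sampling statement z ~ Normal(0, 1) becomes a structural equation z = F⁻¹(u) with exogenous noise u ~ Uniform(0, 1), where F is the normal CDF.

```python
# Generative view vs structural-equation view of the same sampling statement.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Generative view: draw z directly from Normal(0, 1).
z_generative = rng.normal(loc=0.0, scale=1.0, size=10_000)

# Structural view: exogenous uniform noise pushed through the inverse CDF.
u = rng.uniform(size=10_000)
z_structural = norm.ppf(u)  # same marginal distribution as z_generative

print(z_generative.mean(), z_structural.mean())  # both close to 0
print(z_generative.std(), z_structural.std())    # both close to 1
```

The same trick applies to any sampling statement whose CDF is invertible, which is why a generative program can always be read as a set of structural equations plus exogenous noise.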

For example, we applied Edward to causal models for GWAS; see Tran and Blei (2018).
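
For readers without a genetics background, the basic causal difficulty in GWAS-style data can be simulated in a few lines. This is not the model or data pipeline from Tran and Blei (2018), just a generic toy with made-up sizes and coefficients: a latent confounder (think ancestry) drives both the SNP-like treatments and the trait, so a naive regression is biased while adjusting for (or, roughly speaking, modeling) the confounder is not.

```python
# Toy confounded treatment/outcome simulation; all numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(42)
n, m = 1_000, 5                                  # individuals, binary "SNP-like" treatments

z = rng.normal(size=n)                           # latent confounder
p = 1.0 / (1.0 + np.exp(-(0.5 * z[:, None] - 1.0)))
x = rng.binomial(1, p, size=(n, m))              # treatments depend on the confounder
beta = rng.normal(scale=0.3, size=m)             # true causal effects
y = x @ beta + 1.5 * z + rng.normal(scale=0.5, size=n)  # outcome depends on x and z

# Naive regression of y on x is biased because z is unobserved;
# including z in the regression recovers the causal effects.
ones = np.ones((n, 1))
beta_naive = np.linalg.lstsq(np.column_stack([ones, x]), y, rcond=None)[0][1:]
beta_adjusted = np.linalg.lstsq(np.column_stack([ones, x, z]), y, rcond=None)[0][1:m + 1]
print(np.round(beta, 2))
print(np.round(beta_naive, 2))
print(np.round(beta_adjusted, 2))
```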

This may not be obvious to non-experts. It would be great to have a tutorial about all this at some point.


Thanks, Dustin. I was not aware of this and will try to understand it better. My background is in physics, in which I have a PhD. I wish there were more communication between the physics and statistics communities. Physicists often reinvent the wheel because they are too proud to consult those who have been building wheels since the Stone Age.

Hi, Dustin.

Your implicit causal model for GWAS is really cool, and its performance at identifying causal effects is impressive. I have read the paper several times, and along the way I went back to your earlier papers on implicit generative models and likelihood-free variational inference. I have benefited a lot from the paper and gained a better understanding of causality, especially probabilistic causal models.

I really appreciate that you provided the core code for the ICMs of the treatment model (i.e., SNPs) and the outcome model (traits). I am also wondering whether the full code, or a demo, is available, since I am not familiar with genetics or GWAS and find myself lost in the data simulation and pre-processing.

Thanks very much.


Dachylong, perhaps Dustin would appreciate your assistance in writing a notebook about his paper with Blei, so that it can be included in the Edward tutorials.