29 Nov 21
07:50

Longevity Book Reviews

I've been increasingly interested in longevity for the past few years but haven't written much about it. Part of the reason is that I've tried to keep my finances out of this blog, and part of my interest in longevity is in investing. I want to talk about that in a future post. This post is about a curated selection of longevity-focused books that I've read.
Longevity research has been slowly but steadily accelerating since the '90s. There has been increasing progress since the late 19th century, but until recently it was considered 'career suicide' to dedicate one's research to this domain.
This post has two purposes: to capture and share my thoughts on some of the longevity books that I've read, and, eventually, to provide a useful guide for anyone looking to start exploring longevity biotech through a casual non-fiction book. There are a lot of books labeled as longevity books that are really just lifestyle, diet, fitness, cosmetics, and supplement books. Those topics are tangentially related to longevity, but they are not what the most interesting and impactful part of the longevity space is primarily concerned with. The authors of these books are often not even aware of the high likelihood that maximum human lifespan and healthspan will increase by significant amounts this century. To compound the issue, some of the authors are celebrities whose books are about 'embracing aging', such as Cameron Diaz, and their fame means that if you search for 'longevity book' on Amazon you will see one of these books first.
I've selected four important longevity books that are targeted at a general audience, each providing different value. I will do mini-reviews of them here, focusing on what each adds over the others. The books I'm considering are written by world-class researchers who break down the research trends and explain the cellular mechanisms that could enable a future in which aging is slowed, stopped, and eventually reversed. In case you are new to this idea and it sounds suspicious, perhaps because of the history of dubious claims in supplements and alternative medicine, some appeals to authority may be useful to keep you from closing the tab. The people doing the important research here include PhDs and professors at universities such as Harvard and Stanford, who publish in top journals such as Nature and Cell. Institutional capital, from funds and venture capital, is backing a growing number of these companies, which requires significant diligence to vet the ideas. There are also publicly traded companies in the space, as well as highly visible private ventures such as Alphabet's Calico and Altos Labs, whose recently announced $270M round was backed by Jeff Bezos.
Ageless
Ageless: The new science of getting older without getting old – Andrew Steele (2021)
Unlike Sinclair and Barzilai, Andrew Steele hasn't launched any LongBio companies (or been involved in licensing disputes over products). His background is in physics. This distance actually appears to give Steele an edge in providing a less biased survey of the different approaches and research. Steele does not cover every advancement and research area (e.g. fisetin is not even mentioned in the senolytics section), but he manages to keep a balance between the history of the research, technical discussions, and big-picture trends, which makes this book very appropriate for a first foray into longevity books.

Update (Feb 2024): Andrew Steele continues to be interesting, and has a number of videos on the topic that are informative and entertaining. I also recommend reading this blog post if you want a detailed review.

Lifespan
Lifespan: Why we age – and why we don’t have to – David Sinclair (2019)
This book is written with an enthusiasm that is infectious at times. David Sinclair is a very high-profile person in this space, with an extensive research career and several businesses that have made money. While there are supporters, there are also people on reddit who express doubt and feel that his enthusiasm is a charade. I can't comment on whether the book is genuine (which is not falsifiable), but I did feel that it was well written and presents a cohesive story that benefits from Sinclair's solid research and business background and conviction. This is certainly a field that deserves passion. Sinclair describes aging in terms of information-theoretic entropy and the accumulation of DNA damage and epigenetic noise, suggesting a focus on these DNA and cell-based mechanisms of aging, with less focus on others such as atherosclerosis. The content is naturally biased towards Sinclair's interests, describing his personal story with NAD+ and resveratrol (with a little discussion of the GSK/Sirtris shutdown). It was a pleasant surprise to see the depth with which Sinclair discusses the sociological consequences of extending lifespan, beyond the typical strawman arguments. These societal questions are something that Steele and Barzilai only address at the surface level, which is acceptable, since they are not sociologists. But for the layperson reading into this field for the first time, these questions can be the gatekeepers, and it is important for a public advocate to be able to speak to them. The weakest part of the book is the bias towards his own research, especially the anecdotal descriptions of his and his father's self-treatments with NMN, metformin, and resveratrol, and to some extent the neurogenesis glaucoma work. The near-miraculous effects in his father (from being nearly bed-ridden to being super healthy and active) are so far outside the expected outcomes in the literature that, even with the preface that this is just n=1, it will encourage a general audience to self-experiment with similarly outsized expectations. I much preferred Steele's approach, which was to say that the only verified interventions for now are exercise and diet, and that we just need to wait for better studies and drugs. Overall, because of Sinclair's influence on the field and his conviction, I think this book is a must-read (but should be supplemented with other perspectives).

Age Later
Age Later: Health Span, Life Span, and the New Science of Longevity – Nir Barzilai (2020)
This book follows Nir Barzilai's research, studying centenarians in depth to see what patterns emerge. The data is naturally observational and confounded, so the trick is to come up with a theoretical causal model and then test it in a more controlled experiment. Barzilai has access to gene banks that make this study more interesting. Barzilai talks about business and investing to a larger degree than any of the other books, including business development, fundraising, and valuations, which was refreshing to see. He also talks about metformin in considerable detail, including how he managed to design, advocate for, and launch the Targeting Aging with Metformin (TAME) trial, including getting the National Institute on Aging (NIA) to sign on. It might be natural to focus on research progress, but in practice these interpersonal, business, and salesmanship skills are incredibly important in this field, so it is wonderful to see this book cover them in such detail. As suggested by the centenarian premise, this book does not try to address each root cause of aging, but instead looks at individual solutions that are interesting to the author. As a result, it covers different research topics than the other three books (for example, a significant section on statins), and I would recommend it as a supplement to the others.

Ending Aging
Ending Aging – Aubrey de Grey (2007)
This is a groundbreaking book that paved the way for others to follow and helped to form a movement and community/organizations (SENS). The book describes seven root causes of aging in detail and proposes theoretical solutions for each of them. Because of de Grey's background and previous book, he spends more time on the mitochondrial mechanisms. The book was written for a general audience, but it goes into enough detail that the technical sections are slower to read. This detail is refreshing at times, but some of it will likely be lost on someone like me (with little biology background) unless I go back and forth between the pages several times. The other thing is that the book does feel a little dated: it won't contain the latest advancements, and some of the optimistic predictions of progress (which were contingent on proper funding) have already expired without being fulfilled. For these reasons I would recommend the Steele or Sinclair book as a first book before approaching this one.

There are, of course, many more interesting books to read. I also read Al Chalabi and Jim Mellon's Juvenescence and Jean Hebert's Replacing Aging and found them more narrowly focused than the books above. Juvenescence's subtitle is 'investing in the age of longevity', but there is not much content about actual investment and business (other than a compiled list of companies), which surprised me given Jim Mellon's background. Rather, it spends most of its time surveying the research progress, mixed with health advice – this part aligns closely with David Sinclair's book, perhaps due to their business relationship. Jean Hebert's book is short and powerful, presenting a very different strategic approach: replacing damaged tissue to handle the majority of the problems caused by aging, with the brain being the one thing that needs special treatment due to identity preservation. It is a thought-provoking quick read that even brings transhumanist and philosophical questions into the mix.

All of these books are worth a read. It's great to see so many of them coming out, especially given the rapid pace of the research and industry. I'd say it's important to go into these books with the right mindset. Most of the negative reviews complain that the books are not actionable for people who want treatments today. Unlike the 'embracing aging' books, they don't try to make you feel better about getting old, and I think this is jarring to some people as well. But the point of these books is to change the pro-aging stance that society developed before there was anything that could be done about aging. This is a field that will change life as we know it, literally, and it is just taking off now. On a parting note, to get in the right mindset, I highly recommend reading Nick Bostrom's gem The Fable of the Dragon-Tyrant.

22 Apr 21
09:23

Review of The Information

All disciplines have interesting histories that explain their development. For some reason, different fields seem to 'value' their history differently. Art and music students are required to study art and music history in multiple dedicated classes. Computer science and mathematics students do not have to study the history of their field in a typical undergraduate program (if you are lucky, maybe one or two classes touch on it). Instead, if the computer science or math student is lucky, they get a charismatic professor who is a good storyteller and fits in anecdotes about the creators of the topics being studied, or a reference to a book about them.

Presumably, the history of art is useful to an artist, not only for telling interesting stories to their students one day, but also for understanding something deeper about where new art comes from. A deeper understanding may be useful for a number of things, including how to go about creating novel art. The creation of new abstract concepts in math and in art has certain high-level similarities – both provide a new way to look at things. Perhaps the poster child for this type of thinking is Xenakis, whose existence united architecture, statistics, algorithms, and music, where knowledge across the fields had an interesting synergy. But even if we ignore cross-discipline examples, I think we will find that the innovators typically have had an interest in the history. Is the assumption for the sciences that this is correlation, and not causation? That argument can be made, but it seems less likely to me. Or is the assumption that, in the interest of time, most undergraduate students don't need to be innovators, and rather just need to understand how to solve the damn equation, not how it came to be?

Perhaps this is a straw-man. I may be generalizing from an unrepresentative experience, and it has been a number of years since I was in school. It seems like folks pursuing graduate degrees in math/computer science have more understanding of history, and because this information is easier to come by today than 20 years ago, people that self-study also pick this up naturally. In that case all I can say is that I was not aware of so much of the history involved in computer science, and I wish I had started my studies with something like James Gleick’s The Information.


Information theory is a relatively new field of the sciences. Of course, it did not spring out of nowhere. There are a few history-oriented books that describe its formation, but not many. Gleick's coverage is by far the widest I've seen.

The book has an excellent cast of characters, starting out with long-distance communication using African drums, Babbage/Lovelace and early computers, and Laplace. As the book develops, the more typical founding characters of information theory appear: Maxwell and his demon, Clausius and his entropy, Morse with his codes, Hartley, Shannon, Wiener, Turing, and Kolmogorov. What makes the book's presentation special is the depth to which each character is explored. There are a large number of supporting characters and competitors that I hadn't heard of, which provides great context for the developments. Naturally, a lot of time is spent on the juicy rivalries such as the Shannon-Wiener relationship, but also on how they fit into the rest of the world, e.g., how Kolmogorov felt about them.

I was introduced to a range of connections that I was not aware of, including Schrodinger's (yes, that Schrodinger) connection to molecular biology and What is Life?. There were also nice teasers for the parts of information theory I haven't had exposure to, such as quantum computing and Shor's and Feynman's thoughts on it. There are also deeper ties to fundamental math history, such as the early developments in Greek and Arabic scholarship from Aristotle to al-Khwarizmi. I was also unaware of the amount of now-obsolete infrastructure required for telegraph networks, and the book spends a good amount of time on the logistics of this kind of thing.

I very much enjoyed this book, although it still misses a few important areas. Notably, Kullback’s application of information theory to statistics, as well as Bayesian statistics and the related information criteria are not mentioned. Deep learning is also not mentioned, but the book was published in 2011, before the recent surge. Naturally, Gleick also discusses the fictional works of Borges. Unfortunately as much as I enjoy Borges, I found this to be the weakest part of the book.

At 426 pages, Gleick's presentation is almost entirely conceptual and non-technical, so I think this would be a great bedtime read for anyone interested in the topic who isn't in a rush. For a faster and more technical approach, one might consider John Pierce's book.

23 Feb 21
22:30

Noise bands from interpolating instantaneous frequency

Frequency bands are often used in analysis or input representations; for example, a mel spectrogram uses a number of bands of differing frequency widths to represent the signal. Frequency bands are also used in synthesis. However, synthesizing a frequency band of nonzero width is usually a noisy process. The most common way to synthesize one is to generate wide-band noise, either with a white noise generator or an impulse, and filter it down to a narrower range of frequencies, for example with a band-pass filter or a high-Q IIR filter. When narrow enough, these bands resemble sinusoids with some attractive roughness. FM can be used to create narrow bands as well, and can be made more complex by daisy-chaining FM. On the music page, the piece "December 9" uses daisy-chained FM and granular synthesis exclusively to create tones and rain-like sounds. Each of these techniques has drawbacks and advantages depending on your use case – an IIR filter might explode, and daisy-chained FM is unstable in other ways. They are all pretty neat – somehow it's enjoyable to turn noise into something that resembles a sinusoid.
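To make the filtered-noise approach concrete, here is a minimal Python sketch using numpy and scipy. The sample rate, band center, width, and filter order are arbitrary values I picked for illustration, not settings from any particular piece.

# Minimal sketch of banded noise via filtering: white noise pushed through a
# band-pass filter. All parameter values are arbitrary illustration choices.
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100                       # sample rate in Hz
dur = 2.0                        # seconds of audio to generate
center, width = 1000.0, 50.0     # band center and width in Hz

noise = np.random.default_rng(0).normal(size=int(sr * dur))   # white noise source

# 4th-order Butterworth band-pass around the center; narrower bands sound more tonal.
sos = butter(4, [center - width / 2, center + width / 2],
             btype="bandpass", fs=sr, output="sos")
banded = sosfilt(sos, noise)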

I want to describe another technique for creating banded noise, which I've used before, that does the inverse: it turns a sinusoid into noise. I'm fairly sure others have used this as well, but it doesn't seem to be well documented. The basic idea is to start with a sinusoidal unit generator and linearly interpolate the instantaneous frequency from the previous target frequency to a new random target frequency, with the target updating after a random duration whose average is inversely proportional to the width. The phase advances monotonically each step. The result is a band of noise that is a perfect sinusoidal tone at zero width and white noise at full width.
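Below is a rough numpy sketch of this idea. The specific randomization choices (uniform frequency targets within the band, and exponentially distributed segment lengths whose mean shrinks as the band widens) are my own illustrative assumptions rather than the exact formulation from the ICMC paper.

# Rough sketch of a noise band from interpolated instantaneous frequency.
# The target-frequency and segment-length distributions below are illustrative
# assumptions, not necessarily the exact scheme from the ICMC 2007 paper.
import numpy as np

def noise_band(center, width, dur, sr=44100, seed=0):
    rng = np.random.default_rng(seed)
    n = int(sr * dur)
    freq = np.empty(n)
    prev_target = center
    i = 0
    while i < n:
        # New random target frequency inside the band (just the center at zero width).
        target = center + (rng.random() - 0.5) * width
        # Average segment duration is inversely proportional to the width:
        # wider bands wander between targets faster.
        seg = max(1, int(rng.exponential(sr / max(width, 1e-6))))
        seg = min(seg, n - i)
        # Linearly interpolate the instantaneous frequency toward the new target.
        freq[i:i + seg] = np.linspace(prev_target, target, seg, endpoint=False)
        prev_target = target
        i += seg
    # The phase advances monotonically by integrating the instantaneous frequency.
    phase = 2 * np.pi * np.cumsum(freq) / sr
    return np.sin(phase)

pure_tone = noise_band(440.0, 0.0, 1.0)     # zero width: a plain sinusoid
noisy_band = noise_band(440.0, 100.0, 1.0)  # nonzero width: banded noise around 440 Hz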

This technique was also described in my first academic paper at ICMC 2007.

It might be interesting to create some DDSP modules that implement this for a DNN context.

31 Jan 21
22:19

Books I Read in 2020: Stats

At the very start of the year, I wanted to write in-depth reviews of the books I enjoyed in 2020. Well, it's already been a month and I haven't gotten to it.
So to kickstart things, I'm going to make the task easier and just list some of the noteworthy books I read in 2020, with one or two sentences each. And to narrow it down further, I'm only going to talk about stats-related books, because I read a few of them.

  • Science Fictions, Stuart Ritchie (2020)

  • This is a relentless, even thrilling debunking of bad science, starting with the replication crisis in psychology and ending up touching on much more than I expected. The stories are captivating, and the explanations focus on the system's perverse incentives for fraud, hype, and negligence more than on blaming individuals (although there is definitely shaming where it is called for). The criticism of Kahneman's overconfidence in Thinking, Fast and Slow was refreshing to read, because he (and Yudkowsky in his Sequences) says something to the effect of 'you have no choice but to believe after reading these studies', which felt like it didn't match the larger message about questioning your beliefs and updating them on new information. It is a good lesson, served with a healthy helping of irony: even rationalists need to be told to be less confident, because they don't know everything yet. Another great criticism was of Matthew Walker's hugely popular, and often unchallenged, Why We Sleep, which I had just taken at face value before reading this book.

  • Statistical Rethinking, Richard McElreath, 2nd edition (2020)

  • This is an excellent introduction to Bayesian statistics, and it pairs wonderfully with the author's engaging, enlightening, and entertaining recorded 2019 lectures on YouTube, as well as the homework problems on GitHub. Miraculously, McElreath manages to pull off a new video phenomenon that mixes statistics with hints of standup comedy. There are no mesmerizing 3blue1brown-like plots, but McElreath picks interesting problems and datasets to play with and breaks each model down into the core components that a newcomer would need to understand it. I also appreciate that the course is designed for a wide range of people, so there are very few assumptions about math background, but if you know calculus, information theory, and linear algebra, there are nice little asides that go deeper. It's also great how up to date the book is – I didn't expect to be so interested in the developments in Hamiltonian Monte Carlo from the past few years, but it seems the field is undergoing rapid development. This book also helped me understand causal inference and confounders, and it is a great follow-up to the casual-reader-oriented Pearl book mentioned below.

  • The Book of Why, Judea Pearl (2018)

  • This book was the first thing I read on causal inference and causality in general. It's a light non-fiction book that assumes no math background and does a good job of explaining how to disentangle correlation and causation. The book has interesting problems and examples, such as how controlling for the wrong variables (colliders) can actually introduce spurious associations, and how to go about showing that smoking really does cause cancer when it is unethical to run a randomized controlled trial and there are tons of confounders. There is a fair amount of interesting history in it, and because of that there are traces of personal politics that seem slightly out of place, but they don't detract from the book too much. This book is sort of a teaser for Pearl's deeper textbook, Causality, which I haven't read. The Book of Why doesn't really go into how you would create a model of your own, or how such a Bayesian model would compare to the deep neural network/stochastic gradient descent models that are driving the computing industry today. Perhaps Causality covers some of these things, but I felt this book would benefit from a few comparisons between the popular frameworks on a toy problem that deep learning can't solve. Still, it could be argued that this was not the point of the book. In any case, it was an enlightening introduction that left me with new questions to pursue, which is the type of book I am after.

12 Dec 20
16:22

Log probability, entropy, and intuition about uncertainty in random events

Probability is a hard thing for humans to think about. The debate between Bayesian and orthodox (frequentist) statistics about the relationship between event frequency and probability makes that clear. Setting that aside, there are a whole bunch of fields that care about log probability. Log probability is an elemental quantity in information theory. Entropy, a measure of uncertainty at the core of information theory and data compression/coding, is the negative expected log probability:
H(X) = -\sum_x p(x) \log p(x) = -\mathop{\mathbb{E}} \log p(x)

For a uniform distribution this can be simplified even further. A fair coin toss, or die throw, for example, has uniform probability of heads/tails or any number, and we get:
H(X) = -\sum \frac{1}{n} \log \frac{1}{n} = -n \frac{1}{n} \log \frac{1}{n} =  -\log \frac{1}{n} = \log{n},
where n = 2 for a coin flip and n = 8 for an 8-sided die (because there are 2 and 8 possible values, respectively). So with a base-2 log, we get H(X_{coin}) = \log{2} = 1 for the coin flip, and H(X_{d8}) = \log{8} = 3 for an 8-sided die.
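These values are easy to check numerically; here is a tiny Python translation of the entropy formula above (the helper function and its name are my own).

# Entropy in bits, straight from the formula above (base-2 log).
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]               # terms with p(x) = 0 contribute nothing
    return -np.sum(p * np.log2(p))

print(entropy([1/2, 1/2]))     # fair coin: 1.0 bit
print(entropy([1/8] * 8))      # fair 8-sided die: 3.0 bits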

One thing that might not be immediately obvious is that this allows us to compare different types of events to each other. We can now compare the uncertainty in a coin flip to the uncertainty in an 8-sided die roll. H(X_{d8}) = 3 H(X_{coin}), so it takes 3 coin flips to match the uncertainty of one die roll. In fact, this means you could simulate an 8-sided die with 3 coin flips (but not with 2) using some sort of tree structure: the first flip determines whether the die lands in 1-4 or 5-8, the next narrows it to 1-2 vs 3-4 (if the first flip was heads) or 5-6 vs 7-8 (if it was tails), and the last flip resolves which of those two numbers the die ends up on.
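As a sanity check, here is a small Python sketch of that tree scheme, reading each coin flip as one bit of the die value; the function name and the uniformity check are my own additions.

# Simulate a fair 8-sided die from exactly 3 fair coin flips, following the
# tree scheme above: each flip halves the remaining range of faces.
import random

def d8_from_coins():
    value = 0
    for _ in range(3):               # 3 flips = log2(8) bits
        flip = random.randint(0, 1)  # 0 = tails, 1 = heads
        value = value * 2 + flip
    return value + 1                 # map 0..7 onto die faces 1..8

rolls = [d8_from_coins() for _ in range(8000)]
print({face: rolls.count(face) for face in range(1, 9)})  # roughly 1000 each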

You probably could have come up with this scheme for simulating die throws from coin flips without knowing how entropy is formulated. I find this interesting for a couple of reasons. First, it means there may be something intuitive about entropy that we already have in our brains and can dig up. Second, it gives us a formal way to verify and check what we intuited about randomness. For this post I want to focus on the intuition.

The first time you are presented with entropy, you might wonder why we take the log of probability. That would be a funny thing to do without a reason. Why couldn’t I say, ‘take your logs and build a door and get out, I’ll just take the square or root instead and use that for my measure of uncertainty’, and continue with my life? It turns out there are reasons. I wanted to use this post to capture those reasons and the reference.

If you look at Shannon's A Mathematical Theory of Communication, you will find a proof-based answer that's quite nice. But even after looking at it, if you haven't seen a convexity-based proof in a while, it can still be somewhat unintuitive why there needs to be a logarithm involved. Here is a less formal and incomplete explanation that would have been useful for me to get more perspective on the problem.

There are a few desirable properties of entropy that Shannon describes. For example, entropy should be highest when all the events are equally likely. Another is how independent events like coin flips or dice rolls combine their possibilities: the number of outcomes is exponential in the number of coin flips or die rolls. So if I compute the entropy of one coin flip and of another coin flip and add them together, the sum should be the same value as if I computed a single entropy for the two coin flips considered jointly.

If you want a measure of uncertainty that grows linearly with the number of coin flips or die rolls and achieves this property, then taking the logarithm of the number of combinations gives you exactly that, and no function other than a logarithm will. This is because the number of possible outcomes for n coin flips is exponential: notice that \log_2(2^n) = n, where n is the number of coin tosses and 2^n is the number of possible outcomes for n coin tosses. So the log inverts the exponential growth that comes from combining multiple events, which gives entropy its linear behavior in n.
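A quick numerical check of this additivity, using the same little entropy helper as above (redefined here so the snippet stands alone):

# The entropy of n independent fair coin flips, i.e. a uniform distribution over
# 2**n outcomes, equals n times the entropy of one flip, because log2(2**n) = n.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

h_one = entropy([1/2, 1/2])                    # 1 bit for a single flip
for n in (1, 2, 3, 10):
    joint = [1 / 2**n] * 2**n                  # uniform over all n-flip outcomes
    print(n, entropy(joint), n * h_one)        # the last two columns match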