Chart of the Week
August 4, 2025

Source: Our World in Data
Normal Sampling
July 23, 2025

Made with 💙 by Dialid Santiago.
Reading about A.I.
July 21, 2025
Just read this fascinating article in The New Yorker
Takeaways:
📚 A.I. shifts our opportunity costs. “College is all about opportunity costs.” Students today spend far less time on schoolwork than in decades past: in the early 1960s, college students spent an estimated 24 hours a week on schoolwork; today, that figure is about 15 hours. With tools like ChatGPT, what used to take hours now takes minutes. But what are we giving up when we outsource the messy, formative process of thinking?
⚡ The intoxication of hyperefficiency. Most students start using A.I. as an organizational aid but quickly progress to off-loading their thinking altogether. Moreover, they describe using it like social media: constantly open, constantly tempting. Are we becoming so efficient that we forget why we’re thinking in the first place?
✍️ Yes, typing is fast… “but neuroscientists have found that the ‘embodied experience’ of writing by hand taps into parts of the brain that typing does not. Being able to write one way, even if it’s more efficient, doesn’t make the other way obsolete.”
🧘 What does it mean to “sound like ourselves”? As A.I. gets better at sounding like us, do we risk forgetting what we sound like in the first place?
📉 Is cognitive decline associated with A.I.? According to a recent study from the Organisation for Economic Co-operation and Development, human intellect has declined. The assessment of tens of thousands of adults across 31 countries showed an overall decade-long drop in test scores for math and reading comprehension. Andreas Schleicher, the director for education and skills at the O.E.C.D., hypothesized that the way we consume information today, often through short social-media posts, has something to do with the decline in literacy.
Books
July 18, 2025
The Diary of a CEO: Geoffrey Hinton
July 12, 2025
Just finished watching Geoffrey Hinton’s interview on The Diary of a CEO: a rare, candid conversation from one of the pioneers of AI.
Takeaways
- Real AI risks are already here. Hinton emphasizes that the pressing dangers aren’t sci-fi superintelligence, but things we see today: algorithmic manipulation, the amplification of bias, and disinformation at scale.
- Hinton comes from a family tree of scientific legacy. His lineage includes George Boole, best known as the author of The Laws of Thought (1854), which contains Boolean algebra; George Everest, a geographer and Surveyor General of India, after whom Mount Everest was named; and Joan Hinton, a nuclear physicist and one of the few women scientists who worked on the Manhattan Project at Los Alamos.
- Hinton left his academic post not for ambition or prestige, but to earn enough to support his son, who has learning disabilities. His story highlights a quiet crisis: academia struggles to retain top talent when it can’t offer the financial security that industry can. What does it say about our priorities when some of the most important work in science can’t afford to support a family?
Related
The Limited Virtue of Complexity in a Noisy World
July 7, 2025
Cartea, Álvaro and Jin, Qi and Shi, Yuantao, The Limited Virtue of Complexity in a Noisy World (April 2, 2025). Available at SSRN here or here.
Summary: In this paper, the authors analyse the role of model complexity in predicting asset returns and constructing portfolios. In particular, they address the significant question of whether adding a large number of predictive features ultimately harms performance. Their work aims to bridge two views: the traditional econometric one, favouring parsimonious models (i.e. Occam’s razor), and more modern machine-learning findings which highlight the fact that so-called “overparameterized” models can perform well under proper regularization (the “double descent” phenomenon).
The authors set up a framework in which an investor predicts excess returns using a large number of features, but these features are “contaminated” by noise (which can arise from data collection gaps, computational approximations, or other infrastructure limitations). They examine how this affects the Sharpe ratio of a timing strategy, which the investor seeks to maximize, and the out-of-sample R-squared of return forecasts. Given the high-dimensional setting, they use ridge regression and apply classical results from Random Matrix Theory to characterise the asymptotic behaviour of these metrics, focusing on the case where the true features are independent (a common assumption in quantitative investing).
Their results show that model complexity can improve asset return predictions and portfolio performance when regularisation is used and data quality is high. However, when features are noisy or only partially observed, there exists an optimal level of complexity. Beyond this point, adding features introduces more noise than signal, degrading both predictive accuracy and portfolio outcomes.
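The intuition can be seen in a toy simulation. Below is a minimal sketch, not the paper's actual model or calibration: a few genuine predictors plus progressively more pure-noise features, all observed with heavy measurement error, forecast with ridge regression. Every parameter value (sample sizes, noise level, ridge penalty, the helper `oos_r2`) is an illustrative assumption.

```python
# Toy illustration: out-of-sample R^2 of a ridge forecast degrades as more
# noisy, uninformative features are added. All values are illustrative.
import numpy as np


def oos_r2(p_extra, seed=0):
    """Out-of-sample R^2 when p_extra pure-noise features are included."""
    rng = np.random.default_rng(seed)
    n_train, n_test = 500, 2000
    p_true = 5                  # features that genuinely predict returns
    noise_sd = 2.0              # measurement noise contaminating features
    ridge_lambda = 10.0

    p = p_true + p_extra
    beta = np.zeros(p)
    beta[:p_true] = 0.5                       # only the first p_true matter
    X = rng.standard_normal((n_train + n_test, p))
    y = X @ beta + rng.standard_normal(n_train + n_test)
    X_obs = X + noise_sd * rng.standard_normal(X.shape)  # contaminated view
    Xtr, Xte = X_obs[:n_train], X_obs[n_train:]
    ytr, yte = y[:n_train], y[n_train:]

    # Ridge estimate: (X'X + lambda * I)^{-1} X'y
    b_hat = np.linalg.solve(
        Xtr.T @ Xtr + ridge_lambda * np.eye(p), Xtr.T @ ytr
    )
    resid = yte - Xte @ b_hat
    return 1.0 - resid @ resid / (yte @ yte)


for p_extra in (0, 50, 200, 800):
    print(p_extra, round(oos_r2(p_extra), 3))
```

With noisy features, the printed out-of-sample R-squared falls as useless features pile up: past some point the extra dimensions contribute estimation variance without any signal, mirroring the "optimal level of complexity" the authors describe.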
Takeaway: In asset return prediction, higher complexity is not always better. When data are noisy, more features can harm performance, underscoring that “garbage in, garbage out” applies strongly in quantitative finance.
Natural History Ensemble
July 1, 2025
Natural History Ensemble, no. 11 (1596–1610) by the talented Flemish draughtsman Anselmus Boëtius de Boodt (1550–1632). Check out more of Anselmus Boëtius de Boodt’s works in this Rawpixel Gallery.

Source: Public Domain. Original from the Rijksmuseum.