Libido Sciendi Digest 07 - Before the Optimisation
Pareto frontiers in n dimensions, Nvidia's no-surge-pricing, Plato's joints, Claude's 34 million features: four upstream choices that the answer never sees but always inherits.
This is your Libido Sciendi Digest. Weekly notes from a book-obsessed seed investor exploring AI, living systems, ethics of technology, emergence, and the diffusion of innovation. Essays, field reports, book notes and curated readings from libido-sciendi.com.
If you enjoy it, I would genuinely appreciate it if you shared it with someone who might too. And if you have feedback on the format, just reply to this email.
#07
Three pieces this week run the same upstream move from three different angles.
Before the optimisation, the specification.
Before the classification, the boundary.
Before the answer, the choice of which dimensions to count.
The hardest part of every interesting problem is the upstream act of fixing what the function being optimised actually is, and that act is invisible by the time the answer arrives.
The World Bank moved its international poverty line from $2.15 to $3.00 a day in June 2025, and the global count of people in extreme poverty in 2022 was revised up from 713 million to 838 million, a jump of 125 million without any household becoming poorer in the interval. The redrawn line did the work.
Karl Popper (1902-1994, Austrian-British philosopher of science) gave the move its sharpest formulation in his 1934 Logik der Forschung. He called it the demarcation problem: how to tell a scientific claim from one that only looks scientific. His answer is refutability, also called falsifiability. A theory earns the label “scientific” when it states, in advance, what observation would force it to be wrong. The criterion sits upstream of any evidence the theory eventually accumulates: any half-decent theory can find some; what matters is what would count against it. Einstein’s 1915 general relativity predicted a precise angle for light bending around the sun; the 1919 Eddington eclipse expedition could have measured a different angle, and the theory would have been finished. The risk was stated before the test. Freud’s psychoanalysis runs the other way: whatever the patient does, the theory absorbs the result, because no clinical outcome is ruled out ex ante, and any datum can be reconciled with the theory after the fact. The contrast organises Popper’s Conjectures and Refutations (1963). What Popper names in epistemology is what this week’s three pieces trace in strategy, in classification, and in classification’s 2026 algorithmic form.
Our weekly deep dive turns the move on strategy: Apple refuses to license iOS, Costco caps its margins at 14%, Berkshire holds $300 billion in cash, and Nvidia refuses to surge price under a generational GPU shortage. Each looks irrational on a one-dimensional objective; each is the rational answer to a richer function once Williamson’s asset specificity, Axelrod’s repeated games, and the n-dimensional Pareto frontier are added back into the specification.
The first journal entry, Carving at the Joints, walks the philosophical arc from Plato through Mill, Goodman, Hacking, and Bowker and Star, and lands on a discipline of attention: keep the line visible as a line. The sequel, When Algorithms Carve, ports the arc into 2026, where the carver is a foundation model with 34 million sparse-autoencoder features deploying at the scale of a search engine, and the looping effects Hacking traced through DSM editions across decades now close in seconds.
Plenty of the anchors we cite as measurements are choices we inherited and stopped checking. The boundary between an “AI-native” startup and an “AI-enabled” one, treated as if there were a fact of the matter. The seven-ish-year venture cycle, lived as if seven were a permanent property of young tech companies and not a residue of the US tax code. The top-quartile TVPI threshold by vintage, drawn against a peer set itself selected by survivorship.
There is a political asymmetry built into every line that has gone invisible. Whoever drew it paid the cost of drawing once; everyone who inherits it pays the cost of living with it. The 125 million people who crossed the new World Bank threshold in 2025 were not consulted on the new line. The patient diagnosed against a DSM threshold inherits a working group from forty years ago. The LP writing a check against a ten-year fund life inherits an institutional convention from when the industry was small enough to fit on Sand Hill Road. In 2026 the asymmetry sharpens, because the next generation of categories is being drawn by foundation models trained on yesterday's lines, deployed at the scale of a search engine, and shipping new boundary decisions at the pace of a product release.
Popper’s rule is a procedure we can run. Before we commit to a line, we name the observation that would force us to redraw it, and we put the name where someone else can hold us to it.
Pareto Frontiers and Lock-In: When Mathematical Sub-Optimisation Is the Right Strategy · 25 min read
Part 3 of the Reading Nvidia series. Most apparent strategic irrationalities are the rational answer to a richer problem once the dimensions, the horizon, and the game structure are added back in. The difficulty of strategy is never in the optimisation; it is in correctly specifying the function being optimised.
The Pareto frontier is n-dimensional, not two-dimensional. Markowitz’s 1952 efficient frontier put portfolios on this footing with risk and return as axes; the inference market repeats the move with throughput and latency, but the real decision space adds at least six more (99th-percentile latency, throughput per watt, throughput per capex dollar, memory bandwidth, reliability under load, mixed-model serving). Every strategy that ignores the extra dimensions optimises a function nobody is playing.
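The dominance test underneath an n-dimensional frontier fits in a few lines of Python. A minimal sketch, with made-up (throughput, negated latency, throughput-per-watt) triples purely for illustration:

```python
def dominates(a, b):
    """a dominates b: at least as good on every dimension and strictly
    better on at least one (higher is better on every axis)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(points):
    """Keep only the points no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Illustrative (throughput, -latency, throughput-per-watt) triples.
points = [(10, -5, 2), (8, -3, 3), (6, -4, 1), (7, -3, 1)]
print(pareto_frontier(points))  # [(10, -5, 2), (8, -3, 3)]
```

The last two points are dominated and drop out; the first two are incomparable, so neither is “better in absolute”. Adding a dimension can only grow the frontier, which is why the six extra axes above matter: points that looked dominated in two dimensions come back as legitimate strategies in eight.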
Williamson’s asset specificity is why surge pricing destroys value on both sides. Williamson (Nobel 2009) names five types of specificity, and the Nvidia-TSMC corridor hits all five at once. A hyperscaler committing $50 billion of capex over three years cannot renegotiate cycle by cycle, and the thirty-year TSMC relationship runs without a formal contract because the absence of one is the signal that trust is total.
Axelrod’s tournament is the empirical proof. Cooperation emerges spontaneously in repeated games where memory and the future matter, and tit-for-tat won the 1979 and 1980 round-robins against entries written specifically to beat it. Nvidia’s no-surge-pricing decision is the textbook application.
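The mechanics are compact enough to replay. A minimal iterated prisoner’s dilemma with the standard textbook payoffs; the numbers and strategy set here are ours for illustration, not Axelrod’s tournament code:

```python
# Iterated prisoner's dilemma, standard illustrative payoffs:
# mutual cooperation 3/3, mutual defection 1/1, sucker 0 vs temptation 5.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(p1, p2, rounds=10):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)   # each player sees the other's history
        a, b = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        s1 += a; s2 += b
    return s1, s2

print(play(tit_for_tat, always_defect))  # (9, 14): loses the match
print(play(tit_for_tat, tit_for_tat))    # (30, 30): wins the tournament
```

Tit-for-tat never beats its opponent head-to-head, yet it tops the round-robin, because what compounds across many pairings is the mutual-cooperation payoff, not the one-shot extraction. That is the horizon argument in executable form.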
Carving at the Joints: On Segmentation, Its Limits, Its Use · 14 min read
A tour from Plato’s Phaedrus (265e) through Mill’s natural kinds, Goodman’s grue, Hacking’s looping effects, Bowker and Star’s torque, and Scott’s legibility. The conclusion is not a refusal of classification but a discipline of attention: we cannot think without categories, every operation in a data pipeline presupposes a partition, and yet no neutral grid exists. The cost of an unexamined line is rarely paid by the one drawing it.
When Algorithms Carve: Classification at Machine Speed · 14 min read
The sequel pushes the same question into 2026. Inside Claude 3 Sonnet, Anthropic researchers found roughly 34 million internal categories the model had built without being told to. The Kleinberg-Chouldechova impossibility results (2016-2017) prove that any classifier scoring two populations with different actual outcome rates cannot satisfy calibration, equal false-positive rates, and equal false-negative rates at the same time. COMPAS was not a buggy model: the disagreement reduces to which definition of fairness one picks, and that choice is irreducibly normative. The loop Hacking traced through DSM editions across decades now closes through foundation models in seconds.
From Pareto Frontiers and Lock-In:
Pareto frontier: the set of points in a multi-dimensional space that no other point dominates, i.e. for each frontier point, no alternative is at least as good on every dimension and strictly better on one; the optimum depends on the choices of the optimiser, the frontier does not;
efficient frontier (Markowitz): the Pareto frontier applied to investment portfolios, with risk and return as axes; demonstrates that “better in absolute” dissolves once multiple dimensions are accepted;
asset specificity (Williamson): investments whose value is primarily realised inside a specific relationship and would drop substantially outside it; site, physical, human, dedicated, and temporal varieties;
repeated games (Axelrod): when the same parties interact across many cycles with memory, cooperation emerges spontaneously and tit-for-tat beats maximum extraction; the optimisation horizon drives the cooperative equilibrium;
trade-off as strategy (Porter, Rumelt): strategy is the deliberate choice of which trade-offs to accept and which to refuse on a Pareto frontier; a company without trade-offs has no strategy, it has operations.
From Carving at the Joints:
carving at the joints (Plato): the right method of thought divides things according to their natural joints rather than breaking any limb in half; the twenty-four-century metaphor for classification done right;
natural kinds (Mill, Quine): some groupings hold together in a way that lets us learn from one instance and predict the next, and others do not; induction works only when categories pick out projectible regularities;
grue (Goodman): a predicate that fits all observed emeralds as well as “green” does and forks afterwards; the entrenchment of a category, not the world, is what makes one prediction feel obvious and the other strange;
looping effects (Hacking): human kinds produce feedback that natural kinds do not, with the classified reading the classification, changing because of it, and the category in turn getting revised to fit them;
torque (Bowker and Star): the friction generated when a body, a record, or a life fails to fit cleanly into a classification scheme the surrounding infrastructure cannot accommodate; the system bends the person rather than the scheme.
From When Algorithms Carve:
sparse autoencoder: a smaller network trained to express each activation as the sum of just a few features drawn from a much larger dictionary, on the principle that the underlying concepts are sparse even if the neuron-level patterns look dense;
polysemanticity: the property that any single neuron in a foundation model typically responds to several apparently unrelated concepts at once, and any single concept is spread across many neurons; the tangle that interpretability has to decompose;
fairness impossibility (Chouldechova / Kleinberg-Mullainathan-Raghavan): any classifier scoring two populations with different actual outcome rates must trade off across calibration, equal false-positive rates, and equal false-negative rates; all three hold together only when base rates are equal or prediction is perfect;
automation bias: the documented tendency of human users to weight machine-generated outputs more heavily than equivalent human judgements; a foundation model’s categorisations become harder to challenge than the human ones they replaced precisely because their machine origin lends the appearance of objectivity;
pragmatic interpretability (DeepMind): a research orientation that looks for whatever interpretive method actually predicts the model’s downstream behaviour, even at the cost of clean human-readable features; probing classifiers, causal interventions, and circuit-level analysis as alternatives to sparse autoencoders.
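The impossibility in the glossary above is visible in a ten-line arithmetic check. Two hypothetical groups receive the same perfectly calibrated scores but have different base rates; hold calibration fixed and the error rates split on their own. The counts are invented for illustration:

```python
# Each person carries a calibrated score of 0.8 or 0.2: by construction,
# the score equals the true probability of a positive outcome.
groups = {
    "A": {0.8: 60, 0.2: 40},   # hypothetical counts per score; base rate 0.56
    "B": {0.8: 20, 0.2: 80},   # base rate 0.32
}

def error_rates(counts, threshold=0.5):
    tp = fp = fn = tn = 0.0
    for score, n in counts.items():
        pos, neg = n * score, n * (1 - score)  # calibration: score = P(positive)
        if score >= threshold:
            tp += pos; fp += neg               # flagged positive
        else:
            fn += pos; tn += neg               # flagged negative
    return fp / (fp + tn), fn / (fn + tp)      # (FPR, FNR)

for name, counts in groups.items():
    fpr, fnr = error_rates(counts)
    print(name, round(fpr, 3), round(fnr, 3))
# A 0.273 0.143
# B 0.059 0.5
```

The scores mean the same thing in both groups, yet group A’s false-positive rate is more than four times group B’s and the false-negative rates diverge the other way. No retraining fixes this; only a different choice of which fairness criterion to sacrifice does, which is the normative point.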
Six readings, split across the two operations the week ran. Three on specification (which dimensions get counted before the optimisation starts), three on classification (what gets carved, by whom, and how the loop tightens at machine speed). Each pushes against an assumption this week’s pieces relied on.
Strategy and specification
Deep Dive: Where Value Accrues in the AI Stack, Chamath (Substack), 25 min
A six-layer map of the AI stack from infrastructure to application, with fulcrum assets named layer by layer and the stack forking into software AI versus physical AI. Read it after the Pareto deep dive and the question “which dimensions does my strategy actually count?” sharpens by layer.
The $1 to $10 rule that breaks every AI business case, AI Adopters Club, 12 min
Brynjolfsson’s J-curve ratio: for every dollar an enterprise spends on the model, expect up to ten on process redesign, data work, and change management. The dimensions a one-line AI budget never specified are the ones the pilots stall on.
Services: The New Software, Sequoia Capital, 6 min
Sequoia’s argument that the next $1T company will be a software company masquerading as a services firm. Williamson’s asset specificity at the deal level: whoever absorbs the customer-specific configuration cost captures the rent it produces.
Classification and interpretability
When AIs act emotional, Anthropic, 5 min
Anthropic looking inside Claude’s neural network and identifying patterns that drive behaviour: the same sparse-autoencoder programme the algorithmic-classification entry walks through, told from the inside. The model’s functional emotions sit somewhere between Hacking’s looping effects and a research artefact, depending on how the line is drawn.
Cognitive Surrender Explainer, Data Chutney, 12 min
The Shaw and Nave Wharton study in interactive form: 1,372 participants, 9,593 trials, three experiments on how AI assistance reshapes reasoning. The empirical layer underneath the looping-effects argument running through both journal entries.
AI Coding Works. That’s the Problem, SimonDev (YouTube), 20 min
A developer take on the specification of “good software”: stronger AI coding tools widen the gap between what ships and what stays understandable. The maintenance dimension the productivity benchmark does not count, and the gap is exactly what shows up on a Pareto frontier of n dimensions.
Strategy: A History, Lawrence Freedman, 2013 (★★★★★)
Freedman’s 700-page survey from the Greeks through nuclear deterrence to network theory. The Pareto deep dive sits inside the longer arc Freedman traces, where Porter and Rumelt are visible as one tradition among several rather than as the default.
These Strange New Minds: How AI Learned to Talk and What It Means, Christopher Summerfield, 2025 (★★★★★)
The cognitive-neuroscience-informed companion to the algorithmic-classification entry. Summerfield walks through how LLM internal categories form, what kind of mind they constitute or fail to, and where the answer touches questions twenty-four centuries old.
Behind Pareto Frontiers and Lock-In
The Evolution of Cooperation, Robert Axelrod, 1984 (★★★★★)
The book behind the tit-for-tat tournament. Six chapters that resolve a research question in evolutionary game theory and reframe what counts as rational play in repeated interaction.
Behind Carving at the Joints
Rewriting the Soul: Multiple Personality and the Sciences of Memory, Ian Hacking, 1995 (★★★★★)
Hacking’s full-length case study of the multiple personality disorder loop from the 1970s through the 1990s. Read alongside his 2006 LRB essay “Making Up People” and the discipline of attention the journal entry argues for becomes operational.