<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
  xmlns:atom="http://www.w3.org/2005/Atom"
  xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Sam Xifaras</title>
    <link>https://samx.io/blog/</link>
    <description>Blog by Sam Xifaras</description>
    <language>en</language>
    <lastBuildDate>Tue, 21 Apr 2026 02:42:59 +0000</lastBuildDate>
    <atom:link href="https://samx.io/feed.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>A different way, Part 1</title>
      <link>https://samx.io/blog/2026-04-20-different-way-part-1.html</link>
      <guid isPermaLink="true">https://samx.io/blog/2026-04-20-different-way-part-1.html</guid>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <description><![CDATA[Against transactional thinking]]></description>
      <content:encoded><![CDATA[<p>The week of April 5th, which was a couple weeks ago now, was Eastern Orthodox
Holy Week. I decided to take that week off from work, and the result was an
unexpected reflection on my life and what is important to me. That inspired this
post, which is part one of a two (or maybe more) part series.</p>
<p>I was raised in the Greek Orthodox church, a part of the family of Eastern
Orthodox churches. For the uninitiated, Eastern Orthodoxy is the other branch of
Christianity that resulted from the
<a href="https://en.wikipedia.org/wiki/East%E2%80%93West_Schism">East-West schism</a>. The
"Western" church contains Catholicism and all other denominations you are
probably familiar with. Unfortunately, throughout my life I have seen how
Christianity has gotten a bad rap. And in some sense, that's probably justified.
It has been used to justify <a href="https://en.wikipedia.org/wiki/Crusades">conquest</a>.
<a href="https://en.wikipedia.org/wiki/Massacre_of_the_Latins">Massacres</a> have been
committed by warring sects. More recently, there
<a href="https://en.wikipedia.org/wiki/Catholic_Church_sexual_abuse_cases">have been</a>
numerous child sexual abuse reports and scandals in the Catholic church
especially, even one
<a href="https://www.pbs.org/newshour/nation/dozens-of-catholic-priests-molested-hundreds-of-rhode-island-victims-over-decades-multiyear-investigation-reveals">reported</a>
as recently as early March of this year, where a multiyear investigation
revealed that dozens of priests in Rhode Island molested hundreds of victims
over decades. Child sexual abuse
<a href="https://in-sightpublishing.com/2024/01/18/crimes-of-the-eastern-orthodox-church-4-sex-abuse/">happens</a>
even in the Eastern Orthodox Church, but it receives much less media attention
in the west. A
<a href="https://www.oca.org/parish-ministry/parishdevelopment/when-the-unspeakable-happens">post</a>
on the Orthodox Church of America website is remarkably candid about this.</p>
<p>Perhaps the most common criticism of Christianity nowadays is that it asks its
believers to believe unbelievable things, such as that a man lived who was God
and also the Son of God and died by crucifixion and rose from the dead three
days later, appearing to his followers. This is a lot to accept, not least
because it is entirely incompatible with our modern understanding of reality. I
think I speak for many of us, however, in saying that we all still wonder if
there is something more beyond our day-to-day rhythm of work, make money, pay
bills, watch Netflix, sleep.</p>
<p>Despite the dogma of the church, which I certainly have a difficult time
accepting most of the time, I still remain active in the church. This Holy Week,
I reflected on why, and the answers boiled down to the following:
<ol>
<li>I have an intuitive recognition of the presence of God in my life.</li>
<li>I somehow understand that the teachings of Jesus offer a way to live without
   anxiety or fear, in a way that is beautiful and selfless. My heart resonates
   with these teachings as the right way to live, as opposed to trying to hoard
   as much money as possible to become materially wealthy, and conceited and
   selfish as a result.</li>
<li>I have a deeply rooted community of friends in the church that I grew up with
   and still stay in touch with. More on this in the second part of this series.</li>
</ol>
<p>In my view, it is not difficult to experience God. Consider how a tree's
branches all move individually when struck by a breeze, and yet the tree dances
in the wind. Look around at your greatest friends, and reflect on how they came
into your life. I'll bet it was more-or-less by chance, and yet entirely
consequential for how you have formed as a person. The warmth of the sun,
heating and enlivening all things. The plants and animals that exist for us to
marvel at and take care of. When reflecting on such things, I fill up with awe
and gratitude, and I hope you do too. But I digress.</p>
<p>In this series of blog posts, I'd like to take you on a tour of spiritually
inclined ideas that I find interesting but often don't share with others. In
this first post, I'll raise awareness of the error of <em>transactional thinking</em>,
and I'll wrap up by trying my hand at writing a parable.</p>
<h2>The economy is zero sum</h2>
<p>The language of money is universally spoken, but it is not universally
expressive. Unpriced <a href="https://en.wikipedia.org/wiki/Externality">externalities</a>,
such as environmental side effects of the economic engine, are by definition
<em>unpriced</em>--they exist outside of money's lexicon, and yet they materially
affect us every day. The cancer burden is
<a href="https://www.who.int/news/item/01-02-2024-global-cancer-burden-growing--amidst-mounting-need-for-services">projected</a>
to keep growing, with harmful chemicals in our supply chain, developed with
intent to increase the cheap supply of food, likely
<a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC2231435/">playing a significant role</a>.
Big Tech profits immensely from harvesting our attention through algorithmically
curated infinite-scroll media to serve our brains as many advertisements as
humanly possible, but a
<a href="https://www.journals.uchicago.edu/doi/full/10.1086/691462">growing</a>
<a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC5403814/">body</a>
<a href="https://academic.oup.com/pnasnexus/article/4/2/pgaf017/8016017">of</a>
<a href="https://www.nature.com/articles/s41598-023-36256-4">research</a> suggests that
this has been severely destructive to our attention spans, formation of
<a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC12937495/">narrative</a>
<a href="https://www.cambridge.org/core/books/handbook-of-adolescent-digital-media-use-and-mental-health/digital-media-and-the-dual-aspect-of-adolescent-identity-development/449EF3E74BF6FEF2A4609D33F63F26BD">identity</a>
in adolescence,
<a href="https://www.sciencedirect.com/science/article/abs/pii/S0360131518302161">learning</a>
<a href="https://www.sciencedirect.com/science/article/pii/S0001691825006870">outcomes</a>,
<a href="https://substack.com/@alexandbooks/note/c-243883564">literacy</a>, and the ability
to have spiritual experiences.<a class="tooltip" data-tip="I don&#x27;t have any citations for &quot;ability to have spiritual experiences&quot; because evidence for this is sparse in scientific literature. I imagine it is something that&#x27;s hard to study empirically, because spiritual experience is subjective and highly variable per individual. I let my intuition guide me in writing this; make of it what you will." href="#tooltip-def-1" id="tooltip-ref-1"><sup>1</sup></a> <a class="tooltip" data-tip="Another interesting read related to attention, attentional fatigue, and the difference between directed attention and involuntary attention or &quot;fascination&quot; is &quot;The Restorative Benefits of Nature: Toward an Integrative Framework&quot; by Stephen Kaplan. Fascination with nature is deeply intertwined, in my view, with movements of the spirit. Here&#x27;s an excerpt: &quot;Nature is certainly well-endowed with fascinating objects, as well as offering many processes that people find engrossing. Many of the fascinations afforded by the natural setting qualify as &#x27;soft&#x27; fascinations: clouds, sunsets, snow patterns, the motion of the leaves in the breeze-these readily hold the attention, but in an undramatic fashion. Attending to these patterns is effortless, and they leave ample opportunity for thinking about other things.&quot; Kaplan then goes on to reference Olmstead, the famous American landscape architect, who &quot;was particularly sensitive to the role of &#x27;natural scenery&#x27; in restoration: it &#x27;employs the mind without fatigue and yet exercises it; tranquilizes it and yet enlivens it; and thus, through the influence of the mind over the body, gives the effect of refreshing rest and reinvigoration to the whole system.&#x27;&quot; This sounds quite spiritual to me, so I hypothesize there&#x27;s a deep connection. 
Time spent scrolling on a phone or computer is obviously time not spent attending in an &quot;undramatic way&quot; to nature, so the link seems pretty plausible. This excessively long footnote is probably a smell that I should expound on this further in a future post." href="#tooltip-def-2" id="tooltip-ref-2"><sup>2</sup></a></p>
<p>As more externalities become apparent with each passing decade--or each passing
year at this point--our return on investment from "implementing" the monetary
system continues to vanish. The utility of money is that it provides a
convenient interface for exchanging value between humans, but in return we
give up what we actually live for: connection, love, warmth, service. This is
not to say that free enterprise is inherently bad; rather, the pursuit of
money—through business or otherwise—has blinded us to the needs of our
neighbors. Instead of freely giving and freely receiving, a transactional
pattern has been imposed onto all things—think "I will only help you if you
deserve it or I am compensated."</p>
<p>As Rabbi Michael Knopf points out, one <em>midrash</em> on the Tower of Babel
<a href="https://mikeknopf.wordpress.com/2018/10/14/think-different-a-sermon-on-the-tower-of-babel/">posits</a>
that this very same inversion is the error of the people building the Tower of
Babel:</p>
<blockquote>
<p>... the people were so passionate about building the tower, so convinced were
they of its utter necessity, that they paid no mind if a worker on the tower
fell to his death; whereas if a brick fell, they would wail and mourn and
lament the setback. The peoples’ conviction of the justness of their cause,
aided and abetted by the universality of their belief, diminished their
humanity and their concern for the welfare of their fellow human beings.</p>
</blockquote>
<p>Our economy is the new Tower. And the latest tier of the tower is nearly
complete, in the form of artificial intelligence. AI has the potential to
manifest immense good, and this is the version of the future I hope to push for
and will continue to argue for <a href="./2026-03-31-ai-stall.html">in my writings</a>.
However, if additional tiers of the tower continue to be built, we may invite
the wrath of God (perhaps in the form of the dreaded AI takeover…) who wants us
to be proud <em>individuals</em> rather than conformers to the pursuit of a
single-minded (and absent-minded) goal.</p>
<p>In the New Testament, Jesus also recognizes the perverseness of having money as
the fabric of society. A classic example of this can be found in all four
canonical gospels: Jesus' entry into the temple and subsequent outrage at those
buying and selling within. Here is the version from Mark 11:15-19,</p>
<blockquote>
<p>Then they came to Jerusalem. And he entered the temple and began to drive out
those who were selling and those who were buying in the temple, and he
overturned the tables of the money changers and the seats of those who sold
doves, and he would not allow anyone to carry anything through the temple. He
was teaching and saying, “Is it not written, ‘My house shall be called a house
of prayer for all the nations’? But you have made it a den of robbers.”</p>
</blockquote>
<p>This is one of the very few times in the New Testament where Jesus gets
physically aggressive. It's a powerful image, and a relevant one. Our world is
the new den and many corporations are the new robbers. Exploitation in the name
of money happens billions of times over every day. In America,
our clothes are bought with
<a href="https://edition.cnn.com/2023/05/23/asia/modern-slavery-index-2023-intl-hnk">modern slave labor</a>.
Our chocolate is bought with
<a href="https://www.washingtonpost.com/graphics/2019/business/hershey-nestle-mars-chocolate-child-labor-west-africa/">child labor in Africa</a>.</p>
<p>Money can no longer be the interface if we are to live meaningful, moral lives.
A life lived for money is zero sum. In death, money disappears, and for every
dollar extracted from the economy, there is a potential friend that could have
been made, or an idea that could have been shared, or a person in need who could
have been helped.</p>
<h2>Against the zero-sum life</h2>
<p>This ethos of rejection of money as a mediator of value is illustrated further
in how the early followers, the first people who can be called <em>Christians</em>,
conducted themselves in the book of Acts. I have always found the early chapters
of Acts to be particularly heartwarming, because they describe humans living in
communion with each other and their surroundings. For example,<a class="tooltip" data-tip="Acts 2:42-47" href="#tooltip-def-3" id="tooltip-ref-3"><sup>3</sup></a></p>
<blockquote>
<p>They devoted themselves to the apostles’ teaching and to fellowship, to the
breaking of bread and to prayer. Everyone was filled with awe at the many
wonders and signs performed by the apostles. All the believers were together
and had everything in common. They sold property and possessions to give to
anyone who had need. Every day they continued to meet together in the temple
courts. They broke bread in their homes and ate together with glad and sincere
hearts, praising God and enjoying the favor of all the people. And the Lord
added to their number daily those who were being saved.</p>
</blockquote>
<p>This is diametrically opposed to the norms of our world. Taking this depiction
at face value, it is clear that ownership as we conceive of it now did not exist
in the early Christian community.<a class="tooltip" data-tip="A really interesting parallel here can be drawn to how the Native Americans lived before &quot;Christian&quot; colonialism happened. There&#x27;s a lot to say on this in a future post. Recommended reading: &quot;The Dawn of Everything&quot; by Graeber and Wengrow." href="#tooltip-def-4" id="tooltip-ref-4"><sup>4</sup></a> The disciples' way of life sent
ripples out into the world. Their commitment to nontransactional thinking
allowed them to create a movement larger than themselves, so much so that I'm
still writing about it now, 2000 years later.</p>
<p>Those who don't think transactionally have a willingness to be generous, to
love, and to be unconcerned with what they will receive in return. This inspires
everyone around them (I hope) to be better. If you aren't religious, now perhaps
you can understand why people choose to go to church on Sunday to hear how Jesus
and his followers interacted with the world.</p>
<p>The good news is that developing nontransactionality in thought is simple. Like
anything else, it is a skill that can be practiced. Freely give, and freely
receive.<a class="tooltip" data-tip="Matthew 10:5-8" href="#tooltip-def-5" id="tooltip-ref-5"><sup>5</sup></a> When someone is in need, do something for them without
worrying about whether they will do something for you. Don't second guess
anything, so that you do not "let your left hand know what your right hand is
doing."<a class="tooltip" data-tip="Matthew 6:2-4" href="#tooltip-def-6" id="tooltip-ref-6"><sup>6</sup></a> When someone helps you, accept it with gratitude and don't
worry about "repaying them" as if you owe them a debt, unless they ask you to.</p>
<p>This might seem like a losing proposition: we get taken advantage of, we give
more than we take, how does this profit us? This is transactional thought
programming at work. Why do we believe we get taken advantage of when we give
more than we take? What "profit" are we looking for? There isn't a logical
answer to this question, despite the desire I have to articulate it clearly for
you on this page. Ultimately, letting go of transactionality is a choice.</p>
<p>I'm advocating for this simply because I believe our world is better for it. And
you'll feel better for it. That is my platform.</p>
<h2>Sendoff</h2>
<p>Go forth, and be <em>active</em> rather than passive in the world. This is one of the
themes of the "different way" that I plan on building up in this series of
posts. Letting go of transactionality means becoming an active participant in
the lives of other people, rather than passively allowing your trajectory to be
guided by the transactional mode of seeking profit and paying back debts.</p>
<p>I hope to respond to my own call as well. I know I still have a journey ahead of
me toward living actively and nontransactionally. This blog is part of that
effort.</p>
<p>I'm sure this won't land for everyone, and that's okay. But if it lands for you,
drop me an <a href="mailto:s.xifaras999@gmail.com">email</a> and subscribe to my newsletter
with the box at the bottom of the page. One of my goals with this blog is to
start a community around this sort of thinking.</p>
<p>In the next part, I will recount a particularly moving experience from my
adolescence which revealed to me the boundless love that can exist in social
bonds between humans. Then, I will reflect on how modern Christianity may be
turning into <a href="https://en.wikipedia.org/wiki/Pascal%27s_wager">"just in case"</a>
religion, and propose an alternative path forward.</p>
<hr aria-hidden="true" />

<h2>A parable</h2>
<p>On a street there are three houses, each with a man living in it.</p>
<p>The first house has an exterior that is unkempt. The roof is in need of repair.
But the poor and needy in the town know that this man will invite them in, so
they come to his house, and he invites them in and feeds them and clothes them
and gives them whatever they ask.</p>
<p>The second house is well kept externally. Visitors seldom enter the house and
the man inside seldom ventures outside. This man has friends from across the
city that he occasionally invites over, and they invite him over from time to
time as well.</p>
<p>The third house is also well kept, and this man is always outside, inviting
strangers passing by to come in. He even walks over to the doors of the first
and second houses to invite those men to share a meal with him. “We are
neighbors,” he says, “why should we not be friends?”</p>
<p>A curious traveler came to this city and stopped at each of the three houses. In
the first house, he was received with kindness because he knocked on the door.
The man inside gave the traveler bread and some money. The second house did not
open its door to the traveler even after persistent knocking. The man in the
third house ran out to greet the traveler as he was walking towards the house.
The man then invited the traveler inside where he already had others from around
the city at the table. They laughed and ate and shared themselves with each
other.</p>
<p>The traveler asked the man, “why do you do this?”</p>
<p>The man replied, “I give food and drink and whatever I can offer so that thieves
do not steal. I offer up my house so that vagrants have a place to rest their
heads. I regard all as friends so that no one is lonely. I keep my house so that
it is inviting and run out to greet people so that no one is afraid to ask
anything of me.”</p>
<p>The traveler left the third house full and content and filled with happiness. He
said, “I will never forget the man in the third house.”</p>
<section class="tooltip-footnotes">
    <hr class="tooltip-footnotes-rule">
    <ol>
        <li id="tooltip-def-1">I don't have any citations for "ability to have spiritual experiences" because evidence for this is sparse in scientific literature. I imagine it is something that's hard to study empirically, because spiritual experience is subjective and highly variable per individual. I let my intuition guide me in writing this; make of it what you will. <a class="tooltip-back" href="#tooltip-ref-1">↩</a></li>
        <li id="tooltip-def-2">Another interesting read related to attention, attentional fatigue, and the difference between directed attention and involuntary attention or "fascination" is <a href="https://www.sciencedirect.com/science/article/pii/0272494495900012">"The Restorative Benefits of Nature: Toward an Integrative Framework"</a> by Stephen Kaplan. Fascination with nature is deeply intertwined, in my view, with movements of the spirit. Here's an excerpt: "Nature is certainly well-endowed with fascinating objects, as well as offering many processes that people find engrossing. Many of the fascinations afforded by the natural setting qualify as 'soft' fascinations: clouds, sunsets, snow patterns, the motion of the leaves in the breeze-these readily hold the attention, but in an undramatic fashion. Attending to these patterns is effortless, and they leave ample opportunity for thinking about other things." Kaplan then goes on to reference Olmstead, the famous American landscape architect, who "was particularly sensitive to the role of 'natural scenery' in restoration: it 'employs the mind without fatigue and yet exercises it; tranquilizes it and yet enlivens it; and thus, through the influence of the mind over the body, gives the effect of refreshing rest and reinvigoration to the whole system.'" This sounds quite spiritual to me, so I hypothesize there's a deep connection. Time spent scrolling on a phone or computer is obviously time not spent attending in an "undramatic way" to nature, so the link seems pretty plausible. This excessively long footnote is probably a smell that I should expound on this further in a future post. <a class="tooltip-back" href="#tooltip-ref-2">↩</a></li>
        <li id="tooltip-def-3"><a href="https://www.biblegateway.com/passage/?search=Acts%202&version=NIV">Acts 2:42-47</a> <a class="tooltip-back" href="#tooltip-ref-3">↩</a></li>
        <li id="tooltip-def-4">A really interesting parallel here can be drawn to how the Native Americans lived before "Christian" colonialism happened. There's a lot to say on this in a future post. Recommended reading: "The Dawn of Everything" by Graeber and Wengrow. <a class="tooltip-back" href="#tooltip-ref-4">↩</a></li>
        <li id="tooltip-def-5"><a href="https://www.biblegateway.com/passage/?search=Matthew%2010%3A5-8&version=NIV">Matthew 10:5-8</a> <a class="tooltip-back" href="#tooltip-ref-5">↩</a></li>
        <li id="tooltip-def-6"><a href="https://www.biblegateway.com/passage/?search=Matthew%206%3A2-4&version=NIV">Matthew 6:2-4</a> <a class="tooltip-back" href="#tooltip-ref-6">↩</a></li>
    </ol>
</section>]]></content:encoded>
    </item>
    <item>
      <title>Quitting caffeine, limiting alcohol, and maintaining a single thread of consciousness</title>
      <link>https://samx.io/blog/2026-04-01-detox.html</link>
      <guid isPermaLink="true">https://samx.io/blog/2026-04-01-detox.html</guid>
      <pubDate>Wed, 01 Apr 2026 00:00:00 +0000</pubDate>
      <description><![CDATA[Quick reflections on self-deprivation of common psychoactive substances]]></description>
      <content:encoded><![CDATA[<p>Happy April Fools Day! This post isn't a joke, though.</p>
<p>I am on day four of no coffee after drinking it almost daily for several years.
Sunday was rough. I had a pressure headache all day and was in a bad mood.
Thinking felt hard. Monday was better, but I still had a headache. I caved and had
a black tea around lunchtime so that I could get my work done. On Tuesday, it
seemed that my brain was relearning how to clear out
<a href="https://en.wikipedia.org/wiki/Caffeine">adenosine</a>, or downregulating adenosine
receptors, or whatever the biological mechanism looks like that makes me feel
more able to stay awake and direct my mental energy toward something. Today is
Wednesday. I still have a slight headache but overall I feel fine.</p>
<p>Interestingly, I've found it easier to stay up later and get up earlier without
caffeine, because I avoid the crash late at night when the caffeine leaves my
system, and I don't feel sluggish in the morning before I have my morning
coffee. It feels like a win all around.</p>
<p>I also gave up alcohol for Lent (not entirely; I've occasionally been drinking a
glass of beer or wine, in very limited quantity, and no hard liquor). It has
made me realize how even a little bit of alcohol too late in the evening has a
deleterious effect on my sleep.</p>
<p>Overall, I like this mental state of not having peaks and troughs in my
consciousness. Caffeine may make me feel able to do more in the short term, but
as my withdrawals seem to show, unearned gains in the present always borrow from
the future, and that debt has to be repaid, sometimes with interest. Same story
with alcohol. On weekends when I would go out with friends and drink for
multiple nights in a row, I would show up to work on Monday having lost the
thread I was tracing the previous Friday. This doesn't happen as much with no
alcohol. I suspect its effects on sleep and memory may have something to do with
this.</p>
<p>I think I'll try to keep this up for as long as I can. I feel more energetic and
more creative. Of course, no shame if you enjoy caffeine and/or alcohol (in
moderation). This is just what works for me.</p>
<p>Enjoy your day, dear reader!</p>]]></content:encoded>
    </item>
    <item>
      <title>AI won't come for your job yet, and that's bad</title>
      <link>https://samx.io/blog/2026-03-31-ai-stall.html</link>
      <guid isPermaLink="true">https://samx.io/blog/2026-03-31-ai-stall.html</guid>
      <pubDate>Tue, 31 Mar 2026 00:00:00 +0000</pubDate>
      <description><![CDATA[What macro risk, inference subsidies, and questionable ROI imply about future AI adoption patterns, and how we can change that]]></description>
      <content:encoded><![CDATA[<p><strong>Bottom line up front:</strong> <i>I suspect that we will see an eyewatering spike in
inference costs due to macro risks, evaporation of inference subsidies, and
questionable productivity improvements from AI adoption with current models. If
inference costs increase, they will make all but the least powerful models
unaffordable for the average person and enterprise. This will limit the
potential for rapid societal and economic transformation, relegating us to a
reality where AI is another tool like email that usually works well but can't
automate most things in its own right. I would like to see a different future. I
think a world where jobs are automated en masse could be good for reorienting
humans toward creativity and community, and some actionable things I believe we
can do to increase the probability of that outcome are: <strong>create</strong>, <strong>research
how to make inference cheaper</strong>, <strong>contribute to Scalable Formal Oversight</strong>,
and <strong>build community</strong></i>.</p>
<p><br></p>
<p>I work in software, and since the beginning of this year, 2026, I have been
investing in deeply integrating AI into my workflow. AI has gotten strikingly
good at coding, to the point where it writes more than 90% of my code, and many
of my coworkers at Stripe are reporting a similar experience. Everyone in
software that I talk to agrees: Anthropic's Opus 4.6 unleashed a vibe shift in
coding capability. A world where software engineering is mostly automated feels
within reach.</p>
<p>Although there are many questions unanswered about the safety of mass AI
adoption, I think there is great potential for the replacement of jobs by AI to
create a remarkably improved standard of living, where humans can be free of most
material constraints and return to a state of creativity and community-oriented
living. The transitional period might be painful, but I am generally optimistic
about what a fully AI-driven future could look like. We can maximize our chances
of success if we pair a push for powerful AI-driven job automation with
<a href="https://www.lesswrong.com/posts/SfhFh9Hfm6JYvzbby/the-scalable-formal-oversight-research-program">Scalable Formal Oversight</a>
(SFO).</p>
<p>If we don't act now, this generational opportunity might fade away. One way it
could fade is if inference costs rise substantially. Recently, I caught wind of
news that a large software company (name omitted for privacy) is pumping the
brakes on AI spend. The team responsible for rolling out AI tools must have
received a directive to start constraining costs because the company's bottom
line was starting to feel it, and they forced all engineers to use a lower tier
model unknowingly. In response, several engineers at the company grumbled about
how the lower model tier's output was noticeably worse, and it led to wasted
cycles spent on reviewing and correcting the worse model's output.</p>
<p>This got me thinking about whether the AI adoption and job replacement scenarios
painted in pieces like the
<a href="https://www.citriniresearch.com/p/2028gic">Citrini report</a> and Matt Schumer's
now infamous
<a href="https://www.linkedin.com/pulse/something-big-happening-matt-shumer-so5he/">Something Big Is Happening</a>
post are realistic in our current world.</p>
<p>My take is that <strong>AI labor replacement is not going to happen yet</strong>. AI
inference will get significantly more expensive due to macro and geopolitical
risks and the fleeting nature of investor subsidies of inference. If that
happens, the demand for inference that's fueling the AI buildout could evaporate
due to prohibitively high costs, thus completing the boom-bust cycle. I'm
disappointed about this because I believe that this technology has the potential
to transform the way we work and live, but all hope is not lost; there are
several strategic directions in which we can collectively push to maximize the
likelihood of a good outcome.</p>
<p>By the end of this post, I hope to reinvigorate in you (and myself, quite
frankly) a revolutionary spirit with some thoughts about what actions we can
take to try to generate escape velocity from the boring future where AI becomes
a half-decent tool that makes work more annoying and
<a href="https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it">exhausting</a>.
But to start, here are the main pillars of my argument:</p>
<ol>
<li>
<p><strong>AI buildout is bottlenecked:</strong> Constraints on the manufacturing of critical
   inputs limit the AI buildout. Recent geopolitical instability and its effects
   on energy prices threaten to further dampen progress.</p>
</li>
<li>
<p><strong>Inference costs are being held artificially low:</strong> Investor money is
   subsidizing the free tiers of frontier labs. We should expect inference costs
   to rise substantially if the frontier labs providing the models we know and
   love want to stay in business.</p>
</li>
<li>
<p><strong>Humans are now the bottleneck more than ever:</strong> Investments in AI tools are
   reportedly not generating ROI for many enterprises. AI helps workers do more,
   but the limits of cognitive load of managing and communicating with AI agents
   in parallel are becoming apparent.</p>
</li>
</ol>
<h2>Expect inference costs to increase</h2>
<p>When an environment changes, systems operating in that environment either fail
or adapt. The release of ChatGPT in 2022 changed the environment, and three and
a half years later, our economic system is still in the process of adapting.</p>
<p>How quickly different parts of the economy can adapt varies. Software adapts
very quickly; hardware, more slowly; and physical manufacturing infrastructure
moves at a snail's pace. It is therefore unsurprising that the
chokepoints of the AI buildout are in the realm of the physical: power and
silicon.</p>
<p>Frontier labs are also underpricing inference, drawing down their prodigious
cash reserves to cover their costs. If the macro risk flares up, then capital
might dry up before profitability is reached.</p>
<h3>Risk 1: Macro</h3>
<p>If you're tapped into the sphere of popular tech podcasts, such as the
<a href="https://cheekypint.transistor.fm/">Cheeky Pint</a> podcast by John Collison,
cofounder of Stripe, you may have heard Elon Musk in a recent episode talking
about a severe shortage of the turbines that are used to convert natural gas
into electricity. Gas-fired turbine lead times have
<a href="https://www.spglobal.com/energy/en/news-research/latest-news/electric-power/052025-us-gas-fired-turbine-wait-times-as-much-as-seven-years-costs-up-sharply">soared</a>
to as much as <em>seven years</em>. That means that if you aren't already in the queue,
you're waiting the better part of a decade just to get the turbines, which are
necessary but not sufficient to generate electricity from natural gas.</p>
<p>The supply of natural gas itself, along with that of oil, is also under
threat. Recent US military action in Iran has effectively closed the Strait of Hormuz,
which is disrupting global oil and liquefied natural gas (LNG) supply. According
to
<a href="https://www.reuters.com/business/energy/iran-wars-energy-impact-forces-world-pay-up-cut-consumption-2026-03-21/">Reuters</a>,
the situation in the strait has removed around 400 million barrels from the
market, "triggering price increases of around 50%." This crisis may be
transient, and we will see how things play out in the coming weeks, but it
highlights the US and world economies' sensitivity to global conflict.</p>
<p>These circumstances may restrict how quickly new data centers can come online,
depending on what fuel source they choose.
<a href="https://www.gartner.com/en/newsroom/press-releases/2025-11-17-gartner-says-electricity-demand-for-data-centers-to-grow-16-percent-in-2025-and-double-by-2030">Gartner</a>
estimates 2025 data center power consumption at 448 TWh and projects it to
grow to 980 TWh by 2030. In their model, power consumption by data centers more
than doubles in five years, while growth in power generation across the board in
the US is alarmingly
<a href="https://www.eia.gov/electricity/annual/table.php?t=epa_03_01_a.html">stagnant</a>.<a class="tooltip" data-tip="Look at the &quot;Total Generation at Utility Scale Facilities&quot; column in the Annual Totals group. In 2014, power generation was 4,093,564 GWh, and in 2025, it was 4,308,634 GWh. This is a mere 5.2% increase in 11 years!" href="#tooltip-def-1" id="tooltip-ref-1"><sup>1</sup></a></p>
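<p>To make the mismatch concrete, here is a back-of-the-envelope sketch (illustrative only; it uses just the Gartner and EIA figures cited above):</p>

```python
# Gartner: data center consumption grows from 448 TWh (2025) to 980 TWh (2030)
dc_cagr = (980 / 448) ** (1 / 5) - 1   # compound annual growth rate over 5 years
# roughly 17% per year
assert 0.16 < dc_cagr < 0.18

# EIA: total US utility-scale generation, 2014 (4,093,564 GWh) vs 2025 (4,308,634 GWh)
grid_cagr = (4_308_634 / 4_093_564) ** (1 / 11) - 1
# well under 1% per year: generation growth is nearly flat by comparison
assert 0 < grid_cagr < 0.01
```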
<p>As for silicon, TSMC, a Taiwanese corporation, represents a single point of
failure for the US tech industry. In Q2 2025, TSMC
<a href="https://www.design-reuse.com/news/202529294-global-foundry-revenue-surged-to-41-7-billion-in-q2-2025-with-tsmc-capturing-a-record-70-percent-market-share/">captured</a>
a record 70.2% of chip foundry market share. According to
<a href="https://www.nasdaq.com/articles/1-number-may-ensure-tsmcs-market-dominance">Nasdaq</a>,
TSMC has an even higher share, 90%, of advanced chip manufacturing, which
includes the 3-nanometer chips that are becoming standard. TSMC poses obvious
geopolitical risk as well, as it sits a mere 100 miles off the coast of China. One
well targeted and well timed attack could disrupt AI adoption by limiting how
much compute can get built out in the short term. There is no telling how long
it would take to reach original production capacity again if there were
significant damage to the dense network of fabs, raw materials suppliers, and
research institutions on Taiwan's west coast.</p>
<p>Ostensibly in response to these risks, the US government pressured TSMC to spin
up a fab in Arizona. The fab has been built, but TSMC Arizona already lags
behind Taiwan in the set of manufacturing processes that are available. TSMC is
currently capable of lithography down to 2 nanometer scale, and cloud service
providers are
<a href="https://www.trendforce.com/news/2025/06/02/news-tsmcs-2nm-wafers-rumored-to-soar-to-30k-per-unit-yet-csp-giants-reportedly-rush-to-adopt-by-2027/">rushing</a>
to adopt it, but 2nm and the older 3nm processes are slated to be made available
in the Arizona plant in 2029 and 2027, respectively. In Taiwan, a 1nm fab is
<a href="https://www.trendforce.com/news/2025/02/03/news-tsmc-said-to-plan-2nm-production-in-u-s-1nm-fab-in-tainan/">already set</a>
for Tainan. US-based fabs have a lot of catch-up to do. We must have parity with
the state of the art in chip manufacturing before the supply chain and
geographical dependencies can be eliminated.</p>
<p>These are far from the only risks to chip fabrication. Just as I was
drafting this post, a notification lit up my phone:
<a href="https://www.nytimes.com/2026/03/27/business/helium-chips-iran-war.html?smid=nytcore-ios-share">"An Invisible Bottleneck: A Helium Shortage Threatens the Chip Industry"</a>.</p>
<p>It becomes hard to justify underpricing inference when the cost of the physical
utility infrastructure might blow up, and it's impossible to justify
underpricing when there is no capital to subsidize inference. This brings us to
the next risk: capital.</p>
<h3>Risk 2: Capital well runs dry</h3>
<p>Let's look at some numbers, using OpenAI as an example. Different sources
cite different revenue and cost figures for OpenAI, and as a private company,
its true financials are difficult to know with certainty. At a
minimum, however, we can certainly say OpenAI is losing money at an astounding
clip. Epoch AI <a href="https://epoch.ai/data-insights/openai-compute-spend">estimates</a>
that OpenAI was significantly unprofitable in 2024, making &dollar;3.7
billion revenue on &dollar;1.8 billion inference spend and roughly
&dollar;6 billion total compute spend, including inference. This implies a
plausible 50% gross margin on inference, but a negative net margin that
investors are covering.</p>
<p>In 2025, OpenAI <a href="https://www.arcade.dev/blog/ai-inference-economics">reported</a>
&dollar;4.3 billion in revenue for 2025H1 and projected &dollar;13 billion for
the full year. "Against that revenue, they’re spending approximately
&dollar;22 billion," representing a shortfall of &dollar;9 billion
that investors are subsidizing. This shortfall is set to increase, as
<a href="https://epoch.ai/data-insights/cost-trend-large-scale">further research</a> from
Epoch AI shows that the compute costs of large-scale training runs have been
doubling every 8 months, or 2.7x per year.</p>
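<p>A quick sanity check on these figures (a sketch using only the numbers cited above; real accounting is far messier):</p>

```python
# 2024, per Epoch AI's estimates (billions of dollars)
revenue_2024 = 3.7
inference_spend_2024 = 1.8
gross_margin = (revenue_2024 - inference_spend_2024) / revenue_2024
assert round(gross_margin, 2) == 0.51  # roughly the 50% gross margin on inference

# 2025, per the reported and projected figures (billions of dollars)
projected_revenue_2025 = 13.0
projected_spend_2025 = 22.0
shortfall = projected_spend_2025 - projected_revenue_2025
assert shortfall == 9.0  # the $9 billion gap investors are subsidizing
```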
<p>OpenAI and their investors are well aware of the capital requirements to keep
the lights on with inference as well as in the R&amp;D department. The company is
fresh off a
<a href="https://www.cnbc.com/2026/03/24/openai-secures-an-extra-10-billion-in-record-funding-round-cfo-friar-says.html">record fundraise</a>
of &dollar;120 billion. Even with that much capital, OpenAI still has to
be judicious. Just last week, OpenAI decided to shut down Sora, the video
generation app released several months ago. OpenAI's CFO, Sarah Friar,
<a href="https://www.cnbc.com/2026/03/24/openai-secures-an-extra-10-billion-in-record-funding-round-cfo-friar-says.html">said on CNBC</a>,
<strong>"We are just facing a lack of compute."</strong>
<a href="https://techcrunch.com/2026/03/29/why-openai-really-shut-down-sora/">TechCrunch</a>
cites the Wall Street Journal's analysis of the situation, stating that "Claude
Code, in particular, was eating OpenAI’s lunch."</p>
<p>If the macro risks increase, leading to costlier compute, OpenAI may not be able
to make the rounds with investors to raise eye-watering amounts of money much
longer. Furthermore, all this is moot if these models don't perform in
real-world deployments. With 61% of senior business leaders
<a href="https://www.cio.com/article/4114010/2026-the-year-ai-roi-gets-real.html">feeling increased pressure</a>
to prove ROI compared to a year ago, and 71% of global CIOs
<a href="https://hbr.org/2026/03/7-factors-that-drive-returns-on-ai-investments-according-to-a-new-survey">saying</a>
their AI budgets would be cut if value from AI couldn't be demonstrated within 2
years, fiscal reality may start setting in soon. If poor performance metrics
coincide with a compute cost spike, AI investment and adoption fervor could
rapidly deteriorate.</p>
<p>The competition is fierce. I predict that being able to manage costs will soon
become a moat. A good example of this is Cursor. They leveraged their unique
dataset of software engineer conversations with AI to build in-house models that
are domain-specific for coding and far less expensive to run. If capital runs
out, labs like OpenAI and Anthropic will have to either cut costs or
significantly increase the cost of using their products, restricting the
accessibility of powerful AI. If only mediocre AI remains accessible, we will
settle into a <a href="https://en.wikipedia.org/wiki/Stationary_point">stationary point</a>
of productive capacity where humans remain the bottleneck.</p>
<h2>Humans are more of a bottleneck than ever</h2>
<p>Working with AI tools has a strange allure, almost an addictive quality. The gap
between dream and reality has narrowed, so the dopamine hit we get from
completing a task is closer and more predictably reachable.</p>
<p>This thrill has enabled us to take on more work. I can attest to the rush of
kicking off a new agent session the moment an idea comes to mind, instead of
having to invest the mental effort of thinking through what I'm trying to do.
This pattern is appearing across the economy. According
to <a href="https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it">research</a>
published in HBR,</p>
<blockquote>
<p>On their own initiative workers did more because AI made "doing more" feel
possible, accessible, and in many cases intrinsically rewarding.</p>
</blockquote>
<p>The researchers warn, however, that this early period of experimentation could
morph into cognitive overload, reducing quality of work output and
decision-making in the long term.</p>
<p>Humans have a fundamentally serial input-output interface. We fail at
multitasking, and there is a well-established
<a href="https://onlinelibrary.wiley.com/doi/full/10.1155/ahci/9313239">body of research</a>
that corroborates this.<a class="tooltip" data-tip="See Section 2, &quot;Cognitive Workload in Multitasking Environments,&quot; in the cited systematic literature review." href="#tooltip-def-2" id="tooltip-ref-2"><sup>2</sup></a> AI, on the other hand, can be virtually
infinitely parallelized, producing more output per unit time than a human can
review and understand, and switching between parallel threads imposes real
cognitive overhead. This effect, sometimes referred to as
<a href="https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry">"AI Brain Fry,"</a> is
analogous to
<a href="https://en.wikipedia.org/wiki/Thrashing_(computer_science)">thrashing</a> on
computers: a slowdown caused by serializing many parallel AI workstreams,
compounded by the temptation to add still more workstreams to "be more
productive."</p>
<p>This all points to the need to develop a better working relationship with AI.
Perhaps we need to be able to entrust it with longer horizon tasks so that we
aren't as crushed by the
<a href="https://justoffbyone.com/posts/math-of-why-you-cant-focus-at-work/">mathematics</a>
of how much effort it takes for our brains to switch context and enter deep
focus mode. I'll take this one step further: <strong>the task time horizon at
which AI can reliably (&gt;&gt;90%) execute work independently is the most
important dimension for job replacement</strong>. It matters more than cost,
because capital can be acquired (e.g. through loans) if the
return-on-investment potential is there.</p>
<p>Reliable long horizon task completion, however, still eludes even the most
powerful models. According to the
<a href="https://metr.org/time-horizons/">METR task completion time horizon evaluation</a>,
Opus 4.6 can complete a task that would take a human 1 hour and 10 minutes with
only <em>80% reliability</em>.<a class="tooltip" data-tip="And the true task time horizon value might be even less. I don&#x27;t think METR&#x27;s benchmark generalizes well. I may write a follow-up post about this." href="#tooltip-def-3" id="tooltip-ref-3"><sup>3</sup></a> That falls well short of a well-calibrated
human in a workplace. Correcting AI output 20% of the time on long horizon
tasks, while context switching frequently to manage many short horizon ones,
doesn't sound like a healthy, productive working model to me.</p>
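<p>A small illustration of why 80% per-task reliability is so limiting: if several long-horizon tasks are chained and each must succeed (a simplifying assumption; real tasks are not independent), the end-to-end success rate collapses quickly:</p>

```python
# per-task success probability, per the METR figure cited above
p = 0.8
# probability that a chain of n independent tasks all succeed is p ** n
five_task_chain = p ** 5    # about 0.33: end-to-end success a third of the time
ten_task_chain = p ** 10    # about 0.11
assert round(five_task_chain, 2) == 0.33
assert round(ten_task_chain, 2) == 0.11
```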
<p>So for the time being, it seems that we are stuck in this rut where AI works
decently well but leaves us cognitively exhausted due to managing it. If future
AI can achieve reliability figures in the high 90s but only with extremely high
cost, such that ROI still remains questionable, then the fantasies of
Citrini-style job replacement vanish.</p>
<h2>How can we manifest the "good ending?"</h2>
<p>Now, as promised, I would like to paint a different picture and suggest
strategic directions in which we can collectively push to manifest what I'm
calling the "good ending," where job replacement does happen and it enables us
to become our true creative, soulful, human selves.</p>
<p>Upon reading the above, you might be wondering, "why is the current state
suboptimal?" and "how does job replacement enable this vision of yours?" These
are fair questions, and it is true that these are not guarantees. Work gives
many people a sense of purpose, and many people may not want to lose that.
However, community involvement can also grant a sense of purpose. I have felt
this in my own life. Spending time with my family and community and helping them
out motivates me like few other things in the world. I think one of the
tragedies of our modern economic system is that community-building has fallen by
the wayside.</p>
<p>Alongside community is creativity, which seems uniquely able to form within us a
moral conscience. For example, <em>Uncle Tom’s Cabin</em> by Harriet Beecher Stowe
<a href="https://www.weforum.org/stories/2016/03/10-novels-that-changed-the-world/">sent waves</a>
through the northern states, greatly furthering the abolitionist cause.
Literature, art, and other creative works transmit memories of human pain and
suffering of the past, helping us learn and reflect in the process. I believe
humanity benefits from being able to spend lots of time creating and reflecting
on creative works.</p>
<p>AI displacement of jobs enables community-orientation and creativity simply by
returning to us the choice of what we do with our time. Many of us may choose to
keep working, and that's fine. Many jobs have space for the creative work I
described.<a class="tooltip" data-tip="Software engineering, for instance. That&#x27;s why I enjoy it." href="#tooltip-def-4" id="tooltip-ref-4"><sup>4</sup></a> Many of us may choose other pursuits. It is not a given
that our governmental systems will respond in kind by allocating resources
appropriately in a post-work world, but I remain optimistic that we can prosper
nonetheless.<a class="tooltip" data-tip="I plan on returning to more practical thoughts on how governments can respond to such societal change in a future post." href="#tooltip-def-5" id="tooltip-ref-5"><sup>5</sup></a> Without needing to sacrifice community to make a living,
we can turn towards our neighbors and connect with them. We can generate new
cultures and traditions.</p>
<p>This line of thinking calls to mind the Greek culture that I grew up in. There
are rich regional traditions of music, dance, cuisine, customary celebrations,
and other cultural artifacts generated from generations of playful
experimentation.</p>
<p>These thoughts have parallels with and draw inspiration from
<a href="https://x.com/jgreenhall/status/2028850398224654752?s=46">this post</a> by Jordan
Hall on X. Shoutout to my friend <a href="https://www.nicholasmakiej.com/">Nick Makiej</a>
for finding it and sharing it with me. Now, the following are the directions I
suggest we push toward to generate this reality.</p>
<h3>1. Create</h3>
<p>New generations of AI models are currently getting trained on immense amounts of
AI slop content that was generated by past generations of models. Epoch AI
<a href="https://epoch.ai/blog/will-we-run-out-of-data-limits-of-llm-scaling-based-on-human-generated-data">predicts</a>
we will soon run out of human-generated data to train on. Andrej Karpathy
expressed concern about this as well in his discussion of the notion of
"entropy" in his <a href="https://www.dwarkesh.com/p/andrej-karpathy">interview</a> with
Dwarkesh Patel. Entropy is a
<a href="https://en.wikipedia.org/wiki/Entropy_(information_theory)">formal term</a> in
information theory, but Andrej was using it in a more informal sense in the
interview, to denote the "newness" of information. Model trainers
have been exploring using LLMs to
<a href="https://aclanthology.org/2024.findings-acl.658.pdf">generate synthetic data</a> to
train on. Karpathy is skeptical of this approach, however.</p>
<p>Here is an excerpt from the interview:</p>
<blockquote>
<p>The LLMs, when they come off, they’re what we call “collapsed.” They have a
collapsed data distribution. One easy way to see it is to go to ChatGPT and
ask it, “Tell me a joke.” It only has like three jokes. It’s not giving you
the whole breadth of possible jokes. It knows like three jokes. They’re
silently collapsed.</p>
<p>You’re not getting the richness and the diversity and the entropy from these
models as you would get from humans. Humans are a lot noisier, but at least
they’re not biased, in a statistical sense. They’re not silently collapsed.
They maintain a huge amount of entropy. So how do you get synthetic data
generation to work despite the collapse and while maintaining the entropy?
That’s a research problem.</p>
</blockquote>
<p>I tried this myself to see if Karpathy was right, using the free version of
ChatGPT on March 22nd, 2026 (this isn't a very reproducible experiment, because
OpenAI hides which model is being used on the free version). On the fourth try,
ChatGPT repeated the same joke as the third try:</p>
<p><img src="../blog_assets/tell_me_a_joke.png" style="width:100%" /></p>
<p>The upshot here is that we as humans still have a lot to offer. Maybe inference
doesn't have to decrease substantially in cost if we can collectively provide
current models with more entropy per token.</p>
<p>Each of us contains a deep well of entropy, informed by our unique personal
experiences and story. This is what inspired me to start this blog. I used AI to
research this piece, but I wrote every word on this page by hand because I
wanted this blog to reflect my entropic voice.</p>
<p>Go forth, create, and create with high quality.</p>
<h3>2. Participate in research to make AI more cost effective to train and run</h3>
<p>The further we drive down the cost of inference, the more likely it is that
we'll be able to generate escape velocity and drive meaningful adoption of AI in
workplaces and among people at the edge.</p>
<p>Research in
<a href="https://ieeexplore.ieee.org/document/10968787">quantization, distillation, and pruning</a>
has made strides toward making LLM inference runnable on cheap hardware such as
phones, but with a slight accuracy penalty. Figuring out how to mitigate that
tradeoff is an interesting research direction.</p>
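<p>To see where the accuracy penalty comes from, here is a toy sketch of symmetric int8 quantization (illustrative only; production schemes add per-channel scales, calibration, outlier handling, and more):</p>

```python
def quantize_int8(weights):
    # symmetric per-tensor quantization: map [-max|w|, max|w|] onto [-127, 127]
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.42, -1.27, 0.003, 0.98, -0.55]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# the round-trip error is bounded by half a quantization step (scale / 2);
# small weights like 0.003 lose the most relative precision
err = max(abs(a - b) for a, b in zip(weights, restored))
assert err <= scale / 2
```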
<p>I think alternatives to the LLM architecture are also underexplored. Drawing
from the
<a href="https://proceedings.neurips.cc/paper_files/paper/2024/file/e618724ac897c6cf3fbfb273f8695d67-Paper-Conference.pdf">Mamba</a>
architecture, for instance, could lead to significant efficiency improvements:
Mamba replaces attention with a selective state-space layer that has linear
<a href="https://en.wikipedia.org/wiki/Time_complexity">time complexity</a> in
sequence length, whereas standard transformer attention layers are quadratic.</p>
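<p>The asymptotic difference can be sketched with a toy op count (purely illustrative; it ignores heads, hidden dimensions, and hardware effects):</p>

```python
def attention_ops(n):
    # self-attention compares every token with every other token: O(n^2)
    return n * n

def ssm_ops(n):
    # a state-space layer does constant work per token: O(n)
    return n

# doubling the context from 4k to 8k tokens quadruples attention work
# but only doubles state-space work
assert attention_ops(8192) / attention_ops(4096) == 4.0
assert ssm_ops(8192) / ssm_ops(4096) == 2.0
```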
<p>My good friend Dana recently joined <a href="https://subconscious.dev">Subconscious</a>, a
startup leveraging <a href="https://arxiv.org/abs/2507.16784">research out of MIT</a> which
proposes a codesigned thread inference architecture and runtime for long horizon
tasks. Codesign, though harder to implement and not generalizable, can have
tremendous efficiency benefits because each part of the system is aware of how
it needs to interact with the others to maximize efficiency and accuracy.
Architecture-runtime codesign is a great first step. I would love to see this
taken a step further toward runtime-chip codesign as well.
<a href="https://matx.com/">MatX</a> is a startup building chips for inference, and I
expect they've done some thinking about this.</p>
<p>A nice consequence of all this is that cheap inference run on consumer
hardware at the edge also eases the energy bottleneck. The power grid is
designed to handle marginal load increases at its edges, but it is not well
equipped to handle a massive spike in load at a single geographical point. This
might even resolve some of the silicon bottleneck too; consider how much compute
there is around us lying dormant on devices like smartphones, tablets, even our
laptops.</p>
<p>There is lots of interesting work to be done here, and I look forward to seeing
it.</p>
<h3>3. Participate in the Scalable Formal Oversight initiative</h3>
<p>The obvious risk in racing towards the "good ending" is the AI doomer's
fantasy: what if AI takes over and starts misbehaving?</p>
<p>This is not a fantasy. The AI of today misbehaves often. Large language models
are known to engage in <a href="https://arxiv.org/abs/2412.14093">deceptive behavior</a> to
make human evaluators think that they are more aligned with human intentions
than they actually are. Code that agents output can be unsafe and
<a href="https://x.com/leojrr/status/1901560276488511759?s=20">destructive</a>. A
verification harness for the actions that AI agents take is crucial. In my view,
it is a prerequisite for any significant job replacement in safety critical
domains, and it is likely necessary in order to actually enable large scale
replacement in software engineering, for instance, where high level goals can be
very long horizon, and contexts can grow very large. Agents are likely prone to
mistakes that may be avoidable with verification guardrails.</p>
<p>Read Max von Hippel's post
<a href="https://www.lesswrong.com/posts/SfhFh9Hfm6JYvzbby/the-scalable-formal-oversight-research-program">Scalable Formal Oversight</a>,
and take it seriously!</p>
<h3>4. Build community</h3>
<p>At the end of the day, community is all we have. If AI frees us from most
material constraints, then we can return to a world of playful creativity,
cheer, and communal bonds. This is a world I'd love to live in.</p>
<p>If this resonates with you, get in touch!</p>
<h2>Why I could be wrong</h2>
<p>To recapitulate: my core thesis is that we can expect AI inference to get
significantly more expensive, due to macro risks and venture capital inference
subsidies running out. Cheap access to AI inference at the frontier of
capability is necessary to support nontrivial levels of job replacement, so we
will likely land in a situation where the fervor around AI cools considerably.</p>
<p>Here are some rapid-fire reasons why I could be wrong:</p>
<ol>
<li>Perhaps progress in making inference cheaper at the hardware level will speed
   up. <a href="https://matx.com/">MatX</a> is a chip startup hoping to reinvent how AI
   inference is done at a hardware level, which is compelling, and perhaps they
   will have some breakthrough.</li>
<li>Perhaps the geopolitical risks of today will evaporate tomorrow.</li>
<li>Perhaps the buildout will proceed more smoothly than I think it will, with
   TSMC's Arizona buildout proceeding as planned or better than planned.</li>
<li>Perhaps the problem isn't inference cost, but context management or other
   algorithmic inefficiencies (another shoutout to Dana at
   <a href="https://www.subconscious.dev/">Subconscious</a>), and better context management
   is <a href="https://arxiv.org/abs/2512.24601">an</a>
   <a href="https://arxiv.org/abs/2310.08560">active</a>
   <a href="https://arxiv.org/html/2510.14278v1">research</a>
   <a href="https://arxiv.org/abs/2507.16784">area</a>. We could generate escape velocity
   if we can solve these challenges and unlock more consistent ROI.</li>
<li>Perhaps most industries won't actually require highly capable LLMs to be
   mostly automated.</li>
<li>Perhaps people are still learning how to most effectively use AI. A recent
   <a href="https://www.anthropic.com/research/economic-index-march-2026-report">report</a>
from Anthropic suggests this very effect. Longer-tenured users of Claude are
4% more likely to have successful conversations, even after controlling for
   model type, use case, and country/language. Multitasking is also a skill that
   appears to be
   <a href="https://www.cell.com/neuron/fulltext/S0896-6273(09)00458-9">trainable</a>.</li>
</ol>
<p>There are probably more reasons. Send me an email if you have a comment! This
post is an essay<a class="tooltip" data-tip="French for &quot;attempt&quot;" href="#tooltip-def-6" id="tooltip-ref-6"><sup>6</sup></a> in the true sense of the word.</p>
<p>Each of these probably deserves more exploration, but I did not have the time or
space to get them into this piece. I may return to some of these pushbacks in
future posts, so stay tuned!</p>
<h2>Final word</h2>
<p>Thank you for making it this far. This is the first blog post that I'm
officially publishing, so I would appreciate any and all feedback. Please drop
me an email at <a href="mailto:s.xifaras999@gmail.com">s.xifaras999@gmail.com</a> if you
would like to leave feedback, positive or critical, or just want to chat. If
you're in the northeastern US (NYC-Boston range), let me know and I'd love to
explore an in-person meeting!</p>
<p>Victor and Nick, <strong>thank you</strong> for your helpful feedback on drafts of this post.</p>
<section class="tooltip-footnotes">
    <hr class="tooltip-footnotes-rule">
    <ol>
        <li id="tooltip-def-1">Look at the "Total Generation at Utility Scale Facilities" column in the Annual Totals group. In 2014, power generation was 4,093,564 GWh, and in 2025, it was 4,308,634 GWh. This is a mere 5.2% increase in 11 years! <a class="tooltip-back" href="#tooltip-ref-1">↩</a></li>
        <li id="tooltip-def-2">See Section 2, "Cognitive Workload in Multitasking Environments," in the cited systematic literature review. <a class="tooltip-back" href="#tooltip-ref-2">↩</a></li>
        <li id="tooltip-def-3">And the true task time horizon value might be even less. I don't think METR's benchmark generalizes well. I may write a follow-up post about this. <a class="tooltip-back" href="#tooltip-ref-3">↩</a></li>
        <li id="tooltip-def-4">Software engineering, for instance. That's why I enjoy it. <a class="tooltip-back" href="#tooltip-ref-4">↩</a></li>
        <li id="tooltip-def-5">I plan on returning to more practical thoughts on how governments can respond to such societal change in a future post. <a class="tooltip-back" href="#tooltip-ref-5">↩</a></li>
        <li id="tooltip-def-6">French for "attempt" <a class="tooltip-back" href="#tooltip-ref-6">↩</a></li>
    </ol>
</section>]]></content:encoded>
    </item>
    <item>
      <title>Hello, World!</title>
      <link>https://samx.io/blog/2026-03-21-hello-world.html</link>
      <guid isPermaLink="true">https://samx.io/blog/2026-03-21-hello-world.html</guid>
      <pubDate>Sat, 21 Mar 2026 00:00:00 +0000</pubDate>
      <description><![CDATA[An introductory post on why I'm starting a blog and what to expect]]></description>
      <content:encoded><![CDATA[<h2>Why a Blog?</h2>
<p>I've been meaning to write more for a while. Up to this point, this site has
been a static page — a CV with links — but I want a place to think through ideas
in public.</p>
<p>My goal is to write about whatever comes to me and piques my interest. I expect
that my first few posts will be a bit rough, and that's okay. I am being
intentional about cultivating high follow-through; I am not letting perfect be
the enemy of good.</p>
<h2>What to Expect</h2>
<p>Posts will be irregular. I'm not committing to a schedule, just to writing when
there's something worth saying.</p>
<p>If you have thoughts on anything I post, or just want to say hi, please reach
out via <a href="mailto:s.xifaras999@gmail.com">email</a>.</p>]]></content:encoded>
    </item>
  </channel>
</rss>
