Curated Finds

A handpicked collection of the most interesting, insightful, and inspiring links I've come across.

Curated

Quoting Ethan Mollick: A Lot is Going to Change

There are people overhyping AI, but the alternative is not that AI is useless, or even the average of the two positions. A lot is going to change dramatically even with today’s AI. Ignoring that means no chance to shape what’s next

Source: bsky.app

Curated

AI Doesn't Reduce Work—It Intensifies It

Pulling an all-nighter because you got inspired right before bed. Forgetting to drink or eat. Missing a meeting because you’re too hyperfocused on your task. Being exhausted the rest of the day after being in the zone and super productive for 2-3 hours. If you’ve got ADHD like me, these all sound familiar. But if you’re neurotypical, they’re kinda weird. Well, good news: you’re not alone anymore. Bad news: this might make AI harder to scale in your company.

Berkeley Haas School of Business researchers found something interesting:

We discovered that AI tools didn’t reduce work, they consistently intensified it. In an eight-month study of how generative AI changed work habits at a U.S.-based technology company with about 200 employees, we found that employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so. Importantly, the company did not mandate AI use (though it did offer enterprise subscriptions to commercially available AI tools). On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.

This really strikes a chord. I’m writing this post while also having Composer 1.5 fix the menu of my CMS and Claude Code convert a Python tool into a web app. On paper this all sounds great (look at all that time saved and extra productivity), but what it actually makes me think of is context switching, burnout, and sick leave.

Our research reveals the risks of letting work informally expand and accelerate: What looks like higher productivity in the short run can mask silent workload creep and growing cognitive strain as employees juggle multiple AI-enabled workflows. Because the extra effort is voluntary and often framed as enjoyable experimentation, it is easy for leaders to overlook how much additional load workers are carrying.

So what should you do? Aruna Ranganathan and Xingqi Maggie Ye recommend adopting an “AI practice”:

a set of intentional norms and routines that structure how AI is used, when it is appropriate to stop, and how work should and should not expand in response to newfound capability. Without such practices, the natural tendency of AI-assisted work is not contraction but intensification, with implications for burnout, decision quality, and long-term sustainability.

Source: hbr.org (via simonwillison.net)

Curated

Hilary Gridley's AI Steering Wheel

This happens constantly with AI. You say “make this more ambitious” and the model doesn’t know if you mean expand the scope, increase the stakes, be more experimental, or just show more confidence. You ask for “simpler” and it dumbs everything down (you wanted straightforward, not condescending). You ask for “more concise” and it guts the nuance (you wanted lean, not hollow). The words feel like synonyms, but to an AI, they are completely different instructions.

The first tool my therapist ever suggested to me was Dr. Gloria Willcox’s Feelings Wheel. It stuck with me because it showed how much precision matters when naming things. So when I spotted a similar infographic on LinkedIn, it immediately caught my attention. Turns out Hilary Gridley adapted it to work better with LLMs. Test it out here.

I call it the AI Steering Wheel. Like the Feelings Wheel, it starts broad and gets more specific as you move outward. Six dimensions in the center—Originality, Grounding, Risk, Scope, Style, Certainty—each branching into increasingly precise adjectives.
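The wheel's structure (broad dimensions in the center, sharper adjectives further out) maps naturally onto nested data. Here's a minimal sketch of that idea; the six dimension names come from the post, but the example directions and adjectives are my own hypothetical stand-ins, not Gridley's actual wheel.

```python
# Hypothetical sketch of a "steering wheel": dimension -> direction -> precise adjectives.
# Only the dimension names (Scope, Style, Certainty, ...) come from the source;
# the directions and adjectives below are illustrative placeholders.
steering_wheel = {
    "Scope": {"narrower": ["focused", "minimal"], "broader": ["expansive", "exhaustive"]},
    "Style": {"simpler": ["plain", "direct"], "richer": ["vivid", "ornate"]},
    "Certainty": {"hedged": ["tentative", "exploratory"], "assertive": ["confident", "definitive"]},
}

def refine(dimension: str, direction: str) -> list[str]:
    """Walk outward from a broad dimension to more precise adjectives."""
    return steering_wheel[dimension][direction]

print(refine("Style", "simpler"))  # ['plain', 'direct']
```

The point isn't the code, it's the lookup: instead of telling a model "simpler," you pick the outer-ring word ("plain," "direct") that says exactly which kind of simpler you mean.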

Source: hils.substack.com

Curated

Quoting Ethan Mollick: Managing Agents is Really a Management Problem

I keep hearing stories of people (devs and product managers mostly) having either amazing or terrible experiences with LLMs. Here’s what’s consistently true: if you don’t know how to use them (and ideally how they work) you’ll get poor results. But if you know their strengths and weaknesses, and you can clearly describe what you’re building or the problem you’re solving, they become fantastic assistants.

There’s a reason product managers pick this up fast. They already map problem spaces, run discovery, measure success, prioritize, and plan. Those are the muscles you need to get value from agents. This applies to people managers too, as Ethan Mollick puts it:

When you see how people use Claude Code/Codex/etc, it becomes clear that managing agents is really a management problem. Can you specify goals? Can you provide context? Can you divide up tasks? Can you give feedback? These are teachable skills.

Source: x.com

Curated

Quoting Jason Gorman: The Future of Software Development is Software Developers

Have you ever asked an LLM to do the same task in different languages and gotten wildly different results? I mostly prompt in English, but I’ll switch to French sometimes, and it’s surprisingly hard to nail the exact same details and nuances. This quote from Jason Gorman got me thinking about it.

The hard part of computer programming isn’t expressing what we want the machine to do in code. The hard part is turning human thinking – with all its wooliness and ambiguity and contradictions – into computational thinking that is logically precise and unambiguous, and that can then be expressed formally in the syntax of a programming language.

This resonates with another personal experience. My second language is Dutch, and sometimes I can’t find the right French or English word for what I’m thinking, but Dutch nails it. Turns out that word just doesn’t translate literally to French or English.

Edsger Dijkstra called it nearly 50 years ago: we will never be programming in English, or French, or Spanish. Natural languages have not evolved to be precise enough and unambiguous enough. Semantic ambiguity and language entropy will always defeat this ambition.

Source: codemanship.wordpress.com

Curated

Quoting Bryan Newbold: Why Leaving X Makes Sense

There’s less talk about decentralized social media these days (or maybe I’ve just been out of the loop), but it’s nice seeing more people join Bluesky or Mastodon. I haven’t been on X for over a year, and every time I need to go there because someone posted something useful only there, it frustrates me. That’s why this quote by Bryan Newbold about regional governments and institutions posting on Bluesky hit home:

Everybody should be able to get through their day safely without faustian privacy bargains and barrages of targeted ads and adversarial slop

Network effects lock users in, but they also lock people out. Every person who leaves X makes it less valuable for the next person thinking about leaving, until suddenly the whole thing tips. Institutions moving first is what can make this happen. Let’s hope it keeps going (so I never have to open X again).

Source: @bnewbold.net

Curated

Quoting Johann Rehberger: The Normalization of Deviance in AI

I just learned about the Normalization of Deviance a few days ago (you’d think a space nerd like me would’ve known about this in the context of the Space Shuttle Challenger disaster), thanks to Johann Rehberger and his take on how it applies to AI.

The original term Normalization of Deviance comes from the American sociologist Diane Vaughan, who describes it as the process in which deviance from correct or proper behavior or rule becomes culturally normalized.

This is something I think about a lot when I look at geopolitics, but I never realized how well it applies to AI, and how we in SaaS companies fall into the same trap more often than we'd like to admit.

I use the term Normalization of Deviance in AI to describe the gradual and systemic over-reliance on LLM outputs, especially in agentic systems. (…) In the world of AI, we observe companies treating probabilistic, non-deterministic, and sometimes adversarial model outputs as if they were reliable, predictable, and safe.

What worries me is that often we don’t even realize we’re doing it. Either we’re rushing to deliver value, or we’re just learning as we go because this is still an emergent field.

Such a drift does not happen through a single reckless decision. It happens through a series of “temporary” shortcuts that quietly become the new baseline. Because systems continue to work, teams stop questioning the shortcuts, and the deviation becomes invisible and the new norm.

I feel like adding more guardrails or checks gets treated like tech debt and legacy code: nobody wants to do it, it doesn’t have obvious value, and it’s complicated and time-consuming. I’m worried that implementing agentic workflows will only make this worse and amplify the risks. But I’m optimistic that with a bit more discipline and fewer “we’ll fix it later” shortcuts, we can keep the innovation without normalizing the risk.

Source: embracethered.com

Curated

Quoting The Resonant Computing Manifesto

I’m passionate about AI and LLMs, and I genuinely believe they could transform our world for the better. But I’m also a sarcastic realist who knows there are serious risks and challenges. I’ve struggled to find a clear way to describe how I imagine doing AI responsibly while still pushing innovation.

With the emergence of artificial intelligence, we stand at a crossroads. This technology holds genuine promise. It could just as easily pour gasoline on existing problems. If we continue to sleepwalk down the path of hyper-scale and centralization, future generations are sure to inherit a world far more dystopian than our own.

Turns out a lot of inspiring people (including some I’ve followed and admired for years, like Amelia Wattenberger and Simon Willison) have already nailed this in The Resonant Computing Manifesto. It lays out five principles for building resonant software (as Willison describes it):

Keeping data private and under personal stewardship, building software that’s dedicated to the user’s interests, ensuring plural and distributed control rather than platform monopolies, making tools adaptable to individual context, and designing for prosocial membership of shared spaces.

My favorite part of AI, perfectly put into words:

This is where AI provides a missing puzzle piece. Software can now respond fluidly to the context and particularity of each human—at scale. One-size-fits-all is no longer a technological or economic necessity. Where once our digital environments inevitably shaped us against our will, we can now build technology that adaptively shapes itself in service of our individual and collective aspirations. We can build resonant environments that bring out the best in every human who inhabits them.

(via Simon Willison)

Curated

Quoting Elena Verna: My beef with AI credit pricing

If you’re a product manager adding AI features to your product, you’ve probably struggled with pricing them. There’s no one-size-fits-all answer, but one thing’s clear: AI Credits aren’t it. Elena Verna nailed why in her piece on why she hates AI credit pricing:

  • Customer doesn’t know the price up front
  • Prices don’t feel fair
  • There’s no apples-to-apples comparison
  • Customer Support becomes impossible
  • Companies will exploit this confusion

From my SaaS perspective, pricing AI Credits is incredibly tricky because LLM inference costs are so volatile. Sure, they’re getting cheaper, but it’s really hard to predict where they’ll go and how to build a pricing model that actually sticks and makes money.

Curated

Quoting Geoffrey Litt: Code Like a Surgeon

A surgeon isn’t a manager, they do the actual work! But their skills and time are highly leveraged with a support team that handles prep, secondary tasks, admin. The surgeon focuses on the important stuff they are uniquely good at. (…) My current goal with AI coding tools is to spend 100% of my time doing stuff that matters.

There’s a lot of talk about AI replacing humans everywhere, but it’s usually pretty vague. I don’t see myself getting replaced anytime soon. I try to explain to people that I use AI all the time, but mostly for secondary tasks that eat up my time and mental space. This is the best metaphor I’ve come across to describe exactly how I feel about it.

My time’s better spent on product strategy, vision, and problem-solving than logging Jira tickets, filing expense reports, and summarizing meeting notes.

Source: geoffreylitt.com