
Hi, I'm Luis 👋

Latest updates from across the site

📌Pinned
Blog Post

How do I keep up with AI?

This question comes up a lot in conversations. The short answer? I don’t. There’s just too much happening, too fast, for anyone to stay on top of everything.

While I enjoy sharing links and recommendations, I realized that a blog post might be more helpful. It gives folks a single place they can bookmark, share, and come back to on their own time, rather than having to dig through message threads where things inevitably get lost.

That said, here are some sources I use to try and stay informed:

  • Newsletters are great for curated content. They highlight the top stories and help filter through the noise.
  • Blogs are often the primary sources behind those newsletters. They go deeper and often cover a broader set of topics that might not make it into curated roundups.
  • Podcasts serve a similar role. In some cases they provide curation like newsletters; in others, deep dives like blogs. Best of all, you can tune in while on the go, making it a hands-free activity.

For your convenience, if any of the sources I list below (including podcasts) have RSS feeds, I've included them in my AI Starter Pack, which you can download and import into your favorite RSS reader (as long as it supports OPML file imports).
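
If you're curious what a reader actually does with that file, here's a minimal sketch, using only the Python standard library, that lists the feeds an OPML file would subscribe you to. The ai-starter-pack.opml filename is just a placeholder for whatever file you downloaded.

import xml.etree.ElementTree as ET

# "ai-starter-pack.opml" is a placeholder; use whatever file you downloaded.
tree = ET.parse("ai-starter-pack.opml")

# Only outline elements with an xmlUrl attribute are actual feed subscriptions.
for outline in tree.getroot().iter("outline"):
    xml_url = outline.get("xmlUrl")
    if xml_url:
        title = outline.get("title") or outline.get("text") or "(untitled)"
        print(f"{title}: {xml_url}")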

If you have some sources to share, send me an e-mail. I'd love to keep adding to this list! If they have a feed I can subscribe to, even better.

Newsletters

Blogs

I pride myself on being able to track down an RSS feed on just about any website, even if it's buried or not immediately visible. Unfortunately, I haven't found a feed URL for either OpenAI or Anthropic, which is annoying.

OpenAI and Anthropic, if you could do everyone a favor and drop a link, that would be great.

UPDATE: Thanks to @m2vh@mastodontech.de for sharing the OpenAI news feed.

I know I could use one of those web-page-to-RSS converters, but I'd much rather have an official link directly from the source.

Podcasts

Subscribing to feeds

Now that I’ve got you here...

Let’s talk about the best way to access all these feeds. My preferred and recommended approach is using a feed reader.

When subscribing to content on the open web, feed readers are your secret weapon.

RSS might seem like it's dead (it's not—yet). In fact, it's the reason you often hear the phrase, “Wherever you get your podcasts.” But RSS goes beyond podcasts. It's widely supported by blogs, newsletters, and even social platforms like the Fediverse (Mastodon, PeerTube, etc.) and Bluesky. It's also how I'm able to compile my starter packs.
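
To make that concrete, here's a rough sketch of the feed URL patterns these platforms commonly use. The handles, instance names, and channel ID below are placeholders, and the patterns are the ones I believe are in common use, so double-check them against your own accounts.

# Handles, instance names, and channel IDs below are placeholders.
def mastodon_feed(instance: str, user: str) -> str:
    # Mastodon profiles typically expose RSS by appending .rss to the profile URL.
    return f"https://{instance}/@{user}.rss"

def bluesky_feed(handle: str) -> str:
    # Bluesky profiles are generally available as RSS under /rss.
    return f"https://bsky.app/profile/{handle}/rss"

def youtube_feed(channel_id: str) -> str:
    # YouTube channels publish Atom feeds keyed by channel ID.
    return f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"

print(mastodon_feed("mastodon.social", "someuser"))
print(bluesky_feed("someuser.bsky.social"))
print(youtube_feed("UCxxxxxxxxxxxxxxxxxxxxxx"))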

I've written more about RSS in Rediscovering the RSS Protocol, but the short version is this: when you build on open standards like RSS and OPML, you’re building on freedom. Freedom to use the tools that work best for you. Freedom to own your experience. And freedom to support a healthier, more independent web.

📌Pinned
Blog Post

Starter Packs with OPML and RSS

One of the things I like about Bluesky is the Starter Pack feature.

In a nutshell, a Starter Pack is a collection of feeds.

Bluesky users can:

  • Create starter packs
  • Share starter packs
  • Subscribe to starter packs

Unfortunately, Starter Packs are limited to Bluesky.

Or are they?

As mentioned, a starter pack is a collection of feeds that others can create, share, and subscribe to.

Bluesky supports RSS, which means you can organize the feeds using an OPML file that you share with others and that others can subscribe to. The benefit is that you can keep up with activity on Bluesky from the feed reader of your choice without needing a Bluesky account.

More importantly, because RSS and OPML are open standards, you're not limited to building starter packs for Bluesky. You can create, share, and subscribe to starter packs for any platform that supports RSS. That includes blogs, podcasts, forums, YouTube, Mastodon, etc. Manton seems to have something similar in mind as a means of building on open standards that make it easy for Micro.blog to interop with various platforms.
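
To give a sense of how little it takes, here's a rough sketch that generates a starter pack as an OPML file from a list of feeds. The titles and URLs are placeholders; this is just one way to do it with the Python standard library.

import xml.etree.ElementTree as ET

# Placeholder feeds; swap in the blogs, podcasts, or profiles you want to share.
feeds = [
    ("Example Blog", "https://example.com/feed.xml"),
    ("Example Podcast", "https://example.org/podcast.rss"),
]

opml = ET.Element("opml", version="2.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "My Starter Pack"
body = ET.SubElement(opml, "body")

for title, url in feeds:
    ET.SubElement(body, "outline", title=title, text=title, type="rss", xmlUrl=url)

# Anyone can import this file into the feed reader of their choice.
ET.ElementTree(opml).write("starter-pack.opml", encoding="utf-8", xml_declaration=True)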

If you're interested in what that might look like in practice, check out my "starter packs" which you can subscribe to using your RSS reader of choice and the provided OPML files.

I'm still working on similar collections for Mastodon and Bluesky, but the same concept applies.

Although these are just simple examples, they show the importance of building on open standards and the open web. Doing so creates more freedom for creators and communities.

Here are other "starter packs" you might consider subscribing to.

If this is interesting to you, Feedland might be a project worth checking out.

📌Pinned
Note

OPML for website feeds

While thinking about implementing .well-known for RSS feeds on my site, I had another idea. Since that uses OPML anyway, I remembered recently doing something similar for my blogroll.

The concept is the same, except instead of making my blogroll discoverable, I'm doing it for my feeds. At the end of the day, a blogroll is a collection of feeds, so it should just work for my own feeds.

The implementation ended up being:

  1. Create an OPML file listing each of the feeds on my website.

     <opml version="2.0">
       <head>
         <title>Luis Quintanilla Feeds</title>
         <ownerId>https://www.luisquintanilla.me</ownerId>
       </head>
       <body>
         <outline title="Blog" text="Blog" type="rss" htmlUrl="/posts/1" xmlUrl="/blog.rss" />
         <outline title="Microblog" text="Microblog" type="rss" htmlUrl="/feed" xmlUrl="/microblog.rss" />
         <outline title="Responses" text="Responses" type="rss" htmlUrl="/feed/responses" xmlUrl="/responses.rss" />
         <outline title="Mastodon" text="Mastodon" type="rss" htmlUrl="/mastodon" xmlUrl="/mastodon.rss" />
         <outline title="Bluesky" text="Bluesky" type="rss" htmlUrl="/bluesky" xmlUrl="/bluesky.rss" />
         <outline title="YouTube" text="YouTube" type="rss" htmlUrl="/youtube" xmlUrl="/youtube.rss" />
       </body>
     </opml>
    
  2. Add a link tag to the head element of my website.

     <link rel="feeds" type="text/xml" title="Luis Quintanilla's Feeds" href="/feed/index.opml">
    
Reshare

There is no such thing as a tokenizer-free lunch

The only time most people hear about tokenization is when it’s being blamed for some undesirable behavior of a language model. These incidents have helped turn ignorance and indifference towards tokenizers into active dismissal and disdain. This attitude makes it harder to understand the tokenizers and develop better ones, because fewer people are actually studying tokenizers.

The goal of this blog post is to provide some context about how we got the tokenization approaches we have and argue that they’re not actually so bad.

On a personal level, I also want to foster more engagement with the tokenization literature. Regardless of whether you are pro- or anti-tokenization, the more people work on issues related to tokenizers, the faster we're going to make progress. For those who think they are taking a “tokenizer-free” approach, I argue that these approaches are just other kinds of tokenization. And incorporating the findings from static subword tokenization research can only help develop better alternatives.

Reply

ML Kit’s Prompt API: Unlock Custom On-Device Gemini Nano Experiences

AI is making it easier to create personalized app experiences that transform content into the right format for users. We previously enabled developers to integrate with Gemini Nano through ML Kit GenAI APIs tailored for specific use cases like summarization and image description.

Today marks a major milestone for Android's on-device generative AI. We're announcing the Alpha release of the ML Kit GenAI Prompt API. This API allows you to send natural language and multimodal requests to Gemini Nano, addressing the demand for more control and flexibility when building with generative models.

Bookmark

Why It’s Better for Us to Think of AI as a Tool than as a Worker

Back in 2018, Ben Thompson wrote another piece called “Tech’s Two Philosophies.” He contrasted keynotes from Google’s Sundar Pichai and Microsoft’s Satya Nadella, and came to this conclusion: “In Google’s view, computers help you get things done—and save you time—by doing things for you.” The second philosophy, expounded by Nadella, is very much a continuation of Steve Jobs’ “bicycle for the mind” insight. As Thompson put it, “the expectation is not that the computer does your work for you, but rather that the computer enables you to do your work better and more efficiently.” Another way of saying this is that you can treat AI as either a worker OR a tool, but your choice has consequences.

Yes, today’s AI is amazing. We don’t have to reach for hyperbole to appreciate that. And obviously, if AI systems do develop genuine volition and stakes in their work, the ethical calculus changes entirely.

For the moment, though, companies building and deploying AI tools should focus on three things: First, does AI empower its users to do things that were previously impossible? Second, does it empower a wider group of people to do things that formerly could be done only by highly skilled specialists? Third, do the benefits of the increased productivity it brings accrue to those using the tool or primarily to those who develop it and own it?

The answer to the first two questions is that absolutely, we are entering a period of dramatic democratization of computing power. And yes, if humans are given the freedom to apply that power to solve new problems and create new value, we could be looking ahead to a golden age of prosperity. It’s how we might choose to answer the third question that haunts me.

During the first industrial revolution, humans suffered through a long period of immiseration as the productivity gains from machines accrued primarily to the owners of the machines. It took several generations before they were more widely shared.

It doesn’t have to be that way. Replace human workers with AI workers, and you will repeat the mistakes of the 19th century. Build tools that empower and enrich humans, and we might just surmount the challenges of the 21st century.

Star

Towards Humanist Superintelligence

Instead of endlessly debating capabilities or timing, it’s time to think hard about the purpose of technology, what we want from it, what its limitations should be, and how we’re going to ensure this incredible tech always benefits humanity.

At Microsoft AI, we’re working towards Humanist Superintelligence (HSI): incredibly advanced AI capabilities that always work for, in service of, people and humanity more generally. We think of it as systems that are problem-oriented and tend towards the domain specific. Not an unbounded and unlimited entity with high degrees of autonomy – but AI that is carefully calibrated, contextualized, within limits. We want to both explore and prioritize how the most advanced forms of AI can keep humanity in control while at the same time accelerating our path towards tackling our most pressing global challenges.

Bookmark

Mathematical exploration and discovery at scale

AlphaEvolve is a generic evolutionary coding agent that combines the generative capabilities of LLMs with automated evaluation in an iterative evolutionary framework that proposes, tests, and refines algorithmic solutions to challenging scientific and practical problems. In this paper we showcase AlphaEvolve as a tool for autonomously discovering novel mathematical constructions and advancing our understanding of long-standing open problems. To demonstrate its breadth, we considered a list of 67 problems spanning mathematical analysis, combinatorics, geometry, and number theory. The system rediscovered the best known solutions in most of the cases and discovered improved solutions in several. In some instances, AlphaEvolve is also able to generalize results for a finite number of input values into a formula valid for all input values. Furthermore, we are able to combine this methodology with Deep Think and AlphaProof in a broader framework where the additional proof-assistants and reasoning systems provide automated proof generation and further mathematical insights. These results demonstrate that large language model-guided evolutionary search can autonomously discover mathematical constructions that complement human intuition, at times matching or even improving the best known results, highlighting the potential for significant new ways of interaction between mathematicians and AI systems. We present AlphaEvolve as a powerful new tool for mathematical discovery, capable of exploring vast search spaces to solve complex optimization problems at scale, often with significantly reduced requirements on preparation and computation time.

Note

.NET Conf 2025 Week

It's that time of year again. .NET Conf week is here.

I'm really looking forward to the following sessions:

  • Aspire: Cloud-Native Development Simplified (Maddy Montaquila, Damian Edwards)
  • Building Intelligent Apps in .NET (Jeremy Likness)
  • Understanding Agentic Development (Jeremy Likness)
  • Model Context Protocol (MCP) for .NET Developers (Mike Kistler, Allie Barry)
  • Simplifying .NET with 'dotnet run file.cs' (Damian Edwards)
  • AI Foundry for .NET Developers (Bruno Capuano)

Check out the full agenda

Note

Added markdown support to RSS feeds

I just learned about this proposal to add markdown to RSS feeds, which Manton implemented in Micro.blog.

This is such a neat idea. Since I author my blog posts in Markdown, exposing it via RSS was relatively trivial because a lot of the plumbing was already there.

This is the PR where I had GitHub Copilot Coding Agent implement the feature.

Here's a snippet from my main feed:

<item>
<title>Dynamic OPML for Pocket Casts</title>
<description><![CDATA[[reply] <blockquote class="blockquote"> <p>How could this work? A new feature for <a href="https://opml.org/spec2.opml#subscriptionLists">OPML subscription lists</a>. Today it's used as the import/export format for lists. But that's a one-time thing. Instead I want to give Pocket Casts the URL of an OPML file with my podcast subscriptions from the desktop.<a href="http://scripting.com/2025/11/06/141023.html#a143319">#</a></p> </blockquote> <p>This is exactly the thinking behind my <a href="https://www.lqdev.me/podroll">podroll</a> and other <a href="https://www.lqdev.me/collections">collections</a> on my site that I provide an OPML file for. I want a single source of truth for my subscriptions that I can share with others. Sadly it's not dynamic because I still have to manually update the OPML file and re-import into my podcasting client. Having read / write capabilities from the client so that whenever I subscribe to a podcast, the OPML file is updated would make the experience even better.</p> ]]></description>
<link>https://www.lqdev.me/responses/podroll-dynamic-opml</link>
<guid>https://www.lqdev.me/responses/podroll-dynamic-opml</guid>
<pubDate>2025-11-06 12:39 -05:00</pubDate>
<category>opml</category>
<category>rss</category>
<category>podcasts</category>
<category>podroll</category>
<category>automattic</category>
<category>pocketcasts</category>
<source:markdown>
<![CDATA[ --- title: "Dynamic OPML for Pocket Casts" targeturl: http://scripting.com/2025/11/06/141023.html?title=dynamicOpmlForPocketCasts response_type: reply dt_published: "2025-11-06 12:39 -05:00" dt_updated: "2025-11-06 12:39 -05:00" tags: ["opml","rss","podcasts","podroll","automattic","pocketcasts"] --- > How could this work? A new feature for [OPML subscription lists](https://opml.org/spec2.opml#subscriptionLists). Today it's used as the import/export format for lists. But that's a one-time thing. Instead I want to give Pocket Casts the URL of an OPML file with my podcast subscriptions from the desktop.[#](http://scripting.com/2025/11/06/141023.html#a143319) This is exactly the thinking behind my [podroll](/podroll) and other [collections](/collections) on my site that I provide an OPML file for. I want a single source of truth for my subscriptions that I can share with others. Sadly it's not dynamic because I still have to manually update the OPML file and re-import into my podcasting client. Having read / write capabilities from the client so that whenever I subscribe to a podcast, the OPML file is updated would make the experience even better. ]]>
</source:markdown>
</item>

I don't have a Mac or iOS device, so I can't test with NetNewsWire. If anyone would be kind enough to validate whether this works for them and send me a message, I'd greatly appreciate it.
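
In the meantime, here's a rough sketch of what a consumer could do with the element. It matches source:markdown by local name so the exact namespace URI doesn't need to be hard-coded, and it assumes the feed declares the source prefix. The feed URL is a placeholder; point it at any feed that includes the element.

import urllib.request
import xml.etree.ElementTree as ET

# Placeholder URL; use any RSS feed that carries <source:markdown> elements.
feed_url = "https://www.lqdev.me/blog.rss"

with urllib.request.urlopen(feed_url) as resp:
    root = ET.fromstring(resp.read())

for item in root.iter("item"):
    title = item.findtext("title", default="(untitled)")
    # Match on the local name "markdown" regardless of the namespace URI.
    markdown = next(
        (el.text for el in item
         if isinstance(el.tag, str) and el.tag.rsplit("}", 1)[-1] == "markdown"),
        None,
    )
    if markdown:
        print(f"--- {title} ---")
        print(markdown.strip())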

Reply

Dynamic OPML for Pocket Casts

How could this work? A new feature for OPML subscription lists. Today it's used as the import/export format for lists. But that's a one-time thing. Instead I want to give Pocket Casts the URL of an OPML file with my podcast subscriptions from the desktop.#

This is exactly the thinking behind my podroll and other collections on my site that I provide an OPML file for. I want a single source of truth for my subscriptions that I can share with others. Sadly it's not dynamic because I still have to manually update the OPML file and re-import into my podcasting client. Having read / write capabilities from the client so that whenever I subscribe to a podcast, the OPML file is updated would make the experience even better.

Bookmark

Jami survival kit: Internet down? Keep talking!

One of Jami’s core features is its ability to function in emergency settings where internet access is severely limited or entirely cut off.

Unlike traditional messaging apps that rely on central servers to relay messages, Jami connects users peer to peer. This means each device acts as a node in a decentralized web, sending encrypted data directly to others, without any intermediary.

In troubled times, staying connected is not a luxury. It is a necessity. It means ensuring safety, coordination, and the ability to call for help, share your location, and reach your loved ones.

When official channels fail, direct communication helps share verified information, document events, and fight misinformation.

It also defends our freedoms. Speaking freely, even under pressure, is essential to support, resist, and rebuild.

Staying connected maintains solidarity and mental well-being. No one should face crisis alone.

And above all, it keeps communities resilient, able to act and organize even in the darkest times.

That is why tools like Jami, which operate without a central server or reliance on tech giants, are essential parts of a digital survival kit.

Reshare

Building the Open Agent Ecosystem Together: Introducing OpenEnv

With tools like TRL, TorchForge and verl, the open-source community has shown how to scale AI across complex compute infrastructure. But compute is only one side of the coin. The other side is the developer community; the people and tools that make agentic systems possible. That’s why Meta and Hugging Face are partnering to launch the OpenEnv Hub: a shared and open community hub for agentic environments.

Agentic environments define everything an agent needs to perform a task: the tools, APIs, credentials, execution context, and nothing else. They bring clarity, safety, and sandboxed control to agent behavior.

These environments can be used for both training and deployment, and serve as the foundation for scalable agentic development.

Reshare

Custom agents for GitHub Copilot

Custom agents for GitHub Copilot make it easy for users and organizations to specialize their Copilot coding agent (CCA) through simple, file-based configurations.

By adding a configuration file under .github/agents in a repository or in the {org}/.github repository, you can define agent personas that capture your team’s workflows, conventions, and unique needs. These agents can be further tailored with prompts, tool selections, and Model Context Protocol (MCP) servers to optimize for specific use cases.

Bookmark

Supervised Reinforcement Learning: From Expert Trajectories to Step-wise Reasoning

Large Language Models (LLMs) often struggle with problems that require multi-step reasoning. For small-scale open-source models, Reinforcement Learning with Verifiable Rewards (RLVR) fails when correct solutions are rarely sampled even after many attempts, while Supervised Fine-Tuning (SFT) tends to overfit long demonstrations through rigid token-by-token imitation. To address this gap, we propose Supervised Reinforcement Learning (SRL), a framework that reformulates problem solving as generating a sequence of logical "actions". SRL trains the model to generate an internal reasoning monologue before committing to each action. It provides smoother rewards based on the similarity between the model's actions and expert actions extracted from the SFT dataset in a step-wise manner. This supervision offers richer learning signals even when all rollouts are incorrect, while encouraging flexible reasoning guided by expert demonstrations. As a result, SRL enables small models to learn challenging problems previously unlearnable by SFT or RLVR. Moreover, initializing training with SRL before refining with RLVR yields the strongest overall performance. Beyond reasoning benchmarks, SRL generalizes effectively to agentic software engineering tasks, establishing it as a robust and versatile training framework for reasoning-oriented LLMs.

Note
Bookmark

Opt Out October: Daily Tips to Protect Your Privacy and Security

Trying to take control of your online privacy can feel like a full-time job. But if you break it up into small tasks and take on one project at a time it makes the process of protecting your privacy much easier. This month we’re going to do just that. For the month of October, we’ll update this post with new tips every weekday that show various ways you can opt yourself out of the ways tech giants surveil you.

Reshare

Introducing Agent HQ: Any agent, any way you work

At GitHub Universe, we’re announcing Agent HQ, GitHub’s vision for the next evolution of our platform. Agents shouldn’t be bolted on. They should work the way you already work. That’s why we’re making agents native to the GitHub flow.

Agent HQ transforms GitHub into an open ecosystem that unites every agent on a single platform. Over the coming months, coding agents from Anthropic, OpenAI, Google, Cognition, xAI, and more will become available directly within GitHub as part of your paid GitHub Copilot subscription.

Bookmark

Graffiti: Enabling an Ecosystem of Personalized and Interoperable Social Applications

Most social applications, from Twitter to Wikipedia, have rigid one-size-fits-all designs, but building new social applications is both technically challenging and results in applications that are siloed away from existing communities. We present Graffiti, a system that can be used to build a wide variety of personalized social applications with relative ease that also interoperate with each other. People can freely move between a plurality of designs—each with its own aesthetic, feature set, and moderation—all without losing their friends or data.

Our concept of total reification makes it possible for seemingly contradictory designs, including conflicting moderation rules, to interoperate. Conversely, our concept of channels prevents interoperation from occurring by accident, avoiding context collapse.

Graffiti applications interact through a minimal client-side API, which we show admits at least two decentralized implementations. Above the API, we built a Vue plugin, which we use to develop applications similar to Twitter, Messenger, and Wikipedia using only client-side code. Our case studies explore how these and other novel applications interoperate, as well as the broader ecosystem that Graffiti enables.

Reshare

Introducing vibe coding in Google AI Studio

We've been building a better foundation for AI Studio, and this week we introduced a totally new AI-powered vibe coding experience in Google AI Studio. This redesigned experience is meant to take you from prompt to working AI app in minutes, without having to juggle API keys or figure out how to tie models together.

Media

Claude Code cut me off

Just when I was wrapping up the design for migrating media publishing from Discord, Claude Code decided it needed a break. Fortunately, I was far enough along that I could have Copilot Coding Agent take over implementation and successfully complete the feature.

Screenshot of Claude Code on the web
Note
Bookmark

Pico-Banana-400K: A Large-Scale Dataset for Text-Guided Image Editing

Recent advances in multimodal models have demonstrated remarkable text-guided image editing capabilities, with systems like GPT-4o and Nano-Banana setting new benchmarks. However, the research community's progress remains constrained by the absence of large-scale, high-quality, and openly accessible datasets built from real images. We introduce Pico-Banana-400K, a comprehensive 400K-image dataset for instruction-based image editing. Our dataset is constructed by leveraging Nano-Banana to generate diverse edit pairs from real photographs in the OpenImages collection. What distinguishes Pico-Banana-400K from previous synthetic datasets is our systematic approach to quality and diversity. We employ a fine-grained image editing taxonomy to ensure comprehensive coverage of edit types while maintaining precise content preservation and instruction faithfulness through MLLM-based quality scoring and careful curation. Beyond single turn editing, Pico-Banana-400K enables research into complex editing scenarios. The dataset includes three specialized subsets: (1) a 72K-example multi-turn collection for studying sequential editing, reasoning, and planning across consecutive modifications; (2) a 56K-example preference subset for alignment research and reward model training; and (3) paired long-short editing instructions for developing instruction rewriting and summarization capabilities. By providing this large-scale, high-quality, and task-rich resource, Pico-Banana-400K establishes a robust foundation for training and benchmarking the next generation of text-guided image editing models.

Repo

Star

Floor 796

Floor796 is an animated scene showing the lives of characters from various works on the 796th floor of a huge space station. The animation is regularly expanded with new blocks (rooms) and characters from movies, TV series, games, anime, memes, etc.

The project has been created by a single author as a hobby since 2018.

This is so much fun. I've easily spent an hour exploring the space finding new rooms and characters.

Reshare

Introducing PyTorch Monarch

We’re excited to introduce Monarch, a distributed programming framework that brings the simplicity of single-machine PyTorch to entire clusters.

Monarch lets you program distributed systems the way you’d program a single machine, hiding the complexity of distributed computing:

  1. Program clusters like arrays. Monarch organizes hosts, processes, and actors into scalable meshes that you can manipulate directly. You can operate on entire meshes (or slices of them) with simple APIs—Monarch handles the distribution and vectorization automatically, so you can think in terms of what you want to compute, not where the code runs.
  2. Progressive fault handling. With Monarch, you write your code as if nothing fails. When something does fail, Monarch fails fast by default—stopping the whole program, just like an uncaught exception in a simple local script. Later, you can progressively add fine-grained fault handling exactly where you need it, catching and recovering from failures just like you’d catch exceptions.
  3. Separate control from data. Monarch splits control plane (messaging) from data plane (RDMA transfers), enabling direct GPU-to-GPU memory transfers across your cluster. Monarch lets you send commands through one path, while moving data through another, optimized for what each does best.
  4. Distributed tensors that feel local. Monarch integrates seamlessly with PyTorch to provide tensors that are sharded across clusters of GPUs. Monarch tensor operations look local but are executed across distributed large clusters, with Monarch handling the complexity of coordinating across thousands of GPUs.

Bookmark

MelonLand

"You've arrived at MelonLand! This is a web project and online arts community that celebrates homepages, virtual worlds, the world-wide-web and the digital lives that all netizins share, here at the dawn of the digital age.

This project has three goals:

  • To make the web; genuine, chaotic, timeless, individual and joyful
  • To provide knowledge and support to humans creating digital worlds
  • To promote websites and digital worlds as mediums of visual art.

MelonLand is part of a wider movement sometimes called the web revival; to join, all you need to do is choose to engage with small handmade web projects next time you're online!"

Reply

SentinelStep: Building agents that can wait, monitor, and act

...we are introducing SentinelStep(opens in new tab), a mechanism that enables agents to complete long-running monitoring tasks. The approach is simple. SentinelStep wraps the agent in a workflow with dynamic polling and careful context management. This enables the agent to monitor conditions for hours or days without getting sidetracked. We’ve implemented SentinelStep in Magentic-UI, our research prototype agentic system, to enable users to build agents for long-running tasks, whether they involve web browsing, coding, or external tools.