Luis Quintanilla Avatar Image

Hi, I'm Luis 👋

Latest updates from across the site

📌Pinned
Blog Post

How do I keep up with AI?

This question comes up a lot in conversations. The short answer? I don’t. There’s just too much happening, too fast, for anyone to stay on top of everything.

While I enjoy sharing links and recommendations, I realized that a blog post might be more helpful. It gives folks a single place they can bookmark, share, and come back to on their own time, rather than having to dig through message threads where things inevitably get lost.

That said, here are some sources I use to try and stay informed:

  • Newsletters are great for curated content. They highlight the top stories and help filter through the noise.
  • Blogs are often the primary sources behind those newsletters. They go deeper and often cover a broader set of topics that might not make it into curated roundups.
  • Podcasts serve a similar role. In some cases they provide curation like newsletters; in others, deep dives like blogs. Best of all, you can tune in while on the go, making them a hands-free option.

For your convenience, if any of the sources I list below (including podcasts) have RSS feeds, I’ve included them in my AI Starter Pack, which you can download and import into your favorite RSS reader (as long as it supports OPML imports).

If you have some sources to share, send me an e-mail. I'd love to keep adding to this list! If they have a feed I can subscribe to, even better.

Newsletters

Blogs

I pride myself on being able to track down an RSS feed on just about any website, even if it’s buried or not immediately visible. Unfortunately, I haven't found a feed URL for either OpenAI or Anthropic, which is annoying.

OpenAI and Anthropic, if you could do everyone a favor and drop a link, that would be great.

UPDATE: Thanks to @m2vh@mastodontech.de for sharing the OpenAI news feed.

I know I could use one of those web-page-to-RSS converters, but I'd much rather have an official link directly from the source.

Podcasts

Subscribing to feeds

Now that I’ve got you here...

Let’s talk about the best way to access all these feeds. My preferred and recommended approach is using a feed reader.

When subscribing to content on the open web, feed readers are your secret weapon.

RSS might seem like it’s dead (it’s not—yet). In fact, it’s the reason you often hear the phrase, “Wherever you get your podcasts.” But RSS goes beyond podcasts. It’s widely supported by blogs, newsletters, and even social platforms like the Fediverse (Mastodon, PeerTube, etc.) and Bluesky. It’s also how I’m able to compile my starter packs.

I've written more about RSS in Rediscovering the RSS Protocol, but the short version is this: when you build on open standards like RSS and OPML, you’re building on freedom. Freedom to use the tools that work best for you. Freedom to own your experience. And freedom to support a healthier, more independent web.

📌Pinned
Blog Post

Starter Packs with OPML and RSS

One of the things I like about Bluesky is the Starter Pack feature.

In short, a Starter Pack is a collection of feeds.

Bluesky users can:

  • Create starter packs
  • Share starter packs
  • Subscribe to starter packs

Unfortunately, Starter Packs are limited to Bluesky.

Or are they?

As mentioned, starter packs are a collection of feeds that others can create, share, and subscribe to.

Bluesky supports RSS, which means you could organize the feeds using an OPML file that you can share and that others can subscribe to. The benefit is that you can keep up with activity on Bluesky from the feed reader of your choice without needing a Bluesky account.
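
To make that concrete, here's a rough Python sketch of how a Bluesky starter pack could be generated as an OPML file. The handles are placeholders, and the profile RSS URL pattern is an assumption; check a profile page for its actual feed link.

# Rough sketch: build an OPML "starter pack" from Bluesky profile RSS feeds.
# Assumes profiles expose a feed at https://bsky.app/profile/<handle>/rss;
# the handles below are placeholders.
import xml.etree.ElementTree as ET

handles = ["alice.example.com", "bob.example.com"]

opml = ET.Element("opml", version="2.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "Bluesky Starter Pack"
body = ET.SubElement(opml, "body")

for handle in handles:
    ET.SubElement(
        body, "outline",
        type="rss", text=handle, title=handle,
        htmlUrl=f"https://bsky.app/profile/{handle}",
        xmlUrl=f"https://bsky.app/profile/{handle}/rss",
    )

ET.ElementTree(opml).write("bluesky-starter-pack.opml", encoding="utf-8", xml_declaration=True)

Import the resulting file into any OPML-aware reader and you're subscribed to the whole pack at once.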

More importantly, because RSS and OPML are open standards, you're not limited to building starter packs for Bluesky. You can create, share, and subscribe to starter packs for any platform that supports RSS. That includes blogs, podcasts, forums, YouTube, Mastodon, etc. Manton seems to have something similar in mind as a way of building on open standards that make it easy for Micro.blog to interoperate with various platforms.

If you're interested in what that might look like in practice, check out my "starter packs" which you can subscribe to using your RSS reader of choice and the provided OPML files.

I'm still working on similar collections for Mastodon and Bluesky, but the same concept applies.

Although these are just simple examples, they show the importance of building on open standards and the open web. Doing so introduces more freedom for creators and communities.

Here are other "starter packs" you might consider subscribing to.

If this is interesting to you, Feedland might be a project worth checking out.

📌Pinned
Note

OPML for website feeds

While thinking about implementing .well-known for RSS feeds on my site, I had another idea. Since that uses OPML anyway, I remembered recently doing something similar for my blogroll.

The concept is the same, except instead of making my blogroll discoverable, I'm doing it for my feeds. At the end of the day, a blogroll is a collection of feeds, so it should just work for my own feeds.

The implementation ended up being:

  1. Create an OPML file listing each of the feeds on my website.

     <opml version="2.0">
       <head>
     	<title>Luis Quintanilla Feeds</title>
     	<ownerId>https://www.luisquintanilla.me</ownerId>
       </head>
       <body>
     	<outline title="Blog" text="Blog" type="rss" htmlUrl="/posts/1" xmlUrl="/blog.rss" />
     	<outline title="Microblog" text="Microblog" type="rss" htmlUrl="/feed" xmlUrl="/microblog.rss" />
     	<outline title="Responses" text="Responses" type="rss" htmlUrl="/feed/responses" xmlUrl="/responses.rss" />
     	<outline title="Mastodon" text="Mastodon" type="rss" htmlUrl="/mastodon" xmlUrl="/mastodon.rss" />
     	<outline title="Bluesky" text="Bluesky" type="rss" htmlUrl="/bluesky" xmlUrl="/bluesky.rss" />
     	<outline title="YouTube" text="YouTube" type="rss" htmlUrl="/youtube" xmlUrl="/bluesky.rss" />
       </body>
     </opml>
    
  2. Add a link tag to the head element of my website.

     <link rel="feeds" type="text/xml" title="Luis Quintanilla's Feeds" href="/feed/index.opml">
    
Note

Added markdown support to RSS feeds

I just learned about this proposal to add markdown to RSS feeds, which Manton implemented in Micro.blog.

This is such a neat idea. Since I author my blog posts in Markdown, exposing it via RSS was relatively trivial because a lot of the plumbing was already there.

This is the PR where I had GitHub Copilot Coding Agent implement the feature.

Here's a snippet from my main feed:

<item>
<title>Dynamic OPML for Pocket Casts</title>
<description><![CDATA[[reply] <blockquote class="blockquote"> <p>How could this work? A new feature for <a href="https://opml.org/spec2.opml#subscriptionLists">OPML subscription lists</a>. Today it's used as the import/export format for lists. But that's a one-time thing. Instead I want to give Pocket Casts the URL of an OPML file with my podcast subscriptions from the desktop.<a href="http://scripting.com/2025/11/06/141023.html#a143319">#</a></p> </blockquote> <p>This is exactly the thinking behind my <a href="https://www.lqdev.me/podroll">podroll</a> and other <a href="https://www.lqdev.me/collections">collections</a> on my site that I provide an OPML file for. I want a single source of truth for my subscriptions that I can share with others. Sadly it's not dynamic because I still have to manually update the OPML file and re-import into my podcasting client. Having read / write capabilities from the client so that whenever I subscribe to a podcast, the OPML file is updated would make the experience even better.</p> ]]></description>
<link>https://www.lqdev.me/responses/podroll-dynamic-opml</link>
<guid>https://www.lqdev.me/responses/podroll-dynamic-opml</guid>
<pubDate>2025-11-06 12:39 -05:00</pubDate>
<category>opml</category>
<category>rss</category>
<category>podcasts</category>
<category>podroll</category>
<category>automattic</category>
<category>pocketcasts</category>
<source:markdown>
<![CDATA[ --- title: "Dynamic OPML for Pocket Casts" targeturl: http://scripting.com/2025/11/06/141023.html?title=dynamicOpmlForPocketCasts response_type: reply dt_published: "2025-11-06 12:39 -05:00" dt_updated: "2025-11-06 12:39 -05:00" tags: ["opml","rss","podcasts","podroll","automattic","pocketcasts"] --- > How could this work? A new feature for [OPML subscription lists](https://opml.org/spec2.opml#subscriptionLists). Today it's used as the import/export format for lists. But that's a one-time thing. Instead I want to give Pocket Casts the URL of an OPML file with my podcast subscriptions from the desktop.[#](http://scripting.com/2025/11/06/141023.html#a143319) This is exactly the thinking behind my [podroll](/podroll) and other [collections](/collections) on my site that I provide an OPML file for. I want a single source of truth for my subscriptions that I can share with others. Sadly it's not dynamic because I still have to manually update the OPML file and re-import into my podcasting client. Having read / write capabilities from the client so that whenever I subscribe to a podcast, the OPML file is updated would make the experience even better. ]]>
</source:markdown>
</item>
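
For anyone curious what the generation side might look like, here's a minimal sketch that bolts a source:markdown element onto existing RSS items using Python's standard library. This is not the actual implementation from the PR; the namespace URI follows the RSS source namespace, and the file paths and placeholder Markdown string are illustrative only.

import xml.etree.ElementTree as ET

# Assumed namespace URI for the source:* elements; confirm against the proposal.
SOURCE_NS = "http://source.scripting.com/"
ET.register_namespace("source", SOURCE_NS)

def add_markdown(item: ET.Element, markdown: str) -> None:
    """Attach the original Markdown source to an RSS <item>."""
    el = ET.SubElement(item, f"{{{SOURCE_NS}}}markdown")
    el.text = markdown  # ElementTree escapes the text rather than emitting CDATA

tree = ET.parse("feed.xml")  # placeholder path to an existing RSS feed
for item in tree.getroot().iter("item"):
    add_markdown(item, "*Markdown source for this post goes here*")

tree.write("feed-with-markdown.xml", encoding="utf-8", xml_declaration=True)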

I don't have a Mac or an iOS device, so I can't test with NetNewsWire. If anyone would be kind enough to validate whether this works for them and send me a message, I'd greatly appreciate it.

Reply

Dynamic OPML for Pocket Casts

How could this work? A new feature for OPML subscription lists. Today it's used as the import/export format for lists. But that's a one-time thing. Instead I want to give Pocket Casts the URL of an OPML file with my podcast subscriptions from the desktop.#

This is exactly the thinking behind my podroll and other collections on my site that I provide an OPML file for. I want a single source of truth for my subscriptions that I can share with others. Sadly it's not dynamic because I still have to manually update the OPML file and re-import into my podcasting client. Having read / write capabilities from the client so that whenever I subscribe to a podcast, the OPML file is updated would make the experience even better.
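
As a rough sketch of what that write half could look like (the file name and feed details below are placeholders), the client would only need to append an outline to the shared OPML file whenever you subscribe:

import xml.etree.ElementTree as ET

def subscribe(opml_path: str, title: str, feed_url: str, site_url: str) -> None:
    # Append a new podcast to the OPML subscription list so it stays the single source of truth.
    tree = ET.parse(opml_path)
    body = tree.getroot().find("body")
    ET.SubElement(
        body, "outline",
        type="rss", text=title, title=title,
        xmlUrl=feed_url, htmlUrl=site_url,
    )
    tree.write(opml_path, encoding="utf-8", xml_declaration=True)

subscribe("podroll.opml", "Example Podcast", "https://example.com/feed.xml", "https://example.com")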

Bookmark

Jami survival kit: Internet down? Keep talking!

One of Jami’s core features is its ability to function in emergency settings where internet access is severely limited or entirely cut off.

‌Unlike traditional messaging apps that rely on central servers to relay messages, Jami connects users peer to peer. This means each device acts as a node in a decentralized web, sending encrypted data directly to others, without any intermediary.

In troubled times, staying connected is not a luxury. It is a necessity. It means ensuring safety, coordination, and the ability to call for help, share your location, and reach your loved ones.

When official channels fail, direct communication helps share verified information, document events, and fight misinformation.

It also defends our freedoms. Speaking freely, even under pressure, is essential to support, resist, and rebuild.

Staying connected maintains solidarity and mental well-being. No one should face crisis alone.

And above all, it keeps communities resilient, able to act and organize even in the darkest times.

That is why tools like Jami, which operate without a central server or reliance on tech giants, are essential parts of a digital survival kit.

Reshare

Building the Open Agent Ecosystem Together: Introducing OpenEnv

With tools like TRL, TorchForge and verl, the open-source community has shown how to scale AI across complex compute infrastructure. But compute is only one side of the coin. The other side is the developer community; the people and tools that make agentic systems possible. That’s why Meta and Hugging Face are partnering to launch the OpenEnv Hub: a shared and open community hub for agentic environments.

Agentic environments define everything an agent needs to perform a task: the tools, APIs, credentials, execution context, and nothing else. They bring clarity, safety, and sandboxed control to agent behavior.

These environments can be used for both training and deployment, and serve as the foundation for scalable agentic development.

Reshare

Custom agents for GitHub Copilot

Custom agents for GitHub Copilot make it easy for users and organizations to specialize their Copilot coding agent (CCA) through simple, file-based configurations.

By adding a configuration file under .github/agents in a repository or in the {org}/.github repository, you can define agent personas that capture your team’s workflows, conventions, and unique needs. These agents can be further tailored with prompts, tool selections, and Model Context Protocol (MCP) servers to optimize for specific use cases.

Bookmark

Supervised Reinforcement Learning: From Expert Trajectories to Step-wise Reasoning

Large Language Models (LLMs) often struggle with problems that require multi-step reasoning. For small-scale open-source models, Reinforcement Learning with Verifiable Rewards (RLVR) fails when correct solutions are rarely sampled even after many attempts, while Supervised Fine-Tuning (SFT) tends to overfit long demonstrations through rigid token-by-token imitation. To address this gap, we propose Supervised Reinforcement Learning (SRL), a framework that reformulates problem solving as generating a sequence of logical "actions". SRL trains the model to generate an internal reasoning monologue before committing to each action. It provides smoother rewards based on the similarity between the model's actions and expert actions extracted from the SFT dataset in a step-wise manner. This supervision offers richer learning signals even when all rollouts are incorrect, while encouraging flexible reasoning guided by expert demonstrations. As a result, SRL enables small models to learn challenging problems previously unlearnable by SFT or RLVR. Moreover, initializing training with SRL before refining with RLVR yields the strongest overall performance. Beyond reasoning benchmarks, SRL generalizes effectively to agentic software engineering tasks, establishing it as a robust and versatile training framework for reasoning-oriented LLMs.

Note
Bookmark

Opt Out October: Daily Tips to Protect Your Privacy and Security

Trying to take control of your online privacy can feel like a full-time job. But if you break it up into small tasks and take on one project at a time it makes the process of protecting your privacy much easier. This month we’re going to do just that. For the month of October, we’ll update this post with new tips every weekday that show various ways you can opt yourself out of the ways tech giants surveil you.

Reshare

Introducing Agent HQ: Any agent, any way you work

At GitHub Universe, we’re announcing Agent HQ, GitHub’s vision for the next evolution of our platform. Agents shouldn’t be bolted on. They should work the way you already work. That’s why we’re making agents native to the GitHub flow.

Agent HQ transforms GitHub into an open ecosystem that unites every agent on a single platform. Over the coming months, coding agents from Anthropic, OpenAI, Google, Cognition, xAI, and more will become available directly within GitHub as part of your paid GitHub Copilot subscription.

Bookmark

Graffiti: Enabling an Ecosystem of Personalized and Interoperable Social Applications

Most social applications, from Twitter to Wikipedia, have rigid one-size-fits-all designs, but building new social applications is both technically challenging and results in applications that are siloed away from existing communities. We present Graffiti, a system that can be used to build a wide variety of personalized social applications with relative ease that also interoperate with each other. People can freely move between a plurality of designs—each with its own aesthetic, feature set, and moderation—all without losing their friends or data.

Our concept of total reification makes it possible for seemingly contradictory designs, including conflicting moderation rules, to interoperate. Conversely, our concept of channels prevents interoperation from occurring by accident, avoiding context collapse.

Graffiti applications interact through a minimal client-side API, which we show admits at least two decentralized implementations. Above the API, we built a Vue plugin, which we use to develop applications similar to Twitter, Messenger, and Wikipedia using only client-side code. Our case studies explore how these and other novel applications interoperate, as well as the broader ecosystem that Graffiti enables.

Reshare

Introducing vibe coding in Google AI Studio

We’ve been building a better foundation for AI Studio, and this week we introduced a totally new AI powered vibe coding experience in Google AI Studio. This redesigned experience is meant to take you from prompt to working AI app in minutes without you having to juggle with API keys, or figuring out how to tie models together.

Media

Claude Code cut me off

Just when I was wrapping up the design for the media publishing migration from Discord, Claude Code decided it needed a break. Fortunately, I was far enough along that Copilot Coding Agent could take over the implementation and successfully complete the feature.

Screenshot of Claude Code on the web
Note
Bookmark

Pico-Banana-400K: A Large-Scale Dataset for Text-Guided Image Editing

Recent advances in multimodal models have demonstrated remarkable text-guided image editing capabilities, with systems like GPT-4o and Nano-Banana setting new benchmarks. However, the research community's progress remains constrained by the absence of large-scale, high-quality, and openly accessible datasets built from real images. We introduce Pico-Banana-400K, a comprehensive 400K-image dataset for instruction-based image editing. Our dataset is constructed by leveraging Nano-Banana to generate diverse edit pairs from real photographs in the OpenImages collection. What distinguishes Pico-Banana-400K from previous synthetic datasets is our systematic approach to quality and diversity. We employ a fine-grained image editing taxonomy to ensure comprehensive coverage of edit types while maintaining precise content preservation and instruction faithfulness through MLLM-based quality scoring and careful curation. Beyond single turn editing, Pico-Banana-400K enables research into complex editing scenarios. The dataset includes three specialized subsets: (1) a 72K-example multi-turn collection for studying sequential editing, reasoning, and planning across consecutive modifications; (2) a 56K-example preference subset for alignment research and reward model training; and (3) paired long-short editing instructions for developing instruction rewriting and summarization capabilities. By providing this large-scale, high-quality, and task-rich resource, Pico-Banana-400K establishes a robust foundation for training and benchmarking the next generation of text-guided image editing models.

Repo

Floor 796

Floor796 is an animated scene showing the lives of characters from various works on the 796th floor of a huge space station. The animation is regularly expanded with new blocks (rooms) and characters from movies, TV series, games, anime, memes, etc.

The project has been created by one author as a hobby since 2018.

This is so much fun. I've easily spent an hour exploring the space finding new rooms and characters.

Reshare

Introducing PyTorch Monarch

We’re excited to introduce Monarch, a distributed programming framework that brings the simplicity of single-machine PyTorch to entire clusters.

Monarch lets you program distributed systems the way you’d program a single machine, hiding the complexity of distributed computing:

  1. Program clusters like arrays. Monarch organizes hosts, processes, and actors into scalable meshes that you can manipulate directly. You can operate on entire meshes (or slices of them) with simple APIs—Monarch handles the distribution and vectorization automatically, so you can think in terms of what you want to compute, not where the code runs.
  2. Progressive fault handling. With Monarch, you write your code as if nothing fails. When something does fail, Monarch fails fast by default—stopping the whole program, just like an uncaught exception in a simple local script. Later, you can progressively add fine-grained fault handling exactly where you need it, catching and recovering from failures just like you’d catch exceptions.
  3. Separate control from data. Monarch splits control plane (messaging) from data plane (RDMA transfers), enabling direct GPU-to-GPU memory transfers across your cluster. Monarch lets you send commands through one path, while moving data through another, optimized for what each does best.
  4. Distributed tensors that feel local. Monarch integrates seamlessly with PyTorch to provide tensors that are sharded across clusters of GPUs. Monarch tensor operations look local but are executed across distributed large clusters, with Monarch handling the complexity of coordinating across thousands of GPUs.
Bookmark

MelonLand

"You've arrived at MelonLand! This is a web project and online arts community that celebrates homepages, virtual worlds, the world-wide-web and the digital lives that all netizins share, here at the dawn of the digital age.

This project has three goals:

  • To make the web; genuine, chaotic, timeless, individual and joyful
  • To provide knowledge and support to humans creating digital worlds
  • To promote websites and digital worlds as mediums of visual art.

MelonLand is part of a wider movement sometimes called the web revival; to join, all you need to do is choose to engage with small handmade web projects next time you're online!"
Reply

SentinelStep: Building agents that can wait, monitor, and act

...we are introducing SentinelStep, a mechanism that enables agents to complete long-running monitoring tasks. The approach is simple. SentinelStep wraps the agent in a workflow with dynamic polling and careful context management. This enables the agent to monitor conditions for hours or days without getting sidetracked. We’ve implemented SentinelStep in Magentic-UI, our research prototype agentic system, to enable users to build agents for long-running tasks, whether they involve web browsing, coding, or external tools.

Reshare

Introducing ExecuTorch 1.0

ExecuTorch enables seamless, production-ready deployment of PyTorch models directly to edge devices (mobile, embedded, desktop) without the need for conversion or rewriting, supporting a wide range of hardware backends and model types.

ExecuTorch 1.0 release delivers broader hardware support across CPU, GPU, and NPU, greater stability for production use, and robust model compatibility.

Reply

Rivian’s first e-bike

Rivian’s micromobility spinoff Also has just taken the wraps off its TM-B e-bike, TM-Q pedal-assisted electric quad bike, and Alpha Wave helmet that represents “a breakthrough in rider safety and connectivity”.

Really cool, but at $4k+ it seems pricey. Given it's Rivian's entry into the market, that makes sense. As more of these sell, hopefully prices start to come down.

Reply

YouTube will help you quit watching Shorts

YouTube has added a new Shorts feature that makes it easier to manage how much time you’re spending watching videos. Mobile users can now set a customizable daily limit that restricts how long they can scroll Shorts feeds, aiming to help viewers better manage their time instead of endlessly scrolling.

I gotta admit this is a useful feature. At the same time, WHAT?!

Selling the problem and the solution.

Reshare

Happy Internet Archive Day!

"The San Francisco Board of Supervisors has officially declared October 22, 2025, as Internet Archive Day in the City and County of San Francisco. Sponsored by Supervisor Connie Chan (District 1), the resolution passed unanimously in recognition of the Internet Archive’s extraordinary milestone—preserving 1 trillion web pages. The resolution celebrates the Archive’s enduring mission to provide “universal access to all knowledge” and our deep roots in the city where it was founded nearly three decades ago."

Reshare

Ray Comes to the PyTorch Foundation

The PyTorch Foundation, the Linux Foundation-based open source AI organization, today announced that it will become the host of Ray, the popular open source distributed computing framework for scaling AI and Python applications. The Ray project will join existing projects like PyTorch itself, the vLLM inference engine and deep learning optimization library DeepSpeed.

Bookmark

Resonant Computing Manifesto

We suggest these five principles as a starting place:

  1. Private: In the era of AI, whoever controls the context holds the power. Users must stay in full control of their data and context.
  2. Dedicated: You must be able to trust that your software holds your best interest as its highest priority, without conflict of interest.
  3. Plural: No single entity should control the digital spaces we inhabit. Healthy ecosystems require distributed power, interoperability, and meaningful choice for participants.
  4. Adaptable: Software should be open-ended, able to meet the specific, context-dependent needs of each user.
  5. Prosocial: Technology should enable connection and coordination, helping us become better neighbors, collaborators, and stewards of shared spaces, both online and off.
Blog Post

Taking Claude Code on the web for a spin

Yesterday, Anthropic announced Claude Code on the web, a way of using Claude Code in your browser.

I decided to try it out and within an hour, I'd built a new project from scratch.

What I built

As someone who uses RSS extensively, hunting for feeds buried in page source code is one of my least favorite tasks. Even when feeds are prominently displayed, I still have to open my RSS reader, copy the URL, and paste it into my app.

Back in the day, RSS support was built into the browser. Sadly, those days are gone, so now you have to build those features yourself. However, doing so is straightforward thanks to browser extensions.

RSS Browser Extension

Introducing RSS Browser Extension, a lightweight browser extension for Chromium-based browsers (Chrome, Brave, Edge, etc.) that automatically detects RSS and Atom feeds on web pages and allows quick subscription via multiple RSS readers.
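
Under the hood, detection boils down to standard RSS/Atom autodiscovery: scan the page's head for link rel="alternate" tags with a feed MIME type. The extension does the equivalent with browser APIs in JavaScript; the standalone Python sketch below (run against an example URL) just illustrates the idea.

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedDiscovery(HTMLParser):
    """Collect feed URLs advertised via <link rel="alternate"> autodiscovery tags."""

    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "alternate" and attrs.get("type") in FEED_TYPES:
            self.feeds.append(urljoin(self.base_url, attrs.get("href", "")))

url = "https://www.lqdev.me"  # example page
parser = FeedDiscovery(url)
parser.feed(urlopen(url).read().decode("utf-8"))
print(parser.feeds)  # any discovered feed URLs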

Discover

When you visit a site and a feed is discovered, the extension icon on the browser toolbar lights up and a badge shows the number of feeds detected.

Browser Toolbar with RSS Browser Extension Highlighted

Subscribe

When you click on the extension icon, it opens a page displaying all discovered feeds.

RSS Browser Extension Displaying Discovered Feeds

From here, with a click, you can subscribe using a variety of RSS readers such as Newsblur, Feedly, Inoreader, and many others.

What I liked

  • Works on desktop and mobile - Although I could only test the RSS Browser Extension on desktop, I was doing the prompting and coding on mobile. I like being able to delegate work to AI coding assistants while on the go.
  • Open in CLI - I haven't tried this yet, but being able to continue working in the terminal when I want to sounds convenient.
  • Easy to set up with GitHub Connector - Anthropic's GitHub connector makes it easy to connect to your GitHub profile and repos.

Improvements I'd like to see

  • Multi-modal capabilities - During my AI-assisted coding sessions, one of the things I do when I run into issues is show rather than tell. Especially when it comes to UI, it's easier to just upload or reference images to show the AI assistant what needs to be fixed. Unfortunately, I couldn't find a way to do this.
  • Session management - Part of this is on Anthropic and part on me. I wasn't sure how to best use sessions, but it seems you should start a new one per feature since each creates its own branch. What bothered me was that sessions aren't organized by repo, making it harder to manage multiple projects.
  • Only works with existing projects - I had to select a repo before starting to prompt. What I'd prefer is to sketch out ideas first for a greenfield project, then create the repo afterward if I want to keep the code.
  • No MCP support - I'm sure with time this will come, but I couldn't find guidance on how to set up MCP servers.
  • No Claude chat integration - I often brainstorm and sketch out ideas in chat, then use Copilot or Claude Code to implement the spec. I'd like to transition directly from chat to Claude Code on the web, or reference previous conversations as starting points.

Conclusion

Overall I like being able to kick off jobs and have them running in the background as I go about my day. Claude Code on the web is a step in the right direction. Given it's still early days, I suspect many if not all of the items on my wishlist will eventually be addressed.

For now, I'm still partial to GitHub Copilot Coding Agent, mainly because of its seamless integration with GitHub. That said, I'm open to exploring Claude Code on the web further to see where it fits into my workflows as it matures.

In the meantime, feel free to use the RSS Browser Extension, and if you find it useful or run into issues, send me a message.