
Podcast Rewind: Reconsidering the iPad and Reviewing Retro Videogame Stores

2025-06-25 02:20:59

Enjoy the latest episodes from MacStories’ family of podcasts:

AppStories

This week, Federico and John reflect on where the iPad fits within their workflows after the announcement of iPadOS 26.

Then, on AppStories+, they explore the potential for an Apple automation renaissance built on the features announced at WWDC.


NPC: Next Portable Console

This week, we have RG Slide pricing and penguins, along with the latest AYANEO news and a cool aluminum TrimUI Brick.

On NPC XL, Brendon’s on a mission to find the best retro videogame store in Japan and takes the rest of us with him.


AppStories, Episode 442, ‘An iPad Pickle’ Show Notes

Rethinking Where the iPad Fits After iPadOS 26

AppStories+ Post-Show


NPC, Episode 38, ‘What’s That Smell? The RG Slide’ Show Notes

The Latest Portable Gaming News

NPC XL: Retro Videogame Stores

Subscribe to NPC XL

NPC XL is a weekly members-only version of NPC with extra content, available exclusively through our new Patreon for $5/month.

Each week on NPC XL, Federico, Brendon, and John record a special segment or deep dive on a particular topic, released alongside the “regular” NPC episodes.

You can subscribe here.


MacStories launched its first podcast, AppStories, in 2017. Since then, the lineup has expanded into a family of weekly shows that also includes MacStories Unwind, Magic Rays of Light, Comfort Zone, and NPC: Next Portable Console, which collectively cover a broad range of the modern media world, from Apple’s streaming service and videogame hardware to apps, for a growing audience that appreciates our thoughtful, in-depth approach to media.

If you’re interested in advertising on our shows, you can learn more here or by contacting our Managing Editor, John Voorhees.


Access Extra Content and Perks

Founded in 2015, Club MacStories has delivered exclusive content every week for nearly a decade.

What started with weekly and monthly email newsletters has blossomed into a family of memberships designed for every MacStories fan.

Club MacStories: Weekly and monthly newsletters via email and the web that are brimming with apps, tips, automation workflows, longform writing, early access to the MacStories Unwind podcast, periodic giveaways, and more;

Club MacStories+: Everything that Club MacStories offers, plus an active Discord community, advanced search and custom RSS features for exploring the Club’s entire back catalog, bonus columns, and dozens of app discounts;

Club Premier: All of the above and AppStories+, an extended version of our flagship podcast that’s delivered early, ad-free, and in high-bitrate audio.

Learn more here and from our Club FAQs.

Join Now

The Curious Case of Apple and Perplexity

2025-06-23 01:28:32

Good post by Parker Ortolani, analyzing the pros and cons of a potential Perplexity acquisition by Apple:

According to Mark Gurman, Apple executives are in the early stages of mulling an acquisition of Perplexity. My initial reaction was “that wouldn’t work.” But I’ve taken some time to think through what it could look like if it were to come to fruition.

He gets to the core of the issue with this acquisition:

At the end of the day, Apple needs a technology company, not another product company. Perplexity is really good at, for lack of a better word, forking models. But their true speciality is in making great products, they’re amazing at packaging this technology. The reality is though, that Apple already knows how to do that. Of course, only if they can get out of their own way. That very issue is why I’m unsure the two companies would fit together. A company like Anthropic, a foundational AI lab that develops models from scratch is what Apple could stand to benefit from. That’s something that doesn’t just put them on more equal footing with Google, it’s something that also puts them on equal footing with OpenAI which is arguably the real threat.

While I’m not the biggest fan of Perplexity’s web scraping policies and its CEO’s remarks, it’s undeniable that Perplexity has built a series of good consumer products, is fast at integrating the latest models from major AI vendors, and has even dipped its toes into custom models (with Sonar, an in-house model based on Llama). At first sight, I would agree with Ortolani and say that Apple would need Perplexity’s search engine and LLM integration talent more than the Perplexity app itself. So far, Apple has only integrated ChatGPT into its operating systems; Perplexity supports all the major LLMs currently in existence. If Apple wants to make the best computers for AI rather than being a bleeding-edge AI provider itself…well, that’s pretty much aligned with Perplexity’s software-focused goals.

However, I wonder if Perplexity’s work on its iOS voice assistant may have also played a role in these rumors. As I wrote a few months ago, Perplexity shipped a solid demo of what a deep LLM integration with core iOS services and frameworks could look like. What could Perplexity’s tech do when integrated with Siri, Spotlight, Safari, Music, or even third-party app entities in Shortcuts?

Or, look at it this way: if you’re Apple, would you spend $14 billion to buy an app and rebrand it as “Siri That Works” next year?

→ Source: parkerortolani.blog

I Have Many Questions About Apple’s Updated Foundation Models and the (Great) ‘Use Model’ Action in Shortcuts

2025-06-23 00:48:57

Apple’s ‘Use Model’ action in Shortcuts.

I mentioned this on AppStories during the week of WWDC: I think Apple’s new ‘Use Model’ action in Shortcuts for iOS/iPadOS/macOS 26, which lets you prompt either the local or cloud-based Apple Foundation models, is Apple Intelligence’s best and most exciting new feature for power users this year. This blog post is a way for me to better explain why as well as publicly investigate some aspects of the updated Foundation models that I don’t fully understand yet.

On the surface, the ‘Use Model’ action is pretty simple: you can choose to prompt Apple’s on-device model, the cloud one hosted on Private Cloud Compute, or ChatGPT. You can type whatever you want in the prompt, and you can choose to enable a follow-up mode in the action that lets you have a chatbot-like conversation with the model.

Having a conversation with Apple’s local model from Shortcuts.
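For third-party developers, the on-device model that Shortcuts is tapping into here is exposed through the new FoundationModels framework. Here’s a minimal Swift sketch of that developer-side path, with API names taken from Apple’s WWDC25 sessions; treat the exact signatures as approximate, since they may still change during the beta cycle:

```swift
import FoundationModels

// A minimal sketch of prompting the on-device Apple Foundation model from Swift.
// API names follow Apple's WWDC25 FoundationModels sessions; exact signatures
// are approximate and may change across betas.
func summarize(_ text: String) async throws -> String {
    // The instructions play a role similar to the prompt you type into 'Use Model'.
    let session = LanguageModelSession(
        instructions: "You are a concise assistant. Summarize the user's text in two sentences."
    )
    let response = try await session.respond(to: text)
    return response.content
}
```

As far as I can tell, reusing the same session across multiple requests is what the action’s follow-up mode maps to, since the session keeps the conversation’s context around between calls.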

What’s immediately notable here is that, while third-party app developers only got access to the local Foundation model to use in their apps, Shortcuts users can prompt either version of AFM. It’s not clear whether the cloud model will be rate-limited if you perform too many requests in a row, which is something I’d like to clarify at some point (but I would expect so). Likewise, while the ChatGPT integration seems to hook into the native ChatGPT extension in Settings and therefore connects to your OpenAI account, it’s not clear which ChatGPT model is being used at the moment. Asking the model about it directly from Shortcuts suggested that it was using GPT-4 Turbo, which is a very old model at this point (from November 2023).

I want to focus on the Apple Foundation models for now. To understand how they work, let’s first take a look at a recent post from Apple’s Machine Learning Research blog. On June 9, the company shared some details on the updated Foundation models, writing:

The models have improved tool-use and reasoning capabilities, understand image and text inputs, are faster and more efficient, and are designed to support 15 languages. Our latest foundation models are optimized to run efficiently on Apple silicon, and include a compact, approximately 3-billion-parameter model, alongside a mixture-of-experts server-based model with a novel architecture tailored for Private Cloud Compute. These two foundation models are part of a larger family of generative models created by Apple to support our users.

Both Foundation models were originally released last year. Apple doesn’t explicitly mention whether the ‘Use Model’ action in Shortcuts is using the updated versions of AFM or last year’s versions, but I would assume the former? Again, I’d love to know more about this. Assuming that the answer is “yes”, we’re then looking at a small, ~3B model running locally on-device, and a bigger, cloud-hosted model (previously referred to as “AFM-server”) running on Private Cloud Compute. How big, exactly, is that cloud model, though? Again, let’s go back to the blog post:

We found that our on-device model performs favorably against the slightly larger Qwen-2.5-3B across all languages and is competitive against the larger Qwen-3-4B and Gemma-3-4B in English. Our server-based model performs favorably against Llama-4-Scout, whose total size and active number of parameters are comparable to our server model, but is behind larger models such as Qwen-3-235B and the proprietary GPT-4o.

If Llama-4-Scout is “comparable” to AFM-server, we may assume that the updated, MoE-based AFM-server is in the same ballpark: roughly 17 billion active parameters out of a total size on the order of 100 billion. Apple hasn’t officially documented this anywhere. Still, as a frame of reference, Apple is saying that their server model isn’t as good as modern ChatGPT or the latest large Qwen 3 model. But perhaps internally they do have one that matches the current version of ChatGPT in quality and performance?

Another aspect of the updated Apple Foundation models I’d like to understand is their vision capabilities. On its blog, Apple writes:

We compared the on-device model to vision models of similar size, namely InternVL-2.5-4B, Qwen-2.5-VL-3B-Instruct, and Gemma-3-4B, and our server model to Llama-4-Scout, Qwen-2.5-VL-32B, and GPT–4o. We found that Apple’s on-device model performs favorably against the larger InternVL and Qwen and competitively against Gemma, and our server model outperforms Qwen-2.5-VL, at less than half the inference FLOPS, but is behind Llama-4-Scout and GPT–4o.

The problem is that, right now, trying to analyze an image using the local model with the ‘Use Model’ action in Shortcuts doesn’t work. When I give it this image, I get this response:

I’m sorry, but I can’t describe images directly. However, if you provide a detailed description or tell me what you see, I’d be happy to help you analyze or interpret it!

Whereas running the same prompt with the cloud model returns a pretty good response:

The image shows a person walking two dogs on a paved path. The person is wearing white sneakers and is holding two leashes, one pink and one purple. The dogs are walking side by side, both with light brown fur and wearing harnesses. The path is lined with grass and some small plants on the left side. The scene appears to be outdoors, possibly in a park or a residential area. The lighting suggests it might be daytime.

Testing the vision capabilities of Apple’s models in Shortcuts.

Does this mean that Shortcuts is still using an old version of the on-device model, which didn’t have vision capabilities? Or is it the new model from 2025, but the vision features haven’t been enabled in Shortcuts yet? Also: why is it that, if I give the cloud model the same picture over and over, I always get the exact same response, even after several days? Is Apple caching a static copy of each uploaded file on its servers for days and associating it with a specific description to decrease latency and inference costs? Again: has this been documented anywhere? Please let me know if so.

The most fascinating part of the ‘Use Model’ action is that it works with structured data and is able to directly parse native Shortcuts entities as well as return specific data types upon configuration. For example, you can tell the ‘Use Model’ action to return a ‘Dictionary’ for your prompt, and the model’s response will be a native dictionary type in Shortcuts that you can parse with the app’s built-in actions. Or you can choose to return text, lists, and Boolean values.

Some of the native data formats supported by the action.
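For what it’s worth, the developer-side counterpart to picking a ‘Dictionary’ or Boolean output type is the FoundationModels framework’s guided generation, where a @Generable type constrains what the model can return. Here’s a hypothetical sketch; the struct and its fields are mine, and the API names come from Apple’s WWDC25 materials, so consider the details approximate:

```swift
import FoundationModels

// Hypothetical example of guided generation: instead of free-form text, the
// model is constrained to fill in this structure, much like asking the
// 'Use Model' action for a Dictionary or Boolean output.
@Generable
struct ArticleTriage {
    @Guide(description: "Titles of the saved articles that match the requested topic")
    var matchingTitles: [String]

    @Guide(description: "True if at least one article matched the topic")
    var foundMatches: Bool
}

func triage(articles: String, topic: String) async throws -> ArticleTriage {
    let session = LanguageModelSession()
    let prompt = "Return only the articles about \(topic) from this list:\n\(articles)"
    let response = try await session.respond(to: prompt, generating: ArticleTriage.self)
    return response.content
}
```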

But there’s more: the ‘Use Model’ action can take in Shortcuts variables, understand them, and return them in its output, going beyond the typical limitations of plain text responses from chatbots. From the official description of the ‘Use Model’ action:

A request for the model that optionally includes variables and outputs from previous actions, including calendar events, reminders, photos, and more.

The “and more” is doing a lot of work here, in the sense that, based on my first tests, this action seems to be able to parse any Shortcuts variable you give it. I put together a simple shortcut that gets all my unread items from GoodLinks and asks the local model:

You are an intelligent read-later assistant. Your job is to process multiple articles and return only the ones that match the topic requested by the user.

When analyzing articles, consider their Title and Summary as available in those properties of the Link variable. Return a subset of items in the Link variable that ONLY match the user’s query.

User Request

Can you give me links about Liquid Glass?

Links

Here are the links:

{{Link Variable from GoodLinks}}

Once configured to return a ‘Link’ type in its output, the action can parse GoodLinks entities (which contain multiple pieces of metadata about each saved article), filter out the ones that don’t match my query, and return a native list of GoodLinks items about Liquid Glass:

Parsing GoodLinks items with Apple Intelligence.

This also works with Reminders, Calendar events, photos, and other third-party apps I’ve tested. For example, the model was able to return variables for News Explorer, an iCloud-based RSS reader that has excellent Shortcuts integration. Interestingly, neither Apple Intelligence model was able to parse the 43 unread items in my RSS feeds, citing context length issues. This is something else I’d like to know: Apple’s blog post suggests that AFM was trained with prompts up to 65K tokens in size, but is that actually the case in Shortcuts with the ‘Use Model’ action? Regardless, ChatGPT got it done and was able to summarize all my unread items from News Explorer to give me an overview of what my unread feeds were saying:

AFM couldn’t process my 43 unread items from RSS, but ChatGPT could, likely thanks to its bigger 128K context window.
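For a rough sense of why 43 items might be too much, here’s a back-of-the-envelope sketch that uses the common (and admittedly crude) approximation of about four characters per token; the per-item size is my own guess, not a figure Apple has published:

```swift
// Crude token estimate for a batch of serialized feed items, using the common
// ~4-characters-per-token rule of thumb. Purely illustrative numbers.
func estimatedTokens(forItemCount count: Int, averageCharactersPerItem: Int) -> Int {
    (count * averageCharactersPerItem) / 4
}

// If each item's JSON (title, summary, URL, metadata) runs around 6,000 characters,
// 43 items already lands in the ~65K-token neighborhood:
let estimate = estimatedTokens(forItemCount: 43, averageCharactersPerItem: 6_000)
print(estimate) // ~64,500 tokens, before the system prompt and instructions
```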

How is this possible? We do actually have an answer to this! The ‘Use Model’ action converts Shortcuts entities from apps (such as my GoodLinks articles or reminders) into a JSON representation that AFM or ChatGPT can understand and return. Everything is, of course, based on the App Intents framework and its related app entities for third-party apps that expose actions to Shortcuts.

How app entities are sent by Shortcuts to the model as JSON objects.

If you’re logged in with a ChatGPT account, you can even see the request and the raw JSON content coming from Shortcuts:

A Shortcuts request with JSON from app entities in ChatGPT.
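To visualize what that payload might look like, here’s a purely hypothetical Swift sketch of an app entity being encoded to JSON before it’s handed to the model. The struct and its field names are invented for illustration and are not Apple’s or GoodLinks’ actual schema:

```swift
import Foundation

// Hypothetical stand-in for a Shortcuts app entity; field names are guesses
// for illustration, not the real schema used by Shortcuts or GoodLinks.
struct LinkEntity: Codable {
    let identifier: String
    let title: String
    let summary: String
    let url: URL
}

let unreadItems = [
    LinkEntity(
        identifier: "item-001",
        title: "Designing for Liquid Glass",
        summary: "A look at Apple's new material across iOS 26.",
        url: URL(string: "https://example.com/liquid-glass")!
    )
]

let encoder = JSONEncoder()
encoder.outputFormatting = [.prettyPrinted, .sortedKeys]

// The encoded JSON would then be injected into the prompt alongside the user's
// request, which matches what shows up in the ChatGPT request log.
if let data = try? encoder.encode(unreadItems),
   let json = String(data: data, encoding: .utf8) {
    print(json)
}
```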

Apple went into more detail about how this works in this WWDC session video, which I’m also embedding below. The relevant section starts at the 1:20 mark.

Again, I’d love to know some additional details here: are there limits on the number of entities converted to JSON behind the scenes that you can pass to a model? Can this JSON-based approach easily scale to more potential Apple Intelligence model integrations in the future, given that both Gemini and Claude are pretty great at dealing with JSON content and instruction-following? Is there a specific style of prompting Apple’s Foundation models (similar to Claude or GPT-4.1) that we should be aware of?

I’d love to see the system prompt that injects JSON and related instructions into the main action prompt, but I’m guessing that’s never going to happen since Apple doesn’t seem to be interested in sharing system prompts for Apple Intelligence.

I’m excited about the prospect of “Apple-approved” hybrid automation that combines the non-deterministic output of LLMs with the traditional, top-to-bottom structure of Shortcuts workflows. I have so many ideas for how I could integrate this kind of technology with shortcuts that deal with RSS, tasks, events, articles, and more. The fact that Apple designed the ‘Use Model’ action to “just work” thanks to JSON under the hood is very promising: as I’ve shown in my example, it means that entities from Shortcuts actions don’t have to come from apps that have been updated for iOS 26; I’m running the iOS 18 version of GoodLinks, and the ‘Use Model’ action worked out of the box with GoodLinks entities.

Hopefully, as the dust settles on the first developer betas of iOS/iPadOS/macOS 26, Apple will reveal more details about their Foundation models and their integration with Shortcuts. Who would have thought, just two weeks ago, that I’d be genuinely intrigued by something related to Apple Intelligence?


Access Extra Content and Perks

Founded in 2015, Club MacStories has delivered exclusive content every week for nearly a decade.

What started with weekly and monthly email newsletters has blossomed into a family of memberships designed for every MacStories fan.

Club MacStories: Weekly and monthly newsletters via email and the web that are brimming with apps, tips, automation workflows, longform writing, early access to the MacStories Unwind podcast, periodic giveaways, and more;

Club MacStories+: Everything that Club MacStories offers, plus an active Discord community, advanced search and custom RSS features for exploring the Club’s entire back catalog, bonus columns, and dozens of app discounts;

Club Premier: All of the above and AppStories+, an extended version of our flagship podcast that’s delivered early, ad-free, and in high-bitrate audio.

Learn more here and from our Club FAQs.

Join Now

Initial Notes on iPadOS 26’s Local Capture Mode

2025-06-21 07:42:51

Now this is what I call follow-up: six years after I linked to Jason Snell’s first experiments with podcasting on the iPad Pro (which later became part of a chapter of my Beyond the Tablet story from 2019), I get to link to Snell’s first impressions of iPadOS 26’s brand new local capture mode, which lets iPad users record their own audio and video during a call.

First, some context:

To ensure that the very best audio and video is used in the final product, we tend to use a technique called a “multi-ender.” In addition to the lower-quality call that’s going on, we all record ourselves on our local device at full quality, and upload those files when we’re done. The result is a final product that isn’t plagued by the dropouts and other quirks of the call itself. I’ve had podcasts where one of my panelists was connected to us via a plain old phone line—but they recorded themselves locally and the finished product sounded completely pristine.

This is how I’ve been recording podcasts since 2013. We used to be on a call on Skype and record audio with QuickTime; now we use Zoom, Audio Hijack, and OBS for video, but the concept is the same. Here’s Snell on how the new iPadOS feature, which lives in Control Center, works:

The file it saves is marked as an mp4 file, but it’s really a container featuring two separate content streams: full-quality video saved in HEVC (H.265) format, and lossless audio in the FLAC compression format. Regardless, I haven’t run into a single format conversion issue. My audio-sync automations on my Mac accept the file just fine, and Ferrite had no problem importing it, either. (The only quirk was that it captured audio at a 48KHz sample rate and I generally work at 24-bit, 44.1KHz. I have no idea if that’s because of my microphone or because of the iPad, but it doesn’t really matter since converting sample rates and dithering bit depths is easy.)

I tested this today with a FaceTime call. Everything worked as advertised, and the call’s MP4 file was successfully saved in my Downloads folder in iCloud Drive (I wish there was a way to change this). I was initially confused by the fact that recording automatically begins as soon as a call starts: if you press the Local Capture button in Control Center before getting on a call, as soon as it connects, you’ll be recording. It’s kind of an odd choice to make this feature just a…Control Center toggle, but I’ll take it! My MixPre-3 II audio interface and microphone worked right away, and I think there’s a very good chance I’ll be able to record AppStories and my other shows from my iPad Pro – with no more workarounds – this summer.

→ Source: sixcolors.com

Podcast Rewind: A Challenging Challenge, a Couple of Crime Dramas, and a Haptic Trailer

2025-06-21 03:15:29

Enjoy the latest episodes from MacStories’ family of podcasts:

Comfort Zone

Chris is back in his element in the iPadOS 26 world, Matt just wants to play some games, and Niléane oversees the hardest challenge in ages.

This episode is sponsored by:

  • Ecamm Live – Broadcast Better with Ecamm Live. Coupon code MACSTORIES gives 1 month free of Ecamm Live to new customers.

MacStories Unwind

This week, John installs macOS Tahoe, he and Federico each recommend a recent crime drama, and we have a Daredevil Unwind deal.


Magic Rays of Light

Sigmund and Devon highlight the return of The Buccaneers, explore how haptics and other metadata could enhance media experiences on Apple’s platforms, and recap the first season of Your Friends & Neighbors.


Comfort Zone, Episode 54, ‘Better Stickies’ Show Notes

Let us know your favorite thing!

Main Topics

Our Last Played iPhone Games

Follow the Hosts


MacStories Unwind, ‘Beta Season and Crime Dramas’ Show Notes

Unplugged Segment

Picks

Unwind Deal

Daredevil, Seasons 1–3 – lowest price ever on Apple TV at $34.99

MacStories Unwind+

We deliver MacStories Unwind+ to Club MacStories subscribers ad-free and early with high bitrate audio every week.

To learn more about the benefits of a Club MacStories subscription, visit our Plans page.


Magic Rays of Light, Episode 173, ‘The Buccaneers, Enhanced Media Experiences, and Your Friends & Neighbors’ Show Notes

Highlight

Enhanced Media Experiences

Trailer Talk

Releases

Extras

Recap

TV App Highlights

Up Next

Contact


MacStories launched its first podcast, AppStories, in 2017. Since then, the lineup has expanded into a family of weekly shows that also includes MacStories Unwind, Magic Rays of Light, Ruminate, Comfort Zone, and NPC: Next Portable Console, which collectively cover a broad range of the modern media world, from Apple’s streaming service and videogame hardware to apps, for a growing audience that appreciates our thoughtful, in-depth approach to media.

If you’re interested in advertising on our shows, you can learn more here or by contacting our Managing Editor, John Voorhees.


Access Extra Content and Perks

Founded in 2015, Club MacStories has delivered exclusive content every week for nearly a decade.

What started with weekly and monthly email newsletters has blossomed into a family of memberships designed for every MacStories fan.

Club MacStories: Weekly and monthly newsletters via email and the web that are brimming with apps, tips, automation workflows, longform writing, early access to the MacStories Unwind podcast, periodic giveaways, and more;

Club MacStories+: Everything that Club MacStories offers, plus an active Discord community, advanced search and custom RSS features for exploring the Club’s entire back catalog, bonus columns, and dozens of app discounts;

Club Premier: All of the above and AppStories+, an extended version of our flagship podcast that’s delivered early, ad-free, and in high-bitrate audio.

Learn more here and from our Club FAQs.

Join Now

Swift Assist, Part Deux

2025-06-19 23:47:09

At WWDC 2024, I attended a developer tools briefing with Jason Snell, Dan Moren, and John Gruber. Later, I wrote about Swift Assist, an AI-based code generation tool that Apple was working on for Xcode.

That first iteration of Swift Assist caught my eye as promising, but I remember asking at the time whether it could modify multiple files in a project at once and being told it couldn’t. What I saw was rudimentary by 2025’s standards, compared to tools like Cursor, but I was glad to see that Apple was working on a generative tool for Xcode users.

In the months that followed, I all but forgot that briefing and story, until a wave of posts asking, “Whatever happened to Swift Assist?” started appearing on social media and blogs. John Gruber and Nick Heer picked up on the thread and came across my story, citing it as evidence that the MIA feature was real but curiously absent from any of 2024’s Xcode betas.

This year, Jason Snell and I had a mini reunion of sorts during another developer tools briefing. This time, it was just the two of us. Among the Xcode features we saw was a much more robust version of Swift Assist that, unlike in 2024, is already part of the Xcode 26 betas. Having been the only one who wrote about the feature last year, I couldn’t let the chance to document what I saw this year slip by.

Apple worked with OpenAI on its ChatGPT integration, a limited version of which can be used for free or with a ChatGPT API account.


I’m not a developer, so I’m not going to review Swift Assist (a name that is conspicuously absent from Apple’s developer tool press release, by the way), but the changes are so substantial that the feature I was shown this year hardly resembles what I saw in 2024. Unlike last year’s demo, this version can revise multiple project files and includes support for multiple large language models, including OpenAI’s ChatGPT, which has been tuned to work with Swift and Xcode. Getting started with ChatGPT doesn’t require an OpenAI account, but developers can choose to use their account credentials from OpenAI or another provider, like Anthropic. Swift Assist also supports local model integration. If your chosen AI model takes you down a dead end, code changes can be rolled back incrementally at any time.

Swift Assist’s chatbot lives in the left sidebar.

It’s also notable that this is Apple’s first stab – in any app – at a chatbot. The chat interface lives in the left sidebar, where you can request code changes, bug fixes, documentation, and other information relevant to a project. Changes proposed by your selected LLM are color-coded to make them easy to review, too.

It’s great to see Apple go down a path that gives developers the flexibility to choose whichever model they’d like, visualize changes, and roll them back as needed. Whether that’s enough to satisfy developers who have increasingly looked to third-party options to incorporate AI into their workflows remains to be seen. The reactions that I’ve seen to Xcode 26’s new features run the gamut. Some developers are cautiously optimistic and even enthusiastic about Xcode 26, while others have run into roadblocks or decided the update is too little, too late.

Still, what I saw this year struck me as a more solid foundation to build on. My hope is that, by getting these features in developers’ hands at the beginning of the beta period, Apple will have a chance to incorporate developer feedback before releasing Xcode 26 publicly this fall.


Access Extra Content and Perks

Founded in 2015, Club MacStories has delivered exclusive content every week for nearly a decade.

What started with weekly and monthly email newsletters has blossomed into a family of memberships designed for every MacStories fan.

Club MacStories: Weekly and monthly newsletters via email and the web that are brimming with apps, tips, automation workflows, longform writing, early access to the MacStories Unwind podcast, periodic giveaways, and more;

Club MacStories+: Everything that Club MacStories offers, plus an active Discord community, advanced search and custom RSS features for exploring the Club’s entire back catalog, bonus columns, and dozens of app discounts;

Club Premier: All of the above and AppStories+, an extended version of our flagship podcast that’s delivered early, ad-free, and in high-bitrate audio.

Learn more here and from our Club FAQs.

Join Now