There was a lot of new stuff announced at WWDC, and there is no shortage of blog posts, videos, and sample projects to digest: the community is already hard at work creating content for the new features. I actually had to push some cool stuff to next week’s issue!
What I found most surprising was the stark contrast between the developer community’s excitement about the new features, and the tech outlets’ lack of enthusiasm.
For example, in the WWDC edition of the Hardfork podcast, Casey Newton delivered a scathing verdict on Apple’s Liquid Glass UI. Citing a famous Steve Jobs quote (“Design is not just what it looks like and feels like. Design is how it works”), Casey argued that “Liquid Glass […] is a design that is about how it looks. It is not about how it works. I don’t know what this design is supposed to do that it didn’t before.”
In terms of AI features, tech outlets seem to be equally disappointed. Take this interview with Craig Federighi and Greg Joswiak on the new smart Siri:
Joanna Stern’s pointed questions about the new smart Siri sum up the vibe at most of the tech outlets. While I share the frustration about the seeming lack of progress from a consumer perspective, the developer community sees things differently. Looking at some of the new APIs, I think the time Apple spent on getting them right was well worth it, and developers seem genuinely excited to use these new features in their apps.
What’s your take on the new features? What are you most excited about? Let me know by hitting the reply button - or message me on social media.
Xcode 26’s Coding Intelligence is one of the most exciting new features for iOS developers, bringing AI-powered code assistance directly into Apple’s IDE.
I was super excited to try this out, but wasn’t able to make the ChatGPT integration work. Fortunately, Xcode also supports a BYOM (bring your own model) approach, so I decided to use Gemini 2.5 Pro.
Gemini provides an OpenAI-compatible API, so setting it up should be easy, right? Well, it turns out that it’s slightly more complicated than that. Thanks to Carlo Zottmann’s excellent blog post, I was able to get it working.
The trick is to use Proxyman for URL rewriting. And that opens up a whole new world of possibilities - for example, you can intercept the communication between Xcode and the underlying LLM, and see exactly what’s going on.
In my article, I dissect the system instructions, user prompts, and tool-calling mechanisms that power Xcode’s Coding Intelligence feature to help you understand how it works under the hood. It’s a fascinating look into the inner workings of Apple’s AI-powered coding assistant, and quite enlightening. If you thought coding agents were a black box, this article will open your eyes. It’s less magic, and more elbow grease, than you might think.
This is Apple’s very own overview of what’s new in SwiftUI. It’s not only a great summary of the changes, but also a great starting point for anyone interested in creating content for any of the new features and APIs.
As always, Paul’s SwiftUI summary is one of the most comprehensive resources for getting the lowdown on what’s new and hot in SwiftUI. The article is an umbrella for an entire network of linked articles that go into the details of everything that’s new and updated. You will definitely want to read these.
I covered Firebase AI Logic in issue 82 - it’s Firebase’s AI offering for mobile and web apps, making it easy to securely call Gemini and Imagen APIs from your apps.
Firebase has introduced a cool new experimental feature: hybrid on-device inference for the Firebase AI Logic client SDK for the web.
You can now use on-device models like Gemini Nano with seamless fallback to cloud models for enhanced privacy, offline availability, and cost savings.
Here is a quick example of how to use it:
const model = getGenerativeModel(ai, { mode: "prefer_on_device" });
Available modes are:
prefer_on_device - Uses on-device when available, falls back to cloud
only_on_device - Only uses on-device models
only_in_cloud - Only uses cloud-hosted models
This is a game-changer for web developers who want to provide AI-powered features with better privacy, reliability, and cost efficiency. The seamless fallback ensures your AI features work across all devices and browsers, maximizing your app’s reach.
Ollama is an easy-to-use AI model server that allows you to run and interact with AI models on your local machine. This is not only great for privacy and security, but also for keeping cost in check.
Olleh is a bridge between Apple’s Foundation Models and the Ollama ecosystem, providing both a command-line interface and HTTP API. The HTTP API is compatible with Ollama’s endpoints, making it easy to integrate Apple’s Foundation Models into existing AI workflows, for example with your Genkit flows.
It’s important to note that Olleh isn’t a general replacement for Ollama - it serves only Apple’s Foundation Models, not the catalog of open models Ollama offers.
Here is a quick example of how to use Olleh with Genkit:
import { genkit, z } from 'genkit'
import { ollama } from 'genkitx-ollama'
import { startFlowServer } from '@genkit-ai/express'
import { logger } from 'genkit/logging'

logger.setLogLevel('debug')

const ai = genkit({
  plugins: [
    ollama({
      models: [{ name: 'olleh' }],
      serverAddress: 'http://127.0.0.1:43110',
    }),
  ],
  model: 'ollama/olleh',
})

const mainFlow = ai.defineFlow({
  name: 'mainFlow',
  inputSchema: z.string(),
}, async (input) => {
  const { text } = await ai.generate(input)
  return text
})

startFlowServer({ flows: [mainFlow] })
In case you’re wondering what Olleh stands for: it’s “Hello” reversed. Fittingly, the default port the server runs on is 43110 (“hello” in leetspeak). Gotta love the small details, eh?
Similar to Olleh, this project makes Apple’s Foundation Models available to developers, but takes a slightly different approach by providing an OpenAI-compatible API server. This allows you to use Apple’s on-device AI capabilities through familiar OpenAI API endpoints, making it easy to integrate with existing tools and workflows that expect OpenAI’s API format.
The project is implemented as a SwiftUI GUI application rather than a command-line tool, which is a clever solution to Apple’s (undocumented) rate limiting policies for Foundation Models. According to this forum thread, “An app that has UI and runs in the foreground doesn’t have a rate limit when using the models; a macOS command line tool, which doesn’t have UI, does.”
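To give you an idea of how this slots into existing tooling, here’s a minimal Swift sketch of a chat completions call against such a local server. The port and model name below are placeholders I made up - check the project’s README for the actual values.

import Foundation

// Minimal sketch: calling a local OpenAI-compatible server. The port (11535) and
// model name ("foundation") are placeholders, not the project's real defaults.
struct ChatRequest: Encodable {
  struct Message: Encodable {
    let role: String
    let content: String
  }
  let model: String
  let messages: [Message]
}

struct ChatResponse: Decodable {
  struct Choice: Decodable {
    struct Message: Decodable { let content: String }
    let message: Message
  }
  let choices: [Choice]
}

func askLocalModel(_ prompt: String) async throws -> String {
  var request = URLRequest(url: URL(string: "http://127.0.0.1:11535/v1/chat/completions")!)
  request.httpMethod = "POST"
  request.setValue("application/json", forHTTPHeaderField: "Content-Type")
  request.httpBody = try JSONEncoder().encode(
    ChatRequest(model: "foundation", messages: [.init(role: "user", content: prompt)]))

  let (data, _) = try await URLSession.shared.data(for: request)
  let response = try JSONDecoder().decode(ChatResponse.self, from: data)
  return response.choices.first?.message.content ?? ""
}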
Thomas shows how to use Apple’s Foundation Models framework in a real-world app - his Mastodon client Ice Cubes. After a brief look at Liquid Glass, he dives head first into the FoundationModels framework.
I love how using a real-world app as a driver for learning new technologies helps cover the aspects you might not typically think of when picking up a new framework - for example, making sure the app doesn’t crash when the model isn’t available.
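As a rough illustration, here’s the kind of availability check this boils down to - a sketch based on the FoundationModels framework’s SystemLanguageModel API, not Thomas’s actual code:

import FoundationModels

// Sketch: gate AI features on model availability instead of letting the app
// crash or fail silently when Apple Intelligence is turned off or the model
// hasn't been downloaded yet.
var aiFeaturesEnabled: Bool {
  if case .available = SystemLanguageModel.default.availability {
    return true
  }
  return false   // hide or disable the AI-powered UI in this case
}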
It’s also great to see how Apple’s APIs are designed to make typical use cases straightforward. For example, here is how you can generate tags for a post:
func generateTags(from message: String) async -> Tags {
  do {
    let response = try await session.respond(
      to: "Generate a list of hashtags for this social media post: \(message).",
      generating: Tags.self)
    return response.content
  } catch {
    return .init(values: [])
  }
}
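The snippet doesn’t show the Tags type that guides the generation. As a rough sketch (my own reconstruction, not necessarily how Ice Cubes defines it), it could be a @Generable type along these lines, with session being a LanguageModelSession:

import FoundationModels

// Sketch of a guided-generation type - @Generable lets the model return
// structured data instead of free-form text. The property name and guide text
// are assumptions on my part.
@Generable
struct Tags {
  @Guide(description: "Hashtags for the post, each starting with #")
  let values: [String]
}

// The `session` used above would be a LanguageModelSession, for example:
let session = LanguageModelSession()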
To learn more about why this works, read Thomas’s article.
With WWDC 2025 behind us, it’s time to take a look at the upcoming conferences. Traditionally, many iOS / Swift conferences run in the second half of the year, to give everyone a chance to try out the latest and greatest features. Expect to see a lot of new content at these conferences.
Here’s the updated list of iOS and Swift conferences happening in 2025:
Speaking of conferences, I’d like everyone to read this article by Tim. Tim runs the ServerSide.swift conference in London, and he provides a rare glimpse into the financials of running a conference.
If you’ve ever wondered what it costs to run a conference, this is a must-read. Conferences aren’t cheap, and the money has to come from somewhere. So if you can, support the community by attending conferences and sponsoring them.
This WWDC recap is different from most other recaps out there. Instead of a single person writing about what they liked, it’s a team of developers sharing their thoughts on what stood out most to them.
For example, here’s what Josh Nozzi had to say about Swift concurrency:
I also feel the MainActor-isolated-by-default build option, alongside Apple’s suggestion of turning the concurrency problem on its head (i.e., main actor first, focused concurrency additions later) gives engineers a logical and purposeful approach to adoption of strict concurrency.
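To make the “main actor first” idea concrete, here’s a small sketch of what that style can look like with the MainActor-by-default build setting turned on - the type and helper below are made up for illustration:

// With MainActor isolation as the default, an unannotated type like this is
// implicitly @MainActor - the common, UI-facing case needs no annotations at all.
final class HashtagSuggester {
  private(set) var suggestions: [String] = []   // main-actor state, safe to bind to UI

  func update(for post: String) {
    suggestions = Self.extractHashtags(from: post)
  }

  // Concurrency gets added back deliberately, one focused piece at a time -
  // here, a pure helper that is explicitly nonisolated so it can run anywhere.
  nonisolated static func extractHashtags(from post: String) -> [String] {
    post.split(separator: " ").filter { $0.hasPrefix("#") }.map(String.init)
  }
}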
I like this multi-author format - it provides a more nuanced and diverse perspective than a single-person recap.
I quoted Steve Jobs in the intro to this issue, so I thought it would be fitting to wrap up with this amazing exhibit of one of his most famous speeches.
It comprises a newly enhanced HD version of the speech, as well as fascinating behind-the-scenes details, such as notes Steve sent himself via email and the printed copy of the speech, including his handwritten notes.