Re: Speaking at Bletchley Park, Liquid Glass compatibility, and iOS 26 approaches
Hey everyone!
I’m in the middle of preparing for a trip to the UK: first, I’ll be in London for a couple of days to help staff Cloud Summit London. After that, I will be at the Build with AI event @ Bletchley Park. This event will be held at The National Museum of Computing, and I’ve been told that one of the conference stages will be in front of the oldest working computer in the world - super exciting! I have the honor of delivering part of the keynote, together with Gus Martins and Rosário Fernandez - it should be fun! Final tickets are available here.
If you happen to see me, come say hi - it’s lovely meeting people in real life that you typically only “see” online!
The week after, I’ll be back in London, to record a couple of videos about Firebase AI Logic at the Google office near King’s Cross station - keep your eyes peeled on the Firebase YouTube channel (don’t forget to subscribe to be notified when they land)!
In this issue, I’ve included a couple of links that cover adopting the new Liquid Glass design in your apps while keeping them backward compatible - something that I will also look into in my next livestream. There are a couple of different approaches, each with its own advantages and drawbacks. I like Majid’s approach, which includes a clever way of making sure you clean up any obsolete code once you start shipping to iOS 26.
If you want to make use of the new Liquid Glass modifiers and also support users on iOS 18, but don’t want to riddle your code with #available(iOS 26.0, *) checks, this library might be just for you.
if #available(iOS 26.0, *) {
    MyView().glassEffect()
} else {
    MyView()
}
This works nicely, although you might need to adjust some of your code slightly. For example, .buttonStyle(.glass) now becomes .backport.glassButtonStyle().
Still a lot easier to read than the equivalent #available check!
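To give you an idea, using the library might look something like this (a sketch based on the .backport namespace mentioned above - the exact modifier names may differ from the library’s actual API):

```swift
import SwiftUI

struct GlassCard: View {
    var body: some View {
        // On iOS 26 this applies the real glass effect;
        // on earlier versions the library falls back gracefully.
        Text("Hello, Liquid Glass!")
            .padding()
            .backport.glassEffect()
    }
}
```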
How many times have you wanted to display an item’s position when presenting a list of items? Getting a sequence of items and their position is possible by calling enumerated() on the collection, but in order to display this in a SwiftUI list, you had to wrap this in an array.
This is no longer required, making our code easier to read (and safer, too). Natalia shows how it works, and points out some best practices when working with enumerated sequences.
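The difference looks roughly like this (a sketch, assuming a recent Swift toolchain in which enumerated sequences conform to Collection; items is a hypothetical array):

```swift
import SwiftUI

struct RankedList: View {
    let items = ["Gold", "Silver", "Bronze"]

    var body: some View {
        List {
            // Previously you had to wrap the sequence in an Array:
            //   ForEach(Array(items.enumerated()), id: \.offset) { ... }
            // Now enumerated() can be used directly:
            ForEach(items.enumerated(), id: \.offset) { index, item in
                Text("\(index + 1). \(item)")
            }
        }
    }
}
```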
If you’ve used SwiftUI, you’ve come across the Attribute Graph (most likely in a stack trace…) - it’s the engine that powers SwiftUI’s rendering system under the hood.
We don’t know exactly how the Attribute Graph works (well, unless you work on the SwiftUI team at Apple), but Chris probably has as good an understanding of how it (most likely) works as anyone outside of Apple.
Check out this recording of a talk Chris gave at the One More Thing conference during WWDC this year - it’s a fascinating deep dive into how SwiftUI’s rendering system works, and how it ensures efficient updates.
Chris provides a list of sources for further reading, and you might find AGDebugKit particularly interesting.
Toolbars are one of the UI elements that have changed very prominently with the new Liquid Glass design - the WWDC session Build a SwiftUI app with the new design has an entire chapter just about toolbars!
One aspect I found really interesting in Majid’s blog post is how he dealt with the different visual appearance toolbars have in iOS 26 vs. previous iOS versions. On iOS 26, an icon-only label is the preferred style, whereas UIs on iOS 18 and earlier typically use text labels.
Majid introduces a new label style to handle this difference. By marking this style as obsolete in iOS 26 and above, he forces a compiler error, reminding you to remove the obsolete code once you can ship on iOS 26.
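The idea behind the compiler-error trick can be sketched with the obsoleted argument of the @available attribute (a sketch of the concept, not Majid’s exact code - the style name and versions here are made up):

```swift
import SwiftUI

// Once your deployment target reaches iOS 26, every use of this
// style becomes a compiler error, reminding you to delete it.
@available(iOS, introduced: 15.0, obsoleted: 26.0,
           message: "Remove this fallback and rely on the system toolbar styling.")
struct TitleOnlyLabelStyle: LabelStyle {
    func makeBody(configuration: Configuration) -> some View {
        configuration.title
    }
}
```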
Rudrank has put together a comprehensive collection of examples that demonstrate how to use Apple’s Foundation Models framework and its on-device AI capabilities.
From basic chat and structured data generation to tool calling - this repo covers basic and advanced features alike, and is definitely worth a star (378 other stargazers thought the same).
Speaking of advanced use cases, and tool calling in particular, you’ll definitely want to check out the next link as well!
Tool calling (often referred to as function calling) is a powerful feature of modern AI systems, allowing models to interact with external data sources, APIs, and system frameworks. Tool calling is used very prominently in cloud-based AI frameworks (for example, Genkit has a very powerful tool calling implementation); Apple’s Foundation Models framework brings this capability on-device, opening up exciting possibilities for your apps.
Alex explains the concepts behind tool calling using a (not surprisingly coffee-themed) example, and even provides some useful tips for debugging your AI tool calls.
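To give a flavor of what a tool looks like with Foundation Models, here is a minimal sketch (the coffee-themed tool name and its canned output are made up for illustration - see Apple’s documentation for the exact API details):

```swift
import FoundationModels

// A tool the on-device model can decide to call to fetch live data.
struct CoffeeMenuTool: Tool {
    let name = "getCoffeeMenu"
    let description = "Returns the coffee drinks currently on the menu."

    @Generable
    struct Arguments {
        @Guide(description: "Drink category, e.g. 'espresso' or 'filter'")
        var category: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // In a real app, you'd look this up from a database or API.
        ToolOutput("Espresso, Cappuccino, Flat White")
    }
}

// The session invokes the tool automatically when the model needs it.
let session = LanguageModelSession(tools: [CoffeeMenuTool()])
```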
If you weren’t able to attend Deep Dish Swift this year, don’t fret! Josh mentioned that they’re in the process of uploading all session recordings to their YouTube channel - 19 sessions in total, over the course of 19 days!
Also, the dates for 2026 have been announced already, so mark your calendars!