Fatbobman's Swift Weekly #134
Getting AI from Handy to Heartfelt

It’s been three years since I started using AI tools in earnest. Over these three years, I’ve witnessed AI’s remarkable leaps in capability — and grown increasingly aware of its limits. By now, AI has firmly established itself as an excellent productivity tool. Yet getting it to produce code that truly feels “heartfelt” — code that aligns with my personal style, ideas, and design philosophy — remains a considerable challenge.
Even though mainstream models now routinely offer 1M-token context windows, the information they can carry still falls far short of a human developer's holistic grasp of a growing project. Factor in attention decay, and the truly usable context is actually quite limited. This is why, in recent months, I've published fewer technical articles written for human readers and produced quite a lot of documentation written for AI instead. Readers often write to ask why my blog has slowed down; this is the reason.
The point of writing all this dedicated documentation isn’t merely to give AI enough context to work with. More importantly, I’m exploring a SwiftUI state pattern of my own design — one that diverges significantly from the prevailing paradigm. This means AI’s vast trove of pretraining data often becomes noise, even active resistance: it keeps unconsciously pulling you back toward the most common conventions. The whole process therefore takes on a kind of tug-of-war rhythm. I use AI to validate ideas rapidly, turning vague intuitions into precise contracts and interfaces; in parallel, I keep refining the documentation, which in turn constrains the AI and steers it toward code that matches my intent.
There’s still some distance to the finish line, but as the code, structure, and documentation iterate together — and especially as my half-formed ideas gradually crystallize and land — I can feel a clear shift: the AI is becoming noticeably more fluent within this project, and its output is converging on what I have in mind.
There are countless ways to arrive at the same UI outcome. What I want is to use various constraints to make AI choose the specific path I’ve laid out. I’m not claiming my own code is flawless — only that I want AI’s output to feel familiar and within my grasp, so I can step in and maintain it without friction.
With clear goals and thorough guidance, AI now implements things tens or even hundreds of times faster than I can. When “efficiency” is no longer the bottleneck, the real question for the next stage is how to make this lightning-fast assistant more heartfelt — more in tune with us.
Previous Issue|Newsletter Archive
📢 Sponsor Fatbobman’s Swift Weekly
Promote your product to Swift & iOS developers across:
- Blog: 50,000+ monthly visitors
- Newsletter: 4,000+ subscribers, 53% open rate
Perfect for developer tools, courses, and services.
Enjoyed this issue? Buy me a coffee ☕️
Recent Recommendations
Q&A: Swift Concurrency - Formatted
This is a transcript of a Swift Concurrency Q&A with Apple engineers, compiled by Anton Gubarenko. In the session, the engineers addressed many of the most commonly misunderstood aspects of Swift’s current concurrency model: from the behavior change behind nonisolated(nonsending), to the boundaries of @concurrent, to Task lifetime and cancellation. Rather than merely adding more knowledge, this feels more like a “semantic recalibration.”
One noteworthy signal is that Swift is moving from “async implies concurrency by default” toward a more conservative default, where concurrency is introduced explicitly only when needed.
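For readers who haven't followed the proposals behind this shift, it can be sketched in a few lines. The attribute spellings below are from Swift 6.2; the function names and bodies are purely hypothetical:

```swift
import Foundation

// Swift 6.2's more conservative direction, sketched with hypothetical
// functions. With nonisolated(nonsending), a nonisolated async function
// runs on the caller's actor instead of hopping to the global executor:
nonisolated(nonsending)
func parseLocally(_ data: Data) async -> Int {
    data.count // stays in the caller's isolation
}

// Concurrency is now an explicit opt-in: @concurrent always moves the
// work off the caller's actor onto the global concurrent executor.
@concurrent
func crunchInBackground(_ data: Data) async -> Int {
    data.reduce(0) { $0 + Int($1) } // heavy work belongs here
}
```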
Immediate tasks in Swift Concurrency explained
Swift 6.2 introduced a subtle but useful new feature: Task.immediate. Unlike a regular Task, an immediate task starts synchronously in the current execution context and continues running until the first actual suspension point. Antoine van der Lee offers a clear explanation of its behavior in this article.
This capability mainly fills a long-standing gap: calling async logic from a synchronous context while still preserving execution order, such as immediately updating state when the actor isolation is already correct. In these scenarios, Task.immediate can avoid the timing mismatch caused by scheduling delay.
Its risk is just as direct: if heavy synchronous work runs before the first suspension point, especially on the MainActor, it can block the current executor and cause visible UI hitches.
Task.immediate only changes when a task starts executing, not the task's overall lifecycle. In most cases, the regular Task scheduling behavior remains the safer choice.
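The ordering behavior can be sketched in a few lines (assuming a Swift 6.2 toolchain; the logging is purely illustrative):

```swift
import Foundation

// Task.immediate (Swift 6.2) starts executing synchronously in the
// caller's context, so everything before the first suspension point
// happens before the enclosing function returns.
var log: [String] = []

func demo() {
    log.append("before")
    Task.immediate {
        log.append("runs synchronously, before demo() returns")
        await Task.yield() // first suspension point
        log.append("resumes later, like a regular Task")
    }
    // With a regular Task, this line would run before the closure started.
    log.append("after")
}
```

This is also where the risk described above lives: anything placed before that first `await` blocks the caller, which on the MainActor means blocking the UI.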
Concurrency Step-by-Step: Designing Protocols
In this article, Matt Massicotte shares a more practically executable approach to protocol design in the context of Swift 6 strict concurrency.
In Swift 6, protocol design has become more difficult than ever. You are no longer just defining methods; you are also defining isolation boundaries. Should the protocol be marked @MainActor? Should it inherit from Sendable? Should its methods be async? Matt points out that many of the “waterfalls of concurrency errors” developers encounter when adopting Swift 6 may look like isolation-domain conflicts on the surface, but are often architectural problems caused by premature abstraction. If you try to design the perfect protocol before the requirements are clear, it is very easy to get “locked in” by concurrency rules.
Matt’s advice is to avoid starting with a protocol. Start with concrete types instead. Let the interface boundaries gradually emerge from real usage, and postpone abstractions that involve isolation domains or context-dependent capabilities. This approach helps avoid falling into the trap of fat protocols and excessive constraints in the concurrency era. Rather than a protocol design guide, it is more like a pragmatic “delay decisions” strategy for Swift 6.
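A compressed illustration of that ordering, with all names hypothetical: write the concrete type first, and extract the protocol only once real call sites exist.

```swift
// Step 1: start with a concrete @MainActor type. No protocol yet, so no
// isolation decisions are locked in prematurely.
@MainActor
final class UserStore {
    private(set) var names: [String] = []
    func add(_ name: String) { names.append(name) }
}

// Step 2: only after real call sites exist do we extract an interface.
// By then we know it is used from the main actor, so the @MainActor
// annotation records an observed fact rather than a guess.
@MainActor
protocol NameStoring: AnyObject {
    var names: [String] { get }
    func add(_ name: String)
}

extension UserStore: NameStoring {}
```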
Automating your Xcode Project
Xcode project files have long been a source of trouble in version control. Starting from this pain point, Leo G Dion introduces practical ways to generate Xcode projects using XcodeGen and Tuist, along with a fairly complete Tuist workflow. The article also walks through the key configuration required for a minimally shippable project: deployment target, App Icon, Privacy Manifest, signing information, and version management, forming a useful automation-oriented “project checklist.”
Although I am currently the only developer on my projects, I have also switched to generating Xcode projects with Tuist. On one hand, these tools provide a higher degree of engineering determinism; on the other, their value is amplified further in AI-assisted development: most agents support them well and, when files in the workspace change, can automatically rerun generate before compiling. Tools like Tuist and XcodeGen are gradually becoming AI-friendly engineering infrastructure.
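As a rough sketch of what such a generated setup looks like, here is a minimal Tuist `Project.swift` manifest; every name, bundle ID, and version below is a placeholder, not taken from the article:

```swift
import ProjectDescription

// Minimal Tuist manifest covering the "minimally shippable" basics the
// article lists: deployment target, sources, resources, and identity.
let project = Project(
    name: "MyApp",
    targets: [
        .target(
            name: "MyApp",
            destinations: .iOS,
            product: .app,
            bundleId: "com.example.myapp",
            deploymentTargets: .iOS("17.0"),
            infoPlist: .default,
            sources: ["Sources/**"],
            resources: ["Resources/**"]
        )
    ]
)
```

Running `tuist generate` against this manifest produces the `.xcodeproj`, which can then stay out of version control entirely.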
Appearance Mode Changer
Two days ago, Stewart Lynch celebrated his 75th birthday. In a post, he wrote: “75 years of patches, upgrades, bug fixes, deprecated habits, and surprisingly few fatal errors. Still compiling. Still shipping. Still learning.”
As a well-known video tutorial creator, Stewart has also recently restarted his blog, publishing short tips in written form. This article covers an implementation of appearance mode switching in SwiftUI.
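The core of such a feature can be sketched in a few lines; this is a generic approach, not necessarily Stewart's exact implementation: persist the user's choice and map it onto preferredColorScheme, where nil means "follow the system".

```swift
import SwiftUI

// Persisted appearance choice; String raw values make it @AppStorage-friendly.
enum Appearance: String, CaseIterable {
    case system, light, dark

    // nil tells SwiftUI to follow the system setting.
    var colorScheme: ColorScheme? {
        switch self {
        case .system: return nil
        case .light:  return .light
        case .dark:   return .dark
        }
    }
}

struct ContentView: View {
    var body: some View { Text("Hello") } // placeholder content
}

struct RootView: View {
    @AppStorage("appearance") private var appearance: Appearance = .system

    var body: some View {
        ContentView()
            .preferredColorScheme(appearance.colorScheme)
    }
}
```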
For those of us working in a fast-moving industry that can easily trigger age anxiety, this may be one of the most enviable states a developer can reach. Happy birthday to Stewart, and I hope this “Still” spirit reaches everyone as well.
Six Years Perfecting Maps on watchOS
In this design diary, David Smith looks back on the six-year journey of building the Apple Watch mapping experience for Pedometer++.
The most valuable part of the article is not the implementation itself, but the series of trade-offs behind it: interactions on watchOS must be direct enough, complex configuration is almost unacceptable, and the relationship between map and data requires constant balancing between readability and information density. Even the base map is no longer an off-the-shelf dependency, but something specifically customized for Liquid Glass. His technical choice is also representative: even though MapKit has arrived on watchOS, he still chose a fully custom solution because its configurability and expressiveness remain limited. Behind this decision is not only technical capability, but also a clear sense of product-experience priorities.
This is a classic example of long-term product refinement.
Tools
Kadr: Describing Video Composition with a Swift DSL
Developed by Steliyan Hadzhidenev, Kadr is a Swift-native video composition library that uses a Result Builder DSL to organize AVFoundation’s otherwise scattered concepts — clips, transitions, multiple tracks, filters, overlays, audio, and export workflows — into a declarative API. What makes it worth following is not merely the cleaner syntax, but how it demonstrates the coordination of Swift 6 strict concurrency, Sendable, async/await, and time models such as CMTime in a real media-processing context.
Its companion project, KadrUI, provides a set of SwiftUI-side editing components, including VideoPreview, OverlayHost, multi-track TimelineView, InspectorPanel, and KeyframeEditor. This means the DSL is not limited to export workflows, but can also support core video-editor interactions such as previewing, dragging, trimming, keyframes, and overlay editing.
SwiftVLC: A Modern libVLC Wrapper for SwiftUI
Developed by Omar Albeik, SwiftVLC is a SwiftUI-oriented Swift wrapper around libVLC 4.0. Compared with the traditional VLCKit, it removes the Objective-C middle layer and directly provides an @Observable Player, AsyncStream event streams, typed throws, and a VideoView(player) that can be integrated in a single line. Its value is not just that it is “more Swift,” but that it offers a way to connect low-level multimedia capabilities with the modern Swift concurrency model. If your app needs to handle formats, subtitles, or complex network protocols that AVFoundation does not cover well, libVLC-based solutions remain hard to replace.
Of course, the limitations are equally clear: it requires relatively new system versions, and the underlying libVLC still requires attention to LGPL compliance.

