Fatbobman's Swift Weekly #124
The Spring Festival Gala, Robots, AI, and LLMs
As a television program with over a billion viewers, China Central Television’s Spring Festival Gala is undoubtedly an exceptional showcase platform. In this year’s Gala, multiple Chinese robotics manufacturers presented their products in various performances, among which Unitree’s humanoid robots drew the most attention. During the show, several models of humanoid robot executed a series of highly complex martial arts and dynamic movements. Compared to last year’s more static, stationary displays, the complexity and stability of the movements have taken a significant leap—progress that has attracted coverage from media worldwide.
Following the Gala, discussions on social media showed a clear divide. Alongside the amazement at the technological progress, there was no shortage of skeptical voices dismissing the performance as “pre-programmed,” “lacking AI,” or “impractical.” To a certain extent, this reflects the public’s underestimation of the sheer complexity of robotics—especially a lack of awareness regarding the difficulties of motion control, real-time feedback systems, and system-level integration.
One point needs clarification: pre-trained does not equal “record-and-playback.” It is true that humanoid robots currently employ highly orchestrated movement sequences in such performances, but this shares a similar logic with the training of human dancers or athletes. Extensive offline training and rehearsal form the foundation of the movements, but during actual execution, the “body” must still rely on dynamic balance and real-time corrections to cope with real-world disturbances. It is precisely this fault tolerance and real-time repair capability that allows humanoid robots—a naturally unstable bipedal system—to pull off highly dynamic, continuous movements.
Meanwhile, the explosion of Large Language Models (LLMs) in recent years has led many to mistakenly equate LLMs with AI as a whole. In reality, AI, a field with decades of history, encompasses far more than just language understanding. Especially when interacting with the real physical world, the usage of specialized models—such as computer vision, path planning, motion control, and reinforcement learning—in industrial and physical systems still vastly exceeds that of LLMs. In the realm of robotics, the true ceiling of a system’s capability is typically determined by its perception systems, control systems, and low-latency feedback algorithms, rather than its language reasoning abilities.
Even if stronger “cognitive abilities” are introduced to humanoid robots in the future, the better path may not necessarily be directly plugging in an LLM. Instead, it will likely involve building World Models that inherently understand the laws of physics, paired with control systems capable of low-latency responses—two areas that happen to be inherent weaknesses of LLMs. The challenges of Embodied AI are fundamentally different from pure text reasoning.
As for the issue of “practicality,” kung fu or dancing indeed hardly correspond directly to real-world job scenarios. However, it is precisely these movements—which demand extreme balance, coordination, and dynamic response—that provide the perfect validation ground for highly complex and unstable systems like humanoid robots. They function as engineering stress tests, demonstrating the maturity of mechanical design, electronic control, and algorithmic integration, rather than proving short-term commercial viability.
Personally, I remain cautious about the future market size for humanoid robots. There is often a significant chasm between technological breakthroughs and widespread commercial adoption. Nevertheless, judging by the magnitude of progress showcased at this year’s Gala, it is reasonable to conclude that within the next decade, the prospect of robots or smart machines integrating into our daily work and living environments is no longer just sci-fi imagination. Whether you like robots or not, the trajectory of technological evolution is unmistakably clear: we will eventually need to coexist with them.
As for the apocalyptic scenario of “robots enslaving humanity,” I’m not worried about that for now. My more realistic concern is this: if they encounter a bug at work and swing a punch at me, I genuinely couldn’t take the hit.
Previous Issue|Newsletter Archive
📢 Sponsor Fatbobman’s Swift Weekly
Promote your product to Swift & iOS developers across:
- Blog: 50,000+ monthly visitors
- Newsletter: 4,000+ subscribers, 53% open rate
Perfect for developer tools, courses, and services.
Enjoyed this issue? Buy me a coffee ☕️
Recent Recommendations
How to Migrate to @Observable Without Breaking Your App
As more apps raise their minimum deployment target to iOS 17, @Observable is replacing ObservableObject as the new state management foundation. However, when a project has deeply relied on ObservableObject + @Published, migration is far from a simple macro substitution. Pawel Kozielecki draws on a real-world migration experience to systematically walk through the correct usage of property wrappers in the new system — using @State for lifecycle management, @Bindable for two-way bindings, and plain properties for read-only access — while highlighting easily overlooked details such as @ObservationIgnored and computed property tracking blind spots. The real challenge of migration has never been syntax; it’s truly understanding who owns the view model’s lifecycle.
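To make the shape of a migrated model concrete, here is a minimal sketch — the type and property names are mine, not from the article — showing how @Published disappears and where @ObservationIgnored and computed properties fit:

```swift
import Observation

// Illustrative view model after migration: no ObservableObject, no @Published.
@Observable
final class ProfileViewModel {
    var username = ""      // tracked automatically
    var isSaving = false   // tracked automatically

    // Excluded from observation: changing this never invalidates a view.
    @ObservationIgnored var analyticsID = "session-1"

    // Computed properties are observed only through the stored
    // properties they read — a common migration blind spot.
    var canSave: Bool { !username.isEmpty && !isSaving }
}
```

In SwiftUI, the view that owns this model would hold it with @State, children needing two-way bindings would take it via @Bindable, and read-only children can receive it as a plain `let` property.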
Testing with Event Streams
Although Swift Testing offers a rich assertion API, in practice you’ll find there’s no single tool that fully corresponds to XCTest’s ability to verify that multiple callbacks are triggered in order (fulfillment + enforceOrder). confirmation requires nesting and cannot directly validate trigger order. Matt Massicotte proposes an approach that better fits Swift’s concurrency model: using AsyncStream to collect events, wrapped in a lightweight EventStream type — yielding event identifiers when callbacks fire, then calling collect at the end to retrieve the full event sequence for comparison against an expected array. As for why not just use a plain array, Matt provides a compelling answer: when @Sendable constraints or inconsistent actor isolation are involved, writing directly to an array creates concurrency safety issues, whereas the AsyncStream-based approach naturally conforms to the concurrency model.
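The core of the idea can be sketched in a few lines. EventStream is Matt’s type name, but this minimal version is my reconstruction of the pattern, not his exact code:

```swift
// Collects events from arbitrary callbacks via an AsyncStream,
// then hands back the full sequence in arrival order.
struct EventStream<Event: Sendable> {
    private let stream: AsyncStream<Event>
    private let continuation: AsyncStream<Event>.Continuation

    init() {
        let (stream, continuation) = AsyncStream.makeStream(of: Event.self)
        self.stream = stream
        self.continuation = continuation
    }

    // Safe to call from any callback, regardless of its isolation.
    func record(_ event: Event) {
        continuation.yield(event)
    }

    // Finish the stream and return every recorded event, in order.
    func collect() async -> [Event] {
        continuation.finish()
        var events: [Event] = []
        for await event in stream {
            events.append(event)
        }
        return events
    }
}
```

In a test you would call `record` with an identifier inside each callback, then compare `await events.collect()` against the expected array with a single `#expect`.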
If You’re Not Versioning Your SwiftData Schema, You’re Gambling
SwiftData’s declarative syntax and automatic migration capabilities make it easy to fall into the trap of thinking “the framework will handle everything.” The reality is that once your model structure changes — adding fields, renaming, adjusting relationships — without an explicit schema version and migration plan, you’re left relying on implicit inference. When that inference fails, the result is rarely a graceful migration; more often it’s crashes, data loss, or an app that won’t launch. Mohammad Azam offers direct, pragmatic advice: explicitly declare Schema versions; prepare migration paths for future structural changes; and treat migration design as part of model design, not an afterthought.
This advice applies equally to Core Data. Even when a model is fully compatible with lightweight migration, creating a corresponding model version file for each release (whenever structural changes occur) not only helps track the model’s evolution but also enables clear, controlled rollback when issues arise. Using explicit versioning to govern model evolution is fundamentally about establishing safety boundaries for long-term maintenance.
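The recommended pattern looks roughly like the following sketch (Apple platforms only; the `Note` model, field names, and version numbers are invented for illustration, not taken from the article):

```swift
import Foundation
import SwiftData

// V1: the shipped schema. Each version nests its own model definitions.
enum NotesSchemaV1: VersionedSchema {
    static var versionIdentifier = Schema.Version(1, 0, 0)
    static var models: [any PersistentModel.Type] { [Note.self] }

    @Model
    final class Note {
        var title: String
        init(title: String) { self.title = title }
    }
}

// V2: adds a field. A default value keeps the change lightweight-migratable.
enum NotesSchemaV2: VersionedSchema {
    static var versionIdentifier = Schema.Version(2, 0, 0)
    static var models: [any PersistentModel.Type] { [Note.self] }

    @Model
    final class Note {
        var title: String
        var createdAt: Date = Date.now
        init(title: String) { self.title = title }
    }
}

// The migration plan spells out exactly how stores move from V1 to V2.
enum NotesMigrationPlan: SchemaMigrationPlan {
    static var schemas: [any VersionedSchema.Type] {
        [NotesSchemaV1.self, NotesSchemaV2.self]
    }
    static var stages: [MigrationStage] {
        [.lightweight(fromVersion: NotesSchemaV1.self, toVersion: NotesSchemaV2.self)]
    }
}

// The app always references the latest version and hands over the plan.
typealias Note = NotesSchemaV2.Note

let container = try ModelContainer(
    for: Note.self,
    migrationPlan: NotesMigrationPlan.self
)
```

For a rename or relationship change, a `.custom` stage with `willMigrate`/`didMigrate` closures replaces the lightweight stage; the point is that the path is declared, not inferred.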
How to build a simple CLI tool using Swift
There’s an interesting phenomenon in the age of AI Coding: CLI tools are experiencing a renaissance — more and more developers are building CLI tools to power their MCP and Agent workflows. Natascha Fadeeva walks through how to build structured command-line tools using Swift Package Manager and Apple’s official ArgumentParser library: defining root commands and subcommands, handling async network requests, and compiling to a standalone distributable binary. For iOS developers already fluent in Swift, this path is more natural than maintaining a bash or Python script, and easier to evolve alongside the project.
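As a rough sketch of the shape such a tool takes — command names here are invented, a `swift-argument-parser` package dependency is assumed, and on Linux you would additionally `import FoundationNetworking`:

```swift
import ArgumentParser
import Foundation

// Hypothetical root command with one subcommand.
@main
struct DevTool: AsyncParsableCommand {
    static let configuration = CommandConfiguration(
        abstract: "A sample CLI built with ArgumentParser.",
        subcommands: [Fetch.self]
    )
}

struct Fetch: AsyncParsableCommand {
    @Argument(help: "The URL to download.")
    var url: String

    @Option(name: .shortAndLong, help: "Print byte count only.")
    var quiet: Bool = false

    func run() async throws {
        guard let target = URL(string: url) else {
            throw ValidationError("Invalid URL: \(url)")
        }
        // Async networking from a subcommand works out of the box.
        let (data, _) = try await URLSession.shared.data(from: target)
        print(quiet ? "\(data.count)" : "Fetched \(data.count) bytes from \(url)")
    }
}
```

`swift build -c release` then yields a single binary in `.build/release` that can be copied anywhere on the same platform.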
Navigation Notes – Agentic coding
As an experienced developer, Joseph Heck observes that as AI becomes capable of executing tasks, generating code, and driving changes autonomously, the developer’s role shifts from “line-by-line implementer” to “path navigator.” The truly scarce skill is no longer coding speed, but navigation — how developers maintain their sense of direction in complex codebases and multi-agent environments. Joseph offers several practical suggestions: always include “ask me about anything ambiguous” in your prompts; have the agent draft a plan and get your approval before implementation; provide deterministic feedback loops (unit tests, compiler errors) that allow the agent to self-correct; and distill frequently reused instructions into Skill files.
Heck avoids amplifying the “AI will disrupt developers” narrative, instead emphasizing a more grounded reality: agentic coding amplifies existing engineering capabilities. If you’re already good at modular design and abstraction, AI will accelerate you. If your sense of boundaries is fuzzy, AI will just create chaos faster.
Setting up a delivery pipeline for your agentic iOS projects
When code generation, modification, and refactoring begin to be driven by agents, is the traditional CI/CD pipeline still sufficient? Donny Wals opens with a real experience: his app crashed mid-workout at the gym, he handed the crash report to an agent for analysis, and by the time he finished training, a PR was waiting; shortly after he merged it, a TestFlight build landed. Around this experience, he systematically outlines how to build a reliable delivery pipeline for agentic iOS projects — one that keeps automated changes controllable, verifiable, and releasable.
The article’s focus isn’t on any specific tool, but on pipeline design itself. Donny emphasizes that code generated by agents is fundamentally “a change that hasn’t been reviewed line by line,” which makes clear quality gates all the more important: automated testing, continuous integration, and the release pipeline must bear ultimate responsibility for delivery. Agents can significantly accelerate implementation, but engineering discipline cannot be relaxed in kind — when velocity increases, control mechanisms become even more critical.
Tracking Token Usage in Foundation Models
Apple’s Foundation Models run on-device with a context window of just 4,096 tokens — once exceeded, the conversation cannot continue. iOS 26.4 introduces token usage tracking APIs to help developers monitor context consumption in real time. Artem Novichkov covers four key metrics: total model context capacity (contextSize), token consumption for Instructions, consumption for individual Prompts, and cumulative usage for the full conversation Transcript. The article also highlights an easily overlooked detail: when a Tool is introduced, its name, description, and argument schema are serialized and counted toward the token budget — the same Instructions jump from 16 to 79 tokens once a Tool is attached. For on-device models, token observability will become essential infrastructure for optimizing the user experience.
Tools
App Store Connect CLI
App Store Connect CLI is an unofficial App Store Connect command-line tool developed by Rudrank Riyam, covering the full release pipeline: TestFlight management, build uploads, code signing, screenshot automation, localization sync, app review submission, notarization, and financial report downloads. The tool was designed from the ground up with agent scenarios in mind and includes dedicated documentation for agent-oriented workflows. If your release pipeline centers on TestFlight, metadata, submission, signing, and CI automation, ASC is worth considering as a lightweight alternative to fastlane.
GRDB 7.10.0: Android, Linux, and Windows Support
GRDB 7.10.0 is a milestone release: it formally introduces support for Android, Linux, and Windows, and adds the ability to use SQLCipher (encrypted databases) via Swift Package Manager — two long-awaited features from the community. This marks a meaningful evolution for Swift’s most mature SQLite wrapper, from an Apple-platform tool into a truly cross-platform data layer solution.
Gwendal Roué notes in the release announcement that because Xcode does not yet support package traits, SwiftPM will still download unused dependencies; until this is resolved, SQLCipher support will continue to require a fork.
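Adopting the cross-platform release is a one-line SwiftPM dependency; a minimal `Package.swift` sketch (target names are illustrative):

```swift
// swift-tools-version: 6.0
import PackageDescription

let package = Package(
    name: "DataLayer",
    dependencies: [
        // 7.10.0 and later also resolve on Android, Linux, and Windows.
        .package(url: "https://github.com/groue/GRDB.swift.git", from: "7.10.0")
    ],
    targets: [
        .target(
            name: "DataLayer",
            dependencies: [.product(name: "GRDB", package: "GRDB.swift")]
        )
    ]
)
```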
Swift System Metrics
Swift System Metrics provides Swift applications — particularly server-side projects — with unified system-level metrics collection: CPU utilization, memory usage, file descriptor counts, and more, exposed through a standardized Metrics interface that integrates with existing monitoring systems such as Prometheus. It is not a standalone monitoring system, but rather an infrastructure component driven by the Swift Server Workgroup, designed to align with the Swift Metrics ecosystem and bring system resource metrics into the same observability stack as application-level metrics. The 1.0 release signals API stability and production readiness. For teams building Swift backend services or investing in Swift observability, this is a foundational piece of the puzzle.
Thanks for reading Fatbobman’s Swift Weekly! This post is public so feel free to share it.