Fatbobman's Swift Weekly #128
Is My App Stuck in Review?
Last Thursday, a user in my Discord community complained that their app had been submitted to App Store Connect for four or five days but still hadn’t entered the review process. While I was enthusiastically analyzing the possible reasons with everyone, my heart suddenly skipped a beat: it seemed that the app I submitted on Monday hadn’t received any review updates either.
Someone suggested I apply for an “Expedited Review”. However, when I opened the page, the system reported that I had “no eligible apps”. Upon closer inspection, I realized I was simply rusty after such a long gap between updates: although my app had completed all the prerequisite steps, I had never actually clicked the “Submit for Review” button.
Just a few hours after I finally clicked the button, the app was successfully approved and published.
While my situation was a mere false alarm, discussions in the community about Apple’s app review process slowing down have indeed been increasing recently. Many speculate that this might be related to the recent rise of Vibe Coding. Although there is no official confirmation, Vibe Coding has undeniably lowered the barrier to entry for development. In doing so, it has simultaneously amplified the volume of app submissions and the frequency of iterations in a short period, thereby passing the pressure down to the review team.
In fact, Apple has recently been holding up the review process for apps like Replit, which allow everyday consumers to engage in Vibe Coding. Even when allowing them to remain on the store, Apple has demanded compromises on core features. In Michael Tsai’s blog post covering this news, I came across a very sharp comment:
I thought the implication was that the vibe coding apps were being used to make the vibecoded apps that get submitted.
AI is not only reshaping the way we develop software but also posing new challenges to the app review and distribution systems. One might ask: if we fight magic with magic and let AI fully take over the review process, wouldn’t it be more efficient?
Apple’s review mechanism has never been entirely transparent. Sometimes, whether an app passes smoothly or not even depends on whether you “happen to” encounter a sympathetic reviewer. But looking at it from another angle, at least “humans” remain the most crucial part of this defense line. Human judgment can be flawed and biased, but it still retains a certain degree of flexibility when dealing with rigid rules.
I truly hope that the software ecosystem of the future does not devolve into a closed loop of “AI Development -> AI Review”.
Previous Issue|Newsletter Archive
📢 Sponsor Fatbobman’s Swift Weekly
Promote your product to Swift & iOS developers across:
- Blog: 50,000+ monthly visitors
- Newsletter: 4,000+ subscribers, 53% open rate
Perfect for developer tools, courses, and services.
Enjoyed this issue? Buy me a coffee ☕️
Original
CDE: An Attempt to Make Core Data Feel More Like Modern Swift
In last week’s article, I discussed the current reality of Core Data in modern projects: it hasn’t disappeared and still holds unique value, but the sense of misalignment between it and modern Swift development is becoming increasingly apparent. In this article, I continue along that line of thought and introduce an experimental project of mine: Core Data Evolution (CDE).
It is not a new framework intended to replace Core Data, nor is it an attempt to pull developers back to older technologies. More accurately, it is my own response to this misalignment: if I still recognize the value of Core Data’s object graph model, migration system, and mature runtime, can it continue to exist in modern Swift projects in a more natural way?
Recent Recommendations
Expanding Animations in SwiftUI Lists
Developers often encounter a frustrating animation issue: when dynamically changing the height of a row inside a List, the content does not expand smoothly, but instead jumps abruptly. In this article, Pavel Zak demonstrates through several experiments why common approaches such as conditional rendering with if, withAnimation, or even .transition fail to produce the desired effect inside List. While built-in solutions like DisclosureGroup can achieve smoother results, Pavel presents a more flexible approach: using Animatable combined with view size measurement to ensure that List receives continuously changing height values during the animation, resulting in a truly smooth expansion.
A key characteristic of List (which is still backed by UIKit/AppKit) is that it requires a definitive row height during layout. Therefore, instead of letting List deal with structural changes, developers should, like DisclosureGroup, transform “discrete changes” into “continuous changes” by providing interpolatable height values. This is also why developers often resort to the Animatable protocol when dealing with animation anomalies. For a deeper understanding of this protocol and its use cases, you can refer to my previous article.
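The core of the technique can be sketched as follows. This is a minimal illustration of the Animatable-based approach rather than Pavel Zak's exact implementation; the modifier name, the fixed 44pt/200pt heights, and the clipping strategy are my own assumptions.

```swift
import SwiftUI

// Minimal sketch: a modifier whose height participates in animation.
// Because `animatableData` is interpolated frame by frame, the List row
// reports a continuously changing height instead of one discrete jump.
struct AnimatableHeight: ViewModifier, Animatable {
    var height: CGFloat

    var animatableData: CGFloat {
        get { height }
        set { height = newValue }
    }

    func body(content: Content) -> some View {
        content
            .frame(height: height, alignment: .top)
            .clipped()
    }
}

struct ExpandingRow: View {
    @State private var isExpanded = false

    var body: some View {
        Text("Row details…")
            .modifier(AnimatableHeight(height: isExpanded ? 200 : 44))
            .onTapGesture {
                withAnimation(.easeInOut(duration: 0.3)) {
                    isExpanded.toggle()
                }
            }
    }
}
```

The key point is that SwiftUI re-evaluates `body` for every interpolated value of `animatableData`, so List always sees a concrete, valid row height mid-animation.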
SwiftUI iPad Adaptive Layout: Five Layers for Apps That Don’t Break in Split View
While Apple’s push toward multi-window capabilities in iPadOS is well-intentioned, it significantly increases the complexity of layout adaptation. Apps may appear in various forms, such as iPhone-like layouts, traditional full-screen iPad views, or Stage Manager windows. Wesley Matlock points out that relying solely on horizontalSizeClass is often insufficient in real-world scenarios. Developers need to combine container size with size classes to build a more fine-grained LayoutEnvironment, make layout branching decisions at the root view, and leverage mechanisms like ViewThatFits to let the system choose the most appropriate UI based on actual constraints, rather than assumptions about the device.
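The branching idea can be sketched roughly like this. The `LayoutMode` enum, the 600pt threshold, and the placeholder views are illustrative assumptions of mine, not code from Wesley's article.

```swift
import SwiftUI

// Hypothetical, testable decision helper: combine the size class with the
// actual container width, since a regular size class alone can't tell a
// full-screen iPad apart from a narrow Stage Manager window.
enum LayoutMode { case wide, compact }

func layoutMode(isRegularWidth: Bool, containerWidth: CGFloat) -> LayoutMode {
    (isRegularWidth && containerWidth > 600) ? .wide : .compact
}

struct RootLayout: View {
    @Environment(\.horizontalSizeClass) private var sizeClass

    var body: some View {
        GeometryReader { proxy in
            switch layoutMode(isRegularWidth: sizeClass == .regular,
                              containerWidth: proxy.size.width) {
            case .wide:
                HStack { Sidebar(); Detail() }
            case .compact:
                // Let the system fall back through candidates by constraint.
                ViewThatFits(in: .horizontal) {
                    HStack { Sidebar(); Detail() }
                    Detail()
                }
            }
        }
    }
}

struct Sidebar: View { var body: some View { Text("Sidebar") } }
struct Detail: View { var body: some View { Text("Detail") } }
```

Extracting the decision into a pure function keeps the layout branching testable and concentrates it at the root view, as the article recommends.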
Pitfalls and workarounds when dealing with RGB HDR Gain Map using ImageIO
The introduction of RGB HDR Gain Map based on the ISO 21496-1 standard in iOS 18 enables richer HDR image processing, but also introduces new pitfalls. Although the relevant APIs can return auxiliary data dictionaries, in the RGB Gain Map scenario the actual bitmap data (kCGImageAuxiliaryDataInfoData) is missing, preventing further processing. In other words, ImageIO is unable to fully read the content it generates in this case. Weichao Deng proposes a hybrid approach: use Core Image to read the Gain Map as a CIImage, manually render it into bitmap data, reconstruct the missing fields, and then write it back via ImageIO. For developers working on camera or image processing apps that involve HDR Gain Maps, this article can save a significant amount of debugging time.
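The first half of that hybrid approach — reading the gain map through Core Image instead of ImageIO — might look roughly like this. This is a sketch under my own assumptions (the function name, pixel format, and buffer layout are illustrative); reconstructing the auxiliary-data dictionary and writing it back, which the article covers in detail, is omitted.

```swift
import CoreImage
import Foundation

// Sketch: since ImageIO may return an auxiliary-data dictionary without
// the actual bitmap for an RGB gain map, load the gain map via Core Image
// and render it to raw bytes manually.
func renderGainMapBitmap(from url: URL) -> Data? {
    // Ask Core Image for the auxiliary gain-map image, not the primary one.
    guard let gainMap = CIImage(contentsOf: url,
                                options: [.auxiliaryHDRGainMap: true]) else {
        return nil
    }
    let context = CIContext()
    let extent = gainMap.extent
    let bytesPerRow = Int(extent.width) * 4
    var bitmap = Data(count: bytesPerRow * Int(extent.height))
    bitmap.withUnsafeMutableBytes { buffer in
        context.render(gainMap,
                       toBitmap: buffer.baseAddress!,
                       rowBytes: bytesPerRow,
                       bounds: extent,
                       format: .RGBA8,
                       colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!)
    }
    return bitmap
}
```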
A Vision for Networking in Swift
The Swift Ecosystem Steering Group recently published a vision document on networking, discussing the current fragmentation in Swift’s networking ecosystem and its potential future direction.
The document highlights a clear divide: URLSession, SwiftNIO, and Network.framework coexist with overlapping functionality but incompatible abstractions. Developers often need to commit early to a specific stack, making later changes costly. Additionally, most existing networking APIs were designed before Swift Concurrency and rely on completion handlers, delegates, or reactive patterns, which feel increasingly out of place in modern Swift.
The proposed direction is a unified, layered networking architecture: shared I/O primitives and buffer types at the bottom, reusable protocol implementations (TLS, HTTP/1.1/2/3, QUIC, WebSocket) in the middle, and async/await-based client and server APIs at the top. The swift-http-types package (defining HTTPRequest / HTTPResponse) can be seen as an early step in this direction. The document also emphasizes that SwiftNIO and Network.framework will not be replaced, but will gradually converge toward shared underlying primitives.
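To give a taste of that direction, swift-http-types models requests and responses as plain value types. This sketch assumes the package is added as a dependency; the field values are illustrative.

```swift
import HTTPTypes

// HTTPRequest / HTTPResponse are simple value types, independent of any
// particular networking stack, so the same model can flow through
// URLSession, SwiftNIO, or a future unified client.
var request = HTTPRequest(method: .get,
                          scheme: "https",
                          authority: "www.example.com",
                          path: "/feed.json")
request.headerFields[.accept] = "application/json"

let response = HTTPResponse(status: .ok)
```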
The vision is currently open for community feedback. You can participate here.
Preparing Your iOS Codebase for AI Agents
As AI agents (such as Codex and Claude Code) become increasingly involved in real-world development workflows, the focus is shifting from “how to use AI to write code” to “how to make codebases suitable for AI collaboration”. Hesham Salman explores this transition from an engineering perspective.
He argues that AI relies more on explicit contracts than on prompts. By structuring project conventions and behavioral rules through layered AGENTS.md documentation, using a Makefile to standardize build and test workflows, and encoding multi-step processes into reusable “skills,” implicit engineering knowledge can be transformed into structured, machine-readable systems.
One particularly insightful detail: the author requires agents to update documentation whenever they encounter undocumented conventions, while enforcing a strict rule — every change must make the document shorter or more useful. This self-maintaining mechanism prevents both documentation decay and uncontrolled growth, striking a practical balance.
iOS Conf SG 2026 Videos
iOS Conf SG 2026 was held from January 21 to 23 in Singapore, featuring dozens of developers and content creators from around the world sharing their insights and experiences in the Apple ecosystem. Last week, the full set of talks was released. I also had the opportunity to participate as a speaker, and you are welcome to explore whichever sessions interest you.
Tools
TaskGate: Managing Actor Reentrancy
While actors largely eliminate data races, their reentrant nature means that logic which appears sequential can lose its execution order after an await, leading to duplicate work or inconsistent state.
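The hazard is easy to reproduce without any library. This illustrative actor (the names and artificial delay are mine, not TaskGate's API) performs its "expensive load" twice when two callers interleave at the suspension point:

```swift
// Minimal illustration of actor reentrancy: the actor is data-race free,
// but the suspension point lets a second caller interleave, so the
// "check then load" sequence runs twice.
actor ImageCache {
    private var cached: String?
    private(set) var loadCount = 0

    func image() async -> String {
        if let cached { return cached }
        // Suspension point below: another caller can enter the actor here,
        // also see `cached == nil`, and duplicate the expensive load.
        loadCount += 1
        try? await Task.sleep(nanoseconds: 50_000_000)
        let result = "decoded-image"
        cached = result
        return result
    }
}
```

Conceptually, a gate wraps such a section so that the second caller waits at the gate instead of slipping past the nil check.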
TaskGate, created by Matt Massicotte, addresses this scenario by introducing AsyncGate and AsyncRecursiveGate, which define critical sections for asynchronous code within actors. These ensure that only one task can enter a given section at a time. Unlike traditional locks, TaskGate allows safe asynchronous operations while holding the gate.
Matt explicitly notes that this is not a replacement for well-designed actor models, but rather a supplementary tool when other approaches are insufficient. The gates are intentionally non-Sendable to reduce misuse across actors. If you’re dealing with reentrancy-related state issues or want to better understand this subtle aspect of Swift concurrency, both the library and the related Reddit discussion are worth exploring.
pico-bare-swift
When Apple created Swift, the goal was clearly broader than just app development—it was meant to evolve into a general-purpose language across domains and abstraction levels. However, for a long time, Swift has struggled to gain traction in areas traditionally dominated by C/C++ or Rust. Through this example project, kishikawa katsumi demonstrates another possibility: with Embedded Swift, the language is beginning to enter the domain of embedded systems.
What makes this project particularly appealing is that it turns something traditionally associated with low-level, C-centric development into a structured learning path. It goes far beyond “blinking an LED with Swift,” covering startup code, vector tables, memory initialization, register access, as well as drivers for UART, PWM, I2C, and SSD1306 OLED displays. In a sense, the value of such projects lies not in their practicality, but in how they redefine the boundaries of what Swift can do.
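The register-access layer such projects build can be approximated on a host machine. This sketch simulates a memory-mapped register with allocated memory; the `Register` type and the GPIO 25 example are illustrative, not the project's actual code.

```swift
// Illustrative sketch of memory-mapped register access. On real hardware
// the pointer would be formed from a fixed address taken from the chip's
// datasheet; here it is backed by allocated host memory so it's runnable.
struct Register {
    let pointer: UnsafeMutablePointer<UInt32>

    func write(_ value: UInt32) { pointer.pointee = value }
    func read() -> UInt32 { pointer.pointee }
}

// Simulate one 32-bit register.
let backing = UnsafeMutablePointer<UInt32>.allocate(capacity: 1)
backing.initialize(to: 0)
let gpioOutSet = Register(pointer: backing)

// "Turn on" GPIO 25 (the Pico's onboard LED pin) by setting its bit —
// the same shape of write a real GPIO output-set register expects.
gpioOutSet.write(1 << 25)
```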
Thanks for reading Fatbobman’s Swift Weekly! This post is public so feel free to share it.