Taking back the control you thought SwiftUI sacrificed for simplicity’s sake

Photo by the author.

SwiftUI streamlines many of the functionalities, capabilities, and controls common in apps. It is quite apparent that a focus during development was how devs could incorporate common UI elements simply, with much of the overhead powerfully automated. The framework even takes usability and design perspectives into consideration.

This is extremely welcome to the iOS community, which has long worked with Interface Builder, Storyboards, and ViewControllers. Anyone who has dealt with the complexity that even the simplest features demanded can appreciate SwiftUI’s benefits.

The Catch?

As devs build out more and more complex apps with SwiftUI, there are times when features (such as Navigation) feel almost too simplistic. As I scroll through Stack Overflow Q&As, I see an increasing number of tagged issues asking for more dev control. Likewise, I see a number of solutions that wrap older UIKit components to cover the gaps. …
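Those UIKit-wrapping solutions generally lean on `UIViewControllerRepresentable`. As a minimal sketch (the page controller here is just a stand-in for whatever UIKit component fills the gap, not any specific Stack Overflow answer):

```swift
import SwiftUI
import UIKit

// Sketch: wrapping a UIKit controller so it can live in a SwiftUI hierarchy.
// UIPageViewController stands in for any UIKit component SwiftUI lacks.
struct PageViewWrapper: UIViewControllerRepresentable {
    var pages: [UIViewController]

    func makeUIViewController(context: Context) -> UIPageViewController {
        let controller = UIPageViewController(
            transitionStyle: .scroll,
            navigationOrientation: .horizontal
        )
        controller.setViewControllers(
            [pages[0]], direction: .forward, animated: false
        )
        return controller
    }

    func updateUIViewController(_ controller: UIPageViewController,
                                context: Context) {
        // Push SwiftUI state changes back down into the UIKit controller here.
    }
}
```

Once wrapped, the controller drops into a SwiftUI view tree like any other `View`, which is why this pattern shows up so often as the stopgap for missing control.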

A high-level overview of the latest additions to the now fully-formed UI framework—and what it all means

Photo by Susanna Marsiglia on Unsplash

Apple’s SwiftUI announcement during last year’s WWDC (2019) was a welcome surprise for Apple devs. The framework embraces a more declarative and reactive approach to building user interfaces—a complete paradigm shift from Interface Builder and Storyboards. In a way, SwiftUI brings the joys of Swift to UI-building, signaling a coming departure from Objective-C-inspired systems.

SwiftUI 1.0 (unofficial versioning), though, proved to still be somewhat of a prototype framework, showing signs of its infancy. There were:

  • Bugs with the new Previews feature
  • Lackluster and/or imprecise compilation errors
  • Random compilation mismatches
  • Lacking documentation
  • Missing components that kept it from being robust enough to fully replace…


Continuously improve your models without rogue on-device training—or updating your app entirely

Photo by SpaceX on Unsplash

WWDC20 has dropped a ton of great updates for developers and users. Noticeable throughout the Keynote and Platforms State of the Union addresses were mentions of Core ML models powering some incredible first-party features, such as the new sleep tracking capabilities. ML continues to play a key role in Apple’s vision for the future.

Core ML has been the framework that lets developers take their ideas to the next level. First, it allows developers to either create models (primarily through Create ML, if working within Apple’s ecosystem) or convert models from third-party frameworks into Core ML format.

Second, it provides an API that lets developers easily access their models in code to create some amazing experiences. And, lastly, it gives (some) models access to the Neural Engine inside the A-Series chips, which is optimally engineered to run ML models. …
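That second point—the API—is a small surface in practice. Assuming a hypothetical model file named `SleepClassifier.mlmodel` added to an Xcode project (Xcode generates a typed class from the file; the model and its `label` output are illustrative, not from any Apple sample), a prediction looks roughly like this:

```swift
import CoreML

// Sketch: load a (hypothetical) Xcode-generated Core ML model and predict.
// The SleepClassifier class would be auto-generated from SleepClassifier.mlmodel.
func classify(features: MLMultiArray) throws -> String {
    let config = MLModelConfiguration()
    // .all allows Core ML to schedule eligible layers on the Neural Engine.
    config.computeUnits = .all

    let model = try SleepClassifier(configuration: config)
    let output = try model.prediction(input: features)
    return output.label // the model's predicted class name
}
```

The `computeUnits` setting is how a model opts into CPU, GPU, and Neural Engine execution; Core ML decides at runtime which layers actually run where.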


Danny Bolella

Senior Software Engineer | Scrum Master | Writer | Reader | Husband/Father
