SwiftUI streamlines many of the functionalities, capabilities, and controls common in apps. It is apparent that a development focus was on letting devs incorporate common UI elements simply, with much of the overhead powerfully automated. The framework even takes usability and design perspectives into consideration.
This is extremely welcome to the iOS community, which has long worked with Interface Builder, Storyboards, and ViewControllers. Anyone who has dealt with the complexity that even the simplest features demanded can understand SwiftUI’s benefits.
As devs build out more and more complex apps with SwiftUI, there are times when features (such as Navigation) feel almost too simple. Scrolling through Stack Overflow Q&As, I see an increasing number of tagged issues asking for more dev control. Likewise, I see a number of solutions that involve wrapping older UIKit components to cover the gaps. …
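The wrapping pattern mentioned above usually goes through `UIViewControllerRepresentable`. As a rough sketch (the wrapped controller here, `UIPageViewController`, is just an illustrative choice, not a specific Stack Overflow solution), a UIKit controller can be bridged into a SwiftUI view like this:

```swift
import SwiftUI
import UIKit

// Minimal sketch of wrapping a UIKit controller for use in SwiftUI.
// SwiftUI calls makeUIViewController once to create the controller,
// then updateUIViewController whenever relevant SwiftUI state changes.
struct PageViewWrapper: UIViewControllerRepresentable {
    var controllers: [UIViewController]

    func makeUIViewController(context: Context) -> UIPageViewController {
        let pageVC = UIPageViewController(
            transitionStyle: .scroll,
            navigationOrientation: .horizontal)
        if let first = controllers.first {
            pageVC.setViewControllers([first], direction: .forward, animated: false)
        }
        return pageVC
    }

    func updateUIViewController(_ pageVC: UIPageViewController, context: Context) {
        // Sync SwiftUI state changes back into the UIKit controller here.
    }
}
```

The same idea applies to plain views via `UIViewRepresentable`, which is how many of these gap-covering answers expose UIKit behavior that SwiftUI does not yet surface.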
Apple’s SwiftUI announcement during last year’s WWDC (2019) was a welcome surprise for Apple devs. The framework embraces a more declarative and reactive approach to building user interfaces—a complete paradigm shift from Interface Builder and Storyboards. In a way, SwiftUI brings the joys of Swift to UI-building, signaling a coming departure from Objective-C-inspired systems.
SwiftUI 1.0 (unofficial versioning), though, proved to be somewhat of a prototype framework, showing signs of its infancy. There were:
WWDC20 dropped a ton of great updates for developers and users. Noticeable throughout the Keynote and Platforms State of the Union addresses were mentions of Core ML models powering some incredible first-party features, such as the sleep tracking capabilities. ML continues to play a key role in Apple’s vision for the future.
Core ML has been the framework that lets developers take their ML ideas to the next level. First, it allows developers to either create models (primarily through Create ML, if working within Apple’s ecosystem) or convert models from third-party frameworks into the Core ML format.
Second, it provides an API that lets developers easily access their models in code to create some amazing experiences. And, lastly, it gives (some) models access to the Neural Engine inside the A-series chips, which is engineered to run ML models efficiently. …
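To make that second point concrete, here is a minimal sketch of the model-access API, using Core ML through Vision for an image classifier. The model name `FlowerClassifier` is a hypothetical placeholder; Xcode generates such a class automatically from a compiled `.mlmodel` file added to the project.

```swift
import CoreML
import Vision

// Sketch: run a bundled Core ML image classifier on a CGImage.
// "FlowerClassifier" is an assumed, Xcode-generated model class.
func classify(image: CGImage) {
    // Wrap the Core ML model for use with Vision requests.
    guard let model = try? VNCoreMLModel(for: FlowerClassifier().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // Vision returns classification observations sorted by confidence.
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: image)
    try? handler.perform([request])
}
```

Whether a given model actually runs on the Neural Engine is decided by Core ML at runtime based on the model’s layers and the device; the API above stays the same either way.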