Swift Processing Library Development

by Juan Lee, Google Summer of Code 2020

Processing Foundation · Oct 15, 2020


Mentored by Jonathan Kaufman

2020 marks the Processing Foundation’s ninth year participating in Google Summer of Code, where we work with students on open-source projects that range from software development to community outreach. This week, we’ll be posting articles written by some of the GSoC students, explaining their projects in detail. The series will conclude with a wrap-up post of all the work done by this year’s cohort.

Gif of a rotating group of basic shapes ranging from a cube to a sphere, run on the 3D graphic module in Swift Processing
Early stage of development of 3D graphics.

“Hi, I am a 23-year-old Korean student currently pursuing a Bachelor’s Degree in Computer Science at the University of British Columbia. This is my first time participating in Google Summer of Code and also my first time contributing to open-source code. I am excited to begin working on expanding the Swift Processing Library and working with Jonathan over the summer.”

Processing gives users an easier way to build and draw new applications, simplifying the fundamentals of computer programming. Swift Processing aims to bring these features to the iOS environment, simplifying iOS development and offering a creative outlet. The Swift Processing library is in an early stage of development, led by Jonathan Kaufman, and creates a beginner-friendly abstraction over native iOS APIs.

With the advancement of technology, iOS applications have permeated everyday routines and tasks. Being able to create applications for smartphones is increasingly valuable, since they can reach a larger audience. In addition, features such as the camera have vastly more use cases on mobile than on the desktop. Swift Processing explores these new and exciting opportunities, reaching areas that Processing could not reach before. Without much experience in iOS development, I set out to add new functionality to the Swift Processing Library.

Camera Functionality

Camera feature added to Swift Processing Library

A user unfamiliar with Swift would have a difficult time getting the camera sensor to work: it would require digging through heaps of documentation just to create a simple camera interface. To build a simple camera application, one would need to understand how to connect the camera sensor’s input and output streams, know the differences between video and photo capture, and manage many settings, such as photo quality and aspect ratio. Swift Processing aims to let the user jump into using the camera with simple, intuitive code.
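For comparison, even a minimal capture setup with the native AVFoundation API involves several steps. A rough sketch (error handling, permission requests, and session configuration details omitted):

```swift
import AVFoundation

// A rough sketch of the native AVFoundation plumbing that a simplified
// camera API would hide from the beginner.
let session = AVCaptureSession()
session.sessionPreset = .photo

// Look up the front-facing wide-angle camera and wrap it in an input.
guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                           for: .video,
                                           position: .front),
      let input = try? AVCaptureDeviceInput(device: device) else {
    fatalError("Front camera unavailable")
}
session.addInput(input)

// Attach a photo output and start the session before capturing anything.
let output = AVCapturePhotoOutput()
session.addOutput(output)
session.startRunning()
```

Every one of these steps is a concept a first-time programmer would otherwise need to learn before seeing a single frame on screen.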

The newly implemented camera functionality wraps the iOS device’s camera in a set of simplified functions. The user can call a function to capture an image from the front or back camera and change other properties, such as quality and aspect ratio. This interface is much easier to use, since one does not need to know the complex details of the native Apple library (AVFoundation).

self.camera = createCamera(FRONT)

A simple line of code like the above is much easier to use and understand for someone who is coding for their first time. In addition, having simple constants, such as FRONT, allows the user to understand what these constants are for, instead of the convoluted native constants that Apple provides. This would allow the user to create an app with simple and clean code.
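A hypothetical sketch built in this style might look like the following. Beyond `createCamera` and `FRONT`, which appear above, the names here (`Sketch`, `camera.get()`, `image(...)`) are illustrative assumptions borrowed from the Processing/p5.js vocabulary, not the library’s confirmed API:

```swift
// A hedged sketch of a camera-based Swift Processing sketch.
// `createCamera(FRONT)` follows the article's example; the lifecycle
// methods and drawing calls are assumed, Processing-style names.
class CameraSketch: Sketch {
    var camera: Camera?

    override func setup() {
        // One call replaces the AVFoundation session plumbing.
        self.camera = createCamera(FRONT)
    }

    override func draw() {
        // Draw the latest camera frame to fill the screen each frame.
        if let frame = camera?.get() {
            image(frame, 0, 0, width, height)
        }
    }
}
```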

Animated gif of an example of a basic camera application showing a garden through the device’s camera
Basic camera functionality implementation

3D Graphic Sketches


The second addition to the Swift Processing Library was the 3D graphic sketching feature, which allows users to create and manipulate 3D objects in the iOS environment. As with the camera functionality, creating 3D graphics using the native iOS library is difficult to understand. Working through the documentation for SceneKit, the iOS library I used to achieve 3D graphics, is confusing, and there are many problems the user needs to worry about beyond simply drawing 3D objects on the screen.

Gif of multiple shapes ranging from a cube to a model of a duck rotating on all axes and translating to the right slowly
3D graphics implementation

I created an interface where users can express themselves without in-depth knowledge of 3D graphics. Building on top of SceneKit, the interface lets the user easily create and manipulate 3D shapes while changing how the scene is viewed. In addition, the code optimizes the creation and transformation of 3D shapes so that CPU and memory usage stay at manageable levels. The end product provides a 3D graphics sketch environment similar to the p5.js 3D sketch feature, with a suite of functions for creating 3D sketches in the iOS environment.
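In the spirit of p5.js’s 3D mode, a sketch in this environment might look roughly like this. The shape and transform names (`box`, `sphere`, `rotateY`, `translate`, `push`/`pop`) are illustrative assumptions drawn from the Processing/p5.js vocabulary, not the library’s confirmed API:

```swift
// A hedged sketch of a 3D Swift Processing sketch, using assumed
// Processing-style names for shapes and transforms.
class ShapesSketch: Sketch {
    var angle: Double = 0

    override func draw() {
        background(220)

        // Spin a cube on the left...
        push()
        translate(-60, 0, 0)
        rotateY(angle)
        box(50)
        pop()

        // ...and draw a static sphere on the right.
        push()
        translate(60, 0, 0)
        sphere(40)
        pop()

        angle += 0.02
    }
}
```

The appeal of this style is that the user thinks in shapes and transforms, while the library maps those calls onto SceneKit nodes and geometry behind the scenes.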

Moving Forward

Throughout Google Summer of Code, I worked on the camera functionality and added 3D graphic sketches to the Swift Processing Library. However, a few issues and features were left unresolved and need future work. 3D textures and lighting are not working correctly: 3D primitives sometimes fail to appear on screen when certain lighting and textures are used. There is also an optimization issue where creating two identical objects with the same rotation and position causes a memory leak. Further work on these issues would widen the variety of tools offered by the Swift Processing 3D sketch environment. As iOS applications become a larger part of our everyday lives, Swift Processing explores these new areas, helping users take advantage of the mobile medium.
