What We Learned at SwiftFest 2019
August 28, 2019

Written by

Dakota Kim, Matthias Ferber, Preeti Lekha, and Shukti Shaikh


At the end of July, several Cantina team members attended SwiftFest 2019, a two-day conference on the Swift language and Apple app development in general, held in Boston. This conference took place three months after the most significant WWDC (Worldwide Developer Conference) in years, where Apple introduced iOS 13 and macOS Catalina, the new iPadOS, Catalyst tools for converting iPad apps to macOS desktop apps, and a massive overhaul of UI development via the SwiftUI and Combine frameworks. All these announcements provided a shot of pure adrenaline for Apple developers, and, like the rest of the SwiftFest attendees, we were eager to learn more about what we can do with all these new toys.

Below, several Cantina developers—Dakota, Matthias, Preeti, and Shukti—share their experiences at the conference and some of the things they learned.

Dakota Kim

This was my second year attending SwiftFest, and I am over the moon reflecting on the amount of love and work that went into this conference. There were great talks from incredible speakers, some of whom travelled from around the world. This year I volunteered my time to work on the conference's iOS app and to help with logistics during the event.

The conference began with some remarks about community from Giorgio Natili, the organizer of both SwiftFest and DroidCon Boston. This emphasis on community and interaction is one of the conference's best qualities. The SwiftFest team works hard to ensure that all feel welcome and supported in attending. The low price of admission, wide variety of talks, code of conduct, gender-neutral pronoun pins, and engaging workshops for all experience levels helped make SwiftFest a place for all to share their experiences with Swift. I'm very grateful for the opportunity to collaborate with the SwiftFest team.

Programming with Purpose

The conference began with a talk from Ish Shabazz, titled “Programming with Purpose.” I attended Ish's talk on State Restoration at the previous SwiftFest, and since then I've been following him on Twitter. Ish shared his experiences and the path that led him to this talk and to his purpose: to help as many people as he can, in small ways, every single day.

Purpose is a powerful thing, and this talk resonated with me deeply, as I share Ish's purpose of helping as many people as I can. Growing up, I viewed coding as a fun way to make things happen. I could share silly scripts and programs with my friends to have a good time. There is nothing wrong with that, but it made solving bigger problems more difficult for me, as I didn't have a purpose to push me through the more difficult obstacles. It wasn't until years later that I realized that I could hone and use these skills to help people and solve cool and impactful problems. The first time I wrote software with the explicit aim of helping others was at the Tufts Neurocognition Lab, where I volunteered my time and helped write software for EEG analysis. That experience solidified the idea that I could help people with code, even if only in an indirect way.

Programming is a tool to help create solutions to problems. I am grateful that my skills can be used to help folks from around the world, and I'm very thankful for those who have helped me during my own journey as well.

“Little things can become big things.”


Scalability, reusability, and maintainability are all factors to consider when building new things, and keeping our code clean starts with knowing the tools we're working with. Extensions are just one of the many tools the Swift language provides to help developers write healthy, clean code. Neem Serra gave an excellent talk full of tips for taking advantage of extensions and showing how they can improve these aspects of your Swift code. The key takeaways from Neem's talk:

  1. Extensions add new functionality to an existing class, structure, enumeration, or protocol type.
  2. Extensions can improve your codebase’s organization by separating functionality by protocol.
  3. Implementing custom functionality for existing data structures via extensions can reduce clutter and improve readability in your core business logic.
  4. Extensions allow developers to create custom initializers for structs. Because a custom initializer defined in an extension doesn’t suppress the compiler-generated memberwise initializer, your models stay flexible and modular.
  5. Extensions can add new nested types to existing classes, structs, and enumerations.
  6. Extensions can add computed instance properties and computed type properties to existing types.
  7. Mutating functions defined in an extension can modify the calling instance of a value type.
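To make a few of these takeaways concrete, here's a minimal sketch using a hypothetical Temperature struct of our own (not code from Neem's talk), showing a custom initializer, a computed property, and a mutating function all added via an extension:

```swift
struct Temperature {
    var celsius: Double
}

// Everything below is added in an extension, so Temperature's free
// memberwise initializer, Temperature(celsius:), is preserved (takeaway 4).
extension Temperature {
    // A custom initializer defined in an extension.
    init(fahrenheit: Double) {
        self.init(celsius: (fahrenheit - 32) / 1.8)
    }

    // A computed instance property added after the fact (takeaway 6).
    var fahrenheit: Double {
        return celsius * 1.8 + 32
    }

    // A mutating function in an extension can modify the instance (takeaway 7).
    mutating func freeze() {
        celsius = 0
    }
}
```

Because the custom initializer lives in the extension rather than in the struct body, both `Temperature(celsius:)` and `Temperature(fahrenheit:)` remain available.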

Matthias Ferber

SwiftUI and Combine are Coming

Among the most dramatic announcements at WWDC, from a developer perspective, were SwiftUI, Apple’s brand new, Swift-based, multiplatform UI design language, and a new reactive programming API, Combine, which adds constructs similar to those in the popular third-party RxSwift framework to the native Swift libraries. Combine powers UI bindings in SwiftUI, but it also provides Apple-blessed support for reactive coding anywhere else we might want to use it. We, along with the other attendees, were particularly excited to see these new tools in use.

Unfortunately, since nobody outside Apple had heard of these technologies until three months ago, and they’re still half-baked in the available betas, there was only so much that speakers could show us. The conference closed with an impressive demonstration by Marc Prud’hommeaux (principal developer at Glimpse I/O and the author of Stanza) of a SwiftUI toy app that packed a remarkable amount of functionality into only 273 lines of code, including the UI declarations. As a finale, Prud’hommeaux showed the same app building and running as a macOS app: an incredibly ugly one, but runnable without changing a single line of UI code.

With SwiftUI and Combine still incomplete, that was about all we could really expect to see of them at this point. Seeing them in operation in a moderately complex app was still impressive, though, and it provided a more concrete sense of what working with SwiftUI will look like. (A half-day introductory workshop on SwiftUI gestures was offered, but none of us wanted to miss that much of the main track.)
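For a taste of what the declarative style looks like, here is a minimal counter sketch (our own toy example, not Prud'hommeaux's app) pairing a SwiftUI view with a Combine-backed model:

```swift
import SwiftUI
import Combine

// @Published synthesizes a Combine publisher for the property; ObservableObject
// lets SwiftUI re-render any observing view whenever the model changes.
final class CounterModel: ObservableObject {
    @Published var count = 0
}

struct CounterView: View {
    @ObservedObject var model: CounterModel

    // The entire UI is a declaration of state, not a sequence of mutations.
    var body: some View {
        VStack {
            Text("Count: \(model.count)")
            Button("Increment") { self.model.count += 1 }
        }
    }
}
```

The same declaration can run on iOS and macOS, which is exactly what made Prud'hommeaux's finale possible.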

Specific Coding Practices

Many of the sessions dealt with one aspect or another of coding Swift programs.

Protocols: Rob Napier gave a talk entitled either “Protocol Oriented Programming” or “Generics (It’s Not Supposed to Hurt),” depending on where you were looking. Most of this talk concerned good and bad approaches to Swift protocols. He offered some guidelines for judging when a protocol is or is not warranted (for instance, write the concrete code first and let it guide you to the protocols; mock roles, not objects, and don’t create a protocol just for mocking purposes). He also covered some advanced protocol topics like “generalized existentials”; this part went fast, but I gained some fundamentals and vocabulary that will help me catch up on these topics later.
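A tiny sketch of the "concrete code first" and "mock roles, not objects" advice, using hypothetical logger types of our own rather than Napier's examples:

```swift
// Start concrete: this type existed before any protocol did.
struct ConsoleLogger {
    func log(_ message: String) {
        print(message)
    }
}

// Only once the code reveals a shared role do we extract a protocol for it.
protocol Logging {
    func log(_ message: String)
}

extension ConsoleLogger: Logging {}

// "Mock roles, not objects": the test double conforms to the role, and the
// protocol earns its existence from that role rather than from mocking alone.
final class MemoryLogger: Logging {
    private(set) var lines: [String] = []
    func log(_ message: String) {
        lines.append(message)
    }
}

func run(startupMessage: String, logger: Logging) {
    logger.log(startupMessage)
}
```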

Reactive view models: Atlassian’s Lou Franco demonstrated and explained a critical line of RxSwift code that “broke [his] brain” and changed the way he thought of view models and reactive programming. The line of code, which came from a Medium post by Martin Moizard, reduces a view model to a completely stateless transformation of input events into output view states. This talk reaffirmed my sense that I have more to learn about taking reactive programming to the next level and adopting fully reactive idioms. I’ll be returning to this material a lot. (The project Franco was demonstrating is open source.)
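The underlying idea can be sketched without any reactive library (this is our own simplification, not Moizard's actual code): the view model collapses to a pure, stateless function from events to view states, which a framework like RxSwift then applies across the event stream:

```swift
// Hypothetical input events and output view states.
enum Event {
    case load
    case loaded([String])
    case failed
}

enum ViewState: Equatable {
    case idle
    case loading
    case showing([String])
    case error
}

// The entire "view model": a stateless transformation. In RxSwift this would
// be applied with events.scan(ViewState.idle, accumulator: reduce).
func reduce(_ state: ViewState, _ event: Event) -> ViewState {
    switch event {
    case .load:
        return .loading
    case .loaded(let items):
        return .showing(items)
    case .failed:
        return .error
    }
}
```

Because the function holds no state of its own, every view state is fully determined by the sequence of events that produced it.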

Mutation testing: I’ve always had a few niggling doubts about unit tests. One of them is that code coverage is a terrible measure of test suite completeness: it verifies that code is getting exercised, but not that the results are being tested for correctness. Another is that you can never be quite sure that the tests themselves are correct, especially once your program has evolved since they were originally written.

Mutation testing is a really clever technique that can help address both of those concerns. A mutation testing package actually makes changes to a copy of your source code, introducing errors of the sorts a careless programmer might make, such as using == instead of !=, or using < instead of >, or removing side effects from a method. Then it runs your test suite against the broken code. If the tests for the modified code still pass, then they might not be doing their job correctly.
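A contrived illustration of the idea (ours, not from the talk): suppose a mutation tool changes `>=` into `>` in the function below.

```swift
func isAdult(age: Int) -> Bool {
    return age >= 18  // a mutation tool might change this to `age > 18`
}

// These assertions pass against both the original and the mutant, because
// they never probe the boundary -- so they fail to catch the careless change:
assert(isAdult(age: 30))
assert(!isAdult(age: 5))

// This boundary assertion "kills" the mutant: under the `>` mutation,
// isAdult(age: 18) returns false, the test suite fails, and that failure is
// exactly the signal mutation testing is looking for.
assert(isAdult(age: 18))
```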

Sean Olszewski of Pivotal presented his Swift mutation testing package, Muter, and offered recommendations about how to use it most effectively. I left his talk eager to try this tool out on a small side project as soon as possible.

App security: An alarming talk by Kamil Borzym demonstrated how easy it can be to reverse-engineer a network API or decompile an app using off-the-shelf tools. He made some recommendations to help guard against these attacks, such as stripping out or obfuscating exposed symbols that would otherwise make the decompiled app easier to understand.

Accessibility: Essential, but Often Ignored

We can’t be reminded too often how important accessibility is in app design, even though it tends to be treated as an afterthought and often goes unaddressed. As an experiment, Leena Mansour tried to last a single week with her iPhone screen turned off, relying entirely on the VoiceOver screen reader, and she didn’t even come close. Her talk vividly described the pain of voice-navigating an interface that wasn’t built for it, and she demonstrated some small things you can do in your app that will greatly improve its usability for the visually impaired. She strongly advised trying the VoiceOver experiment yourself.
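As one example of the kind of small improvement she meant (a generic UIKit illustration of ours, not taken from her talk), an icon-only button says nothing useful to VoiceOver until you label it:

```swift
import UIKit

// An image-only control reads poorly (or not at all) in VoiceOver by default.
let playButton = UIButton(type: .system)
playButton.setImage(UIImage(named: "play-icon"), for: .normal)

// Two lines make it speakable: a concise label and an action hint.
playButton.accessibilityLabel = "Play"
playButton.accessibilityHint = "Plays the current track"
```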

Preeti Lekha

Programming with Purpose

Like Dakota, I was struck by the opening keynote, an inspirational and very motivating non-technical talk. It was a good reminder that we don’t have to be experts to encourage, support, teach, and motivate others to take steps toward achieving their goals. Ish Shabazz told of small and big events that had shaped his life. Memorably, as a child, he was encouraged to learn programming by a female teacher with no technical knowledge, who signed herself up for a computing class in order to be able to teach him.

“Purpose comes in all sizes, big and small. Programming with purpose, and the purpose is to help as many people as one can, in small ways, every day. Why small ways? Because the biggest moments in life tend to happen on the way to your plans.” - Ish Shabazz

Diversity and Inclusion

The second day’s keynote, given by Diana Rodriguez, was a sincere and honest talk about diversity, inclusion, and evolution in the tech ecosystem, and the embarrassing current statistics (delivered to a room of 95% men, a perfect case-in-point). Being a stand-up comedian, she used humor to dispel nervousness and offense while making her point. She offered advice for bridging the gap, while making us laugh time and again. Some of her recommendations:

  • Lead by example: stand for what is right even if it means standing alone.
  • Call out bullies! For example, don’t let men get away with asking questions of female speakers purely to test their knowledge.
  • Interact with different people (inclusion; diversity).
  • Avoid assuming genders.
  • Invest in diversity and enforce a code of conduct.
  • Avoid mansplaining ("well... actually").
  • “An inclusive culture occurs when differences are valued, people are treated fairly and feel accepted and respected, and opportunities are open to all.”

Developing for Social Responsibility

As a female developer, listening to stories of other female achievers from all walks of life has always been very motivating for me. Alicia Carr gave one of those talks. Alicia is an activist, mentor, grandmother, and self-taught developer from Georgia. After learning to code online in 90 days, she built an app, Purple Evolution (PEVO), dedicated to helping victims escape domestic abuse. She described the amount of research and learning the project required, and the challenges she faced and overcame as she laid down the requirements for this unconventional app and brought it to the App Store. Her inspiration came not from her coding fluency but from her tenacity and drive to build something that would help with a problem she’d seen throughout her life. Her ability to view the problem through the special empathetic lens of being a woman, and her knowledge about the needs of the users and of possible predator scenarios, made the app possible. Her story is on YouTube.

Shukti Shaikh

The talks that resonated the most with me were the ones concerning augmented reality (AR) and machine learning (ML). Although both of these are still emerging technologies, there are numerous libraries and solutions available to any developer interested in trying AR or ML and integrating them into their applications.

Starting out with ARKit

I’ve always been intrigued by AR. I distinctly remember when Pokémon Go was first released. I was still living in New York City at the time, where everyone is almost always in a rush. Seeing groups of people (myself included) just walking around aimlessly, staring at their phones, was a fascinating experience. It was thrilling to see a digitally animated monster, which I had only been able to see as a 2D version on my Game Boy as a kid, appear in front of me, on the street, while I was looking at my phone screen. For many people, I imagine, this was their first true mobile AR experience aside from Snapchat lenses. Today, we have various resources and platforms available that any developer can use to get started on their AR projects.

Namrata Bandekar’s talk focused on exactly that, showing how simple it is to create an AR demo with Apple’s ARKit framework. The application she made grew out of her own interest in the game Portal (“The cake is a lie!” Sorry, had to be done). She wanted to allow users to place a virtual doorway within their environment, leading to a virtual room that users can walk into and out of and look around inside. She also explained some AR design best practices that help ensure a good user experience. A great resource for learning more about ARKit is Bandekar’s tutorial, Building a Portal App in ARKit.
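Getting to the starting point of a demo like Bandekar's takes only a few lines: configure world tracking with plane detection and run the session. A minimal sketch, with all view-controller wiring omitted:

```swift
import ARKit

// An ARSCNView renders SceneKit content on top of the live camera feed.
let sceneView = ARSCNView(frame: .zero)

// World tracking with plane detection: detected planes are the surfaces
// where virtual content -- like a portal doorway -- can be anchored.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
sceneView.session.run(configuration)
```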

Working with Machine Learning

For intermediate developers who have done work with AR already and have seen the possibilities, often the natural next step is to combine an AR experience with ML models. Soojin Ro, an iOS developer for Webtoon, created an app that takes a picture of a person, crops the image to include only the person’s face, and overlays it on a dollar bill, stylizing the image to match the bill.

Soojin spoke about how, in the first couple of passes of development, the image styling wasn’t functioning as intended. He used Turi Create, an open source toolset for creating Core ML models, to overcome those obstacles by changing around the parameters of the machine learning model. After the conference, I researched Turi Create, and I found a plethora of blog posts, tutorials, and documentation available all over the web where people discuss how they’re using it to train ML models efficiently and without comprehensive knowledge of machine learning.

While Soojin created an application solely for the purpose of entertainment, machine learning models can be integrated into any product to provide additional value to the user experience and fulfill business needs. Dr. Miriam Friedel, from Skafos, presented more of a business use case for ML through an application called Bootfinder. This application allows users to take a picture of a pair of boots, and the app searches Zappos for boots that look similar and presents a list of them to the user for purchase. How does the app know which boots are similar to the boots in the image? Well, that’s where the ML model comes in.

Skafos has made it much simpler to integrate and update ML models throughout the development cycle. Let’s assume that Bootfinder is already up on the App Store and working as expected, with an ML model consisting of 2,500 different pairs of boots. Suppose that, the next day, Zappos has 2,600 pairs of boots instead, and then 2,650 the day after. Would you want to have to constantly revise the app to retrain the model? I wouldn’t want to be the developer that gets stuck with that task. The Skafos platform provides seamless integration between your product and your ML model: you can train a model and upload it directly to your app based on the 2,600 pairs of boots, without ever having to write or change any code within the app itself. How this works is fully explained in the Skafos documentation.

Last but not least, one of the most engaging talks dealing with ML was given by Todd Burner from Google. He introduced ML Kit, a mobile SDK that brings Google’s machine learning expertise to Android and iOS apps. He demonstrated the following uses and features:

  • Detecting text in images
  • Identifying facial features in images
  • Expanding text recognition capabilities (such as non-Latin alphabets) when the device has internet connectivity
  • Hosting a custom pre-trained TensorFlow Lite model and downloading it to your app
  • Using the downloaded model to run inference and label images

Whether you’re new or experienced in machine learning, you can implement the functionality you need in just a few lines of code. It’s not necessary to have deep knowledge of neural networks or model optimization to get started. If you’d like to check out how to use ML Kit in your applications, there is a codelabs tutorial provided by Google with a sample starter project that anyone can follow.

Let us know what your SwiftFest 2019 takeaways were; we’d love to hear them!

