Support Swift by Sundell by checking out this sponsor:
Bitrise: My favorite continuous integration service. Automatically build, test and distribute your app on every Pull Request — which lets you quickly get feedback on each change that you make. Start 2021 with solid continuous integration from Bitrise, using hundreds of ready-to-use steps.
By default, encoding or decoding an array using Swift’s built-in Codable API is an all-or-nothing type of deal. Either all elements will be successfully handled, or an error will be thrown, which is arguably a good default, since it ensures a high level of data consistency.

However, sometimes we might want to tweak that behavior so that invalid elements will be ignored, rather than resulting in our entire coding process failing. For example, let’s say that we’re working with a JSON-based web API that returns collections of items that we’re currently modeling in Swift like this:
struct Item: Codable {
    var name: String
    var value: Int
}

extension Item {
    struct Collection: Codable {
        var items: [Item]
    }
}
Now let’s say that the web API that we’re working with occasionally returns responses such as the following, which includes a null value where our Swift code is expecting an Int:
{
    "items": [
        {
            "name": "One",
            "value": 1
        },
        {
            "name": "Two",
            "value": 2
        },
        {
            "name": "Three",
            "value": null
        }
    ]
}
If we try to decode the above response into an instance of our Item.Collection model, then the entire decoding process will fail, even if the majority of our items did contain perfectly valid data.
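To see that failure in action, here’s a minimal, runnable sketch that decodes the example response above (the do/catch is simply there to surface the error):

```swift
import Foundation

struct Item: Codable {
    var name: String
    var value: Int
}

extension Item {
    struct Collection: Codable {
        var items: [Item]
    }
}

// The example response from above, with a null value for "Three":
let json = """
{"items": [
    {"name": "One", "value": 1},
    {"name": "Two", "value": 2},
    {"name": "Three", "value": null}
]}
""".data(using: .utf8)!

var decodingError: Error?

do {
    _ = try JSONDecoder().decode(Item.Collection.self, from: json)
} catch {
    // The single null value makes the whole collection fail to decode:
    decodingError = error
}

print(decodingError != nil) // true
```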
The above might seem like a somewhat contrived example, but it’s incredibly common to encounter malformed or inconsistent JSON formats in the wild, and we might not always be able to adjust those formats to neatly fit Swift’s very static nature.
Of course, one potential solution would be to simply make our value property optional (Int?), but doing so could introduce all sorts of complexities across our code base, since we’d now have to unwrap those values every time that we wish to use them as concrete, non-optional Int values.
Another way to solve our problem could be to define default values for the properties that we’re expecting to potentially be null, missing, or invalid — which would be a fine solution in situations when we still want to keep any elements that contained invalid data, but let’s say that this is not one of those situations.
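For reference, here’s one way that the default-value approach could be sketched, using a custom decoding initializer together with decodeIfPresent (the fallback value of 0 is just an assumption made for this example):

```swift
import Foundation

struct Item: Codable {
    var name: String
    var value: Int

    enum CodingKeys: String, CodingKey {
        case name, value
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        name = try container.decode(String.self, forKey: .name)
        // decodeIfPresent returns nil for both missing keys and
        // null values, which lets us fall back to a default:
        value = try container.decodeIfPresent(Int.self, forKey: .value) ?? 0
    }
}

extension Item {
    struct Collection: Codable {
        var items: [Item]
    }
}

let json = """
{"items": [
    {"name": "One", "value": 1},
    {"name": "Two", "value": 2},
    {"name": "Three", "value": null}
]}
""".data(using: .utf8)!

let collection = try! JSONDecoder().decode(Item.Collection.self, from: json)
print(collection.items.map(\.value)) // [1, 2, 0]
```

Note that all three elements are kept, which is exactly why this approach only fits when we do want to preserve elements that contained invalid data.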
So, instead, let’s take a look at how we could ignore all invalid elements when decoding any Decodable array, without having to make any major modifications to how our data is structured within our Swift types.
What we’re essentially looking to do is to change our decoding process from being very strict to instead becoming “lossy”. To get started, let’s introduce a generic LossyCodableList type which will act as a thin wrapper around an array of Element values:
struct LossyCodableList<Element> {
    var elements: [Element]
}
Note how we didn’t immediately make our new type conform to Codable, and that’s because we’d like it to conditionally support either Decodable, Encodable, or both, depending on what Element type that it’s being used with. After all, not all types will be codable both ways, and by declaring our Codable conformances separately we’ll make our new LossyCodableList type as flexible as possible.
Let’s start with Decodable, which we’ll conform to by using an intermediate ElementWrapper type to decode each element in an optional manner. We’ll then use compactMap to discard all nil elements, which will give us our final array — like this:
extension LossyCodableList: Decodable where Element: Decodable {
    private struct ElementWrapper: Decodable {
        var element: Element?

        init(from decoder: Decoder) throws {
            let container = try decoder.singleValueContainer()
            element = try? container.decode(Element.self)
        }
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.singleValueContainer()
        let wrappers = try container.decode([ElementWrapper].self)
        elements = wrappers.compactMap(\.element)
    }
}
To learn more about the above way of conforming to protocols, check out “Conditional conformances in Swift”.
Next, Encodable, which might not be something that every project needs, but it could still come in handy in case we also want to give our encoding process the same lossy behavior:
extension LossyCodableList: Encodable where Element: Encodable {
    func encode(to encoder: Encoder) throws {
        var container = encoder.unkeyedContainer()

        for element in elements {
            try? container.encode(element)
        }
    }
}
With the above in place, we’ll now be able to automatically discard all invalid Item values simply by making our nested Collection type use our new LossyCodableList — like this:
extension Item {
    struct Collection: Codable {
        var items: LossyCodableList<Item>
    }
}
One quite major downside with the above approach, however, is that we now always have to use items.elements to access our actual item values, which isn’t ideal. It would arguably be much better if we could turn our usage of LossyCodableList into a completely transparent implementation detail, so that we could keep accessing our items property as a simple array of values.
One way to make that happen would be to store our item collection’s LossyCodableList as a private property, and to then use a CodingKeys type to point to that property when encoding or decoding. We could then implement items as a computed property, for example like this:
extension Item {
    struct Collection: Codable {
        enum CodingKeys: String, CodingKey {
            case _items = "items"
        }

        var items: [Item] {
            get { _items.elements }
            set { _items.elements = newValue }
        }

        private var _items: LossyCodableList<Item>
    }
}
Another option would be to give our Collection type a completely custom Decodable implementation, which would involve decoding each JSON array using LossyCodableList before assigning the resulting elements to our items property:
extension Item {
    struct Collection: Codable {
        enum CodingKeys: String, CodingKey {
            case items
        }

        var items: [Item]

        init(from decoder: Decoder) throws {
            let container = try decoder.container(keyedBy: CodingKeys.self)
            let collection = try container.decode(
                LossyCodableList<Item>.self,
                forKey: .items
            )
            items = collection.elements
        }
    }
}
Both of the above approaches are perfectly fine solutions, but let’s see if we could make things even nicer by using Swift’s property wrappers feature.
One really neat thing about the way that property wrappers are implemented in Swift is that they’re all just standard Swift types, which means that we can retrofit our LossyCodableList to also be able to act as a property wrapper. All that we have to do to make that happen is to mark it with the @propertyWrapper attribute, and to implement the required wrappedValue property (which once again could be done as a computed property):
@propertyWrapper
struct LossyCodableList<Element> {
    var elements: [Element]

    var wrappedValue: [Element] {
        get { elements }
        set { elements = newValue }
    }
}
With the above in place, we’ll now be able to mark any Array-based property with the @LossyCodableList attribute, and it’ll be lossily encoded and decoded — completely transparently:
extension Item {
    struct Collection: Codable {
        @LossyCodableList var items: [Item]
    }
}
Of course, we’ll still be able to keep using LossyCodableList as a stand-alone type, just like we did before. All that we’ve done by turning it into a property wrapper is to also enable it to be used that way, which once again gives us a lot of added flexibility without any real cost.
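Putting all of the pieces above together, here’s a complete, runnable sketch of the property wrapper version in action, using the same example JSON as before:

```swift
import Foundation

@propertyWrapper
struct LossyCodableList<Element> {
    var elements: [Element]

    var wrappedValue: [Element] {
        get { elements }
        set { elements = newValue }
    }
}

extension LossyCodableList: Decodable where Element: Decodable {
    private struct ElementWrapper: Decodable {
        var element: Element?

        init(from decoder: Decoder) throws {
            let container = try decoder.singleValueContainer()
            // Invalid elements simply become nil here:
            element = try? container.decode(Element.self)
        }
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.singleValueContainer()
        let wrappers = try container.decode([ElementWrapper].self)
        elements = wrappers.compactMap(\.element)
    }
}

extension LossyCodableList: Encodable where Element: Encodable {
    func encode(to encoder: Encoder) throws {
        var container = encoder.unkeyedContainer()

        for element in elements {
            try? container.encode(element)
        }
    }
}

struct Item: Codable {
    var name: String
    var value: Int
}

extension Item {
    struct Collection: Codable {
        @LossyCodableList var items: [Item]
    }
}

let json = """
{"items": [
    {"name": "One", "value": 1},
    {"name": "Two", "value": 2},
    {"name": "Three", "value": null}
]}
""".data(using: .utf8)!

let collection = try! JSONDecoder().decode(Item.Collection.self, from: json)
print(collection.items.map(\.name)) // ["One", "Two"]
```

The invalid “Three” element is silently discarded, and items can be accessed as a plain [Item] array.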
At first glance, Codable might seem like an incredibly strict and somewhat limited API that either succeeds or fails, without any room for nuance or customization. However, once we get past the surface level, Codable is actually incredibly powerful, and can be customized in lots of different ways.
Silently ignoring invalid elements is definitely not always the right approach — very often we do want our coding processes to fail whenever any invalid data is encountered — but when that’s not the case, then either of the techniques used in this article can provide a great way to make our encoding and decoding code more flexible and lossy without introducing a ton of additional complexity.
If you’ve got any questions, comments, or feedback, then feel free to reach out via either Twitter or email. If you want to, you can also support my work by either checking out the above sponsor, or by sharing this article to help more people discover it.
Thanks for reading!
Testing asynchronous code is often particularly tricky, and code written using Apple’s Combine framework is no exception. Since each XCTest-based unit test executes in a purely synchronous fashion, we have to find ways to tell the test runner to await the output of the various asynchronous calls that we’re looking to test — otherwise their output won’t be available until after our tests have already finished executing.
So, in this article, let’s take a look at how to do just that when it comes to code that’s based on Combine publishers, and also how we can augment Combine’s built-in API with a few utilities that’ll heavily improve its testing ergonomics.
As an example, let’s say that we’ve been working on a Tokenizer that can be used to identify various tokens within a string (such as usernames, URLs, hashtags, and so on). Since we’re looking to tokenize strings of varying length and complexity, we’ve opted to make our Tokenizer perform its work on a background thread, and to then use Combine to asynchronously report what tokens it found within a given String:
struct Tokenizer {
    func tokenize(_ string: String) -> AnyPublisher<[Token], Error> {
        ...
    }
}
Now let’s say that we wanted to write a series of tests to verify that our tokenization logic works as intended, and since the above API is asynchronous, we’re going to have to do a little bit of work to ensure that those tests will execute predictably.
Like we took a look at in “Unit testing asynchronous Swift code”, XCTest’s expectations system enables us to tell the test runner to await the result of an asynchronous call by creating, awaiting and fulfilling an XCTestExpectation instance. So let’s use that system, along with Combine’s sink operator, to write our first test like this:
class TokenizerTests: XCTestCase {
    private var cancellables: Set<AnyCancellable>!

    override func setUp() {
        super.setUp()
        cancellables = []
    }

    func testIdentifyingUsernames() {
        let tokenizer = Tokenizer()

        // Declaring local variables that we'll be able to write
        // our output to, as well as an expectation that we'll
        // use to await our asynchronous result:
        var tokens = [Token]()
        var error: Error?
        let expectation = self.expectation(description: "Tokenization")

        // Setting up our Combine pipeline:
        tokenizer
            .tokenize("Hello @john")
            .sink(receiveCompletion: { completion in
                switch completion {
                case .finished:
                    break
                case .failure(let encounteredError):
                    error = encounteredError
                }

                // Fulfilling our expectation to unblock
                // our test execution:
                expectation.fulfill()
            }, receiveValue: { value in
                tokens = value
            })
            .store(in: &cancellables)

        // Awaiting fulfillment of our expectation before
        // performing our asserts:
        waitForExpectations(timeout: 10)

        // Asserting that our Combine pipeline yielded the
        // correct output:
        XCTAssertNil(error)
        XCTAssertEqual(tokens, [.text("Hello "), .username("john")])
    }
}
While the above test works perfectly fine, it’s arguably not that pleasant to read, and it involves a lot of boilerplate code that we’ll likely have to repeat within every single Tokenizer test that we’ll write.
So let’s address those issues by introducing a variant of the await method from “Async/await in Swift unit tests”, which essentially moves all of the setup code required to observe and await the result of a publisher into a dedicated XCTestCase method:
extension XCTestCase {
    func await<T: Publisher>(
        _ publisher: T,
        timeout: TimeInterval = 10,
        file: StaticString = #file,
        line: UInt = #line
    ) throws -> T.Output {
        // This time, we use Swift's Result type to keep track
        // of the result of our Combine pipeline:
        var result: Result<T.Output, Error>?
        let expectation = self.expectation(description: "Awaiting publisher")

        let cancellable = publisher.sink(
            receiveCompletion: { completion in
                switch completion {
                case .failure(let error):
                    result = .failure(error)
                case .finished:
                    break
                }

                expectation.fulfill()
            },
            receiveValue: { value in
                result = .success(value)
            }
        )

        // Just like before, we await the expectation that we
        // created at the top of our test, and once done, we
        // also cancel our cancellable to avoid getting any
        // unused variable warnings:
        waitForExpectations(timeout: timeout)
        cancellable.cancel()

        // Here we pass the original file and line number that
        // our utility was called at, to tell XCTest to report
        // any encountered errors at that original call site:
        let unwrappedResult = try XCTUnwrap(
            result,
            "Awaited publisher did not produce any output",
            file: file,
            line: line
        )

        return try unwrappedResult.get()
    }
}
To learn more about the XCTUnwrap function used above, check out “Avoiding force unwrapping in Swift unit tests”.
With the above in place, we’ll now be able to drastically simplify our original test, which can now be implemented using just a few lines of code:
class TokenizerTests: XCTestCase {
    func testIdentifyingUsernames() throws {
        let tokenizer = Tokenizer()
        let tokens = try await(tokenizer.tokenize("Hello @john"))
        XCTAssertEqual(tokens, [.text("Hello "), .username("john")])
    }
}
Much better! However, one thing that we have to keep in mind is that our new await method assumes that each publisher passed into it will eventually complete, and it also only captures the latest value that the publisher emitted. So while it’s incredibly useful for publishers that adhere to the classic request/response model, we might need a slightly different set of tools to test publishers that continuously emit new values.
A very common example of publishers that never complete and that keep emitting new values are published properties. For example, let’s say that we’ve now wrapped our Tokenizer into an ObservableObject that can be used within our UI code, which might look something like this:
class EditorViewModel: ObservableObject {
    @Published private(set) var tokens = [Token]()
    @Published var string = ""

    private let tokenizer = Tokenizer()

    init() {
        // Every time that a new string is assigned, we pass
        // that new value to our tokenizer, and we then assign
        // the result of that operation to our 'tokens' property:
        $string
            .flatMap(tokenizer.tokenize)
            .replaceError(with: [])
            .assign(to: &$tokens)
    }
}
Note how we currently replace all errors with an empty array of tokens, in order to be able to assign our publisher directly to our tokens property. Alternatively, we could’ve used an operator like catch to perform more custom error handling, or to propagate any encountered error to the UI.
Now let’s say that we’d like to write a test that ensures that our new EditorViewModel keeps publishing new [Token] values as its string property is modified, which means that we’re no longer just interested in a single output value, but rather multiple ones.
An initial idea on how to handle that, while still being able to use our await method from before, might be to use Combine’s collect operator to emit a single array containing all of the values that our view model’s tokens property will publish, and to then perform a series of verifications on that array — like this:
class EditorViewModelTests: XCTestCase {
    func testTokenizingMultipleStrings() throws {
        let viewModel = EditorViewModel()

        // Here we collect the first two [Token] values that
        // our published property emitted:
        let tokenPublisher = viewModel.$tokens
            .collect(2)
            .first()

        // Triggering our underlying Combine pipeline by assigning
        // new strings to our view model:
        viewModel.string = "Hello @john"
        viewModel.string = "Check out #swift"

        // Once again we wait for our publisher to complete before
        // performing assertions on its output:
        let tokenArrays = try await(tokenPublisher)
        XCTAssertEqual(tokenArrays.count, 2)
        XCTAssertEqual(tokenArrays.first, [.text("Hello "), .username("john")])
        XCTAssertEqual(tokenArrays.last, [.text("Check out "), .hashtag("swift")])
    }
}
However, the above test currently fails, since the first element within our tokenArrays output value will be an empty array. That’s because all @Published-marked properties always emit their current value when a subscriber is attached to them, meaning that the above tokenPublisher will always receive our tokens property’s initial value (an empty array) as its first input value.
Thankfully, that problem is quite easily fixed, since Combine ships with a dedicated operator that lets us ignore the first output element that a given publisher will produce — dropFirst. So if we simply insert that operator as the first step within our local publisher’s pipeline, then our test will now successfully pass:
class EditorViewModelTests: XCTestCase {
    func testTokenizingMultipleStrings() throws {
        let viewModel = EditorViewModel()

        let tokenPublisher = viewModel.$tokens
            .dropFirst()
            .collect(2)
            .first()

        viewModel.string = "Hello @john"
        viewModel.string = "Check out #swift"

        let tokenArrays = try await(tokenPublisher)
        XCTAssertEqual(tokenArrays.count, 2)
        XCTAssertEqual(tokenArrays.first, [.text("Hello "), .username("john")])
        XCTAssertEqual(tokenArrays.last, [.text("Check out "), .hashtag("swift")])
    }
}
However, we’re once again at a point where it would be quite tedious (and error prone) to have to write the above set of operators for each published property that we’re looking to write tests against — so let’s write another utility that’ll do that work for us. This time, we’ll extend the Published type’s nested Publisher directly, since we’re only looking to perform this particular operation when testing published properties:
extension Published.Publisher {
    func collectNext(_ count: Int) -> AnyPublisher<[Output], Never> {
        self.dropFirst()
            .collect(count)
            .first()
            .eraseToAnyPublisher()
    }
}
With the above in place, we’ll now be able to easily collect the next N values that a given published property will emit within any of our unit tests:
class EditorViewModelTests: XCTestCase {
    func testTokenizingMultipleStrings() throws {
        let viewModel = EditorViewModel()
        let tokenPublisher = viewModel.$tokens.collectNext(2)
        ...
    }
}
Even though Combine’s stream-driven design might differ quite a bit from other kinds of asynchronous code that we might be used to (such as using completion handler closures, or something like Futures and Promises), we can still use XCTest’s asynchronous testing tools when verifying our Combine-based logic as well. Hopefully this article has given you a few ideas on how to do just that, and how we can make writing such tests much simpler by introducing a few lightweight utilities.
If you’ve got questions, comments, or feedback, then feel free to reach out via either Twitter or email. Thanks for reading, and happy testing!
New in Swift 5.4: Implicit member expressions (also known as “dot syntax”) can now be used even when accessing a property or method on the result of such an expression, as long as the final return type remains the same.
Note that Swift 5.4 is, at the time of writing, in beta as part of Xcode 12.5.
What that means in practice is that whenever we’re creating an object or value using a static API, or when referencing an enum case, we can now directly call a method or property on that instance, and the compiler will still be able to infer what type that we’re referring to.
As an example, when creating a UIColor instance using one of the built-in, static APIs that ship as part of the system, we can now easily modify the alpha component of such a color without having to explicitly refer to UIColor itself in situations like this:
// In Swift 5.3 and earlier, an explicit type reference is always
// required when dealing with chained expressions:
let view = UIView()
view.backgroundColor = UIColor.blue.withAlphaComponent(0.5)
...
// In Swift 5.4, the type of our expression can now be inferred:
let view = UIView()
view.backgroundColor = .blue.withAlphaComponent(0.5)
...
Of course, the above approach also works when using our own static APIs, for example any custom UIColor definitions that we’ve added by using an extension:
extension UIColor {
    static var chiliRed: UIColor {
        UIColor(red: 0.89, green: 0.24, blue: 0.16, alpha: 1)
    }
}

let view = UIView()
view.backgroundColor = .chiliRed.withAlphaComponent(0.5)
...
Perhaps even more interesting, though, are what doors that this new capability opens in terms of API design. As an example, in “Lightweight API design in Swift”, we took a look at the following API style, which involves extending a struct with static methods and properties that enables us to use it in a very “enum-like” way:
extension ImageFilter {
    static var dramatic: Self {
        ImageFilter(
            name: "Dramatic",
            icon: .drama,
            transforms: [
                .portrait(withZoomMultiplier: 2.1),
                .contrastBoost,
                .grayScale(withBrightness: .dark)
            ]
        )
    }
}
When using Swift 5.4 (or later versions in the future), we might now add something like the following — which lets us easily combine two ImageFilter instances by concatenating their transforms:
extension ImageFilter {
    func combined(with filter: Self) -> Self {
        var newFilter = self
        newFilter.transforms += filter.transforms
        return newFilter
    }
}
With the above in place, we’ll now be able to work with dynamically combined filters using the same lightweight dot syntax as we could previously only use when referencing a single, pre-defined filter:
let filtered = image.withFilter(.dramatic.combined(with: .invert))
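Here’s a condensed, runnable sketch of that pattern, with a simplified stand-in for the ImageFilter type above (the transforms are reduced to plain strings, and the invert filter is just assumed for this example):

```swift
struct ImageFilter {
    var name: String
    var transforms: [String]
}

extension ImageFilter {
    static var dramatic: Self {
        ImageFilter(name: "Dramatic", transforms: ["contrastBoost", "grayScale"])
    }

    static var invert: Self {
        ImageFilter(name: "Invert", transforms: ["invert"])
    }

    func combined(with filter: Self) -> Self {
        var newFilter = self
        newFilter.transforms += filter.transforms
        return newFilter
    }
}

// A stand-in for an image type's filtering method:
func applyFilter(_ filter: ImageFilter) -> [String] {
    filter.transforms
}

// Swift 5.4's chained implicit member expressions in action:
let transforms = applyFilter(.dramatic.combined(with: .invert))
print(transforms) // ["contrastBoost", "grayScale", "invert"]
```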
Pretty cool! I’m going to keep exploring what kind of APIs that this new language feature enables me to design, and I’ll of course keep sharing my learnings with you through future articles and podcast episodes.
Like always, let me know if you have any questions or feedback by reaching out via either Twitter or email. Thanks for reading!
Memory management is often especially tricky when dealing with asynchronous operations, as those tend to require us to keep certain objects in memory beyond the scope in which they were defined, while still making sure that those objects eventually get deallocated in a predictable way.
Although Apple’s Combine framework can make it somewhat simpler to manage such long-living references — as it encourages us to model our asynchronous code as pipelines, rather than a series of nested closures — there are still a number of potential memory management pitfalls that we have to constantly look out for.
In this article, let’s take a look at how some of those pitfalls can be avoided, specifically when it comes to self and Cancellable references.
Combine’s Cancellable protocol (which we typically interact with through its type-erased AnyCancellable wrapper) lets us control how long a given subscription should stay alive and active. Like its name implies, as soon as a cancellable is deallocated (or manually cancelled), the subscription that it’s tied to is automatically invalidated — which is why almost all of Combine’s subscription APIs (like sink) return an AnyCancellable when called.
For example, the following Clock type holds a strong reference to the AnyCancellable instance that it gets back from calling sink on a Timer publisher, which keeps that subscription active for as long as its Clock instance remains in memory — unless the cancellable is manually removed by setting its property to nil:
class Clock: ObservableObject {
    @Published private(set) var time = Date().timeIntervalSince1970

    private var cancellable: AnyCancellable?

    func start() {
        cancellable = Timer.publish(
            every: 1,
            on: .main,
            in: .default
        )
        .autoconnect()
        .sink { date in
            self.time = date.timeIntervalSince1970
        }
    }

    func stop() {
        cancellable = nil
    }
}
However, while the above implementation perfectly manages its AnyCancellable instance and the Timer subscription that it represents, it does have quite a major flaw in terms of memory management. Since we’re capturing self strongly within our sink closure, and since our cancellable (which is, in turn, owned by self) will keep that subscription alive for as long as it remains in memory, we’ll end up with a retain cycle — or in other words, a memory leak.
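The cycle itself isn’t Combine-specific, so it can be sketched using nothing but a stored closure. In the following runnable example (all names are made up for illustration), the strong-capturing version is never deallocated, while a weak-capturing variant is:

```swift
var leakyDeinitCount = 0
var fixedDeinitCount = 0

final class LeakyTicker {
    // Stands in for the stored AnyCancellable: a long-lived
    // closure that the object itself keeps alive:
    private var onTick: (() -> Void)?
    var time = 0.0

    func start() {
        // Capturing self strongly creates the cycle:
        // self -> onTick -> self
        onTick = { self.time += 1 }
    }

    deinit { leakyDeinitCount += 1 }
}

final class FixedTicker {
    private var onTick: (() -> Void)?
    var time = 0.0

    func start() {
        // A weak capture breaks the cycle:
        onTick = { [weak self] in self?.time += 1 }
    }

    deinit { fixedDeinitCount += 1 }
}

var leaky: LeakyTicker? = LeakyTicker()
leaky?.start()
leaky = nil
print(leakyDeinitCount) // 0 (the instance leaked)

var fixed: FixedTicker? = FixedTicker()
fixed?.start()
fixed = nil
print(fixedDeinitCount) // 1 (deallocated as expected)
```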
An initial idea on how to fix that problem might be to instead use Combine’s assign operator (along with a quick Date-to-TimeInterval transformation using map) to be able to assign the result of our pipeline directly to our clock’s time property — like this:
class Clock: ObservableObject {
    ...

    func start() {
        cancellable = Timer.publish(
            every: 1,
            on: .main,
            in: .default
        )
        .autoconnect()
        .map(\.timeIntervalSince1970)
        .assign(to: \.time, on: self)
    }

    ...
}
However, the above approach will still cause self to be retained, as the assign operator keeps a strong reference to each object that’s passed to it. Instead, with our current setup, we’ll have to resort to a good old fashioned “weak self dance” in order to capture a weak reference to our enclosing Clock instance, which will break our retain cycle:
class Clock: ObservableObject {
    ...

    func start() {
        cancellable = Timer.publish(
            every: 1,
            on: .main,
            in: .default
        )
        .autoconnect()
        .map(\.timeIntervalSince1970)
        .sink { [weak self] time in
            self?.time = time
        }
    }

    ...
}
With the above in place, each Clock instance can now be deallocated once it’s no longer referenced by any other object, which in turn will cause our AnyCancellable to be deallocated as well, and our Combine pipeline will be properly dissolved. Great!
Another option that can be great to keep in mind is that (as of iOS 14 and macOS Big Sur) we can also connect a Combine pipeline directly to a published property. However, while doing so can be incredibly convenient in a number of different situations, that approach doesn’t give us an AnyCancellable back — meaning that we won’t have any means to cancel such a subscription.
In the case of our Clock type, we might still be able to use that approach — if we’re fine with removing our start and stop methods, and instead automatically starting each clock upon initialization, since otherwise we might end up with duplicate subscriptions. If those are tradeoffs that we’re willing to accept, then we could change our implementation into this:
class Clock: ObservableObject {
    @Published private(set) var time = Date().timeIntervalSince1970

    init() {
        Timer.publish(
            every: 1,
            on: .main,
            in: .default
        )
        .autoconnect()
        .map(\.timeIntervalSince1970)
        .assign(to: &$time)
    }
}
When calling the above flavor of assign, we’re passing a direct reference to our Published property’s projected value, prefixed with an ampersand to make that value mutable (since assign uses the inout keyword). To learn more about that pattern, check out the Basics article about value and reference types.
The beauty of the above approach is that Combine will now automatically manage our subscription based on the lifetime of our time property — meaning that we’re still avoiding any reference cycles while also significantly reducing the amount of bookkeeping code that we have to write ourselves. So for pipelines that are only configured once, and are directly tied to a Published property, using the above overload of the assign operator can often be a great choice.
Next, let’s take a look at a slightly more complex example, in which we’ve implemented a ModelLoader that lets us load and decode a Decodable model from a given URL. By using a single cancellable property, our loader can automatically cancel any previous data loading pipeline when a new one is triggered — as any previously assigned AnyCancellable instance will be deallocated when that property’s value is replaced.
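To see why that replacement-based cancellation works, here’s a simplified, hypothetical sketch in plain Swift (the Token and Loader types are purely illustrative, not part of Combine) that mimics how AnyCancellable runs its cancellation logic when deallocated:

```swift
// Like AnyCancellable, this illustrative Token runs a cleanup closure
// when it's deallocated.
final class Token {
    private let cleanup: () -> Void

    init(cleanup: @escaping () -> Void) {
        self.cleanup = cleanup
    }

    deinit { cleanup() }
}

final class Loader {
    private var token: Token?

    func startOperation(cleanup: @escaping () -> Void) {
        // Assigning a new token releases the previous one, which in turn
        // runs that operation's cleanup, just like replacing a stored
        // AnyCancellable cancels the previous Combine pipeline.
        token = Token(cleanup: cleanup)
    }
}

let loader = Loader()
var cancelled = [String]()
loader.startOperation { cancelled.append("first") }
loader.startOperation { cancelled.append("second") }
// At this point, only the first operation has been cancelled
```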
Here’s what that ModelLoader type currently looks like:
class ModelLoader<Model: Decodable>: ObservableObject {
    enum State {
        case idle
        case loading
        case loaded(Model)
        case failed(Error)
    }

    @Published private(set) var state = State.idle

    private let url: URL
    private let session: URLSession
    private let decoder: JSONDecoder
    private var cancellable: AnyCancellable?

    ...

    func load() {
        state = .loading

        cancellable = session
            .dataTaskPublisher(for: url)
            .map(\.data)
            .decode(type: Model.self, decoder: decoder)
            .map(State.loaded)
            .catch { error in
                Just(.failed(error))
            }
            .receive(on: DispatchQueue.main)
            .sink { [weak self] state in
                self?.state = state
            }
    }
}
While that automatic cancellation of old requests prevents us from simply connecting the output of our data loading pipeline to our Published property, if we wanted to avoid having to manually capture a weak reference to self every time that we use the above pattern (that is, loading a value and assigning it to a property), we could introduce the following Publisher extension — which adds a weak-capturing version of the standard assign operator that we took a look at earlier:
extension Publisher where Failure == Never {
    func weakAssign<T: AnyObject>(
        to keyPath: ReferenceWritableKeyPath<T, Output>,
        on object: T
    ) -> AnyCancellable {
        sink { [weak object] value in
            object?[keyPath: keyPath] = value
        }
    }
}
With the above in place, we can now simply call weakAssign whenever we want to assign the output of a given publisher to a property of an object that’s captured using a weak reference — like this:
class ModelLoader<Model: Decodable>: ObservableObject {
    ...

    func load() {
        state = .loading

        cancellable = session
            .dataTaskPublisher(for: url)
            .map(\.data)
            .decode(type: Model.self, decoder: decoder)
            .map(State.loaded)
            .catch { error in
                Just(.failed(error))
            }
            .receive(on: DispatchQueue.main)
            .weakAssign(to: \.state, on: self)
    }
}
Is that new weakAssign method purely syntactic sugar? Yes. But is it nicer than what we were using before? Also yes 🙂
Another type of situation that’s quite commonly encountered when working with Combine is when we need to access a specific property within one of our operators, for example in order to perform a nested asynchronous call.
To illustrate, let’s say that we wanted to extend our ModelLoader by using a Database to automatically store each model that was loaded — an operation that also wraps those model instances using a generic Stored type (for example in order to add local metadata, such as an ID or model version). To be able to access that database instance within an operator like flatMap, we could once again capture a weak reference to self — like this:
class ModelLoader<Model: Decodable>: ObservableObject {
    enum State {
        case idle
        case loading
        case loaded(Stored<Model>)
        case failed(Error)
    }

    ...

    private let database: Database

    ...

    func load() {
        state = .loading

        cancellable = session
            .dataTaskPublisher(for: url)
            .map(\.data)
            .decode(type: Model.self, decoder: decoder)
            .flatMap { [weak self] model -> AnyPublisher<Stored<Model>, Error> in
                guard let database = self?.database else {
                    return Empty(completeImmediately: true)
                        .eraseToAnyPublisher()
                }

                return database.store(model)
            }
            .map(State.loaded)
            .catch { error in
                Just(.failed(error))
            }
            .receive(on: DispatchQueue.main)
            .weakAssign(to: \.state, on: self)
    }
}
The reason that we’re using the flatMap operator above is that our database is also asynchronous, and returns another publisher that represents the current saving operation.
However, as the above example shows, it can sometimes be tricky to come up with a reasonable default value to return from an unwrapping guard statement placed within an operator like map or flatMap. Above we use Empty, which works, but it does add a substantial amount of extra verbosity to our otherwise quite elegant pipeline.
Thankfully, that problem is quite easy to fix (at least in this case). All that we have to do is to capture our database property directly, rather than capturing self. That way, we don’t have to deal with any optionals, and can now simply call our database’s store method within our flatMap closure — like this:
class ModelLoader<Model: Decodable>: ObservableObject {
    ...

    func load() {
        state = .loading

        cancellable = session
            .dataTaskPublisher(for: url)
            .map(\.data)
            .decode(type: Model.self, decoder: decoder)
            .flatMap { [database] model in
                database.store(model)
            }
            .map(State.loaded)
            .catch { error in
                Just(.failed(error))
            }
            .receive(on: DispatchQueue.main)
            .weakAssign(to: \.state, on: self)
    }
}
As an added bonus, we could even pass the Database method that we’re looking to call directly into flatMap in this case — since its signature perfectly matches the closure that flatMap expects within this context (and thanks to the fact that Swift supports first class functions):
class ModelLoader<Model: Decodable>: ObservableObject {
    ...

    func load() {
        state = .loading

        cancellable = session
            .dataTaskPublisher(for: url)
            .map(\.data)
            .decode(type: Model.self, decoder: decoder)
            .flatMap(database.store)
            .map(State.loaded)
            .catch { error in
                Just(.failed(error))
            }
            .receive(on: DispatchQueue.main)
            .weakAssign(to: \.state, on: self)
    }
}
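That first class functions technique isn’t specific to Combine. Here’s a minimal plain-Swift sketch of the same idea, using a hypothetical Formatter type: any instance method whose signature matches the expected closure type can be passed directly as an argument:

```swift
// A plain-Swift illustration of first class functions: the format method
// has the signature (Int) -> String, which is exactly what map expects,
// so we can pass the method itself instead of wrapping it in a closure.
struct Formatter {
    let suffix: String

    func format(_ value: Int) -> String {
        "\(value)\(suffix)"
    }
}

let formatter = Formatter(suffix: " pts")

// Equivalent to [1, 2, 3].map { formatter.format($0) }:
let labels = [1, 2, 3].map(formatter.format) // ["1 pts", "2 pts", "3 pts"]
```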
So, when possible, it’s typically a good idea to avoid capturing self within our Combine operators, and to instead call other objects that can be stored as properties and passed directly into our various operators.
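That capture-list pattern works outside of Combine as well. Here’s a small, framework-free sketch (the Logger and Controller types are purely illustrative) of how capturing a dependency in a closure’s capture list avoids keeping the owning object alive:

```swift
// Capturing [logger] rather than self means the stored closure keeps the
// Logger alive, but not the Controller that owns it, so no reference
// cycle involving the Controller can form.
final class Logger {
    private(set) var messages = [String]()

    func log(_ message: String) {
        messages.append(message)
    }
}

final class Controller {
    let logger = Logger()
    private(set) var onFinish: (() -> Void)?

    func start() {
        // The capture list evaluates self.logger once, up front, and the
        // closure then holds only that reference:
        onFinish = { [logger] in
            logger.log("Finished")
        }
    }
}
```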
Support Swift by Sundell by checking out this sponsor:
Bitrise: My favorite continuous integration service. Automatically build, test and distribute your app on every Pull Request — which lets you quickly get feedback on each change that you make. Start 2021 with solid continuous integration from Bitrise, using hundreds of ready-to-use steps.
While Combine offers many APIs and features that can help us make our asynchronous code easier to write and maintain, it still requires us to be careful when it comes to how we manage our references and their underlying memory. Capturing a strong reference to self in the wrong place can still often lead to a retain cycle, and if a Cancellable is not properly deallocated, then a subscription might stay active for longer than expected.
Hopefully this article has given you a few new tips and techniques that you can use to prevent memory-related issues when working with self and Cancellable references within your Combine-based projects, and if you have any questions, comments, or feedback, then feel free to reach out via either Twitter or email.
Thanks for reading!
Often when working with interactive SwiftUI views, we’re using closures to define the actions that we wish to perform when various events occur. For example, the following AddItemView has two interactive elements, a TextField and a Button, that both enable the user to add a new text-based Item to our app:
struct AddItemView: View {
    var handler: (Item) -> Void

    @State private var title = ""

    var body: some View {
        HStack {
            TextField("Add item",
                text: $title,
                onCommit: {
                    guard !title.isEmpty else {
                        return
                    }

                    let item = Item(title: title)
                    handler(item)
                    title = ""
                }
            )
            Button("Add") {
                let item = Item(title: title)
                handler(item)
                title = ""
            }
            .disabled(title.isEmpty)
        }
    }
}
Apart from the leading guard statement within our text field’s onCommit action (which isn’t needed within our button action, since we’re disabling the button when the text is empty), our two closures are completely identical, so it would be quite nice to get rid of that source of code duplication by moving those actions away from our view’s body.
One way to do that would be to create our closures using a computed property. That would let us define our logic once, and if we also include the guard statement that our TextField needs, then we could use the exact same closure implementation for both of our UI controls:
private extension AddItemView {
    var addAction: () -> Void {
        return {
            guard !title.isEmpty else {
                return
            }

            let item = Item(title: title)
            handler(item)
            title = ""
        }
    }
}
With the above in place, we can now simply pass our new addAction property to both of our subviews. We’ve successfully gotten rid of our code duplication, and our view’s body implementation is now much more compact as well:
struct AddItemView: View {
    var handler: (Item) -> Void

    @State private var title = ""

    var body: some View {
        HStack {
            TextField("Add item",
                text: $title,
                onCommit: addAction
            )
            Button("Add", action: addAction)
                .disabled(title.isEmpty)
        }
    }
}
While the above is a perfectly fine solution, there’s also another option that might not initially be obvious within the context of SwiftUI, and that’s to use the same technique as when using UIKit’s target/action pattern — by defining our action handler as a method, rather than a closure.
To do that, let’s first refactor our addAction property from before into an addItem method that looks like this:
private extension AddItemView {
    func addItem() {
        guard !title.isEmpty else {
            return
        }

        let item = Item(title: title)
        handler(item)
        title = ""
    }
}
Then, just like how we previously passed our addAction property to both our TextField and our Button, we can now do the exact same thing with our addItem method — which gives us the following implementation:
struct AddItemView: View {
    var handler: (Item) -> Void

    @State private var title = ""

    var body: some View {
        HStack {
            TextField("Add item",
                text: $title,
                onCommit: addItem
            )
            Button("Add", action: addItem)
                .disabled(title.isEmpty)
        }
    }
}
When working with SwiftUI, it’s very common to fall into the trap of thinking that a given view’s layout, subviews and actions all need to be defined within its body, which — if we think about it — is exactly the same type of approach that often led to massive view controllers when working with UIKit.
However, thanks to SwiftUI’s highly composable design, it’s often quite easy to split a view’s body up into separate pieces, which might not even require any new View types to be created. Sometimes all that we have to do is to extract some of our logic into a separate method, and we’ll end up with much more elegant code that’ll be easier to both read and maintain.
Thanks a lot to the indie team behind the fantastic new macOS app Raycast for sponsoring Swift by Sundell last week. Raycast gives you a Spotlight-like interface for controlling your tools, scripts and tasks. It ships with a large collection of built-in integrations, for services like GitHub, Jira, Zoom, and Google Docs, and it also lets you invoke your own custom scripts as well.
For example, to be able to deploy Swift by Sundell using Raycast, all that I had to do was to write a simple script file containing a few pieces of metadata, and a call to publish deploy — like this:
#!/bin/bash
# @raycast.schemaVersion 1
# @raycast.title Deploy Swift by Sundell
# @raycast.mode fullOutput
# @raycast.currentDirectoryPath ~/developer/swiftbysundell
publish deploy
After importing the above file, I can now simply invoke Raycast using its system-wide keyboard shortcut, type deploy into its search box, and hit return — and Swift by Sundell will be deployed within seconds. Really nice!
Of course, the above just scratches the surface of what Raycast is capable of. You can use it to update GitHub issues and pull requests, search for files in your Google Drive, start or schedule Zoom calls, and much more. It even includes a clipboard manager and lots of other system integrations.
Try out the completely free Raycast beta, to see how it can help you improve your various workflows, and to support Swift by Sundell.
A quite common, yet surprisingly hard, problem to solve when building views using SwiftUI is how to make two dynamic views take on the same width or height. For example, here we’re working on a LoginView that has two buttons — one for logging in, and one that triggers a password reset — which are currently implemented like this:
struct LoginView: View {
    ...

    var body: some View {
        VStack {
            ...

            Group {
                Button("Log in") {
                    ...
                }
                Button("I forgot my password") {
                    ...
                }
            }
            .padding()
            .background(Color.blue)
            .foregroundColor(.white)
            .cornerRadius(20)
        }
    }
}
If we render the above view, we can see that our buttons have different widths, which is to be expected, since they’re displaying two different texts that aren’t equally long. But what if we actually wanted them to have the same width (which would arguably look a lot better) — how could that be achieved?
We’ll ignore approaches that involve hardcoding the width of our views (which won’t work for most modern apps that are expected to adapt to things like user accessibility settings and localized strings), or manually measuring the size of each underlying string. Instead, we’ll focus on solutions that are more robust and easier to maintain.
If we’re fine with having our buttons stretch out to fit the width of the container that they’re displayed in, then one potential solution would be to give both buttons an infinite max width. That can be done using the frame modifier, which, when combined with some additional horizontal padding, could give us a quite nice result (at least when our LoginView will be displayed in a narrow container, such as when running on a portrait-oriented iPhone):
struct LoginView: View {
    ...

    var body: some View {
        VStack {
            ...

            Group {
                Button("Log in") {
                    ...
                }
                Button("I forgot my password") {
                    ...
                }
            }
            .frame(maxWidth: .infinity)
            .padding()
            .background(Color.blue)
            .foregroundColor(.white)
            .cornerRadius(20)
            .padding(.horizontal)
        }
    }
}
While we could always use other, more static maxWidth values in order to prevent our buttons from being stretched out too much, perhaps we’d instead like to dynamically compute that value based on the width of our container. To do that, we could use a GeometryReader, which gives us access to the geometry information for the context in which our buttons will be rendered.
However, a quite substantial downside of using a GeometryReader in this context is that it occupies all of the space that’s available within our UI — meaning that our LoginView could end up becoming much larger than we’d expect, and we’d also need to make sure that our root VStack ends up filling all of that available space as well, in order to retain our desired layout.
Here’s what such an implementation could look like if we wanted our buttons to not be stretched beyond 60% of the total available width:
struct LoginView: View {
    ...

    var body: some View {
        GeometryReader { geometry in
            VStack {
                ...

                Group {
                    Button("Log in") {
                        ...
                    }
                    Button("I forgot my password") {
                        ...
                    }
                }
                .frame(maxWidth: geometry.size.width * 0.6)
                .padding()
                .background(Color.blue)
                .foregroundColor(.white)
                .cornerRadius(20)
            }
            .frame(maxWidth: .infinity)
        }
    }
}
Another potential downside of the above approach is that we have to make sure to pick a width multiplier that will work for all screen and text size combinations, which can sometimes be quite tricky.
If our app uses iOS 14 or macOS Big Sur as its minimum deployment target, then another potential solution to our width syncing problem would be to use a LazyHGrid. Non-adaptive, horizontal grids automatically sync the width of their cells, which lets us achieve our desired button layout like this:
struct LoginView: View {
    ...

    var body: some View {
        VStack {
            ...

            LazyHGrid(rows: [buttonRow, buttonRow]) {
                Group {
                    Button("Log in") {
                        ...
                    }
                    Button("I forgot my password") {
                        ...
                    }
                }
                .frame(maxWidth: .infinity)
                .padding()
                .background(Color.blue)
                .foregroundColor(.white)
                .cornerRadius(20)
            }
        }
    }

    private var buttonRow: GridItem {
        GridItem(.flexible(minimum: 0, maximum: 80))
    }
}
However, note that while the width of our two buttons is now completely dynamic (and fully synced), we have to hard-code the maximum height of each of our grid’s rows, since otherwise our grid would end up taking up all of the available space (just like a GeometryReader). That might not be an issue in certain situations, but it’s still something to be aware of if we end up using the above solution.
Perhaps the most dynamic and robust solution to the problem of syncing the widths or heights of two dynamic views is to use SwiftUI’s preferences system to measure and then send the size of each button to their parent view. To do that, let’s start by creating a PreferenceKey-conforming type that reduces all transmitted button width values by picking the maximum one — like this:
private extension LoginView {
    struct ButtonWidthPreferenceKey: PreferenceKey {
        static let defaultValue: CGFloat = 0

        static func reduce(value: inout CGFloat,
                           nextValue: () -> CGFloat) {
            value = max(value, nextValue())
        }
    }
}
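To make the role of that reduce method a bit more concrete, here’s the same maximum-folding logic in isolation, simulated over a plain array of measured widths (the maxWidth function is purely illustrative; SwiftUI performs this folding for us, calling reduce once per transmitted value):

```swift
import Foundation

// Folds a series of transmitted widths into a single maximum, mirroring
// what ButtonWidthPreferenceKey's reduce(value:nextValue:) does when
// SwiftUI collects preference values from multiple views.
func maxWidth(from widths: [CGFloat]) -> CGFloat {
    var value: CGFloat = 0 // Corresponds to defaultValue

    for width in widths {
        value = max(value, width) // Same logic as reduce(value:nextValue:)
    }

    return value
}

print(maxWidth(from: [88, 173, 120])) // 173.0
```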
With the above in place, we could then embed a GeometryReader within the background of each button (which makes it assume the same size as the button itself), and then send and observe our width values using our new preference key. We’d then store the maximum value in a @State-marked property, which will cause our view to be updated whenever that value is modified. Here’s what such an implementation could look like:
struct LoginView: View {
    ...

    @State private var buttonMaxWidth: CGFloat?

    var body: some View {
        VStack {
            ...

            Group {
                Button("Log in") {
                    ...
                }
                Button("I forgot my password") {
                    ...
                }
            }
            .background(GeometryReader { geometry in
                Color.clear.preference(
                    key: ButtonWidthPreferenceKey.self,
                    value: geometry.size.width
                )
            })
            .frame(width: buttonMaxWidth)
            .padding()
            .background(Color.blue)
            .foregroundColor(.white)
            .cornerRadius(20)
        }
        .onPreferenceChange(ButtonWidthPreferenceKey.self) {
            buttonMaxWidth = $0
        }
    }
}
Note that we’re using Color.clear (rather than EmptyView) as the return value within our GeometryReader closure. That’s because EmptyView instances are completely ignored by SwiftUI, which would prevent our preference values from being sent in this case.
While the above solution is definitely much more involved than the ones we explored earlier, it does give us the most dynamic solution (by far), since we’re no longer forced to hard-code any aspects of our layout. Instead, our buttons will now adapt according to their labels, which gives us a solution that works across a wide range of screen sizes, languages, and accessibility settings.
To learn more about the above technique, and many other SwiftUI layout tools, check out my three-part guide to the SwiftUI layout system.
While SwiftUI’s declarative design has many advantages, it does make tasks like the one we explored in this article quite difficult. None of the above solutions are really perfect, and problems like this could sort of make us wish for an Auto Layout-like constraint-based layout system, which would make it much easier to express that two views should be synced in terms of width or height.
Regardless, I hope that the above techniques will be helpful when you’re building your SwiftUI views, and feel free to reach out via either Twitter or email if you have any questions, comments, or feedback.
Thanks for reading!