Weekly Swift articles, podcasts and tips by John Sundell.

Type inference-powered serialization in Swift

Published on 05 Aug 2018
Basics article available: Codable

Type inference is a key feature of the Swift type system and plays a big part in the syntax of the language - making it less verbose by eliminating the need for manual type annotations where the compiler itself can infer the types of various values.

The beauty of type inference is that it's easy to forget that it's even there, even though it's hard at work for almost every single line of code that we write. However, by taking type inference into account when designing APIs - especially ones that use generic types - we can potentially reduce verbosity and "noise" in our code even further.

This week, let's take a look at how we can make Swift's Codable API a bit nicer and less verbose to use, by leveraging the power of type inference.

Decoding

In case you haven't used Codable before, it's Swift's built-in API for encoding and decoding serializable values to and from binary data. It's mostly used for JSON coding, and includes compiler-level integration to automatically synthesize much of the boilerplate that is normally required when writing serialization code.

Codable is actually just a simple typealias that combines the Decodable and Encodable protocols, which define the requirements for types that can be decoded and encoded, respectively.
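In fact, the standard library declares Codable as nothing more than a composition of those two protocols:

```swift
// The Swift standard library's declaration of Codable - simply
// a typealias for the composition of the two protocols:
typealias Codable = Decodable & Encodable
```

That's why conforming to Codable gives a type both decoding and encoding support at once.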

First, let's take a look at how we can use type inference when decoding a Decodable value from JSON data. Here we're using the default way to decode a set of data into a User instance, which we then pass to a function to let our login system know that a user has logged in:

let decoder = JSONDecoder()
let user = try decoder.decode(User.self, from: data)
userDidLogin(user)

There's nothing really wrong with the above code, but let's see if we can make it a bit nicer by using type inference. First, let's add an extension on Data that'll let us decode any Decodable type directly, by simply calling decoded() on it:

extension Data {
    func decoded<T: Decodable>() throws -> T {
        return try JSONDecoder().decode(T.self, from: self)
    }
}

Since we're not using an argument for specifying what type we're decoding into, we're leaving it up to Swift's type system to figure it out by using its inference capabilities. All we have to do is tell it what type a certain value should be treated as, and the compiler will figure out the rest. That lets us use a really nice syntax when decoding data, by simply using the as keyword to specify the type, like this:

let user = try data.decoded() as User

Even better - since we're now using type inference, we don't need to specify a type at all when it's already known from the context. That means we can reduce our three lines of decoding code from before to a single line that passes the decoded value directly into the userDidLogin function - and since that function accepts a User, that's the type the compiler will infer when decoding:

try userDidLogin(data.decoded())

Pretty sweet! 🍭

Improving flexibility

Our new type inference-powered Decodable API is really nice and convenient to use, but it could use some additional flexibility. First of all, it currently always creates a new JSONDecoder instance inline when decoding, making it impossible to reuse a decoder or to set one up with specific settings. This is something we can easily solve, in a fully backward compatible manner, by parameterizing the decoder using a default argument:

extension Data {
    func decoded<T: Decodable>(using decoder: JSONDecoder = .init()) throws -> T {
        return try decoder.decode(T.self, from: self)
    }
}

Parameterizing dependencies like we do above doesn't only increase flexibility - it also improves the testability of our code, since we've essentially started using dependency injection. For more on this and other kinds of dependency injection, check out "Different flavors of dependency injection in Swift".

It's now possible to use any JSONDecoder with our new API, but it's still strongly tied to JSON as the source format. While that's probably fine for most apps, some code bases also use Property Lists for serialization - which Codable fully supports out of the box as well. Wouldn't it be nice if we could make our new type inference-based API support any kind of Codable-compatible serialization mechanism?

The good news is that this can easily be achieved by using a protocol. Unfortunately there's no common built-in protocol for all decoder types, but we can easily create our own. All we have to do is extract the decode function into a protocol and then make all decoders we're interested in conform to it through extensions, like this:

protocol AnyDecoder {
    func decode<T: Decodable>(_ type: T.Type, from data: Data) throws -> T
}

extension JSONDecoder: AnyDecoder {}
extension PropertyListDecoder: AnyDecoder {}

Finally we can update our Data extension to use our new protocol, still using a new JSONDecoder instance as the default for maximum convenience:

extension Data {
    func decoded<T: Decodable>(using decoder: AnyDecoder = JSONDecoder()) throws -> T {
        return try decoder.decode(T.self, from: self)
    }
}
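For example, if our serialized data is in property list format rather than JSON, all we need to change is the decoder we pass in. Here's a self-contained sketch combining the pieces above, assuming a minimal User type just for demonstration:

```swift
import Foundation

protocol AnyDecoder {
    func decode<T: Decodable>(_ type: T.Type, from data: Data) throws -> T
}

extension JSONDecoder: AnyDecoder {}
extension PropertyListDecoder: AnyDecoder {}

extension Data {
    func decoded<T: Decodable>(using decoder: AnyDecoder = JSONDecoder()) throws -> T {
        return try decoder.decode(T.self, from: self)
    }
}

// A minimal stand-in for the article's User type
struct User: Codable { let name: String }

// Serialize a user as a property list, then decode it back
// using the exact same call site as for JSON (try! for brevity)
let plistData = try! PropertyListEncoder().encode(User(name: "John"))
let user = try! plistData.decoded(using: PropertyListDecoder()) as User
```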

Convenience and flexibility - best of both worlds! 😀

Encoding

Now let's take a look at how we can do the same thing for encoding. We'll start by creating a protocol similar to the one we made for decoding, and make JSONEncoder and PropertyListEncoder conform to it:

protocol AnyEncoder {
    func encode<T: Encodable>(_ value: T) throws -> Data
}

extension JSONEncoder: AnyEncoder {}
extension PropertyListEncoder: AnyEncoder {}

Next, let's extend Encodable to use our new protocol, again with a default value for maximum convenience:

extension Encodable {
    func encoded(using encoder: AnyEncoder = JSONEncoder()) throws -> Data {
        return try encoder.encode(self)
    }
}

We can now encode any Encodable value, simply by calling encoded() on it, like this:

let data = try user.encoded()

With encoding, we're not really using type inference as much, since the type we're encoding into is always Data - but we've still reduced verbosity compared to the default way of using the Encodable API.
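Combining this with the decoded() extension from before, a complete round trip becomes a one-liner. Here's a self-contained sketch, again assuming a minimal User type:

```swift
import Foundation

protocol AnyEncoder {
    func encode<T: Encodable>(_ value: T) throws -> Data
}

extension JSONEncoder: AnyEncoder {}

extension Encodable {
    func encoded(using encoder: AnyEncoder = JSONEncoder()) throws -> Data {
        return try encoder.encode(self)
    }
}

extension Data {
    func decoded<T: Decodable>(using decoder: JSONDecoder = .init()) throws -> T {
        return try decoder.decode(T.self, from: self)
    }
}

// A minimal stand-in for the article's User type
struct User: Codable { let name: String }

// Encode and decode again in a single expression - the decoded type
// is inferred from the property's annotation (try! for brevity)
let user: User = try! User(name: "John").encoded().decoded()
```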

Decoding containers

Finally, let's take a look at how we can leverage type inference when working with decoding containers. The major advantage Codable has over most third-party JSON frameworks is that it's tightly integrated with the compiler, which can automatically synthesize conformances as long as the structure of the serialized data matches the way we've structured our values in code.

However, that's not always the case, and sometimes we need to write manual conformances to Decodable and Encodable in order to tell the system how to translate between JSON and our types. Let's say our app deals with videos and has a Video structure to represent a video, like this:

struct Video {
    let url: URL
    let containsAds: Bool
    var comments: [Comment]
}

One common reason to write a manual Decodable implementation is to support default values. By default, if a value is missing from the JSON data, decoding fails with an error - which is not always what we want. So in order for Video to support default values, we might end up with an implementation that looks something like this:

extension Video: Decodable {
    enum CodingKey: String, Swift.CodingKey {
        case url
        case containsAds = "contains_ads"
        case comments
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKey.self)
        url = try container.decode(URL.self, forKey: .url)
        containsAds = try container.decodeIfPresent(Bool.self, forKey: .containsAds) ?? false
        comments = try container.decodeIfPresent([Comment].self, forKey: .comments) ?? []
    }
}

There's quite a lot of verbosity and manual specification of types going on above, so let's try to improve that using type inference.

We'll start by extending KeyedDecodingContainerProtocol with two methods - one that fails in case no value was found for a given key, and one that lets us specify a default value to use instead. Both use the same strategy as our Decodable extension from before, relying on the compiler to infer the type rather than requiring us to specify it at the call site:

extension KeyedDecodingContainerProtocol {
    func decode<T: Decodable>(forKey key: Key) throws -> T {
        return try decode(T.self, forKey: key)
    }

    func decode<T: Decodable>(
        forKey key: Key,
        default defaultExpression: @autoclosure () -> T
    ) throws -> T {
        return try decodeIfPresent(T.self, forKey: key) ?? defaultExpression()
    }
}

Above we're using @autoclosure to not have to unnecessarily evaluate the default expression in case it isn't needed. For more on using auto closures in Swift, check out "Using @autoclosure when designing Swift APIs".
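To illustrate the effect, here's a small standalone sketch - the value(or:isPresent:) and makeDefault functions are hypothetical, purely for demonstration - showing that an @autoclosure argument is only evaluated when it's actually needed:

```swift
var evaluationCount = 0

// A default-value expression that we want to avoid evaluating unnecessarily
func makeDefault() -> Int {
    evaluationCount += 1
    return 0
}

// Hypothetical function mirroring our decode(forKey:default:) pattern
func value(or defaultExpression: @autoclosure () -> Int, isPresent: Bool) -> Int {
    // The autoclosure is only called when no value is present
    return isPresent ? 42 : defaultExpression()
}

let present = value(or: makeDefault(), isPresent: true)   // makeDefault() is never run
let missing = value(or: makeDefault(), isPresent: false)  // makeDefault() runs once
```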

We can now update our Decodable implementation from before, with heavily reduced verbosity and much cleaner call sites, since the compiler is able to infer all types as we assign the resulting values directly to properties:

extension Video: Decodable {
    enum CodingKey: String, Swift.CodingKey {
        case url
        case containsAds = "contains_ads"
        case comments
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKey.self)
        url = try container.decode(forKey: .url)
        containsAds = try container.decode(forKey: .containsAds, default: false)
        comments = try container.decode(forKey: .comments, default: [])
    }
}
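Putting all the pieces together, we can verify that the default values kick in when keys are missing from the JSON. A self-contained sketch, assuming a minimal Comment type just for demonstration:

```swift
import Foundation

// A minimal stand-in for the article's Comment type
struct Comment: Decodable { let body: String }

struct Video {
    let url: URL
    let containsAds: Bool
    var comments: [Comment]
}

extension KeyedDecodingContainerProtocol {
    func decode<T: Decodable>(forKey key: Key) throws -> T {
        return try decode(T.self, forKey: key)
    }

    func decode<T: Decodable>(
        forKey key: Key,
        default defaultExpression: @autoclosure () -> T
    ) throws -> T {
        return try decodeIfPresent(T.self, forKey: key) ?? defaultExpression()
    }
}

extension Video: Decodable {
    enum CodingKey: String, Swift.CodingKey {
        case url
        case containsAds = "contains_ads"
        case comments
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKey.self)
        url = try container.decode(forKey: .url)
        containsAds = try container.decode(forKey: .containsAds, default: false)
        comments = try container.decode(forKey: .comments, default: [])
    }
}

// Neither "contains_ads" nor "comments" is present in this payload,
// so both default values should be used (try! for brevity)
let json = "{\"url\": \"https://example.com/video.mp4\"}"
let video = try! JSONDecoder().decode(Video.self, from: Data(json.utf8))
```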

Much easier to read! 👍

Conclusion

Type inference can be a really powerful tool, not only when calling APIs but also when designing them. By relying on the compiler to resolve the right type, rather than requiring API users to manually specify them, we can end up with cleaner code that is much nicer to read and APIs that are easier to use.

However, there are also a few dangers in relying on type inference too much. First, it can be a common source of slow build times, since the compiler has to do more work to type-check an expression (especially when operators or multiple overloads are involved). Second, by removing explicit type annotations we risk making some code harder to understand. As always, it's important to consider when and how to design APIs with type inference in mind, and not to overdo it.

What do you think? Do you currently leverage type inference in your own APIs or through extensions on top of the standard library, or is it something you'll try out? Let me know - along with your questions, comments or feedback - on Twitter @johnsundell.

Thanks for reading! 🚀