UI testing analytics code in Swift
UI testing can be a really useful tool for ensuring that your app's key user interactions work as expected. Perhaps even more importantly, it's a great way to add an extra level of protection against regressions as you continuously change and refactor your code base.
However, UI tests can be quite expensive, both to write and to run. Since they require the app to be fully running, and interact with it through Apple's accessibility APIs, they can take a long time to run (especially if you have a large suite of them). They can also be a bit tricky to implement, as you need to query your view hierarchy in a way that's not always easy to figure out.
So being a bit more selective and carefully picking where and how to deploy UI tests becomes very important. It's a great tool to have, but - like most technologies - it's not a silver bullet. This week, let's take a look at how to add UI tests to verify an app's analytics code, and how doing so can be a great way to quickly gain broad UI testing coverage.
Before we begin, I suggest reading "Getting started with Xcode UI testing in Swift" and "Building an enum-based analytics system in Swift" (if you haven't done so already), since we'll base our UI tests on the code from those posts.
Why UI tests?
Given that UI tests can be expensive, why use them to verify analytics code? Aren't unit tests enough? For me, there are three reasons why UI tests are a great fit for analytics code:
- They enable us to do full integration testing of our analytics: actually tapping controls, moving between screens, and asserting that the right events are logged.
- Since we have to move across pretty much our entire app in order to fully test our analytics, it's a great way to easily assert that all of our core user interactions are working as expected.
- Since UI tests are less tightly bound to the implementation of our view controllers (they can only assert what's visually displayed), they're more likely to stand the test of time as we iterate on and refactor our app. For something like analytics, this can mean less maintenance while still getting solid protection against regressions.
As an example, earlier this year I inherited a project whose UI layer was badly in need of refactoring. It also didn't have analytics, so one of my first tasks was to add such a system to the app. As I added analytics, I also implemented UI tests for all events. That way, when it was time to start the UI refactoring work, I was much more confident that my changes wouldn't break anything, since more or less the whole app was covered by UI tests thanks to the analytics.
Setting things up
So what do we need to do in order to make an analytics system testable through UI tests? First up, we need a visual way for our UI tests to be able to assert that a certain event was logged. Since UI tests don't get code-level access to our app (like unit tests do), we can't simply use mocks to capture events - we need to get a bit more... creative.
The easiest way I've found to capture events in UI tests is to simply trigger an alert every time an event is logged. That way, our UI tests can simply look for an alert with a given title and message to assert that a certain event was logged. It's also really convenient, since Xcode's UI test runner automatically dismisses alerts, so we won't risk messing up our app's view hierarchy.
This is where the layered architecture from last week's post comes in handy. Since we abstracted the actual logging work into an AnalyticsEngine protocol, we can easily implement an engine that displays an alert for all events:
class AlertAnalyticsEngine: AnalyticsEngine {
    private let window: UIWindow

    // We initialize the engine with a window, instead of a view controller,
    // since we want to display an alert from the currently displayed
    // view controller.
    init(window: UIWindow) {
        self.window = window
    }

    func sendAnalyticsEvent(named name: String, metadata: [String : String]) {
        // We turn the metadata dictionary into a string and
        // display it as the alert's message.
        var message = "Metadata: "

        for (key, value) in metadata {
            message.append("(\(key), \(value))")
        }

        let alert = UIAlertController(
            title: "Analytics event: \(name)",
            message: message,
            preferredStyle: .alert
        )

        alert.addAction(UIAlertAction(
            title: "Dismiss",
            style: .cancel,
            handler: nil
        ))

        // In order to handle both view controllers presented as part of a
        // container view controller and as modals, we need to check if
        // there's currently a presented view controller.
        var viewController = window.rootViewController

        if let presentedViewController = viewController?.presentedViewController {
            viewController = presentedViewController
        }

        viewController?.present(alert, animated: false, completion: nil)
    }
}
Next, let's use our new AlertAnalyticsEngine whenever we're running in UI testing mode. Just like how we made sure that our app's state is always reset before running a UI test in "Getting started with Xcode UI testing in Swift", we'll use the same --uitesting command line argument to choose which analytics engine to use:
extension AnalyticsManager {
    static func make(withWindow window: UIWindow) -> AnalyticsManager {
        if CommandLine.arguments.contains("--uitesting") {
            return AnalyticsManager(engine: AlertAnalyticsEngine(window: window))
        }

        return AnalyticsManager(engine: CloudKitAnalyticsEngine())
    }
}
We can now call the above API when setting up our AnalyticsManager, for example in some form of dependency container, flow controller, factory, or in our AppDelegate:
let analyticsManager = AnalyticsManager.make(withWindow: window)
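For example, here's a minimal sketch of what that wiring could look like in an AppDelegate. The LoginViewController and its initializer are assumptions made for illustration, not part of the original code:

import UIKit

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(
        _ application: UIApplication,
        didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey : Any]?
    ) -> Bool {
        let window = UIWindow(frame: UIScreen.main.bounds)

        // Create the analytics manager using the factory method above,
        // which picks the alert-based engine when running UI tests.
        let analyticsManager = AnalyticsManager.make(withWindow: window)

        // LoginViewController is a hypothetical root view controller,
        // assumed here just to show the injection.
        window.rootViewController = LoginViewController(
            analyticsManager: analyticsManager
        )

        window.makeKeyAndVisible()
        self.window = window

        return true
    }
}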
And with that, our app is now ready for us to UI test our analytics code!
Let's get testing
Let's start by testing that the loginScreenViewed event is logged when the login screen has been displayed. Since we'll also use the --uitesting command line argument to reset our app's state, we can assume that the login screen is the first thing displayed when the app is launched. So all we have to do is launch the app and assert that the correct alert is displayed:
class AnalyticsUITests: XCTestCase {
    var app: XCUIApplication!

    // MARK: - XCTestCase

    override func setUp() {
        super.setUp()

        // Since UI tests are more expensive to run, it's usually
        // a good idea to exit if a failure was encountered
        continueAfterFailure = false

        app = XCUIApplication()

        // We send the --uitesting command line argument to the app to
        // reset its state and to use the alert analytics engine
        app.launchArguments.append("--uitesting")
    }

    // MARK: - Tests

    func testLoggingLoginScreenViewed() {
        app.launch()
        XCTAssertTrue(app.isDisplayingAlertForAnalyticsEvent(.loginScreenViewed))
    }
}
Above we're using an isDisplayingAlertForAnalyticsEvent method on XCUIApplication to verify that the correct alert is being displayed. This is a pattern I recommend in general - creating abstractions and extensions on XCUIApplication to make your testing code simpler and cleaner. Let's go ahead and implement that method now:
extension XCUIApplication {
    func isDisplayingAlertForAnalyticsEvent(_ event: AnalyticsEvent) -> Bool {
        let alert = alerts.element

        // First verify that an alert exists at all
        guard alert.exists else {
            return false
        }

        // Gather our expected title and message, by following the same
        // process as in our production code. While this is code duplication,
        // I personally think it's a good idea not to create too many shared
        // abstractions between testing and production code, since a bug in
        // a shared abstraction could make a test pass even though the
        // production code is broken.
        let expectedTitle = "Analytics event: \(event.name)"
        var expectedMessage = "Metadata: "

        for (key, value) in event.metadata {
            expectedMessage.append("(\(key), \(value))")
        }

        // Verify that both the title and the message exist within the alert
        guard alert.staticTexts[expectedTitle].exists else {
            return false
        }

        guard alert.staticTexts[expectedMessage].exists else {
            return false
        }

        return true
    }
}
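Note that, since UI test bundles don't get access to the app module's types, the source file containing AnalyticsEvent typically needs to be added to the UI testing target as well. For reference, here's a minimal sketch of the event type assumed above, based on the enum-based analytics system from the earlier post - the exact cases and metadata keys are assumptions for illustration:

enum LoginFailureReason: String {
    case invalidCredentials
}

enum AnalyticsEvent {
    case loginScreenViewed
    case loginSucceeded
    case loginFailed(reason: LoginFailureReason)

    // The name of the event, used as the alert's title in UI testing mode
    var name: String {
        switch self {
        case .loginScreenViewed:
            return "loginScreenViewed"
        case .loginSucceeded:
            return "loginSucceeded"
        case .loginFailed:
            return "loginFailed"
        }
    }

    // The metadata attached to the event, rendered into the alert's message
    var metadata: [String : String] {
        switch self {
        case .loginScreenViewed, .loginSucceeded:
            return [:]
        case .loginFailed(let reason):
            return ["reason": reason.rawValue]
        }
    }
}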
With that, we have written our first UI test to verify that our analytics work as expected, and in the process we have also verified that the login screen is indeed the first thing the user sees when the app is opened!
Verifying asynchronous events
That was just the warmup - now let's dive into the fun stuff! Asynchronous code is notoriously challenging to test, since tests are usually structured synchronously. Thankfully, with UI tests, all we have to do when dealing with asynchronous tasks is to wait for a certain UI element to appear on the screen.
Let's write another test that verifies that the user can successfully log in to our app - which is an asynchronous, network-bound feature. Just like how we defined a method that lets us verify that a given analytics alert is being displayed, we'll start by doing the same thing for logging in:
extension XCUIApplication {
    func login(withUsername username: String, password: String) {
        // We can query views based on their content, their accessibility
        // identifier, or - in the case of text fields - their placeholder
        let usernameTextField = textFields["Username"]
        usernameTextField.tap()
        usernameTextField.typeText(username)

        let passwordTextField = secureTextFields["Password"]
        passwordTextField.tap()
        passwordTextField.typeText(password)

        let loginButton = buttons["Login"]
        loginButton.tap()
    }
}
With the above method in place, we can now easily write tests that verify our app's login feature, by calling it and then waiting for an alert to appear:
func testLoggingLoginSucceeded() {
    app.launch()
    app.login(withUsername: "[email protected]",
              password: "some-long-and-very-secure-string")

    // Since this is an asynchronous, network-bound operation, we'll wait
    // for a while for an alert to appear on the screen before
    // performing an assertion
    _ = app.alerts.element.waitForExistence(timeout: 5)

    XCTAssertTrue(app.isDisplayingAlertForAnalyticsEvent(.loginSucceeded))
}
While it can be tempting to simply add fixed delays to tests like the one above, I suggest waiting for a given element to appear instead of always waiting for an arbitrary number of seconds. It'll make your tests both more stable and faster to run.
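To make that pattern reusable, one option is a small helper that combines the waiting and the verification. This is a hypothetical convenience, not part of the original code, and the default timeout is an arbitrary choice:

extension XCUIApplication {
    // Waits for an alert to appear, then verifies that it describes
    // the given analytics event. Returns false if no alert showed up
    // within the timeout.
    func waitForAnalyticsEvent(_ event: AnalyticsEvent,
                               timeout: TimeInterval = 5) -> Bool {
        guard alerts.element.waitForExistence(timeout: timeout) else {
            return false
        }

        return isDisplayingAlertForAnalyticsEvent(event)
    }
}

With that in place, the test above boils down to a single assertion: XCTAssertTrue(app.waitForAnalyticsEvent(.loginSucceeded)).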
Verifying metadata
Finally, let's write a test that verifies that the correct metadata is sent for a given analytics event. For this one we'll use the loginFailed event, which contains a LoginFailureReason. The good news is that since we already set up our isDisplayingAlertForAnalyticsEvent method to also verify metadata, all we need to do is attempt to log in with invalid credentials, and assert that the correct analytics event was logged:
func testLoggingLoginFailed() {
    app.launch()
    app.login(withUsername: "[email protected]", password: "password")

    _ = app.alerts.element.waitForExistence(timeout: 5)

    XCTAssertTrue(app.isDisplayingAlertForAnalyticsEvent(
        .loginFailed(reason: .invalidCredentials)
    ))
}
As you can see, once you have the setup in place and start building up a "library" of UI testing convenience APIs for yourself, writing new tests becomes quick and easy. Like with many things, getting started is the hardest part - it gets easier and easier from there!
Conclusion
UI tests can be an awesome tool when you want to verify how your code integrates with the rest of your app, and how it behaves when the user actually interacts with the UI. By deploying UI tests for things like analytics, we can gain confidence both that we're sending the right events at the right time, and that the key user interactions that trigger those events work as expected.
While UI tests won't completely protect you against bugs and regressions, they're a good safety net to have. My own suite of UI tests has stopped me multiple times from accidentally causing a regression when refactoring or adding new features, so when used selectively (like for analytics) I think they're often a net win.
What do you think? Have you used UI tests like this before, or is it something you'll try out? Let me know, along with any questions, comments or feedback you have - on Twitter @johnsundell.
Thanks for reading! 🚀