Adobe Analytics mobile app validation with Project Griffon

Alex Bishop
9 min read · May 3, 2020

#1 Overview

This post adds to the Adobe Experience Platform SDK series, so if you’re just getting started on the implementation journey then I’d recommend starting here. QA is an essential part of any implementation project, and mobile app validation is often more challenging to co-ordinate, so Griffon has been a timely addition.

There are no prerequisites for the implementation phase but, in order to access the Griffon UI, you will need to fill out the request form here. The steps below cover implementing Griffon in your app, connecting to a Griffon session, understanding the basics of the UI and exporting data to a CSV file. This post focuses on Adobe Analytics but the ideas are applicable to any data that is generated by the SDK. Similarly, the steps refer to an iOS Objective-C implementation but everything is transferable to Android, the main difference being that any references to pods/CocoaPods would be swapped out for their Gradle equivalents.

#2 Implementing Griffon

The Launch configuration is extremely simple: the extension requires no configuration, so you just need to install it and then move on to the app implementation phase.

I talked through Podfiles in the second article of this series (see here), so I won’t go through it all again; just be aware that you will need to add an entry for ACPGriffon.
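
A rough sketch of the Podfile entry, where the target name and the other pods are placeholders for whatever your app already uses:

```
# Podfile: add ACPGriffon alongside your existing AEP SDK pods
target 'MyApp' do  # placeholder target name
  pod 'ACPCore'
  pod 'ACPAnalytics'
  pod 'ACPGriffon'
end
```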

I also talked through the Mobile Install instructions in the second article; again, bear in mind that there is an import statement requirement for Griffon.

You will need to add that ACPGriffon.h import statement into your AppDelegate file.
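
You should end up with something looking like this minimal sketch, which assumes you already import ACPCore:

```
// AppDelegate.m
#import "AppDelegate.h"
#import "ACPCore.h"     // assumed existing import
#import "ACPGriffon.h"  // the new Griffon import
```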

You also need to register the ACPGriffon extension; again, the installation instructions provide you with everything that you need.

There is plenty of information in the help docs on exactly where to add this statement but, assuming you already have some other extensions installed, your AppDelegate file should look something like this.
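
A rough sketch, with the Launch app ID and the other extensions standing in for whatever you already have registered:

```
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    [ACPCore configureWithAppId:@"your-launch-environment-id"]; // placeholder ID
    [ACPAnalytics registerExtension]; // assumed existing extension
    [ACPGriffon registerExtension];   // register Griffon alongside the others
    [ACPCore start:nil];
    return YES;
}
```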

The final part of the implementation phase is adding the session start call into your app.
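
As per the install instructions, it belongs in your openURL handler, something along these lines:

```
// Forward the Griffon deeplink to the SDK when the app is opened via its URL scheme
- (BOOL)application:(UIApplication *)app
            openURL:(NSURL *)url
            options:(NSDictionary<UIApplicationOpenURLOptionsKey, id> *)options {
    [ACPGriffon startSession:url];
    return YES;
}
```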

When I first started looking into Griffon, I somehow missed this step, so I can assure you from experience that it is essential; otherwise nothing will work. You might notice that the session start call makes use of deeplink functionality; and, yes, this does unfortunately mean that if your app doesn’t support deeplinks then you can’t use Griffon.

#3 Start a Griffon session

Assuming you’ve filled out the request form and now have access to the UI, head to https://experience.adobe.com/#/griffon to get started. The first thing to do is to create a new session by providing a session name and Base URL. The base URL should be your app’s URL scheme followed by “://” (so if your URL scheme is myapp, the base URL would be myapp://), but speak to your developer if you’re not sure what it is.

If the details you entered were valid then you should see a screen which provides a couple of different ways of launching your session. I’m just using a sample app via Xcode’s simulator, so I’ll use the link, but there is also the option of using a QR code.

The next step is to enter the deeplink URL into the browser’s address bar and then click “Open” when the dialog box is displayed:

You should hopefully now see a Project Griffon screen with a keypad, which is where you will enter the PIN that is displayed in the Griffon UI. I’ve noticed that sometimes when I first launch the app I get a connection error, but this is solved by clicking “retry” and then entering the PIN.

If your connection was successful, you will see a green tick in the Griffon UI, as well as the AEP SDK logo displaying on your app:

#4 Understanding the UI

As soon as the connection is successful you will see a number of events displaying in the Griffon UI:

Whilst all of this event data provides useful information, a lot of it isn’t relevant to the analytics implementation. However, you can use the Adobe Analytics Event List filter to narrow things down to analytics requests only:

When my app launches there is a track state call to register a page view and a lifecycle call to capture all the useful lifecycle metrics.
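
For reference, the page view comes from a track state call along these lines, where the state name and context data variable are invented for illustration:

```
// Illustrative track state call; "home screen" and "page.section" are made up
[ACPCore trackState:@"home screen"
               data:@{@"page.section": @"home"}];
```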

After navigating around a few different pages, you start to get a feel for what you should expect to see during a validation session:

You can also click into each hit to get a more detailed view of the information that has been collected. A lot of this information is generated automatically by the SDK; however, if you’ve added context data variables to customise your implementation, they will also show up here:

Once you have finished testing, you then have the option of “Export to JSON”. This is a useful feature because, with the filter applied, it will export only your analytics hits, although it does mean that you need some way of translating the export into something a bit more user friendly.

#5 Custom Validation basics

Whilst exporting your hit data in JSON format is useful, you also have the option to apply validation to your data first, as well as being able to re-format the output.

I find that the code block is a bit small for my liking, which means a lot of scrolling is involved, but this is easily resolved by finding the div with the “CodeMirror cm-s-default” class and increasing its height.
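
For example, from the DevTools console; the class name is just what I observed at the time of writing, so it may have changed:

```
// Make the validation editor taller (run in the DevTools console;
// see the note about the iframe context further down)
document.querySelector('.CodeMirror').style.height = '600px';
```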

As per the Griffon UI, if you don’t have any filtering applied to your custom validation then you will end up with a huge number of events to wade through.

However, it’s a simple task to apply a filter so that you are only dealing with the AnalyticsTrack event type.
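
Something like this does the trick, assuming the validation code receives the session’s events as an array called events (check the template the editor pre-populates) and that analytics hits can be identified by their event name; verify both assumptions against your own session:

```
// Keep only the Analytics track events; the field name and the
// "AnalyticsTrack" value are assumptions to verify in your session
var hits = events.filter(function (event) {
  return event.payload &&
         event.payload.ACPExtensionEventName === 'AnalyticsTrack';
});
```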

Looking into one of the events in a bit more detail, you can see that the payload has some standard ACPExtensionEvent data, as well as your context data variables and view state information.
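
In outline, the payload of a track state event looks something like this; the values are invented and the exact type/source strings should be checked against your own session:

```
{
  "ACPExtensionEventName": "AnalyticsTrack",
  "ACPExtensionEventType": "com.adobe.eventtype.generic.track",
  "ACPExtensionEventSource": "com.adobe.eventsource.requestcontent",
  "ACPExtensionEventData": {
    "state": "home screen",
    "contextdata": { "page.section": "home" }
  }
}
```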

#6 Export data to Excel

One option with Custom Validation would be to create your own rules based on expected values and then loop through the different event properties to see if they fit your criteria. However, in my case, I just want to export the data so that I have an easy-to-read overview of my testing session.

The code examples below will make it very clear why I am not a developer; however, they should give you an idea of what is possible. The first thing I want to do is turn the timestamp of each hit into something that is human readable.
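
A sketch, assuming each event carries an epoch-millisecond timestamp property (hits is the filtered array from the previous step):

```
// Add a human-readable time to each hit, assuming event.timestamp
// is in epoch milliseconds
hits.forEach(function (hit) {
  hit.readableTime = new Date(hit.timestamp).toLocaleTimeString();
});
```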

The next step is to loop through the context data variables within each hit, as well as some of the other event data that provides useful information.
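
Roughly like this; the field names are assumptions based on what the event detail view shows, and page.section is the invented example variable from earlier:

```
// Build one row per hit: readable time, raw timestamp, view state and
// an example context data variable
var rows = hits.map(function (hit) {
  var eventData = hit.payload.ACPExtensionEventData || {};
  var contextData = eventData.contextdata || {};
  return [
    hit.readableTime,
    hit.timestamp,
    eventData.state,
    contextData['page.section']
  ];
});
```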

I then want to order the hits based on the timestamp, so that they match the order of my test journey, as well as providing the column headers for my CSV file.
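
Again as a sketch, relying on index 1 of each row holding the raw timestamp:

```
// Sort the rows chronologically, then prepend the CSV column headers
rows.sort(function (a, b) { return a[1] - b[1]; });
var results = [['Time', 'Timestamp', 'State', 'Page Section']].concat(rows);
```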

The outcome of the steps above is that I now have a results array, which has my column headers in position 0 and then all of my hit data in the subsequent positions, ordered by timestamp:

You may notice that the console context is “index.html”: the code editor sits within an iframe, so if you do create window-scoped variables, just be aware that you will need to change the context from “top” in order to access them. In case you’re not familiar with how to do this, I find the easiest way is to right click somewhere in the code editor, click “inspect element” and then head back to the console tab.

The reason that I’ve structured everything in arrays is that the function exporting the data to a CSV file expects this format; however, this is by no means the only way to do things.
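
If you need a starting point, a minimal export function might look like the sketch below; it quotes each cell, joins everything into a CSV string and triggers a browser download:

```
// Minimal CSV export sketch: builds a CSV string from an array of
// row arrays and downloads it via a temporary link
function exportToCsv(filename, rowData) {
  var csv = rowData.map(function (row) {
    return row.map(function (cell) {
      return '"' + String(cell).replace(/"/g, '""') + '"';
    }).join(',');
  }).join('\n');
  var link = document.createElement('a');
  link.href = URL.createObjectURL(new Blob([csv], { type: 'text/csv' }));
  link.download = filename;
  document.body.appendChild(link);
  link.click();
  document.body.removeChild(link);
}

exportToCsv('griffon-session.csv', results);
```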

Clicking “Validate” runs the code and triggers the CSV file download:

After opening the file in Excel, and changing the time and timestamp columns to “text”, I’m left with a clean view of each hit that was in my session:

#7 Summary

In most implementations there will be different context data variables included for different view states and interactions, so the logic required to generate the CSV structure will be more complex; however, the general concept remains the same. In previous app implementations I’ve experienced issues ranging from getting hold of a test build through to network restrictions making the use of tools like Charles Proxy a nightmare. These are just a few examples of why I can see Griffon becoming an important part of the validation toolkit. However, as I briefly alluded to earlier, the main drawback is that it does require deeplink functionality in the app.

When there are problems with accessing a test build, the answer can often be for the developers to send screenshots or logs of the analytics data being generated; however, this makes it really difficult to get a clear sense of whether everything is working correctly or not. Griffon means that a developer doesn’t have to worry about any of this: all they have to do is create a new Griffon session and deeplink into the app. An even better option would be to integrate Griffon before any of the general app QA process begins; then you get the benefit of a huge amount of test session data with no extra effort required from those who are testing.

I hope this post has given you a better understanding of what Griffon is capable of, as well as some ideas of how it can help to enhance your existing app QA processes. I don’t think it’s on the roadmap, but I can also think of a few interesting use cases based on retrieving data from Griffon via APIs, so I hope that’s something that can be revisited in the future!
