Originally published at Analyst Admin (https://analystadmin.com).
In my experience working with Adobe Analytics, I’ve found that Processing Rules help in some cases, but oftentimes they create more work. I try to avoid using Processing Rules whenever possible. In this post, I will cover the main reasons why using Adobe Analytics Processing Rules is not worth it for me.
Before we get into it, let me start by saying that I have a Processing Rules Exporting Tool. Check it out if you work with a lot of processing rules and report suites.
Tracing Issues Is More Difficult
Part 1 — Additional Point of Failure
Picture this — you see a strange value in your Custom Conversion report, so you begin to investigate. The page data layer looks fine. The data beacon looks fine. You can’t replicate the issue. Then you begin to wonder: maybe the issue only appears on certain devices? Or only in certain browsers? You spend more time checking and rechecking your tracking. Eventually, you remember the Processing Rule you created a year ago, and sure enough, it turns out to be the culprit.
In a typical Adobe Analytics data pipeline, data gets collected at the browser, then goes to Adobe for processing. Any point where data is created, collected, or transformed can be a point of failure. For example:
- The server can send faulty data
- The tag manager can be misconfigured
- Adobe Analytics could be filtering your data via Bot Rules or IP Filters
- Adobe Marketing Channel Processing Rules could be miscategorizing your traffic sources
Adding Processing Rules to the pipeline introduces yet another transformation step. Each additional step inherently increases the complexity of the model and adds another point of failure, which makes tracing issues more difficult.
Part 2 — Processing Rules Have a Cascading Effect
That’s right — Processing Rules are daisy-chained. This means that variables are transformed and passed down to the next Processing Rule, which makes tracing issues difficult.
Let’s say you have 50 rules, and you identify that rule #25 is transforming your variable. By the time beacon data reaches rule #25, it could already have been transformed by up to 24 preceding rules. This means that if you see a data beacon with a value of “abc”, by the time the data reaches rule #25 it could say “xyz” instead. A somewhat useful but human-error-prone way to check what Processing Rules are doing is to take a sample value, manually step through each rule, and keep track of the transformations on paper.
You also need to worry about the rules that follow the one you are working with. Every rule has the same potential to modify your variable, whether it runs before or after the rule you edit.
Take this example:
- Rule #1: Set v3 on checkout page where campaign = social
- Rule #2: Set c3 from v3
- Rule #3: Copy v5 into c5
- Rule #4: Patch for v3 missing on cart page
- Rule #5: Set v4 based on v3 equal to “us|shop”
- Rule #6: Set v3 on homepage for mobile
- Rule #7: Add v3 when v3 is not set
If you needed to update Rule #4, it would behoove you to also check the rules before and after #4 to make sure the final state of v3 is what you were expecting.
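The cascading behavior above can be modeled as an ordered list of transformations, where each rule sees the output of all earlier rules. Here is a minimal Python sketch (the rule logic is hypothetical, purely for illustration) showing how a rule that copies a variable and a later rule that overwrites it can leave you with a surprising final state:

```python
# Minimal sketch: Processing Rules modeled as an ordered list of
# (condition, action) pairs applied to a beacon's variables.
# The rules below are hypothetical, for illustration only.

def apply_rules(variables, rules):
    """Apply each rule in order; later rules see earlier rules' output."""
    for condition, action in rules:
        if condition(variables):
            action(variables)
    return variables

rules = [
    # Rule #1: set v3 on the checkout page when campaign is "social"
    (lambda v: v.get("page") == "checkout" and v.get("campaign") == "social",
     lambda v: v.__setitem__("v3", "social-checkout")),
    # Rule #2: copy v3 into c3 (only fires if an earlier rule set v3)
    (lambda v: "v3" in v,
     lambda v: v.__setitem__("c3", v["v3"])),
    # Rule #6: overwrite v3 on mobile -- silently changes v3's final state
    (lambda v: v.get("device") == "mobile",
     lambda v: v.__setitem__("v3", "mobile-override")),
]

beacon = {"page": "checkout", "campaign": "social", "device": "mobile"}
result = apply_rules(beacon, rules)
# c3 keeps the value v3 had when rule #2 ran, but v3 was later overwritten:
print(result["c3"])  # social-checkout
print(result["v3"])  # mobile-override
```

Notice that c3 and v3 disagree at the end: the copy in rule #2 captured an intermediate value, not the final one. This is exactly the kind of mismatch that makes tracing through dozens of daisy-chained rules painful.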
Now picture 100+ rules, a laggy browser, 10 team members, and more than a few not-so-great rule names — it’s a recipe for disaster.
The Processing Rules Interface Is Clunky
When you first load the Processing Rules editor, all rules are collapsed, which makes it impossible to CMD+F (search) by rule definition. Expanding rules takes a long time, and the more rules you have expanded, the worse the whole page lags. I’ve had the page crash after expanding 100+ rules, forcing me to start over from the top.
In my desperation, I tried every browser imaginable and found that Firefox is the fastest when dealing with 100+ processing rules.
Bonus: the interface is so clunky and slow that one time, while waiting for a rule to expand, I built an entire Processing Rules exporting tool that uses the 1.4 API. Try it here.
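If you want to script against the 1.4 API yourself, the main hurdle is its WSSE authentication header. Below is a minimal Python sketch of building the `X-WSSE` header value; the username and shared secret are placeholders, and you should check the 1.4 Admin API docs for the exact method names (e.g. the report suite Processing Rules methods) available to your company:

```python
# Minimal sketch of WSSE authentication for the Analytics 1.4 API.
# Username and shared secret below are placeholders, not real credentials.
import base64
import hashlib
import secrets
from datetime import datetime, timezone

def build_wsse_header(username, shared_secret):
    """Build an X-WSSE value: digest = base64(sha1(nonce + created + secret))."""
    nonce = secrets.token_hex(16)
    created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    digest = base64.b64encode(
        hashlib.sha1((nonce + created + shared_secret).encode()).digest()
    ).decode()
    return (
        f'UsernameToken Username="{username}", PasswordDigest="{digest}", '
        f'Nonce="{base64.b64encode(nonce.encode()).decode()}", Created="{created}"'
    )

header = build_wsse_header("user:company", "my-shared-secret")
print(header)
```

You would then send this value as the `X-WSSE` request header on each call to the 1.4 REST endpoint.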
Processing Rules Are Limited
Yes — 150 rules per report suite is the max…can we lower it to zero?
Keep in mind that you will only see this message when you try to save. The page will let you add more than 150 rules, but it won’t actually save them.
Documentation Is An Issue
Keeping track of Adobe eVars, Props, Success Events, and their corresponding Processing Rules can only be done manually; there is no built-in way to extract variable transformations from Processing Rules. This is part of the reason why tracing issues involving Processing Rules is more difficult.
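Once you have your rules exported (for example, with an exporting tool), you can at least semi-automate the bookkeeping. Here is a rough Python sketch that indexes which rules mention which variables; the rule dict structure below is hypothetical, so adapt it to whatever format your export produces:

```python
# Sketch of a documentation aid: given rules exported as a list of dicts
# (this structure is hypothetical -- adapt it to your actual export format),
# build an index of which rules touch which eVar/prop/event.
import re
from collections import defaultdict

def index_rules_by_variable(rules):
    """Map each eVar/prop/event mentioned in a rule to the rule names that touch it."""
    index = defaultdict(list)
    pattern = re.compile(r"\b(evar\d+|prop\d+|event\d+)\b", re.IGNORECASE)
    for rule in rules:
        text = rule["name"] + " " + rule["definition"]
        for var in set(pattern.findall(text)):
            index[var.lower()].append(rule["name"])
    return dict(index)

rules = [
    {"name": "Set campaign eVar", "definition": "Overwrite evar3 with queryParam(cid)"},
    {"name": "Copy to prop", "definition": "Overwrite prop3 with evar3"},
]
index = index_rules_by_variable(rules)
print(index["evar3"])  # both sample rules touch evar3
```

A text-based index like this also restores the ability to search rule definitions, which the collapsed editor UI takes away.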
Testing Processing Rules Is Slow
Testing Processing Rules requires patience, since there is no real-time Processing Rules testing feature. For example, if you created a new rule for an eVar and wanted to validate it by checking the data, you would have to wait up to 90 minutes for the eVar data to be processed and become available in Analysis Workspace or Reports & Analytics.
As a workaround, I sometimes copy the value to a prop, since props are available in real-time reporting and can be validated immediately without waiting for the eVar.
You might think, “This guy really hates Processing Rules!” The truth is that I don’t hate them; I just find that using them is usually not worth the hassle. There are perfectly valid reasons to use them — for instance, when you have no choice (Adobe Heartbeat) or when you need to put in a quick patch.
Now let’s hear from you — what’s your experience using Adobe Analytics Processing Rules?