GA Intelligence Alerts as “Analytics Circuit Breakers” for Product Managers

If you are a consumer PM, you likely spend a lot of time analyzing patterns of user behaviour on your product. If it’s a web product, you likely spend a decent percentage of this time in Google Analytics. You may also have specialized analytics systems, but GA is often a PM’s go-to for quick ad-hoc analysis.

Over time, you will use this ad-hoc analysis to build mental models about your product and your users. The problem arises when you treat these behaviours as static. The same data points you used to build your understanding of user behaviour can quietly become a liability. Eventually they go stale and then rancid, like a forgotten cup of yogurt in the back of your fridge.

Intelligence Alerts are an under-utilized GA feature that can help you with this. You can use them to codify your assumptions about user behaviour and get alerted when those assumptions stop holding. Set notification thresholds calibrated to your perceived lower bound on normal usage. When a niche but important feature breaks silently a few months from now, you will be the first to know.
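The alert itself is configured in the GA interface rather than in code, but the assumption it encodes is simple enough to sketch. Here is a toy TypeScript version of the "circuit breaker" idea; the event name, the floor value, and the fetchDailyEventCount helper are all hypothetical stand-ins for your own setup:

```typescript
// Toy sketch of the assumption a custom alert encodes:
// "this feature's daily event count never drops below a floor I chose."
declare function fetchDailyEventCount(eventName: string): Promise<number>;

const FLOOR = 50; // perceived lower bound on normal daily usage

async function checkSavedSearchAlert(): Promise<void> {
  const count = await fetchDailyEventCount("saved-search-created");
  if (count < FLOOR) {
    // In GA this would arrive as an email notification.
    console.warn(`saved-search-created dropped to ${count}, below floor of ${FLOOR}`);
  }
}
```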

Google Analytics Custom Alerts

Remote Research for Product Managers

This is a brief review of the book Remote Research, and a summary of points that resonated with me.

Key Concepts

Moderated research – Real-time interaction with a user. It is time-expensive, but the greater “texture” of the interaction makes it easier to discover unanticipated insights.

“Moderated research allows you to gather in-depth qualitative feedback: behavior, tone-of-voice, task and time context, and so on. Moderators can probe at new subjects as they arise over the course of a session, which makes the scope of the research more flexible and enables the researcher to explore behaviors that were unforeseen during the planning phases of the study. Researchers should pay close attention to these “emerging topics,” since they often identify issues that were overlooked during the planning of the study.”

Automated research – Data collection process is set up a priori and the research is conducted asynchronously, without your involvement.

“Automated research is nearly always quantitative and is good at addressing more specific questions (“What percentage of users can successfully log in?” “How long does it take for users to find the product they’re looking for?”), or measuring how users perform on a few simple tasks over a large sample. If all you need is raw performance data, and not why users behave the way they do, then automated testing is for you.”

Starting an interaction – The quality of your data in a moderated study is influenced by the consistency and quality of your participant on-boarding process.

“Establish the users’ expectations about what will happen during the study and what kind of mindset they should have entering the study. The most important things to establish are that you want the participants to use the interface like they normally would … And let them know you’d also like them to think aloud while they’re on the site … It’s also nice to set users at ease by reassuring them that you had nothing to do with the design of the interface, so they can be completely honest:”

Time-Aware Research – Using live recruitment in a moderated study leads to richer, more authentic interactions with participants in their native environment.

“Remote research is more appropriate when you want to watch people performing real tasks, rather than tasks you assign to them. The soul of remote research is that it lets you conduct what we call Time-Aware Research (TAR).”

Execution Tips

Progress from high to low variability – Start the session with undirected natural tasks, which gives the participant space to surprise you. Finish by running through any tasks the user did not complete naturally, this time in a structured manner.

Timestamp your notes – Record timestamps as “time since session start” instead of absolute times, to make them easier to review later.

Cross-reference “control” metrics with your analytics – Double-check that your research is not biased due to a flaw in the design or structure of the study.

“If there’s a discrepancy between your study findings and the Web site’s analytics (“80% of study participants clicked on the green button, but only 40% of our general Web audience does”), it could mean that the task design was flawed, the target audience of the study differs from that of the main audience, or that there’s an unforeseen issue altogether.”

Ask open-ended questions – Remain neutral to avoid influencing the responses from participants.

“So, tell me what you’re looking at … What’s going through your mind right now? … What do you want to do from here? … When did you decide to leave the site/exit the program? … What brought you to this page?”

Thoughts

Remote Research lays out a comprehensive framework for starting to conduct research studies at your company, and is useful for beginners or for filling gaps in your mental model. However, it seems more targeted towards large companies with established UX practices than towards startups. If you are executing alone—perhaps as a one-man UX team—you may still feel a gap between theory and execution. The tools section of the book feels dated, which is understandable, but it would be great to see more tactical information on conducting remote research on the cheap. Two tricks that I have used at work myself are:

  • Run tests from Google Tag Manager – Aligning with the owner of the tracking platform (often the Product team) gets the necessary code live faster than going through IT; a minimal sketch follows this list.
  • Use a general session recording tool – With a tool such as Inspectlet, you can record most or all user interactions and filter the recordings down afterwards. This lets you observe a very specific behaviour chain that may not occur frequently enough on your site to target users live.
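Here is a minimal sketch of the first trick, written as the hypothetical body of a GTM Custom HTML tag (TypeScript that compiles to plain JS). The script URL and the path check are placeholders for whatever recording or testing script you need live:

```typescript
// Hypothetical GTM Custom HTML tag body: load a recording/testing
// script only on the pages under study, without an IT release cycle.
(function () {
  // Placeholder condition: only the checkout flow is being studied.
  if (window.location.pathname.indexOf("/checkout") !== 0) return;

  const s = document.createElement("script");
  s.async = true;
  s.src = "https://cdn.example.com/session-recorder.js"; // placeholder URL
  document.head.appendChild(s);
})();
```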

Book review: Web Form Design

I finished reading Web Form Design recently on the recommendation of a mentor. The author makes a good case that web forms are a high-leverage area to invest design effort: because forms are mandatory, complex, and not particularly sexy, they are often the worst part of a user’s interaction with your product. He breaks the form down into the building blocks of Labels, Input Fields, and Actions, and lays out best practices for each. Here are a few snippets from the book that resonated with me.

Labels

Top-aligned labels – “The results of live site testing across several different geographies have also supported top-aligned labels as the quickest way to get people through forms. These studies also had higher completion rates (over 10 percent higher) than the left-aligned versions of forms they were tested against… One of the reasons top-aligned forms are completed quickly may be because they only require a single eye fixation to take in both input label and input field. [50ms compared to 240ms for right-aligned and 500ms for left-aligned labels] … Top-aligned labels, however, do take up additional vertical real estate.”

Right-aligned labels – “The resulting left rag of the labels in a right-aligned layout reduces the effectiveness of a quick scan to see what information the form requires … That said, in cases where you want to minimize the amount of vertical screen space your form uses, right-aligned labels can provide fast completion times.”

Left-aligned labels – “Left-aligning input field labels makes scanning the information required by a form easier. People can simply inspect the left column of labels up and down without being interrupted by input fields… Unfortunately, a few long labels often extend the distance between labels and inputs and, as a result, completion times may suffer. People have to “jump” from column to column in order to find the right association of input field and input label before entering data. The reason left-aligned forms are the slowest of the three options to complete may be because of the number of eye fixations they require to parse.”

Inside-aligned labels – “In cases where screen real estate is at a premium, combining labels and input fields into a single user interface element may be appropriate… Because labels within fields need to go away when people are entering their answer into an input field, the context for the answer is gone. As such, labels within inputs aren’t a good solution for long forms… It’s also generally a good rule not to use labels within inputs for non-obvious questions. That is, questions that may require people to reference the label while answering.”

Input Fields

Tabbing behaviour – “Web form designers should consider what the experience will be like for the large numbers of people who move between input fields using the Tab key, and they should design accordingly.”

Radio buttons – “Allow people to select exactly one choice from two or more always visible and mutually exclusive options. Because radio buttons are mutually exclusive, they should have a default value selected (more on this later). It’s also a good idea to make sure both the radio button and its label can be selected to activate a radio button selection.”

Input switching – “[Sequential] basic text boxes … lead users to skip back and forth between their mouse and keyboard … in order to complete the interaction.”

Length of input fields – “The way we display input fields can produce valuable clues on how they should be filled in… In the eBay Express example … the size of the zip code input matches the size of an actual zip code in the United States: 5 digits. The size of the phone number text boxes match the number of digits in a standard phone number in the United States. The rest of the text boxes are a consistent length that provides enough room for a complete answer.”

Required/optional fields – “If most of the inputs on a form are optional, indicate the few that are required. … When indicating what form fields are either required or optional, text is the most clear. However, the * symbol is relatively well understood to mean required.”

Actions

Secondary actions – “When you reduce the visual prominence of secondary actions, it minimizes the risk for potential errors and further directs people toward a successful outcome.”

Success vs. Error messages – “The key difference between error and success messages, however, is that error messages cannot be ignored or dismissed—they must be addressed. Success messages, on the other hand, should never block people’s progress—they should encourage more of it.”

Animating success messages – “Because human beings are instinctively drawn to motion—we had to avoid sabertoothed tigers somehow—animated messages that transition off a page can let people know their actions have been successful. The most common transitions utilized for this are fades, dissolves, or roll-ups.”

Effective in-line validation – “Inline confirmation works best for questions with potentially high error rates or specific formatting requirements… When validating people’s answers inline, do so after they have finished providing an answer, not during the process.”
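The book describes this pattern in prose; here is a minimal TypeScript sketch of what “validate after the answer, not during” might look like, reusing the book’s US zip code example. The element IDs are hypothetical:

```typescript
// Minimal sketch: validate on blur (after the answer is finished),
// not on every keystroke (during it). Element IDs are placeholders.
const zip = document.querySelector<HTMLInputElement>("#zip")!;
const hint = document.querySelector<HTMLElement>("#zip-hint")!;

zip.addEventListener("blur", () => {
  const ok = /^\d{5}$/.test(zip.value); // 5-digit US zip, as in the book's example
  hint.textContent = ok ? "" : "Please enter a 5-digit zip code.";
});
```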

USB 3.0 causing WiFi interference on a MacBook Air

I have spent enough time troubleshooting my family’s WiFi to pick up a decent understanding of wireless networking. At least decent enough to eventually get to the bottom of any issue using a structured troubleshooting approach.

So it’s been a while since I’ve been as dumbfounded as I was last week trying to figure out whether my connection quality dropped when I plugged in my keyboard, or if I was just going crazy.

Turns out it is a known issue that USB 3.0 can cause interference in the 2.4GHz spectrum when not properly shielded. It’s especially noticeable on the left-side USB port, which leads me to believe the wireless card is located near that port.

I picked up the Anker hub below and it totally fixed the issue. The hub itself sits a foot away from my laptop, which reduces the impact of interference from USB 3.0 devices plugged into the hub. The cable connecting to my laptop’s USB port is quite thick and seems to be well shielded. Lesson learned about not trusting cheapo USB hubs from Alibaba. Anker is now my go-to brand for USB connectivity equipment.

Anker USB Hub

Anker 10-Port 60W USB 3.0 Hub ($40)


Tracking: Organizational Challenges

There are plenty of technical guides online about tracking user behaviour using GTM. But I haven’t found as much about dealing with the organizational challenges that may arise when making changes to tracking.

One of my main projects at Carmudi was improving our tracking. The key challenge was that I was not building tracking entirely from scratch. We already had a buggy tracking implementation that was feeding data into some of the most important reports in the organization. Stakeholders get nervous when you propose changes to tracking, even if tracking currently sucks.

As a product manager, my primary interest in tracking is to feed higher-quality data into the product decisions my team makes. Being “data-driven” is chic, but having reliable and relevant data is not a given. It requires some strategic forethought to track the right things and track them properly.

The first thing I did was consolidate all the country-specific containers into a single global container in GTM. Our application is nearly identical between countries, so this was easy from a technical perspective. We removed outdated tags, replaced country-specific IDs with lookup tables, and updated triggers to match. The second major change was changing how we name events to communicate user behavior more transparently.
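To illustrate the lookup-table idea outside the GTM interface: the consolidation replaces one tag per country with a single tag that resolves its GA property ID from the hostname. The hostnames and IDs below are made up:

```typescript
// Illustrative only: the idea behind a GTM "lookup table" variable.
// Hostnames and property IDs are placeholders.
const GA_ID_BY_HOST: Record<string, string> = {
  "www.example.com.ph": "UA-XXXXXX-1",
  "www.example.co.id": "UA-XXXXXX-2",
  "www.example.vn": "UA-XXXXXX-3",
};

// One shared tag asks for its ID instead of hard-coding it per country.
function gaIdForHost(host: string): string | undefined {
  return GA_ID_BY_HOST[host];
}

gaIdForHost(window.location.hostname);
```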

A few lessons learned from the process:

Reports are fragile

Tracking data feeds into many teams’ reports—some of which you may not even be aware of. These reports can be quite fragile to changes in the tracking layer. Even worse than breaking a report is subtly impacting some of its underlying assumptions, reducing its accuracy and usefulness without anyone realizing it.

The best way to mitigate this risk is to coordinate tightly with BI. Sit down and trace all the “customers” of tracking data to get a better sense of how changes will impact various teams and reports. It is especially important to be aware of which reports are consumed by external stakeholders such as investors. These reports often process the data down to a single number in a spreadsheet cell, without any context around it. For example, inserting a GA event could impact the “bounce rate” calculation on that page.
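To make the bounce-rate example concrete, here is a sketch using the analytics.js API (the category and action names are invented): Universal Analytics counts any event as an interaction, so an event fired automatically on page load silently drives that page’s bounce rate toward zero unless it is flagged as non-interaction.

```typescript
// Sketch using the analytics.js API; category/action names are invented.
declare function ga(...args: unknown[]): void; // analytics.js global

// Fired automatically on page load. Without the flag, this event would
// count as an interaction and push the page's bounce rate toward zero.
ga("send", "event", "gallery", "auto-rotate", { nonInteraction: true });
```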

People are overly confident in their data

Making decisions on real-world data is not as clean-cut as a case study in business school, and it is always good practice to question the source and validity of the data you are using to make a decision. Unfortunately some decision-makers can lose sight of this. Prepare for some push-back against your proposed fixes or improvements to tracking, as this implies that prior decisions were made with flawed data. Data is never infallible, but this can be an uncomfortable reality for some managers.

Decouple tracking from KPI definition

The ideal tracking event crisply describes the nature of the user interaction without commenting on the value to the business. Event names such as “Unique Lead” or “Customer Intent” are opaque and give no visibility into what exactly those actions are, or why they are important to the business. It is better to push the task of KPI definition “up the stack” to management, so that the people who are ultimately consuming the tracking data will be better-equipped to make decisions on it.
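A hypothetical pair of GTM dataLayer events shows the contrast; the event names and fields are invented for illustration:

```typescript
declare const dataLayer: object[]; // GTM's global array

// Opaque: bakes a KPI definition into the tracking layer.
dataLayer.push({ event: "Unique Lead" });

// Descriptive: records what the user actually did; management can
// define what counts as a "lead" downstream.
dataLayer.push({ event: "contact-seller-form-submitted", vertical: "used-cars" });
```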

The Best of Seth Godin for Product Managers

One of the consistent must-reads that has remained in my RSS feed over the years is Seth Godin’s blog. Seth consistently puts out a stream of incredibly wise thoughts. I have found that some of his posts resonate with me even more when I re-read them at a later point in my life/career. Here are some of my favourite Seth Godin posts, as they relate to the role of Product Manager.

Please, go away – Being out-of-touch with customers hurts every part of an organization, but especially the product team. Sometimes it requires a conscious effort to correct for this. You may receive surprisingly strong push-back from some people on your efforts.

Project management for work that matters – Ten very good pieces of advice for the project management parts of a PM’s job.

Really Bad Powerpoint – One of Seth’s longer blog posts. A good philosophical guide to using PowerPoint effectively. I try to stay away from PowerPoint as much as possible, but sometimes it is necessary, especially when interacting with stakeholders.

Not even one note – Why it is important to choose better features over more features. He also talks about how to make that choice.

Inventing a tribe – Building a successful product vision does not have to involve creating something totally new and revolutionary from scratch. It is far more likely that it will involve connecting and empowering the people that already share a vision with you.

How to live happily with a great designer – Some tips for working effectively with designers.

Two kinds of writing – As a PM you will be interacting with totally different groups of people on a daily basis. It is important to adjust your writing and communication style to each audience. You will want to use a different approach when dealing with customers, engineers, marketing, or stakeholders.

Why do you do it this way? – A good way to test some of the underlying product decisions made in the past. Asking why three times is a great way to uncover the philosophy of a team.

Marketing to the organization – Product managers lead without positional authority, so it becomes important to approach things at a meta level, thinking about what you can do internally to give a product or project the best chance of succeeding.

Doing calculus with Roman numerals – As a non-technical PM, it is especially important to be relentlessly curious and to ask many questions about the technical side. Not to make your job easier, but to open up a level of performance that is not possible without understanding the tools being used around you.

Reading books for long-term value


For a while now, my Pocket reading list has been growing at a faster rate than I have been consuming it. Recently this problem has crept into my offline reading as well, and now my GoodReads list is growing hopelessly long.

Initially I approached this as a quantity problem, and started looking into speed-reading as a method of consuming more information. There is a neat tool called Spritz that flashes words at a fixed point, eliminating eye movement, to help you read faster. But it turned out the problem was the quality of my reading, rather than the quantity of material. This manifested itself as disappointingly poor recall of the key arguments and theses of books I had read more than a year or two before.

Part of the problem was that I considered the primary goal of reading to be acquiring information. The issue with this approach is that if the raw data is not synthesized, you won’t remember it for as long. I now consider the primary goal of reading to be rewiring parts of my cognitive process based on the information in the book.

Here are a couple of the systems I have put into place to derive more long-term value out of my reading:

Buy a Kindle

Buying an Amazon Kindle has been a huge help. Besides the whole “thousand books in your pocket” thing, I find the highlighting feature to be incredibly valuable. I have never been much of a highlighter / marker-upper of printed media, but I am well aware of the benefits for cognitively absorbing material. Kindle’s highlighting feature lets you collect snippets from a book and export them as a text file.

Filter your reading list

In an effort to reduce the input side of my reading list problem, I have begun heavily vetting the recommendations and discoveries that I place onto my reading list. Anything non-fiction gets checked in Blinkist to see if a summary is already available. For other genres, I like to check Maria Popova’s Brain Pickings to see if she has written about the book before. Reading through a summary like this gives you a better sense of whether you should commit to reading the full book. And if you do proceed to read it, you begin with a rough mental framework that makes it much easier to absorb the book’s arguments and theses into your mental model.

Read deliberately

Shane Parrish of Farnam Street has written extensively on the subjects of learning, reading, and self-improvement. His pieces of good advice ultimately add up to the act of reading deliberately. Take a moment before you begin to think about the author, the context, and your existing knowledge of the subject. While reading, periodically summarize the arguments in your head, and try to abstract them to a higher level. After you put down a book, spend a couple of minutes in silence, contemplating what you’ve just learned and attempting to synthesize it into your existing mental framework.

Write a book summary

There is a reason that Bill Gates publishes book reviews, and it’s not because he has nothing better to do with his time. Writing these reviews encourages you to read at the analytical level required to summarize effectively. I usually start by sorting through all of my Kindle highlights from a book, organizing them into thematic groups, and trying to build a structured opinion of the work. Making a value judgement in your summary forces you to go a step further in your reading: to do the work of synthesizing the material and forming an argument.

Mindmapping

I also find it useful to push one level above individual books, and to make a conscious effort to integrate the new book into my existing frameworks of knowledge. Mindmapping is a good tool for this, as it helps you visualize and form connections between pieces of material without needing to traverse the information in a linear fashion. Another option is to collect key passages into your commonplace book.

Adding these additional layers to my reading “stack” definitely slows down my rate of consumption, but I think it is well worth the increase in comprehension, synthesis, and long-term retention.

How to conduct user research when you can’t reach your users


If you are a product manager, you have almost certainly heard about the importance of conducting user research before. Quantitative data can point to where a problem exists, but nothing beats qualitative research for learning why that problem occurs. Large datasets can obscure individual usage patterns, making it hard to “get into the user’s head”. User research helps you understand the conceptual models of your users and to build personas around them.

Normal user research methods involve getting users into a room and watching them interact with your product. But what do you do if you can’t reach your users that easily? What if your users are in different countries, or speak different languages? These factors certainly make user research more difficult, but they simultaneously make it even more important.

One solution I’ve been playing with recently is a combination of Olark live chat and Inspectlet. Inspectlet is a tool that records the cursor movements, clicks and scrolls of your users, and then rebuilds them into a video of the user’s session. At first it almost seems as if you are “spying” on users, though in fact the videos are all assembled post-hoc. Inspectlet is, of course, not as interactive as true user testing, but it does allow you to get surprising insights on user behaviour.

What is really powerful is combining the two. Olark is primarily a live-chat tool, but when you are offline it reverts to a feedback box, placed on a targeted part of your website or product. Here is how I chain the two tools together:

  • Place the Olark feedback box on a specifically targeted element of your website where you expect there will be user frustration. Olark’s premium plan offers targeting, or you can roll your own DIY targeting by firing the Olark tag through Google Tag Manager (see the sketch after this list).
  • After some time, read through the responses Olark sends to your email. If you are tracking foreign-language users, you can translate most messages right from within Google Chrome.
  • When you find a user response that interests you, grab the IP address from the message and filter for that IP in Inspectlet. Unless your product has massive traction already, you’ll probably find a single session that matches that IP address.
  • Watch the user session to learn the process the user went through before leaving the corresponding piece of feedback.
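Here is a minimal sketch of the DIY targeting from the first step, written as TypeScript for a hypothetical GTM Custom HTML tag. loadOlark() stands in for Olark’s standard embed snippet, and the selector is a placeholder for whatever element marks the frustrating flow:

```typescript
// Hypothetical GTM Custom HTML tag body. loadOlark() stands in for the
// standard Olark embed snippet; the selector is a placeholder.
declare function loadOlark(): void;

if (document.querySelector(".checkout-error-banner")) {
  loadOlark(); // feedback box appears only where it is targeted
}
```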

Inspectlet preview

This combination is the most effective solution I have found so far to bridge the user research gap on hard-to-reach users. However, I wouldn’t say it is a replacement for conducting real user research. If you can manage it, nothing beats an in-person session.

Pieces of contradicting advice

One of the problems with abstracted tidbits of advice is that they lose much of their meaning when divorced from their context. The correct decision is often heavily weighted by the nuances of the specific scenario. As a result, you often receive seemingly conflicting pieces of advice. The easy example is contradicting proverbs, which are humorously documented here. But contradictions also occur in more serious advice given around technology, business strategy, and product development. Here are a couple I have been thinking about recently.

Should you strive to be well-rounded (full-stack?) or should you focus on your strengths?

This can be viewed as a version of the classic generalist–specialist dichotomy. But it is more interesting when applied at the "micro" level of individual skills rather than as "macro" career advice. When it comes to your skills and capabilities, should you focus on your strengths, or invest the time to round out your weaker skills? This is loosely related to the multi-armed bandit problem, and to the concept of local maxima. What is the optimal mix of breadth and depth?

Should you apply the 80/20 rule, or should you focus on the details?

Ellen Chisa pointed out this contradiction on her blog, specifically in the context of product development. It ties into the concept of the Minimum Viable Product (MVP), which is unfortunately often cited as an excuse to cut corners and ship half-baked products into the market. 80/20-style prioritization lets you achieve more output with fixed time and money. But it makes an implicit assumption that you are optimizing for raw efficiency. What if that is not true?

Imagine you are playing Super Mario for a moment. If you get 95% of the way through a level but then die, you start again from the beginning. You are rewarded not for your average performance, but for the number of absolute wins you achieve. You can fail at that 95% mark over and over, and walk away with a 90% average but without making any real progress to the next level. In the context of product development, you are not optimizing for the average happiness of a user, but rather for the number of users happy enough to sign up or buy. In this sense, users are a fungible unit of success.

If you spread your resources out with the 80/20 rule, you could launch 5x the number of features, but at an 80% quality level. This could get you 5x the exposure, or perhaps 5x the engagement, but it does not necessarily lead to 5x the sales / conversions. Imagine a user has some intrinsic standard for how well a solution must fit their needs to sign-up or buy. If this "bar" falls above 80%, then you might lose all your 5x users to a bunch of niche competitors that serve their specific needs at a theoretical 90% level.

It may make more sense to focus your resources on developing something at a 95-100% level but with only 20% of the scope. This involves saying no to 80% of opportunities/features. As a result, you might get objectively fewer users into the start of your funnel. But assuming that your product is well-executed—that you didn’t waste these theoretical resources—then you should have a far higher conversion than in the 80/20 scenario.
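To make the trade-off concrete, here is a toy model with made-up numbers. It treats conversion as all-or-nothing against each user’s quality bar, which is the “Super Mario” assumption above:

```typescript
// Toy model with made-up numbers: conversion is all-or-nothing against
// each user's quality bar, rather than scaling with average quality.
function signups(features: number, quality: number, bar: number, usersPerFeature: number): number {
  const converts = quality >= bar ? 1 : 0; // the "Super Mario" assumption
  return features * usersPerFeature * converts;
}

console.log(signups(5, 0.80, 0.9, 1000)); // 80/20 strategy: 5x exposure, 0 sign-ups
console.log(signups(1, 0.95, 0.9, 1000)); // focused strategy: 1x exposure, 1000 sign-ups
```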

Is it better to have the time lead of being first-to-market or the lower risk of being a close second?

Using the "first mover advantage" is a classic business school strategy. It is completely logical in industries such as telecom or social networks, where customers are locked in and there are strong network effects at play. Yet many first-mover activities center around creating a market, and are not always defensible by a specific company. Competitors can get a "free ride" on your push for regulatory change or the supply chains you established. When does it make sense to be a trailblazer, and when does it make sense to tuck yourself into the slipstream of the current leader?

The Wirecutter – On trust and satisficing

I am a big fan of the consumer editorial site The Wirecutter. They earned a position in my stack of newsletter subscriptions by helping to simplify tech purchasing decisions.

In his book The Paradox of Choice: Why More is Less, Barry Schwartz lays out a dichotomy of people’s decision-making behaviour. Some people are maximizers—those who strive to make the optimal decision. Others are satisficers—those who make a decision as soon as it meets their criteria. Schwartz’s thesis is that satisficers are happier than maximizers in the long run. Although their average decision is less optimal, it requires much less effort. Maximizer behaviour is useful for high-stakes, irreversible decisions, but most decisions are not like that. It is difficult to be a maximizer given the sheer volume of smaller decisions we face on a daily basis.

One example that can be surprisingly taxing is deciding which TV, camera, charger, BBQ, or washing machine to buy. You might have strong preferences about some of these, but it is more than likely that you are not familiar with most of the above product categories. Making a truly informed decision requires that you first familiarize yourself with the offerings on the market. Then you must prioritize your own requirements and analyze each option before coming to a decision. If you make the wrong decision, you will be reminded of it every time you use the product over the next few years.

Previously, I never trusted a single review enough to consider it more than a single data point. Look up a review on Engadget, Gizmodo, The Verge, and CNET, and they will often offer conflicting opinions on the same product. But The Wirecutter is different.

First, the reviews are centred around user problems (Which X should I buy?) rather than tech solutions (Review of the new Z 2.0). The editor aggregates reviews from across the web on a select group of options and reports the results. This makes it a “one-stop” source of information instead of yet another single data point.

Second, each review leads with a summary of the recommendation and a link to buy on Amazon. But underneath this summary is a comprehensive breakdown of the logic behind that decision. There are sections such as Why you should trust us and Flaws but not deal-breakers, as well as alternative recommendations for niche use-cases.

On my first couple of visits to The Wirecutter, I read the entire page—classic maximizer behaviour. But after making a few purchasing decisions based on their advice, I have developed a great deal of trust in The Wirecutter’s editorial team. Now I often only skim the review—and if it is a less critical decision, I will simply buy their top recommendation without much extra thought. In a sense, it has allowed me to outsource the burden of “maximizing” tech purchasing decisions to a trusted third party.

The ultimate test of trust in tech decisions is to ask yourself: “Would I recommend this to my mother?” If you recommend the wrong product, you might find yourself fixing it or providing support for the next few Thanksgiving dinners. For me, The Wirecutter has passed this test. Whenever Mom asks for advice on something I have no familiarity with (“Which dashcam should I buy?”), I just link her to The Wirecutter.