Let Me Ask My Analyst.

Such a phrase from the seventies, right? Am I dating myself? Maybe, but hey, I was just a kid back then. I’m all grown-up now, and gaining insights by the day.

The goal of insights is, just as it was in the seventies, when everyone was seeing the original “analysts,” better decision-making. Not much has changed.

I take that back.

A whole lot has changed. We couldn’t have imagined (or could we?), back when computations were done on punch cards, that we’d no longer be shrink-wrapping user manuals, but instead looking to true trend analysis to see what our users want from our writing. Now we are in the realm of truly seeking which patterns in our content are useful and which can go by the wayside, because we know, for instance, that our users no longer need to be told to enter their credentials upon login. They get it. They are familiar with creating passwords, and the concepts that were once totally unfamiliar are now second nature.

It’s a whole new frontier.

Now we are in a new domain.

Companies ask us not to be writers, actually, but content creators, content strategists. I used to scoff at that title, because anyone could use it. There is no credentialing: a licensed content strategist is a unicorn. And yet, real industries call for those who can produce (and produce well) two types of content: structured and unstructured. Yikes!

Structured content can be found. It has a home, a place; it is text-based, as in the case of email and office or web-based documentation. Unstructured content may include an archive of videos, or even non-text-based things like images and diagrams. There is a huge volume of this type of content, and yet it still falls under the purview of us, the content creators.

Those of us who used to be called “technical writers” or even “document specialists” or something like that find ourselves of course wrangling much more than documentation, doing much more than writing. So the issue became: how do we know if what we are doing works? Are we impacting our audience?

That’s where analysis comes into play and matters. Really, really matters.

Why spend hour upon hour creating a snazzy video or interactive tutorial if no one will watch or, dare I say, interact?

That’s where content analytics comes in.

Analytics measures. Photo credit: Stephen Dawson.

The whole goal of analytics is for us to know who is reading, watching, learning – and then we can improve upon what we’re building based on those engagements. It does little good to create a video training series, only to discover that users don’t have an internet connection on site to watch YouTube. Similarly, it’s not helpful to write detailed documentation and diagrams for users who prefer to watch 2-3 minute video step-throughs. It’s all about knowing one thing: audience. The essential element, always.

The central theme in Agile development, after all, was learning to understand the customer, so the essential element in designing better content, sensibly, ought to be the same thing. When we hunker down and learn what the customer really wants, we develop not just better software, but better content of all types.

With metrics on our side, our companies can identify just what content has real value, what has less, and what can really be dropped altogether. Historically, academic analysis was limited to measures like how many times a subject blinked while reading an article. (Ho-hum.) Now, though, we can measure things like click-throughs, downloads, pauses during video, hover-helps, and more. How very, very cool.
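To make that concrete, here’s a minimal sketch of what capturing those engagement signals might look like in code. The endpoint, event names, and fields are all hypothetical – this isn’t any particular vendor’s analytics API, just the shape of the idea.

```python
# A minimal sketch: log content-engagement events to a (hypothetical) analytics endpoint.
import json
import time
import urllib.request

ANALYTICS_URL = "https://analytics.example.com/events"  # hypothetical endpoint, not a real service

def track_event(doc_id, event, detail=None):
    """Record one engagement signal: a click-through, download, video pause, hover-help, etc."""
    payload = {
        "doc_id": doc_id,          # which piece of content was touched
        "event": event,            # e.g. "click_through", "download", "video_pause"
        "detail": detail or {},    # extra context, such as seconds into a video
        "timestamp": time.time(),
    }
    request = urllib.request.Request(
        ANALYTICS_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # fire-and-forget here; real code would handle errors

# Example: a reader paused the install video at the two-minute mark.
# track_event("install-guide-video", "video_pause", {"seconds": 120})
```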

Multiple screens to choose from. Photo credit: Alexandru Acea on Unsplash.

Historically, content analysis was slow, time-consuming, and frustrating, with limited accuracy. Now, though, we can measure the usefulness of our content almost as fast as we can produce it. Content analytics are now available in a dizzying array of fields, reflecting a vast pool of data. The level of detail is phenomenal. For example, I’ll get feedback on this post within hours, if I want. I’ll create tags and labels to give me data that lets me know if I’ve reached the audience I want, whether I should pay for marketing, whether I might consider posting on social media channels, submitting to professional organizations, editing a bit, and so on. I may do all of those things or none of them. (Full disclosure: usually none, unless one of my kind colleagues points out a grievous error. I write for my own satisfaction and to sharpen my professional chops. Just sayin’.)

Believe you me, the domain of content analysis, in all areas, will grow and grow. Striking the perfect chord between efficiency and quality is not just on the horizon; it is in the room. AI-powered writing and editing, paired with the assurance that we’ve reached the proper balance of placement and need – it’s not hyperbole to say the future is here. It’s just a matter of turning to my ‘analyst’ to ask whether I’ve written my content well enough and delivered it properly.

My product teams, my business unit, and my company are all grateful. And my work shows it.


How Biased Are Your Release Notes?

Your Own Thinking Can Make a Big Difference

I should probably start with a primer on Cognitive Bias before accusing anyone of allowing such biases to impact their release-note writing. That’s only fair.

Photo courtesy of Ryan Hafey on Unsplash.

In a nutshell, Cognitive Biases are those thinking patterns that result from the quick mental errors we make when relying on our own brains’ memories to make decisions. These biases arise when our brains try to simplify our very complex worlds. There are a few types of cognitive bias, too, not just one. There’s self-serving bias, confirmation bias, anchoring bias, hindsight bias, inattentional bias, and a handful more. It gets pretty complex in our brains, and we’re just trying to sort things out.

These biases are unconscious, meaning we don’t intend to apply them, and yet we do. The good news is, we can take steps to learn new ways of thinking, so as not to mess things up too badly with all of this brain bias. Whew, right?

Now that we’ve established a basic taxonomy, let’s dive into how cognitive bias may (or may not) be creeping into things like your technical writing. It’s not just in your release notes; that was just clickbait. But sure, bias can permeate your release notes and nearly any other part of your documentation. (Probably not code snippets, but I won’t split hairs.)

Writers and designers must recognize their own biases so that they can leave them behind when planning. Acknowledging sets of biases helps to shelve the impulse to draft documentation that is shaped by what they already know, assets they bring to the table, or assumptions they make about experience levels.

Let’s start with Self-serving bias. How might this little bug creep its way into your otherwise beautiful and purposeful prose?

This bias is essentially when we attribute success to our own skills, but failures to outside factors. In our writing, this can appear when we imply that by default, software malfunctions are based in user error. Rather than allowing for a host of other factors, often our writing recommends checklist items that are user-centric rather than system-focused. While it’s true that there are countless ways that users can mess up our well-designed interfaces, there are likewise plenty of points of failure in our programs. Time to ‘fess up.

Confirmation bias can be just as damaging, wherein as writers we craft and process information that merely reinforces what we already believe to be true. While this approach is largely unintended, it often ignores inconsistency in our own writing. We tend to read and review our own documentation as though it is error-free, both from a process perspective and a grammatical-syntactical one. That’s just illogical. And yet, we persist. The need for collaborative peer-review is huge, as even the very best, most detail-oriented writers will make a typo that remains uncaught by grammar software. Humans are the only substitute, and always will be.

We write anchoring bias into our documentation all too often. This bias causes us, frail humans that we are, to rely mostly on the first information we are given, despite follow-up clarification. If we read first that a release was being created by a team of ten, but then later learn that it is being developed by a team of seven, we are impressed that the team of seven is doing the work we expected of ten. Now, it may be the case that when the ten-member team was working on it, three of them had very little to do. Yet we anchor our thinking in the number ten, and set our expectations accordingly.

The notion that we “knew it all along” is the primary component of hindsight bias. We didn’t actually know anything, but somehow, we had a hunch, and if the hunch works out, then we confirm the heck out of it and say we saw it coming.

This can happen all too often when we revise our technical writing, jumping to an outcome we “saw coming,” which causes us to edit, overlook, or wipe out steps on the path to getting there. We sometimes become so familiar with the peccadilloes of some processes that we only selectively choose what information to include, and that’s a problem.

Inattentional bias is also known as inattentional “blindness,” and it’s a real doozy when it comes to technical writing, believe it or not. It is the basic failure to notice something that is otherwise fully visible, because you’re too focused on the task at hand. Sound familiar? Indeed. Our writing can get all sorts of messed up when we write a process document, say an installation guide, and don’t pause to note things that can go wrong, exceptions to the rule, and any number of tiny things that can (and indeed might) occur along the way. What to do when an error message pops up? What if login credentials are missing? System timeout? Sure, there are plenty of opportunities for us to drop these into our doc, and many times we do, but I saved this for last because – this will not surprise anyone who knows me – in order to overcome this particular bias, all you need do is become best buddies with your QA person. Legit. I recommend kicking the proverbial tires of your software alongside your QA buddy to see how often you get a “flag on the play.” That will grab you by the lapels and wake you up from the inattention, for real.

Your users will thank you if you learn, acknowledge, and overcome these biases when you write. Is it easy? Not really. Is it necessary? I’d say so. As my Carnegie Mellon mentor told me more than once, and I’ve lived by these words for my whole career: “Nothing is impossible; it just takes time.”

Take time with your writing, and you’ll soon be bias-free.

It’s All About AI. The Data Told Me So.

A conversation with a (junior) colleague this morning started off with “How did you decide to reformat your Best Practices Guide?” and moved on to things like “But how did you know that you should be working in Artificial Intelligence and VUI for this search stuff? I mean, how do you know it will work?”

I couldn’t help but chuckle to myself.

“Rest assured,” I said. “Part of it is just that you know what you know.  Watch your customers. Rely on your gut. But more importantly, trust the data.” The response was something of a blank stare, which was telling.

All too often, tech writers – software writers especially, it seems, although I do not have the requisite studies to support that claim – are too steeped in their actual products to reach out and engage with customer usage data, to mine engagement models and determine what their users want when it comes to their doc. They are focused on things, albeit important things, like grammar, standards, style guides, and so on. This leaves little time for customer engagement, so that falls to the bottom of the “to-do” list until an NPS score shows up and that score is abysmal. By that time, if the documentation set is large (like mine), it’s time for triage. But can the doc be saved? Maybe, maybe not.

If you’re lucky like I am, you work for a company that practices Agile or SAFe and you write doc in an environment that doesn’t shunt you to the end of the development line, so you can take a crack at fixing what’s broken. (If you don’t work for a rainbow-in-the-clouds company like mine, I suggest you dust off your resume and find one. They are super fun! But, I digress.)

Back to the colleague-conversation. Here’s how I knew to reformat the BP Guide that prompted the morning conversation:

I am working toward making all of my documentation consistent through the use of templating and accompanying videos. Why? Research.

According to Forrester, 79% of customers would rather use self-service documentation than a human-assisted support channel. According to an Aspect CX survey, 33% said they would rather “clean a toilet” than wait for Support. Seriously? Clean a toilet? That means I need to have some very user-friendly, easily accessible documentation that is clear, concise, and usable. My customers do NOT want to head over to support. It makes them angry. It’s squicky. They have very strong feelings about making support calls. I am not going to send my customers to support. The Acquity Group says that 72% of customers buy only from vendors whose product (support and documentation) content they can find online. I want my customers’ experience to be smooth and easy. Super slick.

In retail sales, we already know that the day your product is offered on Amazon is the day you are no longer relevant in the traditional market, so it’s a good thing that my company sells software by subscription and not washers and dryers. Companies that do not offer subscription models or create a top-notch customer experience cease to be relevant in a very short span of time, thanks to changing interfaces.


I’m working to make the current customer support channel fully automatable. Why? It is low-risk, high-reward, and the right technology can automate the customer support representative out of a job. That’s not cruel or awful; it’s exciting, and it opens new opportunities. Think about the channels for new positions, new functions that support engineers. If the people who used to take support calls instead now focus on designing smart user decision trees based on context and process tasks as contextual language designers, it’s a win. If former support analysts are in new roles as Voice of the Customer (VoC) Analysts, think about the huge gains in customer insights, because they have the distinct ability to dig deep into our most valuable business questions rather than tackling mundane how-to questions and daily fixes that are instead handled by the deep learning of a smart VUI. It’s not magic; it’s today. These two new job titles are just two of the AI-based fields conjured by Joe McKendrick in a recent Forbes article, so I am far from alone in this thinking.
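For a feel of what that contextual-language work looks like, here’s a minimal sketch of a support decision tree. The topics, questions, answers, and fallback routing are invented for illustration; a real tree would be designed from actual support data.

```python
# A minimal sketch of a self-service decision tree a former support rep might design.
from dataclasses import dataclass, field

@dataclass
class Node:
    question: str = ""                             # what to ask the user next
    answer: str = ""                               # leaf node: the self-service answer
    branches: dict = field(default_factory=dict)   # user reply -> next Node

# Hypothetical tree: the topics and answers are made up, not drawn from a real product.
tree = Node(
    question="What do you need help with?",
    branches={
        "login": Node(
            question="Do you see an error message?",
            branches={
                "yes": Node(answer="Reset your password from the login screen."),
                "no": Node(answer="Check that Caps Lock is off, then retry."),
            },
        ),
        "install": Node(answer="See the installation guide, section 2."),
    },
)

def walk(node, replies):
    """Follow the user's replies down the tree; hand off to a human if we run out of branches."""
    for reply in replies:
        node = node.branches.get(reply)
        if node is None:
            return "Routing you to a support analyst."
    return node.answer or node.question

print(walk(tree, ["login", "yes"]))  # -> Reset your password from the login screen.
```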

His thinking aligns with mine. And Gartner predicts that by 2020, AI will create more jobs than it eliminates.

So as I nest these Best Practices guides, as I create more integrated documentation, as I rely on both my gut and my data, I know where my documentation is headed, because I rely on data. I look to what my customers tell me. I dive into charts and graphs and points on scales. The information is there, and AI will tell me more than I ever dreamed of…if I listen closely and follow the learning path.

Can Software Write This Better than Me?

In my job, I’m expected to use a variety of tools to ensure accuracy, word count, compliance, style – adherence to a host of things that keep me “in line” with the company’s overall design and standards. This causes me to wonder: as part of a team of software developers, could my team design software that automates the process of writing the documentation for their own software with enough accuracy that the documentation specialist goes the way of the dinosaur?
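For flavor, here’s a minimal sketch of the kind of check those tools run. The word limit and the banned-terms list are made up for illustration; a real style guide (and a real compliance tool) would supply its own rules.

```python
# A minimal sketch of an automated doc check: word count plus a tiny style pass.
import re
import sys

MAX_WORDS = 2000                                   # hypothetical length limit
BANNED = {"utilize": "use", "in order to": "to"}   # illustrative style-guide substitutions

def check(text):
    problems = []
    word_count = len(re.findall(r"\b\w+\b", text))
    if word_count > MAX_WORDS:
        problems.append(f"Too long: {word_count} words (limit {MAX_WORDS}).")
    for bad, good in BANNED.items():
        if re.search(rf"\b{re.escape(bad)}\b", text, flags=re.IGNORECASE):
            problems.append(f"Style: prefer '{good}' over '{bad}'.")
    return problems

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as handle:
        for problem in check(handle.read()):
            print(problem)
```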

I mean, the whole point of some of my products is to automate processes that human beings used to perform, and to automate them to such a degree of precision that people are hardly required to be cognizant, let alone present, for the actions these programs perform. We’ve designed such reliable systems that banking, health care, military information and community design can count on big data to gather and maintain the necessary materials to run our daily lives, and to store that data, to anticipate problems before they occur, and to rectify those problems with limited human intervention.

When you think, “But this type of automation cannot be applied to writing…writing requires critical thinking and analysis!” you would be correct. But you would be overlooking tools like the lexical analyzer Wordsmith, and the automated writing tool also named Wordsmith. Created by Automated Insights, Wordsmith is the API responsible for turning structured data into prose – it literally takes baseline information and makes an article. Could I be out of a job?

Feed me some data, and I write articles, too, only you have to pay me and occasionally socialize with me. Not so for Wordsmith.
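To show the underlying idea (and only the idea – this is not Wordsmith’s actual API), here’s a minimal sketch of template-driven data-to-text. The field names and phrasing rules are invented; the point is simply that structured data goes in and a readable sentence comes out.

```python
# A minimal sketch of data-to-text generation: structured fields in, a sentence of prose out.
def game_recap(data):
    """Turn a tiny box score (hypothetical fields) into one sentence of prose."""
    margin = abs(data["home_score"] - data["away_score"])
    verb = "edged" if margin <= 3 else "beat"      # simple rule: close games get a softer verb
    if data["home_score"] > data["away_score"]:
        winner, loser = data["home_team"], data["away_team"]
    else:
        winner, loser = data["away_team"], data["home_team"]
    high = max(data["home_score"], data["away_score"])
    low = min(data["home_score"], data["away_score"])
    return f"{winner} {verb} {loser} {high}-{low} on {data['date']}."

print(game_recap({
    "home_team": "Pittsburgh", "away_team": "Cleveland",
    "home_score": 24, "away_score": 21, "date": "Sunday",
}))
# -> Pittsburgh edged Cleveland 24-21 on Sunday.
```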

The freakish thing about Wordsmith is its accuracy. I’ve studied a good bit about semantic language interpretation, and in my graduate program at Carnegie Mellon University, I dabbled in software interpretation of language, working with a pretty notable team on designing huge dictionaries of strings of language. The thing is, computers are great at reading – they can read at much faster rates than humans, they can digest huge chunks of information and store it at infinitely larger capacities than the human brain can, and their recall is spectacular. Skeptical? Just watch the amazing Jeopardy matches between IBM’s Watson and its human opponents, and you’ll soon see that computing power can be harnessed to cull through the informational equivalent of roughly one million books per second. Humans just can’t keep up. Humans who write can’t touch that.

If computers learn a perfect formula for the Great American Novel, we are doomed.

Something to consider. Let’s try to keep this a secret from my bosses, shall we? My team of very excellent software developers may decide that this is a project worth undertaking, and the next thing you know, it won’t be just data that Wordsmith will be analyzing.

In the meantime, I will rely on the human eye and the need for context clues and interpretation that Wordsmith and Watson lack. I’ll count on systems of reliability and emotion. I’ll count on what I know, and on what I learned in that high-functioning graduate analysis: a computer cannot tell the significant difference between these two exchanges:

Scene 1: A funeral home. A somber affair, all is quiet. A man says to a woman:

“Sorry for your loss.”

The appropriate response? She shakes his hand and nods, quietly.

Scene 2: A soccer game. A sunny afternoon. The breeze is blowing gently.

A boy says to a girl:

“Sorry for your loss.”

The appropriate response? She high-fives him and replies, “No sweat! Let’s grab some pizza! Woooo hooo!” as they tumble into a minivan, shouting jubilantly, kicking off their shoes.

No computer can decipher the differences in “Sorry for your loss.”

Sorry, Wordsmith.

I’m going out for pizza.