Singing a New Tune in AI – Prompt Tuning

We are all well aware that AI is the hottest topic everywhere. You couldn’t turn around in 2023 without hearing someone talk about it, even if they didn’t know what the heck it was, or is. People were excited, or afraid, or some healthy combination of both.

From developers and engineers to kids and grandmas, everyone wanted to know a thing or two about AI and what it can or cannot do. In my line of work, people were either certain it would take away all of our jobs or certain that it was the pathway to thousands of new jobs.

Naturally I can’t say for certain what the future holds for us as tech writers, but I can say this – we as human beings are awful at predicting what new technologies can do. We nearly always get it wrong.

When the television first arrived, there were far more who claimed it was a fad than those who thought it would become a staple of our lives. The general consensus was that it was a mere flash in the pan, and it would never last more than a few years. People simply couldn’t believe that a square that brought images into our homes would become a thing that eventually brought those images to us in every room of our homes, twenty-four hours a day, offering news and entertainment, delivering everything we needed all day and all night. They couldn’t fathom that televisions would be so crystal clear and so inexpensive that every holiday season the purchase of a bigger, better, flatter, thinner television would be a mere afterthought.

And yet here we are.

So now that we’ve got that out of the way, on to total world domination!

But seriously.

If you aren’t already using AI, or at least Gen AI in the form of something like ChatGPT, where are you, even? At least have a little play around with the thing. Ask it to write a haiku. Let it make an outline for your next presentation. Geez, it’s not the enemy.

In fact, it’s so much not the enemy that it can help you outline your book (like I’ve done), revise a paragraph (like I’ve done), or tweak your speech (like I have done many, many times). The only thing you really need to understand here is that you are, indeed, smarter than the LLM. Well, mostly.

The LLM, or large language model, does have access to a significantly grander corpus of text than you can recall at any given moment. That’s why you are less likely to win on Jeopardy playing against it than against a fellow human. It’s also why an LLM might make some stuff up, or fill in some fuzzy details, if you ask it to write a cute story about your uncle Jeffrey for the annual Holiday story-off. (What? Your family does not actually have an annual story-off? Well, get crackin’ because those are truly fun times…fun times…). The LLM knows nothing specific about your uncle Jeffrey, but does know a fair bit about, say, the functioning of a carburetor if you need to draft a paragraph about that.

The very, very human part is that you must have expertise in how to “tune” the prompt you offer to the LLM in the first place. And the second place. And the third place!

Prompt tuning is a technique that allows you to adapt LLMs to new tasks by training a small number of parameters. The prompt text is added to guide the LLM towards the output you want, and has gained quite a lot of attention in the LLM world because it is both efficient and flexible. So let’s talk more specifically about what it is, and what it does.

Prompt tuning offers a more efficient approach than fine-tuning the entirety of the LLM, which means faster adaptation as you move along. It’s also flexible, in that you can apply tuning to a wide variety of tasks, including NLP (natural language processing), image classification, and even generating code. And with prompt tuning, you can inspect the parameters of your prompt to better understand how the LLM is guided towards the intended outputs, which helps us understand how the model is making decisions along the path.
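To make the “small number of parameters” idea concrete, here is a minimal sketch of soft prompt tuning in PyTorch with Hugging Face Transformers. It’s my own illustration, not anything from a specific product: the model name, the 20 virtual tokens, and the one-step training loop are all assumptions chosen to keep the example tiny.

```python
# Minimal soft prompt tuning sketch (assumptions: "gpt2", 20 virtual tokens).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumed small model, purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Freeze every weight in the LLM; only the soft prompt below gets trained.
for param in model.parameters():
    param.requires_grad = False

# The "prompt" here is a small matrix of trainable embeddings (virtual tokens),
# not literal words -- that is what makes prompt tuning parameter-efficient.
n_virtual_tokens = 20
embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = torch.nn.Parameter(torch.randn(n_virtual_tokens, embed_dim) * 0.01)
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)

def training_step(text: str) -> float:
    batch = tokenizer(text, return_tensors="pt")
    token_embeds = model.get_input_embeddings()(batch["input_ids"])
    # Prepend the learned soft prompt to the ordinary token embeddings.
    inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), token_embeds], dim=1)
    # Ignore the virtual-token positions when computing the loss.
    labels = torch.cat(
        [torch.full((1, n_virtual_tokens), -100), batch["input_ids"]], dim=1
    )
    out = model(inputs_embeds=inputs_embeds, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

print(training_step("Petite dresses in sizes 0 to 6 typically fit heights of 4'11\" to 5'4\"."))
```

Run that over a handful of task-specific examples and you have nudged the model toward your task without touching its millions (or billions) of frozen weights.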

The biggest obstacle when getting started is probably designing an effective prompt at the outset. To design an effective prompt, it is vital to consider the context and structure of the language in the first place. You must imagine a plethora of considerations before just plugging in a prompt willy-nilly, hoping to cover a lot of territory. Writing an overly complex prompt in hopes of winnowing it down later might seem like a good idea, but in reality what you’ll get is a lot of confusion, resulting in more work for yourself and less efficiency for the LLM.

For example, say you work for a dress designer that creates clothing for petite women and you want to gather specific insights about waist size, but you don’t want irrelevant details like shoulder width or arm length, or information about competing companies. You might try writing a prompt to gather that information. The challenge is to write a prompt broad enough to ask the AI model for information about your focus area (petite dresses), while filtering out information that is unrelated and avoiding details about competitors in the field.

Good Prompt/Bad Prompt

Bad prompt: “Tell me everything about petite women’s dresses, sizes 0 through 6, 4 feet tall to 5 feet 4 inches, 95 lbs to 125 lbs, slender build by American and European designers, and their products XYZ, made in ABC countries from X and Y materials.”

This prompt covers too many facets and is too long and complex for the model to return valuable information or to handle efficiently. It may not understand the nuances with so many variables.

A better prompt: “Give me insights about petite women’s dresses. Focus on sizes 0 to 6 and a slender build, without focusing on specific designers or fabrics.”

In the latter example, you are concise and explicit, while requesting information about your area of interest, setting clear boundaries (no focus on designers or fabrics), and making it easier for the model to filter.

Even with the second prompt, there is a risk of something called “overfitting,” where the prompt is still too large or too specific. That will lead you to refine the prompt, adding or removing detail. Correcting for it can mean generalizing or adding detail, depending on which direction you need to move.

You can begin a prompt tune with something like “Tell me about petite dresses. Provide information about sizes and fit.” From there, you can layer in levels of detail, refining the parameters as the LLM learns the context you seek.

For example, “Tell me about petite dresses and their common characteristics.” This allows you to scale the prompt, gauge the training data available and its accuracy, and efficiently adapt your prompt without risking hallucination.
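To show what that iterative refinement can look like in practice, here is a hedged sketch using the OpenAI Python SDK. Note that this is ordinary prompt iteration through an API rather than training soft-prompt parameters, and the client and model name are my assumptions, not something prescribed here.

```python
# Iterative prompt refinement sketch (assumptions: OpenAI SDK, "gpt-4o-mini").
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Round 1: broad, to see what the model offers about the topic.
draft = ask("Tell me about petite dresses. Provide information about sizes and fit.")

# Round 2: tighten the scope based on what came back, adding the boundaries
# from the better prompt above (no designers, no fabrics).
refined = ask(
    "Give me insights about petite women's dresses. "
    "Focus on sizes 0 to 6 and a slender build. "
    "Do not discuss specific designers, competitors, or fabrics."
)
print(refined)
```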

Overcoming Tuning Challenges

Although it can seem complex to train a model this way, it gets easier and easier. Trust me on this. There are a few simple steps to follow, and you’ll get there in no time.

  1. Identify the primary request. What is the most important piece of information you need from the model?
  2. Break it into small bites. If your initial prompt contains multiple parts or requests, break it into smaller components. Each of those components should address only one specific task (there’s a small sketch of this after the list).
  3. Prioritize. Identify which pieces of information are most important and which are secondary. Focus on the essential details in the primary prompt.
  4. Clarity is key. Avoid jargon or ambiguity, and definitely avoid overly technical language.
  5. As Strunk and White say, “omit needless words.” Any unnecessary context is just that – unnecessary.
  6. Avoid double negatives. Complex negations confuse the model. Use positive language to say what you want.
  7. Specify constraints. If you have specific constraints, such as avoiding certain references, state those clearly in the prompt.
  8. Human-test. Ask a person to see if what you wrote is clear. We can get pretty myopic about these things!
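
Here is the sketch promised in step 2: one overloaded prompt broken into single-task sub-prompts that can be sent separately and recombined. The sub-prompts are invented for the dress example, and `ask` is the hypothetical helper from the earlier sketch.

```python
# Breaking one overloaded prompt into single-task sub-prompts (illustrative only).
SUB_PROMPTS = [
    "List the standard size range for petite women's dresses (US sizes).",
    "Summarize typical fit considerations for a slender, petite build.",
    "Describe common length adjustments for heights of 4'11\" to 5'4\".",
]

def gather_insights(ask):
    """`ask` is any callable that sends one prompt to an LLM and returns text,
    such as the ask() helper sketched earlier."""
    return {prompt: ask(prompt) for prompt in SUB_PROMPTS}
```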

The TL;DR

Prompt tuning is all about making LLMs behave better on specific tasks. Creating soft prompts to interact with them is the starting point of what will be an evolving process, one that quickly teaches them to adapt and learn, which is what we want overall. The point of AI is to eliminate redundancies and allow us, the humans, to perform the tasks we enjoy and to be truly creative.

Prompt tuning is not without its challenges and limitations, as with anything. I could get into the really deep stuff here, but this is a blog with a beer pun in it, so I just won’t. Generally speaking (and that is what I do here), prompt tuning is a very powerful tool to improve the performance of LLMs on very specific (not general) tasks. We need to be aware of the challenges associated with it, like the hill we climb with interpretability, and the reality that organizations that need to fine-tune a whole lot should probably look deeply at vector databases and pipelines. That, my friends, I will leave to folks far smarter than I.

Cheers!

Med-Tech, Fin-Tech, and MarCom – Oh My!


The whole field of technical writing, or professional writing, seems to have expanded like a giant infinite balloon in the last decade. Where previously it was a specialty, now it’s an entire field complete with sub-specializations.

How cool is that?!

I told the story just the other day that when I graduated from high school, I knew I was off to college to major in English. It had always been my best subject; I love reading, but I love writing more, and it was just the obvious choice. Except…I also asked for money instead of gifts because I was determined to buy my own computer. Other than some desktop publishing, I couldn’t envision what the two had in common, but I was connecting them somehow.

Had I only known then that I would spend my career as a technical writer, I probably would have gotten a much earlier start. I focused on essays and creative nonfiction, which I later taught until I discovered what I solidly believe is the best professional writing graduate program anywhere – at Carnegie Mellon. Indeed, the robotics and engineering monolith hosts an impressive writing program for students looking at Literary and Cultural Studies, Professional & Technical Writing, and Rhetoric. I opted for the last of the three and am happy with my choice, even though I landed a career in Prof & Tech.

Evangelizing this field is easy for me, even as it becomes more complicated. I can see clearly now that taking an Apple IIGS to college was a harbinger of my eventually becoming a software writer. I work now for a major software company and love what I do.

But wait – there’s more. (Please say that in an infomercial voice. You won’t be sorry.)

I wrote proposals for federal-level contracts for a while. I taught Human-Computer Interaction. I edited science articles. The breadth of writing is not unique to me, and it was very helpful.

Because the company I work for delivers software solutions for medical clinical trials. Eureka! Again, that college freshman had zero idea that she could combine a love of writing, an interest in computers, and a genuine interest in science. Back then, the marriage of all three seemed impossible.

And yet…

As a technical writer starting out, it’s perhaps not so important to “find focus” in a given industry. However, once you decide you indeed want to produce professional documentation, specializing in an interest is helpful. There are so many areas to choose from that it’s nearly impossible to NOT find one that is interesting as well as challenging. I would not, for example, find deep satisfaction in writing installation manuals for gas pipelines. But someone does. Someone enjoys that very much. I participated in a review panel for a writing competition and my assigned document was an infant incubator (baby warmer) user manual to be read by nurses. I found the content to be expertly delivered, and yet I had no actual interest in what the device does or how to use it. Give me something about gene therapy research and predictive modeling? I am IN!

Some writers find that they are fascinated by banking, taxes, estate planning and so on – welcome to tech writing for loads and loads of financial applications, from TurboTax to Betterment. The field is growing so rapidly that every investment tool, firm, and product needs a skilled writer. For those who find dollars and cents and amortization and net worth interesting, it’s a huge category, and you can specialize in all sorts of ways. Someone who digs marketing but doesn’t want to be a marketer will find a spot in a real estate app, a travel tool, or even music software like iTunes. They all need documentation. Every. Single. One.

What about the folks who say the documentation is superfluous? While it may be true that an app like iTunes or Netflix is so intuitive that it doesn’t need user doc, the moment a user is stymied and needs an answer, that documentation is one thousand percent necessary.

I often talked with my students about the wide variety of uses for their writing skills, many of which would leave plenty of time for creating poetry, fiction, and the like. Heck, even I write memoir in my spare time.

But it’s Sci-tech, Med-tech, and Bio-tech that butter my bread. If you find any area that interests you, I can guarantee there’s a technical document somewhere for you to write and edit, and it’s all about that field.

Minimum Viable Documentation: the Bare Minimum Doesn’t Have to be Bare.

Do less with more.

Practice minimalism.

Do “documentation triage.”

All of these are concepts we’ve heard about, and have been tasked with understanding and implementing across our documentation practices. But what are they and why do they matter? And how will creating “Minimum Viable Documentation” impact what we do and how we do it?

We learned ages ago in the software development world that “perfect is the enemy of done.” That applies to writing as well. We can move participles and conjunctions around all day and still never achieve perfection. As a professor, I told every single one of my students, usually on day one, that they would never receive a 100% on any paper they submitted to me. I wholeheartedly believe that there is always room to improve. I think the best writers throughout the history of time would agree with me. As a teaching tool, I often used E. B. White’s “Once More to the Lake” as an example of how revisiting a piece of writing many years later can fundamentally shift it for the better without trying to change it in tone or style whatsoever.

So how do we create documentation that is good enough without sacrificing our standards?

The first step, according to esteemed technical writer Neal Kaplan, is to “prioritize ruthlessly.” It’s vital to look at what actually needs to be documented, and focus on that. The rest is, as they say, gravy.

When we are early in our writing careers, and sometimes just when we are early in our writing projects, we think it is important to document everything. No step or function should be relegated to the trash heap. We fail to plan, and therefore plan to fail. When we look at documentation as a broad-sweeping need for users, we miss the point. We lament that no one reads the documentation, but we keep writing it. Why? Are we hoping users will do an about-face? They won’t. So, as my high school English teacher reminded us time and again: plan your work and work your plan.

That planning is what Kaplan calls triage. I just call it good practice.

  1. Start planning long before you start writing.

2. Give every task an estimated level of effort – even if it’s a guess, start guessing.

3. Use documentation architecture – design your plan, and plan to grow.

4. Stand by your plan, but be open to changing it.

See how not all of those numbered items are aligned? I chose that architecture because the first one is vital; the others then follow. Plan. Plan. Plan. They are each important, though. Following these four will result in documentation that gets the job done, doesn’t leave bare spots, and isn’t full of extra bells and whistles.

You’d never hire a contractor to work on your house without an estimated level of effort. You want to know how long that project will take and how difficult it will be. Your contractor lets you know it will be a couple of days, or a couple of weeks. It is an estimate, not a firm commitment, but it sort of time-boxes the work to be done. It’s rarely spot-on. You might have an architect who shows you how fantastic a skylight would be, but once your contractor starts the work, he tells you that it will be a challenge to complete based on the roof, the seal, whatever. So you nix the skylight.

Your contractor takes a dive into the work and then recognizes that perhaps you would prefer the sink on the other side of the room, or maybe you want a different fixture than the one you saw in the store. You can put your foot down based on time, cost, and aesthetics. It may be that the extra time required to purchase a new fixture will change the look of the room enough that the pivot was worth it.

On the other hand, maybe the new fixture is out of budget or will take too long to install. It’s something you can see having installed in a year or two, but it isn’t necessary right now. Okay, so you know that down the road you have some more work to do, but getting the standard fixture installed keeps you on time, on budget, and your room is usable.

Usable.

That is the key word in all of this. Usable.

Our “bare minimum” is hardly bare when our documentation is usable. Minimum viable documentation is choosing to follow the design plan your architect and contractor gave you, but adding all of the decorative touches later. You’ll have a fully functioning and attractive room with the key elements. You can add the flourish later, if you add it at all.

I’ve extended this metaphor about as far as I want to, and if I continue to write I will violate the basic principles of conciseness. Tell your teams that Minimum Viable Documentation is not the writing equivalent of a rough draft. It is the equivalent of writing a 2-page paper when you recognize that a dissertation is unnecessary. You can be sure that every comma, bullet, and number is correct if you write briefly. The viable minimum is so much better than the bare minimum, and worlds better than the alternative.

Let Me Ask My Analyst.

Such a phrase from the seventies, right? Am I dating myself? Maybe, but hey, I was just a kid back then. I’m all grown-up now, and gaining insights by the day.

The goal of insights is, just as it was in the seventies when everyone was seeing the original “analysts,” better decision-making. Not much has changed.

I take that back.

A whole lot has changed. We couldn’t have imagined (or could we?), back when computations were done by punch cards, that we’d no longer be shrink-wrapping user manuals but would instead be looking to true trend analysis to see what our users want from our writing. Now, we are in the realm of truly seeking what patterns in our content are useful and what can go by the wayside, because we know, for instance, that our users no longer need to be told to enter their credentials upon login. They get it. They are familiar with creating passwords, and the concepts that were once totally unfamiliar are now second nature.

It’s a whole new frontier.

Now we are in a new domain.

Companies ask us not to be writers, actually, but content creators, content strategists. I used to scoff at that title, because anyone could use it. There is no credentialing: a licensed content strategist is a unicorn. And yet, real industries call for those who can produce (and produce well) two types of content: structured and unstructured. Yikes!

Structured content can be found. It has a home, a place; it is text-based, as in email and office or web-based documentation. Unstructured content may include an archive of videos, or even non-text-based things like images and diagrams. There is a huge volume of this type of content, and yet it still falls under the purview of us, the content creators.

Those of us who used to be called “technical writers” or even “document specialists” or something like that find ourselves of course wrangling much more than documentation, doing much more than writing. So the issue became: how do we know if what we are doing works? Are we impacting our audience?

That’s where analysis comes into play and matters. Really, really matters.

Why spend hour upon hour creating a snazzy video or interactive tutorial if no one will watch or, dare I say, interact?

That’s where content analytics comes in.


The whole goal of analytics is for us to know who is reading, watching, learning – and then we can improve upon what we’re building based on those engagements. It does little good to create a video training series, only to discover that users don’t have an internet connection on site to watch YouTube. Similarly, it’s not helpful to write detailed documentation and diagrams for users who prefer to watch 2-3 minute video step-throughs. It’s all about knowing one thing: audience. The essential element, always.

The central theme in Agile development, after all, was learning to understand the customer, so the essential element in designing better content, sensibly, ought to be the same thing. When we hunker down and learn what the customer really wants, we develop not just better software, but better content of all types.

With metrics on our side, our companies can identify just what content has real value, what has less, and what can really be dropped altogether. Historically, academic analysis was held to notions of things like how many times a subject blinked while reading an article. (Ho-hum.) Now, though, we can measure things like click-throughs, downloads, pauses during video, hover-helps, and more. How very, very cool.
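
For a sense of what measuring those engagements might look like behind the scenes, here is a hedged pandas sketch; the event log, file name, and column names are all invented for illustration.

```python
# Toy content-analytics aggregation (assumed CSV with page, event, timestamp columns).
import pandas as pd

events = pd.read_csv("content_events.csv")

engagement = (
    events[events["event"].isin(["click_through", "download", "video_pause"])]
    .groupby(["page", "event"])
    .size()
    .unstack(fill_value=0)
)

# Pages with many video pauses but few downloads might need shorter videos;
# pages with no click-throughs at all might be candidates for pruning.
print(engagement.sort_values("click_through", ascending=False).head(10))
```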


Historically, content analysis was slow, time-consuming, and frustrating, with limited accuracy. Now, though, we can measure the usefulness of our content almost as fast as we can produce it. Content analytics are now available in a dizzying array of fields, reflecting a vast pool of data. The level of detail is phenomenal. For example, I’ll get feedback on this post within hours, if I want. I’ll create tags and labels to give me data that lets me know whether I’ve reached the audience I want, whether I should pay for marketing, whether I might consider posting on social media channels, submitting to professional organizations, editing a bit, and so on. I may do all of those things or none of them. (Full disclosure: usually none, unless one of my kind colleagues points out a grievous error. I write for my own satisfaction and to sharpen my professional chops. Just sayin’.)

Believe you me, the domain of content analysis, in all areas, will grow and grow. Striking the perfect chord between efficiency and quality is not just on the horizon, it is in the room. AI-powered writing and editing, paired with the streamlined assurance that we’ve reached the proper balance of placement and need – it’s not hyperbole to say the future is here. It’s just a matter of turning to my ‘analyst’ to ask whether I’ve written my content well enough and delivered it properly.

My product teams, my business unit, and my company are all grateful. And my work shows it.

You got the “D”?


No matter what size your company is, or how well-known, I can assure you, every opportunity grasped or missed is the direct, straight-line result of a decision made or a decision delayed.

At lots of companies, decisions get stuck in a pipeline of waiting, approval, hemming and hawing, and sometimes they just wither and die on the vine, which is just plain sad to see. Call it bureaucracy, call it checks and balances, call it whatever you like, but it is the companies that are agile, that pivot quickly (stop me if I venture too far into buzzwords here, but they caught on for a reason), that survive and even thrive to defeat the others.

Making good decisions, and making them quickly, are the shining coins of successful businesses.

So why is a tech writer posting anything about them? I thought you’d never ask! You see, technical writing has ventured into the deep, lush forest of what now has the lovely, shiny name of “Product Operations” at some of those agile, savvy businesses. Those smart folks took a look around and figured out that the smart folks typing away were…wait for it…learning.

Yes, indeed. And they were right.

We writers have been busy doing, you guessed it, reading. We actually read the stuff we write, believe it or not, and some of it seeps in and we understand it. So when it came time to clear the bottlenecks of business, it sort of made sense to turn to the technical writers who’ve been sitting there reading and writing everything from user guides to employee handbooks all these years and ask them for some insight.

Some of us agreed to offer an opinion or two, and Product Operations was born.

One of the repeated mantras, week in and week out, that I offered to my teams in this process was:

If you oppose, you must propose.

Yep – learn it, commit it to memory. Take it to your teams. Feel free to swipe it. I think I stole it from someone else, and I’m not giving them credit here, so you don’t even have to say you nicked it from me, you can just take it and use it and hog all the credit. (See if it gets you a bit of a promotion or a raise. That’d be nice.)

What it means is, you can point out where something goes wrong – a process, a system, a way of doing things. Go ahead and say it doesn’t work. But then – you have to pony up a way to fix it. We may not use that way, but we won’t ditch it right out of the gate. The most important thing is, you can’t just complain. If you don’t have some sort of solution in mind, even a solution that, in the end, doesn’t fit, you have to keep your trap shut. Don’t point out the flaw until you’ve conjured up a workaround. Even a bad workaround.

A less-than-great decision executed quickly is usually better than no decision executed, or a good decision executed slowly. I mean, a bad decision is going to be a bad decision no matter what. But if you have a brilliant idea and it takes you five years to execute on it, do you really think it was worth it? You can tweak and modify as you go if you just get out of the gate. This is not cutting your bangs we’re talking about here, and even if it were, they’d grow back. Usually we are talking about developing a new software program or implementing meeting-free Tuesdays. Start building Rome right away. By the time you get the blueprints made, you’ll find the perfect bricklayer, I assure you.

Start mixing mortar.

Right there, that’s the trick. Pick up a stick, or a shovel, or whatever the implement is, and choose. It’s just mortar. You can’t stir it once it dries. So, begin making decisions on a small scale in order to bring about success. As my grandmother once told me, everything can be fixed except death.

I think she might have been exaggerating a bit, so I don’t take it quite that far, but I have determined as a writer that until I hit “send” or “publish,” I can just decide to write, I can move words around, I can float ideas and concepts, and that to do so is never bad.

In the mode of Product Operations, I started looking at things like information architecture, content strategy, and the blend of systems as a necessary way to get decisions made.

Lo and behold, it worked.

My team began to look at the gaps in our processes and realized that although we each knew what to do when there was a software outage, and what to do when we had to deliver “less than positive” news to a partner or affiliate, we had never concretized any of it anywhere. So, off we went to create an “Incident Management Guide.”

Similarly, although we had our system down cold for how to manage time and processes in-house, we realized that if a significant part of our team left by, say, taking new opportunities with other companies, the knowledge vacuum would be fierce. The amount of just “stuff” we carried around in our brains about the day-to-day that kept the job pleasant and smooth was astonishing. So we set out to make it a thing. This was despite our tendency as a crew to just be renegade in our approach to daily office behavior, where a meeting was just as likely to be over coffee as it was to be in a conference room. Suddenly, processes and procedures were born.

Some decisions matter, some don’t.

All of this is to say, throughout my years (and they are numerous) I have found that there’s a certain transition from small operation to large, from laid-back attitude to not, and back again, that says you somehow gotta put some stuff in writing even when you are pretty sure you don’t need to. It just makes life easier when you deputize people to have The “D” and to turn around that decision more easily because they know they can. To build a team with that mojo because it says in a manual (online or in print, with chill or without – you do you) that they can. We built a very nifty team of people, and then we were really able to get stuff done on a big scale by saying, “here’s how we get stuff done.”

I’m just pointing out that it makes life easier if you know:

Do you got the “D”?

It’s All About AI. The Data Told Me So.

A conversation with a (junior) colleague this morning started off with “How did you decide to reformat your Best Practices Guide?” and moved on to things like “But how did you know that you should be working in Artificial Intelligence and VUI for this search stuff? I mean, how do you know it will work?”

I couldn’t help but chuckle to myself.

“Rest assured,” I said. “Part of it is just that you know what you know.  Watch your customers. Rely on your gut. But more importantly, trust the data.” The response was something of a blank stare, which was telling.

All too often, tech writers – software writers especially, it seems, although I do not have the requisite studies to support that claim – are too steeped in their actual products to reach out and engage with customer usage data, to mine engagement models and determine what their users want when it comes to their doc. They are focused on things, albeit important things, like grammar, standards, style guides, and so on. This leaves little time for customer engagement, so it falls to the bottom of the “to-do” list until an NPS score shows up and that score is abysmal. By that time, if the documentation set is large (like mine), it’s time for triage. But can the doc be saved? Maybe, maybe not.

If you’re lucky like I am, you work for a company that practices Agile or SAFe and you write doc in an environment that doesn’t shunt you to the end of the development line, so you can take a crack at fixing what’s broken. (If you don’t work for a rainbow-in-the-clouds company like mine, I suggest you dust off your resume and find one. They are super fun! But, I digress.)

Back to the colleague-conversation. Here’s how I knew to reformat the BP Guide that prompted the morning conversation:

I am working toward making all of my documentation consistent through the use of templating and accompanying videos. Why? Research.

According to Forrester, 79% of customers would rather use self-service documentation than a human-assisted support channel. According to an Aspect CX survey, 33% said they would rather “clean a toilet” than wait for Support. Seriously? Clean a toilet? That means I need to have some very user-friendly, easily accessible documentation that is clear, concise, and usable. My customers do NOT want to head over to support. It makes them angry. It’s squicky. They have very strong feelings about making support calls. I am not going to send my customers to support. The Acquity Group says that 72% of customers buy only from vendors whose product (support and documentation) content they can find online. I want my customers’ experience to be smooth and easy. Super slick.

In retail sales, we already know that the day your product is offered on Amazon is the day you are no longer relevant in the traditional market, so it’s a good thing that my company sells software by subscription and not washers and dryers. Companies that do not offer subscription models or create a top-notch customer experience cease to be relevant in a very short span of time, thanks to changing interfaces.


I’m working to make the current customer support channel a fully automatable target. Why? It is low-risk, high-reward, and the right technology can automate the customer support representative out of a job. That’s not cruel or awful; it’s exciting, and it opens new opportunities. Think about the channels for new positions, new functions that support engineers. If the people who used to take support calls instead now focus on designing smart user decision trees based on context and process tasks as contextual language designers, it’s a win. If former support analysts are in new roles as Voice of the Customer (VoC) Analysts, think about the huge gains in customer insights, because they have the distinct ability to make deep analyses into our most valuable business questions rather than tackling mundane how-to questions and daily fixes that are instead handled by the deep learning of a smart VUI. It’s not magic; it’s today. These two new job titles are just two of the AI-based fields conjured by Joe McKendrick in a recent Forbes article, so I am not alone in this thinking by far.

His thinking aligns with mine. And Gartner predicts that by 2020, AI will create more jobs than it eliminates.

So as I nest these Best Practices guides, as I create more integrated documentation, as I rely on both my gut and my data, I know where my documentation is headed, because I rely on data. I look to what my customers tell me. I dive into charts and graphs and points on scales. The information is there, and AI will tell me more than I ever dreamed of…if I listen closely and follow the learning path.


Refiguring The Primary Measure of Progress

In Agile Software Development, “working software is the primary measure of progress” and the manifesto values “working software over comprehensive documentation.” That is all well and good, but as a writer, I often pause on that one word – comprehensive. I take a moment there and wonder who determines what comprehensive means, and whether sometimes we leave the customer, and the customer’s needs, in the dust when we use that word.


Before every developer everywhere has a head explosion, I think we can all agree that expansive documentation is silly and frivolous. So maybe I would swap out comprehensive for expansive. Perhaps I’d have chosen “frivolous documentation,” or “needless documentation.” I’m just not sure I would have chosen the word comprehensive. I get what the manifesto is going for. I do. I want to create lean, usable doc every time. I don’t want to give more than is needed. I want to respond to change fast. My goal is accurate, deliverable doc that addresses stakeholder needs. I suppose I would like to think that, by definition then, my doc is complete, or…comprehensive…in that it includes everything a customer needs. But I agree that I don’t want it to include anything more. I just want to avoid coming up short.

What if I rewrote the twelve principles from a doc-centric focus? Would they work just as well?

  1. My highest priority is to satisfy the customer through early and continuous delivery of valuable documentation.
  2. Welcome changing requirements, even late in documentation. Agile processes harness edits for the customer’s competitive advantage.
  3. Deliver accurate documentation frequently, from a couple of weeks to a couple of months, with a preference for the shorter timescale.
  4. Business people, developers, and writers must work together daily throughout the project.
  5. Write documentation around motivated individuals. Give them the environment and support they need and trust them to get the doc written.
  6. The most efficient and effective method of conveying information to and within a writing team is face-to-face conversation.
  7. Clear, concise documentation is the primary measure of progress.
  8. Agile processes promote sustainable writing. All team members should be able to maintain a constant pace indefinitely.
  9. Continuous attention to linguistic excellence and good design enhances agility.
  10. Simplicity – the art of maximizing the amount of writing not done or words not written – is essential.
  11. The best architectures, sites, and manuals emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to create more effective doc and then adjusts accordingly.

Okay, so some of this is silly, and I just wasted twelve lines by playing it out all the way to the end, but it was for good reason. Continuous deployment can happen not just with developing software but with the documentation that is part of that software; instead, doc is often “kicked to the curb” by many teams through a misinterpretation of that one element of the original manifesto.

It’s not that documentation should be cast aside; it’s that comprehensive documentation is being misunderstood as frivolous or extraneous documentation.

I did not realize quite how pernicious this issue was until I was giving a workshop to a small class of new developers from across my company, hailing from a variety of offices, and they looked at me funny when I described some of how my team does doc (we are pretty good at the whole agile thing, but sometimes we regress into “mini-waterfalls” and I hate them). One of the young devs was astonished that I manage to get my developers to hand over documentation early in the process so that I have it in a draft state while the code is still being written, not after the code is all in place. It’s a struggle, I’ll admit, but when this happens, the result is a joyous celebration of shared workload and well-written doc. We can collaborate, change, shift and learn. And the sprint goes more smoothly with only minor changes at the end, not a doc dump in week 4.

To teams who are not doing this: cut it out with the waterfall, guys. Doc is essential, and your users need it. Otherwise, they are calling support and costing your company thousands with every call. Change your practice and build your doc while you build your product.

You might be asking, “How can you possibly write the doc when you haven’t written the code?”

You are not the first person to ask this, trust me. The answer is pretty simple. A long, long time ago, you couldn’t. Or, at least, it wasn’t wise to, because it was a redoubling of work. When writers put their stuff into PDFs or Word documents and then had to make major changes, it meant a bunch of editing and rewriting. Therefore, developers got in the habit of writing all of the code, then examining the processes and hunkering down to crank out the doc. Fair enough. Now, though, we have wiki spaces and collaborative tech writing tools that allow inline editing and let developers look at the cool formatting and linking that tech writers have done with our work.

And – before the code even gets written, there is a design plan in place, and usually a design document to go with it. There are code specifications, right? You can have a nice chat with your friendly tech writer and go over this, either through a face-to-face (see Agile point #6) or via any of the vast number of other tools designed to communicate with your team. Before a developer starts coding, there is a project plan – share that plan. Once the general information has stabilized, it’s okay to let the writer have at it. The benefit is that the writer is having her way with the general plan while the developer is coding away. At the end of both work days, the writer and developer have each created something. In a typically short iteration, it is unlikely that the coding will change significantly, so the two can touch base frequently to mark changes (See Agile point #4).

Are there risks to this process? Of course, just like there are with waterfall. Remember that with waterfall, there were a fair number of times that programming crept right up to the deadline and documentation was hastily delivered, and could therefore be sloppy and lacking. And in this method, it is far easier to tell you about writing documentation continuously than it is to actually do it. It takes time, it takes effort, and it takes dedication – because the primary risk for doc is the same as it is for the software: that the customer’s need will change midstream. That problem was (more or less) solved by short iterations in development, and it is (more or less) solved the same way in documentation.

The benefit is this – by writing the doc alongside the development, you can be sure that you deliver the doc in sync with the product. You’ll never send the product out the door with insufficient instruction, and you will never cost your company thousands of dollars in support calls because your customers don’t understand how the product works or how to migrate it. But deliver a product without comprehensive – yes, I’m back to that word – deliver it without a complete doc set, and you may regret it. Trial and error is okay if you want to see how fast Mario can get his Kart down the hill in order to beat Luigi and save the princess, but do you really want to rely on that when the client is a multi-million-dollar bank?

I’ve been teaching writing and writing processes for a very long time, and believe me, the action of draft, revise, revise, revise is not new.

It’s just Agile.