Why the Humanities Matter in STEM

Photo credit: Prateek Katyal, 2023.

A 2017 article in the Washington Post discussed how now, in the age of big data and STEM (Science, Technology, Engineering, and Mathematics), liberal arts and humanities degrees are perceived as far less valuable in the marketplace. I saw the same opinion held strongly at both universities where I taught English. Many, many students believed wholeheartedly that the only thing they could do with a degree in English was teach.

I was hard-pressed to convince them otherwise since I was, in fact, teaching.

The Post article goes on to argue, however, that there is abundant evidence that humanities and liberal arts degrees are far from useless.

When I started graduate school in 2007 at a university that beautifully balances the arts and sciences (shout out to you, Carnegie Mellon!), my advisor recommended I take “the Rhetoric of Science.” I meekly informed her that I wasn’t really into science. I thought it would be a bad fit, that I would not fare well, and that my resulting grade would reveal my lack of interest. She pressed, saying there was a great deal to learn in the class and that it wasn’t “science-y.”

She was absolutely right. I was fascinated from the start. The course focused on science as argument, science as rebuttal, but most of all science as persuasive tool. Or, at least, the persuasiveness came from how we talk and write about science. My seminar paper, of which I remain proud, was titled: “The Slut Shot. Girls, Gardasil, and Godliness.” I got an A in the class, but more importantly I learned how strong the connection between language and science really is.

The National Academies of Sciences, Engineering, and Medicine urge a return to a broader conception of how to prepare students for a career in STEM. They contend that the hyper-focus on specialization in college curricula is doing more harm than good, and that broad-based knowledge and study of the humanities produces better scientists. There is certainly a goal among academics to make students more employable upon graduation, and yet there is consensus that exposure to the humanities is a net benefit.

The challenge is that there’s no data. Or, limited data anyway. The value of an Art History course or a Poetry Workshop at university is hard to measure against the quantifiable exam scores often produced in a Chemistry or Statistics class.

In a weak economy, it’s easy to point to certifications and licenses over the emotional intelligence gained by reading Fitzgerald or Dickinson. We find, though, that students (and later employees) who rely wholly on the confidence that science and technology will provide answers, holding an uncritical belief that the solution to every problem lies in the technology, are coming up short. Adherence to science as the ultimate truth provides little guidance in the realm of real-world experience.

In short, not all problems are tidy ones.

After all, being able to communicate scientific findings is not just icing on the cake. We don’t get very far if we have results but do not know how to evangelize them.

In American universities right now, fewer than 5% of students major in the humanities. We’ve told them that’s no way to get a job. The more we learn about Sophocles, Plato, Kant, Freud, Welty, and others, the more prepared we are to take on life’s (and work’s) greatest challenges. It is precisely because the humanities are subversive that we need to keep them at the heart of the curriculum. Philosophical, literary, and even spiritual works are what pick at the underpinnings of every political, technological, and scientific belief.

While science clarifies and distills and teaches us a great deal about ourselves, the humanities remind us how easily we are fooled by science. The humanities remind us that although we are all humans, humans are each unique. Humans are unpredictable. Science is about answers and the humanities are about questions. Science is the what and the humanities are the why.

If we do our jobs well in the humanities, we will have generations to come of thinkers who question science, technology, engineering, and math.

And that is as it should be.

I welcome discussion about this or any other topic. I am happy to engage via comment or reply. Thanks for reading.


Med-Tech, Fin-Tech, and MarCom – Oh My!

Photo by Nick Fletcher.

The whole field of technical writing, or professional writing, seems to have expanded like a giant infinite balloon in the last decade. Where previously it was a specialty, now it’s an entire field complete with sub-specializations.

How cool is that?!

I told the story just the other day that when I graduated from high school, I knew I was off to college to major in English. It had always been my best subject, I love reading but I love writing more, and it was just the obvious choice. Except…I also asked for money instead of gifts because I was determined to buy my own computer. Other than some desktop publishing, I couldn’t envision what the two had in common, but I was connecting them somehow.

Had I only known then that I would spend my career as a technical writer, I probably would have gotten a much earlier start. I focused on essays and creative nonfiction, which I later taught until I discovered what I solidly believe is the best professional writing graduate program anywhere – at Carnegie Mellon. Indeed, the robotics and engineering monolith hosts an impressive writing program for students looking at Literary and Cultural Studies, Professional & Technical Writing, and Rhetoric. I opted for the last of the three and am happy with my choice, even though I landed a career in Prof & Tech.

Evangelizing this field is easy for me, even as it becomes more complicated. I can see clearly now that taking an Apple IIGS to college was the harbinger that I would eventually be a software writer. I work now for a major software company and love what I do.

But wait – there’s more. (Please say that in an infomercial voice. You won’t be sorry.)

I wrote proposals for federal-level contracts for a while. I taught Human-Computer Interaction. I edited science articles. That breadth of writing experience is not unique to me, and it has been very helpful.

Because the company I work for delivers software solutions for medical clinical trials. Eureka! Again, that college freshman had zero idea that she could combine a love of writing, an interest in computers, and a genuine interest in science. Back then, the marriage of all three seemed impossible.

And yet…

As a technical writer starting out, it’s perhaps not so important to “find focus” in a given industry. However, once you decide you indeed want to produce professional documentation, specializing in an interest is helpful. There are so many areas to choose from that it’s nearly impossible to NOT find one that is interesting as well as challenging. I would not, for example, find deep satisfaction in writing installation manuals for gas pipelines. But someone does. Someone enjoys that very much. I participated in a review panel for a writing competition and my assigned document was an infant incubator (baby warmer) user manual to be read by nurses. I found the content to be expertly delivered, and yet I had no actual interest in what the device does or how to use it. Give me something about gene therapy research and predictive modeling? I am IN!

Some writers find that they are fascinated by banking, taxes, estate planning, and so on – welcome to tech writing for loads and loads of financial applications from TurboTax to Betterment. The field is growing so rapidly that every investment tool, firm, and product needs a skilled writer. For those who find dollars and cents and amortization and net worth interesting, it’s a huge category, and you can specialize in all sorts of ways. Someone who digs marketing but doesn’t want to be a marketer will find a spot in a real estate app, a travel tool, or even music software like iTunes. They all need documentation. Every. Single. One.

What about the folks who say the documentation is superfluous? While it may be true that an app like iTunes or Netflix is so intuitive that it doesn’t need user doc, the moment a user is stymied and needs an answer, that documentation is one thousand percent necessary.

I often talked with my students about the wide variety of uses for their writing skills, many of which would leave plenty of time for creating poetry, fiction, and the like. Heck, even I write memoir in my spare time.

But it’s Sci-tech, Med-tech, and Bio-tech that butter my bread. If you find any area that interests you, I can guarantee there’s a technical document somewhere for you to write and edit, and it’s all about that field.

AI Isn’t Out for Your Job

Because We Can’t Even Agree On What It Is

Photo courtesy of Possessed Photography.

There is such a swirl of discussion around Artificial Intelligence and whether it will supplant human work. This largely stems from a real misunderstanding of what AI is and what it can do. According to IBM, Artificial Intelligence is: “the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

There’s a lot packed into that definition. No wonder people don’t know quite what to make of it!

It seems to me that the biggest challenge is that people mistake Intelligence for Sentience. While humans certainly possess both, it’s not even a realistic goal in some ways to presume that we can master Artificial Sentience.

What do I mean by this?

In a nutshell, “sentience is the ability to feel, sense or experience perceptions subjectively.” Those working in the field believe that capability to be decades away, if achievable at all. Building self-awareness into a machine is an unlikely undertaking, anyway. To reach that state, researchers would have to build programs that achieve generalized intelligence: a single learning machine with the ability to problem solve, recognize environments and changing requirements, and keep learning on its own. Even once that capacity is established, we would only be reaching the edges of what it might take to learn consciousness.

Therein lies the rub. A machine is unlikely to “learn” how to “feel,” since the uniquely human capability for complex emotion is perhaps the most difficult thing of all to teach. Just as a machine has no conception of physical pain or healing, the ability to feel attachment or loss is not necessarily a programmable skill.

No one in the field claims that a Roomba has any particular attachment to the floors it vacuums, despite the fact that it does indeed learn what the obstacles to successful vacuuming are. No one believes that a robot vacuum enjoys the sensation of a 120-volt current running through its wires, even if we imagine those wires to function much like veins.

Robots still can’t feel.

Robots can mimic, absolutely. They are fantastic at aping what humans do, if they are taught to do so. Within series after series of If/Then statements, a computer can perform all sorts of functions. My robot vacuum “understands” that If there is a chair leg in the way, it should rotate 30 degrees and try again. If the block is still there, it will rotate again and again until it is able to move past the obstacle. With the most recent robot vacuums (to stick with this example), the vacuum “knows” that the chair is there and “learns” to avoid it, unless it arrives at that obstacle and finds it has been moved. That said, the vacuum is not frustrated by the obstacle, nor is it relieved when the chair is moved to another room.
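
To make that If/Then idea concrete, here is a minimal sketch in Python of the kind of rule-based loop a robot vacuum might run; the sensor is simulated and the function names are my own inventions, not any vendor’s API. The vacuum responds to the chair leg, but nothing in the code resembles frustration or relief.

```python
import random

# A hypothetical, simulated If/Then obstacle routine. The "sensor" is random
# here; a real vacuum would read hardware instead.

def path_is_blocked() -> bool:
    """Simulated bump sensor: the chair leg is in the way about 40% of the time."""
    return random.random() < 0.4

def handle_obstacle(max_attempts: int = 12) -> str:
    """If the path is blocked, rotate 30 degrees and check again."""
    heading = 0
    for _ in range(max_attempts):
        if not path_is_blocked():            # If clear, Then move on.
            return f"path clear at {heading} degrees, moving forward"
        heading = (heading + 30) % 360       # If blocked, Then rotate and retry.
    return "gave up; the obstacle never cleared"

print(handle_obstacle())
```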

I think of it often as similar to what I learned in my first college linguistics course: animals can communicate, but they do not have language. That is, my dog can let me know when he is happy, but he cannot tell me that his father was poor but honest. Machines can learn, but the challenge is that they cannot emote. They can mirror emotion, sure, but to have original and unique feelings about their situations? No. At least not yet, and not for the conceivable immediate future.

But that does not mean that machines cannot learn. They very much can. Just as a third grader can sit quietly, ingesting information and retaining it for future reference, a machine can do the same. A student learning multiplication tables soon learns that 5 x anything can only result in a number ending in 5 or 0. A machine has the capacity to learn and retain that information and apply it to millions of situations, millions of times faster than the human mind can. A machine can aggregate vocabularies, numerical sets, geographic data, and more, far more effectively and efficiently than the human brain.
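
As a toy illustration of that kind of rote rule, here is a quick Python check of the “multiples of 5 end in 5 or 0” fact across a million cases; the point is only the speed and scale, not any sophistication.

```python
# Toy illustration: apply a learned arithmetic rule ("5 x anything ends
# in 5 or 0") across a million cases in a blink.

def ends_in_five_or_zero(n: int) -> bool:
    return n % 10 in (0, 5)

# Verify the rule for the first million multiples of 5.
assert all(ends_in_five_or_zero(5 * k) for k in range(1, 1_000_001))
print("The rule holds for the first million multiples of 5.")
```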

That’s why machine learning and artificial intelligence are thrilling and fantastic.

But until that computer program can harness things like anticipation (that the car in the next lane looks like it will merge into yours) or fear (that the spider crawling up the side has the potential to cause damage), and respond accordingly, our jobs are all safe. Machines can certainly learn to edge over if a car is too close or to restart if an interloper is nearby, but they do not have the capacity for self-awareness.

We mustn’t confuse intelligence with sentience. Once we stop doing that, we’ll welcome the advancements of AI and machine learning, harnessing them for what they are.

But remember, too, that until the machine can understand notions like initiative, relaxation, and joy, it remains a tool for us to use. And remember that it may well be the mindless robots that are our biggest threat, not the ones that could one day feel and thus bring us things like helpfulness, empathy, and emotional support.

Susan is a technical content strategist and researcher of all things automated. She lives in Baltimore with her two dogs and snuggles with them when she’s not on her bike or swimming laps in the pool. She is an avid traveler and reader of nonfiction. Subscribe to this blog to learn more about technical writing, communicating, and machine learning within those domains.

Let Me Ask My Analyst.

Such a phrase from the seventies, right? Am I dating myself? Maybe, but hey, I was just a kid back then. I’m all grown-up now, and gaining insights by the day.

The goal of insights is, just as it was in the seventies when everyone was seeing the original “analysts,” better decision-making. Not much has changed.

I take that back.

A whole lot has changed. We couldn’t have imagined (or could we?), back when computations were done on punch cards, that we’d no longer be shrink-wrapping user manuals, but instead looking to true trends analysis to see what our users want from our writing. Now we are in the realm of truly seeking out which patterns in our content are useful and what can go by the wayside, because we know, for instance, that our users no longer need to be told to enter their credentials upon login. They get it. They are familiar with creating passwords, and the concepts that were once totally unfamiliar are now second nature.

It’s a whole new frontier.

Now we are in a new domain.

Companies ask us not to be writers, actually, but content creators, content strategists. I used to scoff at that title, because anyone could use it. There is no credentialing: a licensed content strategist is a unicorn. And yet, real industries call for those who can produce (and produce well) two types of content: structured and unstructured. Yikes!

Structured content can be found. It has a home, a place; it is text-based, as in email and office or web-based documentation. Unstructured content may include an archive of videos, or even non-text-based things like images and diagrams. There is a huge volume of this type of content, and yet it still falls under the purview of us, the content creators.
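
If it helps to picture the split, here is a tiny, made-up content inventory in Python that tags each asset as structured or unstructured by format. The file types and categories are illustrative assumptions on my part, not an industry standard.

```python
# A hypothetical content inventory: tag each asset as structured (text-based
# docs, email, web help) or unstructured (video, images, diagrams).

STRUCTURED_FORMATS = {"html", "md", "xml", "eml", "dita"}
UNSTRUCTURED_FORMATS = {"mp4", "png", "svg", "jpg"}

inventory = [
    ("install-guide.md", "md"),
    ("release-notes.html", "html"),
    ("onboarding-walkthrough.mp4", "mp4"),
    ("architecture-diagram.svg", "svg"),
]

for name, fmt in inventory:
    if fmt in STRUCTURED_FORMATS:
        kind = "structured"
    elif fmt in UNSTRUCTURED_FORMATS:
        kind = "unstructured"
    else:
        kind = "unknown"
    print(f"{name}: {kind}")
```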

Those of us who used to be called “technical writers” or even “document specialists” or something like that find ourselves of course wrangling much more than documentation, doing much more than writing. So the issue became: how do we know if what we are doing works? Are we impacting our audience?

That’s where analysis comes into play and matters. Really, really matters.

Why spend hour upon hour creating a snazzy video or interactive tutorial if no one will watch or, dare I say, interact?

That’s where content analytics comes in.

Analytics measures. Photo credit: Stephen Dawson.

The whole goal of analytics is for us to know who is reading, watching, learning – and then we can improve upon what we’re building based on those engagements. It does little good to create a video training series, only to discover that users don’t have an internet connection on site to watch YouTube. Similarly, it’s not helpful to write detailed documentation and diagrams for users who prefer to watch 2-3 minute video step-throughs. It’s all about knowing one thing: audience. The essential element, always.

The central theme in Agile development, after all, is learning to understand the customer, so the essential element in designing better content, sensibly, ought to be the same thing. When we hunker down and learn what the customer really wants, we develop not just better software, but better content of all types.

With metrics on our side, our companies can identify just what content has real value, what has less, and what can be dropped altogether. Historically, academic analysis was limited to things like counting how many times a subject blinked while reading an article. (Ho-hum.) Now, though, we can measure things like click-throughs, downloads, pauses during video, hover-helps, and more. How very, very cool.
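
Here is a rough sketch of what that measurement can look like in practice: a few lines of Python that roll an imaginary event log of click-throughs, downloads, video pauses, and hover-helps up into per-asset counts. The event names and log shape are stand-ins I invented, not any particular analytics platform.

```python
from collections import Counter

# Hypothetical event log: (content_asset, event_type) pairs, the sort of thing
# an analytics pipeline might emit. All names are invented for illustration.
events = [
    ("install-guide", "click_through"),
    ("install-guide", "download"),
    ("quickstart-video", "play"),
    ("quickstart-video", "pause"),
    ("quickstart-video", "pause"),
    ("api-reference", "hover_help"),
]

# Roll the raw events up into per-asset, per-event counts.
engagement = Counter(events)

for (asset, event_type), count in sorted(engagement.items()):
    print(f"{asset:17} {event_type:14} {count}")
```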

Multiple screens to choose from. Photo credit: Alexandru Acea on Unsplash.

Historically, content analysis was slow, time-consuming, and frustrating, with limited accuracy. Now, though, we can measure the usefulness of our content almost as fast as we can produce it. Content analytics are now available in a dizzying array of fields, reflecting a vast pool of data. The level of detail is phenomenal. For example, I’ll get feedback on this post within hours, if I want. I’ll create tags and labels to give me data that lets me know whether I’ve reached the audience I want, whether I should pay for marketing, whether I might consider posting on social media channels, submitting to professional organizations, editing a bit, and so on. I may do all of those things or none of them. (Full disclosure: usually none, unless one of my kind colleagues points out a grievous error. I write for my own satisfaction and to sharpen my professional chops. Just sayin’.)

Believe you me, the domain of content analysis, in all areas, will grow and grow. Striking the perfect chord between efficiency and quality is not just on the horizon; it is in the room. AI-powered writing and editing, paired with the confidence of knowing we’ve reached the proper balance of placement and need – it’s not hyperbole to say the future is here. It’s just a matter of turning to my ‘analyst’ to ask whether I’ve written my content well enough and delivered it properly.

My product teams, my business unit, and my company are all grateful. And my work shows it.

How Biased Are Your Release Notes?

Your Own Thinking Can Make a Big Difference

I should probably start with a primer on Cognitive Bias before accusing anyone of allowing such biases to impact their release-note writing. That’s only fair.

Photo courtesy of Ryan Hafey on Unsplash.

In a nutshell, Cognitive Biases are those thinking patterns that result from the quick mental errors we make when relying on our own brains’ memories to make decisions. These biases arise when our brains try to simplify our very complex worlds. There are a few types of cognitive bias, too, not just one. There’s self-serving bias, confirmation bias, anchoring bias, hindsight bias, inattentional bias, and a handful more. It gets pretty complex in our brains, and we’re just trying to sort things out.

These biases are unconscious, meaning we don’t intend to apply them, and yet we do. The good news is, we can take steps to learn new ways of thinking, so as not to mess things up too badly with all of this brain bias. Whew, right?

Now that we’ve established a basic taxonomy, let’s dive into how cognitive bias may (or may not) be creeping into things like your technical writing. It’s not just in your release notes; that was just clickbait. But sure, bias can permeate your release notes and nearly any other part of your documentation. (Probably not code snippets, but I won’t split hairs.)

Writers and designers must recognize their own biases so that they can leave them behind when planning. Acknowledging sets of biases helps to shelve the impulse to draft documentation that is shaped by what they already know, assets they bring to the table, or assumptions they make about experience levels.

Let’s start with self-serving bias. How might this little bug creep its way into your otherwise beautiful and purposeful prose?

This bias is essentially when we attribute success to our own skills, but failures to outside factors. In our writing, this can appear when we imply that by default, software malfunctions are based in user error. Rather than allowing for a host of other factors, often our writing recommends checklist items that are user-centric rather than system-focused. While it’s true that there are countless ways that users can mess up our well-designed interfaces, there are likewise plenty of points of failure in our programs. Time to ‘fess up.

Confirmation bias can be just as damaging, wherein as writers we craft and process information that merely reinforces what we already believe to be true. While this approach is largely unintended, it often ignores inconsistency in our own writing. We tend to read and review our own documentation as though it is error-free, both from a process perspective and a grammatical-syntactical one. That’s just illogical. And yet, we persist. The need for collaborative peer-review is huge, as even the very best, most detail-oriented writers will make a typo that remains uncaught by grammar software. Humans are the only substitute, and always will be.

We write anchoring bias into our documentation all too often. This bias causes us, frail humans that we are, to rely mostly on the first information we are given, despite follow-up clarification. If we read first that a release was created by a team of ten, but then later learn that it is being developed by a team of seven, we are impressed by the team of seven because they seem to be doing the work of ten. Now, it may be the case that when the ten-member team was working on it, three of them had very little to do. Yet we anchor our thinking in the number ten, and set our expectations accordingly.

The notion that we “knew it all along” is the primary component of hindsight bias. We didn’t actually know anything, but somehow, we had a hunch, and if the hunch works out, then we confirm the heck out of it and say we saw it coming.

This can happen all too often when we revise our technical writing, jumping to an outcome we “saw coming,” which causes us to edit, overlook, or wipe out steps on the path to getting there. We sometimes become so familiar with the peccadilloes of some processes that we only selectively choose what information to include, and that’s a problem.

Inattentional bias is also known as inattentional “blindness,” and it’s a real doozy when it comes to technical writing, believe it or not. It is the basic failure to notice something that is otherwise fully visible, because you’re too focused on the task at hand. Sound familiar? Indeed. Our writing can get all sorts of messed up when we write a process document, say an installation guide, and don’t pause to note things that can go wrong, exceptions to the rule, and any number of tiny things that can (and indeed might) occur along the way. What to do when an error message pops up? What if login credentials are missing? System timeout? Sure, there are plenty of opportunities for us to drop these into our doc, and many times we do, but I saved this for last because – this will not surprise anyone who knows me – in order to overcome this particular bias, all you need do is become best buddies with your QA person. Legit. I recommend kicking the proverbial tires of your software alongside your QA buddy to see how often you get a “flag on the play.” That will grab you by the lapels and wake you up from the inattention, for real.

Your users will thank you if you learn, acknowledge, and overcome these biases when you write. Is it easy? Not really. Is it necessary? I’d say so. As my Carnegie Mellon mentor told me more than once, and I’ve lived by these words for my whole career: “Nothing is impossible; it just takes time.”

Take time with your writing, and you’ll soon be bias-free.

It’s All About AI. The Data Told Me So.

A conversation with a (junior) colleague this morning started off with “How did you decide to reformat your Best Practices Guide?” and moved on to things like “But how did you know that you should be working in Artificial Intelligence and VUI for this search stuff? I mean, how do you know it will work?”

I couldn’t help but chuckle to myself.

“Rest assured,” I said. “Part of it is just that you know what you know.  Watch your customers. Rely on your gut. But more importantly, trust the data.” The response was something of a blank stare, which was telling.

All too often, tech writers – software writers especially, it seems, although I do not have the requisite studies to support that claim – are too steeped in their actual products to reach out and engage with customer usage data, to mine engagement models and determine what their users want when it comes to their doc. They are focused on things, albeit important things, like grammar, standards, style guides, and so on. This leaves little time for customer engagement, so that falls to the bottom of the “to-do” list until an NPS score shows up and that score is abysmal. By that time, if the documentation set is large (like mine), it’s time for triage. But can the doc be saved? Maybe, maybe not.

If you’re lucky like I am, you work for a company that practices Agile or SAFe and you write doc in an environment that doesn’t shunt you to the end of the development line, so you can take a crack at fixing what’s broken. (If you don’t work for a rainbow-in-the-clouds company like mine, I suggest you dust off your resume and find one. They are super fun! But, I digress.)

Back to the colleague-conversation. Here’s how I knew to reformat the BP Guide that prompted the morning conversation:

I am working toward making all of my documentation consistent through the use of templating and accompanying videos. Why? Research.

According to Forrester, 79% of customers would rather use self-service documentation than a human-assisted support channel. According to an Aspect CX survey, 33% said they would rather “clean a toilet” than wait for Support. Seriously? Clean a toilet? That means I need to have some very user-friendly, easily accessible documentation that is clear, concise, and usable. My customers do NOT want to head over to support. It makes them angry. It’s squicky. They have very strong feelings about making support calls. I am not going to send my customers to support. The Acquity Group says that 72% of customers buy only from vendors whose product content (support and documentation) they can find online. I want my customers’ experience to be smooth and easy. Super slick.

In retail sales, we already know that the day your product is offered on Amazon is the day you are no longer relevant in the traditional market, so it’s a good thing that my company sells software by subscription and not washers and dryers. Companies that do not offer subscription models or create a top-notch customer experience cease to be relevant in a very short span of time, thanks to changing interfaces.


I’m working to make the current customer support channel a fully automatable target. Why? It is low-risk, high-reward, and the right technology can automate the customer support representative out of a job. That’s not cruel or awful; it’s exciting, and it opens new opportunities. Think about the channels for new positions, new functions for support engineers. If the people who used to take support calls instead now focus on designing smart user decision trees based on context and process tasks as contextual language designers, it’s a win. If former support analysts are in new roles as Voice of the Customer (VoC) Analysts, think about the huge gains in customer insights, because they have the distinct ability to make deep analyses of our most valuable business questions rather than tackling the mundane how-to questions and daily fixes that are instead handled by the deep learning of a smart VUI. It’s not magic; it’s today. These two new job titles are just two of the AI-based roles conjured by Joe McKendrick in a recent Forbes article, so I am not alone in this thinking by far.

His thinking aligns with mine. And Gartner predicts that by 2020, AI will create more jobs than it eliminates.

So as I nest these Best Practices guides, as I create more integrated documentation, as I rely on both my gut and my data, I know where my documentation is headed. I look to what my customers tell me. I dive into charts and graphs and points on scales. The information is there, and AI will tell me more than I ever dreamed of…if I listen closely and follow the learning path.


Can Software Write This Better than Me?

In my job, I’m expected to use a variety of tools to ensure accuracy, word count, compliance, style – adherence to a host of things that keep me “in line” with the company’s overall design and standards. This causes me to wonder: as part of a team of software developers, could my team design software that automates the process of writing the documentation for their own software with enough accuracy that the documentation specialist goes the way of the dinosaur?
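
For a flavor of what those tools check, here is a bare-bones sketch of an automated pass over a single doc topic: a word count plus a scan for terms a style guide might flag. The flagged words and the length ceiling are invented for the example, not my company’s actual standards.

```python
# Bare-bones sketch of an automated style/compliance pass over one doc topic.
# The flagged terms and the word-count ceiling are illustrative only.

FLAGGED_TERMS = {"simply", "obviously", "just"}  # filler words a style guide might ban
MAX_WORDS = 250                                  # arbitrary topic-length ceiling

def check_topic(text: str) -> list[str]:
    """Return a list of human-readable findings for one topic."""
    findings = []
    words = text.lower().split()
    if len(words) > MAX_WORDS:
        findings.append(f"Topic is {len(words)} words; the limit is {MAX_WORDS}.")
    for term in sorted(FLAGGED_TERMS):
        if term in words:
            findings.append(f"Style guide flags the word '{term}'.")
    return findings

sample = "Simply click Save. The file is just stored automatically."
for finding in check_topic(sample):
    print(finding)
```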

I mean, the whole point of some of my products is to automate processes that human beings used to perform, and to automate them to such a degree of precision that people are hardly required to be cognizant, let alone present, for the actions these programs perform. We’ve designed such reliable systems that banking, health care, military information, and community design can count on big data to gather and maintain the necessary materials to run our daily lives, to store that data, to anticipate problems before they occur, and to rectify those problems with limited human intervention.

When you think, “But this type of automation cannot be applied to writing…writing requires critical thinking and analysis!” you would be correct. But you would be overlooking tools like the lexical analyzer Wordsmith and the automated writing tool also named Wordsmith. Created by Automated Insights, Wordsmith is the API responsible for turning structured data into prose – it literally takes baseline information and makes an article. Could I be out of a job?

Feed me some data, and I write articles, too, only you have to pay me and occasionally socialize with me. Not so for Wordsmith.
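
To show the basic trick without pretending this is Automated Insights’ actual API, here is a toy template-filling function in Python that turns one row of structured data into a sentence of prose. Tools like Wordsmith industrialize this move with far richer templates and variation.

```python
# Toy data-to-prose generation: fill a sentence template from structured data.
# A deliberately simple stand-in, not the real Wordsmith API.

game = {
    "home_team": "Pittsburgh",
    "away_team": "Cleveland",
    "home_score": 24,
    "away_score": 17,
}

def recap(row: dict) -> str:
    """Turn one row of game data into a one-sentence recap."""
    home_won = row["home_score"] > row["away_score"]
    winner = row["home_team"] if home_won else row["away_team"]
    loser = row["away_team"] if home_won else row["home_team"]
    high = max(row["home_score"], row["away_score"])
    low = min(row["home_score"], row["away_score"])
    return f"{winner} beat {loser} {high}-{low}, a {high - low}-point margin."

print(recap(game))
```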

The freakish thing about Wordsmith is its accuracy. I’ve studied a good bit about semantic language interpretation, and in my graduate program at Carnegie Mellon University, I dabbled in software interpretation of language, working with a pretty notable team on designing huge dictionaries of strings of language. The thing is, computers are great at reading: they can read at much faster rates than humans, they can digest huge chunks of information and store it at infinitely larger capacities than the human brain can, and their recall is spectacular. Skeptical? Just watch the amazing Jeopardy matches between IBM’s Watson and its human champions, and you’ll soon see that computing power can be harnessed to cull through the informational equivalent of roughly one million books per second. Humans just can’t keep up. Humans who write can’t touch that.

If computers learn a perfect formula for the Great American Novel, we are doomed.

Something to consider. Let’s try to keep this a secret from my bosses, shall we? My team of very excellent software developers may decide that this is a project worth undertaking, and the next thing you know, it won’t be just data that Wordsmith will be analyzing.

In the meantime, I will rely on the human eye and the need for context clues and interpretation that Wordsmith and Watson lack. I’ll count on the systems of reliability and emotion. I’ll count on what I know, what I learned in that high-functioning graduate program: a computer cannot tell the significant difference between these two exchanges:

Scene 1: A funeral home. A somber affair, all is quiet. A man says to a woman:

“Sorry for your loss.”

The appropriate response? She shakes his hand and nods, quietly.

Scene 2: A soccer game. A sunny afternoon. The breeze is blowing gently.

A boy says to a girl:

“Sorry for your loss.”

The appropriate response? She high-fives him and replies, “No sweat! Let’s grab some pizza! Woooo hooo!” As they tumble into a minivan, shouting jubilantly, kicking off their shoes.

No computer can decipher the difference between those two deliveries of “Sorry for your loss.”

Sorry, Wordsmith.

I’m going out for pizza.