‘Appropriate Use’

And Other Strange Terms in Generative AI

Google has for quite some time been the de facto word in all things web. Sure, Microsoft gave it a shot with Bing, but never quite broke through the hefty barriers the behemoth had set. So when the folks at Google set out to put some guardrails around AI-generated content, the world paid attention.

Google wrote search guidance about AI-generated content, focusing on ‘appropriateness’ rather than attribution. Their stance is that automation has long been used to generate content, and that AI can in fact assist in appreciable ways. One of the most interesting bits of the policy is its acknowledgment that bots aren’t the only things capable of creating propaganda or distributing misinformation. In fact, Google is quick to point out how very human that capability is. People have been spreading falsehoods for as long as information has been disseminated, after all.

Further, Google asserts that AI-generated content is given no search privilege over any other content. And yet, despite this assertion, we know that a well-designed prompt can ask an AI tool to write content that ticks every SEO box, potentially rocketing it up the search rankings. The human brain cannot log and file all of the best potential search terms; we just don’t have that sort of computing power. But AI does, and it has it in spades.
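To make that gap concrete, here is a toy sketch in Python of how trivially the keyword-coverage side of SEO can be automated. The keyword list and the counting rule are hypothetical stand-ins for illustration only; nothing here reflects Google’s actual ranking signals.

```python
import re

# Hypothetical target search terms. An AI tool could generate and track
# thousands of these, far beyond what a human writer could "log and file."
TARGET_KEYWORDS = ["generative ai", "appropriate use", "search ranking"]

def keyword_coverage(draft: str) -> dict:
    """Count how often each target keyword appears in a draft."""
    text = draft.lower()
    return {kw: len(re.findall(re.escape(kw), text)) for kw in TARGET_KEYWORDS}

draft = (
    "Generative AI raises questions of appropriate use, "
    "especially when content is written to game a search ranking."
)
print(keyword_coverage(draft))
# -> {'generative ai': 1, 'appropriate use': 1, 'search ranking': 1}
```

A few dozen lines like these, scaled up and pointed at an LLM, can optimize for discoverability relentlessly; the question the guidance leaves open is whether the result deserves to be found.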

Creating relevant, easily findable content is the whole job for tech writers. Our livelihoods depend on placing the content users need where they expect to find it. Many of us have trained for years just to scratch the surface of that skill, and most of us continue to refine it by monitoring our users’ journeys and mapping their pathways. We rely on analytics to tell us what to write and where, but now AI can do all of this at a speed we never could.

Moreover, as humans we rely on our own inherent creativity to design engaging and timely documentation. Every writer I have ever known, myself included, has experienced a degree of “writer’s block,” sometimes even when the prompt is clear and direct; it’s tough to just get started. A program with access to (more or less) every idea ever written can dismantle that block with ease. But when we rely on AI to generate the basis for our content, even if we intend to polish, edit, and curate it, where does the authorship belong? Is it ‘appropriate use’ to place an author byline as sole creator when a large language model is the genesis of the work? Google’s guidance is merely to “make clear to readers when AI is part of the content creation process.” Their clarification is…unclear.

Google does recommend, in its appropriate use guidance, remaining focused on the ‘why’ more than the ‘how.’ What is it that we, as content creators, are trying to achieve in our writing? If we return to audience and purpose more than mechanics, we’ll be fine. Staying in tune with the reasons for our writing will keep us in line with all appropriate use guidelines. For now. If we are writing merely for clicks or views, we’ve lost our way. If we continue to write for user ease and edification, well, okay then.

Even Google acknowledges that trust sits at the center of its E-E-A-T guidelines (experience, expertise, authoritativeness, and trustworthiness), which form the basis of its content relevance rankings. AI can certainly create content that displays expertise, noticeable experience, and authoritativeness, but we’ve found its trustworthiness to be suspect.

For now, ‘Appropriate Use’ likely remains the domain of those of us with a conscience, which AI notably lacks. Avoiding content created merely to chase top rankings still nets human writers, and human readers, the desired end result, even if it doesn’t make the top of the list.

Appropriate is not always Popular.