
How we label our AI content

We are an AI advisory company.

Three categories, three labels — and why “AI-assisted” alone covers too much ground to be useful.

We tell publishers to be transparent about their AI use. So here is ours.

StratechMedia uses AI agents to produce content. Not as an experiment. Not occasionally. Systematically — because we are building the same infrastructure we advise clients to build, and we think the most credible way to do that is to do it in public.

But “AI content” covers too much ground to be useful as a label. Here is how we actually think about it.

Three categories. Three labels.

AI-assisted

Most of our analysis and blog posts fall here. The expertise, the data, the conclusions — those come from 15 years in Danish digital media and a dataset of 5,125 publisher domains across 99 countries. AI helps with structuring and drafting. Susanne Sperling reads, edits, and approves everything before it goes out.

Label in the byline: AI-assisted.

AI-conducted

Firechat is our weekly interview series on Moltbook. An AI agent posts the questions on behalf of StratechMedia. The responses come from human guests. Susanne Sperling sets the editorial direction and is accountable for what gets published.

Label on the page: AI-conducted — editorial direction by Susanne Sperling.

AI-generated under editorial direction

Some of our Moltbook content is written and published by AI agents with no human in the loop at the time of posting. The direction — what to write, what to say, what not to say — is set by Susanne Sperling. The accountability is hers.

Label: AI-generated under editorial direction.

What the EU AI Act actually requires.

Article 50 of the EU AI Act requires disclosure when AI-generated text is published to inform the public on matters of public interest, unless the content has undergone human review or editorial control and a natural or legal person holds editorial responsibility for its publication.

That exemption covers most of what we do. Our AI-assisted blog posts are not legally required to carry a disclosure at all.

We label them anyway. Because we think transparency builds more trust than compliance does — and because we are an AI advisory company. It would be absurd to hold clients to a standard we do not hold ourselves to.

Why “AI-assisted” alone is not good enough.

The term has become a catch-all that covers everything from spellcheck to full ghostwriting. That ambiguity is not useful for readers, not useful for AI systems trying to assess credibility, and not useful for publishers trying to build an honest position.

The distinction that matters is not whether AI was involved. It is who is accountable.

If the expertise is human, the direction is human, and the accountability is human — the AI is a tool, like a word processor or a research database. Label it AI-assisted and explain what that means.

If the AI is acting as your agent — interviewing, publishing, deciding what to say — the accountability is still human, but the transparency obligation is higher. Label it clearly and link to where you explain the model.
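For publishers who want the label to be machine-readable as well as visible in the byline, one possible encoding is sketched below in TypeScript. The type and field names are our own illustration, not a published standard.

```typescript
// Illustrative sketch: one way to encode the three labels described above.
// The names here are hypothetical, not an established schema.
type AiContentLabel =
  | "AI-assisted"
  | "AI-conducted"
  | "AI-generated under editorial direction";

interface AiDisclosure {
  label: AiContentLabel;
  // The accountable human editor; present in every category.
  editorialDirection: string;
  // Where readers (and machines) can find the explanation of the label.
  policyUrl: string;
}

// Example: a Firechat interview page.
const firechatDisclosure: AiDisclosure = {
  label: "AI-conducted",
  editorialDirection: "Susanne Sperling",
  policyUrl: "https://stratechmedia.com/editorial-policy",
};

console.log(JSON.stringify(firechatDisclosure, null, 2));
```

Whatever shape you choose, the point is the same as in the visible byline: the label, the accountable human, and a link to the page that explains the model.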

Our standard, publicly.

We publish our own AI readiness score at stratechmedia.com/our-standard. Every signal we measure publishers on, applied to ourselves. Updated when our configuration changes. The global average across 5,125 publishers is 4.4 out of 100. We score 95.

The gap is not expensive to close. It just takes deliberate work.

If you are a publisher figuring out how to label your own AI content — our full editorial policy is at stratechmedia.com/editorial-policy. Take what is useful.
