It’s often difficult to recognise the dominant user need associated with any given article. So, we created an AI tool to help. The all-important question is: how good is it?

First, a little bit of background. In the User Needs Model 2.0, there are four axes: four different approaches that publishers can orient their content - or output - against. These are know (fact-driven), understand (context-driven), feel (emotion-driven) and do (action-driven).

The AI tool works like this: you paste in the text of an article, and it scores the text against these four drivers. Editors can see instantly where their content sits - and whether it’s sitting where it should.

What is prompt engineering?

To do this we use prompt engineering - which means explaining how to score an article on these four axes by sending carefully crafted instructions to a Large Language Model (LLM) such as OpenAI’s ChatGPT. This is different from training or fine-tuning a model on data.
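As a rough illustration, the scoring step might look something like this. The prompt wording, model name and JSON shape here are our own assumptions, not the production prompt:

```python
# Sketch of scoring via prompt engineering (illustrative, not the
# production prompt). score_article needs the third-party `openai`
# package and an OPENAI_API_KEY in the environment.
SCORING_PROMPT = (
    "Score the article below on four user-need axes - know (fact-driven), "
    "understand (context-driven), feel (emotion-driven) and do (action-driven) "
    "- each from 0 to 100. Reply as JSON with keys know, understand, feel, do.\n\n"
    "Article:\n"
)

def build_prompt(article_text: str) -> str:
    """Attach the article text to the scoring instructions."""
    return SCORING_PROMPT + article_text

def score_article(article_text: str) -> dict:
    """Send the prompt to an LLM and parse the four scores."""
    import json
    from openai import OpenAI  # imported lazily so the sketch loads without it
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whichever you have access to
        messages=[{"role": "user", "content": build_prompt(article_text)}],
    )
    return json.loads(reply.choices[0].message.content)
```

The instructions carry all the domain knowledge; the model itself is untouched.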

But we’re not done yet. Here’s Goran Milovanovic, our lead data scientist on what happens now:

“We are now developing a system that will rely not only on (a) prompt engineering, but on (b) expert knowledge of the User Needs Model 2.0, expressed in small chunks (sentences, short paragraphs) and encoded in something called a vector database. That expert knowledge is used to enhance prompt engineering in a very well known, standardised framework for the development of AI-powered apps and services known as Retrieval Augmented Generation (RAG). In our first project to use this framework, we built a recommender to automatically generate alternative headlines, and our products and services also rely on this framework.”

Two very important takeaways from this explanation:

  • We don’t ‘train’ LLMs - or even fine-tune them
  • RAGs are used to enhance our prompts
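To make the RAG idea concrete, here is a toy sketch: retrieve the expert-knowledge chunks most relevant to an article and prepend them to the scoring prompt. Real systems use learned embeddings and a vector database; the word-overlap retrieval and the knowledge chunks below are stand-ins, purely for illustration:

```python
# Toy RAG sketch: word overlap stands in for vector similarity, and the
# chunks below are invented examples of User Needs 2.0 expert knowledge.
KNOWLEDGE_CHUNKS = [
    "Know articles deliver verified facts with minimal interpretation.",
    "Understand articles add background, causes and consequences.",
    "Feel articles evoke emotion through personal stories and quotes.",
    "Do articles give concrete steps the reader can act on.",
]

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks sharing the most words with the query."""
    q = set(query.lower().split())
    ranked = sorted(chunks, key=lambda c: -len(q & set(c.lower().split())))
    return ranked[:k]

def augment_prompt(base_prompt: str, article: str) -> str:
    """Prepend retrieved expert knowledge to the scoring prompt."""
    context = "\n".join(retrieve(article, KNOWLEDGE_CHUNKS))
    return f"Expert notes:\n{context}\n\n{base_prompt}{article}"
```

The point is simply that retrieval picks the most relevant expert notes per article, so the prompt stays short while still carrying model-specific knowledge.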

How the AI tool works

Users simply copy the text of any article into the tool.

The model first scores the submitted article on a scale of 0 - 100 on each User Needs 2.0 axis; these scores are then normalised to add up to 100.
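The normalisation step itself is simple. A minimal sketch (the axis names follow this blog; the rounding is our own choice, and the real tool may differ):

```python
# Scale raw 0-100 per-axis scores so the four axes sum to 100.
def normalise(scores: dict[str, float]) -> dict[str, float]:
    total = sum(scores.values()) or 1  # guard against all-zero input
    return {axis: round(100 * s / total, 1) for axis, s in scores.items()}

print(normalise({"know": 80, "understand": 40, "feel": 20, "do": 60}))
# {'know': 40.0, 'understand': 20.0, 'feel': 10.0, 'do': 30.0}
```

After this step the four numbers read as percentages of the article’s overall orientation.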


The axes:

  • KNOW - the factual news
  • UNDERSTAND (context) - articles with explanation and analysis
  • FEEL (emotion) - articles that evoke an emotion and strengthen feelings
  • DO (action) - articles that help and encourage people to act themselves and connect them to a larger movement. Service journalism (‘Help me’ articles) will also be recognised by the system.



Next, the system creates a summary of the essence of the story.

From here, depending on the results, the system may provide additional tips. This happens:

  1. When a story sits firmly in one axis (scoring more than 75 percent on one of the four)
  2. When all four axes score more than 10. In this case, the tip will address the fact that the article appears to lack a proper focus on one of the four needs. Audiences tend to find these articles more difficult to consume because there’s a lack of clarity in the promise of the article: they’re not entirely sure what they’re getting.
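The two triggers can be sketched as a couple of threshold checks. The thresholds come from the text above; the tip wording is ours:

```python
# Sketch of the two tip triggers: a dominant axis (> 75 of the
# normalised 100) or every axis scoring above 10.
def tips(scores: dict[str, float]) -> list[str]:
    out = []
    if max(scores.values()) > 75:
        out.append("Story sits firmly in one axis: lean into that focus.")
    if all(v > 10 for v in scores.values()):
        out.append("All axes score above 10: the article may lack a clear "
                   "focus on one need, which makes its promise unclear.")
    return out
```

For example, a normalised score of `{"know": 80, "understand": 5, "feel": 5, "do": 10}` would trigger only the first tip.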

We share 3 use cases later in this blog.

These ‘mudblood’ or ‘Frankenstein’ articles (pick your literary reference of choice) are problematic. Here’s Dmitry Shishkin, the independent digital media consultant with whom we developed the User Needs 2.0 approach, on why that’s the case:

When you decide to create a story through the lens of a very specific user need, you have already achieved, perhaps, one of the most important parts of digital journalism - focus.


Dmitry Shishkin Independent user needs expert

Dmitry emphasises that a focused article means that everything in it - structure, framing, headline, summary etc - serves the purpose of satisfying a specific need. By signalling that need to your audiences, you are putting yourself in a favourable position.

If there are multiple variants and outcomes, the system will provide tips that can both help improve the original story and determine the perspective from which a possible follow-up story could be written.

The power of this first version lies mainly in the labelling of a journalistic story on the 4 main axes of the User Needs 2.0 model. It provides direct insight into the way in which the different needs of the audience are addressed and whether this also corresponds to the intention of the author when creating the story.

The AI tool is not always right (or is it?)

There will always be interesting discussions arising from different analyses - especially in these early stages of the tool’s usage and development.



Repeated analyses of the same article may yield different scores. This isn’t a mistake: the variation is due to the non-deterministic nature of Generative AI systems like Large Language Models. Bear this in mind when reading results - they will be illustrative, not unequivocal.
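If you need more stable numbers, one common workaround is to score the same article several times and average the results. A sketch, with a randomised stand-in for the real model call:

```python
import random

def llm_score(article: str) -> dict:
    """Stand-in for the real LLM call: a fixed profile plus random noise."""
    base = {"know": 50, "understand": 25, "feel": 10, "do": 15}
    return {axis: v + random.uniform(-5, 5) for axis, v in base.items()}

def averaged_score(article: str, runs: int = 5) -> dict:
    """Average several runs to damp run-to-run variation."""
    samples = [llm_score(article) for _ in range(runs)]
    return {axis: sum(s[axis] for s in samples) / runs for axis in samples[0]}
```

Averaging trades extra API calls for steadier scores; the variation never disappears entirely, which is exactly why the results should be read as illustrative.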


Three use cases

1. Content trumps format

The AI has not been instructed to recognise formats. Accordingly, it looks purely at the content of the text to draw its conclusions.

Example: 42 Trio Halloween Costume Ideas for You and Your Boos to Slay Halloweekend (Cosmopolitan)

This is a good example of a listicle. In our research and courses, listicles are often quickly associated with a ‘Divert me’ need, as they address a current topic in a light-hearted manner. However, to evoke purely pleasant emotions with an article, more is needed than a suitable format. The AI tool recognises that this article satisfies other user needs more emphatically:

[Screenshot: the recogniser’s analysis of the article]

The emotion-driven axis therefore scores lower than the action-driven one. You could easily argue that this is an error, but if you read the analysis in more detail, it’s quite right to say that the article provides information that helps you decide what to wear during Halloween.

  • Therefore, ask yourself: are you creating this article because you want to help the audience find an original costume, as the intro suggests? Or do you want to make people laugh with original outfits?
  • If the latter is the case… maybe the journalist could have added ‘emotional components’ to the article - quotes, for example. Or begin differently: ‘It’s time to laugh about the most insane costumes out there…’
  • If you would like to keep the article action-driven and even more dominant on that axis, try to really help people out. Where do they find the outfits you share? What kind of materials should they use to compose their costumes?

This is where AI can help fine-tune output: the tool’s dispassionate response reveals insights that humans intuit and sometimes overlook.

2. Large Language Models can miss the point

The system does not have human emotions. When facts are meant to provoke emotions, the tool may miss the point.

Example: The day my supervisor won the Nobel prize in chemistry (Chemistry World)

The editors believe this is an ‘Inspire me’ piece. The headline indicates that it’s an emotion-centric piece, and by referring to ‘my supervisor’ it makes the story very personal - all indications of ‘Inspire me’ content.

However, the outcome of the article is:

[Screenshot: the recogniser’s analysis of the article]

The system categorises it much more as a contextual piece, maybe skewing towards the ‘Educate me’ axis since it provides lots of background information about Carolyn Bertozzi (the supervisor). What it lacks are specific emotional components.

It gives the emotional axis only 10 points out of 100.

So again the question arises: is the system wrong, or is something else going on? A closer, more objective reading reveals the following:

  1. The article explains what click chemistry is. So it’s fair enough the system recognises background information in the article.
  2. The article also contains a lot of factual information about what the writer did after the announcement of his supervisor winning the Nobel prize. So again, the system is correct when it states that the article provides background and context.
  3. Still, the headline, teaser and intro are clear about the angle of the story: ‘There are few moments in science that usher in unbridled joy.’ Contextual information? Not so much. This is what our Lead Data Scientist says about the outcome:

This is a clear case where Generative AI misses the point. Let's not be misled into thinking that Large Language Models will always be able to analyse things with the insight that people and true experts can provide. What AI misses here is to understand that contextual and factual elements of this piece - while present - are not essential with respect to the author’s main intention to inspire the audience.


Goran S. Milovanovic Lead Data Scientist @ smartocto

3. Mixing user needs results in a murky outcome

If you do not make a clear choice about which user need you want to satisfy, the system may sometimes produce odd results. That isn’t necessarily wrong, but it’s worth pointing out.

Example: 3 winners and 1 loser from Election Day 2023 (Vox)

The article presents the results of the general elections in the United States. Because there were many different elections at the same time, there are many different things to say about them. As a result, it reads like a long enumeration of facts, but it is framed from the perspective of three winners and one loser. That’s where the interpretation lies.

The main focus of the article is contextual, indicating it’s an Educate me or Give me perspective piece. However, the system also scores the factual axis quite high, with a further 30 points spread across the remaining axes.

Writing a 100% pure user needs article is extremely hard (if not impossible) but we always recommend creating articles that are anchored more firmly around one specific angle - it pulls focus for both author and reader.

The above ‘explanation’ is still quite cautious, but because the tool works with hard numbers that must always add up to 100, a capricious author could skew the outcome of this AI tool. Unintentionally, of course - and be careful about drawing sweeping conclusions - but consider this:

  1. AI looks at the content prima facie, as it is given. Write about pain and misfortune directly, using such language for 100% of your content, and you will hit 100% on the emotional axis. A text consisting only of statements like “Two plus two equals four” would elicit a 100% fact-driven score.
  2. If you think it’s important to inform the audience with factual updates, but in the same piece you also explain (with a lot of context) why it matters, stir up emotion and urge people to take action, then realise that there are many moments when the reader might disengage - our data proves this unequivocally. You can do it once if you have thought it through well, but generally we advise cutting the article into pieces and serving them one by one.

With that, we have come to the end of this article. We have tried to explain how the tool is structured. Did we succeed? We have run the piece through our own tool and this is the outcome:
Based on the analysis of the article, it appears that the author's intention was to provide context and educate the audience. The article discusses the User Needs Model 2.0 and how an AI tool can help determine the dominant user need of an article. While the article mentions emotions and action, it primarily focuses on explaining the model and the AI tool. Therefore, it receives a higher score for the Contextual user need. It receives a lower score for the Functional user need as it does not primarily provide factual information. It receives a low score for the Emotional user need as it does not aim to elicit strong emotions. It receives a score of 0 for the Actionable user need as it does not provide guidance or facilitate action.


Flawless? Not entirely, in our view: there are definitely some tips in there that could ‘help’ you, and therefore facilitate action. But with that, we may not have adhered to our own rule of predominantly catering to one need…

In the future, with the development of more advanced systems, we might get a chance to improve the analyses by adding more background expert knowledge and examples to our prompts. Feel free to give feedback. But maybe the perfect score doesn’t exist.