
I’m terrible at small talk. There, I said it. Put me in a random social situation and I feel like a slug surrounded by salt on all sides. I wither. Here’s the thing though: small talk is rather important in establishing a relationship between conversational participants. Turns out that discussing traffic, the weather, or the ‘match last night’ is the social equivalent of the ‘amuse-bouche’ – that strange complimentary appetiser before the real meal begins.
Why does this matter? Well, one of the metrics that interests me and my fellow canaries is ‘Focus’ (the F of our FiRE🔥 measurements). It might reasonably be hoped that there is a reason for any meeting, and that the participants will put their minds towards the topic(s) being discussed. Within Natural Language Processing (NLP) there exists an approach called topic analysis: a technique that allows us to automatically identify recurrent themes or topics. Thus, we can in an unsupervised manner (i.e. straight out of the AI box, with no additional training data) group related topics of conversation. Chances are (unless this is a meeting of city planners, meteorologists, or sports journalists) that small talk can be distinguished from ‘big talk’.
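For the curious, here is a minimal sketch of what unsupervised topic grouping can look like, using scikit-learn’s Latent Dirichlet Allocation. The utterances are invented for illustration, and this is just one common technique – not a description of our actual pipeline.

```python
# Minimal unsupervised topic-grouping sketch using scikit-learn's LDA.
# The utterances below are made up for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

utterances = [
    "Terrible weather this morning, rain all the way in",
    "Did anyone catch the match last night? What a goal",
    "Traffic on the ring road was awful again",
    "Let's review the quarterly revenue figures",
    "The deployment pipeline failed on the staging server",
    "We need to finalise the budget forecast by Friday",
]

# Bag-of-words counts are what the topic model consumes
vectoriser = CountVectorizer(stop_words="english")
counts = vectoriser.fit_transform(utterances)

# Ask for two latent topics: roughly 'small talk' vs 'big talk'
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # shape: (n_utterances, 2)

# Each row is a probability distribution over the two topics
for text, dist in zip(utterances, doc_topics):
    print(f"{dist.round(2)}  {text}")
```

With only six sentences the topic split is toy-sized, but the shape of the idea is there: no labels were supplied, yet each utterance ends up with a probability of belonging to each discovered topic.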
Here’s our challenge though: our metrics are in real-time. We start to score right from the get-go. Thing is, at the start of all meetings – online, in person, or hybrid – there is nearly always an outbreak of small talk. This can serve numerous purposes: welcoming new people, asking about life events, weekend summaries, waiting for the tardy, congratulating the prompt, the usual life banter. However, it is, almost by definition, unfocussed. But it seemed unfair to us to score a meeting poorly just because people are doing what people do. We are not robots. Yet.
This is why one of our smart AI Canaries took on the challenge of detecting small talk, seeing when it starts, when it slows down (the likely beginning of focus), and whether it re-emerges. We use these insights to make the Focus score as fair as possible.
The process involved was this slug’s worst nightmare. The poor AI canary put together hundreds of small talk examples – both from real life and generated using GenAI. We used this labelled data to train an algorithm, which we could then apply to meetings of our own. The results were fascinating.
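In spirit, training on labelled small-talk examples looks something like the sketch below – a tiny TF-IDF plus logistic regression pipeline. The examples and labels are invented, and the real model and training set are of course far larger; this is an assumption-laden toy, not our production classifier.

```python
# Toy supervised small-talk classifier: TF-IDF features + logistic regression.
# Texts and labels are invented for illustration (1 = small talk, 0 = 'big talk').
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "How was your weekend, do anything fun?",
    "Can you believe this rain we're having?",
    "Did you see the match last night?",
    "Traffic was a nightmare on the way in",
    "Let's walk through the sprint backlog",
    "The API latency regression needs a fix",
    "Budget sign-off is due on Friday",
    "Please review the deployment checklist",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# One pipeline object handles both vectorisation and classification
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Probability that a new sentence is small talk
p_small_talk = clf.predict_proba(["Lovely sunshine out there today"])[0][1]
print(round(p_small_talk, 2))
```

The useful output is not a hard yes/no but a probability per sentence – which is exactly what makes the per-sentence confidence graph further down possible.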
Small talk is represented by the orange line in Graph 1 below. The other lines are a small selection of the various other topics discussed by the participants. This meeting only included around half a dozen people, but the profile of discussion was insightful. As you might expect, the first five minutes were consumed with trivia (whoops – sorry, I meant ‘included lots of social reconnecting between various canaries’).
As the meeting ‘proper’ began, the frequency of small talk dropped significantly. It bubbled along, reappearing once or twice during the 40-minute call – but interestingly, and perhaps not surprisingly, towards the end of the meeting small talk broke out once again, as people said their goodbyes and presumably concerned themselves with weather and traffic once again!
This second graph displays the algorithm’s detection of small talk. It looks messy, since the graph shows the likelihood of each individual sentence being small talk. Still, you can see the preponderance of high confidence scores at the start and end of the meeting – very much like the orange line summarised above.
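One standard way to tame that sentence-level noise is a simple moving average over the per-sentence scores – an assumption on our part about how one might smooth such a graph, not necessarily what was used here.

```python
# Smooth noisy per-sentence small-talk probabilities with a trailing
# moving average, so the start/end peaks stand out from the jitter.
def moving_average(scores, window=5):
    smoothed = []
    for i in range(len(scores)):
        start = max(0, i - window + 1)
        chunk = scores[start:i + 1]  # trailing window, shorter at the start
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# Invented scores: small talk early, focus in the middle, small talk again
raw = [0.9, 0.8, 0.9, 0.2, 0.1, 0.1, 0.2, 0.1, 0.8, 0.9]
print([round(s, 2) for s in moving_average(raw, window=3)])
```

A trailing window keeps the smoothing causal, which matters if, like our metrics, the scoring has to run in real time: each smoothed value only depends on sentences already spoken.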
We have some way to go with this – and it will be interesting to see how small talk varies by age, by gender, and maybe even by nationality. However, the principle will stand – let’s focus when we are supposed to focus (we can help you measure that!), but allow people the human interaction that separates us from our silicon friends. I mean, when was the last time you heard a machine talk about the weather? Oh yes, the weather forecast on your phone or smartwatch… damn.