Of imposters and artifice

In his 17th-century masterpiece Tartuffe, the French playwright Molière tells the story of Orgon, an earnest but impressionable Parisian bourgeois who invites a certain Monsieur Tartuffe into his home to serve as a kind of consultant, charged with maintaining the family’s spiritual health.  Tartuffe talks a good game and plays expertly on Orgon’s insecurities.  Before long Orgon has the sinister Tartuffe betrothed to his daughter Mariane (already in love with the handsome young gallant Valère), and Tartuffe shamelessly attempts to bed Orgon’s wife.  By Act Five, Tartuffe appears to have taken possession of Orgon’s home and assets.

On many levels, this feels like a parable for (what’s popularly known as) AI and its role in the world of qualitative consumer insight.  On these issues, much earnest content has been written.

In the past, visionary replacementists were pitted against naysayers.  Replacementists argued that AI could do everything a human operative could, and often better.  Naysayers contended that the machines were flawed.

Now new contradictions and tensions have emerged. 

Some of the replacementists have established companies with brilliant engineers on their payrolls and developed their own machines, which can analyse textual and visual data in the service of tricky marketing tasks such as demand space mapping, segmentation and the identification of semiotic codes.  Of course they’ve not taken the human out of the mix, but they’ve built their offerings around their own tech.

Kudos to them. 

And now, as their success is plain to see, rather than naysayers we see the emergence of a new type: the embracer.  Typically the small qualitative operator who argues that the tech’s not going to go away, and that it must be embraced, so as to produce better outputs, in combination with, and as servant to, human skill and sensitivity.  These embracers tend to dismiss the naysayers as luddites, dinosaurs, naïve stick-in-the-muds, apt to bleat on about their special feel for things.  They hail the marvellous potential of AI to liberate the imaginative researcher from grunt work and free up even more mind space for her special feel.

But let’s not throw the naysayers’ babies out with the replacementists’ bathwater.  This requires us not simply to think about “AI” but about the various applications of these technologies.  In the qualitative world of consumer insight, there are quite a few: machine transcription, summarisation of transcriptions, machine analysis and clustering of responses, generation of concepts, creation of consumer avatars and visualisation of textual outputs (including consumer personae), among others.

Machine transcription is lauded as a massive time-saver, again freeing the consultant’s mind from grunt work.  And yet one might also argue that transcription is, in itself, an act of critical evaluation and engagement with the data.  Instead, machine transcription gives us a two-hour group across 72 pages.  Fine for picking out a verbatim or two, granted, but not apt to be read otherwise: that’s for the machine.  It might be argued that having a colleague in the room to transcribe live, as an input to discuss (with a small d), has more to offer.

Having freed us from the grunt work of transcribing, the applications will now analyse the transcript, in the sense of, for example, spotting patterns among similar typologies, identified based on voice recognition.  And, importantly, delivered pretty much instantly.  It’s easy to imagine a future scenario in which these outputs become the output in the eyes of research buyers, and not a nice-to-have extra.  In the same way that much of the developed world is now congenitally unable to navigate based on landmarks and spatial awareness because of the impact of smartphone maps, the qualitative specialist is in danger of breeding out analytical abilities.  And it ends in the sad spectacle of professional margins tapering away to nothing, handed over meekly to the lavishly funded AI corporations.

Similarly when it comes to generating concepts...  One highly respected practitioner I spoke to commented that having access to these tools for concept generation was like having a high-functioning junior on hand to help out.  But as we breed out the need for high-functioning juniors, there will in due course be no high-functioning seniors in the pipeline (though probably a few low-functioning ones).  Perhaps that’s fine, but we should be aware of what we’re doing, and of what we’re offering to client organisations longer term.

Consumer avatars: a marvellous way of getting ‘feedback’ on concepts, so much less messy than interaction with complicated, contradictory, inarticulate human beings with their crisps and their curly sandwiches.  Perhaps these techniques produce the right, commercially advantageous solutions, perhaps they don’t.  But they do encourage marketeers to engage not with the physically embodied beings that consume or could consume their products, but with composites of the utterances of people (one hopes) derived from whatever data are available.  Truer, more representative visions of important consumer commonalities, or reductive clichés with all quirk and inspiring unpredictability digitally excised?  The entertainment value of much contemporary brand communication suggests the latter.  (Older readers will remember a time when you kind of looked forward to the ad breaks.)

Qualitative consumer insight has never been in a strong position to claim that it offers unimpeachable truth, or empirically derived, reliable advice: the samples are statistically insignificant, and the analysis is often in some measure a product of subjective perspectives, of the life experiences and even the emotions of the analyst.  It’s always been at its strongest as a vehicle for the communication of inspiring and thought-provoking ideas and strategic suggestions, relying on those often subjective perspectives.  If organisations don’t want this any more, so be it.  But the qualitative consultant should be careful about presenting products designed, and liable, to replace her as tools of deeper, more inspiring insight.

And if she does so, it’s surely incumbent on her to have a firm view as to what’s going on under the bonnet and to be very clear as to what that means for her clients.  And here’s the rub...

Artificial Intelligence is not intelligence.  If we understand intelligence in terms of a sophisticated understanding of external phenomena and ideas, the ability to make reasoned judgements about these, and the ability to express those judgements, then AI doesn’t pass.  It can’t do this; it’s not sentient and has no physical agency.  (Yes, yes, yes: someone is reading this and archly saying “...yet!”, but let’s cross that bridge if we come to it.)  And it can only extrude outputs based on what it’s trained on.  These are limited, because not everything goes on online.  Real, physical, unmediated human interaction doesn’t happen online; OK, your air fryer might catch some of it, but...

So those avatars, for example, unavoidably present an incomplete picture.  It behoves the qualitative consultant to come clean on this kind of thing.  Perhaps she should refer to it as Large Language Model-derived output.  Or LLM output, if she wants less of a mouthful.  But not intelligence.  And much less AI (an acronym which allows us to avoid confronting these realities).

She should mention at some point in her sales pitches that what the LLM applications do is extrude text and/or images based on probabilistic algorithms applied to the vast corpora of data on which they’re trained.  And she ought to outline some of the inherent pitfalls, notably the human tendency to attribute authority unquestioningly to these outputs.

Our human proclivity, when presented with language, is not simply to receive its pure inherent meaning (because there is no such thing), but to derive meaning informed by instinctive inferences as to the speaker’s shared beliefs and context.  Meaning is created in the interaction, not conveyed by the words alone.  These, linguists argue, are inherent aspects of human communication, which happen whether or not the speaker is present with the receiver.  In other words, when we read text, inherent cognitive processes are triggered in which another human being is constructed.  (Just pick up the nearest bit of text to you now, anything, and read a little of it: you’ll see how you can’t avoid it.)  The implication, of course, is that when we read text extruded from an LLM, we ‘form’ a human creator of one sort or another.  Which in turn lends a human authority to any text, regardless of whether it’s extruded by machine or human.  The operator who refers to these outputs as intelligence merely nourishes such perceptions.

We should bear in mind also that the corporations who own these products, and the providers of the vast amounts of venture capital funding their development, have a vested interest in the ‘intelligence’ of AI remaining unchallenged.  They want us all to think of it as intelligence.  One might even say the high-profile doomsters who talk of existential risks to humanity are part of the conspiracy.  (This shit is soooo crazy cool we need to figure out ways of keeping it in check, guys! is the drift.  It’s the Don’t Hurt ’Em Hammer schtick.)  These technologies, we are assured, will advance the human race: cure cancer and reverse global warming.  In reality, the outsize quantities of energy and water needed to cool the ever-expanding data centres are well documented.

It may be that all this is so much piss in the wind, and that we live in an age where human understandings of human behaviours and sentiments are simply no longer considered useful by organisations looking to develop strategies for sustainable profit and growth for their brands.  And if so, so be it.  But let’s not meekly condemn the business of subjective human understanding and inspiration to death as we allow over-hyped LLMs to sneak, like Monsieur Tartuffe, into our professional homes, promising to better us as they quickly go about despoiling us.

Tartuffe, by the way, ends up scotched at the very last by a surprise revelation, and it’s happy ever after for Orgon and his family.  Ironically, this ending device is known in dramatic circles as a deus ex machina.  (Literally: god from the machine.)  The qualitative specialist certainly shouldn’t count on this happening.
