Where next with assurance: a review of ICAEW’s consultation paper

Your reading for the weekend is Where next with assurance?, a consultation paper from ICAEW which aims to take its series on assurance, “The Journey”, to the next stage. It draws on feedback from ICAEW members who are developing assurance engagements in practice, and brings together strands from the corporate reporting and assurance debate around the world.

I have mixed feelings about this paper, so I hope that many of you will read it and respond to the consultation questions it asks.  It is a valuable contribution to the assurance debate, but I can’t give my unqualified assent to the five key views expressed.

Addressing each of those “We think that…” statements in turn:

“Rather than focusing on the annual report – or any other single report of an organisation – we should think about the right way to use assurance to meet the needs of the organisation itself”

I don’t believe that the needs of the organisation which is the subject of the assurance are the only, or even the principal, driver of the need for assurance.  In my view, assurance – and the role of the chartered accountant – are relevant to the public interest, to better business and to a better society; the first question we should ask about assurance is whether it is meeting those needs.

Moreover, if we were trying only, or mainly, to address the needs of organisations themselves, many of the issues assurance providers face – determining whether engagements are appropriate for users’ needs, who should be permitted to use and rely on reports, and what consequences this has for the assurance provider’s liability – would disappear.  So to position assurance as focussed primarily on addressing the needs of an organisation, not its stakeholders, skirts around some of the biggest practical obstacles to the development of an assurance market in the UK.

“The role of the board in determining the need for assurance, internally and externally, is vital to understanding the future of assurance”

No-one could disagree with this statement.  But I’m concerned that this view risks polarising the relationship between executive directors and the other parties who might be interested in assurance.  ICAEW skirts close to painting a picture of a world where directors know best and the very valid concerns of other stakeholders, including the public and those who feel excluded from the debate on trust in business, are automatically accorded lower value.

I believe that one of the most valuable roles of a chartered accountant is to facilitate engagement, and therefore relationship, between executive directors and other stakeholders, so that we can achieve a consensus on where better, more useable, information – and more assurance – is needed.

The paper correctly observes that the modern assurance market is undeveloped “…apart from in a few specific and regulated areas.”  It fails, though, to explore the reasons why a thriving assurance market has developed only where assurance is required by regulation.  This question is of fundamental, structural importance to the market and we will not get very far if we ignore it.

The challenge facing assurance providers is determining whether there is a market, and for what, in the absence of regulation.

“Getting the right assurance in the right place is essential. This means asking the right questions about risks and information flows, and in a complex organisation it means keeping track of the situation with an assurance map”

Again, I’m sure no-one could argue with this.  However, I don’t agree that “…the first step is working out where there are risks associated with information flows.”   Recent corporate scandals have demonstrated that where boards do not start from the perspective of strategy (and the risks that threaten achievement of strategy), they do not accord an appropriate level of importance to strategic non-financial information flows, and therefore these don’t make it onto the assurance map to start with.

I’d say that VW would be one example of this – their business model had an inherent tension between the need to comply with environmental regulation and the preference of their customers for the high performance that can be achieved unconstrained by regulation.  Unless that strategic tension is acknowledged, the related information flows are unlikely to be regarded as high risk.  Tesco is another such example where the challenge for the Board was recognising the extent to which the company’s profitability depended on pushing compliance with regulation to the absolute limits.

Essentially, once a company has fallen into the trap of not making meaningful disclosure of the most strategic information, then any assurance map that starts by asking what risks relate to the information which is gathered and disclosed will already have some significant omissions.

“Assurance can be provided over risk disclosures or forward-looking information, even if the question asked is different from ‘is this true and fair?’”

The paper sets out the four characteristics of “useful” forward-looking information – it is: understandable, relevant, reliable, and comparable.   The paper then proposes that “An assurance provider can carry out an engagement to provide an opinion on whether information that cannot be assessed yet for accuracy has the four characteristics for usefulness.”  I would agree that those are characteristics of a good basis of preparation of forecast information, but in my view, any assurance opinion on that forecast information would be expressed as “properly prepared” in accordance with the basis of preparation, rather than “very useful”.

Of course a good basis of preparation does result in useful information, but not necessarily for every user – indeed existing assurance practice, including case law relevant to the financial statement audit, recognises that the needs of a homogeneous population of users may differ even from those of any individual user who is a member of that population.  If we imply that it is simple to determine what is “useful” to a potentially very large range of users, we again skirt round one of the most difficult issues in the development of an assurance market.

Needless to say, the answer to the question “what would be useful here?” may well be one of the unknown unknowns, as I think we could argue it was in the financial crisis. Is it “useful” for any set of forecasts to anticipate the zombie apocalypse? I suspect all will argue that it is not, until afterwards, when they’ll ask where the auditors were.

“Assurance can add value to narrative information using current principles and techniques, and the skilled judgement of preparers and assurance providers”

The previous section of the paper concludes with the words: “An assurance engagement on these subjects might consider whether the information is useful, or whether the process has been implemented as described, rather than asking ‘is this true and fair?’”  This is, I think, a misleading question, since it suggests that the financial statement auditor is consciously considering whether the financial statements are “true” and “fair”, as if those were separable, testable characteristics.  In reality, in my view, the phrase “true and fair” has passed into regulatory rhetoric, as having an understood meaning as a whole (compliance with GAAP) that cannot be analysed down to its constituent parts.

The relevant question when exploring the assurance market is how long it might take for non-financial assurance phraseology to pass into the equivalent assurance rhetoric.  I would argue that, in the context of ISAE 3402 “fairly presents”, “suitably designed” and “operating effectively” have crossed the rhetorical Rubicon.  But I think it may be a long time before “fair, balanced and understandable” has acquired a similar standing as a phrase, the meaning of which is understood and shared, without reference to its constituent parts.

In conclusion

I think it needs to be clearer up-front that this paper is intended to be a provocation, a thought piece, rather than a technical analysis.  I find the idea of a continuum of assurance that embraces internal and external assurance helpful, but I think the paper could be clearer that its references to assurance address the role of the chartered accountant not only in providing independent external assurance, but also in developing innovative interfaces which allow the value of internal assurance to be unlocked for stakeholders.


Assurance: the completeness conundrum

In December 2015, ICAEW’s Audit and Assurance Faculty published the third “milestone” of its journey towards assuring the whole of the annual report, entitled The journey milestone 3: Assuring the appropriateness of business information. The paper sets out to answer the question: how can an assurance provider be confident that disclosures made by a business give a fair picture of what is going on in that business?  The greater part of the paper is given over to the question of how the assurance provider should address completeness.

In practice, assurance providers don’t get asked for opinions over completeness alone.  Preparers and users of information are most often looking for an opinion over the general “rightness” of some disclosed information, and that “rightness” is frequently expressed in terms of “proper preparation” or “fair presentation” in accordance with a disclosed basis.

The first challenge for the assurance provider, then, is to decide to what extent an assertion from management, and an opinion from an assurance provider, couched in the terms “is properly prepared” or “fairly presents” inherently includes an assertion, and an opinion, that the disclosed information is complete.

This is further complicated when the information disclosed is not intended to be 100% of the set of data, but a selection of data above or below a particular threshold, or with particular characteristics, for example, “the most significant items”. This would be the case for key performance indicators (KPIs).

In my view, for any data set which management present as a sub-set of a finite universe of data – e.g. top 5 contracts by revenue in the period, broadcasts with more than 1 million viewers – the assertion that it is properly prepared inherently implies that it is complete.  Certainly, in the case of finite populations, management can demonstrably begin with the complete universe of, say, contracts or broadcasts, and then select the complete set of the five with highest revenue or the ones that had more than a threshold number of viewers, respectively.
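As a purely illustrative aside – my own sketch, not an example from the ICAEW paper – the point about finite populations can be shown in a few lines of code: the assurance provider can independently re-perform management’s selection from the complete universe and compare it with the disclosure.  The contract data and names below are invented.

```python
# Hypothetical complete universe of contracts: identifier -> revenue (£m).
contracts = {
    "A": 5.2, "B": 3.1, "C": 9.8, "D": 1.4, "E": 7.7, "F": 6.0, "G": 2.2,
}

# Hypothetical disclosure: management's "top 5 contracts by revenue".
disclosed_top_5 = {"C", "E", "F", "A", "B"}

# Re-perform the selection: the five contracts with the highest revenue.
expected_top_5 = set(sorted(contracts, key=contracts.get, reverse=True)[:5])

# Any difference is either an omission (incompleteness) or an improper inclusion.
omitted = expected_top_5 - disclosed_top_5
improper = disclosed_top_5 - expected_top_5
print(f"Omitted: {omitted or 'none'}; improperly included: {improper or 'none'}")
```

The test only works because the universe is finite and available: proper preparation and completeness are verified in a single re-performance.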

For such a dataset to be properly prepared must, surely, mean that it is the complete sub-set.  To provide assurance over its proper preparation without testing completeness – even if it is disclosed, as a limitation in the scope of work, that no testing was carried out on completeness – doesn’t seem to be an option.

In the case of infinite populations, such as KPIs, it’s impossible for management to demonstrate that they selected from the full set.  Instead, they are demonstrating that they applied a fair process to the setting of boundaries to create a finite subset.

For some assurance engagements over KPIs, completeness is intentionally taken out of the question.  An example would be assurance over NHS quality accounts, where the indicators within the scope of the assurance are mandated by an external body.  Management are not asserting that those indicators are a complete sub-set meeting a definition (e.g. “key”); the assurance provider’s opinion on proper preparation cannot be interpreted as an opinion on the completeness of the indicators selected by the external body.  Indeed, the user of the assurance is left to form their own view as to whether the right indicators have been brought into scope.

It would not, however, be possible for management themselves to select a sub-set of indicators to disclose and ask the assurance provider for an opinion on proper preparation, excluding from scope the question of completeness.  Without any explicit or implied assertion from management that boundaries have been set so as to mark out a complete sub-set of some kind, the data is being presented as, apparently, selected at random.  In that scenario, what could be the rational purpose for its disclosure or for assurance?

The paper doesn’t address these initial acceptance questions, though in practice, in my view, the question “is it possible to form an objective opinion as to whether this set of information is, in a meaningful sense, complete?” is critical to the decision to accept an assurance engagement over non-financial information.

In the guidance that is provided, the paper rightly goes beyond the question of completeness into relevance. It notes that, in the case of KPIs, first management and subsequently the assurance provider have to consider whether the set chosen is sufficient (i.e. complete) and necessary (i.e. all the selections meet the definition of “key” performance indicators).

I’m reminded of the FRC’s Guidance on the Strategic Report, which states “The number of items disclosed as a result of the requirements to disclose principal risks or KPIs will generally be relatively small; they should not, for example, result in a comprehensive list of all performance measures used within the business or of all risks and uncertainties that may affect the entity.”  Clearly the FRC thinks that, in the context of the strategic report, over-disclosure is as much of an issue as incompleteness.

The greater part of the paper focusses on how the assurance provider gathers evidence over completeness.  For readers like me, with a background in financial statement audit, it might have been helpful to break this section down more clearly into two parts corresponding to controls testing and substantive testing.

The paper recommends mapping what management are trying to achieve by way of completeness (referred to in the paper as “control objectives”) against their process for achieving it.  The assurance provider considers whether each stage of the process will achieve what is intended (design), and whether there is evidence that the process was adhered to (operation).
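In data terms, such a mapping might look something like the sketch below – my own illustration, with invented control objectives and process stages, not a format taken from the paper.

```python
# Hypothetical assurance map entry: a completeness "control objective" linked
# to the process stages meant to achieve it, assessed for design and operation.
assurance_map = [
    {
        "control_objective": "All contracts above the revenue threshold are captured",
        "process_stages": [
            # design_effective: would this stage achieve the objective if followed?
            # operating_evidence: is there evidence it was actually followed?
            {"stage": "Contract register reconciled to billing system",
             "design_effective": True, "operating_evidence": "monthly reconciliations"},
            {"stage": "Divisional sign-off that no contracts are omitted",
             "design_effective": True, "operating_evidence": None},  # a gap to follow up
        ],
    },
]

# Flag any stage lacking evidence of operation, however sound its design.
for entry in assurance_map:
    for stage in entry["process_stages"]:
        if stage["design_effective"] and not stage["operating_evidence"]:
            print(f'Operation gap: {stage["stage"]}')
```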

Finally, you look at the actual output – the sub-set selected.  Does it make sense?  In the case of KPIs for example, we could create an expectation for the extent to which we’d expect this set of data to be comparable with that of the organisation’s peers, or other datasets prepared by the same organisation for different users, and then test that by actual comparison.  We could also test the extent to which the data is, in practice, used in decision-making by management and Those Charged With Governance.

The paper doesn’t address how the assurance provider will deal with problematic findings.  What if management’s process for achieving completeness – boundary-setting – is not adequate to start with?  Or hasn’t been adhered to in full in the period?  Should the assurance provider decline the engagement, or is it possible to do sufficient testing of the actual output to compensate for the weakness in design and/or operation of the process?

On the other hand, if the process is strong, is testing the output of any additional value? In practice, if management have evidently engaged appropriately with internal and external stakeholders, is the assurance provider ever in a position to argue that the resulting data selected is not complete and/or relevant?

I think a good next step would be to collect examples of how these issues are resolved in practice.  In the meantime, this paper will be useful for financial statement auditors who are grappling with the challenges of non-financial assurance, and finding the issue of completeness, outwith a system of double-entry book-keeping, thinly addressed in professional standards and guidance.

It will also help preparers to assess how comfortable they are with completeness in their data collection and reporting processes for non-financial data and their assertions over it.  And finally, it might lead everyone interested in company reporting to reflect on the extent to which “fair, balanced and understandable” inherently implies “complete”.

Audit quality: asking the right questions

I am, of course, delighted to read the headline Audit Committee Chairs believe audit is improving.  The Financial Reporting Council’s press notice about this year’s FRC Audit Committee Chairs (“ACC”) Survey says “ACCs scored their auditors highly across all questions. There was also evidence of improvement in all categories with the highest being that of independence and objectivity. The lowest overall scores, for a second year, were for questions on professional scepticism and the auditor’s response to regulatory oversight suggesting there is still some work for firms to do in this area.”

But how much does the survey really tell us about audit quality?  Not enough, in my view.

By asking about focus, approach, risk assessment, materiality, resources, scepticism, independence, communication and interaction,  the questionnaire effectively imposes its own answer to the question “What is a good audit?”  Each of those factors may be an important dimension of quality, but it seems to me that we’re limiting the survey to confirming what we think we already know, rather than risking uncovering a more challenging definition of audit quality.

The survey did go further this year in asking ACCs what other criteria they use to assess audit quality outside of the headings in the questionnaire.  But I’d also like to know which components of audit quality they regard as the most important.  And, given the inherent tension between some of the factors, which way would ACCs lean if they had to choose between them?

I’d like to try to structure the questionnaire so that ACCs rate the components of quality in order of importance or are required to prioritise apparently competing factors.  As an example, they could be asked to choose between pairs of factors such as independence versus continuity; sector specialism versus breadth of experience; high level of coverage versus timeliness of identification of issues – both in terms of how important those factors are to quality and then in terms of which they felt their auditor exhibited more strongly.

Of course, given many ACCs will have completed this questionnaire before, they will be aware of what aspects of quality the previous surveys have enquired about.  So the questionnaire needs to actively probe to identify components of audit  quality other than the old comfortable favourites.  These other components might include firm sector specialism and individual (partner) sector specialism but ought also to include some of the aspects of quality that we all find it hard to acknowledge and talk about, for instance empathy, emotional resilience, a sense of proportion.

It would be particularly interesting to know to what extent “perceived calibre of individual” – one of the alternative components of audit quality volunteered by the ACCs – is made up of observable characteristics and behaviours as opposed to subjective impressions.  This could be illuminated by asking the ACCs to score individuals from the audit team across a range of observables and then to score individuals overall for “perceived calibre”.

Finally, while I don’t want to be cynical, I note that more than 10% of the respondents had changed auditor as a result of a competitive tender within the previous 12 months.  It’s reasonable to expect that ACCs who have recently selected a new auditor and, of course, done so on the basis of quality, will be satisfied with the quality of their auditor.

The debate around indicators of audit quality is largely being led by regulators, who have a vested interest in keeping the emphasis on indicators that can be quantitatively measured – even where a measure may have a non-linear relationship to audit quality.  Indeed, the US regulator, the Public Company Accounting Oversight Board (PCAOB), says much of the information it measures might contain flaws, but it is still worth producing it in order to promote debate about the quality of the audit.  And I agree – it’s no bad thing for ACCs to have market-level, firm-level and engagement-level data that they can use to frame a discussion with an audit engagement partner as to how their firm will deliver a quality audit.

It would be a mistake, however, to give too much weight to the things we can measure at the expense of broader and deeper insight into the nature of and drivers of audit quality.  And it could harm quality if the drive to achieve attractive audit quality metrics were to become a disincentive to audit firms to innovate.

Perhaps most important of all, though, the ACCs are just one of many stakeholder groups.  The ACCs represent those with direct equity investments in the company, but there is a much wider group of “investors” who are exposed to the risk of loss of capital in public companies – society as a whole.  The FRC recognises this – at least, that’s what I infer from their introduction of the concept of the “objective, reasonable and informed third party” in the consultation document Enhancing Confidence in Audit.  How does that objective, reasonable and informed third party judge the quality of an audit or the calibre of an auditor?  We need to ask a lot more questions – a lot more of the right questions – if we’re going to find out.


False Assurance: how ICAEW’s new film invites us to think about human factors in auditing

I had the privilege of being invited to speak at ICAEW’s premiere of False Assurance, “an exciting film drama created to provoke discussions on how accountants, auditors and company directors should act when faced with difficult situations.”  Here is a slightly extended version of the speech I gave.


It is a privilege and a pleasure to be given the opportunity to share some personal reflections on False Assurance. It is really excellent – I love it – and I think that Duncan and the Professional Standards team should feel really proud not only of the content but also of the production values.

I first watched the film four weeks ago and I have to say I had an almost visceral reaction to it.  It was very uncomfortable to watch.  The scenarios were so plausible.  Also, I have to confess to being a bit of an aficionado of those classic public information films from the 1970s – you know, the ones that dole out disfigurement and death to drink-drivers, children trespassing on railway lines and women running in the street.  This film is like one of those: it builds up a sensation of mounting dread.  You know something bad is going to happen to these nice people, but what? And to whom?  Here, the answer might as well be: everything that possibly could, and to everyone.

That’s the beauty of it.   The scenario that is developed is one in which there are a number of factors that all contribute to corruption and fraud going undetected for some time.  None of the characters are unbelievably good, and none unbelievably bad – all of them succumb to pressures that we see in real life in one form or another.

I’ve worked in both Professional Practice and Audit Quality for a number of years now, so I’m particularly interested in how the auditors in the film behave and why – and how we should respond to that. In my experience, audit firms tend to take what we might call a “person approach” to dealing with quality issues. Poor decisions are seen as arising primarily from flaws in an individual person’s mental processes such as forgetfulness, inattention, poor motivation, carelessness, negligence, and recklessness.

When we try to eliminate individual weaknesses, the sort of measures we put in place are directed mainly at reducing unwanted variability in human behaviour.  It’s a regulatory compliance approach.  So we ask for more procedures and more checklists; we design disciplinary measures that appeal to fear – if not fear of litigation, fear of sanctions – naming, blaming, shaming and, these days, fining.  There’s an uncomfortable implied moral subtext to this approach in that it seems to inherently assume that bad things happen to bad people.

The film instead highlights the value of what we might call the “system model”.  In this model for understanding failure, human errors are seen as inevitable products of systemic weakness. We can’t change the human condition, so we have to change the conditions in which humans operate.

An audit team is a system of defensive layers – like the “Swiss Cheese” model proposed by James Reason, Professor of Psychology at Manchester University[1]. There are holes continually opening, shutting, and shifting in each slice of cheese. The presence of holes in any one “slice” does not normally cause a bad outcome. Usually, this can happen only when the holes in many layers momentarily line up, as in the film, where there are multiple opportunities for the fraud to be identified, and multiple failures – some individually minor – are required for it to go undetected.
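As an illustrative aside – my own sketch, not Reason’s – a tiny simulation shows why aligned holes are rare on any single engagement yet inevitable across a portfolio.  The number of layers and the failure probability below are assumptions chosen purely for illustration.

```python
import random

LAYERS = 4         # hypothetical defensive layers: preparer, reviewer, manager, partner
HOLE_PROB = 0.10   # assumed chance that any one layer fails on a given engagement
TRIALS = 100_000   # simulated engagements

# A bad outcome requires the holes in every layer to line up at once.
undetected = sum(
    all(random.random() < HOLE_PROB for _ in range(LAYERS))
    for _ in range(TRIALS)
)

# With independent layers the expected rate is HOLE_PROB ** LAYERS (0.01%):
# vanishingly small per engagement, but non-zero across thousands of audits.
print(f"Failures passing every layer: {undetected / TRIALS:.4%}")
```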

In the film, you see the individual active failures – poor decisions made by each character – but you also observe the latent conditions that increase the possibility of poor decision-making.   Professor Reason uses the analogy of mosquitos for active failures, versus mosquito breeding grounds for latent conditions.  You can swat all the mosquitos you want, but if you don’t drain the swamp, they’ll keep coming – and you’ll have to keep swatting.  In the film, these swampy conditions include overwork, time pressure, a culture of rewarding strong relationships with client executives and the sort of hierarchy where none of the senior people seem to seriously entertain the possibility that the concerns of more junior members of the team might ever amount to much.

I want to make particular mention too of the way the CFO in the film plays on institutional sexism by criticising the female audit partner for “interrogating” him.  Research at Stanford University is ongoing but shows that women receive 2.5 times the amount of feedback that men do about aggressive communication styles. Another study found that negative personality criticism showed up thirty times as frequently in appraisals of women as in appraisals of men, though the population selected for that review had all been considered to be equally strong performers. The women were much more likely to be described as “abrasive”, “coming on strong”, “strident” or “aggressive”.

So one of the latent conditions in our profession is a particular disadvantage to women.  It seems women are much more likely to be criticised for the robust challenge, persistence and scepticism that would be praised in a male colleague.

So why does the idea of personal responsibility for failure persist? Well for one, we tend to prefer it – it resonates with our ideas of responsibility and accountability.  It’s much easier to sanction a person than to change the culture that fostered that person’s mistakes. And sadly, we’re all human and we find blaming individuals emotionally satisfying.

We also like the idea of single causes because we are afraid of risks we can’t control. Sidney Dekker, Professor of Safety Science at Griffith University, Australia says “The failures which emerge from normal everyday systems interactions question what ‘normal’ is. It is this threat to our epistemological and moral accountancy that makes accidents of this kind so problematic. We would much rather see failure as something that can be traced back to a single source, a single individual. When this is not possible in the assignation of blame and responsibility, accuracy or fairness matters less than closing or reducing the anxiety associated with having no cause at all. In the Western scheme of things, being afraid is worse than being wrong, being fair is less important than being appeased. Finding a scapegoat is a small price to pay to maintain the illusion that we actually know how a world full of risk works.”[2]

So what do we do?  We need a reporting culture and we need safe spaces to analyse what is reported.  Without a detailed analysis of mishaps, incidents and near misses we have no way of uncovering recurrent error traps or of knowing where the “edge” is until we fall over it.  Both Reason and Dekker refer in their work to “Just Culture”, in particular restorative Just Culture rather than retributive Just Culture.

A Just Culture is one with a vital “collective understanding of where the line should be drawn between blameless and blameworthy actions” (Reason).  It’s a culture that learns and prevents by asking why it made sense at the time for highly intelligent, highly educated, highly trained and highly regulated professionals to do what they did. How many audit firms are really asking that question about failures?

I am hoping therefore that no-one is going to leave the film thinking that the next step is to warn audit partners that Bad Things will happen to them if they don’t get written representations about related parties.  Let’s instead look for our swamps and set about draining them.  In the context of the film that might include:

  • Looking at how complaints to “relationship” partners about audit team members are handled
  • An honest look at whether and how individual patronage plays a part in promotion processes and the allocation of valuable work within firms
  • Examining the trends/differences in language used in performance appraisals to describe certain behaviours when shown by women or men.

Those are just a few suggestions – there are many other areas to consider.

As Professor Reason says “Perhaps the most important distinguishing feature of high reliability organisations is their collective preoccupation with the possibility of failure. They expect to make errors and train their workforce to recognise and recover them. They continually rehearse familiar scenarios of failure and strive hard to imagine novel ones. Instead of isolating failures, they generalise them. Instead of making local repairs, they look for system reforms.”

I would like to say that I work in a High Reliability Organisation.  But are we audit firms prepared to turn that unflinching scrutiny on ourselves?

[1] Reason, J., “Human error: models and management”, BMJ 2000;320(7237):768–770

[2] Dekker, S. and Nyce, J., “Cognitive engineering and the moral theology and witchcraft of cause”, 2011

Unable to see the wood for the trees: a tale of thought leadership in the professional services industry

Guest post by Simon Griffiths, thought leadership editor

I’m not sure anyone could ever realistically put a figure on the amount that the professional services industry spends on thought leadership. Surveys, research, writers, designers, academics; these things rarely come cheap. And that’s before you even factor in the cost of the time spent by the authors themselves.

Whatever the number is, it’s typically seen as acceptable because thought leadership (TL) is apparently something that all such organisations should produce as a matter of course. It’s an expected part of being a professional strategist or advisor.

But what if much of that cost was wasted? What if much of the content produced does little to enhance brand values, to kick-start interesting client conversations or to move an organisation a step closer to being seen as genuine thought leaders?

Sadly, I think that’s exactly what is happening. In fact, I would suggest that the majority of the content currently generated within professional services which purports to be TL isn’t actually TL at all.

I believe that most of it should more accurately be described as commercial marketing content.

So much of the content that I see has no clear hypothesis; it favours straightforward reportage over more editorially engaging insight and opinion, and often makes little or no attempt to hide the self-serving commerciality which gave rise to the content in the first place.

That’s not to say it’s bad content. Some of it can be quite interesting as reference material. Some of it is beautifully presented and marketed. Some of it, I’m sure, will make its authors thousands of pounds in fees.

But it’s not thought leadership.

If it’s commercial marketing collateral, designed to tell the reader what the author knows and/or what the author does, then let’s just call it commercial marketing collateral.

Let’s reserve TL as a classification for content which genuinely articulates what the author thinks. Let’s reclaim that tag for the content for which it was intended.

What’s in a name? Well, the longer that we – as an industry – continue to pump out content which does little to turn the dial towards something more cerebral or intellectual, the more disenchanted our buyers become. Pretty soon, that TL tag is going to be seen as the most sure-fire way of having content sent immediately to Deleted Items.

An assessment of content

Reviewing content is always going to be a hugely subjective exercise. Nevertheless, to investigate this issue further, we selected 47 pieces of content (on the subjects of cyber, social media and mobility) at random from the Whitespace database (covering 24 different content-producing organisations). Our team then reviewed each piece of content against seven key indicators which I believe characterise a decent piece of TL.

The seven questions the reviewers considered were as follows:

  • Does this content feature (a) a clear hypothesis; or (b) no hypothesis?
  • Does this content contain (a) opinion and insight; or (b) reportage and factual commentary?
  • Is the subject matter (a) topical; or (b) out-of-date and/or self-serving?
  • Is the content (a) not explicitly commercial; or (b) explicitly commercial?
  • Is this content (a) cerebral / conceptual / inspirational; or (b) granular / detailed?
  • Does this content contain or highlight (a) intellectual tension; or (b) no intellectual tension?
  • Does this content’s credibility (if it has any!) come from (a) primary research and/or who the author is; or (b) from other sources, such as case studies?

Reviewers awarded one point for each of the categories above in which a piece of content exhibited characteristics which made it closer to the (a) end of the scale than the (b) end.
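To make the mechanics concrete, here is a minimal sketch of the scoring – my own reconstruction in code, not a tool used in the actual review; the class and field names are invented.

```python
from dataclasses import dataclass

@dataclass
class ContentReview:
    """One point per indicator where a piece sits at the (a) end of the scale."""
    clear_hypothesis: bool           # (a) clear hypothesis
    opinion_and_insight: bool        # (a) opinion/insight, not reportage
    topical: bool                    # (a) topical, not out-of-date or self-serving
    not_explicitly_commercial: bool  # (a) not explicitly commercial
    cerebral: bool                   # (a) cerebral/conceptual/inspirational
    intellectual_tension: bool       # (a) contains intellectual tension
    primary_credibility: bool        # (a) credibility from primary research/author

    def score(self) -> int:
        return sum(vars(self).values())

    def is_thought_leadership(self, bar: int = 6) -> bool:
        # The editorial bar applied in the review: six or seven out of seven.
        return self.score() >= bar

# Example: the typical survey discussed below scores four of the seven points.
survey = ContentReview(True, False, True, True, False, False, True)
print(survey.score(), survey.is_thought_leadership())  # -> 4 False
```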

I set an unashamedly high editorial bar on this review process. If TL is a barometer of an organisation’s intellectual prowess – which I think it should be – then it needs to be robustly challenged from an editorial point of view.

My feeling was that anything which scored six or seven on this scale fell into the category of content which focused on “what we think” and therefore deserved to be called TL.

Of those 47 pieces, just seven scored this highly. Even if the bar is dropped to bring in those pieces which scored five, that only adds two more to the overall score.

So, at best, that means that we score nine of 47 pieces as being genuine TL; less than 20 percent. Not a great return for all that time and investment.

Twenty-one pieces scored two or less, with 17 scoring either three or four. Personally, my biggest issue here is not with the really low scoring pieces. I don’t think they were ever intended to be seen as TL. They’re straightforward, commercial pieces which have been incorrectly positioned (but more of that later).

The real issue that I think professional services firms have is with that slug of content in the middle. I speculate that much of the content here did set out with the intention of being TL, but that a lack of genuine insight and intellectual property – along with a lack of bravery – left it holed beneath the waterline.

Same old, same old

A typical example would be the large survey – so beloved of professional services – which scores points on the scale above for having a hypothesis (just about), for being topical, for not being explicitly commercial and for deriving credibility from primary research.

That’s four points in the bag, but three more go begging because the end product is typically a list of one reported statistic after another, which quickly defaults to providing ‘top ten tips’ for homing in on a particular (granular) problem and thus highlights no intellectual tension.

Where’s the insight; where’s the follow-on debate; where’s the appetite for saying something to genuinely make the reader raise an eyebrow in surprise or contemplation?

All of these things are absent; sacrificed in favour of simply telling the reader what the author knows (or, more correctly, what the author has learnt from this latest survey).

The value of hypotheses

When looking at what our panel of reviewers thought of the 47 pieces of content, there’s an interesting observation about the hypotheses behind each of those pieces. Eighteen of those pieces had no discernible hypothesis, raising the question of whether something can ever be called TL if it has no genuine hypothesis to be proved or disproved.

Personally, I feel that a decent hypothesis is a bare minimum for something to be called TL. In all honesty, 18 failures out of 47 was probably a rather kind reflection on this particular selection of content. Several pieces exhibited hypotheses which verge on the self-serving – statements of fact rather than mysteries to be investigated or debated – yet we gave the authors the benefit of the doubt.

As examples, compare and contrast the following hypotheses (as summarised by our reviewers):

  • Businesses today need bespoke forensic devices which will help them perform the analyses required in order to actively navigate and respond to cyber threats.
  • Cyber attacks are more prevalent than ever before. Combating them requires more investment, better leadership and improved accountability.
  • After a strong initial uptake of social tools and technologies, organisations now find themselves at a crossroads. If they want to capture a new wave of benefits, they’ll need to change the ways they manage and organise themselves.
  • Cyber criminals are exploiting the boundaries which exist between public and private sector. Far greater information sharing between the two is going to be required in the fight against cyber crime.

Unsurprisingly, item #1 was not awarded a point under our scoring system for having a “proper” hypothesis. Rightly so, as that is a sales pitch, masquerading as a hypothesis.

Hypothesis #2 didn’t make the grade either. While it sounded promising, what followed was a survey, accompanied by a ten step guide to combating cyber threats, summarised by our reviewer as “a perfectly good marketing document”. The “hypothesis” was decidedly self-serving.

Hypothesis #3 was awarded a point by our review. The content which followed however was a prime example of ‘what might have been’ with the reviewer stating it was, “heavy on the reportage; one statistic follows another with very little thought given to what this actually means”. A promising hypothesis was thus let down by a lack of insight, leaving the content to score just four out of seven.

Hypothesis #4 also picks up a point in the review. This piece then goes on to pick up all six other points as well with an interesting blend of first person opinionated style, well reasoned arguments, author credibility and intellectual tension.

The most interesting point about this final item, however, is that it is “just” an article; not a report, survey or third party white paper. To many readers, this piece cannot be TL because it does not look like traditional TL.

However, if you subscribe to the same set of defining characteristics for TL that I do, then this is TL – because you can feel that there is genuine intellect, insight and experience all being brought to bear.

At the heart of all that is good

For me, this is a critical distinction. I believe that true TL starts with opinion, insight and speculation, followed by data and evidence, not the other way around. Yet the manner in which TL content is typically generated across the industry often runs counter to this, focusing initially and primarily on data, putting the author(s) in the role of little more than reporter or salesman.

Done properly, I think that TL authors should more accurately be seen as pundits, analysts or even gurus.

So why does so much content get produced in this fashion? Most probably because TL is now simply seen as a sub-set of marketing, designed to produce content whose aim is to generate revenue today. As such, why wouldn’t you focus on current issues, methodologies, solutions, ‘top ten tips’, case studies and advice?

The problem therefore is the way in which the very expression “thought leadership” has been perverted to mean something different to what was originally intended.

My view is that proper TL is about sales tomorrow, not sales today. It’s about building up brand value, establishing your people as the cleverest in the market and allowing the reader to take a brief step back from the hurly-burly of modern business life to take a more considered view of some of the bigger issues swirling around at the fringe.

Those organisations prepared to indulge some of their brightest minds by allowing them to think in this manner will be the winners in the long run. While this should never extend to complete vanity projects which have little or no hope of ever being commercialised, it does nevertheless require an organisation to be rather less draconian about how a degree of commerciality must be shoe-horned into every piece of content.

It’s that desire to commercialise all outputs which has served to shift the perception of what TL is. To many people, TL is now something which says ‘here’s a problem; and here’s our solution’. Personally, I think that’s just an advert.

I believe that TL is actually something which says ‘here’s a problem you’ve not even considered yet’ or ‘here’s a very new take on an existing issue’. While commerciality cannot be ignored, bringing it into the equation too early risks ruining nascent TL.

I think I need a “we”

Another threat to good quality TL is, in my view, the apparent insistence on by-lining content to multiple authors or appearing to speak with one single corporate voice. In both instances, you can virtually guarantee that the quest for consensus during the writing process has resulted in all the intellectually juicy stuff being stripped out.

In commercial content, that’s absolutely understandable. If I want someone to fix my Finance function’s risk management model for example, I want to know that my advisors all speak with one voice when telling me the best way forward.

In more aspirational TL content however, I want to hear the most interesting views from the most experienced individuals.

If my question is not “can you fix my risk model?” but is actually “what should the model look like in three years’ time?”, I’m likely not looking for a single answer – because I’m smart enough myself to know that the answer is only now evolving or emerging.

Our industry seems unwilling to admit that multiple, different views can be held within a single organisation and that there is a debate to be had. I take the opposite view; that our clients might just enjoy being part of that debate.

Plus, this is a great way of showcasing individual intellectual capability. In an industry in which organisations claim to differentiate between themselves on the strength of their people, passing up on this opportunity – in favour of profiling more tools, top tips and methodologies – has always seemed perverse to me.

That’s especially poignant right now as content marketing becomes de rigueur across the industry. Such marketing needs content which is likable and sharable (in social media terms). Such content needs to be imbued with a sense of personality and charisma; something which is really tricky to achieve within content which is based on consensus.

Trusting the used car salesman

There’s a point to be made here as well about audience expectations. To use an analogy, if your car is broken or you know you need a new car, you seek out a mechanic or a salesman. But if you’re actually intrigued to know more about the future for car design, or maybe how that could impact on your car buying choices in future, you seek out the guru, the pundit or the opinion columnist, not the mechanic or the salesman.

Too often within professional services, the pieces of content which could have been more forward-looking, speculative or cerebral are left in the hands of the mechanic or the salesman – who quickly revert to type by trying to fix something or to sell you something.

As an industry therefore, we only have ourselves to blame if many of our clients are dismissive of our TL efforts – because they know a sales pitch when they read one. We are our own worst enemies in this regard.

As mentioned before, I have no problem with the ‘what we know’ or ‘what we do’ content. It is absolutely worthwhile content which serves its purposes. It just doesn’t deserve to be called TL.

If I think of a virtual briefcase of ten pieces of content being taken into a meeting by a client-facing employee, I can imagine that eight or nine of those slots should be stocked with contemporary, commercial content. However, one or two should be reserved for something just a little bit different; something to inspire a different type of conversation.

That – I believe – is the content which will really get you noticed. This is what will differentiate you from the competition; not yet another survey, co-authored with yet another third party research house or academic.

It is this content for which the tag of “thought leadership” should be reserved.

‘I never asked to be thought leadership, you know…’

There is an interesting point though about the content which never asked to be seen as TL but has been positioned as such anyway – mainly because, as an industry, we have become so slack in terms of defining what is and isn’t TL.

Talk to Whitespace founder Fiona Czerniawska and she will tell you how firms become frustrated that their overall Whitespace ranking is often based on content which they never said was TL. What Whitespace represent, though, is the ‘everyman’ client; the visitor to the corporate website who has neither the appetite nor the inclination to try to establish what is TL and what isn’t. In their eyes, it is all just content.

Every organisation that has ever given the impression that pretty much anything it produces is TL (and there are many) has contributed to this muddying of the waters.

Arguably, the best way forward now for any organisation is to undertake a severe content cull across their online platforms, to properly articulate what constitutes TL in their mind and to explicitly signpost their clients in the direction of where that content ‘lives’.

In addition, while acknowledging that really strong, commercial content should always represent the majority of any organisation’s total output, a second (smaller) stream of more intellectually minded content should be carefully nurtured and protected.

One stream should not be mistaken for the other: a small corner of any firm’s content landscape should remain forever cerebral!

Only once the sign goes up that says “only TL lives here” AND the editorial bar is raised far higher than it is now will our clients have a more favourable perception of professional services TL.
