Assurance: the completeness conundrum

In December 2015, ICAEW’s Audit and Assurance Faculty published the third “milestone” of its journey towards assuring the whole of the annual report, entitled The journey milestone 3: Assuring the appropriateness of business information. The paper sets out to answer the question: how can an assurance provider be confident that disclosures made by a business give a fair picture of what is going on in that business? The greater part of the paper is given over to how the assurance provider should address the question of completeness.

In practice, assurance providers don’t get asked for opinions over completeness alone.  Preparers and users of information are most often looking for an opinion over the general “rightness” of some disclosed information, and that “rightness” is frequently expressed in terms of “proper preparation” or “fair presentation” in accordance with a disclosed basis.

The first challenge for the assurance provider, then, is to decide to what extent an assertion from management, or an opinion from an assurance provider, couched in the terms “is properly prepared” or “fairly presents” inherently asserts that the disclosed information is complete.

This is further complicated when the information disclosed is not intended to be 100% of the set of data, but a selection of data above or below a particular threshold, or with particular characteristics, for example, “the most significant items”. This would be the case for key performance indicators (KPIs).

In my view, for any data set which management present as a sub-set of a finite universe of data – e.g. top 5 contracts by revenue in the period, broadcasts with more than 1 million viewers – the assertion that it is properly prepared inherently implies that it is complete.  Certainly, in the case of finite populations, management can demonstrably begin with the complete universe of, say, contracts or broadcasts, and then select the complete set of the five with highest revenue or the ones that had more than a threshold number of viewers, respectively.
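The point about finite populations can be sketched in code. The following is an illustrative sketch, not anything from the paper: the data, field names, and threshold are all hypothetical. It shows why completeness of a sub-set drawn from a finite universe is objectively testable — given the full universe and a disclosed selection rule, one can verify that no excluded item satisfies the rule better than an included one.

```python
# Hypothetical universe of contracts with revenue for the period.
contracts = [
    {"name": "A", "revenue": 120},
    {"name": "B", "revenue": 340},
    {"name": "C", "revenue": 90},
    {"name": "D", "revenue": 560},
    {"name": "E", "revenue": 210},
    {"name": "F", "revenue": 75},
    {"name": "G", "revenue": 410},
]

# Selection rule 1: top 5 contracts by revenue in the period.
top_5 = sorted(contracts, key=lambda c: c["revenue"], reverse=True)[:5]

# Selection rule 2: contracts above a disclosed threshold.
THRESHOLD = 200
above_threshold = [c for c in contracts if c["revenue"] > THRESHOLD]

# Completeness is demonstrable: no excluded contract may out-rank an
# included one (rule 1), and no excluded contract may exceed the
# threshold (rule 2).
excluded_1 = [c for c in contracts if c not in top_5]
assert all(e["revenue"] <= min(t["revenue"] for t in top_5)
           for e in excluded_1)

excluded_2 = [c for c in contracts if c not in above_threshold]
assert all(e["revenue"] <= THRESHOLD for e in excluded_2)
```

Because both the universe and the rule are fixed, the two assertions at the end can only pass if the disclosed sub-sets are complete; the same check is impossible where the universe itself cannot be enumerated.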

For such a dataset to be properly prepared must, surely, mean that it is the complete sub-set.  To provide assurance over its proper preparation, without testing completeness – even if disclosing as a limitation in the scope of work that no testing was carried out on completeness – doesn’t seem to be an option.

In the case of infinite populations, such as KPIs, it’s impossible for management to demonstrate that they selected from the full set.  Instead, they are demonstrating that they applied a fair process to the setting of boundaries to create a finite subset.

For some assurance engagements over KPIs, completeness is intentionally taken out of the question.  An example would be assurance over NHS quality accounts, where the indicators within the scope of the assurance are mandated by an external body.  Management are not asserting that those indicators are a complete sub-set meeting a definition (e.g. “key”); the assurance provider’s opinion on proper preparation cannot be interpreted as an opinion on the completeness of the indicators selected by the external body.  Indeed, the user of the assurance is left to form their own view as to whether the right indicators have been brought into scope.

It would not, however, be possible for management themselves to select a sub-set of indicators to disclose and ask the assurance provider for an opinion on proper preparation, excluding from scope the question of completeness.  Without any explicit or implied assertion from management that boundaries have been set so as to mark out a complete sub-set of some kind, the data is, to all appearances, a random selection.  In that scenario, what could be the rational purpose for its disclosure or for assurance?

The paper doesn’t address these initial acceptance questions.  In practice, though, in my view the question “is it possible to form an objective opinion as to whether this set of information is, in a meaningful sense, complete?” is critical to the decision to accept an assurance engagement over non-financial information.

In the guidance that is provided, the paper rightly goes beyond the question of completeness into relevance. It notes that, in the case of KPIs, first management and subsequently the assurance provider have to consider whether the set chosen is sufficient (i.e. complete) and necessary (i.e. all the selections meet the definition of “key” performance indicators).

I’m reminded of the FRC’s Guidance on the Strategic Report, which states “The number of items disclosed as a result of the requirements to disclose principal risks or KPIs will generally be relatively small; they should not, for example, result in a comprehensive list of all performance measures used within the business or of all risks and uncertainties that may affect the entity.”  Clearly the FRC thinks that, in the context of the strategic report, over-disclosure is as much of an issue as incompleteness.

The greater part of the paper focusses on how the assurance provider gathers evidence over completeness.  For readers like me, with a background in financial statement audit, it might have been helpful to break this section down more clearly into two parts corresponding to controls testing and substantive testing.

The paper recommends mapping out what management are trying to achieve by way of completeness (referred to in the paper as “control objectives”) to their process for achieving it.  The assurance provider considers whether each stage of the process will achieve what is intended (design), and whether there is evidence that the process was adhered to (operation).

Finally, you look at the actual output – the sub-set selected.  Does it make sense?  In the case of KPIs for example, we could create an expectation for the extent to which we’d expect this set of data to be comparable with that of the organisation’s peers, or other datasets prepared by the same organisation for different users, and then test that by actual comparison.  We could also test the extent to which the data is, in practice, used in decision-making by management and Those Charged With Governance.

The paper doesn’t address how the assurance provider will deal with problematic findings.  What if management’s process for achieving completeness – boundary-setting – is not adequate to start with?  Or hasn’t been adhered to in full in the period?  Should the assurance provider decline the engagement, or is it possible to do sufficient testing of the actual output to compensate for the weakness in design and/or operation of the process?

On the other hand, if the process is strong, is testing the output of any additional value? In practice, if management have evidently engaged appropriately with internal and external stakeholders, is the assurance provider ever in a position to argue that the resulting data selected is not complete and/or relevant?

I think a good next step would be to collect examples of how these issues are resolved in practice.  In the meantime, this paper will be useful for financial statement auditors who are grappling with the challenges of non-financial assurance, and finding the issue of completeness, outwith a system of double-entry book-keeping, thinly addressed in professional standards and guidance.

It will also help preparers to assess how comfortable they are with completeness in their data collection and reporting processes for non-financial data and their assertions over it.  And finally, it might lead everyone interested in company reporting to reflect on the extent to which “fair, balanced and understandable” inherently implies “complete”.
