Models are, by nature, simplifications of (and approximations to) the real world. Errors can be introduced at each stage (as presented in the Figure below):

Specification error. This is the difference between the behaviour of the real-world situation and that captured within the specification or intentions of the model (sometimes this individual part is referred to as “model risk” or “model error”). Although one may often be able to provide a reasonable intuitive assessment of the nature of some such errors, it is extremely challenging to provide a robust quantification, simply because the nature of the real world is not fully known. (By definition, the ability to precisely define and calculate model error would only arise if such error were fully understood, in which case it could essentially be captured in a revised model, with the error then having been eliminated.) Further, whilst one may be aware of some simplifications that the model contains compared to the real-life situation, there are almost certainly possible behaviours of the real-life situation that are not known about. In a sense, one must essentially “hope” that the model is a sufficiently accurate representation for the purposes at hand. Of course, good intuition, repeated empirical observations and large data sets can increase the likelihood that a conceptual model is correct (and improve one’s confidence in it), but ultimately there will be some residual uncertainty (“black swans” or “unknown unknowns”, for example).

Implementation error. This is the difference between the specified model (as conceived or intended) and the model as implemented. Such errors could result from simple mistakes (calculation errors) or from subtler issues, such as the use of a discrete time axis in Excel (when events in fact materialize in continuous time), or of a finite time axis (instead of an unlimited one). Errors also frequently arise in which a model calculates correctly in the base case, but not in other cases (due to mistakes, or to overlooking key aspects of the behaviour of the situation).
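As a minimal illustration of the discrete-time issue, the gap between an annual-step time axis and the continuous-time growth it approximates can be quantified directly; the 8% rate and 10-year horizon below are purely illustrative assumptions:

```python
import math

# Hypothetical figures: an 8% growth rate over a 10-year horizon.
rate, years = 0.08, 10

continuous = math.exp(rate * years)   # intended continuous-time growth
annual_steps = (1 + rate) ** years    # discrete (annual-step) implementation
error = continuous - annual_steps     # implementation error from discretisation

print(round(continuous, 4), round(annual_steps, 4), round(error, 4))
# 2.2255 2.1589 0.0666
```

Here the discrete implementation understates growth by about 3%; finer time steps would shrink, but not eliminate, the gap.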

Decision error. This is the idea that a decision that is made based on the results of a model could be inappropriate. It captures the (lack of) effectiveness of the decision-making process, including a lack of understanding of a model and its limitations. Note that a poor outcome following a decision does not necessarily imply that the decision was poor, nor does a good outcome imply that the decision was the correct choice. Some types of model error relate to multiple process stages (rather than a single one), including where insufficient attention is given to scenarios, risk and uncertainties.

BENEFITS OF USING FINANCIAL MODELS IN DECISION SUPPORT

Providing Numerical Information

A model calculates the possible values of variables that are considered important in the context of the decision at hand. Of course, this information is often of paramount importance, especially when committing resources, budgeting and so on. Nevertheless, the calculation of the numerical values of key variables is not the only reason to build models; the modelling process often has an important exploratory and insight-generating aspect (see later in this section). In fact, many insights can often be generated early in the overall process, whereas numerical values tend to be of most use later on.

Capturing Influencing Factors and Relationships

The process of building a model should force a consideration of which factors influence the situation, including which are most important. Whilst such reflections may be of an intuitive or qualitative nature (at the early stages), much insight can be gained through the use of a quantitative process. The quantification of the relationships requires one to consider the nature of the relationships in a very precise way (e.g. whether a change in one variable would impact another and by how much, whether such a change is linear or non-linear, whether other variables are also affected, or whether there are (partially) common causal factors between variables, and so on).
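As a sketch of what such quantification involves, the relationship between price and volume could be captured as linear or as non-linear; all names and parameters below (base values, slope, elasticity) are illustrative assumptions, not figures from the text:

```python
# Two hypothetical ways of making a price-volume relationship precise.
def volume_linear(price, base_volume=1000.0, base_price=10.0, slope=-50.0):
    # Linear response: each unit of price change shifts volume by a fixed amount.
    return base_volume + slope * (price - base_price)

def volume_elastic(price, base_volume=1000.0, base_price=10.0, elasticity=-1.5):
    # Non-linear response: a constant price elasticity of demand.
    return base_volume * (price / base_price) ** elasticity

print(volume_linear(11.0), round(volume_elastic(11.0), 1))
# 950.0 866.8
```

Choosing between such forms (and estimating their parameters) is exactly the kind of precise reflection that the quantitative process forces.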

Generating Insight and Forming Hypotheses

The modelling process should highlight areas where one’s knowledge is incomplete, what further actions could be taken to improve this, as well as what data is needed. This can be valuable in its own right. In fact, a model is effectively an explicit record of the assumptions and of the (hypothesized) relationships between items (which may change as further knowledge is developed). The process therefore provides a structured approach to developing a better understanding. It often uncovers many assumptions that are being made implicitly (and which may be imprecisely understood or incorrect), as well as identifying the assumptions that are required and appropriate. As such, both the qualitative and the quantitative aspects of the process should provide new insights and identify issues for further exploration.

The overlooking or underestimation of these exploratory aspects is one of the main inefficiencies in many modelling processes, which are often delegated to junior staff who are competent in “doing the numbers”, but who may lack the experience, project exposure, authority or credibility to identify and report many of the key insights, especially those that may challenge current assumptions. Thus, many possible insights are either lost or are simply never generated in the first place. Where a model produces results that are not readily explained intuitively, there are two generic cases:

◾ It is over-simplified, highly inaccurate or wrong in some important way. For example, key variables may have been left out, dependencies not correctly captured, or the assumptions used for the values of variables may be wrong or poorly estimated.
◾ It is essentially correct, but provides results which are not intuitive. In such situations, the modelling process can be used to adapt, explore and generate new insights, so that ultimately both the intuition and the model’s outputs become aligned.
This can be a value-added process, particularly if it highlights areas where one’s initial intuition may be lacking. In this context, the following well-known quotes come to mind:

◾ “Plans are worthless, but planning is everything” (Eisenhower).
◾ “All models are wrong, but some are useful” (Box).
◾ “Perfection is the enemy of the good” (Voltaire).

Decision Levers, Scenarios, Uncertainties, Optimization, Risk Mitigation and Project Design

When conducted rigorously, the modelling process distinguishes factors which are controllable from those which are not. It may also highlight that some items are partially controllable, but require further actions that may not (currently) be reflected in the planning or in the model (e.g. the introduction of risk mitigation actions). Ultimately, controllable items correspond to potential decisions that should be taken in an optimal way, and non-controllable items are those which are risky or subject to uncertainty. The use of sensitivity, scenario and risk techniques can also provide insight into the extent of possible exposure if a decision were to proceed as planned, lead to modifications to the project or decision design, and allow one to find an optimal decision or project structure.

Improving Working Processes, Enhanced Communications and Precise Data Requirements

A model provides a structured framework for capturing information from subject matter specialists or experts. It can help to define the information requirements precisely, which improves the effectiveness of the research and collection process used to obtain such information. The overall process and results should also help to improve communications, due to the insights and transparency generated, as well as creating a clear structure for common working and co-ordination.
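A minimal sketch of the sensitivity techniques mentioned above (all figures are illustrative assumptions): each input is flexed by ±10% around a base case, and the resulting swing in the output indicates which uncertainties matter most:

```python
# Hypothetical one-way sensitivity analysis on a simple cash-flow model.
def cash_flow(price, volume, fixed_cost, var_cost):
    return price * volume - (fixed_cost + var_cost * volume)

# Illustrative base-case assumptions.
base = {"price": 10.0, "volume": 1000.0, "fixed_cost": 2000.0, "var_cost": 6.0}

for name in base:
    low, high = dict(base), dict(base)
    low[name] *= 0.9    # flex the input down 10%
    high[name] *= 1.1   # flex the input up 10%
    swing = cash_flow(**high) - cash_flow(**low)
    print(f"{name}: swing = {swing:+.0f}")
```

In this illustrative case, price produces the largest swing, suggesting where risk-mitigation effort (or better estimation) would be most valuable.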

Financial Modelling: The Backward Thought and Forward Calculation Processes

The modelling process is essentially two-directional (see Figure):

◾ A “backward thought process”, in which one considers a variable of interest (the model output) and defines its underlying, or causal, factors. This is a qualitative process, corresponding to reading the Figure from left to right. For example, cash flow may be represented as being determined from revenue and cost, each of which may be determined by its own causal factors (e.g. revenue is determined by price and volume). As a qualitative process, at this stage the precise nature of the relationships may not yet be clear: only that the relationships exist.
◾ A “forward-calculation process”, in which one starts with the assumed values of the final set of causal factors (the “model inputs”) and builds the required calculations to determine the values of the intermediate variables and final outputs. This is a numerical process, corresponding to reading the Figure from right to left. It involves defining the nature of the relationships sufficiently precisely that they can be implemented in quantitative formulae. That is, inputs are used to calculate the intermediate variables, which are used to calculate the outputs. For example, revenue would be calculated (from an assumed price and volume), and cost (based on fixed and variable costs and volume), with cash flow as the final output.
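The forward-calculation process in this example can be sketched directly (the numerical values are illustrative assumptions):

```python
# Model inputs (assumed values).
price, volume = 10.0, 1000.0
fixed_cost, var_cost_per_unit = 2000.0, 6.0

# Intermediate variables, each calculated from its causal factors.
revenue = price * volume
cost = fixed_cost + var_cost_per_unit * volume

# Final model output.
cash_flow = revenue - cost
print(cash_flow)  # 2000.0
```

Note that the calculation order is the reverse of the backward thought process: the causal factors identified last are the values assumed first.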

Note that the process is likely to contain several iterations: items that are initially numerical inputs may be replaced by calculations (which are themselves determined from new numerical inputs), thus creating a model with more input variables and more detail. For example, rather than being a single figure, volume could be split by product group. In principle, one could continue this process indefinitely (i.e. repeatedly replacing hard-coded numerical inputs with intermediate calculations). Of course, the process of creating more and more detail must stop at some point:

◾ For the simple reason of practicality.
◾ To ensure accuracy. Although the creation of more detail might be expected to produce a more accurate model, this is not always the case: a detailed model requires more information to calibrate correctly (for example, to estimate the values of all the inputs). Further, capturing the relationships between these inputs becomes progressively more complex as more detail is added.
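The iteration described above, in which the single volume input is replaced by a calculation split by product group, might be sketched as follows (the groups and figures are hypothetical):

```python
# Previously, volume was a single hard-coded input (e.g. volume = 1000.0).
# It is now calculated from new, more detailed inputs.
volume_by_group = {"Product A": 600.0, "Product B": 250.0, "Product C": 150.0}

volume = sum(volume_by_group.values())  # now an intermediate calculation
print(volume)  # 1000.0
```

The model’s structure is unchanged, but three new inputs must now be estimated instead of one, illustrating the calibration trade-off noted above.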

It may be of interest to note that this framework is slightly simplified (albeit covering the large majority of cases in typical Excel contexts):

◾ In some applications (notably sequential optimization of a time series, and decision trees), the calculations are required to be conducted both forward and backward, as the optimal behavior at an earlier time depends on considering all the future consequences of each potential decision.
◾ In econometrics, some equations may be of an equilibrium nature, i.e. they contain the same variable(s) on both sides of an equation. In such cases, the logic flow is not directional, and will potentially give rise to circular references in the implemented models.
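As an illustration of the second case, such a circularity can be resolved by fixed-point iteration (much as Excel’s iterative-calculation option does). The example below, a bonus defined as 10% of profit after deducting the bonus itself, uses purely hypothetical figures:

```python
# Circular definition: bonus = 10% of (profit_before_bonus - bonus).
profit_before_bonus = 1100.0   # illustrative assumption
bonus_rate = 0.10

bonus = 0.0
for _ in range(50):  # iterate until the value stabilises
    profit_after_bonus = profit_before_bonus - bonus
    bonus = bonus_rate * profit_after_bonus

print(round(bonus, 4))  # 100.0 (closed form: 0.1 * 1100 / 1.1)
```

Each pass shrinks the discrepancy by a factor of the bonus rate, so the iteration converges quickly to the equilibrium value.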

The Stages of Financial Modelling

The modelling process can be considered as consisting of several stages, as shown in the Figure below.

The key characteristics of each stage include:

◾ Specification: This involves describing the real-life situation, either qualitatively or as a set of equations. In any case, at this stage one should also consider the overall objectives and decision-making needs, and capture the core elements of the behaviour of the real-world situation. One should also address issues relating to the desired scope of model validity, the level of accuracy required and the trade-offs that are acceptable to avoid excessive complexity whilst providing an adequate basis for decision support.

◾ Implementation: This is the process of translating the specification into numerical values, by conducting calculations based on assumed input values. For the purposes of this text, the calculations are assumed to be in Excel, perhaps also using additional compatible functionality (such as VBA macros, Excel add-ins, optimisation algorithms, links to external databases and so on).

◾ Decision support: A model should appropriately support the decision. However, as a simplification of the real-life situation, a model by itself is almost never sufficient. A key challenge in building and using models to greatest effect is to ensure that the process and outputs provide a value-added decision-support guide (not least by providing insight, reducing biases or correcting invalid assumptions that may be inherent in less-rigorous decision processes), whilst recognising the limitations of the model and the modelling process.

Note that in many practical cases, no explicit specification step is conducted; rather, knowledge of a situation is used to build an Excel workbook directly. In that case the model specification is the model itself (i.e. it is captured within the formulae used in Excel), and since Excel simply calculates whatever formulae it is given, such a model can never truly be “(externally) validated”. Although such “self-validation” is in principle a significant weakness of these pragmatic approaches, the use of a highly formalized specification stage is often not practical (especially if one is working under tight deadlines, or believes that the situation is generally well understood). Some of the techniques discussed in this text (such as sensitivity-driven model design and the following of other best practices) are particularly important to support robust modelling processes, even where little or no documented specification has taken place or is practically possible.
