When Is Previous Performance a Useful Indicator of Future Results?
It can be tempting to rely solely on previous results when making decisions, but it is important to remember the dangers that can come with doing so.
Agencies often give specific examples or case studies to demonstrate their performance. This can be problematic in itself, as any campaign has a lot of moving parts. Previous results can be misleading and can steer buyers towards the wrong choices, and ultimately towards undesired outcomes.
In this blog post, we will discuss why relying solely on previous results can be dangerous and how to better assess situations before making a decision.
Average Results vs Standard Deviations
When analysing previous results, it is important to go beyond simply looking at average figures and take into account the distribution of the results. Average results can provide a general overview of the data, but they may not tell the whole story. Standard deviations, on the other hand, give us a measure of how much the data varies from the average.
By considering the standard deviations, we can better understand the range and distribution of the data. This information is crucial in making decisions because it allows us to assess the level of uncertainty associated with the results. A smaller standard deviation suggests that the data points are closely clustered around the average, indicating a higher level of reliability. Conversely, a larger standard deviation implies a greater degree of variation and potential unpredictability.
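A minimal sketch with made-up numbers shows why the average alone can mislead: two hypothetical campaigns below have exactly the same mean, but very different consistency.

```python
import statistics

# Two hypothetical campaigns, each measured in leads per week.
consistent = [9, 10, 10, 11, 10, 10, 10, 10, 10, 10]
volatile = [2, 18, 5, 15, 10, 1, 19, 10, 12, 8]

# Both average 10 leads per week...
print(statistics.mean(consistent))  # 10
print(statistics.mean(volatile))    # 10

# ...but the standard deviation exposes how differently they behave:
# the first clusters tightly around the average, the second swings wildly.
print(statistics.stdev(consistent))
print(statistics.stdev(volatile))
```

Looking only at the mean, the two campaigns are indistinguishable; the standard deviation is what tells you which one you can actually plan around.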
For example, we openly publish our historic performance on our portal. Our rolling average for telemarketing isn't actually our mean average: we show what we've been able to achieve in 95% of cases. Why do we do this? Because the mean average would show a much higher figure, skewed by some of our longer-term campaigns, where we deliver lead after lead with high levels of consistency.
Imagine someone analysing this sales data from the past year to predict future sales. Whilst we say that past performance is not a guarantee, it's important for us to own the accuracy of the information that we provide. As such, the telemarketing average is what we were able to achieve in 95% of cases, rather than the mean average. For example, we work as a White Label Telemarketing Service London & UK-Wide. There's a lot that can be learnt by considering a small set of data, but also by rooting that in large sets.
The average sales figure may seem promising, but without considering the standard deviation, we cannot gauge how consistent those sales were. If the standard deviation is large, it means that the sales fluctuated significantly, which could indicate factors that are difficult to control, such as seasonal trends or economic conditions.
By taking into account both the average results and standard deviations, we can make more informed decisions. This approach allows us to better understand the inherent uncertainties in the data and make adjustments accordingly. It ensures that our decisions are not solely based on average figures, but also consider the variability and potential risks involved.
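The "achieved in 95% of cases" idea above can be sketched with invented figures. The snippet treats it as a nearest-rank 5th percentile: the value that 95% of campaigns met or beat. The lead counts are purely illustrative, not real campaign data.

```python
import math
import statistics

# Hypothetical leads per campaign: most campaigns land a handful of leads,
# while a few long-running ones land many and drag the mean upwards.
leads = [3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 7, 25, 30, 32]

def floor_95(values):
    """Nearest-rank 5th percentile: 95% of campaigns met or beat this figure."""
    ordered = sorted(values)
    rank = max(math.ceil(0.05 * len(ordered)), 1)
    return ordered[rank - 1]

print(round(statistics.mean(leads), 1))  # pulled upwards by the outliers
print(floor_95(leads))                   # the more conservative "95% of cases" figure
```

On this toy data the mean is close to 10 leads, while the 95%-of-cases figure is 3: quoting the mean alone would set expectations most campaigns never meet.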
How Bias Can Affect Previous Results
Bias can significantly impact the accuracy and reliability of previous results. Bias refers to the tendency to favour certain outcomes or interpretations, which can skew the data and lead to misleading conclusions. There are several types of bias that can affect previous results.
Selection bias is one common form of bias that occurs when certain data points are systematically excluded from the analysis. This can occur when only certain groups or individuals are included in the study, leading to a biased representation of the population. For example, if a study only includes data from a specific region or demographic group, the results may not be applicable to the larger population.
Confirmation bias is another type of bias that can influence previous results.
This occurs when researchers or decision-makers selectively focus on information that supports their pre-existing beliefs or expectations. As a result, they may disregard contradictory data or fail to consider alternative explanations. Confirmation bias can lead to overestimation or underestimation of certain factors, distorting the accuracy of the results.
Publication bias is a form of bias that occurs when only certain studies or results are published or made available to the public. This can happen if studies with negative or inconclusive results are less likely to be published, creating an incomplete or biased view of the available evidence.
In order to mitigate bias and ensure the accuracy of previous results, it is important to implement rigorous research methodologies and be transparent about any potential biases or limitations in the data. Additionally, involving multiple perspectives and independent researchers can help to reduce the impact of bias and increase the reliability of the results. By critically evaluating previous results and considering the potential biases involved, decision-makers can make more informed choices and avoid the pitfalls of relying solely on biased data.
How to Interrogate Previous Averages to Determine Accuracy
In order to accurately assess the reliability of previous results, it is essential to interrogate the average figures and determine their accuracy. Here are some strategies to help you determine the accuracy of previous averages:
Examine the sample size: The size of the sample used to calculate the average can have a significant impact on its reliability. A larger sample size generally leads to more accurate results, as it reduces the likelihood of random variations skewing the average.
Consider the source of the data: It is important to evaluate the credibility and validity of the source from which the data was obtained. Look for reputable sources and consider whether the data collection methods were rigorous and unbiased.
Analyse the time frame: The time frame over which the data was collected can also influence its accuracy. Consider whether the data is recent and relevant to the current situation. Trends and circumstances may have changed since the data was collected, making it less applicable to the present.
Seek additional perspectives: Consult experts or colleagues who have experience in the relevant field. They may offer insights or alternative interpretations of the data that can help you evaluate its accuracy.
Compare with your own data sources & experience: Cross-referencing the previous averages with data from other reliable sources can provide a more comprehensive view. If the results align consistently across different sources, it increases confidence in their accuracy.
Ask for drill-downs: Pick a few areas and really drill down on them. Get as specific as you can. This will quickly reveal how well put together the data set is and, therefore, how trustworthy it is.
By interrogating previous averages and considering these factors, you can better determine their accuracy and make more informed decisions based on reliable data.
To prevent forms of bias, we publish our all-campaign and all-agent averages. We also publish averages over larger periods of time e.g. 2023 so far, or all of 2022.
It is possible for us to break our historic track record down in more specific ways, such as:
B2C vs B2B
Industry
The starting point of the data we're working from
The handover point e.g. booked meeting vs callback
Whether we're handing off to one salesperson or a whole team
The best-performing pitch
The distribution of results across a team e.g. everyone consistent vs one superstar
The best days and times to call
And much more
There are of course a lot of moving parts to any campaign, so this isn't a specific guarantee on what you'll receive. This is merely a reflection of what we've historically been able to do for other clients as a rolling average.
We share a running breakdown of the ratios of leads we've secured for other clients. Definitions of a Lead and more details are available at https://parkrow.marketing/data-driven-lead-generation-our-brand-promise/ or by contacting us via web chat.