Short-term betting results can't tell you, with any real confidence, how likely a strategy is to succeed long-term. The more results you have, the more clearly and accurately you can verify the profitability of your strategy. In short, a large sample size increases confidence and reduces uncertainty.

**Sample Sizes In Betting**

In betting we can analyse past data to determine estimates or trends. The size of the sample, i.e. the total number of bets recorded, dictates the amount of information we have and (in part) determines the precision or level of confidence that we have in our estimates.

In some of my other blog posts (like my analysis on Drifters & Steamers) I refer to a “test strategy” of recorded bets using **real** odds. Using this same data set I aim to illustrate the importance of using an appropriate sample size. The selection method for the strategy used is irrelevant to this post — what we are looking to answer is one key question:

**What is the estimated (long-term) yield of the bet selection method?**

In estimating the yield we hope to say with *confidence* whether or not the strategy is profitable, and to determine what % ROI we expect to make going forward.
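As a sketch of what "estimating the yield with confidence" means in practice, the snippet below computes the observed yield from a list of per-bet profits, together with an approximate 95% confidence interval (normal approximation). The bet counts here are hypothetical, purely for illustration — they are not the test strategy's data:

```python
import math

def yield_with_ci(profits, stake, z=1.96):
    """Estimate yield (ROI) with a ~95% normal-approximation CI.

    profits: per-bet profit/loss amounts, e.g. +(odds - 1) * stake
    for a win, -stake for a loss. Returns (yield, lower, upper)
    as fractions of stake.
    """
    n = len(profits)
    mean = sum(profits) / n
    var = sum((p - mean) ** 2 for p in profits) / (n - 1)  # sample variance
    se = math.sqrt(var / n)  # standard error of the mean profit
    return mean / stake, (mean - z * se) / stake, (mean + z * se) / stake

# Hypothetical mini-sample: 3 wins at odds 9.95 and 27 losses, £2 stakes
profits = [2 * 8.95] * 3 + [-2.0] * 27
y, lo, hi = yield_with_ci(profits, stake=2.0)
```

Note how, with only 30 bets, the interval is enormous — which is exactly the point: the estimate alone means little without its uncertainty.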

**The Dangers Of A Small Sample Size**

Determining a “Small” or “Large” sample size is actually quite difficult. The best option is to always collect as much data as possible. Take a look at the early results of the test strategy…

###### The first 15 days of the test strategy produced the following results:

- **Bets**: 2,375
- **Average odds**: 9.95
- **Yield**: +5.77%

The achieved yield of +5.77% and the positive trend of the graph is a promising sign considering a total of 2,375 bets were placed — a seemingly substantial sample size to base future predictions on…

**WARNING!** Assuming that this strategy is profitable is precisely the danger of analysing past betting results. The sample is not representative of the future, and I'll prove this in the next section on 'Large Sample Sizes'.

Our estimated yield has an associated level of uncertainty which depends upon the underlying variability of the data as well as the sample size. The **more variable** the sample, the **greater the uncertainty** in our estimate.
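To put a number on that variability: under a simplifying assumption of break-even bets at fixed decimal odds (win probability = 1/odds, unit stakes), the per-bet profit variance works out to odds − 1, so the standard error of the observed yield is easy to sketch. Comparing short-priced selections against odds like the test strategy's 9.95 shows why high odds mean noisy yields:

```python
import math

def yield_se(odds, n_bets):
    """Standard error of the observed yield over n break-even bets
    at fixed decimal odds (win prob = 1/odds, unit stakes).
    Per-bet profit variance simplifies to odds - 1."""
    return math.sqrt((odds - 1) / n_bets)

n = 2375                     # the size of the early sample
se_low  = yield_se(2.0, n)   # short-priced selections: ~±2.1%
se_high = yield_se(9.95, n)  # the test strategy's average odds: ~±6.1%
```

At average odds of 9.95 and 2,375 bets, one standard error is roughly ±6 percentage points of yield — so an observed +5.77% is within a single standard error of zero, and far from conclusive.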

### Consider the Following Uncertainties in Our Sample:

- Was the selection method for the bets completely unbiased?
- Are the initial 15 (consecutive) days representative of all days in the year?
- Have our selections had an uncharacteristically good run of form, or is this success rate normal?
- Are the average odds (at 9.95) capable of producing high variance results that swing in one direction or the other?
- Has the weather favourably, or unfavourably, impacted the results?

I believe that the selection method is suitably **unbiased** as it’s formed solely from past racing results. But the other uncertainties listed above, amongst many more, could be significant factors for the positive observed results during the first 15 days.

Whilst it might seem a little cynical to pick apart a winning run, it’s better to be critical of your results and to continue collecting data rather than making naive assumptions. Failure to fully analyse results can result in real money losses. Remember: larger sample sizes improve the accuracy of the information you have, and reduce uncertainty.

**The Importance Of Large Sample Sizes**

As I’ve mentioned, the test strategy does indeed take a turn for the worse despite the exceptionally good, promising start. This was evident from continued data collection under the exact same conditions, taking the sample up to 17,717 bets of £2.

###### The following graph incorporates the initial 2,375 bets, up to a total of 17,717 total bets.

- **Bets**: 17,717
- **Average odds**: 9.9
- **Yield**: -0.63%

With the increased sample size we have greater precision. Assumptions we could have made previously from the smaller data set are now somewhat disproved. Crucially, the yield (ROI) settles at -0.63%. The inconsistency in the graph gives no real reason for us to believe that this selection method is profitable.

Theoretically, if we could take this sample to infinity and include every future bet, then we would obtain the true value that we are trying to estimate – the actual yield of the strategy with no uncertainty. This is of course impossible, and despite the improved accuracy achieved by increasing the sample size, our predictions still aren’t **necessarily** representative of the future.
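A quick Monte Carlo sketch makes the same point: simulate a strategy whose *true* yield is -0.63% at odds 9.95 (matching the final figures above), and see how wildly samples of 2,375 bets scatter compared with samples of 17,717. The simulation setup is an assumption — fixed odds, independent bets — not a claim about how the real selections behave:

```python
import random

def simulate_yield(n_bets, odds=9.95, true_yield=-0.0063, seed=None):
    """Observed yield of n unit-stake bets at fixed odds, with the win
    probability chosen so the true long-run yield is -0.63%."""
    rng = random.Random(seed)
    p_win = (1 + true_yield) / odds   # ensures E[profit per bet] = true_yield
    profit = sum(odds - 1 if rng.random() < p_win else -1
                 for _ in range(n_bets))
    return profit / n_bets

small = [simulate_yield(2_375, seed=s) for s in range(200)]
large = [simulate_yield(17_717, seed=s) for s in range(200)]
# The small samples scatter far more widely around the true -0.63%;
# many of them look comfortably profitable purely by chance.
```

In other words, a genuinely losing strategy can easily produce a +5.77% yield over 2,375 bets — exactly the trap the early results set.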

Still, despite some level of uncertainty, given my experience in betting I wouldn’t be rushing to use this betting strategy!

**A Step Further: Power & Effect Size**

Increasing the sample size gives greater power to detect differences.

Suppose that we were also interested in whether there’s a difference in the proportion of young and old winning horses. We may, for example, believe that older, more experienced horses perform better. We could ask the question:

Is the observed effect (the difference in results) **significant** given that the total number of future bets is potentially limitless?

Or is the observed effect (the higher proportion of older winning horses) simply due to chance?

Without delving too deeply into the specifics of statistical tests, it’s worth mentioning that you could take things that extra mile by using what’s known as the ‘binomial test of equal proportions’ or ‘two-proportion z-test’. If you find that there is insufficient evidence to establish a difference between young and old horses, then the result is not considered statistically **significant**. Usually a cut-off level is chosen in advance of performing the test (e.g. 10%) and is called the “significance level”. If the test’s p-value falls below that cut-off, we deem the observed difference statistically significant.
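A minimal sketch of the two-proportion z-test mentioned above, using entirely hypothetical win counts for older vs younger horses (the counts are made up for illustration, not taken from the test strategy):

```python
import math

def two_proportion_ztest(wins1, n1, wins2, n2):
    """Two-proportion z-test. Returns (z, two-sided p-value):
    tests whether two win rates differ by more than chance allows."""
    p1, p2 = wins1 / n1, wins2 / n2
    pooled = (wins1 + wins2) / (n1 + n2)          # pooled win rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: older horses win 130/1000 bets, younger 100/1000
z, p = two_proportion_ztest(130, 1000, 100, 1000)
```

With these made-up counts the p-value lands around 0.036 — significant at a 5% or 10% level, but note that the same 3-point gap over far fewer bets would not be.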

If we increase the sample size of our test strategy to, let’s say, 100,000 bets, we would have more data to support estimates for horses of different ages. Increasing our sample size therefore increases the **power** that we have to detect the difference. More formally:

> Statistical power is the probability of finding a statistically significant result, given that there **really** is a difference (or effect) in the races.
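This relationship between sample size and power can be sketched with the usual normal approximation. The win rates below are the same hypothetical 3-point gap, and the formula ignores the negligible chance of rejecting in the wrong direction — a sketch, not a full power analysis:

```python
import math

def norm_cdf(x):
    """Standard normal CDF, built from math.erf."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_proportions(p1, p2, n, z_alpha=1.96):
    """Approximate power of a two-sided two-proportion z-test with n
    bets per group (z_alpha = 1.96 corresponds to alpha = 0.05)."""
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    return norm_cdf(abs(p1 - p2) / se - z_alpha)

# The same hypothetical 3-point gap in win rate, at two sample sizes
low  = power_two_proportions(0.13, 0.10, n=1_000)    # ~56% power
high = power_two_proportions(0.13, 0.10, n=50_000)   # effectively 100%
```

At 1,000 bets per group you would miss this effect almost half the time; at 50,000 per group you would detect it essentially always — which is the practical meaning of "more data gives more power".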

**Final Thoughts**

Large sample sizes give more reliable results with greater precision and power, but performing sound analyses also costs time and money. Automating data collection, or using existing sources of data, is therefore essential for making accurate predictions and assumptions.

Remember that it’s critical to ensure that you use a sufficiently large sample size when attempting to draw meaningful conclusions from betting results. This goes for proofing Tipsters, too. But try not to waste resources by sampling more than you really require.

#### Further Reading:

Drifters & Steamers — The Risers & Fallers Of Betting Markets


Big swing in the results there!! Bit gutting when something seems to work and make money, then tails off. I suppose that’s the reality of betting. Thanks for sharing.

Found this from twitter… Good post. Seems that becoming a pro gambler is harder than you’d think

Did something change in your strategy?

Were you too obvious about what you were doing?

I find it hard to believe it would start losing money after so many bets

No, nothing changed in the strategy. I’ve always thought to myself that it was unusual how a strategy could work so well and then eventually lose. I have some ideas on why this may have happened. Perhaps there are bots that can identify patterns – such as other strategies…

I’ll write a post on my ideas on this.