Dividend Reinvestment – does it matter what strategy you have? (Part 3)

In Part 1 I mused on an idea about dividend re-investment (over a relatively short time period and few holdings). In Part 2 I looked at 18 different holdings over a 10-year period: the impact of re-investment thresholds, of buying only dips, and of some level of re-balancing to ensure particular holdings don’t become disproportionate as a percentage of the total portfolio.

A slight tweak to the criteria and I end up selecting 69 different holdings I could use, with data going back 15 years.

The thing I wanted to do was create a rather more real-world simulation and look at the possible outcomes of simply randomly picking from a broad pool.

So I know I am going to ‘buy the dips’ and threshold re-investment at £300 (from Parts 1 & 2). Now my simulation brings in some other criteria:

      • randomly pick 18 holdings
      • compare balancing vs non-balancing
      • use a Monte-Carlo simulation to run a large spread of random picks and gain statistical insight

In short – some stats to back up what I feel to be right and feel to be likely. That is the aim.

Now I don’t propose to have loads of text, as the method is largely already covered; instead, a few more images in summary. First, to gain a sense of proportion, here are all 69 stocks grouped together (re-based to the start date) – i.e., this is their relative performance over 15 years.

There are a few things notable:

    • The 2020 correction is very clear, but the 2008/9 one is barely noticeable (and yet it was pretty sizeable at the time). We tend to worry about corrections as they happen, but look back over the past 100 years and they are barely noticeable.
    • There are several stocks that perform exceptionally well and a fair few that do not.
    • The majority sit in a not-too-wide performance range: a bit like a herd – funny, that!

Individually, it is very clear there are 4 stocks that are stunning, maybe another 12 that are pretty good, along with a few losers. If I just look at the lower end, it becomes clearer that a number of stocks perform less well over the 15 years.

Indeed, over 30% of the selected stocks have a negative annualised rate (more if I accounted for inflation). This simulation is not contrived to always give a glowing answer.

Individually, it is clear there are some stunning performers. In a wide base of selection, how probable would this be? Fairly, I would expect.

In relative terms, stocks go up and down, and each goes negative (relative to the start) by some amount – some by very large amounts. Over 15 years and two corrections, that is fair volatility, relatively speaking.


So broadly I have a large selection of stocks with a wide range of relative performance over 15 years and two market corrections – perfect … now let’s see what happens when I start randomly picking and letting rip.

The Simulation

I have a couple of indicative graphs – they change every time. Simple maths: with 69 to pick from and 18 chosen each time, how many combinations are there? (Answers on a postcard, but my laptop would take way past my lifetime to run every combination.)
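For the curious, the number of distinct 18-from-69 portfolios can be computed exactly (a short Python sketch – the point is simply that exhaustive enumeration is hopeless):

```python
from math import comb

# Number of distinct 18-stock portfolios that can be drawn from a pool of 69.
# This runs well into the quadrillions, so sampling randomly (Monte-Carlo)
# is the only practical way to explore the space.
n_portfolios = comb(69, 18)
print(f"{n_portfolios:.3e} possible combinations")
```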

This means that each run will contain some winners, some losers and some middle-of-the-road performers – it is the proportion that determines the final Monte-Carlo sum.

So first, running what I determined in Part 2: buy the dip (2% or more) and re-invest a minimum of £300. What can be seen is clear:

      • A divergence in the value of each holding
      • Some holdings becoming very large – although not tabulated, in nearly every run the largest holding was at least 20% of the portfolio, and I saw up to 38%. This is not a balanced portfolio and it just increases risk
      • Again, the 2020 correction impact is very clear
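The dip-and-threshold rule above can be sketched in a few lines. This is a minimal illustration of the idea, not my actual spreadsheet logic – the function name and parameters are my own labels:

```python
# Sketch of the re-investment rule from Part 2: accumulate dividends as
# cash and only re-invest when (a) the cash pot has reached the £300
# threshold and (b) the holding has dipped at least 2% from its previous
# price. All names here are illustrative.

DIP_PCT = 0.02        # buy only on a 2%+ dip
MIN_REINVEST = 300.0  # £300 threshold before re-investing

def reinvest_on_dip(cash, price_now, price_prev, units_held):
    """Return updated (cash, units) after applying the dip/threshold rule."""
    dipped = price_now <= price_prev * (1 - DIP_PCT)
    if cash >= MIN_REINVEST and dipped:
        units_held += cash / price_now  # spend the whole pot on the dip
        cash = 0.0
    return cash, units_held

# A 5% dip with £350 in the pot triggers a purchase:
cash, units = reinvest_on_dip(cash=350.0, price_now=95.0,
                              price_prev=100.0, units_held=10.0)
```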

So I next run a re-balance. I simply decide on a threshold at which to top-slice. In the end, as tabulated below, I ran a range and show just 15% and 25% (above the mean of the portfolio). The difference was small, which slightly surprised me.

I also varied the number of quarters between re-balances. Four (i.e. annually) works OK but can be a bit laborious, while 6 or 8 (two years) gave very coarse results. A holding that flies can do so relatively quickly – would you really want something at 15%, 20% or more of the total portfolio and not look to slice it down, irrespective of some quasi-planned re-balancing timeframe?

The maxim of ‘let the profits run’ comes to mind, but the name of the game here is ‘minimise the downside risk’, not ‘maximise the growth opportunity’.

I also found that the top-slice needed to be limited to prevent really spurious results; limiting it to 35% worked well. That is, if a holding was, say, 50% above the mean (so I might expect to halve it), I instead sold a maximum of 35%. This prevented selling too much out of a bull run, while also not letting too much disappear if there was a re-tracement.
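The capped top-slice rule can be sketched as follows – again an illustrative outline under my own naming, not the actual spreadsheet formulae:

```python
# Sketch of the capped top-slice: a holding more than `threshold` above
# the portfolio mean is trimmed back towards the mean, but never by more
# than `max_slice` (35%) of its own value. Names are illustrative.

def top_slice(values, threshold=0.25, max_slice=0.35):
    """Return (new_values, cash_raised) after capped top-slicing."""
    mean = sum(values) / len(values)
    new_values, cash = [], 0.0
    for v in values:
        if v > mean * (1 + threshold):
            trim = min(v - mean, v * max_slice)  # cap the slice at 35%
            cash += trim
            v -= trim
        new_values.append(v)
    return new_values, cash

# One holding has run well ahead of the mean (~1333), so it gets trimmed:
holdings, raised = top_slice([1000.0, 1000.0, 2000.0])
```

The cash raised would then join the dividend pot and be disbursed on dips, as described below.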


One of the things I found in re-balancing holdings that had dropped (10%, 15% or whatever) was that, firstly, it made calculations tricky and, secondly, you could end up with a lot of relatively small transactions. So the focus was on top-slicing, then adding that cash to the dividend collection and disbursing it on the dips. This is one reason the grouping is not as tight as in the last graph in Part 2. It is something to consider further, but I suspect I will simply review periodically and decide whether I want to invest into a large downward drop. It would be a lot easier to implement a more comprehensive method in a program: for all Excel’s benefits for a quick look-and-see, it is clumsy for wide, complex calculation.


Individual runs are great, but each is a single random snapshot. A useful way to get a statistical feel is to run a Monte-Carlo simulation: basically, you run a number of simulations (typically 1,000, but it could be millions) and then do some simple statistical analysis on the output.

What you invariably get is a form of ‘bell curve’ of outputs, which you can use to calculate the probability of ranges. A typical range is shown in the following graph (using balancing). The vertical scale is a count of the number of times the output fell in the range on the x-axis. Out of all the runs, it is clear the majority occur around a median, and the probability of more extreme outputs (higher or lower) drops off – a classic statistical distribution, really.

Probability does not mean ‘will occur’. It is just a measure of likelihood.
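The overall harness looks something like the sketch below. The per-run portfolio back-test is stubbed out with a random placeholder (my real runs did the full dip-buy/re-balance calculation in Excel); everything else – pick 18 of 69 at random, repeat 1,000 times, summarise – is the Monte-Carlo structure described above:

```python
import random
import statistics

# Minimal Monte-Carlo harness. `simulate_portfolio` is a stand-in: it
# returns a random final value rather than running the full 15-year
# dip-buy/re-balance back-test. Names and numbers are illustrative.

random.seed(42)  # reproducible demo

def simulate_portfolio(picks):
    """Placeholder for one full back-test of an 18-stock portfolio."""
    return sum(random.gauss(100.0, 25.0) for _ in picks)

pool = list(range(69))               # the 69 candidate holdings
outcomes = []
for _ in range(1000):                # 1,000 Monte-Carlo steps
    picks = random.sample(pool, 18)  # randomly pick 18 holdings
    outcomes.append(simulate_portfolio(picks))

median = statistics.median(outcomes)
stdev = statistics.stdev(outcomes)   # spread = volatility of outcomes
p5 = sorted(outcomes)[int(0.05 * len(outcomes))]  # ~5th percentile
```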

The Monte-Carlo simulation (I ran 1,000 steps, which took a while each time as it was in Excel) gave the following data:

What this data shows is that the Standard Deviation (a measure of the volatility of the output) is significant if no balancing is done.

A better way to state this: there is a tighter range of output possibilities with balancing. The median, though, is significantly higher if no balancing is done (as a small number of individual holdings fly and become disproportionately large relative to the portfolio).

There are some important reasons for this. Firstly, profits run; but in this simulation there is also a flaw, in that I am looking back at data to determine selection candidates. Clearly this does not include companies that failed or fell out of the FTSE350, but it does include those that rose into it. In other words, there is a built-in imbalance towards growth prospects – survivorship bias.

In reality – i.e., doing this for real – I would expect to review and change holdings at regular points, or to spread growth wider to reduce risk. The actual outcome I might reasonably expect would be higher than the balanced median, but lower than the unbalanced one. This is somewhat self-evident, as you are looking at removing underperformance and replacing it with possible performance.

The range of outputs is clearer with these graphs.

The absolute number (median) is less important here – what is important is to realise that balancing (an active act) reduces the long-term variability (i.e., it is a thinner wedge), which offers greater certainty but at the expense of possible further gain. This is entirely in line with normal risk-premium approaches to investing: take higher risk, get a possibly higher reward.

The way to think about this is that the likely output is far more predictable and falls in a tighter range (even if in absolute terms it comes out lower).

The final parts of the table show the percentile ranges, i.e., the probability of the output being at least XX%. These are important, as you can see there is a better than 95% probability of achieving a particular final sum, or an 85% one (or whatever you choose).

Probability and percentiles are not about absolutes, but about the likelihood of NOT achieving a particular outcome. The way to interpret this is really as the downside risk, not the upside possibility.
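Reading a percentile as a downside floor can be sketched with a toy set of outcomes (illustrative numbers, not my simulation output):

```python
# Reading percentiles as downside risk. The ~5th-percentile value is the
# final sum that roughly 95% of runs end up above – a "better than 95%
# probability of at least this much". Outcome values are illustrative.

outcomes = sorted([80, 90, 95, 100, 100, 105, 110, 115, 120, 150])

def percentile_floor(sorted_vals, pct):
    """Value that roughly (100 - pct)% of runs exceed."""
    idx = int(len(sorted_vals) * pct / 100)
    return sorted_vals[idx]

floor_95 = percentile_floor(outcomes, 5)  # ~95% of runs end above this
```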

This part has taken way longer to do (off and on), not least as there is a very large amount of possible data. But it was worth it for me: I feel I have a much deeper understanding of a ‘problem’ that has niggled at my mind for a while now.

So on to Part 4 and Conclusions

Note: None of the above is investment advice. It is my observation and comment. Always do your own research and make your own decisions



  • No review of any stock done .. purely based on meeting the selection criteria
  • Period of time is from 3rd Jan 2006 to 3rd Mar 2021 (essentially 15.17 years)
  • Selection criteria:
    • Select from the current FTSE350
    • Yield now 2.6%+
    • Yield 10 years ago 2.2%+ (only have direct data going back 10 years)
    • Share-price change over the years I did not consider. I ended up with 69 holdings.