🔬 The results from my recent experiment
- John J D Munn

- Apr 16, 2024
- 4 min read
Updated: Nov 16
Last week, I ran an experiment. You can see it here.
I aimed to get recommendations for professionals I could collaborate with - competent people I could recommend, buy from, or cross-promote with.
While the experiment was unsuccessful in its core objective - I didn’t get any relevant recommendations - I am still glad I ran it.
Why do I think it failed in its core objective (recommendations)? What am I doing about it?
Possible issues and discussion:
Not enough people saw it
The open rate on last week’s email was 11% lower than normal. That is a big drop. Three reasonable explanations jump out:
My Work Smart Wednesday issue two weeks ago sucked, so people didn’t want to open the next one (which was last week’s)
The titles of last week’s issue sucked, so people didn’t bother opening it
Some of the words in last week’s issue triggered spam filters, so more people never saw the email in their inbox and therefore couldn’t open it.
My bet is the spam issue, as the drop was so large. If it were one of the other issues, I would likely have seen a spike in unsubscribes alongside the drop in engagement (I didn’t). I know I included some known spam-trigger words in last week’s email, but I thought I would get away with it because I have built up credibility with my regular openers over a long period. It seems I didn’t, which is why I am avoiding those words in this email. I am monitoring future editions more closely to double-check this assumption.
If you thought the previous titles or editions sucked and that prevented you from engaging, please reply to this email to let me know!
However, the offer was still exposed to ~1,500 people. A typical conversion rate for online giveaways is roughly 25%, so I would have expected at least some recommendations. I don’t think lack of exposure was the issue.
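As a rough sanity check (the ~1,500 figure is the reach mentioned above; the response rates are purely illustrative assumptions), even rates far more pessimistic than the typical 25% should still have produced a handful of entries:

```python
# Rough sanity check: expected entries at different assumed response rates.
# The reach figure comes from the post above; the rates are illustrative only.
reach = 1500

for rate in (0.25, 0.05, 0.01):  # typical giveaway rate, pessimistic, very pessimistic
    expected = reach * rate
    print(f"{rate:.0%} response rate -> ~{expected:.0f} recommendations")

# 25% response rate -> ~375 recommendations
# 5% response rate  -> ~75 recommendations
# 1% response rate  -> ~15 recommendations
```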
Prize wasn’t big enough
The prize was relatively small; I could certainly afford to offer a bigger one. While my current CAC is much lower than the prize offered, I can afford to spend much more because my retention is so good (when people get on a call with me, they almost always convert to clients and usually want to keep working with me for a long time).
CAC (customer acquisition cost) matters when budgeting campaigns because it tells you how much you can afford to spend per new client. If even one client came out of this campaign, thanks to collaborations with the people recommended, it would pay for itself many times over.
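To make the break-even logic concrete, here is a minimal sketch with placeholder numbers - the prize cost, CAC, and client value below are hypothetical, not my actual figures:

```python
# Break-even sketch for the giveaway. All numbers are placeholders, not real figures.
prize_cost = 500              # hypothetical prize budget
typical_cac = 200             # hypothetical customer acquisition cost per client
client_lifetime_value = 5000  # hypothetical value of one long-term client

# If the campaign yields even one client via the recommended collaborators,
# compare what the prize cost against what that client is worth.
roi_multiple = client_lifetime_value / prize_cost
print(f"One client would return ~{roi_multiple:.1f}x the prize cost")
print(f"Prize vs typical CAC: {prize_cost / typical_cac:.1f}x what I normally pay per client")
```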
However, I have recently seen this work with much smaller prizes, so I feel it should have been enough. I will re-run the experiment in the near future to test both prize size and prize type (maybe money was the wrong choice!).
Prize was too big
Big companies routinely offer smaller prizes than I did, so it is possible that mine seemed too good to be true. Worth testing.
Perceived chance of winning was too low
Most people don’t try when they think they will lose. Only one person would be selected, which may have put people off trying. I think this is a strong possibility.
Timeline to apply was too short
I settled on one week to create a sense of urgency. A longer window would give people more time to think of who to recommend, but less pressure to actually follow through and send me the recommendation. I have run these in the past with much shorter deadlines and much higher success. Discarded.
People didn’t have recommendations
While it is possible that people didn’t know anyone worth recommending, it is unlikely. The vast majority of entrepreneurs have worked with someone they found useful, such as a coach, therapist, or accountant. Cause discarded.
Perceived effort or risk was too high
It may have seemed that finding suitable recommendations would take too long, or people may have worried that theirs weren’t good enough. While I hope I am not intimidating to talk to, I may need to reduce the power distance. This could also help explain the lack of replies overall. Likely.
Lack of clarity about what to do
What I wanted people to do, and the criteria for a “relevant recommendation”, could have been clearer. This is a likely culprit: people may not have felt confident making recommendations. I will test this.
What am I doing about it?
Firstly, I am working on analysing what happened and why. I want to learn from this.
Secondly, I will re-run the experiment a few more times to test my hypotheses.
Thirdly, I am already testing potential improvements. For example, I am working on reducing power distance right now by publicly admitting that not everything I do is perfect.
What can you do to help me?
Tell me - what stopped you from sending a recommendation?
Reply to this email or leave a comment below. I will be eternally grateful.

An image from my experiment
I shared this in my Work Smart Wednesday newsletter. Want the full set of related insights? You can read them here: https://worksmartwednesday.substack.com/p/work-smart-wednesday-april-17-2024
👋 Want to work together?
When you’re ready, here are 3 ways I can help you:
🔍 Clarity Call - we will discuss your situation and create a step-by-step action plan together, so you know exactly what to do next for maximum impact
👓 StartSmart - achieve a 6-figure income within 6 months, whether you only have an idea or have already started your business
🧘 Overload to Optimal - reduce your workweek to 20 hours or less within 90 days while running your 6/7-figure business


