
Learning From Gamblers

As coaches, we love information and we love data. More is better, and the more we have the more confident we are in our decision-making processes. From guiding the training process to predicting race outcomes to athlete recruiting, we’ll take anything we get.

Coaches envy those with multi-camera set-ups capable of documenting every biomechanical deviation. They envy those with access to advanced physiological monitoring that can quickly assess any physiological trait. Clearly, access to this information provides a competitive advantage.

What if it’s all wrong?

Access to information and data certainly has an impact on confidence and competence. The problem is that the impact might not be what you would expect, and the implications matter. A lot.


Paul Slovic, a psychologist focused on decision making, decided to experiment with some experienced horse-race handicappers. When given no information about the horses that were racing, these individuals were unable to predict winners with any accuracy beyond that of chance (~10%).

These individuals were then allowed access to five pieces of information of their choice (e.g., the pedigree of the horse, the weight of the horse, the performance history of the horse). With this information in hand, accuracy improved to approximately 17%. While this still might not seem like great performance, it is significantly better than chance and indicates that improvement is possible when experts are provided with even a small amount of information.

The handicappers were then provided with access to 10, 20, and 40 informational items of their choice, and asked to predict race outcomes.

The results?

ZERO change in accuracy. It remained at 17%. With eight times as much information, these experts were unable to provide any increase in predictive performance.

What does it mean for coaches?

More information is not better. Coaches love information. From questionnaires to biomechanical testing to body composition scores to physiological values and on, if coaches can get it, they will. It provides a sense of control and it provides more certainty. Unfortunately, it doesn’t always provide effectiveness. As we saw in Slovic’s study, outcomes are not necessarily better.

For years, coaches have been using two simple strategies to guide performance, often with great success. They use a stopwatch and they observe. These two information sources will provide most of what coaches need to know. The rest can become a distraction.

Figure out what information sources do matter for YOU. Remember that the handicappers were able to choose which informational items they received. They specifically requested information that they believed was of value, and each chose differently. For some, certain facts were important. For others, they weren’t.

An interesting conclusion is that the handicappers intuitively knew what information was useful to them. The five facts they chose provided the entire improvement from 10% to 17%. They chose wisely. Coaches should also trust their intuition, while remaining open to using different information streams in the future.

Beyond timing repetitions and watching practice, there are other aspects that can be of value. For me personally, knowing simple performance metrics (times, but also stroke rates, stroke counts, and kick counts), along with the answer to a simple question (“How are you doing?”), often provides the information necessary to guide the training process.

Of course, when significant problems arise, more or different sources of information may be required, as is the expertise of other individuals. In the case of injury, illness, the impact of life stressors, etc., the problem is no longer simply a training issue, and different sources of information become relevant. If someone falls and hurts their wrist, an x-ray may be warranted. At the same time, x-rays don’t need to be taken every day ‘just in case’. Context matters.

Yet for everyday management of performance, simple is often enough. While lactate measures, sleep scores, wellness questionnaires, heart rate variability, etc., can be useful in some situations, as we’ll see below, not only do they not necessarily provide effective information, they can become problematic in their own way.


While the insights about the value of increasing amounts of information are valuable, they are not the only conclusions to be gained from this examination. While Slovic measured performance as access to information increased, he also asked a very simple question: “How confident are you in your predictions?”

As access to information increased, confidence increased linearly. Despite being no more accurate regardless of the amount of information available, handicappers became much more confident in their ability to predict performance.

This is a problem.

  • The more confident we are, the less vigilantly we look for information that we are wrong.

  • The more confident we are, the more we will ignore information that clearly indicates we are wrong.

  • The more confident we are, the more boldly we act.

All of the above is true despite the reality that we are clearly no less likely to be wrong!

In the swimming context, the biggest issue is that more data creates the illusion of certainty. This can lead us to act more boldly, and to resist departing from our chosen path, even when other indicators signal that it is the wrong path. If we were less confident, we might be willing to change. However, our confidence prevents us from doing so.

If we have a system of training that we ‘know’ works, often validated by pseudo-scientific explanations, we’re more likely to ignore obvious signs that the training process is not moving forward. These signs might include the swimmer in question continuing to swim slower and slower!

If we have a lot of biomechanical data, and a strong conception of how swimmers should be swimming, we’re more likely to ignore the individual differences a swimmer may present, as well as the reasons that the technical model we’re using doesn’t apply. If the potential changes are not a good fit, we may ignore the warning signs and persist with the planned change, confident in our numbers.

We may have the results of daily questionnaires that indicate a swimmer is healthy, happy, and well rested. The data prove it! Yet the swimmer is clearly miserable, tired, and swimming slowly. With the objective information in hand, we tend to ignore the subjective information that is just as telling. In this specific case, if we only had our observations, we might be willing to change. Because we have NUMBERS, we resist going against the ‘facts’.

For me, the most valuable insight I gained from these observations is that I need to remain vigilant in my observations, and resistant to coming to strong, final conclusions about what is happening. Instead of looking for more evidence to support my position, I need to look for information that contradicts my position.

We all overestimate our expertise, particularly when we have access to a lot of information.

What does acting boldly look like?

Aggressively increasing training loads. The more we ‘know’ about good training and the more we ‘know’ about a swimmer’s abilities, the more likely we are to confidently assign training loads. This is fine until it is obvious that training is not going well, yet we refuse to intervene because we ‘know’ that we made the right decisions initially. If we’re a little less certain, we’ll be more conservative initially and be willing to change more quickly as required.

Aggressively changing technical skills. If we have a technical model that we are certain is accurate, and we have concrete data that indicates a given swimmer doesn’t adhere to that model, we’re much more likely to make significant changes. While this may be appropriate in many cases, it’s possible that the reason the swimmer does not conform to the model is because they are not a ‘model’ swimmer. As discussed elsewhere, Janet Evans is a perfect example. The more confident we are, the less likely we are to recognize the individual differences that explain not only why someone is not conforming to our model, but why they will never be successful if they are forced to do so.

Aggressively changing the type of training loads. With improved access to physiological testing, as well as improved training theory, we can become more confident in determining how swimmers should train. We can better determine the right training that should be performed. This can lead to making massive changes in the training plan. Of course, it’s possible that the testing was flawed for some reason, the test results can be explained by unrelated or unknown factors, or the individual in question simply doesn’t conform to the modeled theory. Perhaps more subtle changes would be more effective.

Keeping it Simple

Here are some takeaways that summarize the practical implications:

  • More information does not necessarily improve outcomes.

  • More information does improve confidence.

  • This misplaced confidence can lead to taking larger risks.

  • This misplaced confidence can lead to an inability to observe disconfirming information.

  • This misplaced confidence can lead to an unwillingness to change when that information is recognized.

As a solution, we simply need to be suspicious of the usefulness of more information and data, beyond what DOES need to be known. Clearly knowing how fast swimmers are swimming is important. I’m not sure knowing their lactate levels after every single lap is going to aid the process.

Further, we need to remain humble that whatever information we do have does not guarantee an outcome, and we need to be vigilant in observing the consequences of our decisions so that we can course correct as we navigate the training process. Even when the data all point in the right direction, we need to remain agnostic as to whether we are on the right track, carefully considering each decision.

In short, we need to counter the misplaced confidence that can come from access to more and more data. The point is we don’t KNOW, no matter how much information we have. There is always uncertainty. As such, it’s prudent to act with caution, constantly re-evaluating our course of action.


More information provides us with comfort and it provides us with certainty. Coaching is an inherently uncertain process, as there are many variables affecting performance, both known and unknown. More information helps us feel more confident about our course of action, and it helps us feel a sense of control.

This is particularly problematic in our current environment, where ever-improving technology is enhancing our ability to quantify everything. We can measure more, we can track more, and we can access the certainty of numbers. While this is perceived as a universal good, as we’ve seen, it is not without its potential problems.

The idea is not necessarily that more information is bad. It doesn’t actually hurt performance. The danger is in the confidence we gain from information. We may tend to act more boldly, and we may be more resistant to departing from our chosen path, even when other sources of information demonstrate we clearly should. We’re more likely to take larger risks when we are more confident in the course of our actions.

With access to greater information, it’s imperative that we combat that access with greater humility to ensure that we do not become inappropriately overconfident as a result. If we remain humble and curious, we can make decisions that are less likely to have significantly negative consequences.

In the long-term, it results in faster swimming.

For more, see:

Slovic, Paul. “Behavioral Problems of Adhering to a Decision Policy.” Paper presented at the Institute for Quantitative Research in Finance, May 1, 1973.

