To make valid inferences about a population from a sample, that sample must not only be representative, it must also be sufficiently large. The smaller a sample is, the more likely it is that chance effects will bias the results. For example, a fair coin is far more likely to come up heads on every toss in three tosses than in three hundred. The larger the sample, the more it will come to represent (or ‘look like’) the population from which it is drawn, and hence the more likely it is that any statistical inferences about that population will be correct. Exactly how large a sample needs to be depends very much on the particular application or question being examined. One rule of thumb is that a sample of at least thirty is necessary in order to conduct meaningful statistical analysis (this is related to a technical result called the central limit theorem). For medical trials or opinion polls, sample sizes of at least several hundred are typical, with several thousand constituting a particularly large trial. In general, sample sizes of less than one hundred should be used with caution, and any of less than ten are likely to be completely useless. While very small samples can be useful in qualitative research, or for suggesting hypotheses to test in future research, they cannot be used to draw firm conclusions about the broader population.
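The coin-tossing point above can be illustrated with a short simulation. The sketch below (an illustrative example, not drawn from the original text) computes the exact probability of tossing all heads for samples of three versus three hundred tosses, and then simulates many repeated samples to show that the proportion of heads clusters far more tightly around the true value of 0.5 as the sample size grows.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def prob_all_heads(n):
    """Exact probability that every toss of a fair coin lands heads."""
    return 0.5 ** n

# The chance of an extreme result (all heads) shrinks rapidly with sample size.
print(prob_all_heads(3))    # 0.125
print(prob_all_heads(300))  # astronomically small

def head_proportions(n, trials=10_000):
    """Simulate `trials` samples of n fair-coin tosses and return the
    proportion of heads observed in each sample."""
    return [sum(random.random() < 0.5 for _ in range(n)) / n
            for _ in range(trials)]

small = head_proportions(3)    # small samples: proportions vary wildly
large = head_proportions(300)  # large samples: proportions hug 0.5

def spread(xs):
    """Range of observed proportions across all simulated samples."""
    return max(xs) - min(xs)

print(spread(small))  # often spans the full range from 0.0 to 1.0
print(spread(large))  # a far narrower band around 0.5
```

Running this shows why small samples invite chance effects: with only three tosses, entire samples of all heads or all tails occur routinely, while with three hundred tosses the observed proportion rarely strays far from one half.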
Statistical significance and sample size: brief discussion of the importance of considering sample size in making inferences about populations
The importance of n (sample size) in statistics: a clear explanation of exactly why sample size is important
Understanding the relevance of sample size calculation: a brief introductory journal article with a medical focus