**Sampling Methods & Strategies 101**

**Everything you need to know (including examples)**

By: Derek Jansen (MBA) | Expert Reviewed By: Kerryn Warren (PhD) | January 2023

If you’re new to research, sooner or later you’re bound to wander into the intimidating world of **sampling methods** and strategies. If you find yourself on this page, chances are you’re feeling a little overwhelmed or confused. Fear not – in this post we’ll unpack sampling in **straightforward language**, along with loads of **examples**.

**What (exactly) is sampling?**

At the simplest level, sampling (within a research context) is the process of **selecting a subset of participants from a larger group**. For example, if your research involved assessing US consumers’ perceptions about a particular brand of laundry detergent, you wouldn’t be able to collect data from **every single person** that uses laundry detergent (good luck with that!) – but you could potentially collect data from a **smaller subset** of this group.

In technical terms, the larger group is referred to as the **population**, and the subset (the group you’ll actually engage with in your research) is called the **sample**. Put another way, you can look at the population as a full cake and the sample as a single slice of that cake. In an ideal world, you’d want your sample to be perfectly **representative** of the population, as that would allow you to **generalise** your findings to the entire population. In other words, you’d want to cut a perfect cross-sectional slice of cake, such that the slice reflects every layer of the cake in perfect proportion.

Achieving a truly representative sample is, unfortunately, a little trickier than slicing a cake, as there are many **practical challenges and obstacles** to achieving this in a real-world setting. Thankfully though, you **don’t always need** to have a perfectly representative sample – it all depends on the specific research aims of each study – so don’t stress yourself out about that just yet!

With the concept of sampling broadly defined, let’s look at the different **approaches to sampling** to get a better understanding of what it all looks like in practice.

**The two overarching sampling approaches**

At the highest level, there are two approaches to sampling: **probability sampling** and **non-probability sampling**. Within each of these, there are a variety of **sampling methods**, which we’ll explore a little later.

**Probability sampling** involves selecting participants (or any unit of interest) on a statistically **random** basis, which is why it’s also called “random sampling”. In other words, the selection of each individual participant is based on a **pre-determined process** (not the discretion of the researcher). As a result, this approach achieves a random sample.

Probability-based sampling methods are most commonly used in quantitative research, especially when it’s important to achieve a **representative sample** that allows the researcher to **generalise** their findings.

**Non-probability sampling**, on the other hand, refers to sampling methods in which the selection of participants is **not statistically random**. In other words, the selection of individual participants is based on the **discretion and judgment** of the researcher, rather than on a pre-determined process.

Non-probability sampling methods are commonly used in qualitative research, where the **richness** and **depth** of the data are more important than the generalisability of the findings.

If that all sounds a little too conceptual and fluffy, don’t worry. Let’s take a look at some actual **sampling methods** to make it more tangible.

**Probability-based sampling methods**

First, we’ll look at four common probability-based (random) sampling methods:

- Simple random sampling
- Stratified random sampling
- Cluster sampling
- Systematic sampling

Importantly, this is **not a comprehensive list** of all the probability sampling methods – these are just four of the most common ones. So, if you’re interested in adopting a probability-based sampling approach, be sure to explore all the options.

**Simple random sampling**

Simple random sampling involves selecting participants in a **completely random fashion**, where each participant has an equal chance of being selected. Basically, this sampling method is the equivalent of **pulling names out of a hat**, except that you can do it digitally. For example, if you had a list of 500 people, you could use a random number generator to draw a list of 50 numbers (each number, reflecting a participant) and then use that dataset as your sample.

Thanks to its simplicity, simple random sampling is **easy to implement**, and as a consequence, is typically quite **cheap and efficient**. Given that the selection process is completely random, the results can be generalised fairly reliably. However, this also means the sample can be **dominated by larger subgroups** within the population, which can result in minority subgroups having little representation in the results – if any at all. To address this, one needs to take a slightly different approach, which we’ll look at next.

**Stratified random sampling**

Stratified random sampling is similar to simple random sampling, but it kicks things up a notch. As the name suggests, stratified sampling involves **selecting participants randomly**, but from within certain **pre-defined subgroups** (i.e., strata) that **share a common trait**. For example, you might divide the population into strata based on gender, ethnicity, age range or level of education, and then select randomly from each group.

The benefit of this sampling method is that it gives you **more control** over the impact of large subgroups (strata) within the population. For example, if a population comprises 80% males and 20% females, you may want to “balance” this skew out by selecting a random sample from an equal number of males and females. This would, of course, reduce the representativeness of the sample, but it would allow you to identify differences between subgroups. So, depending on your research aims, the stratified approach could work well.
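The 80/20 balancing idea above can be sketched as follows. This is a hypothetical example, assuming a population skewed 80% male / 20% female, where we draw an equal random sample from each stratum to “balance out” the skew.

```python
import random

random.seed(0)

# Hypothetical population: 800 males and 200 females
population = (
    [{"id": i, "gender": "male"} for i in range(800)]
    + [{"id": i, "gender": "female"} for i in range(800, 1000)]
)

def stratified_sample(pop, strata_key, per_stratum):
    """Group the population into strata, then draw randomly from each stratum."""
    strata = {}
    for person in pop:
        strata.setdefault(person[strata_key], []).append(person)
    sample = []
    for group in strata.values():
        sample.extend(random.sample(group, per_stratum))
    return sample

# 25 males + 25 females, despite the 80/20 skew in the population
balanced = stratified_sample(population, "gender", 25)
```

Note that drawing equal numbers per stratum sacrifices representativeness (as the post explains) in exchange for the ability to compare subgroups on an equal footing.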

**Cluster sampling**

Next on the list is cluster sampling. As the name suggests, this sampling method involves sampling from **naturally occurring, mutually exclusive clusters** within a population – for example, area codes within a city or cities within a country. Once the clusters are defined, a set of **clusters are randomly selected** and then a set of **participants are randomly selected** from each cluster.

Now, you’re probably wondering, “how is cluster sampling different from stratified random sampling?”. Well, let’s look at the previous example where each cluster reflects an area code in a given city.

With cluster sampling, you would collect data from clusters of participants in a **handful of area codes** (let’s say 5 neighbourhoods). Conversely, with stratified random sampling, you would need to collect data from **all over the city** (i.e., many more neighbourhoods). You’d still achieve the **same sample size** either way (let’s say 200 people, for example), but with stratified sampling, you’d need to do a lot more running around, as participants would be scattered across a vast geographic area. As a result, cluster sampling is often the more **practical and economical** option.

If that all sounds a little mind-bending, you can use the following general rule of thumb. If a population is relatively **homogeneous**, cluster sampling will often be adequate. Conversely, if a population is quite **heterogeneous** (i.e., diverse), stratified sampling will generally be more appropriate.

**Systematic sampling**

The last probability sampling method we’ll look at is systematic sampling. This method simply involves selecting participants **at a set interval**, starting from a **random point**.

For example, if you have a list of students that reflects the population of a university, you could systematically sample that population by selecting participants at an **interval of 8**. In other words, you would randomly select a starting point – let’s say student number 40 – followed by student 48, 56, 64, etc.

What’s important with systematic sampling is that the population list you select from **needs to be randomly ordered**. If there are underlying patterns in the list (for example, if the list is ordered by gender, IQ, age, etc.), this will result in a non-random sample, which would defeat the purpose of adopting this sampling method. Of course, you could safeguard against this by “shuffling” your population list using a random number generator or similar tool.
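The interval-based selection described above, including the “shuffle to remove hidden patterns” safeguard, can be sketched as follows. This assumes a hypothetical list of 1,000 students and the interval of 8 used in the example.

```python
import random

random.seed(7)

# Hypothetical population list of 1,000 students
students = [f"student_{i}" for i in range(1, 1001)]

# Safeguard: shuffle the list in case it has an underlying order
# (e.g., sorted by age, grade or gender)
random.shuffle(students)

interval = 8
start = random.randrange(interval)  # random starting point within the first interval
sample = students[start::interval]  # then every 8th student after that

print(len(sample))  # 125 participants
```

With a list of 1,000 and an interval of 8, the sample size is fixed at 125 regardless of which starting point is drawn.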

**Non-probability-based sampling methods**

Right, now that we’ve looked at a few probability-based sampling methods, let’s look at three **non-probability methods**:

- Purposive sampling
- Convenience sampling
- Snowball sampling

Again, this is **not an exhaustive list** of all possible sampling methods, so be sure to explore further if you’re interested in adopting a non-probability sampling approach.

**Purposive sampling**

First up, we’ve got purposive sampling – also known as **judgment**, **selective** or **subjective** sampling. Again, the name provides some clues, as this method involves the researcher selecting participants using his or her own **judgement**, based on the **purpose** of the study (i.e., the research aims).

For example, suppose your research aims were to understand the perceptions of hyper-loyal customers of a particular retail store. In that case, you could use your judgement to engage with **frequent** shoppers, as well as **rare or occasional** shoppers, to understand what factors drive the two behavioural **extremes**.

Purposive sampling is often used in studies where the aim is to gather information from **a small population** (especially rare or hard-to-find populations), as it allows the researcher to target specific individuals who have **unique knowledge or experience**. Naturally, this sampling method is quite prone to researcher bias and judgement error, and it’s unlikely to produce generalisable results, so it’s best suited to studies where the aim is to go **deep** rather than **broad**.

**Convenience sampling**

Next up, we have convenience sampling. As the name suggests, with this method, participants are selected based on their **availability** or **accessibility**. In other words, the sample is selected based on how **convenient** it is for the researcher to access it, as opposed to using a defined and objective process.

Naturally, convenience sampling provides a **quick and easy** way to gather data, as the sample is selected based on the individuals who are readily available or willing to participate. This makes it an attractive option if you’re particularly **tight on resources** and/or time. However, as you’d expect, this sampling method is unlikely to produce a representative sample and will of course be vulnerable to **researcher bias**, so it’s important to approach it with caution.

**Snowball sampling**

Last but not least, we have the snowball sampling method. This method relies on **referrals from initial participants** to recruit additional participants. In other words, the initial subjects form the first (small) snowball and each additional subject recruited through referral is added to the snowball, making it **larger as it rolls along**.

Snowball sampling is often used in research contexts where it’s **difficult to identify and access** a particular population. For example, people with a rare medical condition or members of an exclusive group. It can also be useful in cases where the research topic is **sensitive or taboo** and people are unlikely to open up unless they’re referred by someone they trust.

Simply put, snowball sampling is ideal for research that involves reaching **hard-to-access populations**. But, keep in mind that, once again, it’s a sampling method that’s highly prone to **researcher bias** and is unlikely to produce a representative sample. So, make sure that it aligns with your research aims and questions before adopting this method.

**How to choose a sampling method**

Now that we’ve looked at a few popular sampling methods (both probability and non-probability based), the obvious question is, “**how do I choose** the right sampling method for my study?”. When selecting a sampling method for your research project, you’ll need to consider two important factors: your **research aims** and your **resources**.

As with all research design and methodology choices, your sampling approach needs to be guided by and aligned with your **research aims, objectives and research questions** – in other words, your golden thread. Specifically, you need to consider whether your research aims are primarily concerned with producing **generalisable findings** (in which case, you’ll likely opt for a probability-based sampling method) or with achieving **rich, deep insights** (in which case, a non-probability-based approach could be more practical). Typically, quantitative studies lean toward the former, while qualitative studies aim for the latter, so be sure to consider your broader methodology as well.

The second factor you need to consider is your **resources** and, more generally, the **practical constraints** at play. If, for example, you have easy, free access to a large sample at your workplace or university and a healthy budget to help you attract participants, that will open up **multiple options** in terms of sampling methods. Conversely, if you’re cash-strapped, short on time and don’t have unfettered access to your population of interest, you may be restricted to convenience or referral-based methods.

In short, **be ready for trade-offs** – you won’t always be able to utilise the “perfect” sampling method for your study, and that’s okay. Much like all the other methodological choices you’ll make as part of your study, you’ll often **need to compromise** and accept practical trade-offs when it comes to sampling. Don’t let this get you down though – as long as your sampling choice is well explained and justified, and the limitations of your approach are clearly articulated, you’ll be on the right track.

**Let’s recap…**

In this post, we’ve covered the basics of sampling within the context of a typical research project.

- Sampling refers to the process of defining a **subgroup** (sample) from the **larger group** of interest (population).
- The two overarching approaches to sampling are **probability sampling** (random) and **non-probability sampling**.
- Common probability-based sampling methods include **simple** random sampling, **stratified** random sampling, **cluster** sampling and **systematic** sampling.
- Common non-probability-based sampling methods include **purposive** sampling, **convenience** sampling and **snowball** sampling.
- When choosing a sampling method, you need to consider your **research aims**, objectives and questions, as well as your **resources and other practical constraints**.

If you’d like to see an example of a sampling strategy in action, be sure to check out our research methodology chapter sample.

Last but not least, if you need **hands-on help** with your sampling (or any other aspect of your research), take a look at our 1-on-1 coaching service, where we guide you through each step of the research process, at your own pace.

**Psst… there’s more (for free)**

This post is part of our dissertation mini-course, which covers **everything you need** to get started with your dissertation, thesis or research project.