Existing Sample Size Guidelines

Research offers numerous techniques for determining sample size (Memon et al., 2020). These can be grouped into several categories, including sample-to-item ratios, population-to-sample tables, and general rules of thumb for calculating sample size.

  1. Sample-to-item ratio

Generally recommended for exploratory factor analysis, the sample-to-item ratio determines sample size from the number of items (questions) in a study. The ratio should not be less than 5-to-1 (Gorsuch, 1983; Hatcher, 1994; Suhr, 2006). For example, a study with 30 items would require 150 respondents. A stricter 20-to-1 ratio has also been suggested (Costello & Osborne, 2005), under which the same 30-item study would need 600 respondents. Studies that followed this rule include Brown and Greene (2006), Liao, So, and Lam (2015), Yeoh, Ibrahim, Oxley, Hamid, and Rashid (2016), and Forsberg and Rantala (2020), among others. Although a higher ratio is better, researchers who cannot meet this criterion can refer to Barrett and Kline (1981), who argued that the sample-to-item ratio has little bearing on factor stability. Interested researchers should also consult Gorsuch (1983), Hatcher (1994), Suhr (2006), and Costello and Osborne (2005) for further details.
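The arithmetic behind this rule is straightforward. The following minimal sketch (the function name is ours, for illustration only) reproduces the two worked examples above:

```python
def min_sample_by_items(n_items: int, ratio: int = 5) -> int:
    """Minimum respondents under a sample-to-item rule of thumb."""
    return n_items * ratio

print(min_sample_by_items(30))            # 150 at the 5:1 minimum
print(min_sample_by_items(30, ratio=20))  # 600 at the stricter 20:1 ratio
```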

  2. Sample-to-variable ratio

The sample-to-variable ratio suggests a minimum observation-to-variable ratio of 5:1, though ratios of 15:1 or 20:1 are preferred (Hair et al., 2018). This means that while a minimum of five respondents may be considered for each independent variable in the model, 15 to 20 observations per independent variable are strongly recommended. This is in line with Tabachnick and Fidell (1989), who proposed five subjects per independent variable as a “bare minimum requirement” for hierarchical or multiple regression analysis. Although the 5:1 ratio appears easy to follow, students should consider higher ratios (e.g. 15:1, 20:1) when determining sample size for their research. One reason we do not recommend the 5:1 ratio is that it leads to underpowered studies. For example, a model with five independent variables would require only 25 respondents under the 5:1 ratio. In practice, this is neither sufficient for most inferential analyses (Bartlett et al., 2001) nor convincing to examiners/reviewers about the study's chance of detecting a true effect. Furthermore, the sample-to-variable rule should be used with caution when generalisability and data representativeness are a concern; it is better suited to multiple regression and similar analyses. We recommend reading Multivariate Data Analysis by Professor Joseph F. Hair and colleagues (Hair et al., 2010, 2018) for more details on the sample-to-variable method.
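A short sketch (again with an illustrative function name of our own) makes the underpowering point concrete for a five-predictor model:

```python
def min_sample_by_predictors(n_predictors: int, ratio: int) -> int:
    """Minimum observations under an observations-to-variable rule of thumb."""
    return n_predictors * ratio

for ratio in (5, 15, 20):
    print(f"{ratio}:1 ratio, 5 predictors -> n = {min_sample_by_predictors(5, ratio)}")
# The 5:1 ratio yields only 25 cases, which is why the higher ratios are preferred.
```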

  3. Krejcie and Morgan’s table

The Krejcie and Morgan table (KMT; Krejcie & Morgan, 1970) is well known for sample size determination among behavioural and social science researchers. No calculations are required to use this table, which is applicable to any defined population. The KMT suggests that a sample of 384 is sufficient for a population of 1,000,000 or more. For this reason, 384 has been regarded as the ‘magic’ number in research and has been used in thousands of articles and theses to date. In addition, a sample must be representative of the particular population under study when using the KMT. Unfortunately, researchers often apply this method mechanically without understanding its underlying assumptions, and we urge future studies not to use the KMT thoughtlessly. The KMT should be used to determine sample size when probability sampling (e.g. simple random, systematic, stratified) is the appropriate choice. We understand that probabilistic sampling techniques are often difficult to employ due to the unavailability of a sampling frame (Memon et al., 2017), such as in tourism studies (Ryan, 2020). Therefore, those who intend to use non-probabilistic sampling techniques (e.g. purposive, snowball, quota) may consider other options to determine sample size (e.g. power analysis). A table similar to the KMT can be found in Sekaran and Bougie’s (2016) Research Methods for Business: A Skill-Building Approach.

Sahyaja and Rao (2020), Othman and Mahmood (2020), Yildiz et al. (2020), Kubota and Khan (2019), Papastathopoulos et al. (2019), Baluku et al. (2016), Collis et al. (2004), and Kotile and Martin (2000) are just a few of the many studies in which the KMT has been used to estimate sample size. To understand problems related to probability and non-probability sampling strategies, researchers should refer to Memon et al. (2017), Hulland et al. (2017), and Calder et al. (1981). We also encourage interested researchers to read the original paper by Krejcie and Morgan (1970) before using the KMT in their research.
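The table itself is generated by the formula given in Krejcie and Morgan (1970), with the conventional defaults of χ² = 3.841 (one degree of freedom at the .05 level), P = 0.5, and d = 0.05. A minimal sketch reproducing the famous 384:

```python
import math

def kmt_sample_size(population: int, chi_sq: float = 3.841,
                    p: float = 0.5, d: float = 0.05) -> int:
    """Krejcie & Morgan (1970): s = X^2*N*P*(1-P) / (d^2*(N-1) + X^2*P*(1-P))."""
    num = chi_sq * population * p * (1 - p)
    den = d ** 2 * (population - 1) + chi_sq * p * (1 - p)
    return math.ceil(num / den)

print(kmt_sample_size(1_000_000))  # 384, the 'magic' number noted above
```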

  4. Online calculators

Similar to the KMT (Krejcie & Morgan, 1970), various online calculators are available to determine sample size. The Raosoft sample size calculator (Raosoft, 2010) and Calculator.net (Calculator.net, 2015) are among the better known ones. Given their ease of use, these calculators have been frequently applied in social science research (see Amzat et al., 2017; Cruz et al., 2014; Fernandes et al., 2014; Mazanai & Fatoki, 2011; Nakku et al., 2020; N. Othman & Nasrudin, 2016). Online calculators typically require inputs for a study’s confidence level, margin of error, and population size to calculate the minimum sample size needed. In our experience, the KMT, Raosoft, and Calculator.net are undoubtedly useful for determining sample size. However, researchers should always be mindful of their assumptions pertaining to probability sampling and should make informed decisions about the use of these tools rather than treating them as template solutions for sample size calculation.
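Exact implementations vary by tool, but calculators of this kind generally follow Cochran-style logic: convert the confidence level to a z-score, compute an infinite-population sample size, then apply a finite population correction. A sketch under that assumption (not the documented algorithm of any particular calculator):

```python
from math import ceil
from statistics import NormalDist

def calculator_style_n(confidence: float, margin: float, population: int,
                       p: float = 0.5) -> int:
    """Cochran-style sample size with finite population correction."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-tailed z-score
    n0 = (z ** 2) * p * (1 - p) / margin ** 2           # infinite-population size
    return ceil(n0 / (1 + (n0 - 1) / population))       # finite population correction

print(calculator_style_n(0.95, 0.05, 10_000))  # ~370 for a population of 10,000
```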

  5. A-priori sample size for structural equation models

The A-priori Sample Size Calculator for Structural Equation Models (Soper, 2020) is a popular application among users of second-generation multivariate data analysis techniques (e.g. CB-SEM, PLS-SEM). It is a ‘mini’ online power analysis application that determines the sample size needed for research using the structural equation modelling (SEM) technique. It requires inputs for the number of observed and latent variables in the model, the anticipated effect size, and the desired probability and statistical power levels. The application generates the minimum sample size needed to detect the specified effect given the structural complexity of the model. Because it determines a study-specific minimum sample size (based on the number of latent and observed variables), it is deemed superior to other online sample size calculators. It can be considered for any research design, regardless of whether the research employs a probability or non-probability sampling technique for data collection. Valaei and Jiroudi (2016), Balaji and Roy (2017), Dedeoglu et al. (2018), Yadav et al. (2019), and Kuvaas et al. (2020) are among the studies that have employed a-priori sample size calculation for their structural equation models.

  6. Roscoe’s (1975) guidelines

Roscoe’s (1975) set of guidelines for determining sample size has been a common choice over the last several decades. Roscoe suggested that a sample size greater than 30 and less than 500 is suitable for most behavioural studies, while a sample size larger than 500 may lead to a Type II error (Sekaran & Bougie, 2016). Roscoe also posited that for comparative analysis, if the data set needs to be broken into several subgroups (e.g. male/female, rural/urban, local/international), 30 respondents should be considered the minimum for each group. The logic behind the rule of 30 rests on the Central Limit Theorem (CLT), which holds that the distribution of sample means approaches a normal distribution as the sample size increases. Although a sample size of 30 or more is generally considered sufficient for the CLT to hold (Chang et al., 2006), we still urge researchers to apply this assumption with care. For multivariate data analysis (e.g. regression analysis), the sample size should be at least 10 times the number of variables (Roscoe, 1975). Sekaran and Bougie (2016) and Kumar et al. (2013) discussed not only the guidelines prescribed by Roscoe (1975) in detail, but also various procedural and statistical aspects of sample size with relevant examples. Recent studies that used Roscoe’s guidelines to determine sample size include Lin and Chen (2006), Suki and Suki (2017), Seman et al. (2019), and Sultana (2020).
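The CLT intuition behind the rule of 30 is easy to check by simulation. The sketch below (our own illustration, not from Roscoe) draws repeated samples from a strongly skewed population and shows the skewness of the sample means shrinking towards zero, i.e. towards normality, as n grows:

```python
import numpy as np

rng = np.random.default_rng(2024)
population = rng.exponential(scale=2.0, size=100_000)  # strongly skewed population

for n in (5, 30, 200):
    # 10,000 samples of size n; compute the mean of each
    means = rng.choice(population, size=(10_000, n)).mean(axis=1)
    skew = ((means - means.mean()) ** 3).mean() / means.std() ** 3
    print(f"n = {n:>3}: skewness of sample means = {skew:.2f}")
```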

  7. Green’s (1991) procedures

Green (1991) recommended several procedures to decide how many respondents are necessary for a study. He proposed N ≥ 50 + 8m (where m is the number of predictors in the model) to determine the sample size for testing the coefficient of determination (R²). For example, a model with seven independent variables needs 50 + (8)(7), that is, 106 cases for a regression analysis. For testing individual predictors (β), N ≥ 104 + m was proposed. Thus, the minimum sample size would be 105 for simple regression and more (depending on the number of independent variables) for multiple regression. Under this equation, 111 (i.e. 104 + 7) cases are required if a model has seven independent variables. Fidell and Tabachnick (2014) noted that “these rules of thumb assume a medium-size relationship between the independent variables and the dependent variable, α = .05 and β = .20” (p. 164). Those interested in both R² and β should calculate N both ways and choose the larger sample size. Green (1991) believed that “greater accuracy and flexibility can be gained beyond these rules of thumb by researchers conducting power analyses” (p. 164). For further explanation, Green (1991) and Fidell and Tabachnick (2014) are good references. Studies that have determined sample size using Green’s (1991) procedures include Coiro (2010), Brunetto et al. (2012), and Fiorito et al. (2007).
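Both formulas are simple enough to script. The sketch below (function name ours) computes each and takes the larger, as suggested above:

```python
def green_min_n(m: int) -> int:
    """Green (1991): N >= 50 + 8m for testing R^2; N >= 104 + m for predictors."""
    n_r2 = 50 + 8 * m    # overall model (R^2) test
    n_beta = 104 + m     # individual predictor (beta) tests
    return max(n_r2, n_beta)

print(green_min_n(7))  # max(106, 111) = 111 when both R^2 and beta matter
```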

  8. Sample size guidelines for PLS-SEM

The 10-times rule: Barclay et al. (1995) proposed the 10-times rule, which was later widely adopted in the PLS-SEM literature. The rule recommends that the minimum “sample size should be equal to the larger of (1) 10 times the largest number of formative indicators used to measure one construct or (2) 10 times the largest number of structural paths directed at a particular latent construct in the structural model” (Hair et al., 2017, p. 24). Despite its wide acceptance, doubts have been raised about this rule of thumb, and later studies heavily criticised it, suggesting it is not a valid criterion for determining sample size for PLS-SEM (Hair et al., 2017; Marcoulides & Chin, 2013; Ringle et al., 2018). Peng and Lai (2012) claimed that “the 10-times rule of thumb for determining sample size adequacy in PLS analyses only applies when certain conditions, such as strong effect sizes and high reliability of measurement items, are met” (p. 469). Studies that have used the 10-times rule include Wasko and Faraj (2005) and Raaij and Schepers (2008), among others. We recommend that interested researchers refer to Peng and Lai (2012) and Hair et al. (2017) for further details.
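The rule as quoted above reduces to one line of code (function name ours, example values illustrative):

```python
def ten_times_rule(max_formative_indicators: int, max_paths_to_construct: int) -> int:
    """10 x the larger of the two model characteristics (Barclay et al., 1995)."""
    return 10 * max(max_formative_indicators, max_paths_to_construct)

print(ten_times_rule(4, 6))  # a construct with 6 incoming paths -> minimum n = 60
```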

Inverse square root and gamma-exponential methods: As alternatives to the 10-times rule, Kock and Hadaya (2018) proposed the inverse square root and gamma-exponential methods as two new approaches to determine the minimum sample size required for PLS-SEM path models. In their Monte Carlo simulations, Kock and Hadaya found that the inverse square root method slightly overestimates the minimum required sample size, whereas the gamma-exponential method provides a more accurate estimate. If researchers do not know in advance the value of the path coefficient with the minimum absolute magnitude, the minimum sample size required would be 160 under the inverse square root method; under the gamma-exponential method, it would be 146. The inverse square root method is recommended for its ease of use, as it rests on a simple equation. The gamma-exponential method, in contrast, is much more complex and relies on a computer programme. Sample studies that have used the inverse square root and gamma-exponential methods include Cheah et al. (2019), Gursoy et al. (2019), and Onubi et al. (2020). For more details on the use and technical aspects of both methods, we recommend that researchers read Kock and Hadaya (2018).
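The “simple equation” of the inverse square root method is n_min > (2.486 / |p_min|)², where 2.486 combines the conventional 5% significance level and 80% power and p_min is the path coefficient with the minimum absolute magnitude (Kock & Hadaya, 2018). A sketch; the p_min of 0.197 below is chosen only to reproduce the 160 figure mentioned above:

```python
import math

def inverse_sqrt_n(p_min: float, z_combined: float = 2.486) -> int:
    """Kock & Hadaya (2018): n_min > (z / |p_min|)^2 at 5% significance, 80% power."""
    return math.ceil((z_combined / abs(p_min)) ** 2)

print(inverse_sqrt_n(0.197))  # 160, the default when p_min is unknown
```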

Power tables by Hair et al. (2017): Hair et al. (2017) provided power tables to determine appropriate sample sizes for various measurement and structural model characteristics. These tables show the minimum sample sizes required to obtain minimum R² values of 0.10, 0.25, 0.50, and 0.75 for any endogenous construct in the structural model at significance levels of 1%, 5%, and 10% with a statistical power of 80%, taking into account the complexity of the PLS path model (e.g. the maximum number of arrows pointing at a construct). For further illustration of the power tables, researchers should refer to Exhibit 1.7 in Hair et al. (2017).

  9. Kline’s (2005, 2016) sample size guidelines for SEM

Kline (2005) offered sample size guidelines for analysing structural equation models, suggesting that a sample below 100 is considered small, a sample of 100 to 200 is medium, and a sample over 200 is large. Nevertheless, Kline (2016) recognised that a sample of 200 may be too small for a complex model with non-normal distributions, particularly when estimation methods other than maximum likelihood are used. Moreover, a sample below 100 cases is not recommended for any type of SEM technique unless the model is very simple (Kline, 2016). Model complexity should also be considered when estimating sample size: a complex model with more parameters requires a larger sample than a parsimonious model (Kline, 2005). Kline argued that SEM is a large-sample technique and that certain estimates (e.g. standard errors for latent construct effects) may be incorrect when the sample size is small. We recommend that SEM users read Kline (2005) and Kline (2016) to understand sample size requirements before performing SEM.

  10. Sample size for multilevel models

Kreft (1996) recommended the 30/30 rule for multilevel models: a minimum of 30 groups with 30 individuals per group for a multilevel study. Later, Hox (2010) modified Kreft’s 30/30 rule into a more conservative 50/20 rule, such that 50 groups with 20 individuals per group should be the minimum sample size for cross-level interactions. Hox further argued that researchers interested in random elements (variances, covariances, and their standard errors) should follow a 100/10 rule, i.e. 100 groups with a minimum of 10 individuals per group. More recently, scholars have recommended the use of power analysis for sample size estimation in multilevel research (see Hox & McNeish, 2020; Scherbaum & Ferreter, 2008). Statistical power can be maximised by calculating the appropriate sample size for each level. Power analysis can be performed through MLPowSim, a free computer programme designed for power estimation in multilevel models, available at https://seis.bristol.ac.uk/~frwjb/esrc.html. Hox and McNeish (2020) is a good reference for researchers interested in multilevel research.
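One standard quantity in such power considerations is the Kish design effect, DEFF = 1 + (m − 1) × ICC, which discounts the total sample for within-group dependence; this is a general survey-statistics result rather than a rule from the sources above, and the ICC value below is purely illustrative:

```python
def effective_sample_size(n_groups: int, group_size: int, icc: float) -> float:
    """Total N discounted by the design effect DEFF = 1 + (m - 1) * ICC."""
    deff = 1 + (group_size - 1) * icc
    return (n_groups * group_size) / deff

# The 50/20 rule with a modest ICC of 0.10:
print(round(effective_sample_size(50, 20, 0.10)))  # 1,000 cases behave like ~345
```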

  11. Other rules of thumb

Aside from the rules of thumb discussed above, there are several other guidelines for determining sample size. For example, Harris (1975) recommended a minimum sample size of N ≥ 50 + m (where m is the number of predictors). Cochran (1977) suggested that, when determining sample size, researchers should identify the margin of error for the items considered most important in the survey and estimate the sample size separately for each of these items. This yields a range of sample sizes: smaller ones for scaled/continuous variables and larger ones for categorical/dichotomous variables. Interested researchers can refer to Bartlett et al. (2001) and Cochran (1977) to learn more about Cochran’s sample size estimation.

Nunnally (1978) later proposed guidelines for researchers aiming to cross-validate the results of a regression analysis. In particular, Nunnally suggested that if one wants to select the best variables from as many as 10 possible ones, there should be between 400 and 500 respondents. Another rule was put forth by Maxwell (2000), who provided a table with minimum case-to-predictor ratios ranging from 70:1 to 119:1. In a similar fashion, Bartlett et al. (2001) developed a table that estimates sample sizes for both categorical and continuous data sets. In addition, Jackson (2003) recommended that SEM users calculate sample size using the N:q ratio (where N is the number of cases and q is the number of model parameters requiring statistical estimates).
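The N:q ratio translates directly into a minimum sample once the number of freely estimated parameters is known. A sketch, assuming the commonly cited 20:1 (ideal) and 10:1 (minimum) ratios associated with Jackson (2003):

```python
def n_from_q(q: int, ratio: int = 20) -> int:
    """N:q heuristic: 'ratio' cases per freely estimated model parameter."""
    return q * ratio

print(n_from_q(30))            # 600 cases at the ideal 20:1 ratio
print(n_from_q(30, ratio=10))  # 300 cases at the 10:1 minimum
```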
