Open Journal of Statistics
Vol.08 No.06(2018), Article ID:89141,16 pages
10.4236/ojs.2018.86061

Profile Likelihood Tests for Common Risk Ratios in Meta-Analysis Studies

Chukiat Viwatwongkasem1*, Khanokporn Donjdee2, Tantanut Poodphraw3

1Department of Biostatistics, Faculty of Public Health, Mahidol University, Bangkok, Thailand

2National Institute for Child and Family Development, Mahidol University, Salaya, Nakhon Pathom, Thailand

3Mueang Ranong District Public Health Office, Ranong, Thailand

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: November 6, 2018; Accepted: December 11, 2018; Published: December 14, 2018

ABSTRACT

It is well known that the power of Cochran’s Q test to assess the presence of heterogeneity among treatment effects in a clinical meta-analysis is low, owing to the small number of studies combined. Two modified tests (PL1, PL2) were proposed by substituting the profile maximum likelihood estimator (PMLE) into the variance formula of the logarithm of the risk ratio in the standard chi-square test statistic for testing a null common risk ratio across all k studies ($i = 1, \ldots, k$). A simply naive test (SIM) was considered as a further comparative candidate. The performance of the tests, in terms of type I error rate under the null hypothesis and power under the random effects alternative, was evaluated via a simulation plan with various combinations of significance levels, numbers of studies, sample sizes in treatment and control arms, and true risk ratios as effect sizes of interest. The results indicated that for moderate to large study sizes ($k \ge 16$) in combination with moderate to large sample sizes ($n_{iT}, n_{iC} \ge 50$), three tests (PL1, PL2, and Q) could control the type I error rate in almost all situations. The two proposed tests (PL1, PL2) performed best, with the highest power, when $k \ge 16$ and sample sizes were moderate ($n_{iT}, n_{iC} = 50, 100$); this finding supports a recommendation to use them in such practical situations. Meanwhile, the standard Q test performed best when $k \ge 16$ and sample sizes were large ($n_{iT}, n_{iC} \ge 500$). Moreover, no test was reasonable for small sample sizes ($n_{iT}, n_{iC} \le 10$), regardless of the study size k. The simply naive test (SIM) is recommended, with high performance, when k = 4 in combination with large sample sizes ($n_{iT}, n_{iC} \ge 500$).

Keywords:

Profile Likelihood Test, Cochran Q Test, Meta-Analysis, Heterogeneity, Risk Ratios

1. Introduction

In a clinical trial with binary outcomes, the risk ratio (RR) as an intervention effect is defined as the ratio of the probabilities (risks) of having an adverse event between a treatment group and a control group [1] [2] . Let $x_T$ and $x_C$ be the numbers of events out of $n_T$ and $n_C$, the total numbers of persons at risk (or the total person-time of exposure) in the treatment arm and the control arm, respectively. Then the maximum likelihood estimate of RR is obtained as

$$\hat{RR} = \frac{\hat{p}_T}{\hat{p}_C} = \frac{x_T / n_T}{x_C / n_C}$$ [3] [4] .
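For concreteness, a minimal numerical sketch of this estimate (the counts below are hypothetical, not taken from any study cited here):

```python
# Hypothetical counts: events and persons at risk in each arm
x_T, n_T = 30, 200   # treatment arm
x_C, n_C = 20, 210   # control arm

rr_hat = (x_T / n_T) / (x_C / n_C)   # maximum likelihood estimate of RR
print(round(rr_hat, 3))              # 1.575
```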

A meta-analysis of study size k is a statistical approach that combines the results from k studies, conducted on the same topic with similar methods, into a single summary result. In clinical trials, meta-analysis is an essential tool for obtaining a better understanding of how well treatments work. Two popular statistical models are the fixed effect model and the random effects model. Under the fixed effect model, we assume that all studies share a common effect size; that is, there is no heterogeneity between the studies, all k independent trials contain only one true effect size, and each observed effect is determined by the common true effect plus sampling error (within-study error). In contrast, under the random effects model, the true effect is not the same in all studies; we allow a distribution of true effect sizes. It follows that the combined estimate is not an estimate of one value but rather the mean of that distribution. Hence there are two levels of error (within-study error and between-study error), and the observed effect is determined by the mean of all true effects plus the within-study error and the between-study error. In this sense, heterogeneity refers to true effect sizes that vary from study to study, so that the differences between studies produce differences in effect sizes; one can incorporate this heterogeneity into a random effects model. Alternatively, heterogeneity in the effect sizes from different studies may be explained by a set of covariates, such as characteristics of the studies, type of treatment, average or aggregate characteristics of patients, or even publication bias; a meta-regression approach may then be used to account for variation from such covariates among these heterogeneous effects.

Traditionally, before combining the effects of separate studies using either the fixed effect model (homogeneity) or the random effects model (heterogeneity), the conventional Cochran’s Q test is used to test whether the treatment effects are homogeneous. Unfortunately, it is widely known that the standard Q test may be inaccurate for testing the null homogeneity of effect sizes in the sense of low power. Kulinskaya and Dollinger [5] and Boissel et al. [6] stated that Cochran’s Q test has low power in most situations, especially when the number of studies (k) is small. The work of Kulinskaya, Dollinger, and Bjørkestøl [7] , Lipsitz et al. [1] and Lui [2] also confirmed the low power problem of Cochran’s Q test. The low power of the Q test implies a low ability to detect an effect when it actually exists (i.e., a low chance of rejecting the null of homogeneous effects when different effects exist). A simple correction for the low power of the Q test is to take a larger significance level; Fleiss [8] recommended using a cut-off of 0.1 rather than the usual 0.05, and this has become common practice for the Cochran’s Q homogeneity test in meta-analysis. Increasing the power in this way is equivalent to reducing the chance of a type II error, but this reduction also increases the chance of a type I error. Thus, when the low power problem is addressed by using a 10% significance criterion, a new problem arises: the type I error rate may no longer be maintained at the conventional level of significance. Additionally, Shadish and Haddock [9] stated that when the sample sizes in each study are very large, the null hypothesis of equal population effects may be rejected even if the individual effect estimates do not really differ much.

Profile likelihood estimation, described by Ferrari et al. [10] and Böhning et al. [11] , deals with the elimination of nuisance parameters. Generally, let the log-likelihood $l(p, q)$ depend on a vector $p$ of parameters of interest and a vector $q$ of nuisance parameters. If $q_p$, as a function of $p$, is the solution such that $l(q_p \mid p) \ge l(q \mid p)$ for all $q$, then $l(q_p \mid p) = l^{*}(p)$ is called the profile log-likelihood. The profile log-likelihood $l^{*}(p)$ is not an ordinary log-likelihood, but the log-likelihood maximized over the nuisance parameters given the values of the parameter of interest; it therefore depends only on the parameter of interest.

Given these limitations of the Q test, namely low power and failure to maintain the type I error rate at the conventional level of significance, many researchers have attempted to propose new or modified tests as alternative candidates. To address these gaps, our proposed tests modify the standard $\chi^2$ test of homogeneity by substituting the profile maximum likelihood estimates derived by Böhning et al. [11] into the variance formula of the logarithm of the risk ratio, the effect measure of interest over all k studies. Another comparative test is the simply naive test based on the variance estimate from the conventional Poisson likelihood. Some numerical examples are illustrated later. The next contribution focuses on a comparison of the performance of these homogeneity tests via a simulation plan; the results concern the type I error probability and the power criteria and are reported in a later section. The conclusion and discussion are presented at the end.

2. Motivational Applications

Two examples of meta-analysis are presented to illustrate the implementation of the Q test and to show how the parameters in the simulation plan were set. Farquhar et al. [12] conducted a meta-analysis of k = 7 studies to assess five-year follow-up of high dose chemotherapy with autograft compared with conventional chemotherapy for poor prognosis breast cancer. The outcome of treatment is event-free survival. The value of Cochran’s Q test was 4.72. Since Q is distributed as a standard chi-square statistic with k − 1 degrees of freedom (df), this leads to a p-value of 0.58, accepting the null homogeneity of risk ratios across trials. Additionally, the $I^2$ statistic, defined as $I^2 = 100\% \times (Q - df)/Q$ and describing the percentage of variation across studies due to heterogeneity, is very low (0%); consequently, a fixed effects model might be appropriate. Accepting the null hypothesis, we conclude that there is no evidence of heterogeneity (Figure 1). In addition, there was no difference between treatment and control groups on the binary events; the pooled estimate of RR of 1.01 under a fixed effects model has a 95% confidence interval (CI) of [0.97, 1.06], covering the null value 1. The forest plot of the meta-analysis is created with the R package meta provided by Schwarzer [13] , http://meta-analysis-with-r.org/.
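The quoted figures can be reproduced from Q and the degrees of freedom alone; a small sketch using SciPy (truncating $I^2$ at 0%, as is customary):

```python
from scipy.stats import chi2

Q, df = 4.72, 7 - 1                    # Cochran's Q and k - 1 degrees of freedom
p_value = chi2.sf(Q, df)               # upper-tail chi-square probability
I2 = max(0.0, 100 * (Q - df) / Q)      # I^2 statistic, truncated at 0%
print(round(p_value, 2), round(I2))    # 0.58 0
```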

Mottillo et al. [14] considered data from a meta-analysis of 16 trials on the metabolic syndrome and cardiovascular risk. The value of Cochran’s Q test is 6.12. The chi-square approximation with 15 degrees of freedom gives a p-value of 0.0003 for testing the null homogeneity. The heterogeneity index $I^2$ was 64%. The result shows evidence of heterogeneity across studies (Figure 2). Furthermore, there is a treatment effect on the binary outcome, since the pooled RR of 2.34 under a random effects model lies away from 1; the 95% CI of [2.02, 2.72] does not cover 1.

3. Deriving Profile Likelihood Tests for Common Risk Ratio

The purposes of this study are 1) to derive profile likelihood tests for testing a null common risk ratio RR across k studies, which is equivalent to testing homogeneity of treatment effects over all k studies ($i = 1, \ldots, k$), by substituting the profile likelihood estimator into the formula for the estimated variance of the logarithmic relative risk, $\hat{\operatorname{var}}(\log(\hat{RR}_i))$, in the standard chi-square test; and 2) to compare the performance of the test statistics based on the profile likelihood method regarding

Figure 1. Forest plot of meta-analysis comparing high dose chemotherapy and autograft with the conventional chemotherapy.

Figure 2. Forest plot of meta-analysis of 16 trials about the metabolic syndrome and cardiovascular risk.

the different formulas for the variance estimates of the logarithm of the risk ratio with the conventional Cochran’s Q test for testing a null common risk ratio RR across k studies ($H_0: RR_1 = RR_2 = \cdots = RR_k = RR$) against $H_1$: $H_0$ is false (i.e., $H_1$: $RR_i$ has a specific distribution).

We followed the work and the notation of Böhning et al. [11] and further proposed some profile likelihood tests by modifying the standard $\chi^2$ test for homogeneity through various forms of the variance estimate of the logarithm of the risk ratio in the ith study.

3.1. Profile Likelihood Estimator under a Fixed Effect Point for a Common Risk Ratio across Studies

The work of Böhning et al. [11] under the profile likelihood concept provides a fixed-effect point RR for all k studies ($i = 1, \ldots, k$) as

$$RR = \sum_{i=1}^{k} \frac{x_{iT}\,(n_{iT}\,RR + n_{iC})}{(x_{iT} + x_{iC})\, n_{iT}} = \frac{\sum_{i=1}^{k} x_{iT}}{\sum_{i=1}^{k} \dfrac{(x_{iT} + x_{iC})\, n_{iT}}{n_{iT}\,RR + n_{iC}}},$$

leading to the following iterative process for the profile maximum likelihood estimator (PMLE):

$$\hat{RR}_{PMLE} = \sum_{i=1}^{k} \frac{x_{iT}\,(n_{iT}\,\hat{RR}_{PMLE} + n_{iC})}{(x_{iT} + x_{iC})\, n_{iT}} = \frac{\sum_{i=1}^{k} x_{iT}}{\sum_{i=1}^{k} \dfrac{(x_{iT} + x_{iC})\, n_{iT}}{n_{iT}\,\hat{RR}_{PMLE} + n_{iC}}},$$

where $x_{iT}$ and $x_{iC}$ are the numbers of events in the treatment and control arms of clinical trial i, and $n_{iT}$ and $n_{iC}$ are the numbers of persons at risk or person-times.
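A minimal sketch of the fixed-point iteration implied by the second form of the equation above; the function name, starting value, and convergence tolerance are our own choices and are not prescribed in [11]:

```python
import numpy as np

def rr_pmle(x_t, x_c, n_t, n_c, tol=1e-10, max_iter=1000):
    """Profile MLE of a common risk ratio via fixed-point iteration (sketch)."""
    x_t, x_c = np.asarray(x_t, float), np.asarray(x_c, float)
    n_t, n_c = np.asarray(n_t, float), np.asarray(n_c, float)
    rr = 1.0                                           # starting value
    for _ in range(max_iter):
        denom = (x_t + x_c) * n_t / (n_t * rr + n_c)   # profiled denominator terms
        rr_new = x_t.sum() / denom.sum()               # updated common RR
        if abs(rr_new - rr) < tol:
            break
        rr = rr_new
    return rr_new

# e.g. rr_pmle([5, 8, 12], [3, 6, 10], [100, 120, 90], [110, 115, 95])
```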

3.2. Some Tests Based on Various Formulas of the Variance Estimate of the Logarithmic $RR_i$

We test the null hypothesis that the true relative risks ($RR_i$) are the same in all k centers/studies, $H_0: RR_1 = RR_2 = \cdots = RR_k = RR$ for $i = 1, \ldots, k$, against the alternative that at least one of the effect sizes ($RR_i$) differs from the remainder. Equivalently, under the null all center parameters to be combined are summarized by a single underlying population parameter, whereas under the alternative the parameters differ among centers and are treated as wholly random with a specific distribution. Our proposed tests are modifications of the standard $\chi^2$ test for homogeneity of the following form:

$$\chi^2 = \sum_{i=1}^{k} \frac{\left(\ln(\hat{RR}_i) - \ln(\hat{RR}_{PMLE})\right)^2}{\operatorname{var}(\ln(\hat{RR}_i))}$$

where k is the number of studies being combined, $\hat{RR}_{PMLE}$ is the PMLE of the common RR, $\hat{RR}_i = (x_{iT}/n_{iT})/(x_{iC}/n_{iC})$ is the estimate of $RR_i$ in the ith study, the natural logarithm transformations $\ln(\hat{RR}_i)$ and $\ln(\hat{RR}_{PMLE})$ are needed to adjust for the non-symmetric distribution of the risk ratio, and the $\chi^2$ test has $k - 1$ degrees of freedom. It is common practice to replace the variance of the logarithm of the risk ratio in the ith study, $\operatorname{var}(\ln(\hat{RR}_i))$, by one of several estimates, $\hat{\operatorname{var}}(\ln(\hat{RR}_i))$, leading to the following candidate $\chi^2$ tests (a short code sketch covering all four tests follows the list).

1) The simply naive $\chi^2$ test (SIM), based on the variance estimate in the ith study under the Poisson likelihood via the delta method, is given by

$$\chi^2 = \sum_{i=1}^{k} \frac{\left(\ln(\hat{RR}_i) - \ln(\hat{RR}_{PMLE})\right)^2}{\hat{\operatorname{var}}(\ln(\hat{RR}_i))}$$

where $\hat{\operatorname{var}}(\ln(\hat{RR}_i)) = \dfrac{1}{n_{iT}\,\hat{p}_{iT}} + \dfrac{1}{n_{iC}\,\hat{p}_{iC}} = \dfrac{1}{x_{iT}} + \dfrac{1}{x_{iC}}$, $\hat{p}_{iT} = x_{iT}/n_{iT}$, and $\hat{p}_{iC} = x_{iC}/n_{iC}$.

2) The profile likelihood $\chi^2$ test (PL1) takes the same form as above but uses a different variance estimate, obtained under the null hypothesis:

$$\hat{\operatorname{var}}(\ln(\hat{RR}_i)) = \left(\frac{1}{n_{iT}\,\hat{RR}_{PMLE}} + \frac{1}{n_{iC}}\right)\frac{1}{\hat{p}_{iC}}$$

where $\hat{RR}_{PMLE} = \sum_{i=1}^{k} \dfrac{x_{iT}\,(n_{iT}\,\hat{RR}_{PMLE} + n_{iC})}{(x_{iT} + x_{iC})\, n_{iT}}$ and $\hat{p}_{iC} = x_{iC}/n_{iC}$.

3) The profile likelihood $\chi^2$ test (PL2) is obtained by using yet another variance estimate:

$$\hat{\operatorname{var}}(\ln(\hat{RR}_i)) = \left(\frac{1}{n_{iT}\,\hat{RR}_{PMLE}} + \frac{1}{n_{iC}}\right)\frac{1}{\hat{p}_{iC}} = \frac{(n_{iC} + \hat{RR}_{PMLE}\, n_{iT})^2}{n_{iT}\, n_{iC}\, \hat{RR}_{PMLE}\,(x_{iC} + x_{iT})}$$

where $\hat{RR}_{PMLE} = \sum_{i=1}^{k} \dfrac{x_{iT}\,(n_{iT}\,\hat{RR}_{PMLE} + n_{iC})}{(x_{iT} + x_{iC})\, n_{iT}}$ and $\hat{p}_{iC} = \dfrac{x_{iT} + x_{iC}}{n_{iC} + \hat{RR}_{PMLE}\, n_{iT}}$ are the results of Böhning et al. [11] under the profile likelihood concept.

4) Cochran’s Q test, a weighted sum of squares, is distributed as a chi-square statistic with k − 1 degrees of freedom under the null hypothesis of homogeneity of treatment effects across k studies, and is given by

$$Q = \sum_{i=1}^{k} w_i\,(\hat{\delta}_i - \bar{\delta})^2$$

where $\hat{\delta}_i = \ln \hat{RR}_i$, $w_i = \dfrac{1}{\hat{\operatorname{var}}(\hat{\delta}_i)}$ with $\hat{\operatorname{var}}(\hat{\delta}_i) = \dfrac{1}{n_{iT}\,\hat{p}_{iT}} + \dfrac{1}{n_{iC}\,\hat{p}_{iC}}$, $\hat{p}_{iT} = x_{iT}/n_{iT}$, $\hat{p}_{iC} = x_{iC}/n_{iC}$, and $\bar{\delta} = \sum_{i=1}^{k} w_i\,\hat{\delta}_i \Big/ \sum_{i=1}^{k} w_i$.
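As announced above, here is a condensed sketch of all four statistics; it reuses the rr_pmle function sketched in Section 3.1, and the function and dictionary names are our own choices rather than anything prescribed by the paper:

```python
import numpy as np
from scipy.stats import chi2

def homogeneity_tests(x_t, x_c, n_t, n_c):
    """SIM, PL1, PL2 and Q statistics with chi-square(k-1) p-values (sketch)."""
    x_t, x_c = np.asarray(x_t, float), np.asarray(x_c, float)
    n_t, n_c = np.asarray(n_t, float), np.asarray(n_c, float)
    k = len(x_t)
    rr_i = (x_t / n_t) / (x_c / n_c)               # per-study risk ratio estimates
    rr0 = rr_pmle(x_t, x_c, n_t, n_c)              # common RR (profile MLE, Section 3.1 sketch)
    dev2 = (np.log(rr_i) - np.log(rr0)) ** 2       # squared deviations from the common log RR

    # SIM: delta-method Poisson variance 1/x_iT + 1/x_iC
    var_sim = 1.0 / x_t + 1.0 / x_c
    # PL1: profile variance with the ordinary control-arm risk p_iC = x_iC / n_iC
    var_pl1 = (1.0 / (n_t * rr0) + 1.0 / n_c) * (n_c / x_c)
    # PL2: profile variance with p_iC = (x_iT + x_iC) / (n_iC + RR * n_iT)
    var_pl2 = (n_c + rr0 * n_t) ** 2 / (n_t * n_c * rr0 * (x_t + x_c))

    stats = {
        "SIM": np.sum(dev2 / var_sim),
        "PL1": np.sum(dev2 / var_pl1),
        "PL2": np.sum(dev2 / var_pl2),
    }
    # Cochran's Q: inverse-variance weighted sum of squares about the weighted mean
    delta, w = np.log(rr_i), 1.0 / var_sim
    delta_bar = np.sum(w * delta) / np.sum(w)
    stats["Q"] = np.sum(w * (delta - delta_bar) ** 2)

    return {name: (s, chi2.sf(s, df=k - 1)) for name, s in stats.items()}
```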

4. Monte Carlo Simulation

We perform two simulation plans. One is conducted on the type I error for testing a null common risk ratio RR over all k studies, in other words for testing the null homogeneity $H_0: RR_1 = RR_2 = \cdots = RR_k = RR$. The other is used to compare the power of the tests after all test statistics have been verified to control the empirical type I error within the same limit range.

4.1. Simulation Plan for Studying Type I Error

Parameter setting: The values of the parameters follow the two motivational examples. Let the common relative risk (RR) be 1, 2 and 4. Baseline risks $p_{iC}$ in the control arm for the ith center ($i = 1, 2, \ldots, k$) are generated from a uniform distribution whose range depends on the value of RR. For example, if RR = 1 then $p_{iC} \sim U(0, 0.9)$ and the corresponding treatment risks, bounded by 0.9, are $p_{iT} = p_{iC} \times RR \sim U(0, 0.9)$ for the ith center. If RR = 2 then $p_{iC} \sim U(0, 0.45)$ and $p_{iT} = p_{iC} \times RR \sim U(0, 0.9)$. The sample sizes $n_{iT}$ and $n_{iC}$ are Poisson distributed with means 5, 10, 50, 100, 500 and 1000. The number of centers k is 4, 16 and 32.

Statistics: Poisson random variables $X_{iT}$ and $X_{iC}$ in the treatment and control arms for center i ($i = 1, 2, \ldots, k$) are generated with parameters $n_{iT}\,p_{iT}$ and $n_{iC}\,p_{iC}$, respectively. All candidate tests are then computed. The procedure is replicated 5000 times, and from these replicates the number of rejections of the null hypothesis is counted to estimate the actual (empirical) type I error.
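A condensed sketch of one cell of this type I error simulation; the test functions are those sketched in Section 3.2, and the handling of replicates containing zero counts (simply skipping them) is our own simplification, since the paper does not specify it:

```python
import numpy as np

rng = np.random.default_rng(2018)   # arbitrary seed for reproducibility

def simulate_type1(k, n_mean, rr, alpha=0.05, reps=5000):
    """Empirical type I error of the four tests under H0: common RR (sketch)."""
    rejections = {"SIM": 0, "PL1": 0, "PL2": 0, "Q": 0}
    valid = 0
    for _ in range(reps):
        n_t = rng.poisson(n_mean, k)                 # arm sizes ~ Poisson(n_mean)
        n_c = rng.poisson(n_mean, k)
        p_c = rng.uniform(0.0, 0.9 / rr, k)          # baseline control-arm risks
        x_t = rng.poisson(n_t * rr * p_c)            # event counts under the common RR
        x_c = rng.poisson(n_c * p_c)
        if (x_t * x_c * n_t * n_c == 0).any():       # skip zero-count replicates (our simplification)
            continue
        valid += 1
        results = homogeneity_tests(x_t, x_c, n_t, n_c)   # sketched in Section 3.2
        for name, (_, p) in results.items():
            rejections[name] += p < alpha
    return {name: r / max(valid, 1) for name, r in rejections.items()}

# e.g. simulate_type1(k=16, n_mean=100, rr=2)
```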

The type I error of each test is assessed by comparing the actual (estimated) type I error $\hat{\alpha}$ with the nominal level of significance $\alpha$. The departure of the estimated type I error from the nominal level must not exceed a precise limit. In this study, the evaluation for two-sided tests is based on the Bradley limit [15] , yielding the range $[0.5\alpha, 1.5\alpha]$. For example, at the $\alpha = 1\%$ level of significance, $\hat{\alpha}$ should lie within [0.5%, 1.5%]; at $\alpha = 5\%$, within [2.5%, 7.5%]; and at $\alpha = 10\%$, within [5%, 15%].
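The acceptance check itself is a one-line criterion (a sketch of the Bradley limit as stated above):

```python
def within_bradley_limit(alpha_hat, alpha=0.05):
    """True if the empirical type I error lies within [0.5*alpha, 1.5*alpha]."""
    return 0.5 * alpha <= alpha_hat <= 1.5 * alpha

# e.g. within_bradley_limit(0.063, 0.05) -> True; within_bradley_limit(0.08, 0.05) -> False
```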

If the empirical type I error lies within the Bradley limit, then the statistical test is considered to control the type I error.

4.2. Simulation Plan for Studying Power of Tests

Before comparing the power of the test statistics, all tests must be calibrated to have type I error rates within the same limit range under the null hypothesis. That is, power comparisons are reliable only if all tests are first verified to control the type I error within the same range.

Under the alternative hypothesis, $RR_i$ is assumed to follow a specific distribution around the mean $RR_0$ of 1, 2 or 4: we let $\ln RR_i = \ln RR_0 + U_m = \ln RR_0 + mm\,(2U - 1)$, where $U_m$ is uniform over $(-mm, mm)$ for a given mm = 0.2, 0.4, 0.6, and U is uniform over (0, 1). Baseline risks $p_{iC}$ are generated from a uniform distribution over [0, 0.25]. Poisson random variables $X_{iT}$ and $X_{iC}$ are generated with parameters $n_{iT}\,p_{iT}$ and $n_{iC}\,p_{iC}$, respectively. The procedure is replicated 5000 times, and the number of rejections of the null hypothesis is counted to estimate the empirical power.
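A matching sketch of the power simulation; the only change relative to the type I error sketch is that each study draws its own $RR_i$ around $RR_0$, and the same zero-count simplification applies:

```python
import numpy as np

rng = np.random.default_rng(2018)

def simulate_power(k, n_mean, rr0, mm, alpha=0.05, reps=5000):
    """Empirical power under ln(RR_i) = ln(RR_0) + Uniform(-mm, mm) (sketch)."""
    rejections = {"SIM": 0, "PL1": 0, "PL2": 0, "Q": 0}
    valid = 0
    for _ in range(reps):
        n_t = rng.poisson(n_mean, k)
        n_c = rng.poisson(n_mean, k)
        rr_i = np.exp(np.log(rr0) + mm * (2 * rng.uniform(size=k) - 1))  # study-specific RR_i
        p_c = rng.uniform(0.0, 0.25, k)              # baseline risks over [0, 0.25]
        x_t = rng.poisson(n_t * rr_i * p_c)
        x_c = rng.poisson(n_c * p_c)
        if (x_t * x_c * n_t * n_c == 0).any():       # skip zero-count replicates (our simplification)
            continue
        valid += 1
        results = homogeneity_tests(x_t, x_c, n_t, n_c)
        for name, (_, p) in results.items():
            rejections[name] += p < alpha
    return {name: r / max(valid, 1) for name, r in rejections.items()}

# e.g. simulate_power(k=16, n_mean=50, rr0=2, mm=0.4)
```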

5. Results

Since it is difficult to present all of the extensive results from the simulation study, we illustrate only some instances, covering the 0.05 level of significance and common true relative risk values of 1 and 2, for both equal and unequal sample sizes.

5.1. Equal Sample Sizes ($n_{iT} = n_{iC}$)

5.1.1. Studying Type I Errors

・ From Table 1, the results show that for small sample sizes ($n_{iT}, n_{iC} \le 10$), regardless of the study size k, almost no test can control the type I error.

・ For moderate to large study sizes ($k \ge 16$) in combination with moderate to large sample sizes ($n_{iT}, n_{iC} \ge 50$), the two proposed tests (PL1, PL2) can maintain the type I error rate in almost all situations. Meanwhile, for moderate to large study sizes ($k \ge 16$), the Q test seems to handle the type I error only when sample sizes are large ($n_{iT}, n_{iC} \ge 500$).

・ For the small center size (k = 4), the SIM test can control the type I error for some moderate and large sample sizes ($n_{iT}, n_{iC} \ge 100$), and the Q test can control the type I error for moderate sample sizes ($n_{iT}, n_{iC} = 50, 100$).

・ In summary, when the study size is moderate to large ($k \ge 16$), the two profile likelihood tests (PL1 and PL2) perform well, maintaining the type I error rate when sample sizes are moderate to large ($n_{iT}, n_{iC} \ge 50$); meanwhile, the Q test controls the type I error only when sample sizes are quite large ($n_{iT}, n_{iC} \ge 500$).

Table 1. At the 5% significance level, a comparison of the empirical type I error rates among the four statistical tests with equal means of sample sizes.

Note: Bold values indicate that the test statistic can control the type I error rate.

5.1.2. Comparing Powers of Tests

・ The power comparisons are conducted only after all candidate tests have been shown to maintain the type I error within the same limit range.

・ Table 2 shows that both the PL1 test and the PL2 test are best, with the highest power, when the study size is moderate to large ($k \ge 16$) and sample sizes are moderate ($n_{iT}, n_{iC} = 50, 100$), for every degree of variation (mm = 0.2, 0.4, 0.6) and for RR = 1, 2. In more detail, PL2 tends to have slightly higher power than PL1.

・ When the study size is moderate to large ($k \ge 16$) and sample sizes are large ($n_{iT}, n_{iC} \ge 500$), the Q test is best, with the highest power, for every degree of variation (mm = 0.2, 0.4, 0.6) and for RR = 1, 2.

・ When the number of studies is small (k = 4) in combination with large sample sizes ($n_{iT}, n_{iC} \ge 500$), the best-performing test is the SIM test, since it is the only test that first meets the criterion of controlling the type I error.

5.2. Unequal Cases ($n_{iT} \ne n_{iC}$)

5.2.1. Studying Type I Errors

Table 3 indicates that for RR = 2 and moderate to large study sizes ($k \ge 16$), three tests (PL1, PL2, Q) can control the type I error when the sample sizes in both treatment and control groups are moderate to large ($n_{iT} \ge 50$, $n_{iC} \ge 100$). The SIM test cannot control the type I error for any sample size.

Table 4 highlights only the small study size (k = 4). For k = 4, the SIM test seems to control the type I error at least when the sample size in the treatment group is large. Both the PL1 and PL2 tests can control the type I error when the sample size in the treatment group is small. The Q test can rarely control the type I error for any sample size when the study size is small.

5.2.2. Studying Power of Tests

Table 5 indicates that for moderate to large study sizes ($k \ge 16$) in combination with moderate sample sizes ($n_{iT} = 50$, $n_{iC} = 100$), the two proposed tests (PL1, PL2) perform best and are quite close together.

・ For moderate to large study sizes ($k \ge 16$) in combination with at least one arm having a large sample size ($n_{iT} \ge 50$, $n_{iC} \ge 500$), the Q test seems to perform best, with the highest power, followed by the PL2 and PL1 tests.

・ Additionally, when the sample sizes of both the treatment and control arms are quite small ($n_{iT}, n_{iC} = 5, 10$), regardless of the study size k, none of the four tests (SIM, PL1, PL2, Q) is reasonable, since almost all of them fail to control the type I error rate and have very low power.

6. Discussion

In this study, we focus on a comparison of the performance of four statistical tests: the simply naive test approach (SIM), the profile likelihood approach with the conventional null variance (PL1), the full profile likelihood approach

Table 2. Comparisons of the power of tests, after controlling the type I error at the 0.05 significance level, when the mean sample sizes of the treatment and control groups are equal ($n_{iT} = n_{iC} = n$).

Note: Bold values indicate that the test statistic first controls the type I error rate.

Table 3. At the 5% significance level, a comparison of the estimated type I error rates for moderate to large study sizes ($k \ge 16$) with unequal sample sizes ($n_{iT} \ne n_{iC}$).

Note: Bold values indicate that the test statistic can control the type I error.

Table 4. At the 5% significance level, a comparison of the actual type I error rates for the small study size (k = 4) with unequal sample sizes ($n_{iT} \ne n_{iC}$).

Note: Bold values indicate that the test statistic can control the type I error.

Table 5. Comparison of the power of tests at the 0.05 significance level for moderate to large study sizes ($k \ge 16$) with unequal sample sizes ($n_{iT} \ne n_{iC}$) at mm = 0.2.

Note: Bold values indicate that the test statistic first controls the type I error.

(PL2), and the conventional weighted sum of squares approach (Q). The main findings are as follows.

・ No test could control the type I error rate for small sample sizes ($n_{iT}, n_{iC} \le 10$), regardless of the study size k. The same finding appeared in the study of Mathes and Kuss [16] ; they stated that estimating between-study heterogeneity in a meta-analysis with very small sample sizes ($n_{iT}, n_{iC} \le 5$) is difficult.

・ The work of Willis and Riley [17] also confirmed that the Q test performs well when there are many studies (50 or more), but that for fewer studies the Q test has low power.

・ We are among the research groups that have attempted to propose new or modified tests to bridge the gaps in the limitations of the Q test. This paper shows how to use the two proposed tests (PL1, PL2), based on substituting profile maximum likelihood estimates into different variance formulas, to obtain modified standard chi-square tests of heterogeneity.

・ For moderate to large study sizes ($k \ge 16$) in combination with moderate sample sizes ($n_{iT}, n_{iC} = 50, 100$), our profile likelihood tests (PL1 and PL2) defeat the Q test with higher power after controlling the type I error within the same limits.

・ The work of Bagheri, Ayatollahi and Jafari [18] and Viechtbauer [19] also evaluated the influence of the number of centers (k) and the sample sizes ($n_{iT}, n_{iC}$) on the type I error and the power of the null homogeneity test in some situations. This means that investigators should continue their attempts to find new or modified tests.

・ In contrast, although the two proposed tests (PL1, PL2) perform well in the above situations, they cannot defeat the Q test when the number of studies is moderate to large ($k \ge 16$) in combination with large sample sizes ($n_{iT}, n_{iC} \ge 500$). Additionally, in unbalanced cases with moderate to large study sizes ($k \ge 16$) and a combination of moderate and large sample sizes ($n_{iT} \ge 50$, $n_{iC} \ge 500$), the Q test performs best, with the highest power, followed by the PL2 and PL1 tests.

7. Conclusion

In summary, the idea of substituting the profile likelihood estimates into the variance formulas of the logarithm of the relative risks works well when $k \ge 16$ in combination with moderate sample sizes ($n_{iT}, n_{iC} = 50, 100$).

8. Recommendation

The two proposed tests (PL1, PL2), based on substituting profile maximum likelihood estimates into different variance formulas, perform best with the highest power (after first controlling the type I error within the same limits) in some situations, for example when the number of studies is moderate to large ($k \ge 16$) in combination with moderate sample sizes ($n_{iT}, n_{iC} = 50, 100$). This result leads to the suggestion to use the two proposed tests in such practical situations.

In contrast, although the two proposed tests (PL1, PL2) perform well with high power in the above situations, they cannot defeat the Q test when the number of studies is moderate to large ($k \ge 16$) in combination with large sample sizes ($n_{iT}, n_{iC} \ge 500$), in both balanced and unbalanced cases. This result leads to the suggestion to use the Q test in those situations. Further investigation is needed to find a new, appropriate test to fill the gap left by the low power of the Q test in such situations.

Acknowledgements

The study was partially supported for publication by the Faculty of Public Health, Mahidol University, Bangkok, Thailand.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Viwatwongkasem, C., Donjdee, K. and Poodphraw, T. (2018) Profile Likelihood Tests for Common Risk Ratios in Meta-Analysis Studies. Open Journal of Statistics, 8, 915-930. https://doi.org/10.4236/ojs.2018.86061

References

  1. Lipsitz, S.R., Dear, K.B.G., Laird, N.M. and Molenberghs, G. (1998) Tests for Homogeneity of the Risk Difference When Data Are Sparse. Biometrics, 54, 148-160. https://doi.org/10.2307/2534003

  2. Lui, K.J. (2007) Testing Homogeneity of the Risk Ratio in Stratified Noncompliance Randomized Trials. Contemporary Clinical Trials, 28, 614-625. https://doi.org/10.1016/j.cct.2007.02.010

  3. Smolinsky, L. and Marx, B.D. (2018) Odds Ratios, Risk Ratios, and Bornmann and Haunschild’s New Indicators. Journal of Informetrics, 12, 732-735. https://doi.org/10.1016/j.joi.2018.06.011

  4. Améndola, C., et al. (2019) The Maximum Likelihood Degree of Toric Varieties. Journal of Symbolic Computation, 92, 222-242. https://doi.org/10.1016/j.jsc.2018.04.016

  5. Kulinskaya, E. and Dollinger, M.B. (2015) An Accurate Test for Homogeneity of Odds Ratios Based on Cochran’s Q-Statistic. BMC Medical Research Methodology, 15, 49. https://doi.org/10.1186/s12874-015-0034-x

  6. Boissel, J.-P., Blanchard, J., Panak, E., Peyrieux, J.-C. and Sacks, H. (1989) Considerations for the Meta-Analysis of Randomized Clinical Trials: Summary of a Panel Discussion. Controlled Clinical Trials, 10, 254-281. https://doi.org/10.1016/0197-2456(89)90067-6

  7. Kulinskaya, E., Dollinger, M.B. and Bjørkestøl, K. (2011) On the Moments of Cochran’s Q Statistic under the Null Hypothesis with Application to the Meta-Analysis of Risk Difference. Research Synthesis Methods, 2, 254-270. https://doi.org/10.1002/jrsm.54

  8. Fleiss, J.L. (1986) Analysis of Data from Multiclinic Trials. Controlled Clinical Trials, 7, 267-275. https://doi.org/10.1016/0197-2456(86)90034-6

  9. Shadish, W.R. and Haddock, C.K. (1994) The Handbook of Research Synthesis. Russell Sage Foundation.

  10. Ferrari, S.L., Lucambio, F. and Cribari-Neto, F. (2005) Improved Profile Likelihood Inference. Journal of Statistical Planning and Inference, 134, 373-391. https://doi.org/10.1016/j.jspi.2004.05.001

  11. Böhning, D., Kuhnert, R. and Rattanasiri, S. (2008) Meta-Analysis of Binary Data Using Profile Likelihood. Chapman & Hall/CRC Press, Boca Raton.

  12. Farquhar, C.M., Marjoribanks, J., Lethaby, A. and Basser, R. (2007) High Dose Chemotherapy for Poor Prognosis Breast Cancer: Systematic Review and Meta-Analysis. Cancer Treatment Reviews, 33, 325-337. https://doi.org/10.1016/j.ctrv.2007.01.007

  13. Schwarzer, G. (2007) Meta: An R Package for Meta-Analysis. R News, 7, 40-45. https://cran.r-project.org/doc/Rnews/Rnews_2007-3.pdf

  14. Mottillo, S., Filion, K.B., Genest, J., Joseph, L., Pilote, L., Poirier, P., et al. (2010) The Metabolic Syndrome and Cardiovascular Risk: A Systematic Review and Meta-Analysis. Journal of the American College of Cardiology, 56, 1113-1132. https://doi.org/10.1016/j.jacc.2010.05.034

  15. Bradley, J. (1978) Robustness? British Journal of Mathematical and Statistical Psychology, 31, 144-152. https://doi.org/10.1111/j.2044-8317.1978.tb00581.x

  16. Mathes, T. and Kuss, O. (2018) A Comparison of Methods for Meta-Analysis of a Small Number of Studies with Binary Outcomes. Research Synthesis Methods, 9. https://doi.org/10.1002/jrsm.1296

  17. Willis, B.H. and Riley, R.D. (2017) Measuring the Statistical Validity of Summary Meta-Analysis and Meta-Regression Results for Use in Clinical Practice. Statistics in Medicine, 36, 3283-3301. https://doi.org/10.1002/sim.7372

  18. Bagheri, Z., Ayatollahi, S.M.T. and Jafari, P. (2011) Comparison of Three Tests of Homogeneity of Odds Ratios in Multicenter Trials with Unequal Sample Sizes within and among Centers. BMC Medical Research Methodology, 11, 58. https://doi.org/10.1186/1471-2288-11-58

  19. Viechtbauer, W. (2007) Hypothesis Tests for Population Heterogeneity in Meta-Analysis. British Journal of Mathematical and Statistical Psychology, 60, 29-60.