
The results from estimation commands display only two-sided tests for the coefficients. How can I perform a one-sided test?

Title    One-sided tests for coefficients
Author   Kristin MacDonald, StataCorp

Estimation commands provide a t test or z test for the null hypothesis that a coefficient is equal to zero. The test command can perform Wald tests for simple and composite linear hypotheses on the parameters, but these Wald tests are also limited to tests of equality.

One-sided t tests

To perform one-sided tests, you can first perform the corresponding two-sided Wald test. Then you can use the results to calculate the test statistic and p-value for the one-sided test. Let’s say that you perform the following regression:

. sysuse auto, clear
(1978 automobile data)

. regress price mpg weight 
      Source |       SS           df       MS      Number of obs   =        74
-------------+----------------------------------   F(2, 71)        =     14.74
       Model |   186321280         2  93160639.9   Prob > F        =    0.0000
    Residual |   448744116        71  6320339.67   R-squared       =    0.2934
-------------+----------------------------------   Adj R-squared   =    0.2735
       Total |   635065396        73  8699525.97   Root MSE        =      2514

------------------------------------------------------------------------------
       price | Coefficient  Std. err.      t    P>|t|     [95% conf. interval]
-------------+----------------------------------------------------------------
         mpg |  -49.51222   86.15604    -0.57   0.567    -221.3025     122.278
      weight |   1.746559   .6413538     2.72   0.008      .467736    3.025382
       _cons |   1946.069    3597.05     0.54   0.590    -5226.245    9118.382
------------------------------------------------------------------------------

If you wish to test that the coefficient on weight, βweight, is negative (or positive), you can begin by performing the Wald test for the null hypothesis that this coefficient is equal to zero.

. test _b[weight]=0

 ( 1)  weight = 0

       F(  1,    71) =    7.42
            Prob > F =    0.0081

The Wald test given here is an F test with 1 numerator degree of freedom and 71 denominator degrees of freedom. The Student’s t distribution is directly related to the F distribution: the square of a t random variable with d degrees of freedom has an F distribution with 1 numerator degree of freedom and d denominator degrees of freedom.
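As a quick check of this relationship, you can compare the two tail probabilities with Stata’s Ftail() and ttail() functions, here using the rounded t statistic of 2.72 reported for weight:

. display Ftail(1, 71, 2.72^2)    // upper-tail probability of F(1,71) at t^2

. display 2*ttail(71, 2.72)       // two-sided p-value from t with 71 df

Both displays return the same number, approximately the two-sided p-value of 0.008 shown for weight in the regression output.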

As long as the F test has 1 numerator degree of freedom, the square root of the F statistic is the absolute value of the t statistic for the one-sided test. To determine whether this t statistic is positive or negative, you need to determine whether the fitted coefficient is positive or negative. To do this, you can use the sign() function.

. local sign_wgt = sign(_b[weight])

Then, using the ttail() function along with the returned results from the test command, you can calculate the p-values for the one-sided tests in the following manner:

. display "Ho: coef <= 0  p-value = " ttail(r(df_r),`sign_wgt'*sqrt(r(F)))
Ho: coef <= 0  p-value = .00406491

. display "Ho: coef >= 0  p-value = " 1-ttail(r(df_r),`sign_wgt'*sqrt(r(F)))
Ho: coef >= 0  p-value = .99593509

In the special case where you are interested in testing whether a coefficient is greater than, less than, or equal to zero, you can calculate the p-values directly from the regression output. When the estimated coefficient is positive, as for weight, you can do so as follows:

H0: βweight = 0     p-value = 0.008 (given in regression output)
H0: βweight <= 0    p-value = 0.008/2 = 0.004
H0: βweight >= 0    p-value = 1 − (0.008/2) = 0.996
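Equivalently, you can avoid halving rounded output by hand and compute the one-sided p-values from the results that regress leaves in memory; a minimal sketch using _b[weight], _se[weight], and the residual degrees of freedom e(df_r):

. display "Ho: coef <= 0  p-value = " ttail(e(df_r), _b[weight]/_se[weight])

. display "Ho: coef >= 0  p-value = " 1 - ttail(e(df_r), _b[weight]/_se[weight])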

When the estimated coefficient is negative, as for mpg, the same code can be used:

. test _b[mpg]=0

 ( 1)  mpg = 0

       F(  1,    71) =    0.33
            Prob > F =    0.5673

. local sign_mpg = sign(_b[mpg])  

. display "Ho: coef <= 0  p-value = " ttail(r(df_r),`sign_mpg'*sqrt(r(F)))
Ho: coef <= 0  p-value = .71633814

. display "Ho: coef >= 0  p-value = " 1-ttail(r(df_r),`sign_mpg'*sqrt(r(F)))
Ho: coef >= 0  p-value = .28366186

However, because the estimated coefficient is negative, the direct calculations from the regression output are reversed:

H0: βmpg = 0     p-value = 0.567 (given in regression output)
H0: βmpg <= 0    p-value = 1 − (0.567/2) = 0.717
H0: βmpg >= 0    p-value = 0.567/2 = 0.284

On the other hand, if you want to perform a test such as H0: βweight <= 1, you cannot calculate the p-value directly from the regression results. Here you would have to perform the Wald test first.

. test _b[weight]=1

 ( 1)  weight = 1

       F(  1,    71) =    1.35
            Prob > F =    0.2483

. local sign_wgt2 = sign(_b[weight]-1)

. display "Ho: coef <= 1  p-value = " ttail(r(df_r),`sign_wgt2'*sqrt(r(F)))
Ho: coef <= 1  p-value = .12415299
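You can also obtain this p-value without calling test by constructing the t statistic for the hypothesized value yourself; a minimal sketch:

. scalar t1 = (_b[weight] - 1)/_se[weight]    // t statistic for Ho: coef = 1

. display "Ho: coef <= 1  p-value = " ttail(e(df_r), t1)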

One-sided z tests

In the output for certain estimation commands, you will find that z statistics are reported instead of t statistics. In these cases, when you use the test command, you will get a chi-squared test instead of an F test. The relationship between the standard normal distribution and the chi-squared distribution is similar to the relationship between the Student’s t distribution and the F distribution: the square of a standard normal random variable has a chi-squared distribution with 1 degree of freedom. Therefore, one-sided z tests can be performed similarly to one-sided t tests. For example,

. webuse union, clear
(NLS Women 14-24 in 1968)

. xtset idcode

Panel variable: idcode (unbalanced)

. xtlogit union age grade, nolog

Random-effects logistic regression                   Number of obs    = 26,200
Group variable: idcode                               Number of groups =  4,434

Random effects u_i ~ Gaussian                        Obs per group:
                                                                  min =      1
                                                                  avg =    5.9
                                                                  max =     12

Integration method: mvaghermite                      Integration pts. =     12

                                                     Wald chi2(2)     =  69.07
Log likelihood = -10623.006                          Prob > chi2      = 0.0000

------------------------------------------------------------------------------
       union | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
         age |   .0149252   .0036634     4.07   0.000      .007745    .0221055
       grade |   .1130089   .0177088     6.38   0.000     .0783002    .1477175
       _cons |  -4.313313   .2426307   -17.78   0.000    -4.788861   -3.837766
-------------+----------------------------------------------------------------
    /lnsig2u |   1.800793    .046732                       1.7092    1.892386
-------------+----------------------------------------------------------------
     sigma_u |   2.460578   .0574939                      2.350434    2.575884
         rho |   .6479283   .0106604                      .6267624    .6685287
------------------------------------------------------------------------------
LR test of rho=0: chibar2(01) = 6343.94               Prob >= chibar2 = 0.000

. test grade

 ( 1)  [union]grade = 0

           chi2(  1) =   40.72
         Prob > chi2 =    0.0000

Here the test command returns r(chi2), which can be used along with the normal() function to calculate the appropriate p-values.

. local sign_grade = sign(_b[grade])

. display "H_0: coef<=0  p-value = " 1-normal(`sign_grade'*sqrt(r(chi2)))
H_0: coef<=0  p-value = 8.768e-11

. display "H_0: coef>=0  p-value = " normal(`sign_grade'*sqrt(r(chi2)))
H_0: coef>=0  p-value = 1
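Because the chi-squared statistic from test for a single coefficient is the square of that coefficient’s z statistic, the same p-values can also be computed directly from _b[grade] and _se[grade]; a sketch:

. display "H_0: coef<=0  p-value = " 1 - normal(_b[grade]/_se[grade])

. display "H_0: coef>=0  p-value = " normal(_b[grade]/_se[grade])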

Finally, if you want to perform a test of inequality for two of your coefficients, such as H0: βage >= βgrade, you would first perform the following Wald test:

. test age-grade = 0

 ( 1)  [union]age - [union]grade = 0

           chi2(  1) =   27.44
         Prob > chi2 =    0.0000

Then calculate the appropriate p-value:

. local sign_ag = sign(_b[age]-_b[grade])

. display "H_0: age coef >= grade coef. p-value = " normal(`sign_ag'*sqrt(r(chi2)))
H_0: age coef >= grade coef. p-value = 8.112e-08
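An alternative sketch uses lincom, which estimates the difference between the coefficients and saves the estimate and its standard error in r(estimate) and r(se):

. lincom age - grade

. display "H_0: age coef >= grade coef. p-value = " normal(r(estimate)/r(se))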

Again, this approach (performing a Wald test and using the results to calculate the p-value for a one-sided test) is appropriate only when the Wald F statistic has 1 degree of freedom in the numerator or the Wald chi-squared statistic has 1 degree of freedom. The distributional relationships discussed above are not valid if these degrees of freedom are larger than 1.