Note: This FAQ is for Stata 10 and older versions of Stata.

In Stata 11, the margins command replaced mfx.

## I need to run mfx more than once on my dataset, and it's taking a long time. What can I do to make it run as fast as possible?

Title: Obtaining marginal effects quickly
Author: May Boggess, StataCorp

First, do not compute marginal effects for all the variables if you need only some of them. You can list just the variables you want with the varlist() option.

 . sysuse auto, clear
(1978 Automobile Data)

. regress mpg weight length

Source |       SS       df       MS              Number of obs =      74
-------------+------------------------------           F(  2,    71) =   69.34
Model |  1616.08062     2  808.040312           Prob > F      =  0.0000
Residual |  827.378835    71   11.653223           R-squared     =  0.6614
Total |  2443.45946    73  33.4720474           Root MSE      =  3.4137

------------------------------------------------------------------------------
mpg |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
weight |  -.0038515    .001586    -2.43   0.018    -.0070138   -.0006891
length |  -.0795935   .0553577    -1.44   0.155    -.1899736    .0307867
_cons |   47.88487    6.08787     7.87   0.000       35.746    60.02374
------------------------------------------------------------------------------

. mfx, predict(xb) eyex

Elasticities after regress
y  = Linear prediction (predict, xb)
=  21.297297
------------------------------------------------------------------------------
variable |      ey/ex    Std. Err.     z    P>|z|  [    95% C.I.   ]      X
---------+--------------------------------------------------------------------
weight |  -.5460497      .22509   -2.43   0.015  -.987208 -.104891   3019.46
length |  -.7023518      .48867   -1.44   0.151  -1.66012  .255414   187.932
------------------------------------------------------------------------------
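As a quick sanity check (this calculation is not part of the original FAQ), the eyex elasticity for a linear prediction evaluated at the means is just the coefficient times X divided by y. You can reproduce the weight entry by hand from the stored coefficient and the means shown in the table:

```stata
. display "elasticity of mpg w.r.t. weight = " _b[weight]*3019.46/21.297297
```

This should print a value close to the -.546 reported above.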

. mfx, predict(xb) eyex varlist(length)

Elasticities after regress
y  = Linear prediction (predict, xb)
=  21.297297
------------------------------------------------------------------------------
variable |      ey/ex    Std. Err.     z    P>|z|  [    95% C.I.   ]      X
---------+--------------------------------------------------------------------
length |  -.7023518      .48867   -1.44   0.151  -1.66012  .255414   187.932
------------------------------------------------------------------------------


Also, if you do not need the standard errors, skip computing them; that can be time consuming. You can suppress them with the nose option.

 . mfx, predict(xb) eyex nose

Elasticities after regress
y  = Linear prediction (predict, xb)
=  21.297297
-------------------------------------------------------------------------------
variable |          ey/ex                 X
---------+---------------------------------------------------------------------
weight |       -.5460497            3019.46
length |       -.7023518            187.932
-------------------------------------------------------------------------------


The third thing you can do is use the force option. This prevents mfx from performing some checks that really should be done at least once. It will not save a noticeable amount of time on a small dataset but may save a few seconds on a large one. Here is an example you can run to compare:

 . clear

. set obs 1000
obs was 0, now 1000

. set seed 23865

. generate y=int(uniform()*5)

. generate dum=uniform()>0.5

. forvalues i=1/3 {
2. generate x`i'=uniform()*20
3. }

. mlogit y x* dum, nolog

Multinomial logistic regression                   Number of obs   =       1000
                                                  LR chi2(16)     =      16.56
                                                  Prob > chi2     =     0.4145
Log likelihood = -1599.4028                       Pseudo R2       =     0.0052

------------------------------------------------------------------------------
y |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
0            |
x1 |  -.0018747   .0173475    -0.11   0.914    -.0358752    .0321257
x2 |  -.0317412   .0178107    -1.78   0.075    -.0666495    .0031671
x3 |   .0078188   .0174158     0.45   0.653    -.0263155    .0419532
dum |   .1263877   .2021265     0.63   0.532     -.269773    .5225484
_cons |   .0561327   .3304511     0.17   0.865    -.5915396     .703805
-------------+----------------------------------------------------------------
2            |
x1 |   .0008903   .0172058     0.05   0.959    -.0328323     .034613
x2 |  -.0027325   .0176622    -0.15   0.877    -.0373498    .0318849
x3 |  -.0162843   .0173379    -0.94   0.348     -.050266    .0176974
dum |   .0784832   .2000959     0.39   0.695    -.3136975     .470664
_cons |   .0107956   .3299801     0.03   0.974    -.6359536    .6575447
-------------+----------------------------------------------------------------
3            |
x1 |  -.0152809   .0167811    -0.91   0.363    -.0481712    .0176094
x2 |    -.02041   .0171957    -1.19   0.235     -.054113     .013293
x3 |   -.003969   .0168479    -0.24   0.814    -.0369902    .0290522
dum |   .1989654   .1952902     1.02   0.308    -.1837963    .5817271
_cons |   .2797189   .3186474     0.88   0.380    -.3448184    .9042563
-------------+----------------------------------------------------------------
4            |
x1 |   .0163031   .0168514     0.97   0.333     -.016725    .0493312
x2 |  -.0270747   .0172928    -1.57   0.117     -.060968    .0068186
x3 |   .0202191   .0169027     1.20   0.232    -.0129096    .0533477
dum |  -.1317624   .1965608    -0.67   0.503    -.5170144    .2534897
_cons |  -.0551544   .3244424    -0.17   0.865    -.6910499    .5807411
------------------------------------------------------------------------------
(y==1 is the base outcome)

. mfx, predict(p outcome(2))

Marginal effects after mlogit
y  = Pr(y==2) (predict, p outcome(2))
=  .18900063
------------------------------------------------------------------------------
variable |      dy/dx    Std. Err.     z    P>|z|  [    95% C.I.   ]      X
---------+--------------------------------------------------------------------
x1 |    .000176      .00213    0.08   0.934  -.004008  .004359    9.9926
x2 |   .0025355      .00218    1.16   0.246  -.001744  .006815   10.2813
x3 |  -.0033915      .00215   -1.58   0.114  -.007602  .000819   9.44558
dum*|   .0048542      .02479    0.20   0.845  -.043738  .053446      .496
------------------------------------------------------------------------------
(*) dy/dx is for discrete change of dummy variable from 0 to 1

. mfx, predict(p outcome(2)) force

Marginal effects after mlogit
y  =  (predict, p outcome(2))
=  .18900063
------------------------------------------------------------------------------
variable |      dy/dx    Std. Err.     z    P>|z|  [    95% C.I.   ]      X
---------+--------------------------------------------------------------------
x1 |    .000176      .00213    0.08   0.934  -.004008  .004359    9.9926
x2 |   .0025355      .00218    1.16   0.246  -.001744  .006815   10.2813
x3 |  -.0033915      .00215   -1.58   0.114  -.007602  .000819   9.44558
dum*|   .0048542      .02479    0.20   0.845  -.043738  .053446      .496
------------------------------------------------------------------------------
(*) dy/dx is for discrete change of dummy variable from 0 to 1


The command set rmsg on tells Stata to report the time taken by each command. If you set it before running mfx, Stata will tell you how long mfx took once it finishes. On my computer, it took close to the same time with force as without. Use set rmsg off when you are done looking at times.
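For reference, a timing run looks like this (a sketch built from the commands above; with rmsg on, Stata prints an r; t=... line giving the elapsed seconds after each command):

```stata
. set rmsg on
. mfx, predict(p outcome(2))          // timed with full checks
. mfx, predict(p outcome(2)) force    // timed with checks skipped
. set rmsg off
```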

Please let me stress here that you should not get in the habit of using force all the time. The checks that force avoids are very, very important because, if any of those checks fail, the results are probably invalid. So, remember, use force only if you have previously run mfx without force and verified that everything checks out OK.
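One pattern consistent with that advice (a sketch, not from the original FAQ) is to let the first mfx run perform all the checks, then add force, and nose if you can spare the standard errors, for the remaining runs. For example, after the mlogit above:

```stata
. mfx, predict(p outcome(0))                   // first run: all checks performed
. foreach k in 2 3 4 {
.     mfx, predict(p outcome(`k')) force nose  // later runs: checks and SEs skipped
. }
```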

You may be tempted to use the at() option to try to save time. It is not a bad idea, since mfx then does not have to compute the means of the variables, but it will not save a noticeable amount of time unless your dataset is very large. Let’s compare:

 . clear

. set obs 10000
obs was 0, now 10000

. set seed 23865

. generate y=int(uniform()*5)

. forvalues i=1/3 {
2. generate x`i'=uniform()*20
3. }

. quietly mlogit y x*

. forvalues i=1/3 {
2. quietly summarize x`i' if e(sample)
3. local xmean`i'=r(mean)
4. }

. mfx, predict(p outcome(2))

Marginal effects after mlogit
y  = Pr(y==2) (predict, p outcome(2))
=  .18965252
------------------------------------------------------------------------------
variable |      dy/dx    Std. Err.     z    P>|z|  [    95% C.I.   ]      X
---------+--------------------------------------------------------------------
x1 |  -.0008298      .00218   -0.38   0.703  -.005096  .003436   9.96244
x2 |     .00021      .00214    0.10   0.922  -.003975  .004395    9.9926
x3 |   .0025351      .00219    1.16   0.247  -.001759  .006829   10.2813
------------------------------------------------------------------------------

. mfx, predict(p outcome(2)) at(x1=`xmean1' x2=`xmean2' x3=`xmean3')

Marginal effects after mlogit
y  = Pr(y==2) (predict, p outcome(2))
=  .18965252
------------------------------------------------------------------------------
variable |      dy/dx    Std. Err.     z    P>|z|  [    95% C.I.   ]      X
---------+--------------------------------------------------------------------
x1 |  -.0008298      .00218   -0.38   0.703  -.005096  .003436   9.96244
x2 |     .00021      .00214    0.10   0.922  -.003975  .004395    9.9926
x3 |   .0025351      .00219    1.16   0.247  -.001759  .006829   10.2813
------------------------------------------------------------------------------

. mfx, predict(p outcome(2)) at(x1=`xmean1' x2=`xmean2' x3=`xmean3') force

Marginal effects after mlogit
y  =  (predict, p outcome(2))
=  .18965252
------------------------------------------------------------------------------
variable |      dy/dx    Std. Err.     z    P>|z|  [    95% C.I.   ]      X
---------+--------------------------------------------------------------------
x1 |  -.0008298      .00218   -0.38   0.703  -.005096  .003436   9.96244
x2 |     .00021      .00214    0.10   0.922  -.003975  .004395    9.9926
x3 |   .0025351      .00219    1.16   0.247  -.001759  .006829   10.2813
------------------------------------------------------------------------------


The last thing you can do to save time is to verify that your variables are well scaled. (Please see the FAQ stata.com/support/faqs/statistics/scaling-and-marginal-effects for a full discussion of scaling.) If mfx is taking a long time, you can check whether a large number of numerical-derivative iterations is slowing things down by using the tracelvl() option, as follows:

 . webuse ldose, clear

. glm r ldose, family(binomial n) link(clog)

Iteration 0:   log likelihood = -14.883594
Iteration 1:   log likelihood = -14.822264
Iteration 2:   log likelihood = -14.822228
Iteration 3:   log likelihood = -14.822228

Generalized linear models                          No. of obs      =         8
Optimization     : ML                              Residual df     =         6
                                                   Scale parameter =         1
Deviance         =  3.446418004                    (1/df) Deviance =   .574403
Pearson          =  3.294675153                    (1/df) Pearson  =  .5491125

Variance function: V(u) = u*(1-u/n)                [Binomial]
Link function    : g(u) = ln(-ln(1-u/n))           [Complementary log-log]

                                                   AIC             =  4.205557
Log likelihood   = -14.82222811                    BIC             = -9.030231

------------------------------------------------------------------------------
|                 OIM
r |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ldose |   22.04118   1.793089    12.29   0.000     18.52679    25.55557
_cons |  -39.57232   3.229047   -12.26   0.000    -45.90114   -33.24351
------------------------------------------------------------------------------

. mfx, predict(xb) trace(4)

calculating dydx (linear method)

equation i= 1
Iteration (dx): initial h = 1.793e-06
1  h= 1.345e-06
2  h= 1.009e-06
3  h= 7.566e-07
4  h= 5.675e-07
5  h= 4.256e-07
6  h= 3.192e-07
7  h= 2.394e-07
8  h= 1.795e-07
9  h= 1.347e-07
10  h= 1.010e-07
11  h= 7.575e-08
12  h= 5.681e-08
13  h= 4.261e-08
14  h= 3.196e-08
15  h= 2.397e-08
16  h= 1.797e-08
17  h= 1.348e-08
18  h= 1.011e-08
19  h= 7.583e-09
20  h= 5.687e-09
21  h= 4.266e-09
22  h= 3.199e-09
23  h= 2.399e-09
24  h= 1.800e-09
h= 1.800e-09
: df/d(xb) = 1

-------------------------
variable |      dy/dx
---------+---------------
ldose |       22.041
-------------------------

calculating standard errors (linear method)

equation k = 1:

Iteration (dx): initial h = 1.793e-06
1  h= 1.345e-06
2  h= 1.009e-06
3  h= 7.566e-07
4  h= 5.675e-07
5  h= 4.256e-07
6  h= 3.192e-07
7  h= 2.394e-07
8  h= 1.795e-07
9  h= 1.347e-07
10  h= 1.010e-07
11  h= 7.575e-08
12  h= 5.681e-08
13  h= 4.261e-08
14  h= 3.196e-08
15  h= 2.397e-08
16  h= 1.797e-08
17  h= 1.348e-08
18  h= 1.011e-08
19  h= 7.583e-09
20  h= 5.687e-09
21  h= 4.266e-09
22  h= 3.199e-09
23  h= 2.399e-09
24  h= 1.800e-09
h= 1.800e-09
dfdxb = 1

Iteration (dx): initial h = 1.795e-06
1  h= 1.346e-06
2  h= 1.010e-06
3  h= 7.574e-07
4  h= 5.680e-07
5  h= 4.260e-07
6  h= 3.195e-07
7  h= 2.396e-07
8  h= 1.797e-07
9  h= 1.348e-07
10  h= 1.011e-07
11  h= 7.582e-08
12  h= 5.687e-08
13  h= 4.265e-08
14  h= 3.199e-08
15  h= 2.399e-08
16  h= 1.799e-08
17  h= 1.349e-08
18  h= 1.012e-08
19  h= 7.591e-09
20  h= 5.693e-09
21  h= 4.270e-09
22  h= 3.202e-09
23  h= 2.402e-09
24  h= 1.801e-09
25  h= 1.351e-09
26  h= 1.013e-09
27  h= 7.599e-10
28  h= 5.699e-10
29  h= 4.275e-10
30  h= 3.206e-10
31  h= 2.404e-10
32  h= 1.803e-10
33  h= 1.353e-10
h= 1.353e-10
dfdxb = .99999684

-----Iteration (dx) for  d^2f/d(xb_k)d(xb_l)-----
1
Iteration (dx): initial h = 1.796e-06
1  h= 1.347e-06
2  h= 1.010e-06
3  h= 7.577e-07
4  h= 5.683e-07
5  h= 4.262e-07
6  h= 3.197e-07
7  h= 2.398e-07
8  h= 1.798e-07
9  h= 1.349e-07
10  h= 1.011e-07
11  h= 7.586e-08
12  h= 5.689e-08
13  h= 4.267e-08
14  h= 3.200e-08
15  h= 2.400e-08
16  h= 1.800e-08
17  h= 1.350e-08
18  h= 1.013e-08
19  h= 7.594e-09
20  h= 5.696e-09
21  h= 4.272e-09
22  h= 3.204e-09
23  h= 2.403e-09
24  h= 1.802e-09
25  h= 1.352e-09
26  h= 1.014e-09
27  h= 7.603e-10
28  h= 5.702e-10
h= 5.702e-10
dfdxb = .99999988
2
Iteration (dx): initial h = 1.797e-06
1  h= 1.348e-06
2  h= 1.011e-06
3  h= 7.583e-07
4  h= 5.687e-07
5  h= 4.265e-07
6  h= 3.199e-07
7  h= 2.399e-07
8  h= 1.799e-07
9  h= 1.350e-07
10  h= 1.012e-07
11  h= 7.592e-08
12  h= 5.694e-08
13  h= 4.270e-08
14  h= 3.203e-08
15  h= 2.402e-08
16  h= 1.802e-08
17  h= 1.351e-08
18  h= 1.013e-08
19  h= 7.600e-09
20  h= 5.700e-09
21  h= 4.275e-09
22  h= 3.206e-09
23  h= 2.405e-09
24  h= 1.804e-09
h= 1.804e-09
dfdxb = .99999991
3
Iteration (dx): initial h = 1.799e-06
1  h= 1.350e-06
2  h= 1.012e-06
3  h= 7.592e-07
4  h= 5.694e-07
5  h= 4.270e-07
6  h= 3.203e-07
7  h= 2.402e-07
8  h= 1.802e-07
9  h= 1.351e-07
10  h= 1.013e-07
11  h= 7.600e-08
12  h= 5.700e-08
13  h= 4.275e-08
14  h= 3.206e-08
15  h= 2.405e-08
16  h= 1.804e-08
17  h= 1.353e-08
18  h= 1.014e-08
19  h= 7.609e-09
20  h= 5.707e-09
21  h= 4.280e-09
22  h= 3.210e-09
h= 3.210e-09
dfdxb = 1
4
Iteration (dx): initial h = 1.803e-06
1  h= 1.352e-06
2  h= 1.014e-06
3  h= 7.604e-07
4  h= 5.703e-07
5  h= 4.277e-07
6  h= 3.208e-07
7  h= 2.406e-07
8  h= 1.805e-07
9  h= 1.353e-07
10  h= 1.015e-07
11  h= 7.613e-08
12  h= 5.710e-08
13  h= 4.282e-08
14  h= 3.212e-08
15  h= 2.409e-08
16  h= 1.807e-08
17  h= 1.355e-08
18  h= 1.016e-08
19  h= 7.622e-09
20  h= 5.716e-09
h= 5.716e-09
dfdxb = 1
5
Iteration (dx): initial h = 1.807e-06
1  h= 1.355e-06
2  h= 1.016e-06
3  h= 7.624e-07
4  h= 5.718e-07
5  h= 4.288e-07
6  h= 3.216e-07
7  h= 2.412e-07
8  h= 1.809e-07
9  h= 1.357e-07
10  h= 1.018e-07
11  h= 7.632e-08
12  h= 5.724e-08
13  h= 4.293e-08
14  h= 3.220e-08
15  h= 2.415e-08
16  h= 1.811e-08
17  h= 1.358e-08
18  h= 1.019e-08
h= 1.019e-08
dfdxb = 1
6
Iteration (dx): initial h = 1.814e-06
1  h= 1.360e-06
2  h= 1.020e-06
3  h= 7.652e-07
4  h= 5.739e-07
5  h= 4.304e-07
6  h= 3.228e-07
7  h= 2.421e-07
8  h= 1.816e-07
9  h= 1.362e-07
10  h= 1.021e-07
11  h= 7.661e-08
12  h= 5.746e-08
13  h= 4.309e-08
14  h= 3.232e-08
15  h= 2.424e-08
16  h= 1.818e-08
h= 1.818e-08
dfdxb = 1
7
Iteration (dx): initial h = 1.824e-06
1  h= 1.368e-06
2  h= 1.026e-06
3  h= 7.695e-07
4  h= 5.772e-07
5  h= 4.329e-07
6  h= 3.246e-07
7  h= 2.435e-07
8  h= 1.826e-07
9  h= 1.370e-07
10  h= 1.027e-07
11  h= 7.704e-08
12  h= 5.778e-08
13  h= 4.334e-08
14  h= 3.250e-08
15  h= 2.438e-08
h= 2.438e-08
dfdxb = .99999998
8
Iteration (dx): initial h = 1.839e-06
1  h= 1.380e-06
2  h= 1.035e-06
3  h= 7.760e-07
4  h= 5.820e-07
5  h= 4.365e-07
6  h= 3.274e-07
7  h= 2.455e-07
8  h= 1.841e-07
9  h= 1.381e-07
10  h= 1.036e-07
11  h= 7.769e-08
12  h= 5.827e-08
13  h= 4.370e-08
h= 4.370e-08
dfdxb = 1
9
Iteration (dx): initial h = 1.862e-06
1  h= 1.397e-06
2  h= 1.048e-06
3  h= 7.857e-07
4  h= 5.893e-07
5  h= 4.420e-07
6  h= 3.315e-07
7  h= 2.486e-07
8  h= 1.865e-07
9  h= 1.398e-07
10  h= 1.049e-07
11  h= 7.866e-08
12  h= 5.899e-08
h= 5.899e-08
dfdxb = 1
10
Iteration (dx): initial h = 1.897e-06
1  h= 1.423e-06
2  h= 1.067e-06
3  h= 8.003e-07
4  h= 6.002e-07
5  h= 4.501e-07
6  h= 3.376e-07
7  h= 2.532e-07
8  h= 1.899e-07
9  h= 1.424e-07
10  h= 1.068e-07
11  h= 8.012e-08
h= 8.012e-08
dfdxb = 1
11
Iteration (dx): initial h = 1.949e-06
1  h= 1.461e-06
2  h= 1.096e-06
3  h= 8.221e-07
4  h= 6.166e-07
5  h= 4.624e-07
6  h= 3.468e-07
7  h= 2.601e-07
8  h= 1.951e-07
9  h= 1.463e-07
h= 1.463e-07
dfdxb = 1
12
Iteration (dx): initial h = 2.026e-06
1  h= 1.520e-06
2  h= 1.140e-06
3  h= 8.548e-07
4  h= 6.411e-07
5  h= 4.808e-07
6  h= 3.606e-07
7  h= 2.705e-07
8  h= 2.029e-07
h= 2.029e-07
dfdxb = 1
13
Iteration (dx): initial h = 2.143e-06
1  h= 1.607e-06
2  h= 1.205e-06
3  h= 9.039e-07
4  h= 6.779e-07
5  h= 5.085e-07
6  h= 3.813e-07
7  h= 2.860e-07
h= 2.860e-07
dfdxb = 1
14
Iteration (dx): initial h = 2.317e-06
1  h= 1.738e-06
2  h= 1.303e-06
3  h= 9.776e-07
4  h= 7.332e-07
5  h= 5.499e-07
6  h= 4.124e-07
h= 4.124e-07
dfdxb = 1
15
Iteration (dx): initial h = 2.579e-06
1  h= 1.934e-06
2  h= 1.451e-06
3  h= 1.088e-06
4  h= 8.161e-07
5  h= 6.121e-07
h= 6.121e-07
dfdxb = 1
16
Iteration (dx): initial h = 2.972e-06
1  h= 2.229e-06
2  h= 1.672e-06
3  h= 1.254e-06
4  h= 9.404e-07
h= 9.404e-07
dfdxb = 1
17
Iteration (dx): initial h = 3.561e-06
1  h= 2.671e-06
2  h= 2.003e-06
3  h= 1.502e-06
h= 1.502e-06
dfdxb = 1
18
Iteration (dx): initial h = 4.445e-06
1  h= 3.334e-06
2  h= 2.501e-06
h= 2.501e-06
dfdxb = 1
19
Iteration (dx): initial h = 5.771e-06
1  h= 4.329e-06
2  h= 3.246e-06
h= 3.246e-06
dfdxb = 1
20
Iteration (dx): initial h = 7.760e-06
1  h= 5.820e-06
h= 5.820e-06
dfdxb = 1
21
Iteration (dx): initial h = .00001074
1  h= 8.058e-06
h= 8.058e-06
dfdxb = 1
22
Iteration (dx): initial h = .00001522
1  h= .00001141
h= .00001141
dfdxb = 1
23
Iteration (dx): initial h = .00002193
1  h= .00001645
h= .00001645
dfdxb = 1
24
Iteration (dx): initial h = .000032
1  h= .000024
h= .000024
dfdxb = 1
25
Iteration (dx): initial h = .0000471
1  h= .00003533
h= .00003533
dfdxb = 1
26
Iteration (dx): initial h = .00006976
1  h= .00005232
h= .00005232
dfdxb = 1
27
Iteration (dx): initial h = .00010374
1  h= .00007781
h= .00007781
dfdxb = 1
28
Iteration (dx): initial h = .00015472
1  h= .00011604
h= .00011604
dfdxb = 1
29
Iteration (dx): initial h = .00023118
1  h= .00017339
h= .00017339
dfdxb = 1
30
Iteration (dx): initial h = .00034588
1  h= .00025941
h= .00025941
dfdxb = 1
31
Iteration (dx): initial h = .00051792
1  h= .00038844
h= .00038844
dfdxb = 1
32
Iteration (dx): initial h = .00077598
1  h= .00058198
h= .00058198
dfdxb = 1
33
Iteration (dx): initial h = .00116307
1  h= .0008723
h= .0008723
dfdxb = 1
34
Iteration (dx): initial h = .00174371
1  h= .00130778
h= .00130778
dfdxb = 1
35
Iteration (dx): initial h = .00261467
1  h= .001961
h= .001961
dfdxb = 1
36
Iteration (dx): initial h = .00392111
1  h= .00294083
h= .00294083
dfdxb = 1
37
Iteration (dx): initial h = .00588077
1  h= .00441058
h= .00441058
dfdxb = 1
38
Iteration (dx): initial h = .00882026
1  h= .0066152
h= .0066152
dfdxb = 1
39
Iteration (dx): initial h = .0132295
1  h= .00992212
h= .00992212
dfdxb = 1
40
Iteration (dx): initial h = .01984335
1  h= .01488251
h= .01488251
dfdxb = 1
41
Iteration (dx): initial h = .02976412
1  h= .02232309
h= .02232309
dfdxb = 1
42
Iteration (dx): initial h = .04464529
1  h= .03348397
h= .03348397
dfdxb = 1
43
Iteration (dx): initial h = .06696704
1  h= .05022528
h= .05022528
dfdxb = 1
44
Iteration (dx): initial h = .10044966
1  h= .07533724
h= .07533724
dfdxb = 1
45 k= 1, l= 1: d^2f/d(xb_k)d(xb_l) = 0

ldose ... continuous
1   d^2f/dxdb = 1
2   d^2f/dxdb = 0
ldose: Std. Err. = 1.7930887

Marginal effects after glm
y  = linear predictor (predict, xb)
= -.04312548
------------------------------------------------------------------------------
variable |      dy/dx    Std. Err.     z    P>|z|  [    95% C.I.   ]      X
---------+--------------------------------------------------------------------
ldose |   22.04118     1.79309   12.29   0.000   18.5268  25.5556   1.79343
------------------------------------------------------------------------------


The answer is accurate enough, but it sure took some effort to reach it. To see where my scale problem may be, I summarize my data and notice that the response variable r has a large range compared with the other variables. So I decide to rescale r and see if that helps:

 . summarize

Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
ldose |         8    1.793425    .0674563     1.6907     1.8839
n |         8      60.125    2.232071         56         63
r |         8      36.375    22.55747          6         61

. replace r=r/10

. glm r ldose, family(binomial n) link(clog)
note: r has noninteger values

Iteration 0:   log likelihood = -12.210073
Iteration 1:   log likelihood = -12.202629
Iteration 2:   log likelihood = -12.202628

Generalized linear models                          No. of obs      =         8
Optimization     : ML                              Residual df     =         6
                                                   Scale parameter =         1
Deviance         =  1.261310742                    (1/df) Deviance =  .2102185
Pearson          =  1.264872999                    (1/df) Pearson  =  .2108122

Variance function: V(u) = u*(1-u/n)                [Binomial]
Link function    : g(u) = ln(-ln(1-u/n))           [Complementary log-log]

                                                   AIC             =  3.550657
Log likelihood   = -12.20262839                    BIC             = -11.21534

------------------------------------------------------------------------------
|                 OIM
r |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ldose |   10.26508    3.45761     2.97   0.003     3.488291    17.04187
_cons |  -21.37225   6.326251    -3.38   0.001    -33.77147   -8.973022
------------------------------------------------------------------------------

. mfx, predict(xb) trace(4)

calculating dydx (linear method)

equation i= 1
Iteration (dx): initial h = 1.793e-06
1  h= 1.345e-06
2  h= 1.009e-06
3  h= 7.566e-07
4  h= 5.675e-07
5  h= 4.256e-07
6  h= 3.192e-07
7  h= 2.394e-07
h= 2.394e-07
: df/d(xb) = 1

-------------------------
variable |      dy/dx
---------+---------------
ldose |       10.265
-------------------------

calculating standard errors (linear method)

equation k = 1:

Iteration (dx): initial h = 1.793e-06
1  h= 1.345e-06
2  h= 1.009e-06
3  h= 7.566e-07
4  h= 5.675e-07
5  h= 4.256e-07
6  h= 3.192e-07
7  h= 2.394e-07
h= 2.394e-07
dfdxb = 1

Iteration (dx): initial h = 1.795e-06
1  h= 1.346e-06
2  h= 1.010e-06
3  h= 7.574e-07
4  h= 5.680e-07
5  h= 4.260e-07
6  h= 3.195e-07
7  h= 2.396e-07
h= 2.396e-07
dfdxb = 1
k= 1, l= 1: d^2f/d(xb_k)d(xb_l) = 0

ldose ... continuous
1   d^2f/dxdb = 1
2   d^2f/dxdb = 0
ldose: Std. Err. = 3.4576096

Marginal effects after glm
y  = linear predictor (predict, xb)
=  -2.962593
------------------------------------------------------------------------------
variable |      dy/dx    Std. Err.     z    P>|z|  [    95% C.I.   ]      X
---------+--------------------------------------------------------------------
ldose |   10.26508     3.45761    2.97   0.003   3.48829  17.0419   1.79343
------------------------------------------------------------------------------