Statalist The Stata Listserver


Re: st: least products regression

From   Joseph Coveney <>
To   Statalist <>
Subject   Re: st: least products regression
Date   Tue, 22 Aug 2006 20:32:52 +0900

Richard Hiscock wrote:

I am exploring using ordinary least products regression (or geometric mean
regression) rather than the Bland-Altman difference method to assess
disagreement between two methods of measurement (continuous outcome) when
neither is a gold standard.  I have searched under cross product, OLproduct,
and geometric regression with no success, in particular for automated methods
to obtain the SE and hence a CI.
Any guidance would be appreciated.


Googling {stata "reduced major axis regression" OR "ordinary least products"
OR "geometric mean regression" OR "standardised principal component
regression"} turns up an objective function on SAS's site for ordinary least
products regression under the alias of reduced major axis regression.

Stata doesn't seem to have anything canned for what you want, but using an
objective function from SAS's website, you can readily whip up something
using -amoeba-, for example, and use it in conjunction with -jackknife-
or -bootstrap- to get your error estimates.  Try something like what I show
below following the Web sources.

I recall reading somewhere (it might have been one of Raymond Carroll's
lecture slide shows available on the Web, but don't quote me on this) that
these approaches (Deming regression, reduced major axis regression) to
errors-in-variables regression are shaky for everything but the most
restrictive assumption about the ratio of error variances in the x and y
variables.  Both the official -eivreg- and Yulia Marchenko's -deming- want
the user to do some thinking about or independent investigation of this
assumption and to make it explicit.  Because you're making a particular
assumption about this anyway in using least products regression, you'd
probably be better off using one of these commands in conjunction with one
of the resampling techniques.

I recall (again maybe from Professor Carroll's slides) that the approach
also makes a strong assumption about the model being correctly specified,
i.e., linear functional relationship, no missing variables, and so on.

Joseph Coveney

Web source for SAS:

Reduced major axis regression
This regression model minimizes the sum of the areas of the right triangles
formed by each data point's vertical and horizontal deviations from the
fitted line, with the fitted line as hypotenuse. Use SAS/OR PROC NLP with an
appropriate minimization criterion. For example:
   proc nlp;
      min area;
      parms b1=1, b0=1;
      area=(y - (b0 + b1*x))**2 / abs(b1);
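For what it's worth, the RMA estimates also have a well-known closed form:
the slope is ±sd(y)/sd(x), taking the sign of the correlation, and the line
passes through the point of means.  As a quick sanity check outside SAS and
Stata, here is a Python sketch on simulated data (the data and variable names
are made up for illustration) showing that the closed form does minimize the
triangle-area objective above:

```python
import random
import statistics as st

# Simulated data standing in for the two measurement methods
random.seed(1)
x = [random.random() for _ in range(50)]
y = [0.5 + 0.8 * xi + random.gauss(0, 0.1) for xi in x]

mx, my = st.mean(x), st.mean(y)
cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

# Closed-form RMA (least products) fit: slope = sign(cov) * sd(y)/sd(x),
# with the fitted line passing through the point of means
b1 = (1 if cov >= 0 else -1) * st.stdev(y) / st.stdev(x)
b0 = my - b1 * mx

def rma_area(b0_, b1_):
    # Sum of triangle areas: squared vertical deviation / |slope|
    return sum((yi - (b0_ + b1_ * xi)) ** 2
               for xi, yi in zip(x, y)) / abs(b1_)

# The closed form should beat nearby parameter values
best = rma_area(b0, b1)
for d in (-0.01, 0.01):
    assert best < rma_area(b0 + d, b1)
    assert best < rma_area(b0, b1 + d)
```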

Errors-in-variables regression [Deming regression]
This is a regression model that minimizes the perpendicular distances from
the data points to the fitted line. Use SAS/OR PROC NLP with an appropriate
minimization criterion. For example:
   proc nlp;
      min dist;
      parms b1=1, b0=1;
      dist=(y - (b0 + b1*x))**2 / (1 + b1*b1);
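Deming regression with the error-variance ratio fixed at one (i.e.,
orthogonal regression) likewise has a closed-form slope,
(Syy - Sxx + sqrt((Syy - Sxx)^2 + 4*Sxy^2)) / (2*Sxy), in terms of the
corrected sums of squares and cross products.  A Python sketch along the
same lines (again with made-up simulated data) checks it against the
perpendicular-distance objective above:

```python
import math
import random

random.seed(1)
x = [random.random() for _ in range(50)]
y = [0.5 + 0.8 * xi + random.gauss(0, 0.1) for xi in x]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

# Closed-form orthogonal (Deming, lambda = 1) slope and intercept
b1 = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
b0 = my - b1 * mx

def perp_ss(b0_, b1_):
    # Sum of squared perpendicular distances to the line y = b0_ + b1_*x
    return sum((yi - (b0_ + b1_ * xi)) ** 2
               for xi, yi in zip(x, y)) / (1 + b1_ ** 2)

# The closed form should beat nearby parameter values
best = perp_ss(b0, b1)
for d in (-0.01, 0.01):
    assert best < perp_ss(b0 + d, b1)
    assert best < perp_ss(b0, b1 + d)
```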

A first-pass example in Stata of reduced major axis (RMA), alias ordinary
least products regression, alias geometric mean regression, alias . . .

The dataset is a dummy one of uniform y versus uniform x, N = 50.
Substitute your own variables in the objective function called by -amoeba-
and -quasi-.  Also, you might want to play with the initial estimates in
order to assure yourself that -amoeba- is nestling into a global minimum.
You can use OLS linear regression to get the vector of initial parameter
estimates (for RMA the slope is just the OLS slope divided by the
correlation coefficient, which gives a handy check on the converged answer),
and if the two measurements have the same scale, then you can use
(1,1) or similar.

set more off
set seed `=date("2006-08-22", "ymd")'

* Objective function for -amoeba- and -quasi-: the negative of the summed
* RMA (triangle-area) criterion, so that maximizing it minimizes the criterion
capture program drop rma
program define rma
    version 9.2
    tempvar objective sum_objective
    quietly {
        generate double `objective' = ( y - (`1'[1,1] + ///
          `1'[1,2] * x) )^2 / abs(`1'[1,2])
        egen double `sum_objective' = sum(`objective')
        scalar `2' = -`sum_objective'
    }
end

* Dummy dataset: uniform y versus uniform x, N = 50
set obs 50
generate float x = uniform()
generate float y = uniform()

matrix define initial = (0, 1)
scalar maximum = .
matrix define beta = (0, 0)
amoeba rma initial maximum beta
matrix initial = beta
quasi rma initial maximum beta

* Wrapper so -jackknife- and -bootstrap- can pick up the estimates
capture program drop wrapper
program define wrapper, rclass
    syntax [if]
    if "`if'" != "" {
        keep `if'
        quietly count
        return scalar N = r(N)
        quasi rma initial maximum beta
    }
    else {
        quasi rma initial maximum beta
    }
    return scalar b0 = beta[1,1]
    return scalar b1 = beta[1,2]
end

jackknife b0 = r(b0) b1 = r(b1), rclass: wrapper
bootstrap b0 = r(b0) b1 = r(b1), nowarn: wrapper
