Thanks for your answer, but I think I was unclear in my first message.
I have a theoretical model of belief formation, where the belief in
round t is a function of the past history of the game (denoted X) and a
gamma parameter, so the "theoretical" belief is f(gamma,X).
The belief for each strategy is updated independently of the others
(i.e. the belief about a given strategy depends only on past occurrences
of that strategy, not on the history of the other strategies), but the
updating rule ensures that the beliefs sum to one.
The players have been asked to report their beliefs, which I label the
"actual" beliefs.
What I want to do is estimate the gamma parameter using the "actual"
beliefs as the dependent variable, with f(gamma,X) and an error term on
the RHS:
actual belief = f(gamma,X) + epsilon
Using this equation, I could estimate the gamma parameter using only
the data on beliefs for a single strategy, but that would be
inefficient, since it would make no use of the information contained in
the other beliefs.
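To make the single-strategy estimation concrete, here is a minimal Python sketch. The discounted-frequency form of f(gamma,X) (a Cheung-Friedman-style geometric weighting of past play), the 0.5 initial belief, and the simulated data are all illustrative assumptions, not my actual model:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def predicted_belief(gamma, played):
    """f(gamma, X) for one strategy: geometrically discounted frequency
    of its past occurrences (illustrative assumption only).
    played[i] = 1 if the strategy occurred in round i, else 0."""
    T = len(played)
    beliefs = np.empty(T)
    beliefs[0] = 0.5                      # arbitrary initial belief
    for t in range(1, T):
        ages = np.arange(t - 1, -1, -1)   # age 0 = most recent round
        w = gamma ** ages                 # discount older rounds more
        beliefs[t] = (w @ played[:t]) / w.sum()
    return beliefs

def sse(gamma, actual, played):
    # actual belief = f(gamma, X) + epsilon
    return np.sum((actual - predicted_belief(gamma, played)) ** 2)

# simulated data with a known gamma, to check that it can be recovered
rng = np.random.default_rng(0)
played = rng.integers(0, 2, size=50)
actual = predicted_belief(0.7, played) + rng.normal(0, 0.02, size=50)
res = minimize_scalar(sse, bounds=(0.0, 1.0), args=(actual, played),
                      method="bounded")
gamma_hat = res.x
```

With real data one would replace the simulated series by the reported beliefs and the observed history; minimize_scalar suffices here only because gamma is the single free parameter.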
My problem, however, is that I do not see an obvious estimation
strategy that makes efficient use of all the information (estimating two
equations with independent errors seems the wrong thing to do, but I am
wary of using a bivariate normal distribution in a two-equation model
where the gamma parameter is constrained to be equal across equations).
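To illustrate the naive pooled alternative I am hesitant about: one can stack the residuals of all strategies into a single nonlinear least-squares objective with a common gamma. The Python sketch below (the discounted-frequency form of f and the simulated data are again only illustrative assumptions) treats the stacked errors as independent, so it is consistent but ignores the cross-equation correlation:

```python
import numpy as np
from scipy.optimize import least_squares

def predicted_beliefs(gamma, history):
    """f(gamma, X) for all K strategies at once (illustrative
    discounted-frequency form). history[t] = index (0..K-1) of the
    strategy observed in round t. Each row of the result sums to one."""
    T = len(history)
    K = int(history.max()) + 1
    out = np.full((T, K), 1.0 / K)        # uniform initial belief
    for t in range(1, T):
        ages = np.arange(t - 1, -1, -1)   # age 0 = most recent round
        w = gamma ** ages
        counts = np.zeros(K)
        np.add.at(counts, history[:t], w) # discounted count per strategy
        out[t] = counts / w.sum()         # normalization: rows sum to 1
    return out

def stacked_residuals(params, actual, history):
    # stack actual - f(gamma, X) over all strategies and rounds
    return (actual - predicted_beliefs(params[0], history)).ravel()

# simulated data with a known common gamma
rng = np.random.default_rng(1)
history = rng.integers(0, 3, size=60)
actual = predicted_beliefs(0.8, history) + rng.normal(0, 0.02, (60, 3))
fit = least_squares(stacked_residuals, x0=[0.5], bounds=(0.0, 1.0),
                    args=(actual, history))
gamma_hat = fit.x[0]
```

An efficient version would weight the stacked residuals by the inverse of the cross-equation error covariance (feasible GLS), which is exactly the part I do not see how to set up.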
On 08/08/2005 18:45, austin nichols wrote:
> I'm not going to read Cheung and Friedman (1997) to figure
> out what the real model would be; I'm just going to assume
> you want to see whether the theoretical prediction "explains"
> each individual's beliefs at each point in time, based on
> some initial starting point, using some test of whether the
> coefficients on the theoretical predictions are non-zero.
> It's not clear whether you are inferring beliefs from play,
> or just asking players to report beliefs. This distinction
> could have a huge impact on the error structure.
> Nevertheless, here is one way to go:
> Express beliefs as odds ratios: p(A)/p(not A) so the
> left-hand-side variable ranges from zero to infinity. Then
> take the logs of the odds ratios. Now the LHS variable
> ranges from negative infinity to positive infinity. Do the
> same for your theoretical predictions. If probabilities of
> zero or one are possible, you might have problems, both
> practically and theoretically, since this would be an
> absorbing state for any future updating of beliefs.
> Now you've got three real-valued LHS variables, call them A
> B C, in three equations, to regress on predictions from
> your model. They no longer sum to one, but the deviations
> are clearly correlated across the equations. Estimate each
> regression and then use -suest- to adjust for correlations
> of errors across all 3 simultaneously. You probably want
> to cluster by individual.
> I don't think you really want a learning curve model (aka
> latent growth curve model, see HLM software for these
> models), but you do want to allow for round-specific fixed
> effects, probably. Just -tab round, gen(rd)- and then
> include rd* as regressors in each model, like so...
> g C = ln(pc/(1-pc))
> reg C theoreticalC rd*
> est store C
> * (estimate and store A and B the same way)
> suest A B C, cluster(id)
> Maybe you want indiv fixed effects of some kind, too...
> If you want to see a graph of the log-odds-ratio versus
> probability, try:
> range p 0 1 100
> g lor=ln(p/(1-p))
> line lor p