
# st: RE: Panel data: large number of linear time trends

From     Nick Cox
To       "'statalist@hsphsun2.harvard.edu'"
Subject  st: RE: Panel data: large number of linear time trends
Date     Fri, 24 Feb 2012 17:49:43 +0000

Maligning all those who presumably gave advice on this with a blanket criticism is not the best way to win friends and influence people, to allude to a once well-known book.

Why is this thought or implied to be so difficult?

```stata
. webuse grunfeld, clear

. statsby slope=_b[year], by(company) : regress mvalue year
(running regress on estimation sample)

      command:  regress mvalue year
        slope:  _b[year]
           by:  company

Statsby groups
----+--- 1 ---+--- 2 ---+--- 3 ---+--- 4 ---+--- 5
..........

. l

     +--------------------+
     | company      slope |
     |--------------------|
  1. |       1   56.35872 |
  2. |       2    1.46263 |
  3. |       3   7.183384 |
  4. |       4    8.91579 |
  5. |       5   10.77564 |
     |--------------------|
  6. |       6   33.69406 |
  7. |       7   .4736842 |
  8. |       8    27.8215 |
  9. |       9   7.434586 |
 10. |      10   -.563985 |
     +--------------------+
```
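If the slopes are wanted back in the original panel rather than in a collapsed dataset, -statsby- can save its results to a file, which is then merged on the panel identifier. A minimal sketch using the same Grunfeld example (the filename slopes.dta is arbitrary):

```stata
webuse grunfeld, clear

* save one slope per company to a separate dataset
statsby slope=_b[year], by(company) saving(slopes, replace) : ///
    regress mvalue year

* statsby replaces the data in memory, so reload the panel,
* then attach each company's slope to all of its observations
webuse grunfeld, clear
merge m:1 company using slopes
```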

Nick
n.j.cox@durham.ac.uk

William Gui Woolston

I am estimating a panel data model, where the unit of observation is a
county-year.  There are roughly 3,100 counties in the United States,
and I have data for 12 years.

I wish to include linear county-time trends.  That is, I want a
separate time trend for each county.

Estimating this model by "brute force" (by interacting time with a
dummy for each county) would mean adding roughly 3,100 variables
to my model.  Is there a more efficient way to estimate this model?
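[If the trends are to be included directly in the regression, Stata's factor-variable notation (Stata 11 and later) builds the unit-by-time interactions on the fly, so the 3,100 interaction variables never have to be generated by hand. A hedged sketch on the Grunfeld data, with company standing in for county (the county panel's variable names would differ):

```stata
webuse grunfeld, clear

* unit-specific linear trends via factor variables:
* c.year#i.company fits one year slope per company
* without creating any interaction variables by hand
regress invest mvalue kstock c.year#i.company
```

With ~3,100 counties this still estimates several thousand coefficients, so memory and matsize limits may bind, but the data themselves are not inflated.]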

Thank you so much for your consideration.

William

PS.  Note that some versions of this question have appeared in earlier