
Principles of Econometrics, Fourth Edition

R. Carter Hill, William E. Griffiths, and Guay C. Lim
Publisher: Wiley
Copyright: 2011
ISBN-13: 978-0-470-62673-3
Pages: 758; hardcover
Price: $134.50
Supplements: Using Stata for Principles of Econometrics, 4th Edition

Comment from the Stata technical group

Principles of Econometrics, Fourth Edition, by R. Carter Hill, William E. Griffiths, and Guay C. Lim, is an introductory book for undergraduate econometrics. This book exemplifies learning by doing, getting the reader working through examples as quickly as possible with a minimum of theory. Although Principles of Econometrics is designed to be the textbook in a principles of econometrics course, the style and coverage also make it useful background reading for higher-level courses.

The authors cover a broad range of topics in econometrics. Appendices quickly review the required mathematical, probability, and elementary-statistics tools, and a new Probability Primer provides extra exercises. The first seven chapters cover estimation and inference in linear models without using matrix algebra. The next two chapters cover heteroskedasticity and stationary time series. Chapters 10 and 11 cover the method-of-moments approach to least squares and instrumental-variables estimators and their application in simultaneous-equations models. New in the fourth edition is a discussion of instrument strength. Chapters 12, 13, and 14 provide nice introductions to the advanced time-series topics of nonstationarity, multiple time series, and time-varying volatility. Chapters 15 and 16 introduce two advanced topics in microeconometrics: panel-data models and models for qualitative and limited dependent variables.

The numerous, nicely discussed examples in this book make the hands-on approach work well. The level of abstraction is held to a minimum, and instruction proceeds by interpreting examples. The many excellent exercises will help interested readers gain experience with, and understanding of, the methods discussed in the text.
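To give a flavor of the book's hands-on approach, the simple least squares formulas derived in Chapter 2 (and again in Appendix 2A) can be computed by hand. The sketch below is illustrative only: it uses made-up data, not the book's food expenditure data, and plain Python rather than the Stata commands covered in the companion supplement.

```python
# Illustrative sketch of the Chapter 2 least squares estimates for
# the simple regression y = b1 + b2*x. The data here are made up,
# not taken from the book.

def least_squares(x, y):
    """Return (b1, b2) minimizing the sum of squared residuals."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    # b2 = sum of (x - xbar)(y - ybar) over sum of (x - xbar)^2
    b2 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sum(
        (xi - xbar) ** 2 for xi in x
    )
    # b1 = ybar - b2 * xbar
    b1 = ybar - b2 * xbar
    return b1, b2

if __name__ == "__main__":
    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [2.1, 3.9, 6.2, 7.8, 10.1]
    b1, b2 = least_squares(x, y)
    print(b1, b2)  # intercept and slope estimates
```

The book develops these same deviation-from-the-mean formulas step by step before introducing software output, which is what makes the learning-by-doing style work.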

Table of contents

Chapter 1 An Introduction to Econometrics
1.1 Why Study Econometrics?
1.2 What Is Econometrics About?
1.2.1 Some Examples
1.3 The Econometric Model
1.4 How Are Data Generated?
1.4.1 Experimental Data
1.4.2 Nonexperimental Data
1.5 Economic Data Types
1.5.1 Time-Series Data
1.5.2 Cross-Section Data
1.5.3 Panel or Longitudinal Data
1.6 The Research Process
1.7 Writing an Empirical Research Paper
1.7.1 Writing a Research Proposal
1.7.2 A Format for Writing a Research Project
1.8 Sources of Economic Data
1.8.1 Links to Economic Data on the Internet
1.8.2 Interpreting Economic Data
1.8.3 Obtaining the Data
Probability Primer
Learning Objectives
P.1 Random Variables
P.2 Probability Distributions
P.3 Joint, Marginal, and Conditional Probabilities
P.3.1 Marginal Distributions
P.3.2 Conditional Probability
P.3.3 Statistical Independence
P.4 A Digression: Summation Notation
P.5 Properties of Probability Distributions
P.5.1 Expected Value of a Random Variable
P.5.2 Conditional Expectation
P.5.3 Rules for Expected Values
P.5.4 Variance of a Random Variable
P.5.5 Expected Values of Several Random Variables
P.5.6 Covariance Between Two Random Variables
P.6 The Normal Distribution
P.7 Exercises
Chapter 2 The Simple Linear Regression Model
Learning Objectives
2.1 An Economic Model
2.2 An Econometric Model
2.2.1 Introducing the Error Term
2.3 Estimating the Regression Parameters
2.3.1 The Least Squares Principle
2.3.2 Estimates for the Food Expenditure Function
2.3.3 Interpreting the Estimates
2.3.3a Elasticities
2.3.3b Prediction
2.3.3c Computer Output
2.3.4 Other Economic Models
2.4 Assessing the Least Squares Estimators
2.4.1 The Estimator b2
2.4.2 The Expected Values of b1 and b2
2.4.3 Repeated Sampling
2.4.4 The Variances and Covariances of b1 and b2
2.5 The Gauss–Markov Theorem
2.6 The Probability Distributions of the Least Squares Estimators
2.7 Estimating the Variance of the Error Term
2.7.1 Estimating the Variances and Covariance of the Least Squares Estimators
2.7.2 Calculations for the Food Expenditure Data
2.7.3 Interpreting the Standard Errors
2.8 Estimating Nonlinear Relationships
2.8.1 Quadratic Functions
2.8.2 Using a Quadratic Model
2.8.3 A Log-Linear Function
2.8.4 Using a Log-Linear Model
2.8.5 Choosing a Functional Form
2.9 Regression with Indicator Variables
2.10 Exercises
2.10.1 Problems
2.10.2 Computer Exercises
Appendix 2A Derivation of the Least Squares Estimates
Appendix 2B Deviation from the Mean Form of b2
Appendix 2C b2 Is a Linear Estimator
Appendix 2D Derivation of Theoretical Expression for b2
Appendix 2E Deriving the Variance of b2
Appendix 2F Proof of the Gauss–Markov Theorem
Appendix 2G Monte Carlo Simulation
2G.1 The Regression Function
2G.2 The Random Error
2G.3 Theoretically True Values
2G.4 Creating a Sample of Data
2G.5 Monte Carlo Objectives
2G.6 Monte Carlo Results
Chapter 3 Interval Estimation and Hypothesis Testing
Learning Objectives
3.1 Interval Estimation
3.1.1 The t-Distribution
3.1.2 Obtaining Interval Estimates
3.1.3 An Illustration
3.1.4 The Repeated Sampling Context
3.2 Hypothesis Tests
3.2.1 The Null Hypothesis
3.2.2 The Alternative Hypothesis
3.2.3 The Test Statistic
3.2.4 The Rejection Region
3.2.5 A Conclusion
3.3 Rejection Regions for Specific Alternatives
3.3.1 One-Tail Tests with Alternative “Greater Than” (>)
3.3.2 One-Tail Tests with Alternative “Less Than” (<)
3.3.3 Two-Tail Tests with Alternative “Not Equal To” (≠)
3.4 Examples of Hypothesis Tests
3.4.1 Right-Tail Tests
3.4.1a One-Tail Test of Significance
3.4.1b One-Tail Test of an Economic Hypothesis
3.4.2 Left-Tail Tests
3.4.3 Two-Tail Tests
3.4.3a Two-Tail Test of an Economic Hypothesis
3.4.3b Two-Tail Test of Significance
3.5 The p-Value
3.5.1 p-Value for a Right-Tail Test
3.5.2 p-Value for a Left-Tail Test
3.5.3 p-Value for a Two-Tail Test
3.5.4 p-Value for a Two-Tail Test of Significance
3.6 Linear Combinations of Parameters
3.6.1 Estimating Expected Food Expenditure
3.6.2 An Interval Estimate of Expected Food Expenditure
3.6.3 Testing a Linear Combination of Parameters
3.6.4 Testing Expected Food Expenditure
3.7 Exercises
3.7.1 Problems
3.7.2 Computer Exercises
Appendix 3A Derivation of the t-Distribution
Appendix 3B Distribution of the t-Statistic under H1
Appendix 3C Monte Carlo Simulation
3C.1 Repeated Sampling Properties of Interval Estimators
3C.2 Repeated Sampling Properties of Hypothesis Tests
3C.3 Choosing the Number of Monte Carlo Samples
Chapter 4 Prediction, Goodness-of-Fit, and Modeling Issues
Learning Objectives
4.1 Least Squares Prediction
4.1.1 Prediction in the Food Expenditure Model
4.2 Measuring Goodness-of-Fit
4.2.1 Correlation Analysis
4.2.2 Correlation Analysis of R2
4.2.3 The Food Expenditure Example
4.2.4 Reporting the Results
4.3 Modeling Issues
4.3.1 The Effects of Scaling the Data
4.3.2 Choosing a Functional Form
4.3.3 A Linear-Log Food Expenditure Model
4.3.4 Using Diagnostic Residual Plots
4.3.4a Heteroskedastic Residual Pattern
4.3.4b Detecting Model Specification Errors
4.3.5 Are the Regression Errors Normally Distributed?
4.4 Polynomial Models
4.4.1 Quadratic and Cubic Equations
4.4.2 An Empirical Example
4.5 Log-Linear Models
4.5.1 A Growth Model
4.5.2 A Wage Equation
4.5.3 Prediction in the Log-Linear Model
4.5.4 A Generalized R2 Measure
4.5.5 Prediction Intervals in the Log-Linear Model
4.6 Log-Log Models
4.6.1 A Log-Log Poultry Demand Equation
4.7 Exercises
4.7.1 Problems
4.7.2 Computer Exercises
Appendix 4A Development of a Prediction Interval
Appendix 4B The Sum of Squares Decomposition
Appendix 4C The Log-Normal Distribution
Chapter 5 The Multiple Regression Model
Learning Objectives
5.1 Introduction
5.1.1 The Economic Model
5.1.2 The Econometric Model
5.1.2a The General Model
5.1.2b The Assumptions of the Model
5.2 Estimating the Parameters of the Multiple Regression Model
5.2.1 Least Squares Estimation Procedure
5.2.2 Least Squares Estimates Using Hamburger Chain Data
5.2.3 Estimation of the Error Variance σ2
5.3 Sampling Properties of the Least Squares Estimator
5.3.1 The Variances and Covariances of the Least Squares Estimators
5.3.2 The Distribution of the Least Squares Estimators
5.4 Interval Estimation
5.4.1 Interval Estimation for a Single Coefficient
5.4.2 Interval Estimation for a Linear Combination of Coefficients
5.5 Hypothesis Testing
5.5.1 Testing the Significance of a Single Coefficient
5.5.2 One-Tail Hypothesis Testing for a Single Coefficient
5.5.2a Testing for Elastic Demand
5.5.2b Testing Advertising Effectiveness
5.5.3 Hypothesis Testing for a Linear Combination of Coefficients
5.6 Polynomial Equations
5.6.1 Cost and Product Curves
5.6.2 Extending the Model for Burger Barn Sales
5.6.3 The Optimal Level of Advertising: Inference for a Nonlinear Combination of Coefficients
5.7 Interaction Variables
5.7.1 Log-Linear Models
5.8 Measuring Goodness-of-Fit
5.9 Exercises
5.9.1 Problems
5.9.2 Computer Exercises
Appendix 5A Derivation of Least Squares Estimators
Appendix 5B Large Sample Analysis
5B.1 Consistency
5B.2 Asymptotic Normality
5B.3 Monte Carlo Simulation
5B.4 The Delta Method
5B.4.1 Nonlinear Functions of a Single Parameter
5B.4.2 The Delta Method Illustrated
5B.4.3 Monte Carlo Simulation of the Delta Method
5B.5 The Delta Method Extended
5B.5.1 The Delta Method Illustrated: Continued
5B.5.2 Monte Carlo Simulation of the Extended Delta Method
Chapter 6 Further Inference in the Multiple Regression Model
Learning Objectives
6.1 Testing Joint Hypotheses
6.1.1 Testing the Effect of Advertising: The F-Test
6.1.2 Testing the Significance of the Model
6.1.3 The Relationship Between t- and F-Tests
6.1.4 More General F-Tests
6.1.4a A One-Tail Test
6.1.5 Using Computer Software
6.2 The Use of Nonsample Information
6.3 Model Specification
6.3.1 Omitted Variables
6.3.2 Irrelevant Variables
6.3.3 Choosing the Model
6.3.4 Model Selection Criteria
6.3.4a The Adjusted Coefficient of Determination
6.3.4b Information Criteria
6.3.4c An Example
6.3.5 RESET
6.4 Poor Data, Collinearity, and Insignificance
6.4.1 The Consequences of Collinearity
6.4.2 An Example
6.4.3 Identifying and Mitigating Collinearity
6.5 Prediction
6.5.1 An Example
6.6 Exercises
6.6.1 Problems
6.6.2 Computer Exercises
Appendix 6A Chi-Square and F-tests: More Details
Appendix 6B Omitted-Variable Bias: A Proof
Chapter 7 Using Indicator Variables
Learning Objectives
7.1 Indicator Variables
7.1.1 Intercept Indicator Variables
7.1.1a Choosing the Reference Group
7.1.2 Slope-Indicator Variables
7.1.3 An Example: The University Effect on House Prices
7.2 Applying Indicator Variables
7.2.1 Interactions between Qualitative Factors
7.2.2 Qualitative Factors with Several Categories
7.2.3 Testing the Equivalence of Two Regressions
7.2.4 Controlling for Time
7.2.4a Seasonal Indicators
7.2.4b Year Indicators
7.2.4c Regime Effects
7.3 Log-Linear Models
7.3.1 A Rough Calculation
7.3.2 An Exact Calculation
7.4 The Linear Probability Model
7.4.1 A Marketing Example
7.5 Treatment Effects
7.5.1 The Difference Estimator
7.5.2 Analysis of the Difference Estimator
7.5.3 Application of Difference Estimation: Project STAR
7.5.4 The Difference Estimator with Additional Controls
7.5.4a School Fixed Effects
7.5.4b Linear Probability Model Check of Random Assignment
7.5.5 The Differences-in-Differences Estimator
7.5.6 Estimating the Effect of a Minimum Wage Change
7.5.7 Using Panel Data
7.6 Exercises
7.6.1 Problems
7.6.2 Computer Exercises
Appendix 7A Details of Log-Linear Model Interpretation
Appendix 7B Derivation of the Differences-in-Differences Estimator
Chapter 8 Heteroskedasticity
Learning Objectives
8.1 The Nature of Heteroskedasticity
8.1.1 Consequences for the Least Squares Estimator
8.2 Detecting Heteroskedasticity
8.2.1 Residual Plots
8.2.2 Lagrange Multiplier Tests
8.2.2a The White Test
8.2.2b Testing the Food Expenditure Example
8.2.3 The Goldfeld–Quandt Test
8.2.3a The Food Expenditure Example
8.3 Heteroskedasticity-Consistent Standard Errors
8.4 Generalized Least Squares: Known Form of Variance
8.4.1 Variance Proportional to x
8.4.1a Transforming the Model
8.4.1b Weighted Least Squares
8.4.1c Food Expenditure Estimates
8.4.2 Grouped Data
8.5 Generalized Least Squares: Unknown Form of Variance
8.5.1 Using Robust Standard Errors
8.6 Heteroskedasticity in the Linear Probability Model
8.6.1 The Marketing Example Revisited
8.7 Exercises
8.7.1 Problems
8.7.2 Computer Exercises
Appendix 8A Properties of the Least Squares Estimator
Appendix 8B Lagrange Multiplier Tests for Heteroskedasticity
Chapter 9 Regression with Time-Series Data: Stationary Variables
Learning Objectives
9.1 Introduction
9.1.1 Dynamic Nature of Relationships
9.1.2 Least Squares Assumptions
9.1.2a Stationarity
9.1.3 Alternative Paths through the Chapter
9.2 Finite Distributed Lags
9.2.1 Assumptions
9.2.2 An Example: Okun’s Law
9.3 Serial Correlation
9.3.1 Serial Correlation in Output Growth
9.3.1a Computing Autocorrelations
9.3.1b The Correlogram
9.3.2 Serially Correlated Errors
9.3.2a A Phillips Curve
9.4 Other Tests for Serially Correlated Errors
9.4.1 A Lagrange Multiplier Test
9.4.1a Testing Correlation at Longer Lags
9.4.2 The Durbin–Watson Test
9.5 Estimation with Serially Correlated Errors
9.5.1 Least Squares Estimation
9.5.2 Estimating an AR(1) Error Model
9.5.2a Properties of an AR(1) Error
9.5.2b Nonlinear Least Squares Estimation
9.5.2c Generalized Least Squares Estimation
9.5.3 Estimating a More General Model
9.5.4 Summary of Section 9.5 and Looking Ahead
9.6 Autoregressive Distributed Lag Models
9.6.1 The Phillips Curve
9.6.2 Okun’s Law
9.6.3 Autoregressive Models
9.7 Forecasting
9.7.1 Forecasting with an AR Model
9.7.2 Forecasting with an ARDL Model
9.7.3 Exponential Smoothing
9.8 Multiplier Analysis
9.9 Exercises
9.9.1 Problems
9.9.2 Computer Exercises
Appendix 9A The Durbin–Watson Test
9A.1 The Durbin–Watson Bounds Test
Appendix 9B Properties of an AR(1) Error
Appendix 9C Generalized Least Squares Estimation
Chapter 10 Random Regressors and Moment-Based Estimation
Learning Objectives
10.1 Linear Regression with Random x’s
10.1.1 The Small Sample Properties of the Least Squares Estimator
10.1.2 Large Sample Properties of the Least Squares Estimator
10.1.3 Why Least Squares Estimation Fails
10.2 Cases in Which x and e Are Correlated
10.2.1 Measurement Error
10.2.2 Simultaneous Equations Bias
10.2.3 Omitted Variables
10.2.4 Least Squares Estimation of a Wage Equation
10.3 Estimators Based on the Method of Moments
10.3.1 Method of Moments Estimation of a Population Mean and Variance
10.3.2 Method of Moments Estimation in the Simple Linear Regression Model
10.3.3 Instrumental Variables Estimation in the Simple Linear Regression Model
10.3.3a The Importance of Using Strong Instruments
10.3.4 Instrumental Variables Estimation in the Multiple Regression Model
10.3.4a Using Surplus Instruments in Simple Regression
10.3.4b Surplus Moment Conditions
10.3.5 Assessing Instrument Strength Using the First Stage Model
10.3.5a One Instrumental Variable
10.3.5b More Than One Instrumental Variable
10.3.6 Instrumental Variables Estimation of the Wage Equation
10.3.7 Partial Correlation
10.3.8 Instrumental Variables Estimation in a General Model
10.3.8a Assessing Instrument Strength in a General Model
10.3.8b Hypothesis Testing with Instrumental Variables Estimates
10.3.8c Goodness-of-Fit with Instrumental Variables Estimates
10.4 Specification Tests
10.4.1 The Hausman Test for Endogeneity
10.4.2 Testing Instrument Validity
10.4.3 Specification Tests for the Wage Equation
10.5 Exercises
10.5.1 Problems
10.5.2 Computer Exercises
Appendix 10A Conditional and Iterated Expectations
10A.1 Conditional Expectations
10A.2 Iterated Expectations
10A.3 Regression Model Applications
Appendix 10B The Inconsistency of the Least Squares Estimator
Appendix 10C The Consistency of the IV Estimator
Appendix 10D The Logic of the Hausman Test
Appendix 10E Testing for Weak Instruments
10E.1 A Test for Weak Identification
10E.2 Examples of Testing for Weak Identification
10E.3 Testing for Weak Identification: Conclusions
Appendix 10F Monte Carlo Simulation
10F.1 Illustrations Using Simulated Data
10F.1.1 The Hausman Test
10F.1.2 Test for Weak Instruments
10F.1.3 Testing the Validity of Surplus Instruments
10F.2 The Repeated Sampling Properties of IV/2SLS
Chapter 11 Simultaneous Equations Models
Learning Objectives
11.1 A Supply and Demand Model
11.2 The Reduced-Form Equations
11.3 The Failure of Least Squares Estimation
11.4 The Identification Problem
11.5 Two-Stage Least Squares Estimation
11.5.1 The General Two-Stage Least Squares Estimation Procedure
11.5.2 The Properties of the Two-Stage Least Squares Estimator
11.6 An Example of Two-Stage Least Squares Estimation
11.6.1 Identification
11.6.2 The Reduced-Form Equations
11.6.3 The Structural Equations
11.7 Supply and Demand at the Fulton Fish Market
11.7.1 Identification
11.7.2 The Reduced-Form Equations
11.7.3 Two-Stage Least Squares Estimation of Fish Demand
11.8 Exercises
11.8.1 Problems
11.8.2 Computer Exercises
Appendix 11A An Algebraic Explanation of the Failure of Least Squares
Appendix 11B 2SLS Alternatives
11B.1 The k-Class of Estimators
11B.2 The LIML Estimator
11B.2.1 Fuller’s Modified LIML
11B.2.2 Advantages of LIML
11B.2.3 Stock–Yogo Weak IV Tests for LIML
11B.2.3a Testing for Weak Instruments with LIML
11B.2.3b Testing for Weak Instruments with Fuller Modified LIML
11B.3 Monte Carlo Simulation Results
Chapter 12 Regression with Time-Series Data: Nonstationary Variables
Learning Objectives
12.1 Stationary and Nonstationary Variables
12.1.1 The First-Order Autoregressive Model
12.1.2 Random Walk Models
12.2 Spurious Regressions
12.3 Unit Root Tests for Stationarity
12.3.1 Dickey–Fuller Test 1 (No Constant and No Trend)
12.3.2 Dickey–Fuller Test 2 (With Constant but No Trend)
12.3.3 Dickey–Fuller Test 3 (With Constant and With Trend)
12.3.4 The Dickey–Fuller Critical Values
12.3.5 The Dickey–Fuller Testing Procedures
12.3.6 The Dickey–Fuller Tests: An Example
12.3.7 Order of Integration
12.4 Cointegration
12.4.1 An Example of a Cointegration Test
12.4.2 The Error Correction Model
12.5 Regression When There Is No Cointegration
12.5.1 First Difference Stationary
12.5.2 Trend Stationary
12.5.3 Summary
12.6 Exercises
12.6.1 Problems
12.6.2 Computer Exercises
Chapter 13 Vector Error Correction and Vector Autoregressive Models
Learning Objectives
13.1 VEC and VAR Models
13.2 Estimating a Vector Error Correction Model
13.2.1 Example
13.3 Estimating a VAR Model
13.4 Impulse Responses and Variance Decompositions
13.4.1 Impulse Response Functions
13.4.1a The Univariate Case
13.4.1b The Bivariate Case
13.4.2 Forecast Error Variance Decompositions
13.4.2a Univariate Analysis
13.4.2b Bivariate Analysis
13.4.2c The General Case
13.5 Exercises
13.5.1 Problems
13.5.2 Computer Exercises
Appendix 13A The Identification Problem
Chapter 14 Time-Varying Volatility and ARCH Models
Learning Objectives
14.1 The ARCH Model
14.2 Time-Varying Volatility
14.3 Testing, Estimating, and Forecasting
14.3.1 Testing for ARCH Effects
14.3.2 Estimating ARCH Models
14.3.3 Forecasting Volatility
14.4 Extensions
14.4.1 The GARCH Model—Generalized ARCH
14.4.2 Allowing for an Asymmetric Effect
14.4.3 GARCH-in-Mean and Time-Varying Risk Premium
14.5 Exercises
14.5.1 Problems
14.5.2 Computer Exercises
Chapter 15 Panel Data Models
Learning Objectives
15.1 A Microeconomic Panel
15.2 Pooled Model
15.2.1 Cluster–Robust Standard Errors
15.2.2 Pooled Least Squares Estimates of Wage Equation
15.3 The Fixed Effects Model
15.3.1 The Least Squares Dummy Variable Estimator for Small N
15.3.2 The Fixed Effects Estimator
15.3.2a Fixed Effects Estimates of Wage Equation for N = 10
15.3.3 Fixed Effects Estimates of Wage Equation from Complete Panel
15.4 The Random Effects Model
15.4.1 Error Term Assumptions
15.4.2 Testing for Random Effects
15.4.3 Estimation of the Random Effects Model
15.4.4 Random Effects Estimation of the Wage Equation
15.5 Comparing Fixed and Random Effects Estimators
15.5.1 Endogeneity in the Random Effects Model
15.5.2 The Fixed Effects Estimator in a Random Effects Model
15.5.3 A Hausman Test
15.6 The Hausman–Taylor Estimator
15.7 Sets of Regression Equations
15.7.1 Grunfeld’s Investment Data
15.7.2 Estimation: Equal Coefficients, Equal Error Variances
15.7.3 Estimation: Different Coefficients, Equal Error Variances
15.7.4 Estimation: Different Coefficients, Different Error Variances
15.7.5 Seemingly Unrelated Regression
15.7.5a Separate or Joint Estimation?
15.7.5b Testing Cross-Equation Hypotheses
15.8 Exercises
15.8.1 Problems
15.8.2 Computer Exercises
Appendix 15A Cluster–Robust Standard Errors: Some Details
Appendix 15B Estimation of Error Components
Chapter 16 Qualitative and Limited Dependent Variable Models
Learning Objectives
16.1 Models with Binary Dependent Variables
16.1.1 The Linear Probability Model
16.1.2 The Probit Model
16.1.3 Interpretation of the Probit Model
16.1.4 Maximum Likelihood Estimation of the Probit Model
16.1.5 A Transportation Example
16.1.6 Further Post-Estimation Analysis
16.2 The Logit Model for Binary Choice
16.2.1 An Empirical Example from Marketing
16.2.2 Wald Hypothesis Tests
16.2.3 Likelihood Ratio Hypothesis Tests
16.3 Multinomial Logit
16.3.1 Multinomial Logit Choice Probabilities
16.3.2 Maximum Likelihood Estimation
16.3.3 An Example
16.4 Conditional Logit
16.4.1 Conditional Logit Choice Probabilities
16.4.2 Post-Estimation Analysis
16.4.3 An Example
16.5 Ordered Choice Models
16.5.1 Ordinal Probit Choice Probabilities
16.5.2 Estimation and Interpretation
16.5.3 An Example
16.6 Models for Count Data
16.6.1 Maximum Likelihood Estimation
16.6.2 Interpretation in the Poisson Regression Model
16.6.3 An Example
16.7 Limited Dependent Variables
16.7.1 Censored Data
16.7.2 A Monte Carlo Experiment
16.7.3 Maximum Likelihood Estimation
16.7.4 Tobit Model Interpretation
16.7.5 An Example
16.7.6 Sample Selection
16.7.6a The Econometric Model
16.7.6b Heckit Example: Wages of Married Women
16.8 Exercises
Appendix 16A Probit Marginal Effects: Details
16A.1 Standard Error of Marginal Effect at a Given Point
16A.2 Standard Error of Average Marginal Effect
Appendix A Mathematical Tools
Learning Objectives
A.1 Some Basics
A.1.1 Numbers
A.1.2 Exponents
A.1.3 Scientific Notation
A.1.4 Logarithms and the Number e
A.1.5 Decimals and Percentages
A.1.6 Logarithms and Percentages
A.1.6a Derivation of the Approximation
A.1.6b Approximation Error
A.2 Linear Relationships
A.2.1 Slopes and Derivatives
A.2.2 Elasticity
A.3 Nonlinear Relationships
A.3.1 Rules for Derivatives
A.3.2 Elasticity of a Nonlinear Relationship
A.3.3 Partial Derivatives
A.3.4 Theory of Derivatives
A.4 Integrals
A.4.1 Computing the Area Under a Curve
A.4.2 The Definite Integral
A.4.3 The Definite Integral: Details
A.5 Exercises
Appendix B Probability Concepts
Learning Objectives
B.1 Discrete Random Variables
B.1.1 Expected Value of a Discrete Random Variable
B.1.2 Variance of a Discrete Random Variable
B.1.3 Joint, Marginal, and Conditional Distributions
B.1.4 Expectations Involving Several Random Variables
B.1.5 Covariance and Correlation
B.1.6 Conditional Expectations
B.1.7 Iterated Examples
B.2 Working with Continuous Random Variables
B.2.1 Probability Calculations
B.2.2 Properties of Continuous Random Variables
B.2.3 Joint, Marginal, and Conditional Probability Distributions
B.2.4 Iterated Expectations
B.2.5 Distributions of Functions of Random Variables
B.3 Some Important Probability Distributions
B.3.1 The Bernoulli Distribution
B.3.2 The Binomial Distribution
B.3.3 The Poisson Distribution
B.3.4 The Uniform Distribution
B.3.5 The Normal Distribution
B.3.6 The Chi-square Distribution
B.3.7 The t-distribution
B.3.8 The F-distribution
B.4 Random Numbers
B.4.1 Uniform Random Numbers
B.5 Exercises
Appendix C Review of Statistical Inference
Learning Objectives
C.1 A Sample of Data
C.2 An Econometric Model
C.3 Estimating the Mean of a Population
C.3.1 The Expected Value of Y-bar
C.3.2 The Variance of Y-bar
C.3.3 The Sampling Distribution of Y-bar
C.3.4 The Central Limit Theorem
C.3.5 Best Linear Unbiased Estimation
C.4 Estimating the Population Variance and Other Moments
C.4.1 Estimating the Population Variance
C.4.2 Estimating Higher Moments
C.4.3 The Hip Data
C.4.4 Using the Estimates
C.5 Interval Estimation
C.5.1 Interval Estimation: σ2 Known
C.5.2 A Simulation
C.5.3 Interval Estimation: σ2 Unknown
C.5.4 A Simulation (Continued)
C.5.5 Interval Estimation Using the Hip Data
C.6 Hypothesis Tests About a Population Mean
C.6.1 Components of Hypothesis Tests
C.6.1a The Null Hypothesis
C.6.1b The Alternative Hypothesis
C.6.1c The Test Statistic
C.6.1d The Rejection Region
C.6.1e A Conclusion
C.6.2 One-Tail Tests with Alternative “Greater Than” (>)
C.6.3 One-Tail Tests with Alternative “Less Than” (<)
C.6.4 Two-Tail Tests with Alternative “Not Equal To” (≠)
C.6.5 Example of a One-Tail Test Using the Hip Data
C.6.6 Example of a Two-Tail Test Using the Hip Data
C.6.7 The p-value
C.6.8 A Comment on Stating Null and Alternative Hypotheses
C.6.9 Type I and Type II Errors
C.6.10 A Relationship between Hypothesis Testing and Confidence Intervals
C.7 Some Other Useful Tests
C.7.1 Testing the Population Variance
C.7.2 Testing the Equality of Two Population Means
C.7.3 Testing the Ratio of Two Population Variances
C.7.4 Testing the Normality of a Population
C.8 Introduction to Maximum Likelihood Estimation
C.8.1 Inference with Maximum Likelihood Estimators
C.8.2 The Variance of the Maximum Likelihood Estimator
C.8.3 The Distribution of the Sample Proportion
C.8.4 Asymptotic Test Procedures
C.8.4a The Likelihood Ratio (LR) Test
C.8.4b The Wald Test
C.8.4c The Lagrange Multiplier (LM) Test
C.9 Algebraic Supplements
C.9.1 Derivation of Least Squares Estimation
C.9.2 Best Linear Unbiased Estimation
C.10 Kernel Density Estimator
C.11 Exercises
Appendix D
Table 1 Cumulative Probabilities for the Standard Normal Distribution
Table 2 Percentiles for the t-distribution
Table 3 Percentiles for the Chi-square Distribution
Table 4 95th Percentile for the F-distribution
Table 5 99th Percentile for the F-distribution