
Do these critics still wanna say Sanchez "hasn't improved"?


JetsFanFromQueens


Let's take a look at Sanchez outside of the Baltimore game. Yes, you don't have to remind me; I understand it's ridiculous to eliminate a game from the stat sheet just to show improved statistics, but I'm trying to look at our quarterback's performance in an objective manner, by looking at the complete picture.

Sanchez had no chance against the Ravens, not because of the Ravens as a team defense, but because of the overall situation on offense. No quarterback stood a chance that night, not even Joe Montana himself. NY couldn't establish the run because we were without one of the league's best centers. Not only were we missing our starting center, Nick Mangold, but Sanchez was taking snaps from a third-string backup, a rookie at that, and one who wasn't even with the Jets during offseason workouts. The situation got so ugly that our head coach had to replace the rookie with an awful offensive lineman, Ducasse, who has yet to work out for us. Slauson, our LG, then became our center, and Ducasse, a player who wasn't ready, became our LG. It wasn't long before Rex had to switch it up once again, moving Slauson back to LG and bringing in Baxter at center in place of Ducasse. All because of disastrous offensive line play.

Moving forward, is it at least safe to say that our quarterback had no chance under those circumstances? Could any quarterback have performed under them? Correct me if I'm wrong, but it was a no-win situation.

With all that said, outside of the Baltimore game, Sanchez has gone 74/122, 62.4%, 918 yards, 7 TDs and 4 INTs, plus 2 rushing TDs, with a quarterback rating of 95.2.
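
For anyone who wants to check the math, here's a quick sketch of the standard NFL passer-rating formula (note that rating the aggregate line this way won't necessarily match an average of per-game ratings):

```python
def passer_rating(comp, att, yards, td, ints):
    """Standard NFL passer-rating formula; each component is capped to [0, 2.375]."""
    def clamp(x):
        return max(0.0, min(x, 2.375))

    a = clamp(((comp / att) - 0.3) * 5)    # completion-percentage component
    b = clamp(((yards / att) - 3) * 0.25)  # yards-per-attempt component
    c = clamp((td / att) * 20)             # touchdown-rate component
    d = clamp(2.375 - (ints / att) * 25)   # interception-rate component
    return (a + b + c + d) / 6 * 100

# The aggregate non-Baltimore line quoted above: 74 of 122, 918 yards, 7 TD, 4 INT.
print(round(passer_rating(74, 122, 918, 7, 4), 1))
```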

This is with two WRs from years past, Edwards and Cotchery, gone and replaced by three WRs who weren't here last year: Burress, Mason, and Kerley. Add a harsh decline in offensive line play and run game performance from previous years... and our quarterback has still improved during his expected breakout season, year number three.

For Jets fans to say consistently, on a daily basis, that our offensive coordinator is stunting the growth and development of our quarterback (even though he's improved in all three years under Schotty Jr.), and/or that Sanchez hasn't improved this season, leads me to believe that these online critics wouldn't notice improvement even if a quarterback were developing right before their eyes.

Sorry for the long-winded post, but I just have a lot to say about our quarterback position and Sanchez himself.



A good way to determine his true average is to cast off his best and worst games and then average out the rest.

In this case that would leave only three games, which is hardly a large enough sample size; I think midseason would be a better time to take another look at these stats.
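
For what it's worth, "cast off the best and worst and average the rest" is just a trimmed mean; a quick sketch, using the five non-Baltimore per-game ratings quoted later in this thread:

```python
def trimmed_average(per_game_ratings):
    """Drop the single best and single worst games, then average the rest."""
    vals = sorted(per_game_ratings)[1:-1]
    return sum(vals) / len(vals)

# Per-game ratings cited downthread: 85.8, 88.7, 93.8, 95.6, 105.6.
print(round(trimmed_average([85.8, 88.7, 93.8, 95.6, 105.6]), 1))
```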



Quantitative theory uses simple, abstract economic models together with a small amount of economic data to highlight major economic mechanisms. To illustrate the methods of quantitative theory, we review studies of the production function by Paul Douglas, Robert Solow, and Edward Prescott. Consideration of these studies takes an important research area from its earliest days through contemporary real business cycle analysis. In these quantitative theoretical studies, economic models are employed in two ways. First, they are used to organize economic data in a new and suggestive manner. Second, models are combined with economic data to display successes and failures of particular theoretical mechanisms. Each of these features is present in each of the three studies, but to varying degrees, as we shall see.

These quantitative theoretical investigations changed how economists thought about the aggregate production function, i.e., about an equation describing how the total output of many firms is related to the total quantities of inputs, in particular labor and capital inputs. Douglas taught economists that the production function could be an important applied tool, as well as a theoretical device, by skillfully combining indexes of output with indexes of capital and labor input. Solow taught economists that the production function could not be used to explain long-term growth, absent a residual factor that he labeled technical progress. Prescott taught economists that Solow's residual was sufficiently strongly procyclical that it might serve as a source of economic fluctuations. More specifically, he showed that a real business cycle model driven by Solow's residuals produced fluctuations in consumption, investment, and output that broadly resembled actual U.S. business cycle experience.
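
As a minimal sketch, assuming a Cobb-Douglas form and a labor share of 0.7 (a placeholder value, not one taken from these studies), the Solow residual can be backed out of output and input series like so:

```python
import numpy as np

def solow_residual(Y, N, K, alpha=0.7):
    """log A_t = log Y_t - alpha*log N_t - (1 - alpha)*log K_t:
    the portion of output not accounted for by measured labor and capital."""
    return np.log(Y) - alpha * np.log(N) - (1 - alpha) * np.log(K)
```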

In working through three key studies by Douglas, Solow, and Prescott, we focus on their design, their interrelationship, and the way in which they illustrate how economists learn from studies in quantitative theory. This learning process is of considerable importance to ongoing developments in macroeconomics, since the quantitative theory approach is now the dominant research paradigm being used by economists incorporating rational expectations and dynamic choice into small-scale macroeconomic models.

Quantitative theory is thus necessarily akin to applied econometric research, but its methods are very different, at least at first appearance. Indeed, practitioners of quantitative theory - notably Prescott (1986) and Kydland and Prescott (1991) - have repeatedly clashed with practitioners of econometrics. Essentially, advocates of quantitative theory have suggested that little is learned from econometric investigations, while proponents of econometrics have suggested that little tested knowledge of business cycle mechanisms is uncovered by studies in quantitative economic theory.

This article reviews and critically evaluates recent developments in quantitative theory and econometrics. To define quantitative theory more precisely, Section 1 begins by considering alternative styles of economic theory. Subsequently, Section 2 considers the three examples of quantitative theory in the area of the production function, reviewing the work of Douglas, Solow, and Prescott. With these examples in hand, Section 3 then considers how economists learn from exercises in quantitative theory.

One notable difference between the practice of quantitative theory and of econometrics is the manner in which the behavioral parameters of economic models are selected. In quantitative theoretical models of business cycles, for example, most behavioral parameters are chosen from sources other than the time series fluctuations in the macroeconomic data that are to be explained in the investigation. This practice has come to be called calibration. In modern macroeconometrics, the textbook procedure is to estimate parameters from the time series that are under study. Thus, this clash of methodologies is frequently described as "calibration versus estimation."

After considering how a methodological controversy between quantitative theory and econometrics inevitably grew out of the rational expectations revolution in Section 4 and describing the rise of quantitative theory as a methodology in Section 5, this article then argues that the ongoing controversy cannot really be about "calibration versus estimation." It demonstrates that classic calibration studies estimate some of their key parameters and classic estimation studies are frequently forced to restrict some of their parameters so as to yield manageable computational problems, i.e., to calibrate them. Instead, in Section 6, the article argues that the key practical issue is the style of "model evaluation," i.e., the manner in which economists determine the dimensions along which models succeed or fail.

In terms of the practice of model evaluation, there are two key differences between standard practice in quantitative theory and econometrics. One key difference is indeed whether there are discernible differences between the activities of parameter selection and model evaluation. In quantitative theory, parameter selection is typically undertaken as an initial activity, with model evaluation being a separate secondary stage. By contrast, in the dominant dynamic macroeconometric approach, that of Hansen and Sargent (1981), parameter selection and model evaluation are undertaken in an essentially simultaneous manner: most parameters are selected to maximize the overall fit of the dynamic model, and a measure of this fit is also used as the primary diagnostic for evaluation of the theory. Another key difference lies in the breadth of model implications utilized, as well as the manner in which they are explored and evaluated. Quantitative theorists look at a narrow set of model implications; they conduct an informal evaluation of the discrepancies between these implications and analogous features of a real-world economy. Econometricians typically look at a broad set of implications and use specific statistical methods to evaluate these discrepancies.

By and large, this article takes the perspective of the quantitative theorist. It argues that there is a great benefit to choosing parameters in an initial stage of an investigation, so that other researchers can readily understand and criticize the attributes of the data that give rise to such parameter estimates. It also argues that there is a substantial benefit to limiting the scope of inquiry in model evaluation, i.e., to focusing on a set of model implications taken to display central and novel features of the operation of a theoretical model economy. This limitation of focus seems appropriate to the current stage of research in macroeconomics, where we are still working with macroeconomic models that are extreme simplifications of macroeconomic reality.

Yet quantitative theory is not without its difficulties. To illustrate three of its limitations, Section 7 of the article reconsiders the standard real business cycle model, which is sometimes described as capturing a dominant component of postwar U.S. business cycles (for example, by Kydland and Prescott [1991] and Plosser [1989]). The first limitation is one stressed by Eichenbaum (1991): since it ignores uncertainty in estimated parameters, a study in quantitative theory cannot give any indication of the statistical confidence that should be placed in its findings. The second limitation is that quantitative theory may direct one's attention to model implications that do not provide much information about the endogenous mechanisms contained in the model. In the discussion of these two limitations, the focus is on a "variance ratio" that has been used, by Kydland and Prescott (1991) among others, to suggest that a real business cycle arising from technology shocks accounts for three-quarters of postwar U.S. business cycle fluctuations in output. In discussing the practical importance of the first limitation, Eichenbaum concluded that there is "enormous" uncertainty about this variance ratio, which he suggested arises because of estimation uncertainty about the values of parameters of the exogenous driving process for technology. In terms of the second limitation, the article shows that a naive model - in which output is driven only by production function residuals without any endogenous response of factors of production - performs nearly as well as the standard quantitative theoretical model according to the "variance ratio." The third limitation is that the essential focus of quantitative theory on a small number of model implications may easily mean that it misses crucial failures (or successes) of an economic model. This point is made by Watson's (1993) recent work, which showed that the standard real business cycle model badly misses capturing the "typical spectral shape of growth rates" for real macroeconomic variables, including real output. That is, by focusing on only a small number of low-order autocovariances, prior investigations such as those of Kydland and Prescott (1982) and King, Plosser, and Rebelo (1988) simply overlooked the fact that there is important predictable output growth at business cycle frequencies.
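
A minimal sketch of the "variance ratio" statistic as described above (an illustration only; published versions of the statistic are computed on detrended series):

```python
import numpy as np

def variance_ratio(model_output, actual_output):
    """Fraction of the variance of actual output that the model's
    simulated output accounts for (a Kydland-Prescott-style statistic)."""
    return np.var(model_output) / np.var(actual_output)
```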

However, while there are shortcomings in the methodology of quantitative theory, its practice has grown at the expense of econometrics for a good reason: it provides a workable vehicle for the systematic development of macroeconomic models. In particular, it is a method that can be used to make systematic progress in the current circumstances of macroeconomics, when the models being developed are still relatively incomplete descriptions of the economy. Notably, macroeconomists have used quantitative theory in recent years to learn how the business cycle implications of the basic neoclassical model are altered by a wide range of economic factors, including fiscal policies, international trade, monopolistic competition, financial market frictions, and gradual adjustment of wages and prices.

The main challenge for econometric theory is thus to design procedures that can be used to make similar progress in the development of macroeconomic models. One particular aspect of this challenge is that the econometric methods must be suitable for situations in which we know before looking at the data that the model or models under study are badly incomplete, as we will know in most situations for some time to come. Section 8 of the article discusses a general framework of model-building activity within which quantitative theory and traditional macroeconometric approaches are each included. On this basis, it then considers some initial efforts aimed at developing econometric methods to capture the strong points of the quantitative theory approach while providing the key additional benefits associated with econometric work. Chief among these benefits are (1) the potential for replication of the outcomes of an empirical evaluation of a model or models and (2) an explicit statement of the statistical reliability of the results of such an evaluation.

In addition to providing challenges to econometrics, Section 9 of the article shows how the methods of quantitative theory also provide new opportunities for applied econometrics, using Friedman's (1957) permanent income theory of consumption as a basis for constructing two more detailed examples. The first of these illustrates how an applied econometrician may use the approach of quantitative theory to find a powerful estimator of a parameter of interest. The second of these illustrates how quantitative theory can aid in the design of informative descriptive empirical investigations.

In macroeconometric analysis, issues of identification have long played a central role in theoretical and applied work, since most macroeconomists believe that business fluctuations are the result of a myriad of causal factors. Quantitative theories, by contrast, typically are designed to highlight the role of basic mechanisms and typically identify individual causal factors. Section 10 considers the challenges that issues of identification raise for the approach of quantitative theory and the recent econometric developments that share its model evaluation strategy. It suggests that the natural way of proceeding is to compare the predictions of a model or models to characteristics of economic data that are isolated with a symmetric empirical identification.

The final section of the article offers a brief summary as well as some concluding comments on the relationship between quantitative theory and econometrics in the future of macroeconomic research.

1. STYLES OF ECONOMIC THEORY

The role of economic theory is to articulate the mechanisms by which economic causes are translated into economic consequences. By requiring that theorizing is conducted in a formal mathematical way, economists have assured a rigor of argument that would be difficult to attain in any other manner. Minimally, the process of undertaking a mathematical proof lays bare the essential linkages between assumptions and conclusions. Further, and importantly, mathematical model-building also has forced economists to make sharp abstractions: as model economies become more complex, there is a rapidly rising cost to establishing formal propositions. Articulation of key mechanisms and abstraction from less important ones are essential functions of theory in any discipline, and the speed at which economic analysis has adopted the mathematical paradigm has led it to advance at a much greater rate than its sister disciplines in the social sciences.

If one reviews the history of economics over the course of this century, the accomplishments of formal economic theory have been major. Our profession developed a comprehensive theory of consumer and producer choice, first working out static models with known circumstances and then extending it to dynamics, uncertainty, and incomplete information. Using these developments, it established core propositions about the nature and efficiency of general equilibrium with interacting consumers and producers. Taken together, the accomplishments of formal economic theory have had profound effects on applied fields, not only in the macroeconomic research that will be the focal point of this article but also in international economics, public finance, and many other areas.

The developments in economic theory have been nothing short of remarkable, matched within the social sciences perhaps only by the rise of econometrics, in which statistical methods applicable to economic analysis have been developed. For macroeconomics, the major accomplishment of econometrics has been the development of statistical procedures for the estimation of parameters and testing of hypotheses in a context where a vector of economic variables is dynamically interrelated. For example, macroeconomists now think about the measurement of business cycles and the testing of business cycle theories using an entirely different statistical conceptual framework from that available to Mitchell (1927) and his contemporaries.(1)

When economists discuss economic theory, most of us naturally focus on formal theory, i.e., the construction of a model economy - which naturally is a simplified version of the real world - and the establishment of general propositions about its operation. Yet there is another important kind of economic theory, which is the use of much more simplified model economies to organize economic facts in ways that change the focus of applied research and the development of formal theory. Quantitative theory, in the terminology of Kydland and Prescott (1991), involves taking a more detailed stand on how economic causes are translated into economic consequences. Quantitative theory, of course, embodies all the simplifications of abstract models of formal theory. In addition, it involves making (1) judgments about the quantitative importance of various economic mechanisms and (2) decisions about how to selectively compare the implications of a model to features of real-world economies. By its very nature, quantitative theory thus stands as an intermediate activity between formal theory and the application of econometric methods to the evaluation of economic models.

A decade ago, many economists thought of quantitative theory as simply the natural first step in a progression of research activities from formal theory to econometrics, but there has been a hardening of viewpoints in recent years. Some argue that standard econometric methods are not necessary or are, in fact, unhelpful; quantitative theory is sufficient. Others argue that one can learn little from quantitative theory and that the only source of knowledge about important economic mechanisms is obtained through econometrics. For those of us that honor the traditions of both quantitative theory and econometrics, not only did the onset of this controversy come as a surprise, but its depth and persistence also were unexpected. Accordingly, the twin objectives of this paper are, first, to explore why the events of recent years have led to tensions between practitioners of quantitative theory and econometrics and, second, to suggest dimensions along which the recent controversy can lead to better methods and practice.

2. EXAMPLES OF QUANTITATIVE THEORY

This section discusses three related research topics that take quantitative theory from its earliest stages to the present day. The topics all concern the production function, i.e., the link between output and factor inputs.(2)

The Production Function and Distribution Theory

The production function is a powerful tool of economic analysis, which every first-year graduate student learns to manipulate. Indeed, the first example that most economists encounter is the functional form of Cobb and Douglas (1928), which is also the first example studied here. For contemporary economists, it is difficult to imagine that there once was a time when the notion of the production function was controversial. But, 50 years after his pioneering investigation, Paul Douglas (1976) reminisced:

Critics of the production function analysis such as Horst Mendershausen and his mentor, Ragnar Frisch, . . . urged that so few observations were involved that any mathematical relationship was purely accidental and not causal. They sincerely believed that the analysis should be abandoned and, in the words of Mendershausen, that all past work should be torn up and consigned to the wastepaper basket. This was also the general sentiment among senior American economists, and nowhere was it held more strongly than among my senior colleagues at the University of Chicago. I must admit that I was discouraged by this criticism and thought of giving up the effort, but there was something which told me I should hold on. (P. 905)

The design of the investigation by Douglas was as follows. First, he enlisted the assistance of a mathematician, Cobb, to develop a production function with specified properties.(3) Second, he constructed indexes of physical capital and labor input in U.S. manufacturing for 1899-1922. Third, Cobb and Douglas estimated the production function

Y_t = A N_t^α K_t^(1-α).

In this specification, Y_t is the date t index of manufacturing output, N_t is the date t index of employed workers, and K_t is the date t index of the capital stock. The least squares estimates for 1899-1922 were α = 0.75 and A = 1.01. Fourth, Cobb and Douglas performed a variety of checks of the implications of their specification. These included comparing their estimated α to measures of labor's share of income, which earlier work had shown to be reasonably constant through time. They also examined the extent to which the production function held for deviations from trend rather than levels. Finally, they examined the relationship between the model's implied marginal product of labor (αY/N) and a measure of real wages that Douglas (1926) had constructed in earlier work.
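
To make the least-squares step concrete, a minimal sketch assuming the constant-returns form above; the series here are synthetic stand-ins, not the actual Douglas indexes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 1899-1922 output, labor, and capital indexes.
T = 24
N = np.exp(np.linspace(0.0, 0.6, T) + 0.02 * rng.standard_normal(T))  # labor index
K = np.exp(np.linspace(0.0, 1.5, T) + 0.02 * rng.standard_normal(T))  # capital index
alpha_true, A_true = 0.75, 1.01
Y = A_true * N**alpha_true * K**(1 - alpha_true) * np.exp(0.03 * rng.standard_normal(T))

# With constant returns imposed, log(Y/K) = log A + alpha * log(N/K),
# so alpha and log A come from a one-regressor least-squares fit.
X = np.column_stack([np.ones(T), np.log(N / K)])
(log_A_hat, alpha_hat), *_ = np.linalg.lstsq(X, np.log(Y / K), rcond=None)
print(f"alpha_hat = {alpha_hat:.3f}, A_hat = {np.exp(log_A_hat):.3f}")
```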

The results of the Cobb-Douglas quantitative theoretical investigation are displayed in Figure 1. Panel A provides a plot of the data on output, labor, and capital from 1899 to 1922. All series are benchmarked at 100 in 1899, and it is notable that capital grows dramatically over the sample period. Panel B displays the fitted production function, Ŷ_t = 1.01 N_t^0.75 K_t^0.25, graphed as a dashed line, and manufacturing output, Y, graphed as a solid line. As organized by the production function, variations in the factors N and K clearly capture the upward trend in output.(4)

With the Cobb-Douglas study, the production function moved from the realm of pure theory - where its properties had been discussed by Clark (1889) and others - to that of quantitative theory. In the hands of Cobb and Douglas, the production function displayed an ability to link (1) measures of physical output to measures of factor inputs (capital and labor) and (2) measures of real wages to measures of average products. It thus became an engine of analysis for applied …


I agree with most of your points, including that Sanchez has improved. The fact that he's played in 2 championship games in 2 years leads most of us to forget that he's still "a baby" as NFL QBs go. While I'll admit he needs to get better in certain areas, mostly his consistency, I still think there is significant upside to his abilities and a lot of room for growth.

Where we may disagree a bit is whether or not Sanchez has improved BECAUSE of Schottenheimer or IN SPITE of Schottenheimer.

My sense is the latter, as I am not a big fan of the OC.

It would be interesting to see Sanchez operating under someone who has a reputation as a great QB developer, say a Norv Turner, or a Sean Payton.



I agree. I would even settle for an above-average QB developer over our non-existent QB developer. In all the years Schitty has been here, IMO we haven't developed a WR outside of Cotch, and not one QB. The development that should be taking place in this offense is missing, and has been that way for some time.


The more important question is whether he's improved enough, and obv the answer is no.

Funny. If you can't admit that Sanchez has improved as a developing quarterback since being drafted, I'm not sure you've watched the games dating back to 2009. Did Sanchez not improve last year compared to his rookie season? Without question. Hell yes he did.

It's sad, but for the critics saying Sanchez "hasn't improved", I guess you've missed his QB ratings of 8.3, 27.1, 27.8, 37.1, 43.3, 49.7, 56.4, 59.3, 59.9, 60.2, 78.0, etc. during his first two years. Or performances of 0 TDs/3 INTs, 0 TDs/5 INTs, 1 TD/4 INTs, 1 TD/3 INTs, 0 TDs/3 INTs during his first two years, or completion percentages of 34.5%, 38.1%, 38.6%, 42.1%, 46.7%, 47.6%, 47.7%, 50.0%, 51.9%, 53.3%, etc.

Fast forward to his 3rd year, and he's put up QB ratings of 85.8, 88.7, 93.8, 95.6 and 105.6. Only once has he thrown more INTs than TDs this season, going 0 TDs/1 INT against the Ravens. Completion percentages of 56.0%, 59.1%, 61.4%, 61.5% and 70.8%.

Only 7 times during his first two years, in the regular season, did Sanchez post a completion percentage of 59% or higher. He's already done it 4 times through 6 weeks this season.

He's improved each and every season. For the critics who can't see this, I guess you forgot how Sanchez performed during his first two years. All because you expect Brady and Manning numbers during his 3rd year. Simply pathetic.


Aaron Rodgers - everyone's wonderboy this year, and deservedly so - spent his first three seasons watching Favre and throwing exactly ZERO passes. In his fourth season with the Packers, he led them to a 6-10 record, albeit with pretty solid stats.

Not saying Sanchez is, or ever will be, Rodgers, but service-wise he is still at a point in his career where Rodgers was holding a clipboard and learning by watching. All I'm saying is that nobody knew a thing about Rodgers until his 4th season, and we're still in the middle of Sanchez's 3rd. The fact that he has flashed the ability to lead comebacks and win playoff games should only increase our patience to see what he can turn into.

No way am I closing the book on Sanchez before I see this season and at least one more. He's shown enough to earn that much.


Only 7 times during his first two years, in the regular season, did Sanchez post a completion percentage of 59% or higher. He's already done it 4 times through 6 weeks this season.

and he's checking down a lot more.. shouldn't be a surprise..

when you draft a kid at No. 4 he should be better by year 3.. ideally.. I'm not saying it's a lost cause, but it doesn't look great atm..

and his comp % is still lower than two rookies in Cam Newton and Dalton



tl;dr


And here's the kicker...Bruce Willis was dead!! The whole time!

Went to see that movie...the second the kid told everyone he saw dead people, I put it together. Ruined the movie for everyone I was at the theatre with...feel bad to this day.

GFY

and he's checking down a lot more.. shouldn't be a surprise..

when you draft a kid at No. 4 he should be better by year 3.. ideally.. I'm not saying it's a lost cause, but it doesn't look great atm..

and his comp % is still lower than two rookies in Cam Newton and Dalton

Dalton throws a nice ball. Kid's going to be really good.

Sanchez has, numerous times this year, checked down to LT with a wide-open receiver. Which takes me back to a conversation we had a while back... he's flying through his progressions to get to the check-down and not seeing the whole play because he's so worried about getting the ball out fast.


tl;dr. Sanchez still stinks.

Pretty much. Playcalling has a lot to do with it, but when you see these rookies this year looking comfortable in the pocket and throwing the ball well, it makes you think maybe Sanchez is just a game manager and not much more. Three years in this offense and he still looks nervous, uncomfortable, and not very confident. The O-line hasn't helped either.


When you can't argue the facts, opinions, and points being made... hey, why not resort to trolling, derailing, and hijacking threads with off-topic posts? Seems to be the norm around here, regardless of the thread starter. Talking football is frowned upon amongst a good 10-15 members of this site. That's all it takes to bring a message board down. Congrats, and keep up the good posting in regards to football.



Thank you for mimicking what I've been saying. That takes guts. Of course you are right. Well put. Like I said, it WILL even out, because everyone will have a crappy game. Basically, after 16 samples you get a clear read, not 6. He is top 10-12 (top 10 is hard, there are too many guys with mega weapons, but top 12, with intangibles, is a great pick).


When you can't argue the facts, opinions, and points being made... hey, why not resort to trolling, derailing, and hijacking threads with off-topic posts? Seems to be the norm around here, regardless of the thread starter. Talking football is frowned upon amongst a good 10-15 members of this site. That's all it takes to bring a message board down. Congrats, and keep up the good posting in regards to football.

JFFQ, you bring a tear to my eye. I feel you, brother. We're with him, bro. He will be vindicated!


Sanchez has, numerous times this year, checked down to LT with a wide-open receiver. Which takes me back to a conversation we had a while back... he's flying through his progressions to get to the check-down and not seeing the whole play because he's so worried about getting the ball out fast.

Yeah, which brings us back to why you were wrong. He's not really going through his progressions if he's missing wide-open receivers.


When you can't argue the facts, opinions, and points being made... hey, why not resort to trolling, derailing, and hijacking threads with off-topic posts? Seems to be the norm around here, regardless of the thread starter. Talking football is frowned upon amongst a good 10-15 members of this site. That's all it takes to bring a message board down. Congrats, and keep up the good posting in regards to football.

This is just a thought, because I like you. You're a good fan with a good heart, and a great addition to the board.

The facts, the points, the opinions have been argued ad nauseam. A new thread with a "different" take gets old after a while. It's the same stupid talking points over and over again. The reality is, even if Sanchez is improving... it's not really by a great deal... if any at all. That is reality.

The pro-Sanchez guys will make statements like the one above after cherry-picking stats to their liking. The haters will point out the absurdity of making a case of this nature and claiming you're basing it on "facts".

What's my suggestion? Stop trying to convince people. It's easy to see he's not that much better than he ever was before, but that doesn't mean the book's closed, and it doesn't mean he hasn't improved.


When you can't argue the facts, opinions, and points being made... hey, why not resort to trolling, derailing, and hijacking threads with off-topic posts? Seems to be the norm around here, regardless of the thread starter. Talking football is frowned upon amongst a good 10-15 members of this site. That's all it takes to bring a message board down. Congrats, and keep up the good posting in regards to football.

What facts? The fact is he's not improving as much as you'd like to see..

If Jason works his way in here, I'm sure he'll be happy to show you..

again, his completion % is still lower than Cam Newton's and Andy Dalton's.. why are you touting it as a sign of great things to come?



I'm pretty sure Dalton's completion percentage is going to drop considerably after he plays the Ravens.


When you can't argue the facts, opinions, and points being made... hey, why not resort to trolling, derailing, and hijacking threads with off-topic posts? Seems to be the norm around here, regardless of the thread starter. Talking football is frowned upon amongst a good 10-15 members of this site. That's all it takes to bring a message board down. Congrats, and keep up the good posting in regards to football.

Let me help you out, because you're new and JiF likes you. When you start a straw-man argument that uses 20,000 words to essentially say, "Based upon my genius, here's why what you think you're seeing is WRONG: 1) you're stupid, and 2) this analysis is built around an all-but-useless, fake statistic in QB rating," you're immediately subjecting yourself to getting sh*t upon.

In sum, methinks you doth protest too much.


Yeah, which brings us back to why you were wrong. He's not really going through his progressions if he's missing wide-open receivers.

It's tough, we're obviously not in his head... but it looks like he goes through them too fast and goes to the check-down before the play develops.

Either he looks for a split second and moves on, or he's not even giving them a look. Impossible for you and me to figure out. Either way, the result is he's getting to the check-down too fast.

