The productivity paradox refers to the slowdown in productivity growth in the United States in the 1970s and 1980s despite rapid development in the field of information technology (IT) over the same period. As highlighted in a widely cited article[1] by Erik Brynjolfsson, productivity growth slowed at the level of the whole U.S. economy, and often within individual sectors that had invested heavily in IT, despite dramatic advances in computer power and increasing investment in IT. Similar trends were seen in many other nations.[2] While the computing capacity of the U.S. increased a hundredfold in the 1970s and 1980s,[3] labor productivity growth slowed from over 3% in the 1960s to roughly 1% in the 1980s. This perceived paradox was popularized in the media by analysts such as Stephen Roach and later Paul Strassmann. The concept is sometimes referred to as the Solow computer paradox, in reference to Robert Solow’s 1987 quip, “You can see the computer age everywhere but in the productivity statistics.”[4] The paradox has been defined as a perceived “discrepancy between measures of investment in information technology and measures of output at the national level.”[5]