News

November 15, 2009

Richard Nelson gave an interview in June 2002 to Margherita Fronte that still makes a good read today. In the interview, he explains why he developed evolutionary economics, discusses the relationship between science and technology, and takes a stand on what kinds of inventions should be patentable. Click on Read more for the full interview, or go to the website of the Fondazione Bassetti.

Professor Nelson, how did you develop the concept of evolutionary economics?

Human knowledge changes gradually over time, and we should by no means take it for granted that these changes are driven by any precise sense of purpose. Indeed, the concept of evolutionary economics is a very natural one. Much of classical economic theory can be viewed as considering progress in evolutionary terms: as guided by processes that cannot be planned in detail, but whose results are selected by the economic, political and social mechanisms that determine their development and impact on society. Modern economic theory, however, has forgotten all this. My own view is that the most obvious way to describe the processes involved in innovation is to do so in evolutionary terms.

Technology has a fundamental role as a driving force of economic growth, a concept that was already clear at the time of Adam Smith and the first industrial revolution. As technological innovation was introduced, new organisations, institutions and forms of power came into being, which by spreading the technologies exerted a long-term influence on society and economic development.

Does technological innovation follow on from science, or is it the possible practical applications of a discovery that determine the development of the various sectors of scientific research?

I believe that, to a greater extent than people are generally willing to admit, it is the possible applications, and therefore technology, that drive science, although the process actually works both ways. The factors determining the development of any branch of science may well be the practical spin-offs, but also the quest for explanations of how an innovation that has already been introduced actually works. The history of medicine is full of examples of this type. Recently, my studies have focused on medicine, and I was struck by how infrequently important advances in the basic science have led to the introduction of cutting-edge new medical practices. Rather, the opposite has happened. We need only think of the way vaccines developed. The invention was the result of a fundamental insight, but it was only later on that immunology gave us a detailed explanation of why vaccinations protect us from disease. Similar phenomena can be found in electronics. In this sector the situation is more varied, because in many cases the innovations were preceded not only by the inventor's idea but also by a knowledge of physics. However, some sectors of physics developed precisely to reach a more complete understanding of how already-invented objects, some of which had produced important changes in the economy and society, actually worked: the transistor, for example.

If it is technology that leads science, is it still technology that determines which branches of scientific research receive funding?

Yes, and this is particularly true for some sectors. Let's take biotechnology, for example. The development of biotechnology has had a strong impact, especially in the United States, on university-level scientific research. People have also let themselves be carried away by excessive enthusiasm: in fact, if we analyse the businesses that have been set up over the last 15 years, the number operating in the biotechnology sector and achieving significant economic results is actually very low. Something similar happened with IT, with the new economy "bubble".

The problem is that in the United States, especially in the pharmaceuticals and biotechnology sectors, the economic link between university research and industry has become too close. Up until the 1950s the universities had a firm policy of forbidding patentability, but this tendency has been reversed. More recently, even public funding for research has been conditioned by patents, and the universities have tried to obtain additional funding by selling licences to industry. The Bayh-Dole Act of 1980 enabled the universities to keep their patents and set up their own technology transfer offices to issue licences to industry. In actual fact, even if it is difficult to make a precise calculation, many universities did not achieve the hoped-for financial returns. Betting too heavily on patents prevents people from planning research projects that are wider in scope and structured with the longer view in mind. Moreover, companies are increasingly asking the universities for exclusive licences, and this causes disputes. Although exclusive rights can be useful in some cases, in my view university policy should be designed to permit the widest possible use of ideas.

In what cases can patents be useful?

Only results that are very close to practical application, in terms of the amount of further research needed to achieve it, should be patented. Indeed, if exclusive rights were not granted in such cases, no company would invest to pursue the technological innovation that might derive from the research, given the risk that competitors might do the same or get there first. Results that can be used in a wide range of other studies, on the other hand, should not be patented, such as the human genome sequence. Nor should results that are themselves a research tool be patented: these should, rather, be disseminated as widely as possible.

What can the Europeans learn from the US experience?

Europeans are perhaps a bit too enthusiastic in their view of events in the United States. The importance of patents for the development of scientific research needs to be put into perspective. The risk is that the universities might become part of business. The economic relations between industry and scientific research institutes threaten the freedom of action of the latter, compromising their independence and their fundamental role as objective bodies. This is particularly true of pharmaceutical research. Researchers funded by industry find themselves in a conflict of interests which hampers their critical judgement.

The publishers of the principal medical reviews recently reached an agreement that if the authors of the articles submitted were in a situation where a conflict of interests might arise they should declare this. Do you think this might be useful?

I think that, given the situation as it stands today, all initiatives can potentially be useful. I don't believe, however, that the solution to the conflict-of-interests problem lies in any one initiative, because the question is complex and has developed over the years. However, I feel that the fact that reviews publishing articles by scientists from the academic world and from industry have not thus far asked their authors to declare any potential conflict of interests might have contributed to the problem. It is certainly useful for them to do so now. However, the researchers could lie… which is why the solution cannot be found in any single initiative.

(Originally published on 25 June 2002 by Fondazione Bassetti)

Margherita Fronte collaborates with the Giannino Bassetti Foundation
