Dialogue: New Chapter in the History of Computing

Published by 蔣楠 on 2020-08-29

Distinguished Guest


Martin Campbell-Kelly is emeritus professor in the Department of Computer Science at the University of Warwick, where he specializes in the history of computing.

Professor Campbell-Kelly is a Fellow of the British Computer Society and a Fellow of the Learned Society of Wales. He is a member of the ACM History Committee, a committee member of the BCS Computer Conservation Society, and a trustee of the National Museum of Computing. He is a member of the editorial boards of the IEEE Annals of the History of Computing, the International Journal for the History of Engineering and Technology, and the Springer Series in the History of Computing.

Professor Campbell-Kelly is also the first author of Computer: A History of the Information Machine (Third Edition).


Dialogue with Professor Campbell-Kelly

(Chinese transcript)

Q: A mathematician, engineer and inventor, Charles Babbage is considered by some to be the "father of the modern computer." In addition to his well-known Difference and Analytical Engines, the former Lucasian Professor of Mathematics at Cambridge was remarkably talented in other fields: he was the author of the most influential economics classic of the 1830s, On the Economy of Machinery and Manufactures, and the man who solved the Vigenère cipher during the Crimean War. Do you think Babbage ranks with Alan Turing and John von Neumann in the history of computing?

A: Charles Babbage is better described as the "pioneer of the computer" rather than the "father of the modern computer." His engines were an intellectual triumph, but they had no influence on modern computing. Babbage was a towering intellect and polymath. He was an authority in many fields including mathematics, engineering design, and economics. If it were not for the invention of the computer, he would today be known as the leading industrial economist of his day. Because of the information explosion in the twentieth century it is not possible to compare the stature of Babbage with that of Alan Turing or John von Neumann. In Babbage's era one could be an expert in several broad fields of study. In Turing and von Neumann's day one could only be a world expert in a few narrow subject areas.

Q: Having served as the training ground of Silicon Valley in the 20th century, Fairchild Semiconductor spawned many leading semiconductor manufacturers, including Intel and AMD, while Kleiner Perkins, Sequoia Capital and other top Silicon Valley venture capital firms carry Fairchild's genes. PayPal seems to be on a par with Fairchild in the early 21st century: the founders of Tesla, SpaceX, YouTube, LinkedIn and many other tech companies came from PayPal. In your opinion, what are the reasons that Fairchild Semiconductor and PayPal were able to spread their influence so widely?

A: The cases of Fairchild Semiconductor and PayPal are quite different. The "Fairchildren" left Fairchild because of the firm's autocratic management style and limited vision. The PayPal diaspora had experienced strong leadership and outstanding vision, and were confident they could emulate these in different ventures. What the examples have in common is that both benefitted from Silicon Valley's intellectual infrastructure—universities (notably Stanford University) and a plentiful supply of highly educated technical workers. Education was the single most important factor that set Silicon Valley apart from other geographical regions. The model has been emulated worldwide. In all cases access to capital is vital, but private-sector venture capital, state-provided funds, or a mixture are equally effective.

Q: In a paper published in 1994, American mathematician Peter Shor proposed Shor's algorithm, proving that quantum computers could in theory break public-key encryption algorithms such as RSA. Do you think the security community will see quantum computers as a game-changer in the future?

A: The advent of quantum computers means that the world will have to invent new methods of encryption. However, time is on our side. The first quantum computers will be very expensive and there will be very few of them. Those most affected by insecure encryption will be state security organizations, the military, and big financial institutions. There are very strong incentives for these organizations to prepare for the advent of quantum computers by devising new encryption techniques or other secure systems. No doubt research is ongoing in these organizations, but we have not been told about it. In the wider world, there are good and bad prospects. Much organized crime is now facilitated by secure communications. When the police and crime-fighting organizations can decrypt those communications the world will be a safer place. The bad prospect is that governments will be able to harvest the secret communications of legitimate opposition groups.
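The exchange above turns on the fact that RSA's security rests entirely on the difficulty of factoring a large number, which is exactly the problem Shor's algorithm solves efficiently on a quantum computer. The following Python sketch illustrates the point with toy numbers; the brute-force factoring loop merely stands in for the quantum step, and the key sizes bear no resemblance to real cryptographic practice.

```python
from math import gcd

# Toy RSA key generation (real keys use primes hundreds of digits long).
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)      # Euler's totient, kept secret
e = 17                       # public exponent
assert gcd(e, phi) == 1
d = pow(e, -1, phi)          # private exponent: modular inverse of e (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)            # encryption: m^e mod n
assert pow(ciphertext, d, n) == message    # decryption with the private key

def attack(n, e, ciphertext):
    """Recover the plaintext given only the public key and the ciphertext.

    Trial division stands in here for Shor's algorithm: anyone who can
    factor n can reconstruct the private key and read the message.
    """
    for candidate in range(2, n):
        if n % candidate == 0:
            p_found, q_found = candidate, n // candidate
            break
    phi_found = (p_found - 1) * (q_found - 1)
    d_found = pow(e, -1, phi_found)
    return pow(ciphertext, d_found, n)

assert attack(n, e, ciphertext) == message
```

The same collapse applies to any public-key scheme whose trapdoor is factoring or the discrete logarithm, which is why post-quantum cryptography looks to problems, such as lattice-based ones, for which no efficient quantum algorithm is known.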

Q: The top spot in the TOP500 supercomputer ranking now goes to the ARM-based Japanese supercomputer Fugaku, announced on June 22, 2020, which turned in a High Performance Linpack result of 415.5 petaflops. But some scholars believe that supercomputers, along with the underlying semiconductor technologies, are approaching the limits of basic science, and quantum computing is seen as the direction of future development. In recent years, technology giants like Google, Microsoft, IBM and Alibaba have entered the field of quantum computing. What do you think of the future of classical computers and quantum computers?

A: The first classical stored-program computers of the early 1950s were almost entirely used for scientific calculations. However, the so-called von Neumann architecture is "universal" and has been applied in many information processing and communications domains—from the business mainframes of the 1960s up to the smartphones of today. Like the first stored-program computers, the first quantum computers will be primarily used for scientific computation and "number crunching." It is not known whether or not quantum computers will be "universal" in the same way as the classical computer. But who knows? We are only on the opening page of this new chapter in the history of computing.

Q: Over the past 50 years, Moore's law has been regarded as the golden rule for successfully predicting trends in semiconductor technology. As the development of the semiconductor industry slows, however, many companies, including Nvidia, AMD and TSMC, believe Moore's law is dead or soon will be. Is the law still a useful guide today?

A: The ever-increasing performance of computers we have witnessed for fifty years has been driven by Moore's Law and manufacturing improvements. Today's processors are millions of times faster and use vastly less electrical power. Although Moore's Law is coming to its end, there are many ways to make computers more powerful by using specialized chips—for example in signal processing, communications, or encryption. We can also gain speed through parallelism and the use of multiple processors. The computer revolution still has some way to go, and we can expect hundred-fold improvements in speed and capacity over the next decade or two. However, hardware improvements cannot go on forever, so more resources will then be put into software to make better use of the computers we have.

Q: London-based artificial intelligence company DeepMind, developer of AlphaGo, is well known in China. Google's acquisition of DeepMind, believed to be crucial to its AI strategy, is seen as one of the most important acquisitions ever made by a technology company. Also, London is a powerhouse in artificial intelligence on a par with Silicon Valley. What factors do you think keep the United Kingdom ahead in AI research?

A: Breakthroughs in AI will be primarily intellectual, although physical resources will play a part too. The primacy of thought over material is a British tradition. Two examples from computer history illustrate this.

Maurice Wilkes led the construction of the first practical stored-program computer, the EDSAC, which first ran in May 1949. He was once asked how it was possible that his small, under-resourced team in Cambridge, England could have beaten the much richer American laboratories. He explained that it was because he had sufficient resources, but only just sufficient. The modest funding ensured that he kept things simple, used known engineering techniques, and kept his small team focused on a single objective.

Alan Turing believed that thought was far more important than material in research. In 1937 he published his classic paper "On Computable Numbers, with an Application to the Entscheidungsproblem," the origin of the theoretical Turing machine and one of the founding texts of computer science. Turing knew that trying to build a real machine would be a distraction, and he did not attempt it. During the war, he was one of the code-breakers at Bletchley Park in England. The machine he designed for breaking the Enigma codes was a slow mechanical device, hundreds of times slower than an electronic computer. But because of its subtle design, it succeeded where brute force would have failed.

Britain has a world-class AI research community because the tradition of thought above material lives on in the British research ethos.

Q: Artificial intelligence has long been one of the research fields attracting the most attention, with major world powers competing for talent. What are your suggestions for students planning to devote themselves to AI research?

A: An understanding of human-computer interaction is at the heart of user-centered computing. One of the great historical examples of this was the development of the graphical user interface (GUI) at Xerox PARC in the 1970s. The GUI was invented not by computer scientists alone, but by including psychologists and social scientists in the development team. Although Xerox was unable to exploit the GUI successfully, it eventually found its way into the Apple Macintosh computer, and then into Windows and subsequent personal-computer operating systems.

AI, too, is concerned with human-computer interaction, but on an even greater scale. Talented computer engineers and scientists will be at the heart of AI laboratories, but they will not be able to achieve the necessary breakthroughs by technology alone. They will need psychologists and social scientists, and researchers will need to understand and respect their unique outlooks. If possible, students entering the field should spend a good fraction of their time studying beyond the realm of computing. This could be in psychology or social science, or even music or history. This is sometimes called one's "intellectual hinterland." The broader a person's point of view, the better he or she can address the challenges of AI research.
