Roundtable: The Future of Computing and Telecommunications
As part of its 10th anniversary symposium on May 16, 1996, the National Research Council’s Computer Science and Telecommunications Board convened a panel of experts to speculate about what the future holds. This is an abridged version of their discussion. The complete version along with the rest of the proceedings from the symposium will be published in 1997 as Defining a Decade: CSTB’s Second 10 Years.
The participants were Edward A. Feigenbaum, chief scientist of the U.S. Air Force and the founder of the Knowledge Systems Laboratory at Stanford University; Juris Hartmanis, the Walter R. Read professor of engineering and computer science at Cornell University; Robert W. Lucky, corporate vice president of applied research at Bellcore and the former executive director of the Communications Sciences Research Division at Bell Labs; Robert Metcalfe, vice president/technology at International Data Group, the inventor of Ethernet, and the founder of 3Com Corporation; Raj Reddy, dean of the School of Computer Science at Carnegie Mellon University and the Herbert A. Simon university professor of computer science and robotics; and Mary Shaw, the Alan J. Perlis professor of computer science, associate dean for professional programs, and a member of the Human Computer Interaction Institute at Carnegie Mellon University. The moderator was David D. Clark, senior research scientist at the MIT Laboratory for Computer Science.
Clark: In discussions of computing, we often hear the phrase “the reckless pace of innovation in the field.” It’s a great phrase. I have a feeling that our field has left behind the debris of half-understood ideas in its rush to plow into the future. One of the questions I wanted to ask the panel is: Do you think that in the next 10 years we’re going to grow up? Are we going to mature? Are we going to slow down? Ten years from now, will we still say that we have been driven by the reckless pace of innovation? Or will we, in fact, have been able to breathe long enough to codify what we have actually understood so far?
Reddy: You are making it appear as though we have some control over the future. We have absolutely no control over the pace of innovation. It will happen whether we like it or not. It is just a question of how fast we can run with it.
Clark: I wasn’t suggesting that we had any control over its pace, but you’re saying you think it will continue to be just as fast and just as chaotic?
Reddy: And most of us will be left behind, actually.
Lucky: At Bell Labs, we used to talk about research in terms of 10 years. Now you can hardly see two weeks ahead in our field. The question of what long-term research is all about remains unanswered when you can’t see what’s out there to do research on.
Nicholas Negroponte said recently that, when he started the Media Lab, his competition came from places like Bell Labs, Stanford University, and U.C. Berkeley. Now he says his competition comes from 16-year-old kids. I see researchers working on good academic problems, and then two weeks later some young kids in a small company are out there doing it. You may ask: Where do we fit into this anymore? In some sense, particularly in this field, I think there must still be good academic fields where you can work on long-term problems, but the future is coming at us so fast that I sometimes find myself looking in the rear-view mirror.
Shaw: I think it will keep moving; at least I hope so. What will keep it moving is the demand from outside. In the past few years, we have begun to get over the hump: people who aren’t in the computing priesthood, and who haven’t invested years in figuring out how to make computers do things, can now actually make computers do things. As that becomes easier (it’s not easy yet), more and more people will be demanding services tuned to their own needs. They are going to generate the demand that will keep the field growing.
Hartmanis: We can project reasonably well what silicon technology can yield during the next 20 years; this growth in computing power will follow the established pattern. The fascinating question is: What is the next technology to accelerate this rate and to provide the growth during the next century? Is it quantum computing? Could it really add additional orders of magnitude? Is it molecular or DNA computing? Probably not. The key question is: What technologies, if any, will complement and/or replace the predictable silicon technology?
Clark: Is there any real innovation in our field?
Shaw: We have had some innovation, but it hasn’t been our own doing. The things that have started to open the door to people who are not highly trained computing professionals (spreadsheets and word processors, for example) have come at the academic community from the outside, and had very little credibility for a long time. Most recently, there has been the upsurge of the World Wide Web. It is true that Mosaic was developed in a university, but not exactly in the computer science department. Those are genuine innovations, not just nickel-and-dime things.
Feigenbaum: Until now, there has been a revolution going on that no one really recognizes as a revolution. That is the revolution of packaged software, which has put immense amounts of programming at our fingertips. This is the single biggest change since 1980. The future is best seen not in terms of changing hardware or increased processor speed, but rather in terms of the software revolution. The revolution will be in software building, which is now done painstakingly, in a craft-like way, by the major companies producing packaged software. They create a “suite,” a cooperating set of applications, which takes the coordinated effort of a large team. We are now living in a software-first world.
What we need to do now in computer science and engineering is to invent a way in which everyone can do that at his or her desktop; we need to enable people to “glue” packaged software together so the packages work as integrated systems. That will be a very significant revolution.
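[A minimal sketch of what such desktop “glue” might look like, assuming two hypothetical packages that expose programmable interfaces; the package stand-ins and data below are invented for illustration.]

```python
# Hypothetical stand-in for a spreadsheet package's programmable interface.
def spreadsheet_totals(rows):
    """Sum amounts by category, as a spreadsheet might."""
    totals = {}
    for category, amount in rows:
        totals[category] = totals.get(category, 0.0) + amount
    return totals

# Hypothetical stand-in for a word processor package's interface.
def word_processor_report(title, totals):
    """Format the totals as a simple report document."""
    lines = [title, "-" * len(title)]
    for category in sorted(totals):
        lines.append(f"{category}: {totals[category]:.2f}")
    return "\n".join(lines)

# The "glue": a few lines a user might write at the desktop to make
# two independently packaged applications behave as one system.
expenses = [("travel", 120.00), ("meals", 34.50), ("travel", 80.25)]
print(word_processor_report("Q1 Expenses", spreadsheet_totals(expenses)))
```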
I think the other revolution will be intelligent agents. Here, the function of the agent is to allow you to express what it is you want to accomplish and to provide the agent with enough knowledge about your environment and your context to reason out exactly how to accomplish it.
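[A toy sketch of that idea: the user states a goal, and the agent expands it into concrete actions using knowledge of the environment. The rules, goal, and actions are invented for illustration.]

```python
# Knowledge the agent has about this particular environment:
# for each goal, the subgoals or steps known to achieve it here.
RULES = {
    "document_printed": ["document_formatted", "send_to_printer"],
    "document_formatted": ["apply_stylesheet"],
}

def plan(goal, rules, primitive_actions):
    """Expand a stated goal into primitive actions by backward chaining."""
    if goal in primitive_actions:
        return [goal]
    steps = []
    for subgoal in rules.get(goal, []):
        steps.extend(plan(subgoal, rules, primitive_actions))
    return steps

actions = {"apply_stylesheet", "send_to_printer"}
print(plan("document_printed", RULES, actions))
# -> ['apply_stylesheet', 'send_to_printer']
```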
Last, I’ll say something about the debris. I can bring my laptop into this room, take an electric cord out of the back and plug it into the wall. I get electricity to power my computer, anywhere. But I cannot take the information wire that comes out of the back and plug it into the wall anywhere. We do not yet have anything like an information utility. Yes, I can dial the Internet on a modem, but that is a second-rate adaptation to an old world of switched analog telephones. That is not the dream. The architecture of the Internet, wonderful as it may seem, has frustrated the dream of the information utility.
Metcalfe: Others are better able to discuss the structure of the Internet. I point to what Gordon Moore has recently called Grove’s Law: that the communications bandwidth available doubles only every 100 years. It is a description of the sad effects of the structure of the telecommunications industry, which would be in charge of putting those information outlets where you want them. That industry has been under-performing for 40 to 50 years, and now we have to wake it up.
Lucky: We’re pushing something we would like to call IP dialtone. There was an interesting interview with Mary Modahl in last month’s Wired magazine. They asked her if voice on the Internet would really take over, and she said, “No.” She said that real-time voice is a hobby, like CB radio, not a permanent application. I actually think it may turn out that way in the future: voice will become the smaller network, and the IP infrastructure will really take over. IP dialtone will be the main thing. I wouldn’t rebuild the voice network. I would just leave it there and build this whole new network of IP dialtone networks.
Clark: Another thing that marks our field is the persistence of stubborn, intractable problems that we have no idea how to solve. An obvious problem is (looking at it abstractly) our ability to understand complexity, or (looking at it more concretely) our ability to write large software systems that work. When we go to CSTB’s 20th anniversary and look back, do you think we’re going to see any new breakthrough? I’m thinking about the point Ed Feigenbaum made that people are going to be able to engineer a software package at their desks. I said, “Oh no. It’s done by gnomes inside Microsoft.” Won’t it be done by gnomes inside Microsoft for the next 10 years?
Shaw: I think it’s a very big problem, but Ed pointed out a piece of it: the parts don’t fit together. We have, though, this myth that someday we’re going to be able to put software systems together out of parts just like Tinker Toys. Well, folks, it ain’t like that. It’s more like having a bathtub full of Tinker Toys, Erector Sets, Lego blocks, Lincoln Logs, and all of the other building kits you ever had as a kid, and reaching into it, grabbing three pieces at random, and expecting to build something useful.
I do believe that we will be able to make progress. Breakthrough is a pretty big word, but I think we will at least be able to make significant progress on articulating those distinctions, and helping each other understand when we have the problem, and what, if anything, we can do about it.
I have the same problem that Ed does, except mine is at the software level. I put a document on a floppy disk and I take it someplace. Maybe the text formatter I find when I get there is the same one the document was created with; how fortunate. Even so, the fonts on the machine aren’t the same, and the default fonts in the text formatter aren’t the same, and it probably takes me half an hour to restore the document to legibility just because the local context changed. Then, of course, there is the rest of the time, when I find a different document formatter entirely. This is another example of having parts that exist independently that we want to move around and put together. Once again, I think the big problem is our inability to articulate the assumptions the parts make about the context they need to have.
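[A small sketch of the point: two independently built parts, each correct in isolation, that disagree about an unstated contextual assumption (here, the units of a measurement). The names are invented for illustration.]

```python
# Part A: assumes the page width it receives is in inches.
def layout_engine(page_width):
    return f"{page_width * 72:.0f} points wide"   # 72 points per inch

# Part B: reports a page width, but in centimeters. The unit is an
# implicit assumption that appears nowhere in the interface.
def document_reader():
    return 21.0  # A4 paper width, in centimeters

# Glued together naively, each part is correct in isolation and the
# combination is silently wrong: 21 cm is treated as 21 inches.
print(layout_engine(document_reader()))  # "1512 points wide" -- nonsense
```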
Audience: I say we just had a breakthrough. How many breakthroughs per decade are you entitled to? The breakthrough we just had is the Web. You had to cobble together a few million computers, a whole bunch of servers, all kinds of legacy databases and documents, and all kinds of other stuff. You can patch together huge amounts of stuff and make it accessible to millions of people. What is all this whining and moaning about? Furthermore, I would like to point out that if you would like your document to be portable, just write it in vanilla ASCII and you won’t have any problems with the portability.
Shaw: I’m really good at ASCII, and ASCII art, too, but we were planning the next decade’s breakthrough.
Metcalfe: At the risk of being nasty, what I just heard is that we need standardization. That’s all I heard. I didn’t hear that all this money we are spending on software research isn’t resulting in any breakthroughs, or that whatever breakthroughs it is producing are not being converted because we just can’t standardize on them. Is that right? Is that what I heard?
Shaw: Standardization suggests that one size fits all, and if everyone would “just do it my way,” everything would be just fine. That implies that there can be one approach that suffices for all problems.
Lucky: Isn’t standardization what made the Web? We all got together behind one solution; it may not fit everybody, but we empowered everybody to build on the same thing, and that’s what made the whole thing happen.
Clark: You know, one of the things that was said at the beginning of this decade is that the nineties would be the decade of standards. And as some smart aleck commented: The nice thing about standards is that there are so many to pick from. In truth, I think that one of the things that has happened in the nineties is that a few standards (not because they are necessarily best) happened to win some sort of battle.
Lucky: That’s the tragedy and the great triumph at the same time. You can build a better processor than Intel or a better operating system than Microsoft. It doesn’t matter. It just doesn’t matter.
Clark: How can we hurtle into the future at a reckless pace and, simultaneously, conclude that it is all over, that it doesn’t matter, that you can’t do something better because it’s all frozen in standards?
Metcalfe: There seems to be reckless innovation on almost all fronts except two: software engineering and the telco monopolies.
Clark: I look at the Web, and the fact is that we have a de facto standard out there, a regrettable set of de facto standards in HTML and HTTP. When you try to innovate by saying it would be better if URLs were some other way, the answer is, “Yes, but there are already 50 million of them out there, so forget it.” So I’m not sure I believe your statement that there is rapid innovation everywhere, except for those two areas.
Metcalfe: I go back to Butler Lampson’s comments. Just last week there was rapid innovation in the Web.
Lucky: It’s possible that if all the dreams of the Java advocates come true, it will permit innovation on top of a standard. That is one way to get at this problem. We don’t know how it’s going to work out, but at least that would be the theory.
Clark: Many people said that progress in silicon technology is the engine that drove us forward. I think that’s true, but I’m not sure it’s the only engine.
Lucky: At the base, silicon has driven the whole thing. It has really made everything possible. That is undeniable, even though we spend most of our time working on a different level. That is the engine in the basement that really is doing it.
Metcalfe: The old adage: Grove giveth and Gates taketh away.
Clark: What does the future hold for academic research? If I have a good idea, I can put one or two people to work on it, and industry could marshal the equivalent of 100 man-years. What role can a poor academic play? If all of the academic researchers died, what impact would it have on the field in 10 years?
Reddy: No students.
Lucky: It’s like the NBA draft. Students are going to be leaving early, trying to be Marc Andreessen.
Clark: That’s happened to me. I cannot get them to stay. There is no doubt it is a serious issue for me. Why does it matter?
Lucky: I think you’re right, actually; you’re doomed.
Metcalfe: I think it is a fact that, right now, industrial advancement in technology is outstripping the universities. I see that as a temporary problem that we need to fix. Some of us need to stop working on all these short-term projects in the universities and somehow leap out ahead of where the industry is now.
Clark: You can’t outrun them. If it’s a hardware area, you can hallucinate something so improbable you just can’t build it today. Then, of course, you can’t build it in the lab either. But in the software area, there really is no such thing as a long-term answer. If you can conceive it, somebody can reduce it to practice. So I don’t know what it means to be long-term anymore.
Hartmanis: I don’t believe what was said earlier, that if you invent a better operating system or a better Web or computer architecture, it doesn’t matter. I think it matters a lot. It’s not that industry takes over directly what you have done, but the students who move into industry take those ideas with them, and the ideas do show up in development agendas and products. I am convinced that that assessment is far too pessimistic about the influence of academic research.
Feigenbaum: On the question of long-term versus short-term, university researchers should attend to longer-range issues. Bill Joy of Sun Microsystems says that for Sun 18 months is a long time. He said he wouldn’t entertain anything that is more than 24 months out.
I was at a DARPA meeting recently at which they were talking about advances in parallel computer architectures. They were focusing on the very advanced work of the Stanford Computer Systems Lab on the FLASH architecture. That project has been going on for more than a decade now, evolving through several different related architectures. That kind of sustained effort is the role of the university.
Lucky: I just want to say, in support of academics, that we are all proud of what the Internet and the Web have done. This was really created by a partnership between academia and the government. The industry had very little to do with it. The question for all of us is whether this is a model that can be repeated. Can government do something again like they did with ARPANET, something that will have the tremendous effects for all of us that this has had two decades later?
Feigenbaum: Dave, before you leave this subject, though, I would like to say something about a paradox or a dilemma that university researchers find themselves in. If you go around and look at what individual faculty people do, you find smallish things in a world that seems to demand more team and system activity. There is not much money around to fund anything more than small things, basically to supplement a university professor’s salary and fund a graduate student or two.
Partly that’s because there is a general lack of money. Partly it’s because we have a population explosion problem and all these mouths to feed. All the agencies that were feeding relatively few mouths 20 years ago are now feeding maybe 100 times as many assistant professors and young researchers, so the amounts of money to each are very small. That means that, except for the occasional brilliant meteor that comes through, you have relatively small things being done. When they get turned into anything, it is because the individual faculty member or student convinces a company to spend more money on it. Subsequently, the world thinks it came out of the industry.
Audience: If we keep training students to look inside their own heads and become professors, then we lose the path of innovation. If we train our students to look at what industry is doing and at what customers and people out there using this stuff can’t do (not to be terrorized by what they can do, but to look at where they are running into walls), then our students start appreciating these as the sources of really hard problems. I think that focus is lacking in academia to some extent, and looking outward at real problems gives you focus for research.
Hartmanis: Yes. I fully agree with you. Students should be well aware of what industry is and is not doing. Students see problems with software and with the Internet. They go out and work summers in industry. They are not in any sense isolated; they know what is going on. Limited funding may not permit big university projects, but students are quite well informed about industrial activities.
Shaw: Earlier I mentioned three innovations that came from outside the computer science community: spreadsheets, text formatting, and the Web. They came about because people outside the community had something they needed to do, and they weren’t getting any help. We’ll get more leads by looking not only at the problems that computer scientists have, but also at the problems of people who don’t have the technical expertise to cope with them. I don’t think the next innovation is going to be an increment along the Web, or an increment on spreadsheets, or an increment on something else. How are we going to be the originators of the next killer app, rather than waiting for somebody outside to show it to us?
Feigenbaum: I have talked to a lot of people abroad-academics and industry people in Japan and in Europe-about our computer science situation, especially on the software side. We are the envy of the world in terms of the connectedness of our professors and our students to real-world problems. Talk about isolation-they think they are isolated relative to us.
Clark: Now it is time to give each of the panelists two or three minutes to tell us the thing about the future that matters the most to you.
Reddy: As Bob Lucky pointed out, there are different kinds of futures. If you go back 40 years, it was clear that certain things were going to have an impact on society: things like communications satellites, predicted by Arthur C. Clarke; the invention of the computer; and the discovery of the structure of DNA. At the same time, none of us had any idea of semiconductor memories or integrated circuits. Nor did we imagine the ARPANET. All of these came to have a major impact on society.
So my hypothesis is that there are some things we now know that will have impact. One is digital libraries. The term digital library is a misnomer, the wrong metaphor; it ought to be called digital archive, bookstore, and library. It provides access to information at some price, including no price. In fact, NSF and DARPA have large projects on digital libraries, but they are mainly technology based, creating the technology to access information. Nobody is working on the problem of ubiquitous content, which includes not just books, but also music, movies, art, and lectures.
We have a Library of Congress with 30 million volumes; globally, the estimate is about 100 million volumes in all languages. The Government Printing Office produces 40,000 documents, consisting of six million pages, that are out of copyright. Creating a movement to get all this content on-line is critically important; it is not going to be done by any one country or any one group, it must be done globally. I think that is one of the futures that will affect every man, woman, and child, and we can do it.
Metcalfe: I would like to speak briefly on behalf of those efforts aimed at fixing the Internet. The Internet is one of our big success stories and we should be proud of it, but it is broken and on the verge of collapse. It is suffering numerous brown-outs and outages. About 90 percent of the people I talk to are generally dissatisfied with the performance and reliability of the Internet.
There is no greater proof of that than the proliferation of what are called intranets. The good reason that they build them is to serve internal corporate data processing applications, as they always have. The bad reason is because the Internet offers inadequate security, performance, and reliability for their uses. The universities, as I understand it, are currently approaching NSF to build another NSFNET for them. This is really a suggestion to not fix the Internet but to build another network for us.
Of course, the Internet service providers are also tempted to build their own copies of the Internet for special customers and so on. I believe this is the wrong approach. We need to be working on fixing the Internet. Lest you be in doubt about what that would include: it would mean adding facilities to the Internet by which it can be managed. I claim those facilities are not in the Internet because universities find management boring and don’t work on it. Fixing the Internet also means adding mechanisms for finance, so the infrastructure can grow through the normal communication between supply and demand in our open markets. And it means adding security. It is not the National Security Agency’s fault that we don’t have security in the Internet; it is because for years and years it has been boring to work on security, and no one has been doing it. Now we have finally started.
We need to add money to the Internet; not the finance part I just talked about, but electronic money that will support electronic commerce on the Internet. We need to introduce the concept of zoning in the Internet. The Communications Decency Act is an effort, although lame, to bring this about. On the Internet, mechanisms supporting freedom of speech have to be matched by mechanisms supporting freedom not to listen.
We need progress on the development of residential networking. The telecommunications monopolies have been in the way for 30 or 40 years, and we need to break those monopolies and get competition working on our behalf.
Shaw: I think the future is going to be shaped, as the past has been, by changes in the relationship between the people who use computing and the computing that they use. We have talked a lot today about software, and we have talked a little about the World Wide Web, which is really a provider of information rather than of computation at this point. I believe we should not think about those two things separately, but about their fusion as information services: computation and information, and also the hybrid of the two, active information.
On the Web, we have lots of information available as a vast undifferentiated sea of bits. We have some search engines that find us individual points. We need mechanisms that will allow us to search more systematically and to retain the context of the search. In order to fundamentally change the relation between the users and the computing, we need to find ways to make computing genuinely widespread and affordable and private and symmetric, and genuinely intellectually accessible by a wider collection of people.
I thank Bob for saying most of what I was going to say about the things that need to be done so that the networks become places to do real business, rather than places to exchange information among friends. In addition, I think we need to spend more time thinking about what you might call naive models; that is, ways for people who are specialists in something other than computing to understand the computing medium and what it will do for them, and to do so in their own terms so they can take personal control over their computing.
Lucky: There are two things I know about the future. First, after the turn of the century there will be one billion people using the Internet. The second thing I know is that I haven’t the foggiest idea what they are going to be using it for.
We have created something much bigger than ourselves, where biological phenomena like Darwinism and self-adaptive organization seem more relevant than the paradigms we are used to. The question is: How do we design an infrastructure in the face of this total unknown? There are certain things that seem to be an unalloyed good that we can strive for. Getting greater bandwidth out all the way to the user is something we can do without loss of generality.
On the other side, it is hard to find other unalloyed goods. For example, intelligence is not necessarily a good thing. I’ll just give you one example. Recently there was a flurry of e-mail on the Internet when one of the router companies announced that they were going to put an Exon box in their router. An Exon box would check all packets going by to see if they are adult packets or not. There was a lot of protest on the Internet, not because of first amendment principles, but because people didn’t want anything put inside the network that exercises control.
It’s hard to find these unalloyed goods. Bandwidth is good, but anything else you do on the network may later come back to bite you because of profound uncertainty about what is happening.
Hartmanis: I would like to talk more about the science part of computer science; namely, theoretical work in computer science, its relevance, and about some stubborn intellectual problems. For example, security and trust on the Internet are of utmost importance, and yet all the methods we use for encryption are based on unproven principles. We have no idea how hard it is to factor large integers, but our security systems are largely based on the assumed difficulty of factoring. There are many more such unresolved problems about the complexity of computations that are of direct relevance to trust, security, and authentication, as well as to the grand challenge of understanding what is and is not feasibly computable. Because of the universality of the computing paradigm, the quest to understand what is and is not feasibly computable is equivalent to understanding the limits of rational reasoning-a noble task indeed.
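[A toy RSA instance with deliberately tiny primes, illustrating the point: the secrecy rests entirely on the assumed difficulty of factoring the public modulus. This is a sketch, not a usable cryptosystem.]

```python
# Textbook RSA with tiny primes, for illustration only.
p, q = 61, 53                  # secret primes
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)   # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n) # only the holder of d can decrypt
assert recovered == message

# Anyone who can factor n = 3233 back into 61 * 53 can recompute phi
# and d and read the traffic. For large n, no efficient factoring
# method is known -- but, as Hartmanis notes, none is proven impossible.
```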
Feigenbaum: I would like to talk very briefly about artificial intelligence and the near future. There is a kind of Edisonian analog to this. Yes, we have invented the light bulb, and we have given people the plans to build the generators. We have given them the tools for constructing the generators. They have gone out and hand-crafted a few generators. There is one lamppost working here, or lights on one city block over there. A few places are illuminated, but most of the world is still dark. But the dream is to light up the world! Edison, of course, invented an electric company. So the vision is to find out what it is we must do-and I’m going to tell you what I think it is-and then go out and build that electric company.
What we learned over the past 25 years is that the driver of the power of intelligent systems is the knowledge that the systems have about their universe of discourse, not the sophistication of the reasoning process the systems employ. We have put together tiny amounts of knowledge in very narrow, specialized areas in programs called expert systems. These are the individual lampposts or, at most, the city block. What we need to build is a large, distributed knowledge base. The way to build it is the way the data space of the World Wide Web came about-a large number of individuals contributing their data to the nodes of the Web. In the case I’m talking about, people will be contributing their knowledge in machine-usable form. The knowledge would be presented in a neutral and general way-a way of building knowledge bases so they are reusable and extendible-so that the knowledge can be used in many different applications. A lot of basic work has been done to enable that kind of infrastructure growth. I think we just need the will to go down that road.
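[A minimal sketch of knowledge in machine-usable form: facts kept as neutral (subject, relation, object) triples that many different applications can reuse. The facts and queries are invented for illustration.]

```python
# A tiny shared knowledge base: neutral triples, not tied to any one
# application's data format.
KNOWLEDGE = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "contraindicated_with", "warfarin"),
    ("ibuprofen", "treats", "headache"),
]

def query(kb, subject=None, relation=None, obj=None):
    """Return every triple matching the given (possibly partial) pattern."""
    return [t for t in kb
            if (subject is None or t[0] == subject)
            and (relation is None or t[1] == relation)
            and (obj is None or t[2] == obj)]

# Two different "applications" reusing the same knowledge base:
print(query(KNOWLEDGE, relation="treats", obj="headache"))  # drug finder
print(query(KNOWLEDGE, subject="aspirin"))                  # drug profile
```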