INTERVIEW: PBS Frontline with Robert Steele on Hackers

PBS FRONTLINE

interview: robert d. steele


photo of robert d. steele

Founder, President, and Chief Executive Officer of Open Source Solutions, Inc. (OSS), he has twenty years of experience in national and defense intelligence, including clandestine collection, covert action, technical collection, and managing an offensive counterintelligence program. He was the senior civilian responsible for creating the Marine Corps Intelligence Center. He participated in the Hackers on Planet Earth (H2K) convention, sponsored by the hacker magazine 2600.

How did you move from your previous profession into the current one?

After many years as a spy, I had an opportunity to set up the Marine Corps Intelligence Center, to create it from the start. That's our nation's newest intelligence facility. I was responsible for hiring and managing analysts and for doing intelligence products. And, to my great shock, after having spent over a decade stealing secrets, I discovered that most of what we needed to produce intelligence was not secret, and was not available from the CIA; it was in the private sector. But we didn't have the knowledge, the money or the security permissions to go get it. And so that led me on a crusade to basically try and help governments . . . get smart about making better use of private sector knowledge.

Can you give an example of what you're talking about?

I'll give you a very practical example. The Aspin-Brown Commission was charged with reviewing the entire US international intelligence community. They invited me to a benchmark exercise–myself against the entire US intelligence community on an impromptu question, which was Burundi, in August of 1995.

Overnight, I got information with six phone calls. From Oxford Analytica, I got political-military studies on Burundi; from East View Publications, I got Russian military maps of Burundi; from Spot Image, I got commercial imagery of Burundi, cloud-free, less than three years old; from Jane's Information Group, I got order-of-battle information for the tribes, at a time when governments were only following the Burundi army; from Lexis-Nexis, the top ten journalists in the world, immediately available for debriefing; and from the Institute for Scientific Information, the top ten academics in the world, immediately available for debriefing. In other words, by knowing who knows what in the private sector, with six phone calls I was able to assemble a team whose knowledge of Burundi was vastly superior to that of any government intelligence community in the world.

What was the response to that finding in the exercise?

Shock. Denial. And for about 10 years, inaction. The Aspin-Brown Commission recommended that we spend significant amounts of money on open source intelligence, but real spies don't do open source. This is a real cultural issue. But now, 10 years after these lessons were brought forward, I think we're finally at the point where we're starting to see some elements of the intelligence community realize that if they don't get a grip on private sector knowledge, they'll become irrelevant. . . .

Give me a brief history of the internet relevant to the intelligence community.

From my point of view, the internet started in the 1970s, when the United States government needed a network for communicating among its research centers. . . . Then, over the years, it became something of a coffeeshop, a homebrew garage thing. It was popularized among the California techno-elites.

It did not actually hit the mainstream in the United States until the mid-1990s, and then it exploded beyond anyone's wildest imagination. All of a sudden it became something that anybody could afford. And although people haven't realized this yet, it changed the balance of power between people and governments. It made it possible for people to come together and create virtual communities that could have more knowledge and more influence on any given issue than any single government could muster. . . .

With the structure that's been built on the internet over the last 10 years or so, it seems to be doing a job it was never designed to do. Are the foundations safe, or is this thing shaking a bit?

Well, you're really talking about . . . the safety of communications and computing, not just the internet. . . . What it boils down to is this: food is regulated; automobile safety is regulated; people need licenses to cut your hair. Yet there are no licenses required to write software. There are no standards of documentation or testing or certification for software. So, in essence, our entire digital society now is based on software built by people we don't know, who have no licenses, who have no quality control, and who are not legally liable if their software causes the destruction of our business. That's scary. . . .

What do you see as the dangers if we don't address this?

The difference between the digital age, the information age, and the agricultural or industrial ages, is this: in the agricultural and industrial ages, things were more simplified. They moved more slowly. If there was a breakdown or a disaster, you could recover fairly quickly. It was easy to diagnose where the problem was. It was easy to contain the damage. You could do what's called “graceful degradation,” which is when systems break down a little bit at a time.

The big difference between today and yesterday is that, in the digital age, you're either on or off, you're either black or white, you're either fixed or broken. You crash, literally–by system, by industry, by society–in the event of major computer malfunctions. If the banking system suddenly goes down for 15 to 20 minutes, that's a trillion dollars of exchanges that will never be replicated. If more than two of the eighteen power generators in the United States burn out, we're out of spares. And if the German factory that makes them also burns out, then all of a sudden you're missing some critical pieces with which to help society run. . . .

In your estimation, what will take them out?

Let's go back to the other question, which is, “What will bring society down?” What will bring society down, or what will cause society enormous inconvenience, are accidents that interact in unpredictable ways, and that are very, very difficult to recover from. For example, New Zealand experienced a five-week blackout in one of its major cities. And it's my feeling that these accidents will be more and more frequent, because we are not establishing any standards at all for the communications and computing industry. It is literally “buyer beware.” There is no protection for the individual, the corporate buyer, or the government buyer, because software is sold “as is,” with no claim for quality.

What is the role of hackers in all of this?

. . . One of the reasons that I support hackers is that they have been telling us for over 10 years that the emperor is naked. It's very erroneous to think of hackers as criminals–that's not the case. Hackers are more like astronauts pushing the edge of the envelope. Hackers have been identifying major vulnerabilities in Microsoft products and Sun products and Dell products and all kinds of computer and communications products. And nobody has wanted to listen.

In August 1994, I myself published a $1 billion-a-year budget in a press release to address these issues. A big part of it was for education, and a big part of it was for testing and certification labs and for passing “due diligence” legislation. Nobody wanted to listen. Now the US government has recently come to grips with the fact that it has a major critical infrastructure problem; it lives in a glass house at a time when increasing numbers of people in the world are both angry at the US and able to use communications and computing attacks to hurt the US. So we're making some progress. But we will not really come to grips with this problem until every individual citizen demands of their government that it legislate standards of responsibility for the private sector, and then holds the private sector accountable for essentially writing safe software that will stand up to various kinds of unanticipated disasters.

How vulnerable do you think we are? Clearly you think that hackers are doing a good job. But give me a reasonable scenario of what could happen.

. . . It's a relatively simple matter, and I combine here both physical infrastructure attacks and computing or electronic infrastructure attacks. You can take . . . the Barking Sands time antenna in Hawaii, which actually synchronizes computers. You can take out the global positioning system antennas that are playing a similar role. You can take out the Federal Reserve computer. And even though it has a hot backup and a cold backup, it's highly likely that this will cause chaos in American financial circles. You can explode the Alaska pipeline, you can explode the Panama Canal, you can take out the seven bridges across the Mississippi that carry all of our food. These are all nodes that people take for granted. And I think we're living in an age when you have to be much more sensitive to what your vulnerabilities are, because we are no longer able to recover from major disasters as we were able to in the agricultural and industrial age.

In your estimation, is the digital age a more dangerous age?

It's an age that has enormous promise, and it's an age that is also very, very scary, because we literally don't understand it. This is like the invention of fire, or the beginning of time. It's vastly more powerful than fire. It's vastly more powerful than nuclear energy. It's embedded in every single piece of equipment that we touch–and we literally don't understand it.

Your view of hackers will come as a surprise, I think, to a lot of viewers, who view them as greasy-haired, goth louts who are spending too much time in front of a computer screen.

Well, I myself have participated in a very well-attended debate on whether hackers are a national resource–which is my position–or whether they are pathological scum. I would say to you that it is the media's fault that hackers are seen in this light. And it is the fault of the US Secret Service, and it is the fault of certain governments around the world, who chose to treat hackers as a threat because they didn't understand hackers; they didn't understand the electronic environment that hackers were addressing.

The bottom line is that hackers are the pioneers in this electronic frontier. They are way out in front of the rest of the world. They are seeing the dangers, the vulnerabilities, the shoddy, unethical, inappropriate business behavior by communications and computing companies. They're basically saying, “Hey, look what we found.” And everyone wants to shoot the messenger.

Give me one of the more egregious examples of unethical behavior by large computer powers.

Paul Strassmann, the former director of Defense Information, and the former chief information officer of the Xerox Corporation, has written a very provocative paper. He suggests that Microsoft is a threat to national security.

Strassmann's essay, Microsoft: A U.S. Security Threat, was published by Computerworld magazine in 1998. His thesis is that because Microsoft systems software is so ubiquitous and has, he claims, so many security flaws, it constitutes a threat to national security. He updated and expanded on this article for FRONTLINE.

And I will tell you, in my view and from my experience with both employee productivity and software implementation projects, that Microsoft is dramatically impairing and handicapping the productivity of people around the world.

Why is that?

To his great credit, Bill Gates has succeeded in creating an industry standard. But it is a standard that is replete with secret elements known only to Microsoft, and used by Microsoft to impair competitiveness around the world. And at the same time, his products are shoddy. Here's a specific example: if you import PowerPoint slides into a Word document, at some point, the document self-destructs. It explodes.

I would say that Bill Gates is probably responsible for holding the productivity of knowledge workers down to perhaps 60 percent of where it could be. His products are too much trouble to integrate. They prevent the integration of other software: structured argument analysis, modeling and simulation, foreign language translation; there are 18 specific functionalities that I can think of. We can't get to a desktop suite of normal information productivity tools today, in part because Bill Gates has refused to share and stabilize the application program interfaces, the APIs, that are needed for other products to work together.

Why has he done that?

Because he's a genius at marketing. And in the period of time when he was fortunate enough to make his money, people did not realize that what he was doing was ultimately very destructive for both national security and national competitiveness.

Give me your portrait of today's hacker.

I will give you Sherry Turkle's portrait of a hacker. Sherry Turkle wrote a wonderful book called [The Second Self:] Computers and the Human Spirit. It was about the original hackers. The original hackers were MIT students, individuals vastly endowed with great intelligence, selected by MIT as the best and the brightest in the nation. And they began playing with the first DEC computers. They began discovering that there were new and unusual things you could do with computers, which until then had been machines that just punched cards.

Hacking is about exploring. Hacking is about going where no one else has gone before. It is about finding new corners in cyberspace. It is about discovering new worlds, and finding different solutions. A good hack is about doing something better than it's ever been done before. That's why I'm here at the “Hackers in the Twenty-first Century” conference. And that's why I'm very upset that people don't understand that hackers are, in fact, a national resource. You can't create a hacker. Hackers are born; they are very special people. When the Israelis catch a hacker, they give him a job. When the Americans catch a hacker, they kick him in the teeth and throw him in jail. And that's not good.

Have you noticed a change from the early days of the hacker community?

I've noticed two changes. The first change is within the hacker community itself. I am stunned to find that these thousand people, who normally would have slept through the day and been a disorganized mob, started this conference on time, had a program, and had mainstream speakers. Hackers have come of age. Hackers are now a power unto themselves, as a community–not an illegal community, not an unethical community–but as a community of vibrant knowledge that is able to express its views to the media and to others in an articulate, structured way.

I've also seen a change in the private sector and in government. They still don't understand hackers. They still don't understand the communications and computing environment as well as they should. We've talked here about the abysmally ignorant federal regulators and the federal regulations that are completely inappropriate–1950s regulations for 1990s and year 2000 technology. But I clearly see that government and industry understand that hackers and the views that hackers represent are a force to be reckoned with. Therefore, over the next five to ten years, I anticipate that hackers will have a very beneficial influence on the safety and stability of cyberspace.

What about the FBI's National Infrastructure Protection Center?

I know Michael Vatis well. He's a very good person, and what he's doing is important. We have to protect critical infrastructures, but in a distributed computing environment, [that] is not something that can be done by a central agency. It has to be done by the individual proprietors of individual computers. What's needed is essentially a three-part solution.

Part one is that the government has to legislate what comprises “due diligence.” Software has to meet certain standards of safety and stability and reliability and transparency. The second part is that government has to test and certify that software, so that as a commonwealth interest, software is validated by the government as meeting those standards.

But the third and most important part is that the proprietors of the computers themselves must live up to a new standard of responsibility. You can't leave your computer connected to the world and not have firewalls. You can't send documents without encryption or other protection and expect them to remain private. So we ourselves have a responsibility. But our responsibility, although the most important, is only the third step. The first two steps have to be taken by government and by the private sector.
