Glance over this quote from a (useful) Symantec blog dissection of Stuxnet. Or, if it seems too dense, just look back as I refer to it below:
To access a PLC, specific software needs to be installed; Stuxnet specifically targets the WinCC/Step 7 software used for programming particular models of PLC. With this software installed, the programmer can connect to the PLC via a data cable and access the memory contents, reconfigure it, download a program onto it, or debug previously loaded code. Once the PLC has been configured and programmed, the Windows machine can be disconnected and the PLC will function by itself.
Let’s say I’m a programmer in a gas plant or power plant or water plant… No, scratch that. I’ll be a programmer in a nuclear research facility with live reactors. What I’ll do is connect my laptop to the PLC via a data cable. I can then “access the memory contents” of the PLC. I can “reconfigure it.” I can “download a program onto it.” I can “debug previously loaded code.”
That last is especially important, right? Because yesterday I hooked my laptop up to the nuclear reactor management PLC and made a couple of changes. Today, some engineering guys who work closer to the core than I do are complaining that it seems to be running a little hot. In fact, a fly landed on the cool side of a heat shield and instantly shrivelled and vanished in a wisp of smoke.
I hook up my laptop to the PLC and check what I did yesterday. Hmmm… it does look like I might have fat-fingered an extra couple of zeroes when I was multiplying z. Should be 10*z, not 1000*z. No wonder she’s cookin’ flies today!
I fix that, then do a little debugging on a problem I’ve been kicking around for a week, something causing weird voltage spikes for about 30 seconds every hour.
Try something… hit enter. No, that’s wrong. Try something else… hit enter. Still wrong. Try again… Bingo. That’s it. Should run smoothly now. No meltdown today! I wonder what they’re serving in the cafeteria…
Even if you have no experience of software development, you will know that picture is completely absurd. If a programmer in a nuke plant can access live code that impacts the running of the plant — so he can “reconfigure it” or “download a program onto it” or (heaven forbid!) “debug previously loaded code” — why would anyone spend a lot of time and effort creating a souped-up internet worm like Stuxnet to take out this nuclear plant?
Give it a little time; it will take itself out.
I haven’t yet seen a good explanation for why commonplace software development and release processes would not have stopped Stuxnet in its tracks well short of access to any live code running a plant. All the reporting is about how clever Stuxnet is at propagating, or hiding in Windows, or hiding in PLC code. But that’s not where the challenge lies for a worm intended to take down a plant.
I feel like I’m in a class where the professor just spent an hour filling the board with equations, diagrams and proofs, concluding with, “…and then magic happens, and we have 42.”
What is lacking in what I’ve read about Stuxnet is a coherent explanation for how it could cause evil code to move from a developer’s PC to “production” or “release” or whatever they call it in an industrial plant. Absolutely nothing that I’ve seen written about Stuxnet so far makes me think this clever worm had any chance whatsoever to damage a nuclear plant by changing code controlling the plant operations.
When a programmer changes code for a system, he or she is working on a copy of the code base that will not be installed for use anywhere until it jumps a series of hurdles — none of which Stuxnet could jump.
New code, or a code change, doesn’t go from a developer’s machine directly into production. It is reviewed. And it is tested first in an environment that simulates the deployed environment. He (can’t be “she” in Iran) checks code into source control as flat text, diff-able and reviewable, and as code moves toward deployment, it is checked out of source control onto other machines by other people.
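To make “flat text, diff-able and reviewable” concrete, here is a minimal sketch using Python’s standard difflib. The file name, the variable name, and the fat-fingered multiplier are all invented to match my anecdote above; the point is only what a reviewer would see on the screen:

```python
import difflib

# Hypothetical illustration only: file name, variable name, and the
# fat-fingered multiplier are invented to match the anecdote above.
intended = ["setpoint = 10 * z\n"]      # what the code should say
checked_in = ["setpoint = 1000 * z\n"]  # the fat-fingered change

diff = list(difflib.unified_diff(intended, checked_in,
                                 fromfile="a/control.py",
                                 tofile="b/control.py"))
print("".join(diff))
```

A reviewer doesn’t need any special tooling to spot the extra zeroes in a diff like that — which is exactly why code moving through source control as flat text is such an effective hurdle.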
The Symantec writer talks casually about a programmer working on a PLC who can “debug previously loaded code.” Do you think the programmer debugs code on a PLC running the plant? Do you change the oil in your motorcycle at a nice cruising speed, headed south on Lake Shore Drive?
I’ve debugged a lot of code over the years, I’ve lived in the world of code debugging, and I will assure you that no programmer in Iran or North Korea or any other place, however weird or backward, will be debugging running production code in a nuclear power plant, or debugging production code in any plant that does something more complicated than smash tree stumps into pulp.
Debugging is done off to the side of production. Debugging can be a tedious trial-and-error process that takes a while, and would be highly disruptive (to put it mildly) for anyone depending on the code to be doing something. When problems are found, the solutions are tested… still off to the side of production.
Someone may say, “Don’t you think that where fanatics and raving nutcases run things the engineers are probably blockheads who don’t know anything about normal software development processes?”
No, I don’t think that.
The Iranians have reached a place where there is international concern about what they are doing with their nuclear research. They didn’t get where they are with blockhead engineers.
Much of the fuss about Stuxnet hinges on a Bogus Myth that between the developer writing or debugging code on an infected machine and absolute control of a nuke plant is… nothing… No code review. Nothing is checked into source control as flat text and tagged. There’s no QA. No testing. No bugs are ever filed. There is no staging environment that mirrors production. For new code there are no smoke tests. No sanity checks. No regression testing. There’s no configuration management of key machines. Emails don’t go out to a wide audience of engineers 7 days before, then 3 days, then 24 hours before a change is pushed to production. And of course there’s no redundancy for production control — an independent control path, if control code running in production doesn’t seem to be doing what it should; and no redundancy in monitoring; and no way to instantly roll back a change that breaks something…
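None of the safeguards in that list are exotic. As one hedged sketch (every name and limit here is invented; a real plant’s staging checks would be far more thorough), even a trivial pre-deployment sanity check catches the fat-fingered multiplier from my earlier anecdote:

```python
# Hedged sketch: every name and limit here is invented for illustration.
# A real staging environment would run far more thorough checks.

MAX_SAFE_SETPOINT = 500.0  # assumed physical upper bound for the output

def compute_setpoint(z, multiplier):
    """The routine under test; the bug was multiplier=1000 instead of 10."""
    return multiplier * z

def sanity_check(multiplier, sample_inputs):
    """Reject any build whose outputs leave the plausible operating range."""
    return all(0.0 <= compute_setpoint(z, multiplier) <= MAX_SAFE_SETPOINT
               for z in sample_inputs)

samples = [1.0, 5.0, 25.0]              # representative staging inputs
print(sanity_check(10, samples))        # intended code passes
print(sanity_check(1000, samples))      # fat-fingered build is rejected
```

If a ten-line check run off to the side of production can stop a bad multiplier, the far richer process in the list above is a formidable wall between an infected developer machine and a running plant.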
For Stuxnet, the main problem isn’t how to hide in Windows or spread in Windows-heavy networks, or conceal itself in PLC code and alter that code — the main problem is how to get from any development machine to production — anywhere.
And as far as I can see, the primary evidence that Stuxnet was sponsored by a government is this: It’s a technical marvel that, marvelously, doesn’t take into account the main problem. It’s a bridge to nowhere.
 The Symantec publications related to Stuxnet are outstanding. They are focused on the worm itself, not on workflow and processes in a potential target facility. To review the processes, someone would have to know what they are — and that’s not known. In its context, the Symantec writer’s comment about debugging is quite reasonable. I’m sure Nicolas knows how debugging works.