unpolluted sprockets #1

One thing I like about Wikipedia is the little “citation needed” inserts reminding the reader, in effect, “Here we have a statement of fact which has been made without supporting evidence.”

For instance, this is from the Raytheon article[1]:

Raytheon Professional Services (RPS) is a global leader[citation needed] in training services and learning outsourcing for over 75 years.[citation needed]

Raytheon is the company where William J. Lynn III worked as a lobbyist before he was made Deputy Secretary of Defense in the current administration.

A citation doesn’t make the fact asserted true, it just means there is some kind of evidence for the assertion that anyone is free to check out. That evidence might be strong or weak, and the reader probably won’t bother to check it anyway, but the presence of a citation makes a statement of fact, in some way, verifiable.

In Foreign Affairs, essays are not required to have footnotes, much less “citation needed” flags, so in comparison to Wikipedia, the reading experience of FA is superficial.

In Mr. Lynn’s Foreign Affairs essay[2], “Defending a New Domain” he says this:

Every day, U.S. military and civilian networks are probed thousands of times and scanned millions of times.

Citation needed, right?

None supplied.

This sounds like a variation on the 6-million-attacks-a-day (on Department of Defense networks) assertion that is part of the template for people writing for or speaking to technically non-literate audiences. Take this example from Bill Lambrecht:

The new head of the U.S. Cyber Command, Gen. Keith Alexander, revealed this month that Pentagon systems are attacked 250,000 times an hour, 6 million times a day.[3]

No citation available for Mr. Lambrecht’s assertion either. Which is a shame, because I’d like to know if Gen. Alexander really said Pentagon systems are “attacked” 6 million times a day in some context I’m not familiar with, or if Mr. Lambrecht spiced up his column by carelessly swapping in the word “attack” for what Gen. Alexander really did say:

DOD systems are probed by unauthorized users approximately 250,000 times an hour, over 6 million times a day.[4]

I’ll bet that Gen. Alexander chose the word “probed” deliberately when he was speaking to CSIS, and I will further bet that he consciously avoided using the word “attack” in characterizing what was happening 250,000 times an hour, 6 million times a day to Pentagon systems. In his Senate confirmation hearing, Gen. Alexander specifically said that “probes” are not “attacks”.[5] For military guys, the word “attack” is loaded with all kinds of baggage completely unknown to those who use the same word in a network security context.

Another variation uses “targeted”:

When asked how often the federal government’s computers get targeted or probed each day, defense specialist Rep. Adam Smith, D-Wash., curtly responds: “North of a million times.”[6]

Here’s another:

The Pentagon’s top information-security official, Robert Lentz, said the Defense Department detected 360 million attempts to penetrate its networks last year, up from six million in 2006. [7]

Hmm… “Attempts to penetrate” DOD networks? How is a single attempt identified for the purpose of counting? When a Facebook scraper works for weeks putting together information for a spear-phishing attack on a Navy Admiral, to craft an email with a link in it he will foolishly click… Will all those http GETs and POSTs at Facebook and elsewhere, plus the email to the Admiral, count as just one attempt to penetrate a DOD network? With a number in the hundreds of millions, there must be an automated way of counting. How do they count? What do they count?

Mr. Lynn has some vague numbers, “thousands” and “millions”, for probes and scans respectively. But what is a “probe”? What is a “scan”? Do his IT guys parse router logs into “probes,” “scans” and “other,” based on what protocols are used, what ports are queried, what the source IPs are?
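For illustration only, here is a sketch of what such a sorting pass might look like. The log line format, the protocol heuristic, and the scan threshold are all invented for this example; the point is that someone has to make exactly these kinds of arbitrary choices before any count of “probes” or “scans” means anything.

```python
# Hypothetical sketch: bucketing router-log entries into "probes,"
# "scans," and "other." The log format and thresholds are invented --
# nothing here reflects how DOD (or anyone) actually counts.
import re
from collections import defaultdict

LINE = re.compile(r"SRC=(?P<src>\S+) PROTO=(?P<proto>\S+)(?: DPT=(?P<dpt>\d+))?")

def bucket(lines, scan_threshold=10):
    ports_by_src = defaultdict(set)
    parsed = []
    for line in lines:
        m = LINE.search(line)
        if not m:
            continue
        src, proto, dpt = m.group("src"), m.group("proto"), m.group("dpt")
        if dpt:
            ports_by_src[src].add(dpt)
        parsed.append((src, proto))
    counts = {"probe": 0, "scan": 0, "other": 0}
    for src, proto in parsed:
        if len(ports_by_src[src]) >= scan_threshold:
            counts["scan"] += 1   # one source touching many ports: call it a scan
        elif proto == "ICMP":
            counts["probe"] += 1  # a lone ICMP echo: call it a probe
        else:
            counts["other"] += 1
    return counts
```

Change the threshold from 10 ports to 5, and yesterday’s “other” becomes today’s “scan.” The headline number moves without a single extra packet arriving.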

Gen. Alexander has more precise numbers for “probes” by “unauthorized users” (250,000/hour, 6 million/day). But what, pray tell, is an “unauthorized user”? If these numbers come from router logs, what distinguishes an authorized user from an unauthorized user?

Observe:

$ ping nsa.gov
PING nsa.gov (12.120.166.8): 56 data bytes
92 bytes from 10.98.13.43: icmp_type=3 (Dest Unreachable) icmp_code=3
92 bytes from 10.98.13.43: icmp_type=3 (Dest Unreachable) icmp_code=3
92 bytes from 10.98.13.43: icmp_type=3 (Dest Unreachable) icmp_code=3

----nsa.gov PING Statistics----
46 packets transmitted, 0 packets received, 100.0% packet loss

I just pinged NSA at nsa.gov. DNS resolved the name to 12.120.166.8, but that IP doesn’t answer to ping so I get a “destination unreachable” response from a router closer to my NIC.

My ping put a line in a router log at NSA for an ICMP echo request from my IP that was dropped. Will that line count as a “probe” or is it counted as something else (like, “blog post demonstration” :-))? Since we don’t know what a probe is, we can’t know what kinds of IP traffic are not probes.

If my ping is a probe, it can’t count as coming from an unauthorized user. I authorized it myself, so I know it was authorized. But how will the light-starved gnomes counting probes deep in the catacombs beneath Ft. Meade know my ping was authorized? Do they flip coins?

Logs can be mined for data that will sort inbound traffic into “solicited” and “unsolicited” buckets (ICMP echo requests are always unsolicited, by the very nature of the protocol). But “authorized” and “unauthorized” categories have no technical meaning. Do they have any meaning at all?
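A minimal sketch of that solicited/unsolicited distinction, in the style of a toy stateful firewall. The flow tuples are invented for illustration; note there is no field anywhere from which “authorized” could be computed.

```python
# Toy stateful classification: inbound traffic is "solicited" only if
# it answers a connection we initiated. ICMP echo requests are
# unsolicited by the nature of the protocol. Flow tuples are invented.
def classify(flows):
    outbound = set()
    buckets = {"solicited": [], "unsolicited": []}
    for direction, proto, remote_ip, remote_port in flows:
        if direction == "out":
            outbound.add((remote_ip, remote_port))  # remember what we started
        elif proto == "ICMP_ECHO_REQ":
            buckets["unsolicited"].append(remote_ip)  # pings are always unsolicited
        elif (remote_ip, remote_port) in outbound:
            buckets["solicited"].append(remote_ip)    # a reply to something we started
        else:
            buckets["unsolicited"].append(remote_ip)
    return buckets
```

“Solicited” falls out of the packet data mechanically. “Authorized” would require knowing the intent of the person at the far end, which no log contains.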

Alas, the transcript for Gen. Alexander’s talk doesn’t have citations, so his “unauthorized users” just get stirred into the soup of nebulous terminology along with “attacks” and “probes” and “scans” and “attempts to penetrate” and “targeted computers” — and when someone who doesn’t know much about how the internet works, or about network security, wants to say something to impress an audience which also doesn’t know much… he just dips in a ladle and serves up a big helping of whatever soup mush happens to be near the top.

Probably, these people are talking about nothing more than what we think of as the Background Noise of the Internet, what Steve Gibson calls Internet Background Radiation.[8] Anyone who wants to watch random unsolicited packets from the Internet bouncing against a home router can see it. Dozens of log entries an hour. Unless it’s a hobby for you, you don’t watch. Router logs just aren’t that fascinating. Ten or 12 years ago IBR was interesting and something of a novelty for many people (me included). But now it’s just raindrops on the roof. Who cares?

Well, the lobbyists-presently-in-government and the lobbyists-presently-lobbying care, possibly because it looks to them like there’s a lot of unguarded money ready to be bagged and trucked off for those who can spin up a fun “cyber” story. The 6-million-attacks-a-day bit is how a good cyber story always begins, just like, “Gather round children and I’ll tell you about…”

You’re thinking, “Someone must have counted something; surely, if there’s a number someone, sometime, somehow must have counted something? Or if they didn’t count they had a statistically validated method of estimating?”

Ah… Wouldn’t it be nice to have a citation? — a reference to some document or web page to check, so we could see when the count was done (if it’s not ongoing), how it was done, what was counted, how the categories were defined…

Wouldn’t it be nice…

Here is the simple truth concerning the 6-million-attacks-a-day assertion, in all its protean forms:

No one has ever counted anything.

Sometime back in the 90s a low-level Pentagon bureaucrat named Winston Smith overheard a couple of techies from the server room talking about unsolicited packets in the logs — “…1,012 hits between 0100 and 0200 from who-knows-where…” and later that day, helping his report-producing boss prepare a report for some other report-producers, Winston did a quick calculation: “With 249 other offices big enough to rate an auto-grind coffee machine like ours, that’s 250 times 1,012… but just to be conservative, let’s say 1000… that’s 250,000 an hour… But what was it they called them? Unpolluted sprockets? That’s too technical…”

This morphed from an overhead projector transparency into a PowerPoint slide, was copied into another slide deck, then another, then it became part of the standard intro to hundreds of PowerPoint presentations, was copy-pasted into reports, repeated with a straight face at news conferences, adjusted to fit preferences for the nuance of one word over another (“probe” vs. “attack”), merged into the President’s teleprompter stream… and in the course of time, came to be believed by a generation of those in the greater Washington government-and-contractor community: “…and so, children, that is how the rabbit lost its tail.”

None of the people quoted above could tell you one important difference between UDP and TCP, or between telnet and ssh, or how sha256sums are used to know when a file changes — technical concepts so basic that in 2010, they are arguably not even technical any more.
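The sha256sum idea really is that basic. A few lines of Python (the file path in the usage note is an example) are enough to detect that a file has changed:

```python
# Detecting file changes with SHA-256: hash the file's bytes; if the
# digest differs from the recorded baseline, the file changed.
import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Usage is the whole technique: record `sha256_of("config.bin")` once as a baseline, recompute it later, and any mismatch means the file was altered.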

At a higher level, they don’t know why it is that the vast majority of network (a.k.a. “cyber”) security challenges today, approaching 100%, come from solicited packets, not unsolicited raindrops-on-the-roof packets. If you probe Gen. Alexander, he won’t be able to tell you what a probe is, or how probes are counted. Given as much time as he likes to scan his notes, William J. Lynn III will not be able to tell you the difference between a scan and a probe.

If you questioned them, even trivially, this generation of talkers and report-producers would not be able to define clearly and consistently their own words. And if President Obama brought the whole lumbering, obese government to a blubbery, wobbling halt, demanding to know, “Where does this 6 million probes a day number come from?” — no one would be able to tell him. Gen. Alexander would ask his staff, and they would turn around and ask their staffs and those staffs would, in turn, try to find and wake up their staffs… and no one would be able to find the original study or tell the President how and when it was done.

Because there was no study.

Winston Smith is currently working for a government contractor and may not be back in government proper for a year or two. He’s now an expert in Arctic tundra reclamation policy. He doesn’t even remember what word he thought was better than “unpolluted sprockets” back when he was a computer network security expert, before the government contractors rebranded network security as “Cyber” so they could jack up their per diems, scan congressmen for opportunities to fill white space in 1000-page bills, probe DHS with cybersecurity concepts, target DOD for billion dollar firewall upgrades, give those timid NSA loafers the willies with horror stories about unauthorized users, and attack the ongoing problem of how to move money out of the pockets of working people and into report-writing employment and lively conversation over drinks and good food at the finest dining establishments in Washington, D.C.

If there had been a count of something, none of the government/contractor people you see quoted in the papers and blogs would know what was counted, or how.

But the fact is, no one ever counted anything. Six million whatevers a day, 250,000 whatevers an hour, 360 million whatevers a year… it doesn’t matter. It’s all bogus. And I have a citation for that.[9]

PB

——-

[1] http://en.wikipedia.org/wiki/Raytheon
[2] Foreign Affairs, September/October 2010, “Defending a New Domain”
[3] Bill Lambrecht, LA Times, June 24, 2010, “U.S. is busy thwarting cyber terrorism — The government and defense contractors are in a constant battle against computer attacks” http://articles.latimes.com/2010/jun/24/business/la-fi-cyber-terrorism-20100624
[4] Gen. Keith Alexander, Director, National Security Agency, Commander, U.S. Cyber Command, Thursday, June 3, 2010, speaking to the Center for Strategic and International Studies (CSIS)
[5] Sean Lawson blog post at Forbes.com: “Just How Big Is The Cyber Threat To The Department Of Defense?” Jun. 4 2010. http://blogs.forbes.com/firewall/2010/06/04/just-how-big-is-the-cyber-threat-to-dod/
This is not the only place that references Gen. Alexander’s testimony. Sean has also put together some interesting quotes that capture the muddled-terminology situation. What I’m prone to say with war-painted, spear-shaking unruliness, he conveys in a gentlemanly way: “The contradictions between this and previous statements of the threat, both by Alexander and others, combined with continued confusion over the definition of key terms, points once again for the need to more clearly articulate the cyber threat if we are to develop appropriate policy responses.”
[6] Joel Connelly, “Cyber attacks: The next big security threat?” Seattle Post Intelligencer, April 11, 2010 http://www.seattlepi.com/connelly/418225_joel12.html
[7] Yochi J. Dreazen and Siobhan Gorman, “U.S. Cyber Infrastructure Vulnerable to Attacks” Wall Street Journal, May 6, 2009. http://online.wsj.com/article/SB124153427633287573.html
[8] Steve Gibson, grc.com, SecurityNow, SpinRite, the Portable Dog Killer and other useful endeavors.
[9] https://pmbarry.wordpress.com/2010/10/30/unpolluted-sprockets-1/


Education and Learning #1

“Most of my chess growth came from studying my losses very deeply…” –Josh Waitzkin

This Authors@Google interview with Josh Waitzkin lasts about an hour. He talks about chess, martial arts, learning. It is a little abstract, with a sprinkling of Oriental philosophy, sometimes on the edge of flakiness — but not quite over the edge. I follow him, even when he talks about playing 40 games of chess with 40 opponents at one time, moving from board to board, and all of the games somehow converge in his mind into a single big game, in which each board is a part…

What he says about loss and failure — rather, the importance and value of loss and failure — must feel like a surprise bee sting to a lolling, complacent education establishment wanting to ensure that no child is left behind, that every student succeeds, that Self Esteem is forever protected and pampered.

Self esteem is all very well in its own place, alongside other “self” stuff, like self-deception and selfishness, but with respect to education, what if real growth in skills, knowledge and understanding depends on failure, loss and pain?

Josh Waitzkin doesn’t so much make a case for the importance of failure as simply testify to his own experience: “I hardly remember the wins… what I remember are the losses…” And he makes connections between seemingly different failures in different areas of life…

I’m a programmer[1], and if I look at programming in a certain light, I see that what I do when I attack a problem is fail my way through it.

In programming, failure happens at every level, from the whiteboard planning to the last lines of svn-committed code, and even beyond, as bugs are discovered by users.

Try something. It doesn’t work. Examine the failure and what you did. Try something else. It doesn’t work. Examine the failure and what you did. Try something else… Pretty soon it works and you move on to the next iterations of try-fail-examine.

With energetic debate as the soundtrack, whiteboard lists, illustrations, boxes and arrows are erased and new ones fill the space. While coding, methods are written, then moved, then split into new methods; lines are written, then deleted, replaced by new lines in different places. The erasing, moving, splitting, deleting, replacing… all articulate on instances of failure.

It’s not just failure. It’s failure followed by study of the failure. If contemporary pop education were to suddenly stand on its head and junk all the self esteem rubbish — let students fail, tell them plainly they’ve failed, and when necessary, contrive to make them fail — that would not, in itself, improve learning. But it would create a context in which great learning is possible.

Take a picture of this:

A professor introduces himself on the first day of class and says, “This is a two semester course. Every one of you in this room will fail the first semester.

“If you are really sharp, learn the material, solve the problems I give you to solve… I’ll give you more material and tougher problems. Solve those and you get even tougher problems. The problems will keep coming until you get one that can be solved by someone, but you can’t solve it, even when you stay up all night, then miss fall break to work on it.

“You will fail this semester.”

[Even the burnouts at the back of the room are awake and sitting up straight.]

The professor continues: “In this semester, although you will fail, by the end you will be able to analyze a problem, develop a coherent plan, and write respectable code to solve the problem. You’ll be valuable to an employer because of your analytical and programming skill. But the main thing you must learn this semester is how to study your own failure.

“That’s what we will do in this class: Study failure. Learn to describe it accurately and completely. Learn to break a failure into its components and analyze each component, figure out how 3 or 4 small components of a failure work together to cause a single big failure… But not just any failure. Your failure. Not the failure of other people, in other places, or in history. You will study your own failure, and in that you will become an expert.”

hmm… That sounds like a class that would be worth taking. For credit.

What if we take the word “chess” out of Josh Waitzkin’s quote above?

“Most of my __________ growth came from studying my losses very deeply…”

Take that as a starting point for an approach to learning and education. Build a course of study on that concept. What does it look like?

PB

Kings of Convenience: Failure

——-

[1] QA Software Engineer, to be more specific


stuxnet #4

Glance over this quote from a (useful) Symantec blog dissection of Stuxnet. Or, if it seems too dense, just look back as I refer to it below:

To access a PLC, specific software needs to be installed; Stuxnet specifically targets the WinCC/Step 7 software used for programming particular models of PLC. With this software installed, the programmer can connect to the PLC via a data cable and access the memory contents, reconfigure it, download a program onto it, or debug previously loaded code. Once the PLC has been configured and programmed, the Windows machine can be disconnected and the PLC will function by itself.

Let’s say I’m a programmer in a gas plant or power plant or water plant… No, scratch that. I’ll be a programmer in a nuclear research facility with live reactors. What I’ll do is connect my laptop to the PLC via a data cable. I can then “access the memory contents” of the PLC. I can “reconfigure it.” I can “download a program onto it.” I can “debug previously loaded code.”

That last is especially important, right? Because yesterday I hooked my laptop up to the nuclear reactor management PLC and made a couple of changes. Today, some engineering guys who work closer to the core than I do are complaining that it seems to be running a little hot. In fact, a fly landed on the cool side of a heat shield and instantly shrivelled and vanished in a wisp of smoke.

I hook up my laptop to the PLC and check what I did yesterday. Hmmm… it does look like I might have fat-fingered an extra couple of zeroes when I was multiplying z. Should be 10*z, not 1000*z. No wonder she’s cookin’ flies today!

I fix that, then do a little debugging on a problem I’ve been kicking around for a week, something causing weird voltage spikes for about 30 seconds every hour.

Try something… hit enter. No, that’s wrong. Try something else… hit enter. Still wrong. Try again… Bingo. That’s it. Should run smoothly now. No meltdown today! I wonder what they’re serving in the cafeteria…

Even if you have no experience of software development, you will know that picture is completely absurd. If a programmer in a nuke plant can access live code that impacts the running of the plant — so he can “reconfigure it” or “download a program onto it” or (heaven forbid!) “debug previously loaded code” — why would anyone spend a lot of time and effort creating a souped up internet worm like Stuxnet to take out this nuclear plant?

Give it a little time; it will take itself out.

I haven’t yet seen a good explanation for why commonplace software development and release processes would not have stopped Stuxnet in its tracks well short of access to any live code running a plant. All the reporting is about how clever Stuxnet is at propagating, or hiding in Windows, or hiding in PLC code. But that’s not where the challenge lies for a worm intended to take down a plant.

I feel like I’m in a class where the professor just spent an hour filling the board with equations, diagrams and proofs, concluding with, “…and then magic happens, and we have 42.”

Eh?

What is lacking in what I’ve read about Stuxnet is a coherent explanation for how it could cause evil code to move from a developer’s PC to “production” or “release” or whatever they call it in an industrial plant. Absolutely nothing that I’ve seen written about Stuxnet so far makes me think this clever worm had any chance whatsoever to damage a nuclear plant by changing code controlling the plant operations.

When a programmer changes code for a system, he or she is working on a copy of the code base that will not be installed for use anywhere until it jumps a series of hurdles — none of which Stuxnet could jump.

New code, or a code change, doesn’t go from a developer’s machine directly into production. It is reviewed, and it is tested first in an environment that simulates the deployed environment. He (can’t be “she” in Iran) checks code into source control as flat text, diff-able and reviewable, and as code moves toward deployment, it is checked out of source control onto other machines by other people.
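That flat-text, diff-able property is the whole point of review: a human sees exactly what changed before anything moves toward production. A toy illustration using Python’s standard difflib, diffing a made-up setpoints file (the fat-fingered 10*z from the story above):

```python
# Toy illustration of why "flat text, diff-able" matters: the
# fat-fingered change jumps out of a unified diff. The setpoints
# file and its contents are invented for this example.
import difflib

before = ["max_temp = 10 * z\n", "pump_rate = 4\n"]
after  = ["max_temp = 1000 * z\n", "pump_rate = 4\n"]

diff = list(difflib.unified_diff(before, after,
                                 fromfile="a/setpoints",
                                 tofile="b/setpoints"))
print("".join(diff))
```

A reviewer scanning that output sees one removed line and one added line, and the extra zeroes never reach the PLC.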

The Symantec writer talks casually about a programmer working on a PLC who can “debug previously loaded code.” Do you think the programmer debugs code on a PLC running the plant? Do you change the oil in your motorcycle at a nice cruising speed, headed south on Lake Shore Drive?

I’ve debugged a lot of code over the years, I’ve lived in world of code debugging, and I will assure you that no programmer in Iran or North Korea or any other place, however weird or backward, will be debugging running production code in a nuclear power plant, or debugging production code in any plant that does something more complicated than smash tree stumps into pulp.

Debugging is done off to the side of production. Debugging can be a tedious trial-and-error process that takes a while, and would be highly disruptive (to put it mildly) for anyone depending on the code to be doing something. When problems are found, the solutions are tested… still off to the side of production.

Someone may say, “Don’t you think that where fanatical towelheads and raving nutcases run things the engineers are probably blockheads who don’t know anything about normal software development processes?”

No, I don’t think that.

The Iranians have reached a place where there is international concern about what they are doing with their nuclear research. They didn’t get where they are with blockhead engineers.

Much of the fuss about Stuxnet hinges on a Bogus Myth that between the developer writing or debugging code on an infected machine and absolute control of a nuke plant is… nothing… No code review. Nothing is checked into source control as flat text and tagged. There’s no QA. No testing. No bugs are ever filed. There is no staging environment that mirrors production. For new code there are no smoke tests. No sanity checks. No regression testing. There’s no configuration management of key machines. Emails don’t go out to a wide audience of engineers 7 days before, then 3 days, then 24 hours before a change is pushed to production. And of course there’s no redundancy for production control — an independent control path, if control code running in production doesn’t seem to be doing what it should; and no redundancy in monitoring; and no way to instantly roll back a change that breaks something…

Bogus Myth.
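The hurdles listed above amount, in the simplest terms, to a release gate: a change moves forward only when every check has actually happened. A toy sketch, with invented field names:

```python
# A toy release gate: a change may deploy only if every hurdle in the
# list has been cleared. The field names are invented for illustration,
# not taken from any real plant's process.
def may_deploy(change):
    checks = [
        change.get("reviewed"),                  # code review happened
        change.get("tests_passed_in_staging"),   # QA in a production mirror
        change.get("change_notice_sent"),        # engineers were warned in advance
        change.get("rollback_plan"),             # a way to instantly undo it
    ]
    return all(checks)
```

Stuxnet’s payload, however clever, has no way to forge any of these hurdles; it can only alter bits on a machine that sits on the wrong side of the gate.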

For Stuxnet, the main problem isn’t how to hide in Windows or spread in Windows-heavy networks, or conceal itself in PLC code and alter that code — the main problem is how to get from any development machine to production — anywhere.

And as far as I can see, the primary evidence that Stuxnet was sponsored by a government is this: It’s a technical marvel that, marvelously, doesn’t take into account the main problem. It’s a bridge to nowhere.

PB

[1] The Symantec publications related to Stuxnet are outstanding. They are focused on the worm itself, not on workflow and processes in a potential target facility. To review the processes, someone would have to know what they are — and that’s not known. In its context, the Symantec writer’s comment about debugging is quite reasonable. I’m sure Nicolas knows how debugging works.


stuxnet #3

There’s a popular theory that Stuxnet was intended to cripple Iran’s nuclear power program. Other theories could fit the facts, but for the sake of argument, let’s say that theory is credible. How would it work?

Here’s what I think is a plausible scenario:

1. Stuxnet is released in Iran so that it will spread there most quickly.

2. Stuxnet, possibly but not necessarily, reaches some control software somewhere in an Iranian plant and does something to make things go wonky, proving that it is a menace.

3. Stuxnet is discovered, either when #2 happens or before, and carefully analyzed.

4. His Utmost Ridiculousness Ahmadinejad and other Iranian leaders believe they’ve had a brush with a serious threat, and they get hopping mad.

5. Iran retaliates for the worm against the United States using physical force, in some showy way.

6. In the United States there are high-fives behind certain closed doors, and outrage at Iran’s unprovoked aggression in front of the TV cameras.

7. Immediately half of Iran’s air force is wiped out, along with virtually all the land-to-sea missiles planted along the Persian Gulf; ships are torpedoed; military communications are smashed up… and as if it were an afterthought, Iranian nuclear research facilities are obliterated, setting the bomb-making program back five years.

That’s how I see Stuxnet could work.

This isn’t so much a real theory I’d like to defend as it is an exercise in coming up with a theory that fits the facts. The theory that Stuxnet by itself, without supporting aircraft and cruise missiles, could seriously inconvenience Iran’s nuclear bomb making efforts, isn’t credible.

Notes:

1. For the Iranians to retaliate forcefully, they have to notice they’ve been “attacked”. Stuxnet is an internet Windows worm that spreads using clever zero-day exploits. Once released, it was certain to spread widely and certain to be discovered. That’s something the Stuxnet creators could depend on.

Windows rootkit worms aren’t invisible, except to Windows itself. They typically communicate across networks (Stuxnet does), so they are noisy; a worm will stand out like a camel in a flock of sheep when a file system is inventoried outside of Windows.

The Stuxnet team probably figured they had six months max, and maybe just days, before security researchers would be all over it. But if released the right way in Iran, it might spread enough there to seem threatening.

2. For the Iranians to retaliate forcefully, they would have to believe the worm was really a military-grade threat. With the internet infested by every manner of virus, worm and scam, coming up with a worm that is “military grade” is no small task. Stuxnet is very clever. For “military grade,” it checks the block.

3. Plausible deniability. Stuxnet is a Windows worm, which has spread all over the world. There’s nothing about its spreading techniques that would limit it to Iran. There’s no way to conclusively tie the worm to any attacker. Many countries and criminal enterprises could put together 5 to 10 engineers plus testers and management for six months.[1]

The discovered payload of the worm (as opposed to the spreading technology) messes with PLC code, but the worm was provided with means to get updates in the wild, so presumably the payload could be switched out for something completely different.

4. In Operation Mincemeat, the British convinced the Germans to do what they otherwise would not have done (concentrating defenses on Greece and Sardinia rather than Sicily). We know the story from a movie, “The Man Who Never Was.” A dead body was dressed as a British officer, with an attached briefcase containing misleading papers, and dropped into the sea off the Spanish coast. A great deal of effort went into making the body convincing. The Germans were fooled, and this affected what the German military did.

From the perspective of many nations — the U.S., Israel, Saudi Arabia, Kuwait, Russia, the UAE, the UK… — a good excuse for someone to reduce Iran’s nuclear weapons program to a smoking ruin would be most welcome. A great deal of effort went into making Stuxnet convincing.

PB
—–
[1] Symantec’s estimate


bird watching

A headline from the front page of Saturday’s Wall Street Journal: “CIA Escalates in Pakistan”. Sub head: “Pentagon Diverts Drones From Afghanistan to Bolster Campaign Next Door”.[1]

Somehow we need to re-label this war, but it’s awkward to take the names of those two countries, with a total of seven syllables between them, not counting one syllable for “War”, and come up with an easy-to-use blend that is likely to get traction in conversation.

Afghanistan-Pakistan War? Correct, but too long; ungainly.

Pak-Afghan War? Pakistan too abbreviated.

Afpakstan War? Nah. Superficially clever, but blatantly contrived… Besides, what if the war slops over into one or more of the three “stans” to the north? Anyway, someone will come up with something I’m sure. Neither “Afghan War” nor “War in Afghanistan” capture the real situation any longer.

There is a subtlety in that WSJ sub head, possibly unintentional, but it struck me as nicely done: the name “Pakistan” does not appear at all. It’s the “Campaign Next Door”. Indeed, “Next Door” to Afghanistan, to the east, is an area that is also Next Door to Pakistan to the northwest, an area between Afghanistan and Pakistan, which is roughly what Kipling called “Kafirstan”[2]. It’s an area where Pakistan has influence, perhaps because the mountains have caused trade to lean toward Pakistan, and people in the area will tend to use Pakistani airports when they go abroad. This area is certainly not part of Pakistan in the way that Florida is part of the United States, or Cornwall is part of the UK.

However, I meant to talk about bird watching, not 21st century feudalism.

When the CIA adds more drones to the war (a.k.a. “campaign”), there is a mixed nationality coterie of military professionals and engineers, mostly to the east of Afghanistan, who are delighted. I’ll call them “bird watchers”.

Remember that these American drones represent an impressive military innovation, and military people throughout the world, especially in nations that feel they could, potentially, some day, come to blows with the U.S., or with some other nation that could deploy drones, are very interested in these aircraft. They want to know how they work, what logistical support they need, what their limitations are. But there are two things especially the bird watchers want to figure out:

1. How to see them.

2. How to destroy them.

If you assume people on the ground are most interested in hiding from them, you’re thinking of Osama bin Laden and his ilk, hot-footing it from cave-to-cave, not daring to turn on their satellite phones, wondering if it’s safe to light a cigarette or a hash pipe (hint: it’s not).

The bird watchers take notes on how to hide from drones. But they’ve got bigger fish to fry. And they don’t lack for cash money, vehicles and electronics.

The bird watchers are not “for the Taliban”, although they consort with the Taliban, and even help the Taliban trivially from time to time. They need access to the ground over which the drones fly, so they make accommodations with whoever controls the ground — in some areas that may even be Pakistani army units.

Both “seeing” a drone (knowing it is present and where exactly it is in the sky) and figuring out how to destroy a drone are challenging technical problems. I have some ideas about how these problems (especially the seeing) will be tackled, but the main thing to note is that the bird watchers can’t figure out how to see drones, much less destroy them, if there aren’t any drones.

The bird watchers want drones to study.

When the U.S. decides to increase the drone presence over Pakistan and Quasi-Pakistan, that’s good news for the bird watchers.

For the United States, one of the costs of a drawn-out war is that we expose our highly technical weapons — things like drones — to parties not necessarily our friends, who want to learn everything they can about what we have and how we use it.

How long will we linger in Afghanistan and Pakistan, letting all-and-sundry study our drones and develop anti-drone technology? Hard to say. It could be a long time. What began as an easy-to-understand effort to capture or kill Osama bin Laden, and capture or kill everyone who helped him, has morphed into an incomprehensible educational project: American Civics 101 for the Historically and Culturally Challenged.

Afghans are not bright students.

But it’s not obvious who’s presiding in the classroom. Perhaps they’re treating us to a course in Afghan Historical Continuity 101.

We’re not such bright students either.

PB

———
[1] Wall Street Journal, Saturday/Sunday, October 2-3, 2010
[2] “It’s a place of warring tribes, which is to say, a land of opportunity.” [approximate quote from The Man Who Would Be King]


stuxnet #2

If you’ve followed reports of the Stuxnet internet worm over the last month or so, you probably have in your mind an image. I am about to guess the image in your mind.

I’ll use the word “research” in my description of what’s in your mind. But if you think “bomb making” sounds better than “research”, feel free to read “bomb making” where I have the word “research”.

The image in your mind is of a Windows computer. This computer is in a nuclear research facility in Iran. It is attached to a network and has access to the internet. It is also attached to a programmable logic controller (PLC), via a Windows application, that allows an Iranian researcher to key in or otherwise load changes to the operational nuclear research environment, adjusting settings and changing live code as he needs to.

In the plant where this Windows computer sits there is no deployment discipline for patches or upgrades. That is, the kind of process which enterprises typically use for bringing code changes online (test → staging → prod) is unknown. Someone can sit down at the machine, tap keys for a bit, hit ENTER and immediately production code is changed. There are no safeguards to prevent a typo from causing disaster. There’s no way to quickly roll back a patch if it causes an unexpected problem. There’s no redundancy, so that if one monitoring or control system fails another system — independent — can take over.

This Windows machine is not only on the corp network but it is reachable by other machines on the network — that is, the router doesn’t have it on its own lonely subnet. It has some virus detection software, but that’s it. It is not re-imaged regularly, and no one ever CD-boots it with a non-Windows OS to collect checksums from the filesystem to diff against a known baseline.

Is that anything like the image you have in your mind?

The reason it could be is that the superficial reporting of Stuxnet has embedded in it an assumption that the challenge for the worm was only to find its way to a more-or-less ordinary Windows machine with access to production code running a nuke plant, quietly get control of that machine and then make changes to the nuke-plant-controlling code.

Got that? Here’s something else we should take just as seriously:

Living in the sewers there is a race of little green men, originally from Mars, who ooze through keyholes and use precision lasers to steal vital organs from sleeping victims without waking the family doberman.

The image of the alleged target Windows machine and fly-by-the-seat-of-your-pants change management is created by superficial reporting of Stuxnet, and it’s preposterous.

The worm has many clever features, but it’s not magical. Symantec has a good paper describing it, if you have a taste for technical details. I won’t give away the plot, but it is at least conceptually possible for Stuxnet to get its code into a PLC, in the wild (as opposed to in the lab). However, the idea that it could seriously disrupt a nuclear research plant for any length of time, causing anything more than a passing inconvenience, is nonsense.

Now (still reading your mind), you’re thinking, “In Iran maybe they do have Windows boxes that can access live code managing a nuke plant, machines that are also used to check email and surf the net…”

The reason you think that is related to another image in your mind: the image of His Utmost Ridiculousness Ahmadinejad.

The response: “No. In Iran they don’t hire monkeys to manage nuclear research plants. They have smart people, good engineers, who know how to do things that are technically complex. They know how to review code changes, manage patch deployments, limit access to mission critical applications, checksum binaries and code blocks… do rollbacks, ensure redundancy… Iran is not a nation of halfwits, notwithstanding Ahmadinejad’s efforts to persuade us otherwise.”

What about the speculation that Stuxnet was created by state-sponsored hackers — maybe Israel, maybe the U.S. — and that the target was an Iranian nuclear plant?

Governments, certainly ours and maybe Israel’s, are capable of delivering truckloads of money to oily, big-talking contractors, who pretend that Rube Goldbergian schemes are easy as tinker toys. But the simple fact is that no one with understanding looked at a proposed objective like, “eliminate Iranian nuclear plant,” and believed that Stuxnet could or would do that.

I can make a case that is coherent (if not likely) that the U.S. or Israel or some other state sponsored the release of Stuxnet on the internet, and that it did have something to do with an Iranian nuclear plant — but the chance that Stuxnet itself would do damage was understood: slim to none.

I’ll make that case… another day.

PB


stuxnet #1

Symantec’s W32.Stuxnet Dossier is the most useful info I’ve seen about stuxnet. Early in the paper, explaining context:

Industrial control systems (ICS) are operated by a specialized assembly-like code on programmable logic controllers (PLCs). The PLCs are often programmed from Windows computers not connected to the Internet or even the internal network. In addition, the industrial control systems themselves are also unlikely to be connected to the Internet.

Whew… Don’t miss this: “…PLCs are often programmed from Windows computers not connected to the Internet or even to the internal network…”

From the superficial reporting of stuxnet you could get the idea that a Windows box, casually attached to a network, could also casually access and change production code in an industrial environment. When you stop to think about it, you know that can’t be true. It could be true somewhere at some time, simply because there are outliers in any large sample, but if you know how things work, you know it can’t be true in general.

Fortunately, Symantec mentions what is hardly worth mentioning — because it will be assumed by technically literate readers. But a lot of people don’t know much about development and deployment processes that are widely used, even for goofy web apps…

— “What? You go through all that to change one line of code on your webserver?”
— “On our production webserver”.

The Symantec paper has enough technical detail to be interesting. The assessment of resource requirements:

The full cycle may have taken six months and five to ten core developers not counting numerous other individuals, such as quality assurance and management.

From the superficial reporting, I had guessed 10 engineers (including QA) for 1 year, plus equipment, simulators, etc. — way less than $10 million if done in the private sector, and probably not more than $100 million if government sponsored.

This, of course, is easily within the reach of many criminal organizations (today in Wired: “5 Key Players Nabbed in Ukraine in $70-Million Bank Fraud Ring”) — and criminal-government blends.

Because the Symantec paper has fairly close technical analysis, it includes odd notes like this:

…If this value is equal to 19790509 the threat will exit. This is thought to be an infection marker or a “do not infect” marker. If this is set correctly infection will not occur. The value appears to be a date of May 9, 1979. While on May 9, 1979 a variety of historical events occurred, according to Wikipedia “Habib Elghanian was executed by a firing squad in Tehran sending shock waves through the closely knit Iranian Jewish community. He was the first Jew and one of the first civilians to be executed by the new Islamic government. This prompted the mass exodus of the once 100,000 member strong Jewish community of Iran which continues to this day.” Symantec cautions readers on drawing any attribution conclusions. Attackers would have the natural desire to implicate another party.
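The check Symantec describes is simple enough to sketch. The marker value and its reading as a date come from the dossier; the function names are mine:

```python
from datetime import date

# The "do not infect" value Symantec reports finding checked by the worm.
INFECTION_MARKER = 19790509

def should_infect(marker_value):
    """Per the dossier: if the value equals the marker, the threat exits."""
    return marker_value != INFECTION_MARKER

def marker_as_date(value):
    """Read a YYYYMMDD integer as a calendar date."""
    s = str(value)
    return date(int(s[:4]), int(s[4:6]), int(s[6:8]))
```

Which is exactly why the date is weak evidence: anyone who reads 19790509 as May 9, 1979 was meant to read it that way — or at least could have been.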

Wikipedia has a short article about Habib Elghanian:

hmm…

Why does this remind me of “The Man Who Never Was”?

PB
