Yesterday’s Washington Post has a story about the leak of a confidential report from the House of Representatives Ethics Committee:
House ethics investigators have been scrutinizing the activities of more than 30 lawmakers and several aides in inquiries about issues including defense lobbying and corporate influence peddling, according to a confidential House ethics committee report prepared in July.
While the main focus of the story is the contents of the report, obtained by the Post, and the fodder for Washington gossip that it provides, I was interested to read that the report was disclosed because a committee staff member made it available on a peer-to-peer (P2P) file sharing network, apparently unintentionally:
The committee’s review of investigations became available on file-sharing networks because of a junior staff member’s use of the software while working from home.
There are, of course, the obligatory declarations that this was a violation of policy, that the matter is being investigated (doubtless Inspector Clouseau is on the case), and — amazingly — that no real security breach occurred:
The committee “is taking all appropriate steps to deal with this issue,” they said, noting that neither the committee nor the House’s information systems were breached in any way.
This is a most interesting concept. It is akin to saying that burglars got into your house and stole everything that was not nailed down, but there was no breach of security since the front door lock was still intact.
Back in early August, I wrote about a significant leakage of sensitive government information onto P2P networks. At the time, it was reported that Rep. Edolphus Towns (D-NY) was planning to introduce legislation prohibiting the installation of P2P software on government networks. As I said then, this misses the point:
In some cases, it appears that the software was installed without authorization by some of the network users. Why are these systems and networks configured to allow ordinary users to install software? This is just lunacy.
As I’ve observed in several other contexts here, it is not reasonable to set up a security regime that depends on ordinary users being competent systems and security administrators. They are not, and there is no prospect that they will gain that skill — nor should that be expected. If a system is going to be used to process sensitive information, it needs to be designed to be secure from the start. The software installed should be limited to what is required to do the job, and any additions or changes should be vetted before they are made. Allowing users to install anything that strikes their fancy, or allowing them to export sensitive data to external, uncontrolled environments, is insanity, not to mention incompetence.
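The "limit software to what the job requires" principle amounts to maintaining an approved baseline and flagging anything outside it. A minimal sketch of that audit, in Python, assuming a hypothetical approved list and package names invented for illustration:

```python
# Hypothetical sketch of an application-allowlist audit: compare what is
# actually installed against an approved baseline and report the rest.
# Package names here are illustrative, not taken from any real system.

APPROVED = {"word-processor", "spreadsheet", "email-client", "vpn-client"}

def find_unauthorized(installed, approved=APPROVED):
    """Return installed packages that are not on the approved list."""
    return sorted(set(installed) - set(approved))

if __name__ == "__main__":
    # In practice this list would come from the OS package manager;
    # here it is hard-coded for the sake of the example.
    installed = ["word-processor", "email-client", "p2p-file-sharing"]
    for pkg in find_unauthorized(installed):
        print(f"unauthorized software: {pkg}")
```

Real deployments enforce this at install time (e.g., application-allowlisting policy) rather than by after-the-fact scans, but the idea is the same: the approved set is defined centrally, and ordinary users cannot add to it.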