Sasser worm author caught

Over at the BBC they're carrying the story that "Teen 'confesses' to Sasser worm". What worries me most about this is that if this guy turns out to be the author of the Sasser worms and the Netsky viruses (as some other newswires are suggesting), he has managed to cause millions of pounds of damage on his own... one teenager...
Given that, what level of damage could be done by an organised, well-funded group of people looking to maximise the damage done to the Internet...? Not a comforting thought, really.

Detecting Rogue machines on client subnets

A little while back, I was giving some thought as to how to mitigate the risk of rogue DHCP servers on internal networks.
The risk, briefly, is that if someone can get their rogue DHCP server to hand out an address faster than the real one, they can control things like the default gateway and DNS server of client PCs. Once they've set that up, they can sniff any and all traffic that goes by, and also modify it if required.
One of the standard technological controls for stopping people putting rogue devices on a network, static MAC address assignments on switch ports, isn't likely to be effective here, as it would be very onerous to maintain on client subnets... Likewise, other controls such as an IDS are unlikely to be deployed in what are generally perceived as "low-risk" segments of the network...
So, an idea which might work (it may already exist; I'd be interested to hear if it does) would be to have something like Nmap scanning round the subnets on a regular basis, looking for new services coming online... All that would be needed is an interface for admins to define what to look for (e.g. "there should only be ports 137-139 and 445 on this subnet") and an alerting system... It would also help with detecting unauthorised web servers and the like in large corporations...
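The scanning idea above could be sketched roughly like this. This is a minimal, illustrative Python script (the subnet, the allowed-port list, and the "alert" being a simple print are all my assumptions, not a real product) that does a plain TCP connect scan and flags any open port that isn't on the admin-defined allowlist:

```python
import socket

# Ports the admin expects to see on this hypothetical client subnet
# (e.g. NetBIOS and SMB only, per the example in the text)
ALLOWED_PORTS = {137, 138, 139, 445}

def open_tcp_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = set()
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            # connect_ex returns 0 on a successful connection
            if s.connect_ex((host, port)) == 0:
                found.add(port)
        finally:
            s.close()
    return found

def unexpected_ports(found, allowed):
    """Ports that are open but not on the admin-defined allowlist."""
    return sorted(found - allowed)

if __name__ == "__main__":
    # Hypothetical subnet sweep -- replace with your own ranges
    for last_octet in range(1, 255):
        host = "192.168.1.%d" % last_octet
        open_ports = open_tcp_ports(host, range(1, 1025))
        rogue = unexpected_ports(open_ports, ALLOWED_PORTS)
        if rogue:
            print("ALERT: %s has unexpected services on ports %s" % (host, rogue))
```

A real deployment would want to drive Nmap itself (for speed and UDP coverage) and feed the alerts into whatever console the admins already watch; this just shows the core "scan, diff against policy, alert" loop.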

Security threats to open/closed source software

Over at David Cartwright's Home Page there are some comments on a debate about the relative security of open and closed source software. It pretty much sums up how I feel about it.
There are potentially going to be security flaws, either malicious or accidental, in any software much more complicated than "Hello World", be it open or closed source. My personal opinion is that at least with open source software, if it's sufficiently important to you to mitigate that risk, you *can* get the source code reviewed. This cannot be the case with closed source software, as even if you are given a copy of the code to review (for example by Microsoft through their Shared Source initiative), you have no guarantee that the code you reviewed is the code that was compiled to create the software you get on the CD...
That leads me on to another thought: I wonder if any of the Shared Source licensees have been able to compile something like Windows Server 2003 from the source they've been given to create a running OS...?
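The "reviewed source vs. shipped binary" gap can only really be closed if you can rebuild the software yourself and show the result is byte-identical to what the vendor ships. As a minimal sketch (the file paths are hypothetical, and this only works if the build is fully deterministic, which compilers embedding timestamps etc. often break):

```python
import hashlib

def file_digest(path):
    """SHA-256 of a file, read in chunks so large binaries fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_matches_shipped(built_binary, shipped_binary):
    """True only if the binary you compiled from the reviewed source is
    byte-for-byte identical to the one the vendor shipped."""
    return file_digest(built_binary) == file_digest(shipped_binary)
```

Matching hashes prove the reviewed source really is what's on the CD; a mismatch proves nothing either way until the build is made reproducible.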

Article or Troll? Securing the 'Net

There's an article over on SecurityFocus by Tim Mullen titled "Stop Being a Victim". I'm undecided as to whether it's a troll or not. He appears to be suggesting that the way to improve security for Internet users is for those users to understand and care enough to secure their computers... It's a nice idea, but having been a network admin in the past and having supported a lot of users in my time, my initial thought on reading it was "BWA HA HA HA HA".
The idea of getting the X million people currently connected to the Internet to understand what is required to secure a computer on the Internet is quite amusing, given that professional IT people in large corporations regularly get it wrong, judging by the ease with which hackers like Adrian Lamo have penetrated their networks.
So, what is the answer then?? There are a couple of ideas which come to my mind.
1. Make ISPs and network access providers responsible (and legally liable) for traffic from their networks. Of course, the knock-on effect of that would be a huge rise in Internet access costs and greatly reduced functionality, as ISPs would have to install outbound filtering to stop attacks originating from their networks infecting others...
2. Split the Internet. One answer for some users would be the recreation of walled Internet communities (like the AOL and CompuServe of old), with no access to the mainstream Internet from the community (or very controlled access). That would, however, need to be combined with more control over the end users of the service... and again far higher charges for access, as the provider supplied security services to the subscribers...
3. Improve software to make it more secure, and less vulnerable to attack... This is the one currently being tried by Microsoft, improving software quality and adding security features to their operating systems. However, I'm not convinced that this will ever really have the desired effect... At the moment, my perception is that worm/virus attacks are on the up, and the number of patches coming out of Redmond is going up, not down...
I'd mention the idea of regulation as a concept, except that on a global Internet the chances of getting all the world's governments lined up behind decent legislation are what I can only describe as extremely unlikely...
So where does that leave us? The answer is, I'm not sure. Sorry, this isn't some published article so I don't have to have a silver bullet solution ;op

MetaSploit redux

Well I had a chance to download and have a quick test of the metasploit framework which I talked about earlier.
It definitely does what it says on the tin! I downloaded it, ran the web server version (one command), fired up a known-vulnerable virtual machine, and very soon had a remote administrator exploit launched against IIS 5.
I think it could be very useful in the security industry from the point of view of convincing companies that the level of technical knowledge required to hack into their systems is not high... This is needed because a common reason given by company management for not doing things like patch management of internal servers is "well, no one would know how to do that", with the assumption that hacking a server requires a high level of technical expertise...

Prelude IDS

There's an interesting article over at Local Area Security which talks about the Prelude IDS framework. It's an application which provides, amongst other things, a console for viewing alerts which can be pulled in and aggregated from a number of sources...
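The core of that aggregation idea can be sketched in a few lines. This is not Prelude's actual API, just an illustration of pulling alerts from several hypothetical sensors and grouping them by signature so one console view shows what's happening across the network:

```python
from collections import defaultdict

def aggregate_alerts(alerts):
    """Group alerts from many sensors by signature, counting occurrences
    and recording which sensors reported each one.

    Each alert is a dict like:
        {"sensor": "snort-dmz", "signature": "SCAN nmap TCP", "src_ip": "10.0.0.5"}
    """
    summary = defaultdict(lambda: {"count": 0, "sensors": set()})
    for alert in alerts:
        entry = summary[alert["signature"]]
        entry["count"] += 1
        entry["sensors"].add(alert["sensor"])
    return dict(summary)

# Example feed from two made-up sensors seeing the same scan
alerts = [
    {"sensor": "snort-dmz", "signature": "SCAN nmap TCP", "src_ip": "10.0.0.5"},
    {"sensor": "snort-lan", "signature": "SCAN nmap TCP", "src_ip": "10.0.0.5"},
    {"sensor": "honeypot-1", "signature": "SMB login attempt", "src_ip": "10.0.0.9"},
]
summary = aggregate_alerts(alerts)
```

The value of a framework like Prelude is doing this across heterogeneous sensor formats and in real time, but the "correlate the same event seen from many places" idea is the same.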

The end of ROSI, one can but hope

information security: RoSI: R.I.P.
There's an interesting link over at Axel Eble's blog to a report that, hopefully, people are getting round to the thought that security is not something you calculate the ROI on; rather, you view it like insurance or a fire-control system, as loss avoidance.
The problem with calculating ROSI has always been quantification, and it's always struck me that people who suggest it as a good way of justifying security spend come up very short on specifics when asked how it would actually be implemented...
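To see where the quantification problem bites, here's the usual textbook attempt: Annualised Loss Expectancy (ALE = single loss expectancy × annual rate of occurrence), with the "return" on a control being the loss avoided minus its cost. All the figures below are entirely made up for illustration, which is exactly the point:

```python
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Annualised Loss Expectancy: expected yearly loss from one threat."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Entirely invented inputs:
ale_before = ale(50_000, 0.5)   # a breach "costs £50k" and "happens every 2 years"
ale_after  = ale(50_000, 0.1)   # the control "cuts it to once in 10 years"
control_cost = 10_000

# The "ROSI" pitch: loss avoided minus the control's cost
net_benefit = (ale_before - ale_after) - control_cost
```

The arithmetic is trivial; the trouble is that the £50k and both occurrence rates are guesses, so the output is only as credible as those guesses, which is why loss-avoidance framing (like insurance) tends to be more honest.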

Online Portscan

SecuriScan Security Test
An online portscanner, to go with GRC's Shields Up and the less flashy one over at yashy.com.

Spyware in the corporation

An interesting article over at Computerworld: Spyware in the office.
The existence of spyware on corporate networks is definitely not a good thing. Apart from the obvious risks of potential leaks of confidential information or excess traffic being generated, there is the problem that deploying unvetted code on a complex platform could cause other, business-critical, applications to stop working...

Encrypted mail that doesn't interfere with A-V

Over at ZDNet there's an article, PGP software gains antivirus defense.
This capability is very useful, getting round one of the problems of encrypted mail, which is that the content is hidden from any security or other inspection mechanisms, like A-V.