XSS in Rails Applications

I'm doing some research at the moment for a presentation I'm giving at the Scotland on Rails conference later this month. As part of that I've been downloading some sample Rails applications to get an idea of common security issues that I can discuss.
Interestingly, of the popular applications I've downloaded so far, I'm 2 for 2 on the exact same problem.
Both of them have XSS vulnerabilities running from the user-facing side of the site to the admin side. The end-user pages have output encoding to restrict XSS, but the admin sections don't consistently provide the same protection.
It's also interesting that both applications seem to be relying on output encoding as a defence, as opposed to input validation. In my experience the best defence is a combination of the two...
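To illustrate the kind of combination I mean, here's a rough Python sketch rather than actual Rails code (in Rails the output-encoding half is usually just the h()/html_escape helper in the views); the field name and allow-list pattern are made up for illustration:

    import html
    import re

    # Allow-list validation on the way in: only accept what we expect.
    # The pattern is purely illustrative; tighten it for real fields.
    USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_-]{3,30}$")

    def validate_username(value):
        if not USERNAME_PATTERN.match(value):
            raise ValueError("username contains unexpected characters")
        return value

    # Output encoding on the way out: escape anything user-supplied before
    # it lands in an HTML page - on the admin pages as much as the public ones.
    def render_comment(author, body):
        return "<p><b>{}</b>: {}</p>".format(html.escape(author), html.escape(body))

    if __name__ == "__main__":
        author = validate_username("alice_123")
        print(render_comment(author, "<script>alert(1)</script>"))
        # the script tag comes out inert: &lt;script&gt;alert(1)&lt;/script&gt;

The point being that validation cuts down what can get into the application in the first place, and encoding makes sure that whatever does get in can't execute when it's displayed.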
Of course that leads to some potentially nasty exploits around stealing admin credentials from the site in question. Hey, looks like I'll have some stuff to talk about anyway :)

Penetration Test Scoping

Got a reminder I've not blogged in a while, so here's the next part of what I was going to talk about..
So, following on from my first post in this series I thought I'd go on to talk about penetration test scoping.
Getting the scope right is one of the most important parts of a successful pen. test. If you get the scope wrong it won't matter how brilliantly the test is executed or how great the report looks, because you won't have fulfilled the customer's requirements.
Unfortunately, in a lot of cases the customer doesn't actually know what they want: they may have heard that they need "one of those security test things", they may have auditors telling them they have to have one, or, if you're lucky, they may have an idea of what they're looking to achieve.
The best pen. tests have specific goals in mind, which allow specific tests to be scoped. Most commonly a good scope will focus on the question "what's changed", along with a view of the level of security desirable for an application.
So a high risk new application on a new platform is likely to warrant a fairly heavy review (web application, possibly code review, likely config. security review of the operating system and other new components like firewalls or routers), whereas some new pages added to an existing application, where it's purely static content, might not warrant a review at all (or only a very quick sense check).
Ultimately there's going to be a trade-off between time spent and the level of assurance over security that's needed. A full code review/manual build review/architecture review of a new application is likely to provide the best level of assurance, but at a pretty high cost. A "black box" vulnerability scan and web application test will likely be quicker and therefore cheaper, but will provide a lower level of assurance.
Next time (hopefully sooner) I'll talk about some of the challenges in executing tests and touch on some of the gotchas that can cause problems.

What is Penetration Testing?

I'm planning to do a series of posts about penetration testing over the next couple of weeks so I thought I should start in the obvious place of defining what it actually is.
You'd think this would be relatively straightforward, but the term "penetration testing" is mis-used all over the place. Some people use it to refer to vulnerability assessment, some people use it to refer to Web Application Security Assessment, and a lot of business people use it to refer generically to any and all security assessment activity.
So what actually is it? Well, for me a penetration test is a scenario-based assessment where the tester actually tries to exploit security vulnerabilities in a system or systems (depending on the scope), and then leverages those exploited vulnerabilities to gain further access to other systems within the scope of the assessment that may be reachable after exploiting the initial vulnerability.
So that's what it is; why is it important to use the term correctly?
Well, different security assessment types have different characteristics and provide the owner of the system with different levels of assurance, so it's important to make sure everyone's talking about the same thing.
For example, vulnerability assessment is primarily tool-based (eg, Nessus), focuses on network/operating system/perhaps database-level problems, and doesn't usually exploit the vulnerabilities found. It's pretty low risk to the systems under test (usually), but it won't provide definite confirmation of problems and typically doesn't look at web applications, so it won't cover all the attack surface of a typical web application exposed over the Internet.
So if someone calls a vulnerability assessment a penetration test (and this is pretty common, in my experience) there's a good chance that someone's going to be disappointed in the results...
From the definition I used there are a couple of areas that can be very important to define correctly when conducting a test, so next time I'm planning to go over some of the common problems and misconceptions in scoping penetration tests.

Death of Pen Testing?

http://riskmanagementinsight.com/riskanalysis/?p=532

Very interesting post over at Riskanalysis.is on penetration testing and what it may turn into.
There's some good reasons to do penetration testing in there and I'd agree that targeted testing to prove or disprove theories about the security environment is a smart way to use penetration testing. My feeling though is that, at the moment, only more mature security organisations will be in a good place to use it in that way.
For most companies there are other reasons why penetration testing is going to remain on the menu in its current form:

  • Compliance. Penetration testing seems to be getting commonly adopted as one of the "bullet points" that need to be completed to comply with industry or government regulations, most noticeably PCI.
  • Externally hosted applications. In situations where a company doesn't have great visibility of an application that they're entrusting valuable data to (eg, most outsourced application hosting setups), they need some way to get comfort that a reasonable level of security is being applied to that application. Usually that will involve a penetration test, especially if the application is exposed to a hostile environment (like the Internet!)
So whilst I'd definitely like to see smarter use of penetration tests, I don't think that testing as it's used currently is going to go out of fashion any time soon.

Catching out dodgy security policies

Here's a question to ask your security policy people, to see whether their recommendations are actually risk based or just "best guesses"...
"Have you updated the minimum password length/complexity requirements due to recent advances in password cracking speeds?"
I was reading a couple of posts on the Red Database Security blog (here and here), and it occurred to me that despite the increases in password cracking speeds over the last couple of years, I've not seen a lot of movement in minimum password length/strength requirements to go along with them...
Obviously password policies should be tailored to mitigate the threats to the systems they protect, and the primary risk that long passwords mitigate is an offline attack where the attacker has access to the hashed passwords (the more common online brute-force attack is better mitigated by account lockout and security monitoring in most cases).
So if crackers are getting faster, passwords should obviously get longer...
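To put some very rough numbers on that (the guess rate below is just an assumed round figure for illustration; real offline cracking speeds depend entirely on the hash algorithm and the attacker's hardware), here's a quick Python sketch of how the worst-case time to exhaust the keyspace scales with length:

    # Illustrative numbers only: the assumed rate is a round figure,
    # not a benchmark of any particular cracker or hash.
    GUESSES_PER_SECOND = 1_000_000_000
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    def years_to_exhaust(charset_size, length, rate=GUESSES_PER_SECOND):
        """Worst-case time to try every password of the given length."""
        keyspace = charset_size ** length
        return keyspace / rate / SECONDS_PER_YEAR

    if __name__ == "__main__":
        # 94 printable ASCII characters, at a few different minimum lengths
        for length in (8, 10, 12):
            print(length, "characters:", round(years_to_exhaust(94, length), 2), "years")

Every time the crackers double their guess rate those numbers halve, and the only lever the policy side really has in response is length.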

Why eBook Readers won't succeed for now...

I really like the idea of eBook readers and I've been following the progress of a number of them for a while now (There's an excellent resource over at the MobileRead site).
But there's one glaringly obvious reason why they won't succeed for recreational book readers... which is the absurd pricing of eBooks.
The most recent evidence of this is the launch of the Sony Reader in the UK. I had a look round their site and all looked well. The price is reasonable (£199) and the product looks nice. To get a feel for the books available I went to the Waterstones UK website, Sony's eBook partner for the launch.
What I found really did surprise me; it's like the book publishers want this to fail.
First book on the page, The Private Patient by P.D. James. Waterstones eBook price: £12.92... Amazon.co.uk's price for the hardback version: £9.49!
So they're seriously expecting people to pay 36% more for an eBook (a digital file, easily produced, with no shipping or production costs, and with DRM on it) as against a hardback book that could be resold once you've read it.
Looking through some of the other prices, this doesn't appear to be an isolated aberration either; the differential is highest against hardback titles (a distinction which makes zero sense in an eBook world), but the prices seem uniformly higher for eBooks than for physical books.
Now I do see that for some applications where physical books are impractical, eBooks, whatever the cost, could make sense.
But for recreational reading, the chances that large numbers of book lovers (many of whom are attached to the experience of physical books anyway) will switch to a more expensive, more restrictive, electronic implementation are pretty slim!

DNS vulnerability - are there any other mitigations apart from patching?

Well as I'm sure everyone is aware the details of the DNS flaw that Dan Kaminsky found have been disseminated round the 'net a bit early.
I'm not going to get into the politics of whether that's a good thing/bad thing or how urgent patching is as it's been done to death elsewhere...
I was thinking though about how it may be possible to mitigate this in other ways than patching...
Having heard the detailed explanation from Matasano on the vulnerability, wouldn't it be possible to mitigate this by changing the behaviour of the authoritative name server?
If I'm understanding things correctly, as the authoritative name server for a domain you'd see a whole load of requests for invalid subdomains of your domain (eg, AAAA.MYDOMAIN.COM, AAAB.MYDOMAIN.COM) and usually you'd just respond with NXDOMAIN. Now the attacker is relying on you responding NXDOMAIN so he can respond with the additional RR of your real website, say, WWW.MYDOMAIN.COM.
Would it be possible to change your behaviour to respond as the attacker would, with the RR for your valid hosts, so causing the caching DNS server to cache them on the first attempt and preventing the attacker from getting the incorrect entries in first? The attacker is relying on guessing the port and transaction ID, so he won't get there on the first attempt, which would seem to mean this could mitigate the problem.
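To make the idea a bit more concrete, here's a conceptual Python sketch of that response behaviour. It's nothing like a real DNS server, the names and addresses are invented, and whether a resolver would actually accept and cache the volunteered records depends on its bailiwick and trust rules, so treat it as a thought experiment:

    # Conceptual sketch only: a real implementation would live inside the
    # authoritative DNS server itself. All names and addresses are made up.
    KNOWN_HOSTS = {
        "www.mydomain.com.": "192.0.2.10",
        "mail.mydomain.com.": "192.0.2.20",
    }

    def build_response(qname):
        qname = qname.lower()
        if qname in KNOWN_HOSTS:
            return {"rcode": "NOERROR",
                    "answer": [(qname, "A", KNOWN_HOSTS[qname])]}
        # Instead of a bare NXDOMAIN for AAAA.MYDOMAIN.COM-style junk names,
        # also volunteer the real records, in the hope that the resolver
        # caches them before the attacker's forged versions can land.
        return {"rcode": "NXDOMAIN",
                "additional": [(name, "A", ip) for name, ip in KNOWN_HOSTS.items()]}

    if __name__ == "__main__":
        print(build_response("aaaa.mydomain.com."))
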
That said I'm no DNS expert so this may well be off base...

More virtualization fun..

There's an interesting post over at Hoff's blog around virtualization and DMZs, and to what level it's "ok" to virtualize a given DMZ environment, following on from a white paper by VMware on the subject.
As Hoff mentions you need to understand the wider context in any risk assessment, but I actually think that in the scenarios VMware have painted, I'd agree with Alessandro that the fully collapsed DMZs talked about in the paper are a no-no.
And there's some nice risk assessment reasoning here; it's not just an "ooh, hypervisors, scary" kind of reaction, honest :)
So here's how it works. In the diagrams they've used, they've laid out a picture of a number of security controls, the main one being separate firewalls segregating the Internet from each of the DMZs in turn. This would indicate to me that the risk assessment dictated that no one device should be a point of failure for the security being provided by the environment (a more cost effective, but traditionally seen as riskier, design would be a single firewall with multiple interfaces, one for each network).
So if we then introduce virtualization to this scenario, it seems that the option of a "partially collapsed" DMZ meets the security requirements, as each DMZ has its own VMware ESX instance and a compromise of the hypervisor won't result in a breach of DMZ segregation.
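Just to spell out the invariant that design relies on (the inventory below is entirely hypothetical), a "no hypervisor spans more than one DMZ" rule is simple enough to write down and check:

    from collections import defaultdict

    # Hypothetical inventory: which ESX host each guest runs on, and which
    # DMZ it belongs to. The partially collapsed design never maps two DMZs
    # onto the same hypervisor.
    GUESTS = [
        {"name": "web01", "dmz": "internet-dmz", "esx_host": "esx-a"},
        {"name": "web02", "dmz": "internet-dmz", "esx_host": "esx-a"},
        {"name": "app01", "dmz": "app-dmz",      "esx_host": "esx-b"},
        {"name": "db01",  "dmz": "data-dmz",     "esx_host": "esx-c"},
    ]

    def check_segregation(guests):
        dmzs_per_host = defaultdict(set)
        for guest in guests:
            dmzs_per_host[guest["esx_host"]].add(guest["dmz"])
        return {host: dmzs for host, dmzs in dmzs_per_host.items() if len(dmzs) > 1}

    if __name__ == "__main__":
        violations = check_segregation(GUESTS)
        print("OK" if not violations else "hypervisors spanning DMZs: %s" % violations)

It's the same "no single point of failure" appetite as the physical design, just expressed against the virtual inventory.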
I think that in a lot of cases it's easy to look at virtualization as something new, but it should be possible to look at the current risk appetite in an environment (are you using separate devices to segregate things, or are you relying on VLAN tagging for separation?) and then apply that to come up with an appropriate virtualization design.

Avoiding controls which are "designed to fail"

One of the great problems and frustrations of working in security is when those darned users don't follow the nice policies that people have spent so much time working on.
But here's the thing, security professionals actually indoctrinate users not to follow policies!
How do they do this? Well, people like following patterns, so when the pattern "it's okay not to actually follow this" is established in relation to security, people will apply that pattern the next time they run into a security policy that's difficult or inconvenient to follow.
I'm sure there's a lot of security people saying "No idea what he's talking about, all my policies were made to be followed!"....
O'Rly..
Here's an example that I'll bet is familiar to a lot of people: password policy. Does anyone actually follow their company's password policy? I'll bet it looks something like this:

  • Passwords must be 8 or more characters with upper, lower, numeric and special characters
  • Passwords must not be based on dictionary words
  • Passwords must be rotated every 30 days
  • You must have a different password for every system (including not using the same passwords for personal websites)
  • Oh yeah, and once you've got this list of 40 or so random strings that are really tricky to remember and might not use very often, don't you dare write them down

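Spelled out as code, a literal-minded enforcement of that list looks something like the sketch below (the dictionary words and the 30-day window are just illustrative assumptions), which perhaps makes it clearer quite how much we're asking of human memory:

    import re
    from datetime import date, timedelta

    # Purely illustrative word list; a real check would use a proper dictionary.
    DICTIONARY_WORDS = {"password", "welcome", "monkey", "dragon"}

    def check_password(password, last_changed, other_passwords):
        problems = []
        if len(password) < 8:
            problems.append("shorter than 8 characters")
        if not (re.search(r"[a-z]", password) and re.search(r"[A-Z]", password)
                and re.search(r"[0-9]", password) and re.search(r"[^A-Za-z0-9]", password)):
            problems.append("missing one of the four character classes")
        if any(word in password.lower() for word in DICTIONARY_WORDS):
            problems.append("based on a dictionary word")
        if date.today() - last_changed > timedelta(days=30):
            problems.append("not rotated in the last 30 days")
        if password in other_passwords:
            problems.append("reused on another system")
        return problems

    if __name__ == "__main__":
        print(check_password("Tr1cky!Str1ng", date.today(), {"Summer2008!"}))
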
We're setting ourselves up for failure, and study after study shows that users will write down their passwords, or use sequences or many other tricks to make them more memorable.
This example (which may be a user's main interaction with "security") sets the expectation that security policies can be ignored, because they're unrealistic.
So what's the answer..
Well when designing controls, I think that it's not enough to just look at the technical security properties in abstract. We've got to consider the psychological/sociological elements of the people we're expecting to execute the controls, and maybe take a path that isn't the best abstract solution but may well be the one that will work best in real life...
After all once users are set on the path of ignoring security it becomes pretty difficult to get them back on the one true way!

When is a Debian user not a Debian user?

So lots of people have commented on the potentially very nasty crypto bug in OpenSSL on Debian Linux (and derivatives, including Ubuntu), with the good advice of patching and regenerating your SSH keys...
Only thing is, what if you don't have access to a shell to do exactly that? What if you don't even know you run Debian Linux?
Over the last several years there has been a proliferation of computing "appliances" which almost inevitably run a cut-down Linux underneath the main software stack, and in many cases that's going to be Debian Linux.
The thing is, in some cases the vendor won't even explicitly mention what the underlying software is, so the end customer may be blissfully unaware that they have vulnerable machines...
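If you do suspect you've got one of these boxes on the network, one rough way to check from the outside (this is only a sketch: the blocklist file is an assumption, standing in for one of the published lists of known-weak Debian key fingerprints, and the appliance addresses are made up) is to pull each appliance's SSH host key with ssh-keyscan and compare its fingerprint against the known-weak ones:

    import base64
    import hashlib
    import subprocess

    # Assumed format: one MD5 fingerprint per line (aa:bb:cc:...), taken from
    # a published list of keys generated by the broken Debian PRNG.
    def load_blocklist(path="weak_fingerprints.txt"):
        with open(path) as handle:
            return {line.strip().lower() for line in handle if line.strip()}

    def host_key_fingerprint(host, key_type="rsa"):
        """Fetch a host key with ssh-keyscan and return its MD5 fingerprint."""
        result = subprocess.run(["ssh-keyscan", "-t", key_type, host],
                                capture_output=True, text=True)
        for line in result.stdout.splitlines():
            if line and not line.startswith("#"):
                blob = base64.b64decode(line.split()[2])
                digest = hashlib.md5(blob).hexdigest()
                return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
        return None

    if __name__ == "__main__":
        weak = load_blocklist()
        for appliance in ["10.0.0.5", "10.0.0.6"]:  # hypothetical appliance IPs
            fingerprint = host_key_fingerprint(appliance)
            if fingerprint and fingerprint in weak:
                print(appliance, "is advertising a known-weak SSH host key")

That won't prove a box is running Debian underneath, but a known-weak host key is a pretty strong hint that the appliance vendor has some patching to do.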