Re-reading my previous post on data-centric security, Hoff made the correct comment that I'd gone to the extreme end and that it didn't quite flow from his post.
Fair point. I jumped a couple of hurdles a bit too quickly and probably didn't make it clear where I'm coming from, so I'll try and cover things a bit better now.
OK, first basic point: I'm not a fan of *some* of the Jericho Forum's ideas (I like most of the others just fine, in principle anyway). Specifically, the DRM/access-to-data bit. In principle it sounds great, but I don't think it's practical to implement in most organisations, with their masses of unorganised data and ever-increasing requirements for easier connectivity and data flow.
Now, Rob makes the point very forcefully that models like Bell-LaPadula have described the kind of Mandatory Access Control world that DRM implements for quite some time. Yep, they have, but outside of military or police environments I've never seen them implemented. My feeling is that the reason for this is that in these systems users need to be actively involved in data security: they need to classify information as it's created, and they need to understand the requirements on them to maintain the classification of that data.
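To make that overhead concrete, here's a minimal sketch (in Python, purely illustrative; the levels and labels are made up, not from any real product) of the "no read up, no write down" rules that Bell-LaPadula style MAC enforces. The point is that none of it does anything useful unless somebody has correctly labelled every document first.

```python
# Illustrative Bell-LaPadula style checks - the classification levels
# and label names here are invented for the example.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple security property: read only at or below your own level (no read up)."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """Star property: write only at or above your own level (no write down)."""
    return LEVELS[subject_level] <= LEVELS[object_level]

# The catch for a general business: every document needs a label like these,
# applied correctly by the person who creates it.
print(can_read("secret", "confidential"))   # True  - reading down is allowed
print(can_read("confidential", "secret"))   # False - no read up
print(can_write("secret", "confidential"))  # False - no write down
```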
I don't think most corporates will buy into systems that work that way. The overhead of training users and maintaining systems that implement MAC is beyond what most companies want to take on.
So... am I anti-security? Nope, I'm extremely pro-security. My feeling, however, is that the best way to implement security is in ways that are invisible to users. Every time you make ordinary business people think about security (e.g. usernames/passwords), they try their darndest to bypass those requirements.
Personally, I'm a great fan of network segregation and defence in depth at the network layer. I think that devices like the ones Crossbeam produce are very useful for coming up with risk profiles on a network-by-network basis, rather than a data basis, and managing traffic that way. The reason for this is that the segregation and protections can then be applied without the intervention of end users, and without them (hopefully) having to know what security is in place.
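As a rough illustration of what I mean (again a made-up sketch; the zone names and the policy table aren't from any real device), the decision here is keyed entirely off the source and destination networks. Nothing on the wire needs a classification tag, and the user never has to do anything:

```python
# Zone-based segregation sketch: allow/deny is decided from where the
# traffic comes from and goes to, not from labels attached to the data.
import ipaddress

ZONES = {
    "corporate_lan": ipaddress.ip_network("10.1.0.0/16"),
    "dmz":           ipaddress.ip_network("10.2.0.0/16"),
    "internet":      ipaddress.ip_network("0.0.0.0/0"),
}

# Allowed (source zone, destination zone) pairs - anything else is dropped.
POLICY = {
    ("corporate_lan", "dmz"),
    ("dmz", "internet"),
}

def zone_of(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    # Most-specific network first, so the catch-all "internet" zone matches last.
    for name, net in sorted(ZONES.items(), key=lambda kv: kv[1].prefixlen, reverse=True):
        if ip in net:
            return name
    return "internet"

def allowed(src: str, dst: str) -> bool:
    """The end user never sees this check - it happens in the network path."""
    return (zone_of(src), zone_of(dst)) in POLICY

print(allowed("10.1.5.20", "10.2.0.10"))  # True  - LAN into the DMZ is permitted
print(allowed("10.2.0.10", "10.1.5.20"))  # False - DMZ back into the LAN is blocked
```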
So, to use a phrase I've seen in other blogs on this subject, I think that "zones of trust" are a great idea, but the zones shouldn't be based on the data that flows over them; they should be based on the users and machines involved. It's the idea of tagging all that data with the right tags and controlling its flow that bugs me.
So that's where my points in the previous post came from, and I still reckon they're correct. Data tagging and parsing relies on the existence of standards and their uptake in the first instance, and then on users *actually using them*. Personally, I think that's not going to happen in general companies, and therefore it's not the best place to be focusing security effort...


raesene

Security Geek, Kubernetes, Docker, Ruby, Hillwalking