Being commercial

… given that I probably wouldn’t be writing this blog if I weren’t paid by my employer, I don’t feel too bad about mentioning our Summit…

I get paid*.  By Red Hat.  Not to write these blog posts, but to do a job of work.  The musings which make up this blog aren’t necessarily directly connected to the views of my employer (see standard disclaimer), but given that I try to conduct my work with integrity and to blog about stuff I believe in, there’s a pretty good correlation**.

Anyway, given that I probably wouldn’t be writing this blog if I weren’t paid by my employer, I don’t feel too bad about mentioning our Summit this week.  It’s my first one, and I’m involved in two sessions: one on moving to a hybrid (or multi-) cloud model, and one about the importance of systems security and what impact Open Source has on it (and vice versa).  There’s going to be a lot of security-related content, presented by some fantastic colleagues, partners and customers, and, depending on quite how busy things are, I’m hoping to post snippets here.

In the meantime, I’d like to invite anybody who reads this blog and will be attending the Red Hat Summit to get in touch – it would be great to meet you.

 


*which is, in my view, a Good Thing[tm].

**see one of my favourite xkcd cartoons.

Service degradation: actually a good thing

…here’s the interesting distinction between the classic IT security mindset and that of “the business”: the business generally want things to keep running.

Well, not all the time, obviously*.  But bear with me: we spend most of our time ensuring that all of our systems are up and secure and working as expected, because that’s what we hope for, but there’s a real argument not only for finding out what happens when they don’t, and not just for planning for when they don’t, but also for planning how they should behave when they don’t.  Let’s start by examining some techniques for how we might do that.

Part 1 – planning

There’s a story** that the oil company Shell, in the 1970s, did some scenario planning that examined what were considered, at the time, very unlikely events, and which allowed it to react when OPEC’s strategy surprised most of the rest of the industry a few years later.  Sensitivity modelling is another technique that organisations use at the financial level to understand what impact various changes – in order fulfilment, currency exchange or interest rates, for instance – make to the various parts of their business.  Yet another is war gaming, which the military use to try to understand what will happen when failures occur: putting real people and their associated systems into situations and watching them react.  And Netflix are famous for taking this a step further in the context of the IT world and having a virtual Chaos Monkey (a set of processes and scripts) which they use to bring down parts of their systems in real time, to allow them to understand how resilient the wider system is.
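To make the Netflix-style approach a little more concrete, here’s a minimal, purely illustrative sketch (in Python) of a chaos-monkey-style script.  To be clear, this isn’t Netflix’s actual tooling, and the service names, the dry-run default and the “--for-real” flag are all invented for the example: the point is simply that deliberately degrading one part of a system, under controlled conditions, lets you observe how the rest copes.

    # chaos_pup.py - a toy, hypothetical chaos-monkey-style script.
    # Picks one service at random from a candidate list and (unless running
    # in dry-run mode) stops it via systemd, so you can watch how the rest
    # of the system copes.  Only run it against systems you're allowed to break.
    import random
    import subprocess
    import sys

    # Candidate victims: illustrative service names only.
    CANDIDATES = ["httpd", "postgresql", "memcached"]

    def main(dry_run: bool = True) -> None:
        victim = random.choice(CANDIDATES)
        print(f"Selected service to degrade: {victim}")
        if dry_run:
            print("Dry run: not actually stopping anything.")
            return
        # 'systemctl stop <unit>' is the standard systemd way to stop a service.
        subprocess.run(["systemctl", "stop", victim], check=True)
        print(f"{victim} stopped - now observe (and measure) what breaks.")

    if __name__ == "__main__":
        main(dry_run="--for-real" not in sys.argv)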

So that gives us four approaches that are applicable, with various options for automation:

  1. scenario planning – trying to understand what impact large scale events might have on your systems;
  2. sensitivity planning – modelling the impact on your systems of specific changes to the operating environment;
  3. wargaming – putting your people and systems through simulated events to see what happens;
  4. real outages – testing your people and systems with actual events and failures.

Going out of your way to sabotage your own systems might seem like insane behaviour, but it’s actually a work of genius.  If you don’t plan for failure, what are you going to do when it happens?

So let’s say that you’ve adopted all of these practices****: what are you going to do with the information?  Well, there are some obvious things you can do, such as:

  • removing discovered weaknesses;
  • improving resilience;
  • getting rid of single points of failure;
  • ensuring that you have adequately trained staff;
  • making sure that your backups are protected, but available to authorised entities.

I won’t try to compile an exhaustive list, because there are loads of books and articles and training courses about this sort of thing, but there’s another, maybe less obvious, course of action which I believe we must take, and that’s to plan for managed degradation.

Part 2 – managed degradation

What do I mean by that?  Well, it’s simple.  We***** are trained and indoctrinated to take the view that if something fails, it must always “fail to safe” or “fail to secure”.  If something stops working properly, it should stop working altogether.

There’s value in this approach, of course there is, and we’re paid****** to ensure everything is secure, right?  Wrong.  We’re actually paid to help keep the business running, and here’s the interesting distinction between the classic IT security mindset and that of “the business”: the business generally want things to keep running.  Crazy, right?  “The business” want to keep making money and servicing customers even if things aren’t perfectly secure!  Don’t they know the risks?

And the answer to that question is “no”.  They don’t know the risks.  And that’s our real job: we need to explain the risks and the mitigations, and allow a balancing act to take place.  In fact, we’re always making those trade-offs and managing that balance – after all, the only truly secure computer is one with no network connection, no keyboard, no mouse and no power connection*******.  But most of the time, we don’t need to explain the decisions we make around risk: we just take them, following best industry practice, regulatory requirements and the rest.  Nor are the trade-offs usually so stark.  But when failure strikes – whether through an attack, accident or misfortune – it often comes down to a pretty stark choice between maintaining a particular security posture and keeping the lights on.  So we need to think about and plan for some degradation, and realise that on occasion, we may need to adopt a different security posture to the perfect (or at least preferred) one in which we normally operate.

How would we do that?  Well, the approach I’m advocating is best described as “managed degradation”.  We allow our systems – including, where necessary, our security systems – to degrade to a managed (and preferably planned) state, where we know that they’re not operating at peak efficiency, but where they are operating.  Key, however, is that we know the conditions under which they’re working, so we understand their operational parameters, and can explain and manage the risks associated with this new posture.  That posture may change in response to ongoing events, and to how the systems – and we – respond to those events, so we need to plan ahead (using the techniques I discussed above) so that we can be flexible enough to provide real resiliency.

We need to find modes of operation which don’t expose the crown jewels******** of the business, but do allow key business operations to take place.  And those key business operations may not be the ones we expect – maybe it’s more important to be able to create new orders than to collect payments for them, for instance, at least in the short term.  So we need to discuss the options with the business, and respond to their needs.  This planning is not just security resiliency planning: it’s business resiliency planning.  We won’t be able to consider all the possible failures – though the techniques I outlined above will help us to identify many of them – but the more we plan for, the better we will be at reacting to the surprises.  And, possibly best of all, we’ll be talking to the business, informing them, learning from them, and even, maybe just a bit, helping them understand that the job we do does have some value after all.
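To make that a little more concrete, here’s a purely illustrative sketch (in Python) of what “managed degradation” might look like as data: a set of named, pre-agreed postures, each listing the operations the business may still perform and the risks we’ve agreed to accept while in that state.  The posture names, operations and risks below are invented for the example, not a recommendation; the point is that we step down to a planned state rather than simply failing to secure.

    # A toy model of managed degradation: rather than failing closed, the
    # system drops to a named, pre-planned posture whose operational
    # parameters and accepted risks have been agreed in advance.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Posture:
        name: str
        allowed_operations: tuple   # what the business may still do
        accepted_risks: tuple       # what we have agreed to tolerate meanwhile

    # Hypothetical postures, ordered from preferred to most degraded.
    POSTURES = [
        Posture("normal",
                ("new_orders", "collect_payments", "reporting"),
                ()),
        Posture("degraded_payments",
                ("new_orders", "reporting"),
                ("payments queued rather than collected",)),
        Posture("orders_only",
                ("new_orders",),
                ("payments queued", "reporting delayed", "weaker fraud checks")),
    ]

    def step_down(current: Posture) -> Posture:
        """Move one planned level down instead of stopping everything."""
        i = POSTURES.index(current)
        return POSTURES[min(i + 1, len(POSTURES) - 1)]

    if __name__ == "__main__":
        posture = POSTURES[0]
        posture = step_down(posture)   # e.g. the payments subsystem has failed
        print(posture.name, posture.allowed_operations, posture.accepted_risks)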


*I’m assuming that we’re the Good Guys/Gals**.

**Maybe less story than MBA*** case study.

***There’s no shame in it.

****Well done, by the way.

*****The mythical security community again – see past posts.

******Hopefully…

*******Preferably at the bottom of a well, encased in concrete, with all storage already removed and destroyed.

********Probably not the actual Crown Jewels, unless you work at the Tower of London.

Getting started in IT security – an in/outsider’s view

… a basic grounding in cryptography is vital …

I am, by many measures, almost uniquely badly qualified* to talk about IT security, given that my degree is in English Literature and Theology (I did two years of each, finishing with the latter), and the only other formal university qualification I have is an MBA.  Neither of these seem to be great starting points for a career in IT security.  Along the way, admittedly, I did pick up a CISSP qualification and took an excellent SANS course on Linux and UNIX security, but that’s pretty much it.  I should also point out in my defence that I was always pretty much a geek at school***, learning Pascal and Assembly to optimise my Mandelbrot set generator**** and spending countless hours trying to create simple stickman animations.

The rest of it was learnt on the job, at seminars, meetings, from colleagues or from books.  What prompted me to write this particular post was an article over at IT Security Guru, “9 out of 10 IT Security Pros Surveyed Favour Experience over Qualifications – FireMon”, a brief analysis of a survey published on FireMon’s site.

This cheered me, I have to say, given my background, but it also occurred to me that I sometimes get asked what advice I have for people who are interested in getting involved in IT Security.  I’m wary of providing a one-size-fits-all answer, but there’s one action, and three books, that I tend to suggest, so I thought I’d share them here, in case they’re useful to anyone.

An action:

  • get involved in an Open Source project, preferably related to security.  Honestly, this is partly because I’m passionate about Open Source, but also because it’s something that I know I and others look for on a CV*****.  You don’t even need to be writing code, necessarily: there’s a huge need for documentation, testing, UI design, evangelism****** and the rest, but it’s great exposure, and can give you a great taster of what’s going on.  You can even choose a non-security project, but consider getting involved in security-related work for that project.

Three books******* to give you a taste of the field, and a broad grounding:

  1. Security Engineering: A Guide to Building Dependable Distributed Systems, by Ross Anderson. I learned more about security systems from this book than any other. I think it gives a very good overview of the field from a point of view that makes sense to me.  There’s deep technical detail in here, but you don’t need to understand all of it on first reading in order to get a lot of benefit.
  2. Practical Cryptography, by Bruce Schneier. Schneier has been in the field of security for a long time (many of his books are worth reading, as is his monthly email, CRYPTO-GRAM), and this book is a follow-up to his classic “Applied Cryptography”. In Practical Cryptography, he admitted that security was more than just mathematics, and that the human element is also important. This book goes into quite a lot of technical depth, but again, you don’t have to follow all of it to benefit.
  3. Cryptonomicon, by Neal Stephenson. This is a (very long!) work of fiction, but it has a lot of security background and history in it, and also gives a good view into the mindset of how many security people think – or used to think!  I love it, and re-read it every few years.

I’m aware that the second and third are unashamedly crypto-related (though there’s a lot more general security in Cryptonomicon than the title suggests), and I make no apology for that.  I think that a basic grounding in cryptography is vital for anyone wishing to make a serious career in IT Security.  You don’t need to understand the mathematics, but you do need to understand, if not how to use crypto correctly, then at least the impact of using it incorrectly********.
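By way of a small, hedged illustration (not drawn from any of the books above), here’s a Python sketch, assuming the third-party “cryptography” package, of one classic misuse: AES in ECB mode.  Even with a strong key, identical plaintext blocks encrypt to identical ciphertext blocks, so an eavesdropper learns about the structure of your data without ever touching the key.

    # A toy demonstration of using crypto incorrectly: AES in ECB mode.
    # Requires the 'cryptography' package (pip install cryptography).
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)                 # a perfectly good 256-bit key
    block = b"ATTACK AT DAWN!!"          # exactly one 16-byte AES block
    plaintext = block * 2                # the same block, twice

    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    # ECB encrypts each block independently, so the two ciphertext blocks are
    # identical - the pattern in the plaintext leaks straight through.
    print(ciphertext[:16] == ciphertext[16:])   # prints: True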

So, that’s my lot.  If anyone has other suggestions, feel free to post them in comments.  I have some thoughts on some more advanced books around architecture which I may share at some point, but I wanted to keep it pretty simple for now.


*we could almost stop the sentence here**, to be honest.

**or maybe the entire article.

***by which I mean “before university”.  When Americans ask Brits “are you at school?”, we get upset if we’ve already started university (do we really look that young?).

****the Pascal didn’t help, because BBC BASIC was so fast already, and floating point was so difficult in Assembly that I frankly gave up.

*****”Curriculum Vitae”.  If you’re from North America, think “Resumé”, but it’s Latin, not French.

******I know quite a lot about evangelism, given my degree in Theology, but that’s a story for another time.

*******All of these should be available from a decent library.  If your university/college/town/city library doesn’t have these, I’d lobby for them.  You should also be able to find them online.  Please consume them legally: authors deserve to be paid for their work.

********Spoiler: it’s bad.  Very bad.

Diversity in IT security: not just a canine issue

“People won’t listen to you or take you seriously unless you’re an old white man, and since I’m an old white man I’m going to use that to help the people who need it.” —Patrick Stewart, Actor

A couple of weeks ago, I wrote an April Fool’s post: “Changing the demographic in IT security: a radical proposal”.  It was “guest-written” by my dog, Sherlock, and suggested that dogs would make a good demographic from which to recruit IT professionals.  It went down quite well, and I had a good spike of hits, which was nice, but I wish that it hadn’t been necessary, or resonated so obviously with where we really are in the industry*.

Of special interest to me is the representation of women within IT security, and particularly within technical roles.  This is largely due to the fact that I have two daughters of school age**, and although I wouldn’t want to force them into a technical career, I want to be absolutely sure that they have as much chance both to try it – and then to succeed – as anybody else does, regardless of their gender.

But other issues of under-representation should be of equal concern.  Professionals from ethnic minorities and with disabilities are also under-represented within IT security, and this is, without a doubt, a Bad Thing[tm].  I suspect that the same goes for people in the LGBTQ+ demographics.  From my perspective, diversity is an unalloyed good within pretty much any organisation.  Different viewpoints don’t just allow us to reflect what our customers see and do, but also bring different perspectives to anything from perimeter defence to user stories, from UX to threat models.  Companies and organisations are just more flexible – and therefore more resilient – if they represent a wide-ranging set of perspectives and views.  Not only because they’re more likely to be able to react positively if they come under criticism, but because they are less likely to succumb to groupthink and the “yes-men”*** mentality.

Part of the problem is that we hire ourselves.  I’m a white male in a straight marriage with a Western university education and a nuclear family.  I’ve got all of the privilege starting right there, and it’s really, really easy to find people like me to work with, and to hire, and to trust in business relationships.  And I know that I sometimes get annoyed with people who approach things differently to me, whose viewpoint leads them to consider alternative solutions or ideas.  And whether there’s a disproportionate percentage of annoyances associated with people who come from a different background to me, or whether I’m just less likely to notice such annoyances when they come from someone who shares my background, there’s a real danger of prejudice kicking in and privilege – my privilege – taking over.

So, what can we do?  Here are some ideas:

  • Go out of our way to read, listen to and engage with people from different backgrounds to our own, particularly if we disagree with them, and particularly if they’re in our industry
  • Make a point of including the views of non-majority members of teams and groups in which you participate
  • Mentor and encourage those from disparate backgrounds in their careers
  • Consider positive discrimination – this is tricky, particularly with legal requirements in some contexts, but it’s worth considering, if only to recognise what a difference it might make.
  • Encourage our companies to engage in affirmative groups and events
  • Encourage our companies only to sponsor events with positive policies on harassment, speaker and panel selection, etc.
  • Consider refusing to speak on industry panels made up of people who are all in our demographic****
  • Interview outliers
  • Practice “blind CV” selection

These are my views. The views of someone with privilege.  I’m sure they’re not all right.  I’m sure they’re not all applicable to everybody’s situation.  I’m aware that there’s a danger of my misappropriating a fight which is not mine, and of the dangers of intersectionality.

But if I can’t stand up from my position of privilege***** and say something, then who can?


*Or, let’s face it, society.

**I’m also married to a very strong proponent of equal rights and feminism.  It’s not so much that it rubbed off on me, but that I’m pretty sure she’d have little to do with me if I didn’t feel the same way.

***And I do mean “men” here, yes.

****My wife challenged me to put this in.  Because I don’t do it, and I should.

*****“People won’t listen to you or take you seriously unless you’re an old****** white man, and since I’m an old white man I’m going to use that to help the people who need it.” —Patrick Stewart, Actor

******Although I’m not old.*******

*******Whatever my daughters may say.

Disbelieving the many eyes hypothesis

There is a view that because Open Source Software is subject to review by many eyes, all the bugs will be ironed out of it. This is a myth.

Writing code is hard.  Writing secure code is harder: much harder.  And before you get there, you need to think about design and architecture.  When you’re writing code to implement security functionality, it’s often based on architectures and designs which have been pored over and examined in detail.  They may even reflect standards which have gone through worldwide review processes and are generally considered perfect and unbreakable*.

However good those designs and architectures are, though, there’s something about putting things into actual software that’s, well, special.  With the exception of software proven to be mathematically correct**, being able to write software which accurately implements the functionality you’re trying to realise is somewhere between a science and an art.  This is no surprise to anyone who’s actually written any software, tried to debug software or divine software’s correctness by stepping through it.  It’s not the key point of this post either, however.

Nobody*** actually believes that the software that comes out of this process is going to be perfect, but everybody agrees that software should be made as close to perfect and bug-free as possible.  It is for this reason that code review is a core principle of software development.  And luckily – in my view, at least – much of the code that we use these days in our day-to-day lives is Open Source, which means that anybody can look at it, and it’s available for tens or hundreds of thousands of eyes to review.

And herein lies the problem.  There is a view that because Open Source Software is subject to review by many eyes, all the bugs will be ironed out of it.  This is a myth.  A dangerous myth.  The problems with this view are at least twofold.  The first is the “if you build it, they will come” fallacy.  I remember when there was a list of all the websites in the world, and if you added your website to that list, people would visit it****.  In the same way, the number of Open Source projects was (maybe) once so small that there was a good chance that people might look at and review your code.  Those days are past – long past.  Second, for many areas of security functionality – crypto primitives implementation is a good example – the number of suitably qualified eyes is low.

Don’t think that I am in any way suggesting that the problem is any lesser in proprietary code: quite the opposite.  Not only are the designs and architectures in proprietary software often hidden from review, but you have fewer eyes available to look at the code, and the dangers of hierarchical pressure and groupthink are dramatically increased.  “Proprietary code is more secure” is less myth, more fake news.  I completely understand why companies like to keep their security software secret – and I’m afraid that the “it’s to protect our intellectual property” line is too often a platitude they tell themselves, when really, it’s just unsafe to release it.  So for me, it’s Open Source all the way when we’re looking at security software.

So, what can we do?  Well, companies and other organisations that care about security functionality can – and, I believe, have a responsibility to – expend resources on checking and reviewing the code that implements that functionality.  That is part of what Red Hat, the organisation for whom I work, is committed to doing.  Alongside that, we, the Open Source community, can – and do – find ways to support critical projects and improve the amount of review that goes into that code*****.  And we should encourage academic organisations to train students in the black art of security software writing and review, not to mention highlighting the importance of Open Source Software.

We can do better – and we are doing better.  Because what we need to realise is that the reason the “many eyes hypothesis” is a myth is not that many eyes won’t improve code – they will – but that we don’t have enough expert eyes looking.  Yet.


* Yeah, really: “perfect and unbreakable”.  Let’s just pretend that’s true for the purposes of this discussion.

** …and which still relies on the design and architecture actually to do what you want – or think you want – of course, so good luck.

*** nobody who’s actually written more than about 5 lines of code (or more than 6 characters of Perl)

**** I added one.  They came.  It was like some sort of magic.

***** see, for instance, the Linux Foundation’s Core Infrastructure Initiative

Changing the demographic in IT security: a radical proposal

If we rule out a change in age demographic, gender, race or ethnicity, what options do we have left?

This is a guest post by Sherlock.

We have known for a while now that we as an industry don’t have enough security specialists to manage the tide of malware and attacks that threaten to overwhelm not just the IT sector but also all those other areas where software and hardware security play a vital part in our way of life.  This is everything from the food supply chain to the exercise industry, from pharmaceuticals to wildlife management.  The security sphere is currently dominated by men – and the majority of them are white men.  There is a significant – and welcome – move towards encouraging women into STEM subjects, and improving the chances for those from other ethnic groups, but I believe that we need to go further: much, much further.

There is also an argument that the age demographic of workers is much too skewed towards the older range of the employment market, and there is clear evidence to show that humans’ mental acuity tends to decrease with age.  This, in a field where the ability to think quickly and react to threats is a key success metric.  The obvious place to start would be by recruiting a younger workforce, but this faces problems.  Labour laws in most countries restrict the age at which significant work can be done by children*, so one alternative is to take the next age demographic: millennials.  Here, however, we run into the ongoing debate about whether this group is lazy and entitled***.  If we rule out a change in age demographic, gender, race or ethnicity, what options do we have left?

It seems to me that the obvious solution is to re- or up-skill a part of the existing security workforce and bring them into the IT security market.  This group is intelligent*****, loyal******, fast-moving [I’m done with the asterisks – you get the picture], quick-thinking [see earlier parenthetical comment], and easily rewarded [this bit really is universally true].  In short, the canine workforce is currently under-represented except in the physical security space, but there seems to be excellent opportunity to up-skill a large part of this demographic and bring them into positions of responsibility within the IT security space.  So, next time you’re looking to recruit into a key IT security role, look no further than your faithful hound.  Who’s a good boy?  Who’s a good boy?  You’re a good boy.


*this is a Good Thing[tm] – nobody**’s complaining about this

**apart from some annoying kids, and well, who cares?

***I could have spent more time researching this: am I being ignorant or apathetic?****

****I don’t know, and I don’t care.

*****mostly

******again, mostly

The Backdoor Fallacy: explaining it slowly for governments

… I literally don’t know a single person with a modicum of technical understanding who thinks this is a good idea …

I should probably avoid this one, because a) everyone will be writing about it; and b) it makes me really, really cross; but I just can’t*.  I’m also going to restate the standard disclaimer that the opinions expressed here are mine, and may not represent those of my employer, Red Hat, Inc. (although I hope that they do).

Amber Rudd, UK Home Secretary, has embraced what I’m going to call the Backdoor Fallacy.  This is basically a security-by-obscurity belief that it’s necessary for encryption providers to provide the police and security services with a “hidden” method by which they can read all encrypted communications**.  The Home Secretary’s espousal of this popular position is a predictable reaction to the terrorist attack in London last week, but it won’t help.  I literally don’t know a single person with a modicum of technical understanding who thinks this is a good idea.  Or remotely practicable.  Obviously, therefore, I’m not the only person who’s going to be writing about this, but I thought it would be an interesting exercise to collect some of the reasons that this is a monumentally bad idea in one short article, so let’s examine this fallacy from a few angles.

  • It always fails – because a backdoor isn’t just a backdoor for authorised users: it’s a backdoor for anyone who can find it.  And keeping this sort of thing hidden is difficult, because:
    • academic researchers look for them
    • criminals look for them
    • “unfriendly state actors” (governments we don’t like at the moment) look for them
    • previously friendly state actors (governments we used to like, but we don’t like so much anymore) look for them
    • police and security services mess up and leak them by accident
    • insiders within police and security services decide to leak them
    • source code gets leaked, giving clues to how they’re implemented – for those apps which aren’t Open Source in the first place
    • the people writing them don’t always get it right, and you end up with more holes than you expected***
    • techniques that seem safe now often seem laughably insecure in a few years’ time.

There is just no safe way to protect these backdoors.

  • You can’t identify all the providers – today it’s WhatsApp.  And Facebook, and Twitter, and Instagram, and Tumblr, and …  But if I’d asked you for a list a year, or five years, ago, what would that list have looked like?  And can you tell me what should be on the list for next week, or next year?  No, you can’t.  And I suspect you (as a learned reader of this blog) are a lot more clued up than the UK Home Office.
  • You can’t convince all the providers – and that’s assuming that all of the providers are interested in the first place, or can be convinced to care enough to sign up.
  • You can’t hit all channels – even if you could identify all the providers, what about online gaming?  And email.  And ssh.  I mean, really.
  • The obviousness issue – presumably, in order to make this work, governments need to publish a list of approved applications.  I suspect, just suspect, that the sort of bad people who want to get around this will choose to use different apps, or different channels to the approved ones … but so will people who aren’t “bad people”, but just have legitimate reasons for encrypting their communications.
  • The business problem – there are legitimate uses for encryption.  Many, many of them.  And they far outnumber the illegitimate uses.  So, if you’re a government, you have two options:
    1. you can convince all legitimate businesses, including banks, foreign corporations and human rights organisations, and everyone who communicates with them, to use your compromised, “backdoor-enhanced”**** encryption scheme.  Good luck with this: it’s not going to work.
    2. you can institute a simple, fast, unabuseable, red-tape-free process by which you hand out exemptions to “legitimate” businesses who you can trust to use non-compromised, backdoor-unenhanced encryption schemes.*****

I’m guessing that we don’t expect either of these to fly.

  • The “nothing to hide” sub-fallacy – the “if you have nothing to hide, then you have nothing to fear” argument.  Well, I may have nothing to hide from the current government.  But what about future governments?  Have the past 100 years of world history taught us nothing?  Hitler, Franco, Stalin, Perón, … the list goes on and on.  From “previously friendly state actors”?  And from the criminals who are the main reason most of us use encryption in the first place?  Puh-lease.
  • The who-do-you-trust question – this leads on from the “police and security services mess up” sub-bullet above.  The fewer people to whom you give the backdoor details, the more hard work and expense there is in using that backdoor for your purposes.  So there’s an obvious move to reduce costs by spreading knowledge of the backdoor.  And governments tend towards any policy which reduces costs, so…  And, of course, the more widely the knowledge is spread, the more likely it is to leak.
  • Once it’s gone, it’s gone – and once it’s leaked, it’s leaked, whether by accident or intention (Chelsea Manning, Julian Assange, Edward Snowden, …).  You can’t put this genie back in the bottle.  The cost and complication of re-keying a communications channel for which the key has leaked is phenomenal.  I’m assuming that this is just a re-keying exercise, but if it’s a recoding exercise, it’s even harder.  And how do you enforce that only the new version is used, anyway?
  • The jurisdiction issue – do all governments agree on the same key?  No?  Well, then I have to have different versions of all apps I might use, and choose the correct one for each country I travel in?  And ensure that neither I nor any businesses ever communicate across jurisdictional boundaries.  Or we could have multiple backdoors, each for a different jurisdiction?  Let’s introduce the phrase “combinatorial explosion” here, shall we?

Let’s work as an industry to disabuse governments of the idea that this is ever a good idea.  And we also need to work with them to come up with other techniques to help them catch criminals and stop terrorist attacks: let’s do that, too.


*believe me: I tried.  Not that hard, but I tried.

**they probably want all “at-rest” keys as well as all transport keys.  This is even more stupid.

***don’t get me wrong: this is going to happen anyway, but why add to the problem?

****inverted commas for irony, which I hope is obvious by this stage in the proceedings

*****”I can’t even”, to borrow from popular parlance.  This is the UK government, after all.