People somehow always underestimate fixed-length space during design. For example, take Thomas J. Watson’s famous 1943 quote, “I think there is a world market for maybe five computers.” Or how about the unforeseen limits of 32-bit computing or IPv4 Internet address space? We’re seeing this problem again right now with Common Vulnerabilities and Exposures (CVE) identifiers.
For those who aren’t familiar with CVE IDs, they function like a retail product’s UPC or a book’s ISBN. CVEs are used to identify a named vulnerability. If my product references CVE-1999-1212 and your product references CVE-1999-1212, we are syntactically and semantically referencing the same thing. In my opinion, out of all the security standards invented to date, CVE has delivered the greatest value to the security industry.
So, what’s the problem?
Today, the CVE syntax has something people are calling the “10,000 vulnerabilities in a single year” problem. It is the result of a design decision that produced the format CVE-YYYY-NNNN, which can only support a maximum of 9,999 unique identifiers per year. Back in the late ’90s when the standard was being created, its designers, biased toward a fixed-length format, thought this was sufficient. They probably thought it was more than sufficient.
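To make the ceiling concrete, here is a minimal sketch in Python comparing the old fixed-width syntax with a variable-length syntax of the kind being discussed (four *or more* digits in the sequence number). The regexes are illustrative, not MITRE’s official grammar:

```python
import re

# Old fixed-width syntax: exactly four digits, so at most 9,999 IDs per year.
OLD_CVE = re.compile(r"^CVE-\d{4}-\d{4}$")

# A variable-length syntax: four or more digits, which removes
# the per-year ceiling entirely.
NEW_CVE = re.compile(r"^CVE-\d{4}-\d{4,}$")

for cve_id in ["CVE-1999-1212", "CVE-2013-12345"]:
    print(cve_id,
          "old:", bool(OLD_CVE.match(cve_id)),
          "new:", bool(NEW_CVE.match(cve_id)))
```

The moment a year’s 10,000th vulnerability needs an ID, every parser hard-coded to the old pattern breaks.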
Stop it with the fixed-lengths in design, please! Ultimately, you *will* be wrong at some point.
Still not convinced? Let’s do some simple math on the CVEs published on NVD so we can get a rough estimate of the growth rates. If your math is close to mine, by October of 2013 we could run out of CVE IDs to assign. What happens then? Maybe we’ll all just pack it up and go for a vacation in Hawaii, because if we can’t identify vulnerabilities in a common manner, the bad guys will definitely take some time off too, right? Wrong.
The CVE Board members are currently voting on a new syntax, and we are all CVE stakeholders. I suggest you get on it and learn what’s going on: http://cve.mitre.org/
Red means stop, green means go. Quit means see you later, goodbye, and if you are an application, you are no longer supposed to be running!
Seriously people, if you are an application developer (I’m talking to you Skype and Spotify), when a person pulls down the File menu and selects ‘Exit/Quit’, this means you are done. As they say at 2 am, you don’t have to go home, but you can’t stay here.
There may be a lot of other applications that commit this offense, but I’m going to use Spotify as an example of this failure to meet basic user expectations. When you want to shut Spotify down, the ‘File’ menu offers an option called ‘Exit’. This seems like an obvious way to close the application, right?
But, in order to really quit Spotify you have to go to the application tray and ‘Quit’, because even after you choose ‘Exit’ Spotify is still running.
Evidently, in Spotify’s world, ‘Exit’ just means close the window. Why does Spotify think this is ok? Why do Spotify users, and I include myself in this group, tolerate this? I’m guessing this behavior pisses a lot of Spotify users off. I don’t think it’s too much to ask for fundamental functions within all applications to behave the same way. Exit should mean exactly that. If you want to let the application keep running in the background, then offer a menu choice called “Close Window”.
Do you agree? Which applications are the biggest offenders in your opinion? This is not ok and users must demand change.
According to the standard we call age, I turned 48 years old today. I have been perfectly compliant with this standard all my life. More importantly, the environment that supports me has also been perfectly compliant with this standard even long before I could speak or understand the many rituals designed to support it.
I want to explore age as a standard in this blog post because it can help us understand how we need to go about building information security standards and can help explain why certain information security standards are not as successful as they should be.
Before we dive into this topic, let me say that my perspective and understanding on this topic came from Standards and Their Stories, a book by Martha Lampland and Susan Leigh Star. If you’re interested in this stuff, go back and read more of Susan Leigh Star’s work in “Sorting Things Out” and her other papers. She passed away in 2010 but did some amazing work.
What comes to your mind when you think about information security standards? Many people immediately think of the standards bodies like NIST, IETF, ITU that manage them. It’s tempting to imagine that standards come into being wholly formed but that’s not the case. The evolution of standards and their subsequent stability is a fundamentally social process. And, once a standard emerges from this social process, rituals must be put in place to keep the standard stable enough to allow other standards to be built on its shoulders. This is certainly the case with age.
You’re thinking, “Everyone knows their age”! True, but Star says, “The adoption of chronological age as a metric for standards was not an end in itself. Its imposition served as a means to improve other processes; it brought numerical precision, certainty, and impartiality to classification practices that are otherwise inexact and arbitrary”.
Bam! I wish I’d said that. Old enough to drink, old enough to drive, old enough to have sex; you can’t build laws around minimum maturity levels without snapping to an exact and non-arbitrary grid my friends, and this is why society developed an age standard.
I’ll bet you think I’ve exhausted the age / information security standard analogy, but I’m not even close yet. Let’s talk a bit about standard interoperability because, in some countries and for some applications, age standard precision isn’t necessary. In some countries it’s enough to know you are the oldest male. In China, you are given the age of 1 at birth and then age in increments based on a lunar calendar, which is different from the Western calendar. Making these two different systems with different levels of precision interoperable is a perfect analogy for the interoperability (or lack of it) in many information security standards.
The ultimate power-user of our age standard has to be the enumerator of the US census data. A brief review of census data shows how the age standard was expanded and refined across society. In 1790, only free white males were asked their age, and it wasn’t until 1850 that age was counted for African Americans. Over the years the scope of the US census widened and precision increased, but as late as 1900 the instructions to the enumerator read, “The object of the question ‘What is your date of birth’ is to help in getting the exact age in years of each person enumerated. Many a person who can tell the month and year of his birth will be careless or forgetful in stating the years of his age, and so an error will creep into the census. This danger cannot be entirely avoided, but asking the question in two forms will prevent it in many cases”. Even back then they were dealing with dirty data from their sensors.
For geeks interested in these metrics and the processes surrounding them, check out “Measuring America: The Decennial Censuses from 1790 to 2000” by Jason G. Gauthier.
By now you can see the role social rituals like birthday parties and cards, Facebook acknowledgements and other customs have in supporting the age standard and how this stability allows it to be used as a control by many other standards. The age standard is so ingrained in our society it would be nearly impossible to change because of the ripple effect it would have on other standards. If we ever undertook this change, it would require massive effort and would take multiple generations to achieve meaningful change.
The global standards we require for information security require global processes. We are only now seeing some of that movement, with institutions like the IETF and ITU starting to adopt or develop standard metrics, identifiers, and common schemas for information security. These institutions create the social rituals that allow these standards to come into being and to evolve or, in some cases, fail. While I point out these familiar standards bodies, please don’t forget that on today’s Internet the coordination cost of forming large global social networks is so incredibly low that, with the right idea at the right time, you could be the originator of yet another standard now that you know the recipe. As soon as others build on top of your standard, it grows more and more durable.
Today, I’m going to celebrate my compliance with the age standard like I do every year with lots of Facebook friends, presents from loved ones, and birthday dinner that includes cocktails because as I will show, I am of the legal drinking age.
<< musical theme>> Go TK, it’s your birthday, we gonna party like it’s your birthday, we gonna sip Bacardi like it’s your birthday….
You may have noticed that your favorite security blogs are busy posting suggestions on how to protect your private data, but I want to talk about what businesses should do to protect the privacy of their customer and employee data. Every organization should encrypt sensitive customer and employee data. This may seem obvious, but unfortunately, recent high profile encryption fails prove otherwise:
- On January 13th, an unencrypted device containing more than 100,000 records of youth...
- On January 9th, personal info was left on city computer hard drives sold in a government auction
- In December 2012, the California Department of Health Care Services accidentally published 14,000 soci...
- A State Comptroller audit of 12 New York public school districts found the majority lacked adequate ...
- Former S.C. computer security chief Scott Shealy revealed the state did not take basic security step...
And the list goes on.
Here’s the thing: breaches happen. If your data is stolen but it’s effectively encrypted, very little has been lost. There’s a major difference between the Blizzard data breach last August and what happened in South Carolina. Effective encryption can make stolen data useless.
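Here’s a toy illustration of that point in Python: a one-time-pad XOR, where the stolen ciphertext by itself carries nothing. This is a teaching sketch only, not production crypto; a real deployment should use a vetted library and an authenticated cipher such as AES-GCM:

```python
import secrets

# Toy illustration only -- real systems should use a vetted library,
# not a hand-rolled XOR pad.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR with a random pad: applying it twice with the same key round-trips.
    return bytes(d ^ k for d, k in zip(data, key))

record = b"SSN: 123-45-6789"
key = secrets.token_bytes(len(record))   # stored separately from the data
stolen = xor_bytes(record, key)          # what the attacker walks away with

print(stolen)                  # random-looking bytes, useless on their own
print(xor_bytes(stolen, key))  # only the key holder recovers the record
```

The attacker who grabs `stolen` without `key` has data, not information, which is exactly the difference between Blizzard and South Carolina.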
To learn more about encrypting data, check out some of my previous blogs on the subject:
A happy couple living in a wonderful home decides to have (or adopt) a child. The child begins to grow up and, at some point, the new parents have to child-proof the home. In fact, at this point they are certain that the child is determined to harm herself if they don’t take all the recommended safety precautions.
This analogy turns out to be a perfect explanation for the interrelationship between usability and security because, in both cases, success can only be determined in a specific context.
One definition of child proofing is making everything harmful to the child unusable. Given the child’s capabilities, certain things in the home aren’t feasible for them to operate. Still, the rascals want to put everything into their mouths, they pull and tug on large objects that may crush them, crawl into tiny spaces and get trapped and stick things into electrical outlets. When you actually start childproofing, it suddenly seems that there are more things that will kill a child in the home than there are things that support life.
A key component of childproofing is factoring in the other beings living in the same physical space. Childproofing cannot mean adult-proofing or there will be a lot of unhappy users, as well as a significant increase in language unsuitable for small children.
Believe it or not, these principles are part of the design fundamentals needed to build usable, secure systems. When you secure an information system, you need it to remain usable for trusted users but close to impossible to use for nearly everyone else. To say that a system is secure is an incomplete statement because systems are only secure in the context of a specific community. A system that’s considered highly usable by everyone, even by the adversary, is obviously not secure.
Childproofing a house is simpler than securing an IT system because a child has a very limited set of capabilities. A non-child in the same home has a much wider set of physical and cognitive capabilities. In this way, childproofing is nothing like hacker-proofing because, more often than not, the hacker has more expertise and know-how than you do. To carry our analogy one step further, this would be like a child trying to adult-proof a home – fundamentally unworkable.
The bottom line is that ‘proofing’ something from one community while rendering it usable to another has everything to do with understanding the perceptive, cognitive, and technical boundaries of your adversary. To the degree that you can design beyond those capabilities, you are dramatically reducing the usability for attackers.
Now, go make sure your critical assets are out of the reach of the kids.
I’m not an academic, but I’m also not one to reinvent something that’s already well understood and practiced. We practice information security and yet most of us don’t bother to leverage information science. In particular, it irritates me to no end when people say “data” and they really mean “information”. This bothers me so much that I’m going to spend an entire blog post complaining about it and explaining why we need to clean up our language.
There’s a structure in Information Science that allows us to speak about the differences between data and information using a model called DIKW, Data-Information-Knowledge-Wisdom. If you’re in the field of Information Security, please read this link and then read it again. While you’re reading it, know that your job is to protect information, not data. In fact, if you are doing your job, your data can be in your enemy’s possession and yet your information can still be protected.
The cost of getting this wrong is enormous. It creates bad habits and imprecise language that can confuse and confound the other parts of your business that you communicate and coordinate with on a daily basis. If everyone gets this fundamental concept wrong, decisions will be based on faulty assumptions and you will find yourself assigned to another day in hell as your adversaries have their way with your network.
Let’s be absolutely clear. Data is what your sense organs (eyes, ears, touch, etc.) experience, full stop. You can experience data only within the limits of your own perceptive boundaries. This means that the precision and resolution of all data is limited by the observer’s perceptive boundaries.
It’s only after we make ‘sense’ of data that it becomes information. The critical point here is that data outside of an observer’s perceptive boundary is still data, not information. This concept is the fundamental principle and functional center of encryption.
Another way to think about this issue is that information is observer-centric. Data is experienced by the observer, processed by the observer’s mental model and, if understood, becomes information, which then changes the observer’s mental model. You must respect the fact that while each observer may come to similar inferences and conclusions based on a set of data, their conclusions will be somewhat different. These differences can be small enough that they become semantically stable. At this point we can label them the ‘same’. We argue, communicate and socialize the semantics of data to stabilize them enough to get to the label of ‘same’, but each observer’s perceptions are always unique.
If you are following me so far, ask yourself these questions:
- What is your adversary’s mental model and how can you mess with it?
- How can you get your adversary to draw inferences that are inaccurate or just create more busywork for them?
- How can you operate outside of their perceptive boundaries?
With the DIKW model in hand, we can now explore these kinds of questions with some precision.
This line of thinking makes a significant difference when you think about big data -- one of this year’s hottest buzzwords. The promise of big data is just that -- bigness. When you think about the implications of big data you realize that the problem with big data is that it’s hard to comprehend. Big deal - we’ve all been dealing with limitations on our perception since birth.
Big data or small data isn’t the issue. What we want is useful information. It’s far more important to have a faithful model that can be used for inferences from the data than it is to have data of a specific size. Big data is like saying “a 3.2 MB mp3” when what we are after is <insert your favorite song here>. Instead of big data, keep your eye on the prize of useful information.
In the next few years the synthesis and analytics of big security data are going to become a key focus. While these processes produce information, we’ve labeled the discussions we’re having about them with the term ‘data’. In my book, we’re just propagating the bad habit of imprecise language.
Information Security is a set of adaptive processes that secure information – the data is just a small part of the whole. Information Security tools all produce lots of data but only the good ones produce information meaningful and useful to specific observers.
If you understand this problem then you will also understand why the evolution of information security language and vocabulary must change so we can move forward as an industry. Information science has years of experience in this exact domain and they can show us the way if we let them.
Welcome to 2013! As always, a new year brings new challenges and innovations. But what should we expect from the realm of information security? Here is my forecast for the next 12 months.
First, the bad news. As cloud services become widely adopted by businesses and federal agencies, organizations will be subject to a variety of social engineering hacks directed at the password recovery process. Hackers will pose as users or employees, call support and complain loudly that they need urgent access to data. We’ve already seen these tactics used successfully in last year’s infamous Mat Honan hack and they won’t be going away any time soon. Obviously, this will be a major issue for cloud users everywhere.
Attackers will also take advantage of the way users browse the web with cross-site request forgery (CSRF). It’s easy to open up a tab for Facebook, a tab for online banking, a tab for an ecommerce shopping site and another tab for private business data. CSRF attacks exploit the trust a website has in the user’s browser, letting a malicious page send unauthorized commands to a site where you’re already logged in. No browser is safe and there is no easy fix for these attacks. Get ready for an exponential increase in these attacks on federal agencies and other sensitive organizations in 2013. Websites that host a variety of confidential information will be squarely in the cross hairs.
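While there is no browser-side fix, the standard server-side mitigation is the synchronizer token: tie an unguessable token to the user’s session and reject any state-changing request that doesn’t echo it back. A minimal sketch, with illustrative names rather than any real framework’s API:

```python
import hashlib
import hmac
import secrets

# Sketch of the synchronizer-token CSRF defense. SERVER_SECRET and the
# function names are illustrative assumptions, not a real framework.
SERVER_SECRET = secrets.token_bytes(32)

def issue_token(session_id: str) -> str:
    # Derive the token from the session, so one user's token is
    # useless in another user's forged request.
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def is_valid(session_id: str, submitted_token: str) -> bool:
    expected = issue_token(session_id)
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(expected, submitted_token)

token = issue_token("alice-session")
print(is_valid("alice-session", token))     # legitimate form post: accepted
print(is_valid("alice-session", "forged"))  # cross-site forgery: rejected
```

The forged request fails because the attacking page can make the browser send Alice’s cookies, but it cannot read or predict the token embedded in the legitimate form.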
But don’t worry, 2013 will also bring us new security innovations. New multi-factor authentication services based on smartphone data will emerge as a security solution for a variety of applications. These new services will pair something you know (your password) with location, biometric, or device data to make it much more difficult for attackers to break into online networks and accounts. Mobile phones are ubiquitous, always on, always available, and easy to use – so they are the perfect solution to the failure of password security.
In the end, we should expect to see both thunder and sunshine in 2013. True, there are major challenges ahead, but that just gives us an opportunity to meet them head on. Happy New Year!
There’s a saying that goes: “If the only tool you have is a hammer, you treat everything as if it were a nail.” But in today’s case it’s: “If the only tool you have is an attorney, you treat everything as if it were a lawsuit.” Let me sum it up for you:
Attorney John Hawkins, a former Republican senator, filed the initial suit Oct. 31 in Richland County against Gov. Nikki Haley, the SCDOR and its director, for negligence. He is now adding Trustwave to the list because the company was hired by the South Carolina Department of Revenue (SCDOR) in 2005 to provide computer security in place of DSIT. It’s in the crosshairs of the suit, according to the plaintiffs, because it failed to prevent the heist, in which international hackers made off with 3.6 million personal income tax returns, 387,000 credit and debit card numbers and up to 657,000 business filings.
Does this mean you need to add ‘file a lawsuit’ to your business continuity plan? I certainly hope not. But it does raise the question of how consumers of security services should audit the effectiveness of a provider before bad things start happening. The pattern here is similar to a recent blog post of mine that explains how in Vegas casinos it’s the slot machine manufacturer that pays out the winnings, not the casino. This way, the feedback loop flows in the right direction. Now, I’m not saying we should apply this pattern directly, because I’m sure the majority of service providers would become unaffordable if they took on this risk. Nonetheless, the time to figure these things out is well before something goes wrong, and if you think you will never be compromised, you are delusional.
He’s got the whole world, in his hands...
Did you read that part that went like this: “…in which international hackers made off with…”? You gotta love how the Internet makes South Carolina local to hackers internationally. Was Johnny-Boy thinking about this when his state decided to put their systems on the Internet? I think not. If this keeps you up at night, think about how many other Johnny-Boys there are in executive positions who think their information systems are safe and they’ll never be a victim of international threats. Now do you see why I worry about critical infrastructure? Oh, let’s put the control systems of electricity, water, and gas on the Internet. The fallout following one of these events will make the South Carolina breach look like a pimple on an elephant’s butt (you can thank me for that imagery later).
Secured, Breached, Repeat.
Unfortunately, we will see a lot more security breaches and lawsuits before anything gets better. I don’t mean to be the buzzkill, but is there any other point in history where a bad guy in an armchair could completely breach the assets of a US state? No. When you put systems on the Internet, you are essentially boxing up the system and shipping it internationally. Ask yourself: Have I implemented enough cryptography in my design that I could box up my systems and send them to the adversary? If the answer is no, don’t go putting it on the Internet. Oh, and don’t sue me if this does not work out for you. The bad guys know cryptanalysis well too, so really all you are doing is slowing them down.
The fine line between INFORMATION and BULLS#!T
No matter who you plan on voting for in the presidential election, the news on who is leading in the polls seems to be everywhere. Polls constantly make claims about who is leading, but they don’t give us context on how the data is being gathered.
In order to understand poll data, we need to know more than a number like 2% -- we need to know the limits of the sample and the methodology behind the data collection because data is not information. For data to be informative the observer of the data must have context.
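Even before we get to collection biases, the raw sample size sets a floor on what a number like 2% can mean. A standard margin-of-error calculation (which assumes a simple random sample, the best case; real polling biases only widen it) makes the point:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    # Half-width of the 95% confidence interval for a proportion,
    # assuming a simple random sample of size n.
    return z * math.sqrt(p * (1 - p) / n)

for n in (250, 1000, 4000):
    print(f"sample of {n}: +/- {100 * margin_of_error(n):.1f} points")
```

With a typical sample of 1,000 the margin is about ±3.1 points, so a reported 2% lead is statistical noise even before landline and robo-call biases enter the picture.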
"The hearer, not the speaker, determines the meaning of an utterance." -- Heinz von Foerster
We don’t need no stinkin’ wires
Do you still have a landline at home? If you come from my generation, having dial tone in your home delivered over copper was as important as running water. But these days, that’s just not the case. A very large number of voters operate with only mobile phones, and the polling companies don’t call mobile phones. This is known as the ‘landline bias’.
I’m not going to make claims here as to how many voters are in the ‘no landline’ group or who they are voting for, but this fact alone should make everyone question the accuracy of polling figures.
Could I please speak to a human?
But wait, there is more missing context for our orientation phase on polling data. (This wouldn’t be a TK post without an OODA reference.)
I don’t know about you, but after working hard all day, do you really want to take a call from a telemarketer? If you are like me and you just got off seven hours of conference calls, and there is no one on the other end of the line, you just hang up. After being on conference calls all day I really don’t want to be bothered with phone calls from robots.
Another poll fact: Today almost all the polling is being done via machines.
“We love speaking to those automated support lines” – Said No One Ever.
So, if you are among the class of Americans who still own and pay for a landline, and you also don’t mind speaking to robots on the other end, we have some very detailed poll data on your voting preferences. If you aren’t in both of these groups, then your preferences remain a mystery.
No offense to robots here, some of my best friends are robots -- I’m just trying to explain how these polling processes work so you have more context on the numbers being presented.
It is your responsibility to demand context for data.
Whenever we hear some claim that is supported with data, we need to understand the context of the claim in order to evaluate it. In the case of vulnerability data, you have to understand the settings of the scanner for the data to be understood. At a low technical level, for example, to truly understand a resolved domain name, you should know which nameserver the resolver was using when the lookup took place. Or another example: if a vulnerability you found earlier is now gone, it could be that someone changed the scanner’s settings, or that an authentication failure made the application invisible at the time of the assessment. The vulnerability is still there, you just can’t see it, but all the report shows is that it went away. The point is that context matters. A lot.
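One practical habit is to never emit a finding without the context needed to interpret it. A minimal sketch, where every field name is my own illustration rather than any real scanner’s schema:

```python
import json
from datetime import datetime, timezone

# Sketch: bundle a scan finding with the context needed to interpret it.
# Field names are illustrative assumptions, not a real scanner's schema.
def record_finding(finding: str, **context) -> str:
    return json.dumps({
        "finding": finding,
        "observed_at": datetime.now(timezone.utc).isoformat(),
        # Without this block, "the vuln went away" is uninterpretable.
        "context": context,
    })

report = record_finding(
    "CVE-1999-1212 not detected",
    scanner_profile="default",
    authenticated_scan=False,   # auth failed -- the app may just be invisible
    nameserver="192.0.2.53",    # which resolver answered the lookup
)
print(report)
```

A reader of this record can see that the “vulnerability gone” claim rests on an unauthenticated scan, and weigh it accordingly.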
Voters are being assaulted with “Polls say that XYZ is ahead by 7%” every ten minutes. Can’t polling companies take the time to explain how the data was collected and what biases exist for the population sampled? Is it a time constraint issue or are they trying to deliberately mislead us?
Voters should demand the appropriate context of poll data along with the claims.
Calling all story tellers
We get educated about nutrition so that we can make choices that balance our diet. Let’s do the same with the way we consume data so it can be digested in the right context and yield useful information.
If you don’t have the appropriate context, demand it. Without context, data will mislead you. Ignorance of context is no excuse for faulty conclusions.
Calling all quants, calling all story tellers, step up and teach us about context.
Teach us how to properly consume data so that the information produced is part of healthy decision making.
I love Starbucks! They get my money on a daily basis and recently I allowed them to start tracking my habits because I now use their card for payment.
In fact, Starbucks has an app for that, and it is awesome. The app is secured by a passcode – or it was, anyway, until they began to support the new iOS Passbook feature. That’s right, since Passbook on iOS has no password protection of its own, you can click on Passbook and, without any authentication, use my Starbucks card. Not cool, dude.
Back when my Starbucks card was secure, if I went to settings:
Then to the Passcode feature to turn it on:
Tapping on it brought up a numeric lockpad:
Enter the four digit code and it asks you to enter it a second time to confirm:
If you input both codes correctly then you're done. Just exit and tap on the Starbucks app icon again. This time instead of going directly into application, you are prompted for a Passcode:
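The set-then-confirm flow above can be sketched in a few lines. This is an illustration of the pattern, not the Starbucks app’s actual implementation:

```python
import hashlib
import hmac
import secrets

# Sketch of a set-then-confirm passcode flow; not Starbucks' real code.
def hash_passcode(passcode: str, salt: bytes) -> bytes:
    # Never store the raw digits; a slow KDF raises the cost of
    # brute-forcing the 10,000 possible four-digit codes.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

def set_passcode(first: str, confirm: str):
    if first != confirm:
        raise ValueError("codes do not match -- prompt again")
    salt = secrets.token_bytes(16)
    return salt, hash_passcode(first, salt)

def check_passcode(entered: str, salt: bytes, stored: bytes) -> bool:
    # Constant-time compare avoids leaking the code via timing.
    return hmac.compare_digest(hash_passcode(entered, salt), stored)

salt, stored = set_passcode("4821", "4821")
print(check_passcode("4821", salt, stored))  # correct code: unlock the app
print(check_passcode("0000", salt, stored))  # wrong code: stay locked
```

The whole complaint of this post is that Passbook bypasses exactly this gate.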
If this is your first time setting up the Starbucks application, you are prompted to add Passbook support and, knowing what I know now, I would recommend you answer with a big CANCEL.
Because if you add Passbook, anyone with access to your phone can click on Passbook
And Passbook will present your card without asking for any Passcode. What. So. Ever.
I know what you’re thinking. You’re thinking that this app might not have good security, but it’s just as safe as my physical card. It’s definitely true that if someone gets custody of my physical card, they can just show it and steal my balance. But that is why I loved the Starbucks application: it offered me a Passcode (this is essentially two-factor authentication: something I have plus something I know).
Apple, you really need to offer a setting for Passbook that allows me to protect my Starbucks account with a Passcode. If you are going to put cash-like payment data in there, at least protect it with two factors. This is basic stuff, get with it yo!
Secure and insecure habits
We are creatures of habit – some good habits, some bad habits. In my 20’s my mental model of risk was different than it is now that I’m in my 40’s and included significantly more bad habits. I look back on some of the decisions I made and activities I considered to be sound practice and think, “Wow, that was nuts.” At some point in our lives, something(s) happen and we transition to a completely different world model. Some might say we ‘grow up’ – I say we swap out our reasoning engine so the same input creates a completely new set of inferences.
By scheduled or unscheduled event
In some cultures, this transitional event is planned and performed through a community ritual, a rite of passage. When this is done explicitly, the community supports the changes necessary to transition from boy to man, from girl to woman, or from the old you to the new you. In other cultures, nothing is really in place and this rite of passage might be triggered by a life changing event. While these events are not planned or scheduled, the intensity is still life-changing -- you are never the same afterwards. I think the same concepts apply to organizations, especially in information security and I’d like to explore these changes in this blog post.
Watching Organizations Mature
Watching companies grow up is a lot like watching people grow up. There isn’t any rite of passage community ritual in connection with security practices, so change tends to come through intense, surprising, abrupt events that deliver a huge wakeup call. These events are so profound that habits change throughout the entire organization.
Microsoft’s security evolution certainly fits this pattern. Microsoft in the early 2000s, before Slammer and Code Red happened, was a completely different organization than the one we see today with its own crime-fighting unit. They certainly went through several event-triggered rites of passage, and from where I sit these were fairly painful.
Organizations like Apple and Adobe are now going through similar transitions; when they come out the other end, their security practices will be profoundly different. They have to change to learn how to survive (or they will die trying). Sony certainly has had to learn how to fight back after they took a few beatings on the Internet playground.
The events that are shaping these companies are so intense they create an emotional response throughout the entire organization, and this response fuels fundamental changes in information security attitudes and habits.
Of course, everyone would like to achieve this state of security awareness across the organization without the pain and loss a real event entails. How could we make that happen?
The Security Event Rite of Passage
In the same way cultures create transitional rituals, could we create a security ritual designed to elicit the same intense emotional reaction in the entire organization? If such a thing were possible, would it even be effective?
What if you ran a real, full-scale security breach drill for an entire day?
Even with outside actors, a full-scale security event is very complicated to stage. With cultural rites of passage, the community plays a major role. To mimic this in a security drill, you would also need to emulate the impact on customers, analysts and other company stakeholders.
I hope you see how this would likely never get budgeted or put on the priority list any time soon. But suspend your disbelief just a minute longer and imagine this in its entirety.
In the end, simulation can only take you so far. When you look back at life-changing experiences, the ones that hurt the most are the ones you want to avoid at all costs in the future.
When you look at the major changes that need to take place in organizations to build more secure information technology or practices, you realize that you are really asking humans to toss out their old mental model of the world and install a new one. Easy, right? No way. Has your doctor told you to lose that 25lbs, or to do some cardio 3 to 5 times per week, or to stop smoking? Have you followed that advice yet? Nope. Why? It probably has to do with the idea that nothing bad has happened yet, so you go on doing what you are doing.
I think you can see the obvious parallels in the security programs of others. It might be just a little bit harder to see it in your own security program – that is until a really bad thing happens.
There is something known as the streetlight effect. In Wikipedia it’s described as an observational bias that causes people searching for something to look in the places where it would be easiest to find.
The term “streetlight” appears to come from a story that has several forms, but the general outline goes like this:
A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost the keys here. The drunk replies, no, he lost them in the park. The policeman asks why the drunk is searching here, and the drunk replies, "this is where the light is."
In my experience, people involved in the production of software think of themselves as developers, and design is something that makes them better developers. I have a word of advice for all developers, though: take the role of designer as seriously as you take the role of developer, and do all that you can to make sure you build the best design and the best software.
This doesn’t apply to everyone, but I’ve been a part of enough development examples that mimic the streetlight effect that I thought I would shed some light on the subject.
Here’s how it starts:
A developer, or a team of developers, gets handed a document describing some requirements. They are told a design is needed to meet the requirements listed. Most developers begin thinking about how to solve the problem but they don’t allow themselves to think beyond what they can execute. In effect, what they know they can successfully build is “where the light is”. It’s far better to play the role of designer at this stage and push beyond what you think you can execute. Out there, where it’s much harder to see how it will be executed, is where great things are made.
I started with the example of a software developer, but this applies to how we go about designing information systems in general, and the same thing can be said about security practices. So many times I see people focusing their efforts on the easiest areas of work, not the areas that will yield the greatest reduction of risk.
I certainly hope you apply the parable of the streetlight effect to any of your endeavors. In fitness, in education, in your professional life, it always works the same way -- growth only comes from pushing your limits and abilities and that is not where the light is.
When people argue about security and compliance they always ask the same question: Are we secure?
This is the wrong way to approach security and most other questions of certainty. A better question would be: Are we insecure? It may not seem like a big difference, but the logic in the two questions is very different. Secure or insecure, survive or die, win or lose: when you investigate any of these high-stakes, black-and-white situations, the question you ask is directly related to the answer and the evidence you produce to back that answer. This pairing brings the discussion to a shared certainty about future events.
Let me explain.
Let’s take a physical sport like boxing as an example. Let’s say you are participating in a boxing match; the future state is win or lose. If you diligently train for the year leading up to the fight, you still can’t be certain you will win. However, if you do no training, and you have a reasonable understanding that your opponent is training, you can be nearly certain you will lose.
I make this point because we all read a lot of stories in the media about the failure of daily vulnerability and configuration management programs to make us more secure. This media hyperbole echoes the ‘are we secure’ question, and it is, without a doubt, the wrong question to ask. Without vulnerability and configuration management, you can be certain that you are insecure; this is the problem with inferring a future state of being secure, because evidence can only back the claim of being insecure. If you set out to prove insecurity, the absence of evidence is enough, whereas if you set out to prove a secure future state, the lack of evidence can and will be interpreted either way.
How do you know if you are asking the right questions? Ask the kind of questions where the lack of evidence increases a future claim’s certainty and the evidence gathered directly backs the future state.
Everyone should be data driven in their decision making, but it pays to be extremely careful with exactly what is being measured and the questions you ask.
Just a few weeks ago, I wrote a blog post titled IE standing for Is_Exploitable but in light of new information, I want to change my mind. Today, IE stands for “Is_Elite”. Stay with me here, in Internet-time things change hourly.
First a little background on security evaluations of the most popular browsers:
The Accuvant study compared Google Chrome, Microsoft’s Internet Explorer, and Mozilla Firefox. In this study, the winner was Chrome due to mature sandbox techniques. IE came in 2nd and Firefox came in last. The execution environment with the smallest target surface yet highly functional was the winner.
The far more recent NSS Labs study compared Internet Explorer 9, Chrome 15 through 19, Mozilla Firefox 7 through 13, and Apple Safari 5 against 84,396 active and malicious URLs over a 175-day testing period. Both tests explore anti-exploitation techniques, which is the right perspective, since this is what is being targeted by the threat agent. Read the report for more detail, but IE outperformed them all.
Surprised? You shouldn’t be. This is a perfect example of the Darwinian reality of the Internet. You and the threat actor go round and round and round until one of you becomes the strongest and most badass creature in the ecosystem and the other either dies or limps off the field, injured and weaker.
All kinds of media reports have slammed any number of vulnerabilities in IE over the years. But, if you think of these as a very public bug report and these bugs get fixed, guess what happens? Quality goes up. No one should be surprised that IE is now very good at surviving in a hostile environment.
You may not love Microsoft for a variety of other reasons, but you have to give credit where credit is due, my friend. If you have relatives who are less technical and run Windows, I would get them patched up and advise them to use the latest version of IE whenever they are on the Internet. Personally, I run all the browsers, and when one is under fire and vulnerable, I use the others.
No good soldier goes to war with only one weapon right?
I’d like to discuss misleading names because they are everywhere and the security industry is riddled with them. When we are talking about human-to-human socialization we can disambiguate imprecise names fairly easily but when computers are involved precision in naming is everything. Most computers still operate at the syntactical level, so ambiguous or conflicting semantic structures create serious problems.
Here are some examples of misleading names in the animal kingdom:
What drugs were they on when they named these wonderful creatures? Poor Peacock Mantis Shrimp! I’d be pissed if, instead of human, I was called “eagle roach dog”. (Wait, I kinda’ like that…)
Here’s my candidate for most misleading term in the security industry: “attack surface”. Here’s the definition:
“The attack surface of a software environment is the code within a computer system that can be run by unauthorized users. “ [Wikipedia Reference]
In the theatre of information security conflict, there are always at least two roles, that of an ATTACKER and that of a TARGET. If we diagramed each of these roles with the objective of clarifying the term attack surface, the diagram would look something like this:
So, when people talk about attack surface, they are not talking about the surface of the attack. Instead, they are talking about the surface of the target the attacker is trying to access. Therefore, in a logical world, it should be called the “target surface”.
My sympathy goes out to the mountain goat, electric eel, and maned wolf. It’s too late to adjust your misleading names; the terms are too socially entrenched, so you are stuck with lousy monikers. In the same way, our industry will continue to use the term attack surface to describe something that is very clearly not on the surface of the attacker.
If we created a chart for our industry with misleading terms, what would be on it? If you have some, post in the comments please. Let’s get them out in the open and discuss.
My Buddy: Hello?
Unknown Caller: Hello, I’m from Microsoft and we have an error report showing that there is a virus on your HDD. We would like to get some information from you so we can help you fix the problem.
My Buddy: Umm…..how ‘bout not. Click (hangs up the phone)
Luckily for my buddy, he is paranoid like I am and did not fall for it, but he immediately called me to tell me what had happened and ask if it was a scam. After a quick search, it appears to be a popular method for getting at personal information. This is not Microsoft’s first rodeo, and they have a great set of pages that explain all of this in detail.
Once you are safe from these creeps, and hopefully hung up before you gave them any critical information or downloaded anything, you can file a complaint with the Internet Crime Complaint Center (IC3). Their FAQ is here. Doing this might make you feel better, but the real win is that you did not give the bad guys any information or download anything to your computer. Also, because this is phone fraud, you can totally report this to the FTC here. They say “Your complaint counts!”
I really just wanted to post this because, while you may be as paranoid as my buddy and I are, I know I have family members and friends who are way more trusting and could get into a lot of trouble. God only knows what would happen to Apple users, who, in my opinion, were once told that their computers can't get viruses or malware.
Word to the wise: make sure you validate, with a callback or some credential check, anyone asking you for information or trying to get you to buy or download anything, no matter what computer you are running.
On a daily basis, I'm always thinking of ways to make it harder and more frustrating for attackers.
This is the mindset we must have if we are to adapt to the threat and raise the cost to the attacker.
There is a cost to gaining enough knowledge to execute, there is a cost to remaining undetected when you execute, there is a cost to remaining undetected and operational once your malware is resident on the host.
I'd like to hear from you guys. If you were a bad guy (not saying you are, you just have the ability to think like one), what would make your job harder? Let's think this through. Comments, please, and let's talk about it.
I can't wait to chair this panel at the upcoming nCircle Worldwide User Conference.
Answer this question: Do you have your own crime fighting unit? If the answer is No, then you really need to know what you can do on a daily basis should some criminal activity happen on your network and you need to engage the FBI or some cyber crime fighting unit locally.
Who do you call?
When is it appropriate to call?
What baselines or data would make the crime fighting tasks more efficient and effective?
How and when to involve legal?
Is your PR team ready?
Can you perform drills to stay on your game?
If you can't make the conference, I'll post some blog entries for y'all.
Turn on the English closed captions.
If these guys are able to make a commercial this cool about a bus, think what they could do for a security product. I just want to be the dude that says "Yah, cool!"
A security researcher recently reported a vulnerability in the Samsung S III phone where a single USSD code is issued that can remotely wipe the phone. Turns out this USSD code can be delivered via URL from a remote website, malicious link, or maybe even a QR code. The bad news gets even worse because handsets can be instructed to load the bad code from a website using a WAP-pushed SMS message. The fix will require a patch from carriers, and other Samsung devices are reportedly vulnerable.
This vulnerability is an excellent example of a dominant pattern in the art of hacking. It is highly likely that the development team that engineered the kill sequence via USSD never considered that the USSD code could be delivered via some other remote protocol, and the team doing the remote protocol implementation never considered that a USSD code could deliver a remote wipe.
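The delivery side of that blind spot is worth making concrete. The published reports describe the wipe code arriving as a tel: URI that the dialer executed automatically. Here is a minimal, hypothetical filter in Python (the function name and pattern are my own illustration, not code from any handset or browser) that flags tel: links framed like USSD sequences instead of ordinary phone numbers:

```python
import re

# Hypothetical defensive check: flag tel: URIs whose "number" begins with
# a star or hash (raw or percent-encoded) -- the framing USSD codes use --
# rather than an ordinary dialable phone number.
USSD_LINK = re.compile(r'tel:\s*(?:%23|%2a|[*#])', re.IGNORECASE)

def looks_like_ussd_link(fragment: str) -> bool:
    """Return True if the HTML/SMS fragment contains a tel: link that
    looks like a USSD sequence rather than a normal phone number."""
    return bool(USSD_LINK.search(fragment))
```

A dialer or browser applying a check like this could prompt the user before executing such a code instead of running it silently, which is essentially what the eventual fixes did.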
Hacking is very much like the pattern that makes jokes funny. You take the listener down a path through A and B, and they infer that C and D will obviously come next. You give them C, but instead of D the next part of the pattern is M, which is completely unexpected and makes you laugh. This type of pattern happens in every industry. In hacking, the unexpected part of the pattern is the attack, and it doesn’t make you laugh.
Hackers have become adept at preying on blind spots created by the failure of the design team to consider every feature in every possible context.
Let’s look at another example of these kinds of blind spots in automobiles. Safety is a big deal with auto manufacturers. On impact, many vehicles shut off the fuel system, enable hazard lights and emergency signals, automatically unlock all doors, release restraint systems, and so on.
All these features were designed to prevent passenger entrapment and improve safety. But, hackers see the words ‘all doors automatically unlocked’ and think, how does the car define collision? Could I simulate the G-force of a collision with a small sledge hammer blow to the bumper? This is exactly how features developed to improve collision safety can be used for car theft.
Take a look at these excerpts from car manufacturer websites:
POST-SAFE also helps to prevent follow-on accidents and makes it easier to locate the accident vehicle by automatically activating the hazard warning lights. The doors unlock automatically and are easy to open, which speeds up rescue times for the people inside.
VW Intelligent Crash Response
If a certain severity of crash is detected, all doors automatically unlock, the battery terminal automatically disconnects, the fuel supply is automatically terminated, warning hazards are automatically engaged, high consumption electrical components are automatically shut off and an emergency signal is automatically sent to OnStar Telematics (if equipped). If you didn't catch that, it all happens automatically so no need to worry. These steps greatly minimize the possibility of trapped occupants and fire.
To avoid these kinds of blind spots, development teams have to think through every context in which a feature could be executed. It takes time but rest assured -- the bad guys are spending most of their waking hours doing exactly this kind of analysis.
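To make the blind spot concrete, here is a toy sketch in Python. The thresholds are made up for illustration and come from no manufacturer; the point is only the difference between a naive single-signal check and one that includes the context the designers implicitly assumed:

```python
# Toy sketch with made-up thresholds: a naive crash detector that unlocks
# doors on any single g-force spike, versus one that also checks context
# the designers implicitly assumed (the car was actually moving).

CRASH_G_THRESHOLD = 4.0  # hypothetical value

def naive_should_unlock(peak_g: float) -> bool:
    # Blind spot: a sledgehammer blow to the bumper can produce this spike too.
    return peak_g >= CRASH_G_THRESHOLD

def contextual_should_unlock(peak_g: float, speed_kmh: float, pulse_ms: float) -> bool:
    # Also require that the vehicle was moving and that the impact pulse
    # lasted as long as a real collision would.
    return (peak_g >= CRASH_G_THRESHOLD
            and speed_kmh > 10.0
            and pulse_ms > 50.0)

# A parked car hit with a hammer: sharp spike, zero speed, very short pulse.
print(naive_should_unlock(6.0))                 # True -- doors unlock, attack works
print(contextual_should_unlock(6.0, 0.0, 5.0))  # False -- attack filtered out
```

The second check is not a complete answer either; it just shows how asking "in what other context could this trigger fire?" changes the design.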
By now, you have had at least one of your friends get their account hacked on Facebook and start saying stupid things to encourage you to click on a link or install some malicious application. When you know these people, this sudden change in behavior is obvious and in a ‘neighborhood watch’ style, you are notified and go through the recovery and cleanup process.
The problem is that some of you may have some crazy policy of accepting anyone as a friend. Stop that right now. These are not friends, they are strangers and a percentage of those accounts are not even real people. I don’t care how hot that guy or girl is in that picture, don’t do it.
I encourage you to go through your Friends list and make sure everyone in there is a real friend. If you are not sure, message them with a question only the two of you share. At the end of this post, I’ll say why this is important.
For this discussion, you can classify different users on Facebook as:
- Strangers that are real people
- Strangers that are real people whose accounts are compromised
- Strangers that are fake accounts (bots)
- Friends that are real people that you know in real life
- Friends that are real people that you know in real life whose accounts are compromised
I’m making a big deal here because Facebook has an account recovery method that, when all else fails, will use your ‘Friends’ to gain access to your account. They help you pick a few friends, send each of them a security code, and then you call each friend, collect the codes, and submit the codes to reset your password. Here’s how the bad guy abuses this:
- Bad guy controls many accounts on Facebook
- At some point in the past, you accepted a friend request from 3 to 5 of these bad guy accounts
- Bad guy contacts Facebook and says he is you and that he can’t remember his password and can’t access his email or mobile phone on the account
- Bad guy instructs Facebook to send the codes to the bad guy accounts that you previously friended
- Bad guy collects the codes from these accounts, submits them and ‘recovers’ your account
- Bad guy has your account.
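The arithmetic of the attack is simple. Here is a toy Python model (not Facebook’s actual implementation; the three-code threshold is an assumption drawn from the “3 to 5” figure above):

```python
# Toy model of friend-based account recovery: the requester succeeds by
# collecting security codes from enough of the nominated "friends".
CODES_NEEDED = 3  # assumed threshold, not Facebook's real number

def recovery_succeeds(nominated, attacker_controlled):
    """The attacker can only read codes sent to accounts he controls."""
    return len(set(nominated) & set(attacker_controlled)) >= CODES_NEEDED

attacker_bots = {"bot1", "bot2", "bot3"}  # fake accounts you once friended

# If the bad guy nominates his own bots, every code comes back to him:
print(recovery_succeeds({"bot1", "bot2", "bot3"}, attacker_bots))  # True
# If only real friends are nominated, he collects nothing:
print(recovery_succeeds({"alice", "bob", "carol"}, attacker_bots))  # False
```

The whole attack collapses when none of your Friends are attacker-controlled, which is exactly why pruning strangers from your Friends list matters.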
Unlike Twitter or Google+, Facebook really considers your Friend a real Friend so I suggest you do the same.
For those of you really high-profile folks, don’t put it past the bad guys to really put some time and effort into this. I’ll post more later in the week about friending attacks, where the bad guy digs up some history and makes up accounts for past classmates, and through those accounts gains more access to you on Facebook. The only defense here is to ask these accounts the hard questions, and if they can’t answer them, they are not your friend.
As technology becomes more and more integrated, you will have to manage risk across multiple information systems. The good news is that as the world becomes more connected, your work becomes more efficient; the bad news is that as the world becomes more connected, your adversary’s work becomes more efficient.
Those of you with iPhones may have upgraded to iOS6 already and I want to warn you of an unreasonable default related to your Contacts.
After upgrading to iOS6, you will find integration with Facebook that is a "nice to have" but there’s one feature I believe should be off by default. Facebook Events and Friends will appear in your Calendar and Contacts respectively. The Calendar integration I don’t mind so much but the automatic Contacts integration concerns me.
Given that many of your Friends have had their accounts compromised at one time or another, and some of you may have a dangerous “I Friend Everyone” policy, this type of access to your Contacts could get scary. A bad guy could maliciously update a phone number or URL, redirecting you to a number that is a toll call or to a URL that downloads malware. Bottom line: stay out of my contacts unless I carefully put you there.
You can turn it off by going into the Facebook settings and disabling this feature after you upgrade.