As an engineer, numbers and mathematics play significant roles in my daily activities. Sometimes, however, putting a number on something just doesn’t make sense. For me, trying to estimate the global cost associated with cybercrime is one of those ‘somethings’. The inherent complexity associated with the global space of cybercrime events prevents us from calculating a reliable cost estimate with respectable accuracy and precision.
Not so long ago, Symantec asserted that cybercrime was costing us about $110 billion per year. Around the same time, our friends at McAfee stated that cybercrime was instead costing us approximately $1 trillion per year. I wonder which one is right? It’s a conundrum, indeed. For years, I have watched these sorts of global cost estimates travel across the wire, and yet I have found little use for the information because the data points are, with absolute certainty, all over the board. Nowadays I simply ignore these ‘informationals’ when they cross my path; long-term exposure to them has desensitized me.
So, why is this worth mentioning? Because the continual distribution of information containing non-negligible discrepancies, such as the aforementioned cost estimates, can cause customers to lose faith in security providers. The information security industry, for the most part, has been working hard to reshape how users think about security. Before this reshaping took place, security was a nuisance for enterprises, was overlooked by developers (i.e., security-as-a-fix instead of security-at-inception), and was unknown to end users. Fortunately, the trend is changing. For example, CXOs are now less reluctant to approve budget line items related to securing their enterprises, and end users are becoming more aware of cyber security and its consequences. However, these changes would not have occurred if our industry had kept desensitizing our target audience with inaccurate information.
The moral of this story—we as security professionals need to focus on relaying relevant information to the rest of the world and to do so as accurately as possible. There is no room for guessing games in our industry.
Representations of the security industry and ‘hacking’ have become ubiquitous in popular media in recent years, often portraying computer security enthusiasts in what one might consider a less than realistic light. Being a television and movie fan myself, I thought that I would talk a bit about this phenomenon and how it affects us as security professionals.
The biggest mistake in movies is portraying security or hacking in an overly simplified way that bears little relation to reality. One of my favorite examples occurs in the film “Firewall”. At one point, Harrison Ford’s character lifts code off a computer screen by ripping the light-emitting bar out of a scanner, somehow plugging it directly into his daughter’s iPod, and holding it in front of the screen.
A second example that I see often is the emphasis on physical building penetration while the data theft is portrayed as simple. A character or characters will spend countless hours rehearsing lines and identities in order to work their way into the building, and yet bypassing network security infrastructure is depicted as only requiring the insertion of a ‘special’ thumb drive in any computer on the network. Though, interestingly enough, with the discovery of MS13-027, maybe this isn’t as unlikely as it initially seemed.
Hollywood and the television industry don’t always get it wrong. Some movies do their research and get it right. For example, in the film ‘WarGames’, the main character gains access to a computer system because it used an easily guessed password. I am always pleasantly surprised when I see something real like this in a film.
At the end of the day it is about perception. How many times have you heard friends or family comment on how easy a security penetration of a certain system would be, and then proceed to describe some type of MacGyver solution that would be unlikely to work in the real world? I don’t have a red “Hack” button on my keyboard and, no matter how cool it would be, I can’t create a “master key” to any firewall on the planet using a flash card, baling wire, and a stick of bubblegum.
Recently, I bought SimCity, and EA claimed the issues that occurred on release day were not caused by digital rights management. I agree; server supply simply didn’t meet server demand. That wouldn’t have been an issue if the game were capable of operating offline. Since EA released SimCity as an online game, consumers were forced to log in to these servers with a valid product key.
EA is not the only one that has implemented “You Must Be Online to Play” (YMBOP) DRM. Ubisoft implemented YMBOP with the release of Splinter Cell Conviction. Growing up in the country, I never had a stable Internet connection, and there are consumers out there who still have dial-up, requiring that they go offline to place phone calls. I guess EA and friends have chosen to ignore those consumers, leaving them out in the cold with games like SimCity and Splinter Cell Conviction.
Both these YMBOP DRM attempts have been failures, as people will always find a way around the problem. The last rumor I heard was that you could now play SimCity offline (illegally) without the capability to save the game; it’s only a matter of time until that is changed as well. So now, the only people that suffer are the honest consumers, like me, that went out and purchased the game. It’s a shame; SimCity would have been phenomenal if it was released as an offline game.
Companies are only hurting the consumer at this point and, as a result, themselves. I’ll never buy another game that has YMBOP for the PC after this experience.
With all the precautions you can take to actively protect sensitive data on your web server, sometimes there are unintended consequences of your actions that may result in information disclosure. Consider this scenario:
Adam is the administrator of a WordPress-based website. The database credentials have changed, and Adam needs to update the WordPress configuration with the new username and password. Adam logs into the web server over SSH and opens up wp-config.php in Vim. After making his changes, but before he can close Vim, Adam's network connection goes down and the SSH session drops. Some time later, Heather comes along and finds Adam's website. Seeing that it's a WordPress site, she attempts to retrieve .wp-config.php.swp. Because Adam's SSH connection dropped before Vim was closed, the swap file that was automatically generated by Vim may still exist and could be retrieved by Heather using a simple HTTP GET request. Heather could extract Adam's database credentials from this file.
Vim uses this swap file mechanism to make recovery possible after a crash. Other file editors have similar features. This recovery can be very helpful but the unintended consequence here is that the web server isn't going to interpret .wp-config.php.swp as a PHP file and will serve it as plain text instead, making the contents readable to anyone who cares to look.
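To make the exposure concrete, here is a minimal sketch of how such leftover files could be probed for. This is not a real scanner; the candidate file names are common Vim/Emacs leftovers for wp-config.php, and the `find_exposed` helper is hypothetical:

```python
"""Probe a site for leftover editor swap/backup files (illustrative sketch)."""
import urllib.request
import urllib.error

# Common editor leftovers for a file named wp-config.php:
# Vim swap, traditional backup, .bak copy, Emacs autosave.
CANDIDATES = [
    ".wp-config.php.swp",
    "wp-config.php~",
    "wp-config.php.bak",
    "#wp-config.php#",
]

def find_exposed(base_url, candidates=CANDIDATES):
    """Return the candidate names the server actually serves (HTTP 200)."""
    exposed = []
    for name in candidates:
        url = base_url.rstrip("/") + "/" + name
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    exposed.append(name)
        except (urllib.error.URLError, OSError):
            pass  # 404s and connection errors mean the file is not exposed
    return exposed
```

Running this against your own site is a quick way to verify whether a dropped editing session left anything readable behind.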
Here are some tips and techniques that can be used to mitigate this sort of information disclosure:
- Don't edit any files directly on the production server. Instead, make all file edits on a development or staging server and then only copy the specifically changed files to the production server. Make sure not to copy entire directories to avoid accidentally copying over these temporary files.
- Configure your text editors or any other applications you use to write their temporary files to a location other than the current directory. Or, if you can live without the functionality they provide, configure your editors not to write temporary files at all. Many applications allow for this sort of configuration. In Vim you can change your backup and swap file location by putting the following in your .vimrc file:
set backupdir=~/.vim/backup
set directory=~/.vim/tmp
- Configure your web server to only serve pages with allowed file types. By explicitly configuring the web server to only serve certain file types any other files that accidentally end up on the web server will not be accessible. This not only covers the text editor temporary files already discussed but other files that might accidentally end up in the web root, such as backup files. In Apache this can be configured with the following setup:
Deny from all
<FilesMatch "\.(html|php|jpg|png)$">
    Allow from all
</FilesMatch>
These techniques can be used individually or in combination to prevent accidental information disclosure due to unintended consequences of normal application use. If you haven't been using these techniques, these sorts of files may already exist on your web servers. Fortunately, WebApp360 will locate these files for you so they can be cleaned up.
In my 7+ years with nCircle, I’ve been involved in vulnerability scoring discussions more times than I can count. Colleagues, customers, conference-goers, and complete strangers all want to discuss the topic and I can’t say I blame them… the topic is interesting. So after numerous blog posts on other subjects, it’s probably time to tackle the issue in an open forum where we can discuss and debate as a group.
Let’s start with my feelings on scores, since I should probably be upfront about it: the current state of vulnerability scoring is useless. With the frequency of vulnerability disclosure and the number of vulnerabilities patched in products, a bucket consisting of High, Medium, and Low tells me nothing. The solution to our scoring problems was supposed to be CVSS. It provides more buckets and looks, at least initially, as though it distributes the scores quite well. In reality, the 100 buckets of CVSS are really more like 15-20, as certain scores become commonplace and others are never seen.
Let’s take two vulnerabilities, A and B.
Vulnerability A has public exploit code and targets a popular operating system. It is a remote unauthenticated attack against the network stack and leads to a total compromise of the system.
CVSS Score: 10.0
CVSS Vector: (AV:N/AC:L/Au:N/C:C/I:C/A:C)
Vulnerability B has no public exploit code and targets a popular web technology. It requires that a user browse to a website and, if an exploit existed, successful exploitation would lead to code running in the context of the user, not the system.
CVSS Score: 10.0
CVSS Vector: (AV:N/AC:L/Au:N/C:C/I:C/A:C)
As you can see, both of these have CVSS scores of 10.0. This is why I see these systems as useless. When you read the details, it’s very easy to identify a priority, but if you were to rank these by CVSS, that priority would disappear.
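For the curious, the identical scores fall straight out of the CVSS v2 base equations. The following sketch computes the base score for the shared vector above; the metric weights come from the CVSS v2 specification, while the small parsing helper is my own:

```python
# CVSS v2 base-score calculation. Metric weights are taken from the
# CVSS v2 specification; the vector parser is an illustrative helper.

AV = {"N": 1.0, "A": 0.646, "L": 0.395}    # Access Vector
AC = {"L": 0.71, "M": 0.61, "H": 0.35}     # Access Complexity
AU = {"N": 0.704, "S": 0.56, "M": 0.45}    # Authentication
CIA = {"C": 0.660, "P": 0.275, "N": 0.0}   # Confidentiality/Integrity/Availability impact

def base_score(vector):
    """Compute the CVSS v2 base score from a vector like
    '(AV:N/AC:L/Au:N/C:C/I:C/A:C)'."""
    m = dict(part.split(":") for part in vector.strip("()").split("/"))
    impact = 10.41 * (1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]]))
    exploitability = 20 * AV[m["AV"]] * AC[m["AC"]] * AU[m["Au"]]
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Both example vulnerabilities share the same vector, so the base
# score cannot distinguish them:
print(base_score("(AV:N/AC:L/Au:N/C:C/I:C/A:C)"))  # 10.0
```

Note that nothing about exploit availability or real-world attack context appears anywhere in these equations, which is exactly the problem.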
So what is the solution? In my mind it’s risk scoring. When you’re talking about a vulnerability, the risk score is what you should care about. What we really need is CVRSS but it doesn’t exist (hrm… maybe an upcoming blog idea). When you look at public systems, no one is really doing this. Microsoft has come close but they still aren’t there. If they could find a way to marry their severity score with their exploitability index into a single value, I think they’d provide a useful way to truly prioritize Microsoft bulletins.
I think this is something that everyone in the vulnerability management space is currently dealing with as well. Helping customers prioritize vulnerabilities is a large part of our job and no one has figured out the perfect way to do it yet. Some companies are still scoring vulnerabilities, other companies are just using CVSS, and others still, nCircle included, have worked at risk scoring but still haven’t perfected it.
At nCircle, we look to score the risk using three factors: age of the vulnerability, exploit availability, and resulting level of access to the system. If I do a follow-up blog post on a proposed CVRSS, I would leave these in but I would likely improve the process with a few additional factors. When you look at the elements used by nCircle, they all make sense, especially exploit availability and level of system access. If an exploit is available, a vulnerability should score higher, and if that exploit is in malware or a known exploit framework it should score even higher. The same is true when you look at the level of access… a vulnerability that leads to user level access should not score the same as a vulnerability that leads to system level access. These factors make a world of difference when you’re looking at a list of 10,000 vulnerabilities trying to decide which ones to patch first.
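To illustrate how those three factors could combine, here is a deliberately simplified sketch. The weights and the multiplicative formula are hypothetical, chosen only to show the shape of such a system; this is not nCircle's actual algorithm:

```python
# Hypothetical risk-score sketch combining the three factors named
# above: vulnerability age, exploit availability, and level of access.
# All weights are illustrative assumptions, not nCircle's real values.

EXPLOIT_WEIGHT = {
    "none": 1,        # no public exploit
    "public": 4,      # public proof-of-concept code exists
    "framework": 8,   # weaponized in malware or an exploit framework
}
ACCESS_WEIGHT = {
    "none": 0,
    "user": 2,        # exploitation yields user-level access
    "system": 6,      # exploitation yields system-level access
}

def risk_score(age_days, exploit, access):
    """Older, exploited, higher-privilege vulnerabilities score higher."""
    return age_days * EXPLOIT_WEIGHT[exploit] * ACCESS_WEIGHT[access]

# A year-old remote compromise with a public exploit vs. a brand-new,
# unexploited bug yielding only user-level access:
print(risk_score(365, "public", "system"))  # 8760
print(risk_score(1, "none", "user"))        # 2
```

Even with toy weights, the two example vulnerabilities separate by orders of magnitude, which is the property a CVSS-style bucket system lacks.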
If you look at the above Vulnerability A and Vulnerability B examples, using nCircle’s scoring system, Vulnerability A would receive a score of 24,203 and Vulnerability B would receive a score of 1. From this difference, it is quite easy to determine a priority for applying patches.
In an effort to further improve our scoring, nCircle released a number of ASPL-Based scoring changes roughly 6 months ago. These are flags that the customer can set to further prioritize their vulnerability scan results. These flags can be thought of as the environmental portion of a proposed CVRSS, allowing more control over the results that are important to you. nCircle customers interested in learning more about this can contact Support for details or sign up for nCircle Connect.
After this much writing, I’ll probably have to do a follow-up post on risk scoring… but I wanted to get something posted simply to start the discussion. So… discuss!
Note: CVSS examples are based on NVD's use of the scoring system since NVD is considered by many to be the standard for CVSS scores.
I saw in the news this morning that the British courts have granted a request from several major music labels to compel Internet Service Providers to block all access to specific torrent sites. While I in no way support piracy, the situation becomes murkier when you take into account that many people with slower connections use torrent sites to access Linux ISOs and other free content. In fact, some free content providers no longer make direct downloads available and choose to offer their content exclusively as torrents using existing trackers. With this ruling in effect, users will no longer be able to access legitimate content using these sites.
After looking around further, I discovered that this is not the first such ruling in the UK. A similar access restriction was put in place last year against a different torrent site. Until the technology exists to filter illegal content and only illegal content, this topic is sure to continue to generate lively debate. Everyone I talk to seems to have their own opinion regarding these types of content restrictions. Leave a comment below and let me know your stance on this issue.
When nCircle first launched IP360, the content (detection logic) was written in ASPL. That was still largely the case 7 years ago when I joined nCircle: 99% of the detection logic at that point was written in ASPL, while modules (back-end code) were written in Python. I say 99% because there was the odd single line of Python integrated with our ASPL code. Over the past 7 years I’ve watched this change, from the odd line of Python in a rule, to the occasional rule written entirely in Python and, in the past year or two, a shift to almost every rule being written in Python. This gives us (VERT) the ability to write rules that are much more dynamic and powerful; we can do things we’d never have considered in ASPL. The end result of this shift is better, more accurate detection for our customers.
Python is a powerful language that is used in so many places. It’s used in my favourite game, EVE Online, and in some of the tools that support it, such as pyfa. It’s also used in security tools like Canvas and Core Impact. Websites are built using Python thanks to frameworks like Django. Python is used in the scientific research community, the medical community, and industrial automation. Many companies like nCircle rely on the power, flexibility, and simplicity of Python as an integral part of their business. Yet nCircle wasn’t my first encounter with Python; I’d already been developing tutorials for online security forums with the language, and I’d developed tools for past employers using Python.
While Python wasn’t my first programming language (that credit goes to Turing [a language no one outside Ontario, Canada is familiar with]), it’s definitely the one I’ve spent the most time with. I’ve got rows on my bookshelf dedicated to Python books, as well as a large chunk of my Kobo. Given my appreciation for the language, the popularity of it, and the availability of resources, I was excited last week that we were able to announce that nCircle customers now have the ability to write custom detection in Python.
This change allows our customers greater flexibility and control when developing custom rules. It also means that VERT is able to share unique pieces of code that aren’t destined for the product but may be useful to certain customers in one-off situations. We’ve already posted examples of these types of rules on nCircle Connect in a new user group that IP360 customers can join, and we’ll be posting more in the future. I encourage everyone who can to sign up for this new user group; I’m looking forward to seeing the rules that everyone creates.
I am happy to report that, as a result of my BSides SF talk, millions of Google users can now feel a little less exposed to account hijacking. It started last week when I noticed something peculiar: my proof-of-concept app, which had been working flawlessly, began to intermittently fail. I now know that this was in fact part of Google’s process for rolling out fixes for the numerous vulnerability reports I had filed while researching the 2-step verification system. With just under 48 hours to go before my talk, I received a kindly worded message from Google’s security program manager explaining what they had done to protect users in light of my reports and pending conference talk. This is, of course, the goal for every legitimate security researcher, and I am very happy to have played the role that I did in improving Google’s authentication systems!
It’s been said that technology journalist Mat Honan’s unfortunate experiences, which rendered his MacBook Air virtually useless, could have been prevented had he enabled 2-step verification. Unfortunately, as CloudFlare CEO Matthew Prince learned, Google’s 2-factor authentication system is far from impenetrable. In fact, through my research, I have found that in some ways 2-step verification can actually reduce the security of a Google account!
I have recently worked with IBM’s DB2 software. DB2 allows you to run multiple versions of the application simultaneously. Trying to work out which version of DB2 you actually need can be a challenge, as IBM offers many versions of the product. IBM needs to improve their website.
A single host can have multiple fix pack versions installed. I was quite surprised by this, as you can also have multiple versions on your host that are vulnerable to different attacks. When installing a fix pack with default settings, you will end up with both a fixed version and a version that is still affected by the vulnerability.
Looking at the variety of flavors DB2 offers, I think it would be a massive undertaking to figure out which version you need. DB2 server versions include “Enterprise Server Edition”, “Workgroup Server Edition”, “Advanced Enterprise Server Edition”, “Express-C”, “Express Edition”, and you can even have a “Personal Edition”. Each version has its own specific use case, but I cannot understand why you would need so many different versions. Additionally, the naming scheme could have been a lot better: “Express Edition” and “Express-C”. What is the difference, you may ask? The first is a cut-down, low-budget version of DB2; Express-C is a free version with limited functionality.
I spent some time looking up which CVEs affected which versions of DB2 on the IBM website. One would expect that they would list the CVEs with the fix list information. Indeed, some of them are there, but you have to dig around to find the full list of CVEs that affect the software you are looking at. I feel that underfunded open source projects do a better job of documenting than IBM.
Overall, IBM’s DB2 software is the most interesting product I’ve worked with in a couple of months. It wasn’t the software itself that was interesting, but how the software can be used. I did like the option of running different versions of the software on one computer for backwards compatibility, but IBM needs to work on consolidating the information on their site for ease of access.
I always visit family over the holidays and this year will be no different. After making the trek back to my hometown each year, one of the things I can always count on is that family members will ask me to "fix" their computers. It is the inevitable role of the geek, and often involves long hours spent configuring personal firewalls and killing malware. All this in the understanding that by the time the holidays come around next year, these machines will be back in a similar state and require "fixing" again.
Last year, during a complete system reformat caused by an oppressive number of viruses on a family member's machine, I started thinking about how preventable most of the issues were. Could we decrease the 'family tech support' problem by providing basic security education? Simple rules regarding downloads and safe browsing combined with an understanding of basic security practices would go a long way. While it may be time consuming to do security training with family members, it pales in comparison to the time it takes to fix these issues.
I had a particular family member that often had these issues and last year I took the educational route. I have already heard that there are few, if any, issues that will need to be dealt with this visit. So, as you head home for the holidays and put on your tech support Santa hat, remember that education can go a long way towards making your holidays more enjoyable.
I thought I would return to the topic of reporting filters today. Questions often come in regarding which items were found locally and which were found remotely. You can create a report filter to show you only locally or remotely detected applications/vulnerabilities present in your nCircle report.
To create this filter, click on 'Analyze' in the left side menu in your IP360 VNE and then select 'Reporting Filters'. A new filter will appear, and step one is to assign it a name. Step two is titled 'Set Parameters' and contains a list with a drop-down for 'Attribute' and 'Action'. We are going to add the application trees that contain the local (credentialed) checks. First, use the 'Attribute' drop-down menu to select 'Applications' as your attribute. Now use the search and filter buttons to find 'Windows Registry', 'SSH-DRT', and 'SNMP' and click on them to add them to the box on the right.
The screenshot below illustrates what your filter should look like.
Now we can move on and select the desired action. 'Include' will restrict report content to the selected application trees, so if you select 'Include' you will get a filter that shows only locally discovered applications and vulnerabilities. On the other hand, if you select 'Exclude' you will get only remotely discovered content. This can be very helpful if you are looking for hosts that had vulnerabilities detected via remote methods without using supplied credentials. As a caveat, it should be noted that there is some remote SNMP coverage included in the SNMP application tree, so there will be some remote detection present in the locally focused report.
Now that we have our desired filter created we can apply it to an existing audit. Select 'Analyze -> Run a Report -> Distinct Audits' from the menu on the left. Now click on the 'Advanced' tab. At the top set a Network Group, Network, Audit Limit, and Audit. Clicking 'add' will create an entry for the audit you want to filter. Now you can select the name of the reporting filter that you created from the list at the bottom left. Click the 'view' button to generate your custom report. This same method can be used to show you other information as well, such as 'only Mac OS X hosts' or 'only Windows hosts'. Experiment with the different options to tailor your report to your specific needs.
WordPress has a thriving plugin ecosystem, which adds to the popularity of WordPress as a blogging and content management system. But plugins increase the attack surface of a WordPress installation and should be used with caution. While this post is focused on WordPress, the advice is good for dealing with all software that has a plugin architecture.
I did a quick search of CVEs to see what was out there for WordPress. For 2012 alone, my search turned up three and a half times as many CVEs for WordPress plugins as for the core WordPress product. With over 22,000 plugins available and new ones being added frequently, there are likely a significant number of undiscovered or undisclosed vulnerabilities out there.
I'm a little troubled by how WordPress handles vulnerable plugins after the disclosure occurs. When a plugin is found to contain a security vulnerability it is removed from the WordPress Plugin Directory. However, no notification is given to users of that plugin that it is no longer available due to a security problem. This means that while the plugin is unavailable for installing anywhere new, the existing vulnerable installations still exist. Administrators must be vigilant because WordPress only provides notification when a new version is available, not when the entire plugin has become unavailable.
It's always best to minimize the attack surface of any service or application. For WordPress and other content management systems that means deciding whether the functionality provided by a plugin is really necessary and only installing it if it absolutely is. As with any software it's necessary to remain up to date on all components, including plugins. The idea that WordPress should report when a plugin has been removed from the directory has already been proposed to the WordPress team and they claim to be working on it. In the meantime, there is a plugin to tell you if any of your installed plugins have been purged from the directory. Is this a necessary plugin to install? That decision remains up to you.
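The "has my plugin been pulled?" check above can also be approximated by hand. The sketch below queries the public WordPress.org plugins-info API; the endpoint is real, but the exact error-response shape for removed plugins is an assumption, so the decision logic is kept in a separate, easily adjusted helper:

```python
"""Check whether a plugin is still listed in the WordPress.org directory.

Sketch under assumptions: the endpoint is the public plugins-info API,
but the response shape for removed plugins may differ in practice.
"""
import json
import urllib.request

API = "https://api.wordpress.org/plugins/info/1.0/{slug}.json"

def is_listed(response_text):
    """Decide from an API response body whether the plugin is listed.
    A removed or unknown plugin is assumed to return null or an error
    object rather than plugin metadata containing a slug."""
    try:
        data = json.loads(response_text)
    except ValueError:
        return False
    return isinstance(data, dict) and "error" not in data and "slug" in data

def check_plugin(slug):
    """Fetch the directory entry for `slug` and report its listing status."""
    with urllib.request.urlopen(API.format(slug=slug), timeout=10) as resp:
        return is_listed(resp.read().decode("utf-8"))
```

Running `check_plugin` for each installed plugin slug on a schedule would give an administrator the missing notification that a plugin has quietly disappeared from the directory.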
In December of 2011, I graduated from Fanshawe College’s three-year Computer Systems Technology program. I had previously worked as an intern with nCircle, and when I returned in January of 2012, I found the same great atmosphere I’d previously enjoyed. Everyone on the team knew what they were doing and they were just awesome to work with.
I joined VERT working out of Toronto, just as I had during my internship. I discovered that the team worked well together, but on Patch Tuesday we ran like a well-oiled machine. For those that don’t know, Patch Tuesday occurs on the second Tuesday of every month and is when Microsoft (and now Adobe) release their latest round of updates for vulnerabilities found in their products. On Patch Tuesday we work non-stop until we cover every vulnerability Microsoft has announced. The teamwork is enhanced on this day due to increased communication between team members, because we’re all working on the same task.
This year, I have used operating systems and software I had never used before, including an introduction to the Solaris operating system. I found Solaris to be quite annoying at times, but what Unix-based system isn’t? I’ve found that I quite enjoy being challenged when working with a new environment. It wouldn’t be as much fun if everything worked right out of the box.
Oracle Database and Microsoft SharePoint were two software packages I’d been lucky enough to avoid before now. I learned that Oracle is bothersome and difficult to install on many operating systems. If you are going to install Oracle products, I recommend that you find a decent installation guide, as there are important environment variables that need to be set for the database to function.
Overall, my first year at nCircle has been quite educational. I’ve learned how an enterprise operates and how an agile team functions within that enterprise. I also enjoyed working with environments that were new to me and I look forward to the future challenges that this position will bring me.
I’ve always been surprised by the AV industry because, in my mind, it should have failed years ago. It is touted as the cornerstone of end user security, but in reality it’s akin to having three deadbolts on your front door while leaving the back door open with a giant neon sign that says “OPEN”. The vendors are constantly playing catch-up and appear to have lacked innovation from the get-go. To be fair, this issue doesn’t seem to plague every vendor, but more on that later.
My favourite story about AV is from years ago when I was working at a college. We had a malware outbreak in a couple of the labs, and our corporate AV was unable to detect and clean the malware. We submitted samples and developed our own tool for cleaning systems. We waited weeks, and when a second outbreak occurred, we realized that the vendor still hadn’t dealt with our samples, so we once again resorted to our manual process.
Other vendors have introduced vulnerabilities and reduced system security. You can see recent examples by looking to Tavis Ormandy’s Sophail paper and past examples by visiting almost any AV vendor’s website. So, now we have AV vendors giving malware new ways to propagate, yet many people are still willing to give money to this massive industry.
Even well-known names in the AV industry have their own failures. Stuxnet, Duqu, and Flame all went undetected for quite a while, which Mikko Hypponen refers to as (paraphrased) “a spectacular failure for the antivirus industry”.
I mentioned when I started that not all AV vendors have been plagued by horrid inefficiency. Microsoft Security Essentials has amazed me as a product and 10 years ago I wouldn’t have considered saying that about a product from Microsoft. Judging by a report earlier this year from OPSWAT, I’m not the only one realizing that. Microsoft’s market share is growing. It also makes sense when you look at that report that the top two are free offerings. I’m often amazed that people are still paying for AV -- WHY PEOPLE… WHY?!?!?! I could understand paying if it was the best offering, but it’s not… it’s a flawed fallback at best.
In a recent talk, I mentioned that Security Essentials had impressed me by denying my Metasploit payloads 100% of the time and requiring that I turn off my AV in order to successfully run an exploit. A student at a local university came up to me afterward and mentioned that in his labs, they had easily bypassed other AVs but Security Essentials had stopped all of their attacks.
So, back to my original question: should the AV industry accept defeat? I think it should; I think that AV should be rolled in at the OS level. This could mean an internally built team, an acquired company, or outsourcing to another company to build the product, but tighter integration with the OS developer seems key to a strong AV offering. Sure, that approach failed us for a long time in the browser market, but maybe the reverse is true at the OS level. Rolling awesome, standalone products into the OS should, in my mind, only make things better.
Over the past few days, I have identified and disclosed several cross-site scripting (XSS) vulnerabilities within a website I’ve recently started using. In case you don’t know, an XSS vulnerability basically means that an attacker can provide new scripts to execute within the context of the vulnerable web application. The application vendor, let’s call them ‘Company X’ for now, advertises that their sites serve millions of users in over a half dozen countries. A quick Google search for ‘This site is provided by Company X’ returns 32,000+ results. I looked at a few of these sites, including Company X’s own online demo site, and this cursory examination revealed the same set of persistent and non-persistent XSS vulnerabilities. This means Company X is probably putting a huge number of their users at risk with these vulnerabilities.
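The standard defence against the XSS class described above is to escape untrusted input before echoing it into a page. A minimal sketch, using Python's standard library (the `render_comment` helper is hypothetical, standing in for whatever templating the vulnerable application uses):

```python
# Escape user-supplied text before embedding it in HTML so an
# injected script tag becomes inert text instead of executing.
import html

def render_comment(user_input):
    """Embed untrusted text in a page safely by HTML-escaping it."""
    return "<p>" + html.escape(user_input) + "</p>"

# An injected script tag is neutralized:
print(render_comment('<script>alert(1)</script>'))
# <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Modern template engines do this escaping by default; persistent and non-persistent XSS alike usually come down to some code path that skips it.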
On October 3, I went to another talk at SecTor called ‘I Just Middled You’. This talk revisited the definition of a man-in-the-middle attack and just how long such attacks have been in use. The speaker also covered tools you can use to defend against the threat of a man-in-the-middle attack.
Man-in-the-middle attacks have been around for 15 years, yet the speaker noted he has only been caught once. He also noted that the attack works because of infrastructure issues that have not been addressed: devices on a corporate network are only replaced at the end of the device replacement cycle, so they may not be capable of the functions needed to prevent man-in-the-middle attacks.
When devices are replaced, companies should implement security policies that log ARP traffic to determine where an attack originated. A technique that should be enabled on new equipment is dynamic ARP inspection, which finds the hosts that are poisoning the ARP table. When those hosts are found, dynamic ARP inspection quarantines them and prevents the attack.
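On Cisco gear, for instance, enabling this looks something like the following (a sketch assuming a Cisco IOS switch with DHCP snooping available; the VLAN and interface names are illustrative, and commands vary by vendor and platform):

```
! DAI validates ARP packets against the DHCP snooping binding table,
! so DHCP snooping must be enabled first
ip dhcp snooping
ip dhcp snooping vlan 10

! Enable dynamic ARP inspection on the same VLAN
ip arp inspection vlan 10

! Trust the uplink so legitimate ARP traffic from upstream isn't dropped
interface GigabitEthernet0/1
 ip arp inspection trust
```

Access ports are untrusted by default, which is what quarantines a poisoning host: its spoofed ARP replies simply never make it onto the wire.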
To sum it all up, sometimes network security requires that you upgrade before a device is due for replacement… it’s the cost of a secure network.
Over the years, I’ve had to engage in public speaking more times than I’d care to admit. Between conferences, user groups, and the classroom, it’s been more than enough to learn a thing or two. I know, for example, that I’ll need water if I’m speaking for more than 10 minutes. I also know that, due to nerves, I can’t eat prior to getting up in front of a group. There is, however, one rule that I’ve yet to learn. I’m fully aware of it and I always tell myself that I’m going to account for it, but somehow I never do. As far as I’m concerned it’s the golden rule of public speaking.
The rule consists of two parts, but I still consider it to be a single rule. The rule is also directly affected by how long the presentation is supposed to be. Given that I’m a geek, this rule is probably best expressed as an equation.
PT * 0.75 = NS + 5 = T – 10
Where: PT = Practice Time
NS = Number of Slides
T = Time (Length of Talk)
If we take my recent talk at SecTor as the example (although this applies to the majority of presentations I’ve done), we can see how this works. You start with the length of your presentation and subtract 10 minutes (allowing for questions). This gives you your target numbers. From here you need to determine how long your practice time should run and how many slides you should have.
Your practice time will always run longer than your presentation time. This seems to be a rule. So the best way to keep yourself on track is to ensure that your practice run goes over by about a third. Our practice time for our SecTor talk was 47 minutes (instead of following the golden rule, we aimed for PT = T – 10). Yet, when we stood up in front of our audience, our presentation was finished in 35 minutes. Had we applied the golden rule, we would have been good.
The other part of this (NS + 5 = T – 10) is really just a means of ensuring you stay on track for the first part (PT * 0.75 = T – 10). We had 30 slides, which fit the rule’s prediction that we would speak for 35 minutes. If we’d had 45 slides, we could have paced ourselves better to aim for that 50-minute mark.
This rule may not be true for everyone, but I’ve found that it’s true for the presentations I’ve given and for the curriculum I’ve developed and taught. The real question is whether or not I’ll ever learn to follow the rule myself.
So… does this rule apply to you?
"How NOT to do Security: Lessons Learned from the Galactic Empire". How could you not want to see this talk after reading that title? This was definitely a talk that jumped out as something I wouldn't want to miss, and day 2's lunch keynote by Kellman Meghu certainly lived up to my expectations. Accompanied by a lunch with more variety of food that appeased my love of tomatoes, the talk, interspersed with clips from Star Wars, was both amusing and informative. The security failings of the Galactic Empire are issues that are still quite relevant to businesses: from removable storage (droids) walking out the door with sensitive data, to the difficulties of explaining security weaknesses to top management (Darth Vader). Star Wars gives a surprisingly accurate and informative look into how small holes in a security policy can lead to catastrophic consequences. A look at a few simple changes rounded out the talk nicely; for example, improved access control so that R2-D2 couldn't have shut down the garbage compactor would have ended the movie right there.
On October 2, there was a talk at SecTor called ‘Controlling BYOD Before It Becomes Your Own Demise’. It brought up three main subjects that should be considered when thinking about adopting a bring-your-own-device (BYOD) policy. You have to consider the risk of having business information on a mobile device that you do not control. What type of policies are you going to enforce, and how are they going to be enforced on a personal device? What type of security can be used with the device?
What happens when a device is lost, especially if the data on the device is important? You have to consider the worst-case scenario: the data you have lost could become public or, at the very least, be sold to a competitor. This leads you to consider what type of policies you will put in place on the devices.
To address all of these risks, we turn to producing policies that protect against data loss. You will then have to consider how much control you have over a user’s personal device. You will also have to consider that applications exist that can monitor and wipe devices along with all of the personal information on them. Example questions that should be asked: should the device be remote wiped or, if possible, can you selectively wipe the business data? Do you limit the applications that the user can install?
Security really goes without saying, as all devices in your company should have a password. A lost device should be remotely wiped, since it could easily fall into the hands of someone who could use that data against you. Also, do you allow personal devices to VPN into the corporate network? Imagine the havoc that could be wreaked if someone gained access to your internal network.
You can’t possibly crack every password combination with modern computing, right? At Steve Werby’s presentation ‘Building Dictionaries and Destroying Hashes Using Amazon EC2’, he demonstrated just how much computing power one has access to and the havoc that can ensue.
The highlights of SecTor 2012 for me were two excellent lunch keynotes. While the day 1 lunch food was a little lackluster, consisting of sandwiches and salad (the mushroom salad was delicious, though), the NFC talk by Charlie Miller was great. I'm not sad at all that my phone doesn't support NFC, especially since there is little you can do to prevent unwanted NFC use short of turning it off completely. The assumption is that since you have to be really close to use it, you're giving permission by proximity. The problem is, getting close enough to exploit NFC takes even less skill than picking a pocket. As was pointed out in the talk, even if you notice someone getting into your personal space, it's likely too late: your phone has already accepted the NFC data.
Here at the SecTor 2012 conference in Toronto, I am looking at the vulnerabilities that time forgot: Jamie Gamble's presentation on vulnerabilities still present 15+ years after their discovery, particularly password issues on Unix and Unix-like systems.
Think that password file is safe when using NIS (Network Information Service) for network authentication? A flaw to take note of exists in NIS where an attacker can simply run ypcat (ypcat passwd.byname) to grab a copy of your /etc/passwd or /etc/shadow file. Similar yp commands can also be used to bypass local security (the “yp” prefix comes from NIS formerly being known as Yellow Pages). You may be running a service like OpenLDAP, and ypcat may not be available for use, but Gamble still warns that "other simple methods exist".
Yesterday, I spoke at SecTor on the subject of VMware ThinApp… specifically, looking at the various isolation modes and how they are affected by run-of-the-mill exploits, like the ones you would see in various exploit kits. While I’ve planned a second blog post to discuss what it was like to speak at SecTor, in this post I wanted to address an idea that I briefly mentioned near the end of the talk.
To discuss this idea, I should first give you a brief summary of our research (the slide deck is available on the Connect Forums). ThinApp allows for three isolation modes, each of which modifies the way the guest application can interact with the host operating system. The results ranged from introducing risk (e.g., IE6 on Windows 7 in ‘merged mode’) to being so secure the application wouldn’t start (Firefox on Windows 7 in ‘full mode’). There is a nice middle ground with Write-Copy and Full mode working together, where the application runs yet malware cannot: the sandbox is fully segmented from the host.
My suggestion near the end of the talk was that VMware should work with popular software vendors (e.g., Google, Microsoft, Mozilla) to offer “ThinApps.com”, a website where software from popular vendors can be downloaded as a secure ThinApp. If I knew the licensing around ThinApps, I’d consider doing this myself, but I don’t… so the safest bet is to get the vendors themselves working together to create this offering.
Right now if you want a secure browsing setup distinct from your host OS, you need to get VMware Player, install an OS, install and configure software, and boot a second OS (very resource intensive) to use that “secure browser”. You also have to remember to revert to your clean snapshot after every browsing session and worry that an individual browsing session could still be owned.
With a correctly configured ThinApp this isn’t the case. When properly locked down, our exploits weren’t successful. While targeted attacks may still be possible, you eliminate the risk of the generic exploits bundled with most exploit kits. You also have a single application to run, so for the home user it’s extremely simple to use. I think we could go a long way to increasing the security of the home user if vendors worked together to make use of VMware ThinApp.
So what do ya say vendors? Who’s in?
In MS11-072 and MS11-050, Microsoft offers multiple patches for SharePoint 2007 and 2010 under different component names. However, they give very little explanation as to what these components actually refer to. The best information can be found in this blog post [link: http://blogs.msdn.com/b/opal/archive/2011/06/30/th
The printf() family of functions (printf(), fprintf(), sprintf(), etc.) is surprisingly powerful and, if not used properly, can expose a class of vulnerabilities called format string attacks. These attacks can be very bad because, with a well-crafted format string, an attacker could write an arbitrary value into an arbitrary memory location. This could allow the attacker to do things like hijack execution or escalate privileges. In this post, I'm going to go over the basics of how this vulnerability works and how it can be corrected.
There have been a lot of very high-profile vulnerabilities over the last several months. They have run the gamut from Internet Explorer to Java. Today, I wanted to talk a bit about the increasingly audible calls for regulation of vulnerability research in our industry.
Do the severity and increased consequences of high-profile vulnerabilities mean that research and disclosure of such information should be regulated? Are we approaching a time when exploits are considered dangerous weapons and treated as such? These questions are being asked both inside and outside the security industry. It's an interesting question, one that can be viewed from several different angles.
Firstly, I think we need to discuss the degree of regulation... what would it look like? Let's assume that, under a new hypothetical regulation, only security researchers could release information related to vulnerabilities. Granted, this is a rather extreme example, but it helps to illustrate the issue. In this situation, how would one define a security researcher? Secondly, who would have the right to the information generated by these researchers? The government? Security companies? How would these regulations help or hurt the people who rely on this information to keep them secure?
Another important point that should be raised when discussing regulation is enforcement. The security industry is a global entity. If our government were to embark upon regulation, how would it be enforced beyond the borders of the United States?
It's a difficult question and one that has many possible answers. I'm not sure I have one myself, so I put the question to you: is regulation something that should be considered and, if so, to what degree?
This is the conclusion of a three-part series exploring how stack buffer overflow vulnerabilities work and what developers can do to protect their code. Read on for a demonstration of how the 'synscan' example program can be exploited to gain a root shell by using BASH environment variables to store and locate shellcode in memory.
Part 1 of 'VERT Vuln School: Stack Buffer Overflows 101' introduced an example program containing a common programming error known as a buffer overflow. Specifically, as outlined in part 1, this program fails to provide bounds checking when processing user input and enables an overflow of user-controlled data onto the stack. This installment of Stack Buffer Overflows 101 answers some of the most common questions pertaining to the stack overflow vulnerability category. By looking at what the stack is and how it is organized in memory, we can begin to understand how unbounded string manipulation can enable system exploitation.