Protect IT Infrastructure, Part II


Securing IT Infrastructure

Vulnerability Management

A vulnerability is a security weakness in a system or service, usually one that can be exploited in some way. Exploitation means an attacker takes advantage of that weakness and gains something in return, usually access to a system or to information they shouldn't have. Vulnerability assessment and management, as a practice, is a systematic way to find these weaknesses; once they have been identified, they have to be remediated. If a company runs a hundred computers connected to the internet, for example, all of them need to be added to the vulnerability management process and then scanned periodically to find out whether there are vulnerabilities that should be fixed.

Vulnerability management should not be done once; it should be more or less continuous. The cybersecurity manager should send a strong message to management: if the company doesn't look for its own vulnerabilities, someone with a darker agenda will, with potentially harmful consequences.

Run vulnerability management as often as you can to catch new weaknesses in time, before attackers find them!

Vulnerability management can be internal or external, and it can be passive or active. Active vulnerability management means scanning networks, sending out queries and packets, and finding out what systems, services, and weaknesses are out there. Passive means placing something on your gateway to listen to the traffic that passes through, then deducing from that traffic whether a service is vulnerable. A combination of both can be helpful.
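
To make the distinction concrete, here is a minimal sketch of the "active" approach, assuming the host addresses and port list are placeholders for systems you are authorised to scan. It only checks whether a few common ports accept TCP connections; a real scanner goes much further (service fingerprinting, matching versions against vulnerability databases), but the principle is the same.

    import socket

    # Hypothetical in-scope systems and a handful of commonly exposed ports.
    HOSTS = ["192.0.2.10", "192.0.2.11"]
    PORTS = [22, 80, 443, 3389]

    def is_open(host, port, timeout=1.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host in HOSTS:
        open_ports = [port for port in PORTS if is_open(host, port)]
        print(f"{host}: open ports {open_ports}")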

Companies would be wise to at least do vulnerability management on any of their external services that are visible from the internet. That’s the bare minimum.

Let's say a company runs a website and ten more services that are available over the internet. It's quite easy to obtain scanning as a service from the cloud, buy it from a consultancy, or do it in-house, but those services should be scanned continuously. If any new vulnerabilities are found, the company is alerted immediately and can fix them. Most cybersecurity managers are already doing these things or requiring them. If not, they're ignorant of obvious security risks.

Companies shouldn't forget that scanning the network with vulnerability solutions periodically has its limits. Most weaknesses nowadays are in web applications used through a browser. These are essentially programmes that run partly on the client's computer, and quite often the client is the attacker. Problems in web applications can't yet be reliably identified or fixed by automated scanner tools. Hence, we've seen a lot of high-profile data breaches when web applications get hacked; Facebook, for example, had more than 50 million user accounts compromised because of a small mistake in one of its many applications. There are other approaches that try to remedy this weakness in scanner services. Some solutions install a small monitoring agent on the web application server itself, which holds the promise of detecting and even preventing attacks better than active scanners can. This approach isn't yet mainstream, but it somewhat complements what scanners lack. Regardless, a cybersecurity manager would be wise to find web application weaknesses even if it means buying penetration testing against all of his internet-connected applications.
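
As a hypothetical illustration of the kind of application-level flaw a network scanner rarely sees, the sketch below contrasts a query built by string concatenation with a parameterised one, using an in-memory SQLite table invented for the example.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    def find_user_unsafe(name):
        # Vulnerable: user input is concatenated straight into the SQL statement,
        # so input like "x' OR '1'='1" changes the meaning of the query.
        return conn.execute(
            "SELECT * FROM users WHERE name = '" + name + "'"
        ).fetchall()

    def find_user_safe(name):
        # Parameterised query: the input is treated as data, never as SQL.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (name,)
        ).fetchall()

    print(find_user_unsafe("x' OR '1'='1"))  # returns every row despite the bogus name
    print(find_user_safe("x' OR '1'='1"))    # returns nothing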

Access Management

Companies should have a way to grant, change, and remove people's access, usually access to systems and information. A variety of options are available, but most companies today still rely on a centralised control such as Microsoft Active Directory.

Active Directory has always been relatively easy to use: make a change in Active Directory, reset a user's password, or create a user account, and everything is there. Now it's getting more complex. We have mobile devices that need to be added to a mobile device management platform, perhaps provided by Microsoft or a third party. There are Linux and Unix servers on the network that don't speak Microsoft, and web applications and web shops that aren't integrated with the same credentials people use on their computers.

At the same time, using this type of control is becoming easier because of web standards for authentication. Federation of identities between different web services means we can create identities in our own network with good old Microsoft Active Directory and synchronise them with cloud services automatically. Companies use many cloud services, such as Microsoft Office 365, Gmail, or G-Suite for Business. Federation lets them use all of those services with the same credentials, which offers a centralised way to synchronise user access to third-party services in the cloud. Once a user is created in the AD domain, access to the company's cloud service providers and more becomes a breeze, depending on how they are integrated. Building this takes some effort from IT, but it will probably be worth it in the long run. Cloud adoption was still in its infancy in 2018, and we expect nearly everything to move to the cloud in the near future.
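
As a rough sketch of what a federated login looks like from a relying service's point of view, the snippet below validates an OIDC ID token issued by an identity provider. The issuer URL, audience, and token are placeholders, and PyJWT is only one of several libraries that can do this.

    # pip install pyjwt[crypto] -- a sketch only; issuer, audience, and token are placeholders.
    import jwt
    from jwt import PyJWKClient

    ISSUER = "https://idp.example.com"            # hypothetical identity provider
    AUDIENCE = "https://app.example.com"          # hypothetical relying service
    JWKS_URL = ISSUER + "/.well-known/jwks.json"  # where the IdP publishes its signing keys

    def validate_id_token(token: str) -> dict:
        """Verify the token's signature and claims; return the identity claims if valid."""
        signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
        return jwt.decode(
            token,
            signing_key.key,
            algorithms=["RS256"],
            audience=AUDIENCE,
            issuer=ISSUER,
        )

    # claims = validate_id_token(raw_token)
    # print(claims["sub"], claims.get("email"))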

Federation offers a lot of efficiency because there's less to manage, but it's also an enormous opportunity for hackers. Centralised access means that someone who hacks in, gains access to one of those cloud service providers, or steals the password can log straight in to all of those services. There are no additional checks. It's a trade-off that most organisations make gladly.

Authentication

Authentication is related to access management, but it’s a separate subject. Authentication covers the techniques and technologies companies use to ensure that people are who they claim to be. This is what makes it possible to have confidential information online. Without effective authentication, just about anybody could access a company’s information.

To authenticate, we first have to identify the person trying to use the system. Typically, identification is when the user gives a username or another identifier, such as a mobile phone number. After the user announces his identity, authentication's job is to verify that the person is who he claims to be: the user provides a piece of information that proves the claim.

There are many levels and kinds of authentication, including passwords and two-factor authentication (2FA), which requires a second factor to verify the identity claim. It might be an SMS message to a registered phone number, a PIN token, a software certificate, or something similar. In some instances it might be biometrics or a smart chip.
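
For illustration, here is a minimal sketch of one common second factor, a time-based one-time password (TOTP), using the pyotp library. The secret below is generated on the spot purely as an example; in practice it would be created per user and provisioned into their authenticator app.

    # pip install pyotp -- a sketch of a time-based one-time password (TOTP) second factor.
    import pyotp

    # In practice a random secret is generated per user and shared with their
    # authenticator app (for example via a QR code). This value is just an example.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    code = totp.now()                      # the six-digit code the user reads from their app
    print("Current code:", code)
    print("Verified:", totp.verify(code))  # the server-side check of the submitted code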

The level of authentication of identity depends on the company’s needs. The best practice in authentication is to choose a level of security that’s consistent with the values being protected. At a web bank where people are transferring money in and out, it might be a good idea to have two-factor authentication and various other security measures in place. This makes it inconvenient for people who use the application—they have to manage authentication devices, SMS tokens, and those sorts of things—but it’s worth the hassle for the added security. A web shop selling T-shirts, however, might not feel it’s necessary to verify much about the users. They might not even care about the customer’s identity, as long as they get paid.

The biggest pitfall for authentication is when security people try to make everything ultrasecure without thinking about usability. It should be a balance of the two priorities.

Security people should understand a wide variety of authentication technologies and methods so they can pick the most suitable one for each situation, rather than proceeding with a one-solution-fits-all mindset.

In the past, companies tried to solve the problem of access management with Single Sign-On (SSO) and related technologies. The goal of SSO is to put everything behind one login so that users don't need to worry about more than one password. That one password is then stored somewhere, and everyone hopes it remains secure.

SSO is a good idea, and it sometimes works quite well. But companies should also remember that it creates a "one password to rule them all" sort of risk. If the one password that secures all the others is lost or stolen, everything behind it is at risk.

Psychological consistency is also important when dealing with access management. We've seen multiple cases where corporations enforce a password policy in their Active Directory but totally ignore it elsewhere. In the best cases, the AD password is mighty strong, synchronised, and enforced across all workstations, servers, and even some services in the cloud; maybe it's a magnificent password with twenty characters. But at the same time, they're offering internet services to their customers and allowing any kind of password there. Quite often, their employees are using unintegrated third-party services or old systems that aren't consistent with the password requirements either. What kind of story are we, as cybersecurity managers, telling them about passwords? When users look around and see what others are doing with their passwords, they see social proof that security management is fine with poor passwords in many places, just not in the AD. The perception turns against security; it seems like a nuisance because of the inconsistency. Users ask themselves, "Why should I use a mighty hard password here when, everywhere else, it's so easy?" Try to be consistent.
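
One practical way to keep the story consistent is to apply the same minimum password rules in every system you control, including customer-facing ones. The sketch below shows a single shared policy check; the specific rules are only an example for illustration, not a recommendation.

    # A sketch of one password-policy check reused everywhere (AD provisioning
    # scripts, the customer web shop, internal tools). The rules are examples only.
    MIN_LENGTH = 12

    def meets_policy(password: str) -> bool:
        """Return True if the password satisfies the shared minimum policy."""
        if len(password) < MIN_LENGTH:
            return False
        has_lower = any(c.islower() for c in password)
        has_upper = any(c.isupper() for c in password)
        has_digit = any(c.isdigit() for c in password)
        return has_lower and has_upper and has_digit

    print(meets_policy("summer2024"))          # False: too short, no upper-case letter
    print(meets_policy("Margarita-Pizza-42"))  # True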

Antivirus

Around the end of the 1980s, antivirus software emerged as one of the first protective programmes for computers. Traditionally, antivirus was based on signatures. The idea was to build a list of all the malicious software in the world, scan each computer at startup for any signs of it on the hard drive, stop the harmful programmes, raise an alarm, and even remove them if possible. The obvious weakness of that approach is that it's impossible to maintain a comprehensive list of all the bad things in the world; someone can always figure out how to create one more bad thing.
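
As a toy illustration of the signature idea, the sketch below hashes files and compares them against a small blocklist of known-bad SHA-256 values; the hash and directory are placeholders. Real products use far richer signatures than whole-file hashes, which is exactly why trivial modifications defeat them.

    import hashlib
    from pathlib import Path

    # Placeholder "signature database"; a real product ships millions of signatures.
    KNOWN_BAD_SHA256 = {
        "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
    }

    def sha256_of(path: Path) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def scan_directory(directory: str) -> list:
        """Return files whose hash matches a known-bad signature."""
        return [
            path for path in Path(directory).rglob("*")
            if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256
        ]

    print(scan_directory("."))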

Today, antivirus companies have implemented something called heuristics and behavioural modelling—even reputational modelling. It sounds fancy, and it works to an extent. Of course, the hacker has the benefit of figuring out one more bad thing that works differently, and they do. It’s a cat and mouse game, where the mouse is always a few steps ahead.

Any experienced security manager knows not to rely on antivirus software too much. We've done a lot of antivirus penetration tests, which means we create malware that's supposed to get through and install itself undetected. Our success rate is high: 96 to 98 percent of the time, we can bypass the antivirus protection. If we can do it, and we aren't virus writers by profession, almost anybody can. In practice, all a knowledgeable programmer needs is a text editor, a compiler, and maybe half an hour, and he will be able to modify almost any malware variant to be undetectable by almost any antivirus programme.

Having acknowledged that antivirus approaches have always had their weaknesses and flaws, we still think it's necessary to run one. If you face a thousand attempts, antivirus will block a major portion of them: it might catch 920 identifiable incidents out of 1,000, leaving 80 infections undetected. Antivirus is not 100 percent secure and never will be, but it cuts down the numbers. It's what doesn't get caught by the antivirus that you should be hunting for. This is why we noted earlier that companies would be wise to consider workstation networks already compromised, even hostile to other networks. Consider the previous example of eighty undetected infections: we don't know anyone in IT or cybersecurity who's comfortable with that. The numbers may differ, but the basic idea is a hard, indisputable fact.

Beyond Antivirus

Some vendors sell solutions that claim to identify malware that antivirus can't, such as sophisticated hacking attempts directed at high-value targets. Take these claims with a grain of salt: the products take real effort to buy, implement, and manage, and there is still little evidence that they deliver the value promised. It might be a good idea to try them out on a demo or a free trial, but the question remains: how do you verify their claims? They are looking for an invisible threat that antivirus doesn't detect, so there's no way to be sure they're doing anything at all; even if they provide a sample virus, it's probably a sample they know will be caught. Some approaches involve sandboxing applications, or portions of the computer, into secured envelopes that are monitored, and many attacks are stopped on the fly. The approach has shown promise, but it doesn't address the real problems: the gullibility of people and the security flaws in systems and applications themselves. Nevertheless, it can be an additional layer of defence and will probably stop some attacks that would otherwise succeed.

Layered Anti-Spam and Anti-Malware

If you want to be efficient, get your first layer of malware and spam protection from the cloud. Then do the same checks at the workstation level, but with different products and possibly even a different approach. This shouldn't be overly expensive and provides a fair level of protection against simple attacks.

Security Event Collection and Log Management

In an ideal world, companies would have a centralised service where they store all the event logs of all of their systems, including anything related to security. All the entries would be pushed or pulled into one centralised location, where they would be data mined, correlated, and stored efficiently. Then, out of that big mass of data, machines would automatically detect anomalies with fancy algorithms. That's the dream.

Of course, this is easier said than done. The cybersecurity manager will have to answer a lot of questions before deciding on the right solution: What if it's a global company spanning twelve or more time zones? All of the events across the globe are pushed to this central system, so which time is recorded when an entry arrives? If the company runs its own time service, what happens if it's a couple of hours off? Can these events still be correlated afterwards? Do the systems provide a way to log entries or transfer those logs away from those systems? How are those logging features enabled? Which systems should be covered?
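
The time-zone question alone deserves care: events are much easier to correlate if every record is normalised to UTC on arrival. A minimal sketch, assuming log entries carry an ISO 8601 timestamp with an offset:

    from datetime import datetime, timezone

    def normalise_timestamp(raw: str) -> str:
        """Parse an ISO 8601 timestamp with an offset and return it in UTC."""
        parsed = datetime.fromisoformat(raw)
        return parsed.astimezone(timezone.utc).isoformat()

    # An event logged in Singapore (UTC+8) and one logged in Helsinki (UTC+2)
    # become directly comparable once both are expressed in UTC.
    print(normalise_timestamp("2018-11-05T09:30:00+08:00"))  # 2018-11-05T01:30:00+00:00
    print(normalise_timestamp("2018-11-05T03:15:00+02:00"))  # 2018-11-05T01:15:00+00:00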

Event collection requires recording events in a log and then transferring them somewhere. If someone disables logging on a system, or silently removes some entries, the centralised log management system is no longer any good for detecting anything on that system; there will be no meaningful events to transfer. And if an attacker gains access by a means that isn't logged at all, which is usually the case, nothing will show up in the centralised system even with all logging enabled. A lot of cyberattacks and exploitation techniques leave no log entries because they don't involve running software that generates logs. After the exploitation, the hacker has access to the system and can disable the logging, change the logs, remove entries, or add false ones. Even when some logs are generated, the security team will most likely miss them because of "alert fatigue": they deal with thousands of alerts every week or month, which desensitises them to the real ones.

Log management is a good tool for investigating all kinds of problems and security incidents and, even more importantly, almost any IT problem in general.

Companies that buy log management solutions have to trust that they will let the company analyse security incidents after they happen and detect when something is happening. In reality, some things will be identifiable, but not most; a skilful hacker doesn't create a log entry. Antivirus won't detect the hack, the logs won't say anything, and yet the system will be compromised.

Security shouldn't be the only reason for acquiring one of these event collection and log management systems. It should be purchased for manageability and problem resolution in IT, but the business benefit should be the main selling point, not the security benefit. These event collection and log management systems don't make the greatest difference in security.

Security Information and Event Management

Log management can evolve into security information and event management. With log entries, threat information, and other sources in the network listed and stored in one place, companies can correlate that information with wisdom and artificial intelligence. That's security information and event management; it sounds powerful, and it can be.

These systems make sense for large organisations that run security operations centres. Service providers that sell cybersecurity services and continuous services to other companies have good reason to invest in one of these. They could integrate all of the different sources of information into this one system: vulnerability management and network scanning, firewall traffic, and intrusion detection. Quite often, companies subscribe to external threat feeds as well, which provide external threat intelligence. This would give them as much information as possible from different sources, then let the intelligent machine decide which of those alerts constitutes a risk.

For many Security Information and Event Management (SIEM) solutions, the promise goes like this: first, a hacker on the internet scans the company's network. The SIEM solution identifies a network scan at the network edge based on a firewall log entry. Using external threat feed data, it could also notice that the attacker's IP address belongs to a known bad actor; it would be possible to raise an alarm on this information alone. Usually, the hacker would then launch an exploit against a vulnerability found on a server, and the network intrusion detection system would log an exploitation attempt to the SIEM, perhaps with a higher priority. So first there was a scan from a known bad actor, and now there's an exploitation attempt.

Then, staff could go back to the SIEM's database and cross-check whether the system in question was vulnerable and whether a known vulnerability matched that exploit; depending on the capabilities of the SIEM solution, this step could even be fully automated. If there's a match, the incident would be given a very high priority, because a vulnerability was exploited and the attack was probably successful.
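
A heavily simplified sketch of that correlation logic is shown below. The event fields, the threat-feed set, and the vulnerability inventory are all invented for illustration; a real SIEM rule engine is far more elaborate.

    # Hypothetical inputs: a threat feed of known-bad IPs and an inventory of
    # which vulnerabilities each internal system is known to carry.
    KNOWN_BAD_IPS = {"203.0.113.7"}
    VULNERABLE_SYSTEMS = {"web01": {"CVE-2017-5638"}}

    def prioritise(event: dict) -> str:
        """Assign a priority to a single event using simple correlation rules."""
        score = 0
        if event["src_ip"] in KNOWN_BAD_IPS:
            score += 1                           # traffic from a known bad actor
        if event["type"] == "exploit_attempt":
            score += 1                           # IDS saw an exploitation attempt
            if event.get("cve") in VULNERABLE_SYSTEMS.get(event["dest_host"], set()):
                score += 2                       # the target is actually vulnerable to it
        return {0: "info", 1: "low", 2: "medium"}.get(score, "critical")

    event = {"src_ip": "203.0.113.7", "dest_host": "web01",
             "type": "exploit_attempt", "cve": "CVE-2017-5638"}
    print(prioritise(event))  # critical: bad actor + exploit attempt + matching vulnerability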

That story is fine, and sometimes things do play out like this, just as SIEM vendor presentations tell it. In reality, multiple factors mess up that pretty picture.

The signal-to-noise ratio between real alerts and false positives in SIEM systems is typically so poor that the systems are often unable to deliver the value that vendors promise. Quite often, real attacks are lost like a needle in a haystack. In most cases, the systems aren't able to do the correlation with the precision and efficiency that the vendors promise.

Lots of false alerts mean there has to be an expensive team managing them very actively, all the time. The data sources need a lot of pruning, rules need to be created and modified, and so on. If you do not have the resources to run something like that, don't buy it. If you plan to run a SOC, bear in mind that you may pay a steep price for a meagre return.

Threat Intelligence Solutions

SIEM and threat intelligence solutions go hand in hand nicely. An internal SIEM solution can only correlate events that are internal to the company's infrastructure and whatever traffic passes through it. Many companies therefore subscribe to a number of threat intelligence feeds. These offerings generally fall into two distinct categories:

1. Cyber intelligence solutions
2. Threat feed solutions

A cybersecurity manager should be able to distinguish between these. A threat feed contains Indicators of Compromise, or IoCs, which are basically technical fingerprints of known bad actors or their techniques, tactics, and procedures. A typical IoC could be the hash value of a known malicious file, or an IP address belonging to a known bad actor. These feeds can enrich an internal SIEM by adding an external view to what the system is able to correlate. The drawback is that the signal-to-noise ratio still tends to be poor, and SOC teams will see a lot of false positive alerts.
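
To make "enriching the SIEM with a feed" concrete, here is a small sketch that tags incoming events when any of their observables (source IP, domain, file hash) match an IoC list. The feed contents and event format are invented placeholders.

    # Hypothetical threat feed, keyed by IoC type. The values are placeholders.
    THREAT_FEED = {
        "ip": {"198.51.100.23"},
        "domain": {"bad-updates.example.net"},
        "sha256": {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
    }

    def enrich(event: dict) -> dict:
        """Return the event with a list of matched IoC types attached."""
        matches = [
            ioc_type
            for ioc_type, values in THREAT_FEED.items()
            if event.get(ioc_type) in values
        ]
        return {**event, "ioc_matches": matches}

    event = {"src_host": "ws-042", "ip": "198.51.100.23",
             "domain": "bad-updates.example.net", "sha256": None}
    print(enrich(event))  # ioc_matches: ['ip', 'domain']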

Cyber intelligence solutions can also be integrated into a SIEM, with the distinction that their data is usually enriched by analysis before it is sent to the customer for consumption. Hence, these solutions have a naturally low rate of false positives, and the information they send tends to be more reliable than raw technical IoC feed data. A good cyber intelligence solution can provide actionable alerts without further correlation against other events in the SIEM before SOC analysts are able to act on the information. This provides a remedy to the alert fatigue problem that is plaguing the cybersecurity industry.

Think of it this way: all cyber intelligence needs to be collected and analysed before it becomes consumable and actionable, before anyone can do anything about it. A cybersecurity manager would be wise to externalise some of that burden to a service provider if he can't have it done properly in-house. Most companies don't have that capability and won't want to invest in it.

Assigning Analysis

A core question in cyber threat and intelligence services is this: who does the analysis before the information can be consumed? The answer tells you who has to employ the staff to do it, and that takes a budget most companies can't bear. If that's your situation, go for an intelligence solution instead of increasing your own alert fatigue.

Gateway Protection

Intrusion detection systems (IDS) are an old idea, in theory very similar to antivirus. The concept is that an alert should be raised if an attack passing over the network has an identifiable fingerprint or behaves in a suspicious way. It's based on the idea that there's a certain number of bad things that can be identified on the fly and that you can make a list of them all. As with antivirus, it's nearly impossible to list everything; something will be missed. Likewise, the behavioural approaches are typically far from 100 percent effective and add to the alert fatigue that already burdens SOC teams.

IDS comes in two variants: network-based and host-based. Network-based IDS (NIDS) listens to network traffic, usually at a choke point such as one side of a firewall. The other variant, host-based IDS (HIDS), is software or rules applied to a single system, such as a server. NIDS listens to network traffic and raises alarms, while HIDS monitors the integrity of the system and watches for signs of something bad happening inside the system itself.

The focus of HIDS is usually to monitor important changes within a system. Let's say a company runs a Unix server, and the HIDS on that server notices that binaries, or executable files, are changing. An alert is raised that a change was detected, but not necessarily where it came from: maybe someone internal updated the binaries to a newer version, or maybe a hacker signed in and is replacing executables with malicious ones.
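
A stripped-down sketch of that integrity-monitoring idea: record a baseline of file hashes once, then compare later snapshots against it. The watched directory is a placeholder, and real HIDS products track much more (permissions, users, kernel modules, and so on).

    import hashlib
    import json
    from pathlib import Path

    WATCHED_DIR = "/usr/local/bin"   # placeholder: directory whose binaries we watch
    BASELINE_FILE = "baseline.json"

    def snapshot(directory: str) -> dict:
        """Map each file path to the SHA-256 of its contents."""
        return {
            str(path): hashlib.sha256(path.read_bytes()).hexdigest()
            for path in Path(directory).rglob("*") if path.is_file()
        }

    def save_baseline():
        Path(BASELINE_FILE).write_text(json.dumps(snapshot(WATCHED_DIR)))

    def check_against_baseline() -> list:
        """Return files that were added, removed, or modified since the baseline."""
        baseline = json.loads(Path(BASELINE_FILE).read_text())
        current = snapshot(WATCHED_DIR)
        changed = [p for p in current if baseline.get(p) != current[p]]
        removed = [p for p in baseline if p not in current]
        return changed + removed

    # save_baseline()                  # run once on a known-good system
    # print(check_against_baseline())  # run periodically; alert on any output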

The obvious pitfall of these detection systems is that they create a lot of false alarms, and there's also a high chance that a real attack won't be detected at all. In practice, they work like an alarm bell that doesn't always ring, and when it does, there's usually nothing to worry about. Is that really a good alarm? If it were a fire alarm, you'd probably throw it away after a week. The technology isn't necessarily bad, but it has its limitations and requires a lot of pruning and maintenance.

Backups

We've talked a lot about backups already, and our advice is simple: do them. Do them first, test them, and do them again. Have a process in place that ensures they actually work. If you don't do anything else in security, do this. Don't even have a firewall if you don't have the money, but have backups. Take the backups offline at least once in a while.

Online backup has become very popular. Everybody has an iPhone or Android, and if you take a photo, after a minute it will be in the cloud, backed up. Companies should look for similar solutions that are equally easy for the user: automatic data transfer, low maintenance, and an adequate level of security. It might not go to Google or Apple's cloud, but there are separate solutions that give you the promise of online backup with encryption, security, and central management. Companies should look into this because almost everyone has already adopted the idea of online backups through personal experience with mobile devices. There's no adoption curve.

A good online backup service gives you data that is off-site and immediately available for restore when it's needed. Many services offer encryption, user management, and other security features that should make them easy for most businesses to adopt. They also provide some protection against the ransomware attacks that are commonplace nowadays.
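
As a small, generic sketch of the "do them, then test them" advice, the snippet below creates a timestamped archive and verifies it can be read back. The paths are placeholders, and this is no substitute for a managed, off-site and periodically offline backup routine.

    import tarfile
    from datetime import datetime, timezone
    from pathlib import Path

    SOURCE_DIR = "/var/www"          # placeholder: data worth protecting
    BACKUP_DIR = Path("/backups")    # placeholder: ideally replicated off-site

    def create_backup() -> Path:
        """Write a timestamped .tar.gz archive of SOURCE_DIR and return its path."""
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        archive = BACKUP_DIR / f"backup-{stamp}.tar.gz"
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(SOURCE_DIR, arcname=Path(SOURCE_DIR).name)
        return archive

    def verify_backup(archive: Path) -> int:
        """Open the archive and count its members; a crude 'does it restore' check."""
        with tarfile.open(archive, "r:gz") as tar:
            return len(tar.getmembers())

    # archive = create_backup()
    # print(verify_backup(archive), "files in", archive)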

Vulnerability Blind Spots

We are constantly surprised at how organisations fail to recognise their security blind spots. Sometimes these blind spots are technical in nature; other times, they are failures in physical security or access control. Here's a real-world example.

We worked with a major European company in the real estate and facilities construction business. They are one of those entities considered critical infrastructure in their country. They had hundreds of physical locations and huge buildings, and most of their access control systems were online, connected to the internet.

The servers that controlled the doors were grey metal boxes screwed to the walls of a dusty room and essentially forgotten. The company had these in most of its buildings, usually locked away somewhere in the basement. The boxes were connected to the internet by routers managed by the service provider who had installed the system. Interestingly, but not surprisingly, most of the routers had been installed in 1999 and were actually managed by no one. These boxes are quickly forgotten by everyone, because all anybody cares about is that access to the buildings works. We inspected some of the routers and found they were riddled with vulnerabilities that could be exploited from outside the network. Anyone on the internet could find the system, scan it, and see that it was vulnerable to all kinds of known exploits.

Of course, someone did exploit one of those router vulnerabilities, gained access to the company's building access control systems, and used it to their advantage. Remarkably, before the attack, IT had no idea that this was a risk.

The worst part was that even after they were exploited, they still didn't consider it a serious problem. This was surprising, especially considering they had a law enforcement agency in one of their facilities. Despite all that, they probably still have the same vulnerable routers in use today.

The point of this story is that even the biggest and most forward-thinking companies in the world have security blind spots and vulnerabilities that they don't know about, or they do know about them but fail to fix the problem. That's why it's important for every cybersecurity manager to follow the steps and guidelines spelled out in our articles and our book 'Smiling Security'. By adhering to these suggestions, security blind spots can be uncovered and addressed, ideally before a breach or an attack happens.
