HR & Privacy
When hiring a new employee, most companies fail to do the most basic security checks. They usually don’t even check the applicant’s official ID. So really, the person showing up for work could be anybody.
Most companies also fail to verify the applicant’s claims. Many, if not most, of the CVs that companies receive don’t accurately reflect the truth. They aren’t necessarily fake, and there’s some truth to most of them, but CVs sometimes exaggerate or misrepresent a candidate’s experiences.
For example, we’ve hired people who told us they were experts in information systems and certified in operating systems and certain technologies. In reality, they didn’t have those skills. This happens frequently, and it’s not just an HR issue; it might be a security issue, and most definitely, it’s an issue about money. If an applicant is fabricating work experience, what else are they lying about? How much money does a company lose just hiring and firing such people? That’s a huge financial cost too, yet it’s rarely treated as a security issue. But it’s the same company’s money that pays that bill!
If HR managers don’t perform their due diligence at the start of an employment contract, they might end up with questionable people working at the company. So it is essential for HR to conduct background checks consistently on every new hire. Many companies just fail at this outright. They don’t even do background checks for security positions.
Failing to do background checks doesn’t make much sense. It’s relatively inexpensive to run a check, and yet the cost of hiring the wrong person or a person with a criminal background is quite high, sometimes catastrophic. It’s so much easier to hire the right person from the outset. HR is critical for getting the right people and avoiding the wrong people.
Find a professional service provider who can do background checks for your HR. Security screening is something you can’t do in-house.
HR also fires people, and if they don’t do it the right way, that can be a huge risk to the company too. Many employees who leave a company take information with them illegally when they go. That might come as a surprise to some people, but it happens a lot, especially in sales organisations. A salesperson’s value depends largely on who they know, so the next employer might value them more if they bring an address book of contacts. Realising it might give them an advantage, some salespeople try to take the sales database with them when they leave a company, secretly of course. There’s a fine line here. If you have a business relationship with a customer, no one can expect you never to contact them again in a new role, but taking the whole sales database without permission is illegal. This risk can sometimes be mitigated with non-compete clauses in employment contracts. But in reality, it happens a lot, even with those clauses in place.
We had a CEO contact us and ask, “Can we track down whether someone has opened our sales database files and made a copy of them?” This happened after they fired a salesperson, and suddenly their customers started getting calls from that salesperson’s new employer. It was well timed and looked very suspicious, but in the end, there was no proof. The CEO wanted evidence of data theft. The problem is that most companies forget to enable the file audit logging feature that records who opened or copied a file. No log, no evidence. Logs can’t be generated retroactively.
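The principle behind "no log, no evidence" is easy to illustrate: access records only exist if they are written at the moment the file is touched. Real deployments would use the operating system's own auditing (Windows file auditing, Linux auditd), but a minimal sketch of the idea in Python, with a hypothetical `audited_open` wrapper and log location, looks like this:

```python
import getpass
import json
import time

AUDIT_LOG = "file_audit.log"  # hypothetical location for the audit trail


def audited_open(path, mode="r"):
    """Open a file, but first append who/what/when to an audit log."""
    record = {
        "user": getpass.getuser(),  # OS-level account name
        "path": path,
        "mode": mode,
        "time": time.time(),
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(record) + "\n")
    return open(path, mode)


# Every access to the sales database now leaves a trace that can be
# reviewed later -- which is exactly what the CEO above was missing.
with audited_open("sales_db.csv", "w") as f:
    f.write("customer,phone\n")
```

The point is not this particular wrapper but the timing: the record is created at access time, so when the logging feature is off, there is nothing to reconstruct afterwards.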
It’s not just salespeople either. Maybe a system administrator left the company with all the data about the employer, like passwords and keys to information systems. Our advice is to be very nice to system administrators when you fire them. They can wield enormous power in terms of access to information and systems. Treat them well, even during the hard times.
HR plays another role in security efforts when they’re involved in complying with data privacy legislation. This area is quite new. The titles of privacy officer and data privacy officer have only started to appear in the last few years, as new legislation has begun to mandate customer and employee privacy rules. HR used to be the lead in this area, but now there are managers with titles like Chief Data Protection Officer. Whatever the titles, HR is still at the heart of this because HR deals with employees’ private personal data. Sometimes they are also needed as in-house experts when dealing with customers’ personal data.
The HR director is also connected to a lot of different security policies and will know the company’s requirements about privacy as well as the local privacy legislation. They’re also key players in dealing with union representatives. If a cybersecurity manager wants to write an acceptable use policy, they’ll have to get pretty deep into people’s daily work and talk with union representatives, so they’ll need to work with HR on both fronts. Let’s say the security department wants to install a CCTV system to monitor areas where people work, or track computer use in certain security-related cases. People usually feel such measures are invasive or violate their privacy. The best way to seek acceptance is to go through HR, then talk to union representatives together and plan the necessary improvements with them. Early involvement is the key.
Because HR managers have a high level of security awareness, most cybersecurity managers are comfortable working with them. HR knows how to protect personally identifiable information, or PII. When they leave their offices, HR people usually take a lot of care to lock their doors, put all the papers in drawers and lock them, lock their screens and computers, and take other physical security measures. Many of these security and privacy requirements are mandated by law. If HR makes a mistake, they might be liable for it, and they know it.
Threats in HR and Privacy
Data breaches and leaked personal information are the clear threats related to HR and privacy. Typical cases include stolen personal IDs or social security numbers. Stolen private data might also include street address, name, phone number, and credit card number, which is enough to conduct many types of fraud on someone. Nowadays, it’s becoming very common for health institutions to be breached, with all of the above data stolen along with patient health records.
The simplest form of identity theft is someone stealing personal information and using it for phishing. There’s much more, though. Some people’s identities have been used to buy or sell a home or to shop online. It’s easy enough—some online shops allow customers to make purchases with post-payments, so all a hacker needs is a full set of information about a person’s identity, and they can make a purchase and send it to a false address, leaving the victim to pay the bill.
Identity theft, as horrible as it is, is seldom personal. Hackers rarely try to target somebody personally as a vendetta. It’s all for money, and there are a million targets out there. The only limiting factor is that attackers don’t have the resources to attack everybody at the same time. The one who is targeted is just unlucky. And these unlucky guys come in large quantities. PII is one of the hard currencies that cyber criminals use in their trade.
Once personal information gets out, there’s not much a victim can do. We recently heard about a case where someone in Finland was involved in a data breach. Back in 2012, his information was breached along with that of nine thousand others. You might think that information from 2012 was too old to be useful anymore, so what’s the big deal if the information was leaked? That may be true for a phone number or an address, but some private information doesn’t change, like a person’s social security number or their name. In 2018 this victim found out someone was fraudulently buying things online under his name, six years later! Once it’s out, it’s out.
Theft of personal information obviously leads to a loss of consumer trust, and it should be a major concern for companies that have a lot of B2C consumers. It’s getting worse; one recent data breach contained 1.4 billion user passwords. This trend has led countries across the globe to enact new laws about data breaches and spreading information with malicious intent. It’s going to take time for these new laws to become effective; however, the criminals won’t wait. They’ll do it anyway, with or without the law.
Companies can spend a fortune to make IT and technology virtually impenetrable, but the bottom line is that the people involved need to develop their security awareness. We have data that shows the biggest thing companies can do to improve security is increase awareness. Otherwise, someone is going to click the wrong link or open an attachment or do something silly. And there’s no patch for human stupidity.
Companies usually perceive themselves as better prepared against these sorts of threats than they actually are. Employees often think, “Because I have a computer in front of me and I can do anything I want with that computer, then I can always avoid clicking a suspicious link or opening an unknown attachment. Because I can do it, I deduce that everybody else in the company will do the same.” It’s not true. Most people don’t realise when they are being influenced or tricked. Human nature is one of the most difficult problems in security, and it always will be.
Look into SET (Social Engineering Toolkit).
This simple tool demonstrates how easily hackers can fool people by forging messages and web sites and making them give up their passwords. And much, much more.
To protect private data, cybersecurity managers should find out where the company’s customer data and employee data are stored. These kinds of data have a lot of similarities; they’re usually linked to individuals, such as employee names and details, customer names and details, or partner names and details. The business might not even be aware that they have this data. They might focus on the web shop, sales, or operational processes, but they often forget that the data has a lot of value both for businesses and for criminals. Often, companies think they just need to secure the payment process, credit card information, and the application. Sorry, that’s not all of it. The customer data within the application and the servers where the data is stored are also important to secure. Sometimes that data goes silently into log files without anyone realising it. When logs are stolen, all the data goes out too, even if it’s not in the customer database anymore.
The cybersecurity manager should cooperate with HR to find out where the data is stored, then figure out if it’s stored securely enough. Are there risks that still need to be addressed? This usually leads to a list of servers, services, IT services, and a description of what sort of data is in each of those systems. Then the cybersecurity manager must meet with HR and IT and have a discussion about how to handle securing that data without stressing the budget.
Securing Personal Data
Companies usually decide that they’ve got two important types of PII data to protect: customer data and employee data. Employee data is straightforward—HR will be responsible for securing records in the salary and payroll systems. HR may need to do encryption or test the HR system’s security. It shouldn’t be a problem for the cybersecurity manager to insist on this; most HR staff will be happy to do it because they know how important it is to protect that data.
Customer data is more difficult to secure, not so much from a technical angle but because the organisational roles aren’t clear about who owns it. It’s not really IT’s data. IT is just running the systems. They can secure it, but they’ll look for an additional budget to do so. This is when the cybersecurity manager goes to senior management—maybe the leader of that business operation unit. The cybersecurity manager will say, “We’ve got this service that contains a hundred million user records of our customers, and I’d like it security tested and audited.” The manager will ask why, and the cybersecurity manager can explain its importance.
Internet-facing systems with customer information face the highest risk. The best solution available is not to store that data at all, if possible. The next best thing is to design the system to be secure from the start, then encrypt customer information and do security testing when the system is ready enough to go to piloting, and again before it’s put online. Preferably, the testing and consultation will happen early on, so developers have time to implement security fixes if necessary. We’ve done hundreds and hundreds of penetration tests against different web applications. The best results in security tests consistently come from systems that were designed to be secure from the start.
It’s very hard to add security on top of an existing system afterwards. Get involved with the software development teams in the company!
As we’ve seen, the weak link in securing data is often the employees who lack training in security issues. Fortunately, HR usually has funding budgeted for training and employee education. They organise e-learning and face-to-face teaching and online training events as well. It shouldn’t be too difficult for the cybersecurity manager to get approval for an hour of cybersecurity training for all employees. If possible, scheduling a one- or two-hour face-to-face training for all new employees is ideal.
Efficiency through E-Learning
Most companies do this type of training via e-learning instead of face-to-face, in-person training because it’s more cost-effective, has flexible timing, and still meets compliance criteria. These trainings live on an internal cloud service where HR posts the e-learning materials. The content is usually created by a service provider who’s able to produce the content, like educational videos. It’s also possible to buy this training as a service and just modify small bits of the presentation.
E-learning saves a lot of time for everybody, but face-to-face training classes are important as well. It’s helpful to make the cybersecurity manager’s face known to the other employees. They need to understand that the cybersecurity manager is a people person who likes to help, not just stay in her office and play with computers.
Those are all good ways to train the entire staff in a big company, but there’s usually a smaller need for specialist trainings. This includes people in ICT special roles, such as network specialists, firewall administrators, application developers, and many others.
Let’s say the company is building their own web services platform for their customers. This is supposed to make money in the future. Shouldn’t the people who are coding the software and putting it online also know about how to create secure web services? Were they ever trained to do that at school? Probably not. These people are usually somewhere in their forties. When they were young and in school, there was no subject called “security” in ICT. It’s being taught in schools and universities now, but not often enough. When they do address security, universities aren’t teaching the most up-to-date tools in a professional context; they tend to rely mostly on textbook theory.
That means the cybersecurity manager has to be aware of training opportunities and courses for IT specialists in a variety of roles. Then they need to cooperate with those departments and with HR to make them aware of the training or event. HR and the business unit will then discuss who will pay the bill. It’s not a hard sell; they’ll usually buy a lot of these types of training.
Continuing education is always a good business ethic. And it’s an effective way to show people that their biased sense of security is not correct. Unless employees feel that there’s urgency and a need to change, there won’t be any change in the company.
Security Awareness Tools
We’ve talked about the most common awareness training tools: e-learning and face-to-face training. There’s another way cybersecurity managers can teach and test awareness: phishing campaigns, which measure how employees respond to phishing and related threats in practice.
In a phishing campaign, the company sends phishing test emails to see whether anyone clicks. Then they make a list of the recipients and note how each responded or didn’t. The results give the cybersecurity manager valuable feedback about employee awareness outside the classroom.
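The mechanics behind such a campaign are simple: each recipient gets a unique tracking token embedded in the test email's link, so a click can be mapped back to a person and an overall click rate computed. A minimal sketch in Python (the function names, addresses, and in-memory bookkeeping are all hypothetical; real campaigns use dedicated platforms):

```python
import uuid


def create_campaign(recipients):
    """Assign each recipient a unique token to embed in their test email's link."""
    return {uuid.uuid4().hex: email for email in recipients}


def record_click(campaign, clicks, token):
    """Called when a tracked link is opened; maps the token back to a person."""
    if token in campaign:
        clicks.add(campaign[token])


def click_rate(campaign, clicks):
    """Fraction of recipients who clicked the test link."""
    return len(clicks) / len(campaign)


# Hypothetical run: three employees receive the test email, one clicks.
campaign = create_campaign(["anna@example.com", "ben@example.com", "cara@example.com"])
clicks = set()
some_token = next(iter(campaign))  # simulate one recipient clicking their link
record_click(campaign, clicks, some_token)
print(f"click rate: {click_rate(campaign, clicks):.0%}")  # click rate: 33%
```

The per-recipient token is the key design choice: it turns an anonymous "someone clicked" into the named feedback the cybersecurity manager needs for follow-up training.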
Companies can also test whether employees are vulnerable to being fooled with a service called vishing. This test is focused on telephone entry points, usually a helpdesk or service desk in ICT. The security tester calls the helpdesk posing as an employee or manager, then uses social engineering tricks to influence that person into giving out a password, changing it, or even creating a new account.
This is a useful test because this is exactly how many companies get hacked, unless they are wary of the threat. Kevin Mitnick, one of the first hackers to make the news and now an author, used these techniques. He exploited human behaviour and trust to hack a lot of companies. He was good as a technical hacker as well, but his main tool was always social engineering. If he called in to a service desk and someone answered, “Hello, Company XYZ, how can I help you?” he might start with, “You can help me by giving me the CEO’s email address.”
Sometimes asking directly does work, but usually hackers are a bit more clever. They use social engineering techniques to fool people. Social engineering is the art and science of influencing people to do things they normally would not do.
Learn How People Are Influenced
Read Influence: The Psychology of Persuasion by Robert B. Cialdini
Hackers use something called weapons of influence to make people do their bidding. Human nature is wired that way: there are cues we follow automatically in daily life without questioning them much. Let’s review a few examples of how hackers exploit these tendencies. An attacker could call the helpdesk and use consistency: “Hey Sheila, thanks for that sales director’s email address, you are super helpful! Can you also help me with another issue? I need to reset my password. Could you kindly send the new one to my personal email? I’m on the road.” In this case, the attacker is using a technique called psychological labelling. If Sheila accepts the proposition that she is super helpful, she will feel internally consistent helping the caller again with the password.
Another type of common tactic is sense of urgency. “Can you make an exception here? I’m really in a hurry.” There will be a sense of urgency. “I’m about to get onto an airplane, and I cannot get into my computer, so I’m sending this from my private Gmail account.” They’ll drop in crumbs of information that seem to be the right way to verify things, then people respond to the urgency and want to help.
Reciprocity works as well. If an attacker makes the appearance that he’s been very helpful for the victim even in the tiniest way, he can ask the victim to return the favour. Maybe reset the password or something similar.
Another social engineering approach is to use authority: “Hey, I’m the CFO and I’m getting on a plane. It’s urgent, can you just help me reset my email password?” Or maybe there’s a phishing email from the police department or the security officers at the bank.
Sometimes social engineering comes with a sugar coating. It works well, especially if the victim likes the attacker or if the hacker is using a sexy picture to present himself. Old-school seduction works.
The permutations of possible social engineering hacks are endless, so awareness testing and training require continuous effort and an ongoing budget. People don’t change behaviour after one session of e-learning; they only learn through repeated reinforcement. Employees need to get small bits of security information, repeated many times over. It’s much better to help them learn this way than to let them get badly burned by their mistakes.
Even with extensive training, people are still hardwired to fall for social engineering attack techniques. There is no final remedy to this human weakness. Take this into consideration when you plan your defences. For instance, consider all workstation networks to be contaminated and compromised. That approach should give you the right attitude and allow one more level of defence in depth when you defend your networks.
Employee-related cyber risks aren’t limited to the company offices or systems. Third-party breaches are all too common: employees use their work-related computers and accounts all across the internet. When these outside services experience a breach, employee account information might be at risk, especially if they reuse the same passwords they use at the office.
There are services that companies can use to monitor if their employee or customer information has been stolen or leaked. These services monitor for any leaked personal information and alert the company if there are indications of a personal data breach. Most of these stolen personal records are from third-party services on the internet, and they are mostly unrelated to the company’s own business. How does this affect their security, and why does it make sense to monitor it?
Let’s say there is a breach in a third-party service, for example, at a social media platform called FaceGram. The company would be totally oblivious that some of their users were victims in this breach. But if they used a monitoring service, they would know immediately that there had been a breach at FaceGram. In that breach, there could be 190 million stolen user accounts, including email addresses and clear-text passwords. They could see, for example, that Shawn Lewis Legrand at Adidas is using the password CATHERINE. Without this kind of service, a single company could have thousands of affected users and never know it.
Shawn Lewis Legrand might be using CATHERINE as a work password in the company’s IT systems as well. Or if it’s a social media password, someone could log in to the profile and use it to abuse trust, just like the phone call to the service desk: contacting someone through the personal profile to influence them or ask for a password reset. Suddenly, these two unrelated things become a serious threat to the company. Remember how easy it is to use Google to find all the login forms for a company’s IT systems?
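One well-known way to check leaked passwords without exposing them is the k-anonymity scheme used by the public Pwned Passwords service: only the first five characters of the password's SHA-1 hash are sent to the service, which returns all breached hash suffixes in that range for local comparison. A sketch in Python, reusing the CATHERINE example from above (the sample response below is made up to show the matching logic; a real check would fetch `https://api.pwnedpasswords.com/range/<prefix>`):

```python
import hashlib


def hash_parts(password):
    """Split the SHA-1 hash into the 5-char prefix sent to the range API
    and the 35-char suffix kept locally (the service never sees the full hash)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def breach_count(suffix, range_response):
    """Scan a range response ('SUFFIX:COUNT' per line) for our suffix."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


prefix, suffix = hash_parts("CATHERINE")
# Made-up response for illustration; real ones contain hundreds of suffixes.
sample = f"00000AAAA:3\n{suffix}:1021\nFFFFFBBBB:7"
print(f"found in {breach_count(suffix, sample)} breaches")  # found in 1021 breaches
```

Because the full hash never leaves the company, this kind of check can be run against employee passwords without itself creating a new leak of sensitive data.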
Companies should adopt this kind of monitoring service to see whether any information regarding their employees or customers has leaked out. These services can monitor any PII-related terms, like email addresses, credit card numbers, social security numbers, and so on. If the company knows what’s going on, they can warn a person and educate them about the breach. They can ask that person to pay attention, change their passwords, and watch out for phishing emails in the future. They can also educate users not to use their work email in outside services that aren’t related to work. After all, the domain name in the email address points directly back home. It’s like writing your home address on your key fob. If you lose that key fob, you can rest assured everyone will know where to try the keys!
Just like social security numbers, once this information is out, it’s out for good. If companies don’t subscribe to a service to receive the feed of those compromised accounts and records, how do they know whose password to reset? How do they know to warn people that their identity is at risk? They don’t. That puts them in a position of elevated risk.
Monitoring services allow companies to react quickly to possible leaks of personal information and minimise possible damages.
No Insurance for Stupidity
Here’s a telling example of what we’re talking about in this chapter. A major company that provides mining machinery and equipment across the globe works in different time zones with a vast network of suppliers who have strange office hours. They have emails and faxes and messages coming at all times of the day. They’re large enough that they don’t even know all the companies that provide services for them in their network, though they do know the biggest ones. The machinery might be sent overseas to a site for a customer, then the subcontractor or service provider in that country provides operational support. It’s customary for money to flow through the customer, leaving a large bill on the order of several million dollars.
All of these transactions were cleared by email and phone calls, and many legitimate calls and emails happened at odd times of the day. It wasn’t uncommon for bills of $17 million or $20 million for big machinery to get approved this way, so employees were used to it. One evening, the CFO received an email with an invoice from a known supplier that looked legitimate, asking for over 18 million dollars. The bill referred to a known project, and the sum was exactly what they were expecting. The email was forged, but the hackers had done their research. They knew the customer, the service provider in that country, and the person in the company who would handle the bill. They had even researched the amount to be billed and got that detail right too. They simply added all of those details into a bill template that looked just right and sent it.
The invoice was not questioned because it was expected. All the hacker had changed was the account number. The payment was sent to the hacker’s bank account.
Over eighteen million dollars!
The money was never found. They had to make a public announcement about the loss because they were a listed company, and it affected their financial outcomes and numbers. It had to go in the annual report, and it circulated in the news. The incident was a massive embarrassment.
There’s no insurance for stupidity. You can’t insure against careless employees getting tricked by ruthless hackers. You can’t insure against an employee giving out millions in free money to crooks. But you can increase awareness and minimise the risks.