AI-powered cybersecurity — or how to avoid becoming the next shocking data breach (VB Live)
“Identity is back on the front page, as people are starting to understand that stolen identity is the number one security issue out there,” says Jim Ducharme, VP of Identity Products at RSA. “Compromised credentials are the weak link in the security armor, but there are a lot of good technical advancements in the market.”
Artificial intelligence is the key, Ducharme says. It allows us to go beyond some of the less scalable ways of protection, with its ability to scan enormous data sets to detect complex attacks and changing attack patterns, and then adapt to them.
“For over a decade, AI and machine learning have demonstrated they can do a better job of fraud detection,” he says. “It’s proven to work in the world of security, particularly in advanced fraud. Now we need to take a lot of the same principles and apply them to securing other things.”
For instance, enterprise access: is this person who they claim to be? It’s time to move past basic security strategies and the way we think about security, the “I know your mother’s maiden name, so it must be you” world, and think about ways AI can supplement the safeguards currently in place.
“It’s not that companies who have experienced breaches didn’t care about security or didn’t have controls in place to protect their data,” he says. “The reality is, the threat actors found ways around those static controls to get to that data. But that’s where AI comes in, to add a layer above that static control.”
He offers the example of credit and debit card transactions: Why is it that a 4-digit PIN is good enough to protect your bank account?
“Here in the enterprise, my password has to be at least eight characters, have a special character, an uppercase letter, a number, and I change it every 60 days,” he says. “While my debit card is protected by a 4-digit PIN, and I haven’t changed that password since I first set it when I was in high school.”
And that PIN can be guessed pretty easily: there are only 10,000 possible combinations, and it’s probably either your birthday, your kid’s birthday, or a sequential set of numbers.
“But the beauty is, behind that PIN, behind that piece of plastic, is AI and machine learning fraud detection,” he says. “It’s asking, is this your normal pattern of behavior? Did you just buy a Ferrari with your debit card?”
AI-powered fraud detection goes beyond the simple static controls to look for things that don’t make sense — you had the right PIN and you seem to have the card, but this doesn’t smell right.
Fraud departments are the best way to see the power of AI day in and day out, Ducharme says, with the technology on the back end detecting fraud in real time. The next level is the enterprise case.
If someone logs into the enterprise server on a device they’ve never used or in an unknown location, that odd pattern can be flagged, and an identity challenge issued.
Go back to any corporate data breach: if somebody is extracting the entire database, AI and machine learning would note that this user has access to the system and the right credentials, but they appear to have just downloaded every customer’s data, and that doesn’t match their normal pattern of behavior.
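The kind of check described here can be sketched as a simple statistical baseline over a user’s past activity. This is a minimal illustration, not RSA’s actual method; the function, the session counts, and the three-standard-deviation threshold are all invented for the example:

```python
from statistics import mean, stdev

def is_anomalous_download(history, current, k=3.0):
    """Flag a download volume far outside the user's historical pattern.

    history: past per-session record counts for this user
    current: record count for the session being evaluated
    k: how many standard deviations above the mean counts as anomalous
    """
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    return current > mu + k * max(sigma, 1.0)

# A user who normally touches a few dozen records per session...
baseline = [40, 55, 32, 48, 60, 44]
print(is_anomalous_download(baseline, 52))         # a normal session
print(is_anomalous_download(baseline, 1_000_000))  # "every customer's data"
```

The point is not the statistics but the framing: the credentials are valid either way; it is the deviation from the user’s own baseline that raises the flag.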
“The good news is, most companies have realized that things like usernames and passwords are easily compromised — they recognize the weak link,” says Ducharme. “Too many times the mistake is, they think the way in which they have to add additional layers of security is just putting an additional burden on the end user to protect their information.”
It results in what he calls the Fort Knox paradox. To protect cloud data, companies make employees log in via a VPN, so they can’t reach a cloud resource without going through the enterprise, which defeats the purpose and preserves the infrastructure costs that moving to the cloud was supposed to eliminate. Or they require users to change their passwords every 30 days instead of every 60, or raise the required complexity, making controls more labyrinthine without adding any significant security benefit. It almost always ends with users finding workarounds that defeat the purpose entirely, like the written-down password epidemic.
“It took me half an hour to create a password that worked with a bank’s password policy, because it was so complicated,” he says. “What did I have to do? I had to write it down on a post-it note. How secure is that, right? Who’s it really protecting? That’s the problem it creates.”
He cites the local cable provider whose technician had the passwords for all the systems he needed laminated onto his laptop; the fire station with passwords for the state fire systems displayed on the wall, next to each system’s URL; and the retail store with passwords to all of the store systems kept underneath the keyboard.
“The antithesis of that: I encourage customers to think about that information they think is so critical to their enterprise, how would they protect it with a 4-digit PIN?” he says. “Again, that leads into the discussion of machine learning and AI.”
It means shifting the burden off of the user, reducing friction on the front end, and putting security control on the back end, where it belongs.
There are a huge number of tools that cover everything from fraud to identity assurance, Ducharme says, but before you even consider tools, determining assurance levels is the first place to start.
“I used to use the example of our former president at RSA, Amit Yoran,” he says. “He always used to wear a black shirt and black pants. I said, if you think about it, our security team knows it’s Amit when he walks in. They do some recognition. There’s information about what he’s wearing that gives us the assurance it’s him. In an enterprise setting, I encourage folks to look at that as well.”
Step one, get out of your silo and look across the organization at sources of information that allow you to decide whether a person is who they say they are. Look at your data and applications and determine who is supposed to have access, and what would make it strange for them to be there. What would give you the assurance that a user is who they say they are, that this is what they should be doing, and that they’re doing it correctly?
It’s behaviorally based, he explains, and starts with something as simple as the devices they’re using, the locations that they’re coming from, and the networks they’re on. From there, go to behavioral patterns: Let’s take a look at Jim’s behavior and see if this is consistent with his previous patterns.
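Those contextual signals can feed a simple weighted risk score on the back end. The sketch below is purely illustrative; the signal names, weights, and profile values are assumptions, not any real product’s tuning:

```python
def context_risk_score(session, profile):
    """Score a login against a user's known context.

    session: dict with 'device', 'location', 'network' for this login
    profile: dict mapping the same keys to sets of previously seen values
    Returns a score in [0, 1]; higher means a less familiar context.
    """
    weights = {"device": 0.4, "location": 0.35, "network": 0.25}
    risk = 0.0
    for signal, weight in weights.items():
        if session[signal] not in profile[signal]:
            risk += weight  # unfamiliar value for this signal adds risk
    return risk

profile = {
    "device": {"laptop-jim-01"},
    "location": {"Boston, US"},
    "network": {"corp-vpn"},
}
# Known device and network, but a new city: moderate risk,
# perhaps enough to trigger a step-up identity challenge.
session = {"device": "laptop-jim-01",
           "location": "Austin, US",
           "network": "corp-vpn"}
print(context_risk_score(session, profile))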
If Sally, tomorrow, logged into the system from St. Petersburg, Russia, would that raise an eyebrow? What else would raise an eyebrow? What if Sally showed up with a mustache? What if Amit showed up in a three-piece suit?
There are also three different dimensions to consider: identity assurance, access assurance, and activity assurance. Identity assurance is, do we know this person is who they claim to be: Is it Jim? Access assurance is, do we understand what he has access to: What can Jim do? Let’s say Jim is a developer. Should he have access to production systems? Jim’s a bank teller. Should he have access to the full vault?
Then there’s activity assurance. Is Jim doing what Jim should be doing? Is it normal for Jim to download every customer record?
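One way to picture the three dimensions together is as three independent checks that must all pass. This is a toy sketch; the profile fields and the ten-times-typical-volume cutoff are invented for illustration:

```python
def assurance_check(user, session):
    """Evaluate identity, access, and activity assurance for one session."""
    # Identity assurance: is this a device we know belongs to the user?
    identity_ok = session["device"] in user["known_devices"]
    # Access assurance: is this resource within the user's entitlements?
    access_ok = session["resource"] in user["entitlements"]
    # Activity assurance: is the volume of activity in a normal range?
    activity_ok = session["records"] <= user["typical_records"] * 10
    return identity_ok and access_ok and activity_ok

jim = {
    "known_devices": {"laptop-jim-01"},
    "entitlements": {"crm"},
    "typical_records": 50,
}
print(assurance_check(jim, {"device": "laptop-jim-01",
                            "resource": "crm", "records": 120}))
print(assurance_check(jim, {"device": "laptop-jim-01",
                            "resource": "crm", "records": 1_000_000}))
```

Each dimension can be satisfied while another fails: valid credentials on a known device (identity) downloading every record (activity) is exactly the breach pattern described above.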
It’s not just information that makes you raise your eyebrow, but information that would give you more certainty or assurance that that person is who they say they are.
“Those are all the things you want to feed into that contextual-based AI and machine learning algorithm,” he says. “You’ll start making these connections across your enterprise, and that’s going to be the fuel that feeds your AI and machine learning engine.”
This step is essential, even as just a thought experiment, he adds. These problems need to be thought about in new ways, and approached with a different mindset, or it’s too easy to fall back on patterns of defining the static policies that got you in trouble in the first place. A static control that says if a transaction is over $50,000, you throw up an identity challenge just means the fraudsters will rob you 20 cents at a time, 250,000 times.
Initiating an AI-powered cybersecurity strategy really is as easy as that, he says.
“The biggest barrier to AI and machine learning is that it’s not the black magic that people think it is,” says Ducharme. “It’s complicated, but it’s approachable. Otherwise we’ll be living with these horrible passwords and messes like that for a while.”
To learn more about planning and launching a 21st-century cybersecurity strategy, and what cybersecurity specialists need to know about the tools and infrastructure required to add AI and machine learning to their security mix, don’t miss this VB Live event!
Source: VentureBeat