
Fireside Chat and Live Hacking Demo: How I Would Hack You Live with AI


Video Transcript
Mike Engle (00:01):
Thanks everybody for joining us. My name is Mike Engle with 1Kosmos. I'm joined today by Rachel Tobac. Rachel, what's the name of your company and kind of what do you do?

Rachel Tobac (00:14):
Yeah, my company is called Social Proof Security and we are a social engineering prevention company. So we help people avoid getting hacked by humans, and we do this with video training, live events like we're doing right now, and pen tests, so, actually hacking companies that ask us to, so that they can stay safe.

Mike Engle (00:34):
Amazing. Yeah, and as I mentioned, I'm Mike Engle. My company is 1Kosmos, with a K, and we have a very simple value prop. We can prove who you are, which Rachel's an expert in. We're going to talk a lot about that today, and then get you into any system with that verified identity. So it's called passwordless, but passwordless with a verified identity is a twist that we've been working on for quite a few years. Today we're going to get into some of the nuance of how these attacks happen, the psychology of them, some of the compensating controls, and just have a fun discussion, and maybe a little bit of live stuff here that can show us the state of the art and how it works. So I'm really looking forward to it today. Rachel, I think if you could kick off with the piece that you did on 60 Minutes, your 15 minutes of fame there, amazing piece. I think it would really help set the stage, and we can take it from there.

Rachel Tobac (01:44):
Let's do it. Okay. So a little bit of context for everybody who has not seen the 60 Minutes piece I'm about to show here. 60 Minutes asked me to hack them using a live voice clone, live on air. So I only had one chance to get it right. They asked me to hack Sharyn Alfonsi, the correspondent, live on air. But when I hack executives at companies, I typically don't hack the executive by contacting the executive directly. Typically when I hack an executive, I'm going to contact their executive assistant. So I needed to reach out to the team and make sure that I got consent from both the executive, Sharyn Alfonsi, and her assistant, Elizabeth, who you'll see here. So both of them consented to this. They just didn't know where, when, or how the attack would happen. It ended up happening at around seven in the morning, before Elizabeth knew that we were even filming. So here we go.

Speaker 3 (02:36):
We hired her to show us how easy it is to use information found online to scam someone. We asked her to target our unsuspecting colleague, Elizabeth. Tobac found Elizabeth's cell phone number on a business networking website. As we set up for an interview, Tobac called Elizabeth, but used an AI-powered app to mimic my voice and ask for my passport number. "Oh, yes, yes, yes, I do have it. Okay, ready? It's..." Tobac played the AI-generated voice recording for us to reveal the scam. "Elizabeth, sorry, I need my passport number because the Ukraine trip is on. Can you read that out to me?"

Rachel Tobac (03:18):
Does that sound familiar?

Speaker 3 (03:21):
And I gave her... Wow, I was sitting over there. What did it say on your phone, Sharyn? How did you do that?

Rachel Tobac (03:30):
So I used something called a spoofing tool to actually be able to call you as Sharyn. That's why it said Sharyn. "And I failed. I failed?" Everybody would get tricked with that. Everybody would. It says Sharyn. Why would I not answer this call? Why would I not give that information?

Speaker 3 (03:46):
Tobac showed us how she took clips of me from television and put them into an app that cloned my voice. It took about five minutes. I am a public person, my voice is out there. Could a person who's not a public person be spoofed as easily?

Rachel Tobac (03:58):
Anybody can be spoofed. And oftentimes attackers will go after people they don't even know. They don't know who these people are, but they know this person has a relationship to that other person, and they can impersonate that person well enough, just by changing the pitch and the modulation of their voice, that I believe that's my nephew and I need to really wire that money. There you have it. Spooky stuff.

Mike Engle (04:24):
Yeah, what a great piece. So congratulations on that.

Rachel Tobac (04:27):
Thanks, Mike.

Mike Engle (04:28):
And also on the successful hacking. I think you really got your start at DEFCON, winning some competitions, is that right?

Rachel Tobac (04:36):
Yeah, I got second place in the social engineering competition three years in a row. So

(04:42):
Basically they put you in a glass booth in front of an audience of 500 hackers, and you have to hack a real company target live over the phone, in 20 minutes, in that booth. I mean, you feel like an animal at the zoo. It is an extremely nerve-wracking experience, but I will say kicking off your career in front of all of the people who will potentially buy your services and tools is probably the best way to do it, if you can swing it. Because I didn't start my company because I thought that it would work. I started my company because the people in the audience demanded that I come to their organization and give a talk or hack them. So I was like, I have to form an LLC for legal purposes.

Mike Engle (05:23):
Yeah, what a great jumpstart. Just trial by fire. So congrats on that. Thanks. You did DEFCON. I think you won that a few years ago, right?

Rachel Tobac (05:33):
Yeah, I got second place three years in a row, and it was DEFCON 24, 25, and 26, so starting back in 2016, I believe.

Mike Engle (05:44):
Wow. Almost 10 years ago. And the state of the art for this type of attack is changing at a mind-numbing pace.

Mike Engle (05:52):
Even as somebody who does this stuff for a living, I play with every AI tool there is. My work colleagues are like, enough with the AI stuff. But what has changed in the past 6, 12, 24 months that's really changed the attack surface?

Rachel Tobac (06:11):
Yeah, I mean, I was going to say that AI is the reason everything's changed in the last 24 months. When ChatGPT first came out, I remember thinking, oh my gosh, this is going to be wild for phishing. What is going to happen? And we're already seeing news stories about bad actors using ChatGPT to basically automate text-message-based phishing, where they'll text somebody, say, a WhatsApp link, then take them to Telegram, then give them a job offer. I mean, it's just absolutely ridiculous. I think OpenAI and Meta worked together on this research, so we know that AI tools are used for automating the tasks of hacking. And then when the voice cloning technology started to become much more useful, and it really worked on not just your standard American accent, which I would say took them about a year and a half, to the point where it worked not just on somebody who sounds like me, I realized that this was probably going to revolutionize the way that every attacker thinks.

(07:17):
And we're starting to see that now, where in the news we're seeing people pretending to be an executive to their assistant, just like what you saw with 60 Minutes. And of course I'm not giving the attackers a playbook. They are smart. They do this for a living. This is their job. So they know how to come up with this stuff on their own, and they're reaching out to execs as board members. And because the spoofing technology is still available in the app store and costs less than a dollar, it's really easy for an attacker to gain access to these tools and download them. It doesn't take any special set of skills. And they kind of share their playbook on Telegram. They talk about how they do these attacks and how they choose who they're going to pretend to be. And now they talk about how they can do a deepfake in a live video or Teams call and ask people for a wire transfer, which is something that we saw happen to a large multinational that lost $25 million in that attack. And now we see them talking about, oh, did you know you can hack banks if you just fool their detection, their ID process, with a deepfake. So I mean, it's wild. It's a wild west out there right now.

Mike Engle (08:28):
Yeah. Yeah. I have about a dozen CISO (Chief Information Security Officer) friends out on the street and in the industry, and my phone rang quite a few times when Sam Altman put out the warning two or three weeks ago.

Speaker 4 (08:40):
Yes.

Mike Engle (08:42):
I mean, we've all known, but when he says it, now everybody's really paying attention. So I think there'll be a lot more eyes on it for sure. So we'll see how that goes.

Rachel Tobac (08:51):
It is interesting to see him admit his fear about this, that it's terrifying to him. I think a lot of people are like, well, if you're so scared about this, then do something about it, man. But I think it's one of those things where the toothpaste is kind of out of the tube. There's not a lot that you can do to protect people in this situation. If people aren't trained, they don't understand what's possible. The technology is so nascent, and they just don't know how to spot it yet.

Mike Engle (09:18):
That's right. Yeah, that's key. So you talk about the notion of exploiting trust in a lot of your presentations and talks, and I think that's been the case since phishing, smishing, vishing, whatever, became a thing. You think it's a trusted email from UPS or FedEx or from your bank, and you click a link. But obviously the attack has moved up the reality curve quite a bit, and what really kicked off the phenomenon was the Scattered Spider hacking group, right? Targeting IT help desks. You saw MGM sadly become the poster child of that first really big attack. Guests couldn't even get into the elevator. It was a big one, right?

Speaker 4 (10:02):
It's a big one.

Mike Engle (10:03):
So could you walk us through the psychology of why these attacks are so effective?

Rachel Tobac (10:11):
Yeah, so Scattered Spider does something that we have been trying to showcase in pen tests for years. Scattered Spider, this group, is basically English-speaking young people. A lot of them are in the US, some are in the UK, and I think there were a couple in Spain. They basically focus their attacks on phone calls: they contact the service desk and ask for password resets and multifactor authentication resets to attacker-controlled devices. And really, a lot of people would even claim that what they're doing isn't true hacking. We see people say this because a lot of times they're not even spoofing a phone number. They're calling as an employee, saying that they dropped their phone in a toilet, and they're calling from a new phone number, and they need a password reset, and they need that password given to them over the phone.

(11:01):
And people are like, if you ask a question and the question is answered, because they don't have a protocol to verify identity correctly, is that hacking, or is that just answering a question? It is, I would say, hacking: social engineering. But it's one of those things where they do use technical tools to pivot throughout the network, and sometimes they'll use ransomware, or they'll exfiltrate data and attempt to get a ransom for it. For all intents and purposes, though, the attacks that they're doing are highly non-technical in the beginning. It's just a simple phone call. And when you're exploiting trust in that way, it's trust within the institution, the organization, that people who are calling are who they say they are: this person is who they say they are over email or phone call or chat or text. And if we don't verify that people are who they say they are through those channels, they're going to be able to just ask a question and get it answered with something like a password.
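The failure mode Rachel describes is a help desk with no enforced identity check before a reset. A minimal sketch of what an enforced protocol could look like, assuming a hypothetical employee directory and a made-up `verified_live_biometric` flag standing in for whatever strong proof (say, a liveness-checked selfie match) a real deployment would use:

```python
from dataclasses import dataclass

@dataclass
class ResetRequest:
    claimed_employee: str
    caller_number: str
    verified_live_biometric: bool  # did the caller pass a strong identity check?

# Hypothetical directory: employee -> phone number enrolled at onboarding.
ENROLLED_PHONES = {"alice": "+15551230001", "bob": "+15551230002"}

def may_reset_password(req: ResetRequest) -> bool:
    """Refuse a reset unless identity is proven, not just claimed.

    Note the caller's number is deliberately NOT used as proof: caller ID
    can be spoofed, and the 'dropped my phone in a toilet, calling from a
    new number' story must not bypass the check either.
    """
    if req.claimed_employee not in ENROLLED_PHONES:
        return False  # unknown employee
    if not req.verified_live_biometric:
        return False  # a name and a story are not identity proof
    return True

# The Scattered Spider pattern: a real employee name, an unknown number,
# and no strong verification step.
attack = ResetRequest("alice", "+15559990000", verified_live_biometric=False)
legit = ResetRequest("alice", "+15551230001", verified_live_biometric=True)
print(may_reset_password(attack))  # False
print(may_reset_password(legit))   # True
```

The design point is that the decision function never consults anything an attacker can learn or spoof from the outside.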

Mike Engle (12:01):
That's right. Yeah. Have you seen the movie The Beekeeper?

Rachel Tobac (12:04):
No, I haven't. I have to see that.

Mike Engle (12:05):
Oh my goodness. All right. We're going to talk about that towards the end if we have time. Jason Statham's finest work. He takes down the bad actor call centers. All right, we got to get into that later.

Speaker 4 (12:15):
Please. Yeah, I got to watch a trailer or something. Throw up a trailer.

Mike Engle (12:17):
Oh, it is amazing. It's horribly cheesy, like a 9.5 on the cheese-o-meter, but it's so good.

Rachel Tobac (12:23):
Listen, when it's a 9.5 on the cheese-o-meter but there's real hacking shown, it's fun. It doesn't matter. I love it. It's like WarGames.

Mike Engle (12:33):
That's right. That's right. So yeah, we really saw our identity verification take off after Scattered Spider. Of course people are like, what do we do? To your point, the attacks didn't have to be that sophisticated, because people are using secrets, and secrets aren't secret. You can find them, or you can coerce them: have a conversation with somebody and figure out the name of their high school or mascot. The amount of their last payroll deposit is one thing that companies use as a quote-unquote factor to allow a password reset. So a lot of customers that we helped had originally turned off password resets, which means now you're calling your manager, and you're just sucking hours of productivity out of the company because they can't reset passwords. So we've implemented some of the techniques that we'll be talking about here to fix that. Talk a little bit about your background with trust and behavior. I believe you have a background in neuroscience and some kind of behavioral science.

Rachel Tobac (13:37):
Yeah, that's right. So when I went to school, I didn't know anything about security. I don't think you can even get a degree in social engineering now, and I don't think you can get a degree in hacking, technically, either. But when I went to school, I got my degree in neuroscience and behavioral psychology. I worked in a rat lab, helping understand human behavior, animal behavior, how mammals make decisions. It was a really formative experience for me to learn how persuasion works, how people are easily tricked, how reinforcements and punishments work on the human psyche, and how you can predict how people will make decisions based on how you speak to them. And my husband knew I was really good at persuasion and talking with people over the phone, and he was in security, and he went to DEFCON, and he said, I think you should come to DEFCON and try the social engineering competition. Even though I had no experience, it turns out he was right. So it's one of those things where I think even if you take a non-linear path to your position, it almost brings more experience, because you can use these skill sets and these tools that you've learned, with say neuroscience, psychology, whatever else you have applied from the previous lives that you've lived, and it brings something to your new industry. I think it's kind of cool that security has so many people who have non-linear paths.

Mike Engle (15:10):
And I guess that's given you a bit of an edge in creating attacks at Social Proof, or helping companies defend against them, right?

Rachel Tobac (15:18):
Yeah, I think so. I think sometimes people don't know exactly how to consider how reinforcements and punishments work within human behavior, and it actually is kind of a technical process. There is code, in a sense, that you have to write to understand how human behavior operates, and operant conditioning is like its own code. So while I don't know how to write technical code, and I just hack people, I know the code associated with that, the code of positive reinforcement and punishment.

Mike Engle (15:49):
Right. I'd love to see your GPT prompt sometime on that. And tell me a little bit about your take on security awareness training, because I spent a lot of time on it in my career in InfoSec. Back then it was don't-click-the-link or clean-desk type stuff, which is obviously still very important. But say you're educating a 30,000-person organization. Is it effective, and for how long? What's your take on security awareness training?

Rachel Tobac (16:25):
Yeah, so security awareness training is just one piece of the puzzle. You can't train your way to a hundred percent. That's not possible. So I do security awareness training. A huge part of my job is educating the public and companies about how to stay safe. With the public, the 60 Minutes piece or the CNN appearances, getting on there and talking to people, and helping people understand what these likely scams look like, is one part of the puzzle. Helping companies understand how to avoid getting tricked is one part of the puzzle. The rest of it is getting the service desk to understand how to verify identity correctly with human-based protocols and technical tools, helping customer support have the exact same human-based and technical tools, and making sure everybody at the organization has the education to understand what they're looking at and knows how to report it when it's not just an email but a phone call or a text message or a social media DM that's odd. They understand how to use their multifactor authentication, and they have a password manager, or some sort of passwordless or FIDO2 solution in place. They have to have all of this together. It can't just be one part of that pie. It doesn't work that way. It's kind of like the Swiss cheese model: you have to make sure that all of the holes in the Swiss cheese don't line up, and you need many layers of Swiss cheese to make sure those holes don't line up.
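Rachel's Swiss cheese point has a simple quantitative reading: if the layers fail independently, an attack only succeeds when every hole lines up, so the slip-through probabilities multiply. A toy calculation with entirely made-up per-layer failure rates:

```python
from math import prod

# Hypothetical probabilities that an attack slips through each layer:
# the "holes" in each slice of Swiss cheese. Numbers are illustrative only.
layers = {
    "awareness_training": 0.30,  # a rushed employee still gets tricked sometimes
    "helpdesk_protocol":  0.10,
    "mfa":                0.05,
    "liveness_id_check":  0.02,
}

# Independent layers multiply: the attacker has to get lucky at every slice.
p_breach = prod(layers.values())
print(f"{p_breach:.6f}")  # 0.000030, i.e. about 3 in 100,000 attempts
```

No single layer here is anywhere near perfect (training alone lets 30% through), yet the stack is far stronger than any slice, which is the point of layering.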

Mike Engle (17:54):
So it's still a very important tool in the toolkit, but because things are changing so fast, it's probably becoming, not less important, but less effective if nothing else. Because tomorrow the attack could be a hologram sitting in front of me that looks like grandma, right? You just don't know.

Rachel Tobac (18:12):
You have to have everything together, and that's why security is so hard. Defense is so hard at companies. Just having the right technical tools will hopefully make it a lot easier.

Mike Engle (18:22):
Right. So you've just done your training on 30,000 people. You had a live, CEO-sponsored event, amazing attendance and awareness. Somebody's still going to get hacked the next day, right? Why does it happen? Is it because the tech is that good, or just because people are busy and the brain can only process so much?

Rachel Tobac (18:47):
Yeah, I think it's a combination of things. First, we know when we're really busy and overwhelmed, we have poor decision-making skills. We have something called amygdala hijacking. I'm going to get a little neuroscience here first. Sure.

(18:59):
But basically, when you rush somebody, you use a principle of urgency on them, and you use some sort of empathy, some reason to express emotion, you do what's called amygdala hijacking. The amygdala is the emotional portion of our brain, and it reacts a half step faster than the rational portion of our brain. And so if I can get on a call, spoof somebody you know, Mike, and say, Hey, I'm actually jumping on a plane right now, getting ready for the board meeting. I need that KPI deck. I'm so sorry, I have a family emergency. I'm really stressed right now. Can we send that to this email address? What happens there is your amygdala goes into overdrive. It gets hijacked by this emotional center of your brain and this emotional reason why you need to work urgently, and you send the KPI deck to somebody who's not authorized, who's pretending to be a board member. Now, Mike, I know you would never fall for this, but that's just a demonstration of how amygdala hijacking functions. And we know that when people are really busy and they feel empathy for someone, they get hijacked, and that is how the human brain works. It can be compromised like a computer. So that's why we need technical tools to back up humans. We can't just rely on humans to be perfect a hundred percent of the time. They're not robots. They're fallible.

Mike Engle (20:15):
Yeah, that's right. And giving them the tools would help. So you mentioned a whole bunch of them. For example, you can reach out to them via a channel that only exists in the company, whether it's the 1Kosmos authenticator or a biometric that we can use in these scenarios. These are things that we try to work on, but it's continuously evolving. In terms of the personal landscape, what is it that you would recommend? What should I tell grandma? Because I've actually heard so many anecdotes of people getting scammed, or they go to the Bitcoin machine and deposit 10 grand because they got talked into it. People say you should have a safe word in your family, but that's something, like security awareness training, you have to practice. Grandma's not going to remember the safe word Rumpelstiltskin in three months. And I just gave away my safe word, so now I've got to make a new one. But what are your thoughts on that?

Rachel Tobac (21:20):
Yeah, so after that 60 Minutes piece came out, a lot of people were saying, you've got to have a pass phrase, essentially, with the people in your family. And I will say, for some people that might be a really good match, if you have a great memory and you can actually stay calm under pressure. But we know that the way human beings respond under pressure is not how they respond in everyday life, or how they imagine they would respond. So if you get a call that your nephew has been arrested because he was in an injury accident and hit somebody who's pregnant, which is, by the way, one of the most common scams we're hearing with voice clone attacks right now over the phone, and they say they need $10,000 in cash for bail, most people can't respond rationally in that moment and say, okay, what's the pass phrase? Most people can't. Oftentimes, also, I see people joke about their pass phrase on social media. I've been able to find it in hashtags and Facebook posts. So can it be used correctly? In many cases, maybe, but I think it's risky. What I recommend instead is just using another method of communication. Maybe, let's say, they really have been in an accident exactly like the scam describes, and we really need to help the nephew. Well, reach out to the nephew: text the nephew, call the nephew back to thwart spoofing, put them on Instagram chat, whatever it is your nephew uses with you. Use another method of communication to confirm they're really in trouble, because nine times out of 10, probably 9.9 times out of 10, the nephew is at the gym, and you're texting your nephew and they're saying, that's not me. I'm fine.

Mike Engle (23:01):
Wasn't me

Rachel Tobac (23:01):
They'd say, I'm lifting right now.

Mike Engle (23:03):
Right, right.

Rachel Tobac (23:04):
Yeah.

Mike Engle (23:04):
I'm not going to take the time to red team my family and test them under pressure, to call my mom and do a spoof. It's not going to happen. But...

Rachel Tobac (23:11):
No, no, we just have to use another method of communication. And these people have to understand how to use the right tools for their threat model and their digital literacy, because grandma is probably not going to use, well, maybe they will, but they might not use a YubiKey. They're probably going to be using SMS two-factor at the very most. And so we have to meet people where they're at and give them the recommendations that work for them. So SMS is probably fine for grandma, unless grandma is a head of state, in which case we're going to give grandma a YubiKey.

Mike Engle (23:41):
Yeah, yeah, until she gets SIM swapped. Exactly. Exactly. Well, that's great. Why don't we dive in a little bit to some of the work that we did together recently? We wanted to make this real for our audience, and you've agreed to put our own identity verification to the test here. So I would love to hear your approach as an attacker and what you saw.

Rachel Tobac (24:09):
Yeah, well, I'll talk about it at a high level. So what I did is I created a deepfake, and I basically wore a face, like a digital mask, to attempt to break into the systems. And when I do this, oftentimes I'm trying to bypass liveness detection with the deepfake, which for some tools does work, and I'm trying to bypass the ID check. So I want you to imagine that the attacker is pretending to be a person. They reach out to, say, a bank, and they say, Hey, I lost access to my bank account. I need access. Can you send me some sort of reset? The bank uses a platform, and they say, sure, go ahead and go to this link on your phone, and I'll verify your identity using an ID, like your driver's license or your passport, and then a live selfie check. So I tried that using your tool on mobile, and here's what I got: verification failed. I love that. That makes me really happy. I try these systems, and some systems I'm able to get into, some systems I'm not able to get into. Really cool to see that 1Kosmos threw that up at me, that I failed the verification process when I used my deepfake. That's what we want, and it makes me really happy to try that and see that.
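The flow Rachel walks through (document check, then liveness, then face match against the document portrait) is conjunctive: every stage must pass for the session to verify. A schematic sketch with invented field names and an illustrative threshold, not 1Kosmos's actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class VerificationEvidence:
    document_authentic: bool   # security features on the license/passport check out
    liveness_passed: bool      # selfie came from a live person, not a replay/deepfake
    face_match_score: float    # selfie vs. document portrait similarity, 0..1

# Illustrative value; real systems tune thresholds against benchmark data.
FACE_MATCH_THRESHOLD = 0.90

def verify_identity(ev: VerificationEvidence) -> str:
    # Every check must pass. A deepfake that fools face matching should
    # still die at the liveness step, and vice versa.
    if not ev.document_authentic:
        return "Verification failed"
    if not ev.liveness_passed:
        return "Verification failed"
    if ev.face_match_score < FACE_MATCH_THRESHOLD:
        return "Verification failed"
    return "Verified"

# A convincing digital mask that scores well on face match but fails liveness:
deepfake_attempt = VerificationEvidence(True, False, 0.95)
print(verify_identity(deepfake_attempt))  # Verification failed
```

Structuring the stages as independent gates is what makes the attacker's job multiplicative: defeating one stage buys nothing.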

Mike Engle (25:32):
Yeah. So just a little bit about the word liveness. We've been working on liveness for years, and there are a number of tests and certifications out there. So for those into geeky acronyms in the identity world, there's NIST. Yeah, exactly. There's NIST 800-63-3, which is the government standard that says, here's how you verify an identity: by having multiple sources of truth and documents and face matching and verifying your address in a system of record. It's actually quite rigorous. We got our 800-63 certification in 2018, I want to say, right after it came out. It's gone through its fourth rev, which was just released a week or two ago. So that's a really important one, and not many companies get it, for two reasons. One is the banking world doesn't need it yet. They don't mandate it as part of KYC and all that, and I hope that changes.

Rachel Tobac (26:30):
That will change. It will have to, with deepfakes hitting banks.

Mike Engle (26:33):
Yeah. I've got three letters for you, for the banks, right? It's MRA, Matters Requiring Attention, when they have all these fake bank accounts opened or people get hacked. So that's one. And the certifying body for that is called Kantara. They're a nonprofit that says, we'll test your system. And then on the individual components, like matching a face or scanning a document, there's NIST RIVTD, R-I-V-T-D, which stands for Remote Identity Validation Technology Demonstration, and then there's FRTE, Face Recognition Technology Evaluation. So there are about four or five that really can move the needle, to know that you have a system that at least stands a really good chance of stopping even the mighty Rachel. So they're really important, and we'll maybe include some of those acronyms for those that want to get shored up. Another one is iBeta PAD Level 2. There are multiple certifying bodies, like iBeta, but PAD stands for presentation attack detection. I'm sure you know that very well. And there's Level 1 and Level 2; Level 2 is much more rigorous. We've been through all of them, and it's constant. You have to do this stuff every couple of months, or every year at a minimum, to make sure you're staying ahead of the attacker. So really important stuff, and thank you for testing that. You're welcome.

Speaker 4 (27:52):
Thanks for making it really hard on me. That's what I like.

Mike Engle (27:55):
Yeah. Yeah. I was frankly sweating because I know you're really good at this. So it was great to see that at least it stood a good chance against you.

Rachel Tobac (28:04):
Yeah.

Mike Engle (28:05):
So you mentioned you do beat many liveness systems as you test throughout the year. What was it, do you think, that might've mitigated the attack?

Rachel Tobac (28:21):
Yeah, I think one of the things is that when you're going through a mobile flow, the mobile flow oftentimes can catch when you are doing a presentation attack. I'm not going to get into full detail; I don't like to give people the keys to the castle on how to do these attacks. But when you are trying to fool it with a deepfake, doing a video relay or something like that, oftentimes the liveness will fail because it can sense the screen in play, or it can sense that there's an additional tool being added to bypass the camera. So it's really cool to see that mobile was able to spot it and say, no, we're not letting you through here. I think it's really neat, and a lot of institutions don't really understand how this process works, so it's important for them to know they need to be able to verify their customers so that I can't just gain access to their bank accounts.
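One way a liveness check can "sense the screen in play" is to look for the periodic ripple a re-captured display adds on top of a natural image (moiré and pixel-grid artifacts). A toy one-dimensional illustration of that idea, using synthetic data, a made-up grid period, and an illustrative threshold; real presentation attack detection is far more sophisticated:

```python
import math

def periodic_energy(signal, period):
    """Fraction of the signal's variance concentrated at one period,
    computed as a single-frequency discrete Fourier projection."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    re = sum(c * math.cos(2 * math.pi * i / period) for i, c in enumerate(centered))
    im = sum(c * math.sin(2 * math.pi * i / period) for i, c in enumerate(centered))
    power = (re * re + im * im) * 2 / n
    total = sum(c * c for c in centered) or 1.0
    return power / total

# Synthetic brightness rows: a natural face row (smooth ramp) vs. the same
# row re-captured from a screen, which adds a fine periodic grid ripple.
natural = [i / 200 for i in range(200)]
recaptured = [v + 0.2 * math.sin(2 * math.pi * i / 4) for i, v in enumerate(natural)]

GRID_PERIOD = 4   # hypothetical pixel pitch, in samples
THRESHOLD = 0.1   # illustrative cutoff

print(periodic_energy(natural, GRID_PERIOD) > THRESHOLD)     # False
print(periodic_energy(recaptured, GRID_PERIOD) > THRESHOLD)  # True
```

The smooth row has almost no energy at the grid frequency, while the re-captured row concentrates a large fraction of its variance there, which is exactly the kind of statistical tell a detector can key on without any visible difference to a human.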

Mike Engle (29:18):
That's right. And the camera and the system can check, I think it's like 90 different covert and overt security features on a document. So if the font's a little off, your human eyeball just cannot see that the commissioner's signature on the license is off by a millimeter, but the camera has a much better chance of catching that. All important things. We're seeing RFPs that have 400 line items now just on this technology, so it's getting quite nuanced.
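Mike's point about a signature being off by a millimeter is essentially template matching with tight tolerances. A toy sketch with invented feature names, coordinates, and tolerances; a real document template carries dozens of such checks per document type:

```python
# Hypothetical template for one document type: expected position (mm from the
# top-left corner) and allowed drift for a few printed security features.
TEMPLATE = {
    "commissioner_signature": {"x": 62.0, "y": 48.5, "tol": 0.4},
    "state_seal":             {"x": 12.5, "y": 40.0, "tol": 0.6},
    "ghost_portrait":         {"x": 70.0, "y": 20.0, "tol": 0.5},
}

def check_document(measured: dict) -> list:
    """Return the features whose measured position drifts beyond tolerance.

    A 1 mm offset is invisible to a human reviewer but trivial for a
    calibrated camera pipeline to flag against the template.
    """
    failures = []
    for name, spec in TEMPLATE.items():
        mx, my = measured[name]
        if abs(mx - spec["x"]) > spec["tol"] or abs(my - spec["y"]) > spec["tol"]:
            failures.append(name)
    return failures

forged = {
    "commissioner_signature": (63.0, 48.5),  # signature printed 1 mm off
    "state_seal": (12.5, 40.1),
    "ghost_portrait": (70.2, 20.0),
}
print(check_document(forged))  # ['commissioner_signature']
```

Every feature a forger reproduces slightly wrong is another independent chance for the pipeline to reject the document, which is why templates with many checkpoints are so hard to beat.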

Rachel Tobac (29:47):
That's going to be really hard for a lot of organizations to stay up to date.

Mike Engle (29:52):
That's right. That's right. So we talked a little bit about service desk scattered spider attacks. Are you seeing any trends in hiring fraud?

Rachel Tobac (30:03):
Yes, a lot. So these types of attacks are often coming from North Korea, which is really interesting. They have whole farms of laptops that are in the US. There's actually somebody who was just found by the FBI; I think they had 90-some laptops, so 90 different employees from North Korea were working through that. And yeah, they use deepfakes. They use voice clones to sound different. They also use ChatGPT and LLMs on the back end so they sound really comfortable with the English language. We know a lot of examples of people who have caught these North Korean deepfakers, one of which, notably, was KnowBe4, the security awareness training company, probably the largest in the world. They put out a blog post before everybody, before anybody, honestly, was admitting it, which was very brave to do. They said, we hired somebody who was using a deepfake and is from North Korea, and the second they got access to our network and our machines, they started uploading malware to our systems. We caught them, we figured out who they were, we traced them back with law enforcement, and they were from North Korea. So it's pretty frightening to see.

Mike Engle (31:20):
Yeah, I heard the stat, this was about eight months ago, that a billion dollars is going to North Korea through hiring. I mean, they're amazing workers. They're doing five jobs at a time. But the scary part is they have access to your stuff and can plant Trojans, or ransomware you, and take it to a whole other level, rather than just stealing a paycheck. So yeah, we're seeing a big uptick in going not just to knowing who you're hiring, but who you're interviewing, because proxy interviewing is the same type of threat, where you hire somebody else to do the interview, but the person who shows up on day one is a different actor.

Rachel Tobac (31:58):
Yeah, I've been seeing a lot of that with deepfakes lately, and it's still kind of awkward. Oftentimes people are able to catch it because it just seems so off, but I think it's going to get better. I mean, give it six months, and these types of proxy or deepfake-style interviews I think are absolutely going to be usable. Somebody just asked in the Q&A, are either of you a deepfake presenting right now? No, I'm not. I don't know about you, Mike.

Mike Engle (32:23):
Why don't you show

Speaker 4 (32:24):
Check us out.

Mike Engle (32:24):
That's right. That's right. We can do stick your tongue out or

Speaker 4 (32:28):
Whatever. Exactly.

Mike Engle (32:30):
Yeah, you're hearing about that. And then of course there's the rumor, at least, I don't know if it's true, that you can't say anything negative about the DPRK main guy. So if you ask them to say something like, does Kim Jong Un smell, they can't say that he does, or something.

Rachel Tobac (32:49):
I have been hearing people talking about that. I just can't imagine that we're going to integrate that into every video.

Mike Engle (32:55):
It's like the safe word. It'd just be

Speaker 4 (32:57):
Crazy.

Mike Engle (32:59):
That's right. Rumpelstiltskin. So another question for you is third-party risk. Do you think techniques like the one you tested with us here have a place with contractors and who you're hiring? You don't really hire contractors; you hire the company, typically, and you might interview 'em. But are you seeing adoption there?

Rachel Tobac (33:23):
Yeah, absolutely. Because we're seeing Scattered Spider attacking the service desk over the phone, and oftentimes those service desks are managed by a third party, these third parties need the same technical tools and human-based protocols to verify identity. If anything, they should be more protected, because they have less context on who they actually should be verifying. They don't know folks personally, and they're the first people that we're going to contact when we're trying to hack. So I have definitely seen a major shift in the past, I'd say, 18 months with the organizations that I'm working with: extending tools to verify identity to the service desk and to customer support, verifying identity at the human level with technical tools in addition to human-based protocols.

Mike Engle (34:11):
Sure. That's great. So do you want to show a Mike Engle hack?

Rachel Tobac (34:17):
I do. Very badly. Folks, if you have questions, you're welcome to put them in the Q&A. We're going to be answering Q&A later, so this is probably a really good time to put those in there while I'm getting set up. Okay, so let's do a little live hacking demonstration for you, Mike. To be clear to everybody: you gave me permission to clone your voice and clone your face. This is not actually a hack of you. This is very theoretical. I didn't take over any of your systems, I haven't logged into anything, and nor will I; that would get me a knock from the FBI, I'm sure. So Mike, what I did is I took your voice from one of the many cool podcasts that you were on, I took about a minute of your voice, and I used it to create a voice clone. So I want you to imagine that I am pretending to be you. I spoof your phone number and I call up a teammate of yours, say somebody on the service desk, and I ask them for wire transfers, passwords, codes, password resets, multifactor authentication resets to an attacker-controlled device. This is how I would do that, and I want you to tell me what you think.

Speaker 5 (35:30):
Hey, sorry about contacting the service desk so late. Can you help me reset my password and MFA new password over the phone would be helpful.

Rachel Tobac (35:41):
I'm curious what you think. I mean, you are you, so you know what you sound like, but what do you think when you hear that?

Mike Engle (35:48):
So it definitely has my tone, the pauses, the way I stitch words together, whether it's an "um" or whatever that would be. It's spot on for me. The sound of my own voice, I don't get to hear that often, right, unless I play it back. But I definitely think that would get by somebody who's just doing their job.

Rachel Tobac (36:11):
Right? Somebody's just doing their job, and they want to help somebody who's an executive at a company. Oftentimes people will think, I mean, it's my job. It's my job to do a password reset. This executive is asking nicely, it shows up with their phone number on caller ID, and if they don't have the right human protocols and technical tools to verify identity, why would they not go forward with it? One thing I really want to highlight is how it really matched your cadence. I'm just going to play the beginning of this again, because I found this to be really interesting, and this is, by the way, raw AI, without edits from me to change the cadence of your voice. The way that it pauses I found to be really fascinating.

Speaker 5 (36:50):
Hey, sorry about contacting the service desk so late. Can you help me reset my,

Rachel Tobac (36:55):
You have a very unique way of speaking, which I think is especially fun for voice clones. I really love the way that you take breaks in the middle of your sentences, because it really puts the emphasis on what you're asking for, and the voice clone picked up on that immediately. It's so cool to see. And I think when somebody has a very generic way of speaking and you do a voice clone, it's like, okay, is that really that impressive? But the fact that it was able to get your cool cadence in there is incredible to me.

Mike Engle (37:24):
Yeah, that's it. Tools like ElevenLabs have just made this so amazingly easy: upload and clone. In fact, we just recorded a round table of our executives talking about our recent Series B announcement, and we did it really kind of ad hoc. We were all together and said, let's do it. And we didn't have professional microphones, so we just recorded it with some not-that-great equipment. I stripped out the audio, put it into ElevenLabs, and it made it better, and you just layer it right back on top with Premiere or whatever, about an hour of effort. The voices sounded perfect, all the inflection, it was exactly the same, just clearer, and you

Speaker 4 (38:09):
Use your voice clone. Within that,

Mike Engle (38:12):
I voice cloned our own voices on top of our voices to make 'em sound better.

Rachel Tobac (38:16):
That's so interesting. Oh

Mike Engle (38:17):
My God. And it worked really well. Yeah. So you upload an MP3, say, here's my cloned voice, and it just says the same thing again, just clearer. So the AI did better than our iPhone or whatever we recorded with. Pretty amazing. Call it white-hat use of AI to fix it.

Rachel Tobac (38:37):
Yeah, I love that. Okay, so let's do a full video deepfake of you now. This tool, and I'm not going to name what this tool is, by the way, because I think it's too powerful right now, basically takes that voice clone. It then takes, I think I gave it like a 45-second screen grab of a YouTube video that you were in, and it lip-dubs exactly what I just had from that voice clone into a live Zoom or Teams call. Terrifying. Are you ready to see this?

Speaker 5 (39:12):
Let's do it.

Rachel Tobac (39:13):
Okay, here we go.

Speaker 5 (39:14):
Hey, sorry about contacting the service desk so late. Can you help me reset my password and MFA new password over the phone would be helpful.

Rachel Tobac (39:25):
I mean, it's really wild to me that that could pop into a Zoom or Teams call and ask for that, and most people would think, if it says your name and it sounds and looks like you, why would I not give you the password? I mean, most people say, just jump on a live call, and that's how you can verify that person is who they say they are. Well, if you get that live call from the attacker, like what happened with that multinational that wired $25 million to the attackers, if you receive that Zoom link from that person, the person who pops in is going to be a deepfake, right? So it's really scary to see this.

Mike Engle (39:58):
And these Zoom calls aren't really authenticated. You can enforce that it's a valid Zoom user, but very rarely do you have organizational authentication, especially when somebody's remote. So it's a tough one. What we do for this, just as an internal company policy, is force the user to engage with our identity wallet. So instead of that, I just put up a QR code on the screen and say, would you scan this with your wallet? Now I have a digital certificate that comes back, my phone turns green, and I know I'm talking to the person, so it's completely out of band. I don't care what the video looks like. In fact, I don't need video; I can just have audio, as long as you can push a button for me that says you did that digital signature. It's kind of simple, but very effective if you can get the tooling in place.
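The flow Mike describes is essentially challenge-response: the verifier displays a fresh one-time challenge (the QR code), the caller's enrolled wallet signs it after a local biometric check, and the verifier checks the proof. The sketch below is a hypothetical simplification, not the 1Kosmos implementation; it stands in for the wallet's digital signature with an HMAC over a pre-shared key, where a real deployment would use asymmetric credentials.

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical sketch of out-of-band caller verification.
# A pre-shared key stands in for the wallet's enrolled credential;
# a real system would use asymmetric signatures, not HMAC.
WALLET_KEY = secrets.token_bytes(32)

def issue_challenge() -> dict:
    """Verifier generates a fresh nonce to display as a QR code."""
    return {"nonce": secrets.token_hex(16), "issued": time.time()}

def wallet_sign(challenge: dict, key: bytes) -> str:
    """Caller's wallet signs the nonce (after its local biometric check)."""
    return hmac.new(key, challenge["nonce"].encode(), hashlib.sha256).hexdigest()

def verify(challenge: dict, proof: str, key: bytes, ttl: float = 120.0) -> bool:
    """Verifier accepts only a fresh, matching proof."""
    if time.time() - challenge["issued"] > ttl:
        return False  # challenge expired; blocks replay of old proofs
    expected = hmac.new(key, challenge["nonce"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, proof)

challenge = issue_challenge()
proof = wallet_sign(challenge, WALLET_KEY)
print(verify(challenge, proof, WALLET_KEY))       # genuine caller with the wallet
print(verify(challenge, "deadbeef", WALLET_KEY))  # deepfake with no wallet
```

The point of the design is that the proof lives outside the audio/video channel entirely: a perfect voice or face clone contributes nothing, because only the enrolled device can answer the challenge.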

Rachel Tobac (40:50):
I like that. That's really neat. I hope more people use a tool like that to verify identity. And I think we are going to have more news stories that come out of attackers using live Zoom, Teams, and WebEx calls like this. And I think, unfortunately, it's going to take many organizations experiencing an attack like this before they realize how scary it is. But once a big one hits the news, I think that's when people are going to go, okay, it's time. We need a technical tool for this.

Mike Engle (41:16):
Right? And I don't know if there's an answer for this one. It's something I've just batted around, but how do you verify authenticity beyond the live Zoom we have here? This came from the company, it's on our LinkedIn page, you got an invite from 1kosmos.com, so you at least kind of know you're engaging with the real one. But what about somebody who records this now, plays it tomorrow, and changes seven of my words to take something out of context? Have you seen anything that verifies the authenticity? Obviously, when you get a banking email, the banks will tell you, don't ever use that; go to bank.com and engage there. And to me, I think that's the only way. Any other tricks up your sleeve for that one? It's a tough one.

Rachel Tobac (42:02):
They have these tools that will attempt to find differences between different pieces of media, and they can do it, but it takes a person having the technical know-how to actually do that. And most people are not going to take the time to verify that a person actually said those seven words that way as opposed to the other way. So it's really going to go back to primary sources. When we first started using the internet, when I was trained as a student in school, as a young person, they were saying, you have to use primary sources for everything, find the original version. And that's hard sometimes on the internet, when everybody's taking clips and reposting to Reels and TikTok and Twitter and all these places.

Mike Engle (42:43):
Yeah, little snippets, that's right. You've just got to try to use your head, but to your point, you're just scrolling, and you could change an election or a stock price now with one post.

Rachel Tobac (42:55):
It's true. I mean, we've seen posts on Twitter move markets. I was actually just on NASDAQ live yesterday talking about this, that people will edit, alter, or use DeepFakes to change the way that people think about their stocks, and we've seen massive stock market shifts from a simple tweet or a deepfake. So it's a scary thing out there right now.

Mike Engle (43:17):
Yeah. Yeah. What's next? Crystal ball: 12, 24 months. Anything that comes to mind as we wrap up here?

Rachel Tobac (43:26):
I honestly hate to think about it because it's really scary. I think even six months from now, deepfakes are going to look way better than what you just saw me create. I think it's going to get to the point where people want to verify identity when they're talking to their family, not just when they're thinking about enterprise. So I think 1Kosmos is going to want to think about consumer grade too.

Mike Engle (43:46):
Yeah. No, we are. We're heavily in the space. Hopefully you'll see us on lots of websites coming out here soon. We'll make some big announcements there.

Speaker 4 (43:53):
I love that.

Mike Engle (43:53):
Yeah. Well, amazing. Where can people follow you, check you out, all that stuff.

Rachel Tobac (44:00):
Yeah. My name right here: @RachelTobac. That's what I am on Twitter, LinkedIn, Instagram, Mastodon, Bluesky. I mean, the internet's so fractured at this point, but you can find me anywhere under my real name, Rachel Tobac, and at socialproofsecurity.com.

Mike Engle (44:15):
That's great. Yeah. Similarly, we attend every identity show there is. Identiverse, and Gartner Identity & Access Management is coming up in December, so we'll see a lot of our industry friends and family there, and FedID. There are just all these great conferences where we hope to meet people. And we will be replaying this, so for those that missed it, hopefully you'll be able to hop on and check it out. Thank you so much for coming; of course, this was really a fun experience.

Rachel Tobac (44:46):
Do you want to do some q and a from the audience?

Mike Engle (44:49):
Yeah, I know you answered one already. Let's see here: are we a deepfake now? Right, we did the hand thing. So why is that? We've seen how this can really mess with a deepfake. What's your take on why that works?

Rachel Tobac (45:09):
Yeah, because when you're popping in live and you're wearing somebody's face like a mask, there's an outline, with the way it's programmed right now, where unless you do it a hundred percent perfectly, there's a box around your head. When you turn your head very hard to the side or you put a hand in front of the face, you can see some distortion in that box. We've seen that this is pretty useful. So I would recommend just having somebody move a hand in front of their face or turn their head to the side. Another thing you can do is request that the person you're talking to take any action: say these words, or look up, look down, look right, look left, stick your tongue out. I mean, whatever. It's kind of silly, but it does work, because if we're doing a canned deepfake like the one you just saw, where we request something that we need, we don't have a lot of flexibility with what we're going to say next within that deepfake. You can oftentimes make it go totally off course by requesting that the person do something that they're not currently doing.
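The property Rachel is relying on is unpredictability: a canned deepfake prepared for one script cannot satisfy an action it didn't anticipate. A verifier could formalize that by drawing random challenges per call. The sketch below is purely illustrative; the challenge pool and function names are assumptions, not from any product or protocol mentioned above.

```python
import secrets

# Illustrative pool of live-call liveness challenges; a canned deepfake
# recorded for one script cannot respond to a randomly drawn action.
CHALLENGES = [
    "turn your head fully to the left",
    "pass your hand in front of your face",
    "stick out your tongue",
    "look up, then look down",
    "repeat this phrase: purple elephant seventeen",
]

def draw_challenges(n: int = 2) -> list:
    """Pick n distinct challenges uniformly at random for this call,
    so the caller cannot pre-record the correct responses."""
    pool = list(CHALLENGES)
    return [pool.pop(secrets.randbelow(len(pool))) for _ in range(n)]

# The human verifier issues these live and watches for box artifacts
# or distortion as the caller complies.
for action in draw_challenges(2):
    print("Please:", action)
```

Using `secrets` rather than `random` is deliberate: the draw should not be predictable to a caller who knows the verifier's process.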

Mike Engle (46:13):
Right. Yeah. Just to mix it up a little bit. Then one of the other questions here is, what are some of the tools out there? I mentioned ElevenLabs, which is amazing at voice. It's like 11 bucks a month, too. I don't know if they did that on purpose, but you get a lot of airtime for that. But what are some of the video tools that you like? I've used FaceSwap and DeepFaceLab. HeyGen, right? Is another good one.

Rachel Tobac (46:40):
HeyGen's pretty solid. DeepFaceLive is a tool that, if you have some experience with coding, you can spin up and do some live deepfakes in Zoom and Teams. It's pretty scary.

Mike Engle (46:53):
Right, right. So yeah, that'd be just a quick Google search, and you'll find that there are some free ones. They're pretty good at monetizing to make it even just a little bit better, and sometimes you need a GPU, like a decent Nvidia RTX. So you play around with them, and some of 'em are right on the phone for just a dollar or two.

Rachel Tobac (47:11):
Yeah, there's lots of tools available on the app store that I'm sure you'll find.

Mike Engle (47:15):
Right. And then have either of you consulted with higher ed organizations that have been dealing with financial aid fraud? What can you briefly share on this?

Rachel Tobac (47:27):
Unfortunately, yes. I've worked with higher ed organizations, and I've worked with states that are combating fraud, people who are basically saying that they're out of work and need access to government funds. So this type of fraud happens with higher ed institutions, and it happens at the state and government level. And yeah, it's been a huge problem for people verifying identity. People will fraudulently take out loans in other people's names or claim that they're someone they're not to gain access to their financial aid. I think that there's a lot of opportunity for verifying identity in these spaces, and these institutions tend to move slightly slower than others, so we'll see that adoption increase soon.

Mike Engle (48:11):
They don't quite have the financial services budgets, right, to stop the bad actors. Yeah.

Rachel Tobac (48:16):
Yeah. They're not a tech company that just has an unlimited budget. I mean, oftentimes the budgets for some of these higher ed organizations are provided by the state.

Mike Engle (48:26):
That's right. And then another question: how hard is it to track these bad actors as they're attacking you? I know this one because it's really hard. There are VPNs, compromised machines, and all types of tricks. Some of 'em are getting caught, but it's pretty rare.

Rachel Tobac (48:46):
It is pretty rare. Most people who do cybercrime don't get caught. That's unfortunately the truth. I mean, we were hearing about the deepfake attackers that hit KnowBe4, I think it was maybe 16 months ago, maybe somewhere between 12 and 16 months ago, and we just got the first report of the FBI catching some of the people who were involved with those deepfakes from North Korea, and that just happened last month. So it takes a long time to catch people, and you have to have the resources of the FBI behind you to do it. Sometimes attribution is hard.

Mike Engle (49:18):
Right, right. Yeah. Somebody commented on my use of a token or a wallet, that it's not easy, right? You're giving somebody another tool that they have to maintain, keep current, remember to use. So it is hard for organizations. You can build it into how they log in every day, so it's easier there. But in the consumer world, share of app and share of wallet is really hard. Unless Apple or Google built it into something, it's going to be hard to get scale there. Next question: what are ways to prevent a business from these attacks? I'd love to see how I can test my gen AI and LLM products. So I think the question there is how organizations would test the efficacy of some of these products. I mentioned some of the standards; you can make sure vendors have met them and been tested by the industry experts. But anything else they can do internally?

Rachel Tobac (50:15):
Yeah, I'm interpreting this question as: what are ways to prevent these deepfakes as a business? How do we make sure that people aren't using deepfakes against a company or trying to take over accounts? I mean, we just talked about it. You can use 1Kosmos, right? That's why we're all here today to talk about that. Another thing that you can do is test your internal protocols and understand how people verify identity. Do they verify identity using something like KBA, knowledge-based authentication questions? Mother's maiden name, where did you grow up, what was your mascot? I actually just did a demo on a podcast called Scammer Payback that was released this week. I'm going to post about it on LinkedIn, probably tomorrow. It showed me siphoning out the answers to these security questions live over the phone, on the podcast, from a childhood friend of the podcast host, with a voice clone. So we can't use KBA, we can't use knowledge-based authentication anymore. It's just too easy to get the answers to the questions, and if they're not available on Google or social media or a data brokerage site, I can just siphon them out if I'm a really interested attacker. So we've got to use technical tools to back us up.

Mike Engle (51:25):
Yeah, I'm changing the definition of KBA to known by anyone.

Speaker 4 (51:29):
Okay,

Mike Engle (51:30):
That's good. We're going to mix that up, because that'll be like, oh, really? It's known by anyone? It pretty much is.

Speaker 4 (51:34):
I like

Mike Engle (51:34):
That. And we're also using a lot of HBA. You're familiar with

Speaker 4 (51:37):
HBA? HBA? Are you about to make a joke? I'm ready for this.

Mike Engle (51:41):
I am. Yeah. It's hope based authentication.

Speaker 4 (51:43):
Okay. Yep.

Mike Engle (51:45):
Yeah,

Speaker 4 (51:45):
I was going to say this one. I don't know.

Mike Engle (51:48):
Tell all your friends. We'll trademark it.

(51:51):
Yeah. Just checking the questions here: best form of defense for those who hire remotely? So it's a couple of things. Start as early in the process as you can with a verified identity. A lot of HR organizations are still just eyeballing a driver's license, and that is 100% bad. We recently implemented our identity verification at one of the largest retailers in the US, and they were suffering massive losses because the retail associate would just look at it and say, looks great. Same thing with banks: they're doing this for opening accounts or verifications, and I know they're bleeding money. So you use the tech as early in the process as you can. It's a couple bucks, and it'll save potentially tons of money and reputational risk or whatever else might happen. So, anything else?

Rachel Tobac (52:47):
Yeah. I can create an ID that passes many visual checks, just like your eyeballs would, and it even scans. So it's scary. There are tools available on the dark web, AI tools now, that make those custom IDs, and they scan, they print out, and they look real. So you don't want to be that easily fooled. You've got to use technical tools to verify identity.

Mike Engle (53:11):
That's right. Then: are you seeing the tools you use, or that are out there, work for live calls like this that have some length, like an interview? If you've got a 30-minute interview with four or five people on a Zoom, are you seeing 'em have the level of fidelity to pull it off? Or is it typically a shorter attack?

Rachel Tobac (53:33):
Yeah, I mean, it depends on the pretext, so that's who they're pretending to be, and whether or not they're claiming to have good wifi. If you say that you have bad wifi and you kind of step down the graininess of the Zoom, a lot of times you can get away with a live deepfake, where you're wearing a face like a digital mask, and you can keep going throughout the entirety of the call. But if you claim to have good wifi and it looks clear, then oftentimes you'll be able to see that box around the face, and now it looks off to people. The live voice clone is harder; a static dubbed lip sync is much easier.

Mike Engle (54:11):
Yeah. Yeah, the little micro movements. That's why this messes it up, right?

Rachel Tobac (54:17):
It does.

Mike Engle (54:18):
This is a great question: can you embed a deepfake detection tool into video platforms like Teams, WebEx, or Zoom? The answer is yes. It's hard to get both parties to participate in that, but it's definitely doable for corporate clients. I don't know if you've noticed, on Teams there's a little verified or unverified check mark in the corner, and what that's doing is verifying that you at least authenticated into Teams with your Entra ID, your Microsoft backend directory, whatever you want to call it. And so that is a really good indicator that the person has the username, password, MFA, or passwordless that gets you into the corporate systems, not just an untrusted video session. So I am seeing a lot more of that, and there are third-party plugins as well that can do these types of things.

Rachel Tobac (55:05):
Yeah, there are.

Mike Engle (55:09):
And then: would you recommend YubiKeys? They're good. Yubico has built a great business. But just like the person who made the comment about the token and the wallet, it is incredibly onerous to manage YubiKeys. You have to ship one to every person. People don't carry 'em around. I have seven keys laying around for different banking applications, and if I'm out and about, I don't have 'em all in my pocket. So they're hard. Our answer for that is a thumbprint-based authenticator that stays with the endpoint, with the Windows computer, for example. So a YubiKey is given to a person, but we have something called a 1Key that stays with the machine. Now I walk up, tap a thumb, and I'm done, and I can really go to any machine. So YubiKeys are great if you can manage them. If not, we do have a way that might work for what we call frontline workers. Those are people that can't use a phone in the field, such as your call center operators, your factory floor and retail associates. You can't make them use a phone to log in; it can be illegal, it can be too expensive, et cetera. So that's been a really big solution that we've been looking at.

Rachel Tobac (56:26):
That's awesome.

Mike Engle (56:27):
Yeah. Okay. I think we have about done it, and we're coming up on the hour. So once again, thank you so much for joining. This was really a delightful chat, and for hacking and not hacking us, I really appreciate what you did.

Rachel Tobac (56:44):
You're welcome. Thanks so much for having me, Mike.
Mike Engle
CSO
1Kosmos

Rachel Tobac
CEO
SocialProof Security

What happens when one of the world’s top ethical hackers takes on the defenses of a modern enterprise?

In this live, eye-opening session, renowned social engineer Rachel Tobac exposes how AI-assisted impersonation attacks are bypassing traditional technical defenses and exploiting human trust at scale. From HR onboarding to IT service desk calls, identity is being compromised before it’s even authenticated.

You’ll witness real-time tactics used in modern impersonation and social engineering campaigns — the same methods behind high-profile breaches at companies like Marks & Spencer, Qantas, and WestJet, and recent attacks linked to North Korean operatives targeting the Fortune 500.

We’ll then get actionable and focus on how you can stop these attackers in their tracks.

Don’t miss this virtual showdown between today’s most advanced attacks and most resilient defenses. You’ll walk away with an understanding of how to spot the latest AI-powered attacks at both the human and technical level.

Rachel is a hacker and the CEO of SocialProof Security, where she helps people and companies protect their data by training and pentesting them on social engineering threats. She got her start winning second place in DEF CON's thrilling spectator sport, the Social Engineering Capture the Flag competition, three years in a row.

Rachel served on the CISA Technical Advisory Council under Director Jen Easterly, contributing her expertise to national cybersecurity initiatives. Her captivating real-world hacking stories have been featured in The New York Times and on 60 Minutes, Last Week Tonight with John Oliver, NPR, CNN, NBC's Nightly News with Lester Holt, and more. In her remaining spare time, Rachel serves as Chair of the Board for Women in Security and Privacy (WISP), where she works to advance women to lead in the field.

Ready to go Passwordless?

Indisputable identity-proofing, advanced biometrics-powered passwordless authentication and fraud detection in a single application.