How gen AI is making real estate cybercrime easier than ever


As the FBI report suggested, generative AI shares a large part of the blame for the uptick in financial crimes.

“It lowers the playing field,” Matt O’Neill, a retired Secret Service agent and the co-founder of 5OH Consulting, said.

O’Neill said that previously, cybercriminals would specialize in certain parts of a crime or in certain technologies. They would then work together, offering each other what was essentially “cybercrime as a service” to defraud their victims.

Now, however, O’Neill says AI has made it so that cybercriminals no longer need any real level of technological proficiency.

“Two years ago, the lowest of the low-level actors didn’t have a lot of success, it was a pure volume play, but now with AI it is so much easier for them to create sophisticated attacks,” O’Neill said.

While cybersecurity experts believe fraudsters are just in the early stages of AI utilization, they have already seen some impressive applications.

Adams and his team recently encountered a spoofed website for a real title company, something he finds deeply concerning.

“It was a direct replica of the actual title company’s website. Everything was the same except for the phone numbers and they had already infiltrated one transaction posing as the title company,” Adams said. “Those situations are the ones that scare me the most, especially when it comes to the advances of AI because it is no longer a bunch of humans trying to figure out how to rebuild a website. With AI they are able to just scrape it and rebuild, making it super simple.”

But sophisticated website spoofs are not the only way fraudsters are using AI. Cybersecurity experts said they are also seeing generative AI applications pop up in things as mundane as phishing scams. According to industry leaders, fraudsters’ use of AI is making the scams believable, and unfortunately for the victims, it’s working.

According to a study conducted by Fredrik Heiding, Bruce Schneier and Arun Vishwanath at Harvard University, 60% of participants fell victim to AI-automated phishing, a success rate the researchers said is in line with that of non-AI phishing messages crafted by human experts. What they found most worrisome, however, is that the entire phishing process can be automated using large language models (LLMs), reducing the cost of phishing attacks by more than 95%.

“Because of this, we expect phishing to increase drastically in quality and quantity over the coming years,” the researchers wrote in an article in the Harvard Business Review.

The improved sophistication of phishing scams has sounded alarms for Andy White, the CEO of ClosingLock, especially since much of the cybersecurity focus has been on more sophisticated attacks and not on phishing scams, which have been around for decades.

“We don’t really think about phishing scams as a way fraudsters can use AI to infiltrate the real estate industry, but if you can use AI to make a fraudulent link that is more believable and more people click on it, then you can infiltrate any party in the transaction that you want. You could even get into a title company’s systems and are then able to send emails from the title company itself and not a spoof account or change all the account numbers where money goes to fraudulent accounts,” White said.

Although this is scary in and of itself, cybersecurity experts warn that even scarier scams are on the horizon as it becomes easier to make very convincing deep fake videos.

“The technical bar and the level of sophistication to carry out these attacks is not particularly high anymore and the cost of the hardware to do it has come down to a reasonable level,” John Heasman, the chief information security officer at identity verification firm Proof, said. “We expect to see more instances of real-time face swapping and real-time production of deep fake videos throughout the year.”

While Adams believes deep fakes pose a very real threat to the housing industry, he doesn’t expect to see scams using the technology for several more months.

“I think this year we are going to start seeing some really impressive fake IDs for virtual notaries and things like that, and that is going to be one of the biggest risks of the year, but when it comes to deep fakes and getting on a Zoom and not knowing if you are really talking to the real person, I think we’ll begin to see that late this year or early 2026,” Adams said.

Given all of this, cybersecurity experts acknowledge that it is easy for housing industry professionals to feel overwhelmed by the threats posed by fraudsters and their newly honed AI capabilities, but they believe it is not all doom and gloom.

“The small and medium-sized businesses are becoming more mature in their security, doing things like conditional access and dialing up their security hardening, which is promising to see,” Kevin Nincehelser, the CEO of cybersecurity firm Premier One, said.

While the fraudsters may have some new tricks up their sleeves, Nincehelser said the “good guys” also have some new tools at their disposal.

“A lot of security apparatus pieces are also using AI now and it has been very helpful in finding and mitigating more attacks,” Nincehelser said.

Companies working with Premier One on their cybersecurity have begun using AI-powered email filtering products, which Nincehelser said have been a game changer in preventing both fraud and ransomware attacks.

“Previously, email filters just looked at patterns, but then the bad guys stopped using patterns and started using AI, and the AI tools we have can stop those attempts or attacks that come in via email because they are looking at behavior and intent,” Nincehelser said. “The AI tools aren’t just seeing the link in the email like a human would, but they are seeing the next three steps beyond that link and what it will ask the user for. From a defensive perspective, AI email security has been one of the most powerful new technologies to rise out of this so far.”

Although O’Neill acknowledges the need for advanced fraud detection and prevention tools, he believes the housing industry could also use a push from the government to further incentivize it to improve its cybersecurity.

“I am working with state legislators to create some sort of duty of care requirement that says you have to have these basic steps in place, like multi-factor authentication and using secure communication platforms outside of web-based email when you are working with clients transacting over a certain dollar amount,” he said.

On the federal level, O’Neill said there is a push in the financial sector to leverage Section 314(b) of the Patriot Act, which enables financial institutions to share information with each other. He believes wider adoption of the provision would go a long way toward preventing fraud.

According to O’Neill, part of the challenge is that participation in Section 314(b) is currently voluntary, and many banks have chosen not to actively participate. As a result, banks are often not held responsible for losses, which are simply passed on to the consumer.

“When they can’t do that anymore, then they are going to have to start communicating with each other,” O’Neill said. “There could be some meaningful changes if financial institutions did things like matching account numbers with account holder names and things like that.”


