Cybersecurity and AI: Defending ourselves with, and from, a weapon of our own making.

How will cybersecurity be impacted by the evolution of AI?

The Evolution of AI

The rapid advance of GPT (Generative Pre-trained Transformer) models over such a short period has given rise to plausible-sounding “conversations” that have sparked excitement, furore and even controversy. The latest GPT model (version 3.5), paired with the chat-style interface released in OpenAI’s ChatGPT, has everyone talking about AI and wildly theorising about, or outright predicting, the rise and fall of an AI-influenced human civilisation.

I’ll be focusing on the practical, immediate short-term impacts these Large Language Models (LLMs) will have, and what they could mean for your business.

You have quite likely received an email from a “Nigerian Prince” promising millions of dollars, if only you would help them get the money out of their country. Or perhaps you or a family member have been contacted by phone by someone claiming there is a tax debt that can only be resolved by purchasing various retail gift cards. These scams have essentially been reduced to targeting only the naïve and vulnerable, as the average person is quick to see them for what they are.

The mechanism now is to find the 0.1% of people who are gullible enough, and have the means, to be exploited.

This changes drastically when LLMs can conceive and formulate per-person stories, or scams that differ in narrative and method across an effectively unlimited pool of potential targets. The plausibility of the “Hey Mum, this is my new number, I broke my phone” scams will increase drastically. The wording will be more convincing, the back-and-forth communication will appear more realistic, and each scam will be personally tailored to the target with seemingly astonishing precision.

Your sales department will receive requests for quotes that mimic genuine leads. Your accounts department will receive invoices and requests to change banking details (from both employees and customers) that go back and forth like correspondence with a real human being.

Plausible back-and-forth across multiple emails, convincing human-sounding audio conversations, specifics targeted to your business, your department and you personally: all automated at a scale designed to find weaknesses, such as the staff member who fails to spot a phishing attempt because they don’t follow security checks.

This will result in increased financial losses and material leakage from businesses as the rate of these AI-aided scams grows. Everyone involved in the business can, and will, be a target.

Unlike current phishing and fraud scams, which still require some degree of human labour and interaction, the ability to scale these attacks with machine learning bots means that this style of attack, phish or fraud will grow to a truly disruptive level.

To put it simply, it’s the email spam of the last two and a half decades, now with convincing, individually tailored text that replies automatically and becomes ever more convincing through machine learning.

How can your business “beef up” security?

  • Prepare your staff for machine-generated voice calls and emails asking for changes to financial information.
  • Review your processes.
  • Implement multi-channel verification, such as requesting a call or paper form.
  • Consider using client self-management portals with MFA and identity verification.

On the plus side, despite their potential to make attackers sound more authentic, large language models can also help us detect fraud attempts by learning what legitimate communication looks like and flagging what deviates from it.
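
As a rough sketch of that idea (not a production system, and the training examples, names and threshold below are purely illustrative), a simple text classifier can be trained on known-legitimate and known-fraudulent messages and used to score new emails before a human ever acts on them:

```python
# Minimal sketch: score incoming emails for likely fraud.
# Training examples and the 0.5 threshold are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative training set: 1 = known scam/fraud, 0 = legitimate.
emails = [
    "Hey Mum, this is my new number, I broke my phone. Can you transfer money?",
    "Please update our banking details for the attached invoice immediately.",
    "Hi team, the quarterly report is attached for review before Friday.",
    "Thanks for the quote, we'd like to proceed with the order as discussed.",
]
labels = [1, 1, 0, 0]

# Turn the raw text into word-frequency features and fit a simple classifier.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
features = vectorizer.fit_transform(emails)
model = LogisticRegression()
model.fit(features, labels)

# Score a new incoming message; anything above the threshold gets flagged
# for extra verification (a call-back, a paper form, and so on).
incoming = ["Urgent: our bank account has changed, please pay the new account today."]
score = model.predict_proba(vectorizer.transform(incoming))[0][1]
print(f"Fraud score: {score:.2f}")
if score > 0.5:
    print("Flag for manual verification before any payment or detail change.")
```

In practice you would train on far more data, and most businesses will consume this as a feature of their email or finance platform rather than rolling their own, but the principle is the same: the machine learns what “normal” looks like for your business and surfaces the outliers.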

Don't rely solely on expensive software or hardware to provide perfect protection against cybersecurity threats. Instead, focus on implementing basic cybersecurity practices, allocating adequate resources to your IT department, and using machine learning strategically.

A sign of maturity in a business lifecycle is when cybersecurity is adequately invested in and we start to see systems like a SIEM (Security Information & Event Management) being deployed. A SIEM records the events happening across the network, enabling better visibility and trend analysis for improved security outcomes. Connecting all your systems through a SIEM allows better detection of anomalies, faults and attacks. This is where we can judiciously lean into machine learning to assist the analysis, detection and response efforts.
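
As a hedged illustration of what that can look like (the feature names and numbers below are invented, and real SIEM platforms expose this kind of analysis through their own tooling), an unsupervised anomaly detector can be pointed at per-user activity summaries pulled from the SIEM:

```python
# Minimal sketch: flag unusual user activity from SIEM event summaries.
# Feature values are invented for illustration; a real deployment would
# pull these from the SIEM's own exports or APIs.
from sklearn.ensemble import IsolationForest

# Each row summarises one user-day: [failed logins, emails sent, MB uploaded].
baseline_activity = [
    [1, 40, 120],
    [0, 35, 90],
    [2, 50, 150],
    [1, 45, 110],
    [0, 38, 100],
]

# Learn what "normal" looks like from historical activity.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(baseline_activity)

# Score today's activity: -1 means the row looks anomalous, 1 means normal.
todays_activity = [
    [1, 42, 115],     # looks like business as usual
    [30, 400, 5000],  # burst of failed logins and a very large upload
]
for row, verdict in zip(todays_activity, detector.predict(todays_activity)):
    status = "ANOMALY - investigate" if verdict == -1 else "normal"
    print(row, status)
```

The point isn’t the specific library; it’s that once events from across the business land in one place, this kind of analysis becomes possible, and your IT team can respond to a flagged handful of events instead of wading through everything.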

Today, many SIEM providers already offer machine learning features to help IT staff analyse the data in their systems. With advanced language models and the ability to train tailored models, you can have an in-house “ChatGPT” that speaks your business processes natively. Instead of automating sales or support, focus on arming yourself against machine-learning-based attacks. Empower your IT department to analyse data quickly and intelligently, and introduce machine learning to aid their efforts. This gives you a fighting chance of avoiding drastic measures such as shutting down phone lines or aggressively filtering email.

Our capacity as humans to detect falsehood is clearly lagging behind the cutting edge of technologies built to deceive us. To minimise the resulting losses and impacts to our businesses, we need to strengthen our human processes, put additional safeguards on our key points of change, and use machine learning to aid our detection and analysis efforts. It’s going to be an interesting ride, and there will be many bumps along this headlong rush to harness AI, but the process will settle, and how well your business weathers it will depend on the planning and preparation you do today.