Criminals are exploiting AI to create bigger, better, faster, stronger scams
Bigger. Faster. Stronger. Better. In sports and athletics, that’s the goal. The same is true of artificial intelligence, commonly known as AI: the aim is to make AI seem ever more human.
In terms of scale, AI is getting bigger, not only physically, as it is built into cars and robot dogs, but also in reach, spreading across the internet, through social media, and even into space.
In terms of getting better, AI is becoming more and more humanlike. Take machine learning as proof: it allows a computer to use mathematical models of data to “learn” without direct instruction. Likewise, ChatGPT and Google’s Bard can instantly produce written text that sounds very much as if a human wrote it.
In terms of getting faster, AI is rapidly becoming more fluent and fluid, transmitting information at something approaching the speed of thought. Trying to outthink it seems impossible.
AI is also getting stronger.
And so are the cybercriminals using AI.
Three main ways
There are three main ways that cybercriminals make use of and exploit AI to improve their nefarious efforts.
Wider, Faster Spread of Misinformation
While it may not seem like an immediate threat to you or your organization, the spread of misinformation and disinformation can harm any business or organization. The biggest impact is on your reputation: people may stop trusting your organization or form the wrong impression of it, which can lead to lost business and lost profits, hurting the bottom line.
Bigger, Stronger Spread of Malicious Code
AI can already write pretty good code and is improving all the time, and cybercriminals can use it to create malware. Though there is no AI-generated malware in the “wild” yet, the possibilities are real: generating malware outright with AI, or seeding AI-generated code with infections.
Better, Stronger Phishing Emails
Until now, the key to spotting a phishing email was its terrible spelling and grammar. AI-written text, however, is much harder to tell apart from legitimate, human-written text, simply because it is not riddled with those mistakes. Worse, cybercriminals can make every phishing email they send unique, which makes it harder for spam filters to flag potentially dangerous content.
What to do?
While AI isn’t bad or evil, it, like any tool, can be used for bad or evil ends. Cybercriminals will continue to adapt to the cyberworld and so should you.
Though you may no longer be able to spot a malicious phishing email by its spelling and grammatical errors, you can take a few steps to safeguard yourself, your organization, and your employees.
Steps to live by
Download wisely or not at all
Before you download a document, image, video, or other file, make sure the following are in place:
Make sure the source website is legitimate and that its URL begins with “https://”.
Be wary of commonly abused file types, such as PDFs and Microsoft Office files (.docx and .xlsx).
Do not download illegal (e.g., pirated) material or software, as this is likely to be unsafe for your system.
Check that the file’s size and extension match what you expect.
Check user reviews, ratings, or forum posts from others who have downloaded the same file to confirm that it’s legitimate.
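For technical readers, the download checklist above can be sketched as a simple pre-download check. This is only an illustration, not a security tool: the function name, the extension allowlist, and the warning strings are all invented for this example, and a real defense would rely on proper endpoint protection rather than a script like this.

```python
from urllib.parse import urlparse

def download_red_flags(url: str, expected_ext: str) -> list[str]:
    """Return a list of warnings for a download URL; an empty list means
    none of these simple checks fired (NOT a guarantee of safety)."""
    warnings = []
    parsed = urlparse(url)
    # Checklist item: the URL should begin with "https".
    if parsed.scheme != "https":
        warnings.append("URL does not use https")
    filename = parsed.path.rsplit("/", 1)[-1]
    # Double extensions like "invoice.pdf.exe" are a classic disguise.
    if filename.count(".") > 1:
        warnings.append(f"suspicious double extension: {filename}")
    # Checklist item: the extension should match what you expect.
    if not filename.lower().endswith(expected_ext.lower()):
        warnings.append(f"extension does not match expected {expected_ext}")
    return warnings

# Example: a plain https PDF link raises no flags, a disguised
# executable raises several.
print(download_red_flags("https://example.com/report.pdf", ".pdf"))
print(download_red_flags("http://example.com/invoice.pdf.exe", ".pdf"))
```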
Expect the expected
Looking for bad grammar and incorrect spelling is now a thing of the past, but the tried-and-true questions remain as important as ever:
Am I expecting an email from this person or organization?
Is the “from” address legitimate?
Am I being enticed to click on a link?
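For readers who want to see what checking a “from” address can look like in practice, here is a minimal sketch. The trusted-domain list and the helper name are hypothetical, and the lookalike check covers only one common trick (digit-for-letter swaps); mail filtering products do far more.

```python
import re

# Hypothetical list of domains you actually correspond with.
TRUSTED_DOMAINS = {"example.com"}

def sender_red_flags(from_header: str) -> list[str]:
    """Flag simple signs of a spoofed 'from' address; an empty list
    means these basic checks found nothing."""
    flags = []
    # Pull the address out of a header like 'Alice <alice@example.com>'.
    match = re.search(r"<([^>]+)>", from_header)
    address = match.group(1) if match else from_header.strip()
    if "@" not in address:
        return ["no email address found"]
    domain = address.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        flags.append(f"unfamiliar domain: {domain}")
    # Lookalike domains swap characters, e.g. 'examp1e.com' for 'example.com'.
    for trusted in TRUSTED_DOMAINS:
        if domain != trusted and domain.replace("1", "l").replace("0", "o") == trusted:
            flags.append(f"possible lookalike of {trusted}: {domain}")
    return flags

print(sender_red_flags("Alice <alice@example.com>"))
print(sender_red_flags("Billing <alerts@examp1e.com>"))
```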
Don’t click if you don’t know
When in doubt, do without!
When you should call
Trying to stay ahead of all the changes in technology and AI, and of how they can be used for malicious purposes, can be a full-time job. In fact, it is! That’s why MSSPs like us exist: to stay ahead of technological trends and safeguard you from cyberattacks with both proactive, preventative measures and reactive ones. We support you and your IT needs from every angle. Call us today to see how we can protect you and your organization, especially with so much new AI technology coming down the pike.