Cyber Security and the Rise of Artificial Intelligence

With all the hype and media buzz surrounding artificial intelligence in recent months, nearly everyone has heard of tools like OpenAI’s ChatGPT or Google’s Bard, and new tools arrive on the scene nearly every day. From writing term papers to drafting speeches, composing slide decks, or developing source code, AI’s potential to change our lives appears expansive. But is it for the better?

Seemingly from the start, security researchers and other pundits have denounced AI tools, fearing they could propagate misinformation across the Internet in new and more mischievous ways. A recent article from Axios outlines many serious concerns about the spread of misinformation and the role these new tools play. These concerns are valid, though perhaps a bit overblown.

Although AI can rapidly produce well-written content on request, the accuracy of that content must still be verified by a human being. Simple inaccuracies are one way these tools can spread misinformation: someone might use a tool to generate content on a subject where they have no expertise and are therefore unable to recognize falsehoods or errors. The same tools could also be used to intentionally produce erroneous or misleading content for a misinformation campaign.

In cases of misinformation, it falls to the recipient of the content to verify its accuracy and authenticity. In the age of fake news, alternative facts, and the decline of the local newspaper, we humans need to sharpen our fact-checking skills and powers of deductive reasoning if we’re going to beat this new threat.

A second concern posed by AI tools, closely related to misinformation, is the ability to impersonate others. A New York Times article opens with an example of deepfake technology: realistic, computer-generated journalists spouting misinformation and propaganda for a fictitious news organization. Given enough training samples, these technologies can already impersonate real people and make them appear to say anything desired in fabricated audio and even video broadcasts. As scary as this may seem, the antidote is much the same as before. With education and awareness, humans can beat these tools by fact checking and challenging information, using multiple reputable sources to quickly identify and discount the falsehoods.

If we dig a little deeper, more substantial concerns about modern AI technologies arise, without clear guidelines for avoidance or mitigation. Here, ChatGPT seems to bear the brunt of the criticism. In one article from Forcepoint, researcher Aaron Mulgrew describes sidestepping guardrails and coaxing ChatGPT into writing a zero-day virus. In another story, from Gizmodo, researchers at CyberArk used ChatGPT to create a polymorphic, or self-mutating, computer virus that was difficult to detect. And a piece from Malwarebytes outlines how yet another researcher got ChatGPT to write rudimentary ransomware, despite strict boundaries designed to prevent it.

These alarming examples demonstrate that the security community has a large task ahead in developing better detection and preventive controls. Work also needs to be done in the AI community to build security into the technology rather than attempting to bolt it on after the fact.

Early trends and indications are that the community may be up to the task. Companies like CrowdStrike have developed adaptive anti-malware tools that leverage behavioral rules and predictive heuristics to automatically detect and stop previously unseen malware. Further, tech sites like Stack Overflow have banned ChatGPT-generated answers, at least temporarily, to slow the spread of bad information on their site. These and other examples indicate both an awareness of the risks and the ability and will to combat them.
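To make the behavioral-detection idea concrete, here is a minimal sketch of how a rule-based scoring engine might flag a never-before-seen threat by what it does rather than what it looks like. The behaviors, weights, and threshold below are illustrative assumptions for this post, not CrowdStrike’s actual implementation.

```python
# A toy behavioral scoring engine: each observed behavior adds weight, and a
# process whose total crosses the threshold is flagged as suspicious.
# Behaviors, weights, and threshold are illustrative assumptions only.

SUSPICIOUS_BEHAVIORS = {
    "spawns_shell_from_office_app": 5.0,   # Word/Excel launching a shell
    "encrypts_many_files_quickly": 8.0,    # a classic ransomware tell
    "modifies_registry_run_key": 3.0,      # common persistence technique
    "contacts_unknown_domain": 2.0,        # possible command-and-control
}

ALERT_THRESHOLD = 8.0

def score_process(observed_events: list[str]) -> float:
    """Sum the weights of all recognized suspicious behaviors."""
    return sum(SUSPICIOUS_BEHAVIORS.get(event, 0.0) for event in observed_events)

def is_suspicious(observed_events: list[str]) -> bool:
    return score_process(observed_events) >= ALERT_THRESHOLD

# A process that encrypts files and sets a persistence key scores 11.0 and is
# flagged, even though its binary has never been seen before.
print(is_suspicious(["encrypts_many_files_quickly", "modifies_registry_run_key"]))
```

The point of the sketch is that no signature of the malware itself is needed; the same scoring applies whether the code was written by a human or generated by an AI tool.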

Although perhaps not as obviously alarming as AI writing novel malware, one final security concern deserves strong consideration: misuse of intellectual property. An article posted on Dark Reading outlines the risks with myriad examples. Many widely available AI tools have been trained on data from the public Internet. As a result, there is a very real risk that their answers contain copyrighted material that cannot be used without attribution or payment, and companies that use content from these tools without validation risk being sued.

At the other end of the intellectual property spectrum, security researchers have expressed concern about users feeding sensitive data into AI tools without a clear understanding of where that data goes or how it may be used. The Dark Reading article referenced earlier highlights cases of users entering sensitive information and intellectual property into these tools without a second thought, with varying consequences.
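One practical mitigation, sketched below, is a pre-submission filter that redacts obviously sensitive patterns before a prompt ever leaves the company network. The patterns shown are illustrative assumptions and far from exhaustive; a real data-loss-prevention control would be tuned to an organization’s own secrets.

```python
import re

# Illustrative patterns only; real deployments would cover far more cases.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern before it is sent out."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, key sk-abcdef1234567890abcd"))
# -> Contact [REDACTED EMAIL], key [REDACTED API_KEY]
```

A filter like this is no substitute for user judgment, but it catches careless paste-and-submit mistakes before they become disclosures.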

As with the risk of misinformation, the best defense against this data-leakage threat appears to be user education and awareness. Users who are made aware of these risks and knowingly press forward anyway can then be held accountable for their actions. Security personnel should revamp awareness materials to include AI as an emerging risk as quickly as possible.

With all of the security doom and gloom surrounding AI, it may seem that users should approach the technology with caution. However, there is real and tangible benefit to be had, as long as security considerations are part of the solution rather than an afterthought. AI has demonstrated the ability to handle mundane tasks and free up human capital for more value-added work: capturing and recording a doctor’s notes, allowing more time with patients; taking meeting minutes at a busy company so workers can focus on revenue-generating work; or proofreading content such as news briefs before public release, allowing authors to produce even more valuable content. These and many other tasks may be good candidates for AI assistants, as long as there continues to be human oversight and thought given to the very real risks the technology may pose.
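As a sketch of what that human oversight might look like in practice, the toy workflow below drafts meeting minutes with a model but requires explicit human sign-off before anything is published. The `call_model` function is a hypothetical stand-in for whatever AI service an organization actually uses.

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real AI service call; returns a canned draft."""
    return "Attendees agreed to ship v2 on Friday; Dana owns the release notes."

def draft_minutes(transcript: str) -> str:
    # The AI does the mundane summarization work.
    return call_model(f"Summarize these meeting notes as minutes:\n{transcript}")

def publish_with_oversight(transcript: str) -> str | None:
    """Draft with AI, but require a human to approve before release."""
    draft = draft_minutes(transcript)
    print("--- AI DRAFT ---\n" + draft)
    approved = input("Publish this draft? [y/N] ").strip().lower() == "y"
    return draft if approved else None  # nothing ships without sign-off
```

The design choice worth noting is that the human review step is structural, not optional: the draft cannot reach publication without someone accountable approving it.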
