When it comes to artificial intelligence (AI), it is easy to focus solely on the benefits. Advantages of AI include insights to expedite our decision-making capabilities, reducing errors and increasing efficiency related to manual or repetitive processes, instantaneous generation of complex programming code or written content, and providing sleepless 24/7 support for products and services.


Lost in the often-glowing media coverage, however, is that artificial intelligence can be exploited by cybercriminals for a multitude of insidious purposes. We are only beginning to see how AI can be used by those with ill intent. Here is a selection of the more prevalent ways AI can empower cybercriminals and endanger our digital lives:


· Instant generation of sophisticated social engineering attacks. A chatbot such as ChatGPT is especially helpful to attackers who are not fluent in English, allowing them to compose natural-sounding spear phishing emails. With AI crafting their communications, cybercriminals no longer need to worry about the poor spelling and grammar that were the traditional hallmarks of phishing attacks.

Best practice: Keep up to speed with the latest social engineering tactics through the use of cybersecurity awareness training and spear phishing simulations.


· New strains of malware can be generated with minimal effort and coding skill. According to security experts, examples of polymorphic malware generated through artificial intelligence chatbots have been observed in the wild. Cybercriminals who use AI to help program their malware increase their likelihood of evading antivirus and other endpoint detection tools. A recently discovered hacking application called WormGPT is an AI tool that can produce malware in the Python programming language while also providing advice on staging sophisticated cyberattacks.

Best practice: Utilize a strong endpoint protection solution and ensure that all of your applications and hardware devices are continuously patched with security updates.

· Criminals have set up fake websites that appear to host legitimate AI tools. With interest in artificial intelligence at a fever pitch, many individuals are eager to take the latest AI chatbot for a test drive. Criminals are seizing this opportunity by running spurious ads for AI tools on social media and search engines, luring users into a trap instead of directing them to legitimate AI sites. A user may believe they have arrived at a safe website when it is actually a conduit for downloading malware onto their device. Once the malware is installed, criminals can potentially harvest the victim's passwords or steal information to sell on dark web marketplaces.

Best practice: Use extreme caution when clicking on any links, and verify the legitimate address of a website before you visit it.

· There are limited safety mechanisms in place to prevent the upload of sensitive information to AI tools. If users post sensitive information to a tool, that information is saved and at risk of being exposed by a future bug or hack. This risk can be especially dangerous (e.g., compromise of client data) for businesses that allow their employees to use AI tools without providing them with best practices.

Best practice: Every business should provide training and establish policies for users authorized to access AI tools and should consider blocking access for employees who have not received approval.

· Artificial intelligence has supercharged the deepfake capabilities used by cybercriminals. A deepfake, also known as synthetic media, is a digital alteration of a person's voice or likeness, often created with the goal of deceiving or misleading people. AI has made producing deepfakes exceptionally easy and the results virtually indistinguishable from reality, requiring only a single photograph or a few seconds of audio as ingredients. Cybercriminals can use deepfakes for a variety of malicious purposes, including simulating the voice of a supposedly kidnapped relative to demand ransom, impersonating a CEO on a call demanding a wire transfer, creating incriminating photos for blackmail, and posting fake news to damage a company's stock price or reputation.

Best practice: While spotting deepfakes is possible (e.g., looking for inconsistencies in facial proportions, inconsistent quality of audio and video, etc.), it is exceptionally difficult to prevent them if your image or voice can be found on the internet. Until tougher legislation is enacted (only a handful of states have deepfake laws in place), responding to a deepfake may entail contacting the website administrator or law enforcement, as well as enlisting the aid of an attorney to pursue civil or criminal options.

· Chatbots are susceptible to hackers, so be aware of what information you provide on these sites. In May 2023, OpenAI, the creator of the AI-powered chatbot ChatGPT, confirmed that it may have experienced a data leak due to a bug in the chatbot's source code. Beyond the usual threat of identity theft when a data breach occurs, chatbots pose an additional risk related to the questions you have asked. While it may be tempting to have a chatbot author an article or answer a sensitive question for you, consider the repercussions if your chat history were exposed to the public after a data breach.

Best practice: Be very cautious when sharing sensitive information, and ask yourself what the fallout would be if your chatbot history were available for public consumption.

While the power of artificial intelligence can be harnessed for an astonishing number of beneficial purposes, this technology is not without risk. Ironically, we may be rapidly approaching the day when “good” AI will be required to combat the risks of “evil” AI, a situation that is rife with concerning questions that even a chatbot would be unable to answer. For more information on the risks of artificial intelligence and keeping yourself safe and secure, contact Citrin Cooperman’s Technology, Risk Advisory, and Cybersecurity Practice or Kevin Ricci at kricci@citrincooperman.com.

Kevin Ricci

“Citrin Cooperman” is the brand under which Citrin Cooperman & Company, LLP, a licensed independent CPA firm, and Citrin Cooperman Advisors LLC serve clients’ business needs. The two firms operate as separate legal entities in an alternative practice structure. Citrin Cooperman is an independent member of Moore North America, which is itself a regional member of Moore Global Network Limited (MGNL).

Kevin Ricci is a Partner in the firm’s Providence office and a leader within the Technology, Risk Advisory, and Cybersecurity (TRAC) Practice. He has over 25 years of extensive experience in technology services including consulting, security assessments, cybersecurity awareness training, social engineering simulations, IT auditing, fractional CISO, project management, database development, data analysis, and compliance services including PCI DSS, for which he is a Qualified Security Assessor (QSA).

500 Exchange Street, Suite 9-100 | Providence, RI 02903 |  401-421-4800 | citrincooperman.com