Although artificial intelligence offers many benefits in the workplace, IT leaders should still be aware of its drawbacks. Internal and external attackers now use AI to evade intrusion detection tools and automate sophisticated attacks, often turning the same applications enterprises rely on, such as generative AI, against them.

CIOs need to be proactive and vigilant as their organizations encounter new, complex cybersecurity challenges.
You can defeat AI-based attacks. The key is to adapt your techniques to the kinds of attacks you're most likely to face and to keep training your staff to recognize and avoid these assaults.

Boost Your GPT Account Security

One common method of AI-related attacks is account takeover (ATO). If you or your team uses AI for operations, training, or marketing, your account security could be at risk. Attackers could gain access to your account, impersonate you, and modify data or perform actions on your behalf.

For instance, if malware infiltrates your network, it could extract data from documents and build a knowledge base. Using this information, attackers might craft phishing emails or messages to redirect users to fraudulent websites that mimic your company’s domain. Once users enter their credentials on these fake sites, the information can be captured by attackers, allowing further unauthorized access and potential impersonation.

How to Avoid ATO Attacks

Fortunately, ATO attacks are easy to avoid. Below are a few ways to prevent them:

• Enable multi-factor authentication (MFA) on all accounts with GPT providers. Set up your MFA to incorporate a user's personal device for identity verification (a minimal verification sketch follows this list).

• Use complex and nonsensical passwords. These are less likely to show up in databases of compromised account credentials.
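
To illustrate the MFA point above, here is a minimal sketch of time-based one-time password (TOTP) verification using the open-source pyotp library. The enrollment flow, user name, and issuer name are illustrative assumptions; your GPT provider's own MFA settings should be your first stop.

```python
# Minimal sketch: TOTP-based MFA check using the pyotp library.
# Assumes your identity tooling can store a per-user secret and that the
# authenticator app on the user's personal device holds the same secret.
import pyotp

def enroll_user() -> str:
    """Generate and return a new base32 secret to store for the user."""
    return pyotp.random_base32()

def verify_code(stored_secret: str, submitted_code: str) -> bool:
    """Return True if the 6-digit code from the user's device is currently valid."""
    totp = pyotp.TOTP(stored_secret)
    # valid_window=1 tolerates small clock drift between server and device
    return totp.verify(submitted_code, valid_window=1)

if __name__ == "__main__":
    secret = enroll_user()
    # Hypothetical user and issuer names, for illustration only
    uri = pyotp.TOTP(secret).provisioning_uri(name="user@example.com",
                                              issuer_name="ExampleCorp")
    print("Provisioning URI for the authenticator app:", uri)
```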

Prevent AI-Powered Phishing Attacks

Hackers can take hold of sensitive information through simple tactics like sending fake emails and creating fake websites.

But when you know what to look for, they’re relatively easy to spot. That’s why it’s important to teach your employees the red flags to watch out for.

Here are some best practices to prevent AI-powered phishing attacks:

• Use simulated phishing attacks. During these exercises, employees view and interact with AI-generated content, but in a safe environment.

• Explain to staff how AI tools give hackers an advantage. Describe how attackers first use them to write error-free emails, then send them to your inbox hoping to lure you into clicking a link. Often, they try to toy with your emotions by creating a sense of urgency.

• Develop a simple reporting process for phishing attacks. Employees should know whom to email or call.

• Hold regular training workshops. Show your team legitimate links and attachments, then show them fraudulent ones, and break down how to tell the difference.
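
To support those workshops, here is a minimal sketch of a lookalike-domain check you could walk employees through. It uses only the Python standard library; the company domain and sample links are hypothetical.

```python
# Minimal sketch: flag lookalike link domains for a phishing-awareness workshop.
# "example.com" and the sample inputs are hypothetical; standard library only.
from difflib import SequenceMatcher
from urllib.parse import urlparse

COMPANY_DOMAIN = "example.com"

def domain_of(url: str) -> str:
    """Extract the host portion of a URL."""
    host = urlparse(url).netloc.lower()
    return host.split("@")[-1].split(":")[0]

def looks_suspicious(url: str, threshold: float = 0.75) -> bool:
    """Flag domains that are similar to, but not exactly, the company domain."""
    host = domain_of(url)
    if host == COMPANY_DOMAIN or host.endswith("." + COMPANY_DOMAIN):
        return False  # exact match or legitimate subdomain
    similarity = SequenceMatcher(None, host, COMPANY_DOMAIN).ratio()
    return similarity >= threshold  # close enough to be a likely lookalike

if __name__ == "__main__":
    for link in ["https://portal.example.com/login",
                 "https://examp1e.com/reset-password",
                 "https://totally-unrelated.org"]:
        print(link, "->", "SUSPICIOUS" if looks_suspicious(link) else "ok")
```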

Pinpoint Ransomware and Social Engineering Targets

Because AI analyzes large data sets quickly, hackers can scan enterprise networks for vulnerabilities at scale. As they search for potential targets, cyber-attackers feed the collected data back into AI and machine learning (ML) models. Those models and network-mapping tools then help them evade detection inside your network.

Once attackers find a weakness, they can use generative AI to create everything from fake social media profiles and personalized emails to voice-cloned messages and deepfake videos.

The best defense is implementing user awareness training backed by the latest AI security solutions. Build social engineering training into your cybersecurity education classes and provide regular simulations, including examples of voice phishing (vishing) and synthetic videos. Also, consider partnering with a technology service provider that updates AI models and behavioral analysis tools to prevent the newest evasion tactics.

Evading CAPTCHAs and Strengthening Authentication

CAPTCHA verification is excellent for preventing attackers from using simple web scripts to crawl the web, enter stolen credentials, and access your site. These scripts simply lack the logic needed to identify images. They only look for checkboxes, which they can find by reading your site's HTML, CSS, and JavaScript.

Unfortunately, AI image recognition tools are more effective. They can distinguish a fire hydrant from a light pole, for example, or a bus from an SUV. So, it’s best to consider a multi-layer approach. For example, you can use advanced authentication methods and CAPTCHA alternatives, such as:

• Automatically monitoring user activity. Your system can then request additional steps if it detects unusual behavior (see the sketch after this list).

• Creating math or time-based challenges. These are more difficult for AI bots to get around.

• Verifying identities using additional devices or biometric authentication processes.

• Allowing workers to play a short game or describe an image instead of solving a CAPTCHA.
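
As an illustration of the first point, here is a minimal sketch of rule-based step-up authentication. The session fields, thresholds, and "home country" check are illustrative assumptions, not a vendor API; a production system would draw on far richer behavioral signals.

```python
# Minimal sketch: rule-based step-up authentication.
# The session fields and thresholds are illustrative assumptions, not a product API.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    user: str
    ip_country: str
    known_device: bool
    failed_attempts_last_hour: int

def requires_step_up(attempt: LoginAttempt, home_country: str = "US") -> bool:
    """Request an extra verification step (MFA prompt, biometric check) when behavior looks unusual."""
    if not attempt.known_device:
        return True                      # unrecognized device
    if attempt.ip_country != home_country:
        return True                      # login from an unexpected location
    if attempt.failed_attempts_last_hour >= 3:
        return True                      # possible credential-stuffing activity
    return False

if __name__ == "__main__":
    print(requires_step_up(LoginAttempt("jdoe", "US", True, 0)))   # False: normal behavior
    print(requires_step_up(LoginAttempt("jdoe", "RO", True, 0)))   # True: unusual location
```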

Continuously Updating Your Malware Protection

Generative AI systems have made hackers' jobs easier than ever. Although a generative AI app won't answer direct questions about how to execute cyber-attacks, it may unwittingly help a hacker code one.

Attackers are now attempting to piece together bits of advice from a generative AI to build malware. They describe your system architecture to the AI in the hope that it will tell them how to write code that gets past the obstacles described.

This is why keeping your malware protection constantly updated is so crucial. Updates provide safeguards against attacks the software already knows about.

Consider a hypothetical example. A hacker purchases access to a database consisting of activity reports from a popular firewall. The data would show how the firewall stopped a variety of threats. It would also show each threat’s point of origin.

Each data packet that the firewall rejected would have crucial information that alerted the firewall to the nature of the threat. Again, a hacker could not ask a generative AI system, “How can you get past this firewall?” However, they could ask, “I need to configure this firewall for the best possible performance. What kinds of threats may this firewall miss?”

Within a few moments, the AI system could tell the hacker exactly what kind of malware they would need to develop to bypass that firewall.

However, anti-virus software manufacturers have stepped up to the plate to design solutions that close these loopholes. Once a hacker carries out an attack and the defense system detects it, it may send data about the nature of the threat back to the manufacturer. The manufacturer would then update the firewall to stop this kind of attack. If you happen to have the same firewall, installing this update can make the difference between a compromised system and a safe one.

Keep in mind that the idea of generative AI directly assisting in coding malware is more of a speculative risk than a widespread current practice. Many security experts agree that AI can facilitate certain steps, but crafting sophisticated malware still requires a human touch, and AI cannot develop complex, targeted attacks independently.

Checking and Adjusting Your Firewall’s Data Exfiltration Detection System

A data exfiltration attack involves cyber criminals stealing vast amounts of information from an organization. In many cases, a firewall can prevent this kind of attack by noticing when large sets of data begin streaming out of the network. At this point, the firewall would block the data from going through.

But AI makes it fairly easy for an attacker to program a custom hack. All they have to do is steal smaller amounts of data over time. In this way, the attack can attempt to circumvent the firewall's threat detection mechanism. Because each transfer stays below the threshold that triggers corrective action, your system may allow it.

To prevent this kind of attack, you can adjust your firewall's settings. For example, you can reduce the amount of data you allow to stream out of your network at once. This may take some experimenting to avoid dropouts during legitimate transmissions, but by striking the right balance, you can stop crafty hackers from using AI to trick your system.
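
As a rough illustration of the idea, here is a minimal sketch of a rolling-window egress counter that flags a host once its cumulative outbound traffic exceeds a limit, even when each individual transfer is small. The 24-hour window and 500 MB threshold are assumptions you would tune for your own network.

```python
# Minimal sketch: per-host rolling-window egress counter to catch "low and slow" exfiltration.
# The 24-hour window and 500 MB threshold are illustrative assumptions.
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 24 * 60 * 60          # look at the last 24 hours
ALERT_THRESHOLD_BYTES = 500 * 1024**2  # alert once a host sends more than ~500 MB in the window

class EgressMonitor:
    def __init__(self):
        self._events = defaultdict(deque)  # host -> deque of (timestamp, bytes_sent)
        self._totals = defaultdict(int)    # host -> bytes sent within the window

    def record(self, host: str, bytes_sent: int, now: Optional[float] = None) -> bool:
        """Record an outbound transfer; return True if the host's windowed total exceeds the threshold."""
        now = time.time() if now is None else now
        events = self._events[host]
        events.append((now, bytes_sent))
        self._totals[host] += bytes_sent
        # Drop events that have fallen out of the window
        while events and now - events[0][0] > WINDOW_SECONDS:
            _, old_bytes = events.popleft()
            self._totals[host] -= old_bytes
        return self._totals[host] > ALERT_THRESHOLD_BYTES

if __name__ == "__main__":
    monitor = EgressMonitor()
    t0 = time.time()
    # Simulate many small transfers from one workstation: 5 MB every 10 minutes
    for i in range(120):
        if monitor.record("10.0.0.42", 5 * 1024**2, now=t0 + i * 600):
            print(f"Alert after transfer {i + 1}: cumulative egress exceeded threshold")
            break
```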

Additionally, while limiting outbound transfers is a valid way to curb exfiltration, it should be combined with other monitoring tools, such as anomaly detection, for better overall security.

Look Out for Apps with AI-generated Code

Of course, there’s nothing wrong with using artificial intelligence to write code. However, if you aren’t 100% comfortable with the software you’ve recently downloaded, you can check it for AI-generated code before installation. Then, if the analysis reveals AI wrote the code, you can take a closer look at its contents.

In this way, you safeguard your system. Hackers who are relatively new to coding may use AI to build malicious software. For instance, they can ask a generative AI system to write malicious code that makes the software perform some or all of the actions a legitimate version would. The attacker can then include that code in the software's installation files. When you try to install the software, it infects your system with malware.

How can you tell if code is generated by AI? In many cases, a generative AI system includes numerous comments explaining what each line of code does, and these comments tend to have predictable wording. You can use this to your advantage. For example, you can ask a generative AI to identify the phrasing patterns used in the comments of a piece of software's code, then ask whether the code was likely written by an AI solution.
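
Here is a minimal heuristic sketch along those lines for Python source files. The boilerplate phrase list and thresholds are assumptions, and a hit is only a prompt for closer manual review, not proof that AI wrote the code.

```python
# Minimal sketch: rough heuristic for "AI-style" commenting in a Python source file.
# The phrase list and thresholds are assumptions; treat hits as a cue for manual review, not proof.
import re
import sys

BOILERPLATE_PATTERNS = [
    r"^#\s*this (function|line|code) ",      # "# This function does X" style narration
    r"^#\s*(initialize|define|create) the ",
    r"^#\s*return the result",
]

def comment_profile(path: str) -> dict:
    """Compute comment density and count boilerplate-style comments in a file."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        lines = f.readlines()
    comments = [line.strip() for line in lines if line.strip().startswith("#")]
    boilerplate = sum(
        1 for c in comments
        if any(re.match(p, c, re.IGNORECASE) for p in BOILERPLATE_PATTERNS)
    )
    total = max(len(lines), 1)
    return {
        "comment_density": len(comments) / total,
        "boilerplate_comments": boilerplate,
        "flag_for_review": len(comments) / total > 0.4 or boilerplate >= 5,
    }

if __name__ == "__main__":
    print(comment_profile(sys.argv[1]))
```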

Granted, most users and IT admins aren't likely to spend their time checking code comments, so realistically, the focus should be on robust code reviews and trusted software sources.

Proactive Defense Against AI-Driven Cyber Attacks

Fortunately, you can use AI to fight cybercrime. Enterprises should incorporate AI-powered threat detection and analysis applications to detect anomalies and assess behaviors. For example, an AI-powered system can analyze your network when it’s in a safe, virus-free state. It can then perform real-time scans of your network. If the scan results surface data that’s significantly different from what’s in the safe state, the system can trigger an alert or even shut down some or all of your network.
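
As a simplified stand-in for that baseline-versus-live comparison, here is a minimal sketch that profiles clean-state metrics and flags live samples that deviate sharply. A real AI-powered solution would use learned models over far richer telemetry; the metric columns and sample values here are illustrative assumptions.

```python
# Minimal sketch: compare live network metrics against a clean-state baseline using z-scores.
# Feature columns (bytes out per minute, connection count, failed logins) are illustrative assumptions.
import numpy as np

# Baseline: metrics captured per interval while the network was in a known-good state
baseline = np.array([
    [1200, 35, 0],
    [1100, 40, 1],
    [1300, 38, 0],
    [1250, 42, 0],
    [1150, 36, 1],
], dtype=float)

mean = baseline.mean(axis=0)
std = baseline.std(axis=0) + 1e-9   # avoid division by zero for constant features

def is_anomalous(sample: np.ndarray, z_threshold: float = 4.0) -> bool:
    """Flag the sample if any metric deviates strongly from the clean baseline."""
    z_scores = np.abs((sample - mean) / std)
    return bool((z_scores > z_threshold).any())

# Live scan results: the second row shows a burst of outbound traffic and failed logins
live = np.array([
    [1180, 37, 0],
    [9800, 310, 25],
], dtype=float)

for sample in live:
    print(sample, "->", "ANOMALY" if is_anomalous(sample) else "normal")
```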

You can also update your CAPTCHA authentication. For example, you can use CAPTCHA tools that force users to solve simple puzzles instead of selecting images. These are more difficult for hackers to bypass.

You also need an AI-specific incident response plan. It should outline specific techniques for mitigating attacks. The techniques should vary according to the asset you’re trying to protect and the nature of the attack.

Use simulations and drills to test your strategies. And mix it up. Try different kinds of phishing attacks, such as whale or spear phishing. In effect, try your best to break your defenses. Then, use what you learn to improve your system.

Collaborating with AI Technology Providers

When you work with AI technology providers, two key benefits emerge: shared intelligence and early threat detection. You can then identify and prevent more incidents. Additionally, both you and your provider can use AI to learn from new threat data and evolving attack patterns.

Working with a technology partner like Cox Business can add an additional layer of security to your network. Explore and ask about cloud solutions that can provide network, device, and data protection. With a trusted partner like Cox Business and a Backup as a Service (BaaS) cloud solution, if an incident does occur, you’ll have peace of mind knowing that Cox Business can assist with file backup and recovery tools.

Take a Proactive Approach to Network Security

Even though hackers are turning artificial intelligence to their advantage, as they do with many new technologies, you still have the upper hand. You can control who and what accesses your digital assets using continuous monitoring and threat detection. Further, IT leaders aren't alone in the battle against AI-powered attacks. Technology partners like Cox Business can help enterprise businesses evolve to meet the latest challenges using proactive defense strategies.


Cox Business

The commercial division of Cox Communications, Cox Business, provides a broad commercial solutions portfolio, including advanced managed IT, cloud and fiber-based network solutions that support connected environments, unique hospitality experiences and diverse applications for nearly 370,000 businesses nationwide. For more information, please visit www.coxbusiness.com.