AI in cybersecurity: Key challenges and opportunities up next

Cybercrimes are becoming increasingly sophisticated and pervasive, making businesses and government organizations worldwide more susceptible to threats. Countering these new kinds of threats poses a significant challenge for cybersecurity professionals. So when Mike Beck, Global Head of Threat Analysis at the Darktrace R&D Center, decided to bring together cybersecurity professionals and AI experts for a three-year project, he aimed to answer a significant question: Could AI in cybersecurity be used not only to detect threats but also to investigate them by emulating human thought processes?

By 2019, this project had taken the shape of a full-fledged AI-powered cybersecurity analyst – a technology that augments a human analyst by performing many of their functions. It can also rapidly generate a human-readable report on the context of an attack, which can be translated into any language with just a few clicks.

What is AI-powered cybersecurity and how is it helping build a safe digital society?

The concept of ‘Responsible AI’ is doing the rounds in the cybersecurity space today. Why? Because the need for strong protection against cyber attacks grows as AI systems take on a larger role in society. By deploying responsible AI in cybersecurity in today’s digital-first world, businesses can enhance their security measures while respecting the informed choices of consumers and retaining the ability to deliver valuable, tailored products and services.

AI-powered cybersecurity is a game changer: it can process huge amounts of risk data with quick response times and improve the performance of under-resourced security operations. Take a look at these three companies implementing AI in cybersecurity and demonstrating impressive results.

Organizations implementing AI in cybersecurity

Source: Wired, Computer Weekly, Safe

AI in cybersecurity: Implementation challenges that cannot be overlooked

1. Shortage of talent with niche expertise

According to a recent cybersecurity study, the global cybersecurity workforce needs to grow by 65% to effectively defend organizations’ critical assets. At the same time, organizations want cybersecurity professionals to earn certifications that deepen specialization and expertise amid the rise of AI/ML in cybersecurity.

However, despite an influx of another 700,000 professionals into the cybersecurity workforce, global demand for cybersecurity professionals continues to outpace supply. This shortfall poses several risks: misconfigured systems, rushed deployments, improper risk assessment, and insufficient oversight of processes and procedures.

2. Double-edged sword: Malicious AI and infiltration of valuable data

Although Google employs Bouncer to safeguard Android applications, the Joker malware has managed to bypass Google’s security; in one instance, it infected over 11 Play Store apps. And Google Play is only part of the problem: 71% of malware-infected apps are found on third-party stores.

Malicious mobile apps vs. AI in cybersecurity
Source: Upstream

Moreover, smartphone malware can steal user data, compromise user privacy, and snoop on other apps. Malicious AI can be employed to infiltrate data in several ways: 

  • Decoding CAPTCHAs to sneak past this type of authentication method 
  • Scanning social media to find the right people to target with spear-phishing campaigns
  • Creating more convincing spam, customized for the target victim

3. Misunderstood inputs due to low-quality data used to train AI systems

Data quality is a major concern when it comes to ensuring safety. Even when performing complex tasks like maneuvering a vehicle, AI systems can still make critical mistakes based on misunderstood inputs. Furthermore, obtaining high-quality data and training large neural networks is expensive, so training data is often sourced externally – and this can expose massively interconnected AI systems to new risks.

Additionally, malicious training data introduced through backdoor attacks can cause AI systems to generate incorrect, potentially dangerous results. For instance, if a backdoor attack on an autonomous vehicle’s vision system causes it to read a stop sign as a 100 mph speed limit, the safety consequences could be severe.
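
To make the idea of vetting externally sourced training data concrete, here is a minimal sketch that flags incoming samples whose labels contradict a reference model trained on a small, manually verified subset. It is only an illustrative sanity check, not a full defense against backdoor poisoning; the synthetic features, class labels, and confidence threshold are assumptions made for the example.

```python
# Minimal sketch (assumptions throughout): flag training samples whose labels
# contradict a reference model trained on a small, manually verified subset.
# This is a basic sanity check, not a complete defense against backdoor attacks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Trusted, hand-verified samples reduced to feature vectors
# (synthetic 2-D features stand in for real image embeddings).
X_trusted = rng.normal(loc=[[0, 0]] * 100 + [[3, 3]] * 100, scale=0.5)
y_trusted = np.array([0] * 100 + [1] * 100)   # 0 = stop sign, 1 = speed limit

# Incoming batch from an external source; the last sample is "poisoned"
# (stop-sign-like features, but labeled as a speed limit).
X_batch = np.vstack([rng.normal([0, 0], 0.5, (5, 2)), [[0.1, -0.2]]])
y_batch = np.array([0, 0, 0, 0, 0, 1])

reference = LogisticRegression().fit(X_trusted, y_trusted)
proba = reference.predict_proba(X_batch)
predicted = proba.argmax(axis=1)
confidence = proba.max(axis=1)

# Flag samples where a confident reference prediction contradicts the given label.
suspicious = (predicted != y_batch) & (confidence > 0.95)
print("Suspicious sample indices:", np.where(suspicious)[0])
```

In practice such checks would be one layer among several – provenance tracking, human review, and robust training – but even a simple label-consistency screen can surface poisoned samples before they reach production models.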

4. Inconsistency around data privacy laws, policies, and regulations

People often have to divulge private information to obtain the services they want. Although individuals have the option of consenting to their data being shared, providing personal information in exchange for those services can be a confusing experience.

In recent years, countries have increasingly passed legislation to protect the personal data and privacy of their citizens and to grant them greater control over how their data is used. These laws are under strain in an era of big data and AI.

Furthermore, data privacy expectations are highly contextual. For example, complying with requirements to notify consumers of the purpose of data collection is difficult because, with AI-enabled processing, the purpose may not be known at the time of notification. Consent is equally difficult to obtain when the complexity of big data systems is beyond the consumer’s comprehension.

Opportunities up next for AI in cybersecurity

1. Improved threat hunting by integrating behavior analysis

Despite the aforementioned challenges, AI in cybersecurity – if harnessed correctly – can act as a powerful tool of defense across industries. Consider this: Replacing traditional techniques with AI can increase threat detection rates up to 95%.

By deploying AI models, companies can use behavior analysis to build a profile for every application in an organization’s infrastructure by processing large volumes of endpoint data. Mitsubishi Electric, for example, uses its behavioral-analysis AI Maisart to detect slight differences in human motions that people do not readily notice. It can also analyze human behavior in various fields – such as the motions of assembly-line workers – to help eliminate unnecessary movements and thereby improve productivity.

Mitsubishi Electric AI in cybersecurity
Source: Mitsubishi
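
To illustrate the endpoint-profiling idea in the simplest terms, the sketch below learns a ‘normal behavior’ profile for a single application from historical telemetry and scores new events against it. The feature set, thresholds, and choice of an isolation forest are assumptions made for illustration, not Darktrace’s or Mitsubishi Electric’s actual approach.

```python
# Minimal sketch (assumed features and thresholds): profile one application's
# normal endpoint behavior, then score new telemetry events for anomalies.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical telemetry: [CPU %, outbound KB/s, file writes per minute]
baseline = np.column_stack([
    rng.normal(20, 5, 1000),   # typical CPU usage
    rng.normal(50, 15, 1000),  # typical outbound traffic
    rng.normal(3, 1, 1000),    # typical file-write rate
])

profile = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: one resembling the baseline, one resembling data exfiltration.
new_events = np.array([
    [22.0, 55.0, 3.0],
    [85.0, 900.0, 40.0],
])

scores = profile.decision_function(new_events)   # lower = more anomalous
labels = profile.predict(new_events)             # -1 = anomaly, 1 = normal
for event, score, label in zip(new_events, scores, labels):
    print(event, round(float(score), 3), "ALERT" if label == -1 else "ok")
```

The same pattern scales from a toy script to production: learn what “normal” looks like per application or per worker motion, then flag confident deviations for human review.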

2. Bolstering the industrial IoT and the smart factory (Industry 4.0)

As IoT adoption accelerates, operational technology (OT) becomes smarter. Companies will therefore need to manage and secure endpoints and industrial networks to mitigate the risk of data theft and disruption caused by external entities. 

Today, cybersecurity in Industry 4.0 can’t be tackled in the same way as that of traditional computing environments. That’s where AI in cybersecurity comes into play:

  • Makes up for the lack of security teams.
  • Helps discover devices and hidden patterns while processing large amounts of data. 
  • Monitors incoming and outgoing traffic for any deviations in behavior or threats in the IoT ecosystem (a simplified monitoring sketch follows this list).
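
As a rough illustration of that last point, the sketch below keeps a rolling per-device baseline of traffic volume and raises an alert when a new sample deviates sharply from it. The device ID, window size, and threshold are assumed values, not a description of any particular IIoT product.

```python
# Minimal sketch (assumed thresholds): flag deviations in per-device IoT traffic
# against a rolling baseline of mean ± k standard deviations.
from collections import deque
from statistics import mean, stdev

class TrafficMonitor:
    def __init__(self, window: int = 60, k: float = 3.0):
        self.window = window          # number of recent samples kept per device
        self.k = k                    # deviation threshold in standard deviations
        self.history: dict[str, deque] = {}

    def observe(self, device_id: str, bytes_per_min: float) -> bool:
        """Record a traffic sample; return True if it deviates from the baseline."""
        samples = self.history.setdefault(device_id, deque(maxlen=self.window))
        alert = False
        if len(samples) >= 10:        # wait until a minimal baseline exists
            mu, sigma = mean(samples), stdev(samples)
            if sigma > 0 and abs(bytes_per_min - mu) > self.k * sigma:
                alert = True
        samples.append(bytes_per_min)
        return alert

monitor = TrafficMonitor()
for minute in range(60):
    monitor.observe("sensor-07", 500 + (minute % 5))   # steady baseline traffic
print(monitor.observe("sensor-07", 25_000))            # sudden spike -> True
```

Real deployments would track many more features (protocols, destinations, timing) and learn seasonality, but the principle – baseline per device, alert on deviation – is the same.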

In September this year, Siemens Energy launched the AI solution EOS.IO for monitoring and responding to cyber threats against the Industrial IoT. The platform collects and collates data flows from IIoT endpoints for use by security teams, and insights are brought together in one interface. As a result, defenders will be able to spend less time on routine tasks and more time conducting powerful investigations.

Read more: SASE architecture: Bringing cloud security to SD-WAN

3. Automation of human actions via AI-driven Security Operation Centers (SOCs) 

New technologies such as sensing, cloud computing, and analytical tools have helped businesses respond to market feedback faster and make more informed decisions. However, as enterprises gain more business insights into their data, cyber-adversaries have more opportunities to exploit the expanding attack surface. 

Today’s SOCs face complexity on two fronts: sprawling arrays of technology and a growing volume of threats vying for their attention. In the coming years, successful organizations will focus on the following two themes to combat the expansion of attacks:

  • AI-driven SOCs will enhance their infrastructure with appropriate speed and scale to process high volumes of data from a diverse set of security devices.  
  • By incorporating AI in cybersecurity to develop anomaly-based alerts, modern SOCs will be able to accelerate their understanding of unusual behavior throughout enterprise technology stacks (a simplified alert-scoring sketch follows this list). 
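
To show what anomaly-based alerting across diverse security devices might look like in miniature, the sketch below correlates weighted anomaly signals from several sources into a single prioritized alert queue. The source names, weights, and threshold are assumptions made for the example rather than a reference SOC design.

```python
# Minimal sketch (assumed sources, weights, and threshold): correlate anomaly
# signals from several security devices into one prioritized alert queue.
from dataclasses import dataclass

@dataclass
class Signal:
    source: str            # e.g. "edr", "firewall", "identity"
    entity: str            # host or user the signal refers to
    anomaly_score: float   # 0.0 (benign) to 1.0 (highly anomalous)

# Weights reflect how much the SOC trusts each telemetry source (assumed values).
SOURCE_WEIGHTS = {"edr": 1.0, "firewall": 0.6, "identity": 0.8}

def prioritize(signals: list[Signal], threshold: float = 1.2) -> list[tuple[str, float]]:
    """Sum weighted anomaly scores per entity and return entities above threshold."""
    totals: dict[str, float] = {}
    for s in signals:
        weight = SOURCE_WEIGHTS.get(s.source, 0.5)
        totals[s.entity] = totals.get(s.entity, 0.0) + weight * s.anomaly_score
    ranked = sorted(totals.items(), key=lambda item: item[1], reverse=True)
    return [(entity, score) for entity, score in ranked if score >= threshold]

signals = [
    Signal("edr", "host-17", 0.9),        # unusual process tree
    Signal("firewall", "host-17", 0.7),   # traffic to a rare destination
    Signal("identity", "alice", 0.3),     # slightly odd login time
]
print(prioritize(signals))   # host-17 surfaces with a combined score of about 1.32
```

The point is not the arithmetic but the workflow: corroborating weak signals across devices lets analysts triage a handful of high-confidence entities instead of thousands of raw alerts.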

4. AI-powered managed security services with profound benefits for MSPs

AI represents an exciting new horizon for managed services. Managed service providers (MSPs) have always used software to automate manual tasks. Going forward, MSPs will be able to hand many of those tasks to an AI system, freeing human technicians to create more valuable offerings for customers. 

AI can deepen the ability of managed services to provide comprehensive security and workflow oversight with the help of:

  • Anomaly detection: Synthesizing patterns and conducting behavioral analyses
  • Data analytics: Harnessing internal data and consumer metrics
  • Energy optimization: Implementing power, resource, and money-saving measures  

The way forward

AI-enabled cybersecurity solutions have yet to demonstrate efficiency without human intervention. Human cybersecurity experts can ensure that AI-based cyber systems are not subject to manipulation using false logic. 

Moving forward, AI in cybersecurity will have a profound impact as it continues to reduce programming hours and enable faster responses to threats against sensitive data. This advanced security will also benefit any company’s operations and boost customer confidence. 

Business leaders should strive to create a culture of security. What will your next move be? Is your company prepared to thrive in a world of technological disruption and digital dominance? With Netscribes’ technology and innovation research solutions, you can stay prepared for the next wave of technology requirements and shifts in the business landscape. To learn more, get in touch with us.