
Is ChatGPT the new security risk?

July 5, 2023
Read Time 4 mins

While ChatGPT is an incredible and disruptive technology that appears to answer practically any question an individual might ask, it leaves C-suite executives and compliance officers with even more questions.

In essence, ChatGPT is an AI-powered natural language processing chatbot that enables users to have human-like conversations and more. It can answer questions and help users with tasks such as composing emails, writing essays, and even writing code. Users can even ask it to role-play and review a business proposal as if it were an investor from Shark Tank, for example.
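To make that concrete, here is a minimal sketch of the same Shark Tank role-play done programmatically, assuming the openai Python package (the v0.x interface current at the time of writing); the exact client interface varies between library versions, and the API key and prompt text are placeholders.

```python
# Minimal sketch of a programmatic "conversation" with ChatGPT,
# assuming the openai Python package (v0.x interface); treat the
# model name, key, and prompt text as illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message sets up the investor role-play.
        {"role": "system",
         "content": "You are a Shark Tank investor reviewing a pitch."},
        {"role": "user",
         "content": "Please review this business proposal: ..."},
    ],
)

print(response.choices[0].message.content)
```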

Covered in this article

Not without limitations
Nobody really knows
Ask the right questions
Having the right policies in place
Vendor-related issues
Having the right skills
No need to panic

Not without limitations

For all its impressive appearance, however, ChatGPT has its limitations. There are well-documented issues with accuracy, and OpenAI has admitted the tool has “limited knowledge of world events after 2021”. As a result, it tends to fill gaps in its replies with incorrect information when there is not enough data available on the subject in question. Several ethical considerations also remain in a grey zone, such as plagiarism and the use of intellectual property without specific consent.

Other limitations include its inability to answer questions worded in certain ways, as it requires specific phrasing to understand clearly what is being asked. A more serious limitation is the inconsistent quality of the chatbot’s responses, which, while often plausible-sounding, can make no real sense or be repetitive and vague.

It is no surprise, then, that business leaders are asking themselves how they should govern the use of ChatGPT, and indeed of the many next-generation AI tools appearing everywhere, including within their own businesses.

Moreover, they are questioning how they can guard against new risks posed by bad actors using weaponised AI against them, and how they can monitor and manage the risks of vendors and third-party partners in their supply chain using these tools. Adding to the problem is that many professionals and leaders do not fully understand what these risks are, or their implications, in the first place.

Nobody really knows

It has become clear that AI will change the business world: technology this powerful and easy to use cannot fail to have a profound impact on corporate operations, risks, and governance.

Likewise, chief information security officers (CISOs) and other practitioners in cybersecurity and risk will have a pivotal role to play in helping organisations navigate these challenges.

However, beyond that, the answers to the questions that ChatGPT poses are anyone’s guess, and CISOs will need to be prepared to find these answers as the technology advances and is adopted into the enterprise.

Ask the right questions

How can they do this? By asking themselves and their organisations a lot of questions, the first of which should be whether they have the appropriate oversight structures in place.

When it comes to AI, the fundamental challenge is governance, and businesses must find a way to manage how AI is studied, developed, and employed within the company.

Senior management and the board need to establish formal governance over the use and development of AI; otherwise employees might be left to their own devices, leaving the enterprise exposed to a proliferation of risks.

Having the right policies in place

The next step is having basic policies that formalise governance principles for AI. Following this, the business must implement more precise policies and procedures for staff members and third parties to follow.

For instance, if senior management unveils bold ambitions for using generative AI to automate interactions with customers, the company should follow up with policies that dictate how particular business units can attempt to integrate AI into their operations.

In highly regulated industries such as financial services or healthcare, entities might want policies that prohibit rolling out AI initiatives until dedicated teams have had the opportunity to thoroughly test those systems for security and compliance risks.
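To illustrate what one such policy control might look like in practice, the sketch below shows a hypothetical gateway check that blocks prompts containing obviously sensitive patterns before they leave the company for an external AI service; the patterns and the gateway function are illustrative assumptions, not a complete data loss prevention solution.

```python
import re

# Illustrative patterns only; a real deployment would use a proper
# DLP engine with patterns tuned to the organisation's data.
SENSITIVE_PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api key": re.compile(r"(?i)\b(api[_-]?key|secret)\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def gateway(prompt: str) -> str:
    """Block the prompt, or pass it through to the approved AI service."""
    findings = check_prompt(prompt)
    if findings:
        raise PermissionError(
            f"Prompt blocked by AI-use policy: contains {', '.join(findings)}")
    return prompt  # in practice: forward to the approved AI endpoint

# Example: this prompt would be blocked before leaving the network.
# gateway("Summarise this record: jane@example.com, card 4111 1111 1111 1111")
```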

Vendor-related issues

At this juncture, companies should also start to consider vendor-related challenges more thoroughly. For instance, does the organisation want its vendors to disclose whether they are using AI when processing data or transactions on the company’s behalf?

Similarly, do they require a security assessment before buying any AI tools from a vendor? Policies are needed to address these and other challenges, and companies must work closely with procurement teams to ensure policies are clearly understood and are integrated into operations.
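As a sketch of what such a vendor assessment might capture, the example below models a minimal AI-disclosure record with a simple approval rule; the fields and the rule are assumptions made for illustration rather than a standard questionnaire.

```python
from dataclasses import dataclass

@dataclass
class VendorAIAssessment:
    """Minimal record of a vendor's AI-use disclosure (illustrative fields)."""
    vendor: str
    uses_ai_on_our_data: bool        # does the vendor run AI over our data?
    discloses_models_used: bool      # have they named the models/services?
    security_review_completed: bool  # have we assessed the AI tooling?
    data_retention_documented: bool  # is prompt/output retention documented?

    def approved_for_purchase(self) -> bool:
        # Assumed policy: any AI use on company data requires disclosure,
        # documented retention, and a completed security review.
        if not self.uses_ai_on_our_data:
            return True
        return (self.discloses_models_used
                and self.security_review_completed
                and self.data_retention_documented)

# Example: flagged because the security review has not happened yet.
acme = VendorAIAssessment("Acme Analytics", True, True, False, True)
print(acme.approved_for_purchase())  # False
```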

Having the right skills

Companies also need to ask whether they have the right skills to manage AI-enabled work on an ongoing basis and, if so, whether the necessary roles and responsibilities have been defined to put AI ambitions into practice.

Different skills are needed to assess the security risks of an AI implementation, to carry out IT audits, and to test software code written by ChatGPT for new products, for example.
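As one hedged example of that last skill, the sketch below uses Python’s standard ast module to flag obviously dangerous calls in code a chatbot has generated before it is reviewed further; the deny-list is an illustrative assumption and no substitute for a full security review.

```python
import ast

# Illustrative deny-list; a real review would combine static analysis
# tools, dependency checks, and human code review.
RISKY_CALLS = {"eval", "exec", "compile", "__import__"}

def flag_risky_calls(source: str) -> list[str]:
    """Return warnings for risky function calls in generated code."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                warnings.append(
                    f"line {node.lineno}: call to {node.func.id}()")
    return warnings

# Example: screening a snippet a chatbot might have produced.
generated = "result = eval(user_input)\nprint(result)"
for warning in flag_risky_calls(generated):
    print(warning)  # line 1: call to eval()
```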

This element is particularly challenging, as organisations will need to design entirely new workflows in ways that are unfamiliar and far-reaching.

No need to panic

Importantly, businesses need to realise there’s nothing to panic about. At its heart, AI is just another new technology, much like the rise of the internet back in the 1990s, or the cloud in the 2010s.

Yes, it raises a slew of security, operational, and compliance issues that companies haven’t even considered yet, but CISOs are becoming equipped with the tools and skills needed to work through these issues and find solutions that meet their company’s needs.

They may well have to depend heavily on frameworks such as the NIST AI Risk Management Framework, and others being developed specifically for AI. They will need to augment their capabilities in areas of policy management, risk assessment, monitoring, and training. And they’ll need support from the board, the C-suite, management, and other stakeholders, to ensure the process is managed properly and that goals align with the company’s vision.
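As a final illustrative sketch, an AI risk register could be organised around the four functions of the NIST AI Risk Management Framework (Govern, Map, Measure, Manage); the entries below are assumed examples, not prescribed controls.

```python
# A minimal AI risk register keyed by the four NIST AI RMF functions;
# the tracked controls are illustrative assumptions.
ai_risk_register = {
    "Govern": ["Board-approved AI acceptable-use policy",
               "Defined roles for AI oversight"],
    "Map": ["Inventory of AI tools in use, including vendor AI"],
    "Measure": ["Security testing of AI-generated code",
                "Accuracy monitoring of chatbot outputs"],
    "Manage": ["Incident response playbook for AI misuse",
               "Vendor AI-disclosure requirements in contracts"],
}

for function, controls in ai_risk_register.items():
    print(f"{function}: {len(controls)} control(s) tracked")
```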
