FoxPointe Security Hub

AI Awareness for Organizations Big and Small


This article was written by James Normand, Security Consultant.

Unless you just woke up from a 5-year coma, you’ve no doubt heard how artificial intelligence (AI) and large language models (LLMs) have ushered in an era of enhanced productivity, creativity, and shareholder value. AI agents and LLMs are being developed and deployed across the globe by governments, large multinational businesses, small and medium-sized businesses, and even individuals for everyday use. ChatGPT, one of the leaders in this space, has seen a nearly twofold increase in weekly active users, from 400 million at the start of 2025 to around 700 million by August 2025. How did I get these numbers? I asked ChatGPT, of course.

All that background aside, what does the advent of AI and LLM technology mean for small and medium-sized businesses and government entities? Unfortunately, it means an increase in risk: vendor risk, internal control risk, and personnel risk.

Vendor Management: Ongoing reviews of vendors are a critical step for organizational compliance as well as system and data integrity. As AI laws and regulations change, it is incredibly important to keep track of not just any AI tools used internally but also AI tools used by vendors with access to your organization’s data. Whether through an AI questionnaire completed as part of annual due diligence reviews or something more involved, your organization needs to be aware of how your data and your clients’ data are being used with regard to AI.
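To make the questionnaire idea concrete, here is a minimal sketch of how an AI due-diligence checklist might be structured and scored. The questions and risk weights are invented for illustration; they are not a compliance standard, and a real review program should be tailored to your regulatory environment.

```python
# Hypothetical AI due-diligence questionnaire for annual vendor reviews.
# Questions and risk weights below are illustrative examples only.

AI_QUESTIONNAIRE = [
    ("Does the vendor use AI/LLM tools to process our organization's data?", 3),
    ("Is our data used to train or fine-tune the vendor's AI models?", 5),
    ("Does the vendor rely on undisclosed AI subprocessors?", 2),
    ("Is the vendor unable to contractually disable AI features on request?", 2),
]

def score_vendor(answers):
    """Sum risk weights for every question answered 'yes' (True).

    `answers` maps question text -> bool. Unanswered questions count
    as 'no'. Higher totals suggest a deeper review is warranted.
    """
    return sum(weight for question, weight in AI_QUESTIONNAIRE
               if answers.get(question, False))

# Example: a vendor that trains models on customer data scores high.
answers = {
    "Does the vendor use AI/LLM tools to process our organization's data?": True,
    "Is our data used to train or fine-tune the vendor's AI models?": True,
}
print(score_vendor(answers))  # 8
```

A simple weighted score like this can help triage which vendors need a closer look, even if the final judgment is always qualitative.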

Data Classification and Acceptable Use: Understanding the types of data in use at your organization is key to determining limitations on how that data can be used. HIPAA, GDPR, and regional data privacy rules require strict data governance for sensitive and personal information. Data classification allows organizations to determine acceptable use cases for AI tools and helps limit the unauthorized use of sensitive data. Once data is accurately classified and safeguards are in place, personnel can be educated on the acceptable use of AI tools in the workplace.
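The link between classification and acceptable AI use can be expressed as a simple policy lookup. The classification labels and rules below are hypothetical examples of how an organization might encode such a policy; actual tiers and restrictions should come from your own data governance program.

```python
# Illustrative sketch: map hypothetical data classification levels to
# acceptable AI use. These labels and rules are examples, not a standard.

POLICY = {
    "public":       {"external_ai": True,  "internal_ai": True},
    "internal":     {"external_ai": False, "internal_ai": True},
    "confidential": {"external_ai": False, "internal_ai": True},
    "restricted":   {"external_ai": False, "internal_ai": False},  # e.g., PHI under HIPAA
}

def ai_use_allowed(classification: str, tool: str) -> bool:
    """Return True if policy permits sending this data class to the tool.

    `tool` is "external_ai" (public SaaS chatbot) or "internal_ai"
    (self-hosted model). Unknown classifications default to deny.
    """
    return POLICY.get(classification, {}).get(tool, False)

print(ai_use_allowed("public", "external_ai"))      # True
print(ai_use_allowed("restricted", "internal_ai"))  # False
```

Defaulting unknown classifications to "deny" mirrors the point above: until data is accurately classified, it should not be fed into AI tools.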

Phishing: Phishing is not new, and unlike the technology behind AI and LLMs, it is easy to understand. Someone is trying to trick you and your employees into providing credentials that can be used to compromise sensitive systems and data. We’ve all been through the trainings and been taught the tricks: look for spelling errors, poor grammar, odd formatting, and strange addresses, and be wary of unexpected, usually urgent, correspondence.

With AI and LLMs at their disposal, bad actors can create incredibly convincing phishing emails that look like legitimate messages from common services. Updated phishing training and spam/scam email detection tools are essential for protecting critical assets.
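The red flags from traditional phishing training can be sketched as simple heuristic checks. This is a toy illustration only; the phrase list, trusted-domain list, and sample addresses are invented, and real detection products use far richer signals (headers, link reputation, ML classifiers) precisely because AI-written lures defeat spelling-and-grammar checks.

```python
# Toy sketch of heuristic phishing checks mirroring classic training advice.
# Phrase list, trusted domains, and sample addresses are invented examples.

SUSPICIOUS_PHRASES = [
    "urgent action required",
    "verify your account",
    "password will expire",
    "click here immediately",
]

def phishing_signals(sender, subject, body, trusted_domains=("example.com",)):
    """Return a list of heuristic red flags found in an email."""
    flags = []
    text = f"{subject} {body}".lower()
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in text:
            flags.append(f"urgency phrase: {phrase!r}")
    # Lookalike or external sender domains are a classic warning sign.
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in trusted_domains:
        flags.append(f"untrusted sender domain: {domain}")
    return flags

# Note the '1' substituted for 'l' in the sender domain.
flags = phishing_signals(
    "it-support@examp1e.com",
    "URGENT action required",
    "Your password will expire today. Click here immediately.",
)
print(flags)
```

A message with several flags would be quarantined or flagged for user review; a message with none still isn't guaranteed safe, which is why layered training and tooling both matter.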

Regardless of your personal opinions on the efficacy and ethics of AI tools, you cannot afford to ignore their impact on organizations big and small. AI must be considered when evaluating internal controls such as acceptable use and data classification, vendor management, cybersecurity initiatives, and other industry-specific concerns. If you have any questions regarding AI risk, compliance, IT controls audits, risk assessments, or vCISO services, don’t hesitate to reach out to me or any of my colleagues here at FoxPointe Solutions.