When employees use public AI tools at work, they are usually trying to save time, whether that means refining an email, summarising a report, or solving a technical issue. The intention may be good, but the risk is real. Many businesses across the Caribbean are not yet prepared for it. According to Stanford’s 2025 AI Index, 78% of companies reported using AI in 2024, up from 55% the year prior. AI use is accelerating fast. The governance, however, is not keeping up. And that gap is exactly where businesses become exposed.
While the Intent Is Good, a Lot Can Go Wrong
Employees often share what seems like “just enough context” to get a better response. That can include:
- Client and customer details
- Internal emails and ticket references
- Pricing, proposals, and contract terms
- Incident logs, screenshots, and system information

Research from LayerX Security's Enterprise Generative AI Security Report 2025 found that 77% of employees paste data into generative AI tools, with the majority doing so from personal, unmanaged accounts that sit entirely outside company oversight. Many public AI platforms retain that data and, depending on the service, may use it to improve the platform. What feels like a shortcut can quickly become a serious data governance problem.
The Gap Between Using AI and Understanding the Risk
Many companies have not yet caught up with how quickly AI has become part of the workday. Employees are using these tools daily, yet most have never been told what is acceptable, what is off limits, or what happens if something goes wrong. According to a report by CybSafe and the National Cybersecurity Alliance, 38% of employees admitted to sharing sensitive work information through AI tools without their employer’s knowledge.
The challenge is not that employees are careless. It is that businesses have not yet built the guardrails to match the pace of adoption.
Under Jamaica’s Data Protection Act, companies remain responsible for protecting the personal information they hold. If customer data, internal records, or confidential business information is entered into a public AI tool and mishandled, that responsibility remains with the business, regardless of intent.
Bridging that gap starts with practical controls, not restrictions:
- A plain-language AI policy with clear examples of what must never be entered into public AI tools.
- Approved AI pathways that are fit for business use, so teams are not pushed toward unmanaged alternatives.
- Guardrails such as Data Loss Prevention and web controls to reduce copying, pasting, or uploading sensitive data to unapproved AI platforms.
- Visibility into which AI tools are being used on company devices and where data is moving.
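To make the Data Loss Prevention guardrail concrete, here is a minimal sketch of the kind of pattern-based check a DLP control might run before text is pasted or uploaded to an unapproved AI platform. This is an illustration only, not a real DLP product: the function names (`scan_for_sensitive_data`, `safe_to_send`) and the detection patterns are hypothetical examples, and production tools use far richer detection than simple regular expressions.

```python
import re

# Illustrative patterns only -- a real DLP product uses far richer detection
# (classifiers, fingerprinting, exact data matching, etc.).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_sensitive_data(text: str) -> list:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def safe_to_send(text: str) -> bool:
    """Allow the text out only if nothing sensitive was detected."""
    findings = scan_for_sensitive_data(text)
    for name in findings:
        print(f"Blocked: text appears to contain a {name}.")
    return not findings
```

A check like this would typically sit in a browser extension or endpoint agent, so the decision happens before data ever leaves a managed device, which is exactly the point at which the business still has control.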
AI use at work will only continue to grow, and the risks tied to it will grow just as quickly. The question is not whether your employees are using these tools. It is whether your business is protected when they do. Contact Info Exchange today to build a safer, more controlled approach to AI use with practical policies, stronger security, and the right oversight.