Key Points
- Bring Your Own AI (BYOAI) is the unsanctioned or unmanaged use of AI tools by employees outside organizational controls.
- BYOAI can boost productivity, but also expose sensitive data if not governed properly.
- Steps to govern BYOAI:
- Define BYOAI and set the scope.
- Create an approved AI toolkit with risk ratings.
- Discover existing BYOAI usage.
- Apply layered technical controls.
- Train and support users.
- Monitor, review, and improve.
- How NinjaOne can help with BYOAI governance:
- Discovery and inventory
- Policy deployment
- Alerting and tickets
- Reporting
- Establish clear guidance that allows employees to use preferred AI tools responsibly while safeguarding company data and maintaining productivity.
Remote work rose sharply during the pandemic and has remained prevalent ever since. While remote employees have gained real advantages, the shift has also brought drawbacks. One of these is “Shadow IT,” which refers to systems, devices, software, and services used within an organization without explicit approval or oversight from the central IT department.
With the simultaneous rise of artificial intelligence, Shadow IT has extended into a practice called “Bring Your Own Artificial Intelligence,” or BYOAI: employees choosing, or “bringing,” unapproved AI systems for the tasks they prefer. While it may seem harmless, these tools can expose sensitive information when they operate outside organizational controls.
Bring Your Own AI is the staff’s unsanctioned or unmanaged use of AI services. The right approach is not prohibition. In this blog, we will guide you through responsible BYOAI enablement, backed by clear policy, approved tools, and measurable oversight.
Best practices summary
| Task | Purpose and value |
| --- | --- |
| Task 1: Define BYOAI and set scope | Establishes a clear definition of what BYOAI means based on your company standards. |
| Task 2: Create an approved AI toolkit with risk ratings | Provides the staff with a solid understanding of what AI tools are safe to use for the company’s security. |
| Task 3: Discover existing BYOAI usage | Expands the coverage of planned BYOAI governance policies by determining and learning more about the AI tools that staff are already using. |
| Task 4: Apply layered technical controls | Lists technical controls, providing specific, actionable steps for IT and security teams. |
| Task 5: Train and support users | Ensures that everyone on the team is calibrated and aware of what the governance policies are comprised of. |
| Task 6: Monitor, review, and improve | Guarantees continuous security and protection for the company while BYOAI policies are implemented. |
Prerequisites for governing BYOAI
Before establishing BYOAI governance, prepare the following:
- Written AI acceptable use policy: This documentation should clearly define what constitutes fair usage of AI and when it is unsafe. It should also include acceptable data types and certain limitations.
- Identity and access management (IAM): Apply least privilege and conditional access rules to restrict who can use specific AI tools.
- Visibility and monitoring: Gain visibility over web, DNS, and endpoint activity for discovery and monitoring. This can help determine and mitigate potential risks of BYOAI usage.
- Exception workflows: Document approvals, exceptions, and policy reviews using ticketing systems.
- Risk rating framework: Create a simple rubric that scores AI tools based on data category, vendor trust, and security posture.
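A rubric like this can be expressed as a small scoring function. The factors, weights, and tier thresholds below are illustrative assumptions for the sketch, not a standard; adapt them to your own risk framework.

```python
# Illustrative risk-rating rubric for AI tools (assumed factors and thresholds).
# Each factor is scored 0 (poor) to 2 (strong); the total maps to a risk tier.

FACTORS = ("data_category", "vendor_trust", "security_posture")

def risk_tier(scores: dict) -> str:
    """Map per-factor scores (0-2 each) to an overall risk tier."""
    total = sum(scores[f] for f in FACTORS)
    if total >= 5:
        return "low"      # strong across the board: candidate for "approved"
    if total >= 3:
        return "medium"   # acceptable with conditions: candidate for "conditional"
    return "high"         # likely "not allowed"

# Example: a vendor with strong security handling moderately sensitive data
print(risk_tier({"data_category": 1, "vendor_trust": 2, "security_posture": 2}))
```

Keeping the rubric this simple makes scores easy to explain to tool owners; the point is a consistent, documented decision, not a precise risk number.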
Task 1: Define BYOAI and set scope
📌 Use Case:
This clearly defines what BYOAI means based on your company standards.
Clarifying boundaries is crucial so employees and admins know what the company allows regarding BYOAI. The following steps can help you define that scope.
- Define BYOAI: Establish a concrete definition of BYOAI as using AI tools or models that are not approved, not configured for enterprise data controls, or used outside sanctioned contexts.
- Create an allowlist: List permitted tools, permitted data types, and prohibited data classes such as credentials, regulated PII, PHI, or payment data.
- Provide examples: Publish examples of safe and unsafe prompts to set expectations.
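An allowlist works best when it is expressed as data that both humans and enforcement tooling can read. The sketch below shows one way to encode it; the tool names and data classes are placeholders, not recommendations.

```python
# Sketch of an AI-tool allowlist with prohibited data classes.
# Tool names and data-class labels are illustrative placeholders.

PROHIBITED_DATA = {"credentials", "regulated_pii", "phi", "payment_data"}

ALLOWLIST = {
    "example-chat-tool": {"public", "internal"},   # permitted data types per tool
    "example-code-assistant": {"public"},
}

def is_permitted(tool: str, data_class: str) -> bool:
    """A request is permitted only for listed tools and listed data types,
    and never for a prohibited data class."""
    if data_class in PROHIBITED_DATA:
        return False
    return data_class in ALLOWLIST.get(tool, set())

print(is_permitted("example-chat-tool", "internal"))  # permitted
print(is_permitted("example-chat-tool", "phi"))       # always blocked
```

Checking the prohibited set first means that even an approved tool can never receive regulated data, which mirrors the policy intent above.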
Task 2: Create an approved AI toolkit with risk ratings
📌 Use Case:
This task gives staff a clear understanding of which AI tools are safe to use.
An approved toolkit enables safe choices while documenting the rationale behind them. To build one, you must first determine which AI tools are safe for staff to use.
You can do it by following these steps:
- Score tools by data residency, encryption, logging, access controls, and vendor commitments.
- Document conditions, for example, approved only with SSO, tenant isolation, data loss prevention, or customer-managed keys.
- Group tools into the following:
- Approved: Tools deemed safe for general use.
- Conditional: Tools allowed only under specific requirements.
- Not allowed: Tools prohibited because they present a high data risk.
- Record exceptions with an owner, reason, and expiry date.
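Exception records with an owner, reason, and expiry date are easy to track programmatically, which keeps them from silently becoming permanent. The field names below are assumptions; adapt them to your ticketing system's schema.

```python
from datetime import date

# Sketch of exception records for conditionally approved tools.
# Field names are assumed, not from any specific ticketing product.

def make_exception(tool, owner, reason, expires):
    return {"tool": tool, "owner": owner, "reason": reason, "expires": expires}

def active_exceptions(exceptions, today):
    """Return exceptions that have not yet expired; anything past its
    expiry date should trigger a review rather than silently persist."""
    return [e for e in exceptions if e["expires"] >= today]

exceptions = [
    make_exception("example-tool-a", "j.doe", "pilot project", date(2026, 6, 30)),
    make_exception("example-tool-b", "a.smith", "vendor evaluation", date(2025, 1, 31)),
]
print([e["tool"] for e in active_exceptions(exceptions, date(2025, 6, 1))])
```

Running a check like this on a schedule turns expiry dates into actionable review tickets instead of forgotten entries.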
Task 3: Discover existing BYOAI usage
📌 Use Case:
This task expands the coverage of planned BYOAI governance policies by determining and learning more about the AI tools that staff already use.
Before enforcing standard BYOAI policies, first find unmanaged tools and usage patterns. This surfaces considerations you would otherwise miss when crafting governance policies.
Start by looking for signals such as the following:
- Endpoint processes and browser extensions associated with AI services
- DNS and proxy logs for known AI domains and new lookups
- CASB or secure web gateway detections for AI categories
- Ticket and chat references that indicate recurring tool use
Once this information is collected, you can proceed to:
- Inventory unapproved tools and map them to teams and data types.
- Convert safe patterns to policy and add approved tools to the catalog.
- Open tickets for risky usage with guidance and timelines to move to sanctioned options.
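The DNS/proxy signal above can be turned into a first-pass inventory with a short script. This is a minimal sketch: the log format is assumed to be simple `client domain` pairs, and the domain list is illustrative, so adapt both to your actual log source.

```python
# Sketch: flag AI-related lookups in DNS/proxy log lines and build a simple
# inventory of which hosts use which services. Domains are illustrative.

AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def inventory_ai_usage(log_lines):
    """Each line is assumed to be 'client_host domain'; returns
    {domain: set_of_clients} for domains on the known-AI list."""
    usage = {}
    for line in log_lines:
        client, domain = line.split()
        if domain in AI_DOMAINS:
            usage.setdefault(domain, set()).add(client)
    return usage

logs = [
    "laptop-42 chat.openai.com",
    "laptop-42 example.com",
    "desk-07 claude.ai",
]
print(inventory_ai_usage(logs))
```

Mapping domains to clients this way is the raw material for the next step: matching each client to a team and the data types that team handles.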
Task 4: Apply layered technical controls
📌 Use Case:
This task lists technical controls to provide specific, actionable steps for IT and security teams.
A well-governed BYOAI strategy relies on multiple layers of enforcement across endpoints, browsers, networks, and cloud environments. Here are the components and how you should put controls on them:
- Endpoint controls
- Where appropriate, use Windows 11 settings, Group Policy, or Registry keys to allow or deny generative AI features.
- Apply application allow-listing or browser extension controls for risky add-ons.
- Enforce device compliance before granting access to sanctioned AI tools.
- Browser and network controls
- Block unapproved AI domains or place them under restricted policies that prevent sensitive content from leaving.
- Enable SSL inspection, content inspection, or upload scanning based on sensitivity.
- Monitor for newly observed AI domains and review monthly.
- Cloud controls
- Disable AI connectors for repositories that hold regulated data unless guardrails are in place.
- Use data loss prevention to detect sensitive patterns in uploads and chat prompts.
- Require SSO, conditional access, and session controls for approved tools.
Task 5: Train and support users
📌 Use Case:
This task ensures that everyone on the team is calibrated and aware of the governance policies.
Implementing BYOAI policies can only succeed if everyone involved is properly educated about them. Training reduces human-driven leakage while keeping productivity high.
When training users, the following areas should be covered:
- What data can be shared with AI tools, and what cannot
- How to design safe prompts that avoid exposing secrets
- How to verify AI output before using it in tickets, docs, or communications
- How to request a new tool or a policy exception with a clear business case
Cover these areas in short quarterly refreshers, with examples tied to the approved toolkit.
Task 6: Monitor, review, and improve
📌 Use Case:
This task guarantees continuous security and protection for the company while implementing BYOAI policies.
AI tools evolve quickly, so your policies must adapt just as fast. Once you start governing BYOAI, monitoring, reviewing, and improving your policies becomes part of your routine. Here are some metrics to track:
- Percentage of AI traffic that uses approved tools
- Number of exceptions and average time to expiry
- DLP incidents tied to AI usage by severity
- New AI domains detected and time to decision
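The first two metrics above reduce to simple arithmetic over counters your monitoring already collects. A minimal sketch, with illustrative input values:

```python
# Sketch: compute BYOAI governance metrics from simple counters.
# Input values below are illustrative, not real data.

def approved_traffic_pct(approved_requests, total_requests):
    """Percentage of AI traffic that used approved tools."""
    if total_requests == 0:
        return 0.0
    return round(100 * approved_requests / total_requests, 1)

def avg_days_to_expiry(expiry_days):
    """Average remaining lifetime of open exceptions, in days."""
    return sum(expiry_days) / len(expiry_days) if expiry_days else 0.0

print(approved_traffic_pct(940, 1000))    # share of sanctioned AI traffic
print(avg_days_to_expiry([30, 60, 90]))   # mean days until exceptions expire
```

Tracking these as a monthly trend, rather than a point-in-time number, is what shows whether governance is actually improving.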
Monitoring and reviewing can be done monthly during the operational review of detections and exceptions. Meanwhile, improvements can be implemented through quarterly toolkit updates and policy refreshes based on findings.
NinjaOne integrations
A comprehensive endpoint management platform like NinjaOne offers robust tools for governing BYOAI. Here’s how:
| NinjaOne service | What it is | How it helps BYOAI governance |
| --- | --- | --- |
| Discovery and inventory | Schedule scripts and queries to detect AI processes, browser extensions, and domain access on endpoints. | Identifies unmanaged AI tools and usage patterns so IT can assess risk and add approved tools to the catalog. |
| Policy deployment | Push configuration changes that enable or restrict AI features, enforce allow-listing, and apply browser controls. | Enforces approved AI usage across devices and browsers, ensuring compliance with organizational policies. |
| Alerting and tickets | Notify owners when unapproved AI usage is detected and open tickets with remediation guidance. | Automates incident response and ensures accountability by directing users toward sanctioned AI tools. |
| Reporting | Generate client-level dashboards that summarize approved usage, exception status, and DLP signal trends. | Provides visibility into AI governance metrics, helping track compliance progress and identify areas for improvement. |
Effectively governing BYOAI
BYOAI has become a common practice among employees, and governing it, rather than banning it, is the right way to capture its productivity gains. Clear governance gives employees guidance for using the AI tools they prefer while keeping company data protected.
Key takeaways:
- Define BYOAI, publish allowed tools, and prohibit risky data classes.
- Discover current usage patterns before enforcing stricter controls.
- Apply endpoint, browser, network, and cloud guardrails that match data sensitivity.
- Train users with practical examples and a simple exception process.
- Monitor results, expire exceptions, and update the approved toolkit regularly.
By following these governance strategies, you can help employees thrive, teach them to use AI tools responsibly, and maintain robust security for the company.