Document Preview
AI Usage Guidelines Package
Northstar SaaS Co. | Prepared with GuardAxis
Northstar SaaS Co. receives a high draft risk posture in this Phase 2 demo because the intake indicates both internal and external AI use involving customer content, source code, trade secrets, and credentials or secrets.
Review Status
This package is a draft and requires business, security, and legal review before use.
Included Documents
4 draft documents plus the control mapping appendix.
Evidence Model
User-provided facts, observed website facts, inferred risks, and policy language are kept separate.
Executive Summary
- Intake answers are treated as the primary source of truth.
- Website evidence is limited to bounded public page review within the requested domain family.
- Policies are presented as editable draft guidelines.
Assumptions and Missing Information
Assumptions
- The website scan in this phase is bounded to public pages within the requested domain family and may be incomplete.
- Framework mappings are provided as high-level drafting references only.
Missing Information
- Public pages do not confirm the exact guardrails used for customer-facing AI outputs.
Package Snapshot
Business inputs captured
6
Confirmed business details stay separate from public website evidence and remain the primary basis for the draft.
Public website evidence
4
Website observations are supporting context only. The appendix records what was observed without treating marketing copy as verified practice.
Priority risks
3
Risks are inferred from the business summary and public evidence, then translated into conservative policy controls.
Policy Documents
AI Acceptable Use Policy
Audience: All staff and contractors
Draft guidelines only. Review by business, security, and legal stakeholders is required before adoption.
Purpose and scope
Northstar SaaS Co. may use approved AI tools to support legitimate business work. These draft guidelines apply to employees and contractors and should be reviewed by business, security, and legal stakeholders before adoption.
Restricted and prohibited uses
AI may support approved work, but it must not be used for prohibited activities. In particular: do not submit credentials, secrets, or production source code into unapproved AI tools; do not use AI to make autonomous customer commitments or support decisions; and do not use AI-generated code without engineering review. High-impact outputs should not be used without review and approval.
Sensitive data handling
Users must not place customer content, source code, trade secrets, or credentials and secrets into unapproved AI tools. Exceptions should require documented approval, and any regulated or customer data should be handled conservatively.
AI Security and Governance Policy
Audience: Leadership, security, IT, and governance owners
Draft guidelines only. Review by business, security, and legal stakeholders is required before adoption.
Roles and responsibilities
Business owners, security reviewers, and legal stakeholders should share responsibility for approving AI use cases, reviewing exceptions, and confirming that policy language matches actual operations.
Logging and monitoring
AI-assisted workflows should follow the organization's logging expectations, with additional attention on customer-facing outputs and other high-impact uses. Incident review and exception tracking should be documented in a lightweight, repeatable way.
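Lightweight, repeatable exception tracking can be as simple as appending one structured record per approved exception. The sketch below is a minimal illustration in Python; the field names (`tool`, `requested_by`, `expires`, and so on) are assumptions for illustration, not a schema mandated by this draft.

```python
import json
from dataclasses import asdict, dataclass, field


@dataclass
class AIException:
    """One logged exception to the AI usage policy (illustrative fields only)."""
    tool: str            # AI tool the exception applies to
    requested_by: str    # business owner requesting the exception
    reason: str          # why the standard rule cannot be followed
    approved_by: str     # security/legal reviewer who signed off
    expires: str         # ISO date after which the exception lapses
    conditions: list = field(default_factory=list)  # required safeguards


def append_exception(path: str, exc: AIException) -> None:
    """Append one exception as a JSON line, so the log stays easy to review."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(exc)) + "\n")
```

A JSON-lines file like this is trivially diffable and greppable, which keeps incident and exception review repeatable without requiring a dedicated tool.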
Third-party AI tools
New AI tools should not be adopted informally. Northstar SaaS Co. should maintain a simple approval path, record intended use, and require vendor review before wider rollout.
Employee AI Usage Standard
Audience: Employees and managers
Draft guidelines only. Review by business, security, and legal stakeholders is required before adoption.
Daily use rules
Employees may use approved AI tools for internal drafting and summarization, engineering assistance and code review support, support ticket triage and response drafting, and product and operations analysis. Customer-visible or externally shared outputs should stay within approved use cases such as customer support response drafting and product feature assistance with human review.
Review and verification
AI output must be checked for accuracy, completeness, and business appropriateness before it is relied on. When facts are uncertain or the impact is high, employees should escalate rather than assume the output is correct.
Code and intellectual property safeguards
Source code, trade secrets, and other proprietary material should only be used in approved workflows. The current risk profile for Northstar SaaS Co. highlights elevated exposure concerns in this area.
Third-Party AI Review Checklist
Audience: Security, procurement, and business owners
Draft guidelines only. Review by business, security, and legal stakeholders is required before adoption.
Vendor basics
Record the tool name, business owner, intended use case, and whether the tool may affect customers directly. If the use case changes materially, repeat the review.
Data exposure and retention
Review what information the vendor receives, stores, or uses for model improvement. Pay special attention to customer content, source code, trade secrets, and credentials or secrets, and confirm whether those categories are allowed at all.
Security and approval decision
Capture the approval decision, any required conditions, and follow-up owners. This checklist should support lightweight but consistent review rather than ad hoc judgment.
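The checklist above can be captured as one structured record per tool, covering vendor basics, data exposure, and the approval decision together. The Python sketch below is illustrative only; every field name and the `SENSITIVE_CATEGORIES` set are assumptions drawn from the intake, not a required format.

```python
from dataclasses import dataclass, field

# Data categories the intake flags as sensitive for Northstar SaaS Co.
SENSITIVE_CATEGORIES = {
    "customer content",
    "source code",
    "trade secrets",
    "credentials or secrets",
}


@dataclass
class VendorReview:
    """One third-party AI tool review (illustrative fields, not a mandated schema)."""
    tool_name: str
    business_owner: str
    intended_use: str
    customer_facing: bool                  # does the tool affect customers directly?
    data_shared: set = field(default_factory=set)   # categories the vendor receives
    retained_for_training: bool = False    # does the vendor use inputs for model improvement?
    decision: str = "pending"              # approved / approved-with-conditions / rejected
    conditions: list = field(default_factory=list)  # required conditions, if any
    followup_owner: str = ""               # who owns follow-up actions

    def sensitive_exposure(self) -> set:
        """Sensitive categories this tool would receive; non-empty means closer review."""
        return self.data_shared & SENSITIVE_CATEGORIES
```

Keeping the record per tool also makes it easy to honor the "repeat the review on material change" rule: a changed `intended_use` or `data_shared` simply produces a new record.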
Appendix A | Business Inputs
Organization
Northstar SaaS Co.
Primary website
https://software-saas.guardaxis.io
AI usage mode
internal and external
Customer-facing AI
In scope
Sensitive data types
customer content, source code, trade secrets, credentials or secrets
Current AI tools
ChatGPT, GitHub Copilot, AI features inside productivity tools
Appendix B | Observed Website Evidence
software-saas.guardaxis.io
Northstar SaaS Co.
Observed public fact | Source: software-saas.guardaxis.io
software-saas.guardaxis.io/operations
Product teams use AI for internal drafting, engineering assistance, support ticket triage, and customer-facing workflow assistance with human review.
Observed public fact | Source: software-saas.guardaxis.io/operations
software-saas.guardaxis.io/trust
The sample trust page discusses vendor review, security review expectations, access control, and separation between customer data and internal productivity workflows.
Observed public fact | Source: software-saas.guardaxis.io/trust
software-saas.guardaxis.io/privacy
The sample privacy notes describe customer content, account data, support data, and restrictions around credentials, secrets, and production source code.
Observed public fact | Source: software-saas.guardaxis.io/privacy
Appendix C | Priority Risks
Customer-facing AI outputs need clear human review boundaries
Severity | Medium
Source code and proprietary information exposure risk is elevated
Severity | High
Third-party AI tools need consistent approval and vendor review
Severity | Medium
Appendix D | Drafting Basis By Section
AI Acceptable Use Policy
Purpose and scope
- Business input: AI usage mode
- Business input: Customer type
- Drafting plan: Purpose and scope
AI Acceptable Use Policy
Restricted and prohibited uses
- Risk driver: Customer-facing AI outputs need clear human review boundaries
- Risk driver: Source code and proprietary information exposure risk is elevated
- Risk driver: Third-party AI tools need consistent approval and vendor review
- Drafting plan: Restricted and prohibited uses
AI Acceptable Use Policy
Sensitive data handling
- Business input: Sensitive data types
- Business input: Regulated data present
- Drafting plan: Sensitive data handling
AI Security and Governance Policy
Roles and responsibilities
- Business input: New tool approval
- Business input: Vendor review requirement
- Drafting plan: Roles and responsibilities
AI Security and Governance Policy
Logging and monitoring
- Business input: Logging requirement
- Risk driver: Customer-facing AI outputs need clear human review boundaries
- Drafting plan: Logging and monitoring
AI Security and Governance Policy
Third-party AI tools
- Business input: Current AI tools
- Risk driver: Third-party AI tools need consistent approval and vendor review
- Drafting plan: Third-party AI tools
Employee AI Usage Standard
Daily use rules
- Business input: Internal AI use cases
- Drafting plan: Daily use rules
Employee AI Usage Standard
Review and verification
- Business input: Human review requirement
- Drafting plan: Review and verification
Employee AI Usage Standard
Code and intellectual property safeguards
- Risk driver: Source code and proprietary information exposure risk is elevated
- Drafting plan: Code and intellectual property safeguards
Third-Party AI Review Checklist
Vendor basics
- Business input: Current AI tools
- Drafting plan: Vendor basics
Third-Party AI Review Checklist
Data exposure and retention
- Business input: Sensitive data types
- Drafting plan: Data exposure and retention
Third-Party AI Review Checklist
Security and approval decision
- Business input: Vendor review requirement
- Drafting plan: Security and approval decision
Appendix E | Control Mapping
| Document | Section | Framework | Reference | Confidence |
|---|---|---|---|---|
| AI Acceptable Use Policy | Purpose and scope | NIST AI RMF | Govern | Medium |
| AI Acceptable Use Policy | Purpose and scope | NIST CSF 2.0 | PR.DS (Data Security) | Medium |
| AI Acceptable Use Policy | Restricted and prohibited uses | OWASP LLM / GenAI | Overreliance | Medium |
| AI Acceptable Use Policy | Restricted and prohibited uses | NIST AI RMF | Govern | Medium |
| AI Acceptable Use Policy | Sensitive data handling | NIST CSF 2.0 | PR.DS (Data Security) | Medium |
| AI Acceptable Use Policy | Sensitive data handling | CIS Controls | 3 (Data Protection) | Medium |
| AI Security and Governance Policy | Roles and responsibilities | NIST CSF 2.0 | GV.RM (Risk Management Strategy) | Medium |
| AI Security and Governance Policy | Roles and responsibilities | NIST AI RMF | Govern | Medium |
| AI Security and Governance Policy | Logging and monitoring | NIST CSF 2.0 | GV.RM (Risk Management Strategy) | Medium |
| AI Security and Governance Policy | Logging and monitoring | OWASP LLM / GenAI | Sensitive Information Disclosure | Medium |
| AI Security and Governance Policy | Third-party AI tools | CIS Controls | 15 (Service Provider Management) | Medium |
| AI Security and Governance Policy | Third-party AI tools | NIST AI RMF | Govern | Medium |
| Employee AI Usage Standard | Daily use rules | NIST AI RMF | Manage | Medium |
| Employee AI Usage Standard | Daily use rules | CIS Controls | 3 (Data Protection) | Medium |
| Employee AI Usage Standard | Review and verification | OWASP LLM / GenAI | Overreliance | Medium |
| Employee AI Usage Standard | Review and verification | NIST AI RMF | Manage | Medium |
| Employee AI Usage Standard | Code and intellectual property safeguards | OWASP LLM / GenAI | Overreliance | Medium |
| Employee AI Usage Standard | Code and intellectual property safeguards | CIS Controls | 3 (Data Protection) | Medium |
| Third-Party AI Review Checklist | Vendor basics | NIST CSF 2.0 | GV.SC (Cybersecurity Supply Chain Risk Management) | Medium |
| Third-Party AI Review Checklist | Vendor basics | NIST AI RMF | Map | Medium |
| Third-Party AI Review Checklist | Data exposure and retention | CIS Controls | 15 (Service Provider Management) | Medium |
| Third-Party AI Review Checklist | Data exposure and retention | OWASP LLM / GenAI | Supply Chain Vulnerabilities | Medium |
| Third-Party AI Review Checklist | Security and approval decision | NIST CSF 2.0 | GV.SC (Cybersecurity Supply Chain Risk Management) | Medium |
| Third-Party AI Review Checklist | Security and approval decision | CIS Controls | 15 (Service Provider Management) | Medium |
High-level drafting support only. Review against the final approved policy text before relying on this mapping.
Draft guidelines only. This material does not establish compliance, certification, or legal sufficiency and must be reviewed by business, security, and legal stakeholders.