Handshake AI Showcase Demo

GuardAxis

Institutional AI usage guidelines, drafted with restraint

A secure-default AI usage guidelines workflow for businesses that need a credible first draft quickly.

GuardAxis turns a short business conversation into a reviewable AI usage guidelines package shaped for business, security, and legal scrutiny.

This showcase route uses a prepared software-industry profile to demonstrate the end-to-end output without any payment wall. The live customer path keeps drafting free and monetizes the full export and reviewed delivery.

One-screen takeaway

Problem

Most teams can describe their AI use faster than they can assemble a policy package their reviewers will trust.

Approach

GuardAxis turns a short intake plus bounded public-site context into a draft package with explicit evidence separation and framework grounding.

Outcome

Reviewers get a structured draft they can challenge, edit, export, and map back to recognized governance references.

Prefer a static artifact first? The sample PDF opens immediately and the DOCX downloads without running live generation.

Why it is credible

User-confirmed business facts outrank website inference, and website evidence stays visibly separate from the final draft.
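As a sketch, that precedence rule can be expressed as a tiny merge function. The function name and shape are illustrative assumptions for explanation, not GuardAxis internals.

```python
# Hypothetical sketch of the precedence rule: user-confirmed intake
# answers always outrank values inferred from the public website.
def merge_fact(intake_value, website_value):
    """Return the value a draft should use for one business fact."""
    if intake_value is not None:
        return intake_value   # confirmed facts win
    return website_value      # website inference is supporting context only

# A confirmed answer overrides a conflicting website inference.
print(merge_fact("internal and external", "internal only"))
# prints "internal and external"
```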

What the model contributes

It helps structure intake, summarize risk posture, and draft editable guidance while staying inside explicit review and non-compliance guardrails.

What this demo proves

The product can produce a coherent, review-ready package with framework alignment from a prepared industry scenario in one flow.

Demo workflow

What judges should look for in this sample

The sample below uses the software and SaaS industry track. Focus on how the draft stays inspectable from intake through framework alignment.

1. Intake discipline

The draft starts from business context, not raw crawling. Summary confirmation keeps the workflow grounded in facts the team can actually defend.

2. Reviewer-ready output

The generated package separates observed evidence, inferred risks, policy language, and framework alignment so reviewers can see why the draft says what it says.

Trusted drafting for internal AI governance

GuardAxis

Institutional AI usage guidelines, drafted with restraint

AI Usage Guidelines Package

Prepared for Northstar SaaS Co. Confirmed business details remain the primary source of truth, optional website notes stay supporting only, and every artifact remains an editable draft for business, security, and legal review.

Northstar SaaS Co. receives a high draft risk posture in this Phase 2 demo because the intake indicates internal and external AI use with customer content, source code, trade secrets, and credentials or secrets considerations.

Showcase mode

This sample company and package are public demo materials for product review. They show the product shape, not a full customer engagement.

Risk Posture

High

Conservative posture inferred from intake priorities and the reviewed public evidence.
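One way to picture a conservative posture rule is the sketch below. This is an assumption made for illustration, not the product's actual scoring logic; the threshold names and data-type labels mirror the demo intake.

```python
# Illustrative only: a conservative draft posture derived from two
# intake signals, usage mode and sensitive data types.
SENSITIVE_TYPES = {"customer_content", "source_code",
                   "trade_secrets", "credentials_or_secrets"}

def draft_posture(usage_mode: str, data_types: set) -> str:
    external = "external" in usage_mode
    sensitive = bool(SENSITIVE_TYPES & data_types)
    if external and sensitive:
        return "High"       # customer-facing use plus sensitive data
    if external or sensitive:
        return "Medium"
    return "Low"

# The demo intake (internal and external use, several sensitive
# data types) lands on the conservative end.
print(draft_posture("internal and external",
                    {"customer_content", "source_code"}))  # High
```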

Website Context

4

Public website notes or reviewed pages used as supporting context before drafting.

Draft Documents

4

Draft artifacts included in the final package before export.

Framework grounding

This draft is already mapped to recognized guidance.

GuardAxis cites real framework references from the current package, not a generic claim of "best practices." Use Review Notes for the section-by-section alignment table and supporting reviewer context.

NIST AI RMF | NIST CSF 2.0 | OWASP LLM / GenAI | CIS Controls

Document Preview

AI Usage Guidelines Package

Northstar SaaS Co. | Prepared with GuardAxis

Northstar SaaS Co. receives a high draft risk posture in this Phase 2 demo because the intake indicates internal and external AI use with customer content, source code, trade secrets, and credentials or secrets considerations.

Review Status

This package is a draft and requires business, security, and legal review before use.

Included Documents

4 draft documents plus the control mapping appendix.

Evidence Model

User facts, observed website facts, inferred risks, and policy language stay separated.
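That four-way separation can be sketched as a minimal data model. The class and field names below are illustrative assumptions, not the product's schema.

```python
from dataclasses import dataclass, field

@dataclass
class EvidencePackage:
    user_facts: list = field(default_factory=list)       # primary source of truth
    website_facts: list = field(default_factory=list)    # supporting context only
    inferred_risks: list = field(default_factory=list)
    policy_language: list = field(default_factory=list)  # editable draft text

pkg = EvidencePackage()
pkg.user_facts.append("AI usage mode: internal and external")
pkg.website_facts.append("Trust page discusses vendor review")
# Each category stays in its own list, so a reviewer can always see
# which statements are confirmed facts versus inference or draft text.
print(len(pkg.user_facts), len(pkg.website_facts))  # 1 1
```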

Executive Summary

  • Intake answers are treated as the primary source of truth.
  • Website evidence is limited to bounded public page review within the requested domain family.
  • Policies are presented as editable draft guidelines.

Assumptions and Missing Information

Assumptions

  • The website scan in this phase is bounded to public pages within the requested domain family and may be incomplete.
  • Framework mappings are provided as high-level drafting references only.

Missing Information

  • Public pages do not confirm the exact guardrails used for customer-facing AI outputs.

Package Snapshot

Business inputs captured

6

Confirmed business details stay separate from public website evidence and remain the primary basis for the draft.

Public website evidence

4

Website observations are supporting context only. The appendix records what was observed without treating marketing copy as verified practice.
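The "bounded to public pages within the requested domain family" constraint amounts to a simple scope check. The helper below is a sketch under that assumption, using the demo's domain family; the function name is hypothetical.

```python
from urllib.parse import urlparse

def in_domain_family(url: str, root: str = "guardaxis.io") -> bool:
    """Hypothetical scope check: accept only pages whose host is the
    requested domain or one of its subdomains."""
    host = urlparse(url).hostname or ""
    return host == root or host.endswith("." + root)

print(in_domain_family("https://software-saas.guardaxis.io/trust"))  # True
print(in_domain_family("https://example.com/ai-page"))               # False
```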

Priority risks

3

Risks are inferred from the business summary and public evidence, then translated into conservative policy controls.

Policy Documents

AI Acceptable Use Policy

Audience: All staff and contractors

Draft

Draft guidelines only. Review by business, security, and legal stakeholders is required before adoption.

Purpose and scope

Northstar SaaS Co. may use approved AI tools to support legitimate business work. These draft guidelines apply to employees and contractors and should be reviewed by business, security, and legal stakeholders before adoption.

Restricted and prohibited uses

AI may support approved work, but prohibited activities include submitting credentials, secrets, or production source code to unapproved AI tools; using AI to make autonomous customer commitments or support decisions; and using AI-generated code without engineering review. High-impact outputs should not be used without review and approval.

Sensitive data handling

Users must avoid placing customer content, source code, trade secrets, credentials, or other secrets into unapproved AI tools. Exceptions should require documented approval, and any regulated or customer data should be handled conservatively.

AI Security & Governance Policy

Audience: Leadership, security, IT, and governance owners

Draft

Draft guidelines only. Review by business, security, and legal stakeholders is required before adoption.

Roles and responsibilities

Business owners, security reviewers, and legal stakeholders should share responsibility for approving AI use cases, reviewing exceptions, and confirming that policy language matches actual operations.

Logging and monitoring

AI-assisted workflows should follow the organization's logging expectations, with additional attention on customer-facing outputs and other high-impact uses. Incident review and exception tracking should be documented in a lightweight, repeatable way.

Third-party AI tools

New AI tools should not be adopted informally. Northstar SaaS Co. should maintain a simple approval path, record intended use, and require vendor review before wider rollout.

Employee AI Usage Standard

Audience: Employees and managers

Draft

Draft guidelines only. Review by business, security, and legal stakeholders is required before adoption.

Daily use rules

Employees may use approved AI tools for internal drafting and summarization, engineering assistance and code review support, support ticket triage and response drafting, and product and operations analysis. Customer-visible or externally shared outputs should stay within approved use cases such as customer support response drafting and product feature assistance with human review.

Review and verification

AI output must be checked for accuracy, completeness, and business appropriateness before it is relied on. When facts are uncertain or the impact is high, employees should escalate rather than assume the output is correct.

Code and intellectual property safeguards

Source code, trade secrets, and other proprietary material should only be used in approved workflows. The current risk profile for Northstar SaaS Co. highlights elevated exposure concerns in this area.

Third-Party AI Review Checklist

Audience: Security, procurement, and business owners

Draft

Draft guidelines only. Review by business, security, and legal stakeholders is required before adoption.

Vendor basics

Record the tool name, business owner, intended use case, and whether the tool may affect customers directly. If the use case changes materially, repeat the review.

Data exposure and retention

Review what information the vendor receives, stores, or uses for model improvement. Pay special attention to customer content, source code, trade secrets, credentials, and other secrets, and confirm whether those categories are allowed at all.

Security and approval decision

Capture the approval decision, any required conditions, and follow-up owners. This checklist should support lightweight but consistent review rather than ad hoc judgment.

Appendix A | Business Inputs

  • Organization

    Northstar SaaS Co.

  • Primary website

    https://software-saas.guardaxis.io

  • AI usage mode

    internal and external

  • Customer-facing AI

    In scope

  • Sensitive data types

    customer content, source code, trade secrets, credentials or secrets

  • Current AI tools

    ChatGPT, GitHub Copilot, AI features inside productivity tools

Appendix B | Observed Website Evidence

  • software-saas.guardaxis.io

    Northstar SaaS Co.

    Observed public fact | Source: software-saas.guardaxis.io

  • software-saas.guardaxis.io/operations

    Product teams use AI for internal drafting, engineering assistance, support ticket triage, and customer-facing workflow assistance with human review.

    Observed public fact | Source: software-saas.guardaxis.io/operations

  • software-saas.guardaxis.io/trust

    The sample trust page discusses vendor review, security review expectations, access control, and separation between customer data and internal productivity workflows.

    Observed public fact | Source: software-saas.guardaxis.io/trust

  • software-saas.guardaxis.io/privacy

    The sample privacy notes describe customer content, account data, support data, and restrictions around credentials, secrets, and production source code.

    Observed public fact | Source: software-saas.guardaxis.io/privacy

Appendix C | Priority Risks

  • Customer-facing AI outputs need clear human review boundaries

    Severity | Medium

  • Source code and proprietary information exposure risk is elevated

    Severity | High

  • Third-party AI tools need consistent approval and vendor review

    Severity | Medium

Appendix D | Drafting Basis By Section

AI Acceptable Use Policy

Purpose and scope

  • Business input: AI usage mode
  • Business input: Customer type
  • Drafting plan: Purpose and scope

AI Acceptable Use Policy

Restricted and prohibited uses

  • Risk driver: Customer-facing AI outputs need clear human review boundaries
  • Risk driver: Source code and proprietary information exposure risk is elevated
  • Risk driver: Third-party AI tools need consistent approval and vendor review
  • Drafting plan: Restricted and prohibited uses

AI Acceptable Use Policy

Sensitive data handling

  • Business input: Sensitive data types
  • Business input: Regulated data present
  • Drafting plan: Sensitive data handling

AI Security and Governance Policy

Roles and responsibilities

  • Business input: New tool approval
  • Business input: Vendor review requirement
  • Drafting plan: Roles and responsibilities

AI Security and Governance Policy

Logging and monitoring

  • Business input: Logging requirement
  • Risk driver: Customer-facing AI outputs need clear human review boundaries
  • Drafting plan: Logging and monitoring

AI Security and Governance Policy

Third-party AI tools

  • Business input: Current AI tools
  • Risk driver: Third-party AI tools need consistent approval and vendor review
  • Drafting plan: Third-party AI tools

Employee AI Usage Standard

Daily use rules

  • Business input: Internal AI use cases
  • Drafting plan: Daily use rules

Employee AI Usage Standard

Review and verification

  • Business input: Human review requirement
  • Drafting plan: Review and verification

Employee AI Usage Standard

Code and intellectual property safeguards

  • Risk driver: Source code and proprietary information exposure risk is elevated
  • Drafting plan: Code and intellectual property safeguards

Third-Party AI Review Checklist

Vendor basics

  • Business input: Current AI tools
  • Drafting plan: Vendor basics

Third-Party AI Review Checklist

Data exposure and retention

  • Business input: Sensitive data types
  • Drafting plan: Data exposure and retention

Third-Party AI Review Checklist

Security and approval decision

  • Business input: Vendor review requirement
  • Drafting plan: Security and approval decision

Appendix E | Control Mapping

Framework mappings for draft document sections. All mappings below carry Medium confidence.

AI Acceptable Use Policy

  • Purpose and scope: NIST AI RMF (Govern); NIST CSF 2.0 (PR.DS | Data Security)
  • Restricted and prohibited uses: OWASP LLM / GenAI (Overreliance); NIST AI RMF (Govern)
  • Sensitive data handling: NIST CSF 2.0 (PR.DS | Data Security); CIS Controls (3 | Data Protection)

AI Security and Governance Policy

  • Roles and responsibilities: NIST CSF 2.0 (GV.RM | Risk Management Strategy); NIST AI RMF (Govern)
  • Logging and monitoring: NIST CSF 2.0 (GV.RM | Risk Management Strategy); OWASP LLM / GenAI (Sensitive Information Disclosure)
  • Third-party AI tools: CIS Controls (15 | Service Provider Management); NIST AI RMF (Govern)

Employee AI Usage Standard

  • Daily use rules: NIST AI RMF (Manage); CIS Controls (3 | Data Protection)
  • Review and verification: OWASP LLM / GenAI (Overreliance); NIST AI RMF (Manage)
  • Code and intellectual property safeguards: OWASP LLM / GenAI (Overreliance); CIS Controls (3 | Data Protection)

Third-Party AI Review Checklist

  • Vendor basics: NIST CSF 2.0 (GV.SC | Cybersecurity Supply Chain Risk Management); NIST AI RMF (Map)
  • Data exposure and retention: CIS Controls (15 | Service Provider Management); OWASP LLM / GenAI (Supply Chain Vulnerabilities)
  • Security and approval decision: NIST CSF 2.0 (GV.SC | Cybersecurity Supply Chain Risk Management); CIS Controls (15 | Service Provider Management)

High-level drafting support only. Review against the final approved policy text before relying on this mapping.
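Tooling that consumes this appendix could represent each mapping row as a plain record. The sketch below shows two of the rows above; the dict keys mirror the table columns and are an illustrative layout, not a real schema.

```python
# Illustrative representation of control-mapping rows as plain records.
control_mapping = [
    {"document": "AI Acceptable Use Policy",
     "section": "Sensitive data handling",
     "framework": "NIST CSF 2.0",
     "reference": "PR.DS | Data Security",
     "confidence": "Medium"},
    {"document": "Third-Party AI Review Checklist",
     "section": "Vendor basics",
     "framework": "NIST AI RMF",
     "reference": "Map",
     "confidence": "Medium"},
]

# Group references by section for a reviewer-facing summary.
by_section = {}
for row in control_mapping:
    key = (row["document"], row["section"])
    by_section.setdefault(key, []).append(f'{row["framework"]}: {row["reference"]}')

print(by_section[("Third-Party AI Review Checklist", "Vendor basics")])
# prints ['NIST AI RMF: Map']
```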

Draft guidelines only. This material does not establish compliance, certification, or legal sufficiency and must be reviewed by business, security, and legal stakeholders.