1. Purpose
This policy sets out how AI is governed in the CompleteFlow platform and how CompleteFlow personnel use AI tools. It addresses the provider / deployer distinction under the EU AI Act, data handling for AI systems, human oversight, prompt injection controls, personnel acceptable use of AI, and the prohibition on disclosure of customer data to public AI services.
2. Scope
- The AI components of the CompleteFlow platform: the conversational agent, workflow automation (including document generation and analysis), RAG over customer document corpora, and structured extraction
- Any optional or customer-configured AI endpoints
- Any AI tool used by CompleteFlow personnel (coding assistants, writing assistants, research assistants, and so on)
3. Provider and deployer responsibilities
CompleteFlow distinguishes clearly between the responsibilities of the provider (CompleteFlow, as supplier of the platform and integrator of AI models) and the deployer (the customer, who configures and operates workflows for specific purposes using the platform). This aligns with the EU AI Act's provider / deployer model.
| Responsibility | Provider (CompleteFlow) | Deployer (Customer) |
|---|---|---|
| Select underlying AI model(s) that meet the product's security, data handling, and performance requirements | Yes | Co-approval for non-default endpoints |
| Ensure model data handling prevents training on customer data | Yes | N/A |
| Configure specific workflows and determine their intended purpose | N/A | Yes |
| Determine appropriate human oversight for each workflow (approval gates, review steps) | Provides controls | Configures use |
| Determine lawful basis and conduct DPIA for each processing activity | Supports | Yes |
| Inform end users where appropriate | Supports | Yes |
| Monitor in-use behaviour and report significant incidents | Platform telemetry | Operational awareness |
4. AI data handling
4.1 Default AI model
The default AI model endpoint is Azure OpenAI Service deployed within the customer's Azure subscription. This means inference requests do not leave the customer's Azure tenancy. Microsoft's contractual terms apply:
- Customer data submitted to Azure OpenAI Service is not used to train or improve any Microsoft or OpenAI models
- Inputs and outputs are subject to abuse monitoring with a default 30-day retention in the customer's Azure region; Limited Access Program opt-out is available to eliminate abuse-monitoring retention entirely
- Processing occurs only in the Azure region(s) selected by the customer
- Microsoft does not share Azure OpenAI inputs or outputs with OpenAI; OpenAI has no access to Azure OpenAI customer inputs
4.2 Optional model endpoints
Alternative AI endpoints may be configured with explicit customer approval. For each non-default endpoint, the customer's deployment specification records:
- Model name, provider, and version
- Geographic location of processing
- Data handling terms (training, retention, logging)
- Security posture evidence (certifications, audit reports)
- Justification for choosing a non-default endpoint
CompleteFlow recommends against using providers whose terms permit training on customer inputs.
4.3 No integrations with public AI services
The platform does not contain integrations with consumer-grade AI services (ChatGPT consumer, public Claude, public Gemini, or similar). Where analogous capability is required, it is accessed only via enterprise endpoints under contract.
4.4 No use of customer data for CompleteFlow purposes
Customer data is not used for:
- Training, fine-tuning, or evaluating any AI model
- Improving CompleteFlow's products or services beyond that customer's deployment
- Sharing with third parties for any purpose other than operating that customer's deployment
- Marketing, sales, or business development
This is a contractual commitment in the MSA and DPA and is reinforced by the architectural fact that customer data does not leave the customer's Azure subscription.
4.5 RAG and embeddings
For RAG, customer document corpora are embedded using an Azure OpenAI Service embedding model within the customer's Azure subscription. Embeddings are stored in the customer's PostgreSQL database with vector indexing. Embeddings and source content do not leave the customer's subscription.
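The retrieval step described above can be sketched in miniature. This is an illustrative in-memory stand-in for the pgvector-indexed PostgreSQL store: the `cosine_similarity` and `top_k` names, and the `(chunk_id, embedding)` row shape, are assumptions for illustration, not the platform's actual schema or API.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec: list[float], rows: list[tuple[str, list[float]]], k: int = 3) -> list[str]:
    """rows: (chunk_id, embedding) pairs, as they might be stored in a
    vector-indexed table; returns the ids of the k most similar chunks."""
    scored = sorted(rows, key=lambda r: cosine_similarity(query_vec, r[1]), reverse=True)
    return [chunk_id for chunk_id, _ in scored[:k]]
```

In production, pgvector performs the equivalent nearest-neighbour search inside PostgreSQL, so embeddings never leave the customer's database.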
4.6 Deletion of AI-related data
On customer request or at contract termination:
- Embeddings and vectorised content are deleted from the customer's PostgreSQL database
- Logged AI interactions are purged from the customer's Azure Monitor Log Analytics workspace, or expire under the configured retention period
- Cached AI outputs are deleted from the customer's Blob Storage and Redis
- Azure OpenAI abuse-monitoring data is deleted automatically at the end of the default 30-day retention period; Limited Access Program opt-out eliminates this retention going forward
Written confirmation of deletion is provided on request (see CF-POL-006 section 4.5).
5. Human oversight
The platform does not make autonomous decisions. A workflow executes only when a customer administrator has configured it to perform a specific action, the action runs under the credentials and entitlements of an identified end user, and any customer-configured human approval gates are satisfied.
Human oversight is retained through a combination of controls:
- Customer-configurable human approval gates at any workflow step
- Architectural blast-radius controls: workflows execute as the identified end user, not as shared service accounts; specific actions have inherent constraints (for example, email sending is limited to the user's own mailbox)
- Typed workflow outputs: AI step outputs are validated against a schema; non-conforming output fails the step rather than being acted upon
- Full audit trail of AI interactions and workflow executions, accessible to customer administrators
- Reasoning and tool-call transparency: for agentic flows, the sequence of reasoning and tool invocations is recorded in the audit log
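The typed-output control in the list above can be sketched as follows. This is a minimal hand-rolled validator for illustration; the platform's actual schema language is not specified here, and `validate_step_output` and its field-name-to-type schema shape are assumptions.

```python
def validate_step_output(output: dict, schema: dict) -> dict:
    """Validate an AI step's output against a declared schema mapping
    field name -> expected Python type. Non-conforming output raises,
    so the workflow step fails rather than acting on malformed data."""
    for field, expected_type in schema.items():
        if field not in output:
            raise ValueError(f"missing field: {field}")
        if not isinstance(output[field], expected_type):
            raise ValueError(f"field {field!r} is not {expected_type.__name__}")
    unknown = set(output) - set(schema)
    if unknown:
        # Extra fields are rejected too: the model cannot smuggle in
        # data the workflow was not declared to consume.
        raise ValueError(f"unexpected fields: {sorted(unknown)}")
    return output
```

The key design point is fail-closed behaviour: a schema violation halts the step rather than passing unvalidated model output downstream.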
Determining the appropriate level of human oversight for each workflow is the customer's responsibility as deployer. CompleteFlow's role is to provide the controls and the audit trail; the customer configures how those controls are used for each workflow.
6. Model selection and change management
Changes to the AI model(s) used by the platform or a customer deployment follow a controlled process:
- Proposed changes are assessed for data handling, security, performance, and cost against the current model
- Evaluation runs are performed in a non-production environment with non-customer content
- Customer notification is issued in advance of any material model change affecting their deployment (minimum 30 days for default endpoints)
- Customer-specific model choices are documented in the deployment specification
- Model change events are recorded in the platform change log
7. Prompt injection and input-integrity controls
Prompt injection is treated as a first-class threat. Controls in place and under continuous improvement include:
- Clear delimiters between system instructions, tool-call results, and user content in prompts
- Principle of least tool privilege: AI agents are granted only the tools strictly required for the workflow; tools are gated by the same authorisation policies as human users
- User-scoped execution: calls to downstream systems use per-user credentials with the user's own entitlements, so a compromised prompt cannot exceed the user's normal authorisation scope
- Typed outputs and schema validation: AI responses for structured workflows must conform to a declared schema; non-conforming responses fail the step
- Guardrail checks on high-risk actions (external communication, financial actions) including human approval gates where configured
- Audit log of prompts, responses, tool calls, and policy evaluations, available to customer administrators
- Content scanning: incoming documents are treated as untrusted; instructions embedded in documents do not automatically grant additional permissions
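The delimiter control at the top of this list can be sketched as below. The tag names and the `build_prompt` helper are illustrative assumptions, not the platform's actual prompt format; the point is that untrusted document content is fenced and cannot close its own fence.

```python
def build_prompt(system_instructions: str, document_text: str, user_query: str) -> str:
    """Assemble a prompt with explicit delimiters so the model can
    distinguish trusted instructions from untrusted document content."""
    # Neutralise any attempt by the document to close its own fence and
    # inject text into the trusted sections.
    safe_doc = document_text.replace("</untrusted_document>", "")
    return (
        f"<system>\n{system_instructions}\n</system>\n"
        f"<untrusted_document>\n{safe_doc}\n</untrusted_document>\n"
        f"<user_query>\n{user_query}\n</user_query>"
    )
```

Delimiting alone is not sufficient against injection, which is why it sits alongside least tool privilege, user-scoped execution, and typed outputs in the list above.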
8. Personnel use of AI tools
AI tools used by CompleteFlow personnel fall into three categories:
8.1 Sanctioned enterprise AI tools
A short list of sanctioned enterprise AI tools is maintained for personnel use (including the AI coding assistant used in the development workflow). These tools:
- Are configured on CompleteFlow-managed devices only
- Operate under contractual terms preventing use of inputs for training (or with such use explicitly disabled)
- Are subject to review and inclusion in the quarterly approved software list review (CF-POL-004 section 7.1)
8.2 Prohibited: customer data in unsanctioned AI services
Personnel must not paste, upload, or otherwise submit CompleteFlow customer data to:
- Consumer or free-tier AI services (for example, ChatGPT consumer, Claude consumer, Gemini consumer)
- Any AI service that has not been explicitly sanctioned for customer-data handling
- Any AI service whose terms do not preclude use of inputs for training
This prohibition extends to fragments of customer data (document excerpts, client names, matter identifiers). Synthetic or redacted examples should be used for any AI-assisted work that does not require real customer content.
8.3 Common-sense use
AI tools used for general work (for example, drafting internal documentation, summarising public research, coding assistance against non-customer code) should still be used with judgement: keep inputs minimal, have a human review outputs, and remember that any action taken on the basis of AI output remains the responsibility of the human actor.
9. Secure development and deployment of AI components
- AI components are developed under CF-POL-001 and the secure development lifecycle described in CF-DOC-001 section 11.1
- Prompt templates, model selection, tool definitions, and schemas are version-controlled and subject to code review
- Infrastructure-as-code defines AI endpoints, regions, and networking
- Change management applies as for any other platform change
10. Monitoring and incident response
AI-specific monitoring includes:
- Azure OpenAI Service usage and errors
- Unusual prompt patterns (length anomalies, injection-like content)
- Unexpected egress destinations from workflow steps
- Unusual model-invocation patterns outside configured norms
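The prompt-pattern checks in the list above could look something like the sketch below. The patterns and threshold are illustrative heuristics only; real monitoring rules are tuned per deployment and are not specified in this policy.

```python
import re

# Illustrative injection-like phrases; not an exhaustive or production list.
INJECTION_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"ignore (all )?previous instructions",
        r"disregard the system prompt",
        r"you are now",
    )
]

def flag_prompt(prompt: str, max_length: int = 20_000) -> list[str]:
    """Return monitoring flags for a prompt: length anomalies and
    injection-like phrasing, per the patterns above."""
    flags = []
    if len(prompt) > max_length:
        flags.append("length_anomaly")
    if any(p.search(prompt) for p in INJECTION_PATTERNS):
        flags.append("injection_like_content")
    return flags
```

Flags of this kind feed alerting rather than blocking: a flagged prompt is surfaced for review, while the hard controls in section 7 constrain what a malicious prompt can actually do.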
Incidents involving AI components are handled under CF-PLAN-001. Significant AI-specific incidents (for example, hallucination causing real-world impact, prompt-injection breakout, model-provider outage) are recorded and reviewed in the post-incident process.
11. Regulatory monitoring
CompleteFlow monitors developments in AI-specific regulation and guidance, including:
- UK government AI policy and ICO guidance
- EU AI Act provisions applicable to UK-based providers serving EU deployers
- Sector-specific AI guidance (SRA, FCA, ICO)
- NCSC guidance on AI and machine learning security
Material changes in guidance are assessed for platform, policy, or contractual impact, with updates made as needed.
12. Document control
| Version | Date | Author | Change |
|---|---|---|---|
| 1.0 | 2026-04-24 | J. Griffin | Initial approved version |