Revenue is usage-based, contracts change monthly, and the ledger pulls from half a dozen systems. If you audit a SaaS company in 2025, AI can surface risks faster, but only if your evidence and controls keep up.
What this section covers
Practical ways auditors and finance teams can apply AI/ML in a US SaaS audit, how to document evidence that stands up to inspection, sampling approaches that pair well with AI-driven risk scoring, and the PCAOB focus areas you should expect. The same principles on evidence quality, controls, and documentation apply for UK readers aligning to ISA (UK).
Where AI adds real value in a SaaS audit
1) Anomaly detection across revenue and cash
Use cases
- Flag contract modifications that change revenue timing.
- Detect unusual billings given seat counts, price books, or usage.
- Match PSP deposits to GL cash with outlier detection on fees and FX.
Evidence tips
- Preserve data lineage: raw extracts, transformation steps, and final tables.
- Retain model parameters and thresholds used to flag anomalies.
- Tie each flagged item to a testing worksheet and conclusion.
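For illustration, here is a minimal sketch of the kind of outlier test that can flag unusual processor fees when matching PSP deposits to GL cash. The column names, the toy data, and the robust z-score cutoff are assumptions, not a prescribed method; substitute whatever your settlement extract actually provides.

```python
# Minimal sketch: flag settlement batches whose implied fee rate is an outlier.
# Column names (gross_amount, net_deposit) are hypothetical placeholders.
import pandas as pd

def flag_fee_outliers(deposits: pd.DataFrame, z_cutoff: float = 3.0) -> pd.DataFrame:
    df = deposits.copy()
    # Implied fee rate per settlement batch.
    df["fee_rate"] = (df["gross_amount"] - df["net_deposit"]) / df["gross_amount"]
    # Robust z-score (median/MAD) so a few large outliers do not mask themselves.
    median = df["fee_rate"].median()
    mad = (df["fee_rate"] - median).abs().median()
    df["robust_z"] = (df["fee_rate"] - median) / (1.4826 * mad if mad else 1.0)
    df["flagged"] = df["robust_z"].abs() > z_cutoff
    return df

# Toy data; in practice this is the PSP extract preserved as part of data lineage.
sample = pd.DataFrame({
    "batch_id": ["B1", "B2", "B3", "B4"],
    "gross_amount": [10_000.0, 12_500.0, 9_800.0, 11_000.0],
    "net_deposit": [9_710.0, 12_135.0, 9_513.0, 9_900.0],  # B4 fee is unusually high
})
print(flag_fee_outliers(sample)[["batch_id", "fee_rate", "robust_z", "flagged"]])
```

Each flagged batch then gets its own line in the testing worksheet with the follow-up performed and the conclusion reached.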
2) Risk-scored sampling for revenue and expenses
Use cases
- Combine monetization features (plan, discount, usage spikes), counterparty signals, and prior-period adjustments into a risk score.
- Allocate sample sizes by risk stratum instead of uniform random pulls.
Evidence tips
- Document the scoring logic and why higher scores received more coverage.
- Show that the full population was available to be sampled.
- Keep a reproducible seed for any random component.
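As a sketch of scoring logic that is easy to document, a transparent weighted score often holds up better in review than an opaque model. The feature names, weights, and seed below are illustrative assumptions, not a recommended model.

```python
# Minimal sketch: a rule-based risk score over a revenue population with
# hypothetical feature columns; the weights themselves become workpaper content.
import pandas as pd

WEIGHTS = {
    "discount_pct": 2.0,      # depth of discount (0 to 1)
    "usage_spike": 1.5,       # usage far above plan (0/1)
    "prior_adjustment": 3.0,  # counterparty had a prior-period adjustment (0/1)
    "manual_override": 2.5,   # invoice touched by a manual price override (0/1)
}

def score_population(pop: pd.DataFrame) -> pd.DataFrame:
    df = pop.copy()
    df["risk_score"] = sum(df[col] * w for col, w in WEIGHTS.items())
    return df.sort_values("risk_score", ascending=False)

def draw_weighted_sample(scored: pd.DataFrame, n: int, seed: int = 20250131) -> pd.DataFrame:
    # Fixed, documented seed so any random component is reproducible.
    return scored.sample(n=n, weights="risk_score", random_state=seed)
```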
3) Automated matching and reconciliation
Use cases
- Match orders, invoices, collections, and revenue entries.
- Reconcile usage logs to billed quantities and recognized revenue.
- Compare change logs in the billing platform to GL journals.
Evidence tips
- Archive matching rules and exception queues.
- Retain before/after snapshots for resolved breaks.
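A minimal sketch of one such reconciliation, matching metered usage to billed quantities by customer and period. The extract names and columns are assumptions; unmatched or mismatched rows are what feed the exception queue you archive.

```python
# Minimal sketch: reconcile usage logs to billed quantities per customer/period.
import pandas as pd

def reconcile_usage_to_billing(usage: pd.DataFrame, billing: pd.DataFrame,
                               tolerance: float = 0.01) -> pd.DataFrame:
    u = usage.groupby(["customer_id", "period"], as_index=False)["units_used"].sum()
    b = billing.groupby(["customer_id", "period"], as_index=False)["units_billed"].sum()
    recon = u.merge(b, on=["customer_id", "period"], how="outer", indicator=True)
    recon["difference"] = recon["units_used"].fillna(0) - recon["units_billed"].fillna(0)
    # Exceptions: one-sided rows or differences beyond tolerance.
    recon["exception"] = (recon["_merge"] != "both") | (recon["difference"].abs() > tolerance)
    return recon
```

Rows where exception is True go to the queue; keep the before/after snapshots as breaks get resolved.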
4) Journal-entry testing and user-access analytics
Use cases
- Score JEs on timing, description, preparer history, and unusual combinations.
- Analyze access logs for segregation-of-duties breaches across billing, revenue, and GL.
Evidence tips
- Keep the feature list used for JE scoring and calibrate thresholds with a rationale.
- Save exportable access-review results with sign-offs and remediation.
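As an illustration of the feature-list idea, here is a small rule-based sketch of JE flags. The column names and thresholds are assumptions to be calibrated and documented per engagement, not a standard feature set.

```python
# Minimal sketch: simple journal-entry risk flags over a JE extract with
# hypothetical columns (posted_at, description, amount).
import pandas as pd

def score_journal_entries(je: pd.DataFrame) -> pd.DataFrame:
    df = je.copy()
    posted = pd.to_datetime(df["posted_at"])
    df["weekend_post"] = posted.dt.dayofweek >= 5      # posted on Sat/Sun
    df["period_end_post"] = posted.dt.is_month_end     # posted on the last day of the month
    df["round_amount"] = (df["amount"] % 1000 == 0) & (df["amount"] != 0)
    df["sparse_description"] = df["description"].fillna("").str.len() < 10
    flags = ["weekend_post", "period_end_post", "round_amount", "sparse_description"]
    df["je_score"] = df[flags].sum(axis=1)             # count of flags tripped
    return df.sort_values("je_score", ascending=False)
```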
Sampling that pairs well with AI
Stratified, risk-weighted sampling
- Build strata from risk scores (for example, high, medium, low).
- Allocate more items to high-risk strata, but keep minimum coverage in others.
- Document the rule that links score bands to sample counts.
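One way to make that rule explicit and reproducible is to write the score-band allocation down as configuration, as in the sketch below. The band cut-offs, sampling fractions, and minimums are illustrative placeholders to set and justify per engagement.

```python
# Minimal sketch: allocate sample counts by risk band with minimum coverage,
# assuming a population that already carries a risk_score column.
import pandas as pd

BANDS = {  # band: (lower score bound, sampling fraction, minimum items)
    "high":   (6.0, 0.50, 25),
    "medium": (3.0, 0.15, 10),
    "low":    (0.0, 0.02, 5),   # keep minimum coverage even in low-risk strata
}

def allocate_and_draw(scored: pd.DataFrame, seed: int = 20250131) -> pd.DataFrame:
    picks, remaining = [], scored.copy()
    for band, (lower, frac, minimum) in BANDS.items():
        stratum = remaining[remaining["risk_score"] >= lower]
        remaining = remaining[remaining["risk_score"] < lower]
        n = min(len(stratum), max(minimum, int(len(stratum) * frac)))
        picks.append(stratum.sample(n=n, random_state=seed).assign(stratum=band))
    return pd.concat(picks, ignore_index=True)
```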
Monetary Unit Sampling (MUS) with AI-assisted targeting
- Use MUS to cover material amounts.
- Overlay AI to add targeted picks from low-value but high-risk transactions (for example, extreme discounts or back-dated credits).
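A simplified sketch of how the two can sit together: systematic MUS selection over cumulative dollars, plus targeted picks of small-dollar, high-risk items the interval would rarely hit. The interval, risk threshold, and column names are assumptions for illustration, and the population is assumed non-empty with positive amounts.

```python
# Minimal sketch: MUS selection with a fixed interval, plus a targeted overlay.
import pandas as pd

def mus_with_targeted_overlay(pop: pd.DataFrame, interval: float,
                              risk_threshold: float = 6.0) -> pd.DataFrame:
    df = pop.reset_index(drop=True).copy()
    df["cum_amount"] = df["amount"].cumsum()
    # Systematic selection: every `interval` dollars, starting mid-interval.
    hits, point = [], interval / 2
    while point <= df["cum_amount"].iloc[-1]:
        hits.append(df[df["cum_amount"] >= point].index[0])
        point += interval
    statistical = df.loc[sorted(set(hits))]
    # Targeted picks: low-value, high-risk items MUS is unlikely to select.
    targeted = df[(df["amount"] < interval)
                  & (df["risk_score"] >= risk_threshold)
                  & (~df.index.isin(statistical.index))]
    return pd.concat([statistical.assign(pick="mus"), targeted.assign(pick="targeted")])
```

Document the two layers separately in the sampling memo so statistical coverage and the directed overlay are not conflated.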
Directed selections for known risks
- Always include manual journal reclasses to revenue, first- and last-day entries, and large credits.
- Explain why these are directed and not statistical.
Making AI-assisted procedures “audit defensible”
Data completeness and accuracy (C&A)
- Describe the source systems, extraction dates, filters, and controls over interfaces.
- Perform C&A tie-outs: record counts and dollar totals from source to working tables.
- If using a data warehouse, test ETL controls or obtain assurance over them.
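A minimal tie-out sketch, assuming the source extract and the working table share an amount column; the names are placeholders. The output, together with the extraction date and filters, belongs in the workpaper so a reviewer can reperform it.

```python
# Minimal sketch: completeness-and-accuracy tie-out of counts and dollar totals.
import pandas as pd

def ca_tieout(source: pd.DataFrame, working: pd.DataFrame, amount_col: str = "amount") -> dict:
    result = {
        "source_rows": len(source),
        "working_rows": len(working),
        "source_total": round(float(source[amount_col].sum()), 2),
        "working_total": round(float(working[amount_col].sum()), 2),
    }
    result["rows_tie"] = result["source_rows"] == result["working_rows"]
    result["totals_tie"] = result["source_total"] == result["working_total"]
    return result
```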
Model governance and reproducibility
- Fix model versions and parameter sets for the audit period.
- Save code notebooks or configurations with timestamps.
- Record sensitivity tests that show results are stable across reasonable thresholds.
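A small sketch of what freezing a configuration can look like: the exact version, thresholds, and seed written to a timestamped JSON file with a hash recorded in the workpaper. The parameter names and file name are placeholders.

```python
# Minimal sketch: freeze the model configuration used for the audit period.
import hashlib
import json
from datetime import datetime, timezone

config = {
    "model_version": "anomaly-v1.3.0",   # illustrative version label
    "z_cutoff": 3.0,
    "score_weights": {"discount_pct": 2.0, "usage_spike": 1.5},
    "random_seed": 20250131,
    "frozen_at": datetime.now(timezone.utc).isoformat(),
}

payload = json.dumps(config, sort_keys=True, indent=2)
digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()

with open(f"model_config_{config['model_version']}.json", "w", encoding="utf-8") as fh:
    fh.write(payload)
print("config sha256:", digest)  # record the digest alongside the workpaper
```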
Independence of auditor judgment
- AI suggests; auditors conclude. Write the rationale for any overrides.
- Keep evidence that skeptical inquiry occurred on “clean” areas, not only flagged items.
Third-party tools and vendors
- Retain user guides, configuration exports, and change logs for the tool.
- If you rely on hosted platforms, obtain relevant assurance reports (for example, SOC 2) and scope them to the control objectives you depend on.
PCAOB focus areas you should plan for
While specifics vary by engagement, inspectors consistently look for:
- Risk assessment linkage: Show how significant classes of transactions (subscriptions, usage, credits) inform the AI features, sampling, and procedures.
- Controls over IT and reports: Evidence that ITGCs and key application controls work. If AI consumes system reports, prove the reports are accurate and complete.
- Substantive analytical procedures: If you use analytics for substantive evidence, document expectations, tolerable differences, and how anomalies were resolved.
- Revenue recognition: Clear testing of contract terms, modifications, variable consideration, and cut-off. Keep examples that trace from contract to invoice to recognition.
- Use of specialists: When data science workflows affect scope, define roles, supervise the work, and evaluate competence and objectivity.
- Fraud procedures: Journal entry testing, bias checks, and unpredictable audit procedures that are not fully determined by the model.
- Documentation quality: Workpapers must let a knowledgeable reviewer reperform the work: inputs, transformations, parameters, outputs, exceptions, and conclusions.
Coordinating with offshore teams without increasing risk
- Enforce role-based access and MFA; use VDI where possible.
- Share locked SOPs and data dictionaries for every source table used in AI procedures.
- Route PBCs through ticketing with owners, due dates, and attachments in the system of record.
- Have offshore staff prepare C&A tie-outs and exception logs, but keep review and sign-off with reviewers at your firm.
Practical checklist for year one
- Define AI-eligible procedures and link them to risks and assertions.
- Map data sources, owners, extraction methods, and C&A tests.
- Fix model versions, thresholds, and random seeds; store configs.
- Select sampling method per area and document the rule.
- Build workpaper templates for anomalies, JE testing, and reconciliations.
- Schedule pre-close data dry runs with finance to validate pipelines.
- Prepare an inspection-ready index that ties AI outputs to conclusions.
Wrap-up
AI can speed up coverage and sharpen focus, but it does not replace evidence discipline. Anchor every model and sample to the risks you care about, prove your data is complete and accurate, and keep workpapers that another auditor could reperform without guessing.