
Accessibility operations platform

Prove accessibility progress—don't just publish a score

Continuous scans, clustered issues, review trails, and exports your agency or enterprise can stand behind. Source-first remediation—never overlay substitutes.

No account needed. Sample depth and rate limits apply—see results for caveats.

Built for builders and operators

Wire scans into how your team already works: IDE, CI, ticketing, and internal tools. Entitlements and org boundaries are enforced on the server, not buried in client UI. Automation surfaces many failures, but it does not replace manual audit or legal judgment, and we say so everywhere it matters.

  • MCP server with a broad IDE tool surface (see package README for the current tool list)
  • Scoped API keys with organization boundaries enforced server-side
  • CLI for CI gates and diff-friendly scan output
  • Webhooks when crawls complete—wire into your own runbooks
  • Same engine as the product UI—no mystery “AI score” API

# MCP (IDE-native)
npx @aros/mcp-server

# Authenticated API (org-scoped keys)
curl -H "Authorization: Bearer $API_KEY" \
  $BASE_URL/api/...

# CLI / CI
npx @aros/cli scan --site example.com

Engine packages are @aros/*; the product experience is AccessibleMadeFlexible.
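Webhook deliveries are easiest to trust when you verify them before acting. The sketch below is illustrative only: the signature header, HMAC-SHA256 scheme, and payload shape are assumptions for the example, not documented API; check your webhook settings for the real contract.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: the signing scheme and payload fields are assumptions,
# not a documented contract -- confirm against your webhook configuration.
def verify_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Compare an HMAC-SHA256 digest of the raw request body to the header."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking a timing side channel
    return hmac.compare_digest(expected, signature_header)

def handle_delivery(secret: str, body: bytes, signature_header: str) -> str:
    """Reject unsigned deliveries, then dispatch on a hypothetical event field."""
    if not verify_signature(secret, body, signature_header):
        return "rejected"
    event = json.loads(body).get("event", "unknown")
    # wire the event into your own runbooks (re-scan, ticket, page someone)
    return event
```

The same shape works behind any HTTP framework; the only hard requirement is hashing the raw body bytes, not a re-serialized copy.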

Managed accessibility operations

When you want outcomes without hiring a full internal program overnight, we embed with your release cadence and evidence requirements.

Program setup & playbooks

Define scan scope, severity policy, export templates, and stakeholder reporting rhythms so the work stays accountable.

Remediation partnership

Engineers pair with your team on high-impact clusters—PRs, CMS patterns, and design-system fixes—not widget overlays.

Ongoing operations

Scheduled scans, regression alerts, and evidence packs for leadership—priced as a service, not shelf-ware.

Talk to us about scope

Custom SOWs—procurement-friendly documentation on request

Assurance vocabulary, not vanity scoring

Every output is labeled so teams can see what is automated, what has human review, and what is not safe to over-claim in external proof. No silent confidence jumps.

Automated

Detected by scan automation; not manually reviewed yet.

Public-safe proof label

Guided

Draft remediation guidance exists, but the change is not verified.

Public-safe proof label

Reviewed

A human reviewed triage or recommendation quality.

Public-safe proof label

Verified

A follow-up crawl confirms the issue condition changed as expected.

Public-safe proof label

Assured

A managed-service or contractual lane accepted accountability for this scope.

Internal-only unless contract terms apply

Stale

Evidence is older than policy tolerance and needs a fresh run.

Public-safe proof label
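The vocabulary above is a small state model, which makes it easy to encode. This is an illustrative sketch of the six labels, not the product's data model: the field names and the "downgrade to Stale past tolerance" rule are assumptions drawn from the definitions above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative model of the six proof labels described above.
@dataclass(frozen=True)
class ProofLabel:
    name: str
    public_safe: bool

LABELS = {
    "automated": ProofLabel("Automated", True),
    "guided": ProofLabel("Guided", True),
    "reviewed": ProofLabel("Reviewed", True),
    "verified": ProofLabel("Verified", True),
    "assured": ProofLabel("Assured", False),  # internal-only unless contracted
    "stale": ProofLabel("Stale", True),
}

def effective_label(label_key: str, evidence_at: datetime,
                    now: datetime, tolerance: timedelta) -> ProofLabel:
    """Downgrade any label to Stale once evidence ages past policy tolerance."""
    if now - evidence_at > tolerance:
        return LABELS["stale"]
    return LABELS[label_key]
```

The useful property is that "no silent confidence jumps" becomes checkable: a label can only move up through explicit review or verification events, and age alone can only move it down.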

How it works

A recurring loop—scan, cluster, triage, prove—so accessibility stays legible to engineering, design, and legal stakeholders.

Browser-accurate crawling

Playwright renders pages the way users' browsers do, across client-side and server-side rendering, and inspects real accessibility trees, so findings match what ships.

Clustered root causes

Roll thousands of page hits into one component-level issue. Triage once, then clear every affected page with intent.
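The clustering idea can be shown in a few lines. This is a minimal sketch, not the product's algorithm: the finding fields (`rule`, `selector`, `page`) and the grouping key are assumptions chosen to illustrate "fix once, clear many pages."

```python
from collections import defaultdict

# Hypothetical finding shape: {"rule": ..., "selector": ..., "page": ...}.
def cluster_findings(findings):
    """Group page-level hits by (rule, selector) into component-level clusters."""
    clusters = defaultdict(set)
    for f in findings:
        clusters[(f["rule"], f["selector"])].add(f["page"])
    # sorted page lists keep the output diff-friendly
    return {key: sorted(pages) for key, pages in clusters.items()}
```

A missing `alt` on a shared hero component then shows up as one cluster spanning every page that renders it, rather than hundreds of duplicate findings.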

Bounded assist, not autopilot

Draft fixes with rationale and confidence where enabled. Nothing ships as “AI magic”—review and export are explicit gates.

Fixes in your repo

Map to source, open GitHub PRs, or export patches—so remediation lives in version control.

Review & accountability

Queues for what automation cannot judge: copy, context, keyboard flows, and assistive-tech nuance.

Evidence for stakeholders

Exports and report artifacts meant for agencies, execs, and procurement—not a single green score.

Plans

Free public scans invite you in; paid tiers unlock private workspaces, evidence history, workflow state, commitments, and API access—enforced server-side.

Free

$0

  • 1 site
  • 50 pages per crawl
  • 3 scans per month
  • 1 seat
  • No AI draft assist on this tier
  • Public instant scan from the homepage (bounded, with caveats)
  • Account + org shell; upgrade unlocks private workspace
  • Paid tiers unlock private scans, findings, exports, API access, and automation

Commitment boundaries

  • Public sample only: Public scans are bounded and expire. They provide orientation, not ongoing assurance.
  • No response SLA: Community-grade support only; no guaranteed response window.
Create free workspace

Starter

$49/mo

  • 3 sites
  • 200 pages per crawl
  • 10 scans per month
  • 3 seats
  • No AI draft assist on this tier
  • Full private scanning & crawl history
  • Issue clustering (fix once, clear many pages)
  • GitHub PR workflow for proposed fixes

Commitment boundaries

  • Private workspace continuity: Historical scans and findings remain available while subscription is active.
  • Re-scan after remediation: Teams can trigger verification scans after applying fixes, subject to plan scan limits.
Choose plan

Professional

$149/mo

  • 10 sites
  • 1,000 pages per crawl
  • 50 scans per month
  • 10 seats
  • Bounded AI draft assist: 100,000 tokens/mo (review required)
  • Deploy webhook triggers for post-deploy scans (this tier and up)
  • Human review queues + sign-off trails
  • Evidence-grade exports (VPAT-ready artifacts)
  • Integration automation: your operator configures connectors (e.g., Jira)

Commitment boundaries

  • Review lane visibility: Review queues and status trails distinguish automated signals from reviewed decisions.
  • Operational proof exports: Report exports include remediation state and timestamps suitable for buyer updates.
Choose plan

Enterprise

$499/mo

  • 100 sites
  • 10,000 pages per crawl
  • 500 scans per month
  • 100 seats
  • Bounded AI draft assist: 1,000,000 tokens/mo (review required)
  • Higher limits + procurement-friendly terms
  • Custom integrations & migration support
  • Managed accessibility operations (optional)
  • Response-time and onboarding commitments only where agreed in writing

Commitment boundaries

  • Contract-shaped commitments: Priority response windows and specialist review terms apply only when written into contract scope.
  • Managed assurance lane: Optional managed operations can include expert triage and verification cadence by SOW.
Choose plan

Questions

How is this different from another “AI accessibility” checker?

AccessibleMadeFlexible is built as an operations surface: browser-accurate crawling, clustered findings so you fix root causes, review queues, exports, and API/MCP hooks. Where AI appears, it is bounded—draft suggestions with confidence and human review—not a black-box compliance promise.

How is this different from accessibility overlays?

Overlays inject third-party widgets that do not repair underlying code and are widely rejected by the disability community. This product is source-first: fix HTML, CSS, ARIA, and components where they belong.

Do you guarantee WCAG or legal compliance?

No. Automated testing covers a fraction of WCAG. We surface evidence and workflow state; manual testing by experts and users with disabilities remains essential for any serious conformance claim.

What does the free instant scan include?

A bounded public sample of pages with clear limitations—enough to see signal, not a substitute for full-site monitoring, private workspaces, history, or exports. Upgrade for the complete operator workflow.