
EU AI Act deployer obligations hit August 2026.

Can your infrastructure prove what your AI systems did?

Companies using AI in high-risk categories — credit scoring, insurance pricing, hiring, healthcare triage, legal research — must maintain logs, ensure human oversight, report incidents, and in many cases conduct Fundamental Rights Impact Assessments. Fines: up to EUR 15M or 3% of global turnover. Most aren't ready.

What Deployers Must Do

The EU AI Act distinguishes between AI providers (companies building AI systems) and AI deployers (companies using them). If your company uses AI in high-risk categories, you're a deployer — with specific obligations:

  • Maintain logs of AI system decisions and outputs

  • Ensure meaningful human oversight of high-risk AI

  • Report serious incidents to authorities

  • Conduct Fundamental Rights Impact Assessments (for some categories)

  • Maintain technical documentation and transparency records


Important: Companies building products on top of models like Claude or ChatGPT may inadvertently become providers of new high-risk AI systems — triggering even heavier obligations including conformity assessments and risk management systems.


Internal Logs vs Cryptographic Proof

Most companies plan to meet AI Act obligations with internal logging — database entries, audit trails stored on their own servers. The problem: internal logs can be altered. When a regulator or counterparty asks for proof of compliance, you're offering trust, not verification.


Cryptographic receipts on tamper-proof infrastructure can't be altered. They're verifiable by any counterparty — regulators, auditors, customers — without trust assumptions.
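The tamper-evidence property can be illustrated with a minimal hash-chained log. This is a simplified sketch, not Aikin's actual scheme — a production system would also sign receipts and anchor the chain head on external infrastructure — but it shows why altering one past entry is detectable: every receipt commits to the hash of the one before it.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def entry_hash(entry: dict, prev_hash: str) -> str:
    # Each receipt commits to the previous hash, so altering any
    # past entry invalidates every hash that follows it.
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list, entry: dict) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"entry": entry, "hash": entry_hash(entry, prev)})

def verify(log: list) -> bool:
    prev = GENESIS
    for record in log:
        if record["hash"] != entry_hash(record["entry"], prev):
            return False
        prev = record["hash"]
    return True

log = []
append(log, {"model": "scorer-v1", "decision": "approve"})
append(log, {"model": "scorer-v1", "decision": "deny"})
assert verify(log)

log[0]["entry"]["decision"] = "deny"  # tamper with an earlier decision
assert not verify(log)                # the chain no longer verifies
```

A plain database row offers no equivalent check: an edited entry looks exactly like an original one. With a hash chain, a counterparty holding only the final hash can detect any rewrite of history.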


Aikin's Approach

Cryptographic Provenance Architecture

Aikin designs AI compliance architecture from the ground up — not bolted on after the fact.

  • Immutable inference logs — Every AI decision recorded on tamper-proof infrastructure

  • Model version attestation — Cryptographic proof of which model version was used

  • Human oversight records — Verifiable proof that human oversight occurred

  • Compliance dashboards — Real-time visibility into deployer obligation status

  • Cryptographic receipts — Verifiable by regulators and counterparties, not just internal teams


All built on ICP's tamper-proof state — the same infrastructure that provides zero downtime and cryptographic proof for sovereign cloud workloads.
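As one illustration of what model version attestation means in practice — a hypothetical sketch under assumed names, not the actual Aikin or ICP API — the deployed model artifact can be hashed together with its declared version label, and the resulting digest embedded in every receipt. A regulator holding the archived weights can recompute the digest and confirm which model produced a given decision.

```python
import hashlib

def model_attestation(model_bytes: bytes, version: str) -> str:
    # Bind the exact artifact to its declared version label; anyone
    # holding the archived weights can recompute and compare.
    h = hashlib.sha256()
    h.update(version.encode("utf-8"))
    h.update(b"\x00")  # domain separator between label and payload
    h.update(model_bytes)
    return h.hexdigest()

weights = b"fake-model-weights"  # stands in for the real artifact
receipt_att = model_attestation(weights, "credit-scorer-v2.3")

# Changing either the weights or the claimed version changes the digest:
assert model_attestation(weights, "credit-scorer-v2.4") != receipt_att
assert model_attestation(b"patched-weights", "credit-scorer-v2.3") != receipt_att
```

The domain separator prevents ambiguity between the version string and the start of the payload; without it, two different (version, weights) pairs could hash to the same digest.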

High-Risk AI Categories Under the EU AI Act

  1. FinTech — Credit scoring, loan decisions, insurance pricing, fraud detection

  2. Healthcare — Triage, treatment recommendations, benefits eligibility

  3. Legal & Professional Services — AI-assisted case research, contract review, legal interpretation

  4. Industry 4.0 — Safety-critical AI in infrastructure, quality control AI

  5. Employment — Hiring, CV screening, performance evaluation, promotion decisions (applies to any company using AI in these processes, regardless of sector)

How We Work Together

  • Compliance Readiness Assessment — Where does your AI usage stand relative to deployer obligations? (Entry point — urgent given August 2026 deadline)

  • Provenance Architecture Design — Design the compliance infrastructure: logging, attestation, oversight records, dashboards

  • Build — Implement using AI-orchestrated development on tamper-proof infrastructure

  • Ongoing Monitoring — Compliance dashboards and continuous audit readiness

Assess your AI Act readiness

August 2026 doesn't move. The time to design for compliance is now, not after the deadline.
