Responsible AI
Applying AI that is transparent, ethical, and accountable.
Targeted AI Authentication
Our facial authentication approach is rooted in advanced technology and a strong commitment to privacy and ethics. The Rock uses state-of-the-art AI and 3D sensing to deliver real-time, highly accurate authentication for physical access control, ensuring only authorized individuals can enter secure areas.
A key design principle is our deliberate limitation to 1:1 and 1:Few matching — never broad 1:Many surveillance. When you approach the Rock, the system only compares your encrypted facial signature against the roster of enrolled identities. It does not scan every face in view or reference large external databases. This targeted approach eliminates the risk of bystander surveillance, profiling, or “dragnet” searches that conflict with privacy regulations.
By operationalizing responsible AI this way, Alcatraz ensures that data use is strictly limited to what is necessary for authentication. This design minimizes bias, reduces regulatory risk under frameworks like GDPR and BIPA, and builds trust by aligning the system with societal expectations for fairness and consent.
What it means for you: You gain secure, seamless access without ever being profiled or monitored just for walking by. The Rock authenticates only those who choose to enroll — nothing more.
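The difference between 1:Few matching and 1:Many surveillance can be made concrete in code. The sketch below is illustrative only (not Alcatraz's actual algorithm): a probe signature is compared solely against a small enrolled roster, and anyone not on that roster simply produces no match.

```python
# Illustrative sketch of 1:Few matching: the search space is the enrolled
# roster alone, never every face in view or an external database.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def authenticate(probe, enrolled_roster, threshold=0.9):
    """Return the matching enrolled ID, or None if no enrollee clears
    the threshold. Non-enrolled faces can never match anything."""
    best_id, best_score = None, threshold
    for person_id, template in enrolled_roster.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# A probe close to an enrolled template matches; a stranger's does not.
roster = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.9, 0.3]}
print(authenticate([0.88, 0.12, 0.21], roster))  # -> alice
print(authenticate([0.5, 0.5, 0.5], roster))     # -> None
```

The vector values and threshold here are toy numbers; the point is structural — the loop ranges only over consented enrollees.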
Refusal and Opt-Out Built In
With Alcatraz AI, consent isn’t just policy — it’s enforced by design. Enrollment into the Rock system is always opt-in, meaning no one can be scanned or matched without explicit approval.
No Silent Enrollment: The Rock cannot capture or create a Facial Signature until the Solution Owner confirms consent. This safeguard prevents “background scanning” or automatic enrollment of bystanders.
Solution Owner Confirmation: Consent is tracked through the Solution Owner’s own systems — such as onboarding workflows, visitor registration, or digital consent forms. The Rock will not proceed without this explicit green light.
Technical Enforcement: Enrollment is locked until consent is registered in the connected Access Control System (ACS). This makes opt-in a hard requirement enforced by technology, not just a policy guideline.
Opt-Out Anytime: If an individual withdraws consent, their Facial Signature is deleted. From then on, the Rock will no longer recognize them, and access will be managed through other methods, like a badge or PIN.
Global Compliance: This consent-first approach aligns with international data protection laws, including BIPA, GDPR, and CCPA/CPRA, requiring informed and explicit consent for biometric processing.
What it means for you: Choosing Alcatraz’s Rock Solution is choosing control. Facial authentication only happens with your approval, and opting out never locks you out — it simply means you’ll use another secure access method.
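A consent gate like the one described above can be sketched as follows. This is a hypothetical illustration, not Alcatraz's real API: the names (`ConsentMissingError`, `acs_consent_records`) are invented, but the pattern — enrollment refused at the code level without a consent record, and deletion triggered by opt-out — mirrors the safeguards listed.

```python
# Hypothetical consent gate: no consent record in the ACS, no enrollment.
class ConsentMissingError(Exception):
    """Raised when enrollment is attempted without registered consent."""

def enroll(user_id, face_capture, acs_consent_records, signature_store):
    if not acs_consent_records.get(user_id, {}).get("consented"):
        # Hard stop: without consent, no Facial Signature is ever created.
        raise ConsentMissingError(f"No consent on file for {user_id}")
    signature_store[user_id] = hash(face_capture)  # stand-in for a template

def opt_out(user_id, acs_consent_records, signature_store):
    """Withdrawing consent deletes the stored signature immediately;
    access then falls back to another method (badge, PIN)."""
    acs_consent_records[user_id] = {"consented": False}
    signature_store.pop(user_id, None)
```

The essential design choice: consent is checked by the enrollment function itself, so opt-in cannot be bypassed by configuration.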
Human Oversight and Escalation
AI is powerful, but it doesn’t make final, unchallengeable decisions in the Alcatraz system. We designed the Rock to operate within a framework where humans always have the authority — and the tools — to intervene.
Human-in-the-Loop: Every access control decision the Rock makes can be reviewed, questioned, and overturned by authorized humans. This ensures AI is a tool for efficiency, not a replacement for accountability.
Escalation Pathways: If an access attempt is denied or flagged as anomalous, the system provides clear escalation routes to trained administrators. These humans have the visibility and authority to investigate, correct, or override decisions when appropriate.
Logged Reviews: Escalations and overrides are logged in detail, creating a full accountability record that includes the original AI output, the human review, and the ultimate resolution. This ensures transparency and enables audits.
Bias and Error Safeguards: Human review isn’t just a safety net — it’s also a mechanism for spotting patterns of error or bias. By logging escalations, organizations can identify when and where the AI needs tuning, retraining, or further oversight.
Compliance Alignment: Many regulations, including the EU AI Act, emphasize the need for “human oversight” in high-risk AI systems like biometric authentication. Alcatraz builds that into the Rock, making human review an operational reality rather than an afterthought.
What it means for you: If the Rock ever gets it wrong — say it doesn’t recognize you on a rainy morning or after a major haircut — you’re not stuck outside. There’s always a human with the authority to correct the record, and the system makes sure that pathway is logged, fair, and accountable.
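The accountability record described above — original AI output, human review, final resolution — can be sketched as a single log entry. Field names here are assumptions for illustration, not a published schema.

```python
# Illustrative override record capturing the three elements named above:
# the AI's original decision, the accountable human, and the resolution.
from datetime import datetime, timezone

def log_override(log, event_id, ai_decision, reviewer, resolution, reason):
    entry = {
        "event_id": event_id,
        "ai_decision": ai_decision,   # what the model originally decided
        "reviewed_by": reviewer,      # the human with override authority
        "resolution": resolution,     # the final, human-approved outcome
        "reason": reason,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    log.append(entry)
    return entry

audit_log = []
log_override(audit_log, "evt-1042", "deny", "admin.kim", "grant",
             "Liveness check failed in heavy rain; identity verified by badge")
```

Because every override preserves the AI's original output alongside the human resolution, recurring disagreements between the two become visible in the log — exactly the bias-spotting mechanism the bullets describe.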
Fair Access for All
Safeguards and human review routes are not reserved for executives, premium users, or specific geographies. At Alcatraz, protections apply equally to everyone interacting with the Rock — whether you’re an employee, contractor, or visitor. That commitment to fairness also extends to how our AI is designed: it does not assign categories, labels, or “social scores” that could result in unequal or disproportionate treatment.
Our system is deliberately scoped to authentication and access control only. It does not evaluate behavior to infer trustworthiness, does not track individuals for profiling, and never generates “risk scores” unrelated to the act of access itself. This ensures the technology cannot be repurposed for objectionable use cases, mass surveillance, or rights-violating outcomes.
What it means for you: Every user interacts with the same protections, appeal rights, and technical safeguards — without discrimination, without social scoring, and without hidden hierarchies that could erode trust or fairness.
Transparent and Traceable AI
AI should never be a mystery, especially when it governs access to secure spaces. At Alcatraz, transparency and traceability are baked into the design of our Rock platform so that every decision can be understood, explained, and, if necessary, challenged.
Audit Trails by Design: Every access decision the Rock makes — whether it grants, denies, or flags an attempt — is logged with the inputs, system actions, and contextual factors. These logs are structured so administrators can trace exactly how the outcome was reached without exposing raw biometric data.
Explainable Authentication: Our architecture ensures that when questions arise, administrators can see whether the denial was due to a mismatch, liveness check failure, withdrawn consent, or system policy. This avoids the frustration of “mystery denials” and provides a clear chain of reasoning.
Regulatory Alignment: Traceability is more than good practice — it is often required by law. Frameworks like the EU AI Act, GDPR, and BIPA expect explainability and accountability in high-risk AI applications such as biometric authentication. Our audit trails and reporting tools are built to meet these requirements, enabling Solution Owners to demonstrate compliance when regulators or auditors come knocking.
No Hidden Logic: The Rock does not use opaque or hidden decision layers that administrators cannot see or review. While advanced AI models drive authentication, their decision pathways are supported by interpretable rules and logging so administrators can connect outcomes to real-world factors.
What it means for you: You won’t be left in the dark if your access is denied. The decision is traceable, explainable, and reviewable — giving you and your organization confidence that the Rock works fairly, consistently, and transparently.
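One way to avoid “mystery denials” is to attach a specific reason code to every outcome, as the bullets above describe. The sketch below is an assumption-laden illustration — the reason codes and field names are invented — but it shows how a decision log can explain an outcome without storing any biometric data.

```python
# Illustrative explainable-decision log: each outcome carries a reason
# code so administrators can distinguish a mismatch from a liveness
# failure or withdrawn consent. Codes here are hypothetical examples.
REASONS = {
    "MATCH_OK": "Signature matched an enrolled identity",
    "NO_MATCH": "No enrolled identity cleared the match threshold",
    "LIVENESS_FAIL": "3D liveness check failed",
    "CONSENT_WITHDRAWN": "User opted out; signature deleted",
}

def record_decision(trail, door_id, outcome, reason_code):
    if reason_code not in REASONS:
        raise ValueError(f"unknown reason code: {reason_code}")
    # The entry stores only the outcome and its reason -- never raw
    # biometric data -- yet still lets admins trace how it was reached.
    entry = {"door": door_id, "outcome": outcome,
             "reason": reason_code, "detail": REASONS[reason_code]}
    trail.append(entry)
    return entry

trail = []
record_decision(trail, "lab-east", "deny", "LIVENESS_FAIL")
print(trail[0]["detail"])  # -> 3D liveness check failed
```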
Complaint Logging and Retention
A responsible AI system must not only make decisions — it must also record, and learn from, the moments when those decisions are questioned. At Alcatraz, we treat complaint logging and retention as a core safeguard — ensuring that issues aren’t erased, ignored, or hidden, but instead preserved so that we and our customers can identify patterns and continuously improve.
Structured Complaint Logging: Every reported issue, whether it’s a misidentification, access denial, or user concern, is logged into the system. These logs are tied to specific events (like an attempted access) without exposing sensitive biometric data. This means administrators can review issues in detail while user privacy remains protected.
Retention to Spot Patterns: Logs are retained for periods long enough to identify recurring issues or systemic risks. This enables organizations to detect, for example, if a particular door or lighting condition consistently creates authentication errors, or if a subset of users is disproportionately impacted.
Compliance-Ready Retention: Privacy laws such as GDPR, BIPA, and CPRA require accountability, including keeping evidence of when things went wrong. Our retention policies ensure that complaint records are available for audits, investigations, or regulator inquiries for as long as legally necessary, while still respecting data minimization principles.
Driving Improvements: Complaint logs are not just stored — they’re reviewed as part of risk assessments and ongoing AI governance. Organizations can adjust policies, retrain models, or fine-tune environmental conditions by analyzing complaint data to reduce error rates and improve fairness.
What it means for you: Your concern isn’t lost in the system if something goes wrong. It becomes part of a documented history that helps ensure issues are acknowledged, patterns are spotted, and both technology and policies evolve to serve you better.
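The tension between pattern-spotting and data minimization resolves into a retention window: keep records long enough to see trends, then purge. A minimal sketch, assuming a 365-day window purely for illustration (actual periods depend on law and policy):

```python
# Hypothetical retention check: complaint records inside the window are
# kept for pattern analysis; older ones become eligible for deletion
# under data minimization. The 365-day figure is an example only.
from datetime import date, timedelta

RETENTION_DAYS = 365  # illustrative, not a stated Alcatraz policy

def purge_expired(complaints, today):
    """Keep records inside the retention window; drop the rest."""
    cutoff = today - timedelta(days=RETENTION_DAYS)
    return [c for c in complaints if c["filed_on"] >= cutoff]

complaints = [
    {"id": "c-1", "filed_on": date(2023, 1, 5)},
    {"id": "c-2", "filed_on": date(2024, 11, 20)},
]
kept = purge_expired(complaints, date(2025, 1, 1))
print([c["id"] for c in kept])  # -> ['c-2']
```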
Durable and Verifiable Records
Trust in AI depends on how decisions are made and whether those decisions can stand up to external scrutiny. At Alcatraz, we ensure that records of important events, complaints, and escalations are both durable and verifiable. This makes them usable in compliance audits, regulatory inquiries, or even legal proceedings.
Tamper-Resistant Logs: All logs, from authentication attempts to complaint resolutions, are protected against unauthorized alteration or deletion. This ensures that records remain accurate reflections of what occurred, not vulnerable files that could be manipulated after the fact.
Exportable in Secure Formats: When records are needed by regulators, courts, or compliance teams, they can be exported in standardized, tamper-evident formats such as digitally signed PDFs or encrypted CSV files. These formats preserve the integrity of the data and allow the records to serve as admissible evidence.
Retention with Accountability: Logs are retained according to applicable privacy laws and organizational policies — long enough to ensure that harm can be investigated, but not longer than necessary for compliance. This balance minimizes unnecessary data retention while preserving accountability.
Independent Verification: Because records are preserved in secure, auditable formats, external auditors or regulators can independently confirm their authenticity. This external check builds confidence that our systems work as described, not just as claimed.
What it means for you: If an issue arises that requires proof — whether in an audit, a compliance review, or a legal dispute — there’s a trustworthy, verifiable record to back it up. You aren’t asked to “just trust” the system; you can rely on durable, exportable, and regulator-ready evidence.
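A common technique for tamper-evident logging is hash chaining: each entry embeds the hash of the one before it, so any later alteration breaks the chain and is detectable on verification. The sketch below illustrates that general concept, not Alcatraz's actual implementation.

```python
# Minimal hash-chained log: altering any past entry invalidates every
# subsequent hash, so tampering cannot go undetected at verification.
import hashlib
import json

def append_entry(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(chain):
    """Recompute every hash from scratch; any mismatch means tampering."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"payload": entry["payload"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_entry(chain, {"event": "access_denied", "door": "lab-east"})
append_entry(chain, {"event": "complaint_filed", "ref": "evt-1042"})
assert verify_chain(chain)
chain[0]["payload"]["event"] = "access_granted"  # tampering...
assert not verify_chain(chain)                   # ...is detected
```

This is also why independent verification works: an external auditor can recompute the chain without trusting the system that produced it.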
Appropriate Use and Safeguards Against Misuse
The computer vision models we use — including face recognition — are among the most accurate and rigorously tested globally. But accuracy alone isn’t enough. Like any powerful technology, AI can be used for positive, intended purposes or misused in ways that cause harm. At Alcatraz, we draw clear lines.
Our technology is designed for identity, safety, and security use cases — protecting enterprises, enabling the convergence of physical access control and video analytics, and creating safer, frictionless environments for travel, payments, and workplace access. All of these use cases require notice and consent for the individuals involved.
We do not support or enable objectionable deployments, mass surveillance, or misuse by bad actors. Our safeguards — from opt-in enrollment to governance structures and vendor assessments — exist to ensure our models are only applied where they protect people, not where they compromise rights.
What it means for you: You can trust that when you encounter a Rock, it’s in a context designed to make spaces safer, access simpler, and privacy stronger — not in one that puts your rights or freedoms at risk.
Genuine, Not Simulated, Safeguards
At Alcatraz, safeguards aren’t just for show — they’re engineered into the system and fully operational in the field. Too often, technology providers advertise “safety features” that are symbolic: menus that don’t lead anywhere, buttons that don’t function, or “appeal” paths that never actually change outcomes. We reject that approach.
Real Functionality, Not Placeholders: Every safeguard we publish — from opt-out to escalation to deletion — is live, tested, and usable in production. Nothing exists only on paper or as a marketing promise.
Built Into Architecture: Safeguards are enforced at the technical level, not just through policy. For example, the Rock will not allow enrollment until consent has been confirmed, and it will automatically delete Facial Signatures when opt-out is triggered. These are system-level guardrails, not optional settings.
Verified Through Testing: Our safeguards undergo rigorous internal validation and external audits to confirm they work as intended. This includes red-team testing, penetration testing, and compliance checks aligned with privacy regulations.
Transparency in Reporting: We are clear with Solution Owners and System End Users about what protections exist and how they work. There are no hidden conditions or fake assurances.
What it means for you: When you see a safeguard in the Trust Center or on the Rock, you can count on it being real. If we say you can opt out, escalate, or request deletion — those pathways are fully operational, not symbolic.
Tone from the Top
Responsible AI isn’t just a feature of the Rock — it’s a company-wide value that starts with leadership. At Alcatraz, governance is not delegated to just the legal team; it is driven and reinforced at the highest levels of our organization.
Executive Accountability: Our leadership defines clear AI policies and objectives aligned with Alcatraz’s mission to make the world safer through simple, secure, and trusted facial authentication. This responsibility is owned at the executive level, with oversight extending to the Board of Directors.
Integration into Operations: AI requirements are embedded into core processes, from engineering design to deployment and customer support. This ensures that responsible AI is not just an add-on but a part of daily operations and decision-making.
Resourcing and Support: Leadership allocates the resources — people, technology, training, and budget — to maintain world-class AI governance. Teams are empowered with the tools and support needed to uphold our standards.
Communication and Culture: The importance of responsible AI is communicated across the organization. At every level, employees understand the why and the how of our commitments and are empowered to raise concerns or suggest improvements.
Continuous Improvement: Governance is not static. Leaders track results, commission independent audits, and support a culture of learning so that the AI system continually evolves to meet higher standards.
What it means for you: Responsible AI at Alcatraz is not a marketing line — it’s a leadership priority. You can trust that the safeguards you experience in the Rock are backed by executive commitment, board oversight, and an organizational culture dedicated to doing AI right.