
Risk Assessment

SuperAlign Radar scores each AI tool's risk across five key areas.

What is Risk Assessment?

SuperAlign Radar evaluates each AI tool across five key dimensions to provide a comprehensive understanding of its inherent risk profile.

Overview

SuperAlign maintains a proprietary database of 24,000+ AI tools, each assessed based on publicly available vendor information including privacy policies, security pages, compliance statements, and product documentation.


Risk Assessment Dimensions

Each AI tool is evaluated across five key areas:

Data Privacy & Security

Examines how the vendor collects, stores, handles, and protects data, including assessment of encryption practices, data retention policies, third-party data sharing, and compliance certifications.

AI Safety, Accuracy & Fairness

Evaluates transparency about AI system limitations (such as hallucination rates), acknowledgment of potential bias, published accuracy metrics, and fairness testing practices.

Security & Attack Safeguards

Assesses defenses against prompt injection attacks, content filtering safeguards, vulnerability disclosure programs, DDoS protection, and service availability commitments.

Governance, Oversight & Transparency

Examines the vendor's documentation of how the AI system works, explainability of decisions, release notes, and overall transparency about the tool's capabilities and limitations.

Regulatory Compliance

Reviews claimed alignment with major regulatory frameworks including GDPR, CCPA, HIPAA, and SOX. Assesses availability of data processing agreements and compliance documentation.
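
To picture how the five dimensions fit together, here is a minimal sketch of a per-tool assessment record. The field names, 0-100 scoring, and unweighted averaging are illustrative assumptions, not SuperAlign's actual schema or weighting:

```python
from dataclasses import dataclass

@dataclass
class ToolAssessment:
    """Illustrative record for one AI tool's assessment (hypothetical schema)."""
    tool_name: str
    data_privacy_security: float         # each dimension scored 0-100
    ai_safety_accuracy_fairness: float
    security_attack_safeguards: float
    governance_oversight_transparency: float
    regulatory_compliance: float

    def overall_score(self) -> float:
        # Assumption: a simple unweighted average; the real weighting is not published.
        dims = (
            self.data_privacy_security,
            self.ai_safety_accuracy_fairness,
            self.security_attack_safeguards,
            self.governance_oversight_transparency,
            self.regulatory_compliance,
        )
        return sum(dims) / len(dims)
```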


How Risk Scores are Calculated

AI tools receive a grade (A through F) based on the completeness and quality of publicly disclosed information:

Risk Grade Scale

  • Grade A (90-100%): Excellent transparency and documented safeguards
  • Grade B (80-89%): Good transparency and documented safeguards
  • Grade C (70-79%): Fair transparency with some gaps in disclosed information
  • Grade D (60-69%): Poor transparency with significant gaps
  • Grade E (50-59%): Very poor transparency
  • Grade F (less than 50%): Minimal or no published information
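
As a quick sketch, the percentage-to-grade mapping above can be expressed as a small lookup function. The thresholds come directly from the scale; the function itself is illustrative:

```python
def risk_grade(score: float) -> str:
    """Map a 0-100 disclosure score to a letter grade per the scale above."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    if score >= 50:
        return "E"
    return "F"
```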

Key Principle

The grade reflects what vendors publicly disclose about their tools. A higher grade indicates better documentation of practices, policies, and safeguards.


User Risk Score

What is User Risk Score?

User Risk Score measures the risk associated with an individual employee's AI tool usage pattern based on:

  • Which specific AI tools they access
  • How frequently they use each tool
  • The inherent risk profiles of those tools

Score Result

A 0-100 score that reflects the overall risk level of the user's AI tool usage mix.

Why Tool Mix Matters

User risk isn't based simply on transaction volume. Instead, it considers what proportion of a user's activity involves higher-risk tools.

Example: User A vs User B

User A: Heavy user with 250 transactions, all to ChatGPT (a moderate-risk tool)

  • Result: Moderate overall risk score

User B: Moderate user with 200 transactions, majority to a high-risk financial AI tool

  • Result: High overall risk score despite fewer transactions

The difference: User B's tool selection carries more risk than User A's, even though User A has more transactions overall.
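
One way to picture the tool-mix effect is a usage-weighted average of per-tool risk. This sketch is illustrative only (SuperAlign's actual scoring formula is not published), and the risk values assigned to the two users' tools are made up for the example:

```python
def user_risk_score(transactions: dict[str, int], tool_risk: dict[str, float]) -> float:
    """Usage-weighted average of per-tool risk (0-100). Illustrative formula only."""
    total = sum(transactions.values())
    return sum(count / total * tool_risk[tool] for tool, count in transactions.items())

# Hypothetical per-tool risk values (not SuperAlign's published figures)
tool_risk = {"ChatGPT": 50.0, "HighRiskFinanceAI": 90.0}

# User A: 250 transactions, all to a moderate-risk tool
print(user_risk_score({"ChatGPT": 250}, tool_risk))  # 50.0 -> moderate

# User B: 200 transactions, mostly to a high-risk tool
print(user_risk_score({"HighRiskFinanceAI": 160, "ChatGPT": 40}, tool_risk))  # 82.0 -> high
```

Under these assumed values, User B scores higher than User A despite fewer transactions, matching the example above: the weighting by tool mix, not raw volume, drives the result.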


Usage Ratio

What is Usage Ratio?

Usage Ratio for a tool shows what percentage of your organization's total AI transactions involve that tool.

Why it matters: The usage ratio helps you identify which tools dominate your AI consumption and should therefore be prioritized in governance decisions.
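
The calculation itself is straightforward. A minimal sketch (function and variable names are illustrative):

```python
def usage_ratio(tool_transactions: int, total_transactions: int) -> float:
    """Percentage of the organization's total AI transactions that involve one tool."""
    return 100.0 * tool_transactions / total_transactions

# e.g., 1,200 of 4,800 total AI transactions go to one tool -> 25.0%
print(usage_ratio(1200, 4800))  # 25.0
```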


Metrics Summary

The three primary metrics (tool risk grade, User Risk Score, and Usage Ratio) work together to give a complete picture of your organizational AI risk.
