
FAQs

Common questions about SuperAlign Radar

General Questions

How does Radar detect AI tool usage?

Radar analyzes your firewall and network logs to detect when employees access AI tools. This approach provides visibility into:

  • Web-based AI tools (ChatGPT, Claude, Bard, etc.)
  • API calls to AI services
  • Connections to SaaS applications with AI features (Slack AI, Teams Copilot, etc.)

Since detection is network-based, Radar automatically sees all AI usage flowing through your organization's infrastructure without requiring agent deployment on individual devices.
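
To make the idea concrete, here is a minimal sketch of how network-log matching of this kind can work. The domain catalog, log format, and function names below are illustrative assumptions, not Radar's actual detection pipeline.

```python
# Hypothetical sketch: flag AI tool usage by matching proxy-log destinations
# against a catalog of known AI tool domains.

import csv

# Example catalog of AI tool domains (illustrative, not Radar's database)
AI_TOOL_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "api.openai.com": "OpenAI API",
    "gemini.google.com": "Gemini",
}

def detect_ai_usage(proxy_log_path: str) -> list[dict]:
    """Return one record per log line whose destination matches a known AI tool."""
    hits = []
    with open(proxy_log_path, newline="") as f:
        # Assumes a CSV proxy log with 'user', 'dest_host', and 'timestamp' columns
        for row in csv.DictReader(f):
            tool = AI_TOOL_DOMAINS.get(row["dest_host"])
            if tool:
                hits.append({"user": row["user"], "tool": tool, "time": row["timestamp"]})
    return hits

if __name__ == "__main__":
    for hit in detect_ai_usage("proxy_log.csv"):
        print(f'{hit["time"]}  {hit["user"]} -> {hit["tool"]}')
```

Because the matching happens against logs your firewall or proxy already produces, no software needs to run on employee devices.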

Why do I see AI tools I've never heard of?

Radar's database tracks 24,000+ AI tools globally. Many are niche or emerging tools that employees discover through:

  • App stores and online communities
  • Recommendations from colleagues
  • Viral adoption on social media
  • Internal experimentation and testing

These detections are legitimate: they surface AI tools that were previously invisible to your security team.

How often is the risk database updated?

New AI tools are assessed and added to the database continuously. Existing tools are re-evaluated when:

  • Vendors publish updated policies or security information
  • Major security incidents occur
  • Regulatory changes affect compliance requirements
  • Tools release significant updates

Is Radar agentless?

Yes. Radar uses network-based detection through firewall and proxy logs, so there's no need to deploy browser extensions, endpoint agents, or device-level monitoring. This means faster deployment and minimal impact on user devices.


Risk Grades & Assessment

What does each risk grade mean?

  • Grade A: Vendor has excellent public transparency about their tool's practices, safeguards, and compliance posture
  • Grade B: Vendor has good transparency with most key areas documented
  • Grade C: Vendor has fair transparency but some gaps in publicly available information
  • Grade D: Vendor has poor transparency with significant gaps
  • Grade E-F: Vendor has minimal publicly available information about their practices and safeguards

A tool's grade reflects what the vendor publicly discloses, not necessarily the tool's popularity or capability. Reasons for lower grades include:

  • Vendor doesn't publish a detailed privacy policy or security information
  • No published security certifications
  • Limited public documentation about data practices
  • No transparency about AI system limitations

Popular tools can still receive lower grades simply because their vendors don't prioritize publishing detailed information.
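
As a rough illustration, a transparency-based grade can be thought of as a function of how many disclosure criteria a vendor publicly documents. The criteria and thresholds below are assumptions for the sake of the example, not Radar's actual grading model.

```python
# Hypothetical sketch: derive a letter grade from how many transparency
# criteria a vendor publicly documents. Criteria and thresholds are
# illustrative assumptions, not Radar's grading model.

TRANSPARENCY_CRITERIA = [
    "privacy_policy",
    "security_certifications",
    "data_handling_docs",
    "ai_limitations_disclosure",
    "incident_response_docs",
]

def transparency_grade(disclosed: set[str]) -> str:
    """Map the fraction of documented criteria to a grade from A to E/F."""
    coverage = len(disclosed & set(TRANSPARENCY_CRITERIA)) / len(TRANSPARENCY_CRITERIA)
    if coverage >= 0.9:
        return "A"
    if coverage >= 0.7:
        return "B"
    if coverage >= 0.5:
        return "C"
    if coverage >= 0.3:
        return "D"
    return "E/F"

# Example: a vendor that publishes only a privacy policy and data handling docs
print(transparency_grade({"privacy_policy", "data_handling_docs"}))  # -> "D"
```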

What if a tool I'm considering shows as low-risk but I have concerns?

A grade is based on publicly disclosed information. If you have specific concerns about a tool:

  • Request detailed information directly from the vendor
  • Ask about their data handling practices, security measures, and compliance certifications
  • Request a security audit or penetration test if the use case is sensitive
  • Contact your information security team for additional assessment

User Risk & Behavior

What does a high user risk score mean?

A high user risk score (70+) indicates that the user's AI tool portfolio is heavily weighted toward higher-risk tools, or that they use those tools at significant volume. It flags activity that warrants attention from a governance perspective.
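
For intuition, a score like this can be modeled as a usage-weighted average of per-tool risk, scaled to 0-100. The grade-to-risk mapping and weighting below are illustrative assumptions, not Radar's actual scoring algorithm.

```python
# Hypothetical sketch: a user risk score as a usage-weighted average of
# per-tool risk, scaled to 0-100. The mapping and weights are assumptions.

# Assumed mapping of tool risk grades to numeric risk (0-100)
GRADE_RISK = {"A": 10, "B": 30, "C": 50, "D": 75, "E": 90, "F": 100}

def user_risk_score(usage: dict[str, tuple[str, int]]) -> float:
    """usage maps tool name -> (risk grade, number of interactions)."""
    total_events = sum(count for _, count in usage.values())
    if total_events == 0:
        return 0.0
    weighted = sum(GRADE_RISK[grade] * count for grade, count in usage.values())
    return round(weighted / total_events, 1)

# Example: heavy use of a D-graded tool pushes the score above 70
print(user_risk_score({"NicheSummarizer": ("D", 45), "ChatGPT": ("B", 5)}))  # -> 70.5
```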

What if a user has a high risk score but I believe their usage is appropriate?

User risk scores are based on tool selection, not role appropriateness. If a user legitimately needs to access higher-risk tools for their job:

  • Document the business justification
  • Review what data they're handling
  • Ensure appropriate monitoring and policies are in place
  • Mark their access as approved/documented in your governance system

How should I interpret low user risk scores?

A low user risk score (0-20) indicates the user's AI interactions are primarily with lower-risk tools, or their usage volume is low. This generally suggests lower governance concern from an AI risk perspective.
