Introduction
SuperAlign Surface is an AI asset discovery and endpoint visibility platform: it discovers, classifies, and risk-scores every AI tool installed and running across your organization's endpoints.
What is SuperAlign Surface?
SuperAlign Surface is an AI asset discovery and endpoint visibility platform designed to help organizations understand exactly what AI tools, applications, browser extensions, and IDE plugins are installed and running across every employee device.
While Radar monitors AI activity flowing through your network, Surface goes deeper — giving you a ground-level view of the AI software that exists on your endpoints, regardless of whether it has been used or approved. Surface discovers, categorizes, and risk-scores every AI-related asset across your fleet, empowering security and IT teams to govern AI adoption before it becomes a threat.
Key Features
- Endpoint-level discovery — Detect AI tools installed on every Mac, Windows, and Linux device across your organization
- Unified asset inventory — Browse and filter all discovered AI software in one place, organized by type, risk level, and governance status
- Risk scoring — Every asset is automatically assigned a risk level (Critical, High, Medium, Low) so you can prioritize action
- Multi-asset-type coverage — Surface identifies Applications, AI Skills, Browser Extensions, IDE Plugins, MCP Servers, Running Processes, Background Services, Node and Python environments, Sandboxed Apps, and more
- Real-time endpoint health — Track which devices are active, which have gone stale, and when each was last seen
Core Capabilities
Discovery
Surface deploys a lightweight agent to employee endpoints that continuously scans for installed software. It identifies AI-related assets across all major asset types — whether a user installed a productivity app, a GitHub Copilot plugin, a browser extension, or a local MCP server.
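Conceptually, the agent's matching step can be thought of as comparing installed software against a catalog of known AI tools. The sketch below is purely illustrative; the catalog entries and function names are invented for this example and are not Surface's actual agent logic.

```python
# Hypothetical catalog mapping known AI tool identifiers to asset types.
AI_CATALOG = {
    "github.copilot": "IDE Plugin",
    "grammarly": "Browser Extension",
    "granola": "Application",
}

def classify_installed(names: list[str]) -> dict[str, str]:
    """Map each installed item to an asset type if it appears in the catalog."""
    return {n: AI_CATALOG[n.lower()] for n in names if n.lower() in AI_CATALOG}
```

In practice, a real agent would also inspect process tables, browser profiles, and local package environments rather than matching names alone.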
Asset Classification
Every discovered asset is classified by type:
- Application — Desktop or web applications with AI capabilities (e.g., Granola, Discord, Arc)
- AI Skill — AI-powered scripting frameworks or agent toolkits (e.g., Agent-development, Async-python-patterns)
- Browser Extension — AI tools installed directly in the browser (e.g., Grammarly, Take Webpage Screenshot)
- IDE Plugin — Developer-facing AI coding tools integrated into code editors (e.g., Claude Code for VS Code, GitHub Copilot Chat)
- MCP Server — Model Context Protocol servers that extend AI agent capabilities by connecting them to external tools and services (e.g., Asana, io.github.github-mcp)
- Running Process — AI-related processes actively running in memory on the endpoint at the time of discovery
- Background Service — AI-related services configured to run in the background, often at system startup, without direct user interaction
- Node_env — AI-related packages and libraries discovered in local Node.js environments
- Python_env — AI-related packages and libraries discovered in local Python environments (e.g., ML frameworks, agent orchestration libraries)
- Sandboxed App — AI applications running in an isolated or sandboxed environment on the device
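The asset types above can be modeled as a simple enumeration. This is a sketch of one possible data model; the field names and class layout are assumptions, not Surface's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class AssetType(Enum):
    """The asset types Surface assigns, as listed above."""
    APPLICATION = "Application"
    AI_SKILL = "AI Skill"
    BROWSER_EXTENSION = "Browser Extension"
    IDE_PLUGIN = "IDE Plugin"
    MCP_SERVER = "MCP Server"
    RUNNING_PROCESS = "Running Process"
    BACKGROUND_SERVICE = "Background Service"
    NODE_ENV = "Node_env"
    PYTHON_ENV = "Python_env"
    SANDBOXED_APP = "Sandboxed App"

@dataclass
class DiscoveredAsset:
    # Hypothetical record for one discovered asset on one endpoint.
    name: str
    asset_type: AssetType
    endpoint_id: str

asset = DiscoveredAsset("GitHub Copilot Chat", AssetType.IDE_PLUGIN, "mac-0042")
```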
Risk Assessment
Each asset is automatically evaluated and assigned one of four risk levels:
| Risk Level | Description |
|---|---|
| Critical | Requires immediate attention; poses severe data, security, or compliance risk |
| High | Significant risk; should be reviewed and governed promptly |
| Medium | Moderate risk; warrants monitoring and policy consideration |
| Low | Minimal immediate risk; standard governance applies |
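Because the four levels form a strict ordering, teams can triage by sorting discovered assets from Critical down to Low. The snippet below is an illustrative sketch; the numeric ordering values are an assumption derived from the table above.

```python
# Lower number = higher priority, following the table above.
RISK_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def prioritize(assets: list[dict]) -> list[dict]:
    """Return assets ordered from highest to lowest risk."""
    return sorted(assets, key=lambda a: RISK_ORDER[a["risk"]])
```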
Governance Status
Ungoverned Assets
Assets are tagged as Ungoverned until your team takes action to approve, restrict, or formally acknowledge them. This status makes it easy to identify gaps in your AI governance program.
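Finding governance gaps then amounts to filtering for assets still tagged Ungoverned. This is a hypothetical sketch; the `status` field name and default are assumptions for illustration.

```python
def ungoverned(assets: list[dict]) -> list[dict]:
    """Return assets that have not yet been approved, restricted, or acknowledged."""
    # Newly discovered assets default to "Ungoverned" until a team acts on them.
    return [a for a in assets if a.get("status", "Ungoverned") == "Ungoverned"]
```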
Who Should Use Surface?
Key Stakeholders
- Security & IT Teams: Identify unauthorized or risky AI software installed across the device fleet
- CISOs & IT Leaders: Understand the full scope of AI adoption at the endpoint level, beyond what network monitoring alone reveals
- Compliance & Risk Teams: Audit which AI tools exist in the environment and confirm governance coverage
- Engineering Managers: Track what AI development tools — IDE plugins, MCP servers, AI skills — are in use across engineering endpoints