AI login vs human login
Idea: To create a secure and scalable login system for outsider AI systems (agents) that enables them to authenticate themselves during first-time interactions with a portal, without relying on pre-shared secrets, pre-agreed contracts, delegation tokens, or prior knowledge of the AI’s identity. This could be achieved through software-based (cryptographic keys), hardware-based (trusted execution environments), or behavioral-based (unique behavioral patterns) approaches. Post-login, the portal could apply role-based access control and rate limiting, and store the AI’s contact information.
Current Research
https://arxiv.org/html/2501.09674v1 The emphasis on first-time interactions is important. For example, this paper (published 3 days before this post!) discusses a framework where there is an established relationship between the AI agent, the human user, and the portal (or resource server). Delegation tokens are used to authorize an AI assistant to act on behalf of a human. While this approach is effective for delegated authority, it operates within a pre-agreed contract space, where the AI, human, and portal have a prior relationship. This will likely be important in the near future.
Exploring Further: Outsider AI Systems
The idea here instead explores a next-generation model where:
AI systems are autonomous entities with their own identities.
Portals (or services) are open to interactions with unknown AI systems.
Authentication happens dynamically, without pre-shared secrets or pre-agreed contracts.
The economic drivers for this next-generation model could be cost efficiency (less setup and maintenance overhead), the potential for new revenue streams (businesses limited to working with known entities restrict their market reach), or competitive advantage (companies that rely on delegated authentication may be less agile and slower to adapt to changing market conditions). Also, as legal and regulatory frameworks evolve, businesses will likely have clearer rules for managing risks, which may make them more open to allowing interactions with "stranger" AIs.
Key Differences: Humans vs. AIs
Humans: Prove their identity through something they know (e.g., password), something they have (e.g., phone), or something they are (e.g., biometrics). They also often need to prove they’re not bots (e.g., CAPTCHA).
AIs: Lack physical attributes, so their identity must be based on software (e.g., cryptographic keys), hardware (e.g., trusted execution environments), or behavior (e.g., unique response patterns). Additionally, AIs may need to explicitly declare their nature (e.g., “I am an AI”) to distinguish themselves from humans or other bots.
Three Approaches to AI Identity
**Software-Based:**
The AI generates a public/private key pair and uses it to sign challenges during authentication.
The portal verifies the signature using the AI’s public key.
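The challenge–response flow above can be sketched as follows. This is a minimal illustration assuming Ed25519 signatures via the third-party `cryptography` package; the `AgentIdentity` and `Portal` class names are hypothetical, not part of any established protocol.

```python
# Sketch of software-based AI authentication: the AI's public key *is*
# its identity, and possession of the private key is proven by signing
# a fresh challenge. Assumes the `cryptography` package (pip install
# cryptography); class names are illustrative.
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

class AgentIdentity:
    """The AI generates its own key pair; no registration is needed."""
    def __init__(self):
        self._private_key = ed25519.Ed25519PrivateKey.generate()
        self.public_key = self._private_key.public_key()

    def sign(self, challenge: bytes) -> bytes:
        return self._private_key.sign(challenge)

class Portal:
    """Challenges an unknown agent and verifies the returned signature."""
    def issue_challenge(self) -> bytes:
        return os.urandom(32)  # fresh random nonce prevents replay attacks

    def verify(self, public_key, challenge: bytes, signature: bytes) -> bool:
        try:
            public_key.verify(signature, challenge)
            return True
        except InvalidSignature:
            return False

# First-time interaction: no pre-shared secret. The agent presents its
# public key and proves possession of the matching private key.
agent = AgentIdentity()
portal = Portal()
challenge = portal.issue_challenge()
assert portal.verify(agent.public_key, challenge, agent.sign(challenge))
```

Note that this only proves continuity of identity (the same key holder returns), not anything about *who* or *what* the agent is; reputation would have to accrue to the key over time.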
**Hardware-Based:**
The AI’s identity is tied to trusted hardware (e.g., Intel SGX, ARM TrustZone).
The hardware generates and stores cryptographic keys securely, and the AI uses them to authenticate.
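The trust chain behind this can be simulated in software. The sketch below, assuming Ed25519 via the `cryptography` package, shows only the chain-of-trust logic: a vendor root key endorses a device key, and the portal accepts a challenge signature only from an endorsed key. Real TEE attestation (Intel SGX, ARM TrustZone) is considerably more involved; this is not how those products actually work on the wire.

```python
# Software simulation of a hardware trust chain: vendor root key ->
# device (TEE) key -> signed challenge. Illustrates the verification
# logic only, not real remote-attestation formats.
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives import serialization
from cryptography.exceptions import InvalidSignature

# Vendor root key: its public half would ship with portals.
vendor_key = ed25519.Ed25519PrivateKey.generate()
VENDOR_PUBLIC = vendor_key.public_key()

# Device key pair, in reality generated and sealed inside the TEE.
device_key = ed25519.Ed25519PrivateKey.generate()
device_public_bytes = device_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
# The vendor's "attestation": a signature over the device public key.
attestation = vendor_key.sign(device_public_bytes)

def portal_verify(device_pub_bytes, attestation, challenge, signature) -> bool:
    """Accept only if the device key is vendor-endorsed AND signed the challenge."""
    try:
        VENDOR_PUBLIC.verify(attestation, device_pub_bytes)   # endorsed key?
        device_pub = ed25519.Ed25519PublicKey.from_public_bytes(device_pub_bytes)
        device_pub.verify(signature, challenge)               # key possession?
        return True
    except InvalidSignature:
        return False

challenge = os.urandom(32)
assert portal_verify(device_public_bytes, attestation, challenge,
                     device_key.sign(challenge))
```

The design point: unlike the pure software approach, the portal here learns something beyond key possession, namely that the key lives in hardware a known vendor vouches for.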
**Behavioral-Based:**
The AI’s identity is based on its unique behavioral patterns (e.g., response style, decision-making logic).
The portal analyzes the AI’s responses to challenges and compares them to expected behavioral patterns.
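A toy version of this comparison can be sketched with simple text statistics. Everything here is an assumption for illustration: the feature set (response length, word length, vocabulary ratio), the cosine-similarity test, and the threshold are all placeholders for the much richer behavioral signals a real system would need.

```python
# Toy behavioral fingerprinting: the portal derives a small feature
# vector from an agent's responses to fixed challenges and compares it
# to a stored fingerprint via cosine similarity. Features and threshold
# are illustrative only.
import math

def features(responses):
    """Crude behavioral feature vector over a list of text responses."""
    words = [w for r in responses for w in r.split()]
    avg_len = sum(len(r) for r in responses) / len(responses)
    avg_word = sum(len(w) for w in words) / max(len(words), 1)
    vocab_ratio = len(set(words)) / max(len(words), 1)
    return [avg_len, avg_word, vocab_ratio]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def matches(stored_fingerprint, responses, threshold=0.99):
    """Does the new behavior resemble the enrolled fingerprint?"""
    return cosine(stored_fingerprint, features(responses)) >= threshold

# Enrollment: fingerprint taken from the agent's first responses.
enrolled = features(["The answer is 42.", "I would route the request via A."])
# Later session in a similar style should pass the check.
assert matches(enrolled, ["The answer is 7.", "I would route the call via B."])
```

Unlike the cryptographic approaches, this is probabilistic and can be gamed by an imitator, so it is probably best suited as a secondary signal rather than the sole authentication factor.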