Shared Utilities & General Endpoints
- Introduction
- Health Check Endpoint (/healthz)
- Feature Detection Routes (/features)
- Client Metadata Services (/client-data)
- GitHub Client Integration
- JSON Response Structure
- Caching Strategies
- Common Deployment Issues
- Performance Optimization
- Conclusion
Introduction
This document describes the shared utilities and general endpoints implemented in the general.py module of the post-quantum WebAuthn platform. The system provides essential services for health monitoring, feature detection, and client metadata management that support frontend functionality in web authentication scenarios. These endpoints enable critical operations such as browser compatibility detection, session metadata retrieval, and client capability validation, forming the backbone of the platform's utility services.
The documented endpoints serve as the interface between the frontend client applications and backend services, facilitating secure communication, metadata exchange, and system health monitoring. By exposing feature flags, processing client data, and providing health status, these utilities ensure the platform operates reliably across diverse client environments while maintaining security and performance standards.
Section sources
- general.py
- config.py
Health Check Endpoint (/healthz)
The health check endpoint provides a simple mechanism for monitoring the operational status of the WebAuthn server. This endpoint returns a minimal HTTP 200 response with "OK" text, indicating the server is running and capable of handling requests. The health check is designed to be lightweight and fast, avoiding any complex processing or database queries that could impact performance during high-traffic periods.
The implementation follows standard health check patterns, returning only the essential status information without exposing sensitive system details. This approach prevents potential information disclosure while still providing reliable monitoring capabilities for load balancers, container orchestration systems, and external monitoring services. The endpoint is accessible without authentication, allowing external systems to verify server availability without requiring credentials.
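A minimal sketch of such an endpoint, assuming a Flask application (the document later references Flask's jsonify) and the /healthz path from the table of contents:

```python
from flask import Flask

app = Flask(__name__)


@app.route("/healthz")
def healthz():
    # Plain text keeps the check cheap and reveals nothing about the system:
    # no JSON encoding, no database queries, no external service calls.
    return "OK", 200
```

Monitoring systems and load balancers only need the 200 status code; the "OK" body is a human-friendly extra.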
flowchart TD
A[Health Check Request] --> B{Server Running?}
B --> |Yes| C[Return HTTP 200 OK]
B --> |No| D[No Response/Timeout]
Diagram sources
- general.py
Feature Detection Routes (/features)
The feature detection routes expose the platform's capabilities through the fido2.features module, which implements a feature flag system for managing optional WebAuthn functionality. The primary feature exposed is webauthn_json_mapping, which controls JSON serialization behavior for WebAuthn data classes. When enabled, this feature transforms binary values into URL-safe base64 encoded strings, aligning with the WebAuthn Level 2 specification.
The feature system uses a singleton pattern with the _Feature class, which provides methods for enabling features, requiring specific states, and issuing deprecation warnings. Features are configured during application startup in the config.py file, where webauthn_json_mapping is explicitly enabled to ensure consistent JSON formatting across the platform. This design allows for gradual feature adoption and backward compatibility during transitions to new specifications.
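A simplified sketch of the _Feature singleton pattern described above; the exact attribute handling in fido2.features may differ, but the enable/require shape follows the class diagram below:

```python
from typing import Optional


class FeatureNotEnabledError(Exception):
    """Raised when a feature is not in the state a caller requires."""


class _Feature:
    def __init__(self, name: str, desc: str) -> None:
        self._enabled: Optional[bool] = None  # None means "not explicitly set"
        self._name = name
        self._desc = desc

    @property
    def enabled(self) -> bool:
        # An unset feature behaves as disabled.
        return bool(self._enabled)

    @enabled.setter
    def enabled(self, value: bool) -> None:
        self._enabled = value

    def require(self, state: bool = True) -> None:
        # Let callers assert that a feature is (or is not) enabled.
        if self.enabled != state:
            raise FeatureNotEnabledError(self._name)


webauthn_json_mapping = _Feature(
    "webauthn_json_mapping",
    "Serialize WebAuthn binary fields as URL-safe base64 strings",
)
webauthn_json_mapping.enabled = True  # mirrors the explicit enable in config.py
```

Enabling the flag once at startup, as config.py does, guarantees every serialization path sees the same behavior for the lifetime of the process.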
classDiagram
class _Feature {
+_enabled : Optional[bool]
+_name : str
+_desc : str
+enabled : bool
+require(state=True) : None
+warn() : None
}
_Feature --> FeatureNotEnabledError : "raises"
Diagram sources
- features.py
- config.py
Client Metadata Services (/client-data)
The client metadata services provide comprehensive functionality for managing session-specific metadata, including custom FIDO metadata entries, session state, and user-specific data. These services are implemented through multiple endpoints that handle metadata upload, listing, deletion, and retrieval operations. The system maintains both verified metadata from the FIDO Alliance and custom metadata uploaded by users during testing and development.
The metadata management system uses a hybrid storage approach, supporting both local file storage and Google Cloud Storage (GCS) based on configuration. Session metadata is organized by unique session identifiers stored in cookies, with automatic cleanup of inactive sessions after 14 days. Each metadata entry includes timestamps, original filenames, and legal headers, providing comprehensive audit trails and provenance information for uploaded data.
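A sketch of what saving one session metadata entry might look like against the local-file backend; the function name echoes save_session_metadata_item from the sequence diagram below, but the field names and storage layout here are illustrative assumptions:

```python
import json
import time
from pathlib import Path


def save_session_metadata_item(session_dir: Path, original_filename: str,
                               payload: dict) -> dict:
    """Persist one uploaded metadata entry and return its serialized form."""
    session_dir.mkdir(parents=True, exist_ok=True)
    uploaded_at = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    # Prefix with a millisecond timestamp so repeated uploads never collide.
    stored_name = f"{int(time.time() * 1000)}_{original_filename}"
    path = session_dir / stored_name
    path.write_text(json.dumps(payload))
    return {
        "filename": stored_name,
        "original_filename": original_filename,
        "uploaded_at": uploaded_at,
        "legal_header": payload.get("legalHeader", ""),
        "payload": payload,
        "mtime": path.stat().st_mtime,  # used later to detect changes
    }
```

The returned dictionary matches the audit-trail fields described above: timestamp, original filename, and legal header travel with every stored entry.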
sequenceDiagram
participant Frontend
participant GeneralPy
participant MetadataPy
participant Storage
Frontend->>GeneralPy : POST /api/mds/metadata/upload
GeneralPy->>MetadataPy : expand_metadata_entry_payloads()
MetadataPy-->>GeneralPy : Processed entries
GeneralPy->>MetadataPy : save_session_metadata_item()
MetadataPy->>Storage : write_file()
Storage-->>MetadataPy : Success
MetadataPy-->>GeneralPy : Serialized item
GeneralPy-->>Frontend : JSON response
Diagram sources
- general.py
- metadata.py
- session_metadata_store.py
GitHub Client Integration
The GitHub client integration enables secure logging and storage of credential data and metadata through the GitHub API. Implemented in github_client.py, this service provides functions for uploading JSON data, retrieving stored files, and managing repository directories. The integration uses GitHub's REST API with bearer token authentication, ensuring secure communication with the credential log repository.
The system automatically enables logging in hosted environments like Render and Google Cloud Run, while allowing explicit control through the ENABLE_GITHUB_LOGGING environment variable. All GitHub operations include retry logic for handling transient network errors, with exponential backoff for server-side issues. The integration also implements content deduplication by calculating Git blob SHA1 hashes, preventing redundant storage of identical metadata files.
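The deduplication step relies on the fact that GitHub's contents API reports each file's Git blob SHA. Git hashes a blob as sha1 of a "blob <size>\0" header followed by the content, so the client can compute the hash locally and skip the upload when it already matches. A sketch, with needs_upload as a hypothetical helper name:

```python
import hashlib


def git_blob_sha(content: bytes) -> str:
    # Git computes blob IDs as sha1(b"blob <size>\0" + content); comparing
    # this against the SHA from the contents API detects unchanged files.
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()


def needs_upload(content: bytes, remote_shas: set) -> bool:
    """Skip the PUT request when an identical blob already exists."""
    return git_blob_sha(content) not in remote_shas
```

Because the hash is computed before any network call, duplicate uploads cost nothing beyond the initial directory listing.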
sequenceDiagram
participant Server
participant GitHubClient
participant GitHubAPI
Server->>GitHubClient : github_upload_file()
GitHubClient->>GitHubClient : git_blob_sha()
GitHubClient->>GitHubAPI : GET /repos/{owner}/{repo}/contents/{path}
GitHubAPI-->>GitHubClient : File list
GitHubClient->>GitHubClient : Check for duplicates
GitHubClient->>GitHubAPI : PUT /repos/{owner}/{repo}/contents/{path}
GitHubAPI-->>GitHubClient : Response
GitHubClient-->>Server : Success/Failure
Diagram sources
- github_client.py
- metadata.py
JSON Response Structure
The shared utilities return standardized JSON responses with consistent structure across endpoints. Successful responses typically include the requested data in a top-level property, while error responses contain an "error" field with descriptive messages. The health check endpoint returns plain text "OK" rather than JSON, following conventional health check patterns.
For metadata operations, responses include serialized metadata items with properties such as filename, uploaded timestamp, original filename, and parsed metadata payload. The system uses Flask's jsonify function to ensure proper JSON encoding and content-type headers. Error responses include appropriate HTTP status codes (400 for client errors, 500 for server errors) with human-readable error messages that assist in debugging without exposing sensitive system information.
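A sketch of the response pattern described above, assuming Flask; the /api/mds/metadata/list route and the load_session_items helper are hypothetical names chosen for illustration:

```python
from flask import Flask, jsonify

app = Flask(__name__)


def load_session_items():
    # Stand-in for the real storage lookup (hypothetical).
    return {"meta.json": {"uploaded_at": "2024-01-01T00:00:00Z"}}


@app.route("/api/mds/metadata/list")
def list_metadata():
    try:
        # Success: requested data under a top-level property.
        return jsonify({"items": load_session_items()})
    except OSError:
        # Error: human-readable message under "error", with an appropriate
        # status code; no stack traces or file paths leak to the client.
        return jsonify({"error": "Could not read session metadata"}), 500
```

jsonify handles both the encoding and the application/json content-type header, so every endpoint that uses it stays consistent automatically.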
erDiagram
RESPONSE ||--o{ METADATA_ITEM : contains
RESPONSE {
object items
array errors
string error
boolean deleted
string status
}
METADATA_ITEM {
string filename
string original_filename
string uploaded_at
object payload
string legal_header
float mtime
}
Diagram sources
- general.py
- metadata.py
Caching Strategies
The platform implements multi-layer caching strategies to optimize performance and reduce redundant processing. The metadata system uses in-memory caching for the base FIDO MDS metadata snapshot, loading it once per server process during startup. This prevents repeated file I/O operations for the frequently accessed metadata. The caching state is tracked using a global dictionary with timestamps and completion markers to ensure fresh data is loaded daily.
Session metadata is cached at the storage layer, with file modification times tracked to detect changes. The system also implements cookie-based session management, reducing the need for repeated session identifier generation. For cloud deployments, the integration with Google Cloud Storage provides additional caching benefits through GCS's built-in storage hierarchy and edge caching capabilities.
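The in-memory cache with a daily refresh marker can be sketched as follows; the global-dictionary shape follows the description above, while the function names are illustrative:

```python
import time

# Global cache state: the snapshot plus a marker recording which day loaded it.
_CACHE = {"data": None, "loaded_day": None}


def _load_metadata_snapshot():
    # Stand-in for the real file/GCS read performed at most once per day
    # (hypothetical).
    return {"entries": [], "loaded": True}


def get_metadata_snapshot():
    today = time.strftime("%Y-%m-%d", time.gmtime())
    if _CACHE["data"] is None or _CACHE["loaded_day"] != today:
        _CACHE["data"] = _load_metadata_snapshot()
        _CACHE["loaded_day"] = today  # completion marker for the daily refresh
    return _CACHE["data"]
```

Every request after the first within the same day returns the same in-memory object, so the expensive load runs at most once per process per day.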
flowchart TD
A[Request] --> B{Metadata Cached?}
B --> |Yes| C[Return Cached Data]
B --> |No| D[Load from Storage]
D --> E[Parse Metadata]
E --> F[Store in Memory Cache]
F --> G[Return Data]
H[Daily Timer] --> I{New Day?}
I --> |Yes| J[Refresh Cache]
I --> |No| K[Continue Using Cache]
Diagram sources
- general.py
- metadata.py
Common Deployment Issues
Several common deployment issues can affect the shared utilities. Health checks may fail when a load balancer is configured to expect a JSON body but receives plain text; the check should instead accept any 200 response regardless of content. CORS (Cross-Origin Resource Sharing) issues can arise with the client-data endpoints when frontend applications are served from a different domain than the API, requiring proper CORS header configuration.
Version skew between frontend and backend can cause compatibility issues, particularly when new metadata fields are introduced or feature flags are modified. This is mitigated by the system's use of flexible JSON parsing and default values for missing fields. Authentication token misconfiguration can prevent GitHub logging from functioning, as the GITHUB_TOKEN environment variable must be properly set in production environments.
Other issues include storage permission problems when the server cannot write to the session metadata directory, certificate validation errors when the MDS metadata trust root is not properly configured, and rate limiting when GitHub API requests exceed quotas. These are addressed through comprehensive error handling, detailed logging, and graceful degradation when external services are unavailable.
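The retry handling for transient GitHub failures and rate limits can be sketched as a small wrapper; TransientError here is a hypothetical stand-in for whatever exception the real client raises on 5xx or network errors:

```python
import random
import time


class TransientError(Exception):
    """Stands in for a 5xx or network-level failure (hypothetical)."""


def with_retries(fn, attempts=4, base_delay=0.5):
    # Retry transient failures with exponential backoff plus jitter,
    # re-raising once the attempt budget is exhausted.
    for attempt in range(attempts):
        try:
            return fn()
        except TransientError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```

Jitter spreads out retries from concurrent workers, which matters when a shared quota (such as a GitHub API rate limit) caused the failures in the first place.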
Section sources
- general.py
- config.py
- github_client.py
Performance Optimization
Performance optimization for the shared utilities focuses on reducing latency, minimizing resource usage, and improving scalability. The health check endpoint is optimized for minimal processing, returning immediately without database queries or external service calls. For metadata endpoints, response compression is recommended to reduce bandwidth usage, particularly for large metadata payloads.
CDN (Content Delivery Network) caching is highly effective for static utility responses, especially the verified metadata endpoint. By configuring appropriate cache headers, CDNs can serve these responses from edge locations, significantly reducing latency for geographically distributed clients. The system's stateless design allows for easy horizontal scaling, with multiple server instances handling requests independently.
Additional optimizations include connection pooling for database operations, efficient JSON serialization using built-in Python libraries, and lazy loading of metadata only when required. The use of lightweight data structures and avoidance of unnecessary object creation helps minimize memory usage. For high-traffic scenarios, implementing Redis or Memcached for session metadata storage could further improve performance beyond the current file-based and GCS-based storage options.
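The recommended response compression for large metadata payloads can be sketched as follows; the function name and the 1 KiB threshold are illustrative choices, not values from the codebase:

```python
import gzip
import json


def encode_metadata_response(data: dict, threshold: int = 1024):
    """Serialize a payload compactly, gzip-compressing it once it is large
    enough for compression to pay off; returns (body, content_encoding)."""
    raw = json.dumps(data, separators=(",", ":")).encode()
    if len(raw) > threshold:
        # Large metadata payloads compress well; small ones are not worth
        # the CPU cost, so they pass through unchanged.
        return gzip.compress(raw), "gzip"
    return raw, "identity"
```

In practice a reverse proxy or CDN usually handles this transparently, but pre-compressing at the application layer can help when responses are cached and served repeatedly.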
Section sources
- general.py
- metadata.py
Conclusion
The shared utilities and general endpoints in the post-quantum WebAuthn platform provide essential services that support the overall functionality and reliability of the system. Through well-designed health checks, feature detection, and metadata management, these components enable robust client-server interactions while maintaining security and performance standards. The modular architecture allows for easy maintenance and extension, with clear separation of concerns between different utility functions.
The implementation demonstrates best practices in web service design, including proper error handling, standardized response formats, and efficient resource management. By leveraging modern Python features and Flask's capabilities, the system achieves a balance between functionality and simplicity. The documentation of common deployment issues and performance optimization strategies provides valuable guidance for operators and developers working with the platform in production environments.