[User/Client App]
     |
     | (1) Auth → Keycloak (OAuth2)
     ↓
[Access Token]
     |
     | (2) API Call + Token → 3scale Gateway
     ↓
[3scale OIDC Plugin] --- verifies token with ---> Keycloak
     |
     | (3) Forward Authenticated Request
     ↓
[OpenShift AI Model Endpoint (Route)]

Step-by-Step Configuration

  1. Set up Keycloak as an Identity Provider

    • Deploy Keycloak (if not already available):

    • Use the OpenShift OperatorHub or an external instance.

    • Create a Realm:

Name it (e.g., ai-realm).

The admin user's password is stored in a secret named credential-<keycloak-name> (e.g., credential-keycloak-maas). In this setup the realm was created as maas-keycloakrealm.

  • Create a Client (for your API consumer app):

  • Client ID: ai-client

  • Access Type: confidential

  • Valid Redirect URIs: * (for testing) or your app URL

  • Enable Standard Flow and Client Credentials

  • Save the Client Secret

  • Create Users/Roles (optional):

Add users and assign roles or groups to manage access policies.
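
To retrieve the Keycloak admin password mentioned above, the operator-created secret can be decoded. A sketch, assuming the secret is named credential-keycloak-maas and stores the password under the ADMIN_PASSWORD key (both are assumptions; check your namespace):

```shell
# Assumed secret name follows credential-<keycloak-name>;
# the password key is assumed to be ADMIN_PASSWORD.
oc get secret credential-keycloak-maas \
  -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d
```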

  2. Secure the AI Model with an OpenShift Route & Token Auth

Deploy the model via RHODS (Red Hat OpenShift Data Science, now OpenShift AI):

  • Either use Jupyter + ModelMesh (recommended) or a custom Flask/FastAPI service.

Expose via Route:

  • Create a Route in OpenShift to expose the model endpoint.

  • Ensure it’s protected by an OAuth2 token (Keycloak) or make it internal.

  • Test endpoint locally with curl or Postman using a Bearer token.
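
A quick smoke test of the Route before putting 3scale in front of it might look like this (the route host, token, and input payload are placeholders):

```shell
# Hypothetical route host; find the real one with:
#   oc get route <model-route> -o jsonpath='{.spec.host}'
curl -sk -X POST "https://<model-route-host>/predict" \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{"input": [1, 2, 3]}'
```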

  3. Configure 3scale to Manage the AI Model API

Set up the 3scale Admin Portal, and create a Backend that points at the model's Route URL.

Create the Product:

  • Link the backend.

  • Set up application plans and methods (e.g., /predict).

Set up OpenID Connect Auth (OIDC):

  • Go to the product's [Integration > Settings] page.

  • Enable OpenID Connect.

Use Keycloak Realm Info:

  • Set the OpenID Connect Issuer to the realm URL with the client credentials embedded, typically https://ai-client:<client-secret>@<keycloak-host>/realms/ai-realm (older Keycloak versions include an /auth path prefix).

Define Mapping Rules:

  • Match API paths (e.g., /predict) to usage metrics.

Update API Gateway Configuration:

  • Promote the staging config to production.
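
Promotion can be done in the Admin Portal UI, or scripted against the 3scale Account Management API. A sketch, with the admin host, service ID, config version, and admin access token all as placeholders:

```shell
# Promote the latest sandbox (staging) proxy config to production.
# <3scale-admin-host>, <service-id>, <version>, and <admin-token>
# must be filled in from your tenant.
curl -X POST \
  "https://<3scale-admin-host>/admin/api/services/<service-id>/proxy/configs/sandbox/<version>/promote.json" \
  -d 'to=production' \
  -d 'access_token=<admin-token>'
```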

  4. Test the Workflow

  • Get an access token from Keycloak:

curl -X POST 'https://<keycloak-host>/realms/ai-realm/protocol/openid-connect/token' \
  -d 'grant_type=client_credentials' \
  -d 'client_id=ai-client' \
  -d 'client_secret=<your-secret>' \
  | jq
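
The token endpoint returns JSON, so the access_token field can be captured into a shell variable for the next call. A canned sample response stands in for the live curl output here:

```shell
# Pull access_token out of a token-endpoint response with jq.
# RESPONSE is a sample payload; in practice use RESPONSE=$(curl ... ).
RESPONSE='{"access_token":"abc123","expires_in":300,"token_type":"Bearer"}'
TOKEN=$(echo "$RESPONSE" | jq -r '.access_token')
echo "$TOKEN"
```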

  • Call the model via 3scale gateway:

curl -X POST 'https://<3scale-api-gateway-url>/predict' \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{"input": [your_input_data]}'

  • 3scale validates the token via OIDC, applies rate limits, and proxies to the model.

Optional Enhancements

  • Use Red Hat SSO (Keycloak) as a managed offering if available.

  • Configure Keycloak client scopes for more granular claims.

  • Apply 3scale rate limits, alerts, and API analytics.

  • Enable logging/monitoring via OpenShift Logging/Grafana/Prometheus.

3scale and RHCL Current Capabilities (Noel)

Checklist of Capabilities (Kenny)

Pattern Assumptions (Kenny)

Potential Topics to Cover in the Lab

API Gateway

  • Apache APISIX

  • 3scale

  • LiteLLM

Authorization

  • Keycloak

  • Customer provided