add functions for creating ray with oauth proxy in front of the dashboard #298
Conversation
This isn't quite finished yet. I'm having TLS-related issues between the Ray API and the OAuth proxy.
Blocked due to required changes upstream. (Edit: the submission client has the necessary changes; I was using an older version locally.)
This URL is used not only for dashboard access but also for the APIs - currently Jobs, and in the future also Serve. The OAuth proxy requires manual login, which will break these APIs.
@blublinsky I've set up OAuth so that it supports both manual GUI login and authentication using an auth bearer token. Can you PTAL?
How do you get the auth bearer token to use?
It is the same token used for interacting with the Kubernetes API with the kubectl client. I grab it here based on the currently logged-in user or the in-cluster config. I believe it is the same token as the one you generate from the
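For context, here is a minimal sketch of pulling that bearer token from the active kubeconfig context (or the in-cluster service account) with the kubernetes Python client. This illustrates the approach described above; it is not the SDK's actual code, and the fallback behaviour is an assumption.

```python
# Sketch: read the current user's bearer token from the active kubeconfig
# context, falling back to the in-cluster service account when running in a pod.
from kubernetes import client, config

def get_bearer_token() -> str:
    try:
        config.load_kube_config()       # token of the currently logged-in user
    except config.ConfigException:
        config.load_incluster_config()  # service account token inside a pod
    conf = client.Configuration.get_default_copy()
    # For token-based users the client stores "Bearer <token>" under "authorization".
    return conf.api_key.get("authorization", "")
```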
So basically the only thing that you are validating is that the user is logged in. Not very secure, really.
The OAuth Proxy supports checking for authorization based on RBAC. Here I check to make sure the authenticated user has authorization to get pods in the given namespace.
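To make that RBAC check concrete, here is a hedged sketch of an oauth-proxy sidecar on the Ray head pod, where --openshift-sar makes the proxy run a SubjectAccessReview so only users allowed to get pods in the namespace are let through. The image, ports, and secret/service-account names are illustrative assumptions, not the PR's exact values.

```python
# Sketch: oauth-proxy sidecar for the Ray head pod. --openshift-sar restricts
# access to users who can "get pods" in the cluster's namespace. Names are illustrative.
from kubernetes import client

def oauth_proxy_sidecar(cluster_name: str, namespace: str) -> client.V1Container:
    sar = f'{{"namespace":"{namespace}","resource":"pods","verb":"get"}}'
    return client.V1Container(
        name="oauth-proxy",
        image="registry.redhat.io/openshift4/ose-oauth-proxy:latest",  # assumed image
        ports=[client.V1ContainerPort(container_port=8443, name="oauth-proxy")],
        args=[
            "--https-address=:8443",
            "--provider=openshift",
            f"--openshift-service-account={cluster_name}-oauth-proxy",
            "--upstream=http://localhost:8265",     # Ray dashboard
            "--tls-cert=/etc/tls/private/tls.crt",  # serving cert from the Service annotation
            "--tls-key=/etc/tls/private/tls.key",
            "--cookie-secret-file=/etc/proxy/secrets/session_secret",
            f"--openshift-sar={sar}",
        ],
        volume_mounts=[
            client.V1VolumeMount(name="proxy-tls-secret",
                                 mount_path="/etc/tls/private", read_only=True),
            client.V1VolumeMount(name="proxy-cookie-secret",
                                 mount_path="/etc/proxy/secrets", read_only=True),
        ],
    )
```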
So it is basically a single-cluster solution.
I'm unsure what you are expecting. How would you propose we authenticate for a multi-cluster design here? AFAIK, the
I do not think authentication/authorization should be linked to a given cluster. It should rather be for a given user, who can be logged in to any of several clusters, but that's just my opinion.
I was running into issues with the CA cert generated from the service annotation. Putting this here so I remember to look into the ca-injector.
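For reference, the serving-cert flow being debugged works roughly as in this sketch: annotating the Service asks OpenShift's service-CA operator to write a signed TLS cert/key into the named Secret, which the proxy then mounts. The object names and selector labels here are assumptions for illustration, not the PR's exact values.

```python
# Sketch: Service annotated so the service-CA operator generates tls.crt/tls.key
# into the named Secret, which the oauth-proxy sidecar mounts as its TLS material.
from kubernetes import client

def oauth_proxy_service(cluster_name: str, namespace: str) -> client.V1Service:
    return client.V1Service(
        metadata=client.V1ObjectMeta(
            name=f"{cluster_name}-oauth",
            namespace=namespace,
            annotations={
                # service-CA operator writes the serving cert into this Secret
                "service.beta.openshift.io/serving-cert-secret-name": "proxy-tls-secret",
            },
        ),
        spec=client.V1ServiceSpec(
            selector={"ray.io/cluster": cluster_name, "ray.io/node-type": "head"},
            ports=[client.V1ServicePort(name="oauth-proxy", port=443, target_port=8443)],
        ),
    )
```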
Where do you propose authentication and authorization should occur in this case?
Tested out starting a Ray cluster from a local machine and submitting jobs. Seems to work as expected when logging in via SDK authentication or
lgtm @Maxusmusti could you please review as well.
Michael LGTM'd the PR already.
@KPostOffice could you create a follow-on task to switch this over to Istio (once we have Istio available OOTB in ODH), or at least make it configurable somehow. Alternatively, the follow-on winds up being "Remove this, and update the KubeRay operator to allow auth enablement".
Verified working on an OSD cluster, awesome work Kevin.
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: Bobbins228
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Issue link
closes: #174
What changes have been made
I added functions which create the necessary objects and update the AppWrapper so that the SDK can create a Ray cluster with an OAuth proxy in front of the dashboard.
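As one illustration of the kind of object these functions create (not the PR's exact code), a ServiceAccount annotated with an OAuth redirect reference lets the proxy's OpenShift login flow redirect back through the dashboard Route. The names below are hypothetical.

```python
# Sketch: ServiceAccount whose OAuth redirect annotation points at the dashboard
# Route, so the oauth-proxy login flow can redirect back to the Ray cluster.
import json
from kubernetes import client

def oauth_service_account(cluster_name: str, namespace: str) -> client.V1ServiceAccount:
    redirect = {
        "kind": "OAuthRedirectReference",
        "apiVersion": "v1",
        "reference": {"kind": "Route", "name": f"ray-dashboard-{cluster_name}"},
    }
    return client.V1ServiceAccount(
        metadata=client.V1ObjectMeta(
            name=f"{cluster_name}-oauth-proxy",
            namespace=namespace,
            annotations={
                "serviceaccounts.openshift.io/oauth-redirectreference.first": json.dumps(redirect),
            },
        )
    )
```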
Verification steps
Go through the notebook, but set openshift_oauth to true in the config.
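A minimal sketch of that verification step, assuming the notebook's ClusterConfiguration exposes the openshift_oauth flag described in this PR; the import path and the other parameter values are illustrative, not prescriptive.

```python
# Sketch: bring up a Ray cluster with the OAuth proxy enabled, then tear it down.
# Parameter values other than openshift_oauth are illustrative notebook defaults.
from codeflare_sdk.cluster.cluster import Cluster, ClusterConfiguration

cluster = Cluster(ClusterConfiguration(
    name="raytest",
    namespace="default",
    openshift_oauth=True,   # put the OAuth proxy in front of the Ray dashboard
))
cluster.up()
cluster.status()
# ... open the dashboard / submit a job, then:
cluster.down()
```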