watch api - Websocket isn't closing the connection when client requests it #10449

Closed
jwendell opened this issue Aug 16, 2016 · 7 comments

@jwendell
Member

Version

OpenShift Master: v3.3.0.21
Kubernetes Master: v1.3.0+507d3a7

I'm watching pods over a websocket, using the URL pattern https://192.168.1.36:8443/api/v1/namespaces/myproject/pods/?resourceVersion=0&watch=true

It works fine until I (the client) send a close request over the websocket (a close frame, in the websocket protocol sense). The server (OSE 3.3) seems to ignore the close request and keeps the connection alive, sending data whenever events happen...

This was working fine on OSE 3.2.

@liggitt
Contributor

liggitt commented Aug 16, 2016

How are you opening/closing the websocket and verifying there is still data being sent?

@jwendell
Member Author

I'm using arquillian-cube, which uses kubernetes-client, which in turn uses the okhttp library for the low-level websocket handling.

I have debugged the connection with tcpdump, and it's clear that on OSE 3.2, after the client sends the close request, the server replies with its own close message and the underlying socket is then closed. On OSE 3.3 the server doesn't reply to the close request; it's simply ignored.

Right now I'm trying to write a simple Python client that reproduces this issue against a faulty OSE...

@liggitt liggitt added the kind/bug, priority/P1, component/restapi, and component/kubernetes labels Aug 16, 2016
@liggitt liggitt self-assigned this Aug 16, 2016
@jwendell
Member Author

Here's a simple way to reproduce it in Python:

  • Clone this websocket client
  • Copy this sample to the examples dir
  • Run it: python ./ws.py URL TOKEN
  • Example: python ws.py 'wss://localhost:4433/api/v1/namespaces/myproject/pods/?resourceVersion=0&watch=true' $(oc whoami -t)

Run it twice, once against 3.2 and once against 3.3, and you'll see the bug.
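
For reference, here is a minimal sketch of what such a reproduction script could look like, using the Python websocket-client library. It is a hypothetical illustration, not the actual ws.py sample linked above, and the bearer-token header and disabled certificate check are assumptions for a typical dev cluster:

```python
# Hypothetical repro sketch (not the original ws.py): open the pod watch,
# read one event, send a websocket close frame, then check whether the
# server completes the close handshake.
import ssl
import sys

import websocket
from websocket import ABNF

url, token = sys.argv[1], sys.argv[2]

ws = websocket.create_connection(
    url,
    timeout=10,
    header=["Authorization: Bearer %s" % token],
    sslopt={"cert_reqs": ssl.CERT_NONE},  # self-signed cert on a dev cluster
)

print("first watch event:", ws.recv()[:120])

ws.send_close()  # client side of the close handshake
try:
    while True:
        opcode, _ = ws.recv_data(control_frame=True)
        if opcode == ABNF.OPCODE_CLOSE:
            print("server replied with a close frame (3.2 behaviour)")
            break
        print("still receiving frames after close (the 3.3 bug)")
except Exception as exc:
    print("no close reply before the connection or timeout ended:", exc)
finally:
    ws.shutdown()  # drop the TCP connection regardless
```

Against 3.2 such a script should see the server's close reply almost immediately; against 3.3 it keeps receiving watch frames (or times out) instead.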

@liggitt
Contributor

liggitt commented Aug 17, 2016

Thanks for the report and the reproduction steps. I tracked down the issue and opened kubernetes/kubernetes#30735 upstream. The fix is in kubernetes/kubernetes#30736 upstream and will be in #10475 for Origin.

Currently, the server only closes the watch when it hits an error sending a watch event. Is it normal for a client to send a close request and then keep receiving, keeping the connection open?
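
As an aside, here is a minimal sketch of the general pattern the fix moves toward, written in Python purely for illustration (the actual change is in the Go apiserver code referenced above): notice the client's close frame as it arrives, instead of only discovering a closed client when the next send fails.

```python
# Hypothetical illustration only -- not the Kubernetes/Origin code, which is Go.
# `ws` is assumed to be a server-side connection object from the asyncio
# `websockets` library (it provides wait_closed(), send(), and close()),
# and `events` is assumed to be an async iterable of watch events.
import asyncio
import json


async def watch_handler(ws, events):
    """Stream watch events to `ws` until the client closes the connection."""
    closed = asyncio.ensure_future(ws.wait_closed())  # resolves on client close
    async for event in events:
        if closed.done():
            break  # the client sent a close frame; end the watch right away
        await ws.send(json.dumps(event))
    await ws.close()
```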

@jwendell
Member Author

Thanks for the fix!

I wouldn't say it's normal. Quoting https://tools.ietf.org/html/rfc6455#section-7:

> In abnormal cases (such as not having received a TCP Close from the server after a reasonable amount of time) a client MAY initiate the TCP Close. As such, when a server is instructed to Close the WebSocket Connection it SHOULD initiate a TCP Close immediately, and when a client is instructed to do the same, it SHOULD wait for a TCP Close from the server.

That said, I have opened a PR against Java's okhttp library to close the connection if the server doesn't reply to the close request: square/okhttp#2789
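
For illustration, a minimal sketch of that client-side fallback, written here with the Python websocket-client library rather than okhttp (the function name and timeout are assumptions): send the close frame, give the server a short window to reply, and then drop the TCP connection ourselves, as the RFC allows.

```python
# Hypothetical sketch of the RFC 6455 fallback: the client initiates the TCP
# close itself when the server never answers the websocket close handshake.
import socket

import websocket  # the Python websocket-client library
from websocket import ABNF


def close_with_fallback(ws, timeout=5):
    """`ws` is an already-connected websocket.WebSocket instance."""
    ws.send_close()                 # start the close handshake
    ws.sock.settimeout(timeout)
    try:
        while True:
            opcode, _ = ws.recv_data(control_frame=True)
            if opcode == ABNF.OPCODE_CLOSE:
                break               # server completed the handshake
    except (socket.timeout, websocket.WebSocketException, OSError):
        pass                        # no reply within the window
    finally:
        ws.shutdown()               # force the TCP close ourselves
```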

@jwforres
Member

@liggitt looks like this issue isn't fixed for logs

@liggitt
Contributor

liggitt commented Aug 18, 2016

Sigh. Will sweep for calls to IgnoreReceives
