FAIL: TestWatch (flake) #12989

Closed
danwinship opened this issue Feb 16, 2017 · 6 comments
Labels
component/kubernetes, kind/test-flake, lifecycle/rotten, priority/P2

Comments

@danwinship
Contributor

In https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_future/258/:

--- FAIL: TestWatch (18.62s)
07:06:25 	resttest.go:1242: unexpected error: etcdserver: request timed out
07:06:25 FAIL
07:06:25 In suite "github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/registry/core/secret/etcd", test case "TestWatch" failed:
07:06:25 === RUN   TestWatch

a little before that:

07:06:25 2017-02-16 10:46:40.079463 W | wal: sync duration of 14.367946558s, expected less than 1s
07:06:25 2017-02-16 10:46:40.185410 W | etcdserver: apply entries took too long [104.924275ms for 1 entries]
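
That WAL sync stall is presumably what caused the timeout: if fsync on the CI node takes ~14s, proposals cannot commit before the server's request deadline and callers see "etcdserver: request timed out". Roughly, the failure mode looks like this from a client's point of view (a sketch only, using a hypothetical endpoint and the etcd 3.1-era clientv3 package, not the test's actual code):

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	// Hypothetical endpoint; the real test launches its own embedded server.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 2 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// If the server's WAL fsync stalls (14s here vs. the expected <1s), the
	// proposal does not commit in time and the write fails with
	// "etcdserver: request timed out", the error resttest.go reports above.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if _, err := cli.Put(ctx, "/registry/secrets/default/foo", "bar"); err != nil {
		fmt.Println("unexpected error:", err)
	}
}
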
@derekwaynecarr
Member

I am lowering the severity, as this looks like it was an infrastructure hiccup. If it becomes prevalent, we can raise the priority again, but it has not been prevalent upstream.

@enj
Contributor

enj commented Mar 20, 2017

Saw something similar in #13466

Regression

github.com/openshift/origin/pkg/image/registry/image/etcd.TestWatch (from github.com_openshift_origin_pkg_image_registry_image_etcd)
Failing for the past 1 build (Since Failed#484 )
Took 32 sec.
Stacktrace

=== RUN   TestWatch
2017-03-20 16:40:36.597288 I | integration: launching 6003822963750531531 (unix://localhost:60038229637505315310)
2017-03-20 16:40:36.598511 I | etcdserver: name = 6003822963750531531
2017-03-20 16:40:36.598572 I | etcdserver: data dir = /openshifttmp/etcd404003075
2017-03-20 16:40:36.598654 I | etcdserver: member dir = /openshifttmp/etcd404003075/member
2017-03-20 16:40:36.598702 I | etcdserver: heartbeat = 10ms
2017-03-20 16:40:36.598747 I | etcdserver: election = 100ms
2017-03-20 16:40:36.598793 I | etcdserver: snapshot count = 0
2017-03-20 16:40:36.598848 I | etcdserver: advertise client URLs = unix://127.0.0.1:2101431918
2017-03-20 16:40:36.598902 I | etcdserver: initial advertise peer URLs = unix://127.0.0.1:2101331918
2017-03-20 16:40:36.598975 I | etcdserver: initial cluster = 6003822963750531531=unix://127.0.0.1:2101331918
2017-03-20 16:40:36.676055 I | etcdserver: starting member c25570b83922bfe in cluster fd61cacd056466a0
2017-03-20 16:40:36.676187 I | raft: c25570b83922bfe became follower at term 0
2017-03-20 16:40:36.676290 I | raft: newRaft c25570b83922bfe [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2017-03-20 16:40:36.676355 I | raft: c25570b83922bfe became follower at term 1
2017-03-20 16:40:36.773706 I | etcdserver: set snapshot count to default 10000
2017-03-20 16:40:36.773793 I | etcdserver: starting server... [version: 3.1.0, cluster version: to_be_decided]
2017-03-20 16:40:36.778875 I | integration: launched 6003822963750531531 (unix://localhost:60038229637505315310)
2017-03-20 16:40:36.780791 I | etcdserver/membership: added member c25570b83922bfe [unix://127.0.0.1:2101331918] to cluster fd61cacd056466a0
2017-03-20 16:40:36.830736 I | raft: c25570b83922bfe is starting a new election at term 1
2017-03-20 16:40:36.830843 I | raft: c25570b83922bfe became candidate at term 2
2017-03-20 16:40:36.830913 I | raft: c25570b83922bfe received MsgVoteResp from c25570b83922bfe at term 2
2017-03-20 16:40:36.830998 I | raft: c25570b83922bfe became leader at term 2
2017-03-20 16:40:36.831063 I | raft: raft.node: c25570b83922bfe elected leader c25570b83922bfe at term 2
2017-03-20 16:40:36.831830 I | etcdserver: setting up the initial cluster version to 3.1
2017-03-20 16:40:36.832696 I | etcdserver: published {Name:6003822963750531531 ClientURLs:[unix://127.0.0.1:2101431918]} to cluster fd61cacd056466a0
2017-03-20 16:40:36.832891 N | etcdserver/membership: set the initial cluster version to 3.1
2017-03-20 16:41:09.103350 I | integration: terminating 6003822963750531531 (unix://localhost:60038229637505315310)
2017-03-20 16:41:09.104702 I | etcdserver/api/v3rpc: transport: http2Server.HandleStreams failed to read frame: read unix localhost:6003822963750531531->@: use of closed network connection
2017-03-20 16:41:09.104846 I | etcdserver/api/v3rpc: transport: http2Server.HandleStreams failed to read frame: read unix localhost:6003822963750531531->@: use of closed network connection
2017-03-20 16:41:09.132943 I | integration: terminated 6003822963750531531 (unix://localhost:60038229637505315310)
--- FAIL: TestWatch (32.54s)
	resttest.go:1256: unexpected timeout from result channel
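
This one is a slightly different symptom: instead of an etcd error bubbling up, the test gives up waiting on its result channel. The pattern is roughly the following (a sketch only; the helper name and timeout are made up and are not the actual resttest.go code):

package main

import (
	"fmt"
	"time"
)

// watchEvent stands in for whatever event type the watch under test emits.
type watchEvent struct{ object string }

// expectEvent fails when no event arrives before the deadline, producing the
// "unexpected timeout from result channel" style of flake seen above.
func expectEvent(ch <-chan watchEvent, timeout time.Duration) error {
	select {
	case ev := <-ch:
		fmt.Printf("got event for %s\n", ev.object)
		return nil
	case <-time.After(timeout):
		return fmt.Errorf("unexpected timeout from result channel")
	}
}

func main() {
	ch := make(chan watchEvent, 1)
	go func() {
		// Simulate a slow etcd commit on an overloaded CI node: the event
		// arrives only after the test's deadline has already passed.
		time.Sleep(2 * time.Second)
		ch <- watchEvent{object: "images/foo"}
	}()
	if err := expectEvent(ch, 1*time.Second); err != nil {
		fmt.Println("FAIL:", err)
	}
}
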

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot added the lifecycle/stale label on Feb 18, 2018
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 20, 2018
@openshift-bot
Contributor

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close
