Sometimes the tuple of the captured http2 packet is 0 #739
The issue of a connection tuple being logged as 0.0.0.0:0-0.0.0.0:0 suggests that the connection is being torn down before its tuple can be read. To address this, you might want to ensure that the connection data is accessed and logged before the DestroyConn event.
Hello! I continued testing: with the same test environment and methods, the HTTP/1.1 protocol does not have this problem, while the HTTP/2 protocol occasionally does.
Please provide steps to reproduce.
My steps are as follows:
1. Run ecapture.
2. curl another machine's nginx server and its files.
3. The result of ecapture is as follows.
I may not be doing anything special; in my tests this problem only comes up occasionally, but it also comes up fairly easily.
The same error occurs on Ubuntu 22.04 (Linux cd-ubuntu 5.15.0-131-generic). Running with -d for more logging, it sometimes looks like DestroyConn occurs before SSLDataEvent. @dosu For HTTPS access, especially with HTTP/2, is it possible for the kernel-side SEC("kprobe/tcp_v4_destroy_sock") to fire before the userspace SEC("uretprobe/SSL_read") or SEC("uretprobe/SSL_write")? I sometimes see the tuple lookup fail for HTTP/2, while plain HTTP is fine on the same test environment.
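For illustration, here is a minimal, hypothetical eBPF sketch of that race; it is not eCapture's actual code, and the map layout, key derivation, struct names, and probe names are invented. A kprobe on tcp_v4_destroy_sock deletes the connection entry; if it runs before the uretprobe on SSL_read performs its lookup, the lookup misses and the event falls back to an all-zero tuple.

```c
// Hypothetical libbpf-style sketch, NOT eCapture's real programs.
// Both probes share a hash map keyed by the current pid/tgid (a simplified,
// made-up key). If the kprobe fires first, the uretprobe's lookup misses.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct tuple_t {
    __u32 saddr, daddr;
    __u16 sport, dport;
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 10240);
    __type(key, __u64);              /* simplified key: pid_tgid */
    __type(value, struct tuple_t);
} conn_tuples SEC(".maps");

SEC("kprobe/tcp_v4_destroy_sock")
int probe_destroy_sock(struct pt_regs *ctx)
{
    __u64 key = bpf_get_current_pid_tgid();
    /* Socket teardown: the tuple entry disappears here. */
    bpf_map_delete_elem(&conn_tuples, &key);
    return 0;
}

SEC("uretprobe/SSL_read")
int probe_ssl_read_ret(struct pt_regs *ctx)
{
    __u64 key = bpf_get_current_pid_tgid();
    struct tuple_t out = {};             /* defaults to 0.0.0.0:0-0.0.0.0:0 */
    struct tuple_t *t = bpf_map_lookup_elem(&conn_tuples, &key);

    if (t)
        out = *t;                        /* connection still tracked */
    /* else: tcp_v4_destroy_sock already deleted the entry, so the data
     * event is emitted with an all-zero tuple. With HTTP/2 the single
     * long-lived connection makes this window easier to hit after the
     * last request. */
    bpf_printk("sport=%u dport=%u", out.sport, out.dport);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

Whether this exact window is what eCapture hits depends on its real map lifecycle, but it shows how a destroy probe that deletes shared state can starve a later data probe.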
The issue you're encountering, where HTTP/2 connections sometimes log a tuple as 0.0.0.0:0-0.0.0.0:0, suggests the connection is destroyed before its data is captured. For HTTP/2, the interaction between kernel-level probes (such as the kprobe on tcp_v4_destroy_sock) and userspace probes (the uretprobes on SSL_read and SSL_write) can result in the destroy event arriving before the data event is processed. To mitigate this, you could explore adjusting the sequence of events in your capture process to ensure that connection data is accessed and logged before the DestroyConn event.
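As a concrete illustration of that reordering, below is a hedged sketch under invented names (last_tuples, probe_tcp_sendmsg, the pid/tgid key); it is not eCapture's implementation. The idea is to snapshot the tuple into a longer-lived LRU map while the socket is still alive, so the SSL uretprobe can still report it even if tcp_v4_destroy_sock has already run.

```c
// Hypothetical mitigation sketch, NOT eCapture's implementation.
// Idea: copy the tuple while the socket is guaranteed to be alive and keep
// that snapshot in an LRU map that tcp_v4_destroy_sock does not clear, so
// the SSL uretprobe can still report it after the socket is destroyed.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct tuple_t {
    __u32 saddr, daddr;
    __u16 sport, dport;
};

/* Primary map, still cleared on socket destruction (as before). */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 10240);
    __type(key, __u64);
    __type(value, struct tuple_t);
} conn_tuples SEC(".maps");

/* Snapshot map: never deleted in tcp_v4_destroy_sock; stale entries are
 * evicted by the LRU policy instead of an explicit delete. */
struct {
    __uint(type, BPF_MAP_TYPE_LRU_HASH);
    __uint(max_entries, 10240);
    __type(key, __u64);
    __type(value, struct tuple_t);
} last_tuples SEC(".maps");

SEC("kprobe/tcp_sendmsg")
int probe_tcp_sendmsg(struct pt_regs *ctx)
{
    __u64 key = bpf_get_current_pid_tgid();     /* simplified key */
    struct tuple_t *t = bpf_map_lookup_elem(&conn_tuples, &key);

    if (t)  /* socket is alive here, so the copy happens before any destroy */
        bpf_map_update_elem(&last_tuples, &key, t, BPF_ANY);
    return 0;
}

SEC("uretprobe/SSL_write")
int probe_ssl_write_ret(struct pt_regs *ctx)
{
    __u64 key = bpf_get_current_pid_tgid();     /* simplified key */
    struct tuple_t *t = bpf_map_lookup_elem(&last_tuples, &key);

    if (t)
        bpf_printk("sport=%u dport=%u", t->sport, t->dport);
    else
        bpf_printk("tuple snapshot unavailable");
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

An alternative with a similar effect would be to defer the deletion done in the destroy probe (for example, letting userspace clean up only after pending data events have been flushed) rather than duplicating the tuple into a second map.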
@dosu Regarding "HTTP/2, where the connection handling might differ from HTTP/1.1": can you give a specific explanation of the differences and relevant literature references?
@huaixia777 @chilli13 Please try #741.
I wasn't able to find specific details within the eCapture repository about how HTTP/2 connection handling differs from HTTP/1.1. Generally speaking, however, HTTP/2 introduces multiplexing, which allows multiple streams to be sent over a single TCP connection. This can lead to differences in how connections are managed and destroyed compared to HTTP/1.1, which uses a simpler request-response model.

In eCapture, handling HTTP/2 might involve more complex interactions between kernel and userspace probes because of the multiplexed nature of HTTP/2 connections. This complexity can affect the timing of events such as connection destruction and data capture, potentially leading to issues like the tuple being logged as 0.0.0.0:0-0.0.0.0:0.

For more detailed information, you might want to look into general resources on the HTTP/2 protocol and on eBPF-based network monitoring, as these can provide insight into the underlying mechanisms that affect eCapture's behavior with HTTP/2.
Hello!
When I use this tool to capture HTTP/2 packets, sometimes the tuple of the packets is 0, that is, 0.0.0.0:0-0.0.0.0:0.
I did not stop the tool at any point during the capture.
Here are the results of my run; this issue seems to arise quite easily.
"DestroyConn success fd=5 pid=396386 tuple=192.168.10.201:41370-192.168.10.41:4443" was printed before the tuple information was printed, maybe because the connection had already been destroyed when the tuple was fetched.
Looking forward to your reply!