Show multiple user PF's in the output #3
damiandsap wants to merge 2 commits into DispatchCode:main
Conversation
Force-pushed from 9e2dde9 to adb47a7
Force-pushed from dbd19c0 to 6c07d5c
sigsegv-monitor.bpf.c (outdated)
```c
bpf_map_delete_elem(&tgid_cr2, &tgid);
event->pf_count = 0;
#ifdef TRACE_PF_CR2
u32 tgid = task->tgid;
```
I'm not sure if we should use the process ID (tgid) or rather the thread ID (pid) as the key of the map... On the one hand, the map can be smaller. On the other hand, we'd need to record which thread generated the PF, and we might rotate through the ring buffer too quickly.
I'm also not sure if we'd need to use some form of locking if multiple threads can write into the ring buffer.
Yeah we'd need locking: https://docs.ebpf.io/linux/helper-function/bpf_map_lookup_elem/
I just don't know if per-thread would be sufficient, or if we'd need per-CPU.
I think *_PERCPU_HASH is sufficient.
In any case, atomic operations cannot be used in a tracing program.
```c
#define MAX_LBR_ENTRIES 32
#define MAX_USER_PF_ENTRIES 16
```
16 is fine if the ring buffer is for a single thread, but I fear that with dozens of threads there might be too many PFs, so any interesting one might be rotated out by the time we land in signal_generate. But due to the locking requirement (see my comment in the .bpf.c file), I think we should split this into a per-thread data structure (or per-CPU while also recording the pid, not sure...).
work-robot left a comment:
We need to use a finer-grained key for the PF map, due to race conditions.
…stead of per-process