impl more winapi #6529

Conversation
📝 Walkthrough

Six new Windows API Python-callable wrappers were added to the stdlib winapi module: named pipe utilities (WaitNamedPipe, PeekNamedPipe), event management (CreateEventW, SetEvent), file I/O (WriteFile), and a BatchedWaitForMultipleObjects function that batches handle waiting, coordinates worker threads, and handles timeouts across multiple handle groups.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant BatchedWait as BatchedWaitForMultipleObjects
    participant Batcher as Batching Logic
    participant Thread as Worker Threads
    participant WinAPI as Windows API
    participant Events as Event Objects

    Client->>BatchedWait: call with handles, wait_all, timeout
    BatchedWait->>Events: create cancel_event

    rect rgb(220, 240, 255)
        note over BatchedWait,Batcher: Batching Phase
        BatchedWait->>Batcher: partition handles into groups (≤64 each)
    end

    rect rgb(240, 220, 255)
        note over Thread,WinAPI: Parallel Waiting Phase
        par Per-Batch Processing
            Batcher->>Thread: spawn worker thread for batch
            Thread->>WinAPI: WaitForMultipleObjects(batch + cancel_event)
            WinAPI-->>Thread: returns triggered index or timeout
            Thread->>Thread: collect result & triggered indices
        end
    end

    rect rgb(240, 255, 220)
        note over BatchedWait,Thread: Coordination & Cleanup
        alt timeout elapsed
            BatchedWait->>Events: signal cancel_event
            Thread->>Thread: abort wait, cleanup
        end
        BatchedWait->>BatchedWait: collect all thread results
        BatchedWait->>BatchedWait: build return (status + indices)
    end

    BatchedWait-->>Client: return PyObjectRef (result tuple)
```
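Read as plain Rust, the coordination in the diagram reduces to roughly the following portable analogue. This is a hedged sketch, not the PR's code: `batched_wait` is a made-up name, an `AtomicBool` stands in for the Windows cancel event, and `std::thread` replaces `CreateThread`/`WaitForMultipleObjects`.

```rust
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};
use std::thread;
use std::time::{Duration, Instant};

/// Portable analogue of the flow in the diagram (illustrative only): one
/// worker per batch, an AtomicBool standing in for the Windows cancel event,
/// and a join-then-collect step on the main thread.
fn batched_wait(batches: Vec<Vec<isize>>, timeout: Duration) -> Vec<bool> {
    let cancel = Arc::new(AtomicBool::new(false));
    let deadline = Instant::now() + timeout;

    let workers: Vec<_> = batches
        .into_iter()
        .map(|batch| {
            let cancel = Arc::clone(&cancel);
            thread::spawn(move || {
                // Stand-in for WaitForMultipleObjects(batch + cancel_event):
                // poll until the deadline passes or cancellation is requested.
                while Instant::now() < deadline && !cancel.load(Ordering::Acquire) {
                    thread::sleep(Duration::from_millis(1));
                }
                let _ = batch; // a real wait would report which handle fired
                cancel.load(Ordering::Acquire) // true if aborted by the main thread
            })
        })
        .collect();

    // Once the overall deadline has passed, raise the cancel flag so every
    // remaining batch aborts promptly, then gather each worker's result.
    thread::sleep(deadline.saturating_duration_since(Instant::now()));
    cancel.store(true, Ordering::Release);
    workers.into_iter().map(|w| w.join().unwrap()).collect()
}

fn main() {
    let results = batched_wait(vec![vec![1, 2, 3], vec![4, 5]], Duration::from_millis(50));
    println!("{results:?}");
}
```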
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 0
🧹 Nitpick comments (2)
crates/vm/src/stdlib/winapi.rs (2)
1002-1028: Add safety comments for unsafe trait implementations.

The `unsafe impl Send` and `unsafe impl Sync` for `BatchData` are correct given the access patterns, but the safety invariants should be documented to prevent future maintenance issues.

🔎 Suggested documentation

```diff
+// SAFETY: BatchData is Send because all fields are either Send or are only accessed
+// from the main thread (thread field is written before threads start, read after they finish).
 unsafe impl Send for BatchData {}
+// SAFETY: BatchData is Sync because:
+// - handles, cancel_event, handle_base are immutable after construction
+// - result uses AtomicU32 for synchronization
+// - thread (UnsafeCell) is only accessed from the main thread: written after
+//   CreateThread but before ResumeThread, read only after threads complete
 unsafe impl Sync for BatchData {}
```
1062-1084: Consider using 0 for the default stack size.

The stack size of `1` at line 1066 works (Windows rounds it up to the minimum), but `0` is more conventional and explicitly requests the default stack size for the executable.

🔎 Suggested change

```diff
 let thread = unsafe {
     CreateThread(
         null(),
-        1, // Smallest stack
+        0, // Default stack size
         Some(batch_wait_thread),
         Arc::as_ptr(data) as *const _ as *mut _,
         4, // CREATE_SUSPENDED
         null_mut(),
     )
 };
```
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Lib/test/test_winapi.py` is excluded by `!Lib/**`
📒 Files selected for processing (1)
crates/vm/src/stdlib/winapi.rs
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.rs: Follow the default rustfmt code style by running `cargo fmt` to format Rust code
Always run clippy to lint Rust code (cargo clippy) before completing tasks and fix any warnings or lints introduced by changes
Follow Rust best practices for error handling and memory management
Use the macro system (`pyclass`, `pymodule`, `pyfunction`, etc.) when implementing Python functionality in Rust
Files:
crates/vm/src/stdlib/winapi.rs
🧬 Code graph analysis (1)
crates/vm/src/stdlib/winapi.rs (1)
crates/stdlib/src/overlapped.rs (1)
SetEvent (398-404)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Run snippets and cpython tests (macos-latest)
- GitHub Check: Run snippets and cpython tests (ubuntu-latest)
- GitHub Check: Run snippets and cpython tests (windows-latest)
🔇 Additional comments (9)
crates/vm/src/stdlib/winapi.rs (9)
729-743: LGTM! Clean implementation following the established pattern in this module.
745-809: LGTM! Good validation of negative size and proper handling of both read and peek-only modes.
811-841: LGTM! The `security_attributes` parameter is intentionally ignored to maintain API compatibility while using NULL security attributes, which is the common case.
843-855: LGTM! Correct implementation. Note that `overlapped.rs` has a similar `SetEvent` that accepts `u64` handles - both are valid for their respective module conventions.
857-900: LGTM! Clean implementation with proper error handling. The early return on error at lines 895-897 ensures the success case correctly returns `(written, 0)`.
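As a rough illustration of that early-return shape (assumed, not taken from the PR), the failure path bails out first so the success path can return the `(written, 0)` tuple unconditionally; `write_file_like` is a hypothetical name.

```rust
use std::io;

/// Illustrative only: return the OS error as soon as the write reports
/// failure, so the remaining code path can always hand back (written, 0).
fn write_file_like(call_succeeded: bool, written: u32) -> io::Result<(u32, u32)> {
    if !call_succeeded {
        // Early return on error, mirroring the pattern the review comment notes.
        return Err(io::Error::last_os_error());
    }
    Ok((written, 0))
}

fn main() {
    println!("{:?}", write_file_like(true, 42));
}
```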
902-953: LGTM! The batching logic correctly handles the Windows `MAXIMUM_WAIT_OBJECTS` limit by reserving one slot per batch for the cancel event.
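For illustration, the slot-reservation arithmetic could look roughly like this; `split_into_batches` is a hypothetical name, and `MAXIMUM_WAIT_OBJECTS` is the documented Windows limit of 64 handles per `WaitForMultipleObjects` call.

```rust
// Windows caps a single WaitForMultipleObjects call at 64 handles.
const MAXIMUM_WAIT_OBJECTS: usize = 64;

/// Illustrative only: split the caller's handles into batches of at most 63
/// so the shared cancel event can be appended as the final slot of each batch.
fn split_into_batches(handles: &[isize]) -> Vec<Vec<isize>> {
    handles
        .chunks(MAXIMUM_WAIT_OBJECTS - 1)
        .map(|chunk| chunk.to_vec())
        .collect()
}

fn main() {
    let handles: Vec<isize> = (1..=130).collect();
    let batches = split_into_batches(&handles);
    // 63 + 63 + 4 handles => 3 batches, each leaving room for the cancel event.
    assert_eq!(batches.len(), 3);
    println!("{} batches", batches.len());
}
```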
955-994: LGTM! The sequential batch waiting with deadline tracking correctly handles timeouts across multiple batches.
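One plausible shape of that deadline tracking (assumed, not the PR's exact code): recompute the time left before each batch's wait and convert it to the millisecond value the wait call expects.

```rust
use std::time::{Duration, Instant};

/// Sketch only: each successive batch waits with whatever time remains until
/// the overall deadline, expressed in milliseconds for the wait call.
fn remaining_millis(deadline: Instant) -> u32 {
    let remaining = deadline.saturating_duration_since(Instant::now());
    // Clamp so the value fits a DWORD-style timeout parameter.
    remaining.as_millis().min(u32::MAX as u128) as u32
}

fn main() {
    let deadline = Instant::now() + Duration::from_millis(500);
    // Suppose the first batch consumed part of the budget...
    std::thread::sleep(Duration::from_millis(100));
    // ...then the next batch only waits for what is left.
    println!("next wait: {} ms", remaining_millis(deadline));
}
```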
1030-1058: LGTM! The thread function correctly handles all wait outcomes and uses proper atomic ordering for the result.
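The atomic handoff of the wait result could be sketched like this, assuming an `AtomicU32` written by the worker and read by the main thread after the worker finishes; the `WAIT_*` constant values match the Windows definitions, but the function names are illustrative.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

const WAIT_OBJECT_0: u32 = 0x0;
const WAIT_TIMEOUT: u32 = 0x102;

/// Worker side (sketch): publish the raw wait return with Release so the
/// write is visible to whoever later observes the store.
fn publish_result(slot: &AtomicU32, wait_return: u32) {
    slot.store(wait_return, Ordering::Release);
}

/// Main-thread side (sketch): load with Acquire after the worker has finished.
fn batch_was_signaled(slot: &AtomicU32) -> bool {
    // Values starting at WAIT_OBJECT_0 index the signaled handle;
    // WAIT_TIMEOUT means the batch saw nothing before the deadline.
    slot.load(Ordering::Acquire) != WAIT_TIMEOUT
}

fn main() {
    let slot = AtomicU32::new(WAIT_TIMEOUT);
    publish_result(&slot, WAIT_OBJECT_0 + 3);
    assert!(batch_was_signaled(&slot));
}
```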
1091-1163: LGTM! The cleanup logic correctly ensures all threads complete before closing handles, and the triggered index collection properly excludes the cancel event.
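The exclusion of the cancel event from the triggered indices might look roughly like this sketch, where the cancel event occupies the slot after the batch's own handles and `handle_base` (a hypothetical name) is the batch's offset into the caller's original handle list.

```rust
/// Illustrative only: map a batch's wait return value back to an index into
/// the caller's original handle list, skipping the cancel event that was
/// appended after the batch's own handles.
fn triggered_index(wait_return: u32, batch_len: usize, handle_base: usize) -> Option<usize> {
    let slot = wait_return as usize; // WAIT_OBJECT_0 is 0, so this is the slot index
    if slot < batch_len {
        Some(handle_base + slot) // one of the caller's handles fired
    } else {
        None // the cancel event fired, or the wait timed out (WAIT_TIMEOUT = 0x102)
    }
}

fn main() {
    // A batch of 3 handles whose first handle is global index 63;
    // slot 3 holds the cancel event.
    assert_eq!(triggered_index(1, 3, 63), Some(64));
    assert_eq!(triggered_index(3, 3, 63), None);
}
```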
Summary by CodeRabbit