[SPARK-54696][CONNECT] Clean-up ArrowBuffers in Connect #53452
base: master
Conversation
} else {
  if (batchStructType != structType) {
    throw InvalidInputErrors.chunkedCachedLocalRelationChunksWithDifferentSchema()
An error like this is thrown from the iterator. We may want to make this nicer, though.
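The schema-consistency check quoted above can be sketched in isolation. A minimal, hypothetical model follows; the `Chunk` case class, `combineChunks`, and the plain `IllegalArgumentException` are illustrative stand-ins, not the actual Spark Connect types or error classes:

```scala
// Hypothetical sketch of the check above: while folding chunk rows
// together, every chunk after the first must carry the same schema,
// otherwise we fail fast (mirroring
// chunkedCachedLocalRelationChunksWithDifferentSchema()).
final case class Chunk(schema: String, rows: Seq[String])

def combineChunks(chunks: Seq[Chunk]): (Seq[String], String) = {
  require(chunks.nonEmpty, "expected at least one chunk")
  val schema = chunks.head.schema
  val rows = chunks.foldLeft(Vector.empty[String]) { (acc, chunk) =>
    if (chunk.schema != schema) {
      throw new IllegalArgumentException("chunks have different schemas")
    }
    acc ++ chunk.rows
  }
  (rows, schema)
}
```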
  }
  combinedRows = combinedRows ++ batchRows
}
val (rows, structType) = ArrowConverters.fromIPCStream(
We can move this code into buildLocalRelationFromRows now...
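The suggested refactor can be sketched as follows. This is a hedged illustration only: `LocalRelation`, the stand-in `fromIPCStream`, and `buildLocalRelationFromStream` are simplified stand-ins for the real Arrow/Catalyst types and for `buildLocalRelationFromRows`, whose actual signature this PR defines:

```scala
// Hypothetical sketch of the refactor: fold the stream-to-rows conversion
// into a single helper so callers only hand over the raw IPC bytes.
final case class LocalRelation(rows: Seq[String], schema: String)

// Stand-in for ArrowConverters.fromIPCStream: decodes bytes to rows + schema.
def fromIPCStream(bytes: Array[Byte]): (Seq[String], String) =
  (new String(bytes, "UTF-8").split(",").toSeq, "string")

// The conversion now lives inside the builder instead of at every call site.
def buildLocalRelationFromStream(bytes: Array[Byte]): LocalRelation = {
  val (rows, schema) = fromIPCStream(bytes)
  LocalRelation(rows, schema)
}
```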
val messages = ipcStreams.map { bytes =>
  new MessageIterator(new ByteArrayInputStream(bytes), allocator)
}
new ConcatenatingArrowStreamReader(allocator, messages, destructive = true)
This is reusing a component that was used in the Spark Connect Scala client. It allows us to concatenate multiple IPC streams.
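The concatenation idea can be illustrated with plain `java.io` streams. This is only a sketch of the pattern: the real `ConcatenatingArrowStreamReader` stitches together Arrow IPC message iterators under one allocator, not raw byte streams:

```scala
import java.io.{ByteArrayInputStream, InputStream, SequenceInputStream}
import scala.jdk.CollectionConverters._

// Present several independent byte streams as one logical stream, in the
// same spirit as concatenating multiple IPC streams behind one reader.
def concatStreams(chunks: Seq[Array[Byte]]): InputStream =
  new SequenceInputStream(
    chunks.iterator
      .map(bytes => new ByteArrayInputStream(bytes): InputStream)
      .asJavaEnumeration)
```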
resources.append(reader)

private val root: VectorSchemaRoot = try {
  reader.getVectorSchemaRoot
The reader owns the vector schema root. We don't have to manage it ourselves.
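The ownership rule can be sketched with a plain `AutoCloseable` pair. The `Reader`/`Root` names are hypothetical stand-ins for Arrow's stream reader and `VectorSchemaRoot`:

```scala
// Hypothetical sketch: the reader owns its root, so closing the reader
// releases the root too, and callers never manage the root directly.
final class Root extends AutoCloseable {
  var closed = false
  override def close(): Unit = closed = true
}

final class Reader extends AutoCloseable {
  private val root = new Root
  def getRoot: Root = root
  // Closing the reader transitively closes the root it owns.
  override def close(): Unit = root.close()
}
```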
What changes were proposed in this pull request?
This PR fixes a memory leak in Spark Connect LocalRelations.
... more details TBD ...
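The `resources.append(reader)` line in the diff suggests a track-and-release pattern for closeables. A hedged sketch of that pattern, with illustrative names only (the PR's actual clean-up mechanism may differ):

```scala
import scala.collection.mutable.ArrayBuffer

// Hypothetical sketch: register every allocated closeable and release them
// all when the relation is done, so Arrow buffers cannot leak.
final class ResourceTracker extends AutoCloseable {
  private val resources = ArrayBuffer.empty[AutoCloseable]

  def register[T <: AutoCloseable](resource: T): T = {
    resources += resource
    resource
  }

  // Close in reverse registration order; keep going even if one close fails,
  // then rethrow the first failure.
  override def close(): Unit = {
    var failure: Throwable = null
    resources.reverseIterator.foreach { r =>
      try r.close()
      catch { case t: Throwable => if (failure == null) failure = t }
    }
    resources.clear()
    if (failure != null) throw failure
  }
}
```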
Why are the changes needed?
It fixes a stability issue caused by leaked Arrow buffers in Spark Connect LocalRelations.
Does this PR introduce any user-facing change?
No
How was this patch tested?
Existing tests.
A Connect Planner Test TBD
Longevity tests.
Was this patch authored or co-authored using generative AI tooling?
No.