
Allow Python user type pass through Beam SQL #38206

Open
Abacn wants to merge 1 commit into apache:master from Abacn:support-python-any-2

Conversation

@Abacn
Contributor

@Abacn Abacn commented Apr 15, 2026

Currently the Python SDK's Row can hold arbitrary types and works within the Python SDK, but not across the language boundary. Several connections are missing to make it work end-to-end from Python -> Java/Beam SQL -> Python.

This change fills in most of those missing pieces. A Beam Row containing a Python user type (backed by FastPrimitivesCoder and carrying the pythonsdk_any logical type URN) can now be recognized and decoded by Java.

  • Complete the pythonsdk_any logical type representation definition; otherwise the Java-side SchemaTranslation for this logical type would fail

  • Handle PassThroughLogicalType in Beam SQL, allowing Beam SQL to treat a PassThroughLogicalType as its base type

  • Fix nested bytes in Beam SQL

  • Introduce a schema option enabling compact encoding for static non-null schemas

  • Support the BYTE atomic type in Python
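The pythonsdk_any representation referred to above frames each value as a single type byte followed by a coder-specific payload. A minimal pure-Python sketch of that framing, using pickle as a stand-in for FastPrimitivesCoder (the tag value and function names here are illustrative, not Beam's API):

```python
import pickle

# Illustrative tag byte; FastPrimitivesCoder uses its own tag values.
PICKLE_TAG = 0x07

def encode_any(value):
    """Frame an arbitrary value as (type_byte, payload)."""
    return PICKLE_TAG, pickle.dumps(value)

def decode_any(type_byte, payload):
    """Dispatch on the tag byte and decode the payload."""
    if type_byte == PICKLE_TAG:
        return pickle.loads(payload)
    raise ValueError(f"unknown type byte: {type_byte:#x}")

tag, payload = encode_any({"user": "abc", "score": -1j})
assert decode_any(tag, payload) == {"user": "abc", "score": -1j}
```

Splitting the tag from the opaque payload is what lets the Java side carry the value through as plain (BYTE, BYTES) fields and hand it back to Python intact.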


However, because the Beam Row -> Calcite Row -> Beam Row mapping loses the original logical type schema, the output Row becomes a plain Row instead of the language type (this limitation is related to #24019). At least pipeline expansion no longer fails for cross-language pipelines involving Any typehints.

Fix #21024; fix #20738


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances Apache Beam's capabilities for handling Python user types within Beam SQL. It refines the definition of the pythonsdk_any logical type, ensuring seamless interoperability between Python and Java SDKs. The changes also introduce robust support for PassThroughLogicalType in SQL queries, allowing for more flexible and accurate type interpretation. Furthermore, improvements were made to correctly process nested byte structures and a new schema option was added to optimize encoding for specific row types, contributing to a more efficient and reliable data processing pipeline.

Highlights

  • Python SDK Logical Type Definition: Completed the pythonsdk_any logical type representation definition, ensuring proper schema translation on the Java side.
  • Beam SQL PassThroughLogicalType Handling: Implemented support for PassThroughLogicalType in Beam SQL, allowing it to be treated as its underlying base type for correct processing.
  • Nested Bytes Fix in Beam SQL: Resolved issues related to handling nested bytes within Beam SQL operations.
  • Compact Encoding Schema Option: Introduced a new compact_nonnull schema option for RowCoderGenerator to enable optimized encoding for static non-null schemas.
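The compact encoding idea can be pictured as dropping the per-row null bookkeeping once the schema statically guarantees no field is nullable. A simplified sketch (this is not Beam's actual RowCoder wire format; the framing below is illustrative only):

```python
def encode_row(values, any_nullable):
    """Encode a row of bytes payloads.

    Standard form: each row carries null-marker bytes, one per field.
    Compact form (all fields statically non-null): the markers are
    omitted entirely, saving len(values) bytes per row.
    """
    out = bytearray()
    if any_nullable:
        # Simplified null bookkeeping: one marker byte per field.
        out += bytes(1 if v is None else 0 for v in values)
    for v in values:
        payload = b"" if v is None else v
        out += len(payload).to_bytes(2, "big") + payload
    return bytes(out)

row = [b"abc", b"\x01\x02"]
standard = encode_row(row, any_nullable=True)
compact = encode_row(row, any_nullable=False)
# The compact form is exactly len(row) bytes shorter per row.
assert len(compact) == len(standard) - len(row)
```

The point of making this a schema option rather than the default is that both sides of the language boundary must agree statically on whether the bookkeeping bytes are present.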


@Abacn Abacn force-pushed the support-python-any-2 branch 3 times, most recently from 335b7d9 to f670f41 on April 16, 2026 04:04
@Abacn Abacn changed the title Prerequisites for supporting Python user type for Beam SQL Allow Python user type pass through Beam SQL Apr 16, 2026
* Complete pythonsdk_any logical type representation def. Otherwise
  Java side SchemaTranslation for this logical type would fail

* Handle PassthroughLogicalType in Beam SQL.
  Allow Beam SQL treat PassthroughLogicalType as its base type

* Fix nested bytes in Beam SQL

* Introduce a schema option for compact encoding for static non-null
  schema
@Abacn Abacn force-pushed the support-python-any-2 branch from f670f41 to a8fa6db on April 16, 2026 04:20
@Abacn Abacn marked this pull request as ready for review April 16, 2026 14:14
@Abacn
Contributor Author

Abacn commented Apr 16, 2026

R: @ahmedabu98 would you mind taking a look, since you've been working on Beam SQL?

@github-actions
Contributor

Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment assign set of reviewers

Contributor

@ahmedabu98 ahmedabu98 left a comment


A lot of this was new to me, but I was mostly able to follow along. Left some comments.

Comment on lines +187 to +188
Schema.LogicalType<org.apache.beam.sdk.values.Row, org.apache.beam.sdk.values.Row> logicalType =
new org.apache.beam.sdk.schemas.logicaltypes.PassThroughLogicalType<
Contributor


readability nit: import these instead of using fully qualified names?


def estimate_size(self, unused_value, nested=False):
  # type: (Any, bool) -> int
  # A short is encoded as 2 bytes, regardless of nesting.
Contributor


nit: cleanup

def recover_to_python_type(input):
  fields = []
  for field in input:
    print(field)
Contributor


nit: cleanup

UserTypeRow(1, Aribitrary("abc"), -1j),
])
| SqlTransform("SELECT arb, complex FROM PCOLLECTION")
# TODO: recover to user type. Currently pipeline can run,
Contributor


Should we create a github issue for this TODO ?

Comment on lines +269 to +273
        name="type_byte",
        type=schema_pb2.FieldType(
            atomic_type=schema_pb2.BYTE, nullable=False)),
    schema_pb2.Field(
        name="payload",
Contributor


Can we create static top-level variables for "type_byte" and "payload" too?

Contributor


And maybe a comment on what they represent (IIUC it's for FastPrimitivesCoder?)

Comment on lines +166 to +176
def recover_to_python_type(input):
  fields = []
  for field in input:
    print(field)
    if hasattr(field, 'type_byte') and hasattr(field, 'payload'):
      obj = coders.FastPrimitivesCoder().decode(
          field.type_byte.to_bytes() + field.payload)
      fields.append(obj)
    else:
      fields.append(field)
  return tuple(fields)
Contributor


Should we make this utility public?
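If the helper were made public, a cleaned-up shape might look like the following sketch (names are hypothetical; the decoder is injected so the example stays self-contained, with pickle standing in for FastPrimitivesCoder):

```python
import pickle
from types import SimpleNamespace

def recover_to_python_type(row, decode=pickle.loads):
    """Rebuild a tuple from a schema'd row, turning any
    (type_byte, payload) field pairs back into Python objects
    via the supplied decode(bytes) -> object callable."""
    fields = []
    for field in row:
        if hasattr(field, "type_byte") and hasattr(field, "payload"):
            # Re-join the tag byte with the payload before decoding.
            fields.append(decode(bytes([field.type_byte]) + field.payload))
        else:
            fields.append(field)
    return tuple(fields)

# Simulated wire form: first byte of the encoded stream as the tag.
data = pickle.dumps({"x": 1})
wrapped = SimpleNamespace(type_byte=data[0], payload=data[1:])
assert recover_to_python_type((wrapped, 42)) == ({"x": 1}, 42)
```

Passing the decoder in (rather than hard-coding FastPrimitivesCoder) would also keep a public utility decoupled from any one coder implementation.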


Development

Successfully merging this pull request may close these issues.

  • Nested bytes broken by Calcite upgrade
  • Python SqlTransform fails with an unhelpful Java error when type inference uses the pythonsdk_any fallback
