Search before asking
Paimon version
1.3.1
Compute Engine
Flink 2.1.1
Minimal reproduce step
- Start any Iceberg-REST compatible catalog.
- Create a Paimon Hadoop catalog with an S3-based warehouse. Then create a Paimon table inside that catalog, adding the metadata properties required for the Iceberg metadata functionality:
'metadata.iceberg.storage' = 'rest-catalog',
'metadata.iceberg.rest.uri' = 'http://localhost:55807/', -- specify your URL
'metadata.iceberg.rest.warehouse' = 'rck_warehouse', -- specify your warehouse name
'metadata.iceberg.rest.clients' = '1'
- Launch a Flink job to INSERT data into this table.
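The reproduce steps above can be sketched in Flink SQL roughly as follows. This is illustrative only: the catalog name, warehouse path, and column definitions are placeholders, and only the `metadata.iceberg.*` properties are taken from this report.

```sql
-- Hadoop catalog with an S3 warehouse (name and path are placeholders)
CREATE CATALOG paimon_catalog WITH (
    'type' = 'paimon',
    'warehouse' = 's3://my-bucket/paimon-warehouse'
);

USE CATALOG paimon_catalog;

-- Table schema is illustrative; the table options come from the report
CREATE TABLE my_orders (
    order_id BIGINT,
    amount   DOUBLE
) WITH (
    'metadata.iceberg.storage' = 'rest-catalog',
    'metadata.iceberg.rest.uri' = 'http://localhost:55807/',  -- specify your URL
    'metadata.iceberg.rest.warehouse' = 'rck_warehouse',      -- specify your warehouse name
    'metadata.iceberg.rest.clients' = '1'
);

-- Any INSERT triggers the commit that fails below
INSERT INTO my_orders VALUES (1, 10.5);
```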
What doesn't meet your expectations?
1. Flink could not create the table in the Iceberg REST Catalog. However, some traces of the operation do appear in the S3 location, such as the metadata JSON file.
2. The Flink job is not able to finish successfully; it fails in the Writer operator while trying to commit the table creation to the Iceberg REST catalog endpoint, and gets:
java.lang.RuntimeException: java.lang.RuntimeException: Fail to create table or get table: default.my_orders
at org.apache.paimon.iceberg.IcebergRestMetadataCommitter.commitMetadata(IcebergRestMetadataCommitter.java:123)
at org.apache.paimon.iceberg.IcebergCommitCallback.createMetadataWithBase(IcebergCommitCallback.java:666)
at org.apache.paimon.iceberg.IcebergCommitCallback.createMetadata(IcebergCommitCallback.java:281)
at org.apache.paimon.iceberg.IcebergCommitCallback.call(IcebergCommitCallback.java:229)
at org.apache.paimon.operation.FileStoreCommitImpl.lambda$tryCommitOnce$16(FileStoreCommitImpl.java:1215)
at java.base/java.util.ArrayList.forEach(Unknown Source)
at org.apache.paimon.operation.FileStoreCommitImpl.tryCommitOnce(FileStoreCommitImpl.java:1213)
at org.apache.paimon.operation.FileStoreCommitImpl.tryCommit(FileStoreCommitImpl.java:840)
at org.apache.paimon.operation.FileStoreCommitImpl.commit(FileStoreCommitImpl.java:362)
at org.apache.paimon.table.sink.TableCommitImpl.commitMultiple(TableCommitImpl.java:229)
at org.apache.paimon.flink.sink.StoreCommitter.commit(StoreCommitter.java:111)
at org.apache.paimon.flink.sink.CommitterOperator.commitUpToCheckpoint(CommitterOperator.java:215)
at org.apache.paimon.flink.sink.CommitterOperator.notifyCheckpointComplete(CommitterOperator.java:192)
at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.notifyCheckpointComplete(StreamOperatorWrapper.java:104)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.notifyCheckpointComplete(RegularOperatorChain.java:145)
at org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyCheckpoint(SubtaskCheckpointCoordinatorImpl.java:479)
at org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyCheckpointComplete(SubtaskCheckpointCoordinatorImpl.java:412)
at org.apache.flink.streaming.runtime.tasks.StreamTask.notifyCheckpointComplete(StreamTask.java:1578)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointCompleteAsync$20(StreamTask.java:1519)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointOperation$23(StreamTask.java:1558)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50)
at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:118)
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMail(MailboxProcessor.java:415)
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsNonBlocking(MailboxProcessor.java:400)
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:362)
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:229)
at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:980)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:917)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:963)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:942)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:756)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:568)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.lang.RuntimeException: Fail to create table or get table: default.my_orders
at org.apache.paimon.iceberg.IcebergRestMetadataCommitter.commitMetadataImpl(IcebergRestMetadataCommitter.java:167)
at org.apache.paimon.iceberg.IcebergRestMetadataCommitter.commitMetadata(IcebergRestMetadataCommitter.java:121)
... 32 more
Caused by: java.lang.NullPointerException
at org.apache.paimon.iceberg.IcebergRestMetadataCommitter.checkBase(IcebergRestMetadataCommitter.java:355)
at org.apache.paimon.iceberg.IcebergRestMetadataCommitter.commitMetadataImpl(IcebergRestMetadataCommitter.java:154)
... 33 more
Anything else?
I used Nessie REST Catalog version 0.106.0.
Are you willing to submit a PR?