[RFC] Add BOLT 12 payer proof primitives #4297
vincenzopalazzo wants to merge 16 commits into lightningdevkit:main from
Conversation
👋 Thanks for assigning @jkczyz as a reviewer!
Codecov Report

❌ Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #4297      +/-   ##
==========================================
+ Coverage   86.20%   87.20%    +1.00%
==========================================
  Files         160      164        +4
  Lines      107545   110168     +2623
  Branches   107545   110168     +2623
==========================================
+ Hits        92707    96070     +3363
+ Misses      12214    11569      -645
+ Partials     2624     2529       -95

Flags with carried forward coverage won't be shown.
TheBlueMatt left a comment

A few notes, though I didn't dig into the code at a particularly low level.
Force-pushed from 2324361 to 9f84e19
Add a Rust CLI tool that generates and verifies test vectors for BOLT 12 payer proofs as specified in lightning/bolts#1295. The tool uses the rust-lightning implementation from lightningdevkit/rust-lightning#4297.

Features:
- Generate deterministic test vectors with configurable seed
- Verify test vectors from JSON files
- Support for basic proofs, proofs with notes, and invalid test cases
- Uses refund flow for explicit payer key control

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
🔔 1st Reminder Hey @valentinewallace! This PR has been waiting for your review.
TheBlueMatt left a comment

Some API comments. I'll review the actual code somewhat later (are we locked in on the spec or is it still in flux at all?), but it would be nice to reduce allocations in it first anyway.
🔔 1st Reminder Hey @jkczyz! This PR has been waiting for your review.
Force-pushed from fb8c68c to 9ad5c35
Force-pushed from 88ffb7c to 84d2374
Quick update after the latest push. I cherry-picked the follow-up fixes that came out of the Codex/spec review. What changed:

I also updated the external test vectors here:

At this point, the real review bugs found around the producer/parser mismatch are fixed on the branch. The one discussion I still consider open is the

The other remaining review threads look like follow-up material to me (TLV0 invariant docs, parser shape, secp context reuse), not missing correctness fixes.
Move the invoice/refund payer key derivation logic into reusable helpers so payer proofs can derive the same signing keys without duplicating the metadata and signer flow.
Add the payer proof types, selective disclosure merkle support, parsing, and tests for constructing and validating BOLT 12 payer proofs from invoices.
Add regression coverage for payer proofs that disclose experimental TLVs above the reserved signature range. The test builds a proof that includes an experimental invoice TLV and asserts that the proof survives a full byte round-trip. This captures the spec-compliance issue found during review against BOLTs PR lightningdevkit#1295, where only 240..=1000 is reserved for signature-related TLVs.
Treat only the 240..=1000 range as reserved for payer-proof and signature TLVs. This updates the inclusion gate, preserves TLV ordering when serializing proofs that contain experimental records, and teaches the parser to accept non-signature invoice TLVs above 1000. The change makes the implementation match the spec text reviewed in BOLTs PR lightningdevkit#1295 while keeping the experimental round-trip test green.
Add a round-trip test for proofs that omit multiple TLVs after the last disclosed field. The writer already emits this shape when trailing fields are withheld, so the new test documents that the parser must accept it as well. This locks in the review finding from the BOLTs PR discussion before the parser change lands.
Add a compile-time and runtime regression test for selectively disclosed fields. The new test constructs a proof that reveals description, issuer, invoice amount, and creation time, then asserts that a parsed proof can expose those values. This captures the API gap found during review: the proof carried the bytes needed for verification, but discarded the disclosed values themselves.
Preserve selected invoice fields inside PayerProof and add accessors for verifiers. The parser and builder now both populate a shared disclosed-field structure so locally produced proofs and parsed proofs expose the same API surface. This keeps selective disclosure useful to callers instead of limiting the proof to merkle reconstruction and signature verification only.
Force-pushed from 8ca2247 to 44ecb16
Fix the formatting drift introduced by the payer proof follow-up cherry-picks and update the stale splicing test to use the current RecipientOnionFields::secret_only signature so the CI build matrix compiles again.
Allow payer proof merkle reconstruction to reuse hashes placed at omitted subtree roots instead of descending into already materialized descendants.
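The reuse this commit describes can be illustrated with a small std-only toy (everything here — `Entry`, `width`, `combine`, and `DefaultHasher` standing in for the real tagged SHA-256 — is invented for illustration, not rust-lightning code): an entry whose width covers more than one leaf is an already-materialized subtree root, and it enters the fold as-is instead of being descended into.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy hash combiner; a real implementation would use a tagged SHA-256 branch hash.
fn combine(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (a, b).hash(&mut h);
    h.finish()
}

/// Each entry covers `width` leaves: width 1 is a disclosed leaf hash, width > 1
/// is a hash supplied for an omitted subtree root, reused without descending.
#[derive(Clone, Copy)]
struct Entry {
    hash: u64,
    width: usize,
}

/// Folds adjacent siblings of equal width upward until one root remains.
/// Assumes a well-aligned tree shape (each supplied subtree sits on a
/// power-of-two leaf boundary).
fn reconstruct_root(entries: &[Entry]) -> u64 {
    let mut level: Vec<Entry> = entries.to_vec();
    while level.len() > 1 {
        let mut next = Vec::new();
        let mut i = 0;
        while i < level.len() {
            if i + 1 < level.len() && level[i].width == level[i + 1].width {
                // Two siblings of equal width merge into their parent.
                next.push(Entry {
                    hash: combine(level[i].hash, level[i + 1].hash),
                    width: level[i].width * 2,
                });
                i += 2;
            } else {
                next.push(level[i]);
                i += 1;
            }
        }
        level = next;
    }
    level[0].hash
}
```

Collapsing the first two leaves into their precomputed parent yields the same root as hashing all four leaves, which is the property the reconstruction relies on.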
Force-pushed from c96cef5 to d5baee8
Add the `Nonce` from the BOLT 12 `OffersContext` to `Event::PaymentSent` so downstream consumers can build payer proofs without needing to set `manually_handle_bolt12_invoices` and intercept `InvoiceReceived`. The nonce is extracted from `OffersContext::OutboundPaymentForOffer` or `OffersContext::OutboundPaymentForRefund` when the invoice is received, threaded through `HTLCSource::OutboundRoute` (persisted for restart safety), stored in `PendingOutboundPayment::Retryable` (available for retries), and finally included in the `PaymentSent` event. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
```rust
BigSize(TLV_PREIMAGE).write(&mut bytes).expect("Vec write should not fail");
BigSize(32).write(&mut bytes).expect("Vec write should not fail");
bytes.extend_from_slice(&self.preimage.0);

if !self.disclosure.omitted_markers.is_empty() {
    let omitted_len: u64 = self
        .disclosure
        .omitted_markers
        .iter()
        .map(|m| BigSize(*m).serialized_length() as u64)
        .sum();
    BigSize(TLV_OMITTED_TLVS).write(&mut bytes).expect("Vec write should not fail");
    BigSize(omitted_len).write(&mut bytes).expect("Vec write should not fail");
    for marker in &self.disclosure.omitted_markers {
        BigSize(*marker).write(&mut bytes).expect("Vec write should not fail");
    }
}

if !self.disclosure.missing_hashes.is_empty() {
    let len = self.disclosure.missing_hashes.len() * 32;
    BigSize(TLV_MISSING_HASHES).write(&mut bytes).expect("Vec write should not fail");
    BigSize(len as u64).write(&mut bytes).expect("Vec write should not fail");
    for hash in &self.disclosure.missing_hashes {
        bytes.extend_from_slice(hash.as_ref());
    }
}

if !self.disclosure.leaf_hashes.is_empty() {
    let len = self.disclosure.leaf_hashes.len() * 32;
    BigSize(TLV_LEAF_HASHES).write(&mut bytes).expect("Vec write should not fail");
    BigSize(len as u64).write(&mut bytes).expect("Vec write should not fail");
    for hash in &self.disclosure.leaf_hashes {
        bytes.extend_from_slice(hash.as_ref());
    }
}

let note_bytes = note.map(|n| n.as_bytes()).unwrap_or(&[]);
let payer_sig_len = 64 + note_bytes.len();
BigSize(TLV_PAYER_SIGNATURE).write(&mut bytes).expect("Vec write should not fail");
BigSize(payer_sig_len as u64).write(&mut bytes).expect("Vec write should not fail");
payer_signature.write(&mut bytes).expect("Vec write should not fail");
bytes.extend_from_slice(note_bytes);
```
Why can't we implement Writeable for sha256::Hash?
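For reference, the shape such an impl would take. This is a std-only sketch with stand-in types: the `Writeable` trait and `Sha256Hash` newtype below model rust-lightning's trait and bitcoin_hashes' `sha256::Hash`, they are not the real definitions.

```rust
use std::io::{self, Write};

// Stand-in for rust-lightning's Writeable trait (simplified assumption).
trait Writeable {
    fn write<W: Write>(&self, w: &mut W) -> io::Result<()>;
}

// Stand-in 32-byte hash newtype modeling sha256::Hash.
struct Sha256Hash([u8; 32]);

impl Writeable for Sha256Hash {
    // A fixed-width hash serializes as its raw bytes; no length prefix needed.
    fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
        w.write_all(&self.0)
    }
}
```

With an impl like this, the manual `extend_from_slice(hash.as_ref())` loops in the quoted hunk collapse into `hash.write(&mut bytes)` calls.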
```rust
if !self.disclosure.omitted_markers.is_empty() {
    let omitted_len: u64 = self
        .disclosure
        .omitted_markers
        .iter()
        .map(|m| BigSize(*m).serialized_length() as u64)
        .sum();
    BigSize(TLV_OMITTED_TLVS).write(&mut bytes).expect("Vec write should not fail");
    BigSize(omitted_len).write(&mut bytes).expect("Vec write should not fail");
    for marker in &self.disclosure.omitted_markers {
        BigSize(*marker).write(&mut bytes).expect("Vec write should not fail");
    }
}

if !self.disclosure.missing_hashes.is_empty() {
    let len = self.disclosure.missing_hashes.len() * 32;
    BigSize(TLV_MISSING_HASHES).write(&mut bytes).expect("Vec write should not fail");
    BigSize(len as u64).write(&mut bytes).expect("Vec write should not fail");
    for hash in &self.disclosure.missing_hashes {
        bytes.extend_from_slice(hash.as_ref());
    }
}

if !self.disclosure.leaf_hashes.is_empty() {
    let len = self.disclosure.leaf_hashes.len() * 32;
    BigSize(TLV_LEAF_HASHES).write(&mut bytes).expect("Vec write should not fail");
    BigSize(len as u64).write(&mut bytes).expect("Vec write should not fail");
    for hash in &self.disclosure.leaf_hashes {
        bytes.extend_from_slice(hash.as_ref());
    }
}
```
Should use Writeable for the values. See also WithoutLength for Vecs. We shouldn't roll our own serialization if we already have helper utilities.
Fixed in a622196. Using hash.as_byte_array().write() for hashes and self.preimage.write() for the preimage.
I mean for all the Vec types. Why don't we define serialization for each type it contains and use WithoutLength instead of using for loops here?
rust-lightning/lightning/src/util/ser.rs, lines 818 to 849 at 450c03a
Fixed — omitted_markers now also uses WithoutLength by mapping to Vec<BigSize>, matching the spec's [...*bigsize:missing] encoding. All three Vec fields (omitted_markers, missing_hashes, leaf_hashes) now use WithoutLength consistently for both serialization and deserialization.
Actually, we can use encode_tlv_stream! with all of these, and IterableOwned to map u64 to BigSize. You can take it from jkczyz@6663f18.
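To make the suggestion concrete, here is a std-only sketch of the two encodings involved: BOLT 1 BigSize var-ints, and a WithoutLength-style vector encoding that concatenates elements with no outer element count (the real macros and types live in `lightning::util::ser`; everything below is a local model, not the library API).

```rust
// BOLT 1 BigSize: 1, 3, 5, or 9 bytes depending on magnitude.
fn write_bigsize(v: u64, out: &mut Vec<u8>) {
    match v {
        0..=0xfc => out.push(v as u8),
        0xfd..=0xffff => {
            out.push(0xfd);
            out.extend_from_slice(&(v as u16).to_be_bytes());
        }
        0x1_0000..=0xffff_ffff => {
            out.push(0xfe);
            out.extend_from_slice(&(v as u32).to_be_bytes());
        }
        _ => {
            out.push(0xff);
            out.extend_from_slice(&v.to_be_bytes());
        }
    }
}

// WithoutLength-style encoding of the omitted-marker list: elements are written
// back to back with no element count, matching the spec's [...*bigsize:missing]
// shape (the enclosing TLV record's length delimits the list).
fn write_markers_without_length(markers: &[u64], out: &mut Vec<u8>) {
    for &m in markers {
        write_bigsize(m, out);
    }
}
```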
- Reference invoice.bytes instead of copying (added pub(super) accessor)
- Change reconstruct_merkle_root to take &[TlvRecord] instead of tuples
- Change compute_selective_disclosure to take Iterator<Item = TlvRecord>
- Use Readable/Writeable for hash and preimage ser/deser
- Use TlvStream::range instead of manual filter conditions
- Replace magic 64/32 with SCHNORR_SIGNATURE_SIZE and Writeable
- Add FromStr impl for PayerProof (matching Offer/Refund pattern)
- Use Vec::with_capacity for output buffer
- Derive signing keys at builder construction (fail early)
- Remove build_with_derived_key in favor of new_derived + build_signed

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Follow the InvoiceBuilder pattern: PayerProofBuilder<'a, S> is now generic over a SigningStrategy type parameter.

- ExplicitSigningKey: build(sign_fn, note) takes a closure
- DerivedSigningKey: build(note) uses the Keypair stored at construction

This ensures at compile time that derived-key builders can only call build(note) and explicit-key builders can only call build(sign_fn, note).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
```rust
} else {
    let marker = prev_value + 1;
    markers.push(marker);
    prev_value = marker;
}
```
Bug (producer-consumer mismatch): compute_omitted_markers generates markers sequentially via prev_value + 1 with no awareness of the SIGNATURE_TYPES (240..=1000) hole. Both validate_omitted_markers (line 662) and validate_omitted_markers_for_parsing (line 833) reject markers that fall in this range.
This means if an invoice has enough omitted TLVs after the last included type that sequential markers reach 240, the producer generates an invalid proof that the consumer rejects. Concretely:
- Required included types max out at 176 (INVOICE_NODE_ID_TYPE)
- Omitted invoice types in 178..238 contribute ~31 positions, generating markers 177..207
- Each additional omitted experimental type (>1000) adds one more marker: 208, 209, ..., 239, 240 (rejected!)
- It only takes 33 experimental invoice types beyond the 178..238 range to trigger this
While unlikely today, this is a correctness gap: the producer and consumer disagree on how markers interact with the reserved range. The fix depends on how the spec handles it — either the producer needs to skip markers in SIGNATURE_TYPES (jumping from 239 to 1001), or the consumer needs to allow them. Both sides need to agree.
Consider adding a debug_assert that no generated markers fall in SIGNATURE_TYPES to catch this during testing, and a runtime check or skip logic once the spec clarifies the behavior.
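One possible shape for the producer-side fix, assuming the spec settles on skipping the reserved hole (std-only sketch; `next_marker` is a hypothetical helper for illustration, not PR code):

```rust
use std::ops::RangeInclusive;

// Reserved signature-related TLV range per the spec draft.
const SIGNATURE_TYPES: RangeInclusive<u64> = 240..=1000;

/// Next sequential omitted-TLV marker, jumping over the reserved range so the
/// producer can never emit a marker the consumer is obligated to reject.
fn next_marker(prev_value: u64) -> u64 {
    let mut marker = prev_value + 1;
    if SIGNATURE_TYPES.contains(&marker) {
        // 239 -> 1001: skip the signature hole entirely.
        marker = *SIGNATURE_TYPES.end() + 1;
    }
    debug_assert!(!SIGNATURE_TYPES.contains(&marker));
    marker
}
```

The alternative resolution (consumer accepts markers inside the range) needs no producer change, but either way both sides have to agree.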
```rust
/// [`InvoiceRequest`]: crate::offers::invoice_request::InvoiceRequest
/// [`Offer`]: crate::offers::offer::Offer
/// [`Refund`]: crate::offers::refund::Refund
payment_nonce: Option<Nonce>,
```
@TheBlueMatt Should this be part of PaidBolt12Invoice? Maybe serialization won't be nice...
I almost wonder if we should have a first-class PaidBolt12Invoice to encapsulate the preimage and Nonce to avoid needing the user to pass all that when building the proof. Will attempt this.
Yea, I think so. I think the easiest way to handle serialization is to just pull the ser into the event ser and add the field separately?
```rust
const TLV_SIGNATURE: u64 = 240;
const TLV_PREIMAGE: u64 = 242;
const TLV_OMITTED_TLVS: u64 = 244;
const TLV_MISSING_HASHES: u64 = 246;
const TLV_LEAF_HASHES: u64 = 248;
const TLV_PAYER_SIGNATURE: u64 = 250;
```
Let's follow the naming convention used in the other BOLT12 TLV constants.
```rust
/// `TlvStream::new()` assumes well-formed input and panics on malformed BigSize
/// values or out-of-bounds lengths. This function validates the framing first,
/// returning an error instead of panicking on untrusted input.
fn validate_tlv_framing(bytes: &[u8]) -> Result<(), crate::ln::msgs::DecodeError> {
```
Could we define this on TlvStream even if only an is_valid static method? Maybe as a follow-up we can build it into TlvStream or have some wrapper for performing validation.
Makes sense to me. Even after I move is_valid, I will leave this review thread unresolved as a reminder to open a new tracking issue for this.
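For illustration, a std-only model of what such framing validation does; the error type, the BigSize reader, and the strictly-ascending check below are local stand-ins, not the rust-lightning implementations:

```rust
// Decode one BOLT 1 BigSize, returning (value, bytes consumed), or Err on
// truncated input.
fn read_bigsize(b: &[u8]) -> Result<(u64, usize), ()> {
    match *b.first().ok_or(())? {
        n @ 0..=0xfc => Ok((n as u64, 1)),
        0xfd => {
            let v = u16::from_be_bytes(b.get(1..3).ok_or(())?.try_into().unwrap());
            Ok((v as u64, 3))
        }
        0xfe => {
            let v = u32::from_be_bytes(b.get(1..5).ok_or(())?.try_into().unwrap());
            Ok((v as u64, 5))
        }
        0xff => {
            let v = u64::from_be_bytes(b.get(1..9).ok_or(())?.try_into().unwrap());
            Ok((v, 9))
        }
    }
}

// Walk the stream record by record, rejecting truncated var-ints,
// out-of-bounds lengths, and non-ascending (or duplicate) types.
fn validate_tlv_framing(mut bytes: &[u8]) -> Result<(), ()> {
    let mut prev_type: Option<u64> = None;
    while !bytes.is_empty() {
        let (tlv_type, used) = read_bigsize(bytes)?;
        bytes = &bytes[used..];
        let (len, used) = read_bigsize(bytes)?;
        bytes = &bytes[used..];
        let len = usize::try_from(len).map_err(|_| ())?;
        if bytes.len() < len {
            return Err(());
        }
        bytes = &bytes[len..];
        // Strict ascending order also covers duplicate detection.
        if let Some(prev) = prev_type {
            if tlv_type <= prev {
                return Err(());
            }
        }
        prev_type = Some(tlv_type);
    }
    Ok(())
}
```

Folding the ordering check in here (as suggested above) means the parser can assume well-formed, strictly-ordered records after a single validation pass.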
- Add Writeable/Readable for sha256::Hash (matching Hmac<Sha256> pattern)
- Use WithoutLength for hash Vec ser/deser instead of manual for-loops
- Use serialized_length() for TLV lengths instead of hardcoded constants
- Rename build() to build_and_sign() on DerivedSigningKey for consistency
- Use fixed PAYER_PROOF_ALLOCATION_SIZE for with_capacity
- Import TlvRecord directly in merkle tests module
- Reference Event::PaymentSent in doc comments
- Change payment_nonce from Option<&Nonce> to Option<Nonce> (Nonce is Copy)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
jkczyz left a comment

Made some of the more involved requests in https://github.com/jkczyz/rust-lightning/tree/macros/proof-of-payment-bolt12-spec. Didn't want to push anything directly to your branch in case you were still making changes. But feel free to grab those commits. They are referenced in my comments.
```rust
// Strict ascending order check covers both ordering and duplicates.
if let Some(prev) = prev_tlv_type {
    if tlv_type <= prev {
        return Err(Bolt12ParseError::Decode(DecodeError::InvalidValue));
    }
}
prev_tlv_type = Some(tlv_type);
```
Let's do this in validate_tlv_framing
```rust
fn update_disclosed_fields(
    record: &crate::offers::merkle::TlvRecord<'_>, disclosed_fields: &mut DisclosedFields,
) -> Result<(), crate::ln::msgs::DecodeError> {
    use crate::ln::msgs::DecodeError;

    match record.r#type {
        OFFER_DESCRIPTION_TYPE => {
            disclosed_fields.offer_description = Some(
                String::from_utf8(record.value_bytes.to_vec())
                    .map_err(|_| DecodeError::InvalidValue)?,
            );
        },
        OFFER_ISSUER_TYPE => {
            disclosed_fields.offer_issuer = Some(
                String::from_utf8(record.value_bytes.to_vec())
                    .map_err(|_| DecodeError::InvalidValue)?,
            );
        },
        INVOICE_CREATED_AT_TYPE => {
            disclosed_fields.invoice_created_at = Some(Duration::from_secs(
                record.read_value::<HighZeroBytesDroppedBigSize<u64>>()?.0,
            ));
        },
        INVOICE_AMOUNT_TYPE => {
            disclosed_fields.invoice_amount_msats =
                Some(record.read_value::<HighZeroBytesDroppedBigSize<u64>>()?.0);
        },
        _ => {},
    }

    Ok(())
}

fn extract_disclosed_fields<'a>(
    records: impl core::iter::Iterator<Item = crate::offers::merkle::TlvRecord<'a>>,
) -> Result<DisclosedFields, crate::ln::msgs::DecodeError> {
    let mut disclosed_fields = DisclosedFields::default();
    for record in records {
        update_disclosed_fields(&record, &mut disclosed_fields)?;
    }
    Ok(disclosed_fields)
}
```
These can be methods on DisclosedFields. Feel free to take jkczyz@1693846.
```rust
}

fn update_disclosed_fields(
    record: &crate::offers::merkle::TlvRecord<'_>, disclosed_fields: &mut DisclosedFields,
```
Let's use imports for this as well as others that are fully-qualified. Feel free to take jkczyz@89f37db.
```rust
/// TLV record type for the invoice creation timestamp.
pub(super) const INVOICE_CREATED_AT_TYPE: u64 = 164;

/// TLV record type for [`Bolt12Invoice::payment_hash`].
pub(super) const INVOICE_PAYMENT_HASH_TYPE: u64 = 168;

/// TLV record type for [`Bolt12Invoice::amount_msats`].
pub(super) const INVOICE_AMOUNT_TYPE: u64 = 170;

/// TLV record type for [`Bolt12Invoice::invoice_features`].
pub(super) const INVOICE_FEATURES_TYPE: u64 = 174;

/// TLV record type for [`Bolt12Invoice::signing_pubkey`].
pub(super) const INVOICE_NODE_ID_TYPE: u64 = 176;

const TLV_SIGNATURE: u64 = 240;
const TLV_PREIMAGE: u64 = 242;
const TLV_OMITTED_TLVS: u64 = 244;
const TLV_MISSING_HASHES: u64 = 246;
const TLV_LEAF_HASHES: u64 = 248;
const TLV_PAYER_SIGNATURE: u64 = 250;
```
```rust
    && r.r#type > *SIGNATURE_TYPES.end()
}) {
    for record in TlvStream::new(&self.invoice_bytes)
        .range((*SIGNATURE_TYPES.end() + 1)..)
```
```rust
fn compute_payer_signature_message(note: Option<&str>, merkle_root: &sha256::Hash) -> Message {
    let mut inner_hasher = sha256::Hash::engine();
    if let Some(n) = note {
        inner_hasher.input(n.as_bytes());
    }
    inner_hasher.input(merkle_root.as_ref());
    let inner_msg = sha256::Hash::from_engine(inner_hasher);

    let tag_hash = sha256::Hash::hash(PAYER_SIGNATURE_TAG.as_bytes());

    let mut final_hasher = sha256::Hash::engine();
    final_hasher.input(tag_hash.as_ref());
    final_hasher.input(tag_hash.as_ref());
    final_hasher.input(inner_msg.as_ref());
    let final_digest = sha256::Hash::from_engine(final_hasher);

    Message::from_digest(*final_digest.as_byte_array())
```
Let's take jkczyz@34aaab3 so we can reuse as much code as possible.
```rust
impl UnsignedPayerProof<'_> {
    fn sign<F>(self, sign_fn: F, note: Option<&str>) -> Result<PayerProof, PayerProofError>
    where
        F: FnOnce(&Message) -> Result<Signature, ()>,
```
Let's take jkczyz@af71153 and jkczyz@34aaab3, so the interface is similar to our other BOLT12 signing interfaces.
Oh, please double-check this conforms to the spec still 😅
```rust
if !SIGNATURE_TYPES.contains(&tlv_type) {
    // Included invoice TLV record (passthrough for merkle
    // reconstruction).
    included_types.insert(tlv_type);
    included_records.push(record);
} else if tlv_type % 2 == 0 {
    // Unknown even types are mandatory-to-understand per
    // BOLT convention — reject them.
    return Err(Bolt12ParseError::Decode(DecodeError::InvalidValue));
}
```
Doesn't this bypass the unknown even TLV check for non-signature types? Do we have a test to check this case?
This is a first draft implementation of the payer proof extension to BOLT 12 as proposed in lightning/bolts#1295. The goal is to get early feedback on the API design before the spec is finalized.
Payer proofs allow proving that a BOLT 12 invoice was paid by demonstrating possession of:
This PR adds the core building blocks:
This is explicitly a PoC to validate the API surface - the spec itself is still being refined. Looking for feedback on:
cc @TheBlueMatt @jkczyz