  BIP: 47
  Author: Justus Ranvier
  Comments-Summary: Unanimously Discourage for implementation
  Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0047
- Status: Draft
+ Status: Final
  Type: Informational
  Created: 2015-04-24
@@ -10,11 +10,17 @@ RECENT CHANGES:
+==Status==
+
+This BIP can be considered final in terms of enabling compatibility with wallets that implement version 1 and version 2 reusable payment codes, however future developments of the reusable payment codes specification will not be distributed via the BIP process.
+
+The Open Bitcoin Privacy Project RFC repo should be consulted for specifications related to version 3 or higher payment codes: https://github.com/OpenBitcoinPrivacyProject/rfc
+
 ==Abstract==
 
 This BIP defines a technique for creating a payment code which can be publicly advertised and associated with a real-life identity without creating the loss of security or privacy inherent to P2PKH address reuse.
@@ -150,7 +156,7 @@ It is assumed that Alice can easily obtain Bob's payment code via a suitable met
 Prior to the first time Alice initiates a transaction to Bob, Alice MUST inform Bob of her payment code via the following procedure:
 
-Note: this procedure is used if Bob uses a version 1 payment code (regardless of the the version of Alice's payment code). If Bob's payment code is not version 1, see the appropriate section in this specification.
+Note: this procedure is used if Bob uses a version 1 payment code (regardless of the version of Alice's payment code). If Bob's payment code is not version 1, see the appropriate section in this specification.
 
 # Alice constructs a transaction which sends a small quantity of bitcoins to Bob's notification address (notification transaction)
 ## The inputs selected for this transaction MUST NOT be easily associated with Alice's notification address
@@ -158,7 +164,7 @@ Note: this procedure is used if Bob uses a version 1 payment code (regardless of
 ## Alice selects the private key corresponding to the designated pubkey: a
 ## Alice selects the public key associated with Bob's notification address: B, where B = bG
 ## Alice calculates a secret point: S = aB
-## Alice calculates a 64 byte blinding factor: s = HMAC-SHA512(x, o)
+## Alice calculates a 64 byte blinding factor: s = HMAC-SHA512(o, x)
 ### "x" is the x value of the secret point
 ### "o" is the outpoint being spent by the designated input
 # Alice serializes her payment code in binary form.
@@ -229,7 +235,7 @@ The following actions are recommended to reduce this risk:
 B' = B + sG
 ## Bob calculate the private key for each ephemeral address as: b' = b + s
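
For illustration, a minimal Python sketch of the notification blinding step shown in the hunk above. It assumes HMAC-SHA512 is keyed with the outpoint ''o'' and applied to the secret point's x value ''x'', and that the two 32-byte halves of ''s'' blind the payment code's public key x value and chain code by XOR; the helper names are illustrative, not part of the BIP.

<source lang="python">
import hashlib
import hmac

def blinding_factor(outpoint: bytes, secret_x: bytes) -> bytes:
    # s = HMAC-SHA512(o, x): o is the serialized outpoint spent by the designated
    # input, x is the 32-byte x coordinate of the shared secret point S = aB.
    return hmac.new(outpoint, secret_x, hashlib.sha512).digest()

def blind_payment_code(pubkey_x: bytes, chain_code: bytes, s: bytes):
    # Assumed usage of the blinding factor: the first 32 bytes of s blind the
    # payment code's x value, the last 32 bytes blind the chain code, by XOR.
    x_blinded = bytes(a ^ b for a, b in zip(pubkey_x, s[:32]))
    c_blinded = bytes(a ^ b for a, b in zip(chain_code, s[32:]))
    return x_blinded, c_blinded
</source>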
diff --git a/bip-0078.mediawiki b/bip-0078.mediawiki
-* additionalfeeoutputindex=, if the sender is willing to pay for increased fee, this indicate output can have its value substracted to pay for it.
+* additionalfeeoutputindex=, if the sender is willing to pay for increased fee, this indicate output can have its value subtracted to pay for it.
 If the additionalfeeoutputindex is out of bounds or pointing to the payment output meant for the receiver, the receiver should ignore the parameter. See [[#fee-output|fee output]] for more information.
@@ -198,7 +198,7 @@ It is advised to hard code the description of the well known error codes into th
 ===Fee output===
 
 In some situation, the sender might want to pay some additional fee in the payjoin proposal.
-If such is the case, the sender must use both [[#optional-params|optional parameters]] additionalfeeoutputindex= and maxadditionalfeecontribution= to indicate which output and how much the receiver can substract fee.
+If such is the case, the sender must use both [[#optional-params|optional parameters]] additionalfeeoutputindex= and maxadditionalfeecontribution= to indicate which output and how much the receiver can subtract fee.
 
 There is several cases where a fee output is useful:
@@ -273,7 +273,7 @@ The sender should check the payjoin proposal before signing it to prevent a mali
 * For each outputs in the proposal:
 ** Verify that no keypaths is in the PSBT output
 ** If the output is the [[#fee-output|fee output]]:
-*** The amount that was substracted from the output's value is less than or equal to maxadditionalfeecontribution. Let's call this amount actual contribution.
+*** The amount that was subtracted from the output's value is less than or equal to maxadditionalfeecontribution. Let's call this amount actual contribution.
 *** Make sure the actual contribution is only paying fee: The actual contribution is less than or equals to the difference of absolute fee between the payjoin proposal and the original PSBT.
 *** Make sure the actual contribution is only paying for fee incurred by additional inputs: actual contribution is less than or equals to originalPSBTFeeRate * vsize(sender_input_type) * (count(payjoin_proposal_inputs) - count(original_psbt_inputs)). (see [[#fee-output|Fee output]] section)
** If the output is the payment output and payment output substitution is allowed.
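
A minimal sketch of the three fee-output checks above, with illustrative parameter names (amounts in satoshis, feerate in sat/vbyte); it mirrors the bullets rather than any particular wallet implementation.

<source lang="python">
def fee_output_is_acceptable(original_value, proposal_value,
                             max_additional_fee_contribution,
                             original_psbt_fee_rate, sender_input_vsize,
                             proposal_input_count, original_input_count,
                             original_fee, proposal_fee):
    # Amount the receiver subtracted from the sender's designated fee output.
    actual_contribution = original_value - proposal_value
    # 1. It must not exceed maxadditionalfeecontribution.
    if actual_contribution > max_additional_fee_contribution:
        return False
    # 2. It may only pay fee: bounded by the increase in absolute fee.
    if actual_contribution > proposal_fee - original_fee:
        return False
    # 3. It may only pay for fee incurred by the receiver's additional inputs.
    added_inputs = proposal_input_count - original_input_count
    if actual_contribution > original_psbt_fee_rate * sender_input_vsize * added_inputs:
        return False
    return True
</source>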
@@ -344,7 +344,7 @@ On top of this the receiver can poison analysis by randomly faking a round amoun
===Payment output substitution===
-Unless disallowed by sender explicitely via `disableoutputsubstitution=true` or by the BIP21 url via query parameter the `pjos=0`, the receiver is free to decrease the amount, remove, or change the scriptPubKey output paying to himself.
+Unless disallowed by sender explicitly via `disableoutputsubstitution=true` or by the BIP21 url via query parameter the `pjos=0`, the receiver is free to decrease the amount, remove, or change the scriptPubKey output paying to himself.
Note that if payment output substitution is disallowed, the receiver can still increase the amount of the output. (See [[#reference-impl|the reference implementation]])
For example, if the sender's scriptPubKey type is P2WPKH while the receiver's payment output in the original PSBT is P2SH, then the receiver can substitute the payment output to be P2WPKH to match the sender's scriptPubKey type.
@@ -413,7 +413,7 @@ Here is pseudo code of a sender implementation.
The signedPSBT represents a PSBT which has been fully signed, but not yet finalized.
We then prepare originalPSBT from the signedPSBT via the CreateOriginalPSBT function and get back the proposal.
-While we verify the proposal, we also import into it informations about our own inputs and outputs from the signedPSBT.
+While we verify the proposal, we also import into it information about our own inputs and outputs from the signedPSBT.
At the end of this RequestPayjoin, the proposal is verified and ready to be signed.
We logged the different PSBT involved, and show the result in our [[#test-vectors|test vectors]].
@@ -557,7 +557,7 @@ public async Task

diff --git a/bip-0087.mediawiki b/bip-0087.mediawiki
 cosigner_index in order to sort the purpose' public keys of each cosigner. This too is redundant, as descriptors can set the order of the public keys with multi or have them sorted lexicographically (as described in [https://github.com/bitcoin/bips/blob/master/bip-0067.mediawiki BIP67]) with sortedmulti. Sorting public keys between cosigners in order to create the full derivation path, prior to sending the key record to the coordinator to create the descriptor, merely adds additional unnecessary communication rounds.
+BIP45 unnecessarily demands a single script type (here, P2SH). In addition, BIP45 sets cosigner_index in order to sort the purpose' public keys of each cosigner. This too is redundant, as descriptors can set the order of the public keys with multi or have them sorted lexicographically (as described in [https://github.com/bitcoin/bips/blob/master/bip-0067.mediawiki BIP67]) with sortedmulti. Sorting public keys between cosigners in order to create the full derivation path, prior to sending the key record to the coordinator to create the descriptor, merely adds additional unnecessary communication rounds.

The second multisignature "standard" in use is m/48', which specifies:
diff --git a/bip-0088.mediawiki b/bip-0088.mediawiki
index 936f2ca9..49be7dba 100644
--- a/bip-0088.mediawiki
+++ b/bip-0088.mediawiki
@@ -89,7 +89,7 @@ installation of malicious or incorrect profiles, though.
==Specification==
-The format for the template was choosen to make it easy to read, convenient and visually unambigous.
+The format for the template was chosen to make it easy to read, convenient and visually unambiguous.
Template starts with optional prefix m/, and then one or more sections delimited by the slash character (/).
@@ -127,13 +127,13 @@ Constraints:
# To avoid ambiguity, an index range that matches a single value MUST be specified as Unit range.
# To avoid ambiguity, an index range 0-2147483647 is not allowed, and MUST be specified as Wildcard index template instead
# For Non-unit range, range_end MUST be larger than range_start.
-# If there is more than one index range within the Ranged index template, range_start of the second and any subsequent range MUST be larger than the range_end of the preceeding range.
+# If there is more than one index range within the Ranged index template, range_start of the second and any subsequent range MUST be larger than the range_end of the preceding range.
# To avoid ambiguity, all representations of integer values larger than 0 MUST NOT start with character 0 (no leading zeroes allowed).
# If hardened marker appears within any section in the path template, all preceding sections MUST also specify hardened matching.
# To avoid ambiguity, if a hardened marker appears within any section in the path template, all preceding sections MUST also use the same hardened marker (either h or ').
# To avoid ambiguity, trailing slashes (for example, 1/2/) and duplicate slashes (for example, 0//1) MUST NOT appear in the template.
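
As a rough illustration of the range constraints above, a small check of one Ranged index template; single values are modelled here as (v, v) pairs, and this is a partial check, not the full BIP 88 grammar.

<source lang="python">
def valid_ranged_index(ranges):
    """ranges is a list of (range_start, range_end) pairs, e.g. [(1, 5), (10, 20)]."""
    prev_end = -1
    for start, end in ranges:
        if end < start:
            return False   # range_end must not precede range_start
        if start <= prev_end:
            return False   # each range must start above the preceding range_end
        prev_end = end
    return True
</source>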
-It may be desireable to have fully unambiguous encoding, where for each valid path template string, there is no other valid template string that matches the exact same set of paths. This would enable someone to compare templates for equality through a simple string equality check, without any parsing.
+It may be desirable to have fully unambiguous encoding, where for each valid path template string, there is no other valid template string that matches the exact same set of paths. This would enable someone to compare templates for equality through a simple string equality check, without any parsing.
To achieve this, two extra rules are needed:
diff --git a/bip-0099.mediawiki b/bip-0099.mediawiki
index 8882e003..156eec02 100644
--- a/bip-0099.mediawiki
+++ b/bip-0099.mediawiki
@@ -56,7 +56,7 @@ development, diversity, etc) to fork the Bitcoin Core software and it's good
that there's many alternative implementations of the protocol (forks
of Bitcoin Core or written from scratch).
-But sometimes a bug in the reimplementaion of the consensus
+But sometimes a bug in the reimplementation of the consensus
validation rules can prevent users of alternative implementation from
following the longest (most work) valid chain. This can result in
those users losing coins or being defrauded, making reimplementations
diff --git a/bip-0109.mediawiki b/bip-0109.mediawiki
index 69b265b1..4822d4a2 100644
--- a/bip-0109.mediawiki
+++ b/bip-0109.mediawiki
@@ -37,7 +37,7 @@ In particular:
* The coinbase scriptSig is not counted
* Signature operations in un-executed branches of a Script are not counted
-* OP_CHECKMULTISIG evaluations are counted accurately; if the signature for a 1-of-20 OP_CHECKMULTISIG is satisified by the public key nearest the top of the execution stack, it is counted as one signature operation. If it is satisfied by the public key nearest the bottom of the execution stack, it is counted as twenty signature operations.
+* OP_CHECKMULTISIG evaluations are counted accurately; if the signature for a 1-of-20 OP_CHECKMULTISIG is satisfied by the public key nearest the top of the execution stack, it is counted as one signature operation. If it is satisfied by the public key nearest the bottom of the execution stack, it is counted as twenty signature operations.
* Signature operations involving invalidly encoded signatures or public keys are not counted towards the limit
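
One way to read the accurate OP_CHECKMULTISIG accounting described above, as a hedged sketch; is_valid_sig stands in for actual signature verification and is not part of the BIP.

<source lang="python">
def checkmultisig_sigops_1_of_n(pubkeys_top_to_bottom, is_valid_sig):
    # Cost equals the number of public keys tried, scanning from the key
    # nearest the top of the execution stack until the signature matches.
    ops = 0
    for pubkey in pubkeys_top_to_bottom:
        ops += 1
        if is_valid_sig(pubkey):
            break
    return ops
</source>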
=== Add a new limit of 1,300,000,000 bytes hashed to compute transaction signatures per block ===
diff --git a/bip-0112.mediawiki b/bip-0112.mediawiki
index 63a77975..d6ed5460 100644
--- a/bip-0112.mediawiki
+++ b/bip-0112.mediawiki
@@ -36,7 +36,7 @@ When executed, if any of the following conditions are true, the script interpret
Otherwise, script execution will continue as if a NOP had been executed.
-BIP 68 prevents a non-final transaction from being selected for inclusion in a block until the corresponding input has reached the specified age, as measured in block-height or block-time. By comparing the argument to CHECKSEQUENCEVERIFY against the nSequence field, we indirectly verify a desired minimum age of the
+BIP 68 prevents a non-final transaction from being selected for inclusion in a block until the corresponding input has reached the specified age, as measured in block-height or block-time. By comparing the argument to CHECKSEQUENCEVERIFY against the nSequence field, we indirectly verify a desired minimum age of
the output being spent; until that relative age has been reached any script execution pathway including the CHECKSEQUENCEVERIFY will fail to validate, causing the transaction not to be selected for inclusion in a block.
diff --git a/bip-0119.mediawiki b/bip-0119.mediawiki
index acbe97cf..d661f4c4 100644
--- a/bip-0119.mediawiki
+++ b/bip-0119.mediawiki
@@ -89,7 +89,7 @@ def execute_bip_119(self):
self.context.precomputed_ctv_data = self.context.tx.get_default_check_template_precomputed_data()
# If the hashes do not match, return error
- if stack[-1] != self.context.tx.get_default_check_template_hash(self.context.nIn, self.context.precomputed_ctv_data)
+ if stack[-1] != self.context.tx.get_default_check_template_hash(self.context.nIn, self.context.precomputed_ctv_data):
return self.errors_with(errors.script_err_template_mismatch)
return self.return_as_nop()
diff --git a/bip-0126.mediawiki b/bip-0126.mediawiki
index f498b1cb..2c04eb45 100644
--- a/bip-0126.mediawiki
+++ b/bip-0126.mediawiki
@@ -14,7 +14,7 @@
When a Bitcoin transaction contains inputs that reference previous transaction outputs sent to different Bitcoin addresses, personally identifiable information of the user will leak into the blockchain in an uncontrolled manner. While undesirable, these transactions are frequently unavoidable due to the natural fragmentation of wallet balances over time.
-This document proposes a set of best practice guidelines which minimize the uncontrolled disclosure of personally identifiable information by defining standard forms for transactions containing heterogenous input scripts.
+This document proposes a set of best practice guidelines which minimize the uncontrolled disclosure of personally identifiable information by defining standard forms for transactions containing heterogeneous input scripts.
==Copyright==
@@ -23,8 +23,8 @@ This BIP is in the public domain.
==Definitions==
* '''Heterogenous input script transaction (HIT)''': A transaction containing multiple inputs where the scripts of the previous transaction outputs being consumed are not identical (e.g. a transaction spending outputs which were sent to more than one Bitcoin address)
-* '''Unavoidable heterogenous input script transaction''': A HIT created as a result of a user’s desire to create a new output with a value larger than the value of his wallet's largest existing unspent output
-* '''Intentional heterogenous input script transaction''': A HIT created as part of a user protection protocol for reducing uncontrolled disclosure of personally-identifying information (PII)
+* '''Unavoidable heterogeneous input script transaction''': A HIT created as a result of a user’s desire to create a new output with a value larger than the value of his wallet's largest existing unspent output
+* '''Intentional heterogeneous input script transaction''': A HIT created as part of a user protection protocol for reducing uncontrolled disclosure of personally-identifying information (PII)
Throughout this procedure, when input scripts are evaluated for uniqueness, "input script" should be interpreted to mean, "the script of the previous output referenced by an input to a transaction".
@@ -33,10 +33,10 @@ Throughout this procedure, when input scripts are evaluated for uniqueness, "inp
The recommendations in this document are designed to accomplish three goals:
# Maximise the effectiveness of user-protecting protocols: Users may find that protection protocols are counterproductive if such transactions have a distinctive fingerprint which renders them ineffective.
-# Minimise the adverse consequences of unavoidable heterogenous input transactions: If unavoidable HITs are indistinguishable from intentional HITs, a user creating an unavoidable HIT benefits from ambiguity with respect to graph analysis.
+# Minimise the adverse consequences of unavoidable heterogeneous input transactions: If unavoidable HITs are indistinguishable from intentional HITs, a user creating an unavoidable HIT benefits from ambiguity with respect to graph analysis.
# Limiting the effect on UTXO set growth: To date, non-standardized intentional HITs tend to increase the network's UTXO set with each transaction; this standard attempts to minimize this effect by standardizing unavoidable and intentional HITs to limit UTXO set growth.
-In order to achieve these goals, this specification proposes a set of best practices for heterogenous input script transaction creation. These practices accommodate all applicable requirements of both intentional and unavoidable HITs while maximising the effectiveness of both in terms of preventing uncontrolled disclosure of PII.
+In order to achieve these goals, this specification proposes a set of best practices for heterogeneous input script transaction creation. These practices accommodate all applicable requirements of both intentional and unavoidable HITs while maximising the effectiveness of both in terms of preventing uncontrolled disclosure of PII.
In order to achieve this, two forms of HIT are proposed: Standard form and alternate form.
@@ -44,13 +44,13 @@ In order to achieve this, two forms of HIT are proposed: Standard form and alter
Applications which wish to comply both with this procedure and BIP69 should apply this procedure prior to applying BIP69.
-==Standard form heterogenous input script transaction==
+==Standard form heterogeneous input script transaction==
===Rules===
A HIT is Standard form if it adheres to all of the following rules:
-# The number of unique output scripts must be equal to the number of unique inputs scripts (irrespective of the number of inputs and outputs).
+# The number of unique output scripts must be equal to the number of unique input scripts (irrespective of the number of inputs and outputs).
# All output scripts must be unique.
# At least one pair of outputs must be of equal value.
# The largest output in the transaction is a member of a set containing at least two identically-sized outputs.
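
A compact sketch of the four rules above (scripts are raw scriptPubKeys, values in satoshis; only these listed rules are checked, not the rest of the specification).

<source lang="python">
from collections import Counter

def is_standard_form_hit(input_scripts, outputs):
    """outputs is a list of (script, value) pairs."""
    if not outputs:
        return False
    out_scripts = [script for script, _ in outputs]
    values = [value for _, value in outputs]
    unique_outputs = set(out_scripts)
    if len(unique_outputs) != len(set(input_scripts)):  # rule 1
        return False
    if len(unique_outputs) != len(out_scripts):         # rule 2: no output script reuse
        return False
    value_counts = Counter(values)
    if max(value_counts.values()) < 2:                  # rule 3: a pair of equal-value outputs
        return False
    return value_counts[max(values)] >= 2               # rule 4: largest output is duplicated
</source>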
@@ -63,7 +63,7 @@ The requirement that all output scripts are unique prevents address reuse. Restr
The requirement for at least one pair of outputs in an intentional HIT to be of equal value results in optimal behavior, and causes intentional HITs to resemble unavoidable HITs.
-==Alternate form heterogenous input script transactions==
+==Alternate form heterogeneous input script transactions==
The formation of a standard form HIT is not possible in the following cases:
@@ -88,7 +88,7 @@ Clients which create intentional HITs must have the capability to form alternate
An HIT formed via the preceding procedure will adhere to the following conditions:
-# The number of unique inputs scripts must exceed the number of output scripts.
+# The number of unique input scripts must exceed the number of output scripts.
# All output scripts must be unique.
# At least one pair of outputs must be of equal value.
## "Standard outputs" refers to the set of outputs with equal value
@@ -100,7 +100,7 @@ An HIT formed via the preceding procedure will adhere to the following condition
## The sum of the inputs in the set minus the value of the change output is equal to the standard value with a tolerance equal to the transaction fee.
## Change outputs with a value of zero (virtual change outputs) are permitted. They are defined for the purpose of testing whether or not a HIT adheres to this specification but are not present in the version of the transaction which is broadcast to the network.
-==Non-compliant heterogenous input script transactions==
+==Non-compliant heterogeneous input script transactions==
If a user wishes to create an output that is larger than half the total size of their spendable outputs, or if their inputs are not distributed in a manner in which the alternate form procedure can be completed, then the user can not create a transaction which is compliant with this procedure.
diff --git a/bip-0127.mediawiki b/bip-0127.mediawiki
index 44a90d79..87071d8e 100644
--- a/bip-0127.mediawiki
+++ b/bip-0127.mediawiki
@@ -124,7 +124,7 @@ message FinalProof {
// Bitcoin transaction.
bytes proof_tx = 1;
- // The metadata of the ouputs used in the proof transaction.
+ // The metadata of the outputs used in the proof transaction.
repeated OutputMeta output_metadata = 2;
}
diff --git a/bip-0129.mediawiki b/bip-0129.mediawiki
index 8719fe42..b5dfae82 100644
--- a/bip-0129.mediawiki
+++ b/bip-0129.mediawiki
@@ -47,11 +47,14 @@ Concerns #4 and #5 should be handled by Signers and are out of scope of this pro
==Specification==
===Prerequisites===
-This proposal assumes the parties in the multisig support [https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki BIP-0032], [https://github.com/bitcoin/bips/blob/master/bip-0322.mediawiki BIP-0322], [https://github.com/bitcoin/bitcoin/blob/master/doc/descriptors.md the descriptor language] and [https://tools.ietf.org/html/rfc3686 AES encryption].
+This proposal assumes the parties in the multisig support [https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki BIP-0032], [https://github.com/bitcoin/bips/blob/master/bip-0322.mediawiki BIP-0322], [https://github.com/bitcoin/bips/blob/master/bip-0380.mediawiki BIP-0380 Output Script Descriptors] ([https://github.com/bitcoin/bips/blob/master/bip-0381.mediawiki BIP-0381],[https://github.com/bitcoin/bips/blob/master/bip-0382.mediawiki BIP-0382],[https://github.com/bitcoin/bips/blob/master/bip-0383.mediawiki BIP-0383]) and [https://tools.ietf.org/html/rfc3686 AES encryption].
===File Extensions===
All descriptor and key records should have a .bsms file extension. Encrypted data should have a .dat extension.
+===Newline===
+This specification uses line feed (LF) control character \n.
+
===Roles===
====Coordinator====
@@ -141,7 +144,7 @@ Whereas:
* Password = "No SPOF"
* Salt = TOKEN
* c = 2048
-* dkLen = 256
+* dkLen = 256 bits (32 bytes)
* DKey = Derived ENCRYPTION_KEY
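
For orientation, a sketch of that key derivation with the listed parameters. The PRF is assumed here to be HMAC-SHA512, which the excerpt above does not state, so treat it as an assumption rather than the normative choice.

<source lang="python">
import hashlib

def derive_encryption_key(token: bytes) -> bytes:
    # Password = "No SPOF", Salt = TOKEN, c = 2048, dkLen = 256 bits (32 bytes).
    return hashlib.pbkdf2_hmac("sha512", b"No SPOF", token, 2048, dklen=32)
</source>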
====Encryption Scheme====
@@ -452,7 +455,7 @@ sh(wsh(multi(2,[793cc70b/48'/0'/0'/1']xpub6ErVmcYYHmavsMgxEcTZyzN5sqth1ZyRpFNJC2
==Acknowledgement==
-Special thanks to Pavol Rusnak, Dmitry Petukhov, Christopher Allen, Craig Raw, Robert Spigler, Gregory Sanders, Ta Tat Tai, Michael Flaxman, Pieter Wuille, Salvatore Ingala, Andrew Chow and others for their feedback on the specification.
+Special thanks to Pavol Rusnak, Dmitry Petukhov, Christopher Allen, Craig Raw, Robert Spigler, Gregory Sanders, Ta Tat Tai, Michael Flaxman, Pieter Wuille, Salvatore Ingala, Ava Chow and others for their feedback on the specification.
==References==
diff --git a/bip-0132.mediawiki b/bip-0132.mediawiki
index e7aed292..173c9198 100644
--- a/bip-0132.mediawiki
+++ b/bip-0132.mediawiki
@@ -48,7 +48,7 @@ The author doesn't believe this is a problem because a BIP cannot be forced on c
== Process ==
-* '''Submit for Comments.''' The first BIP champion named in the proposal can call a "submit for comments" at any time by posting to the [https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev Dev Mailing List] mailling with the BIP number and a statement that the champion intends to immediately submit the BIP for comments.
+* '''Submit for Comments.''' The first BIP champion named in the proposal can call a "submit for comments" at any time by posting to the [https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev Dev Mailing List] mailing with the BIP number and a statement that the champion intends to immediately submit the BIP for comments.
** The BIP must have been assigned BIP-number (i.e. been approved by the BIP editor) to be submitted for comments.
* '''Comments.'''
** After a BIP has been submitted for comments, a two-week waiting period begins in which the community should transition from making suggestions about a proposal to publishing their opinions or concerns on the proposal.
diff --git a/bip-0133.mediawiki b/bip-0133.mediawiki
index c109f12f..b37370d9 100644
--- a/bip-0133.mediawiki
+++ b/bip-0133.mediawiki
@@ -5,7 +5,7 @@
  Author: Alex Morcos

diff --git a/bip-0141.mediawiki b/bip-0141.mediawiki
-A scriptPubKey (or redeemScript as defined in BIP16/P2SH) that consists of a 1-byte push opcode (for 0 to 16) followed by a data push between 2 and 40 bytes gets a new special meaning. The value of the first push is called the "version byte". The following byte vector pushed is called the "witness program".
+A scriptPubKey (or redeemScript as defined in BIP16/P2SH) that consists of a 1-byte push opcode (one of OP_0,OP_1,OP_2,...,OP_16) followed by a direct data push between 2 and 40 bytes gets a new special meaning. The value of the first push is called the "version byte". The following byte vector pushed is called the "witness program".
+In more detail, this means a scriptPubKey or redeemScript which consists of (in order):
+* First, byte 0x00 (OP_0) or any byte between 0x51 (OP_1) and 0x60 (OP_16) inclusive (the version byte).
+* Then, a byte ''L'' between 0x02 (push of 2 bytes) and 0x28 (push of 40 bytes) inclusive.
+* Finally, ''L'' arbitrary bytes (the witness program).
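
The byte pattern spelled out in the added bullets can be checked mechanically; a sketch for the native (non-P2SH) case, using only the rules quoted above.

<source lang="python">
def is_native_witness_program(script: bytes) -> bool:
    if not 4 <= len(script) <= 42:
        return False
    version_byte = script[0]
    if version_byte != 0x00 and not 0x51 <= version_byte <= 0x60:
        return False                      # OP_0 or OP_1..OP_16
    push_len = script[1]
    if not 2 <= push_len <= 40:
        return False                      # L must be between 2 and 40
    return len(script) == 2 + push_len    # exactly L program bytes follow
</source>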
There are two cases in which witness validation logic are triggered. Each case determines the location of the witness version byte and program, as well as the form of the scriptSig:
# Triggered by a scriptPubKey that is exactly a push of a version byte, plus a push of a witness program. The scriptSig must be exactly empty or validation fails. (''"native witness program"'')
# Triggered when a scriptPubKey is a P2SH script, and the BIP16 redeemScript pushed in the scriptSig is exactly a push of a version byte plus a push of a witness program. The scriptSig must be exactly a push of the BIP16 redeemScript or validation fails. (''"P2SH witness program"'')
-If the version byte is 0, and the witness program is 20 bytes:
+If the version byte is 0, and the witness program is 20 bytes (''L = 20''):
* It is interpreted as a pay-to-witness-public-key-hash (P2WPKH) program.
* The witness must consist of exactly 2 items (≤ 520 bytes each). The first one a signature, and the second one a public key.
* The HASH160 of the public key must match the 20-byte witness program.
* After normal script evaluation, the signature is verified against the public key with CHECKSIG operation. The verification must result in a single TRUE on the stack.
-If the version byte is 0, and the witness program is 32 bytes:
+If the version byte is 0, and the witness program is 32 bytes (''L = 32''):
* It is interpreted as a pay-to-witness-script-hash (P2WSH) program.
* The witness must consist of an input stack to feed to the script, followed by a serialized script (witnessScript).
* The witnessScript (≤ 10,000 bytes) is popped off the initial witness stack. SHA256 of the witnessScript must match the 32-byte witness program.
@@ -276,7 +280,7 @@ These commitments could be included in the extensible commitment structure throu
Since a version byte is pushed before a witness program, and programs with unknown versions are always considered as anyone-can-spend script, it is possible to introduce any new script system with a soft fork. The witness as a structure is not restricted by any existing script semantics and constraints, the 520-byte push limit in particular, and therefore allows arbitrarily large scripts and signatures.
-Examples of new script system include Schnorr signatures which reduce the size of multisig transactions dramatically, Lamport signature which is quantum computing resistance, and Merklized abstract syntax trees which allow very compact witness for conditional scripts with extreme complexity.
+Examples of new script systems include Schnorr signatures, which reduce the size of multisig transactions dramatically; Lamport signatures, which are quantum computing resistant; and Merklized abstract syntax trees, which allow very compact witnesses for conditional scripts with extreme complexity.
=== Per-input lock-time and relative-lock-time ===
@@ -303,7 +307,7 @@ As a soft fork, older software will continue to operate without modification. N
This BIP will be deployed by "version bits" BIP9 with the name "segwit" and using bit 1.
-For Bitcoin mainnet, the BIP9 starttime will be midnight 15 november 2016 UTC (Epoch timestamp 1479168000) and BIP9 timeout will be midnight 15 november 2017 UTC (Epoch timestamp 1510704000).
+For Bitcoin mainnet, the BIP9 starttime will be midnight 15 November 2016 UTC (Epoch timestamp 1479168000) and BIP9 timeout will be midnight 15 November 2017 UTC (Epoch timestamp 1510704000).
For Bitcoin testnet, the BIP9 starttime will be midnight 1 May 2016 UTC (Epoch timestamp 1462060800) and BIP9 timeout will be midnight 1 May 2017 UTC (Epoch timestamp 1493596800).
diff --git a/bip-0143.mediawiki b/bip-0143.mediawiki
index 81763a07..9935eaa2 100644
--- a/bip-0143.mediawiki
+++ b/bip-0143.mediawiki
@@ -39,12 +39,12 @@ A new transaction digest algorithm is defined, but only applicable to sigops in
9. nLocktime of the transaction (4-byte little endian)
10. sighash type of the signature (4-byte little endian)
-Semantics of the original sighash types remain unchanged, except the followings:
+Semantics of the original sighash types remain unchanged, except the following:
# The way of serialization is changed;
# All sighash types commit to the amount being spent by the signed input;
# FindAndDelete of the signature is not applied to the scriptCode;
# OP_CODESEPARATOR(s) after the last executed OP_CODESEPARATOR are not removed from the scriptCode (the last executed OP_CODESEPARATOR and any script before it are always removed);
-# SINGLE does not commit to the input index. When ANYONECANPAY is not set, the semantics are unchanged since hashPrevouts and outpoint together implictly commit to the input index. When SINGLE is used with ANYONECANPAY, omission of the index commitment allows permutation of the input-output pairs, as long as each pair is located at an equivalent index.
+# SINGLE does not commit to the input index. When ANYONECANPAY is not set, the semantics are unchanged since hashPrevouts and outpoint together implicitly commit to the input index. When SINGLE is used with ANYONECANPAY, omission of the index commitment allows permutation of the input-output pairs, as long as each pair is located at an equivalent index.
The items 1, 4, 7, 9, 10 have the same meaning as the original algorithm.
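
The hashPrevouts value mentioned above is, in the non-ANYONECANPAY case, a double SHA256 over the serialization of all input outpoints; a sketch under that reading, with illustrative helper names.

<source lang="python">
import hashlib
import struct

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def hash_prevouts(outpoints):
    # outpoints: list of (txid, vout), txid being the 32 bytes used in the
    # transaction serialization and vout an integer output index.
    serialized = b"".join(txid + struct.pack("<I", vout) for txid, vout in outpoints)
    return dsha256(serialized)
</source>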
@@ -187,7 +187,7 @@ To ensure consistency in consensus-critical behaviour, developers should test th
nHashType: 01000000
sigHash: c37af31116d1b27caf68aae9e3ac82f1477929014d5b917657d0eb49478cb670
- signature: 304402203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4518331561406f90300e8f3358f51928d43c212a8caed02de67eebee
+ signature: 304402203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4518331561406f90300e8f3358f51928d43c212a8caed02de67eebee01
The serialized signed transaction is: 01000000000102fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f00000000494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5cdd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01eeffffffef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a0100000000ffffffff02202cb206000000001976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac9093510d000000001976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac000247304402203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4518331561406f90300e8f3358f51928d43c212a8caed02de67eebee0121025476c2e83188368da1ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee635711000000
@@ -551,7 +551,7 @@ These examples show that FindAndDelete for the signature is not app
nLockTime: 00000000
The input comes from a P2WSH witness program:
- scriptPubKey : 00209e1be07558ea5cc8e02ed1d80c0911048afad949affa36d5c3951e3159dbea19, value: 200000
+ scriptPubKey : 00209e1be07558ea5cc8e02ed1d80c0911048afad949affa36d5c3951e3159dbea19, value: 0.00200000
redeemScript : OP_CHECKSIGVERIFY <0x30450220487fb382c4974de3f7d834c1b617fe15860828c7f96454490edd6d891556dcc9022100baf95feb48f845d5bfc9882eb6aeefa1bc3790e39f59eaa46ff7f15ae626c53e01>
ad4830450220487fb382c4974de3f7d834c1b617fe15860828c7f96454490edd6d891556dcc9022100baf95feb48f845d5bfc9882eb6aeefa1bc3790e39f59eaa46ff7f15ae626c53e01
diff --git a/bip-0151.mediawiki b/bip-0151.mediawiki
index 793c2441..8bc11978 100644
--- a/bip-0151.mediawiki
+++ b/bip-0151.mediawiki
@@ -85,7 +85,7 @@ a 64 bit nonce and a 64 bit counter into 64 bytes of output. This output is used
Poly1305, also by Daniel Bernstein [4], is a one-time Carter-Wegman MAC that computes a 128 bit integrity tag given a message and a single-use
256 bit secret key.
-The chacha20-poly1305@openssh.com specified and defined by openssh [5] combines these two primitives into an authenticated encryption mode. The construction used is based on that proposed for TLS by Adam Langley [6], but differs in the layout of data passed to the MAC and in the addition of encyption of the packet lengths.
+The chacha20-poly1305@openssh.com specified and defined by openssh [5] combines these two primitives into an authenticated encryption mode. The construction used is based on that proposed for TLS by Adam Langley [6], but differs in the layout of data passed to the MAC and in the addition of encryption of the packet lengths.
K_1 must be used to only encrypt the payload size of the encrypted message to avoid leaking information by revealing the message size.
diff --git a/bip-0152.mediawiki b/bip-0152.mediawiki
index 8200714b..fad17460 100644
--- a/bip-0152.mediawiki
+++ b/bip-0152.mediawiki
@@ -211,7 +211,7 @@ There are several design goals for the Short ID calculation:
SipHash is a secure, fast, and simple 64-bit MAC designed for network traffic authentication and collision-resistant hash tables. We truncate the output from SipHash-2-4 to 48 bits (see next section) in order to minimize space. The resulting 48-bit hash is certainly not large enough to avoid intentionally created individual collisions, but by using the block hash as a key to SipHash, an attacker cannot predict what keys will be used once their transactions are actually included in a relayed block. We mix in a per-connection 64-bit nonce to obtain independent short IDs on every connection, so that even block creators cannot control where collisions occur, and random collisions only ever affect a small number of connections at any given time. The mixing is done using SHA256(block_header || nonce), which is slow compared to SipHash, but only done once per block. It also adds the ability for nodes to choose the nonce in a better than random way to minimize collisions, though that is not necessary for correct behaviour. Conversely, nodes can also abuse this ability to increase their ability to introduce collisions in the blocks they relay themselves. However, they can already cause more problems by simply refusing to relay blocks. That is inevitable, and this design only seeks to prevent network-wide misbehavior.
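
A sketch of the short ID computation this paragraph describes, assuming a siphash24(k0, k1, data) helper is available (the Python standard library has no SipHash) and following the key derivation given in the BIP's short ID section.

<source lang="python">
import hashlib
import struct

def short_txid(block_header: bytes, nonce: int, wtxid: bytes, siphash24) -> int:
    # Key material: single SHA256 of the 80-byte header with the 8-byte
    # little-endian nonce appended; k0/k1 are its first two LE 64-bit words.
    digest = hashlib.sha256(block_header + struct.pack("<Q", nonce)).digest()
    k0, k1 = struct.unpack("<QQ", digest[:16])
    # Truncate the SipHash-2-4 output to the low 48 bits.
    return siphash24(k0, k1, wtxid) & 0xFFFFFFFFFFFF
</source>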
-====Random collision probabilty====
+====Random collision probability====
Thanks to the block-header-based SipHash keys, we can assume that the only collisions on links between honest nodes are random ones.
diff --git a/bip-0155.mediawiki b/bip-0155.mediawiki
index 3e7b0d82..0ec68019 100644
--- a/bip-0155.mediawiki
+++ b/bip-0155.mediawiki
@@ -117,6 +117,11 @@ The list of reserved network IDs is as follows:
| CJDNS
| 16
| Cjdns overlay network address
+|-
+| 0x07
+| YGGDRASIL
+| 16
+| Yggdrasil overlay network address
|}
Clients are RECOMMENDED to gossip addresses from all known networks even if they are currently not connected to some of them. That could help multi-homed nodes and make it more difficult for an observer to tell which networks a node is connected to.
@@ -184,6 +189,10 @@ I2P addresses MUST be sent with the I2P network ID, with the decode
Cjdns addresses are simply IPv6 addresses in the fc00::/8 range[https://github.com/cjdelisle/cjdns/blob/6e46fa41f5647d6b414612d9d63626b0b952746b/doc/Whitepaper.md#pulling-it-all-together Cjdns whitepaper: Pulling It All Together]. They MUST be sent with the CJDNS network ID.
+==Appendix E: Yggdrasil address encoding==
+
+Yggdrasil addresses are simply IPv6 addresses in the 0200::/7 range[https://yggdrasil-network.github.io/faq.html#will-yggdrasil-conflict-with-my-network-routing Yggdrasil FAQ]. They MUST be sent with the YGGDRASIL network ID.
+
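
A one-line check of the address rule stated in this appendix (the Cjdns case in the previous appendix is analogous with fc00::/8).

<source lang="python">
import ipaddress

def is_yggdrasil_address(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return ip.version == 6 and ip in ipaddress.ip_network("0200::/7")
</source>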
==References==
diff --git a/bip-0158.mediawiki b/bip-0158.mediawiki
 new_bit_stream instantiates a new writable bit stream
@@ -312,6 +309,8 @@ complete serialization of a filter is:
* N, encoded as a CompactSize
* The bytes of the compressed filter itself
+A zero element filter MUST be written as one byte containing zeroes.
+
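
A sketch of that serialization, including the zero-element rule added above (CompactSize is the standard Bitcoin variable-length integer encoding).

<source lang="python">
def compact_size(n: int) -> bytes:
    if n < 0xfd:
        return bytes([n])
    if n <= 0xffff:
        return b"\xfd" + n.to_bytes(2, "little")
    if n <= 0xffffffff:
        return b"\xfe" + n.to_bytes(4, "little")
    return b"\xff" + n.to_bytes(8, "little")

def serialize_filter(n: int, compressed_filter: bytes) -> bytes:
    # A filter with zero elements is written as a single zero byte.
    if n == 0:
        return b"\x00"
    return compact_size(n) + compressed_filter
</source>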
==== Signaling ====
This BIP allocates a new service bit:
diff --git a/bip-0158/gentestvectors.go b/bip-0158/gentestvectors.go
index 3435eb3c..e51b9842 100644
--- a/bip-0158/gentestvectors.go
+++ b/bip-0158/gentestvectors.go
@@ -207,7 +207,7 @@ func main() {
prevOutputScripts, err := fetchPrevOutputScripts(client, block)
if err != nil {
- fmt.Println("Couldn't fetch prev output scipts: ", err)
+ fmt.Println("Couldn't fetch prev output scripts: ", err)
return
}
diff --git a/bip-0173.mediawiki b/bip-0173.mediawiki
index 1fdd8bed..7087fffa 100644
--- a/bip-0173.mediawiki
+++ b/bip-0173.mediawiki
@@ -11,6 +11,7 @@
Created: 2017-03-20
License: BSD-2-Clause
Replaces: 142
+ Superseded-By: 350
==Introduction==
@@ -403,3 +404,12 @@ separator).
This document is inspired by the [https://rusty.ozlabs.org/?p=578 address proposal] by Rusty Russell, the
[https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2014-February/004402.html base32] proposal by Mark Friedenbach, and had input from Luke Dashjr,
Johnson Lau, Eric Lombrozo, Peter Todd, and various other reviewers.
+
+==Disclosures (added 2024)==
+
+Due to an oversight in the design of bech32, this checksum scheme is not always
+robust against
+[[https://gist.github.com/sipa/a9845b37c1b298a7301c33a04090b2eb|the insertion
+and deletion of fewer than 5 consecutive characters]]. Due to this weakness,
+[[bip-0350.mediawiki|BIP-350]] proposes using the scheme described in this BIP
+only for Native Segwit v0 outputs.
diff --git a/bip-0174.mediawiki b/bip-0174.mediawiki
index f4c0807c..95a5573b 100644
--- a/bip-0174.mediawiki
+++ b/bip-0174.mediawiki
@@ -2,7 +2,7 @@
BIP: 174
Layer: Applications
Title: Partially Signed Bitcoin Transaction Format
- Author: Andrew Chow

diff --git a/bip-0322.mediawiki b/bip-0322.mediawiki
 A signer may construct a proof of funds, demonstrating control of a set of UTXOs, by constructing a full signature as above, with the following modifications.
-* message_challenge is unused and shall be set to OP_TRUE
-* Similarly, message_signature is then empty.
 * All outputs that the signer wishes to demonstrate control of are included as additional inputs of to_sign, and their witness and scriptSig data should be set as though these outputs were actually being spent.
 Unlike an ordinary signature, validators of a proof of funds need access to the current UTXO set, to learn that the claimed inputs exist on the blockchain, and to learn their scriptPubKeys.
diff --git a/bip-0324/test_sage_decoding.py b/bip-0324/test_sage_decoding.py
index c26c3347..1ec5fdf4 100644
--- a/bip-0324/test_sage_decoding.py
+++ b/bip-0324/test_sage_decoding.py
@@ -5,8 +5,8 @@ Instructions:
* Clone the SwiftEC repository, and enter the directory:
git clone https://github.com/Jchavezsaab/SwiftEC
- git checkout 5320a25035d91addde29d14164cce684b56a12ed
cd SwiftEC
+ git checkout 5320a25035d91addde29d14164cce684b56a12ed
* Generate parameters for the secp256k1 curve:
diff --git a/bip-0327.mediawiki b/bip-0327.mediawiki
index 07b40f53..b5600ab3 100644
--- a/bip-0327.mediawiki
+++ b/bip-0327.mediawiki
@@ -123,7 +123,7 @@ This is by design: All algorithms in this proposal handle multiple signers who (
and applications are not required to check for duplicate individual public keys.
In fact, applications are recommended to omit checks for duplicate individual public keys in order to simplify error handling.
Moreover, it is often impossible to tell at key aggregation which signer is to blame for the duplicate, i.e., which signer came up with an individual public key honestly and which disruptive signer copied it.
-In contrast, MuSig2 is designed to identify disruptive signers at signing time (see [[#identifiying-disruptive-signers|Identifiying Disruptive Signers]]).
+In contrast, MuSig2 is designed to identify disruptive signers at signing time (see [[#identifying-disruptive-signers|Identifying Disruptive Signers]]).
While the algorithms in this proposal are able to handle duplicate individual public keys, there are scenarios where applications may choose to abort when encountering duplicates.
For example, we can imagine a scenario where a single entity creates a MuSig2 setup with multiple signing devices.
@@ -211,7 +211,7 @@ The bit can be obtained with ''GetPlainPubkey(keyagg_ctx)[0] & 1''.
The following specification of the algorithms has been written with a focus on clarity.
As a result, the specified algorithms are not always optimal in terms of computation and space.
-In particular, some values are recomputed but can be cached in actual implementations (see [[#signing-flow|Signing Flow]]).
+In particular, some values are recomputed but can be cached in actual implementations (see [[#general-signing-flow|General Signing Flow]]).
=== Notation ===
@@ -367,7 +367,7 @@ Algorithm ''ApplyTweak(keyagg_ctx, tweak, is_xonly_t)'':
Algorithm ''NonceGen(sk, pk, aggpk, m, extra_in)'':
* Inputs:
** The secret signing key ''sk'': a 32-byte array (optional argument)
-** The individual public key ''pk'': a 33-byte array (see [[#modifications-to-nonce-generation|Modifications to Nonce Generation]] for the reason that this argument is mandatory)
+** The individual public key ''pk'': a 33-byte array (see [[#signing-with-tweaked-individual-keys|Signing with Tweaked Individual Keys]] for the reason that this argument is mandatory)
** The x-only aggregate public key ''aggpk'': a 32-byte array (optional argument)
** The message ''m'': a byte array (optional argument)In theory, the allowed message size is restricted because SHA256 accepts byte strings only up to size of 2^61-1 bytes (and because of the 8-byte length encoding).
** The auxiliary input ''extra_in'': a byte array with ''0 ≤ len(extra_in) ≤ 2<sup>32</sup>-1'' (optional argument)
@@ -465,7 +465,7 @@ Algorithm ''Sign(secnonce, sk, session_ctx)'':
* Fail if ''pk ≠ secnonce[64:97]''
* Let ''a = GetSessionKeyAggCoeff(session_ctx, P)''; fail if that failsFailing ''Sign'' when ''GetSessionKeyAggCoeff(session_ctx, P)'' fails is not necessary for unforgeability. It merely indicates to the caller that the scheme is not being used correctly.
* Let ''g = 1'' if ''has_even_y(Q)'', otherwise let ''g = -1 mod n''
-* Let ''d = g⋅gacc⋅d' mod n'' (See [[negation-of-the-secret-key-when-signing|Negation Of The Secret Key When Signing]])
+* Let ''d = g⋅gacc⋅d' mod n'' (See [[#negation-of-the-secret-key-when-signing|Negation Of The Secret Key When Signing]])
* Let ''s = (k1 + b⋅k2 + e⋅a⋅d) mod n''
* Let ''psig = bytes(32, s)''
* Let ''pubnonce = cbytes(k1'⋅G) || cbytes(k2'⋅G)''
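
The scalar arithmetic of this Sign step, isolated as a sketch; nonce state handling, key negation and the surrounding validation from the algorithm above are omitted.

<source lang="python">
def partial_signature(k1: int, k2: int, b: int, e: int, a: int, d: int, n: int) -> bytes:
    # s = (k1 + b*k2 + e*a*d) mod n, encoded as 32 big-endian bytes (psig).
    s = (k1 + b * k2 + e * a * d) % n
    return s.to_bytes(32, "big")
</source>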
diff --git a/bip-0330.mediawiki b/bip-0330.mediawiki
index c24ea427..996f74e1 100644
--- a/bip-0330.mediawiki
+++ b/bip-0330.mediawiki
@@ -210,7 +210,7 @@ The reconcildiff message is used by reconciliation initiator to announce transac
| uint32[] || ask_shortids || The short IDs that the sender did not have.
|}
-Upon receipt a "reconcildiff" message with ''success=1'' (reconciliation success), a node sends an "inv" message for the transactions requested by 32-bit IDs (first vector) containing their wtxids (with parent transactions occuring before their dependencies).
+Upon receipt a "reconcildiff" message with ''success=1'' (reconciliation success), a node sends an "inv" message for the transactions requested by 32-bit IDs (first vector) containing their wtxids (with parent transactions occurring before their dependencies).
If ''success=0'' (reconciliation failure), receiver should announce all transactions from the reconciliation set via an "inv" message.
In both cases, transactions the sender of the message thinks the receiver is missing are announced via an "inv" message.
The regular "inv" deduplication should apply.
diff --git a/bip-0330/minisketch.py b/bip-0330/minisketch.py
index f64286fd..5e397793 100755
--- a/bip-0330/minisketch.py
+++ b/bip-0330/minisketch.py
@@ -120,7 +120,7 @@ def find_roots_inner(p, a):
return []
elif len(p) == 2:
return [p[0]]
- # Otherwise, split p in left*right using paramater a_vals[0].
+ # Otherwise, split p in left*right using parameter a_vals[0].
t = poly_monic(poly_trace(p, a))
left = poly_gcd(list(p), t)
right = poly_divmod(list(left), p)
diff --git a/bip-0331.mediawiki b/bip-0331.mediawiki
new file mode 100644
index 00000000..08927a23
--- /dev/null
+++ b/bip-0331.mediawiki
@@ -0,0 +1,430 @@
+  BIP: 331
+  Layer: Peer Services
+  Title: Ancestor Package Relay
+  Author: Gloria Zhao
+  Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0331
+  Status: Draft
+  Type: Standards Track
+  Created: 2022-08-08
+  License: BSD-3-Clause
+  Post-History: 2022-05-17 https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-May/020493.html [bitcoin-dev] post
+
+==Abstract==
+
+Peer-to-peer protocol messages enabling nodes to request and relay the unconfirmed ancestor package of a given transaction, and to request and relay transactions in batches.
+
+==Motivation==
+
+===Propagate High Feerate Transactions===
+
+Since v0.13, Bitcoin Core has used ancestor packages instead of individual transactions to evaluate the incentive compatibility of transactions in the mempool [https://github.com/bitcoin/bitcoin/pull/7594 Add tracking of ancestor packages] and selecting them for inclusion in blocks [https://github.com/bitcoin/bitcoin/pull/7600 Select transactions using feerate-with-ancestors]. Incentive-compatible mempool and miner policies help create a fair, fee-based market for block space. While miners maximize transaction fees in order to earn higher block rewards, non-mining users participating in transaction relay reap many benefits from employing policies that result in a mempool with similar contents, including faster compact block relay and more accurate fee estimation. Additionally, users may take advantage of mempool and miner policy to bump the priority of their transactions by attaching high-fee descendants (Child Pays for Parent or CPFP).
+
+Only individually considering transactions for submission to the mempool creates a limitation in the node's ability to determine which transactions to include in the mempool, since it cannot take into account descendants until all the transactions are in the mempool. Similarly, it cannot use a transaction's descendants when considering which of two conflicting transactions to keep (Replace by Fee or RBF).
+
+When a user's transaction does not meet a mempool's minimum feerate and they cannot create a replacement transaction directly, their transaction will simply be rejected by this mempool or evicted if already included. They also cannot attach a descendant to pay for replacing a conflicting transaction; it would be rejected for spending inputs that do not exist.
+
+This limitation harms users' ability to fee-bump their transactions. Further, it presents security and complexity issues in contracting protocols which rely on presigned, time-sensitive transactions to prevent cheating.
+'''Examples of time-sensitive pre-signed transactions in L2 protocols.'''
+* [https://github.com/lightning/bolts/blob/master/03-transactions.md#htlc-timeout-and-htlc-success-transactions HTCL-Timeout in LN Penalty]
+* [https://github.com/revault/practical-revault/blob/master/transactions.md#cancel_tx Unvault Cancel in Revault]
+* [https://github.com/discreetlogcontracts/dlcspecs/blob/master/Transactions.md#refund-transaction Refund Transaction in Discreet Log Contracts]
+* [https://gist.github.com/instagibbs/60264606e181451e977e439a49f69fe1 Updates in Eltoo]
+* [https://github.com/ElementsProject/peerswap/blob/master/docs/peer-protocol.md#claim-transaction Claim Transactions in PeerSwap]
+In other words, a key security assumption of many contracting protocols is that all parties can propagate and confirm transactions in a timely manner. Increasing attention has been brought to "pinning attacks," a type of censorship in which the attacker uses mempool policy restrictions to prevent a transaction from being relayed or getting mined.
+'''Concerns for pinning attacks in L2 protocols'''
+* [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-May/020458.html Greg Sanders, "Bringing a nuke to a knife fight: Transaction introspection to stop RBF pinning"]
+* [https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002639.html Matt Corallo, "RBF Pinning with Counterparties and Competing Interest"]
+* [https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-June/002758.html Antoine Riard, "Pinning : The Good, The Bad, The Ugly"]
+* [https://github.com/t-bast/lightning-docs/blob/master/pinning-attacks.md Bastien Teinturier, "Pinning Attacks"]
+* [https://gist.github.com/instagibbs/60264606e181451e977e439a49f69fe1 Greg Sanders, "Eltoo Pinning"]
+
+These transactions must meet a certain confirmation target to be effective, but their feerates are negotiated well ahead of broadcast time. If the forecast feerate was too low and no fee-bumping options are available, attackers can steal money from their counterparties. Always overestimating fees may sidestep this issue (but only while mempool traffic is low and predictable), but this solution is not guaranteed to work and wastes users' money. For some attacks, the available defenses require nodes to have a bird's-eye view of Bitcoin nodes' mempools, which is an unreasonable security requirement.
+
+Part of the solution is to enable nodes to consider packages of transactions as a unit, e.g. one or more low-fee ancestor transactions with a high-fee descendant, instead of separately. A package-aware mempool policy can help determine if it would actually be economically rational to accept a transaction to the mempool if it doesn't meet fee requirements individually. Network-wide adoption of these policies would create a more purely-feerate-based market for block space and allow contracting protocols to adjust fees (and therefore mining priority) at broadcast time.
+
+Theoretically, developing a safe and incentive-compatible package mempool acceptance policy is sufficient to solve this issue. Nodes could opportunistically accept packages (e.g. by trying combinations of transactions rejected from their mempools), but this practice would likely be inefficient at best and open new Denial of Service attacks at worst. As such, this proposal suggests adding new p2p messages enabling nodes to request and share package-validation-related information with one another, resulting in a more efficient and reliable way to propagate packages.
+
+===Handle Orphans Better===
+
+Txid-based transaction relay is problematic since a transaction's witness may be malleated without changing its txid; a node cannot use txid to deduplicate transactions it has already downloaded or validated. Ideally, two nodes that both support BIP339 wtxid-based transaction relay shouldn't ever need to use txid-based transaction relay.
+
+A single use case of txid-based relay remains: handling "orphan" transactions that spend output(s) from an unconfirmed transaction the receiving node is unaware of. Orphan transactions are very common for new nodes that have just completed Initial Block Download and do not have an up-to-date mempool. Nodes also download transactions from multiple peers. If the peer from which a child transaction was requested responds faster than the peer from which its parent was requested, that child is seen as an orphan transaction.
+
+Nodes may handle orphans by storing them in a cache and requesting any missing parent(s) by txid (prevouts specify txid, not wtxid). These parents may end up being orphans as well, if they also spend unconfirmed inputs that the node is unaware of. This method of handling orphans is problematic for two reasons: it requires nodes to allocate memory for unvalidated data received on the p2p network and it relies on txid-based relay between two wtxid-relay peers.
+
+This proposal makes orphan resolution more efficient and no longer require txid-based relay.
+
+==Definitions==
+
+Given any two transactions Tx0 and Tx1 where Tx1 spends an output of Tx0, Tx0 is a '''parent''' of Tx1 and Tx1 is a '''child''' of Tx0.
+
+A transaction's '''ancestors''' include, recursively, its parents, the parents of its parents, etc. A transaction's '''descendants''' include, recursively, its children, the children of its children, etc. A transaction's parent is its ancestor, but an ancestor is not necessarily a parent.
+
+A '''package''' is a list of transactions, representable by a connected Directed Acyclic Graph (a directed edge exists between a transaction that spends the output of another transaction). In this proposal, a package is limited to unconfirmed transactions.
+
+An '''ancestor package''' consists of an unconfirmed transaction with all of its unconfirmed ancestors.
+
+In a '''topologically sorted''' package, each parent appears somewhere in the list before its child.
+
+==Specification==
+
+Ancestor Package Relay includes two parts: a package information round and a transaction data download round.
+The package information round is used to help a receiver learn what transactions are in a package and decide whether they want to download them. The transaction data round is used to help a node download multiple transactions in one message instead of as separate messages.
+'''Why are package information and transaction data rounds both necessary?'''
+Several alternative designs were considered. One should measure alternative solutions based on the resources used to communicate (not necessarily trustworthy) information: We would like to minimize network bandwidth, avoid downloading a transaction more than once, avoid downloading transactions that are eventually rejected, and minimize storage allocated for not-yet-validated transactions.
+
+PKG_RELAY_PKGTXNS = (1 << 0) bit in their "sendpackages" messages during version handshake. They indicate support for the ancestor package information round ("ancpkginfo", "MSG_ANCPKGINFO") using the PKG_RELAY_ANC = (1 << 1) bit in their "sendpackages" messages during version handshake.
+
+===Protocol Flow Examples===
+
+This package relay protocol satisfies both use cases (orphan transaction handling and high-feerate
+transaction paying for low-feerate ancestors).
+
+====Orphan Transaction Handling====
+
+Upon receiving an orphan transaction, a node may request ancestor package information delineating
+the wtxids of the transaction's unconfirmed ancestors. This is done without using txid-based relay.
+The package information can be used to request transaction data. As these transactions are dependent
+upon one another to be valid, the transactions can be requested and sent as a batch.
+
+Contrast this protocol with legacy orphan handling, which requires requesting the missing
+transactions by their txids and may require new round trips for each generation of missing parents.
+[[File:./bip-0331/orphan_handling_flow.png|1000px]]
+
+====Fee-Bumped Transactions====
+
+Too-low-feerate transactions (i.e. below the node's minimum mempool feerate) with high-feerate
+descendants can also be relayed this way. If the peers are using BIP133 fee filters and a
+low-feerate transaction is below the node's fee filter, the sender will not announce it. The
+high-feerate transaction will be sent by the sender and received and handled as an orphan by the
+receiver; the transactions are then validated together as a package, so the protocol naturally works
+for this use case.
+
+This does not mean BIP133 is required for package relay to work, provided that nodes do not
+immediately reject transactions previously found to be too low feerate. If the low-feerate
+transaction was sent and rejected, the receiver should later re-request and accept it after learning
+that it is the ancestor of another transaction and that the two meet the receiver's mempool policy
+requirements when validated together.
+
+[[File:./bip-0331/package_cpfp_flow.png|600px]]
+
+This protocol is receiver-initiated only; nodes do not proactively announce packages to their peers.
+'''Why no sender-initiated protocol?''' Sender-initiated package
+relay can, theoretically, save a round trip by notifying the receiver ahead of time that they will
+probably need to request and validate a group of transactions together in order for them to be
+accepted. As with any proactive communication, there is a chance that the receiver already knows
+this information, so this network bandwidth may be wasted. Shortened latency is less significant
+than wasted bandwidth.
+
+The logic used to decide when to announce a package proactively determines whether it is a net
+increase or decrease for overall bandwidth usage. However, it is difficult to design anything to
+save bandwidth without any idea of what its bandwidth usage actually looks like in practice. No
+historical data is available, as one of the primary goals of this protocol is to enable
+currently-rejected transactions to propagate. After deploying receiver-initiated package relay, we
+can observe its usage and then introduce a sender-initiated package relay protocol informed by data
+collected from the p2p network.
+
+===Combined Hash===
+
+A "combined hash" serves as a unique "package id" for some list of transactions and helps provide a
+meaningful but short "notfound" response to "getpkgtxns."
+
+The combined hash of a package of transactions is equal to the sha256 hash of the transactions'
+wtxids, concatenated in lexicographical order.
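+
+A sketch of this calculation, assuming 32-byte wtxids compared as raw bytes and a single round of
+SHA256 (serialization details not stated here are assumptions, not normative):
+
+<source lang="python">
+from hashlib import sha256
+
+def combined_hash(wtxids: list) -> bytes:
+    """Sort the 32-byte wtxids lexicographically, concatenate them, and hash the result."""
+    return sha256(b"".join(sorted(wtxids))).digest()
+</source>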
+
+===New Messages===
+
+Four new protocol messages and two inv types are added.
+
+====sendpackages====
+
+{|
+| Field Name || Type || Size || Purpose
+|-
+|versions || uint64_t || 8 || Bit field that is 64 bits wide, denoting the package versions supported by the sender.
+|-
+|}
+
+# The "sendpackages" message has the structure defined above, with pchCommand == "sendpackages".
+
+# During version handshake, nodes should send one "sendpackages" message indicating they support package relay, with the versions field indicating which versions they support.
+
+# The "sendpackages" message MUST be sent before sending a "verack" message. If a "sendpackages" message is received after "verack", the sender may be disconnected.
+
+# Upon successful connection ("verack" sent by both peers), a node may relay packages with the peer if they did not set "fRelay" to false in the "version" message, both peers sent "wtxidrelay", and both peers sent "sendpackages" for matching version bit(s). Unknown bits (including versions==0) should be ignored. Peers should relay packages corresponding to versions that both sent "sendpackages" for (see the sketch after this list).'''Is it ok to send "sendpackages" to a peer that specified fRelay=false in their "version" message?'''
+Yes, this is allowed in order to reduce the number of negotiation steps. This means nodes can
+announce features without first checking what the other peer has sent, and then apply negotiation
+logic at the end based on what was sent and received. See [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-May/020510.html this discussion].
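+
+A sketch of this negotiation, applied once both peers have sent "verack" (the peer-state fields
+shown are hypothetical bookkeeping, not protocol messages):
+
+<source lang="python">
+PKG_RELAY_PKGTXNS = 1 << 0
+PKG_RELAY_ANC = 1 << 1
+
+def negotiated_package_versions(peer) -> int:
+    """Return the bitfield of package relay versions usable with this peer (0 = none)."""
+    if not peer.frelay:
+        return 0  # peer set fRelay=false in its "version" message
+    if not (peer.sent_wtxidrelay and peer.received_wtxidrelay):
+        return 0  # wtxid-based relay was not negotiated
+    # Unknown bits (including versions == 0) are ignored by taking the intersection.
+    return peer.sent_sendpackages_versions & peer.received_sendpackages_versions
+</source>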
+
+
+====ancpkginfo====
+{|
+| Field Name || Type || Size || Purpose
+|-
+|txns_length||CompactSize||1 or 3 bytes|| The number of transactions provided.
+|-
+|txns||List of wtxids||txns_length * 32|| The wtxids of each transaction in the package.
+|}
+
+# The "ancpkginfo" message has the structure defined above, with pchCommand == "ancpkginfo".
+
+# The "txns" field should contain a list of wtxids which constitute the ancestor package of the last wtxid. For the receiver's convenience, the sender should - but is not required to - sort the wtxids in topological order. The topological sort can be achieved by sorting the transactions by mempool acceptance order (if parents are always accepted before children). Apart from the last wtxid which is used to learn which transaction the message corresponds to, there is no enforced ordering. Nodes should not disconnect or punish a peer who provides a list not sorted in topological order.'''Why not include feerate information to help the receiver decide whether these transactions are worth downloading?'''
+A simple feerate is typically insufficient; the receiver must also know the dependency
+relationships between transactions and their respective sizes.
+'''Should a peer be punished if they provide incorrect package info, e.g. a list of unrelated transactions?'''
+Ideally, there should be a way to enforce that peers are providing correct information to each
+other. However, two peers may have different views of what a transaction's unconfirmed ancestors
+are based on their chainstate. For example, during a reorg or when two blocks are found at the same
+time, one peer may see a transaction as confirmed while the other peer does not.
+As such, it is impossible to accurately enforce this without also knowing the peer's chainstate.
+It was [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-May/020493.html originally proposed]
+to include a block hash in "ancpkginfo" to avoid unwarranted disconnections. However, it does not
+make much sense to stop or delay transaction data requests due to mismatched chainstates, and the
+chainstate may change again between package information and transaction data rounds. Instead,
+differences in chainstate should be handled at the validation level. The node has already spent
+network bandwidth downloading these transactions; it should make a best effort to validate them.
+See [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-June/020558.html discussion].
+'''Why not require topological order?'''
+It is not possible to determine whether a list of transactions is topologically sorted without first
+establishing that the list contains a full ancestor package. It is not possible to determine whether
+a list of transactions contains a full ancestor package without knowing what the chainstate is.
+
+
+# Upon receipt of a "ancpkginfo" message, the node may use it to request the transactions it does not already have (e.g. using "getpkgtxns" or "tx").
+
+# Upon receipt of a malformed "ancpkginfo" message, the sender may be disconnected. An "ancpkginfo" message is malformed if it contains duplicate wtxids or conflicting transactions (spending the same inputs). The receiver may learn that a package info was malformed after downloading the transactions.
+
+# A node MUST NOT send a "ancpkginfo" message that has not been requested by the recipient. Upon receipt of an unsolicited "ancpkginfo", a node may disconnect the sender.
+
+# This message must only be used if both peers set <code>PKG_RELAY_ANC</code> in their "sendpackages" message. If an "ancpkginfo" message is received from a peer with which this type of package relay was not negotiated, no response should be sent and the sender may be disconnected.
+
+====MSG_ANCPKGINFO====
+
+# A new inv type (MSG_ANCPKGINFO == 0x7) is added, for use only in getdata requests pertaining to ancestor packages.
+
+# As a getdata request type, it indicates that the sender wants an "ancpkginfo" containing all of the unconfirmed ancestors of a transaction, referenced by wtxid.
+
+# Upon receipt of a "getdata(MSG_ANCPKGINFO)" request, the node should respond with an "ancpkginfo" message corresponding to the transaction's unconfirmed ancestor package, or with "notfound". The wtxid of the requested transaction must be the last item in the "ancpkginfo" response list, as the last item is used to determine which transaction the "ancpkginfo" pertains to.
+
+# The inv type must only be used in a "getdata" message. An "inv(MSG_ANCPKGINFO)" must never be sent. If an "inv(MSG_ANCPKGINFO)" is received, the sender may be disconnected.
+
+# This inv type must only be used if both peers set <code>PKG_RELAY_ANC</code> in their "sendpackages" message. If a "getdata" message with type MSG_ANCPKGINFO is received from a peer with which this type of package relay was not negotiated, no response should be sent and the sender may be disconnected.
+
+====getpkgtxns====
+
+{|
+| Field Name || Type || Size || Purpose
+|-
+|txns_length||CompactSize||1 or 3 bytes|| The number of transactions requested.
+|-
+|txns||List of wtxids||txns_length * 32|| The wtxids of each transaction in the package.
+|}
+
+# The "getpkgtxns" message has the structure defined above, with pchCommand == "getpkgtxns".
+
+# A "getpkgtxns" message should be used to request some list of transactions specified by witness transaction id. It indicates that the node wants to receive either all the specified transactions or none of them. This message is intended to allow nodes to avoid downloading and storing transactions that cannot be validated without each other. The list of transactions does not need to correspond to a previously-received ancpkginfo message.
+
+# Upon receipt of a "getpkgtxns" message, a node should respond with either a "pkgtxns" containing all of the requested transactions in the same order specified in the "getpkgtxns" request or one "notfound" message of type MSG_PKGTXNS and combined hash of all of the wtxids in the "getpkgtxns" request (only one "notfound" message and nothing else), indicating one or more of the transactions is unavailable.
+
+# A "getpkgtxns" message must contain at most 100 wtxids. Upon receipt of a "getpkgtxns" message with more than 100 wtxids, a node may ignore the message (to avoid calculating the combined hash) and disconnect the sender.
+
+# This message must only be used if both peers set <code>PKG_RELAY_PKGTXNS</code> in their "sendpackages" message. If a "getpkgtxns" message is received from a peer with which this type of package relay was not negotiated, no response should be sent and the sender may be disconnected.
+
+====pkgtxns====
+
+{|
+| Field Name || Type || Size || Purpose
+|-
+|txns_length||CompactSize||1 or 3 bytes|| The number of transactions provided.
+|-
+|txns||List of transactions||variable|| The transactions in the package.
+|}
+
+# The "pkgtxns" message has the structure defined above, with pchCommand == "pkgtxns".
+
+# A "pkgtxns" message should contain the transaction data requested using "getpkgtxns".
+
+# A "pkgtxns" message should only be sent to a peer that requested the package using "getpkgtxns". If a node receives an unsolicited package, it may choose to validate the transactions or not, and the sender may be disconnected.
+
+# This message must only be used if both peers set <code>PKG_RELAY_PKGTXNS</code> in their "sendpackages" message. If a "pkgtxns" message is received from a peer with which this type of package relay was not negotiated, no response should be sent and the sender may be disconnected.
+
+====MSG_PKGTXNS====
+
+# A new inv type (MSG_PKGTXNS == 0x6) is added, for use only in "notfound" messages pertaining to package transactions.
+
+# As a "notfound" type, it indicates that the sender is unable to send all the transactions requested in a prior "getpkgtxns" message. The hash used is equal to the combined hash of the wtxids in the getpkgtxns request.
+
+# This inv type should only be used in "notfound" messages, i.e. "inv(MSG_PKGTXNS)" and "getdata(MSG_PKGTXNS)" must never be sent. Upon receipt of an "inv" or "getdata" message of this type, the sender may be disconnected.
+
+# This inv type must only be used if both peers set <code>PKG_RELAY_PKGTXNS</code> in their "sendpackages" message.
+
+==Compatibility==
+
+Older clients remain fully compatible and interoperable after this change. Clients implementing this
+protocol will only attempt to send and request packages if agreed upon during the version handshake.
+'''Will package relay cause non-package relay nodes to waste bandwidth on low-feerate transactions?'''
+If a node supports package relay, it may accept low-feerate transactions (e.g. paying zero fees)
+into its mempool, but non-package relay nodes would most likely reject them. To mitigate bandwidth
+waste, a package relay node should not announce descendants of below-fee-filter transactions to
+non-package relay peers.
+
+'''Is Package Erlay possible?'''
+A client using BIP330 reconciliation-based transaction relay (Erlay) is able to use package relay
+without interference. After reconciliation, any transaction with unconfirmed ancestors may have
+those ancestors resolved using ancestor package relay.
+[[File:./bip-0331/package_erlay.png|700px]]
+
+
+==Extensibility==
+
+This protocol can be extended to include more types of package information in the future, while
+continuing to use the same messages for transaction data download. One would define a new package
+information message (named "*pkginfo" in the diagram below), allocate its corresponding inv
+type (named "*PKGINFO" in the diagram below), and specify how to signal support using the
+versions field of "sendpackages" (an additional bit named "PKG_RELAY_*" in the diagram below). A
+future version of package relay may allow a sender-initiated dialogue by specifying that the new
+package information inv type can also be used in an "inv" message.
+
+  BIP: 345
+  Layer: Consensus (soft fork)
+  Title: OP_VAULT
+  Author: James O'Beirne
+          Greg Sanders
+          Anthony Towns
+  Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0345
+  Status: Draft
+  Type: Standards Track
+  Created: 2023-02-03
+  License: BSD-3-Clause
+  Post-History: 2023-01-09: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-January/021318.html [bitcoin-dev] OP_VAULT announcement
+                2023-03-01: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-March/021510.html [bitcoin-dev] BIP for OP_VAULT
+
+== Introduction ==
+
+This BIP proposes two new tapscript opcodes that add consensus support for a specialized
+covenant: <code>OP_VAULT</code> and <code>OP_VAULT_RECOVER</code>. These opcodes, in conjunction with
+OP_CHECKTEMPLATEVERIFY
+([https://github.com/bitcoin/bips/blob/master/bip-0119.mediawiki BIP-0119]),
+allow users to enforce a delay period before designated coins may be spent to
+an arbitrary destination, with the exception of a prespecified "recovery" path.
+At any time prior to final withdrawal, the coins can be spent to the
+recovery path.
+
+=== Copyright ===
+
+This document is licensed under the 3-clause BSD license.
+
+
+=== Motivation ===
+
+The hazard of custodying Bitcoin is well-known. Users of Bitcoin must go to
+significant effort to secure their private keys, and hope that once provisioned
+their custody system does not yield to any number of evolving and
+persistent threats. Users have little means to intervene once a compromise is
+detected. This proposal introduces a mechanism that significantly
+mitigates the worst-case outcome of key compromise: coin loss.
+
+Introducing a way to intervene during unexpected spends allows users to
+incorporate highly secure key storage methods or unusual fallback strategies
+that are only exercised in the worst case, and which may otherwise be
+operationally prohibitive. The goal of this proposal is to make this strategy
+usable for custodians of any size with minimal complication.
+
+==== Example uses ====
+
+A common configuration for an individual custodying Bitcoin is "single
+signature and passphrase" using a hardware wallet. A user with such a
+configuration might be concerned about the risk associated with relying on a
+single manufacturer for key management, as well as physical access to the
+hardware.
+
+This individual can use <code>OP_VAULT</code> to make use of a highly secure
+key as the unlikely recovery path, while using their existing signing procedure
+as the withdrawal trigger key with a configured spend delay of e.g. 1 day.
+
+The recovery path key can be of a highly secure nature that might otherwise
+make it impractical for daily use. For example, the key could be generated in
+some analog fashion, or on an old computer that is then destroyed, with the
+private key replicated only in paper form. Or the key could be a 2-of-3
+multisig using devices from different manufacturers. Perhaps the key is
+geographically or socially distributed.
+
+Since it can be any Bitcoin script policy, the recovery key can include a
+number of spending conditions, e.g. a time-delayed fallback to an "easier"
+recovery method, in case the highly secure key winds up being ''too'' highly
+secure.
+
+The user can run software on their mobile device that monitors the blockchain
+for spends of the vault outpoints. If the vaulted coins move in an unexpected
+way, the user can immediately sweep them to the recovery path, but spending the
+coins on a daily basis works in the same way it did prior to vaulting (aside
+from the spend delay).
+
+Institutional custodians of Bitcoin may use vaults in similar fashion.
+
+===== Provable timelocks =====
+
+This proposal provides a mitigation to the
+[https://web.archive.org/web/20230210123933/https://xkcd.com/538/ "$5 wrench attack."] By
+setting the spend delay to, say, a week, and using as the recovery path a
+script that enforces a longer relative timelock, the owner of the vault can
+prove that he is unable to access its value immediately. To the author's
+knowledge, this is the only way to configure this defense without rolling
+timelocked coins for perpetuity or relying on a trusted third party.
+
+== Goals ==
+
+[[File:bip-0345/vaults-Basic.png|frame|center]]
+
+Vaults in Bitcoin have been discussed formally since 2016
+([http://fc16.ifca.ai/bitcoin/papers/MES16.pdf MES16]) and informally since [https://web.archive.org/web/20160220215151/https://bitcointalk.org/index.php?topic=511881.0 2014]. The value of
+having a configurable delay period with recovery capability in light of an
+unexpected spend has been widely recognized.
+
+The only way to implement vaults given the existing consensus rules, aside from
+[https://github.com/revault emulating vaults with large multisig
+configurations], is to use presigned transactions created with a one-time-use
+key. This approach was first demonstrated
+[https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017755.html in 2020].
+
+Unfortunately, this approach has a number of practical shortcomings:
+* generating and securely deleting ephemeral keys, which are used to emulate the vault covenant, is required,
+* amounts and withdrawal patterns must be precommitted to,
+* there is a necessity to precommit to an address that the funds must pass through on their way to the final withdrawal target, which is likely only known at unvault time,
+* the particular fee management technique or wallet must be decided upon vault creation,
+* coin loss follows if a vault address is reused,
+* the transaction data that represents the "bearer asset" of the vault must be stored for perpetuity, otherwise value is lost, and
+* the vault creation ceremony must be performed each time a new balance is to be deposited.
+
+The deployment of a "precomputed" covenant mechanism like
+[https://github.com/bitcoin/bips/blob/master/bip-0119.mediawiki OP_CHECKTEMPLATEVERIFY] or
+[https://github.com/bitcoin/bips/blob/master/bip-0118.mediawiki SIGHASH_ANYPREVOUT]
+would both remove the necessity to use an ephemeral key, since the
+covenant is enforced on-chain, and lessen the burden of sensitive data storage,
+since the necessary transactions can be generated from a set of compact
+parameters. This approach was demonstrated [https://github.com/jamesob/simple-ctv-vault in
+2022].
+
+However, the limitations of precomputation still apply: amounts,
+destinations, and fee management are all fixed. Funds must flow through a fixed
+intermediary to their final destination. Batch operations, which may be vital
+for successful recovery during fee spikes or short spend delay, are not possible.
+
+[[File:bip-0345/withdrawal-comparison.drawio.png|frame|center]]
+
+Having a "general" covenant mechanism that can encode arbitrary transactional
+state machines would allow us to solve these issues, but at the cost of complex
+and large scripts that would probably be duplicated many times over in the
+blockchain. The particular design and deployment timeline of such a general
+framework is also uncertain. This approach was demonstrated
+[https://blog.blockstream.com/en-covenants-in-elements-alpha/ in 2016].
+
+This proposal intends to address the problems outlined above by
+providing a delay period/recovery path use with minimal transactional and
+operational overhead using a specialized covenant.
+
+The design goals of the proposal are:
+
+* '''efficient reuse of an existing vault configuration.''' '''Why does this support address reuse?''' The proposal doesn't rely on or encourage address reuse, but certain uses are unsafe if address reuse cannot be handled - for example, if a custodian gives its users a vault address to deposit to, it cannot enforce that those users make a single deposit for each address. A single vault configuration, whether the same literal <code>scriptPubKey</code> or not, should be able to "receive" multiple deposits.
+
+* '''batched operations''' for recovery and withdrawal to allow managing multiple vault coins efficiently.
+
+* '''unbounded partial withdrawals''', which allows users to withdraw partial vault balances without having to perform the setup ceremony for a new vault.
+
+* '''dynamic unvault targets''', which allow the proposed withdrawal target for a vault to be specified at withdrawal time rather than when the vault is first created. This would remove the need for a prespecified, intermediate wallet that only exists to route unvaulted funds to their desired destination.
+
+* '''dynamic fee management''' that, like dynamic targets, defers the specification of fee rates and source to unvault time rather than vault creation time.
+
+These goals are accompanied by basic safety considerations (e.g. not being
+vulnerable to mempool pinning) and a desire for concision, both in terms of the number
+of outputs created as well as script sizes.
+
+This proposal is designed to be compatible with any future sighash modes (e.g. <code>SIGHASH_GROUP</code>) or fee management strategies (e.g. [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html transaction sponsors]) that may be introduced. Use of these opcodes will benefit from, but does not strictly rely on, [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-September/020937.html v3 transaction relay] and [https://github.com/instagibbs/bips/blob/ephemeral_anchor/bip-ephemeralanchors.mediawiki ephemeral anchors].
+
+== Design ==
+
+In typical usage, a vault is created by encumbering coins under a
+taptree [https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki (BIP-341)]
+containing at least two leaves: one with an <code>OP_VAULT</code>-containing script that
+facilitates the expected withdrawal process, and another leaf with
+<code>OP_VAULT_RECOVER</code>, which ensures the coins can be recovered
+at any time prior to withdrawal finalization.
+
+The rules of <code>OP_VAULT</code> ensure the timelocked, interruptible
+withdrawal by allowing a spending transaction to replace the
+<code>OP_VAULT</code> tapleaf with a prespecified script template, allowing for
+some parameters to be set at spend (trigger) time. All other leaves in the
+taptree must be unchanged in the destination output, which preserves the recovery path as well as any
+other spending conditions originally included in the vault. This is similar to
+the <code>TAPLEAF_UPDATE_VERIFY</code> design that was proposed
+[https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-September/019419.html in 2021].
+
+These tapleaf replacement rules, described more precisely below, ensure a
+timelocked withdrawal, where the timelock is fixed by the original
+<code>OP_VAULT</code> parameters, to a fixed set of outputs (via
+<code>OP_CHECKTEMPLATEVERIFY</code>) which is chosen when the withdrawal
+process is triggered. '''Why is <code>OP_CHECKTEMPLATEVERIFY</code> (BIP-119) relied upon for this proposal?''' During the withdrawal process, the proposed final destination for value being withdrawn must be committed to. <code>OP_CTV</code> is the simplest, safest way to commit the spend of some coins to a particular set of outputs. An earlier version of this proposal attempted to use a simpler, but similar, method of locking the spend of coins to a set of outputs, but this method introduced txid malleability.
+
+While <code>OP_CHECKTEMPLATEVERIFY</code> is used in this proposal as the
+preferred method to bind the proposed withdrawal to a particular set of final
+outputs, <code>OP_VAULT</code> is composable with other (and future) opcodes to
+facilitate other kinds of withdrawal processes.
+
+[[File:bip-0345/opvault.drawio.png|frame|center]]
+
+
+=== Transaction types ===
+
+The vault has a number of stages, some of them optional:
+
+* '''vault transaction''': encumbers some coins into a Taproot structure that includes at least one <code>OP_VAULT</code> leaf and one <code>OP_VAULT_RECOVER</code> leaf.
+
+* '''trigger transaction''': spends one or more <code>OP_VAULT</code>-tapleaf inputs into an output which is encumbered by a timelocked withdrawal to a fixed set of outputs, chosen at trigger time. This publicly broadcasts the intent to withdraw to some specific set of outputs. It may also include an optional revault output with the same <code>scriptPubKey</code> as the <code>OP_VAULT</code>-containing input(s) being spent.
+
+* '''withdrawal transaction''': spends the timelocked, destination-locked trigger inputs into a compatible set of final withdrawal outputs (per, e.g., a <code>CHECKTEMPLATEVERIFY</code> hash), after the trigger inputs have matured per the spend delay. Timelocked CTV transactions are the motivating usage of OP_VAULT, but any script template can be specified during the creation of the vault.
+
+* '''recovery transaction''': spends one or more vault inputs via the <code>OP_VAULT_RECOVER</code> tapleaf to the prespecified recovery path, which can be done at any point before the withdrawal transaction confirms. Each input can optionally require a witness satisfying a specified ''recovery authorization'' script, an optional script prefixing the <code>OP_VAULT_RECOVER</code> fragment. The use of recovery authorization has certain trade-offs discussed later.
+
+
+=== Fee management ===
+
+A primary consideration of this proposal is how fee management is handled.
+Providing dynamic fee management is critical to the operation of a vault, since
+
+* precalculated fees are prone to making transactions unconfirmable in high fee environments, and
+* a fee wallet that is prespecified might be compromised or lost before use.
+
+But dynamic fee management can introduce
+[https://bitcoinops.org/en/topics/transaction-pinning/ pinning vectors]. Care
+has been taken to avoid unnecessarily introducing these vectors when using the new
+destination-based spending policies that this proposal introduces.
+
+Originally, this proposal had a hard dependency on reformed transaction
+nVersion=3 policies, including ephemeral anchors, but it has since been revised
+to simply benefit from these changes in policy as well as other potential fee
+management mechanisms.
+
+
+== Specification ==
+
+The tapscript opcodes <code>OP_SUCCESS187</code> (<code>0xbb</code>) and
+<code>OP_SUCCESS188</code> (<code>0xbc</code>) are constrained with new rules
+to implement <code>OP_VAULT</code> and <code>OP_VAULT_RECOVER</code>,
+respectively.
+
+=== <code>OP_VAULT</code> evaluation ===
+
+When evaluating <code>OP_VAULT</code> (<code>OP_SUCCESS187</code>, <code>0xbb</code>), the expected format of the stack, shown top to bottom, is:
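+In outline, using the placeholder names defined in the field descriptions below (the bracketed line
+stands for the <push-count> leaf-update data items):
+
+<pre>
+<leaf-update-script-body>
+<push-count>
+[ <push-count> leaf-update script data items ... ]
+<trigger-vout-idx>
+<revault-vout-idx>
+<revault-amount>
+</pre>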
+
+
+* <leaf-update-script-body> is a minimally-encoded data push of a serialized script. In conjunction with the leaf-update data items, it dictates the tapleaf script in the output taptree that will replace the one currently executing.
+** Otherwise, script execution MUST fail and terminate immediately.
+
+* <push-count> is an up to 4-byte minimally encoded <code>CScriptNum</code> indicating how many leaf-update script items should be popped off the stack. '''Why only prefix with data pushes?''' Prefixing the <code>leaf-update-script-body</code> with opcodes opens up the door to prefixing OP_SUCCESSX opcodes, to name just one issue, side-stepping the validation that was meant to be run by the committed script.
+** If this value does not decode to a valid CScriptNum, script execution MUST fail and terminate immediately.
+** If this value is less than 0, script execution MUST fail and terminate immediately.
+** If there are fewer than 3 items following the <push-count> items on the stack, script execution MUST fail and terminate immediately. In other words, after popping <push-count>, there must be at least 3 + <push-count> items remaining on the stack.
+
+* The following <push-count> stack items are popped off the stack and prefixed as minimally-encoded push-data arguments to the <leaf-update-script-body> to construct the expected tapleaf replacement script.
+
+* <trigger-vout-idx> is an up to 4-byte minimally encoded <code>CScriptNum</code> indicating the index of the output which, in conjunction with an optional revault output, carries forward the value of this input, and has an identical taptree aside from the currently executing leaf.
+** If this value does not decode to a valid CScriptNum, script execution MUST fail and terminate immediately.
+** If this value is less than 0 or is greater than or equal to the number of outputs, script execution MUST fail and terminate immediately.
+
+* <revault-vout-idx> is an up to 4-byte minimally encoded <code>CScriptNum</code> optionally indicating the index of an output which, in conjunction with the trigger output, carries forward the value of this input, and has an identical scriptPubKey to the current input.
+** If this value does not decode to a valid CScriptNum, script execution MUST fail and terminate immediately.
+** If this value is greater than or equal to the number of outputs, script execution MUST fail and terminate immediately.
+** If this value is negative and not equal to -1, script execution MUST fail and terminate immediately.'''Why is -1 the only allowable negative value for revault-vout-idx?''' A negative revault index indicates that no revault output exists; if this value were allowed to be any negative number, the witness could be malleated (and bloated) while a transaction is waiting for confirmation.
+
+* <revault-amount> is an up to 7-byte minimally encoded <code>CScriptNum</code> indicating the number of satoshis being revaulted.
+** If this value does not decode to a valid CScriptNum, script execution MUST fail and terminate immediately.
+** If this value is not greater than or equal to 0, script execution MUST fail and terminate immediately.
+** If this value is non-zero but <revault-vout-idx> is negative, script execution MUST fail and terminate immediately.
+** If this value is zero but <revault-vout-idx> is not -1, script execution MUST fail and terminate immediately.
+
+After the stack is parsed, the following validation checks are performed:
+
+* Decrement the per-script sigops budget (see [https://github.com/bitcoin/bips/blob/master/bip-0342.mediawiki#user-content-Resource_limits BIP-0342]) by 60'''Why is the sigops cost for OP_VAULT set to 60?''' To determine the validity of a trigger output, OP_VAULT must perform an EC multiplication and hashing proportional to the length of the control block in order to generate the output's expected TapTweak. This has been measured to have a cost in the worst case (max length control block) of roughly twice a Schnorr verification. Because the hashing cost could be mitigated by caching midstate, the cost is 60 and not 100.; if the budget is brought below zero, script execution MUST fail and terminate immediately.
+* Let the output designated by <trigger-vout-idx> be called ''triggerOut''.
+* If the scriptPubKey of ''triggerOut'' is not a version 1 witness program, script execution MUST fail and terminate immediately.
+* Let the script constructed by taking the <leaf-update-script-body> and prefixing it with minimally-encoded data pushes of the <push-count> leaf-update script data items be called the ''leaf-update-script'' (a sketch of this construction follows this section).
+* If the scriptPubKey of ''triggerOut'' does not match that of a taptree that is identical to that of the currently evaluated input, except with the currently executing leaf script replaced by ''leaf-update-script'', script execution MUST fail and terminate immediately.
+** Note: the parity bit of the resulting taproot output is allowed to vary, so both values for the new output must be checked.
+* Let the output designated by <revault-vout-idx> (if the index value is non-negative) be called ''revaultOut''.
+* If the scriptPubKey of ''revaultOut'' is not equal to the scriptPubKey of the input being spent, script execution MUST fail and terminate immediately.
+* Implementation recommendation: if the sum of the amounts of ''triggerOut'' and ''revaultOut'' (if any) is not greater than or equal to the value of this input, script execution SHOULD fail and terminate immediately. This ensures that (at a minimum) the vaulted value for this input is carried through.
+** Amount checks are ultimately done with deferred checks, but this check can help short-circuit obviously invalid spends.
+* Queue a deferred check that ensures the satoshis for this input's <code>nValue</code> minus <revault-amount> are included within the output <code>nValue</code> found at <trigger-vout-idx>. '''What is a deferred check and why does this proposal require them for correct script evaluation?''' A deferred check is a validation check that is executed only after all input scripts have been validated, and is based on aggregate information collected during each input's EvalScript run.
+* Queue a deferred check that ensures <revault-amount> satoshis, if non-zero, are included within the output's <code>nValue</code> found at <revault-vout-idx>.
+** These deferred checks could be characterized in terms of the pseudocode below (in ''Deferred checks'') as <code>TriggerCheck(input_amount, <revault-amount>, <trigger-vout-idx>, <revault-vout-idx>)</code>.
+
+If none of the conditions fail, a single true value (<code>0x01</code>) is left on the stack.
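+
+The construction of the ''leaf-update-script'' referenced above can be sketched as follows
+(illustrative: only the push encodings that occur here are handled, and the ordering of the
+prefixed items relative to the stack is an assumption):
+
+<source lang="python">
+def push_data(item: bytes) -> bytes:
+    """Minimally-encoded data push for the cases relevant to this sketch."""
+    n = len(item)
+    if n == 0:
+        return bytes([0x00])                  # OP_0
+    if n == 1 and 1 <= item[0] <= 16:
+        return bytes([0x50 + item[0]])        # OP_1 .. OP_16
+    if n <= 75:
+        return bytes([n]) + item              # direct push
+    raise ValueError("other push encodings omitted from this sketch")
+
+def leaf_update_script(data_items: list, body: bytes) -> bytes:
+    """Prefix each leaf-update data item (e.g. [ctv_target_hash, spend_delay]) as a
+    minimally-encoded push to <leaf-update-script-body>."""
+    return b"".join(push_data(item) for item in data_items) + body
+</source>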
+
+=== <code>OP_VAULT_RECOVER</code> evaluation ===
+
+When evaluating <code>OP_VAULT_RECOVER</code> (<code>OP_SUCCESS188</code>, <code>0xbc</code>), the expected format of the stack, shown top to bottom, is:
+
+
+* <recovery-sPK-hash> is a 32-byte data push.
+** If this is not 32 bytes in length, script execution MUST fail and terminate immediately.
+* <recovery-vout-idx> is an up to 4-byte minimally encoded <code>CScriptNum</code> indicating the index of the recovery output.
+** If this value does not decode to a valid CScriptNum, script execution MUST fail and terminate immediately.
+** If this value is less than 0 or is greater than or equal to the number of outputs, script execution MUST fail and terminate immediately.
+
+After the stack is parsed, the following validation checks are performed:
+
+* Let the output at index <recovery-vout-idx> be called ''recoveryOut''.
+* If the scriptPubKey of ''recoveryOut'' does not have a tagged hash equal to <recovery-sPK-hash> (<code>tagged_hash("VaultRecoverySPK", recoveryOut.scriptPubKey) == recovery-sPK-hash</code>, where <code>tagged_hash()</code> is from the [https://github.com/bitcoin/bips/blob/master/bip-0340/reference.py BIP-0340 reference code]; a sketch follows this section), script execution MUST fail and terminate immediately.
+** Implementation recommendation: if ''recoveryOut'' does not have an <code>nValue</code> greater than or equal to this input's amount, the script SHOULD fail and terminate immediately.
+* Queue a deferred check that ensures the <code>nValue</code> of ''recoveryOut'' contains the entire <code>nValue</code> of this input. '''How do recovery transactions pay for fees?''' If the recovery is unauthorized, fees are attached either via CPFP with an ephemeral anchor or as inputs which are solely spent to fees (i.e. no change output). If the recovery is authorized, fees can be attached in any manner, e.g. unrelated inputs and outputs or CPFP via anchor.
+** This deferred check could be characterized in terms of the pseudocode below as <code>RecoveryCheck(<recovery-vout-idx>, input_amount)</code>.
+
+If none of the conditions fail, a single true value (<code>0x01</code>) is left on the stack.
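+
+A sketch of the recovery scriptPubKey check, using the BIP-340 tagged hash construction (helper and
+variable names are illustrative):
+
+<source lang="python">
+from hashlib import sha256
+
+def tagged_hash(tag: str, msg: bytes) -> bytes:
+    """BIP-340 style tagged hash, as in the BIP-0340 reference code."""
+    tag_digest = sha256(tag.encode()).digest()
+    return sha256(tag_digest + tag_digest + msg).digest()
+
+def recovery_spk_matches(recovery_spk_hash: bytes, recovery_out_spk: bytes) -> bool:
+    # The recovery output's scriptPubKey must hash, with the "VaultRecoverySPK" tag,
+    # to the 32-byte <recovery-sPK-hash> push from the witness.
+    return tagged_hash("VaultRecoverySPK", recovery_out_spk) == recovery_spk_hash
+</source>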
+
+=== Deferred check evaluation ===
+
+Once all inputs for a transaction are validated per the rules above, any
+deferred checks queued MUST be evaluated.
+
+The Python pseudocode for this is as follows:
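+<source lang="python">
+# Illustrative sketch only: the accumulator structure and names below are assumptions;
+# the TriggerCheck/RecoveryCheck tuples are as described in the evaluation sections above.
+
+def deferred_checks_pass(tx, trigger_checks, recovery_checks) -> bool:
+    # trigger_checks:  list of TriggerCheck(input_amount, revault_amount,
+    #                                       trigger_vout_idx, revault_vout_idx)
+    # recovery_checks: list of RecoveryCheck(recovery_vout_idx, input_amount)
+    required = {}  # output index -> minimum nValue that must be carried forward
+
+    for input_amount, revault_amount, trigger_idx, revault_idx in trigger_checks:
+        required[trigger_idx] = required.get(trigger_idx, 0) + (input_amount - revault_amount)
+        if revault_amount > 0:
+            required[revault_idx] = required.get(revault_idx, 0) + revault_amount
+
+    for recovery_idx, input_amount in recovery_checks:
+        required[recovery_idx] = required.get(recovery_idx, 0) + input_amount
+
+    return all(tx.vout[idx].nValue >= amount for idx, amount in required.items())
+</source>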
+
+* For each <code>OP_VAULT_RECOVER</code> input being spent, the script MUST fail (by policy, not consensus) and terminate immediately if both '''Why are recovery transactions required to be replaceable?''' In the case of unauthorized recoveries, an attacker may attempt to pin recovery transactions by broadcasting a "rebundled" version with a low fee rate. Vault owners must be able to overcome this with replacement. In the case of authorized recovery, if an attacker steals the recovery authorization key, the attacker may try to pin the recovery transaction during theft. Requiring replaceability ensures that the owner can always raise the fee rate of the recovery transaction, even if they are RBF rule #3 griefed in the process.
+*# the input is not marked as opt-in replaceable by having an nSequence number less than 0xffffffff - 1
, per [https://github.com/bitcoin/bips/blob/master/bip-0125.mediawiki BIP-0125], and
+*# the version of the recovery transaction has an nVersion other than 3.
+
+If the script containing <code>OP_VAULT_RECOVER</code> is 34 bytes or less (34 bytes is the length of a recovery script that consists solely of <recovery-sPK-hash> <code>OP_VAULT_RECOVER</code>), let
+it be called "unauthorized," because there is no script guarding the recovery
+process. In order to prevent pinning attacks in the case of unauthorized
+recovery - since the spend of the input (and the structure of the
+transaction) is not committed to by any signature - the output structure of an
+unauthorized recovery transaction is limited.
+
+* If the recovery is unauthorized, the recovery transaction MUST (by policy) abide by the following constraints:
+** If the spending transaction has more than two outputs, the script MUST fail and terminate immediately.
+** If the spending transaction has two outputs, and the output which is not ''recoveryOut'' is not an [https://github.com/instagibbs/bips/blob/ephemeral_anchor/bip-ephemeralanchors.mediawiki ephemeral anchor], the script MUST fail and terminate immediately.'''Why can unauthorized recoveries only process a single recovery path?''' Because there is no signature required for unauthorized recoveries, if additional outputs were allowed, someone observing a recovery in the mempool would be able to rebundle and broadcast the recovery with a lower fee rate.
+
+== Implementation ==
+
+A sample implementation is available on bitcoin-inquisition [https://github.com/jamesob/bitcoin/tree/2023-01-opvault-inq here], with an associated [https://github.com/bitcoin-inquisition/bitcoin/pull/21 pull request].
+
+
+== Applications ==
+
+The specification above, perhaps surprisingly, does not specifically cover how a relative timelocked withdrawal process with a fixed target is implemented. The tapleaf update semantics specified in <code>OP_VAULT</code> as well as the output-based authorization enabled by <code>OP_VAULT_RECOVER</code> can be used to implement a vault, but they are incomplete without two other pieces:
+
+* a way to enforce relative timelocks, like <code>OP_CHECKSEQUENCEVERIFY</code>, and
+* a way to enforce that proposed withdrawals are ultimately being spent to a precise set of outputs, like <code>OP_CHECKTEMPLATEVERIFY</code>.
+
+These two pieces are combined with the tapleaf update capabilities of
+<code>OP_VAULT</code> to create a vault, described below.
+
+=== Creating a vault ===
+
+In order to vault coins, they can be spent into a witness v1 scriptPubKey
+that contains a taptree of the form
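+(the ordering of the trigger leaf's arguments below is illustrative; it follows the stack layout
+given in the specification above):
+
+<pre>
+tr(internal_key,
+   leaves = {
+     recover:
+       <recovery-sPK-hash> OP_VAULT_RECOVER,
+
+     trigger:
+       <trigger-auth>                                        (i)
+       <spend-delay> 2 $leaf-update-script-body OP_VAULT     (ii)
+   })
+</pre>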
+
+* <code>$leaf-update-script-body</code> is, for example, <code>OP_CHECKSEQUENCEVERIFY OP_DROP OP_CHECKTEMPLATEVERIFY</code>.
+** This is one example of a trigger script, but ''any'' script fragment can be used, allowing the creation of different types of vaults. For example, you could use <code>OP_CHECKSEQUENCEVERIFY OP_DROP OP_CHECKSIG</code> to do a time-delayed transfer of the coins to another key. This also future-proofs <code>OP_VAULT</code> for future scripting capabilities.
+* The script fragment in (i) is called the "trigger authorization," because it gates triggering the withdrawal. This can be done in whatever manner the wallet designer would like.
+* The script fragment in (ii) is the incomplete <code>OP_VAULT</code> invocation - it will be completed once the rest of the parameters (the CTV target hash, trigger vout index, and revault vout index) are provided by the trigger transaction witness.
+
+Typically, the internal key for the vault taproot output will be specified so
+that it is controlled by the same descriptor as the recovery path, which
+facilitates another (though probably unused) means of recovering the vault
+output to the recovery path. This has the potential advantage of recovering the
+coin without ever revealing it was a vault.
+
+Otherwise, the internal key can be chosen to be an unspendable NUMS point to
+force execution of the taptree contents.
+
+=== Triggering a withdrawal ===
+
+To make use of the vault, and spend it towards some output, we construct a spend
+of the above <code>tr()</code> output that simply replaces the "trigger" leaf with the
+full leaf-update script (in this case, a timelocked CTV script):
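+The resulting output's taptree might look roughly as follows (illustrative; the witness-supplied
+CTV target hash and the fixed spend delay have been prefixed to the leaf-update script body):
+
+<pre>
+tr(internal_key,
+   leaves = {
+     recover:
+       <recovery-sPK-hash> OP_VAULT_RECOVER,
+
+     withdrawal:
+       <ctv-target-hash> <spend-delay> OP_CHECKSEQUENCEVERIFY OP_DROP OP_CHECKTEMPLATEVERIFY
+   })
+</pre>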
+
+<code>OP_VAULT</code> has allowed the taptree to be transformed so that the trigger leaf
+becomes a timelocked CTV script, which is what actually facilitates the announced
+withdrawal. The withdrawal is interruptible by the recovery path because the
+"recover" leaf is preserved exactly from the original taptree.
+
+Note that the CTV hash is specified at spend time using the witness stack, and
+"locked in" via the OP_VAULT
spend rules which assert its existence in the output.
+
+The vault funds can be recovered at any time prior to the spend of the
+timelocked CTV script by way of a script-path spend using the "recover" leaf.
+
+
+=== Recovery authorization ===
+
+When configuring a vault, the user must decide if they want to have the
+recovery process gated by a script fragment prefixing the
+<code>OP_VAULT_RECOVER</code> instruction in the "recover" leaf. Its use
+entails trade-offs.
+
+==== Unauthorized recovery ====
+
+Unauthorized recovery simplifies vault use in that recovery never requires additional information aside from the location of the vault outpoints and the recovery path - the "authorization" is simply the reveal of the recovery path, i.e. the preimage of <recovery-sPK-hash>.
+
+But because this reveal is the only authorization necessary to spend the vault coins to recovery, the user must expect to recover all such vaults at once, since an observer can replay this recovery (provided they know the outpoints).
+
+Additionally, unauthorized recovery across multiple distinct recovery paths
+cannot be done in the same transaction, and fee control is more constrained:
+because the output structure is limited for unauthorized recovery, fee
+management relies either on inputs which are completely spent to fees or the
+use of the optional ephemeral anchor and package relay.
+
+These limitations are to avoid pinning attacks.
+
+==== Authorized recovery ====
+
+With authorized recovery, the user must keep track of an additional piece of information: how to solve the recovery authorization script fragment when recovery is required.
+
+If this key is lost, the user will be unable to initiate the recovery process for their coins. If an attacker obtains the recovery key, they may grief the user during the recovery process by constructing a low fee rate recovery transaction and broadcasting it (though they will not be able to pin because of the replaceability requirement on recovery transactions).
+
+However, authorized recovery configurations have significant benefits. Batched recoveries are possible for vaults with otherwise incompatible recovery parameters. Fee management is much more flexible, since authorized recovery transactions are "free form" and unrelated inputs and outputs can be added, potentially to handle fees.
+
+==== Recommendation: use a simple, offline recovery authorization key seed ====
+
+The benefits of batching and fee management that authorized recovery provides are significant. If the recovery authorization key falls into the hands of an attacker, the outcome is not catastrophic, whereas if the user loses their recovery authorization key as well as their trigger key, the result is likely coin loss. Consequently, the author's recommendation is to use a simple seed for the recovery authorization key that can be written down offline and replicated.
+
+Note that the recovery authorization key '''is not''' the recovery path key, and
+this is '''much different''' than any recommendation on how to generate the
+recovery path key itself.
+
+=== Address reuse and recovery ===
+
+When creating a vault, four factors affect the resulting P2TR address:
+# The internal pubkey (likely belonging to the recovery wallet)
+# The recovery leaf
+# The trigger leaf
+# Any other leaves that exist in the taptree
+
+The end user has the option of varying certain contents along descriptors in
+order to avoid reusing vault addresses without affecting key management, e.g.
+the trigger authorization pubkeys.
+
+Note that when using unauthorized recovery, the reveal of the
+recovery scriptPubKey will allow any observer to initiate the recovery process
+for any vault with matching recovery params, provided they are able to locate
+the vault outpoints. As a result, it is recommended to expect that
+'''all outputs sharing an identical unauthorized <recovery-sPK-hash> should be recovered together'''.
+
+This situation can be avoided with a comparable key management model by varying
+the generation of each vault's recovery scriptPubKey along a single descriptor,
+but note that this will prevent recovering multiple separate vaults into a single
+recovery output.
+
+Varying the internal pubkey will prevent batching the trigger of multiple vault
+inputs into a single trigger output; consequently it is recommended that users
+instead vary some component of the trigger leaf script if address reuse is
+undesirable. Users could vary the trigger pubkey along a descriptor, keeping
+the recovery path and internal-pubkey the same, which both avoids reusing
+addresses and allows batched trigger and recovery operations.
+
+==== Recommendation: generate new recovery addresses for new trigger keys ====
+
+If using unauthorized recovery, it is recommended that you do not share recovery scriptPubKeys
+across separate trigger keys. If one trigger key is compromised, that will necessitate the (unauthorized)
+recovery of all vaults with that trigger key, which will reveal the recovery path preimage. This
+means that an observer might be able to initiate recovery for vaults controlled by an uncompromised
+trigger key.
+
+==== Fee management ====
+
+Fees can be managed in a variety of ways, but it's worth noting that both
+trigger and recovery transactions must preserve the total value of vault
+inputs, so vaulted values cannot be repurposed to pay for fees. This does not
+apply to the withdrawal transaction, which can allocate value arbitrarily.
+
+In the case of vaults that use recovery authorization, all transactions can
+"bring their own fees" in the form of unrelated inputs and outputs. These
+transactions are also free to specify ephemeral anchors, once the related relay
+policies are deployed. This means that vaults using recovery authorization have
+no dependence on the deployment of v3 relay policy.
+
+For vaults using unauthorized recovery, the recovery
+transaction relies on the use of either fully-spent fee inputs or an ephemeral
+anchor output. This means that vaults which do not use recovery authorization
+are essentially dependent on v3 transaction relay policy being deployed.
+
+=== Batching ===
+
+==== During trigger ====
+
+<code>OP_VAULT</code> outputs with the same taptree, aside from slightly
+different trigger leaves, can be batched together in the same withdrawal
+process. Two "trigger" leaves are compatible if they have the same
+<code>OP_VAULT</code> arguments.
+
+Note that this allows the trigger authorization -- the script prefixing the
+<code>OP_VAULT</code> invocation -- to differ while still allowing batching.
+
+Trigger transactions can act on multiple incompatible OP_VAULT
+input sets, provided each set has a suitable associated ''triggerOut''
+output.
+
+Since <code>SIGHASH_DEFAULT</code> can be used to sign the trigger
+authorization, unrelated inputs and outputs can be included, possibly to
+facilitate fee management or the batch withdrawal of incompatible vaults.
+
+==== During withdrawal ====
+
+During final withdrawal, multiple trigger outputs can be used towards the same
+withdrawal transaction provided that they share identical ''leaf-update-script''
+parameters. This facilitates batched
+withdrawals.
+
+==== During recovery ====
+
+<code>OP_VAULT_RECOVER</code> outputs with the same <recovery-sPK-hash>
+can be recovered into the same output.
+
+Recovery-incompatible vaults which have authorized recovery can be recovered in
+the same transaction, so long as each set (grouped by <recovery-sPK-hash>) has an associated ''recoveryOut''. This allows
+unrelated recoveries to share common fee management.
+
+=== Watchtowers ===
+
+The value of vaults is contingent upon having monitoring in place that will
+alert the owner when unexpected spends are taking place. This can be done in a
+variety of ways, with varying degrees of automation and trust in the
+watchtower.
+
+In the maximum-trust case, the watchtower can be fully aware of all vaulted
+coins and has the means to initiate the recovery process if spends are not
+pre-reported to the watchtower.
+
+In the minimum-trust case, the user can supply a probabilistic filter of which
+coins they wish to monitor; the watchtower would then alert the user if any
+coins matching the filter move, and the user would be responsible for ignoring
+false positives and handling recovery initiation.
+
+=== Output descriptors ===
+
+Output descriptors for vault-related outputs will be covered in a subsequent BIP.
+
+== Deployment ==
+
+Activation mechanism is to be determined.
+
+This BIP should be deployed concurrently with BIP-0119 to enable full use of vaults.
+
+== Backwards compatibility ==
+
+<code>OP_VAULT</code> and <code>OP_VAULT_RECOVER</code> replace, respectively,
+the witness v1-only opcodes OP_SUCCESS187 and OP_SUCCESS188 with stricter
+verification semantics. Consequently, scripts using those opcodes which
+previously were valid will cease to be valid with this change.
+
+Stricter verification semantics for an OP_SUCCESSx opcode are a soft fork, so
+existing software will be fully functional without upgrade except for mining
+and block validation.
+
+Backwards compatibility considerations are very comparable to previous
+deployments for OP_CHECKSEQUENCEVERIFY and OP_CHECKLOCKTIMEVERIFY (see
+[https://github.com/bitcoin/bips/blob/master/bip-0065.mediawiki BIP-0065] and
+[https://github.com/bitcoin/bips/blob/master/bip-0112.mediawiki BIP-0112]).
+
+
+== Rationale ==
+
+$]) { diff --git a/scripts/diffcheck.sh b/scripts/diffcheck.sh index 4e4c4592..aa9f557c 100755 --- a/scripts/diffcheck.sh +++ b/scripts/diffcheck.sh @@ -1,5 +1,6 @@ #!/bin/bash +scripts/buildtable.pl >/tmp/table.mediawiki 2> /dev/null diff README.mediawiki /tmp/table.mediawiki | grep '^[<>] |' >/tmp/after.diff || true if git checkout HEAD^ && scripts/buildtable.pl >/tmp/table.mediawiki 2>/dev/null; then diff README.mediawiki /tmp/table.mediawiki | grep '^[<>] |' >/tmp/before.diff || true @@ -8,6 +9,8 @@ if git checkout HEAD^ && scripts/buildtable.pl >/tmp/table.mediawiki 2>/dev/null echo "$newdiff" exit 1 fi + echo "README table matches expected table from BIP files" else echo 'Cannot build previous commit table for comparison' + exit 1 fi diff --git a/scripts/link-format-chk.sh b/scripts/link-format-chk.sh index e3f0f6d7..9493765d 100755 --- a/scripts/link-format-chk.sh +++ b/scripts/link-format-chk.sh @@ -8,16 +8,14 @@ ECODE=0 FILES="" -for fname in $(git diff --name-only HEAD $(git merge-base HEAD master)); do - if [[ $fname == *.mediawiki ]]; then - GRES=$(grep -n '](http' $fname) - if [ "$GRES" != "" ]; then - if [ $ECODE -eq 0 ]; then - >&2 echo "Github Mediawiki format writes link as [URL text], not as [text](url):" - fi - ECODE=1 - echo "- $fname:$GRES" +for fname in *.mediawiki; do + GRES=$(grep -n '](http' $fname) + if [ "$GRES" != "" ]; then + if [ $ECODE -eq 0 ]; then + >&2 echo "Github Mediawiki format writes link as [URL text], not as [text](url):" fi + ECODE=1 + echo "- $fname:$GRES" fi done exit $ECODE