
Monthly typo fixups

Co-authored-by: xiaobei0715 <1505929057@qq.com>
Co-authored-by: wgyt <wgythe@gmail.com>
Co-authored-by: Ragnar <rodiondenmark@gmail.com>
Jon Atack 2025-04-17 15:46:10 +08:00
parent b60b886414
commit 8137279570
16 changed files with 21 additions and 22 deletions

View File

@@ -203,7 +203,7 @@ bool CheckSequenceLocks(const CTransaction &tx, int flags)
return error("%s: Missing input", __func__);
}
if (coins.nHeight == MEMPOOL_HEIGHT) {
-// Assume all mempool transaction confirm in the next block
+// Assume all mempool transactions are confirmed in the next block
prevheights[txinIndex] = tip->nHeight + 1;
} else {
prevheights[txinIndex] = coins.nHeight;

View File

@@ -177,7 +177,7 @@ This BIP introduces version 1 of this protocol. All messages sent using these ba
When initiating communication, the version field of the first message SHOULD be set to the highest version number the sender understands. All clients MUST be able to understand all version numbers less than the highest number they support. If a client receives a message with a version number higher than they understand, they MUST send the message back to the sender with a status code of 101 ("version too high") and the version field set to the highest version number the recipient understands. The sender must then resend the original message using the same version number returned by the recipient or abort.
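For illustration, the version rule above in a small Python sketch; the constant name and the success status code of 1 are assumptions rather than values taken from the specification:

    HIGHEST_SUPPORTED_VERSION = 1  # assumed constant name

    def negotiate_version(received_version):
        # Rule above: a version we do not understand is bounced back with
        # status 101 ("version too high") and the highest version we support.
        if received_version > HIGHEST_SUPPORTED_VERSION:
            return {"status_code": 101, "version": HIGHEST_SUPPORTED_VERSION}
        # Otherwise process the message normally (success code of 1 is assumed).
        return {"status_code": 1, "version": received_version}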
===EncryptedProtocolMessage===
-The '''EncryptedProtocolMessage''' message is an encapsualting wrapper for any Payment Protocol message. It allows two-way, authenticated and encrypted communication of Payment Protocol messages in order to keep their contents secret. The message also includes a status code and status message that is used for error communication such that the protocol does not rely on transport-layer error handling.
+The '''EncryptedProtocolMessage''' message is an encapsulating wrapper for any Payment Protocol message. It allows two-way, authenticated and encrypted communication of Payment Protocol messages in order to keep their contents secret. The message also includes a status code and status message that is used for error communication such that the protocol does not rely on transport-layer error handling.
<pre>
message EncryptedProtocolMessage {
required uint64 version = 1 [default = 1];

View File

@@ -74,7 +74,7 @@ Should the receiver reject a transaction, it should not attempt to propagate it
The receiver must add at least one input to the transaction (the "contributed inputs"). If the receiver has no inputs, it should use a 500 internal server error, so the client can send the transaction as per normal (or try again later). Its generally advised to only add a single contributed input, however they are circumstances where adding more than a single input can be useful.
-To prevent an attack where a receiver is continually sent variations of the same transaction to enumerate the receivers utxo set, it is essential that the receiver always returns the same contributed inputs when it's seen the same inputs.
+To prevent an attack where a receiver is continually sent variations of the same transaction to enumerate the receiver's utxo set, it is essential that the receiver always returns the same contributed inputs when it's seen the same inputs.
It is strongly preferable that the receiver makes an effort to pick a contributed input of the same type as the other transaction inputs if possible.
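For illustration, the anti-enumeration rule above amounts to making the contributed input a fixed function of the inputs the receiver has been shown; the sketch below uses invented names and is not reference code from the BIP:

    _contribution_cache = {}

    def contributed_input_for(seen_inputs, my_utxos):
        """Return the same contributed input whenever the same sender inputs are seen."""
        key = frozenset(seen_inputs)  # (txid, vout) outpoints from the sender's transaction
        if key not in _contribution_cache:
            if not my_utxos:
                # Per the text above, with no inputs to contribute the receiver
                # should answer with a 500 so the sender can broadcast as normal.
                raise RuntimeError("no inputs to contribute")
            _contribution_cache[key] = my_utxos[0]  # any stable choice works
        return _contribution_cache[key]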
@@ -97,7 +97,7 @@ The sender *must* do important validation on the partial transaction. They *must
=== Creating Final Transaction ===
-After validating the partial transaction, the sender signs all its inputs to create what is now the final transaction. It is important that the sender is careful to not be tricked by the receiver into signing other inputs it owns. The sender must only sign inputs that existed in the template transaction. If the sender is not careful the receiver may "contribute" inputs that are actually owned with by the sender, with the hope the sender blindly signs everything.
+After validating the partial transaction, the sender signs all its inputs to create what is now the final transaction. It is important that the sender is careful to not be tricked by the receiver into signing other inputs it owns. The sender must only sign inputs that existed in the template transaction. If the sender is not careful the receiver may "contribute" inputs that are actually owned by the sender, with the hope the sender blindly signs everything.
=== Transaction Publishing ===

View File

@@ -57,7 +57,7 @@ We assume a single BIP32 master root key. This specification is not concerned wi
For each application that requires its own wallet, a unique private key is derived from the BIP32 master root key using a fully hardened derivation path. The resulting private key (k) is then processed with HMAC-SHA512, where the key is "bip-entropy-from-k", and the message payload is the private key k: <code>HMAC-SHA512(key="bip-entropy-from-k", msg=k)</code>
<ref name="hmac-sha512">
-The reason for running the derived key through HMAC-SHA512 and truncating the result as necessary is to prevent leakage of the parent tree should the derived key (''k'') be compromised. While the specification requires the use of hardended key derivation which would prevent this, we cannot enforce hardened derivation, so this method ensures the derived entropy is hardened. Also, from a semantic point of view, since the purpose is to derive entropy and not a private key, we are required to transform the child key. This is done out of an abundance of caution, in order to ward off unwanted side effects should ''k'' be used for a dual purpose, including as a nonce ''hash(k)'', where undesirable and unforeseen interactions could occur.
+The reason for running the derived key through HMAC-SHA512 and truncating the result as necessary is to prevent leakage of the parent tree should the derived key (''k'') be compromised. While the specification requires the use of hardened key derivation which would prevent this, we cannot enforce hardened derivation, so this method ensures the derived entropy is hardened. Also, from a semantic point of view, since the purpose is to derive entropy and not a private key, we are required to transform the child key. This is done out of an abundance of caution, in order to ward off unwanted side effects should ''k'' be used for a dual purpose, including as a nonce ''hash(k)'', where undesirable and unforeseen interactions could occur.
</ref>.
The result produces 512 bits of entropy. Each application SHOULD use up to the required number of bits necessary for their operation, and truncate the rest.
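For illustration, the HMAC step described above takes only a few lines of standard-library Python; the all-zero k below is a placeholder for the hardened-derived private key, not a real value:

    import hashlib
    import hmac

    k = bytes(32)  # placeholder for the derived private key k
    entropy = hmac.new(key=b"bip-entropy-from-k", msg=k, digestmod=hashlib.sha512).digest()
    assert len(entropy) == 64  # 512 bits; applications truncate to what they need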

View File

@@ -212,7 +212,7 @@ There is a discussion on path templating for bitcoin script descriptors at https
<code>m/{44,49,84}'/0'/0'/{0-1}/{0-50000}</code> specifies a full template that matches both external and internal chains of BIP44, BIP49 and BIP84 paths, with a constraint that the address index cannot be larger than 50000
Its representation after parsing can be (using Python syntax, ignoring full/partial distinction):
-[[(2147483692, 2147483692), (2147483697, 2147483697), (2147483732, 2147483732)),
+[[(2147483692, 2147483692), (2147483697, 2147483697), (2147483732, 2147483732)],
[(2147483648, 2147483648)],
[(2147483648, 2147483648)],
[(0, 1)],

View File

@@ -93,7 +93,7 @@ The applied changes were the result of discussions on the mailing list and the P
* Removal of the 20-minute exception was discussed but dismissed since several reviewers insisted that it was a useful feature allowing non-standard transactions to be mined with just a CPU. The 20-minute exception also allows CPU users to move the chain forward (except on the first block that needs to be mined at actual difficulty) in case a large amount of hash power suddenly leaves the network. This would allow the chain to recover to a normal difficulty level faster if left stranded at high difficulty.
* Increase of minimum difficulty was discussed but dismissed as it would categorically prevent participation in the network using a CPU miner (utilizing the 20-minute exception).
* Increase of the delay in the 20-minute exception was suggested but did not receive significant support.
-* Re-enabling <code>acceptnonstdtxn</code> in bitcoin core by default was dismissed as it had led to confusion among layer-2s that had used testnet for transaction propagation tests and expected it to behave similar to mainnet.
+* Re-enabling <code>acceptnonstdtxn</code> in bitcoin core by default was dismissed as it had led to confusion among layer-2s that had used testnet for transaction propagation tests and expected it to behave similarly to mainnet.
* Motivating miners to re-org min difficulty blocks was suggested, but was considered out of scope for this BIP, since adoption of such a mining policy remains available after Testnet 4 is deployed. As 20-minute exception blocks only contribute work corresponding to difficulty one to the chaintip, and actual difficulty blocks should have a difficulty magnitudes higher, a block mined at actual difficulty could easily replace even multiple 20-minute exception blocks.
* Persisting the real difficulty in the version field was suggested to robustly prevent exploits of the 20-minute exception while allowing it to be used on any block, but did not receive a sufficient level of support to justify the more invasive change.

View File

@@ -32,7 +32,7 @@ This BIP is licensed under a Creative Commons Attribution-ShareAlike license. Al
A Merkle hash-tree is a directed acyclic graph data structure where all non-terminal nodes are labeled with the hash of combined labels or values of the node(s) it is connected to.
Bitcoin uses a unique Merkle hash-tree construct invented by Satoshi for calculating the block header commitment to the list of transactions in a block.
-While it would be convenient for new applications to make use of this same data structure so as to share implementation and maintenance costs, there are three principle drawbacks to reuse.
+While it would be convenient for new applications to make use of this same data structure so as to share implementation and maintenance costs, there are three principal drawbacks to reuse.
First, Satoshi's Merkle hash-tree has a serious vulnerability[1] related to duplicate tree entries that can cause bugs in protocols that use it.
While it is possible to secure protocols and implementations against exploit of this flaw, it requires foresight and it is a bit more tricky to design secure protocols that work around this vulnerability.

View File

@@ -145,7 +145,7 @@ Fundamental disagreements and controversies are part of social
systems, like the one defined as the human participants in the Bitcoin
network. Without judging the motivation of the rule discrepancies or
what rules were in place first, we're defining schism[1] hardforks as
-those in which - for whatever reason - users are consiously going to validate 2
+those in which - for whatever reason - users are consciously going to validate 2
different sets of consensus rules. Since they will validate different
rulesets, they will end up following 2 different chains for at least
some time, maybe forever.
@@ -154,7 +154,7 @@ One possible result observed in the past[non_proportional_inflatacoin_fork]
is that one of the chains rapidly disappears, but nothing indicates
that this must always be the case.
-While 2 chains cohexist, they can be considered two different
+While 2 chains coexist, they can be considered two different
currencies.
We could say that bitcoin becomes bitcoinA and bitcoinB. The implications for market
capitalization are completely unpredictable,

View File

@@ -17,7 +17,7 @@
==Abstract==
A general approach to bitcoin contracts is to fully enumerate the possible spending conditions and then program verification of these conditions into a single script.
-At redemption, the spending condition used is explicitly selected, e.g. by pushing a value on the witness stack which cascades through a series if if/else constructs.
+At redemption, the spending condition used is explicitly selected, e.g. by pushing a value on the witness stack that cascades through a series of if/else constructs.
This approach has significant downsides, such as requiring all program pathways to be visible in the scriptPubKey or redeem script, even those which are not used at validation.
This wastes space on the block chain, restricts the size of possible scripts due to push limits, and impacts both privacy and fungibility as details of the contract can often be specific to the user.
@@ -67,7 +67,7 @@ With Merkle commitments to policy these size and runtime limitations constrain t
The MERKLEBRANCHVERIFY opcode uses fast Merkle hash trees as specified by BIP98[1] rather than the construct used by Satoshi for committing transactions to the block header as the later has a known vulnerability relating to duplicate entries that introduces a source of malleability to downstream protocols[4].
A source of malleability in Merkle proofs could potentially lead to spend vulnerabilities in protocols that use MERKLEBRANCHVERIFY.
For example, a compact 2-of-N policy could be written by using MERKLEBRANCHVERIFY to prove that two keys are extracted from the same tree, one at a time, then checking the proofs for bitwise equality to make sure the same entry wasn't used twice.
-With the vulnerable Merkle tree implementation there are privledged positions in unbalanced Merkle trees that allow multiple proofs to be constructed for the same, single entry.
+With the vulnerable Merkle tree implementation there are privileged positions in unbalanced Merkle trees that allow multiple proofs to be constructed for the same, single entry.
BIP141 (Segregated Witness)[3] provides support for a powerful form of script upgrades called script versioning, which is able to achieve the sort of upgrades which would previously have been hard-forks.
If script versioning were used for deployment then MERKLEBRANCHVERIFY could be written to consume its inputs, which would provide a small 2-byte savings for many anticipated use cases.

View File

@@ -370,7 +370,7 @@ OP_CHECKTEMPLATEVERIFY is not subject to this sort of vulnerability as the
hashes are effectively tagged externally, that is, by OP_CHECKTEMPLATEVERIFY
itself and therefore cannot be confused for another hash.
-It would be a conservative design decisison to make it a tagged hash even if
+It would be a conservative design decision to make it a tagged hash even if
there was no obvious benefit and no cost. However, in the future, if OP_CAT were
to be introduced to Bitcoin, it would make programs which dynamically build
OP_CHECKTEMPLATEVERIFY hashes less space-efficient. Therefore, bare untagged hashes
@@ -472,7 +472,7 @@ from the leaves of the CHECKTEMPLATEVERIFY tree.
Key-reuse with CHECKTEMPLATEVERIFY may be used as a form of "forwarding address contract".
A forwarding address is an address which can automatically execute in a predefined way.
-For example, a exchange's hot wallet might use an address which can automatically be moved to a cold
+For example, an exchange's hot wallet might use an address which can automatically be moved to a cold
storage address after a relative timeout.
The issue is that reusing addresses in this way can lead to loss of funds.

View File

@@ -59,7 +59,7 @@ inputs present in the transaction.
A coalescing transaction is formulated the exact same way as a version 1 transaction
with one exception: each input is treated as a "wildcard input".
-A wildcard input beings the value of all inputs with the exact same scriptPubKey
+A wildcard input being the value of all inputs with the exact same scriptPubKey
in a block lower or equal to the block the wildcard input is confirmed into.
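For illustration, the wildcard-input rule above reduces to a simple sum; the helper below uses invented names and a simplified UTXO representation, and is not part of the proposal:

    def wildcard_value(utxos, script_pubkey, confirmation_height):
        """utxos: iterable of (scriptPubKey, block_height, value) tuples."""
        return sum(value for spk, height, value in utxos
                   if spk == script_pubkey and height <= confirmation_height)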
== Changes needed to implement ==

View File

@@ -91,7 +91,7 @@ The chacha20-poly1305@openssh.com specified and defined by openssh [5] combines
<code>K_2</code> must be used in conjunction with poly1305 to build an AEAD.
-Optimized implementations of ChaCha20-Poly1305 are very fast in general, therefore it is very likely that encrypted messages require less CPU cycles per byte then the current unencrypted p2p message format. A quick analysis by Pieter Wuille of the current ''standard implementations'' has shown that SHA256 requires more CPU cycles per byte then ChaCha20 & Poly1304.
+Optimized implementations of ChaCha20-Poly1305 are very fast in general, therefore it is very likely that encrypted messages require less CPU cycles per byte than the current unencrypted p2p message format. A quick analysis by Pieter Wuille of the current ''standard implementations'' has shown that SHA256 requires more CPU cycles per byte than ChaCha20 & Poly1304.
=== The <code>encack</code> message type ===

View File

@@ -26,7 +26,7 @@ This BIP is licensed under the 2-clause BSD license.
===Motivation===
Tor v3 hidden services are part of the stable release of Tor since version 0.3.2.9. They have
-various advantages compared to the old hidden services, among which better encryption and privacy
+various advantages compared to the old hidden services, among which are better encryption and privacy
<ref>[https://gitweb.torproject.org/torspec.git/tree/rend-spec-v3.txt Tor Rendezvous Specification - Version 3]</ref>.
These services have 256 bit addresses and thus do not fit in the existing <code>addr</code> message, which encapsulates onion addresses in OnionCat IPv6 addresses.

View File

@@ -355,7 +355,7 @@ random-access disk reads.
Nodes SHOULD NOT generate filters dynamically on request, as malicious peers may
be able to perform DoS attacks by requesting small filters derived from large
-blocks. This would require an asymmetical amount of I/O on the node to compute
+blocks. This would require an asymmetrical amount of I/O on the node to compute
and serve, similar to attacks against BIP 37 enabled nodes noted in BIP 111.
Nodes MAY prune block data after generating and storing all filters for a block.

View File

@@ -42,7 +42,7 @@ def modinv(a, n):
if a == 0:
return 0
if sys.hexversion >= 0x3080000:
-# More efficient version available in Python 3.8.
+# A more efficient version is available in Python 3.8.
return pow(a, -1, n)
t1, t2 = 0, 1
r1, r2 = n, a
@@ -174,7 +174,7 @@ class FE:
"""Compute all cube roots of a field element, if any.
Due to the fact that our modulus p is of the form p = 7 (mod 9), one cube root
-can always be computed by raising to the power (p + 2) / 9. The other roots
+can always be computed by raising to the power of (p + 2) / 9. The other roots
(if any) can be found by multiplying with the two non-trivial cube roots of 1.
To see why: p-1 = 0 (mod 3), so 3 divides the order of the multiplicative group,
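For illustration, the exponentiation rule quoted in this hunk can be checked numerically; the sketch below assumes the field prime is secp256k1's p, which does satisfy p = 7 (mod 9) as the docstring states:

    p = 2**256 - 2**32 - 977     # assumed: secp256k1 field prime
    assert p % 9 == 7

    x = 0xDEADBEEF               # any nonzero field element
    a = pow(x, 3, p)             # a cubic residue by construction
    r = pow(a, (p + 2) // 9, p)  # candidate cube root
    assert pow(r, 3, p) == a     # r is indeed a cube root of a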

View File

@@ -468,8 +468,7 @@ import sys
def fromhex_all(l):
return [bytes.fromhex(l_i) for l_i in l]
-# Check that calling `try_fn` raises a `exception`. If `exception` is raised,
-# examine it with `except_fn`.
+# Check if calling `try_fn` raises an exception. If yes, examine it with `except_fn`.
def assert_raises(exception, try_fn, except_fn):
raised = False
try: