54b39cfb342d10a448d49299c715e3a25c2aca4a Add release notes (stickies-v)
f959fc0397c3f3615e99bc28d2df549d9d52f277 Update /<count>/ endpoints to use a '?count=' query parameter instead (stickies-v)
a09497614e9bb603fff36286d9611a25b23eeb02 Add GetQueryParameter helper function (stickies-v)
fff771ee864975cee8c831651239bac95503c37a Handle query string when parsing data format (stickies-v)
c1aad1b3b95b7c6bdf05e0c2095aba2f2db8310b scripted-diff: rename RetFormat to RESTResponseFormat (stickies-v)
9f1c54787c81177dd56a31c881a9ad2834a122dc Refactoring: move declarations to rest.h (stickies-v)

Pull request description:

In RESTful APIs, [typically](https://rapidapi.com/blog/api-glossary/parameters/query/) path parameters (e.g. `/some/unique/resource/`) are used to represent resources, and query parameters (e.g. `?sort=asc`) are used to control how these resources are loaded, e.g. through sorting, pagination, filtering, ...

As first [discussed in #17631](https://github.com/bitcoin/bitcoin/pull/17631#discussion_r733031180), the [current REST api](https://github.com/bitcoin/bitcoin/blob/master/doc/REST-interface.md) contains two endpoints, `/headers/` and `/blockfilterheaders/`, that rather unexpectedly use path parameters to control how many (filter) headers are returned in the response. While this is no critical issue, it is unintuitive, and we are still early enough to easily phase this behaviour out and ensure new endpoints (if any) do not have to stick to non-standard behaviour just for internal consistency.

In this PR, a new `HTTPRequest::GetQueryParameter` method is introduced to easily parse query parameters, as well as two new `/headers/` and `/blockfilterheaders/` endpoints that use a count query parameter. The old path parameter-based endpoints are kept without too much overhead, but the documentation now points to the new query parameter-based endpoints as the default interface to encourage standardness.

## Behaviour change

### New endpoints and default values

`/headers/` and `/blockfilterheaders/` now have 2 new endpoints that contain query parameters (`?count=<count>`) instead of path parameters (`/<count>/`), as described in REST-interface.md. Since query parameters can easily have default values, I have set this at 5 for both endpoints.

**headers**
`GET /rest/headers/<BLOCK-HASH>.<bin|hex|json>?count=<COUNT=5>` should now be used instead of
`GET /rest/headers/<COUNT>/<BLOCK-HASH>.<bin|hex|json>`

**blockfilterheaders**
`GET /rest/blockfilterheaders/<FILTERTYPE>/<BLOCK-HASH>.<bin|hex|json>?count=<COUNT=5>` should now be used instead of
`GET /rest/blockfilterheaders/<FILTERTYPE>/<COUNT>/<BLOCK-HASH>.<bin|hex|json>`

### Some previously invalid API calls are now valid

API calls that contained query strings in the URI could not be parsed prior to this PR. This PR changes behaviour in that previously invalid calls (e.g. `GET /rest/headers/5/somehash.json?someunusedparam=foo`) would now become valid, as the query parameters are properly parsed, and discarded if unused.

For example, prior to this PR, adding an irrelevant `someparam` parameter would be illegal:

```
GET /rest/headers/5/0000004c6aad0c89c1c060e8e116dcd849e0554935cd78ff9c6a398abeac6eda.json?someparam=true
-> Invalid hash: 0000004c6aad0c89c1c060e8e116dcd849e0554935cd78ff9c6a398abeac6eda.json?someparam=true
```

**This behaviour change affects all rest endpoints, not just the 2 new ones introduced here.**

*(Note: I'd be open to implementing additional logic to refuse requests containing unrecognized query parameters to minimize behaviour change, but for the endpoints that we currently have I don't really see the point of that added complexity. E.g. I don't see any scenarios where misspelling a parameter could lead to harmful outcomes.)*

## Using the REST API

To run the API HTTP server, start a bitcoind instance with the `-rest` flag enabled. To use the `blockfilterheaders` endpoint, you'll also need to set `-blockfilterindex=1`:

```
./bitcoind -signet -rest -blockfilterindex=1
```

As soon as bitcoind is fully up and running, you should be able to query the API, for example by using curl on the command line: `curl "127.0.0.1:38332/rest/chaininfo.json"`. To more easily parse the JSON output, you can also use tools like `jq` or `json_pp`, e.g.:

```
curl -s "localhost:38332/rest/blockfilterheaders/basic/0000004c6aad0c89c1c060e8e116dcd849e0554935cd78ff9c6a398abeac6eda.json?count=2" | json_pp
```

## To do

- [x] update `doc/release-notes`

## Feedback

This is my first PR (hooray!). Please don't hold back on any feedback/comments/nits/... you may have, big or small, whether they are code, process, language, ... related. I welcome private messages too if there's anything you don't want to clutter the PR with. I'm here to learn and am grateful for everyone's input.

ACKs for top commit:
  stickies-v:
    I've had to push a tiny doc update to `REST-interface.md` (`git range-diff 219d728 9aac438 54b39cf`) since this was not merged for v23, but since there are no significant changes beyond theStack and jnewbery's ACKs I think this PR is now ready to be considered for merging? @MarcoFalke
  jnewbery:
    ACK 54b39cfb342d10a448d49299c715e3a25c2aca4a
  theStack:
    re-ACK 54b39cfb342d10a448d49299c715e3a25c2aca4a

Tree-SHA512: 3b393ffde34f25605ca12c0b1300799a19684b816a1d03aed38b0f5439df47bfe6a589ffbcd7b83fd2def6c9d00a1bae5e45b1d18df4ae998c617c709990f83f
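Besides curl, the new query-parameter endpoints can of course be hit from any HTTP client. As a minimal, hedged sketch (not part of the PR): assuming a signet node running with `-rest` on port 38332, and using a placeholder block hash, fetching two headers with the new `?count=` parameter from Python could look like this:

```python
# Illustrative only: port (38332, signet) and block hash are placeholders.
import json
from urllib.request import urlopen

BLOCK_HASH = "0000004c6aad0c89c1c060e8e116dcd849e0554935cd78ff9c6a398abeac6eda"
url = f"http://127.0.0.1:38332/rest/headers/{BLOCK_HASH}.json?count=2"

with urlopen(url) as response:
    headers = json.load(response)

for header in headers:
    print(header["hash"], header["height"])
```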
Functional tests
Writing Functional Tests
Example test
The file test/functional/example_test.py is a heavily commented example of a test case that uses both the RPC and P2P interfaces. If you are writing your first test, copy that file and modify to fit your needs.
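For orientation, a stripped-down sketch of the overall shape such a test takes (class name and checks are illustrative, not copied from example_test.py; the exact main() invocation can vary between framework versions):

```python
#!/usr/bin/env python3
"""Illustrative module docstring: describe what the test checks and how."""
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal


class MinimalExampleTest(BitcoinTestFramework):
    def set_test_params(self):
        self.num_nodes = 1  # default settings: one node, cached 200-block chain

    def run_test(self):
        self.log.info("Check that the node reports the pre-mined chain height")
        assert_equal(self.nodes[0].getblockcount(), 200)


if __name__ == '__main__':
    MinimalExampleTest().main()
```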
Coverage
Running test/functional/test_runner.py
with the --coverage
argument tracks which RPCs are
called by the tests and prints a report of uncovered RPCs in the summary. This
can be used (along with the --extended
argument) to find out which RPCs we
don't have test cases for.
Style guidelines
- Where possible, try to adhere to PEP-8 guidelines
- Use a python linter like flake8 before submitting PRs to catch common style nits (eg trailing whitespace, unused imports, etc)
- The oldest supported Python version is specified in doc/dependencies.md. Consider using pyenv, which checks .python-version, to prevent accidentally introducing modern syntax from an unsupported Python version. The CI linter job also checks this, but possibly not in all cases.
- See the python lint script that checks for violations that could lead to bugs and issues in the test code.
- Use type hints in your code to improve code readability and to detect possible bugs earlier.
- Avoid wildcard imports
- Use a module-level docstring to describe what the test is testing, and how it is testing it.
- When subclassing the BitcoinTestFramework, place overrides for the set_test_params(), add_options() and setup_xxxx() methods at the top of the subclass, then locally-defined helper methods, then the run_test() method (see the sketch after this list).
- Use f'{x}' for string formatting in preference to '{}'.format(x) or '%s' % x.
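A sketch of that layout, with invented names (the option and helper are purely illustrative), might look as follows:

```python
#!/usr/bin/env python3
"""Module docstring describing what is tested and how it is tested."""
from test_framework.test_framework import BitcoinTestFramework


class StyleLayoutExampleTest(BitcoinTestFramework):
    # Framework overrides first...
    def set_test_params(self):
        self.num_nodes = 2

    def add_options(self, parser):
        parser.add_argument("--strict", action="store_true", help="run extra checks")

    # ...then locally-defined helper methods...
    def assert_all_at_height(self, expected_height: int) -> None:
        for node in self.nodes:
            height = node.getblockcount()
            self.log.info(f"node at height {height}")  # f-string formatting
            assert height == expected_height

    # ...and run_test() last.
    def run_test(self):
        self.assert_all_at_height(200)


if __name__ == '__main__':
    StyleLayoutExampleTest().main()
```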
Naming guidelines
- Name the test <area>_<test>.py, where <area> can be one of the following:
  - feature for tests for full features that aren't wallet/mining/mempool, eg feature_rbf.py
  - interface for tests for other interfaces (REST, ZMQ, etc), eg interface_rest.py
  - mempool for tests for mempool behaviour, eg mempool_reorg.py
  - mining for tests for mining features, eg mining_prioritisetransaction.py
  - p2p for tests that explicitly test the p2p interface, eg p2p_disconnect_ban.py
  - rpc for tests for individual RPC methods or features, eg rpc_listtransactions.py
  - tool for tests for tools, eg tool_wallet.py
  - wallet for tests for wallet features, eg wallet_keypool.py
- Use an underscore to separate words in <test>
  - exception: for tests for specific RPCs or command line options which don't include underscores, name the test after the exact RPC or argument name, eg rpc_decodescript.py, not rpc_decode_script.py
- Don't use the redundant word test in the name, eg interface_zmq.py, not interface_zmq_test.py
General test-writing advice
- Instead of inline comments or no test documentation at all, log the comments to the test log, e.g. self.log.info('Create enough transactions to fill a block'). Logs make the test code easier to read and the test logic easier to debug.
- Set self.num_nodes to the minimum number of nodes necessary for the test. Having additional unrequired nodes adds to the execution time of the test as well as memory/CPU/disk requirements (which is important when running tests in parallel).
- Avoid stop-starting the nodes multiple times during the test if possible. A stop-start takes several seconds, so doing it several times blows up the runtime of the test.
- Set the self.setup_clean_chain variable in set_test_params() to True to initialize an empty blockchain and start from the Genesis block, rather than load a premined blockchain from cache with the default value of False. The cached data directories contain a 200-block pre-mined blockchain with the spendable mining rewards being split between four nodes. Each node has 25 mature block subsidies (25x50=1250 BTC) in its wallet. Using them is much more efficient than mining blocks in your test.
- When calling RPCs with lots of arguments, consider using named keyword arguments instead of positional arguments to make the intent of the call clear to readers (see the sketch after this list).
- Many of the core test framework classes such as CBlock and CTransaction don't allow new attributes to be added to their objects at runtime like typical Python objects allow. This helps prevent unpredictable side effects from typographical errors or usage of the objects outside of their intended purpose.
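A hedged sketch of these settings and of a named-keyword RPC call (the class name is illustrative; the RPC parameter names follow the getblock help text):

```python
from test_framework.test_framework import BitcoinTestFramework


class AdviceExampleTest(BitcoinTestFramework):
    def set_test_params(self):
        self.num_nodes = 1              # only as many nodes as the test needs
        self.setup_clean_chain = True   # start from the Genesis block, not the cached chain

    def run_test(self):
        node = self.nodes[0]
        self.log.info("Fetch the genesis block using named keyword arguments")
        genesis_hash = node.getblockhash(0)
        # Named arguments make the intent of each RPC parameter explicit:
        block = node.getblock(blockhash=genesis_hash, verbosity=1)
        assert block["height"] == 0


if __name__ == '__main__':
    AdviceExampleTest().main()
```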
RPC and P2P definitions
Test writers may find it helpful to refer to the definitions for the RPC and P2P messages. These can be found in the following source files:
- /src/rpc/* for RPCs
- /src/wallet/rpc* for wallet RPCs
- ProcessMessage() in /src/net_processing.cpp for parsing P2P messages
Using the P2P interface
- P2Ps can be used to test specific P2P protocol behavior. p2p.py contains test framework p2p objects and messages.py contains all the definitions for objects passed over the network (CBlock, CTransaction, etc, along with the network-level wrappers for them, msg_block, msg_tx, etc).
- P2P tests have two threads. One thread handles all network communication with the bitcoind(s) being tested in a callback-based event loop; the other implements the test logic.
- P2PConnection is the class used to connect to a bitcoind. P2PInterface contains the higher level logic for processing P2P payloads and connecting to the Bitcoin Core node application logic. For custom behaviour, subclass the P2PInterface object and override the callback methods (see the sketch at the end of this section).

P2PConnections can be used as such:
```python
p2p_conn = node.add_p2p_connection(P2PInterface())
p2p_conn.send_and_ping(msg)
```
They can also be referenced by indexing into a TestNode's p2ps list, which
contains the list of test framework p2p objects connected to itself
(it does not include any TestNodes):
```python
node.p2ps[0].sync_with_ping()
```
More examples can be found in p2p_unrequested_blocks.py, p2p_compactblocks.py.
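For the subclassing pattern mentioned above, a rough sketch (the inv-counting behaviour is invented purely for illustration) could look like this:

```python
from test_framework.p2p import P2PInterface


class InvCountingPeer(P2PInterface):
    """Hypothetical P2PInterface subclass that counts received inv messages."""

    def __init__(self):
        super().__init__()
        self.inv_count = 0

    def on_inv(self, message):
        # Callback run on the network thread for every received inv message.
        self.inv_count += 1
        super().on_inv(message)  # keep the default reaction (request the announced data)


# Usage inside run_test(), assuming `node` is a TestNode:
# peer = node.add_p2p_connection(InvCountingPeer())
# peer.sync_with_ping()
# self.log.info(f"received {peer.inv_count} inv messages")
```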
Prototyping tests
The TestShell
class exposes the BitcoinTestFramework
functionality to interactive Python3 environments and can be used to prototype
tests. This may be especially useful in a REPL environment with session logging
utilities, such as
IPython.
The logs of such interactive sessions can later be adapted into permanent test
cases.
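A hedged sketch of such a session (see the TestShell documentation for the authoritative interface; the exact setup arguments may differ between versions):

```python
# Started from an interactive Python3/IPython session in test/functional/
from test_framework.test_shell import TestShell

test = TestShell().setup(num_nodes=2)                # spin up two connected regtest nodes
print(test.nodes[0].getblockchaininfo()["blocks"])   # issue RPCs against them interactively
test.shutdown()                                      # stop the nodes and clean up the datadirs
```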
Test framework modules
The following are useful modules for test developers. They are located in test/functional/test_framework/.
authproxy.py
Taken from the python-bitcoinrpc repository.
test_framework.py
Base class for functional tests.
util.py
Generally useful functions.
p2p.py
Test objects for interacting with a bitcoind node over the p2p interface.
script.py
Utilities for manipulating transaction scripts (originally from python-bitcoinlib)
key.py
Test-only secp256k1 elliptic curve implementation
blocktools.py
Helper functions for creating blocks and transactions.
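To give a feel for how these modules fit together, a small hedged sketch (the helper names below exist in the framework at the time of writing but may move between releases; the usage comments are illustrative):

```python
from test_framework.blocktools import create_block, create_coinbase
from test_framework.messages import CBlock
from test_framework.util import assert_equal


def build_empty_block(prev_hash: int, height: int, block_time: int) -> CBlock:
    """Assemble and solve an empty block on top of prev_hash."""
    coinbase = create_coinbase(height)
    block = create_block(prev_hash, coinbase, block_time)
    block.solve()  # grind the nonce until the regtest proof-of-work is valid
    return block


# e.g. inside run_test(), assuming `node` is a TestNode at height 200:
# tip = node.getbestblockhash()
# block = build_empty_block(int(tip, 16), 201, node.getblock(tip)["time"] + 1)
# node.submitblock(block.serialize().hex())
# assert_equal(node.getblockcount(), 201)
```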
Benchmarking with perf
An easy way to profile node performance during functional tests is provided
for Linux platforms using perf
.
Perf will sample the running node and will generate profile data in the node's
datadir. The profile data can then be presented using perf report
or a graphical
tool like hotspot.
There are two ways of invoking perf: one is to use the --perf
flag when
running tests, which will profile each node during the entire test run: perf
begins to profile when the node starts and ends when it shuts down. The other
way is to use the profile_with_perf context manager, e.g.
```python
with node.profile_with_perf("send-big-msgs"):
    # Perform activity on the node you're interested in profiling, e.g.:
    for _ in range(10000):
        node.p2ps[0].send_message(some_large_message)
```
To see useful textual output, run
```
perf report -i /path/to/datadir/send-big-msgs.perf.data.xxxx --stdio | c++filt | less
```
See also:
- Installing perf
- Perf examples
- Hotspot: a GUI for perf output analysis