Merge branch 'master' into release/0.14.0

commit 69b184a0a4

.github/ISSUE_TEMPLATE/bug_report.md (vendored, new file, 26 lines)
@@ -0,0 +1,26 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: 'bug'
assignees: ''

---

**Describe the bug**
<!-- A clear and concise description of what the bug is. -->

**To Reproduce**
<!-- Steps or code to reproduce the behavior. -->

**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->

**Build environment**
- BDK tag/commit: <!-- e.g. v0.13.0, 3a07614 -->
- OS+version: <!-- e.g. ubuntu 20.04.01, macOS 12.0.1, windows -->
- Rust/Cargo version: <!-- e.g. 1.56.0 -->
- Rust/Cargo target: <!-- e.g. x86_64-apple-darwin, x86_64-unknown-linux-gnu, etc. -->

**Additional context**
<!-- Add any other context about the problem here. -->
.github/ISSUE_TEMPLATE/summer_project.md (vendored, new file, 77 lines)
@@ -0,0 +1,77 @@
---
name: Summer of Bitcoin Project
about: Template to suggest a new https://www.summerofbitcoin.org/ project.
title: ''
labels: 'summer-of-bitcoin'
assignees: ''

---

<!--
## Overview

Project ideas are scoped for a university-level student with a basic background in CS and bitcoin
fundamentals - achievable over 12 weeks. Below are just a few types of ideas:

- Low-hanging fruit: Relatively short projects with clear goals; requires basic technical knowledge
  and minimal familiarity with the codebase.
- Core development: These projects derive from the ongoing work of the core of your development
  team. The list of features and bugs is never-ending, and help is always welcome.
- Risky/Exploratory: These projects push the scope boundaries of your development effort. They
  might require expertise in an area not covered by your current development team. They might take
  advantage of a new technology. There is a reasonable chance that the project might be less
  successful, but the potential rewards make it worth the attempt.
- Infrastructure/Automation: These projects are the code that your organization uses to get its
  development work done; for example, projects that improve the automation of releases, regression
  tests and automated builds. This is a category where a Summer of Bitcoin student can be really
  helpful, doing work that the development team has been putting off while they focus on core
  development.
- Quality Assurance/Testing: Projects that work on and test your project's software development
  process. Additionally, projects that involve a thorough test and review of individual PRs.
- Fun/Peripheral: These projects might not be related to the current core development focus, but
  create new innovations and new perspectives for your project.
-->

**Descriptive Title**
<!-- Description: 3-7 sentences describing the project background and tasks to be done. -->

**Expected Outcomes**
<!-- Short bullet list describing what is to be accomplished -->

**Resources**
<!-- 2-3 reading materials for the candidate to learn about the repo, project, scope, etc. -->
<!-- Recommended reading such as a developer/contributor guide -->
<!-- [An example paper citation](https://arxiv.org/pdf/1802.08091.pdf) -->
<!-- [An example existing issue](https://github.com/opencv/opencv/issues/11013) -->
<!-- [An existing related module](https://github.com/opencv/opencv_contrib/tree/master/modules/optflow) -->

**Skills Required**
<!-- 3-4 technical skills that the candidate should know -->
<!-- hands-on experience with git -->
<!-- mastery of and experience coding in C++ -->
<!-- basic knowledge of matrix and tensor computations, college coursework in cryptography -->
<!-- strong mathematical background -->
<!-- Bonus - has experience with React Native. Best if you have also worked with OSS-Fuzz -->

**Mentor(s)**
<!-- names of mentor(s) for this project go here -->

**Difficulty**
<!-- Easy, Medium, Hard -->

**Competency Test (optional)**
<!-- 2-3 technical tasks related to the project idea or repository you’d like a candidate to
perform in order to demonstrate competency: good first bugs, warm-up exercises -->
<!-- ex. Read the instructions here to get Bitcoin Core running on your machine -->
<!-- ex. Pick an issue labeled as “newcomer” in the repository, and send a merge request to the
repository. You can also suggest some other improvement that we did not think of yet, or
something that you find interesting or useful -->
<!-- ex. Fixes for coding style are usually easy to do, and are good issues for first-time
contributions for those learning how to interact with the project. After you are done with the
coding style issue, try making a different contribution. -->
<!-- ex. Set up a full Debian packaging development environment and learn the basics of Debian
packaging. Then identify and package the missing dependencies to package Specter Desktop -->
<!-- ex. Write a pull parser for CSV files. You'll be judged on your decisions about storing the
parser state and how flexible the parser is to wrap in other scenarios. -->
<!-- ex. Stretch Goal: Implement some basic metaprogram/app to prove you're very familiar with BDK.
Be prepared to make adjustments as we judge your solution. -->
@@ -6,6 +6,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

 ## [Unreleased]

+- Overhauled sync logic for electrum and esplora.
+- Unify ureq and reqwest esplora backends to have the same configuration parameters. This means reqwest now has a timeout parameter and ureq has a concurrency parameter.
+- Fixed esplora fee estimation.

 ## [v0.14.0] - [v0.13.0]

 - BIP39 implementation dependency, in `keys::bip39` changed from tiny-bip39 to rust-bip39.
@@ -472,7 +472,7 @@ pub struct CompactFiltersBlockchainConfig {
     pub peers: Vec<BitcoinPeerConfig>,
     /// Network used
     pub network: Network,
-    /// Storage dir to save partially downloaded headers and full blocks
+    /// Storage dir to save partially downloaded headers and full blocks. Should be a separate directory per descriptor. Consider using [crate::wallet::wallet_name_from_descriptor] for this.
     pub storage_dir: String,
     /// Optionally skip initial `skip_blocks` blocks (default: 0)
     pub skip_blocks: Option<usize>,
@@ -24,20 +24,20 @@
 //! # Ok::<(), bdk::Error>(())
 //! ```

-use std::collections::HashSet;
+use std::collections::{HashMap, HashSet};

 #[allow(unused_imports)]
 use log::{debug, error, info, trace};

-use bitcoin::{BlockHeader, Script, Transaction, Txid};
+use bitcoin::{Transaction, Txid};

 use electrum_client::{Client, ConfigBuilder, ElectrumApi, Socks5Config};

-use self::utils::{ElectrumLikeSync, ElsGetHistoryRes};
+use super::script_sync::Request;
 use super::*;
-use crate::database::BatchDatabase;
+use crate::database::{BatchDatabase, Database};
 use crate::error::Error;
-use crate::FeeRate;
+use crate::{BlockTime, FeeRate};

 /// Wrapper over an Electrum Client that implements the required blockchain traits
 ///
@@ -71,10 +71,139 @@ impl Blockchain for ElectrumBlockchain {
     fn setup<D: BatchDatabase, P: Progress>(
         &self,
         database: &mut D,
-        progress_update: P,
+        _progress_update: P,
     ) -> Result<(), Error> {
-        self.client
-            .electrum_like_setup(self.stop_gap, database, progress_update)
+        let mut request = script_sync::start(database, self.stop_gap)?;
+        let mut block_times = HashMap::<u32, u32>::new();
+        let mut txid_to_height = HashMap::<Txid, u32>::new();
+        let mut tx_cache = TxCache::new(database, &self.client);
+        let chunk_size = self.stop_gap;
+        // The electrum server has been inconsistent somehow in its responses during sync. For
+        // example, we do a batch request of transactions and the response contains fewer
+        // transactions than in the request. This should never happen but we don't want to panic.
+        let electrum_goof = || Error::Generic("electrum server misbehaving".to_string());
+
+        let batch_update = loop {
+            request = match request {
+                Request::Script(script_req) => {
+                    let scripts = script_req.request().take(chunk_size);
+                    let txids_per_script: Vec<Vec<_>> = self
+                        .client
+                        .batch_script_get_history(scripts)
+                        .map_err(Error::Electrum)?
+                        .into_iter()
+                        .map(|txs| {
+                            txs.into_iter()
+                                .map(|tx| {
+                                    let tx_height = match tx.height {
+                                        none if none <= 0 => None,
+                                        height => {
+                                            txid_to_height.insert(tx.tx_hash, height as u32);
+                                            Some(height as u32)
+                                        }
+                                    };
+                                    (tx.tx_hash, tx_height)
+                                })
+                                .collect()
+                        })
+                        .collect();
+
+                    script_req.satisfy(txids_per_script)?
+                }
+
+                Request::Conftime(conftime_req) => {
+                    // collect up to chunk_size heights to fetch from electrum
+                    let needs_block_height = {
+                        let mut needs_block_height_iter = conftime_req
+                            .request()
+                            .filter_map(|txid| txid_to_height.get(txid).cloned())
+                            .filter(|height| block_times.get(height).is_none());
+                        let mut needs_block_height = HashSet::new();
+
+                        while needs_block_height.len() < chunk_size {
+                            match needs_block_height_iter.next() {
+                                Some(height) => needs_block_height.insert(height),
+                                None => break,
+                            };
+                        }
+                        needs_block_height
+                    };
+
+                    let new_block_headers = self
+                        .client
+                        .batch_block_header(needs_block_height.iter().cloned())?;
+
+                    for (height, header) in needs_block_height.into_iter().zip(new_block_headers) {
+                        block_times.insert(height, header.time);
+                    }
+
+                    let conftimes = conftime_req
+                        .request()
+                        .take(chunk_size)
+                        .map(|txid| {
+                            let confirmation_time = txid_to_height
+                                .get(txid)
+                                .map(|height| {
+                                    let timestamp =
+                                        *block_times.get(height).ok_or_else(electrum_goof)?;
+                                    Result::<_, Error>::Ok(BlockTime {
+                                        height: *height,
+                                        timestamp: timestamp.into(),
+                                    })
+                                })
+                                .transpose()?;
+                            Ok(confirmation_time)
+                        })
+                        .collect::<Result<_, Error>>()?;
+
+                    conftime_req.satisfy(conftimes)?
+                }
+                Request::Tx(tx_req) => {
+                    let needs_full = tx_req.request().take(chunk_size);
+                    tx_cache.save_txs(needs_full.clone())?;
+                    let full_transactions = needs_full
+                        .map(|txid| tx_cache.get(*txid).ok_or_else(electrum_goof))
+                        .collect::<Result<Vec<_>, _>>()?;
+                    let input_txs = full_transactions.iter().flat_map(|tx| {
+                        tx.input
+                            .iter()
+                            .filter(|input| !input.previous_output.is_null())
+                            .map(|input| &input.previous_output.txid)
+                    });
+                    tx_cache.save_txs(input_txs)?;
+
+                    let full_details = full_transactions
+                        .into_iter()
+                        .map(|tx| {
+                            let prev_outputs = tx
+                                .input
+                                .iter()
+                                .map(|input| {
+                                    if input.previous_output.is_null() {
+                                        return Ok(None);
+                                    }
+                                    let prev_tx = tx_cache
+                                        .get(input.previous_output.txid)
+                                        .ok_or_else(electrum_goof)?;
+                                    let txout = prev_tx
+                                        .output
+                                        .get(input.previous_output.vout as usize)
+                                        .ok_or_else(electrum_goof)?;
+                                    Ok(Some(txout.clone()))
+                                })
+                                .collect::<Result<Vec<_>, Error>>()?;
+                            Ok((prev_outputs, tx))
+                        })
+                        .collect::<Result<Vec<_>, Error>>()?;
+
+                    tx_req.satisfy(full_details)?
+                }
+                Request::Finish(batch_update) => break batch_update,
+            }
+        };
+
+        database.commit_batch(batch_update)?;
+        Ok(())
     }
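The new `setup` drives sync as a pull-based state machine: `script_sync` emits requests (script histories, confirmation times, full transactions) and the backend satisfies each one until `Finish` yields a batch update to commit. A minimal, std-only sketch of that loop shape follows; the enum and its payloads are illustrative stand-ins, not the real `script_sync::Request` API:

```rust
// Illustrative pull-based sync state machine: the "database side" hands out
// requests, the "backend side" answers them, and the loop ends at Finish.
enum Request {
    Script(Vec<String>),         // scripts whose history the backend should fetch
    Tx(Vec<u32>),                // ids whose full data the backend should fetch
    Finish(Vec<(u32, String)>),  // final batch update to commit
}

fn drive(mut request: Request) -> Vec<(u32, String)> {
    loop {
        request = match request {
            // pretend each script maps to exactly one txid
            Request::Script(scripts) => Request::Tx((0..scripts.len() as u32).collect()),
            Request::Tx(txids) => Request::Finish(
                txids.into_iter().map(|id| (id, format!("tx-{}", id))).collect(),
            ),
            Request::Finish(batch) => break batch,
        };
    }
}

fn main() {
    let batch = drive(Request::Script(vec!["a".into(), "b".into()]));
    assert_eq!(batch, vec![(0, "tx-0".to_string()), (1, "tx-1".to_string())]);
}
```

The same shape appears in both the electrum and esplora backends; only the way each request is satisfied differs.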

    fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {

@@ -101,43 +230,48 @@ impl Blockchain for ElectrumBlockchain {
     }
 }

-impl ElectrumLikeSync for Client {
-    fn els_batch_script_get_history<'s, I: IntoIterator<Item = &'s Script> + Clone>(
-        &self,
-        scripts: I,
-    ) -> Result<Vec<Vec<ElsGetHistoryRes>>, Error> {
-        self.batch_script_get_history(scripts)
-            .map(|v| {
-                v.into_iter()
-                    .map(|v| {
-                        v.into_iter()
-                            .map(
-                                |electrum_client::GetHistoryRes {
-                                     height, tx_hash, ..
-                                 }| ElsGetHistoryRes {
-                                    height,
-                                    tx_hash,
-                                },
-                            )
-                            .collect()
-                    })
-                    .collect()
-            })
-            .map_err(Error::Electrum)
+struct TxCache<'a, 'b, D> {
+    db: &'a D,
+    client: &'b Client,
+    cache: HashMap<Txid, Transaction>,
+}
+
+impl<'a, 'b, D: Database> TxCache<'a, 'b, D> {
+    fn new(db: &'a D, client: &'b Client) -> Self {
+        TxCache {
+            db,
+            client,
+            cache: HashMap::default(),
+        }
+    }
+    fn save_txs<'c>(&mut self, txids: impl Iterator<Item = &'c Txid>) -> Result<(), Error> {
+        let mut need_fetch = vec![];
+        for txid in txids {
+            if self.cache.get(txid).is_some() {
+                continue;
+            } else if let Some(transaction) = self.db.get_raw_tx(txid)? {
+                self.cache.insert(*txid, transaction);
+            } else {
+                need_fetch.push(txid);
+            }
+        }
+
+        if !need_fetch.is_empty() {
+            let txs = self
+                .client
+                .batch_transaction_get(need_fetch.clone())
+                .map_err(Error::Electrum)?;
+            for (tx, _txid) in txs.into_iter().zip(need_fetch) {
+                debug_assert_eq!(*_txid, tx.txid());
+                self.cache.insert(tx.txid(), tx);
+            }
+        }
+
+        Ok(())
     }

-    fn els_batch_transaction_get<'s, I: IntoIterator<Item = &'s Txid> + Clone>(
-        &self,
-        txids: I,
-    ) -> Result<Vec<Transaction>, Error> {
-        self.batch_transaction_get(txids).map_err(Error::Electrum)
-    }
-
-    fn els_batch_block_header<I: IntoIterator<Item = u32> + Clone>(
-        &self,
-        heights: I,
-    ) -> Result<Vec<BlockHeader>, Error> {
-        self.batch_block_header(heights).map_err(Error::Electrum)
+    fn get(&self, txid: Txid) -> Option<Transaction> {
+        self.cache.get(&txid).map(Clone::clone)
     }
 }
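`TxCache` layers three lookups: the in-memory map first, then the wallet database, and only the remaining ids go out as a single batched network call. The same pattern in isolation, as a sketch; `Cache`, `db`, and the `fetch_batch` closure are hypothetical stand-ins for BDK's `Database` and Electrum's `batch_transaction_get`:

```rust
use std::collections::HashMap;

// Layered lookup: memory -> persistent store -> one batched "network" fetch.
struct Cache<'a> {
    db: &'a HashMap<u32, String>, // stand-in for the wallet database
    mem: HashMap<u32, String>,    // in-memory cache
}

impl<'a> Cache<'a> {
    fn save(&mut self, ids: impl Iterator<Item = u32>, fetch_batch: impl Fn(&[u32]) -> Vec<String>) {
        let mut need = vec![];
        for id in ids {
            if self.mem.contains_key(&id) {
                continue; // already cached in memory
            } else if let Some(v) = self.db.get(&id) {
                self.mem.insert(id, v.clone()); // promote from the store
            } else {
                need.push(id); // must be fetched remotely
            }
        }
        if !need.is_empty() {
            // one batched call for everything that was missing
            for (id, v) in need.iter().zip(fetch_batch(&need)) {
                self.mem.insert(*id, v);
            }
        }
    }

    fn get(&self, id: u32) -> Option<&String> {
        self.mem.get(&id)
    }
}

fn main() {
    let mut db = HashMap::new();
    db.insert(1, "from-db".to_string());
    let mut cache = Cache { db: &db, mem: HashMap::new() };
    // id 1 comes from the store; id 2 triggers the (fake) network fetch
    cache.save([1, 2].into_iter(), |ids| ids.iter().map(|i| format!("fetched-{}", i)).collect());
    assert_eq!(cache.get(1).unwrap().as_str(), "from-db");
    assert_eq!(cache.get(2).unwrap().as_str(), "fetched-2");
}
```

Batching the leftover fetches is the point: it keeps the number of round trips to the Electrum server proportional to cache misses, not to the total number of transactions.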

src/blockchain/esplora/api.rs (new file, 117 lines)
@@ -0,0 +1,117 @@
//! structs from the esplora API
//!
//! see: <https://github.com/Blockstream/esplora/blob/master/API.md>
use crate::BlockTime;
use bitcoin::{OutPoint, Script, Transaction, TxIn, TxOut, Txid};

#[derive(serde::Deserialize, Clone, Debug)]
pub struct PrevOut {
    pub value: u64,
    pub scriptpubkey: Script,
}

#[derive(serde::Deserialize, Clone, Debug)]
pub struct Vin {
    pub txid: Txid,
    pub vout: u32,
    // None if coinbase
    pub prevout: Option<PrevOut>,
    pub scriptsig: Script,
    #[serde(deserialize_with = "deserialize_witness")]
    pub witness: Vec<Vec<u8>>,
    pub sequence: u32,
    pub is_coinbase: bool,
}

#[derive(serde::Deserialize, Clone, Debug)]
pub struct Vout {
    pub value: u64,
    pub scriptpubkey: Script,
}

#[derive(serde::Deserialize, Clone, Debug)]
pub struct TxStatus {
    pub confirmed: bool,
    pub block_height: Option<u32>,
    pub block_time: Option<u64>,
}

#[derive(serde::Deserialize, Clone, Debug)]
pub struct Tx {
    pub txid: Txid,
    pub version: i32,
    pub locktime: u32,
    pub vin: Vec<Vin>,
    pub vout: Vec<Vout>,
    pub status: TxStatus,
    pub fee: u64,
}

impl Tx {
    pub fn to_tx(&self) -> Transaction {
        Transaction {
            version: self.version,
            lock_time: self.locktime,
            input: self
                .vin
                .iter()
                .cloned()
                .map(|vin| TxIn {
                    previous_output: OutPoint {
                        txid: vin.txid,
                        vout: vin.vout,
                    },
                    script_sig: vin.scriptsig,
                    sequence: vin.sequence,
                    witness: vin.witness,
                })
                .collect(),
            output: self
                .vout
                .iter()
                .cloned()
                .map(|vout| TxOut {
                    value: vout.value,
                    script_pubkey: vout.scriptpubkey,
                })
                .collect(),
        }
    }

    pub fn confirmation_time(&self) -> Option<BlockTime> {
        match self.status {
            TxStatus {
                confirmed: true,
                block_height: Some(height),
                block_time: Some(timestamp),
            } => Some(BlockTime { timestamp, height }),
            _ => None,
        }
    }

    pub fn previous_outputs(&self) -> Vec<Option<TxOut>> {
        self.vin
            .iter()
            .cloned()
            .map(|vin| {
                vin.prevout.map(|po| TxOut {
                    script_pubkey: po.scriptpubkey,
                    value: po.value,
                })
            })
            .collect()
    }
}

fn deserialize_witness<'de, D>(d: D) -> Result<Vec<Vec<u8>>, D::Error>
where
    D: serde::de::Deserializer<'de>,
{
    use crate::serde::Deserialize;
    use bitcoin::hashes::hex::FromHex;
    let list = Vec::<String>::deserialize(d)?;
    list.into_iter()
        .map(|hex_str| Vec::<u8>::from_hex(&hex_str))
        .collect::<Result<Vec<Vec<u8>>, _>>()
        .map_err(serde::de::Error::custom)
}

@@ -21,8 +21,6 @@ use std::collections::HashMap;
 use std::fmt;
 use std::io;

-use serde::Deserialize;
-
 use bitcoin::consensus;
 use bitcoin::{BlockHash, Txid};

@@ -41,33 +39,24 @@ mod ureq;
 #[cfg(feature = "ureq")]
 pub use self::ureq::*;

+mod api;
+
 fn into_fee_rate(target: usize, estimates: HashMap<String, f64>) -> Result<FeeRate, Error> {
-    let fee_val = estimates
-        .into_iter()
-        .map(|(k, v)| Ok::<_, std::num::ParseIntError>((k.parse::<usize>()?, v)))
-        .collect::<Result<Vec<_>, _>>()
-        .map_err(|e| Error::Generic(e.to_string()))?
-        .into_iter()
-        .take_while(|(k, _)| k <= &target)
-        .map(|(_, v)| v)
-        .last()
-        .unwrap_or(1.0);
+    let fee_val = {
+        let mut pairs = estimates
+            .into_iter()
+            .filter_map(|(k, v)| Some((k.parse::<usize>().ok()?, v)))
+            .collect::<Vec<_>>();
+        pairs.sort_unstable_by_key(|(k, _)| std::cmp::Reverse(*k));
+        pairs
+            .into_iter()
+            .find(|(k, _)| k <= &target)
+            .map(|(_, v)| v)
+            .unwrap_or(1.0)
+    };
     Ok(FeeRate::from_sat_per_vb(fee_val as f32))
 }
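The rewritten `into_fee_rate` no longer depends on `HashMap` iteration order, which was the source of the fee-estimation bug: the estimate pairs are collected, sorted by confirmation target in descending order, and the fee for the largest target at or below the requested one is chosen, defaulting to 1.0 sat/vB. The selection rule in isolation, as a sketch mirroring the logic above (`pick_fee` is an illustrative name, and the `FeeRate` wrapping is omitted):

```rust
// Pick the fee for the largest confirmation target <= `target`,
// falling back to 1.0 sat/vB when no target qualifies.
fn pick_fee(target: usize, mut pairs: Vec<(usize, f64)>) -> f64 {
    // sort descending by target so `find` returns the largest match
    pairs.sort_unstable_by_key(|(k, _)| std::cmp::Reverse(*k));
    pairs
        .into_iter()
        .find(|(k, _)| *k <= target)
        .map(|(_, v)| v)
        .unwrap_or(1.0)
}

fn main() {
    let estimates = vec![(1, 4.983), (6, 2.236), (25, 1.015)];
    // exact target available
    assert_eq!(pick_fee(6, estimates.clone()), 2.236);
    // target 26 inherits the value for 25, as in the feerate_parsing test below
    assert_eq!(pick_fee(26, estimates.clone()), 1.015);
    // no target at or below 0: fall back to 1.0
    assert_eq!(pick_fee(0, estimates), 1.0);
}
```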

-/// Data type used when fetching transaction history from Esplora.
-#[derive(Deserialize)]
-pub struct EsploraGetHistory {
-    txid: Txid,
-    status: EsploraGetHistoryStatus,
-}
-
-#[derive(Deserialize)]
-struct EsploraGetHistoryStatus {
-    block_height: Option<usize>,
-}
-
 /// Errors that can happen during a sync with [`EsploraBlockchain`]
 #[derive(Debug)]
 pub enum EsploraError {

@@ -107,10 +96,50 @@ impl fmt::Display for EsploraError {
     }
 }

+/// Configuration for an [`EsploraBlockchain`]
+#[derive(Debug, serde::Deserialize, serde::Serialize, Clone, PartialEq)]
+pub struct EsploraBlockchainConfig {
+    /// Base URL of the esplora service
+    ///
+    /// eg. `https://blockstream.info/api/`
+    pub base_url: String,
+    /// Optional URL of the proxy to use to make requests to the Esplora server
+    ///
+    /// The string should be formatted as: `<protocol>://<user>:<password>@host:<port>`.
+    ///
+    /// Note that the format of this value and the supported protocols change slightly between the
+    /// sync version of esplora (using `ureq`) and the async version (using `reqwest`). For more
+    /// details check the documentation of the two crates. Both of them are compiled with
+    /// the `socks` feature enabled.
+    ///
+    /// The proxy is ignored when targeting `wasm32`.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub proxy: Option<String>,
+    /// Number of parallel requests sent to the esplora service (default: 4)
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub concurrency: Option<u8>,
+    /// Stop searching addresses for transactions after finding an unused gap of this length.
+    pub stop_gap: usize,
+    /// Socket timeout.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub timeout: Option<u64>,
+}
+
+impl EsploraBlockchainConfig {
+    /// create a config with default values given the base url and stop gap
+    pub fn new(base_url: String, stop_gap: usize) -> Self {
+        Self {
+            base_url,
+            proxy: None,
+            timeout: None,
+            stop_gap,
+            concurrency: None,
+        }
+    }
+}
+
 impl std::error::Error for EsploraError {}

 #[cfg(feature = "ureq")]
 impl_error!(::ureq::Error, Ureq, EsploraError);
 #[cfg(feature = "ureq")]
 impl_error!(::ureq::Transport, UreqTransport, EsploraError);
 #[cfg(feature = "reqwest")]

@@ -127,3 +156,57 @@ crate::bdk_blockchain_tests! {
     EsploraBlockchain::new(&format!("http://{}",test_client.electrsd.esplora_url.as_ref().unwrap()), 20)
 }

+const DEFAULT_CONCURRENT_REQUESTS: u8 = 4;
+
+#[cfg(test)]
+mod test {
+    use super::*;
+
+    #[test]
+    fn feerate_parsing() {
+        let esplora_fees = serde_json::from_str::<HashMap<String, f64>>(
+            r#"{
+                "25": 1.015,
+                "5": 2.3280000000000003,
+                "12": 2.0109999999999997,
+                "15": 1.018,
+                "17": 1.018,
+                "11": 2.0109999999999997,
+                "3": 3.01,
+                "2": 4.9830000000000005,
+                "6": 2.2359999999999998,
+                "21": 1.018,
+                "13": 1.081,
+                "7": 2.2359999999999998,
+                "8": 2.2359999999999998,
+                "16": 1.018,
+                "20": 1.018,
+                "22": 1.017,
+                "23": 1.017,
+                "504": 1,
+                "9": 2.2359999999999998,
+                "14": 1.018,
+                "10": 2.0109999999999997,
+                "24": 1.017,
+                "1008": 1,
+                "1": 4.9830000000000005,
+                "4": 2.3280000000000003,
+                "19": 1.018,
+                "144": 1,
+                "18": 1.018
+            }"#,
+        )
+        .unwrap();
+        assert_eq!(
+            into_fee_rate(6, esplora_fees.clone()).unwrap(),
+            FeeRate::from_sat_per_vb(2.236)
+        );
+        assert_eq!(
+            into_fee_rate(26, esplora_fees).unwrap(),
+            FeeRate::from_sat_per_vb(1.015),
+            "should inherit from value for 25"
+        );
+    }
+}
@@ -21,20 +21,16 @@ use bitcoin::{BlockHeader, Script, Transaction, Txid};
 #[allow(unused_imports)]
 use log::{debug, error, info, trace};

-use futures::stream::{self, FuturesOrdered, StreamExt, TryStreamExt};
-
 use ::reqwest::{Client, StatusCode};
+use futures::stream::{FuturesOrdered, TryStreamExt};

-use crate::blockchain::esplora::{EsploraError, EsploraGetHistory};
-use crate::blockchain::utils::{ElectrumLikeSync, ElsGetHistoryRes};
+use super::api::Tx;
+use crate::blockchain::esplora::EsploraError;
 use crate::blockchain::*;
 use crate::database::BatchDatabase;
 use crate::error::Error;
-use crate::wallet::utils::ChunksIterator;
 use crate::FeeRate;

-const DEFAULT_CONCURRENT_REQUESTS: u8 = 4;
-
 #[derive(Debug)]
 struct UrlClient {
     url: String,

@@ -70,7 +66,7 @@ impl EsploraBlockchain {
             url_client: UrlClient {
                 url: base_url.to_string(),
                 client: Client::new(),
-                concurrency: DEFAULT_CONCURRENT_REQUESTS,
+                concurrency: super::DEFAULT_CONCURRENT_REQUESTS,
             },
             stop_gap,
         }

@@ -98,11 +94,91 @@ impl Blockchain for EsploraBlockchain {
     fn setup<D: BatchDatabase, P: Progress>(
         &self,
         database: &mut D,
-        progress_update: P,
+        _progress_update: P,
     ) -> Result<(), Error> {
-        maybe_await!(self
-            .url_client
-            .electrum_like_setup(self.stop_gap, database, progress_update))
+        use crate::blockchain::script_sync::Request;
+        let mut request = script_sync::start(database, self.stop_gap)?;
+        let mut tx_index: HashMap<Txid, Tx> = HashMap::new();
+
+        let batch_update = loop {
+            request = match request {
+                Request::Script(script_req) => {
+                    let futures: FuturesOrdered<_> = script_req
+                        .request()
+                        .take(self.url_client.concurrency as usize)
+                        .map(|script| async move {
+                            let mut related_txs: Vec<Tx> =
+                                self.url_client._scripthash_txs(script, None).await?;
+
+                            let n_confirmed =
+                                related_txs.iter().filter(|tx| tx.status.confirmed).count();
+                            // esplora pages on 25 confirmed transactions. If there are 25 or
+                            // more we keep requesting to see if there are more.
+                            if n_confirmed >= 25 {
+                                loop {
+                                    let new_related_txs: Vec<Tx> = self
+                                        .url_client
+                                        ._scripthash_txs(
+                                            script,
+                                            Some(related_txs.last().unwrap().txid),
+                                        )
+                                        .await?;
+                                    let n = new_related_txs.len();
+                                    related_txs.extend(new_related_txs);
+                                    // we've reached the end
+                                    if n < 25 {
+                                        break;
+                                    }
+                                }
+                            }
+                            Result::<_, Error>::Ok(related_txs)
+                        })
+                        .collect();
+                    let txs_per_script: Vec<Vec<Tx>> = await_or_block!(futures.try_collect())?;
+                    let mut satisfaction = vec![];
+
+                    for txs in txs_per_script {
+                        satisfaction.push(
+                            txs.iter()
+                                .map(|tx| (tx.txid, tx.status.block_height))
+                                .collect(),
+                        );
+                        for tx in txs {
+                            tx_index.insert(tx.txid, tx);
+                        }
+                    }
+
+                    script_req.satisfy(satisfaction)?
+                }
+                Request::Conftime(conftime_req) => {
+                    let conftimes = conftime_req
+                        .request()
+                        .map(|txid| {
+                            tx_index
+                                .get(txid)
+                                .expect("must be in index")
+                                .confirmation_time()
+                        })
+                        .collect();
+                    conftime_req.satisfy(conftimes)?
+                }
+                Request::Tx(tx_req) => {
+                    let full_txs = tx_req
+                        .request()
+                        .map(|txid| {
+                            let tx = tx_index.get(txid).expect("must be in index");
+                            (tx.previous_outputs(), tx.to_tx())
+                        })
+                        .collect();
+                    tx_req.satisfy(full_txs)?
+                }
+                Request::Finish(batch_update) => break batch_update,
+            }
+        };
+
+        database.commit_batch(batch_update)?;
+
+        Ok(())
     }
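Esplora serves confirmed transactions in pages of 25, keyed by the last txid seen, so the script request above keeps fetching while full pages come back. The paging loop in isolation, as a sketch with a fake backend; `fetch_all` and `PAGE` are illustrative names, with 25 mirroring Esplora's page size:

```rust
// Fetch pages of up to PAGE items until a short page signals the end.
const PAGE: usize = 25;

fn fetch_all(fetch_page: impl Fn(Option<u32>) -> Vec<u32>) -> Vec<u32> {
    // first page: no "last seen" cursor yet
    let mut all = fetch_page(None);
    if all.len() >= PAGE {
        loop {
            // subsequent pages are keyed by the last item we received
            let next = fetch_page(all.last().copied());
            let n = next.len();
            all.extend(next);
            if n < PAGE {
                break; // short page: nothing left
            }
        }
    }
    all
}

fn main() {
    // fake backend exposing 60 "transactions" numbered 0..60
    let backend = |last: Option<u32>| -> Vec<u32> {
        let start = last.map(|l| l + 1).unwrap_or(0);
        (start..60).take(PAGE).collect()
    };
    let all = fetch_all(backend);
    assert_eq!(all.len(), 60);
    assert_eq!(all.last(), Some(&59));
}
```

The short-page check is what terminates the loop: a response with fewer than 25 entries means the server has no further history for that scripthash.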
|
||||
|
||||
fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
|
||||
@ -124,10 +200,6 @@ impl Blockchain for EsploraBlockchain {
|
||||
}
|
||||
|
||||
impl UrlClient {
|
||||
fn script_to_scripthash(script: &Script) -> String {
|
||||
sha256::Hash::hash(script.as_bytes()).into_inner().to_hex()
|
||||
}
|
||||
|
||||
async fn _get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, EsploraError> {
|
||||
let resp = self
|
||||
.client
|
||||
@ -196,71 +268,27 @@ impl UrlClient {
|
||||
Ok(req.error_for_status()?.text().await?.parse()?)
|
||||
}
|
||||
|
||||
async fn _script_get_history(
|
||||
async fn _scripthash_txs(
|
||||
&self,
|
||||
script: &Script,
|
||||
) -> Result<Vec<ElsGetHistoryRes>, EsploraError> {
|
||||
let mut result = Vec::new();
|
||||
let scripthash = Self::script_to_scripthash(script);
|
||||
|
||||
// Add the unconfirmed transactions first
|
||||
result.extend(
|
||||
self.client
|
||||
.get(&format!(
|
||||
"{}/scripthash/{}/txs/mempool",
|
||||
self.url, scripthash
|
||||
))
|
||||
.send()
|
||||
.await?
|
||||
.error_for_status()?
|
||||
.json::<Vec<EsploraGetHistory>>()
|
||||
.await?
|
||||
.into_iter()
|
||||
.map(|x| ElsGetHistoryRes {
|
||||
tx_hash: x.txid,
|
||||
height: x.status.block_height.unwrap_or(0) as i32,
|
||||
}),
|
||||
);
|
||||
|
||||
debug!(
|
||||
"Found {} mempool txs for {} - {:?}",
|
||||
result.len(),
|
||||
scripthash,
|
||||
script
|
||||
);
|
||||
|
||||
// Then go through all the pages of confirmed transactions
|
||||
let mut last_txid = String::new();
|
||||
loop {
|
||||
let response = self
|
||||
.client
|
||||
.get(&format!(
|
||||
"{}/scripthash/{}/txs/chain/{}",
|
||||
self.url, scripthash, last_txid
|
||||
))
|
||||
.send()
|
||||
.await?
|
||||
.error_for_status()?
|
||||
.json::<Vec<EsploraGetHistory>>()
|
||||
.await?;
|
||||
let len = response.len();
|
||||
if let Some(elem) = response.last() {
|
||||
last_txid = elem.txid.to_hex();
|
||||
}
|
||||
|
||||
debug!("... adding {} confirmed transactions", len);
|
||||
|
||||
result.extend(response.into_iter().map(|x| ElsGetHistoryRes {
|
||||
tx_hash: x.txid,
|
||||
height: x.status.block_height.unwrap_or(0) as i32,
|
||||
}));
|
||||
|
||||
if len < 25 {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
Ok(result)
|
||||
last_seen: Option<Txid>,
|
||||
) -> Result<Vec<Tx>, EsploraError> {
|
||||
let script_hash = sha256::Hash::hash(script.as_bytes()).into_inner().to_hex();
|
||||
        let url = match last_seen {
            Some(last_seen) => format!(
                "{}/scripthash/{}/txs/chain/{}",
                self.url, script_hash, last_seen
            ),
            None => format!("{}/scripthash/{}/txs", self.url, script_hash),
        };
        Ok(self
            .client
            .get(url)
            .send()
            .await?
            .error_for_status()?
            .json::<Vec<Tx>>()
            .await?)
    }

    async fn _get_fee_estimates(&self) -> Result<HashMap<String, f64>, EsploraError> {
@@ -275,83 +303,8 @@ impl UrlClient {
    }
}

#[maybe_async]
impl ElectrumLikeSync for UrlClient {
    fn els_batch_script_get_history<'s, I: IntoIterator<Item = &'s Script>>(
        &self,
        scripts: I,
    ) -> Result<Vec<Vec<ElsGetHistoryRes>>, Error> {
        let mut results = vec![];
        for chunk in ChunksIterator::new(scripts.into_iter(), self.concurrency as usize) {
            let mut futs = FuturesOrdered::new();
            for script in chunk {
                futs.push(self._script_get_history(script));
            }
            let partial_results: Vec<Vec<ElsGetHistoryRes>> = await_or_block!(futs.try_collect())?;
            results.extend(partial_results);
        }
        Ok(await_or_block!(stream::iter(results).collect()))
    }

    fn els_batch_transaction_get<'s, I: IntoIterator<Item = &'s Txid>>(
        &self,
        txids: I,
    ) -> Result<Vec<Transaction>, Error> {
        let mut results = vec![];
        for chunk in ChunksIterator::new(txids.into_iter(), self.concurrency as usize) {
            let mut futs = FuturesOrdered::new();
            for txid in chunk {
                futs.push(self._get_tx_no_opt(txid));
            }
            let partial_results: Vec<Transaction> = await_or_block!(futs.try_collect())?;
            results.extend(partial_results);
        }
        Ok(await_or_block!(stream::iter(results).collect()))
    }

    fn els_batch_block_header<I: IntoIterator<Item = u32>>(
        &self,
        heights: I,
    ) -> Result<Vec<BlockHeader>, Error> {
        let mut results = vec![];
        for chunk in ChunksIterator::new(heights.into_iter(), self.concurrency as usize) {
            let mut futs = FuturesOrdered::new();
            for height in chunk {
                futs.push(self._get_header(height));
            }
            let partial_results: Vec<BlockHeader> = await_or_block!(futs.try_collect())?;
            results.extend(partial_results);
        }
        Ok(await_or_block!(stream::iter(results).collect()))
    }
}

/// Configuration for an [`EsploraBlockchain`]
#[derive(Debug, serde::Deserialize, serde::Serialize, Clone, PartialEq)]
pub struct EsploraBlockchainConfig {
    /// Base URL of the esplora service
    ///
    /// eg. `https://blockstream.info/api/`
    pub base_url: String,
    /// Optional URL of the proxy to use to make requests to the Esplora server
    ///
    /// The string should be formatted as: `<protocol>://<user>:<password>@host:<port>`.
    ///
    /// Note that the format of this value and the supported protocols change slightly between the
    /// sync version of esplora (using `ureq`) and the async version (using `reqwest`). For more
    /// details check with the documentation of the two crates. Both of them are compiled with
    /// the `socks` feature enabled.
    ///
    /// The proxy is ignored when targeting `wasm32`.
    pub proxy: Option<String>,
    /// Number of parallel requests sent to the esplora service (default: 4)
    pub concurrency: Option<u8>,
    /// Stop searching addresses for transactions after finding an unused gap of this length.
    pub stop_gap: usize,
}
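
With the serde derives above, a deserializable config for this client might look like the following fragment (all values illustrative, and only the fields shown in this hunk are used):

```json
{
  "base_url": "https://blockstream.info/api/",
  "proxy": "socks5://127.0.0.1:9050",
  "concurrency": 4,
  "stop_gap": 20
}
```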

impl ConfigurableBlockchain for EsploraBlockchain {
    type Config = EsploraBlockchainConfig;
    type Config = super::EsploraBlockchainConfig;

    fn from_config(config: &Self::Config) -> Result<Self, Error> {
        let map_e = |e: reqwest::Error| Error::Esplora(Box::new(e.into()));
@@ -360,13 +313,19 @@ impl ConfigurableBlockchain for EsploraBlockchain {
        if let Some(concurrency) = config.concurrency {
            blockchain.url_client.concurrency = concurrency;
        }
        let mut builder = Client::builder();
        #[cfg(not(target_arch = "wasm32"))]
        if let Some(proxy) = &config.proxy {
            blockchain.url_client.client = Client::builder()
                .proxy(reqwest::Proxy::all(proxy).map_err(map_e)?)
                .build()
                .map_err(map_e)?;
            builder = builder.proxy(reqwest::Proxy::all(proxy).map_err(map_e)?);
        }

        #[cfg(not(target_arch = "wasm32"))]
        if let Some(timeout) = config.timeout {
            builder = builder.timeout(core::time::Duration::from_secs(timeout));
        }

        blockchain.url_client.client = builder.build().map_err(map_e)?;

        Ok(blockchain)
    }
}

@@ -26,14 +26,14 @@ use bitcoin::hashes::hex::{FromHex, ToHex};
use bitcoin::hashes::{sha256, Hash};
use bitcoin::{BlockHeader, Script, Transaction, Txid};

use crate::blockchain::esplora::{EsploraError, EsploraGetHistory};
use crate::blockchain::utils::{ElectrumLikeSync, ElsGetHistoryRes};
use super::api::Tx;
use crate::blockchain::esplora::EsploraError;
use crate::blockchain::*;
use crate::database::BatchDatabase;
use crate::error::Error;
use crate::FeeRate;

#[derive(Debug)]
#[derive(Debug, Clone)]
struct UrlClient {
    url: String,
    agent: Agent,
@@ -47,15 +47,7 @@ struct UrlClient {
pub struct EsploraBlockchain {
    url_client: UrlClient,
    stop_gap: usize,
}

impl std::convert::From<UrlClient> for EsploraBlockchain {
    fn from(url_client: UrlClient) -> Self {
        EsploraBlockchain {
            url_client,
            stop_gap: 20,
        }
    }
    concurrency: u8,
}

impl EsploraBlockchain {
@@ -66,6 +58,7 @@ impl EsploraBlockchain {
                url: base_url.to_string(),
                agent: Agent::new(),
            },
            concurrency: super::DEFAULT_CONCURRENT_REQUESTS,
            stop_gap,
        }
    }
@@ -75,6 +68,12 @@ impl EsploraBlockchain {
        self.url_client.agent = agent;
        self
    }

    /// Set the number of parallel requests the client can make.
    pub fn with_concurrency(mut self, concurrency: u8) -> Self {
        self.concurrency = concurrency;
        self
    }
}

impl Blockchain for EsploraBlockchain {
@@ -91,10 +90,94 @@ impl Blockchain for EsploraBlockchain {
    fn setup<D: BatchDatabase, P: Progress>(
        &self,
        database: &mut D,
        progress_update: P,
        _progress_update: P,
    ) -> Result<(), Error> {
        self.url_client
            .electrum_like_setup(self.stop_gap, database, progress_update)
        use crate::blockchain::script_sync::Request;
        let mut request = script_sync::start(database, self.stop_gap)?;
        let mut tx_index: HashMap<Txid, Tx> = HashMap::new();
        let batch_update = loop {
            request = match request {
                Request::Script(script_req) => {
                    let scripts = script_req
                        .request()
                        .take(self.concurrency as usize)
                        .cloned();

                    let handles = scripts.map(move |script| {
                        let client = self.url_client.clone();
                        // make each request in its own thread.
                        std::thread::spawn(move || {
                            let mut related_txs: Vec<Tx> = client._scripthash_txs(&script, None)?;

                            let n_confirmed =
                                related_txs.iter().filter(|tx| tx.status.confirmed).count();
                            // esplora pages on 25 confirmed transactions. If there's 25 or more we
                            // keep requesting to see if there's more.
                            if n_confirmed >= 25 {
                                loop {
                                    let new_related_txs: Vec<Tx> = client._scripthash_txs(
                                        &script,
                                        Some(related_txs.last().unwrap().txid),
                                    )?;
                                    let n = new_related_txs.len();
                                    related_txs.extend(new_related_txs);
                                    // we've reached the end
                                    if n < 25 {
                                        break;
                                    }
                                }
                            }
                            Result::<_, Error>::Ok(related_txs)
                        })
                    });

                    let txs_per_script: Vec<Vec<Tx>> = handles
                        .map(|handle| handle.join().unwrap())
                        .collect::<Result<_, _>>()?;
                    let mut satisfaction = vec![];

                    for txs in txs_per_script {
                        satisfaction.push(
                            txs.iter()
                                .map(|tx| (tx.txid, tx.status.block_height))
                                .collect(),
                        );
                        for tx in txs {
                            tx_index.insert(tx.txid, tx);
                        }
                    }

                    script_req.satisfy(satisfaction)?
                }
                Request::Conftime(conftime_req) => {
                    let conftimes = conftime_req
                        .request()
                        .map(|txid| {
                            tx_index
                                .get(txid)
                                .expect("must be in index")
                                .confirmation_time()
                        })
                        .collect();
                    conftime_req.satisfy(conftimes)?
                }
                Request::Tx(tx_req) => {
                    let full_txs = tx_req
                        .request()
                        .map(|txid| {
                            let tx = tx_index.get(txid).expect("must be in index");
                            (tx.previous_outputs(), tx.to_tx())
                        })
                        .collect();
                    tx_req.satisfy(full_txs)?
                }
                Request::Finish(batch_update) => break batch_update,
            }
        };

        database.commit_batch(batch_update)?;

        Ok(())
    }

    fn get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, Error> {
@@ -117,10 +200,6 @@ impl Blockchain for EsploraBlockchain {
}

impl UrlClient {
    fn script_to_scripthash(script: &Script) -> String {
        sha256::Hash::hash(script.as_bytes()).into_inner().to_hex()
    }

    fn _get_tx(&self, txid: &Txid) -> Result<Option<Transaction>, EsploraError> {
        let resp = self
            .agent
@@ -200,81 +279,6 @@ impl UrlClient {
        }
    }

    fn _script_get_history(&self, script: &Script) -> Result<Vec<ElsGetHistoryRes>, EsploraError> {
        let mut result = Vec::new();
        let scripthash = Self::script_to_scripthash(script);

        // Add the unconfirmed transactions first

        let resp = self
            .agent
            .get(&format!(
                "{}/scripthash/{}/txs/mempool",
                self.url, scripthash
            ))
            .call();

        let v = match resp {
            Ok(resp) => {
                let v: Vec<EsploraGetHistory> = resp.into_json()?;
                Ok(v)
            }
            Err(ureq::Error::Status(code, _)) => Err(EsploraError::HttpResponse(code)),
            Err(e) => Err(EsploraError::Ureq(e)),
        }?;

        result.extend(v.into_iter().map(|x| ElsGetHistoryRes {
            tx_hash: x.txid,
            height: x.status.block_height.unwrap_or(0) as i32,
        }));

        debug!(
            "Found {} mempool txs for {} - {:?}",
            result.len(),
            scripthash,
            script
        );

        // Then go through all the pages of confirmed transactions
        let mut last_txid = String::new();
        loop {
            let resp = self
                .agent
                .get(&format!(
                    "{}/scripthash/{}/txs/chain/{}",
                    self.url, scripthash, last_txid
                ))
                .call();

            let v = match resp {
                Ok(resp) => {
                    let v: Vec<EsploraGetHistory> = resp.into_json()?;
                    Ok(v)
                }
                Err(ureq::Error::Status(code, _)) => Err(EsploraError::HttpResponse(code)),
                Err(e) => Err(EsploraError::Ureq(e)),
            }?;

            let len = v.len();
            if let Some(elem) = v.last() {
                last_txid = elem.txid.to_hex();
            }

            debug!("... adding {} confirmed transactions", len);

            result.extend(v.into_iter().map(|x| ElsGetHistoryRes {
                tx_hash: x.txid,
                height: x.status.block_height.unwrap_or(0) as i32,
            }));

            if len < 25 {
                break;
            }
        }

        Ok(result)
    }

    fn _get_fee_estimates(&self) -> Result<HashMap<String, f64>, EsploraError> {
        let resp = self
            .agent
@@ -292,6 +296,22 @@ impl UrlClient {

        Ok(map)
    }

    fn _scripthash_txs(
        &self,
        script: &Script,
        last_seen: Option<Txid>,
    ) -> Result<Vec<Tx>, EsploraError> {
        let script_hash = sha256::Hash::hash(script.as_bytes()).into_inner().to_hex();
        let url = match last_seen {
            Some(last_seen) => format!(
                "{}/scripthash/{}/txs/chain/{}",
                self.url, script_hash, last_seen
            ),
            None => format!("{}/scripthash/{}/txs", self.url, script_hash),
        };
        Ok(self.agent.get(&url).call()?.into_json()?)
    }
}

fn is_status_not_found(status: u16) -> bool {
@@ -315,84 +335,37 @@ fn into_bytes(resp: Response) -> Result<Vec<u8>, io::Error> {
    Ok(buf)
}

impl ElectrumLikeSync for UrlClient {
    fn els_batch_script_get_history<'s, I: IntoIterator<Item = &'s Script>>(
        &self,
        scripts: I,
    ) -> Result<Vec<Vec<ElsGetHistoryRes>>, Error> {
        let mut results = vec![];
        for script in scripts.into_iter() {
            let v = self._script_get_history(script)?;
            results.push(v);
        }
        Ok(results)
    }

    fn els_batch_transaction_get<'s, I: IntoIterator<Item = &'s Txid>>(
        &self,
        txids: I,
    ) -> Result<Vec<Transaction>, Error> {
        let mut results = vec![];
        for txid in txids.into_iter() {
            let tx = self._get_tx_no_opt(txid)?;
            results.push(tx);
        }
        Ok(results)
    }

    fn els_batch_block_header<I: IntoIterator<Item = u32>>(
        &self,
        heights: I,
    ) -> Result<Vec<BlockHeader>, Error> {
        let mut results = vec![];
        for height in heights.into_iter() {
            let header = self._get_header(height)?;
            results.push(header);
        }
        Ok(results)
    }
}

/// Configuration for an [`EsploraBlockchain`]
#[derive(Debug, serde::Deserialize, serde::Serialize, Clone, PartialEq)]
pub struct EsploraBlockchainConfig {
    /// Base URL of the esplora service eg. `https://blockstream.info/api/`
    pub base_url: String,
    /// Optional URL of the proxy to use to make requests to the Esplora server
    ///
    /// The string should be formatted as: `<protocol>://<user>:<password>@host:<port>`.
    ///
    /// Note that the format of this value and the supported protocols change slightly between the
    /// sync version of esplora (using `ureq`) and the async version (using `reqwest`). For more
    /// details check with the documentation of the two crates. Both of them are compiled with
    /// the `socks` feature enabled.
    ///
    /// The proxy is ignored when targeting `wasm32`.
    pub proxy: Option<String>,
    /// Socket read timeout.
    pub timeout_read: u64,
    /// Socket write timeout.
    pub timeout_write: u64,
    /// Stop searching addresses for transactions after finding an unused gap of this length.
    pub stop_gap: usize,
}

impl ConfigurableBlockchain for EsploraBlockchain {
    type Config = EsploraBlockchainConfig;
    type Config = super::EsploraBlockchainConfig;

    fn from_config(config: &Self::Config) -> Result<Self, Error> {
        let mut agent_builder = ureq::AgentBuilder::new()
            .timeout_read(Duration::from_secs(config.timeout_read))
            .timeout_write(Duration::from_secs(config.timeout_write));
        let mut agent_builder = ureq::AgentBuilder::new();

        if let Some(timeout) = config.timeout {
            agent_builder = agent_builder.timeout(Duration::from_secs(timeout));
        }

        if let Some(proxy) = &config.proxy {
            agent_builder = agent_builder
                .proxy(Proxy::new(proxy).map_err(|e| Error::Esplora(Box::new(e.into())))?);
        }

        Ok(
            EsploraBlockchain::new(config.base_url.as_str(), config.stop_gap)
                .with_agent(agent_builder.build()),
        )
        let mut blockchain = EsploraBlockchain::new(config.base_url.as_str(), config.stop_gap)
            .with_agent(agent_builder.build());

        if let Some(concurrency) = config.concurrency {
            blockchain = blockchain.with_concurrency(concurrency);
        }

        Ok(blockchain)
    }
}

impl From<ureq::Error> for EsploraError {
    fn from(e: ureq::Error) -> Self {
        match e {
            ureq::Error::Status(code, _) => EsploraError::HttpResponse(code),
            e => EsploraError::Ureq(e),
        }
    }
}

@@ -27,9 +27,6 @@ use crate::database::BatchDatabase;
use crate::error::Error;
use crate::FeeRate;

#[cfg(any(feature = "electrum", feature = "esplora"))]
pub(crate) mod utils;

#[cfg(any(
    feature = "electrum",
    feature = "esplora",
@@ -37,6 +34,8 @@ pub(crate) mod utils;
    feature = "rpc"
))]
pub mod any;
mod script_sync;

#[cfg(any(
    feature = "electrum",
    feature = "esplora",

@@ -35,8 +35,6 @@ use crate::bitcoin::consensus::deserialize;
use crate::bitcoin::{Address, Network, OutPoint, Transaction, TxOut, Txid};
use crate::blockchain::{Blockchain, Capability, ConfigurableBlockchain, Progress};
use crate::database::{BatchDatabase, DatabaseUtils};
use crate::descriptor::{get_checksum, IntoWalletDescriptor};
use crate::wallet::utils::SecpCtx;
use crate::{BlockTime, Error, FeeRate, KeychainKind, LocalUtxo, TransactionDetails};
use bitcoincore_rpc::json::{
    GetAddressInfoResultLabel, ImportMultiOptions, ImportMultiRequest,
@@ -76,7 +74,7 @@ pub struct RpcConfig {
    pub auth: Auth,
    /// The network we are using (it will be checked the bitcoin node network matches this)
    pub network: Network,
    /// The wallet name in the bitcoin node, consider using [wallet_name_from_descriptor] for this
    /// The wallet name in the bitcoin node, consider using [crate::wallet::wallet_name_from_descriptor] for this
    pub wallet_name: String,
    /// Skip this many blocks of the blockchain at the first rescan, if None the rescan is done from the genesis block
    pub skip_blocks: Option<u32>,
@@ -415,35 +413,6 @@ impl ConfigurableBlockchain for RpcBlockchain {
    }
}

/// Deterministically generate a unique name given the descriptors defining the wallet
pub fn wallet_name_from_descriptor<T>(
    descriptor: T,
    change_descriptor: Option<T>,
    network: Network,
    secp: &SecpCtx,
) -> Result<String, Error>
where
    T: IntoWalletDescriptor,
{
    //TODO check descriptors contains only public keys
    let descriptor = descriptor
        .into_wallet_descriptor(secp, network)?
        .0
        .to_string();
    let mut wallet_name = get_checksum(&descriptor[..descriptor.find('#').unwrap()])?;
    if let Some(change_descriptor) = change_descriptor {
        let change_descriptor = change_descriptor
            .into_wallet_descriptor(secp, network)?
            .0
            .to_string();
        wallet_name.push_str(
            get_checksum(&change_descriptor[..change_descriptor.find('#').unwrap()])?.as_str(),
        );
    }

    Ok(wallet_name)
}

/// return the wallets available in default wallet directory
//TODO use bitcoincore_rpc method when PR #179 lands
fn list_wallet_dir(client: &Client) -> Result<Vec<String>, Error> {

394 src/blockchain/script_sync.rs Normal file
@@ -0,0 +1,394 @@
/*!
This models how a sync happens where you have a server that you send your script pubkeys to and it
returns associated transactions i.e. electrum.
*/
#![allow(dead_code)]
use crate::{
    database::{BatchDatabase, BatchOperations, DatabaseUtils},
    wallet::time::Instant,
    BlockTime, Error, KeychainKind, LocalUtxo, TransactionDetails,
};
use bitcoin::{OutPoint, Script, Transaction, TxOut, Txid};
use log::*;
use std::collections::{BTreeMap, BTreeSet, HashMap, HashSet, VecDeque};

/// A request for on-chain information
pub enum Request<'a, D: BatchDatabase> {
    /// A request for transactions related to script pubkeys.
    Script(ScriptReq<'a, D>),
    /// A request for confirmation times for some transactions.
    Conftime(ConftimeReq<'a, D>),
    /// A request for full transaction details of some transactions.
    Tx(TxReq<'a, D>),
    /// Requests are finished; here's a batch database update to reflect the data gathered.
    Finish(D::Batch),
}

/// starts a sync
pub fn start<D: BatchDatabase>(db: &D, stop_gap: usize) -> Result<Request<'_, D>, Error> {
    use rand::seq::SliceRandom;
    let mut keychains = vec![KeychainKind::Internal, KeychainKind::External];
    // shuffling improves privacy: the server doesn't learn whether our first request is for our
    // internal or external addresses
    keychains.shuffle(&mut rand::thread_rng());
    let keychain = keychains.pop().unwrap();
    let scripts_needed = db
        .iter_script_pubkeys(Some(keychain))?
        .into_iter()
        .collect();
    let state = State::new(db);

    Ok(Request::Script(ScriptReq {
        state,
        scripts_needed,
        script_index: 0,
        stop_gap,
        keychain,
        next_keychains: keychains,
    }))
}

pub struct ScriptReq<'a, D: BatchDatabase> {
    state: State<'a, D>,
    script_index: usize,
    scripts_needed: VecDeque<Script>,
    stop_gap: usize,
    keychain: KeychainKind,
    next_keychains: Vec<KeychainKind>,
}

/// The sync starts by returning script pubkeys we are interested in.
impl<'a, D: BatchDatabase> ScriptReq<'a, D> {
    pub fn request(&self) -> impl Iterator<Item = &Script> + Clone {
        self.scripts_needed.iter()
    }

    pub fn satisfy(
        mut self,
        // we want to know the txids associated with the script and their height
        txids: Vec<Vec<(Txid, Option<u32>)>>,
    ) -> Result<Request<'a, D>, Error> {
        for (txid_list, script) in txids.iter().zip(self.scripts_needed.iter()) {
            debug!(
                "found {} transactions for script pubkey {}",
                txid_list.len(),
                script
            );
            if !txid_list.is_empty() {
                // the address is active
                self.state
                    .last_active_index
                    .insert(self.keychain, self.script_index);
            }

            for (txid, height) in txid_list {
                // have we seen this txid already?
                match self.state.db.get_tx(txid, true)? {
                    Some(mut details) => {
                        let old_height = details.confirmation_time.as_ref().map(|x| x.height);
                        match (old_height, height) {
                            (None, Some(_)) => {
                                // It looks like the tx has confirmed since we last saw it -- we
                                // need to know the confirmation time.
                                self.state.tx_missing_conftime.insert(*txid, details);
                            }
                            (Some(old_height), Some(new_height)) if old_height != *new_height => {
                                // The height of the tx has changed!? It's a reorg; get the new confirmation time.
                                self.state.tx_missing_conftime.insert(*txid, details);
                            }
                            (Some(_), None) => {
                                // A re-org where the tx is not in the chain anymore.
                                details.confirmation_time = None;
                                self.state.finished_txs.push(details);
                            }
                            _ => self.state.finished_txs.push(details),
                        }
                    }
                    None => {
                        // we've never seen it; let's get the whole thing
                        self.state.tx_needed.insert(*txid);
                    }
                };
            }

            self.script_index += 1;
        }

        for _ in txids {
            self.scripts_needed.pop_front();
        }

        let last_active_index = self
            .state
            .last_active_index
            .get(&self.keychain)
            .map(|x| x + 1)
            .unwrap_or(0); // so no addresses active maps to 0

        Ok(
            if self.script_index > last_active_index + self.stop_gap
                || self.scripts_needed.is_empty()
            {
                debug!(
                    "finished scanning for transactions for keychain {:?} at index {}",
                    self.keychain, last_active_index
                );
                // we're done here -- check if we need to do the next keychain
                if let Some(keychain) = self.next_keychains.pop() {
                    self.keychain = keychain;
                    self.script_index = 0;
                    self.scripts_needed = self
                        .state
                        .db
                        .iter_script_pubkeys(Some(keychain))?
                        .into_iter()
                        .collect();
                    Request::Script(self)
                } else {
                    Request::Tx(TxReq { state: self.state })
                }
            } else {
                Request::Script(self)
            },
        )
    }
}
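
The stop condition above can be sketched in isolation. This is a self-contained illustration (not BDK's API): `histories[i]` stands in for the list of txids found for a keychain's i-th script pubkey, and the scan stops once `stop_gap` consecutive unused scripts follow the last active one.

```rust
// Standalone sketch of the stop-gap rule in `ScriptReq::satisfy`.
// `histories` and `scan_done` are illustrative names, not part of BDK.
fn scan_done(histories: &[Vec<&str>], stop_gap: usize) -> bool {
    // one past the last script that had any transactions (0 if none were active),
    // mirroring `last_active_index.map(|x| x + 1).unwrap_or(0)`
    let last_active_index = histories
        .iter()
        .rposition(|txids| !txids.is_empty())
        .map(|i| i + 1)
        .unwrap_or(0);
    let script_index = histories.len(); // how many scripts we've checked so far
    script_index > last_active_index + stop_gap
}

fn main() {
    // only two unused scripts after the active one: keep scanning with stop_gap = 2
    assert!(!scan_done(&[vec!["tx0"], vec![], vec![]], 2));
    // three unused scripts after the active one: the gap is exceeded, stop
    assert!(scan_done(&[vec!["tx0"], vec![], vec![], vec![]], 2));
}
```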

/// Then we get full transactions
pub struct TxReq<'a, D> {
    state: State<'a, D>,
}

impl<'a, D: BatchDatabase> TxReq<'a, D> {
    pub fn request(&self) -> impl Iterator<Item = &Txid> + Clone {
        self.state.tx_needed.iter()
    }

    pub fn satisfy(
        mut self,
        tx_details: Vec<(Vec<Option<TxOut>>, Transaction)>,
    ) -> Result<Request<'a, D>, Error> {
        let tx_details: Vec<TransactionDetails> = tx_details
            .into_iter()
            .zip(self.state.tx_needed.iter())
            .map(|((vout, tx), txid)| {
                debug!("found tx_details for {}", txid);
                assert_eq!(tx.txid(), *txid);
                let mut sent: u64 = 0;
                let mut received: u64 = 0;
                let mut inputs_sum: u64 = 0;
                let mut outputs_sum: u64 = 0;

                for (txout, input) in vout.into_iter().zip(tx.input.iter()) {
                    let txout = match txout {
                        Some(txout) => txout,
                        None => {
                            // skip coinbase inputs
                            debug_assert!(
                                input.previous_output.is_null(),
                                "prevout should only be missing for coinbase"
                            );
                            continue;
                        }
                    };

                    inputs_sum += txout.value;
                    if self.state.db.is_mine(&txout.script_pubkey)? {
                        sent += txout.value;
                    }
                }

                for out in &tx.output {
                    outputs_sum += out.value;
                    if self.state.db.is_mine(&out.script_pubkey)? {
                        received += out.value;
                    }
                }
                // we need to use saturating_sub since we want coinbase txs to map to 0 fee and
                // this subtraction would be negative for coinbase txs.
                let fee = inputs_sum.saturating_sub(outputs_sum);
                Result::<_, Error>::Ok(TransactionDetails {
                    txid: *txid,
                    transaction: Some(tx),
                    received,
                    sent,
                    // we're going to fill this in later
                    confirmation_time: None,
                    fee: Some(fee),
                    verified: false,
                })
            })
            .collect::<Result<Vec<_>, _>>()?;

        for tx_detail in tx_details {
            self.state.tx_needed.remove(&tx_detail.txid);
            self.state
                .tx_missing_conftime
                .insert(tx_detail.txid, tx_detail);
        }

        if !self.state.tx_needed.is_empty() {
            Ok(Request::Tx(self))
        } else {
            Ok(Request::Conftime(ConftimeReq { state: self.state }))
        }
    }
}

/// Final step is to get confirmation times
pub struct ConftimeReq<'a, D> {
    state: State<'a, D>,
}

impl<'a, D: BatchDatabase> ConftimeReq<'a, D> {
    pub fn request(&self) -> impl Iterator<Item = &Txid> + Clone {
        self.state.tx_missing_conftime.keys()
    }

    pub fn satisfy(
        mut self,
        confirmation_times: Vec<Option<BlockTime>>,
    ) -> Result<Request<'a, D>, Error> {
        let conftime_needed = self
            .request()
            .cloned()
            .take(confirmation_times.len())
            .collect::<Vec<_>>();
        for (confirmation_time, txid) in confirmation_times.into_iter().zip(conftime_needed.iter())
        {
            debug!("confirmation time for {} was {:?}", txid, confirmation_time);
            if let Some(mut tx_details) = self.state.tx_missing_conftime.remove(txid) {
                tx_details.confirmation_time = confirmation_time;
                self.state.finished_txs.push(tx_details);
            }
        }

        if self.state.tx_missing_conftime.is_empty() {
            Ok(Request::Finish(self.state.into_db_update()?))
        } else {
            Ok(Request::Conftime(self))
        }
    }
}

struct State<'a, D> {
    db: &'a D,
    last_active_index: HashMap<KeychainKind, usize>,
    /// Transactions where we need to get the full details
    tx_needed: BTreeSet<Txid>,
    /// Transactions that we know everything about
    finished_txs: Vec<TransactionDetails>,
    /// Transactions that discovered conftimes should be inserted into
    tx_missing_conftime: BTreeMap<Txid, TransactionDetails>,
    /// The start of the sync
    start_time: Instant,
}

impl<'a, D: BatchDatabase> State<'a, D> {
    fn new(db: &'a D) -> Self {
        State {
            db,
            last_active_index: HashMap::default(),
            finished_txs: vec![],
            tx_needed: BTreeSet::default(),
            tx_missing_conftime: BTreeMap::default(),
            start_time: Instant::new(),
        }
    }
    fn into_db_update(self) -> Result<D::Batch, Error> {
        debug_assert!(self.tx_needed.is_empty() && self.tx_missing_conftime.is_empty());
        let existing_txs = self.db.iter_txs(false)?;
        let existing_txids: HashSet<Txid> = existing_txs.iter().map(|tx| tx.txid).collect();
        let finished_txs = make_txs_consistent(&self.finished_txs);
        let observed_txids: HashSet<Txid> = finished_txs.iter().map(|tx| tx.txid).collect();
        let txids_to_delete = existing_txids.difference(&observed_txids);
        let mut batch = self.db.begin_batch();

        // Delete old txs that no longer exist
        for txid in txids_to_delete {
            if let Some(raw_tx) = self.db.get_raw_tx(txid)? {
                for i in 0..raw_tx.output.len() {
                    // Also delete any utxos from the txs that no longer exist.
                    let _ = batch.del_utxo(&OutPoint {
                        txid: *txid,
                        vout: i as u32,
                    })?;
                }
            } else {
                unreachable!("we should always have the raw tx");
            }
            batch.del_tx(txid, true)?;
        }

        // Set every tx we observed
        for finished_tx in &finished_txs {
            let tx = finished_tx
                .transaction
                .as_ref()
                .expect("transaction will always be present here");
            for (i, output) in tx.output.iter().enumerate() {
                if let Some((keychain, _)) =
                    self.db.get_path_from_script_pubkey(&output.script_pubkey)?
                {
                    // add utxos we own from the new transactions we've seen.
                    batch.set_utxo(&LocalUtxo {
                        outpoint: OutPoint {
                            txid: finished_tx.txid,
                            vout: i as u32,
                        },
                        txout: output.clone(),
                        keychain,
                    })?;
                }
            }
            batch.set_tx(finished_tx)?;
        }

        // we don't do this in the loop above since we may want to delete some of the utxos we
        // just added in case there are new transactions that spend from each other.
        for finished_tx in &finished_txs {
            let tx = finished_tx
                .transaction
                .as_ref()
                .expect("transaction will always be present here");
            for input in &tx.input {
                // Delete any spent utxos
                batch.del_utxo(&input.previous_output)?;
            }
        }

        for (keychain, last_active_index) in self.last_active_index {
            batch.set_last_index(keychain, last_active_index as u32)?;
        }

        info!(
            "finished setup, elapsed {:?}ms",
            self.start_time.elapsed().as_millis()
        );
        Ok(batch)
    }
}

/// Remove conflicting transactions -- tie-breaking them by fee.
fn make_txs_consistent(txs: &[TransactionDetails]) -> Vec<&TransactionDetails> {
    let mut utxo_index: HashMap<OutPoint, &TransactionDetails> = HashMap::default();
    for tx in txs {
        for input in &tx.transaction.as_ref().unwrap().input {
            utxo_index
                .entry(input.previous_output)
                .and_modify(|existing| match (tx.fee, existing.fee) {
                    (Some(fee), Some(existing_fee)) if fee > existing_fee => *existing = tx,
                    (Some(_), None) => *existing = tx,
                    _ => { /* leave it the same */ }
                })
                .or_insert(tx);
        }
    }

    utxo_index
        .into_iter()
        .map(|(_, tx)| (tx.txid, tx))
        .collect::<HashMap<_, _>>()
        .into_iter()
        .map(|(_, tx)| tx)
        .collect()
}
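
The conflict rule above can be shown with a minimal standalone sketch, assuming a simplified `TxInfo` stand-in for `TransactionDetails` where each transaction spends a single outpoint: when two transactions spend the same outpoint, the one with the higher known fee wins.

```rust
use std::collections::HashMap;

// Illustrative stand-in for `TransactionDetails`; not a BDK type.
#[derive(Clone, Copy, PartialEq, Debug)]
struct TxInfo {
    txid: u32,
    spends: u32, // the single outpoint this tx spends, simplified
    fee: Option<u64>,
}

fn pick_consistent(txs: &[TxInfo]) -> Vec<TxInfo> {
    let mut by_outpoint: HashMap<u32, TxInfo> = HashMap::new();
    for tx in txs {
        by_outpoint
            .entry(tx.spends)
            .and_modify(|existing| match (tx.fee, existing.fee) {
                // a known higher fee wins the conflict
                (Some(fee), Some(existing_fee)) if fee > existing_fee => *existing = *tx,
                // a known fee beats an unknown one
                (Some(_), None) => *existing = *tx,
                _ => { /* keep the existing winner */ }
            })
            .or_insert(*tx);
    }
    by_outpoint.into_values().collect()
}

fn main() {
    let a = TxInfo { txid: 1, spends: 7, fee: Some(100) };
    let b = TxInfo { txid: 2, spends: 7, fee: Some(250) }; // conflicts with `a`, pays more
    let kept = pick_consistent(&[a, b]);
    assert_eq!(kept, vec![b]);
}
```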
|
@@ -1,388 +0,0 @@
// Bitcoin Dev Kit
// Written in 2020 by Alekos Filini <alekos.filini@gmail.com>
//
// Copyright (c) 2020-2021 Bitcoin Dev Kit Developers
//
// This file is licensed under the Apache License, Version 2.0 <LICENSE-APACHE
// or http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your option.
// You may not use this file except in accordance with one or both of these
// licenses.

use std::collections::{HashMap, HashSet};

#[allow(unused_imports)]
use log::{debug, error, info, trace};
use rand::seq::SliceRandom;
use rand::thread_rng;

use bitcoin::{BlockHeader, OutPoint, Script, Transaction, Txid};

use super::*;
use crate::database::{BatchDatabase, BatchOperations, DatabaseUtils};
use crate::error::Error;
use crate::types::{BlockTime, KeychainKind, LocalUtxo, TransactionDetails};
use crate::wallet::time::Instant;
use crate::wallet::utils::ChunksIterator;

#[derive(Debug)]
pub struct ElsGetHistoryRes {
    pub height: i32,
    pub tx_hash: Txid,
}

/// Implements the synchronization logic for an Electrum-like client.
#[maybe_async]
pub trait ElectrumLikeSync {
    fn els_batch_script_get_history<'s, I: IntoIterator<Item = &'s Script> + Clone>(
        &self,
        scripts: I,
    ) -> Result<Vec<Vec<ElsGetHistoryRes>>, Error>;

    fn els_batch_transaction_get<'s, I: IntoIterator<Item = &'s Txid> + Clone>(
        &self,
        txids: I,
    ) -> Result<Vec<Transaction>, Error>;

    fn els_batch_block_header<I: IntoIterator<Item = u32> + Clone>(
        &self,
        heights: I,
    ) -> Result<Vec<BlockHeader>, Error>;

    // Provided methods down here...

    fn electrum_like_setup<D: BatchDatabase, P: Progress>(
        &self,
        stop_gap: usize,
        db: &mut D,
        _progress_update: P,
    ) -> Result<(), Error> {
        // TODO: progress
        let start = Instant::new();
        debug!("start setup");

        let chunk_size = stop_gap;

        let mut history_txs_id = HashSet::new();
        let mut txid_height = HashMap::new();
        let mut max_indexes = HashMap::new();

        let mut wallet_chains = vec![KeychainKind::Internal, KeychainKind::External];
        // Shuffling improves privacy: the server can't tell whether our first request is for
        // internal or external addresses.
        wallet_chains.shuffle(&mut thread_rng());
        // download history of our internal and external script_pubkeys
        for keychain in wallet_chains.iter() {
            let script_iter = db.iter_script_pubkeys(Some(*keychain))?.into_iter();

            for (i, chunk) in ChunksIterator::new(script_iter, stop_gap).enumerate() {
                // TODO if i == last, should create another chunk of addresses in db
                let call_result: Vec<Vec<ElsGetHistoryRes>> =
                    maybe_await!(self.els_batch_script_get_history(chunk.iter()))?;
                let max_index = call_result
                    .iter()
                    .enumerate()
                    .filter_map(|(i, v)| v.first().map(|_| i as u32))
                    .max();
                if let Some(max) = max_index {
                    max_indexes.insert(keychain, max + (i * chunk_size) as u32);
                }
                let flattened: Vec<ElsGetHistoryRes> = call_result.into_iter().flatten().collect();
                debug!("#{} of {:?} results: {}", i, keychain, flattened.len());
                if flattened.is_empty() {
                    // Didn't find anything in the last `stop_gap` script_pubkeys, breaking
                    break;
                }

                for el in flattened {
                    // el.height = -1 means unconfirmed with unconfirmed parents
                    // el.height = 0 means unconfirmed with confirmed parents
                    // but we treat those txs the same
                    if el.height <= 0 {
                        txid_height.insert(el.tx_hash, None);
                    } else {
                        txid_height.insert(el.tx_hash, Some(el.height as u32));
                    }
                    history_txs_id.insert(el.tx_hash);
                }
            }
        }

        // saving max indexes
        info!("max indexes are: {:?}", max_indexes);
        for keychain in wallet_chains.iter() {
            if let Some(index) = max_indexes.get(keychain) {
                db.set_last_index(*keychain, *index)?;
            }
        }

        // get db status
        let txs_details_in_db: HashMap<Txid, TransactionDetails> = db
            .iter_txs(false)?
            .into_iter()
            .map(|tx| (tx.txid, tx))
            .collect();
        let txs_raw_in_db: HashMap<Txid, Transaction> = db
            .iter_raw_txs()?
            .into_iter()
            .map(|tx| (tx.txid(), tx))
            .collect();
        let utxos_deps = utxos_deps(db, &txs_raw_in_db)?;

        // download new txs and headers
        let new_txs = maybe_await!(self.download_and_save_needed_raw_txs(
            &history_txs_id,
            &txs_raw_in_db,
            chunk_size,
            db
        ))?;
        let new_timestamps = maybe_await!(self.download_needed_headers(
            &txid_height,
            &txs_details_in_db,
            chunk_size
        ))?;

        let mut batch = db.begin_batch();

        // save any tx details not in db but in history_txs_id or with different height/timestamp
        for txid in history_txs_id.iter() {
            let height = txid_height.get(txid).cloned().flatten();
            let timestamp = new_timestamps.get(txid).cloned();
            if let Some(tx_details) = txs_details_in_db.get(txid) {
                // Check if the tx height matches, otherwise update it. The timestamp is not in
                // the if clause because we don't request headers for confirmed txs we already
                // know about.
                if tx_details.confirmation_time.as_ref().map(|c| c.height) != height {
                    let confirmation_time = BlockTime::new(height, timestamp);
                    let mut new_tx_details = tx_details.clone();
                    new_tx_details.confirmation_time = confirmation_time;
                    batch.set_tx(&new_tx_details)?;
                }
            } else {
                save_transaction_details_and_utxos(
                    txid,
                    db,
                    timestamp,
                    height,
                    &mut batch,
                    &utxos_deps,
                )?;
            }
        }

        // remove any tx details in db but not in history_txs_id
        for txid in txs_details_in_db.keys() {
            if !history_txs_id.contains(txid) {
                batch.del_tx(txid, false)?;
            }
        }

        // remove any spent utxo
        for new_tx in new_txs.iter() {
            for input in new_tx.input.iter() {
                batch.del_utxo(&input.previous_output)?;
            }
        }

        db.commit_batch(batch)?;
        info!("finished setup, elapsed {:?}ms", start.elapsed().as_millis());

        Ok(())
    }

    /// Download the txs identified by `history_txs_id`, and their previous outputs, if not already present in db
    fn download_and_save_needed_raw_txs<D: BatchDatabase>(
        &self,
        history_txs_id: &HashSet<Txid>,
        txs_raw_in_db: &HashMap<Txid, Transaction>,
        chunk_size: usize,
        db: &mut D,
    ) -> Result<Vec<Transaction>, Error> {
        let mut txs_downloaded = vec![];
        let txids_raw_in_db: HashSet<Txid> = txs_raw_in_db.keys().cloned().collect();
        let txids_to_download: Vec<&Txid> = history_txs_id.difference(&txids_raw_in_db).collect();
        if !txids_to_download.is_empty() {
            info!("got {} txs to download", txids_to_download.len());
            txs_downloaded.extend(maybe_await!(self.download_and_save_in_chunks(
                txids_to_download,
                chunk_size,
                db,
            ))?);
            let mut prev_txids = HashSet::new();
            let mut txids_downloaded = HashSet::new();
            for tx in txs_downloaded.iter() {
                txids_downloaded.insert(tx.txid());
                // add every previous input tx, but skip coinbase
                for input in tx.input.iter().filter(|i| !i.previous_output.is_null()) {
                    prev_txids.insert(input.previous_output.txid);
                }
            }
            let already_present: HashSet<Txid> =
                txids_downloaded.union(&txids_raw_in_db).cloned().collect();
            let prev_txs_to_download: Vec<&Txid> =
                prev_txids.difference(&already_present).collect();
            info!("{} previous txs to download", prev_txs_to_download.len());
            txs_downloaded.extend(maybe_await!(self.download_and_save_in_chunks(
                prev_txs_to_download,
                chunk_size,
                db,
            ))?);
        }

        Ok(txs_downloaded)
    }

    /// Download the headers at the heights in `txid_height` if the tx details are not already present; returns a map Txid -> timestamp
    fn download_needed_headers(
        &self,
        txid_height: &HashMap<Txid, Option<u32>>,
        txs_details_in_db: &HashMap<Txid, TransactionDetails>,
        chunk_size: usize,
    ) -> Result<HashMap<Txid, u64>, Error> {
        let mut txid_timestamp = HashMap::new();
        let txid_in_db_with_conf: HashSet<_> = txs_details_in_db
            .values()
            .filter_map(|details| details.confirmation_time.as_ref().map(|_| details.txid))
            .collect();
        let needed_txid_height: HashMap<&Txid, u32> = txid_height
            .iter()
            .filter(|(t, _)| !txid_in_db_with_conf.contains(*t))
            .filter_map(|(t, o)| o.map(|h| (t, h)))
            .collect();
        let needed_heights: HashSet<u32> = needed_txid_height.values().cloned().collect();
        if !needed_heights.is_empty() {
            info!("{} headers to download for timestamp", needed_heights.len());
            let mut height_timestamp: HashMap<u32, u64> = HashMap::new();
            for chunk in ChunksIterator::new(needed_heights.into_iter(), chunk_size) {
                let call_result: Vec<BlockHeader> =
                    maybe_await!(self.els_batch_block_header(chunk.clone()))?;
                height_timestamp.extend(
                    chunk
                        .into_iter()
                        .zip(call_result.iter().map(|h| h.time as u64)),
                );
            }
            for (txid, height) in needed_txid_height {
                let timestamp = height_timestamp
                    .get(&height)
                    .ok_or_else(|| Error::Generic("timestamp missing".to_string()))?;
                txid_timestamp.insert(*txid, *timestamp);
            }
        }

        Ok(txid_timestamp)
    }

    fn download_and_save_in_chunks<D: BatchDatabase>(
        &self,
        to_download: Vec<&Txid>,
        chunk_size: usize,
        db: &mut D,
    ) -> Result<Vec<Transaction>, Error> {
        let mut txs_downloaded = vec![];
        for chunk in ChunksIterator::new(to_download.into_iter(), chunk_size) {
            let call_result: Vec<Transaction> =
                maybe_await!(self.els_batch_transaction_get(chunk))?;
            let mut batch = db.begin_batch();
            for new_tx in call_result.iter() {
                batch.set_raw_tx(new_tx)?;
            }
            db.commit_batch(batch)?;
            txs_downloaded.extend(call_result);
        }

        Ok(txs_downloaded)
    }
}

fn save_transaction_details_and_utxos<D: BatchDatabase>(
    txid: &Txid,
    db: &mut D,
    timestamp: Option<u64>,
    height: Option<u32>,
    updates: &mut dyn BatchOperations,
    utxo_deps: &HashMap<OutPoint, OutPoint>,
) -> Result<(), Error> {
    let tx = db.get_raw_tx(txid)?.ok_or(Error::TransactionNotFound)?;

    let mut incoming: u64 = 0;
    let mut outgoing: u64 = 0;

    let mut inputs_sum: u64 = 0;
    let mut outputs_sum: u64 = 0;

    // look for our own inputs
    for input in tx.input.iter() {
        // skip coinbase inputs
        if input.previous_output.is_null() {
            continue;
        }

        // We already downloaded all previous output txs in the previous step
        if let Some(previous_output) = db.get_previous_output(&input.previous_output)? {
            inputs_sum += previous_output.value;

            if db.is_mine(&previous_output.script_pubkey)? {
                outgoing += previous_output.value;
            }
        } else {
            // The input is not ours, but we still need to count it for the fees
            let tx = db
                .get_raw_tx(&input.previous_output.txid)?
                .ok_or(Error::TransactionNotFound)?;
            inputs_sum += tx.output[input.previous_output.vout as usize].value;
        }

        // remove any conflicting UTXO (generated from the same inputs, e.g. by RBF)
        if let Some(outpoint) = utxo_deps.get(&input.previous_output) {
            updates.del_utxo(outpoint)?;
        }
    }

    for (i, output) in tx.output.iter().enumerate() {
        // to compute the fees later
        outputs_sum += output.value;

        // this output is ours, we have a path to derive it
        if let Some((keychain, _child)) = db.get_path_from_script_pubkey(&output.script_pubkey)? {
            debug!("{} output #{} is mine, adding utxo", txid, i);
            updates.set_utxo(&LocalUtxo {
                outpoint: OutPoint::new(tx.txid(), i as u32),
                txout: output.clone(),
                keychain,
            })?;

            incoming += output.value;
        }
    }

    let tx_details = TransactionDetails {
        txid: tx.txid(),
        transaction: Some(tx),
        received: incoming,
        sent: outgoing,
        confirmation_time: BlockTime::new(height, timestamp),
        fee: Some(inputs_sum.saturating_sub(outputs_sum)), /* if the tx is a coinbase, the fee would be negative */
        verified: height.is_some(),
    };
    updates.set_tx(&tx_details)?;

    Ok(())
}

/// Returns the utxo dependencies, i.e. the inputs needed for each utxo to exist.
/// `tx_raw_in_db` must contain the utxos' generating txs, or this errors with [crate::Error::TransactionNotFound]
fn utxos_deps<D: BatchDatabase>(
    db: &mut D,
    tx_raw_in_db: &HashMap<Txid, Transaction>,
) -> Result<HashMap<OutPoint, OutPoint>, Error> {
    let utxos = db.iter_utxos()?;
    let mut utxos_deps = HashMap::new();
    for utxo in utxos {
        let from_tx = tx_raw_in_db
            .get(&utxo.outpoint.txid)
            .ok_or(Error::TransactionNotFound)?;
        for input in from_tx.input.iter() {
            utxos_deps.insert(input.previous_output, utxo.outpoint);
        }
    }
    Ok(utxos_deps)
}
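The Electrum height convention handled in `electrum_like_setup` above (`-1` for unconfirmed with unconfirmed parents, `0` for unconfirmed with confirmed parents, both treated as "no confirmation height") reduces to a small pure function. This is an illustrative sketch; `confirmation_height` is a hypothetical helper name, not a BDK function:

```rust
// Electrum history entries report height -1 (unconfirmed, unconfirmed parents)
// or 0 (unconfirmed, confirmed parents); both map to None, matching the
// `if el.height <= 0` branch in the sync loop.
fn confirmation_height(height: i32) -> Option<u32> {
    if height <= 0 {
        None
    } else {
        Some(height as u32)
    }
}

fn main() {
    assert_eq!(confirmation_height(-1), None);
    assert_eq!(confirmation_height(0), None);
    assert_eq!(confirmation_height(680_000), Some(680_000));
}
```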
@@ -84,7 +84,7 @@ macro_rules! impl_leaf_opcode {
            )
            .map_err($crate::descriptor::DescriptorError::Miniscript)
            .and_then(|minisc| {
-                minisc.check_minsicript()?;
+                minisc.check_miniscript()?;
                Ok(minisc)
            })
            .map(|minisc| {
@@ -108,7 +108,7 @@ macro_rules! impl_leaf_opcode_value {
            )
            .map_err($crate::descriptor::DescriptorError::Miniscript)
            .and_then(|minisc| {
-                minisc.check_minsicript()?;
+                minisc.check_miniscript()?;
                Ok(minisc)
            })
            .map(|minisc| {
@@ -132,7 +132,7 @@ macro_rules! impl_leaf_opcode_value_two {
            )
            .map_err($crate::descriptor::DescriptorError::Miniscript)
            .and_then(|minisc| {
-                minisc.check_minsicript()?;
+                minisc.check_miniscript()?;
                Ok(minisc)
            })
            .map(|minisc| {
@@ -165,7 +165,7 @@ macro_rules! impl_node_opcode_two {
                std::sync::Arc::new(b_minisc),
            ))?;

-            minisc.check_minsicript()?;
+            minisc.check_miniscript()?;

            Ok((minisc, a_keymap, $crate::keys::merge_networks(&a_networks, &b_networks)))
        })
@@ -197,7 +197,7 @@ macro_rules! impl_node_opcode_three {
                std::sync::Arc::new(c_minisc),
            ))?;

-            minisc.check_minsicript()?;
+            minisc.check_miniscript()?;

            Ok((minisc, a_keymap, networks))
        })
@@ -243,7 +243,7 @@ macro_rules! apply_modifier {
            ),
        )?;

-        minisc.check_minsicript()?;
+        minisc.check_miniscript()?;

        Ok((minisc, keymap, networks))
    })
@@ -238,13 +238,13 @@ pub(crate) fn into_wallet_descriptor_checked<T: IntoWalletDescriptor>(
#[doc(hidden)]
/// Used internally mainly by the `descriptor!()` and `fragment!()` macros
pub trait CheckMiniscript<Ctx: miniscript::ScriptContext> {
-    fn check_minsicript(&self) -> Result<(), miniscript::Error>;
+    fn check_miniscript(&self) -> Result<(), miniscript::Error>;
}

impl<Ctx: miniscript::ScriptContext, Pk: miniscript::MiniscriptKey> CheckMiniscript<Ctx>
    for miniscript::Miniscript<Pk, Ctx>
{
-    fn check_minsicript(&self) -> Result<(), miniscript::Error> {
+    fn check_miniscript(&self) -> Result<(), miniscript::Error> {
        Ctx::check_global_validity(self)?;

        Ok(())
@@ -748,7 +748,7 @@ pub fn make_pk<Pk: IntoDescriptorKey<Ctx>, Ctx: ScriptContext>(
    let (key, key_map, valid_networks) = descriptor_key.into_descriptor_key()?.extract(secp)?;
    let minisc = Miniscript::from_ast(Terminal::PkK(key))?;

-    minisc.check_minsicript()?;
+    minisc.check_miniscript()?;

    Ok((minisc, key_map, valid_networks))
}
@@ -762,7 +762,7 @@ pub fn make_pkh<Pk: IntoDescriptorKey<Ctx>, Ctx: ScriptContext>(
    let (key, key_map, valid_networks) = descriptor_key.into_descriptor_key()?.extract(secp)?;
    let minisc = Miniscript::from_ast(Terminal::PkH(key))?;

-    minisc.check_minsicript()?;
+    minisc.check_miniscript()?;

    Ok((minisc, key_map, valid_networks))
}
@@ -777,7 +777,7 @@ pub fn make_multi<Pk: IntoDescriptorKey<Ctx>, Ctx: ScriptContext>(
    let (pks, key_map, valid_networks) = expand_multi_keys(pks, secp)?;
    let minisc = Miniscript::from_ast(Terminal::Multi(thresh, pks))?;

-    minisc.check_minsicript()?;
+    minisc.check_miniscript()?;

    Ok((minisc, key_map, valid_networks))
}
@@ -609,6 +609,74 @@ macro_rules! bdk_blockchain_tests {
            assert_eq!(wallet.list_unspent().unwrap().len(), 1, "incorrect number of unspents");
        }

        /// Send two conflicting transactions to the same address twice in a row.
        /// The coins should only be received once!
        #[test]
        fn test_sync_double_receive() {
            let (wallet, descriptors, mut test_client) = init_single_sig();
            let receiver_wallet = get_wallet_from_descriptors(&("wpkh(cVpPVruEDdmutPzisEsYvtST1usBR3ntr8pXSyt6D2YYqXRyPcFW)".to_string(), None), &test_client);
            // need to sync so rpc can start watching
            receiver_wallet.sync(noop_progress(), None).unwrap();

            test_client.receive(testutils! {
                @tx ( (@external descriptors, 0) => 50_000, (@external descriptors, 1) => 25_000 ) (@confirmations 1)
            });

            wallet.sync(noop_progress(), None).unwrap();
            assert_eq!(wallet.get_balance().unwrap(), 75_000, "incorrect balance");
            let target_addr = receiver_wallet.get_address($crate::wallet::AddressIndex::New).unwrap().address;

            let tx1 = {
                let mut builder = wallet.build_tx();
                builder.add_recipient(target_addr.script_pubkey(), 49_000).enable_rbf();
                let (mut psbt, _details) = builder.finish().unwrap();
                let finalized = wallet.sign(&mut psbt, Default::default()).unwrap();
                assert!(finalized, "Cannot finalize transaction");
                psbt.extract_tx()
            };

            let tx2 = {
                let mut builder = wallet.build_tx();
                builder.add_recipient(target_addr.script_pubkey(), 49_000).enable_rbf().fee_rate(FeeRate::from_sat_per_vb(5.0));
                let (mut psbt, _details) = builder.finish().unwrap();
                let finalized = wallet.sign(&mut psbt, Default::default()).unwrap();
                assert!(finalized, "Cannot finalize transaction");
                psbt.extract_tx()
            };

            wallet.broadcast(&tx1).unwrap();
            wallet.broadcast(&tx2).unwrap();

            receiver_wallet.sync(noop_progress(), None).unwrap();
            assert_eq!(receiver_wallet.get_balance().unwrap(), 49_000, "should have received coins once and only once");
        }

        #[test]
        fn test_sync_many_sends_to_a_single_address() {
            let (wallet, descriptors, mut test_client) = init_single_sig();

            for _ in 0..4 {
                // split this up into multiple blocks so rpc doesn't get angry
                for _ in 0..20 {
                    test_client.receive(testutils! {
                        @tx ( (@external descriptors, 0) => 1_000 )
                    });
                }
                test_client.generate(1, None);
            }

            // add some to the mempool as well.
            for _ in 0..20 {
                test_client.receive(testutils! {
                    @tx ( (@external descriptors, 0) => 1_000 )
                });
            }

            wallet.sync(noop_progress(), None).unwrap();

            assert_eq!(wallet.get_balance().unwrap(), 100_000);
        }

        #[test]
        fn test_update_confirmation_time_after_generate() {
            let (wallet, descriptors, mut test_client) = init_single_sig();
@@ -4006,3 +4006,32 @@ pub(crate) mod test {
        builder.finish().unwrap();
    }
}

/// Deterministically generate a unique name given the descriptors defining the wallet
pub fn wallet_name_from_descriptor<T>(
    descriptor: T,
    change_descriptor: Option<T>,
    network: Network,
    secp: &SecpCtx,
) -> Result<String, Error>
where
    T: IntoWalletDescriptor,
{
    // TODO: check that the descriptors contain only public keys
    let descriptor = descriptor
        .into_wallet_descriptor(secp, network)?
        .0
        .to_string();
    let mut wallet_name = get_checksum(&descriptor[..descriptor.find('#').unwrap()])?;
    if let Some(change_descriptor) = change_descriptor {
        let change_descriptor = change_descriptor
            .into_wallet_descriptor(secp, network)?
            .0
            .to_string();
        wallet_name.push_str(
            get_checksum(&change_descriptor[..change_descriptor.find('#').unwrap()])?.as_str(),
        );
    }

    Ok(wallet_name)
}
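`wallet_name_from_descriptor` slices each descriptor string at the `#` that introduces its checksum before hashing it. The slicing step can be sketched on its own with `std` only; `strip_checksum` is a hypothetical helper (the original uses `.find('#').unwrap()` inline, so a missing `#` would panic there, while this sketch passes the string through unchanged):

```rust
// Strip a trailing "#checksum" suffix from a descriptor string, if present.
fn strip_checksum(descriptor: &str) -> &str {
    match descriptor.find('#') {
        Some(pos) => &descriptor[..pos],
        None => descriptor,
    }
}

fn main() {
    // "#tqz0nc62" is a made-up checksum for illustration, not a real one.
    assert_eq!(strip_checksum("wpkh(key/0/*)#tqz0nc62"), "wpkh(key/0/*)");
    assert_eq!(strip_checksum("wpkh(key/0/*)"), "wpkh(key/0/*)");
}
```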
@@ -138,40 +138,6 @@ impl<Pk: MiniscriptKey + ToPublicKey> Satisfier<Pk> for Older {

pub(crate) type SecpCtx = Secp256k1<All>;

pub struct ChunksIterator<I: Iterator> {
    iter: I,
    size: usize,
}

#[cfg(any(feature = "electrum", feature = "esplora"))]
impl<I: Iterator> ChunksIterator<I> {
    pub fn new(iter: I, size: usize) -> Self {
        ChunksIterator { iter, size }
    }
}

impl<I: Iterator> Iterator for ChunksIterator<I> {
    type Item = Vec<<I as std::iter::Iterator>::Item>;

    fn next(&mut self) -> Option<Self::Item> {
        let mut v = Vec::new();
        for _ in 0..self.size {
            let e = self.iter.next();

            match e {
                None => break,
                Some(val) => v.push(val),
            }
        }

        if v.is_empty() {
            return None;
        }

        Some(v)
    }
}

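The behavior of the removed `ChunksIterator` can be reproduced compactly with `std` alone: `by_ref().take(n).collect()` yields the same "full chunks, then a short final chunk" shape. `Chunks` below is a hypothetical stand-in for illustration, not the BDK type:

```rust
// Minimal equivalent of ChunksIterator using Iterator::by_ref + take.
struct Chunks<I: Iterator> {
    iter: I,
    size: usize,
}

impl<I: Iterator> Iterator for Chunks<I> {
    type Item = Vec<I::Item>;

    fn next(&mut self) -> Option<Self::Item> {
        // Pull up to `size` items; an empty pull means the inner iterator is done.
        let v: Vec<_> = self.iter.by_ref().take(self.size).collect();
        if v.is_empty() {
            None
        } else {
            Some(v)
        }
    }
}

fn main() {
    let chunks: Vec<Vec<u32>> = Chunks { iter: 1..=7u32, size: 3 }.collect();
    assert_eq!(chunks, vec![vec![1, 2, 3], vec![4, 5, 6], vec![7]]);
}
```

BDK used this to batch Electrum requests (script histories, raw txs, headers) so each round trip carries at most `chunk_size` items.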
#[cfg(test)]
mod test {
    use super::{