68 Commits

Author SHA1 Message Date
Andrew Poelstra
e72e93ad9c Add note about y=0 being possible on one of the sextic twists 2016-01-10 08:35:59 +00:00
Pieter Wuille
646662517f Improvements for coordinate decompression 2015-11-05 00:04:39 +01:00
Gregory Maxwell
9f6993f370 Remove some dead code. 2015-09-28 05:43:51 +00:00
Gregory Maxwell
2b199de888 Use the explicit NULL macro for pointer comparisons.
This makes it more clear that a null check is intended. Avoiding the
 use of a pointer as a test condition also increases the type-safety
 of the comparisons.

(This is also MISRA C 2012 rules 14.4 and 11.9)
2015-09-23 22:00:43 +00:00
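
A minimal illustration of the rule (my own trivial example, not code from the commit):

    #include <stdio.h>

    static void handle(const int *p) {
        printf("%d\n", *p);
    }

    static void example(const int *p) {
        if (p != NULL) {        /* explicit null check: intent and type are clear */
            handle(p);
        }
        /* rather than:  if (p) { handle(p); }  */
    }

    int main(void) {
        int x = 1;
        example(&x);
        example(NULL);          /* safely skipped thanks to the explicit check */
        return 0;
    }
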
Pieter Wuille
dd891e0ed5 Get rid of _t as it is POSIX reserved 2015-09-21 21:03:37 +02:00
Gregory Maxwell
912f203fc5 Eliminate a few unbraced statements that crept into the code.
Also avoids some easily avoided multiple-returns.
2015-09-21 17:21:35 +00:00
GSongHashrate
81e45ff9d1 Update group_impl.h 2015-09-17 22:38:21 +01:00
Andrew Poelstra
4401500060 Add constant-time multiply secp256k1_ecmult_const for ECDH
Designed with clear separation of the wNAF conversion, precomputation
and exponentiation (since we will probably want to expose at least the
precomputation separately in the API for users who reuse points a lot).

Future work:
  - actually separate precomp in the API
  - do multiexp rather than single exponentiation
2015-07-31 12:39:09 -05:00
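
A sketch of the constant-time selection the exponentiation step relies on (my own illustration with hypothetical fe_t/ge_t types and sizes, not the library's actual code): every table entry is touched and the wanted one is blended in with a mask, so neither branches nor memory addresses depend on the secret index.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uint32_t v[8]; } fe_t;   /* hypothetical field element */
    typedef struct { fe_t x, y; } ge_t;       /* hypothetical affine point  */

    /* Conditionally overwrite *r with *a when flag is 1; flag must be 0 or 1. */
    static void fe_cmov(fe_t *r, const fe_t *a, uint32_t flag) {
        uint32_t mask = (uint32_t)0 - flag;   /* all zeros or all ones */
        size_t i;
        for (i = 0; i < 8; i++) {
            r->v[i] = (r->v[i] & ~mask) | (a->v[i] & mask);
        }
    }

    /* Read every table entry and keep exactly one, so the access pattern
     * is independent of the secret index. */
    static void ge_table_select(ge_t *r, const ge_t *table, size_t len, size_t idx) {
        size_t i;
        *r = table[0];
        for (i = 1; i < len; i++) {
            uint32_t match = (uint32_t)(i == idx);
            fe_cmov(&r->x, &table[i].x, match);
            fe_cmov(&r->y, &table[i].y, match);
        }
    }

The wNAF conversion and the main ladder are omitted; this only shows the table-lookup building block.
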
Pieter Wuille
995c548771 Introduce callback functions for dealing with errors. 2015-07-26 18:08:38 +02:00
Peter Dettman
5a43124c69 Save 1 _fe_negate since s1 == -s2 2015-07-07 22:30:00 +10:00
Peter Dettman
a5d796e0b1 Update code comments 2015-07-07 09:16:15 +09:30
Peter Dettman
7d054cd030 Refactor to save a _fe_negate 2015-07-04 16:38:46 +09:30
Peter Dettman
b28d02a5d5 Refactor to remove a local var 2015-07-04 16:30:56 +09:30
Peter Dettman
55e7fc32cb Perf. improvement in _gej_add_ge
- Avoid one weak normalization
- Change one full normalization to weak
- Avoid unnecessary fe assignment
- Update magnitude annotations
2015-07-04 16:21:35 +09:30
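
For context, the distinction the bullet points rely on, paraphrased from the library's field-element conventions rather than from this commit: field elements are stored in a redundant representation whose value can exceed the prime p, and the two normalization levels guarantee different things.

    \text{weak normalization:}\quad v \equiv a \pmod{p}\ \text{with the limbs reduced to magnitude } 1,
    \qquad
    \text{full normalization:}\quad v = a \bmod p,\ \ 0 \le v < p .

Full normalization is the more expensive of the two, which is why downgrading one of them to weak is a win.
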
Andrew Poelstra
5de4c5dffd
gej_add_ge: fix degenerate case when computing P + (-lambda)P
If two points (x1, y1) and (x2, y2) are given to gej_add_ge with
x1 != x2 but y1 = -y2, the function gives a wrong answer since
this causes it to compute "lambda = 0/0" during an intermediate
step. (Here lambda refers to an auxiliary variable in the point
addition formula, not the cube-root of 1 used by the endomorphism
optimization.)

This commit catches the 0/0 and replaces it with an alternate
expression for lambda, cmov'ing it in place if necessary.
2015-06-29 08:21:58 -07:00
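
Spelling the degeneracy out (my restatement, assuming the unified addition formula with denominator y1 + y2 that this function uses): on y^2 = x^3 + 7,

    \lambda \;=\; \frac{y_1 - y_2}{x_1 - x_2} \;=\; \frac{x_1^2 + x_1 x_2 + x_2^2}{y_1 + y_2},
    \qquad\text{because}\quad (y_1 - y_2)(y_1 + y_2) \;=\; y_1^2 - y_2^2 \;=\; x_1^3 - x_2^3 .

    \text{If } y_1 = -y_2 \text{ and } x_1 \ne x_2:\quad y_1 + y_2 = 0
    \quad\text{and}\quad x_1^2 + x_1 x_2 + x_2^2 = \frac{x_1^3 - x_2^3}{x_1 - x_2} = 0 ,

so both numerator and denominator vanish, which is exactly the P + (-lambda)P situation the commit patches with an alternate expression for lambda.
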
Andrew Poelstra
bcf2fcfd3a
gej_add_ge: rearrange algebra
There are zero functionality or opcount changes here; I need to do
this to make sure both R and M are computed before they are used,
since a future patch will replace either none or both of them.

Also compute r->y directly in terms of r->x, which again will be
used in a future patch.
2015-06-23 12:44:15 -07:00
Pieter Wuille
a1d5ae1527 Tiny optimization 2015-05-05 20:40:24 +02:00
Peter Dettman
2d5a186cee Apply effective-affine trick to precomp 2015-04-30 09:25:44 -07:00
Peter Dettman
4f9791abba Effective affine addition in EC multiplication
* Make secp256k1_gej_add_var and secp256k1_gej_double return the
  Z ratio to go from a.z to r.z.
* Use these Z ratios to speed up batch point conversion to affine
  coordinates, and to speed up batch conversion of points to a
  common Z coordinate.
* Add a point addition function that takes a point with a known
  Z inverse.
* Due to secp256k1's endomorphism, all additions in the EC
  multiplication code can work on affine coordinates (with an
  implicit common Z coordinate), correcting the Z coordinate of
  the result afterwards.

Refactoring by Pieter Wuille:
* Move more global-z logic into the group code.
* Separate the code for computing the odd multiples from the code that
  brings them into either storage or globalz format.
* Rename functions.
* Make all addition operations return Z ratios, and test them.
* Make the zr table format compatible with future batch chaining
  (the first entry in zr becomes the ratio between the input and the
  first output).

Original idea and code by Peter Dettman.
2015-04-30 09:23:21 -07:00
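
The identity behind the effective-affine trick, summarized (not text from the commit): rescaling Jacobian coordinates leaves the represented affine point unchanged, so points that share a common Z can be fed to the cheaper Jacobian-plus-affine addition and the common Z folded back into the result once.

    (X, Y, Z) \;\sim\; (c^2 X,\ c^3 Y,\ c Z) \quad\text{for any } c \ne 0,
    \qquad\text{both representing the affine point } \left( \frac{X}{Z^2},\ \frac{Y}{Z^3} \right).

Tracking the Z ratios returned by _gej_add_var/_gej_double is what makes it cheap to bring a whole batch of intermediate results onto such a common Z.
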
Gregory Maxwell
d2275795ff Add scalar blinding and a secp256k1_context_randomize() call.
This computes (n-b)G + bG with random value b, in place of nG in
 ecmult_gen() for signing.

This is intended to reduce exposure to potential power/EMI sidechannels
 during signing and pubkey generation by blinding the secret value with
 another value which is hopefully unknown to the attacker.

It may not be very helpful if the attacker is able to observe the setup
 or if even the scalar addition has an unacceptable leak, but it has low
 overhead in any case and the security should be purely additive on top
 of the existing defenses against sidechannels.
2015-04-22 19:25:16 +00:00
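
The identity being exploited, with the reason it blinds (my gloss on the message above):

    nG \;=\; (n - b)G \;+\; bG ,
    \qquad\text{and for uniformly random } b,\ (n - b) \bmod N \text{ is uniform and, on its own, independent of } n ,

so the table-driven part of ecmult_gen only ever operates on the blinded scalar n - b, with bG carried along as a precomputed term.
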
Gregory Maxwell
bb0ea50de8 Replace set/add with cmov in secp256k1_gej_add_ge.
Use a conditional move of the same kind we use for the affine points
 in the storage type instead of multiplying with the infinity flag
 and adding. This results in fewer constructions to worry about for
 sidechannel behavior.

It also might be faster: it doesn't appear to benchmark as slower for
 me at least. I think the CMOV is faster than the mul_int + add but
 slower than the set + add, making it roughly a wash.
2015-04-22 00:43:30 +00:00
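
A simplified scalar analogue of the two constructions being compared (my sketch, not the group-element code):

    #include <stdint.h>

    /* "set/add" style: relies on flag being exactly 0 or 1, a multiply by
     * the flag and an addition. */
    static uint64_t select_muladd(uint64_t a, uint64_t b, uint64_t flag) {
        return a * (1 - flag) + b * flag;
    }

    /* cmov style (what the commit switches to): expand the flag into an
     * all-zeros or all-ones mask and blend the two values. */
    static uint64_t select_cmov(uint64_t a, uint64_t b, uint64_t flag) {
        uint64_t mask = (uint64_t)0 - flag;
        return (a & ~mask) | (b & mask);
    }
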
Gregory Maxwell
c01df1adc9 Avoid some implicit type conversions to make C++ compilers happy. 2015-03-28 02:20:36 +00:00
Gregory Maxwell
2632019713 Brace all the if/for/while.
Unbraced statements spanning multiple lines have been shown in many
 projects to contribute to the introduction of bugs and a failure
 to catch them in review, especially for maintenance on infrequently
 modified code.

Most, but not all, of the existing unbraced statements in the codebase
 were not cases that I would have expected to eventually result in bugs,
 but applying it as a rule makes it easier for other people to
 contribute safely.

I'm not aware of any such evidence for the case where the statement is
 on a single line, but some people strongly prefer never to do that, and
 the opposite rule of "_always_ use a single line for single-statement
 blocks" isn't reasonable for formatting reasons. Might as well brace
 all of these too, since that's more universally acceptable.

[In any case, I seem to have introduced the vast majority of the
 single-line form (as they're my preference where they fit).]

This also removes a broken test which is no longer needed.
2015-03-27 23:24:32 +00:00
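
The shape being standardized on, as a trivial example (mine):

    /* Every if/for/while body gets braces, even a single statement. */
    static int clamp_positive(int x) {
        if (x > 0) {
            x -= 1;
        }
        return x;
    }
    /* rather than:  if (x > 0) x -= 1;  or an unbraced body spanning lines. */
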
Pieter Wuille
443cd4b8ee Get rid of hex format and some binary conversions 2015-02-23 04:37:21 -08:00
Gregory Maxwell
6efd6e7777 Some comments explaining some of the constants in the code. 2015-02-07 00:22:13 +00:00
Pieter Wuille
d61e899531 Add group operation counts 2015-01-27 12:32:53 -04:00
Gregory Maxwell
f735446c4d Convert the rest of the codebase to C89.
Update build system to enforce -std=c89 -pedantic.
2015-01-25 17:44:10 +00:00
Pieter Wuille
55422b6aaf Switch ecmult_gen to use storage types 2015-01-25 00:46:31 -04:00
Pieter Wuille
e68d7208ec Add group element storage type 2015-01-25 00:31:56 -04:00
Pieter Wuille
0768bd55a1 Get rid of variable-length hex string conversions 2015-01-24 21:52:48 -04:00
Gregory Maxwell
3627437d80 C89 nits and dead code removal. 2015-01-23 04:17:12 +00:00
Pieter Wuille
4732d26069 Convert the field/group/ecdsa constant initialization to static consts 2015-01-22 22:44:52 -05:00
Peter Dettman
49ee0dbe16 Add _normalizes_to_zero_var variant 2014-12-20 14:38:29 +01:00
Peter Dettman
eed599dd72 Add _fe_normalizes_to_zero method 2014-12-20 14:38:24 +01:00
Pieter Wuille
d7174edf5f Weak normalization for secp256k1_fe_equal 2014-12-20 14:38:20 +01:00
Pieter Wuille
0295f0a33d weak normalization 2014-12-20 14:38:07 +01:00
Pieter Wuille
ce7eb6fb3d Optimize verification: avoid field inverse
Suggested by Greg Maxwell.
2014-12-16 22:38:17 +01:00
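
The message doesn't spell the trick out, but the standard way to avoid the inverse here (offered as my reading, not a quote) is to compare in Jacobian coordinates instead of converting R to affine:

    R = (X, Y, Z)\ \text{(Jacobian)},\qquad x(R) = \frac{X}{Z^2},
    \qquad\text{so checking } x(R) \stackrel{?}{=} r \text{ becomes } X \stackrel{?}{=} r \cdot Z^2 \quad (Z \ne 0),

trading a field inversion for one squaring and one multiplication (with the usual extra care for the reduction of x(R) modulo the group order).
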
Pieter Wuille
6a9901e15b
Merge pull request #137
39bd94d Variable time normalize (Pieter Wuille)
2014-12-07 14:35:23 +01:00
Pieter Wuille
17288069fb
Merge pull request #138
a5759c5 Check return value of malloc (Pieter Wuille)
2b9388b Remove unused secp256k1_fe_inv_all (Pieter Wuille)
f461b76 Allocate precomputation arrays on the heap (Pieter Wuille)
2014-12-07 13:19:21 +01:00
Pieter Wuille
a5759c572e Check return value of malloc 2014-12-07 02:58:24 +01:00
Pieter Wuille
39bd94d86d Variable time normalize 2014-12-06 18:18:28 +01:00
Pieter Wuille
54b768c6da Another redundant secp256k1_fe_normalize 2014-12-06 17:30:08 +01:00
Gregory Maxwell
1c29f2eb49 Remove redundant secp256k1_fe_normalize from secp256k1_gej_add_ge_var.
This was a missed optimization in the extraction of gej+ge from gej+gej.
2014-12-06 05:09:57 -08:00
Pieter Wuille
f461b76925 Allocate precomputation arrays on the heap 2014-12-05 18:13:28 +01:00
Pieter Wuille
efb7d4b299 Use constant-time conditional moves instead of byte slicing 2014-12-03 02:41:55 +01:00
Pieter Wuille
bd313f7d6e
Merge pull request #119
597128d Make num optional (Pieter Wuille)
659b554 Make constant initializers independent from num (Pieter Wuille)
2014-12-02 16:42:50 +01:00
Pieter Wuille
be82e92fc4 Require that r and b are different for field multiplication.
Suggested by Peter Dettman, this prepares for slightly faster multiplication
which writes results immediately to r before finishing reading b.
2014-12-01 13:40:34 +01:00
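
A toy two-limb "multiplication" (hypothetical, not the real field code) showing why the restriction is needed once results are written to r before b has been fully read:

    #include <stdint.h>

    typedef struct { uint32_t limb[2]; } toy_fe;

    /* a is cached up front, so r == a is harmless; but r->limb[0] is written
     * while b->limb[0] is still needed, so calling this with r == b would
     * read an already-overwritten input. */
    static void toy_mul(toy_fe *r, const toy_fe *a, const toy_fe *b) {
        uint32_t a0 = a->limb[0];
        uint32_t a1 = a->limb[1];
        r->limb[0] = a0 * b->limb[0];
        r->limb[1] = a0 * b->limb[1] + a1 * b->limb[0];
    }
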
Pieter Wuille
659b554d7b Make constant initializers independent from num 2014-12-01 12:38:38 +01:00
Pieter Wuille
0af5b47133
Merge pull request #120
e3d692f Explain why no y=0 check is necessary for doubling (Pieter Wuille)
f7dc1c6 Optimize doubling: secp256k1 has no y=0 point (Pieter Wuille)
2014-12-01 12:38:13 +01:00
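
The reasoning those two commits rest on, restated: the doubling slope divides by 2y, and the claim is that on secp256k1 itself (unlike one of its sextic twists, per the note at the top of this log) that denominator is never zero.

    \lambda_{\mathrm{dbl}} = \frac{3x^2}{2y},
    \qquad
    y = 0 \;\Rightarrow\; x^3 + 7 \equiv 0 \pmod{p},

and x^3 ≡ -7 (mod p) has no solution for secp256k1's p, so no curve point has y = 0 and the doubling code can skip the check.
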
Pieter Wuille
4285a98722 Move lambda-splitting code to scalar.
It's not really an operation on group elements.
2014-11-30 23:38:01 +01:00
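
For context on what "lambda-splitting" means here (the standard GLV-style endomorphism decomposition; my summary, not text from the commit): lambda is a cube root of unity modulo the group order N, it acts on points through a constant beta in the field, and a scalar is split into two roughly half-length parts.

    \lambda \cdot (x, y) = (\beta x,\ y), \qquad \lambda^3 \equiv 1 \pmod{N},
    \qquad
    k \equiv k_1 + k_2 \lambda \pmod{N}, \quad |k_1|, |k_2| \approx \sqrt{N}.

Since the split manipulates only scalars modulo N, it naturally lives in the scalar code rather than with the group operations.
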