ecdsa - Bitcoin Stack Exchange

The implementation of the Schnorr/Taproot consensus rules has been merged into Bitcoin Core.

What Are Schnorr and Taproot?
Schnorr is an alternative to ECDSA, the algorithm currently used to generate cryptographic signatures. Schnorr signatures would enable more flexible creation and execution of multisignature transactions by aggregating signatures and keys, reducing transaction size and providing the added benefit of privacy, as multisignature transactions would become indistinguishable from regular Bitcoin transactions.
Taproot is the specific method by which Schnorr signatures will be leveraged to create multisignature transactions. It was first proposed by former Blockstream CTO Gregory Maxwell on the bitcoin-dev mailing list. Since then, several iterations have taken place, leading to the pull request that was merged today.
Bitcoin’s Future: Exactly How a Coming Upgrade Could Improve Privacy and Scaling
submitted by mishax1 to CryptoCurrency

ECDSA In Bitcoin

Digital signatures are considered the foundation of online sovereignty. The advent of public-key cryptography in 1976 paved the way for the creation of a global communications tool – the Internet, and a completely new form of money – Bitcoin. Although the fundamental properties of public-key cryptography have not changed much since then, dozens of different open-source digital signature schemes are now available to cryptographers.

How ECDSA was incorporated into Bitcoin

When Satoshi Nakamoto, the mysterious founder of the first cryptocurrency, started working on Bitcoin, one of the key decisions was the choice of a signature scheme for an open, public financial system. The requirements were clear: the algorithm had to be widely used, well understood, sufficiently secure, easy to work with and, most importantly, open source.
Of all the options available at that time, he chose the one that met these criteria: Elliptic Curve Digital Signature Algorithm, or ECDSA.
At that time, native support for ECDSA was provided in OpenSSL, an open set of encryption tools developed by experienced cryptographers to improve the confidentiality of online communications. Compared to other popular schemes, ECDSA offered advantages such as compact keys and signatures and fast computation.
These are extremely useful features for digital money. At the same time, it provides a proportionate level of security: a 256-bit ECDSA key, for example, offers the same level of security as a 3072-bit RSA (Rivest, Shamir and Adleman) key at a significantly smaller key size.

Basic principles of ECDSA

ECDSA is a process that uses elliptic curves and finite fields to “sign” data in such a way that third parties can easily verify the authenticity of the signature, while only the signer can create valid signatures. In the case of Bitcoin, the “data” that is signed is a transaction that transfers ownership of bitcoins.
ECDSA has two separate procedures for signing and verifying. Each procedure is an algorithm consisting of several arithmetic operations. The signature algorithm uses the private key, and the verification algorithm uses only the public key.
To use ECDSA, a protocol such as Bitcoin must fix a set of parameters for the elliptic curve and its finite field, so that every user of the protocol knows and applies the same parameters. Otherwise, everyone would be solving their own, incompatible equations, and users would never agree on anything.
For all these parameters, Bitcoin uses very, very large (well, awesomely incredibly huge) numbers. This matters: every practical application of ECDSA uses huge numbers, because the security of the algorithm relies on these values being far too large to recover a key by brute force. A 384-bit ECDSA key is considered secure enough by the NSA (USA) for protecting top-secret information.
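To make these “several arithmetic operations” concrete, here is a minimal, insecure Python sketch of ECDSA over Bitcoin's secp256k1 curve. The curve constants are the standard secp256k1 domain parameters; everything else is for illustration only (real wallets use hardened libraries such as libsecp256k1 and handle edge cases this toy omits):

```python
import hashlib
import secrets

# secp256k1 domain parameters, fixed by the Bitcoin protocol
p = 2**256 - 2**32 - 977                    # prime of the underlying field
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G = (Gx, Gy)                                # generator point on y^2 = x^3 + 7
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order

def inv(x, m):                              # modular inverse (Python 3.8+)
    return pow(x, -1, m)

def add(P, Q):                              # point addition in affine coordinates
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                         # P + (-P) = point at infinity
    if P == Q:
        lam = 3 * x1 * x1 * inv(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):                              # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def sign(priv, msg):
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    k = secrets.randbelow(n - 1) + 1        # one-time nonce; reusing it leaks the key
    r = mul(k, G)[0] % n
    s = inv(k, n) * (z + r * priv) % n
    return (r, s)

def verify(pub, msg, sig):
    r, s = sig
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    u1, u2 = z * inv(s, n) % n, r * inv(s, n) % n
    X = add(mul(u1, G), mul(u2, pub))
    return X is not None and X[0] % n == r

priv = secrets.randbelow(n - 1) + 1         # private key: a random 256-bit scalar
pub = mul(priv, G)                          # public key: priv * G (a curve point)
sig = sign(priv, b"a toy transaction")
print(verify(pub, b"a toy transaction", sig))   # True
```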

Replacement of ECDSA

Thanks to the hard work of Pieter Wuille (a well-known cryptographer) and his colleagues on libsecp256k1, an optimized implementation of the secp256k1 elliptic curve, Bitcoin's ECDSA has become even faster and more efficient. However, ECDSA still has shortcomings that could justify its complete replacement. After several years of research and experimentation, a new signature scheme was settled on to increase the confidentiality and efficiency of Bitcoin transactions: Schnorr's digital signature scheme.
Schnorr signatures take the use of “keys” to a new level. A Schnorr signature occupies only 64 bytes once it gets into a block, which reduces the space occupied by transactions by about 4%. And since all Schnorr signatures are the same size, the total size of the part of a block containing such signatures can be calculated in advance. Being able to pre-calculate block size is the key to increasing it safely in the future.
submitted by CoinjoyAssistant to btc

tBTC, an ERC-20 wrapped version of BTC like wBTC, but trustless: it does not require a centralised party to mint wrapped BTC the way wBTC does

I found this article on /Ethereum, though it didn't go into the specs of how this works:
https://np.reddit.com/ethereum/comments/ftqdna/bitcoin_to_ethereum_bridge_raises_77_million_in/
https://decrypt.co/24336/bitcoin-to-ethereum-bridge-raises-7-7-million-in-token-sale?utm_source=reddit&utm_medium=social&utm_campaign=sm
As the article says, wrapped bitcoin has been done before (e.g. wBTC), but wBTC requires a centralised party to mint wBTC from BTC held by that party, which rules it out because it's centralised.
I did some digging and there is a whitepaper, but I wanted more details on the tBTC implementation.
I went on their GitHub and looked at the READMEs of some projects; I found a few interesting things, though not an entire explanation.
https://github.com/keep-network/tbtc
tBTC is a trustlessly Bitcoin-backed ERC-20 token.
The goal of the project is to provide a stronger 2-way peg than federated sidechains like Liquid, expanding use cases possible via today’s Bitcoin network, while bringing superior money to other chains.
This repo contains the Solidity smart contracts and specification.
https://github.com/keep-network/tbtc
tbtc.js provides JS bindings to the tBTC system. The tBTC system is a bonded, multi-federated peg made up of many deposits backed by single-use BTC wallets to enable their value’s corresponding usage on the Ethereum chain, primarily through the minting of a TBTC ERC20 token whose supply is guaranteed to be backed by at least 1 BTC per TBTC in circulation.
finally this is the best:
https://tbtc.network/developers/tbtc-technical-system-overview
Here are the first few paragraphs:
2020-04-01 tBTC incorporates novel design features that carry important implications for users. This piece explains four of these: TDT receipts, multiple lot sizes, Keep's random beacon, and threshold signatures.
TBTC Deposit Token (TDT) The TBTC Deposit Token (TDT) is a non-fungible token that is minted when a user requests a deposit. A TDT is a non-fungible ERC-721 token that serves as a counterpart to TBTC. It represents a claim to a deposit's underlying UTXO on the Bitcoin blockchain.
TBTC deposits can be locked or unlocked. A locked deposit can only be redeemed by the deposit owner with the corresponding TDT. Each TDT is unique to the deposit that mints it and carries the exclusive right for up to a 6 month term to redeem the deposit.
This paragraph also addresses how wallets are created for the deposited coins:
Random Beacon for Signer Selection
The Keep network requires a trusted source of randomness to select tBTC signers. This takes the form of a BLS Threshold Relay.
When a request comes in to create a signing group, the tBTC system uses a random seed from a secure decentralized random beacon to randomly select signing group members from the eligible pool of signers. These signers coordinate a distributed key generation protocol that results in a public ECDSA key for the group, which is used to produce a wallet address that is then published to the host chain. This completes the signer selection phase.
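A rough sketch of the signer-selection idea (a conceptual illustration only, not Keep's actual implementation): the random beacon output acts as a shared seed, so every node deterministically arrives at the same signing group.

```python
import random

def select_signers(beacon_output: bytes, eligible_pool: list, group_size: int) -> list:
    """Deterministically pick a signing group from the pool using the beacon output."""
    rng = random.Random(beacon_output)      # same seed -> same group on every node
    return rng.sample(eligible_pool, group_size)

# Hypothetical pool of bonded signers; the names and sizes are made up for illustration.
pool = [f"signer-{i}" for i in range(100)]
group = select_signers(b"beacon round 42", pool, 3)
print(group)   # the selected group then runs distributed key generation for its ECDSA key
```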
My takeaway from this is that it works like a trustless sidechain rather than a federated one like Blockstream's Liquid. It uses ERC-721 NFTs as a representation of the Bitcoin UTXOs on the Bitcoin blockchain, stores the BTC in a signer-generated wallet, and mints tBTC against it. Given that smart contracts generate the wallets and mint the tBTC, it does away with the need for a centralised party to hold the BTC backing a wrapped ERC-20 version on Ethereum, and so it should be trustless.
Perhaps ERC-20 token trading is the way forward; it just requires wrapping existing coins. This looks promising for DEXs and DeFi if it happens.
It also opens the possibility of multi-collateral Dai (MCD) using tBTC in addition to ETH and BAT, though personally I think BTC should not be used in MCD.
Any thoughts on this, or is my understanding off?
thanks
edit: got some more info from px403
I talked to James a bit about tBTC in Osaka, so I have a vague idea of how it works and might be able to explain it in a somewhat coherent way.
Basically, the magic here is they reimplemented Bitcoin's SPV as an Ethereum smart contract, effectively letting them query the current state of the Bitcoin network, including validity of payments, directly in contract. Using this, they built an auction system where people can at any time claim ETH by paying BTC, or claim BTC by paying ETH. By design the spread is wide, so this isn't actually intended to be a high volume exchange, but what you do get is a pretty good price oracle.
From the price oracle, I think they were doing some Maker-style CDPs or something, where people could lock up their BTC on the Bitcoin network to redeem tBTC, and any of the locked BTC could be reclaimed by burning tBTC or something.
Sorry it's not a complete picture of what's going on, but I think that's the general gist of what they're doing.
submitted by Neophyte- to CryptoTechnology

Bitcoin (BTC): A Peer-to-Peer Electronic Cash System.

Bitcoin (BTC): A Peer-to-Peer Electronic Cash System.
  • Bitcoin (BTC) is a peer-to-peer cryptocurrency that aims to function as a means of exchange that is independent of any central authority. BTC can be transferred electronically in a secure, verifiable, and immutable way.
  • Launched in 2009, BTC is the first virtual currency to solve the double-spending issue by timestamping transactions before broadcasting them to all of the nodes in the Bitcoin network. The Bitcoin Protocol offered a solution to the Byzantine Generals’ Problem with a blockchain network structure, a notion first created by Stuart Haber and W. Scott Stornetta in 1991.
  • Bitcoin’s whitepaper was published pseudonymously in 2008 by an individual, or a group, with the pseudonym “Satoshi Nakamoto”, whose underlying identity has still not been verified.
  • The Bitcoin protocol uses an SHA-256d-based Proof-of-Work (PoW) algorithm to reach network consensus. Its network has a target block time of 10 minutes and a maximum supply of 21 million tokens, with a decaying token emission rate. To prevent fluctuation of the block time, the network’s block difficulty is re-adjusted through an algorithm based on the past 2016 block times.
  • With a block size limit capped at 1 megabyte, the Bitcoin Protocol has supported both the Lightning Network, a second-layer infrastructure for payment channels, and Segregated Witness, a soft-fork to increase the number of transactions on a block, as solutions to network scalability.

https://preview.redd.it/s2gmpmeze3151.png?width=256&format=png&auto=webp&s=9759910dd3c4a15b83f55b827d1899fb2fdd3de1

1. What is Bitcoin (BTC)?

  • Bitcoin is a peer-to-peer cryptocurrency that aims to function as a means of exchange and is independent of any central authority. Bitcoins are transferred electronically in a secure, verifiable, and immutable way.
  • Network validators, who are often referred to as miners, participate in the SHA-256d-based Proof-of-Work consensus mechanism to determine the next global state of the blockchain.
  • The Bitcoin protocol has a target block time of 10 minutes, and a maximum supply of 21 million tokens. The only way new bitcoins can be produced is when a block producer generates a new valid block.
  • The protocol has a token emission rate that halves every 210,000 blocks, or approximately every 4 years.
  • Unlike public blockchain infrastructures supporting the development of decentralized applications (Ethereum), the Bitcoin protocol is primarily used only for payments, and has only very limited support for smart contract-like functionality (Bitcoin “Script” is mostly used to set certain conditions that must be met before bitcoins can be spent).

2. Bitcoin’s core features

For a more beginner’s introduction to Bitcoin, please visit Binance Academy’s guide to Bitcoin.

Unspent Transaction Output (UTXO) model

A UTXO transaction works like a cash payment between two parties: Alice gives money to Bob and receives change (i.e., the unspent amount). In comparison, blockchains like Ethereum rely on the account model.
https://preview.redd.it/t1j6anf8f3151.png?width=1601&format=png&auto=webp&s=33bd141d8f2136a6f32739c8cdc7aae2e04cbc47
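A toy illustration of the model (hypothetical amounts and transaction IDs, not actual Bitcoin serialization): a payment consumes whole UTXOs as inputs and creates new outputs, including change back to the sender; whatever is left over is the miner fee.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UTXO:
    txid: str        # transaction that created this output
    index: int       # position of the output within that transaction
    amount: float    # value in BTC (real nodes count integer satoshis)
    owner: str

# Alice pays Bob 0.70 BTC out of a 1.00 BTC coin she controls.
inputs = [UTXO("abc123", 0, 1.00, "Alice")]                 # spent in full
outputs = [UTXO("def456", 0, 0.70, "Bob"),                  # the payment
           UTXO("def456", 1, 0.29, "Alice")]                # change back to Alice

fee = sum(u.amount for u in inputs) - sum(u.amount for u in outputs)
print(f"miner fee: {fee:.2f} BTC")                          # 0.01 BTC
```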

Nakamoto consensus

In the Bitcoin network, anyone can join the network and become a bookkeeping service provider i.e., a validator. All validators are allowed in the race to become the block producer for the next block, yet only the first to complete a computationally heavy task will win. This feature is called Proof of Work (PoW).
The probability that any single validator finishes the task first is proportional to its share of the total network computation power, or hash power. For instance, a validator with 5% of the total network computation power will have a 5% chance of completing the task first, and therefore becoming the next block producer.
Since anyone can join the race, competition is prone to increase. In the early days, Bitcoin mining was mostly done by personal computer CPUs.
As of today, Bitcoin validators, or miners, have opted for dedicated and more powerful devices such as machines based on Application-Specific Integrated Circuit (“ASIC”).
Proof of Work secures the network as block producers must have spent resources external to the network (i.e., money to pay electricity), and can provide proof to other participants that they did so.
With various miners competing for block rewards, it becomes difficult for one single malicious party to gain network majority (defined as more than 51% of the network’s hash power in the Nakamoto consensus mechanism). The ability to rearrange transactions via 51% attacks indicates another feature of the Nakamoto consensus: the finality of transactions is only probabilistic.
Once a block is produced, it is then propagated by the block producer to all other validators to check on the validity of all transactions in that block. The block producer will receive rewards in the network’s native currency (i.e., bitcoin) as all validators approve the block and update their ledgers.

The blockchain

Block production

The Bitcoin protocol utilizes the Merkle tree data structure in order to organize hashes of numerous individual transactions into each block. This concept is named after Ralph Merkle, who patented it in 1979.
With the use of a Merkle tree, a block might contain thousands of transactions, yet all of their hashes can be combined and condensed into one, allowing efficient and secure verification of this group of transactions. This single hash is called the Merkle root, and it is stored in the Block Header of a block. The Block Header also stores other meta information of a block, such as the hash of the previous Block Header, which enables blocks to be associated in a chain-like structure (hence the name “blockchain”).
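A short sketch of the idea (a simplification of Bitcoin's actual rules, which also deal with byte order and coinbase placement): pairs of hashes are repeatedly hashed together, duplicating the last hash when a level has an odd count, until a single root remains.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(hashes: list) -> bytes:
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])          # odd count: duplicate the last hash
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical transaction hashes, for illustration only.
txids = [sha256d(f"tx-{i}".encode()) for i in range(5)]
print(merkle_root(txids).hex())              # the single root stored in the block header
```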
An illustration of block production in the Bitcoin Protocol is demonstrated below.

https://preview.redd.it/m6texxicf3151.png?width=1591&format=png&auto=webp&s=f4253304912ed8370948b9c524e08fef28f1c78d

Block time and mining difficulty

Block time is the period required to create the next block in a network. As mentioned above, the node who solves the computationally intensive task will be allowed to produce the next block. Therefore, block time is directly correlated to the amount of time it takes for a node to find a solution to the task. The Bitcoin protocol sets a target block time of 10 minutes, and attempts to achieve this by introducing a variable named mining difficulty.
Mining difficulty refers to how difficult it is for the node to solve the computationally intensive task. If the network sets a high difficulty for the task, while miners have low computational power, which is often referred to as “hashrate”, it would statistically take longer for the nodes to get an answer for the task. If the difficulty is low, but miners have rather strong computational power, statistically, some nodes will be able to solve the task quickly.
Therefore, the 10 minute target block time is achieved by constantly and automatically adjusting the mining difficulty according to how much computational power there is amongst the nodes. The average block time of the network is evaluated after a certain number of blocks, and if it is greater than the expected block time, the difficulty level will decrease; if it is less than the expected block time, the difficulty level will increase.
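A sketch of the retargeting rule as it is usually described (the real client adjusts a compact “target” value rather than a difficulty float, and the per-period change is capped): every 2016 blocks the difficulty is scaled by how far the actual period deviated from the expected two weeks.

```python
TARGET_BLOCK_TIME = 10 * 60      # seconds
RETARGET_INTERVAL = 2016         # blocks between difficulty adjustments

def next_difficulty(current_difficulty: float, actual_period_seconds: float) -> float:
    expected = TARGET_BLOCK_TIME * RETARGET_INTERVAL          # two weeks
    ratio = expected / actual_period_seconds
    ratio = max(0.25, min(4.0, ratio))                        # change is capped per period
    return current_difficulty * ratio

# Example: the last 2016 blocks averaged 8 minutes instead of 10,
# so hashrate grew and the difficulty rises by 25%.
print(next_difficulty(1_000_000, 2016 * 8 * 60))              # 1250000.0
```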

What are orphan blocks?

In a PoW blockchain network, if the block time is too low, it would increase the likelihood of nodes producing orphan blocks, for which they would receive no reward. Orphan blocks are produced by nodes who solved the task but did not broadcast their results to the whole network the quickest due to network latency.
It takes time for a message to travel through a network, and it is entirely possible for 2 nodes to complete the task and start to broadcast their results to the network at roughly the same time, while one’s messages are received by all other nodes earlier as the node has low latency.
Imagine there is a network latency of 1 minute and a target block time of 2 minutes. A node could solve the task in around 1 minute but his message would take 1 minute to reach the rest of the nodes that are still working on the solution. While his message travels through the network, all the work done by all other nodes during that 1 minute, even if these nodes also complete the task, would go to waste. In this case, 50% of the computational power contributed to the network is wasted.
The percentage of wasted computational power would proportionally decrease if the mining difficulty were higher, as it would statistically take longer for miners to complete the task. In other words, if the mining difficulty, and therefore targeted block time is low, miners with powerful and often centralized mining facilities would get a higher chance of becoming the block producer, while the participation of weaker miners would become in vain. This introduces possible centralization and weakens the overall security of the network.
However, given the limited number of transactions that can be stored in a block, making the block time too long would decrease the number of transactions the network can process per second, negatively affecting network scalability.

3. Bitcoin’s additional features

Segregated Witness (SegWit)

Segregated Witness, often abbreviated as SegWit, is a protocol upgrade proposal that went live in August 2017.
SegWit separates witness signatures from transaction-related data. Witness signatures in legacy Bitcoin blocks often take more than 50% of the block size. By removing witness signatures from the transaction block, this protocol upgrade effectively increases the number of transactions that can be stored in a single block, enabling the network to handle more transactions per second. As a result, SegWit increases the scalability of Nakamoto consensus-based blockchain networks like Bitcoin and Litecoin.
SegWit also makes transactions cheaper. Since transaction fees are derived from how much data is being processed by the block producer, the more transactions that can be stored in a 1MB block, the cheaper individual transactions become.
https://preview.redd.it/depya70mf3151.png?width=1601&format=png&auto=webp&s=a6499aa2131fbf347f8ffd812930b2f7d66be48e
The legacy Bitcoin block has a block size limit of 1 megabyte, and any change on the block size would require a network hard-fork. On August 1st 2017, the first hard-fork occurred, leading to the creation of Bitcoin Cash (“BCH”), which introduced an 8 megabyte block size limit.
Conversely, Segregated Witness was a soft-fork: it never changed the transaction block size limit of the network. Instead, it added an extended block with an upper limit of 3 megabytes, which contains solely witness signatures, to the 1 megabyte block that contains only transaction data. This new block type can be processed even by nodes that have not completed the SegWit protocol upgrade.
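A sketch of how the post-SegWit limit is commonly expressed (illustrative only; the consensus rule is stated in “weight units”): non-witness bytes count four times as much as witness bytes, so the base portion stays near 1 MB while a witness-heavy block can approach 4 MB in total.

```python
MAX_BLOCK_WEIGHT = 4_000_000      # consensus limit in weight units

def block_weight(base_bytes: int, witness_bytes: int) -> int:
    # weight = base_size * 4 + witness_size  (equivalently base_size * 3 + total_size)
    return base_bytes * 4 + witness_bytes

# Hypothetical block: 800 kB of transaction data plus 700 kB of witness signatures.
w = block_weight(800_000, 700_000)
print(w, w <= MAX_BLOCK_WEIGHT)   # 3900000 True -- more data than a 1 MB legacy block holds
```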
Furthermore, the separation of witness signatures from transaction data solves the malleability issue with the original Bitcoin protocol. Without Segregated Witness, these signatures could be altered before the block is validated by miners. Indeed, alterations can be done in such a way that if the system does a mathematical check, the signature would still be valid. However, since the values in the signature are changed, the two signatures would create vastly different hash values.
For instance, if a witness signature states “6,” it has a mathematical value of 6 and would create a hash value of, say, 12345. However, if the witness signature were changed to “06”, it would maintain a mathematical value of 6 while creating a completely different hash value, say 67890.
Since the mathematical values are the same, the altered signature remains a valid signature. This would create a bookkeeping issue, as transactions in Nakamoto consensus-based blockchain networks are documented with these hash values, or transaction IDs. Effectively, one can alter a transaction ID to a new one, and the new ID can still be valid.
This can create many issues, as illustrated in the below example:
  1. Alice sends Bob 1 BTC, and Bob sends Merchant Carol this 1 BTC for some goods.
  2. Bob sends Carol this 1 BTC while the transaction from Alice to Bob is not yet validated. Carol sees this incoming transaction of 1 BTC to her, and immediately ships goods to Bob.
  3. At the moment, the transaction from Alice to Bob is still not confirmed by the network, and Bob can change the witness signature, therefore changing this transaction ID from 12345 to 67890.
  4. Now Carol will not receive her 1 BTC, as the network looks for transaction 12345 to ensure that Bob’s wallet balance is valid.
  5. As this particular transaction ID changed from 12345 to 67890, the transaction from Bob to Carol will fail, and Bob will have received his goods while still holding his BTC.
With the Segregated Witness upgrade, such instances can not happen again. This is because the witness signatures are moved outside of the transaction block into an extended block, and altering the witness signature won’t affect the transaction ID.
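A toy illustration of why this works (plain byte strings standing in for real transaction serialization): the legacy transaction ID hashes the whole transaction including the signature, while the SegWit transaction ID hashes the transaction without its witness data.

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

tx_data = b"inputs|outputs|amounts"          # stand-in for the non-witness data
signature = b"signature-encoding-A"
malleated = b"signature-encoding-B"          # same mathematical validity, different encoding

# Legacy txid: the hash covers the signature, so re-encoding it changes the ID.
print(sha256d(tx_data + signature) == sha256d(tx_data + malleated))   # False

# SegWit txid: the hash covers only the non-witness data, so the ID is stable.
print(sha256d(tx_data) == sha256d(tx_data))                           # True
```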
Since the transaction malleability issue is fixed, Segregated Witness also enables the proper functioning of second-layer scalability solutions on the Bitcoin protocol, such as the Lightning Network.

Lightning Network

Lightning Network is a second-layer micropayment solution for scalability.
Specifically, Lightning Network aims to enable near-instant and low-cost payments between merchants and customers that wish to use bitcoins.
Lightning Network was conceptualized in a whitepaper by Joseph Poon and Thaddeus Dryja in 2015. Since then, it has been implemented by multiple companies. The most prominent of them include Blockstream, Lightning Labs, and ACINQ.
A list of curated resources relevant to Lightning Network can be found here.
In the Lightning Network, if a customer wishes to transact with a merchant, both of them need to open a payment channel, which operates off the Bitcoin blockchain (i.e., off-chain vs. on-chain). None of the transaction details from this payment channel are recorded on the blockchain, and only when the channel is closed will the end result of both party’s wallet balances be updated to the blockchain. The blockchain only serves as a settlement layer for Lightning transactions.
Since all transactions done via the payment channel are conducted independently of the Nakamoto consensus, both parties involved in transactions do not need to wait for network confirmation on transactions. Instead, transacting parties would pay transaction fees to Bitcoin miners only when they decide to close the channel.
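A toy model of the idea (this is not the actual Lightning protocol, which uses commitment transactions, revocation keys and HTLCs): many balance updates happen off-chain, and only the final state ever needs to be settled on the Bitcoin blockchain.

```python
class PaymentChannel:
    def __init__(self, alice_deposit: int, bob_deposit: int):
        # Funds are locked in a funding transaction when the channel is opened (on-chain).
        self.balances = {"Alice": alice_deposit, "Bob": bob_deposit}
        self.updates = 0

    def pay(self, sender: str, receiver: str, amount: int):
        assert self.balances[sender] >= amount, "insufficient channel balance"
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.updates += 1                    # off-chain: no miner fee, no confirmation wait

    def close(self):
        # Only this final state is broadcast and settled on the blockchain.
        return dict(self.balances)

channel = PaymentChannel(alice_deposit=50_000, bob_deposit=0)   # amounts in satoshis
for _ in range(100):
    channel.pay("Alice", "Bob", 100)         # 100 micro-payments, none of them on-chain
print(channel.updates, channel.close())      # 100 {'Alice': 40000, 'Bob': 10000}
```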
https://preview.redd.it/cy56icarf3151.png?width=1601&format=png&auto=webp&s=b239a63c6a87ec6cc1b18ce2cbd0355f8831c3a8
One limitation of the Lightning Network is that it requires a person to be online to receive transactions addressed to him. Another limitation in user experience could be that one needs to lock up some funds every time he wishes to open a payment channel, and is only able to use those funds within the channel.
However, this does not mean he needs to create new channels every time he wishes to transact with a different person on the Lightning Network. If Alice wants to send money to Carol, but they do not have a payment channel open, they can ask Bob, who has payment channels open to both Alice and Carol, to help make that transaction. Alice will be able to send funds to Bob, and Bob to Carol. Hence, the number of “payment hubs” (i.e., Bob in the previous example) correlates with both the convenience and the usability of the Lightning Network for real-world applications.

Schnorr Signature upgrade proposal

Elliptic Curve Digital Signature Algorithm (“ECDSA”) signatures are used to sign transactions on the Bitcoin blockchain.
https://preview.redd.it/hjeqe4l7g3151.png?width=1601&format=png&auto=webp&s=8014fb08fe62ac4d91645499bc0c7e1c04c5d7c4
However, many developers now advocate for replacing ECDSA with Schnorr Signature. Once Schnorr Signatures are implemented, multiple parties can collaborate in producing a signature that is valid for the sum of their public keys.
This would primarily be beneficial for network scalability. When multiple addresses conduct transactions to a single address, each of them would normally require its own signature. With Schnorr Signatures, all these signatures can be combined into one. As a result, the network would be able to store more transactions in a single block.
https://preview.redd.it/axg3wayag3151.png?width=1601&format=png&auto=webp&s=93d958fa6b0e623caa82ca71fe457b4daa88c71e
The reduced size in signatures implies a reduced cost on transaction fees. The group of senders can split the transaction fees for that one group signature, instead of paying for one personal signature individually.
Schnorr Signature also improves network privacy and token fungibility. A third-party observer will not be able to detect if a user is sending a multi-signature transaction, since the signature will be in the same format as a single-signature transaction.
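A minimal sketch of the aggregation idea (a naive scheme for illustration only; production proposals such as MuSig add protections against rogue-key attacks and nonce misuse, and this assumes the third-party `ecdsa` Python package for the secp256k1 point arithmetic): each party's key and nonce are simply summed, and the single aggregate signature verifies against the aggregate public key.

```python
import hashlib
import secrets
from ecdsa import SECP256k1   # third-party package, assumed installed

G, n = SECP256k1.generator, SECP256k1.order

def H(*parts) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big") % n

def enc(P) -> bytes:                              # serialize a point as x || y
    return P.x().to_bytes(32, "big") + P.y().to_bytes(32, "big")

msg = b"2-of-2 spend"

# Two co-signers, each with their own private key; the aggregate key is the sum.
d = [secrets.randbelow(n - 1) + 1 for _ in range(2)]
P = [G * di for di in d]
P_agg = P[0] + P[1]

# Each signer contributes a nonce; the nonce points are summed the same way.
k = [secrets.randbelow(n - 1) + 1 for _ in range(2)]
R = [G * ki for ki in k]
R_agg = R[0] + R[1]

e = H(enc(R_agg), enc(P_agg), msg)                # shared challenge
s = sum((ki + e * di) % n for ki, di in zip(k, d)) % n

# The final signature (R_agg, s) is the same size as a single-signer signature,
# and an observer cannot tell how many parties produced it.
print(G * s == R_agg + P_agg * e)                 # True: s*G == R + e*P
```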

4. Economics and supply distribution

The Bitcoin protocol utilizes the Nakamoto consensus, and nodes validate blocks via Proof-of-Work mining. The bitcoin token was not pre-mined, and has a maximum supply of 21 million. The initial reward for a block was 50 BTC per block. Block mining rewards halve every 210,000 blocks. Since the average time for block production on the blockchain is 10 minutes, it implies that the block reward halving events will approximately take place every 4 years.
As of May 12th 2020, the block mining rewards are 6.25 BTC per block. Transaction fees also represent a minor revenue stream for miners.
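The 21 million cap follows directly from this schedule. A quick sanity check in Python (rewards are integer satoshis, and the halving truncates, which is why the total lands just under 21 million):

```python
BLOCKS_PER_HALVING = 210_000
SATOSHIS_PER_BTC = 100_000_000

reward = 50 * SATOSHIS_PER_BTC       # initial block subsidy, in satoshis
total = 0
while reward > 0:
    total += BLOCKS_PER_HALVING * reward
    reward //= 2                     # halving: 50 -> 25 -> 12.5 -> ... -> 0

print(total / SATOSHIS_PER_BTC)      # 20999999.9769 BTC, just under 21 million
```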
submitted by D-platform to u/D-platform

A brief history of the Monero development (Part I)

or a struggle for anonymity and confidentiality of blockchain transaction.
The issues of privacy of electronic currency faced researchers and developers for a long time, long before Bitcoin. In 1991, Tatsuaki Okamoto and Kazuo Ohta from the NTT research laboratory (Japan's largest telecommunications company) introduced 6 criteria for an ideal e-currency, including privacy: "relationship between the user and his purchases must be untraceable by anyone". Nicholas van Saberhagen, an anonymous author behind the work on the CryptoNote protocol, which formed the basis of Monero, in December 2012 summarized these 6 criteria to two specific properties:
Untraceability: for every incoming transaction, all possible senders are equally likely.
Unlinkability: for any two outgoing transactions, it is impossible to prove that they were sent to the same person.
Neither of these properties holds for Bitcoin, since all transactions are broadcast publicly. Of course, by the time this work was written, various tumblers made it possible to combine the outputs of several transactions and send them through some intermediate address. Also, by that time, some protocols based on zero-knowledge proofs were known, but at that time such proofs were large enough to make them impractical to use.
What was proposed to tackle these issues: first, each transaction is signed on behalf of a group, not an individual as in BTC. To do this, a variant of digital signature called a "ring signature" (a further development of the so-called "group signature") was used. However, a fully anonymous ring signature implementation would create a (very high) risk of double-spending coins, so a linkable-anonymity primitive was adopted instead, implemented through a one-time-key mechanism (i.e., the group key changes with every new transaction).
That is essentially it, although it is certainly worth noting that the CryptoNote implementation used a different elliptic-curve signature scheme (EdDSA instead of ECDSA, and as a result an elliptic curve with a different equation, and so on).
Anonymity achieved, but what about privacy? RingCT to the rescue
You know how it happens: everything seems to be there, but something is missing. The problem with the original CryptoNote protocol was that user balances were not hidden, and thus it was possible to analyze the blockchain and deanonymize the members of the group who signed a transaction. An additional problem with hiding balances is that with simple encryption of balances it is not possible to reach consensus on whether coins were created out of thin air or not.
To solve this problem, the developer Shen Noether of Monero Research Lab proposed the use of the Pedersen commitment, which allows the prover to commit to an amount without disclosing it and without being able to change it later.
Short explanation from Monero Wiki:
As long as the encrypted output amounts created, which include an output for the recipient and a change output back to the sender, and the unencrypted transaction fee is equal to the sum of the inputs that are being spent, it is a legitimate transaction and can be confirmed to not be creating Monero out of thin air.
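A toy sketch of the homomorphic idea behind that check (illustration only: a multiplicative group modulo a Mersenne prime stands in for Monero's actual Ed25519 curve, and the blinding-factor bookkeeping is simplified): commitments to amounts can be combined and compared without ever revealing the amounts.

```python
import secrets

p = 2**127 - 1        # a Mersenne prime, used here only as a toy group
g = 3
h = 7                 # in a real system, h's discrete log w.r.t. g must be unknown

def commit(value: int, blinding: int) -> int:
    # Pedersen commitment: C = g^value * h^blinding (written additively on a curve in Monero)
    return (pow(g, value, p) * pow(h, blinding, p)) % p

# The sender knows the amounts; the network only ever sees the commitments.
r_out, r_change = secrets.randbelow(2**64), secrets.randbelow(2**64)
C_out    = commit(70, r_out)               # amount sent to the recipient
C_change = commit(25, r_change)            # change returned to the sender
C_fee    = commit(5, 0)                    # the fee is published in the clear
C_input  = commit(100, r_out + r_change)   # blinding factors arranged to balance (simplified)

# Verifier's check: inputs == outputs + change + fee, without ever seeing 100, 70 or 25.
print(C_input == (C_out * C_change * C_fee) % p)   # True
```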
Thus, it is possible to obtain a ring confidential transaction (hence the name). And, the inquisitive reader will ask, what is wrong this time?
The problem is a single one, but twofold. On the one hand, the size of a transaction increases with RingCT, which does not have the best effect on scalability and transaction fees. Besides, again due to the large size of the signature, the number of possible signers n is limited. The n value in the official Monero wallet software defaults to between 5 and 20. As a result, the sender anonymity for RingCT 1.0 is at most 1 out of 20.
To be continued...
submitted by CUTcoin to cutc0in

On chain scaling with Schnorr signatures

On chain scaling with Schnorr signatures submitted by bitusher to btc

Why CHECKDATASIG Does Not Matter

Why CHECKDATASIG Does Not Matter

In this post, I will prove that the two main arguments against the new CHECKDATASIG (CDS) op-codes are invalid. And I will prove that two common arguments for CDS are invalid as well. The proof requires only one assumption (which I believe will be true if we continue to re-activate old op-codes and increase the limits on script and transaction sizes [something that seems to have universal support]):
ASSUMPTION 1. It is possible to emulate CDS with a big long raw script.

Why are the arguments against CDS invalid?

Easy. Let's analyse the two arguments I hear most often against CDS:

ARG #1. CDS can be used for illegal gambling.

This is not a valid reason to oppose CDS because it is a red herring. By Assumption 1, the functionality of CDS can be emulated with a big long raw script. CDS would not then affect what is or is not possible in terms of illegal gambling.

ARG #2. CDS is a subsidy that changes the economic incentives of bitcoin.

The reasoning here is that being able to accomplish in a single op-code, what instead would require a big long raw script, makes transactions that use the new op-code unfairly cheap. We can shoot this argument down from three directions:
(A) Miners can charge any fee they want.
It is true that today miners typically charge transaction fees based on the number of bytes required to express the transaction, and it is also true that a transaction with CDS could be expressed with fewer bytes than the same transaction constructed with a big long raw script. But these two facts don't matter because every miner is free to charge any fee he wants for including a transaction in his block. If a miner wants to charge more for transactions with CDS he can (e.g., maybe the miner believes such transactions cost him more CPU cycles and so he wants to be compensated with higher fees). Similarly, if a miner wants to discount the big long raw scripts used to emulate CDS he could do that too (e.g., maybe a group of miners have built efficient ways to propagate and process these huge scripts and now want to give a discount to encourage their use). The important point is that the existence of CDS does not impede the free market's ability to set efficient prices for transactions in any way.
(B) Larger raw transactions do not imply increased orphaning risk.
Some people might argue that my discussion above was flawed because it didn't account for orphaning risk due to the larger transaction size when using a big long raw script compared to a single op-code. But transaction size is not what drives orphaning risk. What drives orphaning risk is the amount of information (entropy) that must be communicated to reconcile the list of transactions in the next block. If the raw-script version of CDS were popular enough to matter, then transactions containing it could be compressed as
....CDS'(signature, message, public-key)....
where CDS' is a code* that means "reconstruct this big long script operation that implements CDS." Thus there is little if any fundamental difference in terms of orphaning risk (or bandwidth) between using a big long script or a single discrete op code.
(C) More op-codes does not imply more CPU cycles.
Firstly, all op-codes are not equal. OP_1ADD (adding 1 to the input) requires vastly fewer CPU cycles than OP_CHECKSIG (checking an ECDSA signature). Secondly, if CDS were popular enough to matter, then whatever "optimized" version that could be created for the discrete CDS op-codes could be used for the big long version emulating it in raw script. If this is not obvious, realize that all that matters is that the output of both functions (the discrete op-code and the big long script version) must be identical for all inputs, which means that it does NOT matter how the computations are done internally by the miner.
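For readers unfamiliar with the op-code itself, here is a rough Python sketch of the check CDS performs (conceptual only, not node source code; it assumes the third-party `ecdsa` package): given a signature, an arbitrary message and a public key, the op-code succeeds if the signature is valid over the hash of that message, which is what lets a script verify externally signed data such as an oracle's statement.

```python
import hashlib
from ecdsa import SigningKey, SECP256k1
from ecdsa.util import sigencode_der, sigdecode_der

def checkdatasig(sig: bytes, message: bytes, pubkey) -> bool:
    """Conceptually: given sig, message, pubkey -> True iff sig covers hash(message)."""
    digest = hashlib.sha256(message).digest()
    try:
        return pubkey.verify_digest(sig, digest, sigdecode=sigdecode_der)
    except Exception:
        return False

# A hypothetical oracle signs an arbitrary data blob (not a transaction).
oracle_key = SigningKey.generate(curve=SECP256k1)
message = b"team A beat team B"
sig = oracle_key.sign_digest(hashlib.sha256(message).digest(), sigencode=sigencode_der)
print(checkdatasig(sig, message, oracle_key.verifying_key))   # True
```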

Why are (some of) the arguments for CDS invalid?

Let's go through two of the arguments:

ARG #3. It makes new useful bitcoin transactions possible (e.g., forfeit transactions).

If Assumption 1 holds, then this is false because CDS can be emulated with a big long raw script. Nothing that isn't possible becomes possible.

ARG #4. It is more efficient to do things with a single op-code than a big long script.

This is basically Argument #2 in reverse. Argument #2 was that CDS would be too efficient and change the incentives of bitcoin. I then showed how, at least at the fundamental level, there is little difference in efficiency in terms of orphaning risk, bandwidth or CPU cycles. For the same reason that Argument #2 is invalid, Argument #4 is invalid as well. (That said, I think a weaker argument could be made that a good scripting language allows one to do the things he wants to do in the simplest and most intuitive ways and so if CDS is indeed useful then I think it makes sense to implement in compact form, but IMO this is really more of an aesthetics thing than something fundamental.)
It's interesting that both sides make the same main points, yet argue in the opposite directions.
Argument #1 and #3 can both be simplified to "CDS permits new functionality." This is transformed into an argument against CDS by extending it with "...and something bad becomes possible that wasn't possible before and so we shouldn't do it." Conversely, it is transformed to an argument for CDS by extending it with "...and something good becomes possible that was not possible before and so we should do it." But if Assumption 1 holds, then "CDS permits new functionality" is false and both arguments are invalid.
Similarly, Arguments #2 and #4 can both be simplified to "CDS is more efficient than using a big long raw script to do the same thing." This is transformed into an argument against CDS by tacking on the speculation that "...which is a subsidy for certain transactions which will throw off the delicate balance of incentives in bitcoin!!1!." It is transformed into an argument for CDS because "... heck, who doesn't want to make bitcoin more efficient!"

What do I think?

If I were the emperor of bitcoin I would probably include CDS because people are already excited to use it, the work is already done to implement it, and the plan to roll it out appears to have strong community support. The work to emulate CDS with a big long raw script is not done.
Moving forward, I think Andrew Stone's (thezerg1) approach outlined here is an excellent way to make incremental improvements to Bitcoin's scripting language. In fact, after writing this essay, I think I've sort of just expressed Andrew's idea in a different form.
* you might call it an "op code" teehee
submitted by Peter__R to btc

Part 5. I'm writing a series about blockchain tech and possible future security risks. This is the fifth part of the series talking about an advanced vulnerability of BTC.

The previous parts will give you useful basic blockchain knowledge and insights on quantum resistance vs blockchain that are not explained in this part.
Part 1, what makes blockchain reliable?
Part 2, The mathematical concepts Hashing and Public key cryptography.
Part 3, Quantum resistant blockchain vs Quantum computing.
Part 4A, The advantages of quantum resistance from genesis block, A
Part 4B, The advantages of quantum resistance from genesis block, A

Why BTC is vulnerable to quantum attacks sooner than you would think.
Content:
The BTC misconception: “Original public keys are not visible until you make a transaction, so BTC is quantum resistant.”
Already exposed public keys.
Hijacking transactions.
Hijacks during blocktime
Hijacks pre-blocktime.
MITM attacks

- Why BTC is vulnerable to quantum attacks sooner than you would think. -

Blockchain transactions are secured by public-private key cryptography. The keypairs used today will be at risk once quantum computers reach a certain critical level of development: at that point, quantum computers can derive private keys from public keys. See part 3 for more sourced info on this subject. So if an attacker can obtain a public key, he can then use a quantum computer to find the private key. And as he has both the public key and the private key, he can control the funds and send them to an address he owns.
Just to make sure there will be no misconceptions: When public-private key cryptography such as ECDSA and RSA can be broken by a quantum computer, this will be an issue for all blockchains who don't use quantum resistant cryptography. The reason this article is about BTC is because I take this paper as a reference point: https://arxiv.org/pdf/1710.10377.pdf Here they calculate an estimate when BTC will be at risk while taking the BTC blocktime as the window of opportunity.
The BTC misconception: “Original public keys are not visible until you make a transaction, so BTC is quantum resistant.”
In pretty much every discussion I've read and had on the subject, I notice that people are under the impression that BTC is quantum resistant as long as you use your address only once. BTC uses a hashed version of the public key as a send-to address. So in theory, all funds are registered on the chain on hashed public keys instead of to the full, original public keys, which means that the original public key is (again in theory) not public. Even a quantum computer can't derive the original public key from a hashed public key, therefore there is no risk that a quantum computer can derive the private key from the public key. If you make a transaction, however, the public key of the address you sent your funds from will be registered in full form in the blockchain. So if you were to only send part of your funds, leaving the rest on the old address, your remaining funds would be on a published public key, and therefore vulnerable to quantum attacks. So the workaround would be to transfer the remaining funds, within the same transaction, to a new address. In that way, your funds would be once again registered on the blockchain on a hashed public key instead of a full, original public key.
If you feel lost already because you are not very familiar with the tech behind blockchain, I will try to explain the above in a more familiar way:
You control your funds through your public-private key pair. Your funds are registered on your public key. And you can create transactions, which you need to sign to be valid. You can only create a signature if you have your private key. See it as your e-mail address (public key) and your password (private key). Many people have your email address, but only you have your password. So the analogy is that if someone has both your address and your password, they can access your mail and send emails (transactions). If the right quantum computer were available, people could use it to calculate your password (private key) if they have your email address (public key).
Now, BTC doesn’t show your full public key anywhere until you make a transaction. That sounds pretty safe. It means that your public key stays private until you make a transaction. The only thing related to your public key that is public is the hash of your public key. Here is a short explanation of what a hash is: a hash is the outcome of an equation. Usually one-way hash functions are used, where you cannot derive the original input from the output; but every time you use the same hash function on the same original input (for example IFUHE8392ISHF), you will always get the same output (for example G). That way you can have your coins on public key "IFUHE8392ISHF", while on the chain they are registered on "G".
So your funds are registered on the blockchain on the "Hash" of the public key. The Hash of the public key is also your "email address" in this case. So you give "G" as your address to send BTC to.
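In Bitcoin's case the “hash of the public key” is concrete: a legacy address encodes RIPEMD160(SHA256(public key)), a 20-byte value. A short sketch (hashlib's ripemd160 comes from OpenSSL and may be missing on some builds; the key bytes below are made up for illustration):

```python
import hashlib

def hash160(pubkey_bytes: bytes) -> bytes:
    """The 20-byte hash that a legacy Bitcoin address encodes."""
    sha = hashlib.sha256(pubkey_bytes).digest()
    return hashlib.new("ripemd160", sha).digest()

# A hypothetical 33-byte compressed public key, for illustration only.
pubkey = bytes.fromhex("02" + "11" * 32)
print(hash160(pubkey).hex())    # only this hash is published until the key's owner spends
```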
As said before: since it is, even for a quantum computer, impossible to derive a public key from the Hash of a public key, your coins are safe for quantum computers as long as the public key is only registered in hashed form. The obvious safe method would be, never to reuse an address, and always make sure that when you make a payment, you send your remaining funds to a fresh new address. (There are wallets that can do this for you.) In theory, this would make BTC quantum resistant, if used correctly. This, however, is not as simple as it seems. Even though the above is correct, there is a way to get to your funds.
Already exposed public keys.
But before we get to that, there is another point that is often overlooked: Not only is the security of your personal BTC is important, but also the security of funds of other users. If others got hacked, the news of the hack itself and the reaction of the market to that news, would influence the marketprice. Or, if a big account like the Satoshi account were to be hacked and dumped, the dump itself, combined with the news of the hack, could be even worse. An individual does not have the control of other people’s actions. So even though one might make sure his public key is only registered in hashed form, others might not do so, or might no know their public key is exposed. There are several reasons why a substantial amount of addresses actually have exposed full public keys:
In total, about 36% of all BTC are on addresses with exposed public keys, of which about 20% are on lost addresses.
Hijacking transactions.
But even if you consider the above an acceptable risk, just because you yourself will make sure you never reuse an address, then still, the fact that only the hashed public key is published until you make a transaction is a false sense of security. It only works, if you never make a transaction. Why? Public keys are revealed while making a transaction, so transactions can be hijacked while being made.
Here it is important to understand two things:
1.) How is a transaction sent?
The owner has the private key and the public key and uses that to log into the secured environment, the wallet. This can be online or offline. Once he is in his wallet, he states how much he wants to send and to what address.
When he sends the transaction, it will be broadcasted to the blockchain network. But before the actual transaction will be sent, it is formed into a package, created by the wallet. This happens out of sight of the sender.
That package ends up carrying roughly the following info: the public key to point to the address where the funds will be coming from, the amount that will be transferred, the address the funds will be transferred to (depending on the blockchain this could be the hashed public key, or the original public key of the address the funds will be transferred to). This package also carries the most important thing: a signature, created by the wallet, derived from the private- public key combination. This signature proves to the miners that you are the rightful owner and you can send funds from that public key.
Then this package is sent out of the secure wallet environment to multiple nodes. The nodes don’t need to trust the sender or establish the sender’s "identity”, because the sender proves he is the rightful owner by adding the signature that corresponds with the public key. And because the transaction is signed and contains no confidential information, private keys, or credentials, it can be publicly broadcast using any underlying network transport that is convenient. As long as the transaction can reach a node that will propagate it into the network, it doesn’t matter how it is transported to the first node.
2.) How is a transaction confirmed/ fulfilled and registered on the blockchain?
After the transaction is sent to the network, it is ready to be processed. The nodes have a bundle of transactions to verify and register on the next block. This is done during a period called the block time. In the case of BTC that is 10 minutes.
If we process the information written above, we will see that there are two moments where you can actually see the public key, while the transaction is not fulfilled and registered on the blockchain yet.
1: during the time the transaction is sent from the sender to the nodes
2: during the time the nodes verify the transaction. (The blocktime)
Hijacks during blocktime
This paper describes how you could hijack a transaction and make a new transaction of your own, using someone else’s address and send his coins to an address you own during moment 2: the time the nodes verify the transaction:
https://arxiv.org/pdf/1710.10377.pdf
"(Unprocessed transactions) After a transaction has been broadcast to the network, but before it is placed on the blockchain it is at risk from a quantum attack. If the secret key can be derived from the broadcast public key before the transaction is placed on the blockchain, then an attacker could use this secret key to broadcast a new transaction from the same address to his own address. If the attacker then ensures that this new transaction is placed on the blockchain first, then he can effectively steal all the bitcoin behind the original address." (Page 8, point 3.)
So this means that BTC obviously is not a quantum secure blockchain. Because as soon as you will touch your funds and use them for payment, or send them to another address, you will have to make a transaction and you risk a quantum attack.
Hijacks pre-blocktime.
The story doesn't end here. The paper doesn't describe the possibility of a pre-blocktime hijack.
So back to the paper: as explained, while making a transaction your public key is exposed for at least the transaction time. This transaction time is 10 minutes where your transaction is being confirmed during the 10 minute block time. That is the period where your public key is visible and where, as described in the paper, a transaction can be hijacked, and by using quantum computers, a forged transaction can be made. So the critical point is determined to be the moment where quantum computers can derive private keys from public keys within 10 minutes. Based on that 10 minute period, they calculate (estimate) how long it will take before QC's start forming a threat to BTC. (“ By our most optimistic estimates, as early as 2027 a quantum computer could exist that can break the elliptic curve signature scheme in less than 10 minutes, the block time used in Bitcoin.“ This is also shown in figure 4 on page 10 and later more in depth calculated in appendix C, where the pessimistic estimate is around 2037.) But you could extend that 10 minutes through network based attacks like DDoS, BGP routing attacks, NSA Quantum Insert, Eclipse attacks, MITM attacks or anything like that. (And I don’t mean you extend the block time by using a network based attack, but you extend the time you have access to the public key before the transaction is confirmed.) Bitcoin would be earlier at risk than calculated in this paper.
Also other Blockchains with way shorter block times imagine themselves safe for a longer period than BTC, but with this extension of the timeframe within which you can derive the private key, they too will be vulnerable way sooner.
Not so long ago an eclipse attack demonstrated it could have done the trick. Causing the blockchain to work over max capacity means transactions will be waiting longer to be added to a block. That waiting time needs to be added to the block time, expanding the period during which one would have time to derive the private key from the public key.
That seems to be fixed now, but it shows there are always new attacks possible and when the incentive is right (Like a few billion $ kind of right) these could be specifically designed for certain blockchains.
MITM attacks
An MITM attack could find the public key in the first moment the public key is exposed (during the time the transaction is sent from the sender to the nodes). These transactions that are sent to the network contain public keys that you could intercept. That means that if you intercept transactions (and with them the public keys) and simultaneously delay their arrival to the blockchain network, you create extra time to derive the private key from the public key using a quantum computer. When you have done that, you send a transaction of your own before the original transaction has arrived and is confirmed, and send funds from that stolen address to an address of your choosing. The result would be that you have an extra 10, 20, 30 minutes (or however long you can delay the original transactions) to derive the private key. This can be done without ever needing to mess with a blockchain network, because the attack happens outside the network. Therefore, slower quantum computers form a threat, meaning that earlier models of quantum computers form a threat sooner than is assumed now.
When MITM attacks and transaction hijacking become a threat to BTC, other blockchains will be vulnerable to the same attacks, especially MITM attacks. There are ways to prevent hijacking after arrival at the nodes; I will elaborate on that in the next article. At this point in time the public key would be useless to an attacker, because there is no suitable quantum computer available yet. Once a quantum computer of the right size is available, it becomes a problem. For quantum resistant blockchains this is different: MITM attacks and hijacking are useless against quantum resistant blockchains like QRL and Mochimo, because these projects use quantum resistant keys.
submitted by QRCollector to CryptoTechnology [link] [comments]

Part 6. (Last part) I'm writing a series about blockchain tech and possible future security risks. Failing shortcuts in an attempt to accomplish Quantum Resistance

The previous parts will give you useful basic blockchain knowledge and insights on quantum resistance vs blockchain that are not explained in this part.
Part 1, what makes blockchain reliable?
Part 2, The mathematical concepts Hashing and Public key cryptography.
Part 3, Quantum resistant blockchain vs Quantum computing.
Part 4A, The advantages of quantum resistance from genesis block, A
Part 4B, The advantages of quantum resistance from genesis block, B
Part 5, Why BTC is vulnerable for quantum attacks sooner than you would think.

Failing shortcuts in an attempt to accomplish Quantum Resistance
Content:
Hashing public keys
“Instant” transactions
FIFO
Standardized fees
Multicast
Timestamped transactions
Change my mind: If a project doesn't use a Quantum Resistant signature scheme, it is not 100% Quantum Resistant.
Here are some of the claims regarding Quantum Resistance without the use of a quantum resistant signature scheme that I have come across so far. For every claim, I give arguments to substantiate why these claims are incorrect.
“We only have public keys in hashed form published. Even quantum computers can't reverse the Hash, so no one can use those public keys to derive the private key. That's why we are quantum resistant.” This is incorrect.
This example has been explained in the previous article. To summarize: hashed public keys can be used as an address for deposits. Deposits do not need signature authentication. Withdrawals, however, do need signature authentication, and to authenticate a signature the public key always needs to be made public in full, original form. As a necessary requirement, the full public key is needed to spend coins. Therefore the public key will be included in the transaction.
The most famous blockchain to use hashed public keys is Bitcoin. Transactions can be hijacked during the period between a user sending a transaction from his or her device and the moment that transaction is confirmed. For example: during Bitcoin's 10 minute block time, the full public keys can be obtained to find private keys and forge transactions. (See page 8, point 3 of the paper.) Hashing public keys does have advantages: hashed keys are smaller than the original public keys, so it saves space on the blockchain. It doesn't give you Quantum Resistance, however. That is a misconception.
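To make the deposit/withdrawal asymmetry concrete, here is a minimal Python sketch (simplified: a real Bitcoin address also adds a version byte, a checksum and Base58 or Bech32 encoding):

import hashlib

def hash160(pubkey: bytes) -> bytes:
    # Bitcoin-style address hash: RIPEMD160(SHA256(pubkey)).
    # (ripemd160 support in hashlib depends on the local OpenSSL build.)
    return hashlib.new("ripemd160", hashlib.sha256(pubkey).digest()).digest()

# Receiving coins: only hash160(pubkey) appears on-chain as the address.
# Spending coins: the transaction must reveal the full pubkey so nodes can
# verify the signature against it -- from that moment the pubkey is public.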
“Besides having only hashed public keys on the blockchain, we also have instant transactions. So there is no time to hijack a transaction and to obtain the public key fast enough to forge a transaction. That's why we are quantum resistant.” This is incorrect and impossible.
There is no such thing as instant transactions. A zero second blocktime for example is a claim that can’t be made. Period. Furthermore, transactions are collected in pools before they are added to a block that is going to be processed. The time it takes for miners to add them to a new block before processing that block depends on the amount of transactions a blockchain needs to process at a certain moment. When a blockchain operates within its maximum capacity (the maximum amount of transactions that a blockchain can process per second), the adding of transactions from the pool will go quite swiftly, but still not instantaneously.
However, when there is high transaction density, transactions can be stuck in the pool for a while. During this period the transactions are published and the full public keys can be obtained. Just as with the previous hijacking example, a transaction can be forged in that period of time. It can be done when the blockchain functions normally, and whenever the maximum capacity is exceeded, the window of opportunity grows for hackers.
Besides the risk that rush hours bring by extending the time an attacker has to work with the public key and forge transactions, there are network based attacks that could serve the same purpose: slow the confirmation time and create a bigger window to forge transactions. These are attacks where the attacker targets the network instead of the sender of the transaction. Performing a DDoS attack, a BGP routing attack or an NSA Quantum Insert attack on a peer-to-peer network would be hard. But when provided with an opportunity to earn billions, hackers will find a way.
For example: https://bitcoinmagazine.com/articles/researchers-explore-eclipse-attacks-ethereum-blockchain/
For BTC: https://eprint.iacr.org/2015/263.pdf
An eclipse attack is a network-level attack on a blockchain, where an attacker essentially takes control of the peer-to-peer network, obscuring a node’s view of the blockchain.
That is exactly the recipe for what you would need to create extra time to find public keys and derive private keys from them. Then you could sign transactions of your own and confirm them before the originals do.
This specific example seems to be fixed now, but it most definitely shows there is a risk of other variations to be created. Keep in mind, before this variation of attack was known, the common opinion was that it was impossible. With little incentive to create such an attack, it might take a while until another one is developed. But when the possession of full public keys equals the possibility to forge transactions, all of a sudden billions are at stake.
“Besides only using hashed public keys as addresses, we use the First In First Out (FIFO) mechanism. This solves the forged transaction issue, as they will not be confirmed before the original transactions. That's why we are quantum resistant.” This is incorrect.
There is another period where the public key is openly available: the moment a transaction is sent from the user's device to the nodes on the blockchain network. The sent transaction can be delayed or blocked entirely from arriving at the blockchain network, and while this happens the attacker can obtain the public key. This is a man-in-the-middle (MITM) attack: an attack where the attacker secretly relays and possibly alters the communication between two parties who believe they are communicating directly with each other. No transaction is 100% safe from a MITM attack.
This type of attack isn't commonly known amongst average user groups, because communication is done either encrypted or with public-private key cryptography. Therefore, at this point in time MITM attacks are not an issue: the information in transactions is useless to hackers. To emphasize the point made: a MITM attack can be performed on your transactions today, but the information obtained is useless because the attacker cannot break the cryptography. Encryption and public-private key cryptography are safe for now; ECDSA and RSA cannot be broken yet.
But in the era of quantum computers the problem is clear: an attacker can obtain the public key, create enough time to forge a transaction, and have that forged transaction arrive at the blockchain first, without the network having any way of knowing the transaction is forged. By doing this before the transaction reaches the blockchain, FIFO becomes useless. The original transaction will be delayed or blocked from reaching the blockchain, the forged transaction will be admitted to the network first, and First In First Out will actually help the forged transaction to be confirmed before the original.
“Besides having only hashed public keys, we use small standardized fees. Forged transactions will not be able to use higher fees to get prioritized and confirmed before the original transactions, thus when the forged transaction will try to confirm the address is already empty. This is why we are quantum resistant.” This is incorrect.
The same arguments apply as with the FIFO system. The attack can be done before the original transaction reaches the network, so the forged transaction will still be handled first no matter the height of the fee.
“Besides the above, we use multicast so all nodes receive the transaction at the same time. That's why we are quantum resistant.” This is incorrect.
Multicast is useless against a MITM attack when the attacker is close enough to the source.
“Besides the above, we number all our transactions and authenticate nodes so the user always knows who he's talking to. That's why we are quantum resistant.” This is incorrect.
Besides the fact that you're working towards a centralized system if only verified people can become nodes, and besides the fact that even verified nodes can go bad and work with hackers (which would be a non-issue if quantum resistant signature schemes were implemented, because a node or a hacker would have no use for quantum resistant public keys and signatures), there are various ways of impersonating either side of a communication channel: IP-spoofing, ARP-spoofing, DNS-spoofing etc. All a hacker needs is time and position. Time can be created in several ways, as explained above. All the information in the transaction an original user sends is valid. When a transaction is hijacked and the communication between the user and the rest of the network is blocked, a hacker can copy that information into his own transaction while using a forged signature. The only real effective defense against MITM attacks is on the router or server side, by strong encryption between the client and the server (which in this case would need to be quantum resistant encryption, but then you could just as well use a quantum resistant signature scheme), or by server authentication, but then that would need to be quantum resistant too. There is no serious protection against MITM attacks when the encryption of the data and the authentication of a server can be broken by quantum computers.
Only quantum resistant signature schemes will secure blockchains against quantum hacks. Every blockchain needs its users to communicate their public key to the blockchain to authenticate signatures and make transactions. There will always be ways to obtain those keys while they are being communicated, and to stretch the period in which those keys can be used to forge transactions. Once you have one, you can move funds to your own address, a bitcoin mixer, Monero, or some other privacy coin.
Conclusion
There is only one way to currently achieve Quantum Resistance: by making sure the public key can be made public without any risk, as is the case now in the pre-quantum period and as Satoshi designed blockchain - thus, by the use of quantum resistant signature schemes. The rest is all a patchwork of risk mitigation and delaying strategies; they make it slightly harder to obtain a public key and forge a transaction, but not impossible.
Addition
And then there is quite often this strategy of postponing quantum resistant signature schemes
“Instead of ECDSA with 256 bit keys we will just use 384 bit keys. And after that 521 bit keys, and then RSA 4096 keys, so we will ride it out for a while. No worries we don’t need to think about quantum resistant signature schemes for a long time.” This is highly inefficient, and creates more problems than it solves.
Besides the fact that this doesn't make a project quantum resistant, it is nothing but postponing the switch to quantum resistant signatures; it is not a solution. Going from 256 bit keys to 384 bit keys would mean a quantum computer with ~3484 qubits instead of ~2330 qubits could break the signature scheme. That is not even double, and it postpones the problem by either half a year or one year, depending on which estimate you take (doubling of qubit counts every year, or every two years). It does however have the same problems as a real solution and is just as much work. (Changing the code, upgrading the blockchain, finding consensus amongst the nodes, upgrading all supporting systems, hoping the exchanges all go along with the new upgrade and migrate their coins, having all users migrate their coins.) And then quite soon after that, they'll have to go at it again. What will they do next? Go for 521 bit curves? Same issues. It's just patchwork and just as much hassle, but then over and over again for every “upgrade” from 384 to 521 etc.
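As a back-of-the-envelope check of that "half a year or one year" claim, assuming the qubit estimates quoted above and a fixed doubling rate:

from math import log2

qubits_to_break_256, qubits_to_break_384 = 2330, 3484      # estimates quoted above
doublings_needed = log2(qubits_to_break_384 / qubits_to_break_256)   # ~0.58

print(doublings_needed * 1)   # ~0.6 years if qubit counts double every year
print(doublings_needed * 2)   # ~1.2 years if they double every two years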
And every upgrade the signatures get bigger, and closer to the quantum resistant signature sizes and thus the advantage you have over blockchains with quantum resistant signature schemes gets smaller. While the quantum resistant blockchains are just steady going and their users aren’t bothered with all the hassle. At the same time the users of the blockchain that is constantly upgrading to a bigger key size, keep on needing to migrate their coins to the new and upgraded addresses to stay safe.
submitted by QRCollector to CryptoTechnology [link] [comments]

Technical: Pay-to-contract and Sign-to-contract

What's this? I don't make a Technical post for a month and now BitPay is censoring the Hong Kong Free Press? Shit I'm sorry, it's all my fault for not posting a Technical post regularly!! Now posting one so that we have a censorship-free Bitcoin universe!
Pay-to-contract and sign-to-contract are actually cryptographic techniques to allow you to embed a commitment in a public key (pay-to-contract) or signature (sign-to-contract). This commitment can be revealed independently of the public key / signature without leaking your private key, and the existence of the commitment does not prevent you from using the public key / signature as a normal pubkey/signature for a normal digital signing algorithm.
Both techniques utilize elliptic curve homomorphism. Let's digress into that a little first.

Elliptic Curve Homomorphism

Let's get an oversimplified view of the maths involved first.
First, we have two "kinds" of things we can compute on.
  1. One kind is "scalars". These are just very large single numbers. Traditionally represented by small letters.
  2. The other kind is "points". These are just pairs of large numbers. Traditionally represented by large letters.
Now, an "Elliptic Curve" is just a special kind of curve with particular mathematical properties. I won't go into those properties, for the very reasonable reason that I don't actually understand them (I'm not a cryptographer, I only play one on reddit!).
If you have an Elliptic Curve, and require that all points you work with are on some Elliptic Curve, then you can do these operations.
  1. Add, subtract, multiply, and divide scalars. Remember, scalars are just very big numbers. So those basic mathematical operations still work on big numbers, they're just big numbers.
  2. "Multiply" a scalar by a point, resulting in a point. This is written as a * B, where a is the scalar and B is a point. This is not just multiplying the scalar to the point coordinates, this is some special Elliptic Curve thing that I don't understand either.
  3. "Add" two points together. This is written as A + B. Again, this is some special Elliptic Curve thing.
The important part is that if you have:
A = a * G
B = b * G
Q = A + B
Then:
q = a + b
Q = q * G
That is, if you add together two points that were each derived by multiplying an arbitrary scalar with the same point (G in the above), you get the same result as adding the scalars together first and then multiplying their sum with that same point. Or:
a * G + b * G = (a + b) * G 
And because multiplication is just repeated addition, the same concept applies when multiplying:
a * (b * G) = (a * b) * G = (b * a) * G = b * (a * G) 
Something to note in particular is that there are few operations on points. One operation that's missing is "dividing" a point by a point to yield a scalar. That is, if you have:
A = a * G 
Then, if you know A but don't know the scalar a, you can't do the below:
a = A / G 
You can't get a even if you know both the points A and G.
In Elliptic Curve Cryptography, scalars are used as private keys, while points are used as public keys. This is particularly useful since if you have a private key (scalar), you can derive a public key (point) from it (by multiplying the scalar with a certain standard point, which we call the "generator point", traditionally G). But there is no reverse operation to get the private key from the public key.
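To make that concrete, here is a toy Python sketch of the homomorphism using the secp256k1 parameters (the curve Bitcoin uses). This is illustration only: it is not constant-time and must never be used for real keys.

P_FIELD = 2**256 - 2**32 - 977          # the prime of secp256k1's field
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(A, B):
    # "Add" two curve points; None stands for the point at infinity.
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % P_FIELD == 0:
        return None
    if A == B:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P_FIELD) % P_FIELD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_FIELD) % P_FIELD
    x3 = (lam * lam - x1 - x2) % P_FIELD
    return (x3, (lam * (x1 - x3) - y1) % P_FIELD)

def scalar_mul(k, P):
    # "Multiply" a scalar by a point via double-and-add.
    result, addend = None, P
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

a, b = 1234567, 7654321                   # two scalars ("private keys")
A, B = scalar_mul(a, G), scalar_mul(b, G) # the corresponding points ("public keys")
assert point_add(A, B) == scalar_mul(a + b, G)   # a*G + b*G == (a+b)*G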

Commitments

Let's have another mild digression.
Sometimes, you want to "commit" to something that you want to keep hidden for now. This is actually important in some games and so on. For example, if you are playing a game of Twenty Questions, one player must first write the object they are thinking of, then fold or hide the paper in such a way that what they wrote is not visible. Then, after the guessing player has asked twenty questions to narrow down what the object is and has revealed what he or she thinks the object being guessed was, the guessee reveals the object by unfolding and showing the paper.
The act of writing down commits you to the specific thing you wrote down. Folding the paper and/or hiding it, err, hides what you wrote down. Later, when you unfold the paper, you reveal your commitment.
The above is the analogy to the development of cryptographic commitments.
  1. First you select some thing --- it could be anything, a song, a random number, a promise to deliver products and services, the real identity of Satoshi Nakamoto.
  2. You commit to it by giving it as input to a one-way function. A one-way function is a function which allows you to get an output from an input, but after you perform that there is no way to reverse it and determine the original input knowing only the final output. Hash functions like SHA are traditionally used as one-way functions. As a one-way function, this hides your original input.
  3. You give the commitment (the output of the one-way function given your original input) to whoever wants you to commit.
  4. Later, when somebody demands to show what you committed to (for example after playing Twenty Questions), you reveal the commitment by giving the original input to the one-way function (i.e. the thing you selected in the first step, which was the thing you wanted to commit to).
  5. Whoever challenged you can verify your commitment by feeding your supposed original input to the same one-way function. If you honestly gave the correct input, then the challenger will get the output that you published above in step 3.
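A minimal Python sketch of those five steps, using SHA-256 as the one-way function (the committed value is just an example string):

import hashlib

secret = b"the object I'm thinking of: a lighthouse"    # step 1: select
commitment = hashlib.sha256(secret).hexdigest()          # steps 2-3: hide and publish

def verify(commitment_hex, revealed):
    # steps 4-5: the challenger re-runs the one-way function and compares
    return hashlib.sha256(revealed).hexdigest() == commitment_hex

assert verify(commitment, secret)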

Salting

Now, sometimes there are only a few possible things you can select from. For example, instead of Twenty Questions you might be playing a Coin Toss Guess game.
What we'd do would be that, for example, I am the guesser and you the guessee. You select either "heads" or "tails" and put it in a commitment which you hand over to me. Then, I say "heads" or "tails" and have you reveal your commitment. If I guessed correctly I win, if not you win.
Unfortunately, if we were to just use a one-way function like an SHA hash function, it would be very trivial for me to win. All I would need to do would be to try passing "heads" and "tails" to the one-way function and see which one matches the commitment you gave me. Then I can very easily find out what your committed value was, winning the game consistently. In hacking, this can be made easier by making Rainbow Tables, and is precisely the technique used to derive passwords from password databases containing hashes of the passwords.
The way to solve this is to add a salt. This is basically just a large random number that we prepend (or append, order doesn't matter) to the actual value you want to commit to. This means that not only do I have to feed "heads" or "tails", I also have to guess the large random number (the salt). If the possible space of large random numbers is large enough, this prevents me from being able to peek at your committed data. The salt is sometimes called a blinding factor.
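Extending the sketch above with a salt, so that a tiny value space like "heads"/"tails" can no longer be brute-forced against the commitment:

import hashlib, os

salt = os.urandom(32)                                    # the blinding factor
commitment = hashlib.sha256(salt + b"heads").hexdigest()

# Revealing means handing over both the salt and the value; verification is:
assert hashlib.sha256(salt + b"heads").hexdigest() == commitment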

Pay-to-contract

Hiding commitments in pubkeys!
Pay-to-contract allows you to publish a public key, whose private key you can derive, while also being a cryptographic commitment. In particular, your private key is also used to derive a salt.
The key insight here is to realize that "one-way function" is not restricted to hash functions like SHA. The operation below is an example of a one-way function too:
h(a) = a * G 
This results in a point, but once the point (the output) is known, it is not possible to derive the input (the scalar a above). This is of course restricted to having the input be a scalar only, instead of an arbitrary-length message, but you can add a hash function (which can accept an arbitrary-length input) and then use its output (a fixed-length scalar) as the scalar.
First, pay-to-contract requires you to have a public and private keypair.
; p is private key
P = p * G
; P is now public key
Then, you have to select a contract. This is just any arbitrary message containing any arbitrary thing (it could be an object for Twenty Questions, or "heads" or "tails" for Coin Toss Guessing). Traditionally, this is symbolized as the small letter s.
In order to have a pay-to-contract public key, you need to compute the below from your public key P (called the internal public key; by analogy the private key p is the internal private key):
Q = P + h(P | s) * G 
"h()" is any convenient hash function, which takes anything of arbitrary length, and outputs a scalar, which you can multiply by G. The syntax "P | s" simply means that you are prepending the point P to the contract s.
The cute thing is that P serves as your salt. Any private key is just an arbitrary random scalar. Multiplying the private key by the generator results in an arbitrary-seeming point. That random point is now your salt, which makes this into a genuine bonafide hiding cryptographic commitment!
Now Q is a point, i.e. a public key. You might be interested in knowing its private key, a scalar. Suppose you postulate the existence of a scalar q such that:
 Q = q * G 
Then you can do the below:
Q = P + h(P | s) * G
Q = p * G + h(P | s) * G
Q = (p + h(P | s)) * G
Then we can conclude that:
 q = p + h(P | s) 
Of note is that somebody else cannot learn the private key q unless they already know the private key p. Knowing the internal public key P is not enough to learn the private key q. Thus, as long as you are the only one who knows the internal private key p, and you keep it secret, then only you can learn the private key q that can be used to sign with the public key Q (that is also a pay-to-contract commitment).
Now Q is supposed to be a commitment, and once somebody else knows Q, they can challenge you to reveal your committed value, the contract s. Revealing the pay-to-contract commitment is done by simply giving the internal public key P (which doubles as the salt) and the committed value contract s.
The challenger then simply computes:
P + h(P | s) * G 
And verifies that it matches the Q you gave before.
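Putting that together, here is a rough Python sketch of committing, key derivation and verification, reusing point_add, scalar_mul and G from the curve sketch earlier. It is simplified: a real scheme reduces scalars modulo the group order and uses a tagged hash, whereas this just uses plain SHA-256.

import hashlib

def h(point, contract):
    # Hash "point | contract" down to a scalar (plain SHA-256 for illustration).
    x, y = point
    data = x.to_bytes(32, "big") + y.to_bytes(32, "big") + contract
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

p = 1234567                        # internal private key
P = scalar_mul(p, G)               # internal public key (doubles as the salt)
s = b"two pizzas for 10,000 BTC"   # the contract being committed to

Q = point_add(P, scalar_mul(h(P, s), G))   # Q = P + h(P|s)*G  (publish this)
q = p + h(P, s)                            # q = p + h(P|s)    (its private key)
assert scalar_mul(q, G) == Q

# Reveal: hand over (P, s); the challenger recomputes and checks Q.
assert point_add(P, scalar_mul(h(P, s), G)) == Q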
Some very important properties are:
  1. If you reveal first, then you still remain in sole control of the private key. This is because revelation only shows the internal public key and the contract, neither of which can be used to learn the internal private key. So you can reveal and sign in any order you want, without precluding the possibility of performing the other operation in the future.
  2. If you sign with the public key Q first, then you do not need to reveal the internal public key P or the contract s. You can compute q simply from the internal private key p and the contract s. You don't even need to pass those in to your signing algorithm, it could just be given the computed q and the message you want to sign!
  3. Anyone verifying your signature using the public key Q is unaware that it is also used as a cryptographic commitment.
Another property is going to blow your mind:
  1. You don't have to know the internal private key p in order to create a commitment pay-to-contract public key Q that commits to a contract s you select.
Remember:
Q = P + h(P | s) * G 
The above equation for Q does not require that you know the internal private key p. All you need to know is the internal public key P. Since public keys are often revealed publicly, you can use somebody else's public key as the internal public key in a pay-to-contract construction.
Of course, you can't sign for Q (you need to know p to compute the private key q) but this is sometimes an interesting use.
The original proposal for pay-to-contract was that a merchant would publish their public key, then a customer would "order" by writing the contract s with what they wanted to buy. Then, the customer would generate the public key Q (committing to s) using the merchant's public key as the internal public key P, then use that in a P2PKH or P2WPKH. Then the customer would reveal the contract s to the merchant, placing their order, and the merchant would now be able to claim the money.
Another general use for pay-to-contract include publishing a commitment on the blockchain without using an OP_RETURN output. Instead, you just move some of your funds to yourself, using your own public key as the internal public key, then selecting a contract s that commits or indicates what you want to anchor onchain. This should be the preferred technique rather than OP_RETURN. For example, colored coin implementations over Bitcoin usually used OP_RETURN, but the new RGB colored coin technique uses pay-to-contract instead, reducing onchain bloat.

Taproot

Pay-to-contract is also used in the nice new Taproot concept.
Briefly, taproot anchors a Merkle tree of scripts. The root of this tree is the contract s committed to. Then, you pay to a SegWit v1 public key, where the public key is the Q pay-to-contract commitment.
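As a rough sketch of that construction, reusing the helpers from the earlier curve and pay-to-contract sketches (real Taproot as specified in BIP341 uses tagged hashes, x-only keys and specific leaf-ordering rules, all simplified away here; the leaf scripts are placeholders):

import hashlib

script_1 = b"<script: spend with key A after 1 week>"
script_2 = b"<script: 2-of-2 with keys A and B>"
leaf_1 = hashlib.sha256(script_1).digest()
leaf_2 = hashlib.sha256(script_2).digest()
merkle_root = hashlib.sha256(leaf_1 + leaf_2).digest()   # the contract s

p_internal = 1234567                     # internal private key
P_internal = scalar_mul(p_internal, G)   # internal public key

Q_output = point_add(P_internal, scalar_mul(h(P_internal, merkle_root), G))
q_output = p_internal + h(P_internal, merkle_root)   # key-path spending key
assert scalar_mul(q_output, G) == Q_output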
When spending a coin paying to a SegWit v1 output with a Taprooted commitment to a set of scripts s, you can do one of two things:
  1. Sign directly with the key. If you used Taproot, use the commitment private key q.
  2. Reveal the commitment, then select the script you want to execute in the Merkle tree of scripts (prove the Merkle tree path to the script). Then satisfy the conditions of the script.
Taproot utilizes the characteristics of pay-to-contract:
  1. If you reveal first, then you still remain in sole control of the private key.
    • This is important if you take the Taproot path and reveal the commitment to the set of scripts s. If your transaction gets stalled on the mempool, others can know your commitment details. However, revealing the commitment will not reveal the internal private key p (which is needed to derive the commitment private key q), so nobody can RBF out your transaction by using the sign-directly path.
  2. If you sign with the public key Q first, then you do not need to reveal the internal public key P or the contract s.
    • This is important for privacy. If you are able to sign with the commitment public key, then that automatically hides the fact that you could have used an alternate script s instead of the key Q.
  3. Anyone verifying your signature using the public key Q is unaware that it is also used as a cryptographic commitment.
    • Again, privacy. Fullnodes will not know that you had the ability to use an alternate script path.
Taproot is intended to be deployed with the switch to Schnorr-based signatures in SegWit v1. In particular, Schnorr-based signatures have the following ability that ECDSA cannot do except with much more difficulty:
As public keys can, with Schnorr-based signatures, easily represent an n-of-n signing set, the internal public key P can also actually be a MuSig n-of-n signing set. This allows for a number of interesting protocols, which have a "good path" that will be private if that is taken, but still have fallbacks to ensure proper execution of the protocol and prevent attempts at subverting the protocol.

Escrow Under Taproot

Traditionally, escrow is done with a 2-of-3 multisignature script.
However, by use of Taproot and pay-to-contract, it's possible to get more privacy than traditional escrow services.
Suppose we have a buyer, a seller, and an escrow service. They have keypairs B = b * G, S = s * G, and E = e * G.
The buyer and seller then generate a Taproot output (which the buyer will pay to before the seller sends the product).
The Taproot itself uses an internal public key that is the 2-of-2 MuSig of B and S, i.e. MuSig(B, S). Then it commits to a pair of possible scripts:
  1. Release to a 2-of-2 MuSig of seller and escrow. This path is the "escrow sides with seller" path.
  2. Release to a 2-of-2 MuSig of buyer and escrow. This path is the "escrow sides with buyer" path.
Now of course, the escrow also needs to learn what the transaction was supposed to be about. So what we do is that the escrow key is actually used as the internal public key of another pay-to-contract, this time with the script s containing the details of the transaction. For example, if the buyer wants to buy some USD, the contract could be "Purchase of 50 pieces of United States Federal Reserve Green Historical Commemoration papers for 0.357 satoshis".
This takes advantage of the fact that the committer need not know the private key behind the public key being used in a pay-to-contract commitment. The actual transaction it is being used for is committed to onchain, because the public key published on the blockchain ultimately commits (via a taproot to a merkle tree to a script containing a MuSig of a public key modified with the committed contract) to the contract between the buyer and seller.
Thus, the cases are:
  1. Buyer and seller are satisfied, and cooperatively create a signature that spends the output to the seller.
    • The escrow service never learns it could have been an escrow. The details of their transaction remain hidden and private, so the buyer is never embarrassed over being so tacky as to waste their hard money buying USD.
  2. The buyer and seller disagree (the buyer denies having received the goods in proper quality).
    • They contact the escrow, and reveal the existence of the onchain contract, and provide the data needed to validate just what, exactly, the transaction was supposed to be about. This includes revealing the "Purchase of 50 pieces of United States Federal Reserve Green Historical Commemoration papers for 0.357 satoshis", as well as all the data needed to validate up to that level. The escrow then investigates the situation and then decides in favor of one or the other. It signs whatever transaction it decides (either giving it to the seller or buyer), and possibly also extracts an escrow fee.

Smart Contracts Unchained

Developed by ZmnSCPxj here: https://zmnscpxj.github.io/bitcoin/unchained.html
A logical extension of the above escrow case is to realize that the "contract" being given to the escrow service is simply some text that is interpreted by the escrow, and which is then executed by the escrow to determine where the funds should go.
Now, the language given in the previous escrow example is English. But nothing prevents the contract from being written in another language, including a machine-interpretable one.
Smart Contracts Unchained simply makes the escrow service an interpreter for some Smart Contract scripting language.
The cute thing is that there still remains an "everything good" path where the participants in the smart contract all agree on what the result is. In that case, with Taproot, there is no need to publish the smart contract --- only the participants know, and nobody else has to. This is an improvement in not only privacy, but also blockchain size --- the smart contract itself never has to be published onchain, only the commitment to it is (and that is embedded in a public key, which is necessary for basic security on the blockchain anyway!).

Sign-to-contract

Hiding commitments in signatures!
Sign-to-contract is something like the dual or inverse of pay-to-contract. Instead of hiding a commitment in the public key, it is hidden in the signature.
Sign-to-contract utilizes the fact that signatures need to have a random scalar r which is then published as the point R = r * G.
Similarly to pay-to-contract, we can have an internal random scalar p and internal point P that is used to compute R:
R = P + h(P | s) * G 
The corresponding random scalar r is:
r = p + h(P | s) 
The signing algorithm then uses the modified scalar r.
This is in fact just the same method of commitment as in pay-to-contract. The operations of committing and revealing are the same. The only difference is where the commitment is stored.
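A short sketch of the same trick applied to the nonce, reusing point_add, scalar_mul, G and h() from the sketches above (again simplified: no reduction modulo the group order, plain SHA-256 instead of a tagged hash):

p_nonce = 987654321                     # internal random nonce (kept secret)
P_nonce = scalar_mul(p_nonce, G)        # internal nonce point, doubles as the salt
s = b"commitment: hash of my document"  # the contract hidden in the signature

r = p_nonce + h(P_nonce, s)             # r = p + h(P|s), fed to the signing algorithm
R = scalar_mul(r, G)                    # R = P + h(P|s)*G, published with the signature
assert R == point_add(P_nonce, scalar_mul(h(P_nonce, s), G))
# Revealing (P_nonce, s) later lets anyone check that R commits to s.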
Importantly, however, is that you cannot take somebody else's signature and then create an alternate signature that commits to some s you select. This is in contrast with pay-to-contract, where you can take somebody else's public key and then create an alternate public key that commits to some s you select.
Sign-to-contract is somewhat newer as a concept than pay-to-contract. It seems there are not as many applications of sign-to-contract yet.

Uses

Sign-to-contract can be used, like pay-to-contract, to publish commitments onchain.
The difference is below:
  1. Signatures are attached to transaction inputs.
  2. Public keys are attached to transaction outputs.
One possible use is in a competitor to Open Timestamps. Open Timestamps currently uses OP_RETURN to commit to a Merkle Tree root of commitments aggregated by an Open Timestamps server.
Instead of using such an OP_RETURN, individual wallets can publish a timestamped commitment by making a self-paying transaction, embedding the commitment inside the signature for that transaction. Such a feature can be added to any individual wallet software. https://blog.eternitywall.com/2018/04/13/sign-to-contract/
This does not require any additional infrastructure (i.e. no aggregating servers like in Open Timestamps).

R Reuse Concerns

ECDSA and Schnorr-based signature schemes are vulnerable to something called "R reuse".
Basically, if the same R is used for different messages (transactions) with the same public key, a third party with both signatures can compute the private key.
This is concerning especially if the signing algorithm is executed in an environment with insufficient entropy. By complete accident, the environment might yield the same random scalar r in two different runs. Combined with address reuse (which implies public key reuse) this can leak the private key inadvertently.
For example, most hardware wallets will not have any kind of entropy at all.
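A quick sketch of why reuse is fatal, written in Schnorr-style notation (the ECDSA case is analogous). Two signatures with the same R and public key P over different messages m1 and m2 give:

s1 = r + h(R | P | m1) * p
s2 = r + h(R | P | m2) * p

Subtracting the two eliminates the unknown nonce r:

p = (s1 - s2) / (h(R | P | m1) - h(R | P | m2))

Everything on the right-hand side is public (division here means multiplying by the modular inverse), so anyone holding both signatures can compute the private key p.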
The usual solution to this, instead of selecting an arbitrary random r (which might be impossible in limited environments with no available entropy), is to hash the message and use the hash as the r.
This ensures that if the same public key is used again for a different message, then the random r is also different, preventing reuse at all.
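A simplified Python sketch of that idea (real schemes such as RFC 6979 and BIP340 also mix the private key into the derivation and are considerably more careful; the function name here is made up for illustration):

import hashlib

def deterministic_nonce(privkey: int, message: bytes) -> int:
    # Derive r from the private key and the message, so the same inputs always
    # give the same r and different messages never share an r.
    data = privkey.to_bytes(32, "big") + hashlib.sha256(message).digest()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

r1 = deterministic_nonce(1234567, b"tx paying Alice")
r2 = deterministic_nonce(1234567, b"tx paying Bob")
assert r1 != r2      # different messages -> different nonces -> no R reuse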
Of course, if you are using sign-to-contract, then you can't use the above "best practice".
It seems to me plausible that computing the internal random scalar p using the hash of the message (transaction) should work, then add the commitment on top of that. However, I'm not an actual cryptographer, I just play one on Reddit. Maybe apoelstra or pwuille can explain in more detail.
Copyright 2019 Alan Manuel K. Gloria. Released under CC-BY.
submitted by almkglor to Bitcoin [link] [comments]

Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)

We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get this in a written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know.
Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here:
https://youtu.be/tPImTXFb_U8

BEGIN TRANSCRIPT:

Connor: 02:19.68,0:02:45.10
Alright so thank You Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain and also the lead developers of the Satoshi’s Vision client. So Daniel and Steve do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started?
Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders and at nChain I am the director of solutions in engineering and specifically for Bitcoin SV I am the technical director of the project which means that I'm a bit less hands-on than Daniel but I handle a lot of the liaison with the miners - that's the conditional project.
Daniel:
Hi I’m Daniel I’m the lead developer for Bitcoin SV. As the team's grown that means that I do less actual coding myself but more organizing the team and organizing what we’re working on.
Connor 03:23.07,0:04:15.98
Great so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in and we'll try and get through most of these if we can. So I think we just wanted to start out and ask, you know, Bitcoin Cash is a little bit over a year old now. Bitcoin itself is ten years old but in the past a little over a year now what has the process been like for you guys working with the multiple development teams and, you know, why is it important that the Satoshi’s vision client exists today?
Steve: 0:04:17.66,0:06:03.46
I mean yes well we’ve been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreements around it, but some what I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the back end of a Bitcoin Unlimited and Bitcoin SV is free to do whatever they want in the backend, and if they interoperate on a non-consensus level great. If they don't not such a big problem there will obviously be bridges between the two, so, yeah I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59
I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing?
Daniel: 0:06:19.59,0:07:55.55
Sure yeah so our release will be concentrated on the stability, right, with the first release of Bitcoin SV and that involved doing a large amount of additional testing particularly not so much at the unit test level but at the more system test so setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren’t any other side effects. Because of, you know, it was quite a rush to release the first version so we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that but we’re not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two. A slightly different numbering scheme to the testnet three that everyone's probably used to – that’s just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one’s set to reset every couple of days. The other one [Testnet Two] was set to post activation so that we can test all of the consensus changes. The third one was a performance test network which I think most people have probably have heard us refer to before as Gigablock Testnet. I get my tongue tied every time I try to say that word so I've started calling it the Performance test network and I think we're planning on having two of those: one that we can just do our own stuff with and experiment without having to worry about external unknown factors going on and having other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests, but the other one (which I think might still be a work in progress so Daniel might be able to answer that one) is one of them where basically everyone will be able to join and they can try and mess stuff up as bad as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we so we recently shared the details of Testnet One and Two with the with the other BCH developer groups. The Gigablock test network we've shared up with one group so far but yeah we're building it as Steve pointed out to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00
I think that was my next question I saw that you posted on Twitter about the revived Gigablock testnet initiative and so it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were coming down - what does that revived Gigablock test initiative look like?
Daniel: 0:10:41.62,0:11:58.34
That's what did the Gigablock test network is. So the Gigablock test network was first set up by Bitcoin Unlimited with nChain’s help and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we scaled back because we're not using it at the moment so they'll notice I think three. We have produced some large blocks there and it's helped us a lot in our research and into the scaling capabilities of Bitcoin SV, so it's guided the work that the team’s been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all and we put all our focus and energy into establishing the QA process and making sure that that change was safe and that was a good process for us to go through. It highlighted what we were missing in our team – we got our recruiters very busy recruiting of a Test Manager and more QA people. The second stage after that is performance related work which, as Daniel mentioned, the results of our performance testing fed into what tasks we were gonna start working on for the performance related stuff. Now that work is still in progress - some of the items that we identified the code is done and that's going through the QA process but it’s not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it’s been QA first, performance second. The performance enhancements are close and on the horizon but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for the performance are really quite large and really get down into the base level view of the software. There's kind of two groups of them mainly. One that are internal to the software – to Bitcoin SV itself - improving the way it works inside. And then there's other ones that interface it with the outside world. One of those in particular we're working closely with another group to make a compatible change - it's not consensus changing or anything like that - but having the same interface on multiple different implementations will be very helpful right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45
Obviously for Bitcoin SV one of the main things that you guys wanted to do that that some of the other developer groups weren't willing to do right now is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about - a lot of the objection to either removing the box size entirely or increasing it on a larger scale is this idea of like the infinite block attack right and that kind of came through in a lot of the questions. What are your thoughts on the “infinite block attack” and is it is it something that that really exists, is it something that miners themselves should be more proactive on preventing, or I guess what are your thoughts on that attack that everyone says will happen if you uncap the block size?
Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something where there’s probably two schools of thought about it. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now so that the limit is increased when the software can handle it and you don’t run into the limit when the software improves and can handle it. Obviously we’re from the latter school of thought. As I said before we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB then those performance enhancements will go in, but we won't be able to actually demonstrate it on mainnet. As for the infinite block attack itself, I mean there are a number of mitigations that you can put in place. I mean firstly, you know, going down to a bit of the tech detail - when you send a block message or send any peer to peer message there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB then obviously you know something's wrong so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128 you know it’s kind of pointless to download that message. So I mean these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit in the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this. One other aspect of the infinite block attack, and let’s not call it the infinite block attack, let's just call it the large block attack - it takes a lot of time to validate; that we've gotten around by having parallel pipelines for blocks to come in. So you've got a block that's coming in, it's got an unknown stuck on it for two hours or whatever downloading and validating it. At some point another block is going to get mined by someone else, and as long as those two blocks aren't stuck in a serial pipeline then you know the problem kind of goes away.
Cory: 0:18:26.55,0:18:48.27
Are there any concerns with the propagation of those larger blocks? Because there's a lot of questions around you know what the practical size of scaling right now Bitcoin SV could do and the concerns around propagating those blocks across the whole network.
Steve 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and xThin exist, so a 32MB block does not send 32MB of data in most cases, almost all cases. The concern here that I think I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall as they’ve given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of the blocks that it can sustain is pretty large. So we're looking at a couple of options - it may well be the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and send stuff up to the peer-to-peer network on one side and send it over splitters, to send it over multiple links, reassemble it on the other side so we can sort of transit the Great Firewall without too much trouble, but I mean getting back to the core of your question - yes there is a theoretical limit to block size propagation time and that's kind of where Moore's Law comes in. Put in faster links and you kick that can further down the road and you just keep on putting in faster links. I don't think 128 MB blocks are going to be an issue though with the speed of the internet that we have nowadays.
Connor: 0:21:34.99,0:22:17.84
One of the other changes that you guys are introducing is increasing the max script size so I think right now it’s going from 201 to 500 [opcodes]. So I guess a few of the questions we got was I guess #1 like why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and then #2 also specifically we had a question about how certain are you that there are no remaining n squared bugs or vulnerabilities left in script execution?
Steve: 0:22:15.50,0:25:36.79
It's interesting the decision - we were initially planning on removing that cap altogether and the next cap that comes into play after that (next effective cap is a 10,000 byte limit on the size of the script). We took a more conservative route and decided to wind that back to 500 - it's interesting that we got some criticism for that when the primary criticism I think that was leveled against us was it’s dangerous to increase that limit to unlimited. We did that because we’re being conservative. We did some research into these log n squared bugs, sorry – attacks, that people have referred to. We identified a few of them and we had a hard think about it and thought - look if we can find this many in a short time we can fix them all (the whack-a-mole approach) but it does suggest that there may well be more unknown ones. So we thought about taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt, so the problem here is because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time everything that's behind it in the queue has to stop and wait. So what we wanted to do, and this is something we've got an engineer actively working on right now, is once that script validation code path is properly parallelized (parts of it already are), then we’ll basically assign a few threads for well-known transaction templates, and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while that's not going to stop the node from working because you've got these other kind of lanes of the highway that are exclusively reserved for well-known script templates and they'll just keep on passing through. Once you've got that in place, I think we're in a much better position to get rid of that limit entirely because the worst that's going to happen is your non-standard script pipelines get clogged up but everything else will keep ticking along - there are other mitigations for this as well. I mean you could always put a time limit on script execution if you wanted to, and that would be something that would be up to individual miners. Bitcoin SV's job I think is to provide the tools for the miners and the miners can then choose, you know, how to make use of them - if they want to set time limits on script execution then that's a choice for them.
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that a node here, when it receives a transaction through the peer to peer network, it doesn't have to accept that transaction, you can reject it. If it looks suspicious to the node it can just say you know we're not going to deal with that, or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can’t do that is when it's in a block already, but then it could decide to reject the block as well. It's all possibilities there could be in the software.
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0:26:21.21,0:26:43.60
There’s a lot of discussion about the re-enabled opcodes coming – OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Can you maybe explain the significance of those opcodes being re-enabled?
Steve: 0:26:42.01,0:28:17.01
Well I mean one of one of the most significant things is other than two, which are minor variants of DUP and MUL, they represent almost the complete set of original op codes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that's that I've heard some interesting comments about. People ask me why are you putting OP_MUL back in if you're planning on changing them to big number operations instead of the 32-bit limit that they're currently imposed upon. The simple answer to that question is that we currently have all of the other arithmetic operations except for OP_MUL. We’ve got add divide, subtract, modulo – it’s odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer to that question is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case for this - most cryptographic primitive operations require mathematical operations and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine, or rather not completing it, but putting it back the way that it was it was meant to be.
Connor: 0:28:20.42,0:29:22.62
Big Num vs 32 Bit. Daniel - I think I saw you answer this on Reddit a little while ago, but the new opcodes use logical shifts while Satoshi's version used arithmetic shifts. The general question that I think a lot of people keep bringing up, maybe in a rhetorical way, is: why not restore it back to the way Satoshi had it exactly - what are the benefits of changing it now to operate a little bit differently?
Daniel: 0:29:18.75,0:31:12.15
Yeah, there are two parts there - the big number one, and LSHIFT being a logical shift instead of arithmetic. So when we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. The new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic shifts - I think I've posted a short script that did that - but we can't do it the other way around, right. You couldn't use an arithmetic shift operator to implement a bitwise one. It's because of the ordering of the bytes in the arithmetic values, the values that represent numbers. They're little-endian, which means the bytes are swapped around compared to what many other systems use - what I'd consider normal, big-endian. And if you start shifting that as a number then the shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement a bitwise shift with arithmetic, so we chose to make them bitwise operators - that's what we proposed.
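To make the asymmetry concrete, here is a small Python sketch (an illustration only, not the node's implementation - it ignores sign handling and minimal encoding of script numbers): a bitwise shift on the raw bytes can emulate an arithmetic shift (multiply by a power of two) of a little-endian script number simply by reversing the bytes first, whereas an operator that only shifts numbers cannot reproduce an arbitrary bitwise shift of a data string.

```python
def lshift_bytes(data: bytes, n: int) -> bytes:
    """Bitwise left shift of a raw byte string, OP_LSHIFT-style.
    Treats the data purely as a bit vector; length is preserved and
    bits shifted past the end are discarded."""
    width = len(data) * 8
    value = int.from_bytes(data, "big")            # interpret as a plain bit vector
    shifted = (value << n) & ((1 << width) - 1)    # drop overflow bits
    return shifted.to_bytes(len(data), "big")

def arithmetic_lshift_via_bitwise(num_le: bytes, n: int) -> bytes:
    """Emulate an arithmetic shift (multiply by 2**n) of a little-endian
    script number by byte-swapping, doing the bitwise shift, and swapping back."""
    be = num_le[::-1]                   # little-endian number -> big-endian bits
    return lshift_bytes(be, n)[::-1]    # shift, then back to little-endian

if __name__ == "__main__":
    five_le = (5).to_bytes(2, "little")                 # 5 as a 2-byte script number
    shifted = arithmetic_lshift_via_bitwise(five_le, 1)
    print(int.from_bytes(shifted, "little"))            # 10 == 5 * 2
```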
Steve: 0:31:10.57,0:31:51.51
That was essentially a decision that was made in May, or rather a consequence of decisions that were made in May. So in May we reintroduced OP_AND, OP_OR, and OP_XOR, and the decision to replace three different string operators with OP_SPLIT was also made then. So that was not a decision that we made unilaterally - it was a decision that was made collectively with all of the BCH developers. Well, not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2MUL and OP_2DIV, I think - single operators that multiply or divide the value by two, right - but it was pointed out that that can very easily be achieved by just multiplying by two with OP_MUL instead of having a separate operator for it, so we scrapped those, we took them back out, because we wanted to keep the number of operators to a minimum, yeah.
Steve: 0:32:17.59,0:33:47.20
There was a general appetite for keeping the operators minimal. I mean, the idea to replace OP_SUBSTR, OP_LEFT and OP_RIGHT with the OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with the May opcodes, and obviously Gavin's word carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) and treat the data as big-endian data streams (well, sorry, big-endian isn't really applicable - just plain data strings), it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators, because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have just been nonsensical and very difficult for anyone to work with. I mean, it's a bit like P2SH - it wasn't part of the original Satoshi protocol, but once some things are done they're done, and if you want to make forward progress you've got to work within the framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big number ones it gets really complicated, you know, the big number implementations, because then you can't change the behavior of the existing opcodes - and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big number ones without seriously looking at what scripts there might be out there and the impact of that change on those existing scripts, right. The other point is you don't know what scripts are out there because of P2SH - there could be scripts that you don't know the content of, and you don't know what effect changing the behavior of these operators would have. The big number thing is tricky - there might be other options, but I don't know what they are yet; it needs some serious thought.
Steve: 0:34:43.27,0:35:24.23
That's something we've reached out to the other implementation teams about - we'd actually really like their input on the best ways to go about restoring big number operations. It has to be done extremely carefully, and I don't know if we'll get there by May next year, or when, but we're certainly willing to put a lot of resources into it and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done and getting it done safely.
Connor: 0:35:19.30,0:35:57.49
Kind of along this similar vein, you know, Bitcoin Core introduced this concept of standard scripts, right - standard and non-standard scripts. I had a pretty interesting conversation with Clemens Ley about use cases for "non-standard scripts", as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or kind of pushed back on him about doing that, so what are your thoughts about non-standard scripts and the entirety of an IsStandard check?
Steve: 0:35:58.31,0:37:35.73
I'd actually like to repurpose the concept. I think I mentioned before multi-threaded script validation and having some dedicated well-known script templates - when you say the words well-known script template, there's already a check in Bitcoin that kind of tells you if it's well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than it is a technical change. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard and not standard, but the reality is not too many miners are using those configuration options. So standard transactions as a concept is meaningful to an arbitrary degree, I suppose, but yeah, I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I've had with CoinGeek they're quite keen on making their miners accept, at least initially, a wider variety of transactions.
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes, right - you want to streamline the base use case of cash payments and prioritize them. That's where it will remain important, but on the interfaces from the node to the rest of the network, yeah, I could easily see it being removed.
Cory: 0:38:06.24,0:38:35.46
Connor mentioned that there are some people who disagree with Bitcoin SV and what they're doing - a lot of the questions are around, you know, why November? Why implement these changes in November - they think that maybe a six-month delay might avoid a split. So first off, what do you think about the idea of a potential split, and I guess, what is the urgency for November?
Steve: 0:38:33.30,0:40:42.42
Well, in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork change, I think on August 16th or 17th, something like that, and their client as well, and it included CTOR and it included DSV. Now for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in, they're in. They can't be reversed - I mean, CTOR maybe you could reverse at a later date, but DSV, once someone's put a P2SH transaction - or even a non-P2SH transaction - into the blockchain using that opcode, it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but what we're doing - I mean, Bitcoin ABC already published their spec for May and it is our spec for the new opcodes. So in terms of urgency - should we wait? Well, the fact is that we can't - come November, you know, it's a bit like Segwit - once Segwit was in, yes, you arguably could get it out by spending everyone's anyone-can-spend transactions, but in reality it's never going to be that easy and it's going to cause a lot of economic disruption. So yeah, that's it. We're putting our changes in because it's not going to make a difference either way in terms of whether there's going to be a divergence of consensus rules - there's going to be a divergence whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade we'd be pushing ahead with no changes, right, but the November upgrade is there, so we should use it while we can, adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61
Can you talk about DATASIGVERIFY? What are your concerns with it? The general concept that's been kind of floated around, because of Ryan Charles, is the idea that it's a subsidy, right - that it takes a whole megabyte and kind of crunches that down, and the computation time stays the same but maybe the cost is lower - do you share his view on that, or what are your concerns with it?
Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this? There are different ways to look at it, right. I'm an engineer - my specialization is software, so on the economics of it I hear different opinions. I trust some more than others, but I am NOT an economist. With my limited expertise I kind of agree with the ones saying it's a subsidy - it looks very much like one to me, but yeah, that's not my area. What I can talk about is the software - so adding DSV adds really quite a lot of complexity to the code, right, and it's a big change to add that. And what are we going to do - every time someone comes up with an idea we're going to add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like, how big is this client going to become - how big is this node - is it going to have to handle every kind of weird opcode that's out there? The software is just going to get unmanageable, and DSV - that was my main consideration at the beginning: you know, if you can implement it in script you should do it, because that way it keeps the node software simple, it keeps it stable, and it's easier to test that it works properly and correctly. It's almost like adding (?) code to a microprocessor - you know, why would you do that if you can already implement it in the script that is there.
Steve: 0:43:36.16,0:46:09.71
It's actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions that we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (i.e. where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example - it's a combination of SPLIT, SWAP and DROP opcodes to achieve it. So at the really primitive script level we've got this philosophy of let's keep it minimal, and at this sort of (?) level the philosophy is let's just add a new opcode for every primitive function, and Daniel's right - it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument: why have a scripting language at all? Why not just hard-code all of these functions in, one at a time? You know, pay-to-public-key-hash is a well-known construct (?) and not bother executing a script at all - but once we've done that we take away all of the flexibility for people to innovate. So it's a philosophical difference, I think, but I think it's one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things that people feel like they can't do are because of the limits that exist. If we had no opcode limit at all, if you could make a gigabyte transaction, so a gigabyte script, then you could do any kind of crypto that you wanted, even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts come out a lot smaller, so a Rabin signature script shrinks from 100MB to a couple of hundred bytes.
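As an aside on that "compose the primitives" philosophy, here is a tiny Python sketch of how the removed OP_SUBSTR can be rebuilt from OP_SPLIT-style operations plus a bit of stack shuffling (SWAP/DROP). The helper names are mine for illustration; real script would express this with the actual opcodes on the stack.

```python
def op_split(data: bytes, pos: int):
    """OP_SPLIT: return the two halves of `data` split at `pos`."""
    return data[:pos], data[pos:]

def substr(data: bytes, start: int, length: int) -> bytes:
    """Emulate OP_SUBSTR using only OP_SPLIT-style operations.
    In script this is roughly: SPLIT at `start`, SWAP/DROP the prefix,
    SPLIT at `length`, DROP the suffix."""
    _prefix, rest = op_split(data, start)     # discard everything before `start`
    wanted, _suffix = op_split(rest, length)  # discard everything after `length`
    return wanted

assert substr(b"bitcoin script", 8, 6) == b"script"
```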
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script, right. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed, was released originally, with script. I mean, it didn't have to be – instead of having transactions with script you could have had accounts, and you could say, you know, transfer so many BTC from this public key to this one - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you start really digging into what it can do, yeah, it's really amazing what you can do with script. I'm really looking forward to seeing some very interesting applications of it. I mean, Awemany's zero-conf script was really interesting, right. It relies on DSV, which is a problem (and there are some other things I don't like about it), but him diving in and using script to solve this problem was really cool - it was really good to see that.
Steve: 0:47:32.78,0:48:16.44
I asked a question of a couple of people in our research team who have been working on the Rabin signature stuff this morning, actually, and I wasn't sure where they were up to with it, but they're actually working on a proof of concept (which I believe is pretty close to done) which is a Rabin signature script. It will use smaller signatures so that it can fit within the current limits, but it will be, you know, effectively the same algorithm (as DSV), so I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini-Rabin signature).
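For readers unfamiliar with Rabin signatures, here is a rough Python sketch of the underlying math. It is not the team's proof of concept; the SHA256 hash, the 4-byte counter padding and the toy-sized primes are my own assumptions for illustration. The point is that verification is just "square the signature mod n and compare to the hash", which only needs multiplication and modulo - exactly the kind of arithmetic primitives being restored to script.

```python
import hashlib

def _h(message: bytes, pad: bytes, n: int) -> int:
    return int.from_bytes(hashlib.sha256(message + pad).digest(), "big") % n

def rabin_sign(p: int, q: int, message: bytes):
    """Toy signer. p and q must be primes congruent to 3 mod 4.
    Search for a pad so that H(message || pad) is a quadratic residue
    mod n = p*q, then take a square root via the CRT."""
    n = p * q
    counter = 0
    while True:
        pad = counter.to_bytes(4, "big")
        h = _h(message, pad, n)
        if pow(h, (p - 1) // 2, p) == 1 and pow(h, (q - 1) // 2, q) == 1:
            rp = pow(h, (p + 1) // 4, p)                        # sqrt mod p (p ≡ 3 mod 4)
            rq = pow(h, (q + 1) // 4, q)                        # sqrt mod q
            s = (rq + q * ((rp - rq) * pow(q, -1, p) % p)) % n  # combine with CRT
            return s, pad
        counter += 1

def rabin_verify(n: int, message: bytes, s: int, pad: bytes) -> bool:
    """Verification is one modular multiplication: s*s mod n == H(message || pad)."""
    return pow(s, 2, n) == _h(message, pad, n)

# tiny demo with toy-sized primes (real keys would be thousands of bits long)
p, q = 10007, 10039            # both ≡ 3 (mod 4)
sig, pad = rabin_sign(p, q, b"hello")
print(rabin_verify(p * q, b"hello", sig, pad))   # True
```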
Cory: 0:48:13.61,0:48:57.63
Based on your responses I think I kind of already know the answer to this question, but there are a lot of questions about ending experimentation on Bitcoin. I was going to kind of turn that into: with the plan that Bitcoin SV is on, do you see a potential final release - you know, that there will be no new opcodes ever released (like maybe five years down the road we just solidify the base protocol and move forward with that) - or are you more of the idea of being open-ended, so that with appropriate testing we can introduce new opcodes?
Steve: 0:48:55.80,0:49:47.43
I think you've got to factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes, there is a place for new opcodes, but it's probably a limited place. In my opinion the cryptographic primitive functions, for example - CHECKSIG uses ECDSA with a specific elliptic curve, HASH256 uses SHA256 - at some point in the future those are going to no longer be as secure as we would like them to be, and we'll replace them with different hash functions and verification functions at some point, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data too. I'd like to see evidence that these things are needed, and the way I could imagine that happening is that, you know, with the full scripting language some solution is implemented and we discover that it's really useful, and over a period measured in years, not days, we find a lot of transactions are using this feature - then maybe, you know, maybe we should look at introducing an opcode to optimize it. But optimizing before we even know if it's going to be useful, yeah, that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miner's point of view, it's whether it makes more sense for them to optimize a particular process - does it reduce costs for them such that they can offer a better service to everyone else? So ultimately these decisions are going to be miners' decisions, not developer decisions. Developers of course can offer their input - I wouldn't expect every miner to be an expert on script, but as we're already seeing, miners are actually starting to employ their own developers. I'm not just talking about us - there are other miners in China that I know have got some really bright people on their staff who question and challenge all of the changes - study them and produce their own reports. We've been lucky to actually be able to talk to some of those people and have some really fascinating technical discussions with them.
submitted by The_BCH_Boys to btc [link] [comments]

Another fork? Lots of nodes out of sync again. submitted by etherminer24 to ethereum [link] [comments]

I'm writing a series about blockchain tech and possible future security risks. This is the third part of the series introducing Quantum resistant blockchains.

Part 1 and part 2 will give you useful basic blockchain knowledge that is not explained in this part.
Part 1 here
Part 2 here
Quantum resistant blockchains explained.
- How would quantum computers pose a threat to blockchain?
- Expectations in the field of quantum computer development.
- Quantum resistant blockchains
- Why is it easier to change cryptography for centralized systems such as banks and websites than for blockchain?
- Conclusion
The fact that whatever is registered on a blockchain can’t be tampered with is one of the great reasons for the success of blockchain. Looking ahead, awareness is growing in the blockchain ecosystem that quantum computers might cause the need for some changes in the cryptography that is used by blockchains to prevent hackers from forging transactions.
How would quantum computers pose a threat to blockchain?
First, let’s get a misconception out of the way. When talking about the risk quantum computers could pose for blockchain, some people think about the risk of quantum computers out-hashing classical computers. This, however, is not expected to pose a real threat when the time comes.
This paper explains why: https://arxiv.org/pdf/1710.10377.pdf "In this section, we investigate the advantage a quantum computer would have in performing the hashcash PoW used by Bitcoin. Our findings can be summarized as follows: Using Grover search, a quantum computer can perform the hashcash PoW by performing quadratically fewer hashes than is needed by a classical computer. However, the extreme speed of current specialized ASIC hardware for performing the hashcash PoW, coupled with much slower projected gate speeds for current quantum architectures, essentially negates this quadratic speedup, at the current difficulty level, giving quantum computers no advantage. Future improvements to quantum technology allowing gate speeds up to 100GHz could allow quantum computers to solve the PoW about 100 times faster than current technology.
However, such a development is unlikely in the next decade, at which point classical hardware may be much faster, and quantum technology might be so widespread that no single quantum enabled agent could dominate the PoW problem."
The real point of vulnerability is this: attacks on signatures wherein the private key is derived from the public key. That means that if someone has your public key, they can also calculate your private key, which is unthinkable using even today's most powerful classical computers. So in the days of quantum computers, the public-private keypair will be the weak link. Quantum computers have the potential to perform specific kinds of calculations significantly faster than any normal computer. Besides that, quantum computers can run algorithms that take fewer steps to get to an outcome, taking advantage of quantum phenomena like quantum entanglement and quantum superposition. So quantum computers can run certain algorithms that could be used to crack the cryptography used today. https://en.wikipedia.org/wiki/Elliptic-curve_cryptography#Quantum_computing_attacks and https://eprint.iacr.org/2017/598.pdf
Most blockchains use Elliptic Curve Digital Signature Algorithm (ECDSA) cryptography. Using a quantum computer, Shor's algorithm can be used to break ECDSA. (See for reference: https://arxiv.org/abs/quant-ph/0301141 and pdf: https://arxiv.org/pdf/quant-ph/0301141.pdf ) Meaning: they can derive the private key from the public key. So if they got your public key (and a quantum computer), then they got your private key and they can create a transaction and empty your wallet.
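To make the "easy one way, hard the other" relationship concrete: a Bitcoin public key is the point d·G on the secp256k1 curve, where d is the private key. Computing d·G from d is fast, as in the pure-Python sketch below (illustration only - real wallets use constant-time, audited libraries). Going the other way is the elliptic-curve discrete logarithm problem, infeasible classically but solvable with Shor's algorithm on a sufficiently large quantum computer.

```python
# secp256k1 parameters (the curve used by Bitcoin's ECDSA)
P  = 2**256 - 2**32 - 977
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def _add(p1, p2):
    """Add two curve points (None is the point at infinity)."""
    if p1 is None: return p2
    if p2 is None: return p1
    x1, y1 = p1; x2, y2 = p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, P) % P     # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P        # chord slope (addition)
    x3 = (lam * lam - x1 - x2) % P
    return x3, (lam * (x1 - x3) - y1) % P

def pubkey_from_privkey(d: int):
    """Public key = d * G via double-and-add: easy in this direction,
    but recovering d from the result is the ECDLP - the exact problem
    Shor's algorithm breaks on a quantum computer."""
    result, addend = None, (Gx, Gy)
    while d:
        if d & 1:
            result = _add(result, addend)
        addend = _add(addend, addend)
        d >>= 1
    return result

print(pubkey_from_privkey(1) == (Gx, Gy))   # True: 1 * G is just G
```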
RSA has the same vulnerability, although breaking RSA requires a stronger quantum computer than breaking ECDSA.
At this point in time it is already possible to run Shor's algorithm on a quantum computer. However, the number of qubits available right now limits its application. But it has been proven to work: we have exited the era of pure theory and entered the era of practical applications.
So far Shor's algorithm has the most potential, but new algorithms might appear which are more efficient. Algorithms are another area of development that makes progress and pushes quantum computer progress forward. A new algorithm called Variational Quantum Factoring is being developed and it looks quite promising. " The advantage of this new approach is that it is much less sensitive to error, does not require massive error correction, and consumes far fewer resources than would be needed with Shor’s algorithm. As such, it may be more amenable for use with the current NISQ (Noisy Intermediate Scale Quantum) computers that will be available in the near and medium term." https://quantumcomputingreport.com/news/zapata-develops-potential-alternative-to-shors-factoring-algorithm-for-nisq-quantum-computers/
It is however still in development, and only works for 18 binary bits at the time of this writing, but it shows new developments that could mean that, rather than a speedup in quantum computing development posing the most imminent threat to RSA and ECDSA, a speedup in the mathematical developments could be even more consequential. More info on VQF here: https://arxiv.org/abs/1808.08927
It all comes down to this: when your public key is visible, which is always necessary to make transactions, you are at some point in the future vulnerable to quantum attacks. (This also goes for BTC, which uses the hash of the public key as an address, but more on that in the following articles.) If you had keypairs based on post-quantum cryptography, you would not have to worry about that, since in that case not even a quantum computer could derive your private key from your public key.
The conclusion is that future blockchains should be quantum resistant, using post-quantum cryptography. It's very important to realize that post-quantum cryptography is not just adding some extra characters to standard signature schemes; it's the mathematical concept that makes it quantum resistant. To become quantum resistant, the algorithm needs to be changed. “The problem with currently popular algorithms is that their security relies on one of three hard mathematical problems: the integer factorization problem, the discrete logarithm problem or the elliptic-curve discrete logarithm problem. All of these problems can be easily solved on a sufficiently powerful quantum computer running Shor's algorithm. Even though current, publicly known, experimental quantum computers lack processing power to break any real cryptographic algorithm, many cryptographers are designing new algorithms to prepare for a time when quantum computing becomes a threat.” https://en.wikipedia.org/wiki/Post-quantum_cryptography
Expectations in the field of quantum computer development.
To give you an idea what the expectations of quantum computer development are in the field (Take note of the fact that the type and error rate of the qubits is not specified in the article. It is not said these will be enough to break ECDSA or RSA, neither is it said these will not be enough. What these articles do show, is that a huge speed up in development is expected.):
When will ECDSA be at risk? Estimates are only estimates, there are several to be found so it's hard to really tell.
The National Academy of Sciences (NAS) has made a very thorough report on the development of quantum computing. The report came out at the end of 2018. They brought together a group of over 70 scientists from different interconnecting fields in quantum computing who, as a group, have come up with a report of close to 200 pages on the development, funding, implications and upcoming challenges for quantum computing development. But even though this report is one of the most thorough to date, it doesn't make an estimate of when the risk for ECDSA or RSA would occur. They acknowledge this is quite impossible, due to the fact that there are a lot of unknowns and that they have to base any findings only on publicly available information, obviously excluding any non-public advancements from commercial companies and national efforts. So if this group of specialized scientists can't make an estimate, who can make that assessment? Is there any credible source to make an accurate prediction?
The conclusion at this point of time can only be that we do not know the answer to the big question "when".
Now if we don't have an answer to the question "when", then why act? The answer is simple. If we're talking about security, most take certainty over uncertainty. To answer the question of when the threat materializes, we need to guess. Whether you guess soon, or you guess not for the next three decades, both are guesses. Going for certainty means you'd have to plan for the worst and hope for the best. No matter how sceptical you are, having some sort of plan ready is a responsible thing to do. Obviously not if you're just running a blog about knitting. But for systems that carry a lot of important, private and valuable information, planning starts today. The NAS describes it quite well. What they lack in guessing, they make up for in advice. They have very clear advice:
"Even if a quantum computer that can decrypt current cryptographic ciphers is more than a decade off, the hazard of such a machine is high enough—and the time frame for transitioning to a new security protocol is sufficiently long and uncertain—that prioritization of the development, standardization, and deployment of post-quantum cryptography is critical for minimizing the chance of a potential security and privacy disaster."
Another organization that looks ahead is the National Security Agency (NSA) They have made a threat assessment in 2015. In August 2015, NSA announced that it is planning to transition "in the not too distant future" (statement of 2015) to a new cipher suite that is resistant to quantum attacks. "Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy." NSA advised: "For those partners and vendors that have not yet made the transition to Suite B algorithms, we recommend not making a significant expenditure to do so at this point but instead to prepare for the upcoming quantum resistant algorithm transition.” https://en.wikipedia.org/wiki/NSA_Suite_B_Cryptography#cite_note-nsa-suite-b-1
What these organizations both advise is to start taking action. They don't say "implement this type of quantum resistant cryptography now". They don't say when at all. As said before, the "when" question is a hard one to specify. It depends on the system you have, the value of the data, the consequences of postponing a security upgrade. Like I said before: do you just run a blog, or a bank, or a cryptocurrency? It's an individual risk assessment that's different for every organization and system. Assessments do need to be made now, though. What time frame should organisations think about when changing cryptography? How long would it take to go from the current level of security to fully quantum resistant security? What changes does it require to handle bigger signatures, and is it possible to use certain types of cryptography that require keeping state? Do your users need to act, or can all work be done behind the user interface? These are important questions that one should start asking. I will elaborate on these challenges in the next articles.
Besides the unanswered question of "when", the question of what type of quantum resistant cryptography to use is unanswered too. This also depends on the type of system you use. The NSA and NAS both point to NIST as the authority on developments and standardization of quantum resistant cryptography. NIST is running a competition right now that should end up in one or more standards for quantum resistant cryptography. The NIST competition handles criteria that should filter out a type of quantum resistant cryptography that is feasible for a wide range of systems. This takes time, though. There are some new algorithms submitted, and assessing the new and the more well-known ones must be done thoroughly. They intend to wrap things up around 2022 - 2024. From a blockchain perspective it is important to notice that a specific type of quantum resistant cryptography is excluded from the NIST competition: stateful hash-based signatures (LMS and XMSS). This is not because these are no good. In fact they are excellent, and XMSS is accepted to be provably quantum resistant. It's due to the fact that implementations will need to be able to securely deal with the requirement to keep state. And this is not a given for most systems.
At this moment NIST intends to approve both LMS and XMSS for a specific group of applications that can deal with the stateful properties. The only loose end at this point is advice on which applications LMS and XMSS will be recommended for, and for which applications they are discouraged. These questions will be answered in the beginning of April this year: https://csrc.nist.gov/news/2019/stateful-hbs-request-for-public-comments This means that quite likely LMS and XMSS will be the first type of standardized quantum resistant cryptography ever. To give a small hint: keeping state is pretty much a naturally added property of blockchain.
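For a feel of how hash-based signatures work at all, here is a toy Python sketch of a Lamport one-time signature, the simplest member of the family. It is not XMSS or LMS themselves: those schemes build a Merkle tree over many such one-time key pairs, and the "state" that has to be kept is the record of which one-time keys have already been used (reusing one leaks enough secrets to allow forgeries).

```python
import hashlib, os

H = lambda b: hashlib.sha256(b).digest()

def lamport_keygen():
    """256 pairs of random secrets; the public key is their hashes."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def _bits(message: bytes):
    digest = H(message)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def lamport_sign(sk, message: bytes):
    # reveal one secret from each pair, chosen by the corresponding message-hash bit;
    # this is why a key pair must only ever sign once
    return [sk[i][bit] for i, bit in enumerate(_bits(message))]

def lamport_verify(pk, message: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(_bits(message)))

sk, pk = lamport_keygen()
sig = lamport_sign(sk, b"quantum resistant")
print(lamport_verify(pk, b"quantum resistant", sig))   # True
```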
Quantum resistant blockchains
“Quantum resistant” is only used to describe networks and cryptography that are secure against any attack by a quantum computer of any size in the sense that there is no algorithm known that makes it possible for a quantum computer to break the applied cryptography and thus that system.
Also, to determine if a project is fully quantum resistant, you would need to take in account not only how a separate element that is implemented in that blockchain is quantum resistant, but also the way it is implemented. As with any type of security check, there should be no backdoors, in which case your blockchain would be just a cardboard box with bulletproof glass windows. Sounds obvious, but since this is kind of new territory, there are still some misconceptions. What is considered safe now, might not be safe in the age of quantum computers. I will address some of these in the following chapters, but first I will elaborate a bit about the special vulnerability of blockchain compared to centralized systems.
Why is it easier to change cryptography for centralized systems such as banks and websites than for blockchain?
Developers of a centralized system can decide from one day to the other that they will make changes and update the system, without the need for consensus from the nodes. They are in charge, and they can dictate the future of the system. But a decentralized blockchain will need to reach consensus amongst the nodes to update. Meaning that the majority of the nodes will need to upgrade and thus force the blockchain to accept only the new signatures as valid. We can't have the old signature scheme remain valid alongside the new quantum resistant signature scheme, because that would mean that the blockchain would still allow the use of vulnerable old public and private keys and thus the old vulnerable signatures for transactions. So at least the majority of the nodes need to upgrade to make sure that blocks which are constructed using the old rules, and thus the old vulnerable signature scheme, are rejected by the network. This will eventually result in a fully upgraded network which only accepts the new post-quantum signature scheme in transactions. So, consensus is needed. The most well-known example of how that can be a slow process is Bitcoin's need to scale. Even though everybody agrees on the need for a certain result, reaching consensus amongst the community on how to get to that result is a slow and political process. Going quantum resistant will be no different, and since it will reduce performance due to bigger signatures and will quite likely need hardware upgrades, it will more likely be postponed than done fast and smoothly, due to lack of consensus. And because there are several quantum resistant signature schemes to choose from, agreement isn't an automatic given. The discussion will be which one to use, and how and when to implement it. The need for consensus is exclusively a problem decentralized systems like blockchain will face.
Another issue for decentralized systems that change their signature scheme is that users of decentralized blockchains will have to manually migrate their coins/tokens to a quantum safe address, and in that way decouple their old private key and activate a new quantum resistant private key that is part of an upgraded quantum resistant network. Users of centralized networks, on the other hand, do not need to do much, since it would be taken care of by their centrally managed system. As you know, for example, if you forget the password of your online bank account, or some website, they can always send you a link, or a secret question, or in the worst case they can send you mail by post to your house address, and you would be back in business. With decentralized systems there is no centralized entity who has your data. It is you who has this data, and only you. So in a centralized system there is a central entity who has access to all the data, including all the private access data, and therefore this entity can pull all the strings. It can all be done behind your user interface, and you probably wouldn't notice a thing.
And a third issue will be the lost addresses. Since no one but you has access to your funds, your funds will become inaccessible once you lose your private key. From that point, an address is lost, and the funds on that address can never be moved. So after an upgrade, those funds will never be moved to a quantum resistant address, and thus will always be vulnerable to a quantum hack.
To summarize: banks and websites are centralized systems, they will face challenges, but decentralized systems like blockchain will face some extra challenges that won't apply for centralized systems.
All issues specific for blockchain and not for banks or websites or any other centralized system.
Conclusion
Bitcoin and all currently running traditional cryptocurrencies are not excluded from this problem. In fact, it will be central to ensuring their continued existence over the coming decades. All cryptocurrencies will need to change their signature schemes in the future. "When" is the big guess here. I want to leave that for another discussion. There are enough certain specifics we can discuss right now on the subject of quantum resistant blockchains and the challenges that existing blockchains will face when they need to transition. This won't be an easy transition. There are some huge challenges to overcome, and this will not be done overnight. I will get to this in the next few articles.
Part 1, what makes blockchain reliable?
Part 2, The two most important mathematical concepts in blockchain.
Part 4A, The advantages of quantum resistance from genesis block, A
Part 4B, The advantages of quantum resistance from genesis block, B
Part 5, Why BTC will be vulnerable sooner than expected.
submitted by QRCollector to CryptoTechnology [link] [comments]

- Public Key Encryption: Elliptic Curve Ciphers
- Bitcoin 101 - Elliptic Curve Cryptography, Part 5: The Magic of Signing & Verifying
- Elliptic Curve Digital Signature Algorithm
- Adaptor and Schnorr Signatures in Bitcoin. Digital ...
- SF Bitcoin Devs Seminar: Key Tree Signatures

In bitcoin, an ECDSA signature is not encoded as a simple concatenation of r and s. Instead, it is DER encoded. In that encoding, the size of r or s becomes 33 bytes instead of 32 bytes whenever the most significant bit of the original value is set, because a leading zero byte must be added to keep the integer positive. The probability that either r or s is 33 bytes long therefore corresponds to the probability that either r or s was a full 256 bits long originally (i.e., had its most significant bit set).

I've extracted some (R,S) pairs from some ECDSA signatures encoded in the DER format. I found that R and S are not necessarily 32 bytes in size - they can sometimes be 33 bytes long. Could anyone tell me why the signature created by the Qt client is always 65 bytes, or whether it is OK to always convert both R and S into a 32-byte array? Thanks.

Elliptic Curve Digital Signature Algorithm, or ECDSA, is a cryptographic algorithm used by Bitcoin to ensure that funds can only be spent by their rightful owners. A few concepts related to ECDSA: private key - a secret number, known only to the person that generated it. A private key is essentially a randomly generated number. In Bitcoin, someone with the private key that corresponds to funds on ...

The structure of a DER encoded ECDSA signature is as follows: 30 identifies a SEQUENCE in ASN.1 encoding, and is followed by the length of the sequence. r and s can be either 32 or 33 bytes long, depending on how big the DER encoded values are. r and s are each preceded by 02, which identifies an integer value in ASN.1. Finally, the trailing (ht) byte represents the hashtype.

ECDSA ('Elliptic Curve Digital Signature Algorithm') is the cryptography behind the private and public keys used in Bitcoin. It consists of combining the math behind finite fields and elliptic ...
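A small Python sketch of that DER layout (illustrative only; it omits the trailing hashtype byte mentioned above): the 0x00 that gets prepended whenever the top bit of r or s is set is exactly what pushes the length from 32 to 33 bytes.

```python
def der_encode_int(x: int) -> bytes:
    """Encode a positive integer the way DER does inside an ECDSA signature:
    tag 0x02, then the length, then big-endian bytes with a leading 0x00
    added whenever the top bit is set (so the value isn't read as negative).
    That extra byte is why r or s is sometimes 33 bytes instead of 32."""
    body = x.to_bytes((x.bit_length() + 7) // 8 or 1, "big")
    if body[0] & 0x80:
        body = b"\x00" + body
    return bytes([0x02, len(body)]) + body

def der_encode_sig(r: int, s: int) -> bytes:
    """Wrap the two encoded integers in a DER SEQUENCE (tag 0x30)."""
    body = der_encode_int(r) + der_encode_int(s)
    return bytes([0x30, len(body)]) + body

r = 2**255 + 1   # high bit set  -> needs the 0x00 prefix -> 33 bytes
s = 2**254 + 1   # high bit clear -> stays 32 bytes
print(len(der_encode_int(r)) - 2, len(der_encode_int(s)) - 2)   # 33 32
```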


Public Key Encryption: Elliptic Curve Ciphers

- A course on how bitcoin works and how to program bitcoin stuff with the javascript bitcoin library Yours Bitcoin. Taught by Ryan X. Charles, Cofounder & CEO of Yours, and former cryptocurrency ...
- Elliptic Curve Digital Signature Algorithm (ECDSA), Part 10 of the Cryptography Crashcourse by Dr. Julian Hosp.
- Pieter Wuille, Bitcoin Core Developer and Blockstream Co-Founder, spoke about Key Tree Signatures. Bitcoin supports multisig transaction outputs, which require more than a single signature to unlock.
- This video shows how easy it is to paste, verify, and sign a message using an ECDSA private key behind a Bitcoin address.
- Schnorr signatures are a new signature scheme that is much stronger than ECDSA (Elliptic Curve Digital Signature Algorithm), which is currently used in Bitcoin.
