Got this in my inbox a couple of minutes back
A new user sent this to my inbox: a description of the events after the fork, with a signed message at the bottom. I've gone through it once, but it's very late here in my timezone, so I'll have to go through it again tomorrow. I'm sure I'm not the only recipient, but just in case I'm pinging some people here. https://honest.cash/kiarahpromise/sigop-counting-4528

*** EDIT 2 *** Before you continue, from the Bitcoin whitepaper: "The system is secure as long as honest nodes collectively control more CPU power than any cooperating group of attacker nodes."

*** EDIT *** OK, I have slept on this. How big is the chance that these two events, the sigop tx spamming of the network and the attempted theft of funds stuck in segwit outputs by an unknown miner, were coordinated rather than coincidental? I'm wondering whether this was one two-phased plan, and whether even this message was planned (probably a bit different, then adapted to the new situation afterwards, which is why the first half of it is such a mess to read) to spread fear after the two plans were foiled. The plan consisted of several acts:

Act 1) Distract and spam the network with sigop transactions that exploit a bug, halting all BCH transaction volume. The mempool would fill up with unconfirmed transactions.

Act 2) Once a patch is deployed, start your mining pool and mine hard to quickly create a legitimate-looking block. They had prepared the theft transactions and would hide them in the (predicted) massive mempool of accumulated unconfirmed transactions. They would mine a big block, everyone would be so happy that BCH works again, and the devs would be busy looking for sigop transactions.
Act 3) Hope that the chain gets locked in via checkpoint so the theft cannot be reverted.

Act 4) Leak to the media that plenty of BCH were stolen after the fork and that the ABC client is so faulty it halted the network after the upgrade.

Act 5) Make a shitload of money by shorting BCH (there was news about the appearance of a big short position right after the fork).

But the people who planned this attack underestimated the awareness and speed of the BCH dev team. They were probably sure that Act 1 would take hours or even days, so the mempool would be extremely bloated (maybe they speculated that everyone would panic and want out of BCH), and Act 2 would consequently succeed because no one would spot their theft transactions quickly enough. But they didn't count on someone working together with various BCH pools, as a precaution, to prevent exactly this scenario (segwit theft), who had even prepared transactions to move all locked coins back to their owners. Prohashing's orphaned block was likely unintended collateral damage, as Jonathan suggests below, because they were not involved in the plan of the two pools who prepared to return the segwit coins. I'm guessing the pools did not expect a miner with a theft block that early and had to decide quickly what to do when they spotted it.

So now that both plans have been foiled, Plan B comes into play: guerrilla-style fear mongering about how BCH is not decentralized. Spread this info secretly in the community, with proof in the form of a signed message connected to the transactions. Of course, the attacker actually worked alone, attacked us for our own good, and will do so again, because the evil dictatorship devs have to be eradicated...

As an unintended side effect of these events, the BTC.top and BTC.com "partnership" has been exposed. What we do with this revelation is a question we probably have to discuss.
They worked together with someone who wanted to return the segwit coins, and they prevented a theft by using their combined hashing dominance. I applaud them for that. From a moral perspective this is defensible, and my suspicion that BCH has more backing than you can see by following hash rate charts has been confirmed again. But the dilemma BCH faces is revealed again as well: we need a bigger share of the SHA-256 hash rate cake, because we really do not want any single entity in this space to have more than 50% of the hash power.

*** EDIT 2 *** Added Satoshi's quote from the whitepaper.
1. Background

In the SCRY project, a double-chain structure is used in the clients. For the signature algorithm we selected BIP143. In segregated witness, version 0 witness programs use BIP143 signature verification to increase efficiency, but the BIP143 algorithm is not applied to general (non-segwit) transactions. We have optimized signing and verification for general transactions by applying BIP143-style signing and verification to increase efficiency.

1.1 Signature algorithm

Bitcoin uses ECDSA (Elliptic Curve Digital Signature Algorithm) as its digital signature algorithm. A digital signature serves three purposes in Bitcoin: 1. The signature proves ownership of the private key, i.e. ownership of the money being transferred in the transaction. 2. Non-repudiation: the signer cannot later deny having authorized the transaction. 3. The signature cannot be forged, i.e. the transaction (or any detail of it) cannot be altered by anyone after signing. A digital signature has two parts: one is using the private key (the signing key) to sign the hash of the message (the transaction); the other allows anyone to verify the signature from the provided public key and message.
The signature algorithm of Bitcoin is as follows:

Sig = Fsig(Fhash(m), dA)

where dA is the signing private key, m is the transaction (or part of it), Fhash is the hash function, Fsig is the signature algorithm, and Sig is the resulting signature. The whole signing process therefore involves two functions: Fhash and Fsig.
The Fhash function generates the hash of the transaction: first serialize the transaction, then compute the transaction hash over the serialized binary data using SHA256. For a general transaction (single input, single output) the process is as follows.

Transaction serialization:
1. nVersion: transaction version
2. InputCount: input count
3. Prevouts: serialized input UTXOs
4. OutputCount: output count
5. outpoint: serialized output UTXOs
6. nLocktime: lock time of the transaction
7. Hash: double SHA256 over the data above
The Fsig signing function is based on ECDSA. Each signing operation uses a nonce K. From K the algorithm derives a temporary key pair (K, Q) and takes R, the x-coordinate of the temporary public key Q. The formula is:

S = K^-1 * (Hash(m) + dA * R) mod n

where K is the temporary private key, R is the x-coordinate of the temporary public key, dA is the signing private key, m is the transaction data, and n is the order of the elliptic curve group. The function yields the value S. If the same K value is ever reused for two signatures, the private key is exposed, so K must be carefully protected. Bitcoin uses RFC 6979 to make K deterministic, relying on SHA256 for its security. Simplified:

K = SHA256(dA + HASH(m))

where dA is the private key and m is the message. (The actual RFC 6979 construction uses HMAC-SHA256 and is more involved; the line above only conveys the idea.) The final signature is the pair (R, S).
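The sign-then-verify round trip can be sketched with Go's standard library. Two assumptions to note: crypto/ecdsa draws K from crypto/rand rather than deriving it per RFC 6979 (btcsuite's signRFC6979 does the latter), and P-256 stands in for secp256k1, which the standard library does not provide:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// signAndVerify sketches the Fsig/verification pair: hash the message,
// sign the digest with the private key dA, then verify (R, S) against
// the public key Qa.
func signAndVerify(msg []byte) (bool, error) {
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return false, err
	}
	digest := sha256.Sum256(msg) // Fhash(m)
	r, s, err := ecdsa.Sign(rand.Reader, priv, digest[:])
	if err != nil {
		return false, err
	}
	// Verify recomputes P = S^-1*Hash(m)*G + S^-1*R*Qa and checks
	// that P's x-coordinate equals R.
	return ecdsa.Verify(&priv.PublicKey, digest[:], r, s), nil
}

func main() {
	ok, err := signAndVerify([]byte("transaction data"))
	fmt.Println(ok, err)
}
```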
Verification applies the inverse of the signing function:

P = S^-1 * Hash(m) * G + S^-1 * R * Qa

where R and S are the signature values, Qa is the signer's public key, m is the signed transaction data, and G is the generator point of the elliptic curve. From this formula we see that, given the message (or its hash), the signer's public key, and the signature values R and S, we can compute the point P on the elliptic curve. If its x-coordinate equals R, the signature is valid.

1.2 BIP143 brief introduction
There are four ECDSA (Elliptic Curve Digital Signature Algorithm) signature verification opcodes (sigops): CHECKSIG, CHECKSIGVERIFY, CHECKMULTISIG, CHECKMULTISIGVERIFY. A transaction digest is hashed twice with SHA256. Bitcoin's original digest algorithm has at least two disadvantages:

●The data hashed for each signature check grows with the transaction size, so total verification work is O(n^2) in the transaction size, and verification takes too long. BIP143 optimizes the digest algorithm by introducing reusable "intermediate states", bringing the complexity of signature verification down to O(n).

●The original digest does not commit to the input amounts. This is not a problem for full nodes, but for offline signing devices (cold wallets) the input amount is not available, so the exact amount and transaction fee cannot be computed. BIP143 includes the amount of every input in the signature.

BIP143 defines a new transaction digest algorithm. The serialization is as follows: https://preview.redd.it/2b6c5q2mk7b11.png?width=783&format=png&auto=webp&s=eb952782464942b6930bbd2632fbcd0fbaaf5023 Items 1, 4, 7, 9 and 10 in the list are the same as in the original SIGHASH algorithm, and the original SIGHASH types keep their meaning. The following items are changed:
All SIGHASH types commit to the amount being signed;
FindAndDelete of the signature is no longer applied to the scriptCode;
OP_CODESEPARATORs after the last executed OP_CODESEPARATOR are no longer removed from the scriptCode (the last executed OP_CODESEPARATOR, and everything before it, is still removed);
SINGLE no longer commits to the input index. When ANYONECANPAY is not set, the meaning is unchanged, since hashPrevouts and the outpoint together implicitly commit to the input index. When SINGLE is used with ANYONECANPAY, signed inputs and outputs exist in pairs, but with no constraint on the index.
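The "intermediate state" idea can be illustrated with hashPrevouts, one of the reusable digests BIP143 introduces. In this simplified sketch (the outpoint type is a toy stand-in, not btcsuite's), the digest over all input outpoints is computed once per transaction and reused in every input's sighash, which is what makes total work linear:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// outpoint is a toy stand-in: previous txid plus output index,
// following hashPrevouts' serialization order.
type outpoint struct {
	txid  [32]byte
	index uint32
}

// hashPrevouts is one of BIP143's intermediate states: the double
// SHA256 of all serialized input outpoints. Computing it once and
// reusing it in every input's digest avoids rehashing the whole
// transaction per input.
func hashPrevouts(ins []outpoint) [32]byte {
	var buf []byte
	for _, in := range ins {
		buf = append(buf, in.txid[:]...)
		var idx [4]byte
		binary.LittleEndian.PutUint32(idx[:], in.index) // little endian on the wire
		buf = append(buf, idx[:]...)
	}
	first := sha256.Sum256(buf)
	return sha256.Sum256(first[:])
}

func main() {
	ins := []outpoint{{index: 0}, {index: 1}}
	shared := hashPrevouts(ins) // computed once...
	for range ins {
		_ = shared // ...reused in each input's BIP143 digest
	}
	fmt.Printf("%x\n", shared)
}
```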
2. BIP143 signature

In Go we use the btcsuite library to produce signatures. btcsuite is a complete Bitcoin implementation that can run a full node, but here we use only its public/private key API, its SHA API, and its RFC 6979 signing API. To avoid redundancy, the code below is shown without modification.

2.1 Transaction hash generation

When hashing the transaction information, every input in the transaction produces one hash value. If the transaction has multiple inputs, a hash array is generated, with each hash in the array corresponding to one input. https://preview.redd.it/n0x5bo9cl7b11.png?width=629&format=png&auto=webp&s=63f4951e5ca7d0cffc6e8905f5d4b33354aa6ecc With two transaction inputs, as in the image above, every input yields one hash, so the transaction above yields two hashes.
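The one-hash-per-input loop can be sketched as follows. The types here are toy stand-ins for the EMsgTx machinery described below, with a deliberately naive serialization; the point is the shape of the procedure (blank every other input's SignatureScript, keep this input's, then double-SHA256), not a faithful sighash implementation:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Toy stand-ins; names are illustrative, not btcsuite's real API.
type txIn struct {
	signatureScript []byte
}
type msgTx struct {
	ins []txIn
}

// sigHashForInput mirrors the procedure described below: before
// hashing for input idx, blank the SignatureScript of every other
// input, keep only this input's script, serialize, and double-SHA256.
func sigHashForInput(tx msgTx, idx int) [32]byte {
	var buf []byte
	for i, in := range tx.ins {
		if i == idx {
			buf = append(buf, in.signatureScript...)
		} else {
			buf = append(buf, 0x00) // blanked script placeholder
		}
	}
	first := sha256.Sum256(buf)
	return sha256.Sum256(first[:])
}

// txHash produces one digest per input: the "hash array" the text
// refers to.
func txHash(tx msgTx) [][32]byte {
	hashes := make([][32]byte, len(tx.ins))
	for i := range tx.ins {
		hashes[i] = sigHashForInput(tx, i)
	}
	return hashes
}

func main() {
	tx := msgTx{ins: []txIn{
		{signatureScript: []byte{0x51}},
		{signatureScript: []byte{0x52}},
	}}
	for _, h := range txHash(tx) {
		fmt.Printf("%x\n", h)
	}
}
```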
CalcSignatureHash(script []byte, hashType SigHashType, tx *EMsgTx, idx int)

where script (the pubscript) is the unlocking script of the input UTXO, hashType is the signature method or type, tx is the transaction, and idx is the input index, i.e. which input's hash to calculate. The following is the Fhash code: https://preview.redd.it/e8xx974gl7b11.png?width=506&format=png&auto=webp&s=9a4f419069bea2e76b8d5b7205a31e06692f3f67 When one transaction spends multiple UTXOs, apply the function above to each input in turn to generate a hash array. Before generating each hash, clear the SignatureScript field of every other input, leaving only this input's SignatureScript (the ScriptSig field). https://preview.redd.it/0omhp2ahl7b11.png?width=462&format=png&auto=webp&s=4cee9b0e4fe10185a39d68bde1032ac4e4dbb9ad The amount of every UTXO is different, so pay attention to step 6: what you must supply there is the amount of each input. The multi-input hash function is

func txHash(tx msgtx) (*byte)

Code details: https://preview.redd.it/rlnxv3lil7b11.png?width=581&format=png&auto=webp&s=804adbee92a9bb9811a4ffc395601ebf191fd664 Calling the Fhash function (CalcSignatureHash) repeatedly generates the hash array.

2.2 Signing the hash

The method above produces a hash array with one unique hash per input. We sign each hash with the RFC 6979 signing function, using btcsuite directly:

signRFC6979(PrivateKey, hash)

This generates the SignatureScript, whose value we set on the SignatureScript field of each input in the transaction.

2.3 Multisig

Briefly, multisig is the question of how many private keys must sign one UTXO. The script encodes the condition: N public keys are recorded in the script, and at least M of them must provide a signature to unlock the asset.
This is also called M-of-N: N is the number of private keys, and M is the number of signatures needed for verification. The following shows how to realize a 2-of-2 multisig with a P2SH (Pay-to-Script-Hash) script in Go. Generation of the 2-of-2 script: https://preview.redd.it/7dq7cv9kl7b11.png?width=582&format=png&auto=webp&s=108c6278d656e5fa6b51b5876d5a0f7a1231f933 The function above generates a script of the form

2 <pubkey1> <pubkey2> 2 OP_CHECKMULTISIG

Signing:
1. From the transaction TX, which includes the input array TxIn, generate the transaction hash array. This is the same process as for the general transactions above; reuse the digest function from above. func txHash(tx msgtx) (*byte) generates a hash array, one hash per transaction input.
2. Sign with the private key corresponding to the first public key in the redeem script, as for a general transaction: signRFC6979(PrivateKey, hash). This produces the signature array SignatureScriptArr1, one entry per input. Using these signature values, update the SignatureScript field of every input TxIn in TX.
3. With the updated TX, call txHash again to generate a new hash array: func txHash(tx msgtx) (*byte)
4. Sign with the private key corresponding to the second public key in the redeem script. Using the TX updated in the step above, generate and sign each input hash: signRFC6979(PrivateKey, hash). Then combine the signature from the first key, the signature from the second key, and the redeem script: etxscript.EncodeSigScript(&(TX.TxIn[i].SignatureScript), &SigHash2, pkScript)

If there are N inputs, repeat this N times.
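The redeem script assembly can be sketched by hand. This is an illustrative sketch, not btcsuite's API (production code would use txscript.ScriptBuilder); the opcode values come from the Bitcoin script specification, and the public keys are zeroed placeholders:

```go
package main

import "fmt"

// Opcode values from the Bitcoin script specification.
const (
	opTwo           = 0x52 // OP_2
	opCheckMultiSig = 0xae // OP_CHECKMULTISIG
)

// redeemScript2of2 assembles the "2 <pubkey1> <pubkey2> 2
// OP_CHECKMULTISIG" redeem script described above. Each 33-byte
// compressed public key is preceded by its push-length byte.
func redeemScript2of2(pub1, pub2 []byte) []byte {
	script := []byte{opTwo}
	script = append(script, byte(len(pub1)))
	script = append(script, pub1...)
	script = append(script, byte(len(pub2)))
	script = append(script, pub2...)
	return append(script, opTwo, opCheckMultiSig)
}

func main() {
	pub1 := make([]byte, 33) // placeholder compressed pubkeys
	pub2 := make([]byte, 33)
	s := redeemScript2of2(pub1, pub2)
	fmt.Printf("%d bytes: %x\n", len(s), s)
}
```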
The final data looks like this: https://preview.redd.it/78aabhqll7b11.png?width=558&format=png&auto=webp&s=453f7129b2cf3c648b68c2369a4622963087d0c8

References
https://en.wikipedia.org/wiki/Digital_signature
https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki
"Mastering Bitcoin", 2nd edition, O'Reilly
http://www.8btc.com/rfc6979
/u/jl_2012 comments on new extension block BIP - "a block reorg will almost guarantee changing txid of the resolution tx, that will permanently invalidate all the child txs based on the resolution tx"
Comments from jl_2012 I feel particularly disappointed that while this BIP is 80% similar to my proposal made 2 months ago ( https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-January/013490.html ), Matt Corallo was the only person who replied to me. Also, this BIP seems to ignore the txid malleability of the resolution tx, which was my major technical critique of the xblock design. But anyway, here I'm only making comments on the design. As I said in my earlier post, I consider this more an academic topic than something really ready for production use.
This specification defines a method of increasing bitcoin transaction throughput without altering any existing consensus rules.
Softforks by definition tighten consensus rules
There has been great debate regarding other ways of increasing transaction throughput, with no proposed consensus-layer solutions that have proven themselves to be particularly safe.
So do the authors not consider segwit a consensus-layer solution for increasing transaction throughput, or do they not think segwit is safe? Logically speaking, if segwit is not safe, this BIP could only be worse. OTOH, segwit also obviously increases tx throughput, although perhaps not by as much as some people wish to have.
This specification refines many of Lau's ideas, and offers a much simpler method of tackling the value transfer issue, which, in Lau's proposal, was solved with consensus-layer UTXO selection.
The 2013 one is outdated. As the authors are not quoting it, I am not sure whether they have read my January proposal.
I think the extension block in the proposed form actually breaks BIP141. It may claim to activate segregated witness as a general idea, but not a specific proposal like BIP141.
The merkle root is to be calculated as a merkle tree with all extension block txids and wtxids as the leaves.
It needs to be more specific here. How exactly are they arranged? I suggest using a root of all txids and a root of all wtxids, and combining them as the commitment. The reason is to allow people to prune the witness data yet still be able to serve the pruned tx to light wallets. If it makes txid and wtxid pairs, then after witness pruning it still needs to store all the wtxids, or it can't reconstruct the tree.
Outputs signal to exit the extension block if the contained script is either a minimally encoded P2PKH or P2SH script.
This hits the biggest question I asked in my January post: do you want to allow direct exit payment to legacy addresses? As a block reorg will almost guarantee changing the txid of the resolution tx, that will permanently invalidate all the child txs based on the resolution tx. This is a significant change to the current tx model. To fix this, you need to make exit outputs unspendable for up to 100 blocks. Doing this, however, will confuse legacy wallet users, who do not anticipate funds being locked up for a long period of time. So you can't let the money be sent back to a legacy address directly; it has to go to a new-format address recognized only by new wallets, which understand the lock-up requirement. This, however, introduces friction and some fungibility issues, and I'd expect people to use cross-chain atomic swaps to exchange bitcoin and xbitcoin. To summarise, my questions are:
1. Is it acceptable to have massive txid malleability and transaction chain invalidation for every naturally occurring reorg? Yes: the current spec is OK; No: next question (I'd say no)
2. Is locking up exit outputs the best way to deal with the problem? (I tried really hard to find a better solution but failed)
3. How long should the lock-up period be? The answer could be anywhere from 1 to 100.
4. With a lock-up period, should it allow direct exit to legacy addresses? (I think it's OK if the lock-up is short, like 1-2 blocks. But is that safe enough?)
5. Due to the fungibility issues, it may need a new name for the tokens in the ext block.
Verification of transactions within the extension block shall enforce all currently deployed softforks, along with an extra BIP141-like ruleset.
I suggest allowing only push-only and OP_RETURN scriptPubKeys in the xblock. In particular, you don't want to replicate the sighash bug in the xblock. Also, require the scriptSig to always be empty.
This leaves room for 7 future soft-fork upgrades to relax DoS limits.
Why 7? There are 16 unused witness program versions
Witness script hash v0 shall be worth the number of accurately counted sigops in the redeem script, multiplied by a factor of 8.
There is a flaw here: a witness script with no sigops will be counted as 0 and get a lot of free space.
every 73 bytes in the serialized witness vector is worth 1 additional point.
So is 72 bytes 1 point or 0 points? Maybe it should just scale everything up by 64 or 128 and make 1 witness byte = 1 point, so it won't provide any "free space" in the block.
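The rounding concern can be made concrete in a few lines of Go; the cost rule and numbers are only illustrative of the two options being discussed:

```go
package main

import "fmt"

// pointsPer73 models the proposal's rule of 1 point per full 73 bytes
// of serialized witness data. Under integer division, anything below
// the granularity is free, which is the "free space" objection; the
// suggested alternative (1 witness byte = 1 point, with everything
// else scaled up) has no such remainder.
func pointsPer73(witnessBytes int) int {
	return witnessBytes / 73
}

func main() {
	// 72 bytes cost 0 points, 73 cost 1, and 145 still cost only 1.
	fmt.Println(pointsPer73(72), pointsPer73(73), pointsPer73(145)) // prints "0 1 1"
}
```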
Currently defined witness programs (v0) are each worth 8 points. Unknown witness program outputs are worth 1 point. Any exiting output is always worth 8 points.
I'd suggest at least 16 points for each witness v0 output, so that creating a UTXO is always more expensive than spending one. It might even provide an extra "discount" when a tx has more inputs than outputs. The overall objective is to limit UTXO growth. The ext block should mainly be for making transactions, not a store of value (I'll explain later).
In general I think it's OK, but I'd suggest a higher threshold, like 5000 satoshis. It might also tie the threshold to the output's witness version, so unknown versions could have a lower threshold or none. Alternatively, it could start with a high threshold and leave a backdoor softfork to reduce it.
It is a double-edged sword. While it is good to be able to discard an unused chain, it may create a really bad user experience, and people may even lose money. For example, people may have opened Lightning channels and then find it impossible to close them. So you need to make sure people are not making time-locked txs for years, require people to refresh their channels regularly, and show a big red warning when the deactivation SF is locked in. Generally, an xblock with deactivation should never be used for long-term storage of value. ———— some general comments:
This BIP in its current form is not compatible with BIP141. Since most nodes have already upgraded to BIP141, this BIP must not be activated unless BIP141 fails to activate. However, if the community really endorses the idea of the ext block, I see no reason why we couldn't activate BIP141 first (which could be done in 2 weeks) and then work together to make the ext block possible. The ext block is more complicated than segwit. If it took dozens of developers a whole year to release segwit, I don't see how the ext block could become production-ready with less time and effort.
Another reason to make this BIP compatible with BIP141 is that we also need a malleability fix in the main chain. As the xblock has a deactivation mechanism, it can't be used for long-term value storage.
I think the size and cost limits of the xblock should be lower at the beginning and increase as we find it works smoothly. That could be a predefined growth curve like BIP103, or a backdoor softfork. With the current design, it leaves massive space for miners to fill up with non-tx garbage. Also, I'd like to see a complete SPV fraud-proof solution before the size grows bigger.
a system completely controlled by just a handful of people. Worse still, the network is on the brink of technical collapse.
This is patently untrue, as power dynamics within bitcoin are a complex interwoven level of game theory shared by miners, nodes, developers, merchants and payment processors, and users. Even if one were to make the false assumption that miners control all the power, the reality is that mining pools are either made up of thousands of individual miners who can and do redirect their hashing power, or are private pools run by companies controlled by multiple investors and owners.
Worse still, the network is on the brink of technical collapse.
If and when a fee event happens, bitcoin will be just fine. Wallets can already adjust for fees, and tx fee pressure will be kept reasonable because fees still need to compete with free off-chain solutions. Whether the block size is raised to 2, 4, or 8 MB, it will also be fine (in the short term) as long as corresponding sigop protections are included. The blocksize debate has more to do with bikeshedding and setting a long-term direction for bitcoin than with preventing a short-term technical collapse.
Couldn’t move your existing money
Bitcoin functions just fine as a payment rails system; just ask Coinbase and BitPay.
Had wildly unpredictable fees that were high and rising fast
False. I normally pay 3-5 cents, and txs instantly reach their destination and confirm within 5 minutes to an hour, as normal. CC txs take weeks to months to finally settle.
Allowed buyers to take back payments they’d made after walking out of shops, by simply pressing a button (if you aren’t aware of this “feature” that’s because Bitcoin was only just changed to allow it)
RBF is opt-in, and therefore payment processors that do 0-conf tx approvals won't accept RBF transactions.
Is suffering large backlogs and flaky payments The block chain is full.
Blocks are 60-70% full on average. We have yet to see a continuous backlog lasting more than a few hours at most. A confirmation backlog doesn't prevent txs from being processed, unlike when the Visa/PayPal network goes down and you cannot make a payment at all.
… which is controlled by China
People in China *partially* control one small aspect of the bitcoin ecosystem, and why shouldn't they? They do represent 19% of the world's population. This comment is both misleading and xenophobic.
… and in which the companies and people building it were in open civil war?
Most people are passionate but still friendly behind closed doors. The blocksize debate has spurred decentralization of developer groups and new ideas, which are good things. Sure, there has been some unproductive infighting, but we will get through this and be stronger for it. "Civil wars" exist within and between all currencies anyway, so this is nothing surprising.
Once upon a time, Bitcoin had the killer advantage of low and even zero fees, but it’s now common to be asked to pay more to miners than a credit card would charge.
Credit cards charge 2.8% to 7% in the US and 5-8% in many other countries. Bitcoin once had fees up to 40 cents per tx, and for the past few years normal fees have been consistently 2-8 cents per tx on-chain, and free off-chain.
Because the block chain is controlled by Chinese miners, just two of whom control more than 50% of the hash power. At a recent conference over 95% of hashing power was controlled by a handful of guys sitting on a single stage.
Mining pools are controlled by many miners and interests, not individuals. Miners also share control with many other competing interests and are limited in their ability to harm the bitcoin ecosystem if they were so inclined.
They have chosen instead to ignore the problem and hope it goes away.
This gives them a perverse financial incentive to actually try and stop Bitcoin becoming popular.
The Chinese miners want bitcoin to scale to at least 2 MB in the short term, something that both Core and Classic accommodate. Bitcoin will continue to scale with many other solutions, and ultimately payment channels will allow it to reach Visa-like levels of TPS.
The resulting civil war has seen Coinbase — the largest and best known Bitcoin startup in the USA — be erased from the official Bitcoin website for picking the “wrong” side and banned from the community forums.
Coinbase was re-added to bitcoin.org. Mike conveniently left that important data point out.
has gone from being a transparent and open community to one that is dominated by rampant censorship
There are more subreddits, more forums, and more information than ever before. The blocksize debate does sometimes create divisions in our ecosystem, but the information is all there and easy for anyone to investigate.
But the inability to get news about XT or the censorship itself through to users has some problematic effects.
The failure of XT has nothing to do with a lack of information. If anything, there is too much information available, repeated over and over in many different venues.
One of them, Gregory Maxwell, had an unusual set of views: he once claimed he had mathematically proven Bitcoin to be impossible. More problematically, he did not believe in Satoshi’s original vision.
Satoshi never intended his word to be used as an argument from authority, and if he disagrees he can always come back and contribute. We should not depend upon an authority figure, but on evidence, valid reasoning, and testing.
And indeed back-of-the-envelope calculations suggested that, as he said to me, “it never really hits a scale ceiling” even when looking at more factors than just bandwidth.
Hearn's calculations are wrong. More specifically, they do not take into account Tor, decentralization in locations with bandwidth limitations, bandwidth softcaps imposed by ISPs, the true scale of historical bandwidth increases, or malicious actors attacking the system with sophisticated attacks.
Once the 5 developers with commit access to the code had been chosen and Gavin had decided he did not want to be the leader, there was no procedure in place to ever remove one.
The 45 developers who contributed to Bitcoin Core in 2015 could be replaced instantly, with little effort, if the community wanted. Ultimately, the nodes, miners, and users control which code they run, and no group of developers can force them to upgrade. In fact, Bitcoin Core deliberately avoids an auto-update feature in its releases, at the cost of usability, specifically to ensure that users have to actively choose all new features and can opt out simply by not upgrading. ... end of part one...
Forcenet: an experimental network with a new header format | Johnson Lau | Dec 04 2016
Johnson Lau on Dec 04 2016: Based on Luke Dashjr's code and BIP: https://github.com/luke-jr/bips/blob/bip-mmhf/bip-mmhf.mediawiki , I created an experimental network to show how a new header format may be implemented. Basically, the header hash is calculated in a way that non-upgrading nodes would see it as a block with only the coinbase tx and zero output value. They are effectively broken, as they won't see any transactions confirmed. This allows rewriting most of the rules related to block and transaction validity. This technique goes by different names, such as soft-hardfork, firmfork, or evil softfork, and could itself be a controversial topic. However, I'd rather not focus on its soft-hardfork property, as it would be trivial to turn this into a true hardfork (e.g. setting the sign bit in the block nVersion, or setting the most significant bit in the dummy coinbase nLockTime). Instead of its soft-HF property, I think the more interesting thing is the new header format. The current bitcoin header has only 80 bytes. It provides only 32 bits of nonce space, which is far from enough for ASICs. It also provides no room for committing to additional data. Therefore, people are forced to put many different kinds of data in the coinbase transaction, such as merge-mining commitments and the segwit commitment. That is not an ideal solution, especially for light wallets. Following the practice of segwit development of making an experimental network (segnet), I made something similar and call it the forcenet (as it forces legacy nodes to follow the post-fork chain). The header of forcenet is mostly as described in Luke's BIP, but I have made some amendments as I implemented it.
The format is (size in bytes in parentheses; little endian): height (4), BIP9 signalling field (4), hardfork signalling field (3), merge-mining hardfork signalling field (1), prev hash (32), timestamp (4), nonce1 (4), nonce2 (4), nonce3 (compactSize + variable), hash TMR (32), hash WMR (32), total tx size (8), total tx weight (8), total sigops (8), number of tx (4), merkle branches leading to header C (compactSize + 32-byte hashes). In addition to increasing the max block size, I also showed how the calculation and validation of the witness commitment may be changed with a new header. For example, since the commitment is no longer in the coinbase tx, we don't need to use a 0000....0000 hash for the coinbase tx as in BIP141. Things not yet done:
The new merkle root algorithm described in the MMHF BIP
The nTxsSigops has no meaning currently
Communication with legacy nodes. This version can’t talk to legacy nodes through the P2P network, but theoretically they could be linked up with a bridge node
A new block weight definition to provide incentives for slowing down UTXO growth
Many other interesting hardfork ideas, and softfork ideas that work better with a header redesign
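Transcribed into a Go struct, the header layout listed above looks roughly like this. The field names are invented for illustration; sizes follow the post, with the variable-length fields (nonce3 and the merkle branches) represented as slices:

```go
package main

import "fmt"

// forcenetHeader sketches the header fields from the post. All
// fixed-width fields are little endian on the wire; names here are
// illustrative, not from the actual implementation.
type forcenetHeader struct {
	Height        uint32     // 4 bytes
	BIP9Bits      uint32     // 4 bytes, BIP9 signalling field
	HardforkBits  [3]byte    // 3 bytes, hardfork signalling field
	MMHardforkBit byte       // 1 byte, merge-mining hardfork signalling
	PrevHash      [32]byte   // previous block hash
	Timestamp     uint32     // 4 bytes
	Nonce1        uint32     // 4 bytes
	Nonce2        uint32     // 4 bytes
	Nonce3        []byte     // compactSize + variable
	HashTMR       [32]byte   // transaction merkle root
	HashWMR       [32]byte   // witness merkle root
	TotalTxSize   uint64     // 8 bytes
	TotalTxWeight uint64     // 8 bytes
	TotalSigops   uint64     // 8 bytes
	NumTx         uint32     // 4 bytes
	Branches      [][32]byte // merkle branches leading to header C
}

// fixedSize sums the fixed-width fields:
// 4+4+3+1+32+4+4+4+32+32+8+8+8+4 = 148 bytes before the
// variable-length parts.
func fixedSize() int {
	return 4 + 4 + 3 + 1 + 32 + 4 + 4 + 4 + 32 + 32 + 8 + 8 + 8 + 4
}

func main() {
	fmt.Println(fixedSize()) // prints "148"
}
```

Note how much larger this is than the legacy 80-byte header, with most of the growth going to commitments (TMR, WMR, totals) and extra nonce space.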
For easier testing, forcenet has the following parameters: hardfork at block 200; segwit always activated; 1-minute blocks with 40000 (pre-fork) and 80000 (post-fork) weight limits; 50-block coinbase maturity; 21000-block halving; 144-block retarget. How to join: code is at https://github.com/jl2012/bitcoin/tree/forcenet1 ; start with "bitcoind -forcenet". Connection: I'm running a node at 8333.info on the default port (38901). Mining: there is only basic internal mining support. Limited GBT support is theoretically possible but needs more hacking. To use the internal miner, write a shell script to repeatedly call "bitcoin-cli -forcenet generate 1". New RPC commands: getlegacyblock and getlegacyblockheader, which generate blocks and headers that are compatible with legacy nodes. This is largely a work in progress, so expect a reset every couple of weeks. jl2012

original: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-December/013338.html
Extension block proposal by Jeffrey et al | Luke Dashjr | Apr 04 2017
Luke Dashjr on Apr 04 2017: Recently there has been some discussion of an apparent work-in-progress extension block proposal by Christopher Jeffrey, Joseph Poon, Fedor Indutny, and Steven Pair. Since this hasn't been formally posted on the ML yet, perhaps it is still in pre-draft stages and not quite ready for review, but in light of public interest, I think it is appropriate to open it to discussion, and toward this end, I have reviewed the current revision. For reference, the WIP proposal itself is here:
==Overall analysis & comparison== This is a relatively complicated proposal, creating a lot of additional technical debt and complexity in comparison to both BIP 141 and hardforks. It offers no actual benefits beyond BIP 141 or hardforks, so seems irrational to consider at face value. In fact, it fits much better the inaccurate criticisms made by segwit detractors against BIP 141. That being said, this proposal is very interesting in construction and is for the most part technically sound. While ill-fit to merely making blocks larger, it may be an ideal fit for fundamentally different block designs such as Rootstock and MimbleWimble in absence of decentralised non-integrated sidechains (extension blocks are fundamentally sidechains tied into Bitcoin directly). ==Fundamental problem== Extension blocks are a risk of creating two classes of "full nodes": those which verify the full block (and are therefore truly full nodes), and those which only verify the "base" block. However, because the extension is consensus-critical, the latter are in fact not full nodes at all, and are left insecure like pseudo-SPV (not even real SPV) nodes. This technical nature is of course true of a softfork as well, but softforks are intentionally designed such that all nodes are capable of trivially upgrading, and there is no expectation for anyone to run with pre-softfork rules. In general, hardforks can provide the same benefits of an extension block, but without the false expectation and pointless complexity. ==Other problems & questions==
These outpoints may not be spent inside the mempool (they must be redeemed
from the next resolution txid in reality). This breaks the ability to spend unconfirmed funds in the same block (as is required for CPFP). The extension block's transaction count is not cryptographically committed-to anywhere. (This is an outstanding bug in Bitcoin today, but impractical to exploit in practice; however, exploiting it in an extension block may not be as impractical, and it should be fixed given the opportunity.)
The merkle root is to be calculated as a merkle tree with all extension
block txids and wtxids as the leaves. This needs to elaborate how the merkle tree is constructed. Are all the txids followed by all the wtxids (tx hashes)? Are they alternated? Are txid and wtxid trees built independently and merged at the tip?
Output script code aside from witness programs, p2pkh or p2sh is considered
invalid in extension blocks. Why? This prevents extblock users from sending to bare multisig or other various possible destinations. (While static address forms do not exist for other types, they can all be used by the payment protocol.) Additionally, this forbids datacarrier (OP_RETURN), and forces spam to create unprovably-unspendable UTXOs. Is that intentional?
The maximum extension size should be intentionally high.
This has the same "attacks can do more damage than ordinary benefit" issue as BIP141, but even more extreme since it is planned to be used for future size increases.
Witness key hash v0 shall be worth 1 point, multiplied by a factor of 8.
What is a "point"? What does it mean multiplied by a factor of 8? Why not just say "8 points"?
Witness script hash v0 shall be worth the number of accurately counted
sigops in the redeem script, multiplied by a factor of 8. Please define "accurately counted" here. Is this using BIP16 static counting, or accurately counting sigops during execution?
To reduce the chance of having redeem scripts which simply allow for garbage
data in the witness vector, every 73 bytes in the serialized witness vector is worth 1 additional point. Is the size rounded up or down? (If down, 72-byte scripts will carry 0 points.) ==Trivial & process== BIPs must be in MediaWiki format, not Markdown. They should be submitted for discussion to the bitcoin-dev mailing list, not social media and news.
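Read literally, and assuming the size rounds down (which is exactly the ambiguity raised above), the quoted point rules could be sketched as:

```python
def witness_points(n_p2wpkh_inputs, wsh_sigops, witness_bytes):
    # hypothetical reading of the quoted DoS-point rules:
    #  - each witness key hash (P2WPKH) v0: 1 point * factor 8
    #  - each witness script hash (P2WSH) v0: accurately counted sigops * factor 8
    #  - plus 1 point per full 73 bytes of serialized witness data (floor assumed)
    return 8 * n_p2wpkh_inputs + 8 * wsh_sigops + witness_bytes // 73
```

With floor rounding, a 72-byte witness contributes no size points at all, which is the loophole the review asks about.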
This specification defines a method of increasing bitcoin transaction
throughput without altering any existing consensus rules. This is inaccurate. Even softforks alter consensus rules.
Bitcoin retargetting ensures that the time in between mined blocks will be
roughly 10 minutes. It is not possible to change this rule. There has been great debate regarding other ways of increasing transaction throughput, with no proposed consensus-layer solutions that have proven themselves to be particularly safe. Block time seems entirely unrelated to this spec. Motivation is unclear.
Extension blocks leverage several features of BIP141, BIP143, and BIP144 for
transaction opt-in, serialization, verification, and network services, and as such, extension block activation entails BIP141 activation. As stated in the next paragraph, the rules in BIP 141 are fundamentally incompatible with this one, so saying BIP 141 is activated is confusingly incorrect.
This specification should be considered an extension and modification to
these BIPs. Extension blocks are not compatible with BIP141 in its current form, and will require a few minor additional rules. Extension blocks should be compatible with BIP 141, there doesn’t appear to be any justification for not making them compatible.
This specification prescribes a way of fooling non-upgraded nodes into
believing the existing UTXO set is still behaving as they would expect. The UTXO set behaves fundamentally differently for old nodes with this proposal, albeit in a mostly compatible manner.
Note that canonical blocks containing entering outputs MUST contain an
extension block commitment (all zeroes if nothing is present in the extension block). Please explain why in Rationale.
Coinbase outputs MUST NOT contain witness programs, as they cannot be
swept by the resolution transaction due to previously existing consensus rules. Seems like an annoying technical debt. I wonder if it can be avoided.
The genesis resolution transaction MAY also include a 1-100 byte pushdata in
the first input script, allowing the miner of the genesis resolution to add a special message. The pushdata MUST be castable to a true boolean. Why? Unlike the coinbase, this seems to create additional technical debt with no apparent purpose. Better to just have a consensus rule every input must be null.
The resolution transaction's version MUST be set to the uint32 max (`2^32 - 1`). Transaction versions are signed, so I assume this is actually simply -1. (While signed transaction versions seemed silly to me, using it for special cases like this actually makes sense.)
Exiting the extension block
Should specify that spending such an exit must use the resolution txid, not the extblock's txid.
On the policy layer, transaction fees may be calculated by transaction cost
as well as additional size/legacy-sigops added to the canonical block due to entering or exiting outputs. BIPs should not specify policy at all. Perhaps prefix "For the avoidance of doubt:" to be clear that miners may perform any fee logic they like.
Transactions within the extended transaction vector MAY include a witness
vector using BIP141 transaction serialization. Since extblock transactions are all required to be segwit, why wouldn't this be mandatory?
BIP141's nested P2SH feature is no longer available, and no longer a
consensus rule. Note this makes adoption slower: wallets cannot use the extblock until the economy has updated to support segwit-native addresses.
To reduce the chance of having redeem scripts which simply allow for garbage
data in the witness vector, every 73 bytes in the serialized witness vector is worth 1 additional point. Please explain why 73 bytes in Rationale.
This leaves room for 7 future soft-fork upgrades to relax DoS limits.
How so? Please explain.
A consensus dust threshold is now enforced within the extension block.
If the second-highest transaction version bit (the 30th bit) is set to 1 within an extension block transaction, an extra 700 bytes is reserved in the transaction space used up in the block. Why wouldn't users set this on all transactions?
default_witness_commitment has been renamed to
default_extension_commitment and includes the extension block commitment script. default_witness_commitment was never part of the GBT spec. At least describe what this new key is.
Deployment name: extblk (appears as !extblk in GBT).
Should be just extblk if backward compatibility is supported (and !extblk when not).
The "deactivation" deployment'...[message truncated here by reddit bot]...
Disclaimer: My preferred plan for bitcoin is soft-forking segregated witness in asap, and scheduling a 2MB hardforked blocksize increase sometime mid-2017, and I think doing a 2MB hardfork anytime soon is pretty crazy. Also, I like micropayments, and until I learnt about the lightning network proposal, bitcoin didn't really interest me because a couple of cents in fees is way too expensive, and a few minutes is way too slow. Maybe that's enough to make everything I say uninteresting to you, dear reader, in which case I hope this disclaimer has saved you some time. :) Anyway there's now a good explanation of what segwit does beyond increasing the blocksize via accounting tricks or whatever you want to call it: https://bitcoincore.org/en/2016/01/26/segwit-benefits/ I'm hopeful that makes it a bit easier to see why many people are more excited by segwit than by a 2MB hardfork. In any event hopefully it's easy to see why it might be a good idea to do segwit asap, even if you do a hardfork to double the blocksize first. If you were to do a 2MB hardfork first, and then apply segwit on top of that, I think there are a number of changes you'd want to consider, rather than just doing a straight merge. Number one is that with the 75% discount for witness data and a 2MB blocksize, you run the risk of worst-case 8MB blocks, which seems to be too large at present. The obvious solution is to change the discount rate, or limit witness data by some other mechanism. The drawback is that this removes some of the benefits of segwit in reducing UTXO growth and in moving to a simpler cost formula. Not hard, but it's a tradeoff, and exactly what to do isn't obvious (to me, anyway). If IBLT or weak blocks or an improved relay network or something similar comes out after deploying segwit, does it then make sense to increase the discount or otherwise raise the limit on witness data, and is it possible to do this without another hardfork and corresponding forced upgrade?
For the core roadmap, I think the answer would be "do segwit as a soft-fork now so no one has to upgrade, and after IBLT/etc is ready perhaps do a hard-fork then because it will be safer" so there's only one forced upgrade for users. Is some similar plan possible if there's an "immediate" hard fork to increase the block size, to avoid users getting hit with two hardforks in quick succession? Number two is how to deal with sighashes -- segwit allows the hash calculation to be changed, so that for 2MB of transaction data (including witness data), you only need to hash up to around 4MB of data when verifying signatures, rather than potentially gigabytes of data. Compare that to Gavin's commits to the 0.11.2 branch in Classic which include a 1.3GB limit on sighash data to make the 2MB blocksize safe -- which is necessary because the quadratic scaling problem means that the 1.3GB limit can already be hit with 1MB blocks. Do you keep the new limit once you've got 2MB+segwit, or plan to phase it out as more transactions switch to segwit, or something else? Again, I think with the core roadmap the plan here is straightforward -- do segwit now, get as many wallets/transactions switched over to segwit asap (whether due to all the bonus features, or just that they're cheaper in fees), and then revise the sighash limits later as part of soft-forking to increase the blocksize. Finally, and I'm probably projecting my own ideas here, I think a 2MB hardfork in 2017 would give ample opportunity to simultaneously switch to a "validation cost metric" approach, making fees simpler to calculate and avoiding people being able to make sigop attacks to force near-empty blocks and other such nonsense. I think there's even the possibility of changing the limit so that in future it can be increased by soft-forks, instead of needing a hard fork for increases as it does now.
ie, I think if we're clever, we can get a gradual increase to 1.8MB-2MB starting in the next few months via segwit with a soft-fork, then have a single hard-fork flag day next year, that allows the blocksize to be managed in a forwards compatible way more or less indefinitely. Anyhoo, I'd love to see more technical discussion of classic vs core, so in the spirit of "write what you want to read", voila...  I wrote most of the text for that, though the content has had a lot of corrections from people who understand how it works better than I do; see the github pull request if you care --https://github.com/bitcoin-core/website/pull/67  https://www.reddit.com/btc/comments/42mequ/jtoomim_192616_utc_my_plan_for_segwit_was_to_pull/  I've done no research myself; jtoomim's talk at Hong Kong said 2MB/4MB seemed okay but 8MB/9MB was "pushing it" -- http://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/bip101-block-propagation-data-from-testnet/ and his talks with miners indicated that BIP101's 8MB blocks were "Too much too fast" https://docs.google.com/spreadsheets/d/1Cg9Qo9Vl5PdJYD4EiHnIGMV3G48pWmcWI3NFoKKfIzU/edit#gid=0 Tradeblock's stats also seem to suggest 8MB blocks is probably problematic for now: https://tradeblock.com/blog/bitcoin-network-capacity-analysis-part-6-data-propagation  https://botbot.me/freenode/bitcoin-wizards/2015-12-09/?msg=55794797&page=4
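The "2MB hardfork plus segwit gives worst-case 8MB blocks" arithmetic above falls straight out of BIP141-style weight accounting (a sketch under those rules; the function name is mine):

```python
def worst_case_serialized_size(base_limit_bytes, witness_scale=4):
    # With a 75% discount, the weight limit is witness_scale * base limit and
    # each witness byte costs 1 weight unit, so a block of almost pure witness
    # data can approach witness_scale * base_limit raw bytes on the wire.
    return witness_scale * base_limit_bytes
```

So a 1MB base limit gives ~4MB worst case, and a 2MB hardfork that kept the same discount would allow ~8MB worst-case blocks, which is why the post suggests changing the discount or limiting witness data separately.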
Consensus critical limits in Bitcoin protocol and proposed block resources limit accounting | Johnson Lau | Jan 27 2017
Johnson Lau on Jan 27 2017: There are many consensus critical limits scattered all over the Bitcoin protocol. The first part of this post is to analyse what the current limits are. These limits could be arranged into different categories:
Script level limit. Some limits are restricted to scripts, including size (10000 bytes), nOpCount (201), stack plus alt-stack size (1000), and stack push size (520). If these limits are passed, they won’t have any effects on the limits of the other levels.
Output value limit: any single output value must be >=0 and <= 21 million bitcoin
Transaction level limit: The only transaction-level limit we currently have is that the total output value must be equal to or smaller than the total input value for non-coinbase txs.
Block level limit: there are several block level limits:
a. The total output value of all txs must be equal to or smaller than the total input value plus the block reward.
b. The serialised size including block header and transactions must not be over 1MB (or 4,000,000 in terms of tx weight with segwit).
c. The total nSigOpCount must not be over 20,000 (or 80,000 nSigOpCost with segwit).
There is an unavoidable layer violation in terms of the block-level total output value. However, all the other limits are restricted to their own level. In particular, the counting of nSigOp does not require execution of scripts. BIP109 (now withdrawn) tried to change this by implementing a block-level SignatureHash limit and SigOp limit, counting the accurate values by running the scripts.
So currently, we have 2 somewhat independent block resource limits: weight and SigOp. A valid block must not exceed either of these limits. However, miners trying to maximise fees under these limits need to solve a non-linear equation. It's even worse for wallets trying to estimate fees, as they have no idea which txs miners are trying to include. In reality, everyone just ignores SigOp for fee estimation, as the size/weight is almost always the dominant factor. In order not to introduce further non-linearity with segwit, after examining different alternatives, we decided that the block weight limit should be a simple linear function: 3*base size + total size, which allows a bigger block size and provides incentives to limit UTXO growth. With normal use, this allows up to 2MB of block size, and even more if multi-sig becomes more popular. A side effect is that it allows a theoretical way to fill up the block to 4MB with mostly non-transaction data, but that would only happen if a miner decided to do it, due to non-standardness. (And this is actually not too bad, as witness data could be pruned in the future.) Some also criticised that the weight accounting would make a "simple 2MB hardfork" more dangerous, as the theoretical limit would be 8MB, which is too much.
This is a complete straw man argument, as with a hardfork, one could introduce any rules at will, including revolutionising the calculation of block resources, as shown below.
—————————
Proposal: a new block resources limit accounting
Objectives:
linear fee estimation
a single, unified, block level limit for everything we want to limit
do not require expensive script evaluation
the maximum base block size is about 1MB (for a hardfork with bigger block, it just needs to upscale the value)
a hardfork is done (although some of these could also be done with a softfork)
Version 1: without segwit
The tx weight is the maximum of the following values:
— Serialised size in bytes
— accurate nSigOpCount * 50 (static counting of SigOps in scriptSig, redeemScript, and the previous scriptPubKey, but not the new scriptPubKey)
The block level limit is 1,000,000.
Although this looks similar to the existing approach, it actually makes fee estimation a linear problem. Wallets may now calculate both values for a tx, take the maximum, and compare with other txs on the same basis. On the other hand, the total size and SigOpCount of a block may never go above the existing limits (1MB and 20,000), no matter what the txs look like. (In some edge cases, the max block size might be smaller than 1MB, if the weight of some transactions is dominated by the SigOpCount.)
Version 2: extending version 1 with segwit
The tx weight is the maximum of the following values:
— Serialised size in bytes * 2
— Base size * 3 + total size
— accurate SigOpCount * 50 (as a hardfork, segwit and non-segwit SigOps could be counted in the same way, with no need to scale)
The block level limit is 4,000,000.
For similar reasons, fee estimation is also a linear problem. An interesting difference between this and BIP141 is that this limits the total block size to under 2MB, as 4,000,000 / 2 (the 2 being the scaling factor for the serialised size). If witness inflation really happens (which I highly doubt, as it's a miner-initiated attack), we could introduce a similar limit with just a softfork.
Version 3: extending version 2 to limit UTXO growth
The tx weight is the maximum of the following values:
— Serialised size in bytes * 2
— Adjusted size = Base size * 3 + total size + (number of non-OP_RETURN outputs - number of inputs) * 4 * 41
— accurate SigOpCount * 50
I have explained the rationale for the adjusted size in an earlier post but will just repeat it here.
“4” in the formula is the witness scale factor, and “41” is the minimum size of a transaction input (32 hash + 4 index + 4 sequence + 1 for empty scriptSig). This requires everyone to pay a significant portion of the spending fee when they create a UTXO, so they pay less when it is spent. For transactions with 1:1 input and output ratios, the effect is cancelled out and won't actually affect the weight estimation. When spending becomes cheaper, even UTXOs with lower value might become economical to spend, which helps clean up the UTXO set. Since the UTXO set is the most expensive resource, I strongly believe that any block size increase proposal must somehow discourage further growth of the set.
Version 4: including a sighash limit
This is what I actually implemented in my experimental hardfork network: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-January/013472.html I'm not repeating it here, but it shows how further limits might be added on top of the old ones through a softfork. Basically, you just add more metrics, and always take the maximum one.
original: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-January/013504.html
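The three weight definitions in the proposal above can be sketched directly (a sketch only; function names are mine, sizes in bytes):

```python
def weight_v1(serialized_size, sigops):
    # version 1: max(serialised size, accurate nSigOpCount * 50)
    return max(serialized_size, sigops * 50)

def weight_v2(base_size, total_size, sigops):
    # version 2: max(total*2, base*3 + total, sigops*50)
    return max(total_size * 2, base_size * 3 + total_size, sigops * 50)

def weight_v3(base_size, total_size, sigops, n_outputs, n_inputs):
    # version 3: version 2 with the UTXO-growth adjustment
    # (n_outputs counts non-OP_RETURN outputs only)
    adjusted = base_size * 3 + total_size + (n_outputs - n_inputs) * 4 * 41
    return max(total_size * 2, adjusted, sigops * 50)
```

Taking the maximum of linear terms keeps fee estimation linear, as the post argues: a 1-in/1-out spend is unaffected by the version 3 adjustment, while a 1-in/3-out spend pre-pays for the two UTXOs it adds.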
Spoonnet: another experimental hardfork | Johnson Lau | Feb 06 2017
Johnson Lau on Feb 06 2017: Finally got some time over the Chinese New Year holiday to code and write this up. This is not the same as my previous forcenet ( https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-January/013472.html ). It is much simpler. Trying to activate it on testnet will get you banned. Trying to activate it on mainnet before consensus is reached will make you lose money. This proposal includes the following features:
A fixed starting time. Not dependent on miner signalling. However, it requires at least 51% of miners to actually build the new block format in order to get activated.
It has no mechanism to prevent a split. If 49% of miners insist on the original chain, they could keep going. Split prevention is a social problem, not a technical one.
It is compatible with existing Stratum mining protocol. Only pool software upgrade is needed
A new extended and flexible header is located at the witness field of the coinbase transaction
It is backward compatible with existing light wallets
Dedicated space for miners to put anything they want, which bitcoin users could completely ignore. Merge-mining friendly.
Small header space for miners to include non-consensus enforced bitcoin related data, useful for fee estimation etc.
A new transaction weight formula to encourage responsible use of UTXO
A linear growth of actual block size until certain limit
Sighash O(n^2) protection for legacy (non-segwit) outputs
Optional anti-transaction replay
A new optional coinbase tx format that allows additional inputs, including spending of immature previous coinbase outputs
Specification [Rationales]: Activation:
A "hardfork signalling block" is a block with the sign bit of the header nVersion set [Clearly invalid for old nodes; easy opt-out for light wallets]
If the median-time-past of the past 11 blocks is smaller than the HardForkTime (exact time to be determined), a hardfork signalling block is invalid.
Child of a hardfork signalling block MUST also be a hardfork signalling block
Initial hardfork signalling is optional, even if the HardForkTime has passed [requires at least 51% of miners to actually build the new block format]
HardForkTime is determined by a broad consensus of the Bitcoin community. This is the only way to prevent a split.
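The activation rules above amount to a simple validity check (a sketch; names are mine):

```python
def signalling_block_valid(signals, parent_signals, median_time_past, hardfork_time):
    """Check the hardfork-signalling rules for one block."""
    # a hardfork signalling block before HardForkTime is invalid
    if signals and median_time_past < hardfork_time:
        return False
    # the child of a signalling block must also be a signalling block
    if parent_signals and not signals:
        return False
    # signalling itself remains optional once HardForkTime has passed
    return True
```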
Main header refers to the original 80 bytes bitcoin block header
A hardfork signalling block MUST have an additional extended header
The extended header is placed in the witness field of the coinbase transaction [There are 2 major advantages: 1. the coinbase witness is otherwise useless; 2. it significantly simplifies the implementation with its stack structure]
There must be exactly 3 witness items (Header1; Header2 ; Header3)
**Header1 must be exactly 32 bytes: the original transaction hash Merkle root. **Header2 is the secondary header. It must be 36-80 bytes. The first 4 bytes must be the little-endian encoded number of transactions (minimum 1). The next 32 bytes must be the witness Merkle root (to be defined later). The rest, if any, has no consensus meaning. However, miners MUST NOT use this space for non-bitcoin purposes [the additional space allows non-consensus-enforced data to be included, easily accessible to light wallets] **Header3 is the miner dedicated space. It must not be larger than 252 bytes. Anything put here has no consensus meaning [space for merge mining; non-full nodes could completely ignore data in this space; 252 is the maximum size allowed for a single-byte CompactSize]
The main header commitment is H(Header1|H(H(Header2)|H(Header3))) H() = dSHA256() [The hardfork is transparent to light wallets, except one more 32-byte hash is needed to connect a transaction to the root]
To place the ext header, segwit becomes mandatory after hardfork
A “backdoor” softfork to relax the size limits of Header 2 and Header 3:
A special BIP9 softfork is defined with bit-15. If this softfork is activated, full nodes will not enforce the size limit for Header 2 and Header 3. [To allow header expansion without a hardfork. Avoid miner abuse while providing flexibility. Expansion might be needed for new commitments like fraud proof commitments]
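The main header commitment defined in the extended-header rules above is straightforward to compute (dSHA256 is Bitcoin's usual double SHA-256):

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def main_header_commitment(header1: bytes, header2: bytes, header3: bytes) -> bytes:
    # H(Header1 | H(H(Header2) | H(Header3))), with H() = dSHA256()
    return dsha256(header1 + dsha256(dsha256(header2) + dsha256(header3)))
```

Note the tree shape: connecting a transaction through Header1 up to the commitment costs only one extra 32-byte hash, which is the light-wallet cost mentioned in the rationale.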
Hardfork network version bit is 0x02000000. A tx is invalid if the highest nVersion byte is not zero, and the network version bit is not set.
Masked tx version is nVersion with the highest byte masked. If the masked version is 3 or above, sighash for OP_CHECKSIG and the like is calculated using BIP143, except 0x02000000 is added to the nHashType (the nHashType in the signature is still a 1-byte value) [ensures a clean split of signatures; optionally fixes the O(n^2) problem]
Pre-hardfork policy change: nVersion is determined by the masked tx version for policy purpose. Setting of Pre-hardfork network version bit 0x01000000 is allowed.
Only txs with masked version below 3 are counted. [because they are fixed by the BIP-143 like signature]
Each SigHashSize is defined as 1 tx weight (defined later).
SIGHASH_SCALE_FACTOR is 90 (see the BIP above)
New tx weight definition:
Weight of a transaction is the maximum of the 4 following metrics:
** The total serialised size * 2 * SIGHASH_SCALE_FACTOR (size defined by the witness tx format in BIP144) ** The adjusted size = (Transaction weight by BIP141 - (number of inputs - number of non-OP_RETURN outputs) * 41) * SIGHASH_SCALE_FACTOR ** nSigOps * 50 * SIGHASH_SCALE_FACTOR. All SigOps are equal (no witness scaling). For non-segwit txs, the sigops in output scriptPubKey are not counted, while the sigops in input scriptPubKey are counted. ** SigHashSize defined in the last section Translating to new metric, the current BIP141 limit is 360,000,000. This is equivalent to 360MB of sighashing, 2MB of serialised size, 4MB of adjusted size, or 80000 nSigOp. See rationales in this post: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-January/013472.html Block weight growing by time:
Numbers for example only. Exact number to be determined.
Block weight at HardForkTime is (5,000,000 * SIGHASH_SCALE_FACTOR)
By every 16 seconds growth of the median-time-past, the weight is increased by (1 * SIGHASH_SCALE_FACTOR)
The growth stops at (16,000,000 * SIGHASH_SCALE_FACTOR)
The growth does not depend on the actual hardfork time. It’s only based on median-time-past [using median-time-past so miners have no incentive to use a fake timestamp]
The limit for serialized size is 2.5 to 8MB in about 8 years. [again, numbers for example only]
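With the example numbers above, the growing block weight limit is just a clamped linear function of median-time-past (a sketch; names are mine and the constants are the post's placeholder values):

```python
SIGHASH_SCALE_FACTOR = 90

def block_weight_limit(median_time_past, hardfork_time):
    # starts at 5,000,000 weight units at HardForkTime, grows 1 unit per
    # 16 seconds of median-time-past, and stops at 16,000,000
    grown = 5_000_000 + max(0, median_time_past - hardfork_time) // 16
    return min(grown, 16_000_000) * SIGHASH_SCALE_FACTOR
```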
New coinbase transaction format:
Existing coinbase format is allowed, except the new extended header in the coinbase witness. No OP_RETURN witness commitment is needed.
A new coinbase format is defined. The tx may have 1 or more inputs. The outpoint of the first input MUST have an n value of 0xffffffff, and use the previous block hash as the outpoint hash [This allows paying to the child of a particular block by signing the block hash]
ScriptSig of the first (coinbase) input is not executed. The size limit increased from 100 to 252 (same for old coinbase format)
Additional inputs MUST provide a valid scriptSig and/or witness for spending
Additional inputs may come from immature previous coinbase outputs [this allows previous blocks to pay subsequent blocks to encourage confirmations]
Witness merkle root:
If the coinbase is in the old format, the witness merkle root is the same as in BIP141, with the witness hash of the coinbase tx set to 0 (without the 32-byte witness reserved value)
If the coinbase is in new format, the witness hash of the coinbase tx is calculated by first removing the extended header
The witness merkle root is put in the extended header 2, not as an OP_RETURN output in coinbase tx.
The witness merkle root becomes mandatory. (It was optional in BIP141)
Other consensus changes:
BIP9 will ignore the sign bit. [Setting the sign bit now is invalid so this has no real consensus impact]
An experimental implementation of the above spec can be found at https://github.com/jl2012/bitcoin/tree/spoonnet1 Unlike my previous effort, "forcenet", "spoonnet" is a full hardfork that will get you banned on the existing network. I haven't had time to test the code yet, and it has not been independently reviewed. But it passes all existing tests in Bitcoin Core. No one should use this in production, but I think it works fine on testnet like a normal bitcoind (as long as it is not activated). Things not implemented yet:
Post-hardfork support for old light wallets
Wallet support, especially anti-tx-replay
New p2p message to transmit secondary header (lower priority)
Full mining and mempool support (not my priority)
Potential second stage change: Relative to the actual activation time, there could be a second stage with more drastic changes to fix one or both of the following problems:
SHA256 shortcut like ASICBoost. All fixes to ASICBoost are not very elegant. But the question is, is it acceptable to have bitcoin-specific patent in the consensus protocol? Still, I believe the best way to solve this problem is the patent holder(s) to kindly som...[message truncated here by reddit bot]...
Bitcoin Core 0.10.0 released | Wladimir | Feb 16 2015
Wladimir on Feb 16 2015: Bitcoin Core version 0.10.0 is now available from: https://bitcoin.org/bin/0.10.0/ This is a new major version release, bringing both new features and bug fixes. Please report bugs using the issue tracker at github: https://github.com/bitcoin/bitcoin/issues The whole distribution is also available as a torrent: https://bitcoin.org/bin/0.10.0/bitcoin-0.10.0.torrent magnet:?xt=urn:btih:170c61fe09dafecfbb97cb4dccd32173383f4e68&dn=0.10.0&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.publicbt.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.ccc.de%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Fopen.demonii.com%3A1337&ws=https%3A%2F%2Fbitcoin.org%2Fbin%2F
Upgrading and downgrading
How to Upgrade
If you are running an older version, shut it down. Wait until it has completely shut down (which might take a few minutes for older versions), then run the installer (on Windows) or just copy over /Applications/Bitcoin-Qt (on Mac) or bitcoind/bitcoin-qt (on Linux).
Downgrading warning
Because release 0.10.0 makes use of headers-first synchronization and parallel block download (see further), the block files and databases are not backwards-compatible with older versions of Bitcoin Core or other software:
Blocks will be stored on disk out of order (in the order they are
received, really), which makes it incompatible with some tools or other programs. Reindexing using earlier versions will also not work anymore as a result of this.
The block index database will now hold headers for which no block is
stored on disk, which earlier versions won't support. If you want to be able to downgrade smoothly, make a backup of your entire data directory. Without this your node will need to start syncing (or importing from bootstrap.dat) anew afterwards. It is possible that the data from a completely synchronised 0.10 node may be usable in older versions as-is, but this is not supported and may break as soon as the older version attempts to reindex. This does not affect wallet forward or backward compatibility.
Notable changes
Faster synchronization
Bitcoin Core now uses 'headers-first synchronization'. This means that we first ask peers for block headers (a total of 27 megabytes, as of December 2014) and validate those. In a second stage, when the headers have been discovered, we download the blocks. However, as we already know about the whole chain in advance, the blocks can be downloaded in parallel from all available peers. In practice, this means a much faster and more robust synchronization. On recent hardware with a decent network link, it can be as little as 3 hours for an initial full synchronization. You may notice slower progress in the very first few minutes, when headers are still being fetched and verified, but it should gain speed afterwards. A few RPCs were added/updated as a result of this:
getblockchaininfo now returns the number of validated headers in addition to
the number of validated blocks.
getpeerinfo lists both the number of blocks and headers we know we have in
common with each peer. While synchronizing, the heights of the blocks that we have requested from peers (but haven't received yet) are also listed as 'inflight'.
A new RPC getchaintips lists all known branches of the block chain,
including those we only have headers for.

Transaction fee changes

This release automatically estimates how high a transaction fee (or how high a priority) transactions require to be confirmed quickly. The default settings will create transactions that confirm quickly; see the new 'txconfirmtarget' setting to control the tradeoff between fees and confirmation times. Fees are added by default unless the 'sendfreetransactions' setting is enabled.

Prior releases used hard-coded fees (and priorities), and would sometimes create transactions that took a very long time to confirm. Statistics used to estimate fees and priorities are saved in the data directory in the fee_estimates.dat file just before program shutdown, and are read in at startup.

New command line options for transaction fee changes:
-txconfirmtarget=n : create transactions that have enough fees (or priority)
so they are likely to begin confirmation within n blocks (default: 1). This setting is overridden by the -paytxfee option.
-sendfreetransactions : Send transactions as zero-fee transactions if possible
(default: 0)

New RPC commands for fee estimation:
estimatefee nblocks : Returns approximate fee-per-1,000-bytes needed for
a transaction to begin confirmation within nblocks. Returns -1 if not enough transactions have been observed to compute a good estimate.
estimatepriority nblocks : Returns approximate priority needed for
a zero-fee transaction to begin confirmation within nblocks. Returns -1 if not enough free transactions have been observed to compute a good estimate.

RPC access control changes

Subnet matching for the purpose of access control is now done by matching the binary network address, instead of with string wildcard matching. For the user this means that -rpcallowip takes a subnet specification, which can be:
a single IP address (e.g. 1.2.3.4 or fe80::0012:3456:789a:bcde)
a network/CIDR (e.g. 1.2.3.0/24 or fe80::0000/64)
a network/netmask (e.g. 1.2.3.4/255.255.255.0 or fe80::0012:3456:789a:bcde/ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff)
An arbitrary number of -rpcallowip arguments can be given. An incoming connection will be accepted if its origin address matches one of them. For example:

| 0.9.x and before           | 0.10.x                              |
|----------------------------|-------------------------------------|
| -rpcallowip=192.168.1.1    | -rpcallowip=192.168.1.1 (unchanged) |
| -rpcallowip=192.168.1.*    | -rpcallowip=192.168.1.0/24          |
| -rpcallowip=192.168.*      | -rpcallowip=192.168.0.0/16          |
| -rpcallowip=* (dangerous!) | -rpcallowip=::/0 (still dangerous!) |

Using wildcards will result in the rule being rejected with the following error in debug.log:
Error: Invalid -rpcallowip subnet specification: *. Valid are a single IP (e.g. 1.2.3.4), a network/netmask (e.g. 1.2.3.4/255.255.255.0) or a network/CIDR (e.g. 1.2.3.4/24).
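The binary subnet matching described above can be sketched in Python (an illustrative helper using the standard ipaddress module, not Bitcoin Core's actual implementation):

```python
import ipaddress

def rpc_allowed(origin: str, allowed_subnets: list[str]) -> bool:
    """Return True if origin matches any allowed subnet specification.

    Mirrors the 0.10 behaviour: each -rpcallowip value is parsed as a
    single address, a network/CIDR, or a network/netmask, and the
    origin's binary address is tested for membership -- no string
    wildcards are involved.
    """
    addr = ipaddress.ip_address(origin)
    for spec in allowed_subnets:
        # ip_network accepts "1.2.3.4", "1.2.3.0/24" and
        # "1.2.3.0/255.255.255.0" alike; strict=False tolerates
        # host bits being set in the specification.
        net = ipaddress.ip_network(spec, strict=False)
        if addr in net:
            return True
    return False

# The 0.9.x wildcard "192.168.1.*" becomes the subnet "192.168.1.0/24":
print(rpc_allowed("192.168.1.7", ["192.168.1.0/24"]))   # True
print(rpc_allowed("192.168.2.7", ["192.168.1.0/24"]))   # False
```

The same membership test covers all three accepted forms, which is why a wildcard string like `192.168.1.*` can no longer be expressed and is rejected outright.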
REST interface

A new HTTP API is exposed when running with the -rest flag, which allows unauthenticated access to public node data. It is served on the same port as RPC, but does not need a password, and uses plain HTTP instead of JSON-RPC. Assuming a local RPC server running on port 8332, it is possible to request:

GET /rest/tx/TX-HASH.EXT
GET /rest/block/BLOCK-HASH.EXT
GET /rest/block/notxdetails/BLOCK-HASH.EXT
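For illustration, assuming the /rest/tx/TX-HASH.EXT endpoint documented in doc/REST-interface.md, such a request URL can be assembled as follows (a sketch; the host, port and transaction id are placeholders):

```python
# Sketch: build URLs for the unauthenticated -rest interface.
# The base address and the txid below are placeholder assumptions.
BASE = "http://127.0.0.1:8332"

def rest_url(resource: str, obj_hash: str, ext: str) -> str:
    """resource is e.g. 'tx' or 'block'; ext is 'bin', 'hex' or 'json'."""
    if ext not in ("bin", "hex", "json"):
        raise ValueError("EXT must be bin, hex or json")
    return f"{BASE}/rest/{resource}/{obj_hash}.{ext}"

txid = "aa" * 32  # placeholder 64-hex-digit transaction id
print(rest_url("tx", txid, "json"))  # prints the JSON-form URL for the txid
```

Fetching such a URL with any plain HTTP client returns the object in the requested encoding, with no RPC password required.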
In every case, EXT can be bin (for raw binary data), hex (for hex-encoded binary) or json. For more details, see the doc/REST-interface.md document in the repository.

RPC Server "Warm-Up" Mode

The RPC server is started earlier now, before most of the expensive initialisations like loading the block index. It is available almost immediately after starting the process. However, until all initialisations are done, it always returns an immediate error with code -28 to all calls. This new behaviour can be useful for clients to know that a server is already started and will be available soon (for instance, so that they do not have to start it themselves).

Improved signing security

For 0.10 the security of signing against unusual attacks has been improved by making the signatures constant time and deterministic. This change is a result of switching signing to use libsecp256k1 instead of OpenSSL. Libsecp256k1 is a cryptographic library optimized for the curve Bitcoin uses, which was created by Bitcoin Core developer Pieter Wuille.

There exist attacks against most ECC implementations where an attacker on shared virtual machine hardware could extract a private key if they could cause a target to sign using the same key hundreds of times. While using shared hosts and reusing keys are inadvisable for other reasons, it's a better practice to avoid the exposure.

OpenSSL has code in their source repository for derandomization and reduction in timing leaks that we've eagerly wanted to use for a long time, but this functionality has still not made its way into a released version of OpenSSL. Libsecp256k1 achieves significantly stronger protection: as far as we're aware, this is the only deployed implementation of constant time signing for the curve Bitcoin uses, and we have reason to believe that libsecp256k1 is better tested and more thoroughly reviewed than the implementation in OpenSSL.
https://eprint.iacr.org/2014/161.pdf

Watch-only wallet support

The wallet can now track transactions to and from wallets for which you know all addresses (or scripts), even without the private keys. This can be used to track payments without needing the private keys online on a possibly vulnerable system. In addition, it can help for (manual) construction of multisig transactions where you are only one of the signers.

One new RPC, importaddress, is added which functions similarly to importprivkey, but instead takes an address or script (in hexadecimal) as argument. After using it, outputs credited to this address or script are considered to be received, and transactions consuming these outputs will be considered to be sent.

The following RPCs have optional support for watch-only: getbalance, listreceivedbyaddress, listreceivedbyaccount, listtransactions, listaccounts, listsinceblock, gettransaction. See the RPC documentation for those methods for more information.

Compared to using getrawtransaction, this mechanism does not require -txindex, scales better, integrates better with the wallet, and is compatible with future block chain pruning functionality. It does mean that all relevant addresses need to be added to the wallet before the payment, though.

Consensus library

Starting from 0.10.0, the Bitcoin Core distribution includes a consensus library. The purpose of this library is to make the verification functionality that is critical to Bitcoin's consensus available to other applications, e.g. to language bindings such as [python-bitcoinlib](https://pypi.python.org/pypi/python-bitcoinlib) or alternative node implementations.

This library is called libbitcoinconsensus.so (or, .dll for Windows). Its interface is defined in the C header [bitcoinconsensus.h](https://github.com/bitcoin/bitcoin/blob/0.10/src/script/bitcoinconsensus.h). In its initial version the API includes two functions:
bitcoinconsensus_verify_script verifies a script. It returns whether the indicated input of the provided serialized transaction
correctly spends the passed scriptPubKey under additional constraints indicated by flags
bitcoinconsensus_version returns the API version, currently at an experimental 0
The functionality is planned to be extended to e.g. UTXO management in upcoming releases, but the interface for existing methods should remain stable.

Standard script rules relaxed for P2SH addresses

The IsStandard() rules have been almost completely removed for P2SH redemption scripts, allowing applications to make use of any valid script type, such as "n-of-m OR y", hash-locked oracle addresses, etc. While the Bitcoin protocol has always supported these types of script, actually using them on mainnet has been previously inconvenient, as standard Bitcoin Core nodes wouldn't relay them to miners, nor would most miners include them in blocks they mined.

bitcoin-tx

It has been observed that many of the RPC functions offered by bitcoind are "pure functions", and operate independently of the bitcoind wallet. This includes many of the RPC "raw transaction" API functions, such as createrawtransaction.

bitcoin-tx is a newly introduced command line utility designed to enable easy manipulation of bitcoin transactions. A summary of its operation may be obtained via "bitcoin-tx --help". Transactions may be created or signed in a manner similar to the RPC raw tx API. Transactions may be updated, deleting inputs or outputs, or appending new inputs and outputs. Custom scripts may be easily composed using a simple text notation, borrowed from the bitcoin test suite.

This tool may be used for experimenting with new transaction types, signing multi-party transactions, and many other uses. Long term, the goal is to deprecate and remove "pure function" RPC API calls, as those do not require a server round-trip to execute. Other utilities, "bitcoin-key" and "bitcoin-script", have been proposed, making key and script operations easily accessible via command line.
Mining and relay policy enhancements

Bitcoin Core's block templates are now for version 3 blocks only, and any mining software relying on its getblocktemplate must be updated in parallel to use libblkmaker either version 0.4.2 or any version from 0.5.1 onward. If you are solo mining, this will affect you the moment you upgrade Bitcoin Core, which must be done prior to BIP66 achieving its 951/1001 status. If you are mining with the stratum mining protocol: this does not affect you. If you are mining with the getblocktemplate protocol to a pool: this will affect you at the pool operator's discretion, which must be no later than BIP66 achieving its 951/1001 status.

The prioritisetransaction RPC method has been added to enable miners to manipulate the priority of transactions on an individual basis.

Bitcoin Core now supports BIP 22 long polling, so mining software can be notified immediately of new templates rather than having to poll periodically.

Support for BIP 23 block proposals is now available in Bitcoin Core's getblocktemplate method. This enables miners to check the basic validity of their next block before expending work on it, reducing risks of accidental hardforks or mining invalid blocks.

Two new options to control mining policy:
-datacarrier=0/1 : Relay and mine "data carrier" (OP_RETURN) transactions
if this is 1.
-datacarriersize=n : Maximum size, in bytes, we consider acceptable for
"data carrier" outputs. The relay policy has changed to more properly implement the desired behavior of not relaying free (or very low fee) transactions unless they have a priority above the AllowFreeThreshold(), in which case they are relayed subject to the rate limiter. BIP 66: strict DER encoding for signatures Bitcoin Core 0.10 implements BIP 66, which introduces block version 3, and a new consensus rule, which prohibits non-DER signatures. Such transactions have been non-standard since Bitcoin v0.8.0 (released in February 2013), but were technically still permitted inside blocks. This change breaks the dependency on OpenSSL's signature parsing, and is required if implementations would want to remove all of OpenSSL from the consensus code. The same miner-voting mechanism as in BIP 34 is used: when 751 out of a sequence of 1001 blocks have version number 3 or higher, the new consensus rule becomes active for those blocks. When 951 out of a sequence of 1001 blocks have version number 3 or higher, it becomes mandatory for all blocks. Backward compatibility with current mining software is NOT provided, thus miners should read the first paragraph of "Mining and relay policy enhancements" above. 0.10.0 Change log Detailed release notes follow. This overview includes changes that affect external behavior, not code moves, refactors or string updates. RPC:
f923c07 Support IPv6 lookup in bitcoin-cli even when IPv6 only bound on localhost
b641c9c Fix addnode "onetry": Connect with OpenNetworkConnection
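The 751/1001 and 951/1001 miner-voting thresholds used by BIP 66 (described above) can be sketched as a simple count over the most recent 1001 block version numbers (an illustrative helper, not Bitcoin Core's implementation):

```python
def bip66_status(last_1001_versions, min_version=3):
    """Classify BIP 66 deployment state from the last 1001 block versions.

    - at least 751 of 1001 blocks at version >= 3: the new rule is
      active (enforced for version-3 blocks)
    - at least 951 of 1001 blocks at version >= 3: the rule is
      mandatory, and lower-version blocks are rejected
    """
    count = sum(1 for v in last_1001_versions if v >= min_version)
    if count >= 951:
        return "mandatory"
    if count >= 751:
        return "active"
    return "inactive"

window = [3] * 800 + [2] * 201   # 800 of the last 1001 blocks upgraded
print(bip66_status(window))      # -> active
```

This is the same rolling-window mechanism BIP 34 used for its version-2 transition, applied here to version-3 blocks.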