


Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - garlonicon

Pages: [1]
1
Technical Discussion / Re: How viable is PoW for new coin?
« on: May 14, 2024, 04:58:51 PM »
Quote
Just change the hashing algorithm for the new coin so that it is different from that of established coins.
It is a double-edged sword, if you do that. Because if your new mining algorithm is more complex than double SHA-256, then guess what: it also increases verification time for all regular users! And this is a huge problem if you have some CPU-mineable altcoin, where verifying 1000 blocks is as hard as mining a new block. And if you can see block hashes with three leading hex zeroes, then guess what: 16^3=(2^4)^3=2^12=4096. Which means the algorithm is so complex that checking only 4k block headers is all the work you need to mine a new block.
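
To see the scale of the problem, the arithmetic above can be written out directly (a worked example, nothing more):

Code:
# A hash with 3 leading hex zeroes appears once per 16**3 attempts:
leading_hex_zeroes = 3
expected_hashes_to_mine = 16 ** leading_hex_zeroes
print(expected_hashes_to_mine)   # 4096 == 2**12

# So if one verification costs as much as one mining attempt (the
# CPU-mineable worst case described above), verifying ~4k headers equals
# the expected work of mining one new block.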

Also note that hashing a single block header is not the only place where double SHA-256 is used. And if some altcoin creators replace it everywhere, including Script, including the Merkle tree, and so on, then it is much, much worse, because for every OP_CHECKSIG, for every OP_CHECKMULTISIG, for every FindAndDelete() call in the Bitcoin Core source code, the hash needs to be computed again, and again, and again, by all non-mining nodes. Which is why some altcoin users then share their database, to avoid the whole painful process of verification (or users can rely only on SPV nodes, but still, someone has to run a full node to serve them properly).
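
For reference, "double SHA-256 of a block header" is a one-liner; a minimal Python sketch, using the well-known 80-byte genesis block header:

Code:
import hashlib

# Bitcoin's genesis block header: 80 bytes, little-endian fields.
header = bytes.fromhex(
    "01000000"                                                          # version
    "0000000000000000000000000000000000000000000000000000000000000000"  # prev block
    "3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a"  # merkle root
    "29ab5f49"                                                          # time
    "ffff001d"                                                          # nBits
    "1dac2b7c"                                                          # nonce
)

digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
print(digest[::-1].hex())
# 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f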

2
Quote
if you can't fix the testnet and just reset it with new conditions then, what are you going to do on the main chain?
This is a very good question, and I also posted it on the mailing list:

Quote
Quote
so mining is not doing a great job at distributing testnet coins any more
It is a feature, not a bug. Would people want to reset the Bitcoin main network in the future, for exactly the same reasons? Or would they want to introduce "tail supply", or other similar inventions, to provide sufficient incentive for miners? This testnet3 is unique, because it has quite a low block reward. And that particular feature should be preserved, even if the network were reset (for example, it could restart "after 12 halvings, but with all previous coins burned"). And no, it is not the same as starting from 50 tBTC, as long as fee rates are left unchanged: 0.014 tBTC means "the ability to push around 1.4 MB of data at a feerate of 1 sat/vB", while 50 tBTC means "pushing 5 GB at the same feerate".
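
The arithmetic behind those two amounts, written out (assuming, as above, a feerate of 1 sat/vB, so 1 satoshi of fees buys 1 virtual byte of block space):

Code:
SATS_PER_COIN = 100_000_000

def pushable_vbytes(tbtc, feerate_sat_per_vb=1):
    return tbtc * SATS_PER_COIN / feerate_sat_per_vb

print(pushable_vbytes(0.014) / 1e6)   # ~1.4 (MB)
print(pushable_vbytes(50) / 1e9)      # ~5.0 (GB)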

Quote
Honestly, I don't understand yet why tBTC has value to some people?
1. Because it is the only decentralized test network which has worked continuously for more than a decade.
2. Because almost all of its rules are identical to the main network, so it doesn't contain the many mistakes and quirks made by altcoins.
3. Because its total chainwork is enormous, if you compare it to other test networks like signet or regtest.
4. Because it has had a lot of halvings, so the amounts you can get for free are closer and closer to what you can get from real Bitcoin faucets (see the subsidy sketch below).
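
Regarding point 4, the subsidy schedule is easy to sketch (testnet3 halves every 210,000 blocks, like mainnet; the "12 halvings" figure comes from the quote earlier in this post):

Code:
# Block subsidy at a given height, with mainnet's halving interval:
def subsidy(height, interval=210_000, initial=50.0):
    halvings = height // interval
    return initial / (2 ** halvings)

print(subsidy(12 * 210_000))   # ~0.0122 tBTC per block after 12 halvings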

Quote
If this is a testnet and this Bitcoin is not real and has no real value then why are some people interested in buying it?
Because it is great for data pushing without being censored. If you use Ordinals on the main network, then someone may argue that you are stopping real payments from being processed. But on testnet, coins are supposed to be worthless, so publishing your test cases is the only thing you are supposed to do.

Also note that testnet accepted non-standard transactions by default in a lot of old versions of Bitcoin Core (this was changed in some recent releases, like 26.0). Which means there are a lot of miners actively including non-standard transactions in blocks, and many people rely on that assumption. That allows pushing data in more efficient ways than on mainnet, and also performing more tricks than usual, for example by not using any address type and relying on raw Scripts (because then you don't need any workaround for informing other users, in a decentralized way, about the script behind your challenge).

Quote
Or are there other uses?
There are many use cases for a test network. Even if you had a chain with zero supply, where no coins carried any value, there would still be a lot of active consensus rules. For example: the whole Script is followed to the letter, even if you send zero satoshis. All locktimes, difficulty adjustments, and many other rules are preserved. And the whole protection from chain reorganization is still active, even on a chain with no coins.

Note that in testnet3, there is one main reason why coins have any amounts at all: to be protected from spam, so the network is not flooded with a lot of transactions created out of thin air. This is the reason for transaction fees, and this is the reason for coin amounts in the first place. Because if not for that, and test coins are supposed to be worthless, then ask yourself: why are any coin amounts introduced at all, and why is the total supply not just permanently set to zero?

And of course, there are lots of use cases for coins with zero satoshis. For example: any token creation protocol could potentially use the UTXO set to store that kind of information. And in that case, testnet fits even better than mainnet, because you still cover things with a lot of hashrate, but you don't crowd out regular payments, because that is not the purpose of the network, so no real payments are expected in the first place!

3
Technical Discussion / Re: Bitcoin's maximum block size
« on: April 14, 2024, 01:55:21 AM »
Quote
Grin is basically dead AFAIK.
Well, many altcoins change a lot of things. I guess the current value of Grin does not come from choosing MimbleWimble, but rather from other design decisions, for example the basic block reward and the total supply, which are completely different than in Bitcoin. Which means that even if you introduce the same features, there are still other network rules which can affect the price.

Another completely different choice is the mining algorithm, which is Cuckatoo32 instead of SHA-256. Also, there were different algorithms used in previous versions, because people wanted to keep the coin ASIC-resistant. On the other hand, Bitcoin uses exactly the same algorithm as it did in 2009. And being "ASIC-resistant" is not "free": if you make your hash function more complex, then you make verification more complex as well.

Also, killing ASICs discourages a lot of miners, and then the amount of work needed to make a single block is smaller, because people with specialized hardware are knocked out, so only more common mining devices like CPUs or GPUs stay in the network. Consequently, the network difficulty is much lower than it could be, and that is also reflected in the price of the altcoin.
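
For the connection between difficulty and hashrate, the usual back-of-the-envelope formula is enough (a sketch; the 2^32 factor is the standard expected number of hashes per unit of difficulty):

Code:
# Estimated network hashrate implied by a difficulty. Standard
# approximation: one unit of difficulty corresponds to ~2**32 hashes,
# and a block is expected every 600 seconds.
def implied_hashrate(difficulty, block_interval_s=600):
    return difficulty * 2**32 / block_interval_s

# Knock out the ASICs and the difficulty (after retargeting) settles far
# lower, so the implied work behind each block drops with it:
print(implied_hashrate(8e13) / 1e18)   # ~572 EH/s, a mainnet-like figure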

4
Technical Discussion / Re: Bitcoin's maximum block size
« on: April 12, 2024, 08:07:12 AM »
Quote
Mimblewimble (and other variant of non-interactive cut-through) exist, but i don't expect it'll be ever implemented on Bitcoin.
Why not? We currently have full-RBF, which means that if some transaction is unconfirmed, then it can be replaced completely, without any restrictions. The transaction fee is the only indicator of whether something should be replaced or not. And I expect that in the future, more people will make use of that, and we will see more and more replacements of unconfirmed transactions, so they will be batched at the mempool level.

Because currently, imagine that you have this situation: Alice -> Bob -> Charlie. You have one transaction which sends some coins from Alice to Bob, and another transaction which passes them further from Bob to Charlie. If you assume that Alice is online, then there is no reason not to write a replacement transaction Alice -> Charlie: it uses the same fees, but gets a higher feerate, because the size of that single transaction will obviously be smaller than the size of the two separate transactions, so miners will have no reason to reject that kind of replacement.
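
A quick sanity check of that claim, with assumed (not measured) transaction sizes:

Code:
# Rough vsize/fee arithmetic for the Alice -> Bob -> Charlie example.
# ~141 vB is a typical single-input, two-output P2WPKH size (assumed).
tx_vsize = 141
fee_each = 1_000   # sats paid by each of the two original transactions

two_tx_feerate = (2 * fee_each) / (2 * tx_vsize)   # ~7.1 sat/vB
merged_feerate = (2 * fee_each) / tx_vsize         # ~14.2 sat/vB

# Same total fees for the miner, half the block space used:
print(two_tx_feerate, merged_feerate)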

And of course, the next natural step is to make it non-interactive. Because if we have full-RBF, then the interactive version is already there: full-RBF means "accept any replacement" (including double-spends).

Quote
Even though it can be done through soft fork (just like what LTC did)
I don't think Bitcoin will take the same path as LTC did. That was the case with Segwit, but today it is much more likely that some other BIPs get merged, for example something related to OP_CAT. And if you have OP_CAT, then it enables a lot of things, including the ability to accept any signature and do something like OP_CHECKSIGFROMSTACK, but without any additional opcode (other than OP_CAT). And of course, note that doing all of those things does not require making any "extension block", because you can just use the witness space, as Taproot did.
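
For readers unfamiliar with it, OP_CAT itself is trivial; a toy model of its stack semantics (simplified, with no size limits or other real consensus rules):

Code:
# Toy model of OP_CAT: pop two stack items, push their concatenation.
def op_cat(stack):
    b = stack.pop()
    a = stack.pop()
    stack.append(a + b)
    return stack

# E.g. assembling a signature from separately pushed pieces, the basic
# building block behind CHECKSIGFROMSTACK-style constructions:
stack = [b"<sig body>", b"\x01"]   # illustrative placeholders
print(op_cat(stack))               # [b'<sig body>\x01']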

Quote
i wonder if community will accept it since it give government to restrict Bitcoin usage.
If that were the case, then Bitcoin would have been restricted when it introduced the Lightning Network. Because there, you also skip some transactions in the middle, and leave only two on-chain transactions: one to open a channel, and one to close it. Another case is Taproot, which allows you to have N people on a single UTXO, by forming an N-of-N multisig behind a single public key, and spending it with a single Schnorr signature.
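
A toy illustration of that key-aggregation idea (using modular exponentiation instead of real elliptic-curve math, so the numbers here are purely for demonstration):

Code:
# Toy illustration (NOT real elliptic-curve code): in any group,
# pub(a) * pub(b) = pub(a + b), which is why N-of-N owners can hide
# behind a single aggregated public key.
p = 2**127 - 1                 # a Mersenne prime, chosen for the demo
g = 5                          # arbitrary base

alice_priv, bob_priv = 1234567, 7654321
alice_pub = pow(g, alice_priv, p)
bob_pub   = pow(g, bob_priv, p)

aggregate_pub = (alice_pub * bob_pub) % p
assert aggregate_pub == pow(g, alice_priv + bob_priv, p)
# One "public key" on-chain, two owners behind it (the Taproot/MuSig idea).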

Quote
Improvement for IBD is really limited though
It is not as limited as you may think. For example: each coin starts from a coinbase transaction, and ends when it is sent as fees. The chain of signatures is broken each time you send something as a fee, because then the coin is no longer restricted by the Script, so it can be claimed by any miner. Which also means that if you want to verify things, you don't have to check them deeper than the nearest coinbase transaction. Why? Because if the coinbase transaction has 100 confirmations, then it can be spent. And it is very unlikely to ever trigger a chain reorganization deeper than 100 blocks (even the Value Overflow Incident reorged fewer than 100 blocks).

There are many bugs where, if something went wrong, the historical state of the chain was simply "grandfathered in", and new rules were applied on top of the later chain. One of those cases is duplicate coinbase transactions: nobody proposed throwing away the history to fix it (because it was deeply confirmed), and some coinbase transactions are just listed in the code as "exceptions" to some BIP, so they are ignored while checking that there are no duplicated transaction IDs.

Quote
unless you skip more verification
You don't have to skip it, if you don't want to. It can be fully optional, because if you replace the Alice -> Bob -> Charlie transactions with just Alice -> Charlie, then it will probably contain Alice's signature. And inside that signature, you can put a proof that there was Bob in-between, so some nodes can still access that history and preserve it, if there is ever a need to show someone that "Bob owned some coins at this point in time". However, at the same time, it would no longer be strictly needed to synchronize the chain from scratch, because Alice's signature is correct from the ECDSA point of view, and nothing else is needed to prove that no coins were created out of thin air and the system is honest.

5
Technical Discussion / Re: Bitcoin's maximum block size
« on: April 10, 2024, 08:52:09 PM »
Quote
Do you support increasing Bitcoin maximum block size?
No.

Quote
If yes, what factor should be considered when choosing new maximum size?
Verification time, also known as Initial Blockchain Download time. If you want to increase the size of the block, then you can never go beyond verification time. Because if you do, then new blocks will be produced at a rate where maybe you could download all of them, but never verify all of them. And as long as you have to verify the whole chain, to be 100% sure that your new blocks meet all consensus rules, you cannot do much with the maximum block size.

Which also means that if you want to seriously consider any increase, then you have to improve Initial Blockchain Download first. If creating a new full node didn't require verifying over 500 GB of data from the last 15 years, then you could think about increasing the size of the block.
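
To put rough numbers on that constraint (all figures below are assumptions for illustration, not measurements):

Code:
chain_size_gb = 550        # roughly the full chain as of ~2024
verify_rate_mb_s = 8       # assumed sustained verification speed

ibd_days = chain_size_gb * 1024 / verify_rate_mb_s / 86_400
print(f"IBD takes ~{ibd_days:.1f} days")   # ~0.8 days at these rates

# The hard constraint: daily chain growth must stay below what a node
# can verify per day, or it never catches up.
block_mb, blocks_per_day = 2, 144
growth_mb_per_day = block_mb * blocks_per_day     # 288 MB/day
verify_mb_per_day = verify_rate_mb_s * 86_400     # ~691,200 MB/day
assert growth_mb_per_day < verify_mb_per_day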

Quote
if the fees continue to rise such that it becomes difficult or impossible to use Bitcoin normally, then I think that both miners and society will agree to increase block size within a certain limits
There are ways beyond increasing the block size which could increase Transactions Per Second. One of them is transaction joining. For example: imagine that you have a lot of different coins, and for each of them there is some public key representing the owner. Imagine that the only information you need is to know "who owns what", and you confirm it every 10 minutes. If you have a signature which allows moving a coin from one public key to another, then you can join a chain of signatures, and put only the last owner in the final block. Which means that if you have a coin owned by Alice, which is then passed to Bob -> Charlie -> Daniel -> ... -> Zack, forming a chain of unconfirmed transactions, then you don't have to record all of those transactions in the final block. What you have to permanently store is just the information that Zack is the new owner after 10 minutes, when the block is confirmed. For everyone else, you need just some kind of SPV proof that somebody "owned" the coin, but you don't have to publicly reveal that information to the whole network (you can keep it private), and you also don't have to inform new nodes about it. In other words: you can compress the history, so the chain of signatures is still valid, and there is some visible proof that it is correct, but you don't have to publicly share all the "in-between transactions" which occurred in the meantime.

In general, you need to inform the network about "in-between transactions" only while something is unconfirmed, when you need that information to properly resolve all double-spending attempts. But once something is confirmed in a block, then you have a strong cryptographic proof, covered by a lot of Proof of Work, that a given transaction happened, and you can easily reject all future double-spending attempts.
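
A minimal sketch of that compression idea (all names and the proof format are assumed, just to show the shape of it):

Code:
import hashlib

# "Transaction joining" sketch: only the final owner goes into the
# block, plus one hash committing to the skipped hops, so whoever kept
# the full history can still prove "Bob owned this coin at some point".
def compress_chain(owners):
    proof = b""
    for owner in owners[:-1]:          # every in-between owner
        proof = hashlib.sha256(proof + owner).digest()
    return owners[-1], proof           # (final owner, compact commitment)

final_owner, proof = compress_chain([b"Alice", b"Bob", b"Charlie", b"Zack"])
print(final_owner)       # b'Zack' -- the only thing stored in the block
print(proof.hex())       # commitment to Alice -> Bob -> Charlie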

6
Technical Discussion / Re: How viable is PoW for new coin?
« on: April 10, 2024, 07:15:31 PM »
Quote
Many new coins these days use seems to avoid PoW since they claim it's trivial to perform 51% attack by renting CPU, GPU or ASIC.
To avoid that problem, you can always track the strongest chain of Proof of Work headers in existence, in an SPV way. You don't have to follow transactions from other coins. But if you follow their hashrate, then you can determine the real amount of Proof of Work in the world for a given mining algorithm, and then adjust your network parameters accordingly.

Which means that if you want to create, for example, some new altcoin which could be mined with double SHA-256, then you have to receive 80-byte block headers from other double SHA-256 chains, just to calculate the difficulty in the right way (see the sketch below). And the same is true for every other mining algorithm: if it exists, and is used anywhere else, then you cannot pretend that "whatever, blocks from other chains are invalid in my chain, so I won't count them when I calculate my difficulty".
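
A sketch of what "counting work from every header" could look like (the helper names are assumed; the nBits decoding and the work formula are the standard ones):

Code:
import hashlib

# Count work from *all* valid 80-byte headers for a given algorithm,
# not only the ones belonging to your own chain.
def bits_to_target(bits):
    # Decode the compact "nBits" encoding used inside block headers.
    exponent, mantissa = bits >> 24, bits & 0xFFFFFF
    return mantissa << (8 * (exponent - 3))

def header_work(header80):
    # Work contributed by one header that meets its own stated target:
    # the standard formula 2**256 // (target + 1).
    bits = int.from_bytes(header80[72:76], "little")
    target = bits_to_target(bits)
    h = hashlib.sha256(hashlib.sha256(header80).digest()).digest()
    assert int.from_bytes(h, "little") <= target, "header misses target"
    return (1 << 256) // (target + 1)

# total_work = sum(header_work(h) for h in headers_of_every_sha256d_chain)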

Bitcoin was created the way it was because there were no altcoins using Proof of Work before it. Hashcash was used only as spam protection, and it was based on SHA-1, not double SHA-256. It used a different algorithm, and there was no danger of "being reorged by Hashcash miners". Which means that the initial design didn't consider any other chains, so copy-pasting it 1:1 doesn't work: if you start from scratch today, you compete with existing huge networks (Bitcoin had no such competition, so that design decision was good enough then, but shouldn't be repeated now).

But today, the situation is different. Today, if you want to make a successful Proof of Work altcoin, then you should count Proof of Work from other chains, because if you don't, then you will have the same problems as, for example, NameCoin. Merged Mining can help a bit, but still, the design of your altcoin should guarantee that it can be reorged only by someone who can 51% attack the strongest hashrate using your algorithm, not by someone who merely reaches 51% on your chain. That mistake is the very reason why NameCoin was 51% attacked: such an attack should be as hard as performing it on Bitcoin, and to achieve that, it is necessary to receive every block header, no matter what content is behind it, and count it towards "the real total difficulty", not just "the local difficulty".

As Satoshi said:
Quote
I think it would be possible for BitDNS to be a completely separate network and separate block chain, yet share CPU power with Bitcoin.  The only overlap is to make it so miners can search for proof-of-work for both networks simultaneously.

The networks wouldn't need any coordination.  Miners would subscribe to both networks in parallel.  They would scan SHA such that if they get a hit, they potentially solve both at once.  A solution may be for just one of the networks if one network has a lower difficulty.

I think an external miner could call getwork on both programs and combine the work.  Maybe call Bitcoin, get work from it, hand it to BitDNS getwork to combine into a combined work.

Instead of fragmentation, networks share and augment each other's total CPU power.  This would solve the problem that if there are multiple networks, they are a danger to each other if the available CPU power gangs up on one.  Instead, all networks in the world would share combined CPU power, increasing the total strength.  It would make it easier for small networks to get started by tapping into a ready base of miners.
So, to achieve a successful Proof of Work altcoin, it is necessary to fulfill that sentence: "Instead of fragmentation, networks share and augment each other's total CPU power". And the reason why many altcoins failed is that nobody designed them that way. And when people saw that, for example, NameCoin could be 51% attacked, they gave up on the idea of Merged Mining, without realizing exactly where it failed, how it failed, and what exactly NameCoin did wrong (it simply ignored Bitcoin headers which didn't contain a "NameCoin commitment", but it should instead blindly accept them, and use those headers to properly increase its difficulty, decrease its coinbase reward, or perform similar actions, to measure "amount of coins per difficulty" properly).
