Been thinking about the registry a lot this week. Merkle trees are a really elegant optimization, but the more requirements we tack onto the registry, the scarier the idea of using them seems.
It seems like we’re trying to re-solve some of the scaling issues other really smart people have worked on. Here are some examples:
-Manufacturers could include duplicate chip PK hashes in their Merkle trees. We can rely on observant slashers to disincentivize this, but that still depends on multiple slashers staying active to boost the perceived security of the system (sort of re-engineering Optimism's scaling model; to be clear, this is different from the private-key slashers). The alternative to slashers is having each client verify the integrity of the Merkle trees itself, but that has its own problems:
-Verifier apps needing to download the entire Merkle tree is no bueno as the number of devices grows. Alternatively, we could follow an RPC model where various RPCs verify registry integrity, but then we're re-engineering an existing scaling solution (RPC providers).
-To my limited imagination, universal resolution of content from a physical chip either relies on flashing data to each individual chip before shipping them out to TSMs, OR on some "resolver permission handoff" done purely in software (like an ENS domain owner handing off ownership, similar to how one transfers an NFT). Manus can flash Merkle paths onto the chips, but that complicates manufacturing with a second step, and it complicates verifying too. On the other hand, the software-only solution would be super tricky to accomplish with Merkle trees and could open up security concerns. Permissioned registries (i.e. NFTs) like the one a software-only solution would use are a huge focus for the people building scaling solutions. So by crafting a custom Merkle-based solution we're once again re-engineering their work.
-In a similar vein, we need to worry about cross-chain support, and cross-chain bridges for permissioned registries (NFTs) are something other people are already working on. So why reinvent the wheel here too?
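To make the verifier-download concern above concrete, here is a minimal Merkle proof sketch (assuming SHA-256 as a stand-in for whatever hash the registry would actually use). It shows the upside — a verifier holding only the root checks one chip with a log(n) proof — and also the downside: nothing in a single proof reveals a duplicated leaf elsewhere in the tree, which is exactly why duplicate detection needs either slashers or a full download.

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in hash; the real registry might use keccak256 instead.
    return hashlib.sha256(data).digest()

def _pad(level: list[bytes]) -> list[bytes]:
    # Duplicate the last node on odd-sized levels so pairing works.
    return level + [level[-1]] if len(level) % 2 else level

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = _pad(level)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[bytes]:
    # Sibling hashes from leaf to root: O(log n) data per chip.
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        level = _pad(level)
        proof.append(level[index ^ 1])  # sibling of current node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root: bytes, leaf: bytes, index: int, proof: list[bytes]) -> bool:
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root
```

With chip PK hashes as leaves, a verifier app only needs the root plus one short proof per chip — but trusting that root is the whole game, and that trust is what slashers (or full downloads) are supposed to provide.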
Besides re-engineering scaling solutions, there are some other reasons I think moving to L2 is the right move:
-Potential to monetize the registry for KONG's benefit. We can charge a small amount (on the order of cents) per chip if we create one registry entry for each one. This would let KONG directly benefit from other manus using the registry. I don't think charging per-chip is feasible with merkle machines.
-GSD (getting shit done). We could grind out an L2 implementation pretty fast: no Merkle magic, just a simple registry plus some mechanism to "yank" registry entries onto L1 at the expense of the user/TSM, leveraging existing oracle or NFT-bridge technologies.
-The create2 contracts explained in @cadillion's scheme can be deployed as contracts viewable on Etherscan (or the L2's equivalent scanner), simplifying integration and communication with other devs. In the future we can actually use create2 to minimize gas costs.
-Communication becomes much easier. We basically say “this works like ENS but for physical assets” instead of having to explain how our merkle machines work. I suspect people (esp. big corps) will want to understand how the system works before risking their product line on its viability and security.
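For anyone unfamiliar with why create2 helps integration: the deployed address is a pure function of deployer, salt, and init code (per EIP-1014), so other devs can compute and verify the registry's address before it even exists on-chain. A sketch of the derivation — note the loud caveat that Python's stdlib `sha3_256` uses NIST padding, not Ethereum's pre-standard Keccak-256, so these addresses will NOT match on-chain values; a real implementation would swap in a true keccak256 (e.g. from pycryptodome):

```python
import hashlib

def keccak_stand_in(data: bytes) -> bytes:
    # CAVEAT: Ethereum uses pre-NIST Keccak-256. hashlib's sha3_256 pads
    # differently, so this is only a structural stand-in — addresses
    # computed here will not match real on-chain CREATE2 addresses.
    return hashlib.sha3_256(data).digest()

def create2_address(deployer: bytes, salt: bytes, init_code: bytes) -> bytes:
    # EIP-1014: address = keccak256(0xff ++ deployer ++ salt ++ keccak256(init_code))[12:]
    assert len(deployer) == 20 and len(salt) == 32
    digest = keccak_stand_in(b"\xff" + deployer + salt + keccak_stand_in(init_code))
    return digest[12:]  # low 20 bytes form the address
```

The point is determinism: same (deployer, salt, init code) always yields the same address, which is what lets integrators hardcode or independently verify where the registry lives.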
Future gas forecast:
Damn, this is starting to feel like a DD post on r/WSB. I hate getting into speculation, but I feel it's necessary before making this decision. I'm neutral-to-bearish on gas costs in the long term. Vitalik himself believes many cool apps are being priced out of Ethereum and that gas prices need to come down, so demand is already being "held back" by gas prices. As @cameron said, "open a bigger highway and more cars will flood in": the merge and other scaling solutions will definitely increase the supply of computation, but they'll also increase demand as new apps are built to leverage the lower fees. I see no reason to believe the new equilibrium between that supply and demand will be significantly higher than it is now; hopefully it's lower. Hence my neutral-to-bearish long-term take on L1 gas.
If we go forward with the registry being written directly to either L1 (post-merge) or L2 (now), I'm pretty sure we can get gas/chip down to ~2k (amortized), as Azuki has done with their excellent ERC721A implementation:
That's under $1/chip on L1 at current gas prices! So assuming gas stays roughly where it is long-term, L1 is just barely non-economical IMO, and moving to L2 would almost certainly be a viable long-term solution that keeps per-chip economics low (<1/10 the price of a chip).
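The back-of-envelope math behind the "under $1/chip" claim, with the gas price and ETH price as illustrative assumptions (not forecasts):

```python
# Per-chip L1 cost at ~2k amortized gas (ERC721A-style batch mint).
# GAS_PRICE_GWEI and ETH_USD are assumed example values, not forecasts.
GAS_PER_CHIP = 2_000       # amortized gas per registry entry
GAS_PRICE_GWEI = 40        # assumed L1 gas price
ETH_USD = 3_000            # assumed ETH price

eth_per_chip = GAS_PER_CHIP * GAS_PRICE_GWEI * 1e-9  # gwei -> ETH
usd_per_chip = eth_per_chip * ETH_USD
print(f"${usd_per_chip:.2f} per chip")  # → $0.24 per chip
```

Even at these assumed prices the L1 cost is cents, not dollars, and an L2 would cut it by another order of magnitude or more.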
I completely understand @cameron's reluctance, having been burned by gas fees before. I really respect his opinion, so I felt the need to write this long post to justify why I'm basically disagreeing with the move to stay on L1 using merkle machines.