
My Take on Journey to the West (西遊記之我見)

· 5 min read
Jimmy Chu
Site Author

The recent popularity of Black Myth: Wukong (黑神話:悟空) has brought me back to the story of Journey to the West and got me thinking about some of its episodes.

After watching quite a few YouTube commentary videos, I learned that the game tells the story of a Wukong who, as a member of the establishment, discovers he is merely a pawn in a power struggle among multiple factions. Unwilling to remain inside the system, he tries everything to leave it and return to his carefree days on Mount Huaguo (花果山). The best ending of the game is for the reincarnated Wukong (the Destined One, 天命人) to no longer have to wear the golden headband. This was also the plan Sun Wukong devised at the start of the game: destroy his own body while preserving his primordial spirit (the relics, 根器), so as to escape the headband's binding seal.

Ah, so the game retells Journey to the West from this angle, and it is a rather good one.

But returning to the original novel, the more I think about it, the more I feel Wu Cheng'en's story carries another layer of metaphor. Here is how I read it:

  • The pilgrimage of Tang Sanzang and his companions to the West: this is each of us on our life's journey of self-cultivation, going through life's trials.

  • Wukong and the White Dragon Horse: they are our restless mind, the "monkey of the mind and horse of the will" (心猿意馬). The White Dragon Horse, a son of the Dragon King of the South Sea, gets little screen time. Wukong, on the other hand, has boundless powers and a somersault cloud that carries him anywhere. Isn't that just like our mind, which constantly flies off to different places?

  • The golden headband on Wukong's head: the Buddha placed the headband on Wukong to stop him from acting recklessly. But think about it: before wearing the headband, Wukong, for all his boundless powers, accomplished little; at most he seized the Ruyi staff from the Dragon King's palace and wreaked havoc in Heaven. Only after putting on the headband could Wukong, however reluctant at first, finally settle down, set a goal, protect his master Tang Sanzang on the journey, endure the eighty-one tribulations, and complete the quest of retrieving the scriptures from the West. As I see it, the headband does not restrain Wukong; it fulfills him.

    Our mind is the same. It looks free to do whatever it pleases, but after time is frittered away on distractions, we look back and find we have accomplished nothing. Only by putting on the headband, by refraining from some things, can we achieve others.

  • Tang Sanzang: he is our pure, unobstructed original heart. In the story, his flesh is set up to be supremely precious: any demon who eats it gains a thousand years of cultivation, and so demons from all directions, openly and covertly, fight over him.

    Isn't the world just like that? Social media of every kind, long and short videos, video games, and so on all compete for our eyeballs and our attention. Worse, modern technology skillfully exploits psychology and human weakness, knowing exactly how to get users addicted and spending ever more time and money on these platforms and games, sometimes losing themselves in them. Isn't that the demons from all directions fighting over Tang Sanzang? And of course, anything in the world can obstruct our original heart.

This reminds me of a Bible verse:

Proverbs 4:23: Keep thy heart with all diligence; for out of it are the issues of life.

PSE Core Program 2024 Capstone Project

· 7 min read
Jimmy Chu
Site Author

PSE Core Program

In the past two months, I got into the PSE Core Program and have been intensively studying zero-knowledge cryptography. In the first five weeks, we had a curated list of materials to study, ranging from cryptographic primitives such as Merkle trees and symmetric and asymmetric encryption, to an in-depth look at various zk-SNARK protocols such as Groth16 and PLONK, and finally an introduction to the frontier of cryptography, or so it is called, including multi-party computation (MPC) and fully homomorphic encryption (FHE).

In the last three weeks of the program, we were free to scope and work on our own project to put the previous learning into practice. We could either tackle issues from various PSE projects or develop our own idea. I decided to build a dApp end-to-end, meaning:

  1. Having a zk-SNARK component generating proofs on the client side (front end),
  2. Having a smart contract component that verifies the proof on-chain and
  3. Building the front end that interacts with users and connects the above two components.

This way, I will be able to see how they connect and how each component interfaces with the others. So let this be the goal!

Capstone Project: Number Guessing Game 🎮

I decided to build a Number Guessing Game. In this game, each player's goal is to submit the guess closest to the mean of all participating players' submissions. Each game is divided into multiple rounds. Players first commit to a guess between 1 and 100. The guess and a randomly generated value, aka the nullifier, are processed through a zk-SNARK circuit on the client side, and the commitment [Poseidon(guess, nullifier), Poseidon(nullifier)] and proof are then submitted on-chain.
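The commit step can be sketched in Python. This is only an illustration: SHA-256 stands in for the Poseidon hash used by the real circuit, and the input encoding is made up for the sketch.

```python
import hashlib
import secrets

def commit(guess: int, nullifier: int) -> tuple[str, str]:
    """Stand-in for the circuit output [Poseidon(guess, nullifier), Poseidon(nullifier)].
    SHA-256 replaces Poseidon purely for illustration."""
    assert 1 <= guess <= 100  # mirrors the circuit's range check
    h1 = hashlib.sha256(f"{guess}:{nullifier}".encode()).hexdigest()
    h2 = hashlib.sha256(str(nullifier).encode()).hexdigest()
    return h1, h2

nullifier = secrets.randbits(128)   # 128-bit random value, fresh per guess
commitment = commit(42, nullifier)

# Why the nullifier matters: without it, a cheater can brute-force
# all 100 possible guesses against an unsalted commitment.
bare = hashlib.sha256(b"42").hexdigest()
cracked = next(g for g in range(1, 101)
               if hashlib.sha256(str(g).encode()).hexdigest() == bare)
assert cracked == 42
```

The brute-force at the end is exactly the attack the 128-bit nullifier prevents: it expands the preimage domain far beyond the 100 guessable values.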

After all players have submitted their commitments and proofs, they open and reveal their commitments. Then, the game host will conclude the round. The mean of all guesses will be computed (on-chain), and the player who has submitted the guess closest to the mean will win the round. The player who wins three rounds first wins the game.

The following is the game flow:

Game Flowchart

  • Alice (the first player) initiates a game and becomes the game host.
  • In this phase, other players can join the game. In this case, Bob, Charlie, and Dave join.
  • Alice starts the game, and the first game round begins. No more players will be able to join.
  • Players submit their commitment on-chain.
  • Once all players have submitted their commitments, they open them. After all players open their commitments, the game host concludes the round. The mean of all submissions is computed, and the round winner is determined.
  • If the winner has won three rounds, the game ends. Otherwise, another round begins.
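The round-conclusion step above can be sketched as follows. This is a plain Python illustration; the actual contract computes the mean on-chain, and its integer arithmetic and tie-breaking rule may differ.

```python
def conclude_round(openings: dict[str, int]) -> tuple[float, str]:
    """Compute the mean of all revealed guesses and pick the round winner:
    the player whose guess is closest to the mean."""
    mean = sum(openings.values()) / len(openings)
    winner = min(openings, key=lambda name: abs(openings[name] - mean))
    return mean, winner

mean, winner = conclude_round({"Alice": 30, "Bob": 50, "Charlie": 70, "Dave": 94})
# mean == 61.0; Charlie (distance 9) wins the round
```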

The project is finally built, and you can try it out at:

You can see the demo of a game walkthrough below:

Some Technicalities ⚙️

  1. The zero-knowledge part kicks in when a player submits a guess on-chain. A 128-bit random value, a.k.a. the nullifier, is generated. The guess and nullifier are passed into a wasm zk-circuit, which returns [Poseidon(guess, nullifier), Poseidon(nullifier)]. The circuit also performs a range check that the guess is between 1 and 100.

    We need a nullifier here to expand the preimage domain. We know the guess is a value between 1 and 100. Suppose we stored Poseidon(guess) as the commitment. An attacker/cheater could then Poseidon-hash every value from 1 to 100, compare the digests with those on-chain, and figure out the other players' preimages (their guess values). So we add a 128-bit nullifier to expand the preimage domain.

  2. This is how the player commitment is stored on-chain.

    Inspecting on-chain storage on commitment

  3. On the front end, upon a round conclusion, only the winning guess is revealed, not the actual mean. This gives the game a "fog of truth" element and makes it a bit more interesting. However, the players' openings are stored as plain numbers on-chain, so tech-savvy players can still inspect the smart contract storage and compute the mean. I would love to explore further ZK protocols (multi-party computation, maybe?) that could hide even these openings while still allowing the mean, and the distances between users' guesses and the mean, to be computed. That would truly make this game zero-knowledge!

  4. UX matters a lot. The following is my development time allocation for this project.

    • 15% on circuit development, thanks to the circomkit library, which eased my development workflow.
    • 30% on smart contract development and proof parameter interfacing. The time spent on writing the smart contract logic and comprehensive test cases is about 50-50.
    • 45% is spent on front-end development. The front end involves interacting with various components, e.g., user wallets, loading wasm and generating proofs, transforming proofs to a format appropriate for on-chain submission, etc.

    Many details on the front end must be done right to deliver a great UX. Right now, there is still a lot of room for improvement: e.g., the wallet display on the navbar keeps reloading when navigating within the site, and the page has to be manually refreshed to get the latest game status instead of subscribing to the smart contract events. I truly appreciate front-end engineers who build a great user experience on a Web3 app or web app.

Standing on the Shoulders of Many Giants

Standing on the Shoulder of Giant

In the past eight weeks, I have truly been standing on the shoulders of not one but many giants, who enabled me to deliver this capstone project.

I am grateful that the PSE team selected me for their Core Program and curated an excellent set of study materials. One can find a lot of material on zero-knowledge cryptography on the internet and easily be overwhelmed by it. The curated study materials, with mentors and tutors always available to answer questions and teammates holding regular weekly study groups/presentations, definitely helped push me through the study barrier.

Kudos to the Semaphore project team, too. Studying the source code of the Semaphore Protocol and its Boilerplate contributed significantly to this project. In fact, my project is a fork of the Boilerplate template, which is a well-architected demonstration of how a React (Next.js) front end interfaces with the zk circuit and smart contract. I have also learned a lot of coding and optimization tricks from them.

The project was made feasible by the countless libraries I used: circom, Circomkit, snark.js, Hardhat, Next.js, Wagmi and Viem, and Web3Modal, just to name a few. Of course, I also deployed it to the Optimism ecosystem.

Finally, I stand in awe when looking at all the research that has gone into pushing the boundary of zero-knowledge protocols and the field of programmable cryptography.

This is just the beginning of my BUIDLing journey, and hopefully I will share more projects with you soon.

Semaphore - part 1

· 3 min read
Jimmy Chu
Site Author

I have been studying zero-knowledge proofs for the past eight weeks. It is hard to grasp how a software project could integrate a zero-knowledge proof. Then, through some community chat groups, I came across the Privacy & Scaling Explorations group and one of their projects, Semaphore. Semaphore is a very educational project. In it, I can see how ZKP is applied (in a fashion I understand) and observe excellent software engineering.

Semaphore allows users belonging to a certain group to vote without revealing which member they are, aka anonymous voting. There is also a mechanism to prevent users from double-voting, which is done by posting some "residues" generated by the zero-knowledge proof protocol on-chain.

But what is great about the project is that it is very beginner-friendly, and I say this with great respect. The whole project is architected very nicely. It can even be viewed as a model project for how software and packages are structured in modern-day dApps. I have learned a lot from their software structure and encapsulation.

  • The core software project: Semaphore protocol, with a nice structure of:

    • contracts package - the smart contracts that verify the proofs generated off-chain.

    • proof package - the off-chain logic that generates the ZKP with wasm code generated from an arithmetic circuit.

    • circuits package - contains the arithmetic circuit written in circom that generates and verifies proof.

    • hardhat package - a package that allows developers to deploy the core semaphore contract via hardhat without downloading the semaphore project.

  • A template showing how the protocol can be incorporated into other projects.

  • The project leverages the Merkle tree structure to verify that a member belongs to a group, with all the tree leaves being the users' identity commitment. The user will generate a Merkle proof to show that he belongs to the group. The core team implements the Merkle tree structure separately as a Lean Incremental Merkle Tree (with a summary note).

  • The project has excellent documentation and a website as well, included as two separate packages inside the source code.

Studying the project taught me a lot of optimization tricks I can apply in the zk-battleship codebase. I also saw the techniques they used to save on-chain gas costs, particularly emitting events to store the voting results (the user feedback messages) instead of writing them to on-chain storage. This is something that had never crossed my mind before.

There are two feature enhancements I could help implement on top of the current Semaphore codebase to further my engineering understanding of zk-app development.

  1. Users can later prove they have voted without revealing their voting option, which can become proof of participation for them.

  2. The voting result is hidden during the voting phase until it ends. This way, late voters' choices won't be influenced by the results so far.

Cangjie & Chu Bong Foo - part 1

· 4 min read
Jimmy Chu
Site Author

On one occasion in Taiwan, people around me were amazed that we Hong Kong people type Chinese using the Cangjie (倉頡) method. I realized it is uncommon for people to learn how to deconstruct Chinese characters and type them with the Cangjie encoding. Thinking about it further, it is a brilliant idea that Chinese characters can be deconstructed this way. It made me wonder how the inventor came up with this input methodology. So I dug into its history and found an inspiring story.

During the 1960s, Chu Bong Foo (朱邦復) was working in a publishing house in Brazil. He saw that the publishing house could finish an English book publication in 3 to 5 days. From his youth, he knew it would take about two to three months to get a Chinese book published, converting the author's manuscript into a publishable form. This was because, at the time, there was no systematic way to index Chinese characters. English words are composed from an alphabet of 26 letters, and how to index them is well established. There was no comparable way of indexing Chinese characters, of which there are 14k+ most frequently used and 40k+ frequently used.

So Chu Bong Foo first studied the most frequently used characters, analyzing their shape and deconstructing them into 24 basic shapes (radicals) and 76 auxiliary shapes, often rotated or transposed versions of the basic shapes. Then, he used the following character roots to represent them.

Philosophical Group

  • A: 日 - Sun
  • B: 月 - Moon
  • C: 金 - Gold
  • D: 木 - Wood
  • E: 水 - Water
  • F: 火 - Fire
  • G: 土 - Earth

Stroke Group

  • H: 竹 - apostrophe / slant
  • I: 戈 - dot
  • J: 十 - cruciform
  • K: 大 - cross
  • L: 中 - vertical
  • M: 一 - horizontal
  • N: 弓 - hook

Body Parts Group

  • O: 人 - person
  • P: 心 - heart
  • Q: 手 - hand
  • R: 口 - mouth

Character Shape Group

  • S: 尸 - corpse
  • T: 廿 - twin
  • U: 山 - mountain
  • V: 女 - female
  • W: 田 - field
  • Y: 卜 - fortune telling

Collision / Difficult Group

  • X: 難 - difficult
  • Z: 重 - unknown/collision

Each Chinese character can be represented by a sequence of one to five of the character roots above, and the duplication rate (the chance that a valid sequence maps to more than one character) is less than 10%. Cangjie encoding is among the most efficient ways of inputting Chinese characters.
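As a tiny illustration of the scheme, here is a lookup over a few of the letter-to-root mappings from the tables above (the full method covers all 24 basic roots plus the 76 auxiliary shapes):

```python
# A small subset of the Cangjie letter-to-root table above.
ROOTS = {"A": "日", "B": "月", "C": "金", "D": "木", "O": "人"}

def roots_of(code: str) -> str:
    """Expand a Cangjie code into the radicals it is composed of."""
    return "".join(ROOTS[letter] for letter in code)

# 明 (bright) deconstructs into 日 + 月, so its Cangjie code is "AB":
assert roots_of("AB") == "日月"
```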

With such a system, publishing houses could input a Chinese character much like an English word, and Chinese characters could be sorted in a definite order and indexed efficiently. This greatly sped up the Chinese publication process.

Later in Mr. Chu Bong Foo's life, he was also involved in the following projects:

  • Integrating Chinese into modern computer systems back in the 1980s, when computers themselves were not yet widespread.

  • Launching the first Chinese e-book system in 2001, six years before the first Kindle reader was launched in 2007.

  • Launching an AI system that takes a Chinese poem as input and generates a 3D animation, all without human intervention. The process involves natural language processing, Chinese character interpretation, and generative AI, topics we talk about today, all back in 2011! For details, refer to his announcement post at that time and the animation 記承天寺 (original download link, or the Youtube video).

Mr. Chu Bong Foo is truly a visionary.


A Certain Sensitive Date (某個敏感的日子)

· 3 min read
Jimmy Chu
Site Author

These past few days the news reported a government official saying that, with a certain sensitive date approaching, some people had published posts with seditious intent, stirring up hatred toward the central government, the SAR government, and the judiciary, and intending to incite and organize unlawful activities. The news reads as truly ironic.

First, even government officials dare not, or will not, say plainly which day "that day" actually is. That day is none other than the day I am writing this: June 4th. What it commemorates is the Tiananmen Square incident of 35 years ago. Since 2019, those in power have used various pretexts to ban the June 4th vigils in Victoria Park, and the government would rather no one mention the matter at all. The novel 1984 described how a totalitarian government rewrites history, deleting events entirely from books and newspapers, removing every word and picture. When I read it back then, I thought it utterly far-fetched, yet it turns out it can happen in our lives here and now. A state may want its people to forget certain things, erasing the words and the images, but it still cannot erase people's memories. Sometimes the more conspicuously a matter is covered up, the more people are forbidden to speak of it, the more it becomes exactly what everyone keeps thinking about. The rulers know it, and the people know it. But the rulers simply cannot face the past, face history, learn from it, and thereby correct the path along which they lead the country.

As one of the ordinary public, I simply hope to remember this day, and to remember what has happened over these years. May I not forget, may I learn from these events, and walk the road ahead well.

On Learning zk-SNARK

· 2 min read
Jimmy Chu
Site Author

Recently, I have been picking up the knowledge necessary to understand what goes on in a zk-SNARK. For those not familiar with the term, "zk" stands for zero-knowledge, and "SNARK" stands for succinct, non-interactive argument of knowledge. Here is what that means:

  • zero-knowledge: a way for a prover to prove "something" to a verifier such that the verifier gains no knowledge beyond what it had before the interaction. We will elaborate on the "something" in the following bullet points.

  • succinct: the proof is short compared to the actual knowledge, and the verifier can perform the verification quickly. To be more concrete, if the knowledge m has length |m|, the proof will typically be of size O(log |m|) or even shorter.

  • non-interactive: there are no rounds of back-and-forth interaction between the prover and verifier. The prover sends only a single message to the verifier.

  • argument of knowledge: this is the "something" mentioned in the first bullet point. Notice that it is not the knowledge itself, but a proof demonstrating that the prover indeed knows the knowledge.

Yes, this is quite an abstract concept. But what is most puzzling in this learning journey is that, though there are tons of useful online resources, such as:

I CANNOT integrate the theoretical aspect of this domain with its engineering aspect. For example, at a high level, a zk-SNARK is a combination of one of the functional commitment schemes with one form of interactive oracle proof (IOP). But when implementing an actual zero-knowledge-proof project, how do I actually choose a zk-SNARK scheme with a particular functional commitment and a particular interactive oracle proof?

So now, my learning approach is as follows:

  • Studying the source code of the Semaphore project in depth.
  • Meanwhile, continuing to read the rareskill zk-book.
  • At the same time, applying to bootcamp programs to solidify my understanding, hoping to get hands-on experience putting zkp knowledge into engineering practice.

To be continued.

On Human Differences

· 3 min read
Jimmy Chu
Site Author

At times, when I observe two individuals, one brimming with motivation, purpose, courage, and wisdom, while the other seems to embody the concept of 'average,' I find myself pondering the root of this disparity. This contemplation is not merely an intellectual exercise but a philosophical journey that holds practical implications for shaping the future generation.

I can come up with many possible answers, such as his background, education, and the environment he was immersed in. Eventually, this leads to his upbringing and his family. I could further argue that the lessons, thinking, and reflections he has learned and accumulated in his life make a difference. It is the choices he had made that, in turn, affected the environment he would be in next and the opportunity and network he would face. Furthermore, the choices, environment, and learning will shape one's character. As his character builds up, this further affects the choices he makes. These choices don't have to be significant at first. It could be like deciding to wake up early every morning and keep a morning routine of 45 minutes of meditation and body exercise. As a result, this gives him better physical and mental health and allows him to perform better in his workplace. In this scenario, the outcome of better working performance is determined when he decides to have a morning routine, a time that is not visible to everyone except himself. When all these small choices accumulate, they will slowly but surely widen the gap between two people's life trajectories.

Tracing time backward, we see that his learning, upbringing, and decisions are the effect of decisions made further in the past, which in turn affect his future. When we trace all the way back, there were only two infants born into two different families. So what causes the difference in their life trajectories, assuming neither infant is abnormal on any scale?

When the kids are as blank as a white sheet, we can only look at the external factors. So a significant determining factor is their family, their parents, and, to a broader extent, the society they reach (yes, this "society" could be very different even if they live in the same city because the society again highly depends on the "class" their families are in). How the parents raise these two kids and what character-building principles and lessons the parents infused in the kids in their early stages will start causing the difference in their trajectories.

But what about the infants' own will? In the beginning, their will plays an insignificant, or at best minor, role; their path mainly depends on external factors. But as they live on, their own will becomes more significant in shaping their interests, decisions, careers, and thus opportunities. Consequently, what is mentioned in the second paragraph follows. What, then, shapes a person's will at an early stage? Well, I thought hard about it, but I cannot come up with a reasonable explanation except to defer to chance and randomness. So a more complete picture should be that early in one's life, one's character and trajectory depend very much on the external factors one is in, whether family, society, etc. However, the longer one has lived, the less we can attribute one's life trajectory to these external factors and randomness, and the more to one's learning and personal decisions.

On Learning and Mastery

· 4 min read
Jimmy Chu
Site Author

This week I read Mastery by Robert Greene and listened to Luogic TalkShow 83: How to Become a Master. It is interesting to see how they approach a similar topic from two different angles. Luo of the Luogic talk show admitted from the beginning that the video is an advertisement for its mobile app channel. First, he covered how becoming a master is not just about repeating a certain process or procedure; it is about Deliberate Practice. One deconstructs the steps needed to complete a certain task and then practices each of those steps. In sport, this exercise is called a drill: in a drill, one focuses on practicing only a particular aspect of the whole endeavor. In fact, I recalled that a similar subject is covered in The Art of Learning by Josh Waitzkin. Waitzkin said his teacher would ask him to practice playing with just a king and a pawn to thoroughly understand the limits of what the two pieces could do together on a board (it turns out a king and a pawn can be quite powerful in chess endgames). It is by dissecting the whole endeavor into small steps, drilling each step to understand it thoroughly, and finally putting them together that one can perform the endeavor in a masterly fashion.

During this process, don't forget the element of diligence. Yes, one still needs repetition, but it is repetition with full awareness. When a practice is performed over and over, one easily drifts into autopilot mode, performing the action without paying much attention. Not so for those who would become masters. They perform every step consciously, using all their senses and attention, so they can imprint how to do the task deeply into the brain. Engaging all the senses and attention also cultivates the intuition that sometimes something "feels not right," likely because one of your senses, or some environmental signal, contradicts what you perceived when you performed the task in the past. Masters-to-be also reflect and self-critique on where they could improve, and repeat the practice. This becomes an iterative improvement cycle that slowly imprints itself in the brain as new neurons form. When a Japanese swordsman swings a sword 1,000 times, he is not swinging it mindlessly; he is swinging it with attention and full awareness.

The second point Luo got into was moving from your "comfort zone" to the "learning zone". The comfort zone is where you believe you know most things. Far outside it is the "horror zone", where you know nothing and would feel terrified. In between the two is the "learning zone": you still have some familiarity with what you know, but you also know there are things you don't know and need to learn. Luo argues this is the best zone to learn in. Your brain gets excited and fired up by the unknown, and your learning progress is substantially accelerated.

Of course, Luo also covered getting real-time feedback on your work. In fact, building such a feedback mechanism requires deliberately cultivating your own environment. Maybe one needs to find a teacher and present one's work to get critiques in a timely fashion, regularly. Maybe one needs to join a community and find peers who share the same objective and can peer-review each other's work. In the modern day, one can probably use an AI companion to critique the work as well 😄. By getting feedback, studying it, and deliberately improving on those aspects, one is on the journey to becoming a master.

In my next piece, I should write more on Robert Greene's Mastery.

Notes on Messari Thesis Report 2023

· 4 min read
Jimmy Chu
Site Author

I just finished reading the Messari Thesis Report 2023 last week. I was impressed with myself for finishing a 150+ page report. Of course, I am more impressed by Messari CEO Ryan Selkis and his team's ability to compile this report in the last two months of 2022. Here are a few points worth mentioning, summarized from the report.

Crypto Regulation

Regulation continues to be a big deal in the crypto space, especially with crypto-friendly banks such as Silvergate and Signature Bank having collapsed recently. I believe financial entities governed not as DAOs but under licensed structures will inevitably face more and more scrutiny from regulators. That's why Coinbase stock is at best doing meh as it remains a focal point of the SEC.

Ethereum Product Development

The development of Ethereum continued to gain momentum in 2022. After the Merge last September, we are expecting the Shanghai upgrade to happen in April 2023, allowing staked ETH tokens to be withdrawn from the network. Developers in the ecosystem are also working on EIP-4844, Proto-Danksharding, to build a data availability layer in Ethereum. A side effect of this feature is that rollup costs from layer 2s will be significantly reduced. I also learned that there is a broad Ethereum development roadmap, published under Vitalik's tweet.

Ethereum Roadmap

Zero Knowledge

Zero knowledge continues to occupy a seat in the blockchain landscape. One of its biggest applications is in rollups. Currently there are two kinds of rollups: optimistic and zero-knowledge rollups.

Optimistic rollups assume the rolled-up data is correct until a valid fault proof is submitted during the challenge period. Once the challenge period passes without a valid fault proof raised by verifiers, the data is regarded as valid and finalized on layer 1. The advantage is that the implementation is easier compared to the other approach (mentioned below). The disadvantage is that users need to wait for the challenge period to pass before data is finalized; the challenge period can last from a few hours (Polygon) to a few days (Arbitrum).

Zero-knowledge rollups use validity proofs instead. Each piece of rolled-up data must be accompanied by a proof of its validity. The verification mechanism on layer 1 (smart contract logic) runs through the proof and checks its validity. Once the proof is shown to be valid, the data is written on-chain. With this approach the engineering implementation is much more complex, but the finalization time is much shorter: a few blocks, thus within a minute.

Another application of zero knowledge is transaction privacy. Tornado Cash is actually a good use case of zero-knowledge. Unfortunately, regulators hate technology that promotes transaction privacy, as it facilitates money laundering. Tornado Cash has a spiritual successor called Privacy Pools.

Staking

Staking in the ETH ecosystem continues to be a big deal, especially with the Shanghai upgrade allowing stakers to withdraw their staked ETH. LIDO, a decentralized staking protocol, has become the DeFi protocol with the highest TVL. The protocol is governed in a DAO structure with a forum, Snapshot gasless voting, and the Aragon on-chain voting platform.

Modular Blockchain

This year, Ethereum graduated from a monolithic platform to become more and more modular. "Modular blockchain" used to be the catchphrase of the Polkadot ecosystem, with blockchains built on the Substrate framework. Now it also applies to Ethereum, with efforts from Celestia and Fuel.

Modular Ethereum

original image source

A blockchain can be viewed as functioning in four layers: data availability, consensus, settlement, and execution. These two teams continue to strengthen their frameworks and make them interoperate with other frameworks, so users can mix and match different frameworks to form their own modular blockchains.

NFT

In addition to PFPs (profile pictures), there is a lot happening in the NFT space. For instance, there are more and more use cases of NFTs representing some kind of identity, such as rns.id, memberships, or just receipts. There are also innovations such as NFTs with expiration, NFTs carrying on-chain info about a collection, and NFTs owning other NFTs. There is momentum, and we are likely to continue seeing innovation in this domain.

Advent of Code 2021

· 6 min read
Jimmy Chu
Site Author

In the past three months I have been working on solving the Advent of Code 2021 puzzles. I have probably spent 80+ hours solving all 25 puzzles. Yes, that is pretty much two whole weeks of full-time work cumulatively, and that is with the help of this subreddit forum and a lot of online resources 😳.

My solutions are here, with a corresponding website.

For those who don't know, Advent of Code is a series of programming puzzles released every year before Christmas, from Dec 1st until Christmas Day on the 25th to be exact, one problem a day. The author Eric Wastl has been doing this since 2015, and the puzzles have since been used in company training, university coursework, and practice problems.

Coming out of the rabbit hole of solving these puzzles, three problems stand out from the rest, in my opinion. So I want to document them and reflect on what I learned from them.

Spoiler Alert

For those who want to solve the puzzles yourselves, please stop reading here 😄

Day 19: Beacon Scanner

The original problem. In solving this puzzle, the first issue is to clarify what the 24 orientations are. The way I eventually visualized it was with Blu Tack and three pieces of paper.

So we can have the X, Y, Z axes like that, and we can rotate them to get three different axis arrangements:

image

Now we can have axes pointing at different directions as follows:

image

With eight different sets of axis directions and three axis arrangements for each, we get a total of 24 orientations.

I then learned that we can rotate a coordinate (x, y, z) into any of these orientations by matrix transformation (multiplication). The 24 matrices are listed here at the end.
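One quick way to enumerate those 24 matrices is a sketch like the following: generate all signed axis permutations and keep only those with determinant +1 (the proper rotations, i.e., no reflections).

```python
from itertools import permutations, product

def det3(m):
    """Determinant of a 3x3 matrix, expanded along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def rotation_matrices():
    """The 24 signed axis permutations with determinant +1."""
    mats = []
    for perm in permutations(range(3)):          # which axis each row maps to
        for signs in product((1, -1), repeat=3): # sign of each axis
            m = [[0, 0, 0] for _ in range(3)]
            for row in range(3):
                m[row][perm[row]] = signs[row]
            if det3(m) == 1:                     # drop the 24 reflections
                mats.append(m)
    return mats

def rotate(m, point):
    """Apply a rotation matrix to an (x, y, z) point."""
    return tuple(sum(m[i][j] * point[j] for j in range(3)) for i in range(3))

assert len(rotation_matrices()) == 24
```

Of the 48 signed permutations, exactly half are reflections (determinant -1), which is why the filter leaves 24.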

Now, with all beacon coordinates relative to a scanner location, and the scanner orientation being one of the 24 orientations, how are we going to solve the problem? At this point I was at my wits' end and looked for clues in the subreddit forum. The key is to compare the distances between beacons across different scanner reports.

Yes, of course! That is the only invariant we have here! And I recall what Bezos famously asked: "What is not going to change?"

Indeed, if a scanner has scanned the same set of beacons as another scanner, the distance between any pair within that beacon set is not going to change, no matter where the scanner is (its x, y, z offset) or what orientation it faces (the matrix transformation).

With this insight we proceed to the next problem. How could we determine if two scanners have scanned the same set of twelve beacons, as specified by the problem?

Here we need a concept from graph theory: the complete graph. It is a set of nodes in which each node has an edge to every other node. So if two scanners have scanned the same set of 12 beacons, we should be able to build a complete graph from these 12 beacons (nodes) and match all of its edge lengths. How many edge lengths should we be counting? It is 12 choose 2, so 66. If we build an edge table (with the edge lengths between every two beacons) for each of the two scanner reports and find 66 matching entries between the two tables, we can claim a very high likelihood that the two scanners have scanned the same set of twelve beacons.
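The edge-table comparison can be sketched as follows. Squared distances avoid floating point, and sorted multisets (rather than sets) guard against repeated distances:

```python
from itertools import combinations
from math import comb

def distance_fingerprint(beacons):
    """Sorted squared pairwise distances, invariant under any rotation
    and translation of the scanner's frame."""
    return sorted(sum((a - b) ** 2 for a, b in zip(p, q))
                  for p, q in combinations(beacons, 2))

def matching_distances(report_a, report_b):
    """Count distances common to both reports, as multisets."""
    fa, fb = distance_fingerprint(report_a), distance_fingerprint(report_b)
    count, i, j = 0, 0, 0
    while i < len(fa) and j < len(fb):
        if fa[i] == fb[j]:
            count, i, j = count + 1, i + 1, j + 1
        elif fa[i] < fb[j]:
            i += 1
        else:
            j += 1
    return count

# 12 shared beacons contribute comb(12, 2) == 66 matching distances.
assert comb(12, 2) == 66

# A translated copy of the same beacons yields an identical fingerprint:
a = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (3, 3, 1)]
b = [(x + 5, y - 7, z + 2) for (x, y, z) in a]
assert distance_fingerprint(a) == distance_fingerprint(b)
```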

Once we identify the set of 12 beacons, we can determine the orientation of the second scanner. Here we always use the first scanner as the baseline, meaning it is located at (0, 0, 0) with the [[1, 0, 0], [0, 1, 0], [0, 0, 1]] orientation matrix. We apply all 24 orientations to the matched beacon coordinates and see which one yields a consistent displacement vector relative to the first scanner; that determines the location and orientation of the second scanner.

Using this methodology, we can iteratively map out all beacon and scanner locations and the orientations of the scanners.

Day 22: Reactor Reboot

The original problem. I learned two concepts solving this puzzle. The first is how to compute the area covered by multiple sets, without double counting, when they overlap with each other.

For instance, the following:

image

We know the total area is:

Area A + Area B - Area A^B

Now with the following:

image

Area A + Area B + Area C - Area A^B - Area A^C - Area B^C + Area A^B^C

So with N sets, following the pattern, it is:

Area A + Area B + .... Area N - Area A^B - Area A^C - .. Area A^N - Area B^C - ... + Area A^B^C + Area A^B^D + .... - ...

I then learned that this is the Inclusion-exclusion principle.
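The pattern can be checked directly in Python. This is a sketch over plain sets; the puzzle itself applies the same principle to cuboid volumes instead.

```python
from itertools import combinations

def union_size(sets):
    """|A ∪ B ∪ ...| via inclusion-exclusion: add the sizes of all odd-way
    intersections, subtract all even-way ones."""
    total = 0
    for k in range(1, len(sets) + 1):
        sign = 1 if k % 2 else -1
        for group in combinations(sets, k):
            total += sign * len(set.intersection(*group))
    return total

a, b, c = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}
assert union_size([a, b, c]) == len(a | b | c) == 7
```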

I stumbled upon another sub-problem when solving this puzzle: given a set of items, efficiently generate all its k-combinations and compute n choose k. I thought it would be fun to code, so using the algorithm from Dr. James McCaffrey, I implemented it along with a front-end Combination App (source code) that you can play with.
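One classic way to generate k-combinations in lexicographic order (a sketch of the general technique, not necessarily the exact algorithm from that article) looks like this:

```python
def combinations_lex(n, k):
    """Yield all k-combinations of range(n) in lexicographic order."""
    c = list(range(k))
    while True:
        yield tuple(c)
        # Find the rightmost position that can still be incremented.
        i = k - 1
        while i >= 0 and c[i] == n - k + i:
            i -= 1
        if i < 0:
            return  # last combination reached
        c[i] += 1
        for j in range(i + 1, k):  # reset the tail to the smallest valid values
            c[j] = c[j - 1] + 1

combos = list(combinations_lex(5, 2))
assert combos[0] == (0, 1) and combos[-1] == (3, 4)
assert len(combos) == 10  # 5 choose 2
```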

Day 23: Amphipod

The original problem. This is a fun puzzle. I learned from the subreddit that it is basically a shortest-path problem and can be solved with Dijkstra's algorithm. So it was a refresher on coding that algorithm, encoding the various game states as node keys, with the edges from a node being all its next possible moves and the edge lengths being the move costs.

But in implementing Dijkstra's algorithm with a large number of possible states and edges, how do you efficiently get the next shortest path to try? This is when I learned about the Indexed Priority Queue (IPQ) data structure from this Youtube video.

The overall structure is shown as follows:

image

You use an index to represent the nodes' keys and their corresponding edge costs. Then, during heap operations (swimming up or down), we just move the indices around. Here is my implementation of an indexed priority queue with test cases.
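For contrast, here is a minimal Dijkstra sketch over an abstract state graph using Python's built-in heapq with lazy deletion; an indexed priority queue would instead decrease-key in place rather than pushing duplicate entries. The toy graph at the bottom is purely illustrative.

```python
import heapq

def dijkstra(neighbors, start, goal):
    """Cheapest total move cost from start to goal, where neighbors(state)
    yields (next_state, move_cost) pairs."""
    best = {start: 0}
    heap = [(0, start)]
    while heap:
        cost, state = heapq.heappop(heap)
        if state == goal:
            return cost
        if cost > best.get(state, float("inf")):
            continue  # stale entry, superseded by a cheaper path
        for nxt, step in neighbors(state):
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(heap, (new_cost, nxt))
    return None  # goal unreachable

graph = {"start": [("a", 1), ("b", 4)], "a": [("b", 1), ("end", 5)],
         "b": [("end", 1)], "end": []}
assert dijkstra(lambda s: graph[s], "start", "end") == 3
```

In the puzzle, `neighbors` would enumerate all legal amphipod moves from an encoded game state, which is where the state-space explosion makes the priority-queue efficiency matter.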


That's it for Advent of Code 2021. Overall it was a fun and intellectually challenging experience, and I am glad to have finished all the 2021 problems 😉.