Kirill Shmilovich

Rubik's Cubes & Entropy

December 3, 2017

With this post, I want to provide some intuition for the concept of entropy through an analogy with the Rubik's cube. Broadly speaking, entropy is a powerful tool that relates the microscopic character of a system to its macroscopic observables in a compact and analytic fashion.

 

Before we begin a discussion on entropy, I want to introduce some basics of this colorful twisty puzzle. First, how many configurations exist for an unsolved Rubik's cube? That is, in how many ways may the individual cubies be oriented to arrive at a totally different cube? A standard 3x3x3 Rubik's cube always has 8 corner pieces, 12 edge pieces, and 6 center pieces. There is no need to consider the center pieces in this calculation, as they are always fixed relative to one another. The 8 corners and 12 edges may be placed around the cube in 8! and 12! ways, respectively. Furthermore, each corner in a fixed position can orient in 3 different ways (3 sides to each corner), giving another 3^8 distinct combinations, while each edge can orient in only 2 ways, giving 2^12 combinations. Multiplying these numbers together, we arrive at

$$8! \times 3^8 \times 12! \times 2^{12} = 519{,}024{,}039{,}293{,}878{,}272{,}000 \approx 5.19 \times 10^{20}$$

possible combinations, if we could freely swap the stickers around the cube. However, this number is inflated compared to what is actually allowed: not all of these configurations are reachable by simple rotations. Specifically, twisting the cube can never rotate a single corner or flip a single edge independently of the other cubies, nor can it interchange just two individual cubies. These constraints contribute factors of 3, 2, and 2, respectively, so only one in every 12 sticker arrangements is reachable. Correcting for this overcounting, we arrive at the actual number of 3x3x3 Rubik's cube configurations:

$$\frac{8! \times 3^8 \times 12! \times 2^{12}}{12} = 43{,}252{,}003{,}274{,}489{,}856{,}000 \approx 4.33 \times 10^{19}$$

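To sanity-check this arithmetic, here is a minimal Python sketch (my own, not from the original post) that reproduces both counts:

```python
from math import factorial

# All ways to place and orient cubies if stickers could be swapped freely:
# 8! corner placements x 3^8 corner orientations x
# 12! edge placements x 2^12 edge orientations.
naive = factorial(8) * 3**8 * factorial(12) * 2**12

# Only 1 in 12 of these is reachable by legal twists: factors of
# 3 (corner twist), 2 (edge flip), and 2 (permutation parity).
reachable = naive // 12

print(f"{naive:,}")      # 519,024,039,293,878,272,000
print(f"{reachable:,}")  # 43,252,003,274,489,856,000
```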
So, how are Rubik's cubes related to entropy? As we have shown, an unsolved Rubik's cube has many distinct configurations that classify the unsolved state. This idea relates to the concept of microstates and macrostates in thermodynamics. If I handed you a box containing 100 particles and specified the temperature of the box, would you be able to tell me exactly how fast each particle is moving? No, of course not. Loosely speaking, the temperature of the box is related to the average kinetic energy (i.e. the average speed of the particles), so at best you could tell me the average speed of the particles. One would then say that the macrostate of the box is described by its temperature, while the microstate is defined by the position and speed of each individual particle. Clearly, many different microstates result in the same macrostate, since there are many different combinations of 100 speeds that give the same average. This is where the Rubik's cube analogy comes in. The macrostate of a Rubik's cube could be defined as how 'solved' it is (classified simply as solved or unsolved in this case), while knowledge of its microstate would require specifying the orientation and position of every cubie. Interestingly, the solved state of a Rubik's cube has only one associated microstate: knowing a cube is solved immediately tells you the orientation and position of all its cubies.

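As a small illustration of this idea (a toy sketch of my own, not from the post), here are two distinct microstates, i.e. two different assignments of 100 particle speeds, that share the same macrostate, i.e. the same average speed:

```python
import random

random.seed(0)

# One microstate: 100 particles with random speeds.
speeds_a = [random.uniform(1.0, 10.0) for _ in range(100)]

# A second, different microstate: let neighbouring particles trade some
# speed pairwise. Individual speeds change, but each pair keeps its sum,
# so the average (the "temperature" of the box) is untouched.
speeds_b = speeds_a[:]
for i in range(0, 100, 2):
    delta = random.uniform(-1.0, 1.0)
    speeds_b[i] += delta
    speeds_b[i + 1] -= delta

print(sum(speeds_a) / 100)  # same average...
print(sum(speeds_b) / 100)  # ...despite different individual speeds
```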
 

I’ve yet to introduce entropy, but before I do so let’s delve a bit deeper into the progression of solving a Rubik’s cube (I promise this is relevant).

 

Rather than classifying the state of a Rubik’s cube as a simple solved-unsolved binary, we can consider intermediary stages one might visit in solving a Rubik’s cube. Without going into too much detail, we can describe the state of a Rubik’s cube, between completely unsolved and solved, in one of five ways.

[Figure: the five macrostates, from completely unsolved to solved, with the number of associated microstates shown beneath each stage.]

These configurations describe our new macrostates, each with an associated set of microstates, the number of which is shown beneath each stage in the figure above. Note that as we progress in solving a Rubik's cube (effectively increasing how 'ordered' it is), the number of microstates contained in each macrostate decreases.

 

Now we’re ready to define entropy (denoted S).

$$S = k_B \ln \Omega$$

The $k_B$ term (the Boltzmann constant) is simply a constant of nature; hence, for the purposes of this discussion, we'll focus on how entropy varies with the argument of the logarithm, $\Omega$: the number of unique microstates in a given macrostate.

 

So, how is entropy useful? Well, in a lot of ways. For one, notice that entropy depends only on the number of microstates; in particular, the formula for entropy makes no explicit reference to a given macrostate. However, with reasonable inference, we can derive characteristics of our macrostates using entropy. For example, if I told you that $S = 0$, you would instantly know that the Rubik's cube is solved: since $\ln 1 = 0$, an entropy of zero means exactly one microstate, and only the solved macrostate has a single associated microstate.

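As a quick numerical sketch (the function and printed values are my own illustration), we can plug in $\Omega = 1$ for the solved cube and the full configuration count for a completely scrambled one:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, in J/K

def entropy(num_microstates: int) -> float:
    """Boltzmann entropy S = k_B * ln(Omega)."""
    return K_B * math.log(num_microstates)

print(entropy(1))                           # 0.0 J/K: the solved cube
print(entropy(43_252_003_274_489_856_000))  # ~6.2e-22 J/K: fully unsolved
```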
 

You may be thinking to yourself now, as I did when I first learned about this, that entropy doesn't tell us anything new, but rather overcomplicates something naturally quite intuitive. In practice, however, entropy provides a quantifiable measure that connects to the macroscopic properties of a system. That is, if we know the entropy of a system, then through a series of calculations we can derive important macroscopic properties (temperature, pressure, etc.).

 

As an additional application, consider the illustrious concept of the 'arrow of time'. If I showed you a video of a ball rolling up a hill and asked you whether I'm playing the video in reverse or as it happened, how would you answer? Of course, we would all recognize that the video is being played in reverse; that's just how gravity works. As with many macroscopic phenomena, we generally have excellent intuition for determining the natural progression of time. But for microscopic phenomena this intuition often fails. The second law of thermodynamics tells us that the total entropy of the universe must always increase. So, much as with the ball rolling up the hill, if we have a stream of measurements from a totally isolated system, whatever it may be, and observe its entropy decreasing, we can confidently deduce that we are receiving these measurements in the reverse of the order in which they were taken. Just as we can rely on gravity to always pull things toward the earth, we can always rely on the entropy of the universe to increase.

 

Relating this back to the Rubik's cube: if I were randomly twisting a Rubik's cube, would you expect it to ever become solved? In our experience, this would essentially never happen; randomly twisting a Rubik's cube will almost always drive you farther from the solved state. This relates to the tendency of entropy to increase, as the level of 'solved-ness' is tied to the cube's entropy. More precisely, the cube's entropy grows with the number of microstates associated with its macrostate, and that number shrinks as the cube goes from unsolved to solved. Generally, to make something more ordered, effectively decreasing its entropy, one must input some energy. When I solve a Rubik's cube its entropy is being decreased, but in doing so I make a deliberate effort to rotate the cube in just the right way, which may be thought of as a characteristic input of energy on my part.

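To make this concrete, here is a toy sketch (my own simplified analogue, not a full cube simulator): randomly swap the elements of a small list and count how often the fully ordered, 'solved' arrangement appears. Even with only 6 pieces and 720 possible states, ordered moments are rare; scale the state count up to ~4.3 x 10^19 and they effectively never occur.

```python
import random

random.seed(1)

# Toy analogue of random cube twists: a list of n pieces, shuffled by
# random swaps. There are n! possible arrangements, only 1 of which
# is "solved" (sorted).
n, steps = 6, 1_000_000
solved = list(range(n))
state = solved[:]
solved_visits = 0

for _ in range(steps):
    i, j = random.randrange(n), random.randrange(n)
    state[i], state[j] = state[j], state[i]  # one random "twist"
    if state == solved:
        solved_visits += 1

# Roughly steps / n! = 1,000,000 / 720 visits are expected, since the
# random walk spreads evenly over all arrangements in the long run.
print(solved_visits)
```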