# Hidden Dimensions

## Phiphy's Physics Study Notes

• Graduate Student in Physics @ Johns Hopkins University

## Field Mass: The Inspiration from a Mattress

Posted by Phiphy on 11/15/2009

What is mass? What is the origin of mass? These questions are hard enough in particle mechanics, and even more obscure in field theory. For particles, we say that mass describes the property of inertia; for fields, mass describes the dispersion relation, i.e., the relation between the wave number and the frequency of an excitation of the field. Well, that is too abstract. Is there any intuitive picture for the mass of a field?

Let’s first consider a mattress made of classical oscillators. Each oscillator has mass m, the spatial separation between two adjacent masses is l, and the spring constant between them is k. Then we can write down the Lagrangian: $L=\sum_i\frac{1}{2}[m\dot{q}_i^2-k(q_i-q_{i+1})^2]$

Taking the continuum limit, we send $l\to0, m\to0$: $L=\lim_{l,m\to0}\int \frac{d^Dx}{l^D}\frac{1}{2}\left[m\dot{q}^2-kl^2(\partial_xq)^2\right]$

Redefine the field as $q=\frac{\phi}{\sqrt{\sigma}}$, where $\sigma=\frac{m}{l^D}$ is the mass density of the oscillators. After this redefinition of parameters, we get a massless scalar field with wave speed $c^2=\frac{kl^2}{m}$.

But what if we want a massive scalar field? Add one more term to the Lagrangian: $\Delta L=-\frac{1}{2}\sum_ik'q_i^2$
After the continuum limit and field redefinition, this term becomes $\Delta L=-\frac{1}{2}\frac{k'}{l^D\sigma}\phi^2=-\frac{1}{2}\frac{k'}{m}\phi^2$
This is just the field mass term, with mass given by $M^2=\frac{k'}{m}$ (in units where the wave speed is 1).

The physical meaning of this term is that besides the springs connecting the oscillators to each other, there are additional springs connecting each oscillator to a fixed point, so that the whole mattress cannot move freely. What is this solid “wall” that holds the mattress field? It can be the vacuum of another field. And yes, this is a natural and intuitive picture of the Higgs mechanism! I like it more than the celebrity picture shown in the Higgs cartoon.
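To make the mattress picture concrete, here is a small numerical sketch (my own, not part of the original post; all parameters are arbitrary): diagonalize the dynamical matrix of a ring of $N$ oscillators with nearest-neighbor springs $k$ and “wall” springs $k'$, and compare with the dispersion relation $\omega^2(q)=\frac{4k}{m}\sin^2(ql/2)+\frac{k'}{m}$. The gap at $q\to0$ is exactly the mass term $k'/m$.

```python
import numpy as np

# Hypothetical parameters for a 1D "mattress" ring of N oscillators.
N, m, k, kp = 64, 1.0, 1.0, 0.25   # kp is the extra spring k' to the wall

# Dynamical matrix from m*q''_i = -k(2q_i - q_{i-1} - q_{i+1}) - k'*q_i
D = np.zeros((N, N))
for i in range(N):
    D[i, i] = (2 * k + kp) / m
    D[i, (i - 1) % N] = -k / m
    D[i, (i + 1) % N] = -k / m

omega2_numeric = np.sort(np.linalg.eigvalsh(D))

# Analytic dispersion: omega^2(q) = (4k/m) sin^2(q l / 2) + k'/m, with q l = 2 pi n / N
n = np.arange(N)
omega2_analytic = np.sort(4 * k / m * np.sin(np.pi * n / N) ** 2 + kp / m)

print(np.allclose(omega2_numeric, omega2_analytic))  # True
# The smallest eigenvalue is the q -> 0 gap, i.e. the "mass" squared k'/m:
print(abs(omega2_numeric[0] - kp / m) < 1e-9)        # True
```

Setting `kp = 0` removes the gap and recovers the massless field of the previous paragraph.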

## QFT Journal Club 2: Propagator Theory

Posted by Phiphy on 10/08/2009

Time: 10/3/2009

Speaker: James Murray

### Outline:

• Non-relativistic propagators in quantum mechanics.
• Propagators of scalar field.
• Propagators of fermion field.
• The perturbative solution of propagators.

### Reference:

Bjorken & Drell, Relativistic Quantum Mechanics, Chapter 6.

Only part of the following comments were made during the meeting.
• Propagators as Green’s Functions

The mathematical meaning of a propagator is just the Green’s function of the equation of motion (EOM) of the particle or field: $\widehat{L}(x)G(x,x')=\delta(x-x')$
where $\widehat{L}(x)$ is a linear differential operator.

We learned in mathematical methods of physics that the relationship between Green’s functions and the general solution of an EOM with a source depends on the order of derivatives in the EOM and also on the boundary conditions. Generally, if an EOM is second order in the time derivative, we need to know the initial conditions of both the solution and its first time derivative. In non-relativistic quantum mechanics (NRQM), we only have a first-order time derivative, so it’s legitimate to write down the general solution of the wave function as $\Psi(x',t')=i\int d^{3}x dt G(x',t';x,t)f(x,t)$

This is just an application of Huygens’ principle for the evolution of waves. In relativistic quantum mechanics (RQM), the Klein-Gordon equation is second order in the time derivative, yet we used the same evolution formula for the wave functions. Why? Because we set the initial conditions as $\Psi(x,t\to -\infty )=\frac{\mathrm{d} }{\mathrm{d} t}\Psi(x,t\to -\infty )=0$
You may notice that there are no boundary terms in space either, because they are also set to 0.

The Green’s function itself also depends on boundary conditions. It’s remarkable that ‘time ordering’ is just a kind of boundary condition. In NRQM, we impose the step function $\theta (t)$ as the boundary condition $G(t)=0$ for all $t<0$. This gives us the retarded Green’s function. In RQM, the boundary condition becomes $F(x,t)=G(x,t)$ for $t>0$ and $F(x,t)=G(-x, -t)$ for $t<0$, where $G(x,t)$ is the on-shell Green’s function without the time-ordering condition. $F(x,t)$ is just the Feynman propagator.
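As a classical illustration of the retarded boundary condition (my own sketch, not from the talk; the frequency, source, and grid are arbitrary choices): for a driven oscillator $\ddot{x}+\omega^2 x=f(t)$ the retarded Green’s function is $G(t)=\theta(t)\sin(\omega t)/\omega$, and convolving it with the source reproduces the solution that starts from rest in the far past.

```python
import numpy as np
from scipy.integrate import odeint

# Retarded Green's function of the driven oscillator x'' + w^2 x = f(t):
# G(t) = theta(t) sin(w t)/w, i.e. G = 0 for all t < 0 (the retarded condition).
w = 2.0
t = np.linspace(-10, 10, 8001)
dt = t[1] - t[0]
f = np.exp(-t**2)                               # a sample localized source

G = np.where(t >= 0, np.sin(w * t) / w, 0.0)
mid = len(t) // 2                               # index of t = 0
x_green = np.convolve(f, G)[mid:mid + len(t)] * dt   # x = (G * f)(t), Riemann sum

# Direct integration with x = x' = 0 in the far past (the stated initial data)
def rhs(y, ti):
    x, v = y
    return [v, -w**2 * x + np.exp(-ti**2)]
x_ode = odeint(rhs, [0.0, 0.0], t)[:, 0]

# Compare where the truncated G grid is reliable (well below the grid edge)
mask = t <= 6.0
print(np.max(np.abs(x_green[mask] - x_ode[mask])) < 0.02)  # True
```

The choice $G=0$ for $t<0$ is exactly what makes the response depend only on the source’s past.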

In RQM or QFT, we only deal with the simplest cases with the simplest boundary conditions. By contrast, in condensed matter physics there can be many unusual boundary conditions, and the problem becomes much more complicated.

• Propagators as Correlation Functions

There is another name for propagators which is more commonly used in statistical mechanics: two-point correlation functions. Propagators are just a kind of correlation function. Since a propagator correlates two states in space-time, it describes the evolution, i.e., the propagation, of the particle or field. In condensed matter physics, the more commonly used correlation functions are defined in pure space, where time is not a variable. Although they have different properties, their essence is the same: for a given system, if we know the probability amplitude of the ‘particle’ at one point in the ‘space’, a correlation function gives us the amplitude of the ‘particle’ at another point under such conditions. Here the ‘particle’ can also mean a field or an abstract state, and ‘space’ can also mean space-time or any abstract phase space.

By understanding its physical meaning, it’s not difficult to write down the general, abstract formula for two-point correlation functions: $G(\alpha ,x;\alpha ',x')=\langle \alpha ,x|\alpha ',x' \rangle=\sum_{n} \langle \alpha ,x|n \rangle \langle n|\alpha ',x' \rangle$
where $x$ and $x'$ are space-time indices, $\alpha, \alpha '$ are indices for all internal degrees of freedom, and $|n \rangle$ is a complete set of states.
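A small numerical check of this completeness-sum formula (my own toy example; the particle-in-a-box finite-difference Hamiltonian, grid size, and imaginary time $\tau$ are arbitrary assumptions): build the two-point function $G(x,x')=\langle x|e^{-\tau H}|x'\rangle$ once by summing over a complete set of eigenstates, and once by exponentiating $H$ directly.

```python
import numpy as np
from scipy.linalg import expm

# G(x, x') = <x| e^{-tau H} |x'> = sum_n <x|n> e^{-tau E_n} <n|x'>,
# the spectral sum over a complete set of states |n>.
# Finite-difference Hamiltonian for a particle in a box (hbar = mass = 1).
N, L = 50, 1.0
dx = L / (N + 1)
H = (np.diag(np.full(N, 2.0))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / (2 * dx**2)

tau = 0.05
E, psi = np.linalg.eigh(H)                    # psi[:, n] = <x|n> on the grid
G_spectral = psi @ np.diag(np.exp(-tau * E)) @ psi.T   # sum over the complete set
G_direct = expm(-tau * H)                     # e^{-tau H} computed independently

print(np.max(np.abs(G_spectral - G_direct)) < 1e-6)  # True
```

The two constructions agree because $\sum_n |n\rangle\langle n| = 1$: inserting the complete set is exactly what the abstract formula above does.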

• How physical is a propagator?

In QM, we learned that some quantities are physical and some are not. For example, wave functions are not physical, because they can have different phases for the same physics; but the modulus squared of a wave function, or of an inner product of wave functions, is physical, because it is a probability which can be measured directly. Generally speaking, a physical quantity respects the symmetry of the system, i.e., it must be an invariant of the symmetry group, while an unphysical quantity need not satisfy this constraint.

We’ve seen that propagators are amplitudes, so the question ‘how physical is a propagator’ can be translated into the question ‘how physical is an amplitude’. In QM, an amplitude is almost physical, in the sense that the only unphysical freedom is its phase. So if we impose Lorentz symmetry on the system, it’s natural to require amplitudes, and therefore propagators, to be Lorentz invariant. However, this is not always the case in RQM. The propagator of a scalar field is Lorentz invariant because the two states being correlated are themselves Lorentz invariant. But we’ll see later that a propagator can be a tensor which is only Lorentz covariant, e.g., the fermion propagator, or it can even have no Lorentz covariance property at all, e.g., the photon propagator in Coulomb gauge. This is because each component of the propagator describes the correlation of two components of the field, and even the field components can be unphysical, as we’ll see in gauge field theory.

• Classical and Quantum Propagators

As long as there is a linear differential equation, there is a set of Green’s functions. Physically this means propagators exist in every wave system, whether quantum or classical. Quantum mechanics (including relativistic quantum mechanics) treats single particles as waves, which is why it is also called wave mechanics. In this sense, QM is essentially the same as a classical field theory (of course, QM is a special field theory in that it is only first order in the time derivative). For example, the Klein-Gordon equation is a quantum equation for relativistic scalar particles, but it is also a classical equation for relativistic scalar fields. Then what’s the difference between quantum and classical theories? They have different propagators. We use retarded propagators in classical field theory and Feynman (time-ordered) propagators in relativistic quantum mechanics; they have different boundary conditions in time, chosen according to different physical conditions. In classical field theory, we do not want any particle or anti-particle to be produced, so we use propagators which start at an initial time and always propagate forward in time with positive pole energy. In quantum mechanics, we tried to keep the same property, yet realized that in order to combine quantum mechanics with relativity, we have to end up with a strange propagator in which negative-energy modes propagate backward in time. We then have to accept the concept of anti-particles and realize that RQM is internally inconsistent: we start with a single-particle system yet end up with a system in which particle number is not conserved. The only solution is to further treat particle wave functions as field operators, which brings us to quantum field theory. This is called second quantization. In fact, there is another route to QFT: quantize the classical field directly, in which case all the symmetries and dynamical variables are already prepared.

It is remarkable that the uncertainty principle and the ‘off-shell’ behavior of a propagator are not exclusively properties of quantum mechanics. In fact they are properties of waves, and they also exist in classical field theories. When there are sources, the EOM of free waves no longer holds at all times, so the dispersion relation changes, and that is the cause of ‘off-shell’ propagation.

• The Role of Propagators in Perturbative Solutions of the EOM

In principle, if we can exactly solve the EOM of a system with a certain boundary condition, all dynamics, and hence the state of the system at any future time, is determined, and our work is done. However, most EOMs are not easy to solve analytically, especially when there are interactions. We saw in QM that there are only a few exactly solvable systems. So we introduced a powerful tool: perturbation methods. In QFT, solving the EOM becomes even more difficult because the EOM with interaction is usually nonlinear and the interaction always depends on time. To use the perturbation method, we have to make two assumptions. First, in order to use the superposition principle of Green’s functions, we assume that the interaction only happens in a finite region of space-time, and that we are only interested in initial and final states which are far from the interaction region and can be considered free particles, for which the EOM is linear. Second, we assume that the interaction is small and can be expanded perturbatively; then, as in time-dependent perturbation theory in QM, we can calculate the correlation functions iteratively, with the different interaction points connected by propagators of free particles (note: free does not mean on-shell, it just means the propagator solved from the EOM without interactions).

• The structure of propagators in QFT

Now we’ve seen two kinds of propagators, the one for scalars: $G(p)=\frac{i}{p^2-m^2+i\varepsilon }$
and the one for spinors: $G(p)=\frac{i (\hat{p}+m) }{p^2-m^2+i\varepsilon }$
where $\hat{p}=\gamma ^{\mu}p_{\mu}$ is a matrix carrying spinor indices.
We see that the spinor propagator equals a scalar propagator, which provides the pole and time-ordering structure, multiplied by a matrix in spinor space. This structure is very general. Whether the field is a scalar, a spinor, or a vector, each component (polarization) of the field propagates like a scalar, and we only need to sum over all the components to get the total propagator; that is why the numerator of the spinor propagator appears. Let’s use this viewpoint to look at a vector propagator:
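The spinor numerator can be checked directly (my own sketch, using the Dirac representation of the gamma matrices, which is one standard convention): since $\hat{p}^2=p^2$, the matrix $i(\hat{p}+m)/(p^2-m^2)$ is exactly the inverse of the Dirac operator $(\hat{p}-m)$ times $i$.

```python
import numpy as np

# Dirac representation of the gamma matrices (a standard convention choice).
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
gamma = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])       # metric signature (+,-,-,-)

m = 1.5
p = np.array([2.0, 0.3, -0.7, 1.1])          # an arbitrary off-shell momentum
p_lower = eta @ p
pslash = sum(gamma[mu] * p_lower[mu] for mu in range(4))
p2 = p @ eta @ p

print(np.allclose(pslash @ pslash, p2 * np.eye(4)))        # True: pslash^2 = p^2

# Spinor propagator S = i (pslash + m) / (p^2 - m^2) inverts the Dirac operator:
S = 1j * (pslash + m * np.eye(4)) / (p2 - m**2)
print(np.allclose((pslash - m * np.eye(4)) @ S, 1j * np.eye(4)))  # True
```

(The $i\varepsilon$ is dropped here since the chosen momentum is off-shell and the denominator is nonzero.)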

A vector field can be written as $A_{\mu}(x)={\varepsilon}_{\mu}^{\lambda}a^{\lambda}(x)$
where ${\varepsilon}_{\mu}^{\lambda}$ is the polarization basis and $a^{\lambda}(x)$ is the ‘coordinate’ in this direction, which is a pure number. To calculate the propagator, we take $a^{\lambda}(x)$ as a delta function at point x. According to the definition of correlation functions mentioned earlier, $G_{\mu \nu}(x-x')=\left \langle A_{\mu}(x)|A_{\nu}(x') \right \rangle=\left \langle {\varepsilon}_{\mu}^{\lambda}a^{\lambda}(x)|{\varepsilon}_{\nu}^{\lambda'}a^{\lambda'}(x') \right \rangle=\left \langle {\varepsilon}_{\mu}^{\lambda}|{\varepsilon}_{\nu}^{\lambda'} \right \rangle \left \langle a^{\lambda}(x)|a^{\lambda'}(x') \right \rangle$
Propagation between different polarizations vanishes (they are orthogonal), so in momentum space $\left \langle a^{\lambda}(x)|a^{\lambda'}(x') \right \rangle=\frac{i{\delta}_{\lambda \lambda'}}{p^2-m^2+i\epsilon}$
then we have $G_{\mu \nu}(p)=\frac{i\sum_{\lambda}{\varepsilon}_{\mu}^{\lambda} {\varepsilon}_{\nu}^{\lambda}}{p^2-m^2+i\epsilon}$
This is the general structure of vector propagators, and $\sum_{\lambda}\varepsilon_\mu^\lambda \varepsilon_\nu^\lambda$ is gauge dependent for a massless vector field, so we see that the propagator is also gauge dependent.
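For the massive case the polarization sum has a closed form, $\sum_\lambda \varepsilon_\mu^\lambda \varepsilon_\nu^\lambda = -(g_{\mu\nu}-p_\mu p_\nu/m^2)$, which is easy to verify numerically in the rest frame (my own sketch; rest frame and signature $(+,-,-,-)$ are the assumed conventions):

```python
import numpy as np

# Massive vector at rest: p = (m, 0, 0, 0), with three spatial polarizations.
m = 2.0
p = np.array([m, 0.0, 0.0, 0.0])
eta = np.diag([1.0, -1.0, -1.0, -1.0])       # metric (+,-,-,-)

eps = [np.array([0.0, 1.0, 0.0, 0.0]),
       np.array([0.0, 0.0, 1.0, 0.0]),
       np.array([0.0, 0.0, 0.0, 1.0])]

pol_sum = sum(np.outer(e, e) for e in eps)    # sum_lambda eps_mu eps_nu
expected = -(eta - np.outer(p, p) / m**2)     # -(g_munu - p_mu p_nu / m^2)
print(np.allclose(pol_sum, expected))         # True
```

Both sides are Lorentz tensors, so the rest-frame check fixes the identity in every frame; for a massless field no such unique covariant completion exists, which is the gauge dependence noted above.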


## QFT Journal Club 1: Groups and Group Representations in Physics

Posted by Phiphy on 09/20/2009

Time: 9/19/2009

Speaker: Chris Brust

### Outline:

• Definition of groups: 4 axioms.
• Finite groups: defined by a multiplication table.
Example: Permutation group $S_3$
• Group representations: mapping a group to a set of matrices.
trivial rep, faithful rep, reducible and irreducible rep, unitary rep
Example: the 3 (and only 3) irreducible reps of $S_3$; 1 faithful but reducible rep of $S_3$
• Lie groups: continuous groups which can also be described as a manifold.
Commonly used Lie groups in physics: $U(N), SU(N), O(N), SO(N), L(N), GL(N), Sp(N), E_N$
• Group properties: isomorphism (between two groups), Abelian vs. non-Abelian, compactness, connectedness, simply connected or not.
Examples: $U(1), SU(2), O(1,3),$ Poincare group
• Lie algebras: defined by 3 axioms.
• Generating a group from an algebra and vice versa.
Example: Heisenberg algebra and group
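Two items from the outline can be checked in a few lines (my own sketch, not from the talk): realize $S_3$ as permutations and verify the group axioms computationally, then check the representation-theory fact that the squared dimensions of the irreps sum to the group order ($1^2+1^2+2^2=6$).

```python
from itertools import permutations

# S_3 as the permutations of (0, 1, 2), with composition as the group operation.
elems = list(permutations(range(3)))
compose = lambda a, b: tuple(a[b[i]] for i in range(3))
e = (0, 1, 2)                                  # the identity permutation

# Closure, identity, inverses (associativity is automatic for composition).
assert all(compose(a, b) in elems for a in elems for b in elems)
assert all(compose(a, e) == a for a in elems)
assert all(any(compose(a, b) == e for b in elems) for a in elems)

# S_3 has exactly three irreps, of dimensions 1, 1, 2; their squared
# dimensions sum to the group order |S_3| = 6.
print(sum(d * d for d in (1, 1, 2)) == len(elems))  # True
```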

• Very nice talk, informative and well organized. Thank you, Chris.
• Why do we need group theory in physics?

It’s all about symmetry. Symmetry plays a significant role in modern physics. From crystal lattices in condensed matter to elementary particles in high energy physics, it is the symmetric structure that gives a system its rich phenomena, and almost everything we care about in theory is related to how its symmetry is realized and broken. Group theory is an indispensable mathematical tool for describing symmetries. Saying a system has some symmetry means the Hamiltonian or Lagrangian is invariant under the transformations of the corresponding group. So we get a rigid mathematical form for the symmetry, and studying the system becomes studying the Hamiltonian or Lagrangian under such constraints. By using this tool, we can even lift different specific physical systems to some abstract structure and find their common properties, as has happened again and again in the history of physics. One of the most remarkable examples is the 2008 Nobel Prize in physics: Nambu was awarded for his work on spontaneous symmetry breaking in superconductivity, which later played a vital role in particle physics.

• Why are group representations so important in physics?

Groups are only abstract mathematical forms. To connect the math to physics, we need one more step: to find specific representations of the group. Different systems may have the same symmetry, but their constituents can behave very differently under the symmetry transformations. Some may not change, some may exchange identities with each other, some may shift by some values, but the Hamiltonian or Lagrangian is invariant under all these changes. In mathematical language, they are in different representations of the same group. For example, the ones that are kept unchanged are in the trivial representation, i.e., $I$. In particle physics, the role of representations is even more obvious: nature has only one fundamental physical law, which means the groups that describe all the matter in the universe are the same, but why are there so many different species of fundamental particles with different spins and interactions? They are distinguished by different representations. Different spins and momenta are distinguished by reps of the Poincare group; different interactions are distinguished by reps of the gauge groups.

• Why do we also need Lie algebra?

There is a most important class of groups called Lie groups, which play a central role in QFT. Lie groups describe continuous symmetry transformations, e.g., Lorentz transformations, translations, and gauge transformations. However, we usually only care about *local* properties of a system, i.e., how it behaves under infinitesimal transformations. That’s where Lie algebras come in. In geometric language, a Lie group can be viewed as a manifold, each group element is a kind of ‘translation’ on the manifold, and the generators of the Lie algebra form a basis of the tangent space at the identity. (To picture it, you can use an ordinary vector space as an analogy.) By studying the properties of this basis, we can learn the properties of the whole manifold; but wait, not all properties of the manifold are captured by the basis. The same Lie algebra may generate different Lie groups. For example, $SO(2)$ and $U(1)$ are equivalent both as Lie algebras and as groups (i.e., they are isomorphic); while $SO(3)$ and $SU(2)$ have the same algebra, they are different groups ($SU(2)$ is simply connected but $SO(3)$ is not). This is because some discrete structure distinguishes their global properties. Fortunately, in QFT the local properties usually determine the physical observables we care about, e.g., collision cross-sections, lifetimes, etc. So we do not need to be too fussy about distinguishing Lie groups from Lie algebras.
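The $SU(2)$ vs. $SO(3)$ distinction can be made concrete numerically (my own sketch; Pauli-matrix conventions assumed): the two algebras share the commutation relations $[J_a,J_b]=i\epsilon_{abc}J_c$, yet exponentiating a $2\pi$ rotation gives $-\mathbb{1}$ in $SU(2)$ but $\mathbb{1}$ in $SO(3)$.

```python
import numpy as np
from scipy.linalg import expm

# su(2) generators J_k = sigma_k / 2 satisfy [J_a, J_b] = i eps_abc J_c.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
J = [s / 2 for s in (sx, sy, sz)]
comm = lambda a, b: a @ b - b @ a
assert np.allclose(comm(J[0], J[1]), 1j * J[2])

# Same algebra, different groups: rotate by 2*pi about the z axis.
U = expm(-1j * 2 * np.pi * J[2])        # SU(2): minus the identity
Lz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])  # so(3) generator
R = expm(2 * np.pi * Lz)                # SO(3): the identity
print(np.allclose(U, -np.eye(2)), np.allclose(R, np.eye(3)))  # True True
```

This is exactly the global (topological) information that the algebra alone does not see.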

• The first step of constructing a quantum field theory
– One example of group representation theory used in QFT

One of the most important Lie groups in QFT is of course the Poincare group, which carries the physical meaning of special relativity. To make a relativistic quantum mechanics, we only need to let each group element act on a vector (state) of a Hilbert space which satisfies all the axioms of quantum mechanics, giving another vector in the same space: ${\Psi }' = e^{-ix_{\mu}P^{\mu}-\frac{i}{2}\omega_{\mu \nu} J^{\mu \nu} }{\Psi}$

That means this Hilbert space is a symmetric space under the transformations of the group, so we have relativity and quantum mechanics both satisfied. Then our task is to find all the possible representations of the group and do experiments to see which representations are chosen by nature, i.e., which species of particles exist in nature. Mathematically we can prove that translations and Lorentz transformations can be disentangled, and further, that the irreducible representations of the Lorentz group are labeled by integer and half-integer spins. Now we have found all possible kinds of elementary particles in nature! (Assuming relativity and quantum mechanics are correct, of course.) In reality, we see only spin-1/2, spin-1 and spin-2 elementary particles, but who knows whether spin-0 and spin-3/2 elementary particles exist or not; they may be waiting for us at the LHC.

So far we have only discussed Poincare symmetry for free particles. Most interactions are related to gauge symmetry, and they can be studied in a similar manner.

Now we’ve learned the first step of constructing a general quantum field theory: determine all the symmetries of the system, find and select certain representations of the symmetry groups, and write down a Lagrangian which is invariant under the symmetry transformations, using these representations as degrees of freedom.

—————————————————

For your information, $SO(3)$ is in fact isomorphic to $SU(2)/Z_2$, where $Z_2$ is the center $\{1,-1\}$ of $SU(2)$.

Elementary particles with spins higher than 2 are theoretically forbidden for some deeper reasons.

Textbooks on group theory suggested by the speaker:

Georgi, Lie Algebras in Particle Physics

M.S. Dresselhaus, G. Dresselhaus, A. Jorio, Group Theory: Application to the Physics of Condensed Matter

Michael Tinkham, Group Theory and Quantum Mechanics


## Discussions on Entropy

Posted by Phiphy on 09/11/2009

Information entropy is in fact a more fundamental definition than our familiar Boltzmann entropy in stat mech.

When we use the Boltzmann entropy $S=\log\Omega$, where $\Omega$ is the total number of microscopic states of the system for a given macroscopic state, we assume that all microscopic states have the same probability. This is true for most thermal systems. However, if the probabilities of the micro states are not all the same, we have to modify the definition of entropy; then we have the Gibbs entropy, which is equivalent to the Shannon (information) entropy $S=\sum_{i}P_i\log(\frac{1}{P_i})$, where $P_i$ is the probability of state $i$, and the sum runs over all $i$. You can check that if all $P_i$ are equal, this definition reduces to the Boltzmann entropy.
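The reduction to the Boltzmann form is a one-line computation (my own sketch; the distributions are arbitrary examples):

```python
import math

def shannon_entropy(p):
    """Gibbs/Shannon entropy S = sum_i p_i * log(1 / p_i), natural log."""
    return sum(pi * math.log(1 / pi) for pi in p if pi > 0)

# Equal probabilities over Omega microstates reduce to Boltzmann: S = log(Omega)
Omega = 16
uniform = [1 / Omega] * Omega
print(math.isclose(shannon_entropy(uniform), math.log(Omega)))  # True

# Any non-uniform distribution on the same states has strictly lower entropy.
biased = [0.5, 0.25, 0.125] + [0.125 / 13] * 13    # 16 states, probabilities sum to 1
print(shannon_entropy(biased) < math.log(Omega))    # True
```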

So we can give a ‘modern’ interpretation of thermal entropy: the amount of entropy is the amount of information we need to put into the system to determine its micro state for a given macro state. In other words, order means predictability, and for a specific state, predictability means probability. If a thermal system has a larger number of micro states, each state has a smaller probability, which means we have a smaller chance of predicting the right micro state; that’s why such a system has higher entropy.

A very useful lesson from thermal theory for understanding order and disorder in information theory is this: for two systems with the same macro state, the one with independent subsystems has higher entropy than the one with correlated subsystems. For example, an ideal gas has higher entropy than an interacting gas with the same macro state. We can carry this intuition into information theory. Compare two pages with the same number of letters on them. The letters on one page are totally random, while the other page is a well-written article. So we say the letters on the first page are independent, while the letters on the second page are correlated with each other. To determine the micro structure of the first page, we have to put in the information of each letter, with probability $1/26$ for each one. To determine the micro structure of the second page, we only need to put in the information of each word, with probability larger than $(1/26)^n$, where n is the number of letters in the word, since we know that some combinations of letters are definitely not words. If you now calculate the probability of each micro state, the second should be larger than the first. So we say the second page is more ‘predictable’, and the extra information comes from correlation. It’s interesting that we have just interpreted information entropy by using thermal entropy, since a smaller probability for each micro state means a larger number of micro states.
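The two-pages comparison can be quantified with the empirical per-letter entropy (my own sketch; the sample sentence is an arbitrary stand-in for the “well-written article”, and this only measures the single-letter frequencies, not the word-level correlations):

```python
import math
from collections import Counter

def entropy_per_char(text):
    """Empirical Shannon entropy of a string, in bits per character."""
    counts = Counter(text)
    n = len(text)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# A uniformly random page saturates the bound log2(26) ~ 4.70 bits per letter.
# English text has a skewed letter distribution, so it comes in lower.
english = ("the letters on the second page form a well written article so "
           "they are correlated and more predictable").replace(" ", "")
print(entropy_per_char(english) < math.log2(26))  # True
```

Accounting for correlations between letters (digrams, whole words) would lower the figure further, which is exactly the page-two effect described above.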

But there are two cases in which we can only use information entropy. One is, as I mentioned, when the probabilities of the micro states are not all the same; the other is non-equilibrium processes. I am not familiar with either of them. Does anyone know any examples of these cases?

The following is Lightsaber’s explanation of entropy in non-equilibrium process:

Let me start with non-equilibrium situations. In my lecture, I mentioned that “Information is …… boundary condition.” Many thought that “boundary condition” was a phrase I randomly picked; it’s not. Actually, I was referring to the non-equilibrium cases. Non-equilibrium is characterized by non-uniform distributions and time variance. By possessing the information of its spatial distribution, the entropy of the system is reduced. When equilibrium is established, all that information is no longer valid, and the entropy increases. That is why the establishment of equilibrium is always entropy-increasing. A piece of information could be valid only at a particular time, or be valid during the entire time interval we study. In the former case, it’s an “initial value condition” in PDE language, but it is still a boundary condition in the time domain anyway.

Start with the simplest example in non-equilibrium thermodynamics: diffusion. If we connect a full bottle of nitrogen dioxide (bottle A) and a full bottle of air (bottle B) with a glass tube, we see the red color gradually propagate until the gas in both bottles has the same color. This is an entropy-increasing process. At the very beginning, we do know (know = possess a piece of valid information) that there is no air in A and no NO2 in B. With this knowledge, the entropy is relatively low. In the language of probability theory as Shannon used it, the probability of (an infinitesimal domain in) bottle A being filled with NO2 is 1, while it is 0 in bottle B. After the establishment of equilibrium, this piece of information becomes completely invalid, and the entropy is larger.

However, how can we describe the dynamical process (the formal term is “transport process” I think) between the start and the equilibrium? Can we use information theory to process it? I believe so.

In my PERSONAL opinion, which hasn’t appeared in any reference material I’ve read so far, the introduction of fuzzy mathematics will be a possible way to solve it. Darthmaverick is an outstanding expert in this area, but I can discuss it to the best of my knowledge. We can define a “membership function of validity” for any piece of information, which depends on time. This membership function can be taken as “the probability that the piece of information is true”. For example, for the statement “Bottle A is full of NO2 without air”, we can define the membership function as “P(an infinitesimal domain is filled with NO2) - P(an infinitesimal domain is filled with air)”. At the start of the transport process, this membership function is 1, and it becomes 0 eventually. It can be proven (though I haven’t done it myself) that this membership function can be constructed to be proportional to the “entropy decreasing capability” of the corresponding information, as in the example above.

The utilization of this measure is still to be explored. After all, I come up with this combination of non-equilibrium thermodynamics, information theory and fuzzy mathematics independently, and it’s expected that some original work can be done following this direction.

That’s all for now, thank you.

As to the cases in which “the probability of each micro state is not the same”, intuition tells me that they are EQUIVALENT to the non-equilibrium cases, or at least each uniquely corresponds to one. I wish I could prove it mathematically, but I cannot do it in a rigorous way. Possibly it’s wrong anyway.

One possible way of proving it is to consider the symmetry of the system. The asymmetry among different microstates vs. the broken symmetry of the macroscopic system: what’s the connection?

Plus, according to the second law of thermodynamics, a closed system tends to maximize its entropy. Can a system reach the GLOBAL maximum of entropy without eliminating the asymmetry among microstates?

I wish I could find an example in which the symmetry among microstates is essentially and permanently broken. If it exists, and it can be stabilized given time, my hypothesis in the first paragraph is wrong.

Then CoolPro asked an interesting question (in Chinese):

A microstate is a specific, detailed configuration that includes the state of all the particles inside. For an N-particle system, it’s a single point in the 6N-dimensional phase space. For example, for a system described by the canonical ensemble, its equilibrium macrostate consists of numerous microstates, EACH OF WHICH obeys the Gibbs distribution (cuz each microstate contains a complete set of information about all the particles and thus has its OWN distribution), and each has the same probability as the others. If we change the parameters (like T), the equilibrium state will consist of another set of microstates, but each of them will still obey the Gibbs distribution and have the same probability as the others.

My response:

To my knowledge, the Gibbs distribution (more commonly called the ‘Gibbs measure’ or ‘Boltzmann distribution’) gives the probability of a microstate of a system, not the distribution of the particles in that system. If this is not what GL means, that’s fine, we don’t need to debate terminology. But I do have some more comments. The phase space you mentioned is for microcanonical ensembles, in which the density of states is a constant, which means all microstates, no matter what macrostate they correspond to, have the same probability. This is true for microcanonical ensembles (i.e., isolated systems), but not for canonical ensembles, which are exactly what the Gibbs measure describes. In the phase space of a canonical ensemble, the density of a certain microstate is not constant: the microstate can repeat many times, and its density or probability is proportional to the number of microstates of the reservoir which ‘coexist’ with it. When you take this into account and do the calculation, you get the Gibbs measure.
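The counting argument can be checked in a toy model (my own sketch; the Einstein-solid reservoir and all sizes are arbitrary assumptions): weight each system energy by the number of reservoir microstates that coexist with it, and compare the resulting ratios to the Gibbs form $e^{-\beta E}$.

```python
import math

# A small system exchanging energy quanta with an Einstein-solid reservoir of
# N oscillators. P(system microstate with energy E) ~ Omega_res(E_total - E).
def omega(q, N):                                  # multiplicity of an Einstein solid
    return math.comb(q + N - 1, q)

N_res, q_total = 400, 1000
P = [omega(q_total - E, N_res) for E in range(6)]   # system energies E = 0..5
P = [p / P[0] for p in P]                           # ratios P(E) / P(0)

# Gibbs/Boltzmann form exp(-beta * E), with beta = d ln(Omega) / dE estimated
# at the reservoir energy by a one-quantum finite difference.
beta = math.log(omega(q_total, N_res)) - math.log(omega(q_total - 1, N_res))
gibbs = [math.exp(-beta * E) for E in range(6)]

print(all(abs(pe - ge) < 0.01 for pe, ge in zip(P, gibbs)))  # True
```

Nothing exponential was put in by hand: the $e^{-\beta E}$ form emerges purely from counting reservoir microstates, which is the calculation referred to above.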

We should distinguish the ensemble language from the distribution language. The former is only interested in microstates of the whole system. It is so abstract that it never cares about what kind of system it is or the fate of a single particle in it, while this fate, the distribution probability of a single particle, is exactly the focus of the distribution language, and its result varies with the type of system (classical, quantum, interacting or not, etc.). Further, a given microstate in ensemble language has no ‘probability distribution’, cuz it’s totally determined; the so-called distribution is just a description of this state, and it can be way off the Boltzmann distribution, e.g., some higher energy level may have more particles than some lower energy level. Some of us may confuse the ‘Boltzmann distribution’ in ensemble language with that in distribution language. They have exactly the same mathematical form but very different meanings: one is for microstates of a system ensemble, one is for particles in a single system. That they have the same form is just a coincidence, because ensembles are defined as classical and independent of each other, which is the same property as the particles in a classical non-interacting thermal system. But note, we have other kinds of thermal systems, e.g., quantum boson or fermion systems, and they do not obey the Boltzmann distribution.

## Scientist: Four golden lessons (ZZ)

Posted by Phiphy on 07/17/2009

These are bloody correct lessons.

http://www.nature.com/nature/journal/v426/n6965/full/426389a.html

Steven Weinberg

When I received my undergraduate degree — about a hundred years ago — the physics literature seemed to me a vast, unexplored ocean, every part of which I had to chart before beginning any research of my own. How could I do anything without knowing everything that had already been done? Fortunately, in my first year of graduate school, I had the good luck to fall into the hands of senior physicists who insisted, over my anxious objections, that I must start doing research, and pick up what I needed to know as I went along. It was sink or swim. To my surprise, I found that this works. I managed to get a quick PhD — though when I got it I knew almost nothing about physics. But I did learn one big thing: that no one knows everything, and you don’t have to.

Another lesson to be learned, to continue using my oceanographic metaphor, is that while you are swimming and not sinking you should aim for rough water. When I was teaching at the Massachusetts Institute of Technology in the late 1960s, a student told me that he wanted to go into general relativity rather than the area I was working on, elementary particle physics, because the principles of the former were well known, while the latter seemed like a mess to him. It struck me that he had just given a perfectly good reason for doing the opposite. Particle physics was an area where creative work could still be done. It really was a mess in the 1960s, but since that time the work of many theoretical and experimental physicists has been able to sort it out, and put everything (well, almost everything) together in a beautiful theory known as the standard model. My advice is to go for the messes — that’s where the action is.

My third piece of advice is probably the hardest to take. It is to forgive yourself for wasting time. Students are only asked to solve problems that their professors (unless unusually cruel) know to be solvable. In addition, it doesn’t matter if the problems are scientifically important — they have to be solved to pass the course. But in the real world, it’s very hard to know which problems are important, and you never know whether at a given moment in history a problem is solvable. At the beginning of the twentieth century, several leading physicists, including Lorentz and Abraham, were trying to work out a theory of the electron. This was partly in order to understand why all attempts to detect effects of Earth’s motion through the ether had failed. We now know that they were working on the wrong problem. At that time, no one could have developed a successful theory of the electron, because quantum mechanics had not yet been discovered. It took the genius of Albert Einstein in 1905 to realize that the right problem on which to work was the effect of motion on measurements of space and time. This led him to the special theory of relativity. As you will never be sure which are the right problems to work on, most of the time that you spend in the laboratory or at your desk will be wasted. If you want to be creative, then you will have to get used to spending most of your time not being creative, to being becalmed on the ocean of scientific knowledge.

Finally, learn something about the history of science, or at a minimum the history of your own branch of science. The least important reason for this is that the history may actually be of some use to you in your own scientific work. For instance, now and then scientists are hampered by believing one of the over-simplified models of science that have been proposed by philosophers from Francis Bacon to Thomas Kuhn and Karl Popper. The best antidote to the philosophy of science is a knowledge of the history of science.

More importantly, the history of science can make your work seem more worthwhile to you. As a scientist, you’re probably not going to get rich. Your friends and relatives probably won’t understand what you’re doing. And if you work in a field like elementary particle physics, you won’t even have the satisfaction of doing something that is immediately useful. But you can get great satisfaction by recognizing that your work in science is a part of history.

Look back 100 years, to 1903. How important is it now who was Prime Minister of Great Britain in 1903, or President of the United States? What stands out as really important is that at McGill University, Ernest Rutherford and Frederick Soddy were working out the nature of radioactivity. This work (of course!) had practical applications, but much more important were its cultural implications. The understanding of radioactivity allowed physicists to explain how the Sun and Earth’s cores could still be hot after millions of years. In this way, it removed the last scientific objection to what many geologists and paleontologists thought was the great age of the Earth and the Sun. After this, Christians and Jews either had to give up belief in the literal truth of the Bible or resign themselves to intellectual irrelevance. This was just one step in a sequence of steps from Galileo through Newton and Darwin to the present that, time after time, has weakened the hold of religious dogmatism. Reading any newspaper nowadays is enough to show you that this work is not yet complete. But it is civilizing work, of which scientists are able to feel proud.


## Miscellany

Posted by Phiphy on 10/11/2008

The LHC could produce its first batch of data as early as next April, and extra-dimensional gravitons seem to be among the signals requiring the least data. So RS models will bear the first brunt: if they are going to die, they will die first. That is fine with me, since it saves me from wasting my youth on them.

## LHC Physics Mini Workshop

Posted by Phiphy on 05/16/2008

The LHC is coming soon, and everyone in particle physics is excited; you can smell in the air how crazily LHC-driven they are. This mini workshop on LHC physics was held at UMD, for model builders to discuss the phenomenology of models expected to be tested at the LHC.

Quirks:
Markus Luty and Roni Harnik
Both of them talked about the "quirk" (sounds like a bird that calls "quark, quark, quark" while catching a cold). The quirk is the simplest extension of the SM gauge symmetry: a QCD-inspired SU(N) gauge field with strong coupling scale $\Lambda$. If you stretch a quirk-antiquirk pair, the energy stored per unit length is set by $\Lambda$ (a string tension of order $\Lambda^2$); unlike in QCD, it is less than the energy required to produce another quirk-antiquirk pair, so the string never breaks. It can be stretched to a macroscopic scale, depending on the cutoff. Because of the string, the pair can oscillate like a spring, which gives quirks very rich phenomenology. The two talks focused on the detection signals of quirks; they discussed the different lifetimes of the string before it annihilates, and the stuff shaken off during the oscillation.
Markus' quirks also carry QCD color, so they are surrounded by quarks and gluons to form a so-called "brown muck". When the string oscillates, soft pions are the dominant shaken-off particles. The lifetime of the string is estimated by applying the WKB approximation to the wave function of a quantum oscillator, and then tracking the energy and angular momentum carried away by the pions.
Roni actually studied squirks, the superpartners of quirks, which are uncolored under QCD. In his "folded SUSY" model, squarks and quirks are projected out by a $Z_2$ orbifold symmetry, leaving quarks and uncolored squirks as superpartners; this scenario can be realized in an extra dimension. Since the squirks are uncolored, the dominant shaken-off stuff is soft photons and glueballs (I do not understand why there are still gluons even though the squirks are not colored). The lifetime is longer, since the energy taken away by photons is much lower.
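A back-of-the-envelope sketch of why the unbreakable string can reach macroscopic length (the numbers here are my own illustrative choices, not from the talks): with tension $\sigma\sim\Lambda^2$, a pair produced with kinetic energy $E$ stretches until the string has absorbed that energy, $L\sim E/\sigma$.

```python
# Toy estimate of the maximum quirk-antiquirk separation.
# Assumptions (mine, for illustration only): string tension sigma ~ Lambda^2,
# pair kinetic energy E; natural units converted with hbar*c.
HBARC_GEV_M = 1.973e-16  # hbar*c in GeV * meters

def max_separation_m(E_GeV, Lambda_GeV):
    """L ~ E / Lambda^2, converted from GeV^-1 to meters."""
    sigma = Lambda_GeV ** 2  # tension in GeV^2
    return (E_GeV / sigma) * HBARC_GEV_M

# A 100 GeV pair with Lambda ~ 1 keV stretches to centimeter scale:
L = max_separation_m(100.0, 1e-6)
print(f"L ~ {L:.3f} m")  # ~ 2 cm
```

For $\Lambda$ near the GeV scale the same formula gives a microscopic string, which is why the phenomenology depends so strongly on where the new confinement scale sits.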

Loopy fermion mass:
Patrick Fox
This is the most precise "postdiction" of fermion masses I have ever seen, almost too good to be true. But it is interesting, and maybe useful for my project. The model assumes that only the top quark (or one quark, which we call "top") gets its mass at tree level, and all the other up-type fermions get their masses at loop level.
The conventional Yukawa term is forbidden by a new U(1) symmetry of the Higgs, so another U(1)-charged scalar field $\Phi$ is introduced to form a dimension-5 operator, whose UV completion is the propagator of a massive U(1)-charged fermion $\Psi$. The UV completion allows only one type of quark to couple to $\Phi$ and $\Psi$, so after this U(1) symmetry is broken by the vev of $\Phi$, only one type of quark obtains mass. But my question is: is there any symmetry that forbids the other types of fermions from entering this coupling?
The next part is more generic. By introducing a QCD- and EW-charged scalar field $r$ and a set of coupling constants, the following fermions get their masses at the 1-, 2-, 3-, 4-, and 5-loop level respectively: $\tau \to c \to \mu \to u \to e$. The generated masses come very close to the real values while varying the couplings only between 0.3 and 3. This is the most beautiful part of the model.
The down-type masses are much messier. They have to introduce several weird fields and coupling terms, but the couplings are still order one, and the CKM matrix comes out in the right form. Neutrino masses are not explained in this model, but they can be accommodated via the seesaw mechanism.
The most intriguing LHC signals would be those colored and charged scalar fields. The constraint from available data is $m_r > 80\sim 100$ TeV, which leaves no hope of seeing them at the LHC. But the constraint on the scalar field responsible for down-type masses is $m_8 > O(300\text{ GeV})$.
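A crude sanity check of the loop hierarchy (my own naive counting with a generic $\lambda^2/16\pi^2$ loop factor, not the model's actual diagrams): if $m_f \sim m_t\,(\lambda^2/16\pi^2)^n$ at $n$ loops, one can solve for the coupling $\lambda$ each fermion would need.

```python
import math

# Naive estimate (mine, not the talk's calculation): solve
# m_f = M_TOP * (lam^2 / 16 pi^2)**n for the coupling lam at each loop order.
M_TOP = 173.0  # GeV
MASSES = {  # (mass in GeV, loop order) -- PDG-level values
    "tau":      (1.777,    1),
    "charm":    (1.27,     2),
    "muon":     (0.1057,   3),
    "up":       (0.0022,   4),
    "electron": (0.000511, 5),
}

def required_coupling(m_f, n_loops):
    """Coupling lam such that M_TOP * (lam^2/16pi^2)**n_loops = m_f."""
    loop_factor = (m_f / M_TOP) ** (1.0 / n_loops)
    return math.sqrt(16.0 * math.pi ** 2 * loop_factor)

for name, (m, n) in MASSES.items():
    print(f"{name:8s}: {n} loops -> lambda ~ {required_coupling(m, n):.2f}")
```

In this crude counting the required couplings all come out O(1), roughly between 1 and 4, consistent in spirit with the talk's claim of couplings between 0.3 and 3; the exact values of course depend on the model's actual loop functions.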

## LHC – The 0th Year

Posted by Phiphy on 05/09/2008

End June:
Machine cold
Expt beampipe bakeout finished

Mid-July:
Patrol of tunnel and caverns
Controlled access only

End-July:
First injection
Commissioning with beam, including at some point
a few hours of collisions at 900 GeV centre-of-mass energy

End Sept:

Collisions at 10 TeV: 2008 physics run.

End Nov??
Winter shutdown: commission to 14 TeV.

## Dark Energy – Ten Years

Posted by Phiphy on 05/07/2008

There was a symposium held by STScI to review ten years of progress in cosmology since the discovery of the accelerating expansion of the universe. I missed some very important talks on Monday, which is a pity not only because one of the speakers was Witten 😦 but also because it was the only series of talks on the physical (rather than observational) side of dark energy in these four days. I only attended several short talks (mostly 15 minutes each). Some topics for the record:

1 First Results from the WiggleZ Galaxy Redshift Survey
Chris Blake (Swinburne University of Technology)
The WiggleZ project at the Anglo-Australian Telescope is a large-scale redshift survey of UV-selected emission-line galaxies. The survey is mapping a co-moving volume of approximately 1 Gpc^3 at a significantly higher redshift (0.5 < z < 1.0) than has been previously achieved by projects such as the 2dFGRS and SDSS. The main science goal is to use baryon acoustic oscillations in the galaxy clustering pattern as a standard ruler to measure the cosmic distance scale and expansion rate to z=1 and hence perform a robust test of the cosmological constant model. The survey is approximately 50% complete and is scheduled to finish in 2009. I will introduce the project and present initial results on the clustering, environments and luminosity function of high-redshift star-forming galaxies. I will also discuss forecasts for testing dark energy models with WiggleZ in the context of current and future cosmological datasets.

2 The Dark Energy Indicator: A Measure of Deviations of w from -1
Ruth Daly (Penn State University)
The dark energy indicator provides a tool to measure deviations of the equation of state of dark energy from -1 over the redshift range from zero to one. The indicator is model-independent, and will be shown for the most recently available supernova and radio galaxy data sets. The preliminary results are consistent with a constant equation of state w of -1 from a redshift of zero to about one.
Note: This is interesting and may also be useful. The usual way we constrain the cosmological parameters, e.g. $H_0$, $q_0$, $\Omega_{\Lambda}$, $w$, is model dependent, i.e., take a particular model with undetermined parameters, calculate the expected observational curve, and fit it to the data. This talk provided a way to draw the parameters directly from the data. The only assumptions are the RW metric and GR. From the general Friedmann equation, we can express the coordinate distance and its first and second derivatives (with respect to z) in terms of those parameters, so by analyzing the distance-redshift curve directly, we can fit the parameters. In order to determine w, which can vary with z, one constructs another quantity called the "dark energy indicator" s, which is 0 for w=-1. The fitted s from supernova data is 0 for z<1, but as z approaches 1 there is a bump. It is still unclear whether this bump is caused by systematics or has any physical meaning.
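For concreteness, here is the kinematic relation I believe underlies this approach (my reconstruction of the standard Daly-Djorgovski-type method, so treat the details as my assumption rather than the talk's exact formula): writing $y(z)$ for the dimensionless coordinate distance in a flat RW universe, so that $H(z)/H_0=1/y'(z)$, the deceleration parameter follows from derivatives of the data alone, $q(z)=-1-(1+z)\frac{y''(z)}{y'(z)}$, with no model for the dark energy needed. Deviations of such purely kinematic quantities from their $w=-1$ values are exactly what an indicator like s is built to capture.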

3 Uncorrelated Estimates of Dark Energy Equation of State
Asantha Cooray (UC Irvine)
I will give a talk on some of the recent work we have done on how to extract and establish equation of state with supernovae and other cosmological data.
Note: There were three subtopics; I can only remember two. One was about measuring the spectrum by using many, many filters, one for each wavelength band. The other was that type Ia supernovae actually have two subtypes with different light curves, i.e., the distribution of the time difference between maximum luminosity and some given fraction of it has two peaks. And the populations of these two subtypes change differently with redshift: one increases with z, the other decreases. So the precision of measuring distance using this time difference (cosmological time dilation) is reduced by a factor of 2-3. Adam Riess objected immediately. He said that by investigating low-z supernovae we can get enough information to distinguish these two types. Well, he is the quasi-Nobelist on supernovae; no one doubts he can do that.

4 Inflation and Dark Energy: Is There a Connection?
Scott Watson (University of Michigan)
We now have convincing evidence that both today and in the very early universe, the cosmic expansion went through a period of acceleration. A natural question arises: Are these periods of acceleration connected? I will briefly review past attempts to address this question, as well as more recent attempts motivated by the string landscape. In particular, I will discuss a crucial theoretical difficulty that arises in constructing such models, due to the vast range of energy and length scales involved. I will also discuss the possible experimental signatures that may arise if such models can be realized.
Note: If the acceleration today is due to the same mechanism as inflation, w should not be a constant, because the effective w of inflation must change in order to end inflation at some point. Theoretically, the scalar field may not be an elementary dynamical field. One model is the "cascade universe" with a staircase of vacua, and more generally such models are inspired by the string landscape. The difficulty of these models: I can't remember. Think about these possible difficulties: no slow-roll condition, no seeds for fluctuations, unnaturalness (well, this is the weakest one), etc.
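As a reminder of why a dynamical mechanism implies a varying w (this is the standard quintessence/inflaton formula, not something from the talk): for a homogeneous scalar field $\phi$ with potential $V(\phi)$, the equation of state is $w=\frac{\dot{\phi}^2/2-V(\phi)}{\dot{\phi}^2/2+V(\phi)}$, so w sits near -1 only while the field rolls slowly ($\dot{\phi}^2\ll V$) and must depart from -1 as the kinetic energy grows toward the end of inflation.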

## A Probability Problem

Posted by Phiphy on 04/30/2008

Update: