
Superstring Theory February 26, 2009

Posted by tap0340 in Philosophy of physics.
comments closed

Hi everyone,

I’m glad you all tried to stay focused during my talk, even though it was a little heavy and out there! In the following paragraphs, I will briefly discuss the history behind String Theory and some of the people who have dedicated their lives to finding a unified theory of everything. Furthermore, I will try to explain this new and radical idea of strings and extra dimensions. There were good questions and comments during my presentation that I would also like to address.

In the early twentieth century, Albert Einstein developed his Special Theory of Relativity, challenging some of Isaac Newton’s theories of motion. Using this new concept of motion, he decided to tackle an even greater problem that he saw: he wanted to find the cause of gravity. Even Newton himself remarked how strange it seems that a body of mass (the sun), millions of miles away, can reach across empty space and exert influence (gravity) on a smaller body of mass, like Earth. Einstein took the challenge. He found that when mass is present, it causes space and time to warp and curve around it. It is along these curves that smaller masses, like the planets in our solar system, move in an orbital fashion. His theory was confirmed in 1919 by astronomical observation. He became a celebrity overnight, and a paradigm shift ensued.

Around the same time, a German mathematician and physicist named Theodor Kaluza attempted to describe the other force known at the time, electromagnetism. He tried to describe it in a way similar to Einstein’s, using warps and curves. Since there were no more dimensions for him to warp or curve, he postulated one additional dimension of space. When he did this, the formulas he derived produced Maxwell’s equations for electromagnetism along with Einstein’s equations describing gravity. He thought he had finally unified the two forces.

more to come…


Conventionality and the Geometrization of Gravity February 26, 2009

Posted by mgh2577 in Philosophy of physics.
comments closed


Albert Einstein (March 1879 to April 1955)

Einstein discovered the geometrization of gravity first by recognizing that gravity is indistinguishable from a uniformly accelerating frame. From this finding Einstein recognized that light would appear to bend if viewed from a uniformly accelerating frame. Finally, with a little more math than I have involved here, Einstein predicted that since the uniformly accelerating frame is equivalent to gravity, gravity would therefore bend light. This was corroborated by measuring the apparent positions of stars near the sun during a solar eclipse, when the stars are visible over the glare.
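As a rough check on the eclipse prediction, the deflection of light grazing the sun can be estimated with the standard general-relativistic formula δ = 4GM/(c²R). The constants below are approximate textbook values, not figures from this post:

```python
import math

# Approximate physical constants (SI units)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m

# GR deflection for a light ray grazing the solar limb: delta = 4GM / (c^2 R)
delta_rad = 4 * G * M_sun / (c**2 * R_sun)
delta_arcsec = math.degrees(delta_rad) * 3600

print(f"deflection = {delta_arcsec:.2f} arcseconds")
```

This reproduces the famous 1.75 arcseconds that Eddington’s 1919 expedition set out to measure.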

Once this experiment corroborated Einstein’s theory, many other similar experiments were devised, such as the Pound-Rebka experiment in 1959 at Harvard University. Though the first experiment was one that all people could grasp, the Pound-Rebka experiment proved more complicated. It was designed to test the hypothesis that time slows down in a stronger gravitational field. To set it up, scientists at Harvard measured the frequency shift of gamma rays by placing an iron-57 source, mounted on a loudspeaker, at the top floor of their building and a receiver at the bottom, as depicted in the diagram below.

The Pound-Rebka experiment


The result of this experiment corroborated Einstein’s theory to within 10%, an accuracy later improved to 1%.
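The size of the effect Pound and Rebka were chasing can be estimated from the weak-field formula Δf/f ≈ gh/c². The 22.5 m tower height is the commonly quoted figure for the Harvard experiment, not a number from this post:

```python
g = 9.81        # gravitational acceleration, m/s^2
h = 22.5        # m, approximate height of the Harvard tower
c = 2.998e8     # speed of light, m/s

# fractional gravitational frequency shift over height h
shift = g * h / c**2
print(f"fractional shift ~ {shift:.2e}")
```

The answer is a few parts in 10^15, which is why the experiment needed the extremely narrow gamma-ray lines of the Mössbauer effect to detect it.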

The implications of space-time for space exploration in microgravity are a subject I posed a question about in my presentation. Afterward I looked for documentation on this but found none; interestingly, however, I did find that bone loss in space is a problem, causing bones to degenerate at 1–1.5% per month, which was likened to “age-related changes similar to osteoporosis,” though when brought back to Earth the bones were able to regenerate with continued exercise and rehabilitation.

To respond to the question of whether the reality of space-time matters to non-physicists, I would have to say no. Just as there is no reason for every person to know how their computer works, there is no reason they should know the ins and outs of the warping of space-time, especially when the difference in clock rate on Earth between its lowest and highest points amounts to tens of nanoseconds per day, only a few milliseconds over the average lifetime of a human.
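A back-of-the-envelope check of how small the effect is, assuming the height of Everest above sea level and an 80-year lifetime (both assumed figures, not from the post), using the same weak-field formula gΔh/c²:

```python
g = 9.81           # m/s^2
h = 8848.0         # m, height of Everest above sea level (assumed figure)
c = 2.998e8        # m/s
lifetime_s = 80 * 365.25 * 86400   # ~80 years in seconds

frac = g * h / c**2                # fractional rate difference, ~1e-12
per_day = frac * 86400
per_life = frac * lifetime_s
print(f"{per_day*1e9:.0f} ns per day, {per_life*1e3:.1f} ms per lifetime")
```

Either way, the discrepancy is far below anything a person could notice without an atomic clock.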






Many Worlds Interpretation February 23, 2009

Posted by jfd5010 in Philosophy of physics.
comments closed

The Many Worlds Interpretation (MWI) is a metatheory that seeks to resolve the tension between a probabilistic quantum mechanics and a deterministic Newtonian and relativistic framework. The problem that MWI attempts to fix is wave function collapse: the event in which a probability function dictating an event spontaneously “collapses” into a single deterministic result. MWI removes the collapse by creating a “world” for each possibility. The result is that the universe (or in this case multiverse) is a superposition of all possible worlds. These worlds contain the sum of everything observable and present at the time of the split (essentially what we would describe as our universe). Worlds are by nature non-communicating and increasingly divergent: there is no possibility of travel between worlds, and even if the “determining event” were infinitesimally small, the initial difference would slowly grow over time, causing the worlds to further differentiate.

The largest philosophical issues caused by this metatheory are related to choice, free will, and personal identity. If we view decision-making and by extension, thought in general, as a physical process, then the minute changes in thought are in fact at some level quantum events. These events are governed by probability functions that will eventually result in a single deterministic event. Based on this analysis the other possible outcomes must have occurred and are proceeding in alternate worlds. So what is choice? Even more concerning is the concept of individuality. Are the people identical to you in alternate universes you?

MWI is a fairly popular metatheory and is compatible with all linear quantum mechanics. The implications of this theory are primarily mathematical but issues of self and choice have been hotbeds of debate.


Wikipedia, “Many-worlds interpretation”

Lev Vaidman, “Many-Worlds Interpretation of Quantum Mechanics,” Stanford Encyclopedia of Philosophy


Branching "Worlds"

Entropy and Time February 20, 2009

Posted by mcw5247 in Metaphysics, Philosophy of physics.
comments closed

The Second Law of Thermodynamics states that a closed system will never decrease in entropy; that is to say, the disorder of a given system will never spontaneously decrease. Ludwig Boltzmann worked with this idea while developing a statistical approach to describing how gas molecules interact within a closed container. From this work Boltzmann began developing his view of the arrow of time.

His ideas began with watching the gases interact and seeing that, given enough time, they always move to the highest-entropy state. He stated that he believed time could only move in one direction: forward. His evidence for this was to give examples of what he called irreversible events, such as air leaking from a balloon or a hot liquid cooling off at room temperature. These are events that always happen in one direction and never the other, and he claimed this as evidence that time can only flow forward.
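Boltzmann’s statistical picture can be sketched with a toy model, the Ehrenfest urn (an illustration I am supplying, not something from the post): N molecules in a box, one chosen at random each step and moved to the other half. However lopsided the start, the occupancy drifts toward an even split, the highest-entropy arrangement, and essentially never drifts back:

```python
import random

random.seed(0)
N = 1000
left = N          # start with every molecule in the left half (low entropy)

for step in range(20000):
    # pick one molecule uniformly at random and move it to the other side;
    # the chance it currently sits on the left is left / N
    if random.random() < left / N:
        left -= 1
    else:
        left += 1

print(left)  # hovers near N/2 = 500
```

Reversal is not impossible, only staggeringly improbable, which is exactly the statistical reading of irreversibility that Boltzmann defended.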

From this idea, one can pose the question of how the universe can be sustaining an increase in entropy over a very long period of time. One response to this is the low entropy early universe, which is to say that shortly after the universe formed all of the matter within it was evenly distributed. This would have been a very low entropy state for the universe to be in and would thus allow for entropy to increase over a long period of time.

Through argument, Boltzmann later refined his ideas and finally came to his entropy-curve discussion. Here he said that it was possible for entropy to be constant when looked at over a long enough period, and that we could merely be on a fluctuation of this curve where it appears to us that entropy always increases. This idea is disturbing, however, as many of our laws depend on how we currently view entropy.

The Large Hadron Collider and its Relation to Cosmic Rays February 18, 2009

Posted by bmwcarey in Philosophy of physics, Science & society.
comments closed

The Large Hadron Collider is considered an impending catastrophe by some people.  While it is a milestone of scientific achievement and a portrayal of humankind’s resolute endeavor to attain a greater knowledge and understanding of the nature of the universe, the LHC experiment is neither new nor infrequent in nature.

The concept behind this immense experiment is to provide an adequate replication of the nature of matter shortly after the Big Bang, a time in which particles propagated and collided at astonishing kinetic energies and, consequently, considerable velocities.  The scientists behind CERN mean to collide two high-energy particles and observe the underlying mechanisms.  Speculations pertaining to the possible materialization of microscopic black holes and other disastrous scenarios instill fear in some people.  Though matter already behaved in this fashion 13.7 billion years ago, we require more compelling and currently observable evidence to justify the safety of the LHC.  Adrian Kent explains that counterarguments of catastrophic mechanisms, “show that the existence of the catastrophe mechanism is highly improbable, either because closer analysis shows that the proposed mechanism does in fact contradict well established physical principles, or because its existence would imply effects which we should almost certainly have observed but have not.”


Betelgeuse, a star of approximately 20 solar masses. Imaged in ultraviolet. Image courtesy of NASA/ESA. Image found through Wikipedia.

Consider a massive star of more than eight solar masses.  Stars spend most of their lifetime in the main sequence, a period in which hydrogen fusion is active in the star core.  Massive stars are also capable of fusing heavier nuclei, such as helium into carbon, carbon into oxygen, and so on.  However, once these stars develop an iron core, they are incapable of conducting further core fusion; iron does not generate nuclear energy.  The degeneracy pressure of the core cannot sustain itself against the gravitational force of the star’s outer layers.  The inert iron core eventually collapses, releasing an overwhelming amount of energy and disseminating the outer layers in a supernova.

The Crab Nebula, a supernova remnant.  Courtesy of NASA/ESA.  Image found through Wikipedia.


Within the hot remnants of supernova explosions, particles collide with each other until they escape at relativistic speeds.  These particles, identified as cosmic rays, may have kinetic energies greater than 10^20 eV, a figure of much greater magnitude than the maximum energy of LHC particles, 7 x 10^12 eV.  Earth’s atmosphere is bombarded by cosmic rays on a regular basis; occasionally, they even penetrate the atmosphere and reach the surface.  Despite this frequent occurrence, we have yet to witness the destruction of Earth, or other astronomical bodies, by means of microscopic black holes emerging from collisions between cosmic rays and other particles.  However, this does not undermine the LHC; it is a feat to be able to observe high-energy particle collisions in controlled experiments.
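One way to make the comparison concrete is the center-of-mass energy: a cosmic ray of 10^20 eV striking a proton at rest yields √(2·E·m_p c²), far above the LHC’s design collision energy of twice the 7 TeV beam energy. A quick sketch using standard relativistic kinematics (the formulas are textbook results, not from the post):

```python
import math

m_p = 0.938e9          # proton rest energy, eV
E_cr = 1e20            # cosmic-ray energy, eV (fixed-target collision)
E_beam = 7e12          # LHC beam energy per proton, eV

# fixed-target collision: sqrt(s) ~ sqrt(2 * E * m_p c^2)
s_cr = math.sqrt(2 * E_cr * m_p)
# head-on collider: sqrt(s) = 2 * E_beam
s_lhc = 2 * E_beam

print(f"cosmic ray:  {s_cr/1e12:.0f} TeV center-of-mass")
print(f"LHC:         {s_lhc/1e12:.0f} TeV center-of-mass")
```

Even after the fixed-target penalty, the most energetic cosmic rays dwarf the LHC by more than an order of magnitude, which is the heart of the safety argument.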

It is worth mentioning that the scientists behind CERN are also ordinary people.  They too have families, friends, and other attachments to the world around us.  While they are not immune to making mistakes, it would be erroneous to think they disregard the safety of others.

Through the LHC, scientists hope to uncover the elusive Higgs boson, a hypothesized particle thought to be the origin of mass.  Other aspirations include breakthroughs for String Theory and a more concrete understanding of dark matter.  CERN suffered a setback when an incident befell the LHC on September 19, but they currently intend to see the collider operational by late spring or early summer.

And now, the LHC rap!


Bennett, Jeffrey O., Megan Donahue, Nicholas Schneider, and Mark Voit. The Cosmic Perspective: Stars, Galaxies, and Cosmology. San Francisco: Benjamin Cummings, 2007.

Adrian Kent. “A Critical look at risk assessments for global catastrophes” (pdf). 

“CERN – The Large Hadron Collider.” CERN – European Organization for Nuclear Research.

“Crab Nebula.” Wikipedia, the free encyclopedia.

“Betelgeuse.” Wikipedia, the free encyclopedia.

Time Travel February 17, 2009

Posted by phillymb in Metaphysics, Philosophy of physics.
comments closed

Going along with Greg’s post on the topography of time, specifically the portion on Time Dilation, I will explore the mathematical possibility and philosophical implications of time travel in physics.

Who wouldn’t want to travel back to the past to see what life was like, or peek into the future?  What sort of implications would this have?  The idea of time travel has entertained us for years, from black-and-white TV shows like The Twilight Zone to current shows like Lost.  But from a physical perspective, how would time travel be possible?

There are two ways to travel into another person’s future:

  1. Traveling at speeds >10% of the speed of light (https://planetparadigm.wordpress.com/2009/02/12/topography-of-time/)
  2. Taking advantage of an intense gravitational field (more…)
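The 10%-of-light-speed threshold in item 1 can be sketched with the Lorentz factor γ = 1/√(1 − v²/c²), which governs how much a traveler’s clock lags behind a stationary one:

```python
import math

def gamma(beta):
    """Lorentz factor for speed v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

for beta in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(f"v = {beta:4.0%} of c -> gamma = {gamma(beta):.3f}")
```

Below 10% of c the factor differs from 1 by less than one percent, which is why slower speeds offer no noticeable trip into another person’s future.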

Are There Laws in the Physical Sciences? February 14, 2009

Posted by jts3034 in Philosophy of physics.
comments closed

What is a Law?

What does it mean to be a physical law? One simple definition is to say that laws are generalizations that describe the world around us. This seems a little too simple, though. For example, it would be accurate to say something like, ‘there are no spheres of gold greater than a mile in diameter’. While this statement is a generalization about the state of nature, it doesn’t seem to be what we would consider a law. There is nothing about the nature of gold that prevents the formation of a gold sphere of this size; it simply doesn’t exist. On the other hand, we could make the generalization that there are no spheres of uranium one mile in diameter. Uranium does have something in its nature that prevents this. Given radioactive decay and critical mass, there is no physical way to construct this object. In this case, the generalization about uranium spheres seems to be a law, or at least lawlike. Thus a stricter definition of law is needed.
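To see why the uranium generalization feels lawlike, compare the mass of a mile-wide uranium sphere with the critical mass of uranium-235 (roughly 52 kg for a bare sphere; both figures below are textbook approximations I am supplying, not numbers from the post):

```python
import math

radius_m = 1609.34 / 2        # one-mile diameter in meters
density = 19050.0             # kg/m^3, approximate density of uranium
critical_mass = 52.0          # kg, approximate bare-sphere critical mass of U-235

volume = (4.0 / 3.0) * math.pi * radius_m**3
mass = density * volume

print(f"sphere mass: {mass:.2e} kg, about {mass/critical_mass:.1e} critical masses")
```

The sphere would exceed the critical mass by roughly eleven orders of magnitude, so physics itself, not mere happenstance, rules it out.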

Philosophers have tried to build a more explicit way to define law. One way is by equating the universe with a deductive system of logic. Deductive systems can have two qualities associated with them: strength and simplicity. A strong system will take many things into consideration. A simple system will require a minimum number of axioms. In the systems view, a law is any axiom of the system that has the best combination of strength and simplicity. The sphere of gold would add complexity while not adding strength and thus it is excluded as a law. The sphere of uranium adds needed strength and thus is a called a law. Critics of this system argue that this way of thinking causes laws to be mind dependent. What counts as simple or necessary is determined by those who are doing physics. However, laws should be independent of human consciousness.

Another way to view laws is as relationships between universals. For example, if I were to say ‘All F’s are G’s’, then the universals in this case would be F and G and the law would be the relation. Critics of this theory will argue that the relationship is very vague and is not necessarily the best way of describing the law. In the case of the uranium sphere, the law isn’t really that there are no uranium spheres of a mile diameter. That relation is a byproduct of the actual laws of critical mass and radioactive decay.

One other way philosophers talk about laws is to say that they don’t really exist. This is an anti-realist perspective. They will argue that the universe does not have to follow any certain rules. There is no exact way to describe the way things work and every law we have is just really an approximation. Looking back on almost all previous laws shows that much of what we have determined to be laws turned out to be false. The realist would argue that even though previous laws have been shown to only approximate reality, scientists are clearly getting better at describing the universe and we will eventually have some sort of complete understanding.

There is then the issue of what physicists are actually trying to discover. There is a distinction between strict generalizations and ceteris paribus generalizations. Strict generalizations are those that exactly describe the world. Ceteris paribus generalizations are those that are only valid under certain circumstances; ceteris paribus is Latin for ‘other things being equal’. An example of a ceteris paribus generalization would be Newton’s laws. These ‘laws’ only hold under the conditions that gravity is not extremely strong and objects are moving at speeds small compared to the speed of light. Some philosophers, such as Nancy Cartwright, argue that physicists only try to discover ceteris paribus generalizations and that there really are no strict generalizations to be found.

The Problem of Induction


Our Laws, considered as generalizations strict or ceteris paribus, must be derived from observations – a process we call induction. Setting aside any epistemological issues with assuming that what we sense accurately represents the universe, the need for induction in forming our Laws means, by the nature of induction, that we cannot logically conclude they will hold true universally. Explicitly stated, it need not be true that future events will follow the trends we observe, and there is no way to prove whether they will or will not before they occur. The only manner in which one could prove a statement drawn from induction is to perform an infinite number of tests – a task that is clearly impossible. A classic example is that of the “Law of Gravity.” In our experience, in the absence of other forces, objects fall down (or toward the Earth, if you must). Claiming that we can prove the Law by observing it many times – or even claiming that because we have observed it many times, it is likely to hold true universally – is illogical.

A further disadvantage of induction is the uncertainty it breeds. For a given set of observations, a number of conclusions can often be drawn, depending on the drawer’s past observations, experiences, etc. How is one to judge the quality of differing inductive claims? Surely those that are absurd – not following in a recognizable way from observations – can be discounted, as can those that are disprovable by a conflicting observation. Aside from similarly easy cases, judgment seems difficult – this is likely a useful area in which to apply Occam’s razor.

Is it rational to believe that inductively inferred Laws hold throughout the universe? According to the Skeptic’s Field Guide, it is. The author, defining a rational belief as one that is well reasoned and does not contradict itself, makes the following argument. Using an idea from Daniel Dennett, it is both physically and logically possible that the universe is described by a set of laws. According to the author, such an explanation is the simplest one, so by Occam’s razor we should accept the truth of Laws (or at least the ability of Laws to describe the universe). Obviously, this is not a logical argument, as it requires us to accept Occam’s razor. But by his standards, which seem reasonable to me, it is rational.

The truly important question is whether we can accept using inductively inferred laws to complete everyday (and not so everyday) tasks. If we cannot be certain that the aerodynamics principles we’ve established will always hold true, how can we feel safe riding in a plane? The Stanford Encyclopedia of Philosophy claims that we can simply because that is what we are used to. Everything we’ve observed has led us to believe that induction is a good way to analyze the world, so in everyday life, we do. And as long as the planes keep flying, that’s alright with me.

Another issue with laws is that they may be too simple to describe reality.  Sometimes there are other factors in reality that change how things work, so that outcomes differ from what the laws state.  One example is gravity and how two particles interact.  Newton’s law of gravitation states that the attraction between two bodies is proportional to the product of their masses and inversely proportional to the square of the distance between them.  The trouble is that sometimes those particles are charged, and this adds additional attraction or repulsion.  The question came up: if you say this, are forces just a human construct and not part of reality, so that these formulas don’t even try to describe reality?
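The point about charge swamping gravity can be made concrete: for two electrons at any separation, the ratio of Coulomb repulsion to gravitational attraction is enormous, and since both forces fall off as 1/r², the ratio is distance-independent. The constants below are standard approximate values, not figures from the post:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k = 8.988e9          # Coulomb constant, N m^2 C^-2
m_e = 9.109e-31      # electron mass, kg
q_e = 1.602e-19      # elementary charge, C

# both forces scale as 1/r^2, so r cancels from the ratio
ratio = (k * q_e**2) / (G * m_e**2)
print(f"electric / gravitational ~ {ratio:.1e}")  # ~4e42
```

A gravitational “law” stated without the electric force would be off by forty-two orders of magnitude for this pair, which is exactly the worry about overly simple laws.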





Nancy Cartwright. How the Laws of Physics Lie. Oxford University Press, USA . 1983

Living in a Hologram February 13, 2009

Posted by ebrister in Philosophy of physics.
comments closed

It sounds weird to say that our universe might be a hologram, and to be honest, reading this article in New Scientist didn’t make it seem any less weird.

Topography of Time February 12, 2009

Posted by greghrinda in Philosophy of physics.
comments closed

Much of our life is directly related to the concept of time.  Whether it is what time class starts or the deadline for an application, we make ourselves slaves to the clock.  With so much dependent on the notion of time, it is important to dissect our knowledge of it in a scientific manner.

Aristotelian Spacetime

Everyday accounts of time use Aristotelian spacetime.  In this sense, time is an infinite continuum, and an instant in time is defined by the preceding and following times.  This description does not allow for a beginning or an end of time, because such an instant could not be defined.  Aristotle uses a frame-independent spacetime that requires the continuous progression of time.

Einstein’s Relativity

Relativity as defined by Einstein has different ramifications for the notion of time.  Each reference body has an individual time, meaning a standard “time” is useless without knowing the reference frame.   Relativity becomes more important as the speed of a reference frame increases to relativistic speeds (>10% the speed of light).

Time dilation is the accumulation of time differences between reference frames.  Imagine a train passing through the desert.  Michael is centered on the train while Bruce is a distance to the side of the track.  Just as Michael and Bruce are aligned by a line perpendicular to the train track, lightning strikes both the engine and the caboose of the train.  Since the front and back of the train are equidistant from Bruce and he is stationary, the light from the two strikes reaches him at the same time; to Bruce the lightning strikes are simultaneous.  Michael, on the other hand, travels toward the engine because he is riding the train, so he receives the light from the frontal lightning strike first.  To Michael the two lightning strikes are events separated in time.  (Strictly speaking, this thought experiment demonstrates the relativity of simultaneity, the effect underlying time dilation.)

Check out this video for a more detailed analysis of this Train Experiment of Time Dilation.  Here is another video on Time Dilation.  Another thought experiment showing differences in reference frame time is the Twin Paradox.  It is difficult to explain in text, so I am trying to find a good video for it.  Here are some pictures for the twins Pam and Jim.

Twin Paradox

Pam leaves Jim on Earth and travels away from Earth and back in a spaceship.  They send each other greetings once a year using light messages.  Even at 60% the speed of light, Pam returns eight years older while Jim is ten years older.

Twin Paradox 2
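The figures for Pam and Jim check out against the time-dilation formula. A quick sketch, assuming Pam cruises at 0.6c for the whole trip and ignoring the turnaround:

```python
import math

beta = 0.6                             # Pam's speed as a fraction of c
earth_years = 10.0                     # time elapsed for Jim on Earth

gamma = 1.0 / math.sqrt(1.0 - beta**2) # Lorentz factor
pam_years = earth_years / gamma        # proper time elapsed for Pam

print(f"gamma = {gamma:.2f}, Pam ages {pam_years:.0f} years")  # gamma = 1.25, 8 years
```

At 0.6c the Lorentz factor is exactly 1.25, so ten Earth years shrink to eight years of shipboard time.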

Consequences of the Big Bang

Our understanding of the universe brings more debate about time.  Through the work of Edwin Hubble in the early 1900s, the universe is estimated to be roughly 13.7 billion years old.  Does this mean that there was a beginning of time?  Not really: the only thing that the current model shows is that the universe is expanding.  As we get closer to the time of the Big Bang, the physics that we use begins to break down; in particular, the Planck era was the first 10^-43 seconds after the Big Bang, and it was not until after this time that the four fundamental forces took their present form (gravity, electromagnetism, the weak interaction, and the strong interaction).  Even Stephen Hawking has noted the quandary of defining time:

Time has a finite amount of past, but no beginning.
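The 13.7-billion-year figure can be roughly recovered from the expansion rate itself: the “Hubble time” 1/H0 sets the expansion timescale. H0 ≈ 70 km/s/Mpc is an assumed modern value I am supplying, not one from the post:

```python
H0 = 70.0                      # Hubble constant, km/s per megaparsec (assumed value)
km_per_Mpc = 3.086e19          # kilometers in one megaparsec
seconds_per_year = 3.156e7

H0_per_s = H0 / km_per_Mpc     # convert to 1/s
hubble_time_gyr = 1.0 / H0_per_s / seconds_per_year / 1e9
print(f"Hubble time ~ {hubble_time_gyr:.1f} billion years")
```

The naive 1/H0 estimate lands within a few percent of the quoted age; the precise figure depends on the detailed expansion history.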

One newly emerging theory relating time and the Big Bang is the quantization of time.  This theory holds that time is composed of unique quanta having a duration of about 10^-43 seconds.  It rejects the continuum of time, trading the analog style for a binary code of time-related information.  The fact that we are currently incapable of measuring such precise gaps in time works in the theory’s favor, but more groundwork needs to be done before it becomes widespread knowledge throughout academia.
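The 10^-43-second figure is essentially the Planck time, t_P = √(ħG/c⁵), which can be computed from standard constants (the values below are approximate, and are my addition rather than the post’s):

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)
print(f"t_P ~ {t_planck:.2e} s")  # ~5.4e-44 s
```

This is the timescale below which quantum gravity effects are expected to dominate, which is why proposed quanta of time are pegged to this order of magnitude.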

As was discussed in class, the Big Bang is our universe’s origin, but could we just be the branching of a greater entity above our universe?  This leads to the idea of multiverses.  Our universe could have been forged from the singularity of a black hole in another universe, ripping space and time to start our universe. Likewise, black holes in our universe may be portals to new multiverses.  In this case it would be unclear as to the function of time or the unification of time between multiverses as it is not required that each multiverse follow the same physical principles our universe expresses.



Electron-Positron Annihilation

The last interesting note on the topography of time that I will discuss is retrocausality.  Many people have heard of antimatter, and some have heard of particle annihilation.  In electron-positron annihilation, a matter-antimatter pair meet and are destroyed, producing gamma photons (ignore the right half of the photo, which depicts another phenomenon).  In this situation, the only difference between the electron and positron is their charge.  However, if time is not a straight arrow, the supposed positron could just be an electron moving backwards through time.  This is based on the assumption of the right-hand rule used to determine the direction of charge.  If antimatter is just matter moving backwards through time, what are the ramifications for the physical theories of today?


Markosian, Ned. “Time.” http://plato.stanford.edu/entries/time/#TopTim

Dowden, Bradley. “Time.” http://www.iep.utm.edu/t/time.htm#H8

Hitchcock, Christopher.  Contemporary Debates in the Philosophy of Science. Malden, MA: Blackwell Publishing, 2004.

Salgado, Rob. “Aristotle’s Spacetime” http://www.phy.syr.edu/courses/modules/LIGHTCONE/aristotle.html