Wednesday, 18 December 2013

Life is projection

We are Holograms



It may be news to you and it may need an explanation, but here it is: You are a hologram.
And while this second statement may not seem as speculative, it still needs to be said in light of the first: You are real.
Since the first declaration is probably the one on your mind, I’ll deal with it first; I will say now, though, that it is the second statement that takes the greater leap of faith to believe. In fact, reality as we know it will, in the end, come down to philosophy.
One last note before we begin: I promise this is all easily understood.
There are four ideas that need to be brought up before we go much further. The first two — which are based in science — explain how we get into this mess to begin with; that we are holograms. The third — based in logic-bound philosophy — explains why this all makes perfect sense. And the fourth, well shoot, the fourth puts a shiny bow on everything… Don’t you just love arguments that wrap up nicely?
All in all, I think it’s a fairly reasonable trade-off: read a few hundred words in exchange for an understanding of our existence?!? Deal? Deal!!
1) We Are All Holograms
It’s true! At least it looks like that’s the case… Check it out. In sum, the recent work of a team of physicists led by Yoshifumi Hyakutake has mathematically reconciled many of the propositions put forth in 1997 by theoretical physicist Juan Maldacena (perhaps more accurately, the current work supports the theories that Maldacena built upon).
Maldacena’s theory of the universe suggests that all that we see, all that we are, is the result of a resonance (or is the by-product, the projection, a hologram, a vibration, a wave-length resultant) of an incredibly complex interplay of particles that is occurring in a central, 'original' dimension.
In this dimension there is no gravity, and space is vastly different from our common understanding of it; I mean, we think of space as stars and a bunch of empty area, right? That’s not what space is in this dimension though — it’s best described as flat and dense. And in this flat and dense cosmos, the particles — like all things — move, which causes friction and vibration. At a cosmic level, these vibrations — these intense and constant hummings in the "central" and "original" dimension — have, in their oscillation, spawned a series of unimaginably large strings of by-products (which are fed through what is described as a "holographic way", as in gateway, and they transmit "stuff" like gravity. Don’t you love my exceptionally technical language?). While the "original" feeds these strings of by-products via continual resonance, the "original" is in return fed cosmically scaled reverberations (or counter-vibrations, tides, resonances) by the strings of by-products.
These "vibrations" or "waves" or "strings," which are by-products of the "original" dimension, include the thing that we commonly think of as our universe; this is a misnomer though, as the universe refers to everything, and therefore would include the "original" and any other "vibrations" or "waves" or "strings" that may exist, regardless of whether or not they are visible to us (and thus a part of our mental concept of the universe). According to Maldacena’s theory, there are several vibrating strings, each home to millions of galaxies, swaths of dark matter, black holes, stars, planets, etc. (or in other words, there are several areas that we commonly call a "universe"), and that is one of the reasons this thesis is commonly called 'string theory'.
Wipe your brow off! We’ve gotten past the first, biggest and most important step — we now understand Maldacena’s theory (kind of, not really though, but in the most basic of terms)! So that’s step one; step two is to explain why this makes sense.
We now turn to (and only for a moment and in the simplest of terms, I promise) the work of Einstein as well as the papers developed by Hyakutake’s team. First of all, Einstein’s Theory of Gravity was disrupted by certain issues with quantum physics, which became evident in observation of black holes. Einstein knew that his theory, which otherwise made sense, couldn’t be true if there were even these slight inconsistencies at any level; so what was happening? Einstein posited concepts like white holes to try and resolve the issues, but even they didn’t make perfect sense... Well, Maldacena’s theory entirely reconciles these problems.
The only issue with Maldacena’s theory — the one where we are simply the result of oscillations in the "original" which is fueling the projection of a seemingly real cosmos — was that there wasn’t any proof that it was true beyond the fact that it reconciled the issues relating to Einstein’s work (which was in itself a huge accomplishment). Well, the papers published by Hyakutake’s team have mathematically validated Maldacena’s work by narrowing in on the event horizon of black holes; by calculating the entropy and energy transfer that appears to occur at a black hole’s event horizon and comparing these figures against the transfers that should occur as hypothesized by Maldacena, the team was able to see whether our cosmos correlated with the theory; it did. The data lined up with the theory… Perfectly.
It needs to be said that this is not absolute, undeniable proof of Maldacena’s hologram theory; no, it is a mathematical positive that suggests he is right. A computation that suggests that you, I and everything we see are simply the projected vibrations of an "original dimension" that is all but unknown to us — the by-product of a plateau where time, gravity, and space exist among a group of particles in a way that is all but entirely alien to us, in a way that is capable of conducting, managing, and oscillating everything that has and will exist. I know, wow…
If you believe this is possible (and I’m going to move forward hoping that you do), the rest will be easier to understand. If you don’t (yet) buy into the idea please hold on until the end of this article, it will make (even more) sense soon…

Thursday, 21 November 2013

                    

                                  Role in ambient radiation

Cosmic rays constitute a fraction of the annual radiation exposure of human beings on the Earth, averaging 0.39 mSv out of a total of 3 mSv per year (13% of total background) for the Earth's population. However, the background radiation from cosmic rays increases with altitude, from 0.3 mSv per year for sea-level areas to 1.0 mSv per year for higher-altitude cities, raising cosmic radiation exposure to a quarter of total background radiation exposure for populations of such cities. Airline crews flying long-distance, high-altitude routes can be exposed to 2.2 mSv of extra radiation each year due to cosmic rays, nearly doubling their total ionizing radiation exposure.
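To make the arithmetic behind these percentages explicit, here is a minimal Python sketch using only the figures quoted above; the function name is mine, and the high-altitude total is assumed to come from swapping the 0.3 mSv sea-level cosmic component for the 1.0 mSv one.

def cosmic_fraction(cosmic_mSv, total_background_mSv=3.0):
    """Fraction of the annual background dose contributed by cosmic rays."""
    return cosmic_mSv / total_background_mSv

# World average: 0.39 mSv of a 3 mSv background (~13%).
print(f"Sea-level average: {cosmic_fraction(0.39):.0%}")

# Higher-altitude cities: ~1.0 mSv cosmic component; the total background is
# assumed to rise accordingly (3.0 - 0.3 + 1.0 = 3.7 mSv), giving about a quarter.
print(f"High-altitude city: {1.0 / (3.0 - 0.3 + 1.0):.0%}")

# Long-haul aircrew: +2.2 mSv/year on top of the ~3 mSv baseline.
print(f"Aircrew total: {3.0 + 2.2:.1f} mSv/year, roughly double the baseline")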

                                      Effect on electronics

Cosmic rays have sufficient energy to alter the states of elements in electronic integrated circuits, causing transient errors to occur, such as corrupted data in electronic memory devices, or incorrect performance of CPUs, often referred to as "soft errors" (not to be confused with software errors caused by programming mistakes/bugs). This has been a problem in extremely high-altitude electronics, such as in satellites, but with transistors becoming smaller and smaller, this is becoming an increasing concern in ground-level electronics as well. Studies by IBM in the 1990s suggest that computers typically experience about one cosmic-ray-induced error per 256 megabytes of RAM per month. To alleviate this problem, the Intel Corporation has proposed a cosmic ray detector that could be integrated into future high-density microprocessors, allowing the processor to repeat the last command following a cosmic-ray event.
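As a rough illustration of how the IBM figure scales with memory size, here is a small Python sketch; the function name and the altitude multiplier are illustrative assumptions, and the 1990s rate predates modern processes and error-correcting memory, so treat the output as an order-of-magnitude guess only.

ERRORS_PER_MB_PER_MONTH = 1.0 / 256.0   # IBM 1990s estimate quoted above

def expected_soft_errors(ram_gb, months=1, altitude_factor=1.0):
    """Expected cosmic-ray-induced soft errors for a given amount of RAM.

    altitude_factor is a placeholder multiplier: the neutron flux, and hence
    the error rate, rises with altitude.
    """
    return ram_gb * 1024 * ERRORS_PER_MB_PER_MONTH * months * altitude_factor

# A 16 GB machine at sea level over one year:
print(f"{expected_soft_errors(16, months=12):.0f} expected soft errors per year")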
Cosmic rays are suspected as a possible cause of an in-flight incident in 2008 where an Airbus A330 airliner of Qantas twice plunged hundreds of feet after an unexplained malfunction in its flight control system. Many passengers and crew members were injured, some seriously. After this incident, the accident investigators determined that the airliner's flight control system had received a data spike that could not be explained, and that all systems were in perfect working order. This has prompted a software upgrade to all A330 and A340 airliners, worldwide, so that any data spikes in this system are filtered out electronically.

                                     Role in lightning

Cosmic rays have been implicated in the triggering of electrical breakdown in lightning. It has been proposed that essentially all lightning is triggered through a relativistic process, "runaway breakdown", seeded by cosmic ray secondaries. Subsequent development of the lightning discharge then occurs through "conventional breakdown" mechanisms.

                                     Postulated role in climate change

A role for cosmic rays in climate change, either directly or via solar-induced modulations, was suggested by Edward P. Ney in 1959 and by Robert E. Dickinson in 1975. Although over 97% of climate scientists reject this notion, the idea has been revived in recent years, most notably by Henrik Svensmark, who has argued that because solar variations modulate the cosmic ray flux on Earth, they would consequently affect the rate of cloud formation and hence the climate. Nevertheless, climate scientists actively publishing in the field have noted that Svensmark has inconsistently altered data in most of his published work on the subject, one example being an adjustment of cloud data that understates the error in low cloud data but not in high cloud data.
The 2007 IPCC synthesis report, however, strongly attributes a major role in the ongoing global warming to human-produced gases such as carbon dioxide, nitrous oxide, and halocarbons, and has stated that models including natural forcings only (including aerosol forcings, which cosmic rays are considered by some to contribute to) would result in far less warming than has actually been observed or predicted in models including anthropogenic forcings.
Svensmark, one of several scientists outspokenly opposed to the mainstream scientific assessment of global warming, has found prominence in the popular movement that denies the scientific consensus. Despite this, Svensmark's work exaggerating the magnitude of the effect of GCRs on global warming continues to be refuted by mainstream science. For instance, a November 2013 study showed that less than 14 percent of global warming since the 1950s could be attributed to the cosmic ray rate, and while the models showed a small correlation every 22 years, the cosmic ray rate did not match the changes in temperature, indicating that the relationship was not causal.

            Types of cosmic rays

Cosmic rays originate as primary cosmic rays, which are those originally produced in various astrophysical processes. Primary cosmic rays are composed primarily of protons and alpha particles (99%), with a small amount of heavier nuclei (~1%) and an extremely minute proportion of positrons and antiprotons. Secondary cosmic rays, caused by a decay of primary cosmic rays as they impact an atmosphere, include neutrons, pions, positrons, and muons. Of these four, the latter three were first detected in cosmic rays.

Primary cosmic rays

Primary cosmic rays mostly originate from outside the Solar System, and sometimes even from outside the Milky Way. When they interact with Earth's atmosphere, they are converted to secondary particles. The mass ratio of helium to hydrogen nuclei, 28%, is similar to the primordial elemental abundance ratio of these elements, 24%. The remaining fraction is made up of the other heavier nuclei that are nucleosynthesis end products, products of the Big Bang, primarily lithium, beryllium, and boron. These light nuclei appear in cosmic rays in much greater abundance (~1%) than in the solar atmosphere, where they are only about 10⁻¹¹ as abundant as helium.
This abundance difference is a result of the way secondary cosmic rays are formed. Carbon and oxygen nuclei collide with interstellar matter to form lithium, beryllium and boron in a process termed cosmic ray spallation. Spallation is also responsible for the abundances of scandium, titanium, vanadium, and manganese ions in cosmic rays produced by collisions of iron and nickel nuclei with interstellar matter.

Primary cosmic ray antimatter

Satellite experiments have found evidence of positrons and a few antiprotons in primary cosmic rays, but these do not appear to be the products of large amounts of antimatter in the Big Bang, or indeed complex antimatter in the later Universe. Rather, they appear to consist of only these two types of elementary anti-particles, both newly made in energetic processes.
Preliminary results from the presently operating Alpha Magnetic Spectrometer (AMS-02) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality, and with energies that range from 10 GeV to 250 GeV, with the fraction of positrons to electrons increasing at higher energies. One suggested interpretation of these results is that the positrons are produced in annihilation events of massive dark matter particles.
Antiprotons arrive at Earth with a characteristic energy maximum of 2 GeV, indicating their production in a fundamentally different process from cosmic ray protons, which on average have only one-sixth of the energy.
There is no evidence of complex antimatter atomic nuclei, such as anti-helium nuclei (anti-alpha particles), in cosmic rays. These are actively being searched for. A prototype of the AMS-02, designated AMS-01, was flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium at all, the AMS-01 established an upper limit of 1.1×10⁻⁶ for the antihelium to helium flux ratio.

Secondary cosmic rays

When cosmic rays enter the Earth's atmosphere they collide with molecules, mainly oxygen and nitrogen. The interaction produces a cascade of lighter particles, a so-called air shower. All of the produced particles stay within about one degree of the primary particle's path.
Typical particles produced in such collisions are neutrons and charged mesons such as positive or negative pions and kaons. Some of these subsequently decay into muons, which are able to reach the surface of the Earth, and even penetrate for some distance into shallow mines. The muons can be easily detected by many types of particle detectors, such as cloud chambers, bubble chambers or scintillation detectors. The observation of a secondary shower of particles in multiple detectors at the same time is an indication that all of the particles came from that event.
Cosmic rays impacting other planetary bodies in the Solar System are detected indirectly by observing high-energy gamma-ray emissions with gamma-ray telescopes. These are distinguished from radioactive decay processes by their higher energies, above about 10 MeV.

Cosmic ray flux

The flux of incoming cosmic rays at the upper atmosphere is dependent on the solar wind, the Earth's magnetic field, and the energy of the cosmic rays. At distances of ~94 AU from the Sun, the solar wind undergoes a transition, called the termination shock, from supersonic to subsonic speeds. The region between the termination shock and the heliopause acts as a barrier to cosmic rays, decreasing the flux at lower energies (≤ 1 GeV) by about 90%. However, the strength of the solar wind is not constant, and hence it has been observed that cosmic ray flux is correlated with solar activity.
In addition, the Earth's magnetic field acts to deflect cosmic rays from its surface, giving rise to the observation that the flux is apparently dependent on latitude, longitude, and azimuth angle. The magnetic field lines deflect the cosmic rays towards the poles, giving rise to the aurorae.
The combined effects of all of the factors mentioned contribute to the flux of cosmic rays at Earth's surface. For 1 GeV particles, the rate of arrival is about 10,000 per square meter per second. At 1 TeV the rate is 1 particle per square meter per second. At 10 PeV there are only a few particles per square meter per year. Particles above 10 EeV arrive only at a rate of about one particle per square kilometer per year, and above 100 EeV at a rate of about one particle per square kilometer per century.
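Since these arrival rates span many orders of magnitude, a log-log interpolation is a natural way to estimate the flux at intermediate energies. The Python sketch below uses only the approximate figures quoted above; the unit conversions from per-year and per-square-kilometre rates are mine.

import numpy as np

SECONDS_PER_YEAR = 3.15e7

energy_eV = np.array([1e9, 1e12, 1e16, 1e19, 1e20])   # 1 GeV ... 100 EeV
flux_per_m2_s = np.array([
    1e4,                        # 1 GeV: ~10,000 / m^2 / s
    1.0,                        # 1 TeV: ~1 / m^2 / s
    3.0 / SECONDS_PER_YEAR,     # 10 PeV: a few / m^2 / year
    1e-6 / SECONDS_PER_YEAR,    # 10 EeV: ~1 / km^2 / year
    1e-8 / SECONDS_PER_YEAR,    # 100 EeV: ~1 / km^2 / century
])

def flux_at(E_eV):
    """Interpolate the flux (per m^2 per s) at energy E_eV on a log-log scale."""
    return 10 ** np.interp(np.log10(E_eV), np.log10(energy_eV), np.log10(flux_per_m2_s))

print(f"Estimated flux at 1 PeV: {flux_at(1e15):.2e} per m^2 per s")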
In the past, it was believed that the cosmic ray flux remained fairly constant over time. However, recent research suggests 1.5 to 2-fold millennium-timescale changes in the cosmic ray flux in the past forty thousand years.
The energy density of cosmic rays in interstellar space is comparable to that of other deep-space energy fields: it averages about one electron-volt per cubic centimeter of interstellar space, or ~1 eV/cm³, which is comparable to the energy density of visible starlight at 0.3 eV/cm³, the galactic magnetic field energy density (assuming 3 microgauss) at ~0.25 eV/cm³, and the cosmic microwave background (CMB) radiation energy density at ~0.25 eV/cm³.

Detection methods

There are several ground-based methods of detecting cosmic rays currently in use. The first detection method is the air Cherenkov telescope, designed to detect low-energy (<200 GeV) cosmic rays by analyzing their Cherenkov radiation, the light emitted when shower particles travel faster than the speed of light in their medium, the atmosphere. While these telescopes are extremely good at distinguishing between background radiation and radiation of cosmic-ray origin, they can only function well on clear nights without the Moon shining, have very small fields of view, and are active for only a few percent of the time. Another Cherenkov telescope uses water as the medium through which particles pass and produce the Cherenkov radiation that makes them detectable.

Extensive air shower (EAS) arrays, a second detection method, measure the charged particles which pass through them. EAS arrays measure much higher-energy cosmic rays than air Cherenkov telescopes, and can observe a broad area of the sky and can be active about 90% of the time. However, they are less able to segregate background effects from cosmic rays than can air Cherenkov telescopes. EAS arrays employ plastic scintillators in order to detect particles.
Another method was developed by Robert Fleischer, P. Buford Price, and Robert M. Walker for use in high-altitude balloons. In this method, sheets of clear plastic, such as 0.25 mm Lexan polycarbonate, are stacked together and exposed directly to cosmic rays in space or at high altitude. The nuclear charge causes chemical bond breaking or ionization in the plastic. At the top of the plastic stack the ionization is less, due to the high cosmic ray speed. As the cosmic ray speed decreases due to deceleration in the stack, the ionization increases along the path. The resulting plastic sheets are "etched", or slowly dissolved, in a warm caustic sodium hydroxide solution, which removes the surface material at a slow, known rate. The sodium hydroxide dissolves the plastic at a faster rate along the path of the ionized track. The net result is a conical etch pit in the plastic. The etch pits are measured under a high-power microscope (typically 1600× oil-immersion), and the etch rate is plotted as a function of the depth in the stacked plastic.
This technique yields a unique curve for each atomic nucleus from 1 to 92, allowing identification of both the charge and energy of the cosmic ray that traverses the plastic stack. The more extensive the ionization along the path, the higher the charge. In addition to its uses for cosmic-ray detection, the technique is also used to detect nuclei created as products of nuclear fission.
A fourth method involves the use of cloud chambers to detect the secondary muons created when a pion decays. Cloud chambers in particular can be built from widely available materials and can be constructed even in a high-school laboratory. A fifth method, involving bubble chambers, can be used to detect cosmic ray particles.

Changes in atmospheric chemistry

Cosmic rays ionize the nitrogen and oxygen molecules in the atmosphere, which leads to a number of chemical reactions. One of the reactions results in ozone depletion. Cosmic rays are also responsible for the continuous production of a number of unstable isotopes in the Earth's atmosphere, such as carbon-14, via the reaction:
n + ¹⁴N → p + ¹⁴C
Cosmic rays kept the level of carbon-14 in the atmosphere roughly constant (70 tons) for at least the past 100,000 years, until the beginning of above-ground nuclear weapons testing in the early 1950s. This is an important fact used in radiocarbon dating used in archaeology.
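Because the atmospheric carbon-14 level has stayed roughly constant, the age of organic material can be read off from how much of its original carbon-14 remains. Here is a minimal Python sketch of that calculation, assuming the standard 5,730-year half-life; the 25% sample fraction is just an example.

import math

HALF_LIFE_YEARS = 5730.0

def radiocarbon_age(remaining_fraction):
    """Age of a sample from the fraction of its original carbon-14 remaining."""
    decay_constant = math.log(2) / HALF_LIFE_YEARS
    return -math.log(remaining_fraction) / decay_constant

# A sample retaining 25% of its carbon-14 is about two half-lives old:
print(f"{radiocarbon_age(0.25):.0f} years")   # ~11,460 years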

                                       Identification of Cosmic rays

In the 1920s the term "cosmic rays" was coined by Robert Millikan, who made measurements of ionization due to cosmic rays from deep under water to high altitudes and around the globe. Millikan believed that his measurements proved that the primary cosmic rays were gamma rays, i.e., energetic photons. And he proposed a theory that they were produced in interstellar space as by-products of the fusion of hydrogen atoms into the heavier elements, and that secondary electrons were produced in the atmosphere by Compton scattering of gamma rays. But then, in 1927, J. Clay found evidence, later confirmed in many experiments, of a variation of cosmic ray intensity with latitude, which indicated that the primary cosmic rays are deflected by the geomagnetic field and must therefore be charged particles, not photons. In 1929, Bothe and Kolhörster discovered charged cosmic-ray particles that could penetrate 4.1 cm of gold. Charged particles of such high energy could not possibly be produced by photons from Millikan's proposed interstellar fusion process.
In 1930, Bruno Rossi predicted a difference between the intensities of cosmic rays arriving from the east and the west that depends upon the charge of the primary particles - the so-called "east-west effect." Three independent experiments found that the intensity is, in fact, greater from the west, proving that most primaries are positive. During the years from 1930 to 1945, a wide variety of investigations confirmed that the primary cosmic rays are mostly protons, and the secondary radiation produced in the atmosphere is primarily electrons, photons and muons. In 1948, observations with nuclear emulsions carried by balloons to near the top of the atmosphere showed that approximately 10% of the primaries are helium nuclei (alpha particles) and 1% are heavier nuclei of the elements such as carbon, iron, and lead.
During a test of his equipment for measuring the east-west effect, Rossi observed that the rate of near-simultaneous discharges of two widely separated Geiger counters was larger than the expected accidental rate. In his report on the experiment, Rossi wrote "... it seems that once in a while the recording equipment is struck by very extensive showers of particles, which causes coincidences between the counters, even placed at large distances from one another." In 1937 Pierre Auger, unaware of Rossi's earlier report, detected the same phenomenon and investigated it in some detail. He concluded that high-energy primary cosmic-ray particles interact with air nuclei high in the atmosphere, initiating a cascade of secondary interactions that ultimately yield a shower of electrons, and photons that reach ground level.
Soviet physicist Sergey Vernov was the first to use radiosondes to perform cosmic ray readings with an instrument carried to high altitude by a balloon. On 1 April 1935, he took measurements at heights up to 13.6 kilometers using a pair of Geiger counters in an anti-coincidence circuit to avoid counting secondary ray showers.
Homi J. Bhabha derived an expression for the probability of scattering positrons by electrons, a process now known as Bhabha scattering. His classic paper, jointly with Walter Heitler, published in 1937 described how primary cosmic rays from space interact with the upper atmosphere to produce particles observed at the ground level. Bhabha and Heitler explained the cosmic ray shower formation by the cascade production of gamma rays and positive and negative electron pairs.

Energy distribution of Cosmic Rays

Measurements of the energy and arrival directions of the ultra-high-energy primary cosmic rays by the techniques of "density sampling" and "fast timing" of extensive air showers were first carried out in 1954 by members of the Rossi Cosmic Ray Group at the Massachusetts Institute of Technology. The experiment employed eleven scintillation detectors arranged within a circle 460 meters in diameter on the grounds of the Agassiz Station of the Harvard College Observatory. From that work, and from many other experiments carried out all over the world, the energy spectrum of the primary cosmic rays is now known to extend beyond 10²⁰ eV. A huge air shower experiment called the Auger Project is currently operated at a site on the pampas of Argentina by an international consortium of physicists, led by James Cronin, winner of the 1980 Nobel Prize in Physics, of the University of Chicago, and Alan Watson of the University of Leeds. Their aim is to explore the properties and arrival directions of the very highest-energy primary cosmic rays. The results are expected to have important implications for particle physics and cosmology, due to a theoretical Greisen–Zatsepin–Kuzmin limit to the energies of cosmic rays from long distances (about 160 million light years), which occurs above 10²⁰ eV because of interactions with the remnant photons from the Big Bang origin of the universe.
In November 2007, the Auger Project team announced some preliminary results. These showed that the directions of origin of the 27 highest-energy events were strongly correlated with the locations of active galactic nuclei (AGNs). The results support the theory that at the centre of each AGN is a large black hole exerting a magnetic field strong enough to accelerate a bare proton to energies of 10²⁰ eV and higher.
High-energy gamma rays (>50 MeV photons) were finally discovered in the primary cosmic radiation by an MIT experiment carried on the OSO-3 satellite in 1967. Components of both galactic and extra-galactic origins were separately identified at intensities much less than 1% of the primary charged particles. Since then, numerous satellite gamma-ray observatories have mapped the gamma-ray sky. The most recent is the Fermi Observatory, which has produced a map showing a narrow band of gamma ray intensity produced in discrete and diffuse sources in our galaxy, and numerous point-like extra-galactic sources distributed over the celestial sphere.

Sources of cosmic rays

Early speculation on the sources of cosmic rays included a 1934 proposal by Baade and Zwicky suggesting that cosmic rays originated from supernovae. A 1948 proposal by Horace W. Babcock suggested that magnetic variable stars could be a source of cosmic rays. Subsequently, in 1951, Y. Sekido et al. identified the Crab Nebula as a source of cosmic rays. Since then, a wide variety of potential sources for cosmic rays began to surface, including supernovae, active galactic nuclei, quasars, and gamma-ray bursts.
Later experiments have helped to identify the sources of cosmic rays with greater certainty. In 2009, a paper presented at the International Cosmic Ray Conference (ICRC) by scientists at the Pierre Auger Observatory showed ultra-high-energy cosmic rays (UHECRs) originating from a location in the sky very close to the radio galaxy Centaurus A, although the authors specifically stated that further investigation would be required to confirm Cen A as a source of cosmic rays. However, no correlation was found between the incidence of gamma-ray bursts and cosmic rays, causing the authors to set an upper limit of 10⁻⁶ erg cm⁻² on the flux of 1 GeV–1 TeV cosmic rays from gamma-ray bursts.
In 2009, supernovae were said to have been "pinned down" as a source of cosmic rays, a discovery made by a group using data from the Very Large Telescope. This analysis, however, was disputed in 2011 with data from PAMELA, which revealed that "spectral shapes of [hydrogen and helium nuclei] are different and cannot be described well by a single power law", suggesting a more complex process of cosmic ray formation. In February 2013, though, research analyzing data from Fermi revealed through an observation of neutral pion decay that supernovae were indeed a source of cosmic rays, with each explosion producing roughly 3 × 10⁴² to 3 × 10⁴³ J of cosmic rays. However, supernovae do not produce all cosmic rays, and the proportion of cosmic rays that they do produce is a question which cannot be answered without further study.

Cosmic Radiation

                                  Cosmic rays are very high-energy particles, mainly originating outside the Solar System. They may produce showers of secondary particles that penetrate and impact the Earth's atmosphere and sometimes even reach the surface. Composed primarily of high-energy protons and atomic nuclei, they are of mysterious origin. Data from the Fermi space telescope (2013) have been interpreted as evidence that a significant fraction of primary cosmic rays originate from the supernovae of massive stars. However, this is not thought to be their only source. Active galactic nuclei probably also produce cosmic rays.
The term ray is a historical accident, as cosmic rays were at first, and wrongly, thought to be mostly electromagnetic radiation. In modern common usage high-energy particles with intrinsic mass are known as "cosmic" rays, and photons, which are quanta of electromagnetic radiation (and so have no intrinsic mass) are known by their common names, such as "gamma rays" or "X-rays", depending on their frequencies.
Cosmic rays attract great interest practically, due to the damage they inflict on microelectronics and life outside the protection of an atmosphere and magnetic field, and scientifically, because the energies of the most energetic ultra-high-energy cosmic rays (UHECRs) have been observed to approach 3 × 10²⁰ eV, about 40 million times the energy of particles accelerated by the Large Hadron Collider. At 50 J, the highest-energy ultra-high-energy cosmic rays have energies comparable to the kinetic energy of a 90-kilometre-per-hour (56 mph) baseball. As a result of these discoveries, there has been interest in investigating cosmic rays of even greater energies. Most cosmic rays, however, do not have such extreme energies; the energy distribution of cosmic rays peaks at 0.3 gigaelectronvolts (4.8×10⁻¹¹ J).
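The baseball comparison is easy to check with a few lines of Python; the 0.145 kg baseball mass and the ~7 TeV LHC proton energy used below are standard figures assumed here rather than taken from the text.

EV_TO_JOULE = 1.602e-19

uhecr_energy_J = 3e20 * EV_TO_JOULE
print(f"UHECR energy: {uhecr_energy_J:.0f} J")             # ~48 J

mass_kg = 0.145                    # assumed baseball mass
speed_m_s = 90 / 3.6               # 90 km/h in m/s
baseball_KE = 0.5 * mass_kg * speed_m_s ** 2
print(f"Baseball kinetic energy: {baseball_KE:.0f} J")     # ~45 J

# Ratio to a ~7 TeV LHC proton (assumed figure): tens of millions.
print(f"Ratio to an LHC proton: {3e20 / 7e12:.1e}")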
Of primary cosmic rays, which originate outside of Earth's atmosphere, about 99% are the nuclei (stripped of their electron shells) of well-known atoms, and about 1% are solitary electrons (similar to beta particles). Of the nuclei, about 90% are simple protons, i.e., hydrogen nuclei; 9% are alpha particles, and 1% are the nuclei of heavier elements. A very small fraction are stable particles of antimatter, such as positrons or antiprotons. The precise nature of this remaining fraction is an area of active research. An active search from Earth orbit for anti-alpha particles has failed to detect them.
Discovery of Cosmic Radiation
In 1909 Theodor Wulf developed an electrometer, a device to measure the rate of ion production inside a hermetically sealed container, and used it to show higher levels of radiation at the top of the Eiffel Tower than at its base. However, his paper published in Physikalische Zeitschrift was not widely accepted. In 1911 Domenico Pacini observed simultaneous variations of the rate of ionization over a lake, over the sea, and at a depth of 3 meters from the surface. Pacini concluded from the decrease of radioactivity underwater that a certain part of the ionization must be due to sources other than the radioactivity of the Earth.
Then, in 1912, Victor Hess carried three enhanced-accuracy Wulf electrometers to an altitude of 5,300 meters in a free balloon flight. He found the ionization rate increased approximately fourfold over the rate at ground level. Hess also ruled out the Sun as the radiation's source by making a balloon ascent during a near-total eclipse. With the Moon blocking much of the Sun's visible radiation, Hess still measured rising radiation at rising altitudes. He concluded, "The results of my observation are best explained by the assumption that a radiation of very great penetrating power enters our atmosphere from above." In 1913–1914, Werner Kolhörster confirmed Victor Hess's earlier results by measuring the increased ionization rate at an altitude of 9 km.
Hess received the Nobel Prize in Physics in 1936 for his discovery.
The Hess balloon flight took place on 7 August 1912. By sheer coincidence, exactly 100 years later on 7 August 2012, the Mars Science Laboratory rover used its Radiation Assessment Detector (RAD) instrument to begin measuring the radiation levels on another planet for the first time. On 31 May 2013, NASA scientists reported that a possible manned mission to Mars may involve a great radiation risk based on the amount of energetic particle radiation detected by the RAD on the Mars Science Laboratory while traveling from the Earth to Mars in 2011-2012.

Friday, 21 June 2013

Applications of Chaos theory

                     Chaos theory is applied in many scientific disciplines, including geology, mathematics, microbiology, biology, computer science, economics, engineering, finance, algorithmic trading, meteorology, philosophy, physics, politics, population dynamics, psychology, and robotics.
Chaotic behavior has been observed in the laboratory in a variety of systems, including electrical circuits, lasers, oscillating chemical reactions, fluid dynamics, and mechanical and magneto-mechanical devices, as well as computer models of chaotic processes. Observations of chaotic behavior in nature include changes in weather, the dynamics of satellites in the solar system, the time evolution of the magnetic field of celestial bodies, population growth in ecology, the dynamics of the action potentials in neurons, and molecular vibrations. There is some controversy over the existence of chaotic dynamics in plate tectonics and in economics.
Chaos theory is currently being applied to medical studies of epilepsy, specifically to the prediction of seemingly random seizures by observing initial conditions. 
Quantum chaos theory studies how the correspondence between quantum mechanics and classical mechanics works in the context of chaotic systems.  Relativistic chaos describes chaotic systems under general relativity. 
The motion of a system of three or more stars interacting gravitationally (the gravitational N-body problem) is generically chaotic. 
In electrical engineering, chaotic systems are used in communications, random number generators, and encryption systems.
In numerical analysis, the Newton-Raphson method of approximating the roots of a function can lead to chaotic iterations if the function has no real roots. 
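As a concrete illustration of that last point, the Python sketch below applies Newton's method to f(x) = x² + 1, which has no real roots; the iterates never settle down and instead wander erratically along the real line. The starting value is an arbitrary choice.

def newton_step(x):
    # x_{n+1} = x_n - f(x_n) / f'(x_n) with f(x) = x^2 + 1
    return x - (x * x + 1) / (2 * x)

x = 0.5
for n in range(10):
    x = newton_step(x)
    print(f"iteration {n + 1}: x = {x:+.6f}")
# With no real root to converge to, the sequence keeps jumping around.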

History of Chaos theory


                                   An early proponent of chaos theory was Henri Poincaré. In the 1880s, while studying the three-body problem, he found that there can be orbits which are nonperiodic, and yet not forever increasing nor approaching a fixed point. In 1898 Jacques Hadamard published an influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of constant negative curvature. In the system studied, "Hadamard's billiards", Hadamard was able to show that all trajectories are unstable in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent.
Much of the earlier theory was developed almost entirely by mathematicians, under the name of ergodic theory. Later studies, also on the topic of nonlinear differential equations, were carried out by G.D. Birkhoff, A. N. Kolmogorov, M.L. Cartwright and J.E. Littlewood, and Stephen Smale. Except for Smale, these studies were all directly inspired by physics: the three-body problem in the case of Birkhoff, turbulence and astronomical problems in the case of Kolmogorov, and radio engineering in the case of Cartwright and Littlewood. Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic oscillation in radio circuits without the benefit of a theory to explain what they were seeing.
Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behaviour of certain experiments like that of the logistic map. What had previously been excluded as measurement imprecision and simple "noise" was considered by chaos theorists to be a full component of the studied systems.
The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems.
An early pioneer of the theory was Edward Lorenz whose interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz was using a simple digital computer, a Royal McBee LGP-30, to run his weather simulation. He wanted to see a sequence of data again and to save time he started the simulation in the middle of its course. He was able to do this by entering a printout of the data corresponding to conditions in the middle of his simulation which he had calculated last time.
To his surprise the weather that the machine began to predict was completely different from the weather calculated before. Lorenz tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 was printed as 0.506. This difference is tiny and the consensus at the time would have been that it should have had practically no effect. However Lorenz had discovered that small changes in initial conditions produced large changes in the long-term outcome. Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modelling cannot in general make long-term weather predictions. Weather is usually predictable only about a week ahead.
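Lorenz's accidental experiment is easy to reproduce on a modern machine. The Python sketch below integrates his 1963 convection model (standard parameters σ = 10, ρ = 28, β = 8/3; the starting point is an arbitrary choice of mine) twice, once from a "full-precision" state and once from the same state rounded to three decimals, and prints how quickly the two runs drift apart.

import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt=0.01):
    k1 = lorenz(state)
    k2 = lorenz(state + 0.5 * dt * k1)
    k3 = lorenz(state + 0.5 * dt * k2)
    k4 = lorenz(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

a = np.array([1.001234, 1.001234, 1.001234])   # "full precision" start
b = np.round(a, 3)                              # the 3-digit printout version

for step in range(1, 4001):
    a, b = rk4_step(a), rk4_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}: |difference| = {np.linalg.norm(a - b):.4f}")
# The initial rounding error of ~0.0004 grows until the two "forecasts" are unrelated.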
In 1963, Benoît Mandelbrot found recurring patterns at every scale in data on cotton prices. Beforehand, he had studied information theory and concluded noise was patterned like a Cantor set: on any scale the proportion of noise-containing periods to error-free periods was a constant – thus errors were inevitable and must be planned for by incorporating redundancy. Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards). This challenged the idea that changes in price were normally distributed. In 1967, he published "How long is the coast of Britain? Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Arguing that a ball of twine appears to be a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales ("self-similarity") is a fractal (for example, the Menger sponge, the Sierpiński gasket and the Koch curve or "snowflake", which is infinitely long yet encloses a finite space and has a fractal dimension of circa 1.2619). In 1975 Mandelbrot published The Fractal Geometry of Nature, which became a classic of chaos theory. Biological systems such as the branching of the circulatory and bronchial systems proved to fit a fractal model.
Chaos was observed by a number of experimenters before it was recognized; e.g., in 1927 by van der Pol and in 1958 by R.L. Ives. However, as a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on Nov. 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970.
In December 1977 the New York Academy of Sciences organized the first symposium on Chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw (a physicist, part of the Eudaemons group with J. Doyne Farmer and Norman Packard who tried to find a mathematical method to beat roulette, and then created with them the Dynamical Systems Collective in Santa Cruz, California), and the meteorologist Edward Lorenz.
The following year, Mitchell Feigenbaum published the noted article "Quantitative Universality for a Class of Nonlinear Transformations", where he described logistic maps. Feigenbaum notably discovered the universality in chaos, permitting an application of chaos theory to many different phenomena.
In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh–Bénard convection systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. Feigenbaum "for his brilliant experimental demonstration of the transition to turbulence and chaos in dynamical systems".
Then in 1986 the New York Academy of Sciences co-organized with the National Institute of Mental Health and the Office of Naval Research the first important conference on Chaos in biology and medicine. There, Bernardo Huberman presented a mathematical model of the eye tracking disorder among schizophrenics. This led to a renewal of physiology in the 1980s through the application of chaos theory, for example in the study of pathological cardiac cycles.
In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review Letters describing for the first time self-organized criticality (SOC), considered to be one of the mechanisms by which complexity arises in nature.
Alongside largely lab-based approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant behaviour. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including: earthquakes (which, long before SOC was discovered, were known as a source of scale-invariant behaviour such as the Gutenberg–Richter law describing the statistical distribution of earthquake sizes, and the Omori law describing the frequency of aftershocks); solar flares; fluctuations in economic systems such as financial markets (references to SOC are common in econophysics); landscape formation; forest fires; landslides; epidemics; and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars. These "applied" investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws.
This same year, 1987, James Gleick published Chaos: Making a New Science, which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public (though his history under-emphasized important Soviet contributions). At first the domain of work of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift exposed in The Structure of Scientific Revolutions (1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by J. Gleick.
The availability of cheaper, more powerful computers broadens the applicability of chaos theory. Currently, chaos theory continues to be a very active area of research, involving many different disciplines (mathematics, topology, physics, population biology, biology, meteorology, astrophysics, information theory, etc.).

                          Chaos theory

                    Chaos theory is a field of study in mathematics, with applications in several disciplines including physics, engineering, economics and biology. Chaos theory studies the behavior of dynamical systems that are highly sensitive to initial conditions, an effect which is popularly referred to as the butterfly effect. Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for such dynamical systems, rendering long-term prediction impossible in general.  This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved.  In other words, the deterministic nature of these systems does not make them predictable.  This behavior is known as deterministic chaos, or simply chaos. This was summarised by Edward Lorenz as follows: 
Chaos: When the present determines the future, but the approximate present does not approximately determine the future.
Chaotic behavior can be observed in many natural systems, such as weather.  Explanation of such behavior may be sought through analysis of a chaotic mathematical model, or through analytical techniques such as recurrence plots and Poincaré maps.  

Chaotic dynamics

In common usage, "chaos" means "a state of disorder".  However, in chaos theory, the term is defined more precisely. Although there is no universally accepted mathematical definition of chaos, a commonly used definition says that, for a dynamical system to be classified as chaotic, it must have the following properties: 
  1. it must be sensitive to initial conditions;
  2. it must be topologically mixing; and
  3. its periodic orbits must be dense.
The requirement for sensitive dependence on initial conditions implies that there is a set of initial conditions of positive measure which do not converge to a cycle of any length.

Sensitivity to initial conditions

Sensitivity to initial conditions means that each point in such a system is arbitrarily closely approximated by other points with significantly different future trajectories. Thus, an arbitrarily small perturbation of the current trajectory may lead to significantly different future behaviour. However, it has been shown that the last two properties in the list above actually imply sensitivity to initial conditions, and if attention is restricted to intervals, the second property implies the other two (an alternative, and in general weaker, definition of chaos uses only the first two properties in the above list). It is interesting that the most practically significant condition, that of sensitivity to initial conditions, is actually redundant in the definition, being implied by two (or, for intervals, one) purely topological conditions, which are therefore of greater interest to mathematicians.
Sensitivity to initial conditions is popularly known as the "butterfly effect", so called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C. entitled Predictability: Does the Flap of a Butterfly’s Wings in Brazil set off a Tornado in Texas? The flapping wing represents a small change in the initial condition of the system, which causes a chain of events leading to large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different.
A consequence of sensitivity to initial conditions is that if we start with only a finite amount of information about the system (as is usually the case in practice), then beyond a certain time the system will no longer be predictable. This is most familiar in the case of weather, which is generally predictable only about a week ahead. 
The Lyapunov exponent characterises the extent of the sensitivity to initial conditions. Quantitatively, two trajectories in phase space with initial separation δZ₀ diverge as
|δZ(t)| ≈ e^(λt) |δZ₀|
where λ is the Lyapunov exponent. The rate of separation can be different for different orientations of the initial separation vector. Thus, there is a whole spectrum of Lyapunov exponents — the number of them is equal to the number of dimensions of the phase space. It is common to just refer to the largest one, i.e. to the Maximal Lyapunov exponent (MLE), because it determines the overall predictability of the system. A positive MLE is usually taken as an indication that the system is chaotic.
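For a concrete one-dimensional case, the maximal Lyapunov exponent of the fully chaotic logistic map x → 4x(1 − x) can be estimated by averaging ln|f′(xₙ)| along an orbit; the exact value for this map is ln 2 ≈ 0.693, which makes it a convenient sanity check. A minimal Python sketch (the iteration counts are arbitrary choices):

import math, random

def lyapunov_logistic(x0, n_iter=100_000, n_discard=1_000):
    """Estimate the MLE of x -> 4x(1-x) as the orbit average of ln|f'(x)|."""
    x = x0
    for _ in range(n_discard):              # let transients die out
        x = 4 * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = 4 * x * (1 - x)
        total += math.log(abs(4 - 8 * x))   # f'(x) = 4 - 8x
    return total / n_iter

print(f"Estimated MLE: {lyapunov_logistic(random.random()):.4f} (ln 2 = {math.log(2):.4f})")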
There are also measure-theoretic mathematical conditions (discussed in ergodic theory) such as mixing or being a K-system which relate to sensitivity of initial conditions and chaos.

Topological mixing

Topological mixing (or topological transitivity) means that the system will evolve over time so that any given region or open set of its phase space will eventually overlap with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system.
Topological mixing is often omitted from popular accounts of chaos, which equate chaos with sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points will eventually become widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behaviour: all points except 0 will tend to positive or negative infinity.
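A few lines of Python make the doubling example concrete: two nearby starting values separate exponentially fast, yet both orbits simply run off to infinity, so the sensitivity never produces anything resembling chaos. The starting values below are arbitrary.

x, y = 1.0, 1.0 + 1e-9      # two nearby initial values
for n in range(1, 41):
    x, y = 2 * x, 2 * y     # the doubling map
    if n % 10 == 0:
        print(f"step {n:2d}: x = {x:.3e}, separation = {y - x:.3e}")
# The separation doubles at every step (sensitive dependence), but both orbits
# head to infinity in lockstep: simple, non-chaotic behaviour with no mixing.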

Density of periodic orbits

Density of periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by x → 4x(1 − x) is one of the simplest systems with density of periodic orbits. For example, (5 − √5)/8 → (5 + √5)/8 → (5 − √5)/8 (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem).
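The period-2 orbit quoted above is easy to verify numerically; the following Python sketch simply checks that the two points map to each other under x → 4x(1 − x).

import math

f = lambda x: 4 * x * (1 - x)     # the logistic map

a = (5 - math.sqrt(5)) / 8        # ≈ 0.3454915
b = (5 + math.sqrt(5)) / 8        # ≈ 0.9045085

print(f"f(a) = {f(a):.7f}, expected b = {b:.7f}")
print(f"f(b) = {f(b):.7f}, expected a = {a:.7f}")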
Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any one-dimensional system which exhibits a regular cycle of period three will also display regular cycles of every other length as well as completely chaotic orbits.

Strange attractors

Some dynamical systems, like the one-dimensional logistic map defined by x → 4x(1 − x), are chaotic everywhere, but in many cases chaotic behaviour is found only in a subset of phase space. The cases of most interest arise when the chaotic behaviour takes place on an attractor, since then a large set of initial conditions will lead to orbits that converge to this chaotic region.
An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, such a plot is likely to trace out a picture of the entire final attractor; indeed, an orbit started almost anywhere in the basin gives a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it was not only one of the first but is also one of the most complex, and as such gives rise to a very interesting pattern that looks like the wings of a butterfly.
Unlike fixed-point attractors and limit cycles, the attractors which arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set which forms at the boundary between basins of attraction of fixed points – Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and a fractal dimension can be calculated for them.
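As a concrete discrete example, the classic Hénon map (with the standard parameter values a = 1.4, b = 0.3) traces out its strange attractor after only a few thousand iterations; the starting point and iteration count in this Python sketch are arbitrary choices.

import matplotlib.pyplot as plt

a, b = 1.4, 0.3
x, y = 0.0, 0.0
xs, ys = [], []

for n in range(20_000):
    x, y = 1 - a * x * x + y, b * x   # one Hénon map iteration
    if n > 100:                       # drop the initial transient
        xs.append(x)
        ys.append(y)

plt.scatter(xs, ys, s=0.1, color="black")
plt.title("Hénon map strange attractor")
plt.xlabel("x")
plt.ylabel("y")
plt.show()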

Minimum complexity of a chaotic system

Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite dimensional linear systems are never chaotic; for a dynamical system to display chaotic behaviour it has to be either nonlinear, or infinite-dimensional.
The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed above is generated by a system of three differential equations with a total of seven terms on the right-hand side, five of which are linear terms and two of which are quadratic (and therefore nonlinear). Another well-known chaotic attractor is generated by the Rössler equations, with seven terms on the right-hand side, only one of which is (quadratic) nonlinear. Sprott found a three-dimensional system with just five terms on the right-hand side, and with just one quadratic nonlinearity, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved.
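Writing the Rössler system out in code makes the term count visible: seven terms on the right-hand side, with z·x as the single quadratic nonlinearity. The Python sketch below integrates it with the classic chaotic parameter values a = b = 0.2, c = 5.7; the starting point, step size, and iteration count are arbitrary choices.

import numpy as np
import matplotlib.pyplot as plt

def rossler(state, a=0.2, b=0.2, c=5.7):
    x, y, z = state
    return np.array([-y - z,               # 2 terms
                     x + a * y,            # 2 terms
                     b + z * x - c * z])   # 3 terms, one nonlinear (z*x)

def rk4_step(state, dt=0.01):
    k1 = rossler(state)
    k2 = rossler(state + 0.5 * dt * k1)
    k3 = rossler(state + 0.5 * dt * k2)
    k4 = rossler(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 1.0, 1.0])
points = []
for _ in range(50_000):
    state = rk4_step(state)
    points.append(state[:2])

xs, ys = np.array(points).T
plt.plot(xs, ys, linewidth=0.2)
plt.title("Rössler attractor (x-y projection)")
plt.show()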
While the Poincaré–Bendixson theorem means that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can exhibit chaotic behaviour.  Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite-dimensional.  A theory of linear chaos is being developed in a branch of mathematical analysis known as functional analysis.