User:Chetvorno/work4

Elementary explanation

Quantum superpositions

One of the strangest ideas in quantum mechanics is the concept of a 'quantum superposition'. The quantum principle of wave–particle duality says that an isolated small object such as an atomic particle does not have a precise position but is spread out through space, represented by a wave function which gives the probability of finding the particle at different points. Similarly, other properties of the particle, such as its momentum and spin, do not have precise values, only probabilities, given by the wave function. When a small-scale (atomic-scale) event, such as the decay of a particle, can have several possible outcomes, the wave function which represents the particle 'splits' and contains multiple components, each representing one of the possible outcomes. Each of these components, called an eigenstate, evolves independently with time. A 'superposition' is the wavefunction representing all the possible locations or states of the particle.

When an experiment is done which is capable of distinguishing which of the alternative locations or states the particle is in, it always finds, in conformity with common sense, the particle in one position in space or one of its possible states, not a 'superposition'. However, if the experiment is repeated with the same initial conditions, the location or state the particle is found in varies randomly among all the possible values, implying the particle does not ''have'' a precise location or state before the experiment measures it. The probability of a given value occurring is determined by the square of the magnitude of the wavefunction; this is called the Born rule. Superpositions can be detected indirectly, by repeating the experiment many times, or with multiple particles; the observed locations of the particles form an 'interference pattern' caused by the interaction of the waves representing the different eigenstates. The interference pattern is evidence that, before the experiment disturbed it, the particle or system was in multiple states (or locations) at once.
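
In symbols (a standard textbook formulation, added here for illustration and not drawn from this article's sources): if a wavefunction is a superposition of two eigenstates,

    \psi = c_1\,\phi_1 + c_2\,\phi_2, \qquad |c_1|^2 + |c_2|^2 = 1,

then the Born rule says a measurement finds outcome 1 with probability |c_1|^2 and outcome 2 with probability |c_2|^2.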

After the experiment the wavefunction of the particle continues to evolve continuously in time according to the Schrödinger equation, but from the location or state it was observed in, not the superposition of states it was in before. In other words, determining the location or state of the particle "did something" to the wavefunction. In quantum mechanics this is called a 'measurement' or 'observation', whether it occurs in a scientific experiment with a person observing or in a natural process with no one watching. In the Schrödinger's cat experiment the two eigenstates would be the decayed atom and its result, the dead cat, and the undecayed atom and its result, the live cat.

Superposition of wavefunctions is one of the best established principles of quantum mechanics. It is ubiquitous in quantum processes, has been observed by scientists for 90 years, and is necessary to the understanding of physics and chemistry. One well-known example is the double-slit experiment, in which a single particle in a superposition passes through two side-by-side slits in a barrier at the same time.

The Copenhagen interpretation

The traditional explanation for this behavior, now called the Copenhagen interpretation (CI), mainly associated with physicists Niels Bohr and Werner Heisenberg in the 1920s and 30s, was that a superposition lasts until it interacts with its environment, the 'large scale world'. During this process the wavefunction superposition is said to 'collapse', and one of the alternative eigenstates randomly becomes reality.[1]: 85–90 [2][3]

One example of such an 'observation' would be a physicist's instruments detecting which state the particle is in. Bohr emphasized that humans cannot perceive quantum events directly; we can only know the quantum world indirectly, from its effects on ordinary-sized macroscopic objects such as our senses and scientific instruments.[1]: 85–90 [2] As stated in Bohr's correspondence principle, large objects composed of many quantum particles obey the familiar laws of classical physics, not quantum mechanics; an ordinary-sized object is never observed in multiple places at once. The only way information about the wavefunction can reach us is through processes in which the small-scale wavefunction affects large-scale objects. The interference pattern indicates that before interaction with the outside world the wavefunction was in a superposition of multiple states, but the result of the observation process is a single state, leading to the conclusion that the observation process reduced the superposition to a single state.

In the Copenhagen interpretation of an observation, the world is divided into the microscopic quantum events, which are viewed as wavefunctions, and the "external world", which is viewed classically. We can only directly perceive the end of the "observation" process, when classical macroscopic observable results appear, such as the quantities displayed on a physicist's instruments or the live or dead cat in Schrödinger's experiment.[1]: 85–90  In the Copenhagen interpretation, unobserved quantities, such as the state of the particle or the instruments before they are observed, are considered unknowable. So in the CI the dividing line between the two regimes (sometimes called the 'Heisenberg cut'), the point where the multiple quantum states collapse into a single outcome, is unknowable.[1]: 85–90 [2] It may occur at any point in the observation process before the result appears on the physicist's instruments; we have no way to tell. So the Copenhagen interpretation leaves undetermined exactly when and how the superposition collapses into one or the other alternative, leaving open the possibility that a macroscopic object could be in a superposition.

Schrodinger's thought experiment

Schrödinger conceived his thought experiment to show that the Copenhagen interpretation could have results which went against 'common sense' when superpositions grew to large scales. Perhaps a particle can be in multiple states at once, he thought, but certainly not a large object like a cat. He imagined a case in which a quantum event can result in two very different large-scale outcomes. To prevent the resulting superposition from collapsing through interaction with the outside world, the experiment is imagined to take place inside a box. A cat is put in the box, along with a tiny amount of radioactive material, a radiation detector (a Geiger counter), and a flask of poison, and the box is closed. If the Geiger counter detects radiation, a mechanism breaks the flask, releasing the poison and killing the cat. So whether the cat is dead or alive at any later time depends on a quantum event: whether a radioactive atom has decayed or not.

Radioactive decay, the emission of a particle by a radioactive nucleus, is a quantum event that occurs at random times. Quantum mechanics says that before the box is opened and the result is observed, each of the radioactive atoms is in a superposition consisting of an undecayed state and a decayed state in which the atom has emitted a particle of radiation. The Copenhagen interpretation implied that since the fate of the cat is dependent on ('entangled' with) the fate of the atom, and the atom is in a superposition, until the box is opened the cat may also be in a superposition. Until it is opened, the box's interior may be in a peculiar quantum state: a wavefunction which is a superposition of the separate possible outcomes, some containing a live cat and some a dead cat. In effect, the cat is simultaneously alive and dead. Schrödinger's view was that this was absurd.
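
In standard quantum notation (a generic illustration, not taken from this article's sources), the entangled atom–cat state before the box is opened would be written

    |\Psi\rangle = c_1\,|\text{undecayed}\rangle\,|\text{alive}\rangle + c_2\,|\text{decayed}\rangle\,|\text{dead}\rangle,

where |c_1|^2 and |c_2|^2 are the probabilities of finding the cat alive or dead when the box is opened.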

Wavefunction collapse

When the box is opened and the result observed, only evidence of one of the two possible histories, a live cat or a dead cat, will be found. Any attempt from outside to determine the state inside the closed box will theoretically also have the effect of collapsing the superposition, again resulting in only one outcome. If a video camera or human observer is placed in the box with the cat (see Wigner's friend), the CI says it will also be 'entangled' with the apparatus and will split into a superposition, and after collapse only one version will be left, which records only one history: either the cat living or the cat dying. So large-scale superpositions such as the dead cat / live cat example, if they exist, seem to be unobservable, leaving the question of whether they actually occur, and how long they last, open to different "interpretations" of what happens during the unobserved period. In quantum mechanics, the unresolved question of how a wavefunction in a superposition reduces to a specific classical outcome is called the "measurement problem". Schrödinger's cat has become an iconic example for illustrating the measurement problem.

Although the outcome of a single trial is random, the probability of the cat being found alive depends on the amount of radioisotope used, its half-life, the placement of the radiation detector, and how long the box is left unopened. Many descriptions imply the experiment must be designed so the probabilities of the cat living and dying are 50–50, but this is not necessary. As long as there is some probability of each outcome occurring, the cat superposition cannot be ruled out.
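
In the simplest idealization (a single radioactive atom whose decay is detected with certainty, an assumption made purely for illustration), the probability of finding the cat alive after the box has been closed for a time t is just the survival probability of the atom,

    P(\text{alive}) = e^{-\lambda t} = 2^{-t/T_{1/2}},

where T_{1/2} is the isotope's half-life. The 50–50 case corresponds to waiting exactly one half-life, t = T_{1/2}.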

Whether a conscious observer is required

The Copenhagen interpretation has always included a range of opinions. Some early physicists, such as Eugene Wigner and John von Neumann, believed the collapse into one or the other possibility occurs only when a conscious mind observes the outcome, a theory now called the von Neumann–Wigner interpretation or 'consciousness causes collapse'.[2] If this theory is true, it would make wavefunction collapse, and all of classical physics that depends on it, an artifact of the mind.[note 1]

This is not considered part of the modern Copenhagen interpretation.[1]: 85–90 [note 2] Most physicists believe the 'observation' process does not require a conscious mind;[2] the reduction of the superposition into a single outcome is an objective fact that has occurred by the time the results appear on the physicist's instruments (or, in Schrödinger's experiment, when the box is opened). A related point of view, verging on solipsism, is that since our minds are necessary to observe events, it is philosophically impossible to distinguish whether collapse occurs in the mind or not, so the distinction is unimportant. The 'consciousness causes collapse' theory would violate the common scientific assumption that the laws of physics apply whether or not humans are watching, and would result in strange counterintuitive behavior, such as the wavefunctions of macroscopic objects reducing to a classical state only when a person looks at them.

Modern view of the experiment

The Copenhagen interpretation treats the "external world", or at least the part consisting of the instruments observing the particle, as classical; it ignores the fact that the apparatus that detects the particle, the cat, the box, and the physicist who looks into the box are all themselves composed of atomic particles which consist of wavefunctions and obey the same quantum laws. However, at the fundamental level everything is quantum mechanical, composed of wavefunctions. This is necessary for understanding the experiment;[2] the underlying reason for believing the cat may be in a superposition is that, due to the linearity of the Schrödinger equation, when a particle in a superposition interacts with another particle or object, the result is to put the second object's wavefunction into a superposition. A complete picture of wavefunction collapse should describe how the particle wavefunction interacts with the collective wavefunctions of the particles making up the observing apparatus and external world.
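
This consequence of linearity can be stated compactly (again a generic textbook argument, not specific to this article's sources). If an interaction U takes |\text{decayed}\rangle\,|\text{ready}\rangle to |\text{decayed}\rangle\,|\text{triggered}\rangle while leaving |\text{undecayed}\rangle\,|\text{ready}\rangle unchanged, then for a superposed atom linearity forces

    U\big[(c_1|\text{decayed}\rangle + c_2|\text{undecayed}\rangle)\,|\text{ready}\rangle\big] = c_1|\text{decayed}\rangle|\text{triggered}\rangle + c_2|\text{undecayed}\rangle|\text{ready}\rangle,

so the detector inherits the atom's superposition rather than resolving it.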

A modern view of the experiment is that the observation process consists of the radioactive atom's wavefunction, originally isolated, interacting with more and more other wavefunctions. For example:

  • When the radioactive atom decays, it emits a subatomic particle. Since at any time before the box is opened the atom is in a superposition of decayed and undecayed states, the particle is in a superposition of existence and nonexistence.
  • If the particle was emitted and strikes the sensor of the radiation detector (Geiger counter), it ionizes some of the sensor's atoms (knocking electrons out of them, leaving them electrically charged). Since the particle was in a superposition, if the superposition has not collapsed by this point, some of the sensor atoms are in a superposition of ionized and un-ionized states.
  • The ionization of the detector induces a small pulse of electric current in the electrode of the detector, which is amplified by the detector's electronics. So if the superposition has not collapsed before this point, the detector's electronic circuits are in a superposition of current and no current states.
  • The amplified current activates a mechanism which breaks the flask of poison. If the superposition has not collapsed yet, the flask of poison is in a superposition of broken and unbroken states.
  • The poison, if released, kills the cat. So if the superposition has not collapsed yet, the cat is in a superposition of dead and alive states.

In each of these steps from microscopic to macroscopic results, the superposition, if it exists, involves more particles, and the difference between the two possible outcomes increases. Another way of saying this is that the radioactive atom's wavefunction becomes progressively more entangled with, or diffused into, the wavefunction of the equipment in the box. Modern research on wavefunction collapse focuses on when in this process the superposition collapses into one or the other possibility.

Other interpretations

Schrödinger did not believe that large-scale superpositions like the live cat / dead cat would actually occur, and the thought experiment was his effort to show that the idea was absurd. Later theorists have taken this possibility seriously, however, and other interpretations of quantum mechanics besides the Copenhagen interpretation have been proposed. These theories are called "interpretations" because they all agree (or nearly agree) with the well-confirmed results of quantum mechanics: the Born rule, that the probability of finding the particle in any location is given by the square of the magnitude of the wavefunction, whose evolution is given by the Schrödinger equation. They differ only about unobserved events, such as how long the superposition is presumed to persist:

  • Some interpretations, such as the ensemble interpretation and the consistent histories interpretation, maintain that superpositions are an illusion and don't occur at all. For a given observer of the experiment, everything in the box, the radioactive atom, detector, and cat, is always in a single unambiguous state. Either the atom decays and the cat dies, or the atom does not decay and the cat lives. These theories require a redefinition of what “observation” is to explain where the interference pattern comes from.
  • Another interpretation, the de Broglie–Bohm theory, says the particle exists separately from the wave function. The particle is never in a superposition; it always has a precise location and trajectory, but the wavefunction serves to "guide" the particle: it has a greater probability of being found where the wavefunction has a larger magnitude. In this interpretation the cat is also unambiguously alive or dead.
  • Others, such as objective collapse theories, assume that superpositions exist on a microscopic scale but collapse before they grow to large scales; the radioactive atom will be in a superposition, but it will collapse before it affects the cat, so the cat will be unambiguously alive or dead.
  • Still other theories, such as the many-worlds interpretation, propose that wavefunction collapse does not occur at all; the live cat / dead cat superposition persists even after the box is opened, causing a "splitting" of the external world into multiple "branches".[2] This is undetectable to us because, due to decoherence, the multiple branches of reality are mutually unobservable: the copies of the physicist, his equipment, and us in one branch cannot detect the other branches.

There are also other interpretations. Although a survey of physicists found that the largest number favored the traditional Copenhagen interpretation, there is no consensus among quantum physicists as to which is correct.[2] Some of the interpretations have testable consequences which may make it possible to rule them in or out in the future.

Decoherence

During the 1970s, physicist Heinz-Dieter Zeh and others recognized a natural process, called decoherence, which provides an explanation of apparent wavefunction collapse. As a superposition becomes entangled with the external world and grows from a microscopic size to a statistically large number of particles, the separate eigenstates stop interfering and become mutually unobservable. Within any eigenstate the probability of interaction with other eigenstates approaches zero, and the state begins to resemble a classical outcome. In quantum language, the observation process causes the superposition of pure states to transition to a mixed state. This could explain why only one large-scale result, either a dead cat or a live cat, is observed.

Decoherence is a statistical thermodynamic process that occurs when information from the superposition wavefunction diffuses irreversibly into the environment. Any observation or process in which a small-scale quantum event affects a large-scale outcome involves amplification, an irreversible process which adds energy (thermal noise) to the ensemble of atoms, increasing the state space (number of degrees of freedom). The original radioactive atom superposition has only a few degrees of freedom. As it becomes entangled with the radiation detector, cat, and air molecules in the box, due to thermal noise the number of possible states the superposition could occupy, the state space, increases enormously (a macroscopic object such as a Geiger counter or a cat has on the order of 10²³ atoms, each of which can be in multiple quantum states, so the state space has at least that many degrees of freedom). In order for two of the states to interfere they must be coherent: all the positions and momenta of the particles must be identical in all but a few degrees of freedom. As more and more particles are involved, the probability of this becomes vanishingly small. The individual eigenstates, the classical outcomes of the experiment, are the only states which remain with finite probability after this 'environmentally induced superselection' process.
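
Schematically (a standard density-matrix picture of decoherence, included here for illustration), entanglement with the environment suppresses the off-diagonal interference terms between the two outcomes:

    \rho = \begin{pmatrix} |c_1|^2 & c_1 c_2^* \\ c_1^* c_2 & |c_2|^2 \end{pmatrix} \longrightarrow \begin{pmatrix} |c_1|^2 & 0 \\ 0 & |c_2|^2 \end{pmatrix},

leaving what looks like a classical mixture of 'live cat' and 'dead cat' with the Born-rule probabilities.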

Calculations indicate that the decoherence into separate outcomes occurs within 10⁻¹⁴ seconds, before the particle hits the detector, due to interaction with random air molecules and black-body radiation in the box, so the cat itself is never in the paradoxical condition of being in an "alive and dead" superposition.

Decoherence is not an actual wavefunction collapse; it is a statistical process that appears as wavefunction collapse. This means that after decoherence the other eigenstates (outcomes of the experiment) should still be present in the environment; they are just undetectable. For this reason it is not considered a solution to the measurement problem: there is still the problem of what happened to the other states. Decoherence may explain why the classical world emerges from quantum physics,[2] but this is controversial and not accepted by all physicists.

Experiments on superpositions

In recent years, using atom traps, it has become possible to observe individual atoms and other particles in superpositions and to watch how the superpositions collapse. Many experiments have been performed, and a consistent picture has emerged of how superpositions act. If an experiment is performed which determines where the particle is, the interference pattern disappears. If an experiment is performed which gives only partial information on where the particle is, the interference pattern becomes less visible. If an experiment is designed to distinguish where the particle is, but the information is stored and later destroyed without being read, the superposition does not collapse; only if the information is observed by instruments does the superposition collapse.

Large-scale quantum superpositions, or cat states, have many applications in technology. As explained above, if a superposition is disturbed by noise it decoheres, collapsing into a single classical state, as in Schrödinger's experiment. But if it can be shielded adequately from noise and disturbance it will remain coherent, and the multiple states composing the superposition will remain observable. As more particles are involved in the superposition this becomes harder to do. But recent experiments have created coherent superpositions involving billions of particles in two different places at once, so there is no evidence yet of a limit to the size of superpositions. The ongoing development of quantum computers will eventually require macroscopic circuits that are in superpositions, and no fundamental reason has yet been found why such large superpositions cannot be created.

Notes

  1. ^ More precisely, it would mean either that wavefunction collapse and classical physics are mental constructs, illusions occurring in the mind, or that the brain has some special ability to collapse wavefunctions.
  2. ^ Bohr wrote, "Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature. The observer has, rather, only the function of registering decisions, i.e., processes in space and time, and it does not matter whether the observer is an apparatus or a human being; but the registration, i.e., the transition from the "possible" to the "actual," is absolutely necessary here and cannot be omitted from the interpretation of quantum theory."

References

  1. ^ a b c d e Omnès, R. (1994). The Interpretation of Quantum Mechanics. Princeton University Press. ISBN 978-0-691-03669-4. OCLC 439453957.
  2. ^ a b c d e f g h i "Interpretation of Quantum Mechanics". Encyclopaedia Britannica online. Encyclopaedia Britannica, Inc. 2020. Retrieved 7 August 2021.
  3. ^ Collins, Graham P. (19 November 2007). "The Many Interpretations of Quantum Mechanics". Scientific American. Retrieved 25 August 2021.

For Talk:Schrödinger's cat

I added an extensive "Elementary explanation" section to the article. In spite of the excellent work done by previous editors, there are a huge number of misunderstandings by general readers about this confusing topic, as can be seen in the archives of this page. The article needs an explanation in ordinary language.

I know this section will not satisfy anyone familiar with quantum mechanics. Elementary explanations of technical subjects satisfy no one. Experts inevitably feel the necessary simplifications misrepresent the ideas, while noobs complain they are still too confusing. This is the iconic experiment used to discuss the measurement problem and interpretations of quantum mechanics, a very controversial subject even among quantum physicists themselves.

My goal in this section was to avoid stuffy philosophical jargon like ontological, deterministic, epistemological, and most mathematical jargon like Hilbert space or bra-ket notation, and stick to the results of experiments. Einstein said you don't really know a subject until you can explain it to your grandmother. We’re Wikipedia editors; we can do this.





How it works

The overall operation of the Audion was superficially similar to that of the modern triode tube which evolved from it. A separate current through the filament heated it white-hot. The hot filament emitted electrons into the space inside the tube, a process called thermionic emission. A positive voltage of 20 to 40 volts with respect to the filament was applied to the plate. The negative electrons were attracted to the positive plate; they flowed to it between the grid wires, creating a current of electrons through the tube from filament to plate.

A small voltage applied between the grid and filament could control this current. The grid wires between the filament and plate served as a controllable barrier or "valve" for the electrons. If the grid voltage was made more negative, the grid wires would repel the electrons, and fewer would get through to reach the plate. If the grid was made less negative, or positive, more electrons would get through the grid wires, and the plate current would increase. Applying a varying signal voltage to the grid caused similar variations in the plate current; the plate current "followed" the grid voltage. Since the power controlled in the plate circuit could be much greater than the power of the signal applied to the grid, the Audion allowed a weak electrical signal to control a stronger one. This property became known as amplification. The Audion was the first electrical device which could amplify.
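
As a rough numerical sketch of this control action (using the standard three-halves-power law for an idealized hard-vacuum triode, not De Forest's gassy Audion, and with invented component values):

    # Idealized triode plate current from the Child-Langmuir 3/2-power law:
    #   i_p = K * (v_grid + v_plate / mu)^(3/2), zero when the argument is negative.
    # K (perveance) and mu (amplification factor) are invented, illustrative values.

    K = 2e-3        # perveance, A/V^1.5 (hypothetical)
    MU = 8.0        # amplification factor (hypothetical)
    V_PLATE = 30.0  # plate supply in volts; Audion plate voltages were ~20-40 V

    def plate_current(v_grid: float) -> float:
        """Plate current in amperes for a given grid voltage."""
        drive = v_grid + V_PLATE / MU
        return K * drive ** 1.5 if drive > 0 else 0.0

    # A small swing in grid voltage produces a much larger swing in plate current:
    for v_g in (-2.0, -1.0, 0.0, 1.0):
        print(f"grid {v_g:+.1f} V -> plate current {1000 * plate_current(v_g):6.2f} mA")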

Amplification was a new concept in 1912 and there was no word for it. Before the Audion, the only electrical device which resembled its operation even slightly was an electromechanical mechanism called a relay, so before the term amplification was coined the Audion was called an "electron relay".

Differences between the Audion and later triodes

The early Audions differed from later triodes (including De Forest's later Ultraudions and Oscillions) in that they were incompletely evacuated; some residual air was left in the tube. De Forest said this was deliberate; he believed the tube required ionization to operate. The residual gas gave the Audion more complicated behavior than the simple triode described above: the electrons struck neutral gas molecules, knocking orbital electrons off them and creating ions, so instead of a single type of particle in the tube (electrons) there were three: electrons, neutral molecules, and positive ions.

Around 1913–1914, Irving Langmuir at GE and Harold Arnold at Western Electric discovered that ions were not necessary for Audion operation. Using the resources of their large corporate laboratories, they developed means of evacuating Audions to around 10⁻⁹ atm, at which pressure there were so few gas molecules that the tube operated by pure electron conduction alone. These new tubes, the first modern triodes, were called "hard vacuum" or "hard" tubes, while tubes like the Audion which had significant gas in them were called "soft" tubes. The "hard" tubes proved to be much better amplifiers and developed into modern vacuum tubes. The "soft" tubes were limited to a few applications such as radio detectors and were later developed into special-purpose tubes called thyratrons.


Nonlinear operating characteristics

Greater variability

Audions were crude, first-generation devices, and their electrical characteristics varied widely even between tubes of the same type from the same production run. This was mostly due to variation of gas pressure in the tube. Not only did the amount of gas left in during manufacture vary widely, but as the tube aged, gas absorbed by the metal electrodes was released, increasing the pressure. Later, gas would be absorbed by the bulb walls, reducing the pressure. When the device was turned on, the increasing temperature as the tube warmed up could drive gas out of the electrodes, changing the characteristics minute to minute.

Therefore, circuits used with the Audion had to be adjustable so the user could maintain the proper operating conditions for each tube as the pressure changed. Each Audion stage usually had an adjustment knob for filament current (a multiposition switch or potentiometer) and plate voltage. The first TRF receivers could have 4 to 5 Audion stages, so the front panel could have 8 to 10 controls just for the tubes, making adjustment a nightmare. The "hard" triodes that followed the Audion contained no gas and were built to much tighter manufacturing tolerances, making their behavior much more uniform. They required fewer front panel controls, making later vacuum tube consumer equipment more "user friendly".

Grid current

In "hard" tubes the only charge carrier in the tube is negatively charged electrons, so if the grid potential is kept negative no current will flow to the grid. So the grid will have extremely high resistance and will consume no power from the signal source. Thus the "hard" triode is a voltage operated device. In contrast, in the Audion there are charge carriers of both polarities; negative electrons and positive ions, so there is only one grid voltage at which the number of positive ions and electrons striking the grid is equal, at any other potential there is grid current. At zero grid potential there are more positive ions than electrons striking the grid, so there is positive current into the grid. A capacitor in series with the grid requires a "grid leak" resistor. If the grid circuit has too high a resistance to ground, a runaway process can occur leading to "blue glow" and possible destruction of the tube.

Shot noise

When adjusted to its most sensitive point, amplification in the Audion involved a multiplication process called a "Townsend avalanche", in which each electron that ionized a gas molecule liberated additional electrons, which struck other gas molecules, producing more electrons in a chain reaction. Thus each original electron produced a pulse of current from the tube, so the discrete ionization process was amplified and could be heard as a hiss in the earphone of an Audion receiver. The origin of this noise was discovered in 1918 by Walter Schottky, and it was subsequently named shot noise. Although it is a ubiquitous feature of all electronic circuits, the Townsend amplification process made shot noise particularly audible in the Audion.
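
Schottky's result is the standard shot-noise formula (quoted here for reference): a DC current I composed of discrete charges e fluctuates with an rms noise current, in a measurement bandwidth \Delta f, of

    i_\text{noise} = \sqrt{2\,e\,I\,\Delta f}.

In the Audion the Townsend avalanche multiplied each of these elementary current pulses, raising this normally tiny fluctuation to an audible level.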

Shorter filament life

The positive ions created when the electrons struck gas molecules were all attracted to the negative filament (the cathode). During operation these massive particles bombarded the wire filament, knocking atoms of metal out of it. This was often visible as a dark coating that accumulated with use on the inner wall of the glass envelope. The bombardment progressively weakened the filament, creating "hot spots" which eventually caused it to burn out. The higher the plate voltage, the greater the velocity of the ions, and the more filament damage. Audion tubes had a much shorter lifetime than later "hard" tubes, about 30–100 hours in the first Audions (Saga of the Vacuum Tube, Part 7, Radio News, October 1943, p. 26). With their high cost, tube replacement was a burdensome operating expense, particularly for the young radio amateurs who were the Audion's initial market. Most Audions had two separate filaments, so when the first burned out the other could be used.

"Blue glow"

If a high enough voltage was applied to the Audion, the gas in the tube would break down and conduct, and a phenomenon called a glow discharge would occur, similar in principle to that seen in gas discharge tubes like neon lamps. In normal operation the tube operated in the "Townsend discharge" region, and only a small amount of the gas was ionized. However, if the voltage applied between filament and plate reached the tube's breakdown voltage, the electric field became strong enough that an electron knocked out of an atom during ionization could gain enough energy to knock an electron out of another atom, creating more free electrons and ions in a chain reaction. The ionization in the tube would abruptly increase, the grid would lose its ability to control the current so the tube would stop amplifying, the voltage across the tube would drop to a low level, and a large current would flow. The gas in the tube would emit a blue glow due to fluorescence. Either the plate or the filament current had to be reduced to stop the discharge.

The "blue glow", as it was called, was not good for the tube. The overheating often caused gas to be absorbed or evolved changing the pressure, and the excessive ion bombardment could shorten the life of the filament. Unfortunately, the most sensitive operating point for Audion detectors was close to the breakdown voltage. So the bias on Audions had to be adjusted carefully, when finding the operating point, to avoid the tube "spilling over" into "blue glow" mode.

This breakdown mechanism severely limited the plate voltage usable with Audions, to about 20–30 V, and thus limited the output power to a few milliwatts. So Audions could not be used for high power transmitters. "Hard" triodes didn't have this problem and could be designed for far higher voltages. After high-vacuum tubes became available, "soft" tubes like the Audion were not used for amplification but were relegated to use solely as detectors.

Crystal oscillator circuit

A crystal oscillator is a type of linear electronic oscillator circuit that uses a vibrating crystal as a resonator to produce a repetitive oscillating signal, a sine wave. Other types of linear oscillator circuit use an electrical resonator, such as a tuned circuit or a microwave cavity, to determine the frequency. In contrast, a crystal oscillator uses the mechanical vibrations of a tiny piece of crystal, such as quartz or barium titanate, to keep it on the correct frequency. To generate an oscillating electrical signal, the crystal is made of a material that has the property of piezoelectricity: bending or squeezing the crystal induces opposite electric charges on its opposing faces. The crystal is in the form of a narrow slice, or sometimes a tuning fork, sandwiched between metal plate electrodes, so when it vibrates an oscillating voltage is induced on the plates. Conversely, applying a voltage to the plates causes the crystal to bend or expand, so the crystal can be made to vibrate by applying an oscillating voltage. The crystal is called a piezoelectric resonator.

The purpose of the oscillator circuit is to provide the electrical energy which keeps the crystal vibrating, and to produce the sine wave output signal. The circuit consists of an electronic amplifying device, a transistor or op amp, with the crystal connected in a feedback loop between the amplifier's output and input.

The mechanical oscillations of the crystal produce an oscillating voltage on its electrodes, which is applied to the input of the amplifier. The amplified signal produces pulses of current which are applied back to the electrodes, creating pulses of force in the crystal which serve as "pushes" to keep the oscillations going. When the power is turned on, electrical noise in the circuit provides a signal to get the oscillations started.
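
The loop sustains oscillation only if it satisfies the Barkhausen criterion (the standard condition for any feedback oscillator, stated here for context): the amplifier gain A and the feedback fraction \beta(f) through the crystal must give a loop gain of at least unity with zero net phase shift,

    |A\,\beta(f_0)| \ge 1, \qquad \arg\big[A\,\beta(f_0)\big] = 0,

and because the crystal's sharp mechanical resonance meets the phase condition only at its resonant frequency f_0, the output is a sine wave locked to f_0.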




History

The Audion (triode) vacuum tube, the first electronic amplifying device, was invented in 1906 by Lee De Forest, but its amplifying ability was not recognized for some years. Due to circuit imperfections, regeneration (positive feedback) was common in early triode circuits, causing parasitic oscillations which manifested as shrieks, howls, and hisses in earphones. Regeneration was independently discovered by at least six researchers around 1912–1913: Fritz Lowenstein (1911), Lee De Forest (1912), Edwin Armstrong (1912), Alexander Meissner (1913), Irving Langmuir (1913), and H. J. Round (1913). However, only De Forest and Armstrong pursued legal priority.

Armstrong had discovered regeneration while a sophomore at Columbia University and went on to use it in his invention of the regenerative radio receiver. His father would not lend him the money for a patent attorney until he graduated, but in 1914 he filed for a patent.

De Forest, the inventor of the Audion, had noted regeneration in August 1912, but did not see any use for it. However, after he read Armstrong's patent he filed a competing patent claim of his own in 1916.

The inventor of FM radio, Edwin Armstrong, invented and patented the regenerative circuit while he was a junior in college, in 1914.[1] He patented the super-regenerative circuit in 1922, and the superheterodyne receiver in 1918.

Lee De Forest filed a patent in 1916 that became the cause of a contentious lawsuit with the prolific inventor Armstrong, whose patent for the regenerative circuit had been issued in 1914. The lawsuit lasted twelve years, winding its way through the appeals process and ending up at the Supreme Court. Armstrong won the first case, lost the second, stalemated at the third, and then lost the final round at the Supreme Court.[2][3]

At the time the regenerative receiver was introduced, vacuum tubes were expensive and consumed lots of power, with the added expense and encumbrance of heavy batteries. So this design, getting the most gain out of one tube, filled the needs of the growing radio community and immediately thrived. Although the superheterodyne receiver is the most common receiver in use today, the regenerative radio made the most out of very few parts.

In World War II the regenerative circuit was used in some military equipment. An example is the German field radio "Torn.E.b". Regenerative receivers needed far fewer tubes and consumed less power than other designs of nearly equivalent performance.

A related circuit, the super-regenerative detector, found several highly important military uses in World War II, in identification friend-or-foe equipment and in the top-secret proximity fuse.

In the 1930s, the superheterodyne design began to gradually supplant the regenerative receiver as tubes became far less expensive. In Germany the regenerative design was still used in the millions of mass-produced "people's receivers" (Volksempfänger) and "German small receivers" (DKE, Deutscher Kleinempfänger). Even after WWII, the regenerative design was still present in early postwar German minimal designs along the lines of the "people's receivers" and "small receivers", dictated by the lack of materials. Frequently German military tubes like the RV12P2000 were employed in such designs. There were even superheterodyne designs which used a regenerative stage as a combined IF amplifier and demodulator with fixed regeneration. The super-regenerative design was also present in early FM broadcast receivers around 1950. Later it was almost completely phased out of mass production, remaining only in hobby kits.




How it works

Real and reactive power

A synchronous condenser is most commonly used to increase the power factor on power distribution lines, reducing the nonproductive reactive power flow in lines caused by reactive loads. In addition to consuming electric energy, many electrical loads have a property called reactance; they temporarily store electrical energy in electric or magnetic fields. During each peak of the AC sine wave, which occurs 100 or 120 times per second in a 50 or 60 Hz power system, these reactances must be charged and discharged with energy. In addition to the energy being consumed in the load, during parts of the alternating current cycle extra energy flows into the device from the power grid and is stored in the fields, while in other parts of the cycle that energy is returned to the power grid. This reactive power does not deliver real power to the load or serve any other useful purpose, but it causes an ebb and flow of additional current (called reactive current) in the wires of the electric power grid. This additional current increases power losses due to resistive heating in the long power lines, and requires larger rated conductors and transformers to be used for a given power load.

A load's reactance is measured by a dimensionless number between 0 and 1 called its power factor (pf). This is equal to the ratio of the real power P consumed in the load to the apparent power S = V·I, the product of the rms voltage and current in the supply line. Since the real power delivered to the load is P = V·I·pf, the smaller the power factor the larger the line current needed to provide a given amount of real power to a load. In a load with capacitance the current through the load leads the voltage; this is called leading power factor. In a load with inductance the current through the load lags behind the voltage; this is called lagging power factor.
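
The arithmetic behind this cost can be sketched numerically (a minimal example with invented but representative numbers):

    # Line current needed to deliver a fixed real power P at supply voltage V:
    #   P = V * I * pf  =>  I = P / (V * pf)
    # Resistive line loss scales as I^2 * R, so halving the power factor
    # quadruples the loss. V_RMS and P_LOAD are illustrative values only.

    V_RMS = 480.0   # supply voltage, volts
    P_LOAD = 100e3  # real power consumed by the load, watts

    def line_current(pf: float) -> float:
        """RMS supply-line current at a given power factor."""
        return P_LOAD / (V_RMS * pf)

    for pf in (1.0, 0.9, 0.7, 0.5):
        ratio = (line_current(pf) / line_current(1.0)) ** 2
        print(f"pf {pf:.1f}: {line_current(pf):6.1f} A, I^2R line loss x{ratio:.2f}")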

The additional costs to the electric utility due to reactive power in their lines are substantial. Utilities pass these costs on to customers in their rate structure, charging industrial customers a higher rate for power supplied to loads with low power factor, giving them an incentive to increase their power factor.

Power factor correction

To improve power factor, a synchronous condenser is installed near low power factor loads, connected to the power supply line in parallel with the load. It consists of a three-phase synchronous motor whose shaft rotates freely, without a load, in synchronism with the cycles of the alternating current power. It serves as a temporary energy storage device. During the parts of each AC cycle when the load is absorbing reactive energy, the machine acts as a generator, supplying that energy from the kinetic energy stored in its rotor; when the load returns the energy to the line, the machine acts as a motor, storing the energy in the rotor again. Although the rotor slows down and speeds up slightly during each cycle, all the energy provided by the machine is returned to it during the same cycle, so the net torque on the rotor over each AC cycle is zero and (ignoring small frictional losses) the condenser consumes no net real power. The motivation for using a power factor corrector like the synchronous condenser is that the additional current due to the load's reactive power then flows only in the local power lines between the synchronous condenser and the load, instead of through the whole power grid, reducing economic costs.

The amount of reactive power supplied by the synchronous condenser is adjusted by varying the DC excitation current of the motor. One of the advantages of the synchronous condenser over other power factor correctors, such as capacitor banks, is that by varying the excitation the machine can deliver either leading or lagging reactive power, acting as either a capacitor or an inductor, so it can compensate either inductive or capacitive loads.

Example: Compensating an electric motor

For example, in a load consisting of an electric motor, the current passes through the motor's armature and field windings, which act as electromagnets; they have inductance and so store energy in their magnetic fields. During the parts of the AC cycle when the current through the windings is increasing, the cores are being magnetized and energy is being stored in the magnetic field; during the parts when the current is decreasing, the magnetic energy is returned to the power line. The back EMF due to the inductance causes the current sine wave to lag behind the voltage sine wave. Since a large part of the load on power grids consists of electric motors, electric power lines usually operate at a lagging power factor. To compensate for the lagging current, a synchronous condenser is adjusted to act as a capacitor, with the current through the machine leading the voltage. Ideally, by correctly adjusting the machine, the sum of the leading and lagging currents is a supply line current in phase with the voltage, resulting in a power factor of unity. This condition delivers the necessary real power to the loads with minimum current in the upstream power grid. In actual power grids the reactive power and power factor typically vary as loads are turned on and off, so a synchronous condenser may not achieve a power factor of exactly unity, but it does substantially improve it.

Electrical theory

From an electrical point of view, the load can be modeled as a resistance, which consumes the real power, in series with a reactance (capacitance or inductance), which creates the reactive power flow. The reactance of the load causes the current sine wave through the load to be out of phase with the voltage sine wave, either leading the voltage (in a capacitive load) or lagging behind it (in an inductive load). The current through the load can be decomposed into the sum of two quadrature sine wave components: one in phase with the voltage, which represents the real power, and one 90° out of phase, which represents the reactive power. Since (except for small frictional losses) the synchronous condenser consumes no real power, its current is all reactive, 90° out of phase with the voltage. The synchronous condenser is adjusted so the current through it is equal to the load's reactive current but of opposite phase, 180° out of phase with it. Since they are connected in parallel, the current in the power line supplying the two devices is equal to the sum of the current into the load and the current into the condenser. So the current in the condenser cancels the reactive component of the current in the load, leaving only the component of load current in phase with the voltage (representing the real power) to flow through the power supply line. The combination of the load and the condenser thus (ideally) has a power factor of 1.0.
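
This cancellation can be verified with complex phasor arithmetic (a minimal sketch; the numbers are invented, and the condenser is idealized as purely reactive):

    # Currents as complex rms phasors, with the line voltage as the reference
    # (angle 0). Real part = in-phase component (carries real power);
    # imaginary part = quadrature component (carries reactive power).

    load_current = 100 - 60j     # amperes: 100 A in phase, 60 A lagging (inductive)
    condenser_current = 0 + 60j  # amperes: excitation set for 60 A leading

    # Condenser and load are in parallel, so the supply-line current is the sum:
    line_current = load_current + condenser_current
    print(line_current)          # (100+0j): the reactive components cancel

    # Power factor seen by the grid: cos(phase angle) = Re(I) / |I|
    print(f"power factor = {line_current.real / abs(line_current):.2f}")  # 1.00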

The synchronous condenser consists of a three-phase synchronous motor, as described above, which can be adjusted to draw either a leading or a lagging current by varying its DC field excitation:

  • If the load has inductance, such as a motor, the current through the load lags behind the voltage; the peak of the current occurs after the peak voltage. A large part of the load on electric power grids consists of induction motors, which have low lagging power factors when lightly loaded, so lagging power factor is the commonest condition on grids. To cancel the lagging load current, the synchronous condenser draws a leading current, the peak of its current occurring 90° before the voltage peak, so the machine acts as a capacitor.
  • If the load has capacitance, the current through the load leads the voltage; the peak of the current occurs before the peak voltage. To cancel the leading load current, the synchronous condenser draws a lagging current, the peak of its current occurring 90° after the voltage peak.

Voltage correction

Since the voltage on a line also varies with the amount of reactive power flowing in it, synchronous condensers are also used to correct the voltage on power lines.


Telegraphy is the long-distance transmission of textual messages by alphabetic codes, rather than a physical exchange of a written message. Developed beginning in the late eighteenth century, telegraph systems were the first instant messaging systems, devised to send text messages faster than written messages could be sent by couriers or pigeon post. Although visual signaling systems that could send a limited number of predetermined messages, such as smoke signals, had existed since ancient times, these could not send arbitrary text and so are not considered to be telegraphs.

The earliest telegraph system put into widespread use was the optical telegraph of Claude Chappe, invented in the late eighteenth century. The system was extensively used in France, and in European countries controlled by France, during the Napoleonic era. The electric telegraph, the first electrical telecommunication system, started to replace the optical telegraph in the mid-nineteenth century. By the late 1800s most developed nations had built commercial electric telegraph networks with offices in most towns and villages, allowing the public to send messages called telegrams to any person in the country. The earliest radio communication system, called wireless telegraphy, transmitted information by pulses of radio waves which spelled out messages in Morse code; it was used until the 1950s. Optical telegraphy survived as the flag semaphore used by naval vessels.


Original lead:

Telegraphy is the long-distance transmission of textual messages where the sender uses symbolic codes, known to the recipient, rather than a physical exchange of an object bearing the message. Thus flag semaphore is a method of telegraphy, whereas pigeon post is not. Ancient signalling systems, although sometimes quite extensive and sophisticated as in China, were generally not capable of transmitting arbitrary text messages. Possible messages were fixed and predetermined and such systems are thus not true telegraphs.

The earliest true telegraph put into widespread use was the optical telegraph of Claude Chappe, invented in the late eighteenth century. The system was extensively used in France, and European countries controlled by France, during the Napoleonic era. The electric telegraph started to replace the optical telegraph in the mid-nineteenth century.

A cellular network is a communications network which connects mobile wireless devices to the telephone network and the Internet using local antennas. Originally created for and still primarily used by cell phones, due to increasing speed and bandwidth cellular networks now also provide connectivity for laptops, tablets, pagers and other wireless devices equipped with cellular modems. The coverage area of a cellular network is divided into small geographical areas called cells. All the wireless devices in a cell communicate by radio waves with the network through a local antenna and automated transceiver in the cell called a cell site or cell tower. When a user carries the mobile device into a new cell, the device is "handed off" electronically to the new cell's antenna and subsequently communicates with that antenna, using a new set of frequencies.

In the first mobile phone systems, before cellular, all the mobile phones in a metropolitan area would communicate with a single antenna and central office. Since each phone call required a separate frequency channel, the limited radio spectrum severely restricted the number of phone calls that could be made at one time. Cellular networks, developed in _______ by _____,

  1. ^ "The Armstrong Patent", Radio Broadcast, 1 (1), Garden City, NY: Doubleday, Page & Co.: 71–72, May 1922
  2. ^ Morse 1925, p. 55
  3. ^ Lewis 1991