Quantum field theory

November 30, 2011

Article finished

Filed under: About — Nigel Cook @ 2:06 am

Article finished at 2am on 30 Nov. (7.3 MB PDF file, 63 pages, downloadable here.) Also hosted at http://rxiv.org/pdf/1111.0111v1.pdf, the General Science Journal, and (with some extra material) as the brief mail-order paperback book, Quantum Gravity and the Standard Model, ISBN 978-1-4709-9745-8, which is being listed on Amazon for those who want to help ensure a warmer global future (by turning trees into paper).


Dr Tommaso Dorigo comments on his post Higgs Expectations:

“… in order to really prove that our understanding of electroweak symmetry breaking is flawed and that there is no Higgs boson we would need a much, much more solid evidence than a mere “95% exclusion”. I would not be satisfied with anything less than a 99.9% exclusion (over three sigma) across the full mass range.

But I do not honestly believe that we will ever get into such a situation. I do believe, in fact, that the particle is there, and that it will be found very soon! So stay tuned and place your bets if you haven’t already. Time is running short.”

We avoid the usual electroweak symmetry breaking problem by changing electromagnetism from U(1) to a massless SU(2) gauge theory (which works out correctly, yielding Maxwell’s equations from the Yang-Mills equations, because charged massless vector bosons can’t propagate asymmetrically), so that SU(2) becomes a complete electroweak theory. (This is fine for the weak bosons, while the apparent discrepancy between weak isospin charges and fractional quark electric charges disappears with a vacuum polarization model, which predicts that 1/3 or 2/3 of the electric charge energy of quarks is present as strong colour charge.) U(1) is not abandoned altogether; it is dark energy, which also predicts gravity. The mass of the SU(2) weak bosons is then produced by the Glashow-Weinberg mixing of U(1) gravity with SU(2) electromagnetism. Instead of an electroweak symmetry being broken to yield Nambu-Goldstone “Higgs” bosons, the weak interaction emerges from a simple mixing of SU(2) electromagnetism with U(1) gravity. I’ll try to get a briefer paper done, ready to replace the Higgs boson.

Update (7 December 2011):

http://www.science20.com/quantum_diaries_survivor/alejandro_rivero_fermion_mass_coincidences_and_other_fun_ideas-85187?nocache=1

“Then Koide went some steps beyond and considered quarks and leptons with substructure, so that lepton mass quotients could predict the Cabibbo angle too, even if this is a mixing between quarks.”

(M_e + M_mu + M_tau) / (sqrt(M_e) + sqrt(M_mu) + sqrt(M_tau))^2 = 2/3
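As a quick numerical sanity check of that ratio (a minimal sketch; the charged lepton masses below are approximate PDG central values in MeV):

```python
from math import sqrt

# Approximate charged lepton masses in MeV (PDG central values, assumed here).
m_e, m_mu, m_tau = 0.510998928, 105.6583715, 1776.82

# Koide ratio: (sum of masses)/(sum of square roots)^2, claimed to equal 2/3.
koide = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))**2
print(koide)   # ~0.666661, against 2/3 = 0.666667
```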

The key factor of 2/3 in the Koide relationship is the fractional electric charge of the up/charm/truth quarks, which arises from a mixing effect. It’s the 2/3 electric charge of up/charm/truth quarks that’s so interesting. The -1/3 charge of the down/strange/bottom quarks is very easily predicted by analysis of vacuum polarization for the case of the omega minus baryon (Fig. 31 in http://rxiv.org/pdf/1111.0111v1.pdf). It appears that the square root of the product of two very different masses gives rise to an intermediate mass (see http://nige.wordpress.com/2009/08/26/koide-formula-seen-from-a-different-perspective/ for the simple maths), so that the Koide relationship implies a bootstrap model of fundamental particles (akin to the bootstrap concept Geoffrey Chew was trying to develop to explain the S-matrix in the 1960s, before quarks were discovered). The square root of the product of the masses of a neutrino and a massive weak boson may give an electron mass, for instance; a rough numerical check of this speculation is sketched below. This seems to be the deeper significance of the Koide formula, from my perspective, for what it’s worth.

All fundamental particles are connected by various offshell field quanta exchanges, so their “charges” are dependent on other charges around them. This means that the ordinary approach of analysis fails, because of the reductionist fallacy. If your mathematical model of rope is the same for 100 one-foot lengths as for a single 100-foot length, it leads to customer complaints when you automatically send a sailor the former, not the latter. It’s no good patiently explaining to the sailor that mathematically they are identical, and that the universe is mathematical.

If the Koide formula is correct, then it points to an extension of the square-root nature of the Dirac equation. Dirac made the error of ignoring Maxwell’s 1861 paper on magnetic force mechanisms: the chiral handedness of magnetism (the magnetic field curls left-handed around the direction of propagation of an electron) is explained in Maxwell’s theory by the spin of “field quanta” (Maxwell had gear cogs, but in QFT it’s just the spin angular momentum of field quanta). Maxwell’s theory makes electromagnetism an SU(2) Yang-Mills theory, throwing a different light on Dirac’s spinor. It just so happens that the Yang-Mills equations automatically reduce to Maxwell’s if the field quanta are massless, because of the infinite self-inductance of electrically charged field quanta, so SU(2) Maxwellian electromagnetism in practice looks indistinguishable from Abelian U(1), explaining the delusions in modern physics.
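A rough check of that geometric-mean speculation (a sketch only; the relation is this blog’s conjecture, not an established result, and the masses are approximate):

```python
# The blog's conjecture: m_e = sqrt(m_nu * m_W), i.e. the electron mass as the
# geometric mean of a neutrino mass and the W boson mass. Solving for the
# neutrino mass this would require (masses in eV, approximate values):
m_e_eV = 0.511e6    # electron mass
m_W_eV = 80.4e9     # W boson mass
m_nu_implied = m_e_eV**2 / m_W_eV
print(m_nu_implied)  # ~3.2 eV
```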

The very interesting results Alejandro Rivero gives are from equation 4 on page 3 of his paper http://www.vixra.org/abs/1111.0062, which solves the Koide formula by writing one lepton mass in terms of the masses of the other two generations. Koide’s formula also implies (my 2009 post):

Me + Mm + Mt = 4 * [(Me * Mm)^(1/2) + (Me * Mt)^(1/2) + (Mm * Mt)^(1/2)]

where Me = electron mass, Mm = muon mass, Mt = tauon mass. I.e., the simple sum of the lepton masses equals four times the sum of the square roots of the products of all pairings of the masses (a numerical check, and the solution of Koide’s formula for the tauon mass, are sketched below), making it seem that if Koide’s formula is physically meaningful, then Geoffrey Chew’s bootstrap theory of particle democracy must apply to masses (gravitational charge) in 4-d. At high energy, early in the universe, tauons, muons and electrons were all represented; we only see an excess of electrons today because the other generations have decayed, although some of the other masses may actually exist as dark matter, and thus still undergo the interaction of graviton exchange, which determines the Koide mass spectrum today (this dark matter is analogous to right-handed neutrinos). The basic physics of the Koide formula seems to be the Chew bootstrap applied to gravitation (Chew applied it to the strong force, pre-QCD):
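Here is that check, together with the familiar exercise of treating Koide’s relation as exact and solving for the tauon mass from the electron and muon masses (a minimal sketch; the masses are approximate PDG values in MeV):

```python
from math import sqrt

m_e, m_mu, m_tau = 0.510998928, 105.6583715, 1776.82   # MeV, approximate

# Check the pairwise-products identity with the measured tauon mass:
lhs = m_e + m_mu + m_tau
rhs = 4 * (sqrt(m_e*m_mu) + sqrt(m_e*m_tau) + sqrt(m_mu*m_tau))
print(lhs, rhs)   # ~1883.0 vs ~1883.1 (equal only if the Koide ratio is exactly 2/3)

# Treat Koide's relation as exact and solve the quadratic in x = sqrt(m_tau):
#   x**2 - 4*s*x + 3*(m_e + m_mu) - 2*s**2 = 0,  with s = sqrt(m_e) + sqrt(m_mu)
s = sqrt(m_e) + sqrt(m_mu)
x = 2*s + sqrt(6*s**2 - 3*(m_e + m_mu))   # larger root of the quadratic
print(x**2)       # ~1776.97 MeV, close to the measured tauon mass
```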

“By the end of the 1950s, [Geoffrey] Chew was calling this [analytic development of Heisenberg’s empirical scattering or S-matrix] the bootstrap philosophy. Because of analyticity, each particle’s interactions with all others would somehow determine its own basic properties and … the whole theory would somehow ‘pull itself up by its own bootstraps’.” – Peter Woit, Not Even Wrong, Jonathan Cape, London, 2006, p148. (Emphasis added.)

The S-matrix went out when the SM was developed (although S-matrix results were used to help determine the Feynman rules), but at some stage a Chew-type bootstrap mechanism for Koide’s mass formula may be needed to further develop a physical understanding for the underlying theory of mass mixing, leading to a full theory of mixing angles for both gravitation (mass) and weak SU(2) interactions of leptons and quarks.

“… publishing a groundbreaking idea in peer-reviewed journals can be nearly impossible.”

– Louise Riofrio

Before you can get past peer review, you must convince the “peers” to listen, which is impossible if they believe in an “alternative” which has no evidence to support it (you can’t discredit something that’s not scientific to begin with):

“Scepticism is … directed against the view of the opposition and against minor ramifications of one’s own basic ideas, never against the basic ideas themselves. Attacking the basic ideas evokes taboo reactions … scientists only rarely solve their problems, they make lots of mistakes … one collects ‘facts’ and prejudices, one discusses the matter, and one finally votes. But while a democracy makes some effort to explain the process so that everyone can understand it, scientists either conceal it, or bend it … No scientist will admit that voting plays a role in his subject. Facts, logic, and methodology alone decide – this is what the fairy-tale tells us. … This is how scientists have deceived themselves and everyone else … It is the vote of everyone concerned that decides fundamental issues … and not the authority of big-shots hiding behind a non-existing methodology. … Science itself uses the method of ballot, discussion, vote, though without a clear grasp of its mechanism, and in a heavily biased way.”

– Professor Paul Feyerabend, “Against Method”, 1975, final chapter.

“The notion that a scientific idea cannot be considered intellectually respectable until it has first appeared in a ‘peer’ reviewed journal did not become widespread until after World War II. Copernicus’s heliocentric system, Galileo’s mechanics, Newton’s grand synthesis – these ideas never appeared first in journal articles. They appeared first in books, reviewed prior to publication only by their authors, or by their authors’ friends. … Darwinism indeed first appeared in a journal, but one under the control of Darwin’s friends. … the refereeing process works primarily to enforce orthodoxy. … ‘peer’ review is NOT peer review.”

– Professor Frank J. Tipler, Refereed Journals: Do They Insure Quality or Enforce Orthodoxy?

In 2006, the bestsellers “Not Even Wrong” by Peter Woit and “The Trouble with Physics” by Lee Smolin were published, showing that superstring theory has become a dogmatic consensus, like epicycles being “defended” by less-than-objective methods. Right on cue, the world’s greatest genius behind M-theory, Ed Witten, happened to write a letter to Nature (v. 444, p. 265, 16 November 2006), headlined:

Answering critics can add fuel to controversy.

“SIR — Your Editorial “To build bridges, or to burn them” and News Feature “In the name of nature” raise important points about criticism of science and how scientists should best respond (Nature 443, 481 and 498–501; 2006). The News Feature concerns radical environmentalists and animal-rights activists, but the problem covers a wider area, often involving more enlightened criticism of science from outside the scientific establishment and even, sometimes, from within.

“The critics feel … that their viewpoints have been unfairly neglected by the establishment. … They bring into the public arena technical claims that few can properly evaluate. … We all know examples from our own fields … Responding to this kind of criticism can be very difficult. It is hard to answer unfair charges of élitism without sounding élitist to non-experts. A direct response may just add fuel to controversies. Critics, who are often prepared to devote immense energies to their efforts, can thrive on the resulting ‘he said, she said’ situation. [Critics must never be permitted to thrive.]

“Scientists in this type of situation would do well to heed the advice in Nature’s Editorial. Keep doing what you are doing. And when you have the chance, try to patiently explain why what you are doing is interesting and exciting, and may even be useful one day.

“Edward Witten
Institute for Advanced Study, Einstein Drive,
Princeton, New Jersey 08540, USA.”

The next letter on that Nature page (from genetics engineer Boris Striepen) stated: “How and why did our public image change from harmless geeks to state- and industry-sponsored evil-doers worthy to be a target? More importantly, what do we do about it? And how do we communicate more effectively what we are doing, why we are doing it and what the opportunities and challenges of modern science are?”

Answer:

“Centralization of information and decision-making at the top has been destructive to most organizations. The Greeks had a word for the notion that the best decisions can only be made on the basis of the fullest information at the highest level. They called it hubris. In a living scientific organization, decisions must be pushed down to the lowest level at which they can be sensibly made. … Leadership would be decentralized throughout, not concentrated at the top. … It would also facilitate the downward transmission of goals, the only things that can be usefully passed down from above, and make room for the upward transmission of results, which should be the basis for reward. It should be obvious that this structure need not be imposed from above. There is no reason to await a decision from the top to do so. Everyone in the chain has the flexibility to organize his own life and thereby to decide whether he is to be a manager or a leader.”

– Gregory H. Canavan, The Leadership of Philosopher Kings, Los Alamos National Laboratory, report LA-12198-MS, December 1992.

Above: in the 1970s, state control planned to nationalize everything and control everything from the top, including scientific research and production. This was opposed by campaigns like “Beware of the Elephant” (this advert is from The Guardian, 9 Aug 1974, p. 5), which warned of the dangers of state control. Stalin admitted in his own book, Economic Problems of Socialism in the USSR, that the basic laws of nature are the same in free capitalist countries and socialist dictatorships, leading to stagnation, hubris, corruption, and other symptoms from the bloated, short-sighted elephant of state control unless the leadership is continuously fighting wars or innovating. (Stalin pressed forward with nuclear power and space rockets, and public criticisms were tempered; the bankruptcy of the USSR in the 80s, when Reagan and others set up Star Wars/SDI and W-79 neutron bombs to negate the Soviet SS-20s and Warsaw Pact tank superiority, effectively ended the USSR dream of world domination, so criticism of the regime’s short-sighted hubris became harder to censor out and dissent became more openly fashionable.) Hubris also has tragic consequences for science (e.g. Lysenkoism in Stalin’s time, or eugenics in Hitler’s), just as it does for political economy, as is now being well demonstrated by the socialist-era debt problems of Greece and other Eurozone economies. But our point concerns the destruction of science by this same mechanism of short-sighted dictatorship by the media-loved band of “mainstream” superstringers, who don’t have a falsifiable theory or even address the fundamental data that need explaining. Other analogies abound in the political-expedience limbo of Health Physics nuclear quackery, and in CO2-rich hot air.

Ex-NASA climatologist Dr Roy Spencer ends his latest Climategate 2.0 blog post:

“But when only one hypothesis is allowed as the explanation for climate change (e.g. “the science is settled”), the bias becomes so thick and acrid that everyone can smell the stench. Everyone except the IPCC leadership, that is.”

Like the Emperor’s New Clothes, when the word goes around that the leadership is faulty, nobody dares overthrow the leader, or they bungle it. It’s precisely like the situation of Stalin or Hitler, who got to the top by having a private army of bodyguards and propaganda chiefs, so that people like Delingpole can be pushed down by Dr Goebbels, aka the BBC’s biased “elite documentary maker,” Sir-Lord-God-Nobel Haw Haw of the Regal Society of Pseudoscientific Quacks, dedicated to the “laudable” politically-correct challenge of diverting our limited funds, in a time of austerity, from saving human lives in drought- and famine-hit areas of humanity to lining the pockets of swindling carbon credit traders. The claim that democracy would allow the people to overthrow a scientific dictatorship of quacks funded by political expediency is laughable, and is well disproved by all examples of scientific corruption in history, from the injection of false Aristotelian physics into medieval Christianity by Thomas Aquinas, to 11-dimensional superstring M-“theory” (which contains no theory, merely a vacuous framework in which 10^500 different metastable vacuum states can sit, all of which contain the same faulty spin-2 graviton framework assumption).

Everybody can smell the stench from this piece of vile pseudophysics, with its Gestapo response to critics, its abuse of the peer-review system for censorship of criticisms, and its patently false “greenhouse” assumption, which relies on the implicit assumption of an invisible, non-existent glass ceiling to prevent water vapour from becoming cloud cover. The liars of the mainstream lyingly call critics “climate change deniers”, when climate change is natural: the argument is about whether the earth is a “greenhouse” that is super-sensitive to CO2 injections or not; the case for NOT is that the earth is NOT a greenhouse. If the earth were a greenhouse, there would be no oceans (71% of surface area) and no cloud cover varying in direct response to CO2. In fact, if you pump in CO2, you increase cloud cover, which reflects more sunlight back into space, keeping the surface cool. This is negative feedback, totally ignored by all IPCC models, which make the same collective politically-correct mistake of assuming that the greenhouse effect is true (where IR-absorbing water vapour is unable to form clouds, and so has only a politically-correct positive feedback). In a greenhouse, water vapour is prevented from rising to form cloud cover that cools the greenhouse, because of the implicit glass ceiling (i.e. the falsely assumed lack of buoyancy of sunshine IR-warmed moist ocean-surface water vapour).

Dr Roy Spencer, http://www.drroyspencer.com/2011/11/climategate-2-0-bias-in-scientific-research/:

“In the case of global warming research, the alternative (non-consensus) hypothesis that some or most of the climate change we have observed is natural is the one that the IPCC must avoid at all cost. This is why the Hockey Stick was so prized: it was hailed as evidence that humans, not Nature, rule over climate change. [Actually the climate is always varying so there is 50% chance of rising temperatures, 50% of falling temperatures. This reduces the statistical value of correlations of CO2 and temperature when you take account of the fact that there is a 50% chance of a spurious, coincidental correlation.]

“The Climategate 2.0 e-mails show how entrenched this bias has become among the handful of scientists who have been the most willing participants and supporters of The Cause. These scientists only rose to the top because they were willing to actively promote the IPCC’s message with their particular fields of research.

“Unfortunately, there is no way to “fix” the IPCC, and there never was. The reason is that its formation over 20 years ago was to support political and energy policy goals, not to search for scientific truth. I know this not only because one of the first IPCC directors told me so, but also because it is the way the IPCC leadership behaves. If you disagree with their interpretation of climate change, you are left out of the IPCC process. They ignore or fight against any evidence which does not support their policy-driven mission, even to the point of pressuring scientific journals not to publish papers which might hurt the IPCC’s efforts.

“I believe that most of the hundreds of scientists supporting the IPCC’s efforts are just playing along, assured of continued funding. In my experience, they are either: (1) true believers in The Cause; (2) think we need to get away from using fossil fuels anyway; or (3) rationalize their involvement based upon the non-zero chance of catastrophic climate change.”


See investigative journalist James Delingpole refuting all of the AGW quacks in the video linked here: http://www.dailymotion.com/embed/video/xlbqfl

Michael Mann’s hockey stick curve was faked to show constant temperature until CO2 began rising. IPCC/NASA gurus on the Horizon BBC2 “Science under Attack” propaganda film claimed that humanity emits 7 times more CO2 than nature, when in fact natural sources emit roughly 27 times more (even the IPCC 4th assessment report lists, in its un-hyped small print, humanity’s emission as 29 Gt of CO2 from all fossil fuels etc., compared to 771 Gt from all natural land and ocean emissions). Human output is well within the natural fluctuations of CO2, and the scare-propaganda relies entirely on censoring out the evidence of natural variability by tricks like switching temperature proxies at 1960 and 1980 so as to produce a hockey stick curve.

Before 1960 they used tree rings as the major proxy, which is false because tree growth is sensitive to cloud cover and rainfall, not particularly to CO2 levels. From 1960-80 they used temperature station records near expanding “heat islands” like industrial factories and cities. After 1980 they used satellites, which can’t tell the temperature under the cloud cover, where all the negative feedback from cloud cover actually occurs. No prizes for guessing that the satellite “temperature data” didn’t properly include negative feedback from the extra cloud cover resulting from the extra evaporation of water due to rising CO2. They’re complete fanatics, who don’t donate a single brain cell to objectivity, let alone half a brain!

“… [Dr Andy] Dessler has … used models which DO NOT ALLOW cloud changes to affect temperature, in order to support his case that cloud changes do not affect temperature!”

– Dr Roy Spencer, ex-NASA climatologist, http://www.drroyspencer.com/2011/09/the-good-the-bad-and-the-ugly-my-initial-comments-on-the-new-dessler-2011-study/

This quotation is the smoking gun: Dr Roy Spencer’s latest paper was shot down by peer-review, then he was contacted by a “critic” whose paper is in proof, and who is claiming that cloud cover doesn’t have negative feedback (i.e. doesn’t cancel out CO2 injection effects on climate, the entire AGW scam) simply because the mainstream model doesn’t include cloud cover. If ever there was a circular argument, this is it. It’s a groupthink “ends justify the means” delusion, where they think they can safely suppress the facts because “making the environment cleaner” is an unassailable objective, never mind the diversion of funds from lifesaving charities into carbon trading scams. Stalin didn’t personally murder 40 million in the collectivization of farming in the 30s; instead, like Hitler, he deluded himself with false “science” into believing that it was well-intentioned. The biggest danger is “well-intentioned pseudoscientific dogma”: the “safe” belief that it was a necessary step on the road to global communist utopia; likewise Hitler gassed 6 million, “safe” in his eugenics pseudoscience belief that he was ethnically “cleansing” humanity genetically. Yeah, right. The road to hell is paved with good intentions. Nobody will ever get through to people like Al Gore; they’re all completely deluded and have invested all they have in a pseudoscientific, politically-expedient belief system which devalues objectivity.

Between 8000 and 7000 years ago, sea levels rose 11.5 metres (1150 cm), or 1.15 cm/year, without killing life on earth. The current rate of rise is 0.2 to 0.4 cm/year, depending on which measurements you use. Sea levels were 120 metres lower some 18,000 years ago, at the height of the last ice age. 450 million years ago, sea levels were 400 metres higher than today. That’s natural variability for you. Those who try to artificially keep nature in a status quo don’t understand that no such status quo exists. Change is the basis for everything. There is no balance of nature, and no natural stability other than negative feedback from cloud cover, which cancels out CO2. The ecofascists have no baseline marker to call “natural”, because the world is ever changing.


Nothing gained in search for ‘theory of everything’
By Dr Robert Matthews
Financial Times, London. Published: June 2 2006 19:45

“They call their leader The Pope, insist theirs is the only path to enlightenment and attract a steady stream of young acolytes to their cause. A crackpot religious cult? No, something far scarier: a scientific community that has completely lost touch with reality and is robbing us of some of our most brilliant minds.

“Yet if you listened to its cheerleaders – or read one of their best-selling books or watched their television mini-series – you, too, might fall under their spell. You, too, might come to believe they really are close to revealing the ultimate universal truths, in the form of a set of equations describing the cosmos and everything in it. Or, as they modestly put it, a “theory of everything”.

“This is not a truth universally acknowledged. For years there has been concern within the rest of the scientific community that the quest for the theory of everything is an exercise in self-delusion. This is based on the simple fact that, in spite of decades of effort, the quest has failed to produce a single testable prediction, let alone one that has been confirmed. …

“Most theorists pay at least lip-service to falsifiability, popularised by the philosopher Karl Popper, according to which scientific ideas must open themselves up to being proved wrong. Yet those involved in the quest for the theory of everything believe themselves immune from such crass demands. Mr Woit quotes a superstring theorist [Lenny Susskind] dismissing the demand for falsifiability as “pontification by the ‘Popperazi’ about what is and what is not science”. …

“Coming from a community that refers to Prof Witten as The Pope this is a bit rich. But it also suggests the whole field is now propped up solely by faith. Woit provides plenty of evidence for this: the insistence of M-theorists that in the quest for ultimate answers, theirs is “the only game in town”; the lectures with titles such as The Power and the Glory of String Theory; the cultivation of the media to ensure wide-eyed coverage of every supposed “revelation”. …

“But why should the rest of us care? The reason is simple: the quest for the theory of everything has soaked up vast amounts of intellectual effort and resources at a time when they are desperately needed elsewhere. … the huge intellectual effort needed to enter the field compelling them to plough on regardless of the prospects of success. It is time they were put out of their misery by being told to either give up or find funding from elsewhere (charities supporting faith-based pursuits have been suggested as one alternative).

“Academic institutions find it hard enough to fund fields with records of solid achievement. After 20-odd years, they are surely justified in pulling the plug on one that has disappeared up its Calabi-Yau manifold.”

The writer is visiting reader in science at Aston University, Birmingham

November 13, 2011

The cross-section for graviton scatter, scaled by Feynman’s rules from the weak interaction

Filed under: About — Nigel Cook @ 11:08 pm


28 November 2011 update: the edited PDF of the first 48 pages (excluding the references) is linked here (6 MB download, PDF file). The full paper is nearing completion and should be uploaded soon, after the reference list and final proof-reading have been done. Although this is an extension of previously published research from 1996, with updates from this blog and others, the paper is not just a summary of previously published material, but a completely fresh approach.

There are pedagogy and presentation problems: “Darwin’s theory of evolution is disproved because Lamarck had an evolution theory before Darwin, which was wrong.” (Joseph McCarthy’s “guilt by false association” applies to LeSage’s gravity mechanism. If one person gets something wrong, nobody else is ever allowed to correct the errors in it. Darwin was only able to proceed by pretending that Lamarck hadn’t existed. Science is not a logical system where errors get corrected. It’s a political process whereby theories are pre-judged in their incorrect nascent state, then dismissed for ever when found incorrect. If someone else later corrects all the errors, that person is wrong by “association”, much like friends of people who turned out to have been student communist party members were guilty of being Stalin’s friends in McCarthy’s eyes. This story is of course usually turned around to a very different conclusion: the fact that McCarthy was wrong in shooting down everyone who had ever heard of Marx was used to try to “defend” Stalin’s evil, a kind of one-upmanship or reversal of McCarthy’s trick. Anyone criticising Stalin was then compared to McCarthy, and their message went unheeded. Science is more political than normal politics, because it pretends that there is no political element and uses this deception to “disprove” the need for democratic debate, etc. Science is the worst sort of politics, the sort which pretends it’s always justified by good intentions, no matter the consequences, exactly like Stalinism and Nazism, but Godwin help you if you say it.)

September 18, 2011

Cos(S) to replace exp(iS) to overcome Haag’s objection to the QFT interaction picture…

Filed under: About — Nigel Cook @ 12:26 am

Cos(S) is to replace exp(iS), to overcome Haag’s objection to the QFT interaction picture. Since the resultant phase sum (path integral) over all interactions follows the path of least action, S -> 0, we have exp(i*0) = 1, which has a direction on the real plane of the Argand diagram; so we don’t “lose physically real solutions” by using the real “component” cos(S) to totally replace the hardened orthodoxy of exp(iS).
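A toy numerical sketch of the point (the action values are made up, purely for illustration): by Euler’s formula, summing the real components cos S over paths reproduces exactly the real part of the full complex phasor sum, which is the identity this argument leans on.

```python
import cmath, math, random

# Toy "path integral": sum unit phasors exp(i*S) over paths whose actions S
# (in units of hbar) cluster around the least-action value S = 0.
random.seed(1)
actions = [random.gauss(0.0, 1.0)**2 for _ in range(10000)]

complex_sum = sum(cmath.exp(1j*S) for S in actions)
real_only_sum = sum(math.cos(S) for S in actions)

# By Euler's formula, exp(iS) = cos S + i sin S, so summing cos S alone
# reproduces the real component of the full complex sum exactly:
print(complex_sum.real, real_only_sum)   # identical up to float rounding
```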

Interesting and funny (maybe) quotation about the criticisms innovators receive, and the relatively poor backing from “independent referees” contracted to “sort out” a dispute, from a 1987 interview with magnetic dipole EMP discoverer Dr Conrad Longmire, who died in 2010:

Longmire:

Nothing that I was involved in. They were, DNA did hire them way back in the 1970s, early seventies, there was a fellow at the RAND Corporation, this is after the RDA physics group left. His name was Cullen Crane, who, I don’t know if you’ve ever heard of him—well, anyway, this fellow was saying that EMP is a hoax. These guys are either crazy or they’re doing it to, you know, perpetuate their salaries. And so the Jason group got tasked by DNA to look into this. Now, in this case, in my opinion, the Jason group didn’t do a very good job, because instead of reading the reports and trying to settle the argument, they started out from scratch and first did their own version of EMP, and at least, I didn’t think that was necessary at the time. But I don’t know, it might have been useful to DNA.

Aaserud:

Of course it’s more interesting to do one’s own work.

Longmire:

Yes, right. Also, I might say, if they have any faults at all, one of them is that they’re not very good as historians. They do not, you know, when they begin to look into something, they don’t go back and make sure that they’ve read all the earlier references and stuff like that. But you don’t expect physicists to be your formally good historians.

Longmire was spot on. They don’t know history because they don’t care about history too much, thinking physics a separate subject from boring old history. Which is why they keep making the same mistakes as foolish predecessors, by using “gut instinct/intuition” to dismiss new ideas which contradict existing interpretations, in place of unbiased analysis of all the options. Intuition is useful for objective and constructive work, but is dismally stupid when used to “justify” ignoring a new idea which is having a hard time just because it is new. Intuition is easily confused with herd instincts. I’m going to include a concluding “crying about spilt milk” section in my paper on what Newton could and should have done with Fatio’s gravity mechanism circa 1690 A.D., when Newton could have predicted the acceleration of the universe by applying his 2nd and 3rd laws of motion, plus other Newtonian physics insights, to improve and rigorously evaluate the gravity mechanism (if he knew G, which of course he didn’t really know or even name, since he used Euclidean-type geometric analysis to prove everything in Principia; that symbol came from Laplace long afterwards). Of course, we’re still stuck in a historical loop where any mention of the facts is dismissed by saying Maxwell and Kelvin disproved a gravity mechanism by proving that onshell matter like gas would slow down planets and heat them up, etc. Clearly this is not applicable to experimentally validated Casimir off-shell bosonic radiations, for example, and in any case quantum field theory’s well-validated interaction picture version of quantum mechanics (with wavefunctions for paths having amplitudes exp(iS), representing different interaction paths) suggests that fundamental interactions are mediated by off-shell field quanta.

The Maxwell/Kelvin and other “disproofs” of graviton exchange are wrong because they implicitly assume gravitons are onshell, an assumption which, if true, would also destroy other theories. It’s not true. E.g., the Casimir zero-point electromagnetic radiation which pushes metal plates together does not cause the earth to slow down in its orbit or speed up.

The use of a disproved and fatally flawed classical “no-go” theorem to “disprove” a new theory is exactly what holds up physics for centuries. E.g., Rutherford objected at first to Bohr’s atom on the basis that the electron orbiting the nucleus would have centripetal acceleration, causing it to radiate continuously and disappear within a fraction of a second. We now know that the electron doesn’t have that kind of classical Coulomb-law attraction to the nucleus, because the field isn’t classical but is quantum, i.e. discrete field quanta interactions occur. This is validated by “quantum tunnelling”, where you can statistically get a particle to pass through a classically-forbidden “Coulomb barrier” by chance: instead of a constant “barrier” there is a stream of randomly timed field quanta (like bullets in this respect) and there is always some chance of getting through by fluke. You don’t need to have a more fancy explanation than that, because the available mathematics (which gets into trouble with Haag’s theorem) doesn’t prove a more fancy explanation. The simplest theory which fits the experimental facts is adequate and preferable to everyone sensible.
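A toy Monte Carlo sketch of that “randomly timed field quanta” heuristic (purely illustrative; the rate and crossing time are made-up parameters, and this is not a real tunnelling calculation): a particle slips through whenever the random stream happens to leave a long enough gap, and the chance of that is exp(-rate*tau).

```python
import math, random

# Toy Monte Carlo of the "randomly timed field quanta" picture. Quanta arrive
# as a Poisson stream with rate `rate`; a particle takes time `tau` to cross
# the barrier, and slips through only if no quantum arrives during its
# crossing window.
random.seed(42)
rate, tau, trials = 2.0, 1.5, 100_000

escapes = sum(1 for _ in range(trials)
              if random.expovariate(rate) > tau)   # gap to next quantum > tau
print(escapes / trials)       # Monte Carlo estimate of the escape chance
print(math.exp(-rate * tau))  # exact Poisson gap probability, ~0.0498
```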

[Path integrals using a real-only amplitude, cos(S), in place of the complex exp(iS) are also a topic of my paper. The exp(iS) factor comes from Schroedinger’s time-dependent equation, which contains i, the complex number, because Schroedinger had read the idea in Weyl’s paper on a gauge theory of quantum gravity, which had been inspired by Hilbert’s and Einstein’s Lagrangian for general relativity. London showed that Weyl’s complex exponential phase factor can be applied to atoms directly, but Schroedinger had already taken the idea to mind. The “stationary” states of an electron are then the real solutions to an equation that also contains a complex conjugate. E.g., exp(iS) = cos(S) + i*sin(S) (Euler’s equation) gives periodic, real, discrete solutions, exp(i*0) = 1 for instance, which is useful for modelling discrete energy levels in the atom. However, it’s just a model. Does the electron exist only in “imaginary space” on an Argand diagram when it jumps between states? I doubt it. The problem is severe because Bell’s theorem (used with experiments to “discredit” hidden variables in QFT, and thus to “credit” ESP-fairy entanglement “interpretations” instead) is based on 1st quantization Schroedinger wavefunction analysis as a foundational assumption. If you drop the complex plane, you don’t lose an angle on an Argand diagram, because no such angle exists; the real-world resultant arrow is the path of least action, i.e. S = ZERO, and exp(i*0) = 1, so the least action “sum of histories” resultant arrow direction is on the real plane. The imaginary plane is not just imaginary but unnecessary, because replacing exp(iS) with Euler’s real component of it, cos(S), does all the work we need it to do in the real physics of the path integral (see Feynman’s 1985 book “QED” for this physics done with arrows on graphs, without any equations): all you’re calculating from path integrals are scalars for least action magnitudes (resultant arrow lengths, not resultant arrow directions, since, as said, the resultant arrow direction is horizontal, in the real plane; or, you don’t get a cross-section of 10i barns!). As Feynman says, Schroedinger’s equation came from the mind of Schroedinger (actually due to Weyl’s idea), not from experiment.

Why not replace exp(iS) with cos(S) for phase amplitudes? It gets rid of complex Fock and Hilbert spaces and of Haag’s interaction picture problem, which is due to renormalization problems in this complex space (it hopefully also gets rid of arrogant, deluded “mathematicians” who don’t know physics but are good at PR), and it makes path integrals simple and understandable!

Some additional amplifying comments about the post above:

When using exp(iS) you are, in effect, adding a series of unit-length arrows with variable directions on an Argand diagram to form the path integral. This gives, as stated, two apparent resultant arrow properties: direction and length. A mainstream QFT mathematician’s way of thinking on this is therefore that this must be a vector in complex space, with direction and magnitude. But it’s not physically a vector, because the path integral must always have its DIRECTION on the real plane, due to the physical principle that the path integral follows the direction of the path of least action.

The confusion of the mainstream QFT mathematician is to confuse a vector with a scalar here. A “vector” which always has the same direction is physically equivalent to a scalar. You can plot, for example, a “two dimensional” graph of the money in your bank balance versus time: the line will be a zig-zag as withdrawals and deposits occur discretely, and you can draw a resultant arrow between starting balance and final balance, and the arrow will appear to be a vector. However, in practice it is adequate to treat money as a scalar, not a vector. Believing that the universe is intrinsically mathematical in a complicated way is not a good way to learn about nature; it is biased.

Instead of having arrows of unit length and varying direction due to a complex phase factor exp(iS), we have a real-world phase factor of cos(S), where each contribution (path) in the path integral (sum of paths) has fixed direction but variable length. This makes it a scalar, removing Fock space and Hilbert space, and reducing physics to the simplicity of a real path integral analogous to the random (Monte Carlo) statistical summing of Brownian motion impacts, or better, to the long-wave 1950s and 1960s radio multipath (sky wave) interference.

For long distance radio prior to satellites, long wavelength (relatively low frequency, i.e. below UHF) was used so that radio waves would be reflected back by “the” ionosphere tens of kilometres up, overcoming the blocking by the earth’s curvature and other obstructions like mountain ranges. The problem was that there was no single ionosphere, but a series of conductive layers (formed by different ions at different altitudes) which would vary according to the earth’s rotation as the ionization at high altitudes was affected by UV and other solar radiations.

So you got “multipath interference”, with some of the radio waves from the transmitter antenna being reflected by different layers of the ionosphere and arriving at the receiver antenna having travelled paths of differing length. E.g., a sky wave reflected by a conducting ion layer 100 km up travels a longer path than one reflected by a layer only 50 km up (see the sketch below). The two sky waves received together by the receiver antenna are thus out of phase to some extent, because the velocity of radio waves is effectively constant (there is a slight effect of air density, which slows down light, but this is a trivial variable in comparison to the height of the ionosphere).
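A sketch of the arithmetic (the distance, heights and frequency are illustrative assumptions, using a flat-earth mirror approximation for the reflection geometry):

```python
import cmath, math

# Illustrative assumptions: single-hop sky waves over 1000 km ground distance,
# mirror-reflected at 50 km and 100 km altitude, at 3 MHz (wavelength ~100 m).
d, h1, h2, wavelength = 1000e3, 50e3, 100e3, 100.0   # metres

def path_length(h):
    # Flat-earth mirror approximation for a single hop reflected at height h.
    return 2 * math.sqrt((d/2)**2 + h**2)

delta = path_length(h2) - path_length(h1)   # extra distance, ~15 km here
phase = 2 * math.pi * delta / wavelength    # relative phase at the receiver
resultant = abs(1 + cmath.exp(1j*phase))    # two equal-amplitude sky waves
print(delta, resultant)   # resultant runs from 0 (cancellation) to 2 (reinforcement)
```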

So what you have is a “path integral” in which “multipath interference” causes bad reception under some conditions. This is a good starting point for checking what happens in the “double-slit experiment”. Suppose, for example, you have two radio waves received out of phase. What happens to the “photon”? Does “energy conservation” cease to hold? No. We know the answer: the field goes from being observable (i.e. onshell) to being offshell and invisible, but still there. It’s hidden from view unless you do the Aharonov–Bohm experiment, which proves that Maxwell’s equations in their vector calculus form are misleading (Maxwell ignores “cancelled” field energy due to superimposed fields of different direction or sign, which still exists in offshell energy form, a hidden field).

Notice here that a radio wave is a very good analogy, because the “phase vectors” aren’t “hidden variables” but measurable electric and magnetic fields. The wavefunction, Psi, is therefore not a “hidden variable” with radio waves, but is, say, the electric field E measured in volts/metre, and the energy density of the field (joules/m^3) is proportional to its square, “just as in the Born interpretation for quantum mechanics”. Is this just an “analogy”, or is it the deep reality of the whole of QFT? Also, notice that radio waves appear to be “classical”, but are they on-shell or off-shell? They are sometimes observable (when not cancelled in phase by another radio wave), but they can be “invisible” (yet still exist in the vacuum as energy, and thus gravitational charge) when their fields are superimposed with other out-of-phase fields. In particular, the photon of light is supposed to be onshell, but the electromagnetic fields “within it” are supposedly (according to QED, where all EM fields are mediated by virtual photons) propagated by off-shell photons. So the full picture is this: every charge in the universe is exchanging offshell radiations with every other charge, and these offshell photons constitute the basic fields making up “onshell” photons. An “onshell” (observable) photon must then be a discontinuity in the normal exchange of offshell field photons. For example, take a situation where two electrons are initially “static” relative to one another. If one then accelerates, it disrupts the established steady-state equilibrium of exchange of virtual photons, and this disruption is a discontinuity which is conventionally interpreted as a “real” or “onshell” photon.
