Friday, April 29, 2022

How absolutely accurate is the principle of relativity? A philosophical versus an experimentalist approach. The virtual-photon-mass hypothesis of the red shift.

The recent experiment with direct detection of gravitational waves showed that they arrived 1.7 seconds before the gamma-ray burst from the neutron-star merger [1]. While this is a tiny deviation, it poses a question about the speed of light: is it really not c (the Lorentz speed from special relativity) but a tiny bit below it? And how absolute is the principle of relativity?
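If the whole 1.7-second delay is attributed to light being slightly slower than the gravitational waves, the implied fractional deviation is easy to estimate. Here is a minimal sketch in Python, assuming a source distance of roughly 130 million light years (about 40 Mpc, the published estimate for that merger):

# Rough bound on the fractional difference between the speed of light
# and the speed of gravitational waves from the 1.7 s delay.
SECONDS_PER_YEAR = 3.156e7
distance_ly = 1.3e8                              # assumed source distance, light years
travel_time_s = distance_ly * SECONDS_PER_YEAR   # light-travel time, seconds
delay_s = 1.7

fractional_deviation = delay_s / travel_time_s
print(f"dv/c <= {fractional_deviation:.1e}")     # about 4e-16

So even if the entire delay is a speed effect, the deviation from the Lorentz speed is at most a few parts in 10^16.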

While there were many attempts to overcome the principle of relativity (all of them failed), they all came from the perspective of theoretical re-interpretation of the Michelson-Morley experiment and futile attempts to save the theory of the classical ether. Here is an attempt to evaluate the principle from the point of view of an experimentalist, not a theoretician.

What is any physical law from the point of view of a modern theoretician? Most frequently it is the mathematical law behind it, which from the philosophical point of view holds supreme, that is, is absolute in the mathematical sense. Indeed, the energy conservation law means no energy may be created or lost: an absolute mathematical zero. The principle of relativity means that the speed of light in vacuum is exactly the Lorentz speed, with absolute mathematical accuracy. In the General Theory of Relativity the constant G is absolute and may not deviate anywhere; it is a completely fixed constant. Any such idea makes it possible to express the law in a quite simple mathematical form. The mathematical equations are written, and new phenomena are predicted.

But from the point of view of an experimentalist, any physical law is just a fit to the existing experimental data and nothing more. It is obligatorily approximate and by no means absolute: one day it will be rejected and become a simplified approximation of a more general law. And physics seems to confirm this point of view: Newton's laws were considered absolute but finally became just limiting cases of quantum mechanics or the theory of relativity, depending on the scale of research. The same is true of any physical law: it is just a fit to the existing experimental data, and nothing philosophically absolute stands behind it. All laws are temporary, waiting to be revoked and replaced by more general laws, which in their turn are to be replaced by even more general laws, and so on. However, some of these fits are enormously good! The energy conservation law is one example. Another example is the principle of relativity and the postulate that the speed of light in vacuum is exactly the Lorentz speed. That particular law is such an accurate fit to the existing data that many consider it absolute. But that may change soon. Of course, even if it is not absolutely correct it is very close to absolute, and any deviation can only be very (enormously) small and revealed over very (enormously) large distances, as in space.

The idea of the non-absoluteness of the speed of light in vacuum originates mainly from quantum mechanics, and especially from the idea of the quantum vacuum. For example, due to the presence of virtual particles the quantum vacuum responds to an external field, and thus the Kerr effect in vacuum is possible (a quite serious publication in a very serious journal is [2]). The author emphasizes that the principle of relativity is respected, in the sense that the resulting velocity of light is obligatorily below the Lorentz speed, but from a philosophical point of view the existence of such an effect means that the principle of relativity is not an absolute any more, rather an extremely good fit to the existing experimental data. The calculated deviation of the speed of light from the Lorentz speed is enormously small (well below the measurability level), but it means that small deviations of the speed of light from the Lorentz speed are not a big deal anymore: the relativity principle is not absolute, rather a fit, and may be replaced by a more general law.

From the point of view of an experimentalist, this opens new avenues of search for experimental data which would contradict the existing paradigms of science and would create a "new physics". One of the ideas is that the red shift is not due to the Big Bang but due to inherent, not yet discovered properties of light which allow it to lose energy while propagating over enormous distances of billions of light years, which an observer on Earth sees as a red shift. (Of course the shift is red, meaning the energy is lost, not gained; gaining energy would contradict another general law, energy conservation, which from the experimentalist point of view is not absolute either, but for now and in this phenomenon it is valid. Say that law is a better overall fit compared to the principle of relativity.)

From my point of view any matter, including light, is both particle and wave, and thus light obligatorily has a non-zero rest mass (though of course enormously small) [3]. In this approach every particle, including the photon, is considered as having rest mass, and the photon is an ultra-relativistic particle for which the simplified energy relation E=p*c is valid (momentum times the speed of light), as for any other ultra-relativistic particle, like the electron for example. In this approach that is of course an approximate relation even for the photon, E~p*c, because in reality the speed of light is a little bit below the Lorentz speed c, which was possibly revealed by the experiment with gravitational waves (see the beginning of the post). The speed of gravitational waves is not the Lorentz speed either, of course, from the same considerations as before, but the gravitational wave propagating with the Lorentz speed is a better fit compared to light. (And the Lorentz speed itself is not a philosophically absolute value either, but that is for future generations to discuss.)
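For a massive but ultra-relativistic particle the exact relation is v = c*sqrt(1 - (m*c^2/E)^2), so the deviation from the Lorentz speed is about (m*c^2/E)^2/2. A minimal sketch below; the photon rest energy is purely hypothetical, taken near the published laboratory upper bounds of the order of 1e-18 eV:

# How far below the Lorentz speed would an optical photon travel
# if it had a (hypothetical) non-zero rest mass?
m_c2_eV = 1e-18          # assumed photon rest energy, eV (near laboratory bounds)
E_eV = 2.0               # typical optical photon energy, eV

ratio = m_c2_eV / E_eV
deviation = ratio**2 / 2              # 1 - v/c for ratio << 1
print(f"1 - v/c ~ {deviation:.1e}")   # about 1e-37, far below any measurability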

But what would be the origin of this mass for the photon? Is it really a particle with some finite but very small mass? In this case the registration of a non-relativistic photon in vacuum would be very difficult, since it would have a de Broglie wavelength of enormous value: around 100 thousand meters for a photon moving with a velocity of 1 m/s, from the evaluations done in [3]. A detector registering this particle as a particle should have similar dimensions of 100 km and is not feasible even in the far future.
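A quick check of the numbers quoted above (a sketch; the mass is whatever the de Broglie relation demands for these values):

h = 6.626e-34            # Planck constant, J*s
wavelength = 1e5         # de Broglie wavelength quoted above, m (100 km)
v = 1.0                  # photon velocity in this thought experiment, m/s

# de Broglie relation: lambda = h/(m*v)  =>  m = h/(lambda*v)
m = h / (wavelength * v)
print(f"implied photon rest mass ~ {m:.1e} kg")   # about 6.6e-39 kg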

In this case the red shift may be explained very similarly to the energy loss of an ultra-relativistic particle moving in a field along a curved trajectory: the electron in a cyclotron is circling around and, as an accelerating particle, is shedding photons (synchrotron radiation), thus losing energy. Photons are subject to gravitational fields (light is bent by masses), and since their mass is not absolutely zero, they are shedding gravitons and losing energy too. Since they are still ultra-relativistic, this is not revealed as a change in speed (that change is extremely small and unmeasurable) but as a red shift. Propagating through non-uniform gravitational fields for billions of years, they finally acquire the red shift observable on Earth.

Another interesting way for the photon to have a finite mass is to have a virtual finite mass. Here the quantum vacuum may offer such a possibility. The quantum vacuum is considered as a special medium obeying the principle of relativity for uniform motion and for motion with constant acceleration [4]. Here is how it is stated in [4]:

"We may emphasize that the motional force does not raise any problem to the principle of special relativity. As a matter of fact, the reaction of vacuum (*) vanishes in the particular case of uniform velocity. The quantum formalism gives an interesting interpretation of this property : vacuum fluctuations appear exactly the same to an inertial observer and to an observer at rest. Hence the invariance of vacuum under Lorentz transformations is an essential condition for the principle of relativity of motion to be valid and it establishes a precise relation between this principle and the symmetries of vacuum. More generally, vacuum does not oppose to uniformly accelerated motions and this property corresponds to conformal symmetry of quantum vacuum [*]. In this sense, vacuum fluctuations set a class of privileged reference frames for the definition of mechanical motions"

Later in the same publication [4] the authors actually predict the existence of subtle, never observed effects connected with Casimir forces, which create problems for dissipative processes in the quantum vacuum when considered from the point of view of the principle of relativity. Essentially they came to the conclusion that if a process is dissipative and valid in the quantum vacuum, it may slightly contradict the philosophically absolute principle of relativity, like publication [2]. This is actually expected and well in line with the present blog: the principle of relativity is not a philosophical absolute, rather a very good fit to the existing data, and small (actually extremely small and not yet observable) deviations are not only possible but rather inevitable.

In the quantum vacuum, the light is actually supported by virtual particles popping in and out of existence, pretty much as in a condensed matter with a huge refraction index (say 10, in ultra-dense plasma) the light is almost completely supported by real, not virtual, charges and dipoles. Indeed, when the index of refraction is larger than 1, part of the energy of the electric and magnetic fields of light actually sits in the induced polarization of the matter, not in the initial vacuum-held electric and magnetic fields; that is why the speed of light is smaller and the values of the field are different. If the index of refraction is 100, almost nothing is left of the quantum vacuum here, and the propagating light depends almost entirely on the properties of the real particles and real charge distributions. In essence, the light in this situation is more like the total assembly of orientations and motions of the real particles. Thus the famous Fresnel drag experiment may be interpreted as follows.

Einstein's formula u = c/n + v*(1 - 1/n^2) or u = c/n - v*(1 - 1/n^2) (see [5]), where u is the measured speed of light propagating in a medium with refraction index n moving with velocity v along the direction of the light or against it, holds because light is partially supported by the medium. If n=1 (vacuum, no support) the velocity is obviously c (principle of relativity). If the value of n is huge (say 100), the velocity of the medium is almost completely added or subtracted. In this limit the aether theory is valid, because the light is now essentially nothing more than a distortion of the moving medium, and almost nothing is left from the vacuum part.
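A minimal numerical illustration of the formula (the values of n and v are arbitrary examples):

def fresnel_drag(n, v, c=2.998e8, sign=+1):
    # Measured speed of light in a medium with refraction index n moving
    # with velocity v along (+1) or against (-1) the light propagation.
    return c / n + sign * v * (1.0 - 1.0 / n**2)

c = 2.998e8
v = 10.0                       # medium velocity, m/s
for n in (1.0, 1.333, 100.0):
    u = fresnel_drag(n, v)
    print(n, u - c / n)        # extra speed contributed by the moving medium

# n = 1:     nothing is added (pure vacuum, principle of relativity)
# n = 1.333: about 0.44*v is added (the classical Fizeau result for water)
# n = 100:   almost the full v is added (the "aether-like" limit described above)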

If a medium with a huge refraction index n is subject to the tidal force of gravity near a gravitating body (say, moving directly toward the center of the gravitating body), any distortion of the medium is subject to such tidal forces too. Since the front part of the light oscillation is attracted more strongly toward the body than the rear part (in both cases, whether the light is moving toward the body or away from it), the light is obviously stretched together with the medium (the wavelength becomes larger). Thus if such a medium with light passes near a gravitationally attracting body (say, a star being ruptured by the tidal force while passing near a black hole), a red shift must be present, explained by the action of gravity on the medium, not on the light itself.

The quantum vacuum, being almost completely fine with respect to the principle of relativity, may nevertheless explain the tiny red shift in the same way: the effective n of the quantum vacuum is not exactly 1 but deviates a little from it (n>1, of course). This is exactly the conclusion of [2]. Then, when the photon passes through the enormous spaces of the Universe, traveling for billions of years, this medium (the quantum vacuum) creates a kind of non-zero virtual mass of the photon (though indeed really very small). During the passage of the light near a gravitating body, the light is supported by those virtual particles, which in turn are subject to the gravitational field. From the quantum mechanics point of view it means that the non-zero mass is accelerating a little and thus obligatorily shedding some gravitons and losing energy. From the classical electrodynamics point of view, the virtual medium of the quantum vacuum is stretched by the tidal force, thus elongating the photon a little bit (a very small effect, of course), leading to an increase of the wavelength and to a red shift. Once the gravitating body is passed, its influence on the virtual refraction index is over (n=1 with much higher accuracy), so the velocity of the photon is restored too, but the stretch is not (because for both directions, toward the body and away from it, the gradient works in the same direction). If it were a real medium, it might contract back through electromagnetic forces, but the quantum vacuum is different: it does not pass information from point to point. The light is now supported by quantum particles which popped into their very short existence with no knowledge of the virtual particles which supported the light before the gravitating body.

The effect is enormously small, but the photon travels billions of years, too. So this creates a slow change of the photon's energy, not through a change of velocity but through the elongation of the wavelength: exactly a red shift.
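To see what loss rate such a mechanism would have to provide, here is a toy estimate. Any tired-light mechanism mimicking the Hubble law must shed a fixed fraction of the photon energy per unit time equal to the Hubble constant (the standard value of about 70 km/s/Mpc is assumed here):

H0 = 70e3 / 3.086e22     # Hubble constant in 1/s (70 km/s per megaparsec)
SECONDS_PER_YEAR = 3.156e7

# A photon losing energy as dE/dt = -H0*E reproduces z ~ H0*D/c for small z.
print(f"fractional energy loss per second: {H0:.1e}")                      # ~2.3e-18
print(f"fractional energy loss per year:   {H0 * SECONDS_PER_YEAR:.1e}")   # ~7e-11

A loss of roughly seven parts in a hundred billion per year is indeed far below anything measurable in a laboratory, yet it accumulates into the observed red shift over billions of years.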



References.

1. "Ask Ethan: Why Did Light Arrive 1.7 Seconds After Gravitational Waves In The Neutron Star Merger?" (forbes.com)

2. Scott Robertson, "Optical Kerr effect in vacuum" // Phys. Rev. A 100, 063831 (2019)

3. Dmitriy S. Tipikin, "The Quest for New Physics. An Experimentalist Approach", published by MoreBooks in 2021:

https://www.lap-publishing.com/catalog/details//store/gb/book/978-620-4-73173-5/the-quest-for-new-physics-an-experimentalist-approach

Free version on viXra.org:

https://vixra.org/abs/2011.0172

4. Serge Reynaud, Astrid Lambrecht, Cyriaque Genet, Marc-Thierry Jaekel, "Quantum vacuum fluctuations", doi:10.1016/S1296-2147(01)01270-7

https://core.ac.uk/download/pdf/25312347.pdf

5. https://en.wikipedia.org/wiki/Special_relativity#Dragging_effects

Thursday, April 14, 2022

Energy-matter cycle (an analog of the water cycle on Earth) instead of the Big Bang idea. How energy is converted back to matter.

The hypothesis of the Big Bang is not the only possible way to visualize the Universe at the largest possible scale. If the James Webb telescope fails to discover the "end of light" and confirm it, a revolution in astrophysics is inevitable. The red shift may easily be attributed to the non-zero mass of the photon (the old abandoned idea of tired light, but on new physical principles, not yet discovered), and the microwave background may be something completely unrelated. But what about the generation of energy in stars?

Indeed, if the stars are only generating energy and the Universe is much, much older than expected (say trillions of years or even older), how is it possible that stars are still shining? They would have been completely depleted long ago.

So it is necessary to hypothesize the reverse process: how energy is converted back to matter (it is assumed for now that the energy conservation law rules supreme and E=m*c^2 is valid with very high accuracy).
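The scale of the required reverse process can be estimated directly from E=m*c^2. A minimal sketch, using solar values just as an example of one unit of the cycle:

c = 2.998e8              # speed of light, m/s
L_sun = 3.8e26           # solar luminosity, W
m_proton = 1.673e-27     # proton mass, kg

# Rest energy of one proton and the proton-creation rate needed somewhere
# in space to balance one Sun-like star burning its mass into radiation.
E_proton = m_proton * c**2                    # ~1.5e-10 J (~938 MeV)
protons_per_second = L_sun / E_proton
print(f"{protons_per_second:.1e} protons/s")  # ~2.5e36 protons per second per star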

Actually the only plausible hypothesis, except for completely new processes yet to be discovered, is connected with the well-known phenomenon predicted by the Soviet academician V. I. Goldanskii and confirmed later in the USA: two-proton decay. In this process a nucleus is excited by some energy-consuming process: say, a proton of ultra-high energy, accelerated in space by magnetic fields and photons from the stars, strikes an appropriate dust particle, excites an appropriate nucleus, and the decay generates two protons, thus successfully transforming energy back into matter (and, very importantly, into hydrogen gas, which would later condense into a star, so the process repeats again and again, pretty much like the water cycle on Earth):

proton + energy + nucleus= nucleus + 2 protons

Indeed, two-proton decays are quite common, and in recent years they have been discussed as important for the detection of neutrinos [1].

In that and many other articles, processes like multinucleon knockout are discussed, such as (p,2p) and (p,3p), where a proton with huge energy generates two or three protons (and the remaining nucleus undergoes a more usual decay afterwards), successfully transforming energy into matter. There is even a process of detection of ultra-high-energy neutrinos in which the neutrino excites the nucleus (energy absorbed) and it decays into two or three protons plus a nucleus with lower energy [2].

Essentially, processes like those may be responsible for the conversion of energy back to matter, then matter back to energy (through the very easily observable stars burning in the sky), and so on for a possibly infinite time span. The important part would notably be played by two-proton decay, because this is the easiest, from the energy point of view, way of converting energy into matter. There is no problem here from the antimatter point of view: it is not born here, the initial non-equilibrium distribution is merely preserved. Why matter predominates is a very good question in this picture of the Universe, but it is as unsolvable as it is in the Big Bang hypothesis: there is simply not enough knowledge now to answer it. Some amount of antimatter is of course generated during the conversion of energy back to matter, but it quickly converts back by annihilation (that would be a second, much smaller in scale and much less important cycle of matter-energy conversion). In this picture the Universe is back to very, very old age: trillions of years or more, possibly infinitely old. The observed non-uniformity of the red shift in time is to be explained by some other phenomena (yet to be discovered), and the origin of the Universe becomes a philosophical question again rather than a physical one.

The problem with this energy-matter cycle is that one part of the cycle is very visible (stars) while the second is mainly hidden in the impossible interstellar distances (slow acceleration of elementary particles until they strike an appropriate nucleus); that is why our civilization successfully identified only one part of the cycle and has so far failed to apprehend the second, much slower part of it. Indeed, it means that at equilibrium the vast majority of energy is accumulated in wave-like particles like light (my other hypothesis actually demands that any matter is both matter and wave; baryonic matter is mainly matter and only a little wave, while light is almost completely wave and just a little matter), which is actually quite accepted today (energy predominates over matter very strongly). The largest difference is the origin of such a shift toward energy, and its dynamic: in the Big Bang hypothesis matter will eventually disappear completely, while in my idea this is the equilibrium distribution.



References.

1. A. Frotscher et al., "Sequential Nature of (p,3p) Two-Proton Knockout from Neutron-Rich Nuclei" // Phys. Rev. Letters, Vol. 125, 012501 (2020)

2. J. E. Sobczyk et al., "Exclusive final state hadron observables from neutrino-nucleus multi-nucleon knockout" // arXiv:2002.08302


Thursday, February 17, 2022

How repulsion between gravity and antigravity particles may explain the gravity enhancement by ordinary baryonic matter.

In classical electrodynamics the electric field generated by charges points from positive to negative charge. Two positive charges repel each other, and a positive and a negative charge attract each other. A dipole may be considered as closely placed positive and negative charges. In the presence of dipole particles near an electric charge, the direction of the dipole is as shown in the picture below:


This is because the negatively charged part of the dipole is attracted to the positive charge and the positively charged part of the dipole is repelled by the main positive charge. In this case the electric field of the dipole (still directed from positive to negative charge) is against the electric field of the main charge. This is a well-known phenomenon; because of it, the virtual dipoles created in the quantum vacuum weaken the electric field and make the speed of light finite, though very large. Because of the same phenomenon, in the presence of any matter the electric field is weakened by dipoles, leading to the famous dielectric constant in the Coulomb law:

E = [1/(4πεεo)]*q/r^2

Here E is the electric field due to the charge q, r is the distance from the charge to the point of measurement, εo is the dielectric permittivity of vacuum, and ε is the dielectric constant.
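The sign difference between the two cases can be stated in a few lines of code. This is only a sketch: the electrostatic weakening is standard physics, while the gravitational enhancement factor is the hypothesis of this post, not established physics:

k = 8.988e9              # Coulomb constant 1/(4*pi*eps0), SI units

def electric_field(q, r, eps=1.0):
    # Standard electrostatics: dipole polarization weakens the field,
    # so the dielectric constant eps >= 1 sits in the denominator.
    return k * q / (eps * r**2)

def gravitational_field(m, r, eps_g=1.0, G=6.674e-11):
    # Hypothesis of this post: gravitational dipoles align the opposite way,
    # so the analogous factor eps_g >= 1 multiplies instead of divides.
    return G * m * eps_g / r**2

print(electric_field(1e-6, 0.1, eps=80.0))                # weakened (e.g. in water)
print(gravitational_field(5.97e24, 6.37e6, eps_g=1.001))  # slightly enhanced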

Numerous attempts were made to invent antigravitational particles and a second spin (see earlier publications in this blog), assuming that antigravitation behaves in exactly the same way as electrostatics: similar charges repel each other while positive and negative attract each other. But it may happen that the situation is not completely symmetric: while the second spin is present and gravity is governed not by bosonic particles like the Higgs but in a way similar to electricity, similar particles attract each other while gravitational and antigravitational ones repel each other. In this case the situation near a gravitational field in the presence of a dipole is different:


The gravitational field in the presence of the gravitational dipoles is enhanced.

While this means that the gravitational dipole is very unstable and may not be easily detected, the virtual dipole in the quantum vacuum should still be present (since it decays back instantly, before the Heisenberg uncertainty time is up). It means that in the presence of a strongly gravitating body the polarization of the quantum vacuum will enhance the gravity between two bodies, not weaken it as in the case of electrostatics. This would explain the effect observed for binary stars: gravity seems enhanced when the stars are close to each other (the local concentration of ordinary baryonic matter is higher) compared to when the stars are far from each other (the local concentration is smaller) [1].

This idea also explains where the antigravitational matter is: even if it is accidentally created in colliding beams, it is instantly pushed away from Earth and out of this Universe (maybe condensing somewhere in another Universe just next to our own).


References.

1. D. S. Tipikin, "Analysis of slope of mass-luminosity curves for different subsets of binaries – dark matter, MOND or something else governs the accelerated rotation of galaxies?" https://vixra.org/pdf/2008.0217v1.pdf

Wednesday, December 22, 2021

The book "The quest for new physics. An experimentalist approach" is finally available in paper format on Morebooks.de

 The book in paper format may be bought here:

THE QUEST FOR NEW PHYSICS An experimentalist approach, 978-620-4-73173-5, 6204731734 ,9786204731735 by Dmitriy Tipikin (morebooks.de)

https://www.morebooks.de/store/gb/book/the-quest-for-new-physics-an-experimentalist-approach/isbn/978-620-4-73173-5

Why was this book written? It is the experimentalist's response to the crisis in fundamental physics: there are hundreds of theories, but they are still based on a few core assumptions: the principle of relativity, the energy conservation law, the observation of the red shift, particle-wave dualism and many others. Even when an experiment is planned, it is based on three weak paradigms: the Big Bang, dark matter and quantum entanglement.

The author considers all three of them obsolete and standing in the way of the development of physics. The book is devoted to the description of ideas of experiments (not theories) which would create new facts and finally help establish New Physics (beyond general relativity, the standard model, modern quantum mechanics). The James Webb space telescope is about to be launched, and in one year it is supposed to show either the blue bright first galaxies 16 billion light years away with darkness beyond (because the Big Bang happened 16-17 billion years ago) or nothing special: the same old galaxies with huge red shift up to the James Webb limits (say 50 billion light years). No darkness, no just-born stars, the same universe as near the Sun but seemingly infinite. My guess is that, like the futile search for dark matter, this is the place where suddenly the whole house of cards (the Big Bang, dark matter and quantum entanglement) falls down.

But what is there instead? What was missed 100 years ago, and why is physics in crisis now? From the author's perspective, much simpler objects were severely under-investigated: simple binary stars instead of galaxies will reveal the true nature of gravity on long scales (this is neither dark matter nor MOND but a distortion of the quantum vacuum or of space-time itself); the idea of tired light is resurrected, but on new physical principles: some weak but existing interaction makes the light redder and redder as it travels billions of years; the idea of spooky action will be found to be a simple experimental error (to the benefit of Einstein and Feynman); the necessity of careful and extremely expensive experiments with pulsed gravitational fields will become clear (thus the author's proposal for a gravito-electromagnetic national laboratory); and many other ideas for experiments in fundamental physics.


Thursday, February 4, 2021

The experimentalist approach to the new physics.

In modern physics the most discussed challenge now is the crisis in fundamental physics. Indeed, numerous different theories are multiplying at an exponential rate, but no further improvement has been seen for already around 80 years, which indeed looks like a crisis. One of the points every physicist agrees upon is the lack of new experimental data which would demonstrate the limitations of the major paradigms: general relativity and quantum mechanics. So the most promising way in this situation is the hard-work way of the experimentalist: search for new discoveries here and there without preliminary theoretical calculations to confirm them, the "wild goose chase" approach.

The only way old theories may help in this situation is to create some kind of boundaries for where to search:
1. At least the energy conservation law should be preserved.
2. The effect sought should be small (otherwise it would have been noticed already).
3. Use the existing theory and eliminate only one assumption.
4. The experiment proposed is obligatorily very expensive (otherwise it would have been done decades ago).

With this approach in mind I tried to generate a list of possible areas to search within. One confirmation of the validity of this search comes from the observation of double-star orbits: they are not really Keplerian, rather ovoid instead of elliptical (the deviation is of course small and was confirmed only recently using a special technique; see the Porrima analysis below).


Hopefully the search for new discoveries along these lines will soon lead to new physics.


Porrima orbit is ovoid instead of an ellipse – a hint at gravitation-induced space-time stretching with a characteristic scale of around 0.04 light year

Introduction.

The problem of the missing matter in galaxies has generated different approaches already, and more approaches are possible. The startling difference between modern theory and experimental observation comes from the observation of the deflection of light by mass: light bending by galaxies, clusters of galaxies etc. The most general expression for the gravity influence on light goes from the Schwarzschild metric expression:

Z = G*M/(c^2*R)
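As a sanity check of the scale of this expression, the sketch below evaluates Z at the solar surface (standard solar values; this is the well-known gravitational red shift of the Sun):

G = 6.674e-11            # gravitational constant, SI
c = 2.998e8              # speed of light, m/s
M_sun = 1.989e30         # solar mass, kg
R_sun = 6.96e8           # solar radius, m

Z = G * M_sun / (c**2 * R_sun)
print(f"Z at the solar surface ~ {Z:.1e}")   # about 2.1e-6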

The observed Z value is too large for the measured distance, visible mass and known gravitational constant. So the hypotheses are:

1. Missing matter – the dark matter approach – the value of M should be higher to account for Z

2. The gravity law changes at large distances – the combination of G and R should be reconsidered (MOND)

3. The value of G is wrong away from Earth – the gravity-enhancing field hypothesis, the fifth-force hypothesis (developed in [1])

4. The speed of light is not constant, and if it is smaller between galaxies the value of Z may be higher, accounting for the measured light bending. The speed of light may theoretically be smaller away from a gravitating mass if gravitation influences the quantum vacuum much more strongly than expected now. In this case not only is the constant G smaller away from the galaxy (the hypothesis discussed above), but the vacuum permeability, responsible for the speed of light, is larger, for example due to enhanced positron-electron virtual pair generation in the absence of a gravitational field

5. The geometry is not correct – the value of R is wrong. For example, Einstein's idea that space is created by mass is even more important than expected, and between galaxies there is virtually less space than inside a galaxy – in this case the simple geometrical rules are not applicable.

This article is devoted to the fifth approach, claiming that the distances are measured wrongly on galactic scales because the "ruler" is based on photons and the space is more distorted by the presence of mass (energy, in more general terms) than expected.

Main part.

The idea proposed in [1] for experimental observation of the discrepancies connected with gravitation is rather simple: the Cavendish experiment should be performed away from a gravitating mass (away from the Sun, say beyond the Pluto orbit). Modern space probes have already left the solar system, so this experiment is within the reach of modern technology. However, a simple measurement of the deviation of the expected force from the calculated one:

F = G*M1*M2/R^2

may come not only from a difference in G (as it depends on mass) but also from a deviation of R as it is measured away from a gravitating body. If the probe is sent toward the Sun, this is an expected effect (general relativity will reveal itself in such a measurement close to the Sun). But what if space is distorted by the presence of mass more than general relativity teaches us? What if there is another, much larger scale of influence of mass onto space-time, in addition to the Schwarzschild radius? Application of a "ruler" created near Earth to galactic and intergalactic distances would then generate a large error which is perceived as missing matter. For example, if space is more "diluted" away from gravitating bodies (this is idea number 5 from [1]), then for stars on the outskirts of a galaxy both the measured distance is larger than it is in reality (from the graviton point of view, for example, not from the photon point of view) and the measured velocity is larger too (assuming in the first approximation that time is not touched), because v=dR/dt. In this case the galaxy rotation curve may be modified toward the predicted shape.

(see the picture in [5])

How would this idea reveal itself in other astronomical observations?

In this case the orbits of binary stars are not ellipses any more, rather distorted ellipses (ovoids). Because space is expected to be a little more "diluted" when the two stars are away from each other, the ellipse will be deformed (see the picture in [5]).

Indeed, observation of nearby binary stars which were observed for a full cycle demonstrated that the best-fit ellipse differs from the real orbit in exactly this place (see the picture for Gamma Virginis, the Porrima star, from [2], reproduced in [5]).

It is clearly seen that the best-fit elliptical orbit does not pass through the middle or median of the numerous points when the stars are especially far from each other (those points have smaller errors, because it is much easier to measure a large separation between stars than a small separation, when the stars almost overlap each other).

Simple mathematics allows one to evaluate the degree of the "dilution" of space when the stars are separated. For an ellipse with parameters a and b (a>b):

(x-a)^2/a^2 + y^2/b^2 = 1

For the aphelion-perihelion line the crossing points for y=0 are 0 and 2a.

Let the distance along the x axis be distorted: x → x*exp(-c*x), where c*a<<1. This means that space is diluted with a characteristic length of 1/c when the stars are apart compared to when they are nearby. In this case the shape of the figure is as follows:

(x*exp(-c*x)-a)^2/a^2 + y^2/b^2 = 1

For y=0 (the aphelion-perihelion line) the new crossing points are determined from the equation:

(x*exp(-c*x)-a)^2 = a^2

x*exp(-c*x) - a = -a

or x*exp(-c*x) - a = a

The first point is still x=0, and the second point is determined from the equation x*exp(-c*x) = 2a.

Because c*x<<1 we have

exp(-c*x) ~ 1 - c*x

and substituting into the equation x*exp(-c*x) = 2a we have

x*(1 - c*x) = 2a

or: -c*x^2 + x - 2a = 0

 

The root of this quadratic equation is:

x1 = [-1 + sqrt(1-8*a*c)]/(-2c)

The second root has no physical meaning.

 

Because 8*a*c<<1, using the Maclaurin formula

(1+x)^(1/2) = 1 + (1/2)*x - (1/8)*x^2 + …

we have

(1-8*a*c)^(1/2) = 1 - 4*a*c - 8*a^2*c^2 + …

Then x1 = [-1+sqrt(1-8*a*c)]/(-2c) = [1/(-2c)]*(-1+1-4*a*c-8*a^2*c^2) = 2a + 4*a^2*c

The difference between the real path of the star and the pure ellipse is Δ = 4*a^2*c. The relative shift of the ellipse is Δ/2a = 2*a*c.
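The perturbative result x1 = 2a + 4*a^2*c is easy to check numerically. A minimal sketch with arbitrary test values a=1, c=0.01:

import math

a, c = 1.0, 0.01         # test values with c*a << 1

def f(x):
    # The distorted crossing point solves x*exp(-c*x) = 2a.
    return x * math.exp(-c * x) - 2 * a

lo, hi = 2 * a, 3 * a    # simple bisection bracket
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)

print(f"numeric root: {lo:.5f}")                 # ~2.04124
print(f"2a + 4a^2*c:  {2*a + 4*a**2*c:.5f}")     # 2.04000, agrees to first order in c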

The relative shift Δ/2a can be read off Fig. 3 (see [5]). It is approximately 3/80.

For Gamma Virginis the parallax is 0.0856 arc seconds and the semi-major axis in arc seconds is 3.662 [3].

Then the semi-major axis in astronomical units is 3.662/0.0856 = 42.8 a.u.

The value of 2a = 85.6 a.u. = 1.28*10^13 m

The value of c = (3/80)*1/(2a) = 2.9*10^-15 1/m

The characteristic length would be ξ = 1/c = 3.4*10^14 m (0.036 light year), which correlates with the value of the decay length of gravity deduced in [1,4] from the mass-luminosity curve analysis: 3.2*10^14 m. Because the gravity law has the inverse square of the distance in the Newton formula, the characteristic length of space dilution should be half the decay length of gravity to lead to the same numerical value of the force.
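The numbers above can be reproduced in a few lines (a sketch using the Gamma Virginis values quoted from [3]):

AU = 1.496e11            # astronomical unit, m
LIGHT_YEAR = 9.461e15    # m

parallax = 0.0856        # arc seconds [3]
axis_arcsec = 3.662      # semi-major axis, arc seconds [3]
relative_shift = 3 / 80  # read off the orbit fit, equal to 2*a*c

a_au = axis_arcsec / parallax          # semi-major axis, a.u. (~42.8)
two_a = 2 * a_au * AU                  # 2a in meters (~1.28e13)
c_dilution = relative_shift / two_a    # the distortion parameter c, 1/m (~2.9e-15)
xi = 1 / c_dilution                    # characteristic length of space "dilution"
print(f"xi = {xi:.2e} m = {xi / LIGHT_YEAR:.3f} light years")   # ~3.4e14 m, ~0.036 ly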

 

Conclusion.

For advancement in the understanding of gravity and how it leads to the rotation curves of galaxies, in addition to the attempts to detect dark matter, more accurate measurements of binary star orbits are necessary. The deviations in the gravity laws easily visible on the galactic scale must reveal themselves in the dynamics of smaller objects, like binary stars, though possibly in tiny amounts. From a historic perspective, the analysis of a simple object like a binary star (performed with high accuracy) may lead to crucial discoveries faster than the analysis of much more visible phenomena on the galaxy or Universe scale, because of the extreme complexities associated with larger objects. The analysis described here may be performed by professional astrophysicists for a much broader range of binaries to find possible deviations from pure elliptical orbits.

 

References.

1.D.S.Tipikin “The quest for new physics: an experimentalist approach”// https://vixra.org/pdf/2011.0172v1.pdf

2. http://stars.astro.illinois.edu/sow/porrima.html

3. https://en.wikipedia.org/wiki/Gamma_Virginis

4.D.S.Tipikin “Analysis of slope of mass-luminosity curves for different subsets of binaries – dark matter, MOND or something else governs the accelerated rotation of galaxies?” //

https://vixra.org/pdf/2008.0217v1.pdf

5. D. S. Tipikin, "Careful analysis of the binary stars orbits may reveal the space-time distortions on medium scale."

https://vixra.org/pdf/2012.0180v1.pdf

or: https://vixra.org/abs/2012.0180

 

Sunday, August 30, 2020

Analysis of slope of mass-luminosity curves for different subsets of binaries – dark matter, MOND or something else governs the accelerated rotation of galaxies?

Abstract.

Analysis of mass-luminosity curves for different subsets of binaries (both visual and eclipsing spectroscopic binaries) revealed a deviation in slopes for relatively close binaries (averaging around 3.6*10^-4 light years) compared to relatively widely spaced binaries (averaging around 5.6*10^-3 light years). The slope for close binaries is larger, which means that for the same luminosity of main-sequence stars the gravitational mass determined from the Kepler law is smaller (or the gravity between the stars is stronger). This observation is opposite to the MOND idea (the farther apart the stars, the larger the shift from the 1/r^2 law to the 1/r law for gravity), which would give the opposite effect. The idea of dark matter seems to be confirmed once more (as if some dark mass is hanging around the star, thus making the mass seemingly larger), but a new concept of some kind of gravity enhancement by the mass itself may also be relevant: the closer the binary, the higher the local concentration of mass and the higher the value of G in the gravity law.

Introduction.

One of the unsolved problems of modern science is the observed deviation of galaxy rotation curves from the predicted ones. The phenomenon is observed only on large scales, and that is why it is so difficult to understand. At the same time such a phenomenon is expected to reveal itself on all scales and in all objects where gravity may be probed, including the simplest one: the binary star. Indeed, the simplest atom, the hydrogen atom, allowed the creation of quantum mechanics (including quantum electrodynamics, due to the Lamb shift), and from the history-of-science perspective it is expected that the investigation of the simplest objects may lead to the most efficient theories. The hydrogen atom was an especially simple binary system because both masses were quantized with high accuracy. Binary stars, of course, may have all possible variations of the masses of both stars, but a binary is still the simplest model object for applications of the laws of mechanics. Any deviation from the simple Newton laws (Einstein's modifications would be necessary for close stars) which is visible on the galactic scale (the dark matter problem) must reveal itself, though possibly in minuscule amounts, in this simple object.

The long and unsuccessful search for dark matter has started to produce different ideas. One of them is MOND; under modified Newtonian gravity, binaries with large separation between the stars would start to feel the deviation from the Newton law and attract each other more strongly [1].

Main part.

In order to test the idea of a change in the gravity law for binaries as a function of the separation between them, I decided to go the same way as for testing the additional gravity created by photons [2,3]. That is, the mass-luminosity curve will have a different slope for different subsets of binaries (a subset of binaries with close luminosities versus a subset of binaries with different luminosities would reveal any additional force connected to the photons trapped inside the stars, for example). The comparison of a subset of binaries with relatively large separation between the stars versus a subset of binaries with small separation would reveal any deviation from the Newton law as a function of distance.

I manually chose several visual binaries which are close to the Sun (the closer the star, the better the accuracy of all measurements) and plotted relatively close binaries separately from relatively wide binaries (two eclipsing spectroscopic binaries were added to the close binaries to have points with masses between 3 and 4 Suns).

 

Fig 1. Mass-luminosity relation for binaries with relatively large semi-major axis (average ~ 5.6*10^-3 ly) and relatively small semi-major axis (average ~ 3.6*10^-4 ly).

Table 1. Distant binaries. Each line lists the two components as: mass (Suns), ln(luminosity in Suns).

Groombridge 34 (Andromeda):   0.38, -3.816;    0.15, -7.07
Eta Cassiopeiae:              0.972, 0.208;    0.57, -2.81
24 Comae Berenices:           4.4, 5.155;      3.3, 3.173
61 Cygni:                     0.7, -1.877;     0.63, -2.465
Mu Cygni:                     1.31, 1.79;      0.99, 0.34
Gamma Delphini:               1.57, 1.93;      1.72, 3.034
Epsilon Lyrae 1:              2.03, 3.18;      1.61, 2.13
Epsilon Lyrae 2:              2.11, 3.367;     2.15, 3.466
36 Ophiuchi:                  1.7, -0.6;       0.71, -2.41

 

 

Table 2. Close binaries. Each line lists the two components as: mass (Suns), ln(luminosity in Suns).

Xi Bootis:                    0.9, -0.5;       0.66, -2.8
Sirius:                       2.063, 3.23;     1.018, -2.88
Alpha Centauri:               1.1, 0.418;      0.907, -0.69
Alpha Comae Berenices:        1.237, 0.542;    1.087, 0.56
Beta Delphini:                1.75, 3.18;      1.47, 2.08
Delta Equulei:                1.192, 0.81;     1.187, 0.728
Zeta Herculis:                1.45, 1.879;     0.98, -0.48
99 Herculis:                  0.94, 0.673;     0.46, -1.966
Sigma Herculis:               2.6, 5.44;       1.5, 2.0
Beta Leonis Minoris:          2.11, 3.58;      1.35, 1.76
Psi Centauri*:                3.114, 4.95;     1.909, 2.89
Chi 2 Hydrae*:                3.605, 5.84;     2.632, 4.19
70 Ophiuchi:                  0.9, -0.53;      0.7, -2.04

* – eclipsing spectroscopic binaries (obviously close binaries)

The slopes of the curves are different! It means that for close binaries the effective gravitational constant would be larger. Indeed, visual binaries give the masses as:

M1+M2 = 4*π^2*R^3/(G*T^2)                                                                                       (1)

M1, M2 – masses of the stars, R – semi-major axis, G – gravitational constant, T – period of the binary.

And similar formula for the eclipsing spectroscopic binaries:

M1+M2 = T*(V1+V2)^3/(2*π*G)                                                             (2)

Here V1, V2 are the maximum velocities of the stars.
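For reference, both mass formulas reduce to a few lines of code. A minimal sketch; the input values are placeholders of the order of the Porrima orbit, not data from the tables below:

import math

G = 6.674e-11            # gravitational constant, SI
M_SUN = 1.989e30         # kg
AU = 1.496e11            # m
YEAR = 3.156e7           # s

def visual_binary_mass(R_au, T_years):
    # Formula (1): M1 + M2 = 4*pi^2*R^3/(G*T^2), R = semi-major axis.
    R, T = R_au * AU, T_years * YEAR
    return 4 * math.pi**2 * R**3 / (G * T**2) / M_SUN

def spectroscopic_binary_mass(T_years, v1, v2):
    # Formula (2): M1 + M2 = T*(V1+V2)^3/(2*pi*G), edge-on circular orbit.
    return T_years * YEAR * (v1 + v2)**3 / (2 * math.pi * G) / M_SUN

print(visual_binary_mass(R_au=42.8, T_years=169))      # ~2.7 solar masses
print(spectroscopic_binary_mass(10.0, 10e3, 12e3))     # ~4.0 solar masses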

Assuming that the absolute luminosity determines the inertial mass of the star (indeed, any deviation from the gravitation law is small and should not influence the evolution of the star), it is possible to see that the higher slope corresponds to a smaller deduced gravitational mass for a close binary compared to a wide binary (if the gravitational constant is the same). Assuming the equivalence principle holds, it means that the gravitational constant for close binaries is different from the gravitational constant for wide binaries (larger for close binaries). This observation is exactly opposite to what is expected for MOND; in that case the wide binaries would be attracted more strongly. It looks like some additional mass is present in addition to the star masses, which forces them to come closer (almost as if dark matter were present).

However, why would the dark matter be present only for close binaries and not for all of them (in that case on average the slopes would be the same)? A more plausible idea is that the gravity constant depends upon the mass of the star itself: the gravity-enhancing field is created by ordinary matter and is stronger for a higher concentration of matter in space.

What is the problem with dark matter being considered as some kind of exotic particles able to gravitate but not react in any other way with usual baryonic and non-baryonic (light, for example) matter? In principle such matter is possible, but all the previous experimental evidence tells us that the less a particle interacts with baryonic matter, the less it contributes to gravity. Indeed, ions and molecules are easy to catch, and so far they contribute to gravitation tremendously. Electrons interact less with matter and are also less heavy. Neutrinos are particles that almost do not interact with baryonic matter, but they also do not make a significant contribution to gravity. It is plausible to assume that other types of particles exist which would interact with matter even less, but they would also contribute to gravity even less. The idea of any type of particle which would not interact with ordinary matter but would contribute to gravity even more than baryonic matter is out of this sequence and seems not obvious.

In addition, the recent discovery of ultra-diffuse galaxies with diluted star concentration and completely devoid of dark matter [4] poses even more questions: how may the dark matter be separated from the ordinary matter [5] if they interact gravitationally? Why would the dark matter not be attracted back over billions of years, completing the usual setup: a dark matter halo around the visible galaxy?

At the same time, dark matter is absent in ultra-diffuse galaxies only. Maybe the concentration of ordinary matter plays some role? Does the ordinary matter change the gravity constant through some kind of gravity-enhancing field?

From the slopes of the curves it is possible to roughly evaluate how the gravitational constant G changes with distance.

We have two equations:

Y = 3.7978*ln(x) - 0.1622 – wide binaries (separation ~ 56.29*10^-4 light years, l.y.)

Y = 4.653*ln(x) - 0.0421 – close binaries (separation ~ 3.63*10^-4 l.y.)

For mass m=2 the first equation gives y=2.4702. This value is assumed to be correlated with the inertial mass, which is determined by stellar evolution; it is assumed that a small change in the gravity law cannot influence the luminosity (the dependence of the luminosity upon the heavy-element composition is neglected). Substituting into the second equation we get m=1.716 (instead of two). The equivalence principle should not be violated for close binaries compared to wide binaries, so it means that the mass of the star is not enough for such a luminosity.
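The arithmetic of this comparison, as a small sketch using the two fitted lines above:

import math

def wide_lnL(m):
    return 3.7978 * math.log(m) - 0.1622      # fit for wide binaries

def close_mass(y):
    return math.exp((y + 0.0421) / 4.653)     # inverted fit for close binaries

y = wide_lnL(2.0)            # luminosity a 2-solar-mass wide binary should have
m_close = close_mass(y)      # mass the close-binary fit assigns to that luminosity
print(f"ln L = {y:.4f}, close-fit mass = {m_close:.3f}")   # ~2.4702, ~1.716
print(f"implied G ratio = {2.0 / m_close:.4f}")            # ~1.1655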

There may of course be a simpler explanation for such a deviation: both stars were formed from the same cloud, which was much denser for close binaries (that is why they are closer) compared to a very diluted cloud for wide binaries. In addition to the stars, a huge number of planets and asteroids hang around each star (because the initial cloud was dense), effectively creating invisible but quite real baryonic matter ("dark matter" in the very original sense). Assuming the observations of brightness variation exclude such an explanation (constant dimming of the star due to interstellar objects), the other explanation is that the gravity constant is different. From equations (1) and (2) it follows that G would be larger for close binaries (and the G=K/m law holds). For close binaries G is 2/1.716 = 1.166 times larger.

The influence of the mass on gravity may be written in a formula similar to the Coulomb law:

F = (1/[4πεεo])*q1*q2/r^2                                                                               (3)

where q1, q2 are electrostatic charges, r is the distance between the charges, ε is the permittivity of the space (due to the dipole nature of the medium the force is weakened), and εo is the permittivity of free space.

For gravity it would be:

F = (εg/[4πεgo])*m1*m2/r^2                                                             (4)

where m1, m2 are masses, r is the distance between the masses, εg is the gravitoelectric permittivity of the space (due to the absence of antigravitation it always enhances the force), and εgo is the gravitoelectric permittivity of free space (the notation suitable for gravitoelectromagnetism [6,7]).

In this equation εg moves up to the numerator compared to formula (3), because gravity is enhanced, not weakened as in the case of electricity.

With a loose similarity to the Debye length [8], the dependence of such a field may be written as:

εg=1+δ*{ΣMi*exp(-ri/ξ)}/{ΣMi}                                    (5)

Here Mi are the masses around the point (actually all masses in the Universe, but due to the exponential decay only the closest masses matter), ri are the distances from those masses to the point of interest, ξ is the decay length, and δ is some empirical constant (how strongly the gravitational constant is enhanced). Formula (5) drops to 1 at infinity (no influence of mass) and rises to some enhanced value near the star.

Simplifying even further to evaluate the value of the effect in the Solar system:

G=Go*exp(-r/ξ)                                                               (6)

And 1.166 = [exp(-3.6*10^-4/ξ)]/[exp(-5.6*10^-3/ξ)] (distances in light years)

The decay length would be 0.034 l.y. (3.2*10^14 m), and for the Pluto orbit (5.9*10^12 meters) a change of the gravitational constant of 2% is expected (G=0.98*Go).
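The same numbers reproduced in a short sketch (the decay length from the slope ratio, then the expected change of G at the Pluto orbit):

import math

LIGHT_YEAR = 9.461e15                # m
r_close, r_wide = 3.6e-4, 5.6e-3     # average separations, light years
ratio = 1.166                        # G(close)/G(wide) from the slopes

# G = Go*exp(-r/xi)  =>  ratio = exp((r_wide - r_close)/xi)
xi_ly = (r_wide - r_close) / math.log(ratio)
xi = xi_ly * LIGHT_YEAR
print(f"xi = {xi_ly:.3f} ly = {xi:.1e} m")              # ~0.034 ly, ~3.2e14 m

r_pluto = 5.9e12                     # Pluto orbit radius, m
print(f"G/Go at the Pluto orbit = {math.exp(-r_pluto / xi):.3f}")   # ~0.982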

This is quite a large change and should be easily noticeable if the Cavendish experiment is performed on the Pluto orbit or on the Pluto surface (because the planets are small compared to the Sun, the only real player in the Solar system is the Sun). For comparison, the Cavendish experiment performed on the Moon surface would lead to only around a 4*10^-8 relative change – not enough with the modern accuracy of the Cavendish experiment. The previously published idea of a Cavendish experiment near the surface of the Sun would be helpful if the accuracy is good enough.

It is interesting to note that the idea of the quantum vacuum being influenced by different fields, with a corresponding change of the gravity constant or the electric field constants, is not new and was already discussed [6,9]. In [6] the weakness of gravity is hypothesized to be due to the existence of a Higgs boson "gravitational antiparticle" (second quantization is predicted), so that virtual particle-gravitational antiparticle pairs would weaken the field in exactly the same way as virtual electron-positron pairs weaken the electric field in the quantum vacuum explanation of the speed of light value. If there is no gravitational antiparticle in nature, the presence of mass is expected to polarize the quantum vacuum in such a way that the particles popping out of the quantum vacuum are all bosons with the same positive sign of mass (all attracting each other). In this case, if the boson condensation of all of them is avoided (collapsing the mass into a black hole, as described in [6,9]), the virtual particles would increase the strength of the gravitational field, not weaken it as in the case of electromagnetism. This would be exactly what is observed in this article. The enhancement length seems enormous, but it is in the range expected for dark matter (actually the real length may be larger, because more accurate experiments are necessary).

Conclusion.

The discovered deviation in the mass-luminosity curves is a hint that the gravity constant valid for free space becomes larger in the presence of classical baryonic matter. Such behavior is exactly opposite to what is expected from MOND and formally in line with the dark matter hypothesis (the non-baryonic, inseparable, mass-induced field would in a broad sense be the "dark matter"). However, such an observation is more consistent with the old definition of a field, not matter. To confirm or reject the observation made here, more accurate data on numerous binaries would be necessary (because "googled" data cannot be considered accurate in modern science). The article may be of some interest for visual binary specialists who are trying to decrease the scatter in the mass-luminosity curve (the idea is that the scatter is not really experimental error, which would be much smaller in the era of space-based telescopes, but rather some underlying physical mechanism, which may give different slopes for different subsets of binaries). To the best of my knowledge, nobody has so far analyzed mass-luminosity curves from this perspective.

 

References.

1. McCulloch, M.E., Lucio, J.H. Testing Newton/GR, MoND and quantised inertia on wide binaries. Astrophys Space Sci 364, 121 (2019). https://doi.org/10.1007/s10509-019-3615-z

https://link.springer.com/article/10.1007/s10509-019-3615-z

2. https://vixra.org/pdf/2005.0250v1.pdf

3. https://vixra.org/pdf/2007.0195v1.pdf

4. https://www.discovermagazine.com/the-sciences/hubble-reveals-new-evidence-for-controversial-galaxies-without-dark-matter

5. https://astronomy.com/news/2019/03/ghostly-galaxy-without-dark-matter-confirmed

6. https://en.wikipedia.org/wiki/Gravitoelectromagnetism

7. https://tipikin.blogspot.com/2019/12/quantum-vacuum-application-to-gravity.html

8. https://en.wikipedia.org/wiki/Debye_length

9. https://tipikin.blogspot.com/2020/03/unification-of-gravitational-and.html