
Dark Matter and Cosmic Web Story

by Jaan Einasto


  The formation of galaxies was discussed during the Study Week on Cosmology and Fundamental Physics in the Vatican in 1981. Faber (1982a) used the White–Rees theory of galaxy formation via core condensation to develop a model for the structure of disk galaxies. She showed that disk parameters such as rotation velocity, radius, and surface brightness scale in direct proportion to the corresponding halo parameters. The scaling relations derived from the model resemble the observed Tully-Fisher and radius-luminosity laws. In another talk Faber (1982b) argued that ellipticals and the spheroidal bulges of spiral galaxies are highly condensed systems in which ordinary luminous matter has fallen deep into the central core of the surrounding non-luminous halo. This model is consistent with the observed scaling laws for elliptical galaxies and the velocity dispersions of spheroidal bulges in spirals. The Hubble sequence can be considered a sequence of increasing dissipation and central concentration of the luminous matter relative to the surrounding dissipationless halo.

  The increasing power of computers allows us to investigate the possible assembly of various galaxy populations. Sales et al. (2012) studied the origin of disks and spheroids in simulated galaxies similar to the Milky Way in a series of cosmological gas-dynamical simulations, the Galaxies-Intergalactic Medium Interaction Calculation (GIMIC), based on the Millennium simulation of Springel et al. (2005). The authors find that the final morphology of a galaxy results from the combined effects of spin alignment and of hot/cold gas accretion. Disk-dominated objects are made of stars formed predominantly in situ; they avoid systems where most baryons were accreted cold, as well as those with extreme spin misalignments.

  4.3.5 Modern models of galaxies

  In our first models of the Galaxy, M31 and other nearby galaxies with dark matter coronae, only rather rudimentary data on the mass and radius of the dark matter component were available; thus we published our models only in conference proceedings and small journals, in the hope that better data would soon become available (IAU Symposium on the Spiral Structure of Galaxies (Einasto & Rümmel, 1970a), IAU Symposium on External Galaxies and Quasi-Stellar Objects (Einasto & Einasto, 1972a; Einasto & Rümmel, 1972), First European Astronomy Meeting (Einasto, 1974a), Third European Astronomy Meeting (Einasto et al., 1976c,b), Astronomicheskij Circular (Einasto et al., 1978, 1979b)). In these models the M/L ratios of the stellar populations were improved as new data came in, as were the parameters of the dark population. For all stellar populations we used the generalised exponential mass distribution and our standard graphs for various values of the concentration parameter N, tabulated by Einasto & Einasto (1972b).
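
  As a note for the modern reader: in its simplest form the generalised exponential law expresses the spatial density as ρ(a) = ρ₀ exp[−(a/a₀)^(1/N)], where a is the distance (or semi-major axis) and N the structure parameter. A minimal numerical sketch follows; the parameter names rho0 and a0 are illustrative, and the normalisation used in the original tabulation of Einasto & Einasto (1972b) differs in detail.

```python
import numpy as np

def einasto_profile(a, rho0, a0, N):
    """Generalised exponential (Einasto) density profile:
    rho(a) = rho0 * exp(-(a/a0)**(1/N)).

    N is the structure (concentration) parameter: N = 1 gives an
    ordinary exponential law; larger N gives a more centrally
    concentrated profile with more extended outer wings.
    """
    a = np.asarray(a, dtype=float)
    return rho0 * np.exp(-(a / a0) ** (1.0 / N))

# Density at a few radii (in units of a0) for three concentrations
radii = np.array([0.5, 1.0, 2.0, 4.0])
for N in (1.0, 2.0, 4.0):
    print(f"N={N}:", einasto_profile(radii, rho0=1.0, a0=1.0, N=N))
```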

  Our young collaborator Urmas Haud developed a new program to adjust automatically the parameters of our multi-component models to get the best fit to all data. In these models all the main galactic populations were present: the core, the bulge, the disk, the stellar halo, the flat population of young stars and gas, and the dark corona. Thus parameters for all these populations had to be found. However, the development of this program advanced rather slowly. Only in the late 1980’s were we able to publish in the journal “Astronomy & Astrophysics” our first multi-component model of our Galaxy, calculated with the new method (Einasto & Haud, 1989; Haud & Einasto, 1989), followed later by models of the Andromeda galaxy (Tenjes et al., 1994) and of some other nearby galaxies (Tenjes et al., 1991, 1998).
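
  Haud’s program itself is not reproduced here; the sketch below merely illustrates the underlying idea of such multi-component fits, adjusting component masses so that their combined rotation curve matches the observed one. All numbers are invented, and simple Plummer spheres stand in for the actual generalised exponential populations.

```python
import numpy as np
from scipy.optimize import least_squares

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / M_sun

def v2_plummer(r, mass, a):
    """Squared circular velocity of a Plummer sphere of total mass
    `mass` and scale radius `a` (a stand-in for the generalised
    exponential populations of the real models)."""
    return G * mass * r**2 / (r**2 + a**2) ** 1.5

def v_model(r, params):
    """Total rotation curve: component contributions add in quadrature."""
    m_bulge, m_disk, m_halo = params
    v2 = (v2_plummer(r, m_bulge, 0.5)     # compact bulge, scale 0.5 kpc
          + v2_plummer(r, m_disk, 3.0)    # disk-like component, 3 kpc
          + v2_plummer(r, m_halo, 30.0))  # extended dark corona, 30 kpc
    return np.sqrt(v2)

# Hypothetical observed rotation curve (r in kpc, v in km/s)
r_obs = np.array([2.0, 4.0, 8.0, 12.0, 20.0, 30.0])
v_obs = np.array([180.0, 210.0, 220.0, 220.0, 215.0, 210.0])

# Least-squares adjustment of the component masses
fit = least_squares(lambda p: v_model(r_obs, p) - v_obs,
                    x0=[1e10, 5e10, 5e11], bounds=(0, np.inf))
print("component masses (M_sun):", fit.x)
```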

  In hindsight I think that this long delay in publication in a major journal was a mistake. Actually our models of the Galaxy and M31 (Einasto et al., 1978, 1979b) contained all the essential information, including better data for the dark coronas. Publication of these models in the late 1970’s in a respectable journal could have had a greater influence on the development of models of galaxies; by the late 1980’s and early 1990’s it was already too late.

  Some authors have constructed two-component models, with a bulge and a disk, in an attempt to avoid the use of a massive corona (the mass of the ordinary stellar halo, consisting of old metal-poor stars, is rather small and can be excluded from the mass model). One such attempt was made by Rohlfs & Kreitschmann (1980). The authors ignored the last measured point in the rotation curve of the galaxy M81, and were able to find a model with only two populations, the bulge and the disk. The disk of the galaxy has a hole in the centre; a similar disk model was also presented by Einasto et al. (1980c). However, using more complete data, Tenjes et al. (1998) calculated a multi-component model based on the rotation curve of M81, which flattens at large galactocentric distances; thus a massive corona must be present in this galaxy. Rohlfs & Kreitschmann (1988) constructed a multi-component model of our Galaxy to which a massive corona was added. In this model the bulge has two components, a visible one and a dark one. The authors argue that the dark components, both in the bulge and in the corona, are baryonic.

  Another attempt to avoid the presence of a massive dark halo population was made by Kalnajs (1987). He used numerical experiments to investigate the role of the massive halo in stabilising the disk, as suggested by Ostriker & Peebles (1973). He argued that, compared to a bulge, a halo is not very efficient in stabilising the disk. His conclusion was that the stability argument for the presence of a massive halo is not very compelling. The stability of a flat galactic disk had been studied earlier by Toomre (1964), who argued that a bulge-type population is needed to stabilise the flat population.
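
  For reference, Toomre’s local criterion for the stability of a stellar disk (a standard result, not spelled out in the text) is

$$ Q \equiv \frac{\sigma_R\,\kappa}{3.36\,G\,\Sigma} > 1, $$

  where σ_R is the radial velocity dispersion, κ the epicyclic frequency, and Σ the surface density of the disk. Ostriker & Peebles (1973) supplemented this local condition with a global one, finding that disks become bar-unstable when the ratio of rotational kinetic energy to the absolute value of the gravitational energy exceeds roughly t = T_rot/|W| ≈ 0.14.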

  All models that attempt to avoid the presence of a massive dark halo have one aspect in common: they make the disk as massive as possible. Thus these models are often called “maximum disk” models. In our models we tried to derive the masses of the populations as accurately as possible, using all available data rather than maximising a single parameter. Instead, we tried to minimise the overall deviations of the model parameters from the directly observed parameters.

  In modern models galaxies are not considered as isolated objects. Indeed, new data suggest that all giant galaxies have been formed by the merging of dwarf galaxies. The ongoing merging of giant galaxies can be observed in interacting galaxies, as shown by Toomre & Toomre (1972). In high-resolution simulations of the evolution of the cosmic web, the merging of smaller galaxies into more massive ones is clearly seen.

  Recently Tamm et al. (2007) and Tempel et al. (2007) calculated a new model of M31, taking into account the absorption inside the galaxy. The model includes four visible populations (a bulge, a disc, an inner halo, and an extended diffuse halo) and a dark matter halo. The authors find that about 40% of the total luminosity is obscured by dust. Using chemical evolution models, the authors calculated the mass-to-luminosity ratios of the populations. The total intrinsic mass-to-luminosity ratio of the visible matter is M/L_B = 3.1–5.8 M☉/L☉, and the total mass of visible matter is M_vis = (10–19) × 10^10 M☉. Further, the authors use HI and stellar rotation data, as well as stellar velocity dispersions, to find a dynamical model, which allows them to calculate the DM halo density more accurately. The authors find that a DM halo with an NFW or Einasto profile gives the best fit to the observations.

  Fig. 4.8 Examples of modelling real SDSS galaxies. Upper row shows the original observations, middle row shows the point-spread-function (PSF)-convolved model galaxies, and lower row shows the residual images (Tempel et al., 2012d).

  For the Einasto DM profile, the total mass of M31 is 1.28 × 10^12 M☉, and the ratio of the DM mass to the visible mass is 10.8.
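
  A quick consistency check (my arithmetic, using only the numbers quoted above): the total mass and the DM-to-visible ratio together imply a visible mass of

$$ M_{\mathrm{vis}} = \frac{M_{\mathrm{tot}}}{1 + M_{\mathrm{DM}}/M_{\mathrm{vis}}} = \frac{1.28 \times 10^{12}\,M_\odot}{11.8} \approx 1.1 \times 10^{11}\,M_\odot, $$

  which indeed falls within the range M_vis = (10–19) × 10^10 M☉ found from the photometric model.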

  Presently photometric and redshift data, as well as direct images in the ugriz filters, are available for almost one million galaxies of the SDSS main galaxy survey. So far this large dataset had been used only to construct simple 2-dimensional models of these galaxies. Tempel et al. (2012d) used it to construct 3-dimensional models of SDSS galaxies. Their models include two main populations, the bulge and the disk, described by the Einasto profile with a variable shape parameter N. The authors first tested the modelling technique on simulated galaxies. This test shows that the restored integral luminosities and colour indices remain within 0.05 mag, and the errors of the luminosities of individual components remain within 0.2 mag. The accuracy of the restored bulge-to-disc ratios is within 40% in most cases. Examples of the modelling of real SDSS galaxies are shown in Fig. 4.8. As we can see, the visual appearance of the modelled galaxies is rather close to the images of the actual galaxies, apart from relatively small noise in the residual images. The general balance between bulges and discs is not shifted systematically. Inclination angle estimates are better for disc-dominated galaxies, with the errors remaining below 5°. In total, 3-D models were found for more than half a million SDSS main sample galaxies.
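
  The middle and bottom rows of Fig. 4.8 correspond to the two standard steps of such photometric modelling: convolving the model image with the point-spread function, and subtracting the result from the observation. A minimal sketch of these two steps is given below; the surface-brightness law, all parameter values, and the Gaussian PSF are illustrative, not the actual pipeline of Tempel et al. (2012d).

```python
import numpy as np
from scipy.signal import fftconvolve

def einasto_surface_brightness(x, y, I0, h, N, q):
    """Toy elliptical surface-brightness map with an Einasto-like
    radial law I(r) = I0 * exp(-(r/h)**(1/N)); q is the axis ratio."""
    r = np.hypot(x, y / q)  # elliptical radius
    return I0 * np.exp(-(r / h) ** (1.0 / N))

n = 64
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2].astype(float)

# Model image: compact high-N bulge plus extended N ~ 1 disk
model = (einasto_surface_brightness(xx, yy, 50.0, 2.0, 4.0, q=0.9)
         + einasto_surface_brightness(xx, yy, 10.0, 8.0, 1.0, q=0.5))

# Gaussian PSF; convolution gives the "middle row" of Fig. 4.8
g = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2))
psf = g / g.sum()
model_psf = fftconvolve(model, psf, mode="same")

# "Observation" = PSF-convolved model plus noise; the residual
# image corresponds to the bottom row of Fig. 4.8
observed = model_psf + np.random.normal(0.0, 0.5, model.shape)
residual = observed - model_psf
print("residual rms:", residual.std())
```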

  Tempel et al. (2013) used this sample of 3-D models of SDSS galaxies to investigate the spin alignment of spiral and elliptical/S0 galaxies in filaments. The authors found evidence that the spin axes of bright spiral galaxies have a weak tendency to be aligned parallel to filaments. For elliptical/S0 galaxies, the authors showed that their spin axes are preferentially aligned perpendicular to the host filaments.

  4.4 Tartu Observatory in the 1970’s

  4.4.1 Computer revolution

  In my life I have experienced the whole computer revolution. When I started my studies at Tartu University, the main computer was the slide rule. It remained so for almost 25 years. However, sometimes higher accuracy was needed. One such example was my work on the data reduction of the Solar eclipse observations made by Prof. Kipper and his assistants in the summer of 1945. The reduction of these data was announced as a competition project for students. I had observed the same eclipse with my self-made telescope, so for me it was interesting to compare my own modest observations with those made by professional astronomers.

  One of the tasks was to calculate the phases of the eclipse for the location of Tartu Observatory, since the main goal of the observations was to determine the change of the Solar surface brightness as a function of the distance from the centre of the Solar disk towards the limb. For these calculations the phases of the eclipse were needed. The input data were available in the Astronomical Almanac, and calculations were needed to find the phases for a particular location; for this task relatively simple formulae of spherical trigonometry sufficed. I found all the input data and started the computations. Here the accuracy of the slide rule was not sufficient, so I used a Rheinmetall mechanical calculator available in the physics department of the University. The calculations were relatively simple but demanded a lot of time. When the calculations were almost completed, I discovered that something was wrong: the phases were completely different from expectations. I then started to look for the error, and discovered that I had taken the longitude of Tartu Observatory with the wrong sign: I had used a positive value for Tartu, forgetting that in the astronomical convention longitudes to the east are counted negative. In other words, I had found the phases for a place in the middle of the Atlantic Ocean!

  The deadline to present the paper was already close, so there was no time to repeat all the calculations. But I found a way to use the first part of the calculations, so that only the second part had to be repeated. The final polishing of the paper was done in a great hurry. For the last three days I worked in the Observatory day and night, sleeping on the floor of Rootsmäe’s office a few hours per night. I presented the work in time, but was thereafter so tired that I was not able to work or study for a month or so.

  However, the main lesson from this study was a different one. The maximal phase of the eclipse in Tartu was about 0.9, so the critical region for finding the darkening of the Solar limb was not reached. In other words, these observations had no scientific value and added nothing new to the understanding of Solar physics. This was a critical lesson: in science, only studies that answer an open question are needed.

  In 1954 the next Solar eclipse of the same cycle came about, and the whole staff of the Observatory participated in the observations. This time the zone of totality crossed Lithuania, and we made an expedition to observe the event. A special camera was used to photograph the Solar corona during the eclipse. However, the sky was partly cloudy, and the corona was not visible. After the expedition Prof. Kipper asked me to do the data analysis, since I already had experience from the earlier eclipse. I refused, declaring that our observations had no scientific value, and thus that it made no sense to analyse the data. Kipper took umbrage, and this incident strained our relations for a long time.

  In the 1960’s Tartu University formed a computing laboratory. The computer was probably a copy of one of the first American electronic computers; it worked with vacuum tubes. The central computing unit worked from a rotating magnetic drum which made 100 revolutions per second; this determined the speed of the computer. We had access to this computer and did some calculations there.

  The first electronic computer the Observatory had was a rather curious one: programming was done by plugging wires into the proper places, as in old telephone exchanges. My wife Liia was one of the programmers, and it took time to learn how to program by such methods. But soon we got a better computer, where input data were given on punched tape, and later on punched cards.

  However, our own computers were rather slow, and thus we often used the computers installed in Tallinn at the Institute of Cybernetics, which had a large computing centre. Getting a computer of such power was a great problem in the 1970’s: permission from high bureaucrats in Moscow was needed. They discussed whether to give one computer to all three Baltic countries for joint use, or to Estonia separately. The management of the Institute of Cybernetics was able to convince the bureaucrats in Moscow that there were plenty of people in Estonia who needed such a large computer and that we could make full use of it. The computer was really big: a full hall was needed to house it. A modest remark: its computing power was a tiny fraction of that of the smallest modern notebook, or even of a mobile phone.

  Programming for this computer used punched tape. The actual work was organised as follows. We had several programs running simultaneously. Once a week our Observatory bus made a trip to Tallinn to deliver our tapes and to bring back the computer output rolls, both for fixing errors in the programs and for collecting the results of jobs sent the previous week. Polishing one program normally took several months, which is why we always had several programs in progress. All this work was so inconvenient for astronomers that we relied on a special programming laboratory, whose programmers did the practical programming. My wife Liia also worked in the programming laboratory, and all my programs of the late 1960’s and early 1970’s were written together with her.

  If the results of some program were urgently needed, then we drove to Tallinn together. A course-mate of Liia, Aino Mannil, was the deputy head of the computing centre, so we could work during the night, when the computer was less used, to polish our programs. Aino slept nearby on a camp bed, ready to help us in case of problems. In a few such nights we made more progress than in several months of transporting programs back and forth to Tallinn in the traditional way. In just this manner all my results on modelling galaxies and galaxy evolution were obtained.

  In the late 1970’s our Observatory also got a big computer, not as powerful as the computer in Tallinn, but good enough for those days. The programming was, however, very similar to before, using punched tape or cards. Still, it was easier: there was no need to send programs to Tallinn, so the process was much more rapid. In the early 1970’s our group gained several new astronomers, graduates of Tartu University. They quickly learned the programming style of the Observatory, and helped me when longer and more complicated programming was needed.

  In 1976 I attended the IAU General Assembly in France and was able to buy my first pocket calculator. The next year I was on a short visit to Germany to attend an IAU Symposium on galaxies. This time it was possible to buy a Texas Instruments programmable pocket calculator. It had only 100 memory locations for programs and input data, but I was able to write a program to calculate galactic models using the generalised exponential density profile, now called the “Einasto profile”. A year later I made a short visit to the USA, for a conference to discuss programs for the Hubble Space Telescope. In a small shop I saw a personal computer for the first time. It was a kit comprising parts to be assembled by hobbyists, but it was evident that computing was going towards personal computers.

  In 1980 there was an exhibition in Moscow where, for the first time, real personal computers were on display. At the time it was extremely difficult to get foreign currency to buy equipment from Western countries. I cannot remember how we managed to buy two computers from this exhibition, a Tandy TRS-80 and an Apple II. The Tandy computer was used by other people, but I got the Apple II for my personal use. From that moment on I have done almost all my computing on my own computer. We were able to buy an external floppy disk drive for 8-inch floppies. One floppy disk contained the Fortran compiler, so we could program not only in BASIC but also in Fortran, which was much better for scientific computing. I used my first Apple II computer for many years, and most of the computing needed for my papers of the early 1980’s was done on this device.

  The same autumn I visited the Institute of Astronomy of Cambridge University, and was rather surprised that no personal computers were used there. Computing was done with a central DEC VAX computer; terminals were in all offices. So I had to learn computing on this machine too. But computing with a personal computer was much more convenient.

  I bought my first private personal computer, a Commodore VIC-20, when visiting Germany in 1981. But it was not as convenient as the Apple II, so I gave it to my younger collaborators. In 1982 I also had a chance to work with the very first IBM Personal Computer (PC). George Abell had just bought it, and during my visit to Los Angeles I tried it. But at home I continued to work with my Apple.

 
