Numerical Relativity is still Relativity
ERE Salamanca 2008, Palma Group
Alic, Dana; Bona, Carles; Bona-Casas, Carles
Most recent success stories in BH simulations. Long term evolutions:
- Harmonic (4D spacetime, excision, harmonic gauge source functions)
- BSSN (3+1 decomposition, punctures/excision, 1+log slicing and gamma-freezing shift)
Isn't the gauge choice too limited? Shouldn't numerical relativity be relativity?
Do we have any choice? Reported experiences:
- No long term simulations with normal coordinates (zero shift).
- Generalised harmonic slicing, but strictly harmonic shift.
- BSSN with normal coordinates (zero shift) and 1+log slicing crashes at 30-40M (gr-qc/0206072).
- Gaugewave test: the gauge imposed is harmonic, so harmonic codes succeed, but BSSN crashes.
Looking for a gauge polyvalent codeZ4 formalism
- MoL with 3rd order SSP Runge-Kutta.
- Powerful 3rd order FD algorithm (submitted to JCP). See a variant in http://arxiv.org/abs/0711.4685 (ERE 2007).
- Scalar field stuffing.
- Cactus. Single grid calculation. Logarithmic grid for long runs.
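The method-of-lines combination above can be illustrated with a minimal sketch: a 3rd order SSP Runge-Kutta (Shu-Osher form) driving a generic 3rd order upwind-biased finite-difference stencil on a periodic advection equation. The stencil is an illustrative standard scheme, not necessarily the FD algorithm submitted to JCP.

```python
import math

def ssp_rk3_step(L, u, dt):
    """One step of the Shu-Osher 3-stage SSP Runge-Kutta."""
    u1 = [a + dt * b for a, b in zip(u, L(u))]
    u2 = [0.75 * a + 0.25 * (b + dt * c) for a, b, c in zip(u, u1, L(u1))]
    return [a / 3.0 + (2.0 / 3.0) * (b + dt * c) for a, b, c in zip(u, u2, L(u2))]

n = 100
h = 1.0 / n

def rhs(u):
    """Semi-discrete u_t = -u_x with a 3rd-order upwind-biased stencil
    on a periodic grid (generic scheme, for illustration only)."""
    return [-(2.0 * u[(i + 1) % n] + 3.0 * u[i] - 6.0 * u[i - 1] + u[i - 2])
            / (6.0 * h) for i in range(n)]

u0 = [math.sin(2.0 * math.pi * i * h) for i in range(n)]
u, dt = u0[:], 0.5 * h
for _ in range(int(round(1.0 / dt))):   # advect one full period
    u = ssp_rk3_step(rhs, u, dt)
err = max(abs(a - b) for a, b in zip(u, u0))
print(f"max error after one period: {err:.2e}")
```

With both space and time discretizations at 3rd order, the profile returns almost exactly to its initial data after a full period, which is the property exploited in the gaugewave runs below.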
Gaugewave Test
Minkowski spacetime: harmonic coordinates x, y, z, t.
t=1000; Amplitude 0.1
t=1000; Amplitude 0.5
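The "round trips" counted in these plots can be checked directly against the exact solution. A minimal sketch, assuming the standard Apples-with-Apples gaugewave profile H(x, t) = 1 - A sin(2π(x - t)/d) with period d = 1 (the slides do not spell the metric out): the exact pulse is strictly periodic, so after t = 1000 it has gone around the grid 1000 times and any drift in a numerical run is pure truncation error.

```python
import math

# Assumed Apples-with-Apples gaugewave (pure gauge on Minkowski):
# ds^2 = H(-dt^2 + dx^2) + dy^2 + dz^2, H(x,t) = 1 - A sin(2*pi*(x-t)/d)
def H(x, t, A, d=1.0):
    return 1.0 - A * math.sin(2.0 * math.pi * (x - t) / d)

xs = [i / 50 for i in range(50)]
for A in (0.1, 0.5):                      # the two amplitudes of the test
    drift = max(abs(H(x, 1000.0, A) - H(x, 0.0, A)) for x in xs)
    print(f"A={A}: exact profile drift after 1000 round trips = {drift:.1e}")
```

The A = 0.5 case keeps H between 0.5 and 1.5, so the metric stays regular; the difficulty is purely the stronger nonlinearity, as the notes explain.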
Single BH Test
Singularity avoidant slicing conditions (Bona-Massó):
∂t α = − α² Q,   Q = f(α) (trK − 2Θ)
1+log (f = 2/α) slicing with normal coordinates (zero shift) up to 1000M and more! Never done before (BSSN reported to crash at 30-40M without shift). Unigrid simulation. Logarithmic coordinates, scale factor 1.5.
Lapse function at t=1000M
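The nickname "1+log" for the f = 2/α member of the family can be made concrete: with zero shift, ∂t ln γ = −2α trK, and the Bona-Massó lapse equation (dropping the Θ term) gives ∂t α = −2α trK as well, so α = 1 + ln(γ/γ₀) along the evolution. A minimal sketch; the trK history below is made up purely to exercise the invariant.

```python
# Why f = 2/alpha is called "1+log": with zero shift both
#   d/dt alpha   = -2 alpha trK   (Bona-Masso lapse, Theta dropped)
#   d/dt ln(gamma) = -2 alpha trK
# so alpha - ln(gamma) is conserved: alpha = 1 + ln(gamma/gamma0).
def trK(t):
    return 0.1 * (1.0 + t)          # hypothetical mean-curvature history

alpha, ln_gamma = 1.0, 0.0          # alpha(0) = 1, ln gamma(0) = 0
dt, steps = 1e-4, 20000
for i in range(steps):
    d = -2.0 * alpha * trK(i * dt) * dt   # identical increment for both
    alpha += d
    ln_gamma += d

print(alpha - (1.0 + ln_gamma))     # invariant: stays ~ 0
print(alpha)                        # lapse collapsing toward zero
```

The collapsing lapse is exactly the singularity-avoidance mechanism the slide refers to: the slicing slows down where the volume element shrinks.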
More gauges (zero shift). Isotropic coords. Boundaries at 20M.
Log coords: f = 1/α up to 150M.
Slicing f(α)     Vol. elem. left   Time lasting (0.2 / 0.1 resol.)
2/α              37%               50M / 50M
1 + 1/α          25%               50M / 50M
1/2 + 1/α        20%               50M / 50M
1/α              14%               6M / 50M
1/4 + 3/(4α)     10%               6M / 20M
1/2 + 1/(2α)     6%                5M / 12M
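The "Vol. elem. left" column can be reproduced from the zero-shift relation mentioned in the notes: combining ∂t ln √γ = −α trK with the Bona-Massó lapse equation gives d ln √γ = dα/(α f(α)), so the fraction left when the lapse collapses from α = 1 to 0 is exp(−2 ∫₀¹ dα/(α f(α))), the factor 2 assuming the table quotes the determinant fraction. A sketch checking the table:

```python
import math

# Volume-element fraction left at the limiting surface (alpha -> 0),
# assuming zero shift: exp(-2 * integral_0^1 d(alpha)/(alpha*f(alpha))).
def volume_left(f, n=200000):
    h = 1.0 / n
    integral = 0.0
    for i in range(n):              # midpoint rule; integrand is regular
        a = (i + 0.5) * h
        integral += h / (a * f(a))
    return math.exp(-2.0 * integral)

slicings = [
    ("2/a",          lambda a: 2.0 / a),
    ("1 + 1/a",      lambda a: 1.0 + 1.0 / a),
    ("1/2 + 1/a",    lambda a: 0.5 + 1.0 / a),
    ("1/a",          lambda a: 1.0 / a),
    ("1/4 + 3/(4a)", lambda a: 0.25 + 3.0 / (4.0 * a)),
    ("1/2 + 1/2a",   lambda a: 0.5 + 1.0 / (2.0 * a)),
]
for name, f in slicings:
    # matches the 37 / 25 / 20 / 14 / 10 / 6 % column within rounding
    print(f"{name:14s} {100.0 * volume_left(f):5.1f}%")
```

The quadrature reproduces every entry of the column, which supports reading the table as sorted by how close each slicing lets the limiting surface approach the singularity.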
Shift: 1st order conditions. Vectorial. Harmonic? (□xⁱ = 0), 1st order version.
Advection terms
Lie derivative advection/damping
Covariant advection term
1st order vector ingredients
Time-independent coordinate transformations.
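The Lie-derivative advection term listed above can be sanity-checked in one dimension, where for a metric component g(x) and shift β(x) it reads L_β g = β g′ + 2 g β′. A minimal numerical sketch (the sample g and β are made up) compares that formula with the pullback of g under the small diffeomorphism x → x + ε β(x):

```python
import math

g    = lambda x: 1.0 + 0.3 * math.sin(x)   # sample metric component
beta = lambda x: 0.2 * math.cos(x)         # sample shift

def pullback(x, eps):
    """Metric pulled back by x -> x + eps*beta(x):
    g~(x) = g(x + eps*beta(x)) * (1 + eps*beta'(x))**2."""
    h = 1e-5
    dbeta = (beta(x + h) - beta(x - h)) / (2.0 * h)
    return g(x + eps * beta(x)) * (1.0 + eps * dbeta) ** 2

x, eps, h = 0.7, 1e-6, 1e-5
lie_numeric = (pullback(x, eps) - pullback(x, -eps)) / (2.0 * eps)
dg    = (g(x + h) - g(x - h)) / (2.0 * h)
dbeta = (beta(x + h) - beta(x - h)) / (2.0 * h)
lie_formula = beta(x) * dg + 2.0 * g(x) * dbeta
print(abs(lie_numeric - lie_formula))      # ~ 0
```

The same structure (transport term plus gradient-of-shift terms) is what the advection/damping terms add to the evolution system for each tensor field.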
I would like to present some of the work done in the Palma group, whose authors are listed here (I am the third on the list). As an introduction, I'd like to briefly mention the most recent success stories in black-hole simulations. We have two types of systems able to produce long runs. One of them is the Generalized Harmonic system, which discretizes a 4-dimensional spacetime directly, uses excision to set aside the black hole interior, and uses harmonic gauge source functions. On the other hand we have the BSSN system, also with many flavours, which discretizes a 3+1 decomposition of the spacetime, displayed here. The fundamental quantities are the space metric γij and the extrinsic curvature of the time slices. The lapse function measures the proper-time versus coordinate-time ratio when moving along the lines normal to the slices. The shift βi measures in turn the deviation between these normal lines and the time lines (it does not affect the geometry of the slicing in any way). Both lapse and shift are given by the gauge conditions. In this case BSSN uses a 1+log prescription for the lapse, which I will talk about a bit later, and a gamma-freezing shift condition. It also uses either punctures or excision to deal with black hole singularities.
The first thing that might come to one's mind is why we need such strict gauge prescriptions, given that we are in the General Relativity framework, where coordinate freedom is assumed from the very beginning. The truth is that there are no long term simulations (lasting, say, for more than 100 black hole masses) with normal coordinates, that is, with zero shift. The harmonic code uses some generalised harmonic slicing with a harmonic shift, and the attempts to run BSSN with no shift have ended up crashing between 30 and 40 masses. We can also pick, for example, a gaugewave test (Minkowski flat spacetime with an imposed nonlinear time-dependent metric, so the dynamics are fake: pure gauge) and we will see how the harmonic codes succeed, as the gauge imposed is harmonic, whereas BSSN codes fail, as the test excludes the use of a freezing shift.

Having realised that the gauge choice is quite crucial for the existing codes, we will try to look here for a gauge polyvalent code, so we can have as wide a choice as possible. The ingredients we have used are the Z4 formulation of the Einstein equations and a powerful 3rd order finite differences algorithm that has been recently proposed (see ERE 2007 for a variant). Time discretization is handled with the Method of Lines, with a 3rd order strong-stability-preserving Runge-Kutta. We have used neither excision nor punctures, but a scalar field stuffing for the black hole interior (when doing black hole simulations). We have carried out all the simulations using Cactus for parallelisation of the code, with a single uniform grid, adapted to logarithmic coordinates when performing long runs so that we could push the boundary far away.

These are the gaugewaves we are going to use. We have an amplitude parameter: the bigger the amplitude, the stronger the nonlinearity of the test.
Looking at the gaugewave test with our code, we can clearly see the almost superposed numerical and analytical results after 1000 round trips. We can also notice that the phase error cannot be seen in the plot. This is because we are using a third order method in both space and time. On the other hand, we can see the big difference with the BSSN results, crashing after 30 round trips (although it has been reported that with 4th order FD they can go up to 80 round trips). Both have been carried out with a 10% amplitude. As the new test proposal suggests using an even bigger amplitude of 50%, we have also tried this with our code, finding that we are actually able to reach 1000 round trips with a resolution of 0.05, which is usually the highest one used in this kind of test. Here are the results. This time we can't talk about superposition, but we still find the results quite spectacular. The results for the harmonic code on the Apples with Apples web page indicate that it shows an even better behaviour, as expected in this harmonic coordinates test.

But our most exciting results are yet to come, with the single black hole test. We restrict ourselves to the singularity avoidant slicing conditions given by the Bona-Massó family, where f can be an arbitrary function of the lapse. The 2Θ term is specific to the Z4 formalism. It turns out that with a 1+log slicing condition (the popular nickname for f = 2/α) in normal coordinates (so with zero shift), we are able to evolve the black hole up to a time corresponding to 1000 black hole masses and more, with the help of a logarithmic grid (the code didn't crash at 1000; we just stopped it there). At this point we must remember that BSSN is reported to crash under these conditions between 30 and 40M. What you will see is a snapshot of a unigrid simulation in logarithmic coordinates with a suitable scale factor.
We have a plot here of the lapse function. This is the first time such a long term evolution has been obtained in normal isotropic coordinates. We can say we have proven in this way that shift is not a key ingredient for long and stable evolutions.
We can see here the role of the logarithmic coordinates: our grid extends up to R = 20M, which corresponds to 463000M in isotropic coordinates.

Still without shift, we can play with the slicing condition. We have established a sort of testbed consisting of a single black hole in isotropic coordinates with two different resolutions (0.2 and 0.1), with the boundaries set at 20M. The different slicing conditions are expressed using the Bona-Massó family of slicing conditions. We have ordered them in terms of the percentage of the initial space volume element left when they reach their limiting surface, that is, when the lapse function gets to zero. This volume element left can be obtained by integrating this equation, which is only valid for zero shift. All the conditions shown are singularity avoidant, so they all could work in principle; the difference is that some stop well before the singularity is reached and others get much closer. We consider that a slicing condition works for us if we are able to evolve until 50M, which is the same as saying that we have reached our boundaries. What we can see here is that the slicings that lead to code crashing at these resolutions are the ones whose limiting surfaces are closer to the singularity. In those cases we always see an improvement when increasing the resolution, which leads us to think that numerical errors are making us fall into the singularity when we get too close to it. It must be pointed out that the standard 1+log slicing, corresponding to the graph we have shown before, is the most favourable of the cases analysed, in the sense that we reach the limiting surface when there is still 37% of the initial space volume element left.
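The stretch quoted above (a coordinate range of R = 20M covering 463000M of isotropic radius) can be illustrated with one common choice of logarithmic radial map, r(R) = (e^(ηR) − 1)/η, which behaves like R near the origin and grows exponentially far out. Both the map and the value of η found below are assumptions for illustration; the talk does not give its exact prescription.

```python
import math

# Hypothetical logarithmic radial map (assumption, not from the talk):
# r(R) = (exp(eta*R) - 1) / eta  ~ R near the origin, exponential far out.
def r_iso(R, eta):
    return (math.exp(eta * R) - 1.0) / eta

# Bisect for the eta that would make R = 20M cover 463000M, as quoted.
lo, hi = 0.1, 2.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if r_iso(20.0, mid) < 463000.0:
        lo = mid
    else:
        hi = mid
eta = 0.5 * (lo + hi)
print(f"eta ~ {eta:.3f}, r(20M) ~ {r_iso(20.0, eta):.0f}M")
```

Whatever the exact map, the point stands: a modest logarithmic coordinate range pushes the physical outer boundary exponentially far away, which is what keeps boundary effects out of the long runs.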
We have also tried a long run with the least favourable (in terms of volume element left) of the slicings that passed the testbed, which is f = 1/α. In this case we have been able to reach 150 black hole masses before the code crashed, so we expect the slicings that reach their limiting surface earlier than this one to perform even better, although we haven't checked all of them.
After this we can ask ourselves whether we are facing another gauge specific code, this time in normal coordinates, or whether we are able to deal with some shift conditions as well.