\documentclass[11pt,twoside]{article}
\usepackage{adassconf}
\usepackage{verbatim}
\newcommand{\vecx}{{\bf x}}
\begin{document}
\paperID{P8-1}
%%%% ID=P8-1
\title{GAMMS: a Multigrid--AMR code for computing gravitational fields}
\titlemark{GAMMS: AMR for gravity }
\author{Claudio Gheller}
\affil{CINECA, via Magnanelli 6/3 - 40033 Casalecchio di Reno BO (Italy)}
\author{Flavio Sartoretto, Michele Guidolin}
\affil{Universit\`a di Venezia, Dipartimento di Informatica,
Via Torino 155, Mestre VE (Italy)}
%\author{Michele Guidolin}
%\affil{Universit\`a di Venezia, Dipartimento di Informatica,
%Via Torino 155, Mestre VE (Italy)}
\contact{Claudio Gheller}
\email{c.gheller@cineca.it}
\paindex{Gheller, C.}
\aindex{Sartoretto, F.}
\aindex{Guidolin, M.}
\authormark{Gheller, Sartoretto \& Guidolin}
\keywords{cosmology, numerical methods, parallel computing}
\begin{abstract}
This paper describes
our GAMMS (Gravitational Adaptive Mesh Multigrid Solver) code, which is
a numerical gravitational solver, based upon a multigrid (MG) algorithm and
a local refining mesh strategy (AMR, Adaptive Mesh Refinement).
MG allows for an efficient and accurate evaluation of the gravitational field,
while AMR provides high resolution computations exactly where needed.
GAMMS is an open source code which takes advantage of DAGH, an open
source parallel library based upon MPI.
Our code can be easily integrated
with both N-body and fluid-dynamics codes, in order to perform high
resolution astrophysical simulations.
\end{abstract}
\section{Introduction}
Many astrophysical problems deal with the computation of gravitational
forces.
Usually, this task can be performed only by computer simulations:
a large computational domain must be treated,
while good spatial resolution is required.
Cosmological problems are prominent examples.
Cosmological structure formation involves an extremely large
dynamical range: volumes as large as the universe ($10^3$ Mpc) are
simulated, yet one must also resolve star formation regions in galaxies,
which typically live at the 1 pc scale.
The spatial dynamical range per dimension required is thus as large as
$10^9$, well beyond the maximum value, about $10^3$, that nowadays
can be managed on supercomputers.
A solution to this range problem relies upon combining
the Multigrid (MG) method
%~\cite{bra77,SuiSaa95}
(Brandt 1977, Suisalu \& Saar 1995)
with local refinement techniques.
Multigrid solves differential equations
using a set of multilevel grids,
for an {\em optimal} $O(N)$ computational cost.
Moreover, MG allows for efficiently tracking
errors, and it can be effectively combined with local refinement techniques.
We exploited
Adaptive Mesh Refinement (AMR), which allows for
dynamically defining and managing the computational
meshes;
they can be either refined, where high resolution is required, or
coarsened, where low resolution is enough.
Hence, each interesting
physical process is treated at its proper spatial resolution,
which would be far too expensive with a uniform grid.
The ensuing code, GAMMS (Gravitational Adaptive Mesh Multigrid Solver) is
described in the sequel.
\section{Multigrid Algorithm}
Let us consider Poisson's equation
$$
\Delta u (\vecx)\ =\ F(\vecx), \quad \vecx\in \Omega,
$$
where $u(\vecx)$ is the gravitational potential, $F(\vecx)$
is the mass density field, $\Delta$ is the Laplacian operator and
$\Omega$ is the problem domain.
Our Multigrid technique estimates the solution by a Finite Difference
(FD) scheme on a uniform, square grid (base grid) on $\Omega$, thus yielding
a linear system of equations.
The problem solution is approximated by a Grid Function (GF), i.e. a value on
each mesh node.
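For instance, with a standard second--order central difference scheme on a
base grid of spacing $h$ (the stencil actually used in GAMMS may differ),
each interior node $(i,j,k)$ contributes the equation
$$
\frac{u_{i+1,j,k}+u_{i-1,j,k}+u_{i,j+1,k}+u_{i,j-1,k}
      +u_{i,j,k+1}+u_{i,j,k-1}-6\,u_{i,j,k}}{h^2}\ =\ F_{i,j,k},
$$
so that all nodal values are coupled in one large, sparse linear system.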
\begin{figure}
\epsscale{.70}
\plotone{P8-1-fig-3.eps}
\caption{Sketch of an MG V-cycle on a grid hierarchy.}
\label{P8-1-fig-3}
\end{figure}
The linear system is solved using a relaxation method, which
updates $u$ on each node by considering only the
adjacent ones. Such methods
efficiently reduce the high frequency error modes,
but perform poorly on smooth error components,
which live at a {\em whole domain} scale,
so that convergence would be slow.
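As an illustration of a single relaxation sweep, the following Python/NumPy
fragment (a minimal sketch for the two--dimensional model problem, not the
GAMMS implementation) updates every interior node from its neighbours:
\begin{verbatim}
import numpy as np

def jacobi_sweep(u, f, h):
    # One Jacobi sweep for the 2D Poisson problem Lap(u) = f on a
    # uniform grid of spacing h; boundary values are kept fixed.
    u_new = u.copy()
    u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2]
                                - h**2 * f[1:-1, 1:-1])
    return u_new
\end{verbatim}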
On the other hand, large scale smooth errors
can be read as local errors on
coarser grids, where they can be smoothed out using the same
relaxation technique.
Hence, several discretization levels of the
same continuous problem can be used to smooth out errors
at different scales.
This is the basic idea of MG,
which employs a hierarchy of coarser and coarser
grids to reduce the error at larger and
larger scales, thus providing fast convergence.
Traversing the hierarchy is called performing a {\em V-cycle}.
Figure~\ref{P8-1-fig-3} sketches a sample V-cycle.
To go up and down along the grid hierarchy, two problem--dependent operators
must be devised:
a {\em restriction operator},
which maps any GF given on a fine grid
onto the next coarser grid, and
a {\em prolongation operator}, which maps any GF given on a
coarse grid onto the next finer, one--level higher grid.
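A minimal Python sketch of these two operators, for vertex--centered 2D grids
whose sizes halve and double between levels (again an illustration, not the
GAMMS code), could read:
\begin{verbatim}
import numpy as np

def restrict(fine):
    # Injection: copy every second node onto the coarser grid
    # (full weighting, a local average, is a common alternative).
    return fine[::2, ::2].copy()

def prolong(coarse):
    # Bilinear interpolation from a coarse grid of shape (n, m)
    # onto the next finer grid of shape (2n-1, 2m-1).
    n, m = coarse.shape
    fine = np.zeros((2 * n - 1, 2 * m - 1))
    fine[::2, ::2] = coarse
    fine[1::2, ::2] = 0.5 * (coarse[:-1, :] + coarse[1:, :])
    fine[::2, 1::2] = 0.5 * (coarse[:, :-1] + coarse[:, 1:])
    fine[1::2, 1::2] = 0.25 * (coarse[:-1, :-1] + coarse[1:, :-1]
                               + coarse[:-1, 1:] + coarse[1:, 1:])
    return fine
\end{verbatim}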
The problem is eventually
solved only at a coarse grid level, the finer levels being
resolved by exploiting the prolongation operator.
Since coarsening produces
smaller algebraic systems, the computational cost
of solving a linear system on a coarse grid is low.
One can prove that MG has an $O(N)$ computational cost,
$N$ being the number of grid nodes.
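Putting the previous sketches together, one V-cycle can be written as the
recursion below (a Python toy for the 2D model problem, assuming grids of
$2^k+1$ nodes per side; GAMMS works in 3D on DAGH data structures):
\begin{verbatim}
def v_cycle(u, f, h, n_pre=2, n_post=2):
    # Pre-smooth, restrict the residual, recurse on the coarser
    # grid, prolong the correction back, post-smooth.
    for _ in range(n_pre):
        u = jacobi_sweep(u, f, h)
    if min(u.shape) <= 3:              # coarsest level reached
        return u
    r = np.zeros_like(u)               # residual r = f - Lap(u)
    r[1:-1, 1:-1] = f[1:-1, 1:-1] - (u[2:, 1:-1] + u[:-2, 1:-1]
                                     + u[1:-1, 2:] + u[1:-1, :-2]
                                     - 4.0 * u[1:-1, 1:-1]) / h**2
    rc = restrict(r)                   # error equation, coarse grid
    e = v_cycle(np.zeros_like(rc), rc, 2.0 * h, n_pre, n_post)
    u = u + prolong(e)                 # coarse-grid correction
    for _ in range(n_post):
        u = jacobi_sweep(u, f, h)
    return u
\end{verbatim}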
MG can also easily manage locally refined grids.
%where higher resolution is required, by
%using the same sequence of uniform subgrids, the finests being confined
%to increasingly smaller subdomains to produce the required local
%refinement.
\section{AMR Technique}
The AMR technique
%~\cite{BerOli84}
(Berger \& Oliger 1984)
allows for refining any region in the computational volume, by
building a nonuniform grid, which is the
union of uniform sub-grids $G^0, \ldots, G^M$, each one featuring its own
mesh size $d_k = d_{k-1}/2$, $k=1, \ldots, M$,
$d_0$ being a given value.
%These subgrids does not necessarily encompass
%the whole domain, hence
%unequal refinement levels can be exploited on each subdomain.
%\begin{figure}
%\epsscale{.70}
%\plotone{P8-1-fig-1.eps}
%\caption{A locally refined grid and related data structure.}
%\label{P8-1-fig-1}
%\end{figure}
Inside our GAMMS code, each adaptive mesh is built and controlled
using the DAGH (Distributed
Adaptive Grid Hierarchy) library
(see {\tt http:\-//www.\-caip.\-rutgers.\-edu\-/\verb=~=parashar\-/DAGH}).
DAGH provides data abstractions and all the tools required in order to define
and manage grid values on a nonuniform, dynamically evolving mesh.
DAGH is an open source package and features a parallel implementation,
based upon a standard MPI library.
%\begin{figure}
%\epsscale{.70}
%\plotone{P8-1-fig-2.eps}
%\caption{Scheme of DAGH application layers.} \label{P8-1-fig-2}
%\end{figure}
\section{Results}
At the present stage, MG and AMR are not fully combined.
We tested them separately.
\begin{figure}
\epsscale{.60}
\plotone{P8-1-tempoesecuzione2.eps} % P8-1-fig-5.eps}
\caption{CPU seconds spent by MG vs number of grid points.}
\label{P8-1-fig-5}
\end{figure}
Tests on our MG implementation proved that
the algorithm converges in a few relaxation sweeps,
when a moderate number of V-cycles is performed.
%The results can be compared
%to corresponding analytical solutions,
%giving an estimated error of few percents.
Figure \ref{P8-1-fig-5} shows that the computational cost of our MG code
increases linearly with the number of grid points, as theory predicts.
\begin{figure}
\epsscale{.60}
%\plotone{P8-1-fig-4.eps}
%\plotone{alllivelli.eps}
%\vspace*{-10em}
%\epsfig{file=P8-1-fig-4.eps}
\plottwo{P8-1-alllivelli.eps}{P8-1-strutgrid3d.eps}
\caption{Hierarchical grid structure adapted to a Gaussian
function.
Left: the Gaussian function.
Right: the final, locally refined mesh.}
\label{P8-1-fig-4}
\end{figure}
Tests on our AMR code show that it
can efficiently handle an adaptive mesh.
Figure \ref{P8-1-fig-4}
shows a three--level local refinement driven by a Gaussian function, which
was set initially on the coarsest, uniform grid.
The grid was locally refined according to the magnitude of the derivative:
larger magnitude requires finer resolution.
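Such a criterion can be sketched as follows (an illustrative Python fragment;
the actual flagging and clustering inside GAMMS is handled through DAGH and
may differ, and the threshold is a free parameter):
\begin{verbatim}
import numpy as np

def flag_for_refinement(u, h, threshold):
    # Flag nodes where the magnitude of the finite-difference
    # gradient of u exceeds a chosen threshold; flagged regions
    # are then covered by finer sub-grids.
    gx, gy, gz = np.gradient(u, h)
    return np.sqrt(gx**2 + gy**2 + gz**2) > threshold
\end{verbatim}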
%The MG
%prolongation and restriction operators, work properly also when applied
%in AMR context.
A detailed description of the codes and
tests can be found in the Degree Theses of R. Peruzzo (2003)
and U. Zuccon (2003).
%\vspace*{-2em}
\begin{references}
\reference Brandt, A. 1977, Math.\ Comp., 31, 333
\reference Suisalu, I., \& Saar, E. 1995, MNRAS, 274, 287
\reference Berger, M. J., \& Oliger, J. 1984, J. Comp.\ Phys., 53, 484
\reference Peruzzo, R. 2003, ``Object Oriented Implementation of the
AMR Technique to Calculate the Gravitational Field'',
Degree Thesis, University of Venice
\reference Zuccon, U. 2003, ``Implementation of a 3D Algorithm to
Calculate the Gravitational Field'', Degree Thesis, University of Venice
\end{references}
\end{document}