
Code Overview

Space Charge 2015, Oxford 23-27 March 2015

1

Time evolution

• 1970: Limited storage (512MB), limited computing time (12 secs), limited output, no graphics
  – develop approximation methods
    • space-charge kicks, hard-edged magnets
    • balancing computing time available with accuracy and number of particles; symplectic v. non-symplectic tracking
    • core-halo models, analytical modules
    • frozen space charge
• Now: multiprocessors, NERSC, GUIs, 10⁹ macro-particles, jobs running for months; output and store everything we can think of.

2

Why are Codes Important?

• Validate theory, give ideas for theoretical and experimental developments
• Generate the basic underlying machine design
  – sets of self-consistent parameters
  – optimise machines for performance
  – avoid resonances and instabilities, minimise non-linear effects
• Establish likely machine performance
  – predict the effect and correction of failure mechanisms
  – bracket allowable errors
  – control/reduce beam loss
  – identify beam properties on exit (e.g. to a target)
  – quantify output energy, emittance & halo at full current
• Indicate whether novel ideas are feasible
• Develop commissioning strategies
• Help us avoid making fools of ourselves

3

The Codes

• Beam optics codes
  – Transform the envelope with analytical space charge
  – Used as the basis for most tuning algorithms
• PIC dynamics codes
  – Linacs: Parmila, Parmela, Tracewin, Dynamion
  – Rings: Orbit, Simpsons, Simbad, … new Python/PTC modules
  – 10⁶ or more particles, with 3-D space charge
  – Matrix/map based: thin lens + drift + space-charge kicks (see the sketch after this slide)
  – Do a good job on core simulations
  – Agree at the few-% level with some experiments (perhaps not all)
• Integrating dynamics codes
  – Impact, Track, Tstep (Parmela), Track2d/3d
• Ray-tracing codes
  – Zgoubi, G4beamline, …
• Can now integrate ~10⁹ particles through field maps

4
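As a concrete illustration of the thin lens + drift + space-charge-kick scheme mentioned above, here is a minimal Python sketch. The linear defocusing kick, the function names and all parameter values are illustrative assumptions, not the algorithm of any particular code listed on this slide:

```python
import numpy as np

def drift(x, xp, L):
    """Paraxial drift of length L: positions advance, angles unchanged."""
    return x + L * xp, xp

def sc_kick(x, xp, dL, k_sc):
    """Thin space-charge kick accumulated over a slice of length dL.
    Illustrative linear (uniform-beam) model: defocusing proportional to x.
    Real PIC codes compute the kick from the binned particle distribution."""
    return x, xp + k_sc * dL * x

# Track one cell as interleaved half-drifts and kicks (second-order splitting)
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1e-3, 100_000)    # positions [m]
xp = rng.normal(0.0, 1e-4, 100_000)   # angles [rad]
n_slices, cell_length, k_sc = 20, 1.0, 0.05
dL = cell_length / n_slices
for _ in range(n_slices):
    x, xp = drift(x, xp, dL / 2)
    x, xp = sc_kick(x, xp, dL, k_sc)
    x, xp = drift(x, xp, dL / 2)
print(f"rms size after one cell: {x.std():.3e} m")
```

Splitting each element into half-drift / kick / half-drift keeps the map symplectic, which matters for the long runs discussed later.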

Beam Optics Codes v. Beam Tracking

Beam optics codes (example: Trace-3D)
• Matrix based, usually first order
• Hard-edge field approximation
• Space-charge forces approximated
• Beam envelopes and emittances (an envelope-equation sketch follows this slide)
• Fast; good for preliminary studies
• Simplex optimisation: limited number of fit parameters

Beam dynamics codes (example: TRACK, IMPACT)
• Particle tracking, all orders included
• 3D fields including realistic fringe fields
• Solving the Poisson equation at every step
• Actual particle distribution: core, halo, …
• Slower; good for detailed studies including errors and beam loss
• Larger-scale optimisation possible

➢ Optimisation via optics codes + added terms for specific effects
➢ Use beam dynamics codes for checking and final tweaking
  – More realistic representation of the beam, especially for high-intensity and multiple-charge-state beams (3D external fields and accurate space-charge calculation).
  – Include quantities not available from beam optics codes: minimise beam halo formation and beam loss.
  – Now possible, with faster PCs and parallel computer clusters, to use beam dynamics codes for (limited) optimisation.

5
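To make the optics-v.-tracking distinction concrete, here is a minimal sketch of what an envelope code computes: it integrates an rms envelope equation instead of tracking particles. The round-beam KV envelope equation is standard, but the integrator and every numerical value here are illustrative assumptions, not Trace-3D's actual implementation:

```python
import numpy as np

def envelope_rhs(a, ap, k0, K, eps):
    """Round-beam KV envelope equation: a'' = -k0*a + K/a + eps^2/a^3,
    with k0 the external focusing strength, K the generalized perveance
    (space charge) and eps the unnormalised emittance."""
    return ap, -k0 * a + K / a + eps**2 / a**3

# Fixed-step RK4 through a uniform focusing channel (assumed parameters)
a, ap = 2e-3, 0.0                 # envelope radius [m] and slope
k0, K, eps = 4.0, 1e-6, 1e-6
ds, n_steps = 0.01, 1000
for _ in range(n_steps):
    k1a, k1p = envelope_rhs(a, ap, k0, K, eps)
    k2a, k2p = envelope_rhs(a + ds/2*k1a, ap + ds/2*k1p, k0, K, eps)
    k3a, k3p = envelope_rhs(a + ds/2*k2a, ap + ds/2*k2p, k0, K, eps)
    k4a, k4p = envelope_rhs(a + ds*k3a, ap + ds*k3p, k0, K, eps)
    a  += ds/6 * (k1a + 2*k2a + 2*k3a + k4a)
    ap += ds/6 * (k1p + 2*k2p + 2*k3p + k4p)
print(f"envelope radius after 10 m: {a:.3e} m")
```

One ordinary differential equation per plane, rather than 10⁶ particles through a Poisson solve, is why these codes are so much faster for preliminary studies.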

Code Limitations

Main issues when modelling real machines:
• An accurate 6-D description of the initial beam particle distribution
  – beam characterisation needs plenty of diagnostics
• Magnets and their alignment can be accurately mapped
• An accurate description of the fields:
  – More codes now allow field maps; however, applying/iterating field maps may be an issue (a minimal interpolation sketch follows this slide)
  – Some fields are not known:
    – the axial RF field distribution in RFQs is not measurable
    – the RF field distribution in superconducting cavities at operating temperature
• Some diagnostic measurements are not accurate enough to provide the necessary information for the codes

6
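For flavour, a minimal sketch of one part of "applying a field map": interpolating a gridded map onto particle positions. This is plain bilinear interpolation on a 2D grid; real codes use 3D maps and often higher-order schemes, and the function name and the made-up map below are hypothetical:

```python
import numpy as np

def bilinear_field(field_map, x, y, x0, y0, dx, dy):
    """Bilinear interpolation of a 2D gridded field map at particle positions.
    field_map[i, j] holds the field value at (x0 + i*dx, y0 + j*dy)."""
    fx = (x - x0) / dx
    fy = (y - y0) / dy
    i = np.clip(fx.astype(int), 0, field_map.shape[0] - 2)
    j = np.clip(fy.astype(int), 0, field_map.shape[1] - 2)
    tx, ty = fx - i, fy - j
    return ((1-tx)*(1-ty)*field_map[i, j]   + tx*(1-ty)*field_map[i+1, j]
          + (1-tx)*ty    *field_map[i, j+1] + tx*ty    *field_map[i+1, j+1])

# Example: evaluate a made-up Gaussian-bump map at random particle positions
Bz = np.fromfunction(lambda i, j: np.exp(-((i-32)**2 + (j-32)**2)/200), (64, 64))
xs = np.random.uniform(0.0, 0.63, 10_000)
ys = np.random.uniform(0.0, 0.63, 10_000)
B_at_particles = bilinear_field(Bz, xs, ys, x0=0.0, y0=0.0, dx=0.01, dy=0.01)
```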

Assumptions

• Codes assume particles are distinguishable and moving classically, and separate the forces: space charge, external forces, image charges and currents, beam loading, wake-fields, etc.
• Internal forces are separated into space charge (long-range) and scattering (short-range).
• Relative velocities within a bunch are assumed very small, so intra-bunch forces can be treated as electrostatic, via Poisson's equation.
• Within the limits of the model, we may be confident about tracking particles over short-to-moderate time scales. But now we are aiming to model full synchrotron cycles, or start-to-end simulations of a whole accelerator complex.

7

Jeff Holmes, NA-PAC 2013

Space Charge

• Design codes
  – Repeated runs with varying parameters for optimisation; relatively simple
  – 1D longitudinal codes with space charge from the derivative of the line density (see the sketch after this list)
  – 2D transverse codes with simple space-charge kicks scaled by a density factor
  – Bunch length >> pipe radius
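A minimal sketch of the 1D longitudinal kick just described, assuming the standard on-axis model E_z ∝ -(g/γ²) dλ/dz with a geometry factor g. All function names, the lumped constant `scale` and the numbers are illustrative, not taken from any specific design code:

```python
import numpy as np

def longitudinal_sc_kick(z, dp, lam, z_grid, g_factor, gamma, scale, dt):
    """Apply the standard 1D longitudinal space-charge kick:
    E_z(z) ~ -(g / (4*pi*eps0*gamma^2)) dlambda/dz, g a geometry factor.
    'scale' lumps the physical constants and charge normalisation together."""
    dlam_dz = np.gradient(lam, z_grid)            # derivative of line density
    Ez = -scale * g_factor / gamma**2 * dlam_dz   # on-axis field on the grid
    return dp + np.interp(z, z_grid, Ez) * dt     # interpolate kick to particles

# Bin the bunch to get the line density, then kick (all numbers illustrative)
rng = np.random.default_rng(1)
z = rng.normal(0.0, 0.2, 50_000)
counts, edges = np.histogram(z, bins=128, range=(-1.0, 1.0))
centers = 0.5 * (edges[:-1] + edges[1:])
dp = longitudinal_sc_kick(z, np.zeros_like(z), counts.astype(float),
                          centers, g_factor=3.0, gamma=2.0, scale=1e-9, dt=1e-9)
```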

• Research codes
  – Detailed "final" runs to explore non-linear effects, resonances, space-charge-induced beam loss, etc.
  – General 3D simulations (essential for short bunches, e.g. linacs)
  – Space charge from Poisson's equation using direct Coulomb, FFT, multigrid, fast multipoles, finite differences, finite elements, SOR techniques, … (an FFT example follows below)

8
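As one example of the Poisson solvers listed above, here is a minimal FFT-based solve of ∇²φ = -ρ/ε₀ on a periodic 2D grid. Research codes handle open or conducting-wall boundaries (e.g. via Green's functions or capacity matrices); this periodic version is only the simplest sketch:

```python
import numpy as np

def poisson_fft_2d(rho, dx, dy, eps0=8.854e-12):
    """Solve the Poisson equation on a periodic 2D grid with FFTs:
    in Fourier space, phi_hat(k) = rho_hat(k) / (eps0 * |k|^2)."""
    nx, ny = rho.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    k2 = kx[:, None]**2 + ky[None, :]**2
    k2[0, 0] = 1.0                      # avoid division by zero in the DC mode
    phi_hat = np.fft.fft2(rho) / (eps0 * k2)
    phi_hat[0, 0] = 0.0                 # fix the mean of the potential to zero
    return np.fft.ifft2(phi_hat).real

# Usage: potential of a Gaussian charge blob on a 128x128 periodic grid
n, L = 128, 0.1
xs = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing="ij")
rho = np.exp(-((X - L/2)**2 + (Y - L/2)**2) / (2 * (L/20)**2))
phi = poisson_fft_2d(rho, dx=L/n, dy=L/n)
```

The electric field then follows from finite differences of φ, and the kick is interpolated back to the macro-particles, closing the PIC loop.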

Lest we forget…

• Consider propagation/growth of errors
  – whether a simple symplectic method is better than a more accurate non-symplectic approach (compare the two integrators sketched after this slide)
• Programming techniques
  – computers only do what we tell them to do
  – computers are not perfect
  R. W. Hockney and J. W. Eastwood, "Computer Simulation Using Particles", Institute of Physics Publishing, Bristol, 1988.
• Codes written for one particular problem/machine may not be suitable for another
  – make sure you know what is going on
  – question results
• In the search for "accuracy", do not lose sight of the underlying problem.

9
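A toy demonstration of the symplectic v. non-symplectic point: for a simple harmonic oscillator, explicit Euler (non-symplectic) gains energy without bound, while the symplectic Euler variant keeps the energy error bounded, even though both are only first-order accurate. The step size and step count are arbitrary choices for illustration:

```python
import numpy as np

def energy(x, p):
    """Hamiltonian of the unit harmonic oscillator."""
    return 0.5 * (x**2 + p**2)

x_e, p_e = 1.0, 0.0   # explicit Euler state
x_s, p_s = 1.0, 0.0   # symplectic Euler state
dt, n_steps = 0.1, 1000
for _ in range(n_steps):
    # Explicit Euler (non-symplectic): energy grows exponentially
    x_e, p_e = x_e + dt * p_e, p_e - dt * x_e
    # Symplectic Euler: update p first, then x with the new p;
    # the energy error stays bounded for arbitrarily long runs
    p_s = p_s - dt * x_s
    x_s = x_s + dt * p_s

print(f"explicit Euler energy:   {energy(x_e, p_e):.3e}  (started at 0.5)")
print(f"symplectic Euler energy: {energy(x_s, p_s):.3e}  (started at 0.5)")
```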

Goodness of Model Depends on Information Sought

• Individual particle behavior is much more difficult to converge, due to discretization-induced numerical noise. Discretization arises from time-stepping, the numerical distribution, and gridding.
• Individual particle orbits diffuse in time. Calculated tunes of individual particles are sensitive to this diffusion (see the sketch after this slide).
• For long times, convergence criteria are hard to satisfy.

Jeff Holmes, NA-PAC 2013

10
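A minimal sketch of how a single-particle tune is commonly estimated, and why noise matters: the FFT peak of noisy turn-by-turn data jitters from run to run. The toy signal, the noise level standing in for discretization noise, and the tune value are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
q_true, n_turns = 0.31, 1024
turns = np.arange(n_turns)
# Turn-by-turn position of one particle in a linear lattice, plus a small
# noise floor standing in for discretization-induced numerical noise
x = np.cos(2 * np.pi * q_true * turns) + 1e-3 * rng.standard_normal(n_turns)

# Tune = location of the dominant FFT line; with noise and orbit diffusion
# this estimate jitters, so single-particle tunes converge slowly
spectrum = np.abs(np.fft.rfft(x * np.hanning(n_turns)))
k_peak = np.argmax(spectrum[1:]) + 1          # skip the DC bin
print(f"estimated tune: {k_peak / n_turns:.4f}  (true value: {q_true})")
```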

Simulation with ~10⁹ Particles

With super-fast computers and parallel processors we can now simulate a large number of particles: the actual number, if possible.
– Suppress noise from the PIC method: enough particles/cell (see the sketch after this slide)
– More detailed simulation: better statistics, better characterisation of the beam halo

11
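A toy estimate of PIC shot noise versus particles per cell, assuming a uniform beam deposited on a 1D grid: the relative density fluctuation falls roughly as 1/√(particles per cell), which is why suppressing noise needs enough particles in every cell:

```python
import numpy as np

# Deposit a uniform beam on a grid and measure the relative density noise
rng = np.random.default_rng(3)
n_cells = 64
for n_per_cell in (10, 100, 1000):
    x = rng.uniform(0.0, 1.0, n_cells * n_per_cell)
    counts, _ = np.histogram(x, bins=n_cells, range=(0.0, 1.0))
    noise = counts.std() / counts.mean()
    print(f"{n_per_cell:5d} particles/cell -> relative noise {noise:.3f} "
          f"(expect ~{1/np.sqrt(n_per_cell):.3f})")
```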


Longitudinal Tracking of the SNS RFQ

[Figure: Phase-space plots for 8.65×10⁸ protons after 30 cells in the SNS RFQ.]


[Figures: TRACK 10⁹-particle simulation (courtesy: Mustapha, ANL); SNS measurement in the MEBT (courtesy: Jeon, SNS).]

Even 1 billion particles may not provide enough detail.

Rings Codes: Inventory

• Single-particle transport through various types of lattice elements, as well as sufficiently many particles to study collective effects
• Magnet errors, closed-orbit calculation, orbit correction
• Charge-exchange injection foil and phase-space painting
• RF and acceleration
• Longitudinal impedance and 1D longitudinal space charge
• Transverse impedance
• 2.5D space charge with or without conducting-wall beam pipe
• 3D space charge
• Field maps
• Feedback for stabilisation
• Apertures and collimation
• Electron cloud model

12

Spreadsheet of Space-Charge Codes I

| Code      | Language         | Platform               | GUI        | Parallel   | 1D/2D/3D | Particles | linacs/rings              |
|-----------|------------------|------------------------|------------|------------|----------|-----------|---------------------------|
| IMPACT    | F90              | Unix/Linux             | no         | MPI        | 3D       | > 10⁶     | linacs                    |
| ML-IMPACT | F90              | Unix/Linux/Mac         | no         | MPI        | 3D       | > 10⁶     | linacs/rings              |
| PARMILA   | F90              | Windows                | no         | no         | 2D/3D    | 10⁴-10⁵   | linacs/transfer lines     |
| GPT       | C, C++           | Windows                | yes        | MPI scans  | 3D       | 10⁶       | linacs/FEL/transfer lines |
| BEST      | F90              | Unix/Linux             | python/IDL | MPI/OpenMP | 3D       | > 10⁶     | linacs/rings              |
| VADOR     | C++              | Unix/Linux             | no         | MPI        | 2D       | n/a       | linacs                    |
| SPUNCH    | F77              | Linux                  | no         |            | 1D       | 10⁴       | LEBT                      |
| PATH      | F90              | Windows                | yes        | no         | 3D       | 10⁵       | linacs/rings              |
| TRACEWIN  | C++              | Windows                | yes        | no         | 2D/3D    | 10⁵       | linacs                    |
| DYNAC     | F77              | Linux/Unix/Windows     | no         | no         | 2D/3D    | 10⁵       | linacs                    |
| Synergia  | F90/C++/Python   | Unix                   | no         | MPI        | 3D       | > 10⁶     | linacs/rings              |
| WARP      | Python/F77/F90/C | Linux/Unix/Windows/Mac | under dev  | MPI        | 3D/rz/xy | up to 10⁸ | linacs/rings              |

13

Spreadsheet of Space-Charge Codes II

| Code      | Space-Charge Solver                                   | Boundaries/Images                                          | Impedances         | Field Maps | Integration order         |
|-----------|-------------------------------------------------------|------------------------------------------------------------|--------------------|------------|---------------------------|
| IMPACT    | spectral                                              | open/periodic/rectangular/circular                         | no                 | yes        | 2nd order in z            |
| ML-IMPACT | spectral                                              | elliptical/polygon/lossy                                   | yes                | no         | 2nd in z, 5th Runge-Kutta |
| PARMILA   |                                                       |                                                            |                    |            |                           |
| GPT       | 3D multigrid                                          | open, conductive rect. pipe, cathode                       | no                 | 2D, 3D     | 5th Runge-Kutta           |
| BEST      | spectral, FD                                          | circular conducting wall                                   | automatic/external | no         | user specified            |
| VADOR     | FFT                                                   | conductive wall, any shape                                 | no                 | no         | 2nd                       |
| SPUNCH    | exact for disc-shaped particles                       | circular conducting wall                                   | n/a                | n/a        | 1st                       |
| PATH      | Scheff, pt-to-pt                                      | open                                                       | no                 | yes        | ?                         |
| TRACEWIN  | Scheff/PICNIC/Gauss                                   | open                                                       | no                 | no         | ?                         |
| DYNAC     | Scheff/Scherm/Hersc                                   | open                                                       | no                 | yes        | 3rd analytical            |
| Synergia  | spectral (IMPACT)                                     | open/periodic/rectangular/circular                         | no                 | yes        | 2nd order in z            |
| WARP      | FFT, cap. matrix, multigrid, adaptive-mesh-refined MG | square/round pipe, internal conductors, bent pipe, general | ad hoc             | no         | 2nd order                 |

14

Spreadsheet of Space-Charge Codes III

| Code      | t or s tracking | Graphics                      | Portability                    | Source code available?  | Manual      | Standard test cases |
|-----------|-----------------|-------------------------------|--------------------------------|-------------------------|-------------|---------------------|
| IMPACT    | s               | post proc.                    | all Unix platforms with MPI    | to collaborators        | partial     | yes                 |
| ML-IMPACT | s               | post proc.                    |                                | to collaborators        | partial     | yes                 |
| GPT       | t               | built in                      | portable except user interface | all beamline components | yes         | yes                 |
| BEST      | t               | netcdf, IDL                   | any Linux                      | yes                     | no          | yes                 |
| VADOR     | s               | GNUplot, OpenDX               | any Linux                      | yes                     | almost      | yes                 |
| SPUNCH    | s               | built in                      | fully portable                 | yes                     | no          | yes                 |
| PATH      | s               | built in                      | any Windows                    | yes                     | yes         | ?                   |
| TRACEWIN  | s               | built in                      | any Windows                    | no                      | yes         | ?                   |
| DYNAC     | t, s            | GNUplot                       | fully portable                 | yes                     | yes         | yes                 |
| Synergia  | s               | post proc., Root+OpenInventor | all Linux                      | yes                     | in progress | not yet             |
| WARP      | t, s            | PyGist 2D, OpenDX 3D          | portable                       | yes                     | online      | yes                 |

15

Issues

• How valid are the initial approach and initial assumptions in PIC-type modelling?
• How far can we trust, and how useful are, codes based on frozen space charge? (A minimal frozen-kick sketch follows this slide.)
• Can we believe results from extended modelling runs (weeks or more)?
  – symplectic v. non-symplectic
  – integrated methods, drift/matrix + space-charge kicks

16
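A minimal sketch of a frozen space-charge kick, assuming a round Gaussian beam whose field is never updated from the tracked particles; that fixed-field approximation is exactly what is being questioned above. The function and parameter names are hypothetical:

```python
import numpy as np

def frozen_sc_kick(x, y, xp, yp, dL, k_perveance, sigma):
    """Frozen space-charge kick for a round Gaussian beam: the radial field
    of a FIXED distribution, E_r ~ (1 - exp(-r^2/(2 sigma^2))) / r, is
    applied over a slice dL; the distribution itself never evolves."""
    r2 = x**2 + y**2
    factor = (k_perveance * dL * (1.0 - np.exp(-r2 / (2 * sigma**2)))
              / np.maximum(r2, 1e-30))          # guard against r = 0
    return xp + factor * x, yp + factor * y

# One kick over a 0.1 m slice for a 1 mm rms frozen Gaussian beam
rng = np.random.default_rng(4)
x, y = rng.normal(0, 1e-3, 10_000), rng.normal(0, 1e-3, 10_000)
xp, yp = np.zeros_like(x), np.zeros_like(y)
xp, yp = frozen_sc_kick(x, y, xp, yp, dL=0.1, k_perveance=1e-4, sigma=1e-3)
```

Because the field never sees the evolving distribution, the method is fast and noise-free, but it cannot capture self-consistent effects such as mismatch oscillations or halo driven by the beam's own redistribution.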

RIAPMTQ

The Fortran 90 version of PARMTEQM was the basis for RIAPMTQ. The code was "parallelized" by incorporating Message Passing Interface (MPI) commands to allow the code to run in the parallel multi-processor environment. Optimization of the code using "domain decomposition" was not thought to be necessary for this initial phase; the simpler approach called "particle decomposition" was used. The most significant code modifications were required in the parallelization of the space-charge calculations. The following modifications were made to RIAPMTQ: transport and acceleration of multiple-charge-state beams (two at present, before the first stripper); an RFQ input transition cell, which is desirable when the RFQ begins with vane modulations; beam-line elements, including high-voltage platforms within the linac; interdigital accelerating structures; charge-stripper foils; capabilities for simulations of the effects of machine errors, including misalignments and other off-normal operating conditions; and automated beam steering, where the program applies the steering that is needed. We have both a PC version of the code and a parallel-processor version.

IMPACT

The IMPACT code is a parallel particle-in-cell (PIC) beam dynamics code. It has a large collection of beam-line elements, calculates the acceleration numerically using RF cavity fields obtained from electromagnetic field-solver codes, and calculates 3D space charge with several boundary conditions. The IMPACT code modifications for a heavy-ion driver linac included multiple-charge-state capability, improved modelling of bending magnets, stripping models, a beam scraper, and a multipole magnet model including a sextupole, octupole, and decapole.

MULTIPROCESSOR END-TO-END SIMULATIONS

RIAPMTQ/IMPACT end-to-end simulations using parallel processing were performed at the NERSC facility at LBNL for both the ANL and MSU RIA driver-linac designs, using 10 million particles, each beginning at the multiharmonic buncher in the LEBT with transverse 4-D waterbag distributions that are uniformly distributed in phase. The MSU and the ANL simulations were done using 16 and 32 processors, respectively. The IMPACT simulations from the end of the MEBT to the end of the linac included charge-stripping and beam-selection sections. Figures 1 and 2 show 10-million-simulation-particle final transverse phase-space results for the MSU (uranium charge states q = 87, 88, and 89) and ANL (uranium charge states q = 86, 87, 88, 89, and 90) RIA driver-linac designs, respectively.

Rather than plot all 10M particles, we employ a technique in which we impose a threshold cut in the 2D projections: in regions of the 2D phase space where the projected density is below the threshold, we plot every simulation particle; in regions where the projected density is above threshold, we use a rejection method to statistically sample the particles, so as to approximately hold the plotted density at the threshold value. This allows us to completely observe the halo without plotting a needlessly large number of particles in the beam core. For the phase-space plots below, the threshold was taken to be 0.1% of the maximum density in each projection. For illustrative purposes the plots also contain every 100th particle of the distribution. (A sketch of this thinning technique follows below.)
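A minimal sketch of the threshold/rejection plotting technique just described, assuming a simple 2D histogram for the projected density; the function name, binning and demo data are illustrative, not the authors' actual code:

```python
import numpy as np

def thin_for_plot(x, px, threshold_frac=1e-3, bins=256, rng=None):
    """Density-threshold thinning for phase-space plots: keep every particle
    in low-density regions (the halo), and accept particles in dense regions
    with probability threshold/density, so the plotted density saturates at
    the threshold value."""
    rng = rng or np.random.default_rng()
    H, xe, ye = np.histogram2d(x, px, bins=bins)
    threshold = threshold_frac * H.max()
    # Look up each particle's bin to get its local projected density
    ix = np.clip(np.searchsorted(xe, x) - 1, 0, bins - 1)
    iy = np.clip(np.searchsorted(ye, px) - 1, 0, bins - 1)
    density = H[ix, iy]
    accept = rng.random(x.size) < np.minimum(1.0, threshold / density)
    return x[accept], px[accept]

# Demo: a dense Gaussian core plus sparse tails
rng = np.random.default_rng(5)
core = rng.normal(0.0, 1.0, (1_000_000, 2))
tails = 6.0 * rng.standard_normal((1_000, 2))
pts = np.vstack([core, tails])
x_kept, px_kept = thin_for_plot(pts[:, 0], pts[:, 1], rng=rng)
```

Where the local density is below the threshold, threshold/density exceeds 1 and every particle survives; in the core, the acceptance probability falls as 1/density, capping the plotted density at the threshold, as in the description above.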

Figure 1: Transverse phase-space plot (x vs px) of the 10M-particle simulation for the three-charge-state uranium beam at the exit of the linac for the MSU RIA design. As described in the text, the plotting algorithm displays all the halo particles; the dense central region is obtained by plotting every 100th particle.

Figure 2: Transverse phase-space plot (x vs px) of the 10M-particle simulation for the five-charge-state uranium beam at the exit of the linac for the ANL RIA design. As described in the text, the plotting algorithm displays all the halo particles; the dense central region is obtained by plotting every 100th particle.

[From: Proceedings of PAC07, Albuquerque, New Mexico, USA, paper THPAS051, p. 3607.]

Codes discussion this afternoon (S. Machida)

[Figure: Transverse phase-space plot (x vs px) of the 10M-particle simulation for the five-charge-state uranium beam at the exit of the linac for the ANL RIA design.]