Thursday, June 29, 2017

Lattice 2017, Day Six

On the last day of the 2017 lattice conference, there were plenary sessions only. The first plenary session opened with a talk by Antonio Rago, who gave a "community review" of lattice QCD on new chips. New chips in the case of lattice QCD means mostly Intel's new Knights Landing architecture, to whose efficient use significant effort is devoted by the community. Different groups pursue very different approaches, from purely OpenMP-based C codes to mixed MPI/OpenMP-based codes maximizing the efficiency of the SIMD pieces using assembler code. The new NVidia Tesla Volta and Intel's OmniPath fabric also featured in the review.

The next speaker was Zohreh Davoudi, who reviewed lattice inputs for nuclear physics. While simulating heavier nuclei directly on the lattice is still infeasible, nuclear phenomenologists appear to be very excited about the first-principles lattice QCD simulations of multi-baryon systems now reaching maturity, because these can be used to tune and validate nuclear models and effective field theories, from which predictions for heavier nuclei can then be derived so that they are ultimately grounded in QCD. The biggest controversy in the multi-baryon sector at the moment is HALQCD's claim that the multi-baryon mass plateaux seen by everyone except HALQCD (who use their own method based on Bethe-Salpeter amplitudes) are probably fakes or "mirages", and that using the Lüscher method to determine multi-baryon binding would require totally unrealistic source-sink separations of over 10 fm. The volume independence of the bound-state energies determined from the allegedly fake plateaux, as contrasted with the volume dependence of the scattering-state energies so extracted, provides a fairly strong defence against this claim, however. There are also new methods to improve the signal-to-noise ratio for multi-baryon correlation functions, such as phase reweighting.
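To make the volume-dependence argument concrete, here is the standard expectation (a schematic reminder of my own, not taken from the talk): in a spatial box of size L, the finite-volume shift of a genuine bound-state energy is exponentially small, while a near-threshold scattering state is shifted by a power of 1/L controlled by the scattering length a,

\[
\Delta E_{\text{bound}}(L) \sim \frac{e^{-\kappa L}}{L}, \qquad
\Delta E_{\text{scatt}}(L) \simeq \frac{4\pi a}{m L^3}\left[1 + \mathcal{O}\!\left(\frac{a}{L}\right)\right],
\]

with κ the binding momentum and m the baryon mass (signs and prefactors depend on conventions). A plateau energy that does not move as the volume is changed therefore behaves like a bound state rather than a misidentified scattering state.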

This was followed by a talk on the tetraquark candidate Zc(3900) by Yoichi Ikeda, who spent a large part of his talk reiterating the HALQCD claim that the Lüscher method requires unrealistically large time separations. During the questions, William Detmold raised the important point that there would be no excited-state contamination at all if the interpolating operator created an eigenstate of the QCD Hamiltonian, and that for improved interpolating operators (such as those generated by the variational method) one can get rather close to this situation, so that the HALQCD criticism seems hardly applicable. As for the Zc(3900), HALQCD find it to be not a resonance, but a kinematic cusp, although this conclusion is based on simulations at rather heavy pion masses (mπ > 400 MeV).

The final plenary session was devoted to the anomalous magnetic moment of the muon, which is perhaps the most pressing topic for the lattice community, since the new (g-2) experiment is now running, and theoretical predictions matching the improved experimental precision will be needed soon. The first speaker was Christoph Lehner, who presented RBC/UKQCD's efforts to determine the hadronic vacuum polarization contribution to aμ with high precision. The strategy for this consists of two main ingredients: one is to minimize the statistical and systematic errors of the lattice calculation by using a full-volume low-mode average via a multigrid Lanczos method, explicitly including the leading effects of strong isospin breaking and QED, and the contribution from disconnected diagrams, and the other is to combine lattice and phenomenology to take maximum advantage of their respective strengths. This is achieved by using the time-momentum representation with a continuum correlator reconstructed from the R-ratio, which turns out to be quite precise at large times, but more uncertain at shorter times, which is exactly the opposite of the situation for the lattice correlator. Using a window which continuously switches over from the lattice to the continuum at time separations around 1.2 fm then minimizes the overall error on aμ.
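As a rough illustration of the window idea (a schematic sketch of my own, not RBC/UKQCD's code; the correlator arrays, the kernel and the window parameters below are placeholders), one can smoothly interpolate between the lattice correlator at short Euclidean times and the R-ratio-reconstructed correlator at long times before summing against a kernel in the time-momentum representation:

```python
import numpy as np

def window(t, t0=1.2, delta=0.15):
    """Smooth switch: ~1 for t well below t0, ~0 well above it (times in fm).
    A generic tanh window; not necessarily the exact functional form used by RBC/UKQCD."""
    return 0.5 * (1.0 - np.tanh((t - t0) / delta))

def a_mu_combined(t, C_lat, C_R, kernel):
    """Combine the lattice correlator (precise at short times) with the R-ratio
    reconstruction (precise at long times) and sum against a (placeholder) kernel."""
    w = window(t)
    C = w * C_lat + (1.0 - w) * C_R
    return np.sum(kernel * C)
```

Shifting t0 and delta and watching how the total error responds is then a direct way of locating the crossover that minimizes the combined uncertainty.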

The last plenary talk was given by Gilberto Colangelo, who discussed the new dispersive approach to the hadronic light-by-light scattering contribution to aμ. Up to now the theory results for this small, but important, contribution have been based on models, which will always have an a priori unknown and irreducible systematic error, although lattice efforts are beginning to catch up. For a dispersive approach based on general principles such as analyticity and unitarity, the hadronic light-by-light tensor first needs to be Lorentz decomposed, which gives 138 tensor structures, of which 136 are independent; gauge invariance permits only 54 of these, of which 7 are distinct, with the rest related by crossing symmetry. Care has to be taken to choose the tensor basis such that there are no kinematic singularities. A master formula in terms of 12 linear combinations of these components has been derived by Gilberto and collaborators, and using one- and two-pion intermediate states (and neglecting the rest) in a systematic fashion, they have been able to produce a model-independent theory result with small uncertainties based on experimental data for pion form factors and scattering amplitudes.

The closing remarks were delivered by Elvira Gamiz, who advised participants that the proceedings deadline of 18 October will be strict, because this year's proceedings will not be published in PoS, but in EPJ Web of Conferences, who operate a much stricter deadline policy. Many thanks to Elvira for organizing such a splendid lattice conference! (I can appreciate how much work that is, and I think you should have received far more applause.)

Huey-Wen Lin invited the community to East Lansing, Michigan, USA, for the Lattice 2018 conference, which will take place 22-28 July 2018 on the campus of Michigan State University.

The IAC announced that Lattice 2019 will take place in Wuhan, China.

And with that the conference ended. I stayed in Granada for a couple more days of sightseeing and relaxation, but the details thereof will be of legitimate interest only to a very small subset of my readership (whom I keep updated via different channels), and I therefore conclude my coverage and return the blog to its accustomed semi-hiatus state.


Sunday, June 25, 2017

Lattice 2017, Day Five

The programme for today took account of the late end of the conference dinner in the early hours of the day, by moving the plenary sessions by half an hour. The first plenary talk of the day was given by Ben Svetitsky, who reviewed the status of BSM investigations using lattice field theory. An interesting point Ben raised was that these studies go not so much "beyond" the Standard Model (like SUSY, dark matter, or quantum gravity would), but "behind" or "beneath" it by seeking a deeper explanation of the seemingly unnaturally small Higgs mass, flavour hierarchies, and other unreasonable-looking features of the SM. The original technicolour theory is quite dead, being Higgsless, but "walking" technicolour models are an area of active investigation. These models have a β-function that comes close to zero at some large coupling, leading to an almost conformal behaviour near the corresponding IR almost-fixed point. In such almost conformal theories, a light scalar (i.e. the Higgs) could arise naturally as the pseudo-Nambu-Goldstone boson of the approximate dilatation symmetry of the theory. A range of different gauge groups, numbers of flavours, and fermion representations are being investigated, with the conformal or quasi-conformal status of some of these being apparently controversial. An alternative approach to Higgs compositeness has the Higgs appear as the exact Nambu-Goldstone boson of some spontaneous symmetry breaking which keeps SU(2)_L⨯U(1) intact, with the Higgs potential being generated at the loop level by the coupling to the SM sector. There are also some models of this type being actively investigated.
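For orientation (a schematic reminder of my own, in a common convention, not a formula from the talk), the mechanism can be read off from the two-loop running of the coupling,

\[
\mu \frac{dg}{d\mu} \;=\; -\frac{g^3}{16\pi^2}\left(b_0 + b_1\,\frac{g^2}{16\pi^2}\right) + \dots ,
\]

where for suitable gauge groups, flavour numbers and fermion representations one has b_0 > 0 but b_1 < 0, so that the β-function (nearly) vanishes at g_*^2 ≈ -16π^2 b_0/b_1. If chiral symmetry breaking sets in just before this would-be IR fixed point is reached, the coupling "walks" slowly over a wide range of scales instead of running, which is the behaviour these lattice studies try to establish or rule out.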

The next plenary speaker was Stefano Forte, who reviewed the status and prospects of determining the strong coupling αs from sources other than the lattice. The PDG average for αs is a weighted average of six values, four of which are the pre-averages of the determinations from the lattice, from τ decays, from jet rates and shapes, and from parton distribution functions, and two of which are the determinations from the global electroweak fit and from top production at the LHC. Each of these channels has its own systematic issues, and one problem can be that overaggressive error estimates give too much weight to the corresponding determination, leading to statistically implausible scatter of results in some channels. It should be noted, however, that the lattice results are all quite compatible, with the most precise results by ALPHA and by HPQCD (which use different lattice formulations and completely different analysis methods) sitting right on top of each other.
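To illustrate the point about overaggressive error estimates (a toy example of my own with made-up numbers, not the PDG's actual inputs): in an inverse-variance weighted average, a single determination with an underestimated error can end up dominating the result.

```python
import numpy as np

def weighted_average(values, errors):
    """Inverse-variance weighted average and its naive uncertainty."""
    values, errors = np.asarray(values), np.asarray(errors)
    w = 1.0 / errors**2
    mean = np.sum(w * values) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    return mean, err

# four fictitious alpha_s-like determinations; the last error may be too optimistic
vals = [0.1185, 0.1179, 0.1183, 0.1192]
errs = [0.0012, 0.0010, 0.0011, 0.0002]

print(weighted_average(vals, errs))          # dominated by the last entry
print(weighted_average(vals[:3], errs[:3]))  # without it, the mean shifts noticeably
```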

This was followed by a presentation by Thomas Korzec of the determination of αs by the ALPHA collaboration. I cannot really attempt to do justice to this work in a blog post, so I encourage you to look at their paper. By making use of both the Schrödinger functional and the gradient flow coupling in finite volume, they are able to non-perturbatively run αs between hadronic and perturbative scales with high accuracy.

After the coffee break, Erhard Seiler reviewed the status of the complex Langevin method, which is one of the leading methods for simulating actions with a sign problem, e.g. at finite chemical potential or with a θ term. Unfortunately, it is known that the complex Langevin method can sometimes converge to wrong results, and this can be traced to the complexification violating the conditions under which the (real) Langevin method is justified; the most important failure mode seems to be the development of zeros of e^{-S}, which give rise to poles in the drift force and violate ergodicity. There seems to be a lack of general theorems for situations like this, although the complex Langevin method has apparently been shown to be correct under certain difficult-to-check conditions. One of the best hopes for simulating with complex Langevin seems to be the dynamical stabilization proposed by Benjamin Jäger and collaborators.
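For readers who have not seen the method in action, here is a minimal toy sketch (my own, not from the talk): for the one-dimensional "action" S(x) = ½σx² with complex σ, one complexifies x → z and evolves a Langevin equation with a real noise term; when the method converges correctly, long-time averages of holomorphic observables reproduce the complex-weight expectation values, e.g. ⟨z²⟩ → 1/σ.

```python
import numpy as np

rng = np.random.default_rng(1)

sigma = 1.0 + 1.0j              # complex coupling: S(z) = 0.5 * sigma * z**2
dt, nsteps, ntherm = 5e-4, 400_000, 20_000

z = 0.0 + 0.0j
samples = []
for n in range(nsteps):
    drift = -sigma * z          # -dS/dz, evaluated at the complexified variable
    z = z + drift * dt + np.sqrt(2.0 * dt) * rng.normal()   # real Gaussian noise
    if n >= ntherm:
        samples.append(z * z)

print(np.mean(samples), "expected:", 1.0 / sigma)
```

In this Gaussian case the method works; the pathologies Erhard discussed arise for actions whose exp(-S) has zeros, where the drift develops poles.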

This was followed by Paulo Bedaque discussing the prospects of solving the sign problem using the method of thimbles and related ideas. As far as I understand, thimbles are permissible integration regions in complexified configuration space on which the imaginary part of the action is constant, and which can thus be integrated over without a sign problem. A holomorphic flow that is related both to the gradient flow and the Hamiltonian flow can be constructed so as to flow from the real integration region to the thimbles, and based on this it appears to have become possible to solve some toy models with a sign problem, even going so far as to perform real-time simulations in the Keldysh-Schwinger formalism in Euclidean space (if I understood correctly).
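As a small illustration of the flow (again a toy sketch of my own, not Paulo's formulation): for a holomorphic action S(z), the flow dz/dτ = conj(S'(z)) keeps Im S constant along each trajectory while monotonically increasing Re S, so points started on the real axis are driven towards the thimbles attached to the critical points of S.

```python
import numpy as np

def flow(z, dSdz, tau=3.0, nsteps=3000):
    """Holomorphic gradient flow dz/dtau = conj(S'(z)) (simple Euler steps)."""
    dt = tau / nsteps
    for _ in range(nsteps):
        z = z + dt * np.conjugate(dSdz(z))
    return z

# toy action with a complex source term that would cause a sign problem on the real axis
lam = 0.8
S    = lambda z: 0.5 * z**2 + 1j * lam * z
dSdz = lambda z: z + 1j * lam

z0 = np.linspace(-3.0, 3.0, 7).astype(complex)   # points on the original (real) contour
zf = flow(z0, dSdz)

print(np.round(S(zf).imag - S(z0).imag, 6))  # ~0: Im S is preserved along each trajectory
print(np.round(zf.imag, 3))                  # imaginary parts approach -lam, the thimble of this toy action
```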

In the afternoon, there was a final round of parallel sessions, one of which was again dedicated to the anomalous magnetic moment of the muon, this time focusing on the very difficult hadronic light-by-light contribution, for which the Mainz group has some very encouraging first results.

Friday, June 23, 2017

Lattice 2017, Days Three and Four

Wednesday was the customary short day, with parallel sessions in the morning, and time for excursions in the afternoon. I took the "Historic Granada" walking tour, which included visits to the Capilla Real and the very impressive Cathedral of Granada.

The first plenary session of today had a slightly unusual format in that it was a kind of panel discussion on the topic of axions and QCD topology at finite temperature.

After a brief outline by Mikko Laine, the session chair, the session started off with a talk by Guy Moore on the role of axions in cosmology and the role of lattice simulations in this context. Axions arise in the Peccei-Quinn solution to the strong CP problem and are a potential dark matter candidate. Guy presented some of his own real-time lattice simulations in classical field theory for axion fields, which exhibit the annihilation of cosmic-string-like vortex defects and associated axion production, and pointed out the need for accurate lattice QCD determinations of the topological susceptibility in the temperature range of 500-1200 MeV in order to fix the mass of the axion more precisely from the dark matter density (assuming that dark matter consists of axions).

The following talks were all fairly short. Claudio Bonati presented algorithmic developments for simulations of the topological properties of high-temperature QCD. The long autocorrelations of the topological charge at small lattice spacing are a problem. Metadynamics, which biases the Monte Carlo evolution in a non-Markovian manner so as to sample the configuration space more efficiently, appears to be of help.
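In case the technique is unfamiliar, metadynamics accumulates a history-dependent bias potential in a chosen collective variable (here a continuous proxy of the topological charge), progressively disfavouring already-visited values and thereby pushing the simulation across the barriers between topological sectors. Below is a minimal toy sketch of my own (Metropolis updates of a single charge-like variable, not an actual QCD code):

```python
import numpy as np

rng = np.random.default_rng(0)

def bias(q, centers, w=0.1, width=0.5):
    """History-dependent bias: one Gaussian deposited at each recorded value of Q."""
    if not centers:
        return 0.0
    c = np.asarray(centers)
    return w * np.sum(np.exp(-0.5 * ((q - c) / width) ** 2))

def action(q):
    return 0.5 * (q / 2.0) ** 2      # toy "action" for the collective variable Q

q, centers, history = 0.0, [], []
for step in range(20_000):
    q_new = q + rng.normal(0.0, 0.5)
    dS = (action(q_new) + bias(q_new, centers)) - (action(q) + bias(q, centers))
    if rng.random() < np.exp(-dS):
        q = q_new
    if step % 50 == 0:
        centers.append(q)            # deposit a new Gaussian every 50 updates
    history.append(q)

# once the bias has (approximately) converged, measurements are corrected for it,
# e.g. by reweighting each sample with exp(+bias(q, centers))
```

The same idea, applied to the topological charge of the gauge field, is what lets the fine-lattice-spacing simulations tunnel between sectors at a useful rate.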

Hidenori Fukaya reviewed the question of whether U(1)_A remains anomalous at high temperature, which he claimed (both on theoretical grounds and on the basis of numerical simulation results) it does not. I didn't quite understand this, since as far as I understand the axial anomaly, it is an operator identity, which will remain true even if both sides of the identity were to happen to vanish at high enough temperature, which is all that seemed to be shown; but this may just be my ignorance showing.

Tamas Kovacs showed recent results on the temperature-dependence of the topological susceptibility of QCD. By a careful choice of algorithms based on physical considerations, he could measure the topological susceptibility over a wide range of temperatures, showing that it becomes tiny at large temperature.

Then the speakers all sat on the stage as a panel and fielded questions from the audience. Perhaps it might have been a good idea to somehow force the speakers to engage each other; as it was, the advantage of this format over simply giving each speaker a longer time for answering questions didn't immediately become apparent to me.

After the coffee break, things returned to the normal format. Boram Yoon gave a review of lattice determinations of the neutron electric dipole moment. Almost any BSM source of CP violation must show up as a contribution to the neutron EDM, which is therefore a very sensitive probe of new physics. The very strong experimental limits on any possible neutron EDM imply e.g. |θ| < 10^{-10} in QCD through lattice measurements of the effects of a θ term on the neutron EDM. Similarly, limits can be put on any quark EDMs or quark chromoelectric dipole moments. The corresponding lattice simulations have to deal with sign problems, and the usual techniques (Taylor expansions, simulations at complex θ) are employed to get past this, and seem to be working very well.
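The Taylor-expansion trick can be stated compactly (a standard manipulation, summarized here by me rather than quoted from the talk): the θ term enters the Euclidean path integral as a phase e^{iθQ}, with Q the topological charge, so for small θ the CP-odd observables reduce to correlations with Q evaluated on ordinary θ = 0 ensembles,

\[
\langle O \rangle_\theta \;=\; \frac{\langle O\, e^{i\theta Q}\rangle_0}{\langle e^{i\theta Q}\rangle_0}
\;=\; \langle O \rangle_0 \;+\; i\theta \left(\langle O Q\rangle_0 - \langle O\rangle_0 \langle Q \rangle_0\right) \;+\; \mathcal{O}(\theta^2),
\]

so the leading effect of the θ term on the neutron EDM can be extracted without sampling a complex weight.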

The next plenary speaker was Phiala Shanahan, who showed recent results regarding the gluon structure of hadrons and nuclei. This line of research is motivated by the prospect of an electron-ion collider that would be particularly sensitive to the gluon content of nuclei. For gluonic contributions to the momentum and spin decomposition of the nucleon, there are some fresh results from different groups. For the gluonic transversity, Phiala and her collaborators have performed first studies in the φ system. The gluonic radii of small nuclei have also been looked at, with no deviation from the single-nucleon case visible at the present level of accuracy.

The 2017 Kenneth Wilson Award was awarded to Raúl Briceño for his groundbreaking contributions to the study of resonances in lattice QCD. Raúl has been deeply involved both in the theoretical developments behind extending the reach of the Lüscher formalism to more and more complicated situations, and in the numerical investigations of resonance properties rendered possible by those developments.

After the lunch break, there were once again parallel sessions, two of which were dedicated entirely to the topic of the hadronic vacuum polarization contribution to the anomalous magnetic moment of the muon, which has become one of the big topics in lattice QCD.

In the evening, the conference dinner took place. The food was excellent, and the Flamenco dancers who arrived at midnight (we are in Spain after all, where it seems dinner never starts before 9pm) were quite impressive.

Tuesday, June 20, 2017

Lattice 2017, Day Two

Welcome back to our blog coverage of the Lattice 2017 conference in Granada.

Today's first plenary session started with an experimental talk by Arantza Oyanguren of the LHCb collaboration on B decay anomalies at LHCb. LHCb have amassed a huge number of b-bbar pairs, which allow them to search for and study in some detail even the rarest of decay modes, and they are of course still collecting more integrated luminosity. Readers of this blog will likely recall the B_s → μ^+μ^- branching ratio result from LHCb, which agreed with the Standard Model prediction. In the meantime, there are many similar results for branching ratios that do not agree with Standard Model predictions at the 2-3σ level, e.g. the ratios of branching fractions like Br(B^+→K^+μ^+μ^-)/Br(B^+→K^+e^+e^-), in which lepton flavour universality appears to be violated. Global fits to data in these channels appear to favour the new physics hypothesis, but one should be cautious because of the "look-elsewhere" effect: when studying a very large number of channels, some will show an apparently significant deviation simply by statistical chance. On the other hand, it is very interesting that all the evidence indicating potential new physics (including the anomalous magnetic moment of the muon and the discrepancy between the muonic and electronic determinations of the proton electric charge radius) involves differences between processes involving muons and analogous processes involving electrons, an observation I'm sure model-builders have made a long time ago.

This was followed by a talk on flavour physics anomalies by Damir Bečirević. Expanding on the theoretical interpretation of the anomalies discussed in the previous talk, he explained how the data seem to indicate a violation of lepton flavour universality at a level where the new-physics contribution to the Wilson coefficient C_9 in the effective Hamiltonian is around zero for electrons, and around -1 for muons. Experimental data seem to favour the situation where C_10 = -C_9, which can be accommodated in certain models with a Z' boson coupling preferentially to muons, or in certain special leptoquark models with corrections at the loop level only. Since I have little (or rather no) expertise in phenomenological model-building, I have no idea how likely these explanations are.

The next speaker was Xu Feng, who presented recent progress in kaon physics simulations on the lattice. The "standard" kaon quantities, such as the kaon decay constant or f_+(0), are by now very well-determined from the lattice, with overall errors at the sub-percent level, but beyond that there are many important quantities, such as the CP-violating amplitudes in K → ππ decays, that are still poorly known and very challenging. RBC/UKQCD have been leading the attack on many of these observables, and have presented a possible explanation of the ΔI=1/2 rule, which consists in non-perturbative effects making the amplitude A_0 much larger relative to A_2 than what would be expected from naive colour counting. Making further progress on long-distance contributions to the K_L-K_S mass difference or ε_K will require working at the physical pion mass and treating the charm quark with good control of discretization effects. For some processes, such as K_L→π^0ℓ^+ℓ^-, even a determination of the sign of the coefficient would be valuable.

After the coffee break, Luigi Del Debbio talked about parton distributions in the LHC era. The LHC data reduce the error on the NNLO PDFs by around a factor of two in the intermediate-x region. Conversely, the theory errors coming from the PDFs are a significant part of the total error from the LHC on Higgs physics and BSM searches. In particular the small-x and large-x regions remain quite uncertain. On the lattice, PDFs can be determined via quasi-PDFs, in which the Wilson line inside the non-local bilinear runs along a spatial direction rather than a light-like direction. However, there are still theoretical issues to be settled in order to ensure that the renormalization and matching to the continuum really lead to the determination of continuum PDFs in the end.
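Schematically (a reminder of my own; conventions for the phase and the Dirac structure vary between papers), the quasi-PDF is defined from a purely spatial, equal-time matrix element in a boosted nucleon,

\[
\tilde q(x, P_z) \;=\; \int \frac{dz}{4\pi}\, e^{\,i x P_z z}\,
\langle P |\, \bar\psi(z)\, \gamma^z\, W(z,0)\, \psi(0)\, | P \rangle ,
\]

where W(z,0) is a straight Wilson line along the z direction; the light-cone PDF is recovered in the limit of large nucleon momentum P_z after renormalization and perturbative matching, which is where the open theoretical questions mentioned above arise.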

Next was a talk about chiral perturbation theory results on the multi-hadron-state contamination of nucleon observables by Oliver Bär. It is well known that until very recently, lattice calculations of the nucleon axial charge underestimated its value relative to experiment, and this has been widely attributed to excited-state effects. Now, Oliver has calculated the corrections from nucleon-pion states on the extraction of the axial charge in chiral perturbation theory, and has found that they actually should lead to an overestimation of the axial charge from the plateau method, at least for source-sink separations above 2 fm, where ChPT is applicable. Similarly, other nucleon charges should be overestimated by 5-10%. Of course, nobody is currently measuring in that distance regime, and so it is quite possible that higher-order corrections or effects not captured by ChPT overcompensate this and lead to an underestimation, which would however mean that there is some intermediate source-sink separation at which one gets the experimental result by accident, as it were.

The final plenary speaker of the morning was Chia-Cheng Chang, who discussed progress towards a precise lattice determination of the nucleon axial charge, presenting the results the CalLAT collaboration has obtained using what they refer to as the Feynman-Hellmann method, a novel way of implementing what is essentially the summation method through ideas based on the Feynman-Hellmann theorem (but which doesn't involve simulating with a modified action, as a straightforward application of the Feynman-Hellmann theorem would demand).
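For context (a standard statement of the theorem, not CalLAT's specific implementation): if the Hamiltonian is perturbed as H(λ) = H + λO, the Feynman-Hellmann theorem gives

\[
\left.\frac{\partial E_\lambda}{\partial \lambda}\right|_{\lambda=0} \;=\; \langle \psi |\, O\, | \psi \rangle ,
\]

so a matrix element such as the axial charge can be obtained from the response of an energy level (in practice, from a derivative of correlation functions that plays the role of the summed three-point function) rather than from an explicit three-point-function plateau.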

After the lunch break, there were parallel sessions, and in the evening, the poster session took place. A particularly interesting and entertaining contribution was a quiz about women's contributions to physics and computer science, the winner of which will receive a bottle of wine and a book.

Monday, June 19, 2017

Lattice 2017, Day One

Hello from Granada and welcome to our coverage of the 2017 lattice conference.

After welcome addresses by the conference chair, a representative of the government agency in charge of fundamental research, and the rector of the university, the conference started off in a somewhat sombre mood with a commemoration of Roberto Petronzio, a pioneer of lattice QCD, who passed away last year. Giorgio Parisi gave a memorial talk summarizing Roberto's many contributions to the development of the field, from his early work on perturbative QCD and the parton model, through his pioneering contributions to lattice QCD back in the days of small quenched lattices, to his recent work on partially twisted boundary conditions and on isospin breaking effects, which is very much at the forefront of the field at the moment, not to omit Roberto's role as director of the Italian INFN in politically turbulent times.

This was followed by a talk by Martin Lüscher on stochastic locality and master-field simulations of very large lattices. The idea of a master-field simulation is based on the observation of volume self-averaging, i.e. that the variance of volume-averaged quantities is much smaller on large lattices (intuitively, this would be because an infinitely-extended properly thermalized lattice configuration would have to contain any possible finite sub-configuration with a frequency corresponding to its weight in the path integral, so that a large enough typical lattice configuration is itself a sort of ensemble). A master field is then a huge (e.g. 256^4) lattice configuration, on which volume averages of quantities are computed; these have an expectation value equal to the QCD expectation value of the quantity in question, and a variance which can be estimated using a double volume sum that is doable using an FFT. To generate such huge lattices, algorithms with global accept-reject steps (like HMC) are unsuitable, because ΔH grows with the square root of the volume, but stochastic molecular dynamics (SMD) can be used, and it has been rigorously shown that for short enough trajectory lengths SMD converges to a unique stationary state even without an accept-reject step.
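To make the variance estimate concrete, here is a schematic numpy sketch of my own (for a real scalar local observable on a periodic lattice; not Lüscher's code): the error of the volume average follows from the volume sum of the connected two-point function of the observable, which an FFT delivers at a cost of order V log V, with the sum restricted to a neighbourhood of the origin because stochastic locality makes the tail negligible.

```python
import numpy as np

def master_field_error(O, R=4):
    """Volume average of a local observable O(x) on a periodic lattice and its error,
    estimated from the connected correlator summed over |y| <= R around the origin."""
    V = O.size
    mean = O.mean()
    dO = O - mean
    Ok = np.fft.fftn(dO)
    corr = np.fft.ifftn(Ok * np.conj(Ok)).real / V        # C(y) = (1/V) sum_x dO(x) dO(x+y)
    # periodic distance of each lattice point y from the origin
    axes = [np.minimum(np.arange(L), L - np.arange(L)) for L in O.shape]
    dist2 = sum(c.astype(float) ** 2 for c in np.meshgrid(*axes, indexing="ij"))
    var_of_mean = corr[dist2 <= R ** 2].sum() / V         # Var(mean) ~ (1/V) sum_{|y|<=R} C(y)
    return mean, np.sqrt(max(var_of_mean, 0.0))

# toy usage: a random field instead of measurements on an actual master-field configuration
field = np.random.default_rng(0).normal(size=(16, 16, 16, 16))
print(master_field_error(field))
```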

After the coffee break, yet another novel simulation method was discussed by Ignacio Cirac, who presented techniques to perform quantum simulations of QED and QCD on a lattice. While quantum computers of the kind that would render RSA-based public-key cryptography irrelevant remain elusive at the moment, the idea of a quantum simulator (which is essentially an analogue quantum computer), which goes back to Richard Feynman, can already be realized in practice: optical lattices allow trapping atoms on lattice sites while fine-tuning their interactions so as to model the couplings of some other physical system, which can thus be simulated. The models that are typically simulated in this way are solid-state models such as the Hubbard model, but it is of course also possible to set up a quantum simulator for a lattice field theory that has been formulated in the Hamiltonian framework. In order to model a gauge theory, it is necessary to model the gauge symmetry by some atomic symmetry such as angular momentum conservation, and this has been done at least in theory for QED and QCD. The Schwinger model has been studied in some detail. The plaquette action for d>1+1 additionally requires a four-point interaction between the atoms modelling the link variables, which can be realized using additional auxiliary variables, and non-abelian gauge groups can be encoded using multiple species of bosonic atoms. A related theoretical tool that is still in its infancy, but shows significant promise, is the use of tensor networks. This is based on the observation that for local Hamiltonians the entanglement between a region and its complement grows only as the surface of the region, not its volume, so only a small corner of the total Hilbert space is relevant; this allows one to write the coefficients of the wavefunction in a basis of local states as a contraction of tensors, from which classical algorithms that scale much better than the exponential growth in the number of variables that would naively be expected can be derived. Again, the method has been successfully applied to the Schwinger model, but higher dimensions are still challenging, because the scaling, while not exponential, still becomes very bad.
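To give a flavour of the tensor-network idea (a generic, minimal matrix-product-state sketch of my own, not tied to any particular Schwinger-model study): the 2^N coefficients of a state of N two-level sites are encoded in N small tensors, and expectation values are obtained by contracting them site by site, at a cost polynomial in N and in the bond dimension rather than exponential in N.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, D = 10, 2, 8     # number of sites, local (physical) dimension, bond dimension

# random MPS tensors A[n] with shape (D_left, d, D_right), open boundary conditions
dims = [1] + [D] * (N - 1) + [1]
A = [rng.normal(size=(dims[n], d, dims[n + 1])) for n in range(N)]

def expectation(A, op, site):
    """<psi| op_site |psi> / <psi|psi>, contracting the MPS from left to right."""
    def transfer(E, An, O=None):
        # apply one site to the left environment E (bra on top, ket with optional operator below)
        B = An if O is None else np.einsum('ab,lbr->lar', O, An)
        return np.einsum('lm,lar,mas->rs', E, An, B)
    E_num = E_den = np.ones((1, 1))
    for n in range(len(A)):
        E_num = transfer(E_num, A[n], op if n == site else None)
        E_den = transfer(E_den, A[n])
    return E_num[0, 0] / E_den[0, 0]

sz = np.diag([1.0, -1.0])                  # Pauli-z acting on a single site
print(expectation(A, sz, site=3))
```

The exponential wall only reappears when the entanglement, and hence the required bond dimension, grows too quickly, which is what currently limits such methods in more than one spatial dimension.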

Staying with the topic of advanced simulation techniques, the next talk was Leonardo Giusti speaking about the block factorization of fermion determinants into local actions for multi-boson fields. By decomposing the lattice into three pieces, of which the middle one separates the other two by a distance Δ large enough to render e^{-M_πΔ} small, and by applying a domain decomposition similar to the one used in Lüscher's DD-HMC algorithm to the Dirac operator, Leonardo and collaborators have been able to derive a multi-boson algorithm that allows one to perform multilevel integration with dynamical fermions. For hadronic observables, the quark propagator also needs to be factorized, which Leonardo et al. have also achieved, making a significant decrease in statistical error possible.

After the lunch break there were parallel sessions, in one of which I gave my own talk and another one of which I chaired, thus finishing all of my duties other than listening (and blogging) on day one.

In the evening, there was a reception followed by a special guided tour of the truly stunning Alhambra (which incidentally contains a great many colourful - and very tasteful - lattices in the form of ornamental patterns).