Tuesday, September 15, 2015

Fundamental Parameters from Lattice QCD, Last Days

The last few days of our scientific programme were quite busy for me, since I had agreed to give the summary talk on the final day. I therefore did not get around to blogging, and will keep this much-delayed summary rather short.

On Wednesday, we had a talk by Michele Della Morte on non-perturbatively matched HQET on the lattice and its use to extract the b quark mass, and a talk by Jeremy Green on the lattice measurement of the nucleon strange electromagnetic form factors (which are purely disconnected quantities).

On Thursday, Sara Collins gave a review of heavy-light hadron spectra and decays, and Mike Creutz presented arguments for why the question of whether the up-quark is massless is scheme dependent (because the sum and difference of the light quark masses are protected by symmetries, but will in general renormalize differently).

On Friday, I gave the summary of the programme. The main themes I identified were the question of how to estimate systematic errors and how to treat them in averaging procedures, the issues of isospin breaking and scale-setting ambiguities as major obstacles on the way to sub-percent overall precision, and the need for improved communication between the "producers" and "consumers" of lattice results. In the closing discussion, the point was raised that for groups like CKMfitter and UTfit the correlations between different lattice quantities are very important, and that lattice collaborations should, wherever possible, provide the covariance matrices of the final results for the different observables that they publish.

Wednesday, September 09, 2015

Fundamental Parameters from Lattice QCD, Day Seven

Today's programme featured two talks about the interplay between the strong and the electroweak interactions. The first speaker was Gregorio Herdoíza, who reviewed the determination of hadronic corrections to electroweak observables. In essence these determinations are all very similar to the determination of the leading hadronic correction to (g-2)_μ, since they involve the lattice calculation of the hadronic vacuum polarisation. In the case of the electromagnetic coupling α, its low-energy value is known to a precision of 0.3 ppb, but the value of α(m_Z²) is known only to about 0.1‰, and the larger part of this difference in precision is due to the hadronic contribution to the running of α, i.e. the hadronic vacuum polarisation. Phenomenologically this can be estimated through the R-ratio, but this results in relatively large errors at low Q². On the lattice, the hadronic vacuum polarisation can be measured through the correlator of vector currents, and currently a determination of the running of α in agreement with phenomenology and with similar errors can be achieved, so that in the future lattice results are likely to take the lead here. In the case of the electroweak mixing angle, sin²θ_W is known well at the Z pole, but only poorly at low energy, although a number of experiments (including the P2 experiment at Mainz) are aiming to reduce the uncertainty at lower energies. Again, the running can be determined from the Z-γ mixing through the associated current-current correlator, and efforts are currently under way, including an estimation of the systematic error caused by the omission of quark-disconnected diagrams.
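
To make the connection a little more explicit, here is a toy sketch of my own (not anything shown in the talk): given a subtracted hadronic vacuum polarisation Π̂(Q²) = Π(Q²) − Π(0), the hadronic shift of the running coupling is Δα_had(Q²) = 4πα Π̂(Q²). The model Π̂ below is a made-up vector-dominance form, not lattice data, and it saturates instead of growing logarithmically, so it is only meant to be qualitatively sensible at low Q².

```python
# Toy illustration of the hadronic running of alpha (not lattice data):
#   Delta-alpha_had(Q^2) = 4*pi*alpha * Pi-hat(Q^2),
#   alpha(Q^2) = alpha / (1 - Delta-alpha_had(Q^2))   (leptonic running omitted).
import numpy as np

alpha0 = 1 / 137.035999   # fine-structure constant at vanishing momentum transfer

def pi_hat_toy(Q2, g2=0.053, M2=0.6):
    """Made-up vector-dominance-like form for the subtracted vacuum polarisation;
    g2 and M2 (GeV^2) are placeholder parameters, not fitted to anything."""
    return g2 * Q2 / (Q2 + M2)

for Q2 in [0.1, 1.0, 10.0]:   # GeV^2
    d_alpha = 4 * np.pi * alpha0 * pi_hat_toy(Q2)
    print(f"Q^2 = {Q2:5.1f} GeV^2:  Delta-alpha_had = {d_alpha:.4f},  "
          f"1/alpha(Q^2) = {(1 - d_alpha) / alpha0:.2f}")
```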

The second speaker was Vittorio Lubicz, who looked at the opposite problem, i.e. the electroweak corrections to hadronic observables. Since α ≈ 1/137, electromagnetic corrections at the one-loop level become important once the 1% level of precision is being aimed for, and since the up and down quarks have different electric charges, this is an isospin-breaking effect, which also necessitates considering the strong isospin breaking caused by the difference between the up and down quark masses. There are two main methods of including QED effects in lattice simulations: the first is the direct simulation of QCD+QED, and the second is the method of incorporating isospin-breaking effects in a systematic expansion pioneered by Vittorio and colleagues in Rome. Either method requires a systematic treatment of the IR divergences arising from the lack of a mass gap in QED. In the Rome approach this is done by splitting the Bloch-Nordsieck treatment of IR divergences and soft bremsstrahlung into two pieces, whose large-volume limits can be taken separately. There are many other technical issues to be dealt with, but first physical results from this method should be forthcoming soon.
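
To illustrate the expansion idea in the simplest possible setting, here is a zero-dimensional toy of my own (not the actual Rome/RM123 implementation, and with the IR issues obviously absent): the "QED" part of the action is treated as a perturbation, and the O(e²) correction to an observable is obtained from correlations with the action insertion, measured entirely in the unperturbed ensemble.

```python
# Zero-dimensional toy of the expansion method: the "full" theory has action
# S = S0 + e^2*S1; expanding around the e^2 = 0 ensemble gives
#   <O>_e ~ <O>_0 - e^2 * ( <O*S1>_0 - <O>_0*<S1>_0 ) + O(e^4).
import numpy as np

rng = np.random.default_rng(1)
phi = rng.normal(size=2_000_000)   # samples of exp(-S0) with S0 = phi^2/2

def O(phi):                        # toy "observable"
    return phi**2

def S1(phi):                       # toy stand-in for the "QED" action insertion
    return phi**4

e2 = 0.01

# expansion estimate, using only the e^2 = 0 ensemble
connected = np.mean(O(phi) * S1(phi)) - np.mean(O(phi)) * np.mean(S1(phi))
O_expanded = np.mean(O(phi)) - e2 * connected

# "direct simulation" of the full theory, here realized by reweighting
w = np.exp(-e2 * S1(phi))
O_direct = np.mean(O(phi) * w) / np.mean(w)

print(f"expansion to O(e^2): {O_expanded:.4f}")
print(f"direct/reweighted  : {O_direct:.4f}   (difference = higher orders in e^2)")
```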

In the afternoon there was a discussion about QED effects and the range of approaches used to treat them.

Monday, September 07, 2015

Fundamental Parameters from Lattice QCD, Day Six

The second week of our Scientific Programme started with an influx of new participants.

The first speaker of the day was Chris Kelly, who spoke about CP violation in the kaon sector from lattice QCD. As I hardly need to tell my readers, there are two sources of CP violation in the kaon system: the indirect CP violation from neutral kaon-antikaon mixing, and the direct CP violation from K→ππ decays. Both, however, ultimately stem from the single source of CP violation in the Standard Model, i.e. the complex phase of the CKM matrix, which gives the area of the unitarity triangle. The hadronic parameter relevant to indirect CP violation is the kaon bag parameter BK, which is a "gold-plated" quantity that can be very well determined on the lattice; however, the error on the CP violation parameter εK constraining the upper vertex of the unitarity triangle is dominated by the uncertainty on the CKM matrix element Vcb. Direct CP violation is particularly sensitive to possible BSM effects, and is therefore of particular interest. Chris presented the recent efforts of the RBC/UKQCD collaboration to address the extraction of the relevant parameter ε'/ε and associated phenomena such as the ΔI=1/2 rule. For the two amplitudes A0 and A2, different tricks and methods are required; in particular for the isospin-zero channel, all-to-all propagators are needed. The overall errors are still large: although the systematics are dominated by the perturbative matching to the MSbar scheme, the statistical errors are very sizable, so that the observed 2.1σ tension with experiment is not particularly exciting or disturbing yet.

The second speaker of the morning was Gunnar Bali, who spoke about the topic of renormalons. It is well known that the perturbative series of quantum field theories are in fact divergent asymptotic series, whose typical term will grow like n^k z^n n! at large orders n. Using the Borel transform, such series can be resummed, provided that there are no poles (IR renormalons) of the Borel transform on the positive real axis. In QCD, such poles arise from IR divergences in diagrams with chains of bubbles inserted into gluon lines, as well as from instanton-antiinstanton configurations in the path integral. The latter can be removed to infinity by considering the large-N_c limit, but the former are there to stay, making perturbatively defined quantities ambiguous at higher orders. A relevant example is heavy quark masses, where the different definitions (pole mass, MSbar mass, 1S mass, ...) are related by perturbative conversion factors; in a heavy-quark expansion, the mass of a heavy-light meson can be written as M = m + Λ + O(1/m), where m is the heavy quark mass, and Λ a binding energy of the order of some QCD energy scale. As M is unambiguous, the ambiguities in m must correspond to ambiguities in the binding energy Λ, which can be computed to high orders in numerical stochastic perturbation theory (NSPT). After dealing with some complications arising from the fact that IR divergences cannot be probed directly in a finite volume, it is found that the minimum term of the perturbative series (which corresponds to the perturbative ambiguity) is of order 180 MeV in the quenched theory, meaning that heavy quark masses are only defined up to this accuracy. Another example is the gluon condensate (which may be of relevance to the extraction of αs from τ decays), where it is found that the ambiguity is of the same size as the typically quoted result, making the usefulness of this quantity doubtful.
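
A quick numerical way to see what "the minimum term sets the ambiguity" means, using a generic toy series of my own rather than Gunnar's actual NSPT data: for coefficients growing like n!, the terms of the series first decrease and then blow up, and the smallest term is of order e^(-1/(b·a)) for a coupling a and growth constant b.

```python
# Toy factorially divergent series: term_n = b^n * n! * a^(n+1).
# The partial sums stop improving once the terms reach their minimum,
# whose size ~ exp(-1/(b*a)) plays the role of the renormalon ambiguity.
import math

a, b = 0.2, 1.0   # made-up "coupling" and growth constant
terms = [b**n * math.factorial(n) * a**(n + 1) for n in range(15)]

n_min = min(range(len(terms)), key=lambda n: terms[n])
print(f"minimum term, at order n = {n_min}: {terms[n_min]:.3e}")
print(f"exp(-1/(b*a))               : {math.exp(-1 / (b * a)):.3e}")

partial = 0.0
for n, t in enumerate(terms):
    partial += t
    print(f"order {n:2d}: term = {t:.3e}, partial sum = {partial:.4f}")
```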

Friday, September 04, 2015

Fundamental Parameters from Lattice QCD, Day Five

The first speaker today was Martin Lüscher, who spoke about revisiting numerical stochastic perturbation theory. The idea behind numerical stochastic perturbation theory is to perform a simulation of a quantum field theory using the Langevin algorithm and to perturbatively expand the fields, which leads to a tower of coupled evolution equations, where only the lowest-order one depends explicitly on the noise, whereas the higher-order ones describe the evolution of the higher-order coefficients as a function of the lower-order ones. In Numerical Stochastic Perturbation Theory (NSPT), the resulting equations are integrated numerically (up to some, possibly rather high, finite order in the coupling), and the average over noises is replaced by a time average. The problems with this approach are that the autocorrelation time diverges as the inverse square of the lattice spacing, and that the extrapolation in the Langevin time step size is difficult to control well. An alternative approach is given by Instantaneous Stochastic Perturbation Theory (ISPT), in which the Langevin time evolution is replaced by the introduction of Gaussian noise sources at the vertices of tree diagrams describing the construction of the perturbative coefficients of the lattice fields. Since there is no free lunch, this approach suffers from power-law divergent statistical errors in the continuum limit, which arise from the way in which power-law divergences that cancel in the mean are shifted around between different orders when computing variances. This does not happen in the Langevin-based approach, because the Langevin theory is renormalizable.
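
As a minimal illustration of how the NSPT tower works in practice, here is a zero-dimensional toy of my own (an ordinary integral rather than lattice QCD): expanding φ = φ0 + g φ1 + ... in the Langevin equation for S(φ) = φ²/2 + g φ⁴, only the lowest order sees the noise, and the first-order coefficient of ⟨φ²⟩, which is exactly −12 in this toy, is recovered from Langevin-time averages.

```python
# NSPT for the toy "action" S(phi) = phi^2/2 + g*phi^4 (zero dimensions).
# Langevin:  dphi/dt = -dS/dphi + eta,   <eta(t) eta(t')> = 2 delta(t-t').
# Expanding phi = phi0 + g*phi1 + O(g^2) gives the coupled tower
#   dphi0/dt = -phi0             + eta     (only this order sees the noise)
#   dphi1/dt = -phi1 - 4*phi0**3
# so that <phi^2> = <phi0^2> + 2*g*<phi0*phi1> + O(g^2), with 2<phi0*phi1> = -12.
import numpy as np

rng = np.random.default_rng(0)
eps, n_steps, n_therm = 0.01, 400_000, 20_000   # step size, trajectory length
phi0, phi1 = 0.0, 0.0
acc0, acc01, n_meas = 0.0, 0.0, 0

for step in range(n_steps):
    eta = rng.normal() * np.sqrt(2 * eps)        # noise with variance 2*eps
    phi0_new = phi0 + eps * (-phi0) + eta        # Euler integration of the tower
    phi1_new = phi1 + eps * (-phi1 - 4 * phi0**3)
    phi0, phi1 = phi0_new, phi1_new
    if step >= n_therm:
        acc0 += phi0**2
        acc01 += phi0 * phi1
        n_meas += 1

print(f"<phi0^2>     = {acc0 / n_meas:.3f}   (exact: 1, up to O(eps) effects)")
print(f"2<phi0*phi1> = {2 * acc01 / n_meas:.2f}  (exact: -12, up to O(eps) and noise)")
```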

The second speaker of the morning was Siegfried Bethke of the Particle Data Group, who allowed us a glimpse at the (still preliminary) world average of αs for 2015. In 2013, there were five classes of αs determinations: from lattice QCD, τ decays, deep inelastic scattering, e⁺e⁻ colliders, and global Z-pole fits. Except for the lattice determinations (and the Z-pole fits, where there was only one number), these were each preaveraged using the range method -- i.e. taking the mean of the highest and lowest central values as the average, and assigning it an uncertainty of half the difference between them. The lattice results were averaged using a χ²-weighted average. The total average (again a weighted average) was dominated by the lattice results, which in turn were dominated by the latest HPQCD result. For 2015, there have been a number of updates to most of the classes, and there is now a new class of αs determinations from the LHC (of which there is currently only one published, which lies rather low compared to other determinations and is likely a downward fluctuation). In most cases, the new determinations have changed the values and errors of their class only little, if at all. The most significant change is in the field of lattice determinations, where the PDG will change its policy and no longer perform its own preaverage, taking instead the FLAG average as the lattice result. As a result, the error on the PDG value will increase; its value will also shift down a little, mostly due to the new LHC value.
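
For concreteness, here is what the two preaveraging prescriptions do to a set of invented determinations (the numbers are made up for illustration and are not the actual αs inputs):

```python
# Compare the PDG "range method" preaverage with a chi^2-weighted average
# on a set of invented determinations (value, error) -- not real alpha_s data.
import numpy as np

values = np.array([0.1175, 0.1185, 0.1182, 0.1192])
errors = np.array([0.0015, 0.0008, 0.0012, 0.0020])

# Range method: midpoint of the highest and lowest central values,
# uncertainty = half of their spread (insensitive to the quoted errors).
range_mean = 0.5 * (values.max() + values.min())
range_err  = 0.5 * (values.max() - values.min())

# Weighted average: weights 1/sigma^2, error from the sum of weights.
w = 1.0 / errors**2
wavg     = np.sum(w * values) / np.sum(w)
wavg_err = 1.0 / np.sqrt(np.sum(w))

print(f"range method    : {range_mean:.4f} +/- {range_err:.4f}")
print(f"weighted average: {wavg:.4f} +/- {wavg_err:.4f}")
```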

The afternoon discussion centered on αs. Roger Horsley gave an overview of the methods used to determine it on the lattice (ghost vertices, the Schrödinger functional, the static energy at short distances, current-current correlators, and small Wilson loops) and reviewed the criteria used by FLAG to assess the quality of a given determination, as well as the averaging procedure used (which quotes a more conservative error than a plain weighted average would give). In the discussion, the point was raised that reliably increasing the precision to the sub-percent level and beyond will likely require not only addressing the scale-setting uncertainties (which are reflected in the different values of r0 obtained by different collaborations and will affect the running of αs), but also including QED effects.

Fundamental Parameters from Lattice QCD, Day Four

Today's first speaker was Andreas Jüttner, who reviewed the extraction of the light-quark CKM matrix elements Vud and Vus from lattice simulations. Since the leptonic and semileptonic decay widths of kaons and pions are very well measured, the matrix element |Vus| and the ratio |Vus|/|Vud| can be precisely determined if the form factor f+(0) and the ratio of decay constants fK/fπ are precisely predicted from the lattice. To reach the desired level of precision, the isospin-breaking effects from the difference of the up and down quark masses and from electromagnetic interactions will need to be included (they are currently treated in chiral perturbation theory, which may not apply very well in the SU(3) case). Given the required level of precision, full control of all systematics is very important, and the problem arises of how to properly estimate the associated errors, to which different collaborations are offering very different answers. To make the lattice results optimally usable for CKMfitter & Co., one should ideally provide all of the lattice inputs to the CKMfitter fit separately (and not just some combination that presents a particularly small error), as well as their correlations (as far as possible).
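
Schematically, the extraction works as sketched below; the numbers are placeholders chosen only to be roughly self-consistent, not actual experimental or lattice averages, and the real analysis of course involves correlated errors and the isospin corrections just mentioned.

```python
# Schematic extraction of |Vus| and |Vud| from kaon and pion (semi)leptonic decays.
# Experiment provides (schematically)
#   |Vus| * f+(0)          from K -> pi l nu,
#   |Vus|/|Vud| * fK/fpi   from the ratio of K -> l nu and pi -> l nu widths,
# while the lattice provides f+(0) and fK/fpi.  All numbers are placeholders.
Vus_f0_exp = 0.2165   # placeholder for the Kl3 combination
ratio_exp  = 0.2725   # placeholder for the Kl2/pil2 combination
f0_lat     = 0.970    # placeholder lattice value of f+(0)
fK_fpi_lat = 1.19     # placeholder lattice value of fK/fpi

Vus = Vus_f0_exp / f0_lat
Vud = Vus / (ratio_exp / fK_fpi_lat)

print(f"|Vus| = {Vus:.4f}, |Vud| = {Vud:.4f}")
# first-row unitarity check (|Vub|^2 is negligible at this level)
print(f"|Vud|^2 + |Vus|^2 = {Vud**2 + Vus**2:.4f}")
```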

Unfortunately, I had to miss the second talk of the morning, by Xavier García i Tormo on the extraction of αs from the static-quark potential, because our Sonderforschungsbereich (SFB/CRC) is currently up for review for a second funding period, and the local organizers had to be available for questioning by panel members.

Later in the afternoon, I returned to the workshop and joined a very interesting discussion on the topic of averaging in the presence of theoretical uncertainties. The large number of possible choices to be made in that context implies that the somewhat subjective nature of systematic error estimates survives into the averages, rather than being dissolved into a consensus of some sort.

Fundamental Parameters from Lattice QCD, Day Three

Today, our first speaker was Jérôme Charles, who presented new ideas about how to treat data with theoretical uncertainties. The best place to read about this is probably his talk, but I will try to summarize what I understood. The framework is a firmly frequentist approach to statistics, which answers the basic question of how likely the observed data are if a given null hypothesis is true. In such a context, one can consider a theoretical uncertainty as a fixed bias δ of the estimator under consideration (such as a lattice simulation) which survives the limit of infinite statistics. One can then test the null hypothesis that the true value of the observable in question is μ by constructing a test statistic for the estimator being distributed normally with mean μ+δ and standard deviation σ (the statistical error quoted for the result). The p-value of μ then depends on δ, but not on the quoted systematic error Δ. Since the true value of δ is not known, one has to perform a scan over some region Ω, for example the interval Ω_n = [-nΔ, nΔ], and take the supremum of the p-value over this range of δ. One possible extension is to choose Ω adaptively, in that a larger range of values is scanned (i.e. a larger true systematic error in comparison to the quoted systematic error is allowed for) for lower p-values; interestingly enough, the resulting curves of p-values are numerically close to what is obtained from a naive Gaussian approach treating the systematic error as a (pseudo-)random variable. For multiple systematic errors, a multidimensional Ω has to be chosen in some way; the most natural choices of a hypercube or a hyperball correspond to adding the errors linearly or in quadrature, respectively. The linear (hypercube) scheme stands out as the only one that guarantees that the systematic error of an average is no smaller than the smallest systematic error of an individual result.
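
Here is a small numerical rendering of the idea as I understood it (my own sketch, not Jérôme's code): for a measurement x with statistical error σ and quoted systematic Δ, the p-value of a hypothesized true value μ is maximized over the unknown bias δ in Ω_n = [-nΔ, nΔ], and compared with the naive Gaussian treatment that adds σ and Δ in quadrature.

```python
# p-value of a hypothesized true value mu for a measurement x +/- sigma (stat)
# with quoted systematic Delta: the systematic is a fixed but unknown bias delta,
# scanned over Omega_n = [-n*Delta, n*Delta] (supremum of the p-value), versus
# the naive Gaussian treatment with sigma_tot^2 = sigma^2 + Delta^2.
import math

x, sigma, Delta = 0.0, 1.0, 1.0   # invented numbers for illustration
n = 1.0                           # size of the scan region in units of Delta

def two_sided_p(z):
    return math.erfc(abs(z) / math.sqrt(2.0))

def p_scan(mu, n_points=201):
    deltas = [-n * Delta + 2 * n * Delta * i / (n_points - 1) for i in range(n_points)]
    return max(two_sided_p((x - mu - d) / sigma) for d in deltas)

def p_naive(mu):
    return two_sided_p((x - mu) / math.hypot(sigma, Delta))

for mu in [0.5, 1.5, 2.5, 3.5]:
    print(f"mu = {mu}: p_scan = {p_scan(mu):.3f}, p_naive = {p_naive(mu):.3f}")
```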

The second speaker was Patrick Fritzsch, who gave a nice review of recent lattice determinations of semileptonic heavy-light decays, both the more commonly studied B decays to πℓν and Kℓν, and the decays of the Λb that have recently been investigated by Meinel et al. with the help of LHCb.

In the afternoon, both the CKMfitter collaboration and the FLAG group held meetings.

Tuesday, September 01, 2015

Fundamental Parameters from Lattice QCD, Day Two

This morning, we started with a talk by Taku Izubuchi, who reviewed the lattice efforts relating to the hadronic contributions to the anomalous magnetic moment (g-2) of the muon. While the QED and electroweak contributions to (g-2) are known to great precision, most of the theoretical uncertainty presently comes from the hadronic (i.e. QCD) contributions, of which two are relevant at the present level of precision: the contribution from the hadronic vacuum polarization, which can be inserted into the leading-order QED correction, and the contribution from hadronic light-by-light scattering, which can be inserted between the incoming external photon and the muon line. There are a number of established methods for computing the hadronic vacuum polarization, both phenomenologically using a dispersion relation and the experimental R-ratio, and in lattice field theory by computing the correlator of two vector currents (which can, and needs to, be refined in various ways in order to achieve competitive levels of precision). No such well-established methods exist yet for light-by-light scattering, which is so far mostly described using models. There are, however, now efforts from a number of different sides to tackle this contribution; Taku mainly presented the approach of the RBC/UKQCD collaboration, which uses stochastic sampling of the internal photon propagators to explicitly compute the diagrams contributing to (g-2). Another approach would be to calculate the four-point amplitude explicitly (which has recently been done for the first time by the Mainz group) and to decompose it into form factors, which can then be integrated to yield the light-by-light scattering contribution to (g-2).
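
To make the vacuum-polarization route slightly more concrete, here is a sketch using the Euclidean-momentum representation as I recall it (the kernel below is quoted from memory and should be checked against the literature); the subtracted polarization Π̂(Q²) is a crude vector-dominance toy, not lattice data, with parameters invented to land in the right ballpark.

```python
# Leading-order HVP contribution to a_mu from a subtracted vacuum polarization
# Pi-hat(Q^2) in the Euclidean-momentum representation (kernel from memory):
#   a_mu^HVP = 4*alpha^2 * int_0^inf dQ^2 f(Q^2) Pi-hat(Q^2).
# Pi-hat below is a crude vector-dominance toy, NOT lattice data.
import numpy as np
from scipy.integrate import quad

alpha = 1 / 137.035999
m_mu = 0.10566   # muon mass in GeV

def kernel(Q2):
    Z = (np.sqrt(Q2**2 + 4 * m_mu**2 * Q2) - Q2) / (2 * m_mu**2 * Q2)
    return m_mu**2 * Q2 * Z**3 * (1 - Q2 * Z) / (1 + m_mu**2 * Q2 * Z**2)

def pi_hat_toy(Q2, g2=0.053, M2=0.6):
    """Placeholder Pi-hat(Q^2) = g2*Q^2/(Q^2 + M2); parameters invented."""
    return g2 * Q2 / (Q2 + M2)

integral, _ = quad(lambda Q2: kernel(Q2) * pi_hat_toy(Q2),
                   1e-12, 1e4, limit=200, points=[1e-3, 1e-2, 1e-1, 1.0])
a_mu_hvp = 4 * alpha**2 * integral
print(f"toy a_mu^HVP = {a_mu_hvp:.2e}   (the physical value is roughly 7e-8)")
```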

The second talk of the day was given by Petros Dimopoulos, who reviewed lattice determinations of D and B leptonic decays and mixing. For the charm quark, cut-off effects appear to be reasonably well-controlled with present-day lattice spacings and actions, and the most precise lattice results for the D and Ds decay constants claim sub-percent accuracy. For the b quark, effective field theories or extrapolation methods have to be used, which introduces a source of hard-to-assess theoretical uncertainty, but the results obtained from the different approaches generally agree very well amongst themselves. Interestingly, there does not seem to be any noticeable dependence on the number of dynamical flavours in the heavy-quark flavour observables, as Nf=2 and Nf=2+1+1 results agree very well to within the quoted precisions.

In the afternoon, the CKMfitter collaboration split off to hold their own meeting, and the lattice participants met for a few one-on-one or small-group discussions of some topics of interest.

Monday, August 31, 2015

Fundamental Parameters from Lattice QCD, Day One

Greetings from Mainz, where I have the pleasure of covering a meeting for you without having to travel from my usual surroundings (I have clocked up more miles this year already than can be good for my environmental conscience).

Our Scientific Programme (which is the bigger of the two formats of meetings that the Mainz Institute of Theoretical Physics (MITP) hosts, the smaller being Topical Workshops) started off today with two keynote talks summarizing the status and expectations of the FLAG (Flavour Lattice Averaging Group, presented by Tassos Vladikas) and CKMfitter (presented by Sébastien Descotes-Genon) collaborations. Both groups are in some way in the business of performing weighted averages of flavour physics quantities, but of course their backgrounds, rationale and methods are quite different in many regards. I will not attempt to give a line-by-line summary of the talks or the afternoon discussion session here, but will instead just summarize a few points that caused lively discussions or seemed important in some other way.

By now, computational resources have reached the point where we can achieve such statistics that the total error on many lattice determinations of precision quantities is completely dominated by systematics (and indeed different groups would differ at the several-σ level if one considered only their statistical errors). This may sound good in a way (because it is what you'd expect in the limit of infinite statistics), but it is also very problematic, because the estimation of systematic errors is in the end really more of an art than a science, having a crucial subjective component at its heart. This means not only that systematic errors quoted by different groups may not be readily comparable, but also that it becomes important how to treat systematic errors (which may also be correlated, if e.g. two groups use the same one-loop renormalization constants) when averaging different results. How to do this is again subject to subjective choices to some extent. FLAG imposes cuts on quantities relating to the most important sources of systematic error (lattice spacings, pion mass, spatial volume) to select acceptable ensembles, then adds the statistical and systematic errors in quadrature, before performing a weighted average and computing the overall error taking correlations between different results into account using Schmelling's procedure. CKMfitter, on the other hand, adds all systematic errors linearly, and uses the Rfit procedure to perform a maximum likelihood fit. Either choice is equally permissible, but they are not directly compatible (so CKMfitter can't use FLAG averages as such).
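
To make the difference concrete, here is a toy comparison with invented numbers (and without Schmelling's correlation treatment or the full Rfit machinery, so only the flavour of each approach survives): quadrature combination followed by a weighted average, versus an Rfit-style treatment in which each systematic defines a flat range that effectively adds linearly.

```python
# Toy comparison of two ways of combining x_i +/- sig_i (stat) +/- Delta_i (syst);
# all numbers invented.  (1) adds errors in quadrature and forms a weighted
# average; (2) is an Rfit-flavoured treatment in which each systematic is a
# flat range that incurs no chi^2 penalty inside it.
import numpy as np

x     = np.array([1.00, 1.10])
sig   = np.array([0.03, 0.04])   # statistical errors
Delta = np.array([0.05, 0.02])   # systematic errors

# (1) quadrature + chi^2-weighted average
w = 1 / (sig**2 + Delta**2)
avg = np.sum(w * x) / np.sum(w)
err = 1 / np.sqrt(np.sum(w))
print(f"quadrature + weighted average: {avg:.3f} +/- {err:.3f}")

# (2) Rfit-style: chi^2(mu) = sum_i ( max(0, |x_i - mu| - Delta_i) / sig_i )^2,
#     quoting the region where chi^2 <= chi^2_min + 1
mu = np.linspace(0.8, 1.3, 5001)
chi2 = sum((np.maximum(0.0, np.abs(xi - mu) - Di) / si)**2
           for xi, si, Di in zip(x, sig, Delta))
ok = mu[chi2 <= chi2.min() + 1.0]
print(f"Rfit-style interval          : [{ok.min():.3f}, {ok.max():.3f}]")
```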

Another point raised was that it is important for lattice collaborations computing mixing parameters to provide not just products like fB√BB, but also fB and BB separately (as well as information about the correlation between these quantities), in order to make the global CKM fits easier.

Saturday, July 18, 2015

LATTICE 2015, Day Five

In a marked deviation from the "standard programme" of the lattice conference series, Saturday started off with parallel sessions, one of which featured my own talk.

The lunch break was therefore relatively early, but first we all assembled in the plenary hall for the conference group photo (a new addition to the traditions of the lattice conference); the lunch break was followed by the afternoon plenary sessions. The first of these was devoted to finite temperature and density, and started with Harvey Meyer giving the review talk on finite-temperature lattice QCD. The thermodynamic properties of QCD are by now relatively well-known: the transition temperature is agreed to be around 155 MeV, chiral symmetry restoration and the deconfinement transition coincide (as well as they can be defined in the case of a crossover), and the number of degrees of freedom is compatible with a plasma of quarks and gluons above the transition, but the thermodynamic potentials approach the Stefan-Boltzmann limit only slowly, indicating that there are strong correlations in the medium. Below the transition, the hadron resonance gas model describes the data well. The Columbia plot describing the nature of the transition as a function of the light and strange quark masses is being further solidified: the size of the first-order region in the lower left-hand corner is being measured, and the nature of the left-hand border (most likely O(4) second order) is being explored. Beyond these static properties, real-time properties are beginning to be studied through finite-temperature spectral functions. One interesting point was that there is a difference between the screening masses (spatial correlation lengths) and the quasiparticle masses (from the spectral function) in any given channel, which may even tend in opposite directions as functions of the temperature (as is seen in the pion channel).

Next, Szabolcs Borsanyi spoke about fluctuations of conserved charges at finite temperature and density. While the sum of all outgoing conserved charges in a collision must of course equal the sum of the ingoing ones, a subvolume of the fireball is best described in the grand canonical ensemble, since charges can move into and out of the subvolume. The quark number susceptibilities are then related to the fluctuating phase of the fermionic determinant. The methods being used to avoid the sign problem include Taylor expansions, fugacity expansions and simulations at imaginary chemical potential, each with their own strengths and weaknesses. Fluctuations can be used as a thermometer to measure the freeze-out temperature.
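
As an illustration of one of these workarounds, here is a toy analytic continuation of my own (with a made-up "observable" standing in for, say, a quark number susceptibility, and no actual lattice data): one "measures" at imaginary chemical potential, where there is no sign problem, fits a polynomial in μ², and continues to real μ.

```python
# Toy analytic continuation from imaginary chemical potential: "measure" an
# observable O(mu^2) (made-up functional form) at mu = i*mu_I, i.e. at
# mu^2 = -mu_I^2 < 0, fit a polynomial in mu^2, and continue to real mu^2 > 0.
import numpy as np

rng = np.random.default_rng(7)

def observable(mu2):
    """Made-up smooth function of mu^2 standing in for a lattice observable."""
    return 1.0 + 0.8 * mu2 + 0.3 * mu2**2

# "simulations" at imaginary chemical potential (mu^2 negative), with noise
mu2_im = -np.linspace(0.05, 1.0, 8) ** 2
data = observable(mu2_im) + rng.normal(scale=0.01, size=mu2_im.size)

coeffs = np.polyfit(mu2_im, data, deg=2)      # fit a polynomial in mu^2

for mu in [0.2, 0.5, 0.8]:                    # continue to real mu
    mu2 = mu**2
    print(f"mu = {mu}: continued = {np.polyval(coeffs, mu2):.3f}, "
          f"true = {observable(mu2):.3f}")
```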

Lastly, Luigi Scorzato reviewed the Lefschetz thimble, which may offer a way out of the sign problem (e.g. at finite chemical potential). The Lefschetz thimble is a higher-dimensional generalization of the concept of steepest-descent integration, in which the integral of e^(S(z)) for complex S(z) is evaluated by finding the stationary points of S and integrating along the curves passing through them on which the imaginary part of S is constant. On such Lefschetz thimbles, a Langevin algorithm can be defined, allowing for a Monte Carlo evaluation of the path integral in terms of Lefschetz thimbles. In quantum-mechanical toy models this already seems to work, and there appears to be hope that this might be a way to avoid the sign problem of finite-density QCD.
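
A one-dimensional toy example of my own shows the idea (using the convention that the weight is e^(-S) with complex S): for S(x) = x²/2 + iλx the saddle point sits at x = -iλ, the thimble through it is the horizontal line x(t) = t - iλ, and along it the imaginary part of S is constant, so the integrand does not oscillate.

```python
# Toy thimble: I = int dx exp(-S(x)) with S(x) = x^2/2 + i*lam*x.
# Exact result: sqrt(2*pi) * exp(-lam^2/2).  The saddle point of S is at
# x = -i*lam; the thimble through it is x(t) = t - i*lam, on which
# S = t^2/2 + lam^2/2 has constant (zero) imaginary part -> no sign problem.
import numpy as np

lam = 3.0
t = np.linspace(-10, 10, 20001)
dt = t[1] - t[0]

# naive integration along the real axis: strongly oscillating integrand
naive = np.sum(np.exp(-(t**2 / 2 + 1j * lam * t))) * dt

# integration along the thimble: real, positive integrand (dz = dt here)
x_thimble = t - 1j * lam
S_thimble = x_thimble**2 / 2 + 1j * lam * x_thimble
thimble = np.sum(np.exp(-S_thimble)) * dt

exact = np.sqrt(2 * np.pi) * np.exp(-lam**2 / 2)
print(f"exact   = {exact:.6e}")
print(f"naive   = {naive.real:.6e}  (imaginary part ~ {naive.imag:.1e})")
print(f"thimble = {thimble.real:.6e}")
print(f"max |Im S| on thimble = {np.max(np.abs(S_thimble.imag)):.1e}")
```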

After the coffee break, the last plenary session turned to physics beyond the Standard Model. Daisuke Kadoh reviewed the progress in putting supersymmetry onto the lattice, which is still a difficult problem due to the fact that the finite differences which replace derivatives on a lattice do not respect the Leibniz rule, introducing SUSY-breaking terms upon discretization. The ways past this are either imposing exact lattice supersymmetries or fine-tuning the theory so as to remove the SUSY breaking in the continuum limit. Some theories in both two and four dimensions have been simulated successfully, including N=1 Super-Yang-Mills theory in four dimensions. Given that there is no evidence for SUSY in nature, lattice SUSY is of interest especially for the purpose of verifying the ideas of gauge-gravity duality from the Super-Yang-Mills side, and in one and two dimensions, agreement with the predictions of gauge-gravity duality has been found.

The final plenary speaker was Anna Hasenfratz, who reviewed Beyond-the-Standard-Model calculations in technicolor-like theories. If the Higgs is to be a composite particle, there must be some spontaneously broken symmetry that keeps it light, either a flavour symmetry (pions) or a scale symmetry (dilaton). There are in fact a number of models that have a light scalar particle, but the extrapolation of these theories is rendered difficult by the fact that this scalar is (and for phenomenologically interesting models would have to be) lighter than the (techni-)pion, so that the usual formalism of chiral perturbation theory may not work. Many models of strong BSM interactions have been and are being studied using a large number of different methods, with results that are not always conclusive. A point raised towards the end of the talk was that for theories with a conformal IR fixed point, universality might be violated (and there are some indications that e.g. Wilson and staggered fermions give qualitatively different behaviour for the beta function in such cases).

The conference ended with some well-deserved applause for the organizing team, who really ran the conference very smoothly even in the face of a typhoon. Next year's lattice conference will take place in Southampton (England/UK) from 24th to 30th July 2016. Lattice 2017 will take place in Granada (Spain).