The UK Met Office is one of the world leaders in weather and climate prediction, and its global forecast model is used by many other centres worldwide to drive their local area models. However, many small-scale phenomena, as well as important long-term dynamics, remain difficult to predict accurately because of the limited spatial resolution of global models and the additional errors introduced by local area models. Novel computing architectures with more than 10^5 cores offer a chance to push these boundaries and to keep the Met Office at the forefront of developments. Decades of experience with numerical weather and climate prediction have produced a good understanding of the core dynamics inherent in atmospheric flow and of their stable and accurate numerical approximation. As outlined in the call, the Met Office's Unified Model uses latitude-longitude grids and achieves high efficiency on parallel computers with up to 1000 cores. However, the (artificial) clustering of grid points at the poles renders these grids impractical for large-scale computations, and so one of the core tasks in this NERC Programme is the search for suitable alternative grids; several separate proposals address this issue.

However, the equations governing atmospheric flow form a time-dependent system of differential equations that strongly couples the solution everywhere on the globe (the famous "butterfly effect"). Most current atmospheric dynamics models use semi-implicit time discretisation schemes, which provide some global coupling of the equations at each time step. This coupling keeps the system stable and consequently allows larger time steps than fully explicit schemes, which involve no global coupling. Since the cost of a forecast is proportional to the number of time steps, a scheme that allows larger time steps (with satisfactory accuracy) seems preferable.
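The stability trade-off can be made concrete with a minimal, hypothetical sketch (illustrative Python/NumPy, not Met Office code) for a simple one-dimensional diffusion equation: the explicit step uses only a local stencil but is stable only below a strict time-step limit, while the implicit step couples all unknowns through a global linear solve (the analogue of the semi-implicit pressure solve) and remains stable at a time step well above that limit.

```python
import numpy as np

def step_explicit(u, nu, dt, dx):
    # Forward Euler for u_t = nu * u_xx on a periodic grid: a purely local
    # stencil, but stable only when dt <= dx**2 / (2 * nu).
    return u + nu * dt / dx**2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1))

def step_implicit(u, nu, dt, dx):
    # Backward Euler: every unknown is coupled to every other through a
    # global linear solve (the analogue of the semi-implicit pressure
    # solve), but the scheme is unconditionally stable.
    n = len(u)
    r = nu * dt / dx**2
    A = (1 + 2 * r) * np.eye(n)
    idx = np.arange(n)
    A[idx, (idx + 1) % n] = -r
    A[idx, (idx - 1) % n] = -r
    return np.linalg.solve(A, u)

n, nu = 64, 1.0
dx = 1.0 / n
u0 = np.zeros(n)
u0[n // 2] = 1.0               # sharp initial spike (contains all modes)

dt = 10 * dx**2 / (2 * nu)     # ten times the explicit stability limit
ue, ui = u0.copy(), u0.copy()
for _ in range(50):
    ue = step_explicit(ue, nu, dt, dx)
    ui = step_implicit(ui, nu, dt, dx)

print(np.max(np.abs(ue)))      # explicit solution has blown up
print(np.max(np.abs(ui)))      # implicit solution has decayed smoothly
```

At this time step the explicit iteration amplifies the high-frequency components and diverges, while the implicit iteration damps them; this is exactly why semi-implicit schemes can afford larger time steps, at the price of a global solve per step.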
These benefits come at a price, however, especially for large-scale problems on massively parallel architectures. An elliptic system for the pressure has to be solved at each time step, leading to a very large, ill-conditioned algebraic system whose solution is difficult to parallelise efficiently. Two requirements make scaling this elliptic solve to large problem sizes and large processor counts difficult: algorithmic scalability and parallel scalability. Since the solution operator of the elliptic equation couples the pressure values globally, only multilevel iterative solvers, which use a hierarchy of discretisations on grids of varying resolution, can achieve optimal, linear growth of the cost with problem size (algorithmic scalability). In a massively parallel computing environment, where global communication is costly, these solvers must moreover be implemented carefully, keeping most of the communication local, to ensure that the computational cost continues to scale optimally to 100K or more processors (parallel scalability). This proposal addresses exactly this problem and will thus facilitate the best possible decisions on the design of the Met Office's future dynamical core, thereby helping to secure the UK's competitiveness in this key societal and technological challenge. Optimal scalability of semi-implicit schemes has not yet been achieved for atmospheric flow, but the success of the Project Partners, IWR Heidelberg and Lawrence Livermore National Laboratory, on simpler model elliptic problems shows that it is attainable.
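Algorithmic scalability can be illustrated with a minimal geometric multigrid sketch (a standard textbook V-cycle for the one-dimensional Poisson equation, written for illustration only, not the solver proposed here): the number of V-cycles needed to reach a fixed tolerance stays essentially constant as the grid is refined, so the total cost grows only linearly with problem size.

```python
import numpy as np

def residual(u, f, h):
    # r = f - A u for the 1-D Poisson operator A = -d^2/dx^2 (Dirichlet BCs)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def smooth(u, f, h, sweeps=2, omega=2.0 / 3.0):
    # Damped Jacobi: cheap and local, but only damps high-frequency error
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * h**2 * residual(u, f, h)[1:-1]
    return u

def vcycle(u, f, h):
    n = len(u) - 1
    if n <= 2:
        u[1:-1] = 0.5 * h**2 * f[1:-1]          # coarsest grid: exact solve
        return u
    u = smooth(u, f, h)                          # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros(n // 2 + 1)                    # restrict (full weighting)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = vcycle(np.zeros(n // 2 + 1), rc, 2 * h) # coarse-grid correction
    e = np.zeros_like(u)                         # prolong (linear interp.)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)                   # post-smoothing

def cycles_to_converge(n, tol=1e-8, max_cycles=50):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi**2 * np.sin(np.pi * x)             # exact solution: sin(pi x)
    u = np.zeros(n + 1)
    r0 = np.linalg.norm(residual(u, f, h))
    for k in range(1, max_cycles + 1):
        u = vcycle(u, f, h)
        if np.linalg.norm(residual(u, f, h)) < tol * r0:
            return k
    return max_cycles

counts = [cycles_to_converge(n) for n in (64, 256, 1024)]
print(counts)   # cycle count stays roughly constant as the grid is refined
```

Because each V-cycle visits every grid level once and does a fixed amount of work per grid point, a grid-independent cycle count means O(N) total cost. Parallel scalability is the separate question of implementing the restriction, prolongation, and coarse-grid work with mostly local communication, which this sketch does not address.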
The PI's experience, gained over many years, in obtaining optimal scalability of elliptic solvers on the most current architectures in a variety of application areas (most notably for elliptic problems from atmospheric flow discretised on latitude-longitude grids on up to 256 cores), together with his status as one of the world's leading theoretical analysts of multilevel iterative elliptic solvers and his links to other world-leading groups in this field, means that he is ideally equipped to achieve this goal.