
Fundamental research deals, by definition, with two extremes: the extremely small and the extremely large. The LHC and astroparticle physics experiments will soon offer new glimpses beyond the current frontiers, and the computing infrastructure that supports such physics research needs to look beyond the cutting edge.
Once more it seems that we are on the edge of a computing revolution. But perhaps what we are seeing now is an even more epochal change, in which not only the pace of the revolution is changing but also its very nature. Change is no longer an "event" that opens new possibilities which must first be understood and then exploited to prepare the ground for the next leap. Change is becoming the very essence of the computing reality, sustained by a continuous flow of technical and paradigmatic innovation.
The hardware is moving decisively toward massive parallelism, in a breathtaking synthesis of all the past techniques of concurrent computation. New many-core machines offer opportunities for all combinations of Single/Multiple Instruction, Single/Multiple Data and vector computation that in the past required specialised hardware.
At the same time, every level of virtualisation imagined so far, and possibly many more, seems to be within reach via Clouds. Information Technology has been the working backbone of the Global Village, and now, in more than one sense, it is becoming the Global Village itself. Between these two trends, the gap between the need to adapt applications to the new hardware and the push toward virtualisation of resources is widening, creating new challenges as technical and intellectual progress continues. ACAT 2010 proposes to explore and confront the different boundaries of the evolution of computing, and its possible consequences for our scientific activity.
What do these new technologies entail for physics research? How will physics research benefit from this revolution in data taking and analysis, experiment monitoring and complex simulations? And what innovations might physics research, in seizing these new technologies, bring forward to benefit society at large?
Editorial board: T. Speer (chairman), F. Boudjema, J. Lauret, A. Naumann, L. Teodorescu, P. Uwer
Conference web-site: http://acat2010.cern.ch/
Programme and presentations: http://indico.cern.ch/conferenceDisplay.py?confId=59397
Session: Plenary contributions

- Computing Outside the Box: On Demand Computing & its Impact on Scientific Discovery
- History of the ROOT system: conception, evolution and experience
- Pattern recognition and estimation methods for track and vertex reconstruction
- LHC Cloud Computing with CernVM
- Analysis of medical images: the MAGIC-5 Project
- Data access in the High Energy Physics community
- Statistics challenges in HEP
- Automation of multi-leg one-loop virtual amplitudes
- Data Transfer Optimization - Going Beyond Heuristics
- How to Navigate Next Generation Programming Models for Next Generation Computer Architecture
- Tools for Dark Matter in Particle Physics and Astrophysics
- Lattice QCD simulations
- Scientific Computing with Amazon Web Services
- Applying the CUDA Computing Model to Event Reconstruction Software
- Application of Many-core Accelerators for Problems in Astronomy and Physics
- Numerical approach to Feynman diagram calculations: Benefits from new computational capabilities
- Summary of Track 1: Computing Technology for Physics Research
- Summary of Track 2: Data Analysis - Algorithms and Tools
- Summary of Track 3: Methodology of Computations in Theoretical Physics
Session: Track 1: Computing Technology for Physics Research

- EU-IndiaGrid2 - Sustainable e-infrastructures across Europe and India
- Teaching a Compiler your Coding Rules
- Computing at Belle II
- The Batch and DataCarousel systems at BNL: a tool and UI for efficient access to data on tape with fair-share policy capabilities
- The ALICE Online Data Quality Monitoring
- Building an Efficient Data Planner for Peta-scale Science
- Distributed parallel processing analysis framework for Belle II and Hyper Suprime-Cam
- Contextualization in Practice: The Clemson Experience
- Implementation of new WLCG services into the AliEn Computing model of the ALICE experiment before data taking
- Tools to use heterogeneous Grid schedulers and storage systems
- PROOF - Best Practices
- Optimizing CMS software to the CPU
- Optimization of Grid Resources Utilization: QoS-aware client to storage connection in AliEn
- NoSQL databases in CMS Data and Workflow Management
- PROOF on Demand
- Debbie: an innovative approach for the CMS Pixel Tracker web-based configuration DB
- PROOF - Status and New Developments
- A T3 non-grid end-user analysis model based on previously installed Grid Infrastructure
- AliEn2 and beyond
- Studies of the performance of an open-source batch system/scheduler (TORQUE/MAUI) implemented on a middle-sized Grid site
- Monitoring the software quality in FairRoot
- An improvement in the LVCT cache replacement policy for data grids
- How do the computing centres of ALICE connect? A social network analysis of cooperative ties
- Interoperating AliEn and ARC for a distributed Tier1 in the Nordic countries
Session: Track 2: Data Analysis - Algorithms and Tools

- Likelihood-based Particle Flow Algorithm at CDF for Accurate Energy Measurement and Identification of Hadronically Decaying Tau Leptons
- Classifying extremely imbalanced data sets
- SFrame - A high-performance ROOT-based framework for HEP analysis
- Online Filtering for Radar Detection of Meteors
- Absorbing systematic effects to obtain a better background model in a search for new physics
- Analysis of Photoluminescence measurement data from interdiffused Quantum Wells by a Real-coded Quantum-inspired Evolutionary Algorithm
- ATLAS Second-Level Electron/Jet Neural Discriminator based on Nonlinear Independent Components
- High-volume data monitoring with RootSpy
- mc4qcd: web-based analysis and visualization tool for Lattice QCD
- TMVA - Toolkit for Multivariate Data Analysis
- Fast Parallelized Tracking Algorithm for the Muon Detector of the CBM Experiment at FAIR
- The RooStats project
- Fourier Transforms as a tool for Analysis of Hadron-Hadron Collisions
- Fast Parallel Ring Recognition Algorithm in the RICH Detector of the CBM Experiment at FAIR
- WatchMan Project - Computer Aided Software Engineering applied to HEP Analysis Code Building for LHC
- Parallel approach to online event reconstruction in the CBM experiment
- FATRAS - A Novel Fast Track Simulation Engine for the ATLAS Experiment
- Visual Physics Analysis - Applications in High-Energy and Astroparticle Physics
- ATLAS Physics Analysis Tools
- The SHUTTLE: the ALICE Framework for the extraction of conditions data
- HepData - the HEP data archive reloaded
- Using TurboSim for Fast Detector Simulation
- Automating CMS Calibrations using the WMAgent framework
- Parallelization of event generation for data analysis techniques
- Alignment of the ATLAS Inner Detector
- Parallelization of the Neutron Transport Code ATES3 on BARC's Parallel System
Session: Track 3: Methodology of Computations in Theoretical Physics

- Status of the FORM project
- Parallel versions of the symbolic manipulation system FORM
- Deterministic numerical box and vertex integrations for one-loop hexagon reductions
- Recursive reduction of tensorial one-loop Feynman integrals
- Automated Computation of One-loop Scattering Amplitudes
- Calculating one-loop multileg processes: a program for the case of $gg\rightarrow t \bar{t}+gg$
- The automation of subtraction schemes for next-to-leading order calculations in QCD
- FeynHiggs 2.7
- New developments in event generator tuning techniques
- IR subtraction schemes
- Applications of FIESTA
- Sector decomposition via computational geometry
- Multiple Polylogarithms and Loop Integrals
- Two-Loop Fermionic Integrals in Perturbation Theory on a Lattice
- Unstable-particle pair production in modified perturbation theory at NNLO