Abstract | ||
---|---|---|
The CMS experiment expects to manage several petabytes of data each year during the LHC programme, distributing them over many
computing sites around the world and enabling data access at those centres for analysis. CMS has identified the distributed
sites as the primary location for physics analysis, in order to support a wide community with thousands of potential users. This
represents an unprecedented experimental challenge in terms of the scale of distributed computing resources and the number of
users. An overview of the computing architecture, the software tools and the distributed infrastructure is reported. Summaries
of the experience gained in establishing efficient and scalable operations in preparation for CMS distributed analysis are
presented, followed by an account of the users' experience in their current analysis activities. |
Year | DOI | Venue |
---|---|---|
2010 | 10.1007/s10723-010-9152-1 | J. Grid Comput. |
Keywords | Field | DocType |
LHC,CMS,Distributed analysis,Grid | Large Hadron Collider,User experience design,World Wide Web,Grid computing,Computer science,Software,Data access,Data management,Grid,Scalability,Distributed computing | Journal |
Volume | Issue | ISSN |
8 | 2 | 1570-7873 |
Citations | PageRank | References |
4 | 0.40 | 2 |
Authors | ||
---|---|---|
79 | ||