Support for systematic data analytics (beyond ad hoc, one-off exercises) may imply new architectural and infrastructural requirements for a statistical institute. In some cases, these might best be tackled in a general overhaul of legacy systems.
At this daWos session, we will discuss how best to put an infrastructure for data analytics in place.
Questions that could be addressed include:
- What architectural and infrastructural requirements are specific to data analytics?
- What target architecture is generally adopted for data analytics use cases, e.g. plug-and-play design vs. full integration, ad hoc infrastructure vs. CSPA?
- Do we need a smart statistics architecture, in which data collection and analytics are embedded in production and service delivery systems?
- Do you foresee any need to adopt new architecture models (e.g., APIs, virtualised containers, distributed platforms, …) to support new data and computational requirements?
- What is the experience with processing on an unknown or flexible target architecture, for instance in the cloud, on the premises of the data provider (e.g., for remote secure processing of microdata), or in any other data centre?
- When adopting a modern/new architecture, (how) was the legacy architecture taken into account in the new model (e.g., by adopting virtual layers within a logical warehouse)?
- What is the actual impact of the fast-evolving architecture landscape (infrastructure, storage) on the adoption of data analytics solutions?