What is the ReComp project about?
- Data analytics is expensive to perform at large scale.
- It yields value, for example in the form of predictive models that provide actionable knowledge to decision makers.
- However, that value is liable to decay over time, as the premises behind it (data, algorithms, and other assumptions) become obsolete.
Based on these observations, ReComp addresses the following question:
Given a finite set of resources available for analytics tasks (including a monetary budget for cloud resources), and a set of analytical tasks that generated value in the past, with known costs and outcomes, how do we select candidates for re-computation in a way that maximises the expected return on those resources?
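To make the question concrete, the selection problem above can be viewed as a budget-constrained knapsack-style optimisation. The following is a minimal illustrative sketch, not ReComp's actual algorithm: it assumes each past task has an estimated re-computation cost and an estimated expected value for a refreshed outcome (both hypothetical fields), and greedily picks tasks by value-per-cost ratio until the budget is exhausted.

```python
from dataclasses import dataclass

@dataclass
class Task:
    # Hypothetical task descriptor; field names are illustrative, not ReComp's schema.
    name: str
    cost: float            # estimated cost of re-running the task
    expected_value: float  # estimated value of a refreshed outcome

def select_candidates(tasks, budget):
    """Greedily pick tasks with the best value/cost ratio until the budget runs out."""
    ranked = sorted(tasks, key=lambda t: t.expected_value / t.cost, reverse=True)
    chosen, spent = [], 0.0
    for t in ranked:
        if spent + t.cost <= budget:
            chosen.append(t)
            spent += t.cost
    return chosen

# Example with made-up costs and values:
tasks = [
    Task("variant-calling", cost=40, expected_value=100),
    Task("heat-island-model", cost=25, expected_value=80),
    Task("sensor-recalibration", cost=50, expected_value=60),
]
picked = select_candidates(tasks, budget=70)
print([t.name for t in picked])  # → ['heat-island-model', 'variant-calling']
```

A greedy ratio heuristic is only an approximation to the optimal knapsack solution, but it conveys the shape of the decision: rank past computations by expected benefit per unit cost and spend the budget on the top of the ranking.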
The ReComp project sets out to establish a rich metadata management infrastructure, with metadata analytics on top, to enable decision makers to answer this question.
We will validate our architecture and metadata analytics algorithms on two very different case studies:
- genetic diagnostics through next-generation sequencing (NGS) data processing, and
- Newcastle's Urban Observatory: predictive models built from smart city data collected from multiple, diverse sensors, specifically to study the Urban Heat Island effect in large cities.