According to the ISA Legal Decision (DECISION No 922/2009/EC), issues such as the efficiency, effectiveness and relevance of the ISA programme actions shall be assessed as well as their performance against the objective of the ISA programme and the rolling work programme. In this context, the ISA Dashboard aims at providing the key stakeholders of the ISA programme with the ability to track the progress of each ISA action over time. The data displayed is based on the inputs provided by the ISA project officers and the work programme 2015.
‘Efficiency’ is a measurable concept, quantitatively determined by the ratio of output to input. It describes the amount of time or effort used to perform an intended task or achieve a specific purpose. In other terms, efficiency aims at evaluating the extent to which the desired effects are achieved at a reasonable cost.
In the framework of the ISA programme, efficiency is evaluated at action and programme level according to the Earned Value Management (EVM) and Earned Schedule (ES) techniques for most actions.
- EVM is a systematic approach to the integration and measurement of cost, schedule and technical (scope) accomplishments of an action and its related pieces of work (i.e. milestones).
- ES is an extension of EVM: it complements the EVM approach by analysing performance from a time (schedule) perspective.
Note: The Earned Value Management technique tailored for the ISA programme is based on the Earned Value Management Tutorial, Module 1: Introduction to Earned Value Management, prepared by Booz Allen Hamilton for the Department of Energy, United States of America.
The Earned Schedule technique tailored for the ISA programme is based on the guidelines Earned Schedule in Action, developed by Kim Henderson, from Project Management Institute (PMI) Oklahoma, 13.07.2007.
For the remaining actions, which mainly concern maintenance and support, efficiency is monitored by tracking their actual costs.
- Earned Value (EV): Also known as Budget Cost of Work Performed (BCWP), the EV can be defined as the quantification of the ‘worth’ of the work done to date.
- Planned Value (PV): Also known as Budget Cost of Work Scheduled (BCWS), the PV is the sum of the approved budget for each WBS level, i.e. action, official phase, specific contract, work package, milestone. The total PV of an action must be equal to its Budget at Completion (BAC).
- Actual Cost (AC): The AC is the executed budget for achieving the work of an ISA action. The executed budget will only be known at the end of a Work Programme year. Since most of the specific contracts are fixed-price, it is not relevant to keep track of the AC before the end of a Work Programme year. For the sake of simplification, the AC is therefore considered equal to the PV.
- Budget At Completion (BAC): The BAC is the sum of the budgets allocated to an action. An action BAC must always be equal to its total PV (aggregation of the official phases PV).
- Cost Variance (CV): The CV is the difference between the earned value of the work performed and the executed budget (Actual Cost). CV= EV-AC.
- Schedule Variance (SV): The SV is the difference between the earned value of the work performed and the planned value of the work scheduled. SV= EV-PV. Since PV is equal to AC, then CV=SV.
- Cost Performance Index (CPI): The CPI measures the value of the work performed over its actual cost (measure of cost efficiency). CPI= EV/AC.
- Schedule Performance Index (SPI): The SPI measures the value of the work performed over the value of the work scheduled (measure of schedule efficiency). SPI= EV/PV. Since PV is equal to AC then CPI=SPI.
- Estimate At Completion (EAC): EAC corresponds to the sum of the AC to date with an objective estimate of costs for the remaining authorised work. The objective in preparing an EAC is to provide an accurate cost projection at the completion of the action. EAC= AC + Estimate to Complete (ETC).
- Variance at Completion (VAC): The VAC gives a projection of the amount over or under the budget at completion of the project. VAC= BAC-EAC.
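By way of illustration, the following minimal Python sketch computes these metrics for a single action from hypothetical figures; the function and variable names, as well as the sample values, are assumptions made for this example and do not correspond to any real ISA action.

```python
# Minimal sketch of the EVM metrics defined above, using hypothetical figures.
# Under the ISA simplification described above, AC is taken equal to PV before year-end.

def evm_metrics(ev: float, pv: float, bac: float, etc: float) -> dict:
    ac = pv                        # simplification: Actual Cost assumed equal to Planned Value
    cv = ev - ac                   # Cost Variance
    sv = ev - pv                   # Schedule Variance (equals CV under the simplification)
    cpi = ev / ac if ac else None  # Cost Performance Index
    spi = ev / pv if pv else None  # Schedule Performance Index (equals CPI)
    eac = ac + etc                 # Estimate At Completion
    vac = bac - eac                # Variance At Completion
    return {"CV": cv, "SV": sv, "CPI": cpi, "SPI": spi, "EAC": eac, "VAC": vac}

# Hypothetical action: BAC of 100 000, 60 000 planned to date, 54 000 earned,
# and an estimate of 45 000 to complete the remaining work.
print(evm_metrics(ev=54_000, pv=60_000, bac=100_000, etc=45_000))
```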
Based on the action's Work Breakdown Structure (WBS), i.e. a family-tree subdivision of the work effort, efficiency can be computed at five different levels, namely:
1. Action: according to Article 2 (f) of the ISA Legal Decision, an action can be a study, a project or an accompanying measure.
2. ISA Official phase: according to Article 5 (3) of the ISA Legal Decision, “a project shall, where appropriate, have three phases: the inception phase, which leads to the establishment of the project charter, the execution phase, the end of which shall be marked by the execution report, and the operational phase, which starts when a solution is made available for use”.
3. Specific contract: Specific contracts are contracts signed with external contractors working on the action.
4. Work package: A work package is a deliverable or action work component at the highest level of the WBS of a specific contract. A work package is considered completed once all the tasks (milestones) it includes are completed.
5. Milestone: A milestone marks the completion of a major deliverable of a work package. At milestone level, all the necessary major deliverables to be produced within a work package must be listed.
However, only three levels are displayed on the ISA Dashboard i.e. action, work package and milestone levels.
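One possible way to represent this roll-up across the three displayed levels is sketched below, assuming a simple nested structure in which milestone values are aggregated into work packages and then into the action; the class names, fields and figures are illustrative assumptions, not part of the ISA specification.

```python
# Illustrative roll-up of PV/EV along the three WBS levels shown on the dashboard:
# milestone -> work package -> action. Names and figures are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Milestone:
    name: str
    pv: float  # planned value of the milestone
    ev: float  # earned value of the milestone

@dataclass
class WorkPackage:
    name: str
    milestones: List[Milestone] = field(default_factory=list)

    @property
    def pv(self) -> float:
        return sum(m.pv for m in self.milestones)

    @property
    def ev(self) -> float:
        return sum(m.ev for m in self.milestones)

@dataclass
class Action:
    name: str
    work_packages: List[WorkPackage] = field(default_factory=list)

    @property
    def pv(self) -> float:
        return sum(wp.pv for wp in self.work_packages)

    @property
    def ev(self) -> float:
        return sum(wp.ev for wp in self.work_packages)

    @property
    def spi(self) -> float:
        return self.ev / self.pv if self.pv else 0.0

# Hypothetical example: one work package with two milestones.
action = Action("Hypothetical action", [
    WorkPackage("WP1 - Analysis", [Milestone("M1.1", pv=10_000, ev=10_000),
                                   Milestone("M1.2", pv=15_000, ev=9_000)]),
])
print(action.pv, action.ev, round(action.spi, 2))  # 25000 19000 0.76
```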
EVM cannot be applied to actions that have not started yet, are currently on hold, or have started but for which the implementation of EVM is not yet finalised (e.g. the WBS is currently being built).
‘Effectiveness’ describes the extent to which the objectives of an action have been achieved. In other terms, effectiveness aims at evaluating whether an action, classified as a project or as an accompanying measure, delivers the expected outcomes. Short-term effectiveness, which is the current focus, relates to the measurement of intended outcomes.
Effectiveness is monitored based on identified outcome-indicators that are specific to each ISA action. These outcome-indicators are identified based on the following four-phase approach:
- Phase 1: Definition of the action ‘objectives’, ‘benefits’, ‘outputs’ and related ‘outcomes’.
- Phase 2: Translation of outcomes into relevant outcome-indicators.
- Phase 3: Definition of major elements of the data collection process (e.g. source of the data, person in charge of providing the data, adequate reporting frequency, baseline of the indicator).
- Phase 4: Definition of targets for each outcome-indicator.
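As an illustration of the result of this four-phase approach for a single outcome indicator, the hypothetical record below brings together the elements produced by each phase; the field names and values are assumptions for this example only.

```python
# Hypothetical outcome-indicator definition, covering the elements produced by
# phases 1 to 4: the outcome it translates, its data collection process and its target.
from dataclasses import dataclass

@dataclass
class OutcomeIndicator:
    action: str               # ISA action the indicator belongs to (phase 1)
    outcome: str              # outcome the indicator translates (phase 2)
    name: str                 # the outcome-indicator itself (phase 2)
    data_source: str          # source of the data (phase 3)
    data_owner: str           # person in charge of providing the data (phase 3)
    reporting_frequency: str  # adequate reporting frequency (phase 3)
    baseline: float           # baseline of the indicator (phase 3)
    target: float             # target defined for the indicator (phase 4)

indicator = OutcomeIndicator(
    action="Hypothetical action",
    outcome="Increased reuse of the solution by public administrations",
    name="Number of Member States using the solution",
    data_source="Solution usage statistics",
    data_owner="ISA project officer",
    reporting_frequency="quarterly",
    baseline=5,
    target=15,
)
```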
The results of these outcome indicators can be classified into the following four categories:
- Target met: outcome indicators having their last measurements reaching the target;
- Target not met: outcome indicators having their last measurements not reaching the target;
- Not due: outcome indicators having a target defined but no measurement yet;
- Not yet rated: outcome indicators without any target defined at the moment.
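A minimal sketch of how this classification could be derived from the indicator data follows, assuming each indicator carries an optional target and an optional last measurement, and that a higher measurement is better (indicators measured in the opposite direction would need the comparison reversed); the function and field names are illustrative.

```python
# Classification of an outcome indicator into the four categories listed above.
# The record layout (target / last_measurement) is an assumption for illustration.
from typing import Optional

def classify_indicator(target: Optional[float],
                       last_measurement: Optional[float]) -> str:
    if target is None:
        return "Not yet rated"   # no target defined at the moment
    if last_measurement is None:
        return "Not due"         # target defined but no measurement yet
    if last_measurement >= target:
        return "Target met"      # last measurement reaches the target
    return "Target not met"      # last measurement does not reach the target

# Hypothetical examples
print(classify_indicator(target=15, last_measurement=17))     # Target met
print(classify_indicator(target=15, last_measurement=None))   # Not due
print(classify_indicator(target=None, last_measurement=None)) # Not yet rated
```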
Effectiveness is only evaluated at action level for the actions classified as ‘project’ and ‘accompanying measure’.
‘Availability’ is defined as the ability of a configuration item or IT service to perform its agreed function when required. In other terms, availability aims at evaluating the uptime of the service over the total time when the service should operate during one day.
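In practice this can be expressed as the measured uptime divided by the agreed service time; the short sketch below uses hypothetical figures and a daily reporting period, which are assumptions for illustration.

```python
# Availability as uptime over agreed service time, here reported per day.
def availability(uptime_hours: float, agreed_service_hours: float) -> float:
    return uptime_hours / agreed_service_hours if agreed_service_hours else 0.0

# Hypothetical example: a service agreed to run 24h/day that was down for 30 minutes.
print(f"{availability(23.5, 24.0):.2%}")  # 97.92%
```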
‘Perceived quality’ indicates the extent to which an action satisfies and brings added value to its end users. In other terms, perceived quality aims at evaluating how well the outputs of an ISA action are meeting or exceeding its direct beneficiaries’ expectations (e.g. public administrations).
The measurement of perceived quality has to be performed on a regular basis during the lifecycle of an ISA action, in particular at specific stages called ‘pitch points’. A pitch point can be defined as a point in time when a measurement of perceived quality is performed; in other words, it may consist in one of the following:
- Availability of new functionalities;
- Release of a new version of a solution;
- Roll out phase of a new tool.
In order to ensure the completeness of the perceived quality measurement, a series of five questions needs to be answered:
- Who: select the targeted population for the perceived quality survey
- Where: identify specific moments called “pitch points” which may consist in either availability of new functionality, release of a new version of a solution or roll out phase of a new tool
- When: propose a schedule of different pitch points according to the different releases
- What: select, from the perceived quality criteria defined below, those which may be subject to measurement
- How: build a questionnaire to be addressed to the targeted population on “perceived quality”
At action level, the measurement of perceived quality aims at evaluating the quality of the output(s) produced by an action based on pre-defined criteria such as responsiveness, security, usability and utility, which are defined below:
- Responsiveness consists in providing a prompt service to customers’ requests;
- Security consists in inspiring trust regarding information exchange;
- Usability consists in ensuring that a generic tool or service can be used by specified users to meet their needs (as specified in the ISA action project charter);
- Utility consists in assessing new functionalities (or improvements) supplied by a generic tool or a common service in terms of usefulness and effectiveness.
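Purely as an illustration of how survey answers on these criteria could be summarised, the sketch below computes an average score per criterion; the 1-to-5 scale and the figures are assumptions and do not reflect the actual ISA questionnaire.

```python
# Hypothetical summary of perceived quality survey answers per criterion,
# on an assumed 1-5 scale; neither the scale nor the figures are prescribed by ISA.
from statistics import mean

responses = {
    "responsiveness": [4, 5, 3, 4],
    "security":       [5, 4, 4, 5],
    "usability":      [3, 3, 4, 2],
    "utility":        [4, 4, 5, 4],
}

for criterion, scores in responses.items():
    print(f"{criterion:>14}: {mean(scores):.2f} / 5")
```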
Perceived quality cannot be measured for the actions which are in the inception phase and the actions that deliver enablers for interoperability.
‘Relevance’ shows the extent to which an action’s objectives are pertinent to the needs, problems and issues to be addressed. In other terms, relevance aims at evaluating the level of contribution of an action towards the achievement of the objectives stated in the EIS document (i.e. focus areas).
The assessment of the relevance criterion is performed at two levels:
- ISA Action Project Officers level: to identify which ISA action is contributing to which principle or objective
- Member States level: to assess the perceived contribution of the ISA Programme (at cluster level) through its specific actions against the above principles and objectives
The overall efficiency rating is the average of the "Schedule Performance Index" (SPI) rating and the "Percentage of delay" (% of delay) rating.
Overall efficiency rating value is (SPI Rating + Delay Rating) / 2
Overall efficiency rating colors explained:
- Red if Overall Rating <= 3
- Yellow if Overall Rating >3 and < 7
- Green if Overall Rating >= 7
SPI Rating and Delay Rating colors explained (the same bands apply to both ratings):
- Red label indicating "Significant concerns", when the rating <= 3
- Yellow label indicating "Needs attention", when the rating > 3 and < 7
- Green label indicating "Good", when the rating >= 7
SPI Rating value is calculated as follows:
If SPI Value >= 1 then SPI Rating = 10
if SPI Value >= 0.95 and SPI Value < 1, then SPI Rating = 9
if SPI Value >= 0.90 and SPI Value < 0.95, then SPI Rating = 7
if SPI Value >= 0.80 and SPI Value < 0.90, then SPI Rating = 5
if SPI Value < 0.80, then SPI Rating = 3
Delay rating value is calculated as follows:
If Delay Value >= 0, then Delay Rating = 10
if Delay Value >= -0.05 and Delay Value < 0, then Delay Rating = 9
if Delay Value >= -0.10 and Delay Value < -0.05, then Delay Rating = 7
if Delay Value >= -0.20 and Delay Value < -0.10, then Delay Rating = 5
if Delay Value < -0.20, then Delay Rating = 3
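Putting the rules above together, a minimal Python sketch of the rating computation could look as follows; the thresholds and colour bands are taken from the rules listed above, while the function names and the sample figures are illustrative assumptions.

```python
# Rating rules as listed above: SPI rating, delay rating, overall rating and colour bands.
def spi_rating(spi: float) -> int:
    if spi >= 1:
        return 10
    if spi >= 0.95:
        return 9
    if spi >= 0.90:
        return 7
    if spi >= 0.80:
        return 5
    return 3

def delay_rating(delay: float) -> int:
    if delay >= 0:
        return 10
    if delay >= -0.05:
        return 9
    if delay >= -0.10:
        return 7
    if delay >= -0.20:
        return 5
    return 3

def colour(rating: float) -> str:
    if rating <= 3:
        return "Red (significant concerns)"
    if rating < 7:
        return "Yellow (needs attention)"
    return "Green (good)"

# Hypothetical action with an SPI of 0.92 and a delay of -8% (behind schedule).
spi_r, delay_r = spi_rating(0.92), delay_rating(-0.08)
overall = (spi_r + delay_r) / 2
print(spi_r, delay_r, overall, colour(overall))  # 7 7 7.0 Green (good)
```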