Possible evolutions of the regulatory environment for the new digital ecosystem

Giacomo Rossoni, 27 September 2015

Recent advances in computer and communication technology, such as those enabled by cloud computing, are turning storage, computing and network resources into commodities. The actors involved in this evolution of the digital ecosystem are users, service providers, cloud operators, equipment vendors and, of course, market regulators.

Availability of storage as a commodity should imply that users can freely migrate their data (e.g. contacts, photos, videos, posts) from one service provider to another. To enable such migration, service providers must expose an open API through which user data can be exported and imported, and the regulatory framework should mandate compliance of each provider's implementation with the reference open API.
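As a minimal sketch of how such a reference API might be consumed, the following hypothetical Python client exports a user's data from one provider and imports it into another over REST endpoints. The /export and /import paths, the token-based authentication and the provider URLs are assumptions made for illustration; no existing standard is implied.

```python
import requests

def export_user_data(provider_url: str, token: str) -> dict:
    """Download the user's data (contacts, photos metadata, posts)
    as a provider-neutral JSON document."""
    resp = requests.get(
        f"{provider_url}/export",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def import_user_data(provider_url: str, token: str, data: dict) -> None:
    """Upload a previously exported JSON document to the new provider."""
    resp = requests.post(
        f"{provider_url}/import",
        headers={"Authorization": f"Bearer {token}"},
        json=data,
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # Migrate the user's data from the old provider to the new one.
    data = export_user_data("https://old-provider.example", "OLD_TOKEN")
    import_user_data("https://new-provider.example", "NEW_TOKEN", data)
```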

User devices should embed software that allows users to measure and monitor the quality of the services provided. An example is the Nemesys software of the Italian Ugo Bordoni Foundation, which can be installed on the user's PC to measure the quality of the internet connection. This approach is, however, not ideal: the monitoring software should run on the user equipment directly connected to the provider's network (e.g. the modem/router, or a smartphone/tablet).
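A very rough illustration of client-side quality measurement, in the same spirit as tools like Nemesys, is sketched below. It only measures HTTP round-trip times to a couple of reference URLs, which are arbitrary placeholders; a certified tool would also measure throughput, packet loss and jitter, ideally from the modem/router itself.

```python
import statistics
import time
import urllib.request

# Arbitrary reference URLs chosen for illustration only.
REFERENCE_URLS = [
    "https://www.example.com/",
    "https://www.example.org/",
]

def http_round_trip(url: str) -> float:
    """Return the time in seconds to fetch the URL once."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.monotonic() - start

def measure(samples: int = 5) -> None:
    """Print the median round-trip time per reference URL."""
    for url in REFERENCE_URLS:
        times = [http_round_trip(url) for _ in range(samples)]
        print(f"{url}: median {statistics.median(times) * 1000:.0f} ms "
              f"over {samples} samples")

if __name__ == "__main__":
    measure()
```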

Computing resources should be freely movable from one cloud operator to another, according to the needs of service providers. Indeed, efficient service implementation requires moving computing resources as close as possible to the user equipment, and user demand is expected to increase dramatically with the advent of the Internet of Things. It is therefore essential for service providers to be able to adapt the deployment of computing resources to match user demand in time and space. Portability of computing resources (i.e. software applications) means that they need to be containerized (e.g. deployed in Docker containers).
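To make the idea of containerized portability concrete, the hedged sketch below uses the Docker SDK for Python to start the same container image on whichever operator's Docker host is preferred at the moment. The host URLs and the image name are placeholders chosen for this example.

```python
import docker

# Placeholder Docker hosts representing two different cloud operators;
# the URLs and the image name are assumptions made for illustration.
CLOUD_A = "tcp://cloud-a.example:2376"
CLOUD_B = "tcp://cloud-b.example:2376"
IMAGE = "registry.example/my-service:1.0"

def deploy(docker_host: str) -> str:
    """Run the containerized service on the given Docker host and
    return the resulting container id."""
    client = docker.DockerClient(base_url=docker_host)
    container = client.containers.run(IMAGE, detach=True)
    return container.id

if __name__ == "__main__":
    # The same image can be deployed unchanged on either operator,
    # which is what portability of computing resources means here.
    container_id = deploy(CLOUD_A)
    print(f"service running on cloud A as {container_id}")
```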

The service provider will use an orchestrator system to deploy, move and monitor software containers in the underlying cloud/fog networks, possibly exploiting different cloud operators. The cloud operator must therefore support an open API through which software containers can be deployed, undeployed and monitored. An example of standardization work in this area is the Open Container Initiative. An independent authority should run interoperability tests to validate application portability from one cloud operator to another; the tests could be run against a reference orchestrator implementation of the open standard.
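The following sketch imagines what such an interoperability test could look like: it deploys the same container specification through two operators' orchestration endpoints and checks that both report it as running. The /containers resource, its JSON fields and the status values are hypothetical, not part of any published standard.

```python
import requests

# Hypothetical orchestration endpoints exposed by two cloud operators;
# paths, schema and status values are assumptions for illustration only.
OPERATORS = {
    "operator-a": "https://api.cloud-a.example/v1",
    "operator-b": "https://api.cloud-b.example/v1",
}

SPEC = {"image": "registry.example/my-service:1.0", "cpu": 1, "memory_mb": 512}

def deploy(base_url: str, spec: dict) -> str:
    """Ask the operator to deploy the container spec; return its id."""
    resp = requests.post(f"{base_url}/containers", json=spec, timeout=30)
    resp.raise_for_status()
    return resp.json()["id"]

def status(base_url: str, container_id: str) -> str:
    """Query the operator for the container's current status."""
    resp = requests.get(f"{base_url}/containers/{container_id}", timeout=30)
    resp.raise_for_status()
    return resp.json()["status"]

def interoperability_test() -> None:
    """Deploy the same spec on every operator and verify it runs."""
    for name, base_url in OPERATORS.items():
        container_id = deploy(base_url, SPEC)
        assert status(base_url, container_id) == "running", name
        print(f"{name}: spec deployed and running ({container_id})")

if __name__ == "__main__":
    interoperability_test()
```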

Availability of network as a commodity should imply that cloud operators can build their network infrastructure using network equipment from different vendors. Equipment vendors must ensure their devices are interoperable both at the network protocol level and at the network management level. Network protocol interoperability is clearly mandatory for end-to-end communication in the network data plane, while network management interoperability is key for end-to-end service assurance across control planes from different vendors.

Interoperability at the network management level can be achieved via software-defined network (SDN) controllers. An example of standardization of the network management interface is the work of the Open Networking Foundation. An independent authority could run tests on SDN controllers from different equipment vendors against a reference manager implementation of the standard interface.
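As with container orchestration, such tests could be scripted against the controllers' management interfaces. The sketch below compares the topology reported by two vendors' SDN controllers with the topology produced by a reference manager implementation, through a hypothetical common REST endpoint; the /topology path and the JSON shape are assumptions, not the standardized interface itself.

```python
import requests

# Hypothetical management endpoints of SDN controllers from two vendors;
# a real test suite would target the interface standardized by the
# Open Networking Foundation.
CONTROLLERS = {
    "vendor-a": "https://sdn-a.example/management",
    "vendor-b": "https://sdn-b.example/management",
}

def get_topology(base_url: str) -> dict:
    """Fetch the network topology as reported by a controller."""
    resp = requests.get(f"{base_url}/topology", timeout=30)
    resp.raise_for_status()
    return resp.json()

def check_against_reference(reference: dict) -> None:
    """Verify each controller reports the same nodes and links
    as the reference manager implementation."""
    for vendor, base_url in CONTROLLERS.items():
        topo = get_topology(base_url)
        same_nodes = set(topo.get("nodes", [])) == set(reference["nodes"])
        same_links = (set(map(tuple, topo.get("links", [])))
                      == set(map(tuple, reference["links"])))
        print(f"{vendor}: {'PASS' if same_nodes and same_links else 'FAIL'}")

if __name__ == "__main__":
    # Reference topology hard-coded purely for the sake of the example.
    reference_topology = {"nodes": ["s1", "s2"], "links": [["s1", "s2"]]}
    check_against_reference(reference_topology)
```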