Establishing trust in data is critical. Organizations are now employing AI, Machine Learning, and Blockchain to ensure data reliability and integrity.
Establishing trust in data is essential for businesses and institutions whose lifeblood is credible, reliable information. As enterprises seek to manage data as an asset, it becomes increasingly vital that data sources are trusted and verifiable.
I wrote a few weeks ago about the MIT initiative to establish a framework for trusted data, and the resulting position paper, “Towards an Internet of Trusted Data: A New Framework for Identity and Data Sharing”. The authors highlight the criticality and need for “trustworthy, auditable data provenance” where “systems must automatically track every change that is made to data, so it is auditable and completely trustworthy”. One of the key recommendations of the study was to improve the process and quality of data sharing. One suggestion was to move the algorithm to the data, explaining “The concept here is to perform the algorithm (i.e. query) execution at the location of data (referred to as the data-repository). This implies that raw-data should never leave its repository, and access to it is controlled by the repository/data owner”.
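To make the "move the algorithm to the data" idea concrete, here is a minimal sketch in Python. The class and query names are hypothetical and not part of the MIT framework itself; the point is simply that the repository executes an owner-approved query locally and returns only the result, so raw records never leave the data owner's control.

```python
# Minimal sketch of "moving the algorithm to the data": a hypothetical
# DataRepository runs a submitted query locally and returns only the
# result, so raw records never leave the repository and the owner
# controls which algorithms are allowed to run.

from typing import Any, Callable, Iterable


class DataRepository:
    """Holds raw records; only owner-approved queries may run against them."""

    def __init__(self, records: Iterable[dict], approved_queries: set):
        self._records = list(records)            # raw data stays inside this object
        self._approved_queries = approved_queries

    def execute(self, name: str, query: Callable[[list], Any]) -> Any:
        # The repository owner gates which algorithms may run on the data.
        if name not in self._approved_queries:
            raise PermissionError(f"query '{name}' is not approved by the data owner")
        # The algorithm is brought to the data; only its result is returned.
        return query(self._records)


# Example usage: compute an average without ever exporting the raw records.
repo = DataRepository(
    records=[{"notional": 120.0}, {"notional": 80.0}, {"notional": 100.0}],
    approved_queries={"average_notional"},
)

avg = repo.execute(
    "average_notional",
    lambda rows: sum(r["notional"] for r in rows) / len(rows),
)
print(avg)  # 100.0
```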
Tom Dunlap has been at the center of issues of data trust, standardization, and normalization for well over a decade. Dunlap most recently served as a managing director at Goldman Sachs, where he was global head of enterprise data strategy and reference data operations during his seventeen-year tenure with the firm. Among other responsibilities, Dunlap served on Goldman Sachs' operations data digitization council and financial reform steering group. He also serves as a member of the Financial Research Advisory Committee at the US Treasury Department's Office of Financial Research.
From his catbird seat at the heart of the action in financial services, Dunlap developed some informed perspectives on issues of data trust and data reliability. He sees the financial services industry progressing on a path to enriched data quality and reliability. Dunlap notes, “From the top on down, financial services firms are viewing data as a corporate asset, where data is seen as being foundational to achieving not only compulsory needs with regulatory reporting, but also as improving the client experience and enabling commercial initiatives”. Dunlap cites as an example the introduction of the Legal Entity Identifier (LEI), which is being employed by financial services firms to manage systemic risk. In addition, financial services firms are tracking data lineage and definitions of data, with the result that data can be traced from production through consumption, to accurately understand the points at which data is being used and how that data is being transformed during its lifecycle. The result, notes Dunlap, is that “data can now be trusted, and verified, from the source, with fewer data quality problems being experienced”. The benefit is that higher levels of data quality translate into faster time-to-market for activities including product profiling and pricing, and faster trade executions. The net result is that client experience has improved.
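As an illustration of the kind of lineage tracking Dunlap describes, the following Python sketch (hypothetical names, not any particular firm's implementation) records each transformation applied to a dataset, so a value consumed downstream can be traced back through every step to its source.

```python
# Illustrative sketch of data lineage tracking: each transformation appends
# a record of what was done, when, and from which upstream source, so a
# value can be traced from production through consumption.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LineageEvent:
    step: str        # e.g. "parse_price"
    source: str      # upstream dataset or system the data came from
    timestamp: str


@dataclass
class TrackedDataset:
    name: str
    rows: list
    lineage: list = field(default_factory=list)

    def transform(self, step: str, fn) -> "TrackedDataset":
        # Apply a transformation and record it in the lineage trail.
        new_rows = [fn(row) for row in self.rows]
        event = LineageEvent(
            step=step,
            source=self.name,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
        return TrackedDataset(
            name=f"{self.name}/{step}",
            rows=new_rows,
            lineage=self.lineage + [event],
        )


# Example usage: trace a vendor price feed from ingestion through cleanup.
raw = TrackedDataset(name="vendor_price_feed", rows=[{"px": "101.5"}, {"px": "99.0"}])
clean = raw.transform("parse_price", lambda r: {"px": float(r["px"])})
for event in clean.lineage:
    print(event.step, "<-", event.source, "@", event.timestamp)
```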
Source/More: Getting To Trusted Data Via AI, Machine Learning And Blockchain