In this expert interview series, Paige Bartley, Senior Analyst for Data and Enterprise Intelligence at Ovum, discusses the state of GDPR readiness, and how data quality, data availability and data lineage play into the GDPR compliance landscape.
For part one, Bartley focuses on how prepared organizations are for GDPR as well as some key challenges they may face.
Is the typical organization ready for the big day — May 25 — when the GDPR goes into effect?
In general, a significant portion of organizations will not be fully compliant with GDPR by the time the deadline passes. Of course, there is no such thing as a “typical” organization; GDPR readiness varies greatly across industry verticals, regions, and organization sizes. Those most likely to be prepared are large enterprise firms in highly regulated verticals, as they tend to already have the human processes and IT infrastructure in place for managing data at a fine-grained level. EU-based organizations, additionally, will have a head start in compliance efforts, as they have historically had to adjust business practices to accommodate the requirements of GDPR’s predecessor: the 1995 Data Protection Directive (Directive 95/46/EC).
Those that will struggle the most are smaller organizations, often based outside of Europe, that operate in historically unregulated verticals and have a minority of their customers or employees based in the EU.
What are the key challenges standing in the way of GDPR compliance for organizations that are not yet ready for the law to take effect?
Organizations face a number of difficulties as they travel the path to compliance. Let’s talk about the biggest ones.
First is the issue of documentation of processes. Even if an organization is unable to meet the May 25th deadline, it is critical that the steps taken towards compliance have been fully documented. Regulators will be more flexible with an organization that has taken good-faith measures to meet the deadline, as opposed to an organization that has failed to act entirely.
Mapping and identification of personal data are important, too. An enterprise cannot control or manage data that it cannot accurately and consistently locate within its IT ecosystem. However, today’s IT environments are increasingly distributed and heterogeneous, with data scattered across repositories in the cloud and in various databases and legacy systems. Therefore, it’s important to map and detect all instances of personal data within these environments. Simply knowing where personal data resides in the IT ecosystem is a major challenge for most organizations.
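To make the detection step concrete, here is a minimal sketch of pattern-based scanning for common personal data fields in free-text records. The patterns and the `find_personal_data` helper are illustrative assumptions, not a real discovery product; production data-mapping tools rely on far richer classifiers and catalog metadata.

```python
import re

# Illustrative patterns for two common categories of personal data.
# Real-world detection needs many more categories and validation logic.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]*\w"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def find_personal_data(record: str) -> dict:
    """Return the personal-data categories detected in a text record."""
    return {
        category: pattern.findall(record)
        for category, pattern in PII_PATTERNS.items()
        if pattern.search(record)
    }

# Scan a sample record for personal data.
hits = find_personal_data(
    "Contact Jane at jane.doe@example.com or +44 20 7946 0958."
)
```

A full mapping effort would run a scanner like this (or a commercial equivalent) across every repository and feed the results into a central inventory, so that each instance of a subject's data is known before any rights request arrives.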
Then there’s data erasure, data rectification, and data duplicates. The data subject’s rights to erasure and rectification under the GDPR are complicated by the prevalence of data silos within most organizations. Duplicate data is rife within most IT ecosystems, and just because data has been updated or deleted in one repository doesn’t mean that it has been updated or deleted in another. Furthermore, businesses that are unable to centrally search all of their repositories are at increased risk of being non-compliant.
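The duplicate-data problem can be sketched as a single erasure request fanned out to every known silo. The `Repository` class and silo names below are hypothetical; the point is that an erasure is only complete when every copy is removed, and the outcome per store should be recorded for auditability.

```python
# Hypothetical sketch: propagating one data subject's erasure request
# across multiple data silos, logging the outcome per repository.
class Repository:
    def __init__(self, name, records):
        self.name = name
        self.records = records  # maps subject id -> personal data

    def erase(self, subject_id):
        """Delete the subject's data; return True if anything was removed."""
        return self.records.pop(subject_id, None) is not None

def erase_everywhere(repositories, subject_id):
    """Apply an erasure request to every silo and report per-store results."""
    return {repo.name: repo.erase(subject_id) for repo in repositories}

silos = [
    Repository("crm", {"u42": {"email": "jane@example.com"}}),
    Repository("marketing", {"u42": {"email": "jane@example.com"}}),
    Repository("billing", {}),  # this store never held a copy
]
outcome = erase_everywhere(silos, "u42")
```

A rectification request follows the same fan-out shape, replacing the delete with an update, which is why a central index of where each subject's data lives pays off for both rights.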
Finally, there is the challenge of data transfer and data sovereignty. The GDPR has restrictions on where, physically, EU resident data can be processed. EU resident data needs to be processed either on EU servers or on servers in a country that has an “adequacy decision,” meaning that the country’s laws offer protections comparable to those of the EU. In the absence of either of these two conditions, the data of EU residents may be processed on non-EU servers only when certain safeguards have been established, such as binding corporate rules written into contracts or approved certification mechanisms.
This maze of legal requirements has made it difficult for organizations to determine when and where data may be legally processed: a daunting challenge in the cloud era, where compute location is often algorithmically and automatically determined (and optimized) based on pricing and server availability, directing data processing to servers around the world. GDPR’s broad definition of processing – even including the viewing of data – further compounds this challenge. Organizations need the capability to override automated, managed service decisions regarding where data will be processed, and need to be able to localize their EU data to EU servers or servers within countries that have adequacy decisions.
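The override capability described above amounts to a policy check that runs before any processing job is scheduled. The sketch below is an illustrative assumption: the country lists are abbreviated samples (the real adequacy list is maintained by the European Commission and changes over time), and `processing_allowed` stands in for whatever placement hook a given cloud platform exposes.

```python
# Illustrative, abbreviated samples only -- not the authoritative lists.
EU_MEMBER_STATES = {"DE", "FR", "IE", "NL"}
ADEQUACY_DECISIONS = {"CH", "JP", "NZ", "CA"}

def processing_allowed(server_country: str, has_safeguards: bool = False) -> bool:
    """Allow processing of EU resident data on EU servers, in countries
    with an adequacy decision, or when approved safeguards (e.g. binding
    corporate rules) are in place."""
    return (
        server_country in EU_MEMBER_STATES
        or server_country in ADEQUACY_DECISIONS
        or has_safeguards
    )
```

In practice, a check like this would veto the automated placement decision of a managed service, pinning EU data to compliant regions regardless of pricing or availability.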
This isn’t a complete list of the major GDPR challenges I’m seeing. These are just some of the key issues.
Check out part 2 tomorrow, when Bartley discusses the role data lineage, data quality, and data availability play in GDPR compliance.
If you want to learn more about GDPR compliance and how Syncsort can help, be sure to view our webcast on Data Quality-Driven GDPR: Compliance with Confidence.