
“Over time, the dynamic role of the data governance process will simply become a part of work. It is the mechanism that enables humans and machines to work effectively together.” - John Sumser


Catch Up on the Series

Haven’t read the earlier installments? Catch up on the series using the links at the bottom of this post.

In part III of this series, we’ll be looking at the role of data cleanliness, data models, and process governance. The tools and techniques required for the management of intelligent tools at scale are still in their infancy.

Readiness Equals Data Cleanliness, Which Is Foundational to Governance

On one level, data cleaning and governance present the classic ‘perfection is the enemy of the possible’ question. Still, the consequence of the first generation of cloud configurations is an ocean of unique workflows that do the same thing for different KPIs, all of which are named differently. Current machine-learning-based intelligent technology works better with structured information. That means having the same steps in a process and using standard protocols to define the fields. Unstructured information is useful for mining underlying trends and concepts, but not so great for systems that lead to process improvement.

Part of the appeal of older systems was the fact that they allowed a great deal of freedom in task-specific workflows. Rather than centralizing a workflow authorization process, individual users were often allowed to build unique, situation-specific workflows. We’ve seen cases where companies developed over 300 detailed recruiting workflows in which identical steps in the process were called different things.

It’s not a good idea to solve this problem by edict. People are comfortable with the way that they work and will resist authoritarian changes to the way they work. The process has to move slowly and win the commitment of users, almost one at a time.

Once begun, the data governance process becomes a part of ongoing operations.

First, you need an inventory of all fields currently in use across all of the systems that you might want data from. This is often harder than it sounds.

Second, the comprehensive list needs to be evaluated for data content redundancy, over-complication, duplicate names, and overlapping processes. Each item should include identification of human users, machine users, sources, and update frequency.

Third, visibility, reporting, and machine processes must be assessed to define a clear picture of output requirements. This is where having a clear picture of where you want to go really matters.

Fourth is the hard work. Through a process of interaction and education, talk with each user (who has workflow permissions) about the tradeoffs between task customization and system level management capabilities.

Finally, establish a governance committee responsible for understanding and navigating the issues that will emerge over time. As data volumes increase, it is important for the organization as a whole to be cognizant of the implications. In the 21st Century, data is infrastructure. This is how you manage and maintain it.
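The first two steps above, building a field inventory and flagging duplicate names, can be sketched in a few lines of code. This is an illustrative sketch only; the field names, system names, and the `FieldRecord` structure are assumptions for the example, not anything prescribed by the governance process itself.

```python
from dataclasses import dataclass, field

@dataclass
class FieldRecord:
    """One field as it appears in one source system, with the metadata
    each inventory item should carry: users, sources, update frequency."""
    system: str
    name: str
    human_users: list[str] = field(default_factory=list)
    machine_users: list[str] = field(default_factory=list)
    update_frequency: str = "unknown"

def find_duplicate_names(inventory: list[FieldRecord]) -> dict[str, list[str]]:
    """Group systems by normalized field name to surface likely duplicates."""
    groups: dict[str, list[str]] = {}
    for rec in inventory:
        # Normalize so 'Hire_Date' and 'hire date' land in the same bucket.
        key = rec.name.strip().lower().replace("_", " ")
        groups.setdefault(key, []).append(rec.system)
    # Keep only names that appear in more than one system.
    return {name: systems for name, systems in groups.items() if len(systems) > 1}

inventory = [
    FieldRecord("ATS", "Hire_Date"),
    FieldRecord("HRIS", "hire date"),
    FieldRecord("Payroll", "start_date"),
]
print(find_duplicate_names(inventory))  # {'hire date': ['ATS', 'HRIS']}
```

Even this toy version shows why the work is hard: simple name normalization catches ‘Hire_Date’ versus ‘hire date’, but ‘start_date’ may describe the same underlying data under a different name, and only the human conversations in step four can resolve that.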

Over time, the dynamic role of the data governance process will simply become a part of work. It is the mechanism that enables humans and machines to work effectively together.


John Sumser is a Principal Analyst for HRExaminer.

Control of Data Model Standards and Process Governance

You will find it hard to imagine the number of discrete data models that will be used in your organization. They wear out, require maintenance, create liability, and imply service level promises to employees. Success requires keeping the big picture in mind.

The tools and techniques required for management of intelligent tools at scale are just starting to be developed. It’s becoming clear that every company will be using multiple data models per employee, customer, division, project, work team, candidate pool, individual candidate, alumni, investor, and other stakeholders. Understanding the health of each data model, and knowing how to prioritize its repairs and improvements, is critical.
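One way to make “knowing how to prioritize repairs” concrete is a simple scoring pass over the model inventory. Everything here is a hypothetical illustration: the health indicators (`stale_fields`, `open_defects`, `consumers`) and the weighting are assumptions chosen for the sketch, not a standard metric.

```python
from dataclasses import dataclass

@dataclass
class DataModel:
    name: str
    stale_fields: int   # fields with no recent updates
    open_defects: int   # known data-quality issues
    consumers: int      # reports, processes, and tools that depend on it

def repair_priority(model: DataModel) -> int:
    """Crude score: models with more problems and more consumers come first.
    Defects are weighted double because they create active liability."""
    return (model.stale_fields + 2 * model.open_defects) * model.consumers

models = [
    DataModel("candidate_pool", stale_fields=4, open_defects=1, consumers=3),
    DataModel("employee_core", stale_fields=2, open_defects=3, consumers=10),
]
for m in sorted(models, key=repair_priority, reverse=True):
    print(m.name, repair_priority(m))
```

The point of the sketch is the shape of the discipline, not the formula: each model gets measured health indicators, and maintenance effort flows to the models where problems meet the most dependents.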

It would be hard to spend too much time trying to figure out this aspect of the future of work. It’s a cross between admin, IT, strategic planning, productivity measurement, and the foundation of work going forward.

These data models will be fed from various sources including on the job equipment, monitoring devices, communications tools, and software that uses public data. The organization’s data needs, production, and consumption will become a, if not the, central management preoccupation.

In my next article in the series, I’ll continue with how to evaluate a vendor in the areas of readiness, functionality, change, improvement, and support, along with compliance, liability, and warranties.

Catch Up on the Series