Syniti Podcast Series

Knowledge-Driven Data Management

Data plays a critical role in driving outcomes and decision-making as your business changes.





Introduction (01:06):

Our speaker is Rex Ahlstrom, Syniti's Chief Strategy and Technology Officer. In this session, Rex will navigate us through the next generation of data management for the new world. Rex is an industry expert with over 30 years of experience in data management and governance, and a frequent industry speaker and global lecturer. Outside of work, Rex is an avid skier, biker, runner and yoga junkie. He's also an active volunteer and fundraiser for causes including cancer and cystic fibrosis research. Over to you, Rex.



Rex Ahlstrom (01:41):

Hello and good morning, good afternoon. My name is Rex Ahlstrom and I'm the chief strategy and technology officer here at Syniti. And in my session in our virtual summit today, we're going to be talking about knowledge-driven data management and what the future holds for how you can get higher return on investment and better business outcomes through the intelligent use of data. Let's start off with a couple of market drivers that are definitely impacting how companies like Syniti are going to market and how we evolve our solutions, but also how our customers are evolving.


Organizations, as will probably come as no surprise, use multiple clouds in their data estates.

It used to be that most applications were on-prem and there was a big move to cloud; now it's hybrid cloud. Whether that's through large providers like Amazon, Google and Microsoft Azure, or through their own use of private cloud, the complexity of that cloud environment is increasing. Also, the use of machine learning, artificial intelligence and a lot of automation technologies to essentially take the mundane tasks out of data management, things like matching, data quality cleansing and data enrichment, is on the rise. In the future, ML will do a lot of the things that humans have to do today, leaving the more important tasks and decision-making in the hands of those capable of making those decisions and allowing them to take advantage of automation to speed time to value.


On-premise database management systems are decreasing in terms of their penetration within the market, or at least the sale of those products. Most database management system revenue growth is in the cloud. It's still a smaller percentage when compared to on-prem, but it's clear that the move to cloud-based data management systems is on the rise. All of these things lead to higher levels of complexity when it comes to managing data across databases that are on-prem and in the cloud, and applications that are hosted privately as well as publicly. And so, looking at how you can use a knowledge-driven data management approach is really going to be crucial to achieve the outcomes that you desire.


So what do customers want? We talk to a lot of customers in industry across many different sectors, and it really boils down to three simple things. Increasingly, the business needs easier access and the ability to manipulate and use data as an advantage, whether that's competitive analysis, analytics, reporting, you name it, it has to work better. So transparency into where that data lives, how you access it, and knowing that the data you're using is the right data has become crucial. If you're going to provide ubiquitous access, or transparency and use by a broader set of users, then obviously governance becomes important as well. There are a lot of regulatory requirements around personally identifiable information, but also the ethical use of information and basically how you can and cannot access it based on the rules of the road established within your organization or within your industry.


Governance can no longer be something that slows the business down, but must be seen as an accelerant to business agility, something that allows companies and individuals within those companies to move more fluidly with the use of data. Governance should not hold people back as a negative thing; it should be a positive thing that provides access.

And then if you're going to provide access and you're going to provide governance over this data, then you have to make sure that the rules of the road that you establish are actually enforced. It's not just policy or static documents that are sitting in your document repositories or on the shelf, but it's actually something that you can prove and attest to the fact that those rules are being followed, that they're being enforced and that they can be measured.


So what does the future look like? Well, the future looks like a lot of what we're contending with today, but on steroids. There's going to be a lot more movement toward the use of machine learning, as we said, and it's interesting to see that the ramp-up of what can be done, and how much will be done, especially around the data and analytics space, will increase dramatically over the next several years. Also, organizations will be eliminating fees associated with moving data back and forth between environments. So you no longer will have to pay to get your data out, or feel like your data is trapped in one particular provider's environment. Data fluidity, transparency and the ability to move data across these boundaries will be the norm, and so it's going to have to be managed.


Key to all of that is also the use of active metadata. The idea is that companies actively using metadata, the information about data and its use, will improve and automate integration processes and reduce time to delivery by substantial percentages. So these are areas that are going to be important and that you should consider when you think about your data management initiatives.


At Syniti, we often talk about something we call the data journey, and it's this idea that there are many projects that we will encounter as a company.

We may be consolidating our landscape. We may be migrating to a new cloud application. We may be implementing a data lake for new business intelligence or analytics use cases. There are a lot of ways and reasons why we need to access data and how we use that data. But sometimes what gets lost is that when we go into one of these projects, we miss a key opportunity to retain the knowledge that we capture in that data event.


So if we go in, let's say, to perform a data migration, it is a unique period of time where you're going to bring together a lot of your business users, a lot of your technical users and essentially all of your data contributors to say, how is this data used? What are the rules around this data? Are there particular policies that must be followed in the use of this data? Are there audit requirements? And oftentimes it's looked at as a silo. We consider this information at the time of the project, but then at the end of the project, we lose all of that valuable information.


So in our view, a data journey is one where you capture knowledge at each unique entry point and make sure that that knowledge then is additive and an accelerant.

to the next data engagement. You can reuse rules, policies or information about systems from a migration to drive ongoing data quality, to accelerate your metadata management initiatives, or to make sure that you're sourcing the right data into your data lake for your analytical applications. So the data journey needs to be thought of as a continuum, not as a project, but as something that you will continue to get value from over time.


So we're all faced with COVID-19, as evidenced by myself sitting here at my house, doing this virtual summit, and you most likely sitting at home watching it. And it's really had a big impact on our customers. We've talked to a lot of our customers who say that projects have been slowed down. Maybe large digital transformation efforts have been postponed. They're standing down on services efforts for a myriad of reasons: helping to reduce costs, focusing on the core processes of the business where resources are most critical, et cetera. What we like to think is that there's still an opportunity to prepare for success. When projects come back online, when things speed back up, you need to be prepared with your data.


And there are ways that you can begin even without, let's say in a migration, having a target environment available, or having a system integrator on board to do your system design. You can still work on collecting and understanding data, categorizing that information, profiling it, cleansing it, much like a washing machine is used to cleanse your clothes. You decide what clothes you need cleaned; the washing machine takes care of the rest. And so, in an automated environment, we can take data, understand it, cleanse it, and even prepare and transform it for that eventual move to the cloud, sourcing to your data lake, your master data management initiative, or whatever the next phase is along the data journey. So don't lose the opportunity to make progress, even if progress has slowed, by bringing your stakeholders together to collaborate around data across all of your systems at the enterprise level.
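The profiling step described above can start with something very simple: counting gaps and distinct values per field so you know where the holes are before any target system exists. The sketch below is a minimal illustration of that idea; the records and field names are hypothetical, not from any real Syniti dataset or tool.

```python
# Minimal data-profiling sketch: for each field, count null/empty values
# and distinct non-null values, so data gaps surface before a migration
# target is even chosen. Records and field names are made up for illustration.

def profile(records):
    """Return {field: {"nulls": n, "distinct": n}} for a list of dicts."""
    fields = {key for rec in records for key in rec}
    report = {}
    for field in sorted(fields):
        values = [rec.get(field) for rec in records]
        non_null = [v for v in values if v not in (None, "")]
        report[field] = {
            "nulls": len(values) - len(non_null),
            "distinct": len(set(non_null)),
        }
    return report

vendors = [
    {"vendor_id": "V001", "country": "US", "payment_term": "NET30"},
    {"vendor_id": "V002", "country": "",   "payment_term": "NET30"},
    {"vendor_id": "V003", "country": "DE", "payment_term": None},
]
print(profile(vendors))
```

A report like this is enough to direct early cleansing work (here, one missing country and one missing payment term) without waiting for the downstream system design.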


So let's talk about some topics that are top of mind right now for most companies, Syniti included. How do we improve liquidity and operating margins at a time when we may see a slowdown in our business? How can we produce reliable response metrics or KPIs for tracking things like COVID-19 and the impact of that terrible pandemic? How can we assure our supply chain is resilient, so that if we can't source materials from one vendor we can go to another, or so we can respond if we perceive there could be disruptions in transportation or distribution? These things are very real and very pressing today because of the COVID-19 pandemic.


How can we help with data? Data plays a critical role in driving these outcomes and decisions as they relate to changes in your business.

There are always key data elements that support each of these outcome-driven initiatives. So if we're looking at cash liquidity and operating margin, we need to understand our vendor information, our material usage, related objects, maybe our AR balances and other aspects, and how that data flows into the applications we rely on for decision-making and outcome-based guidance.


So knowing again that we have the right data, that we've harmonized that data quickly, and that we've driven it to an outcome-driven use case is something we can focus on, and focus on quickly. These are not engagements that take a long time. Usually within a one to two week period, we can deliver an application. We can source data. We can harmonize that information and, using the thousands and thousands of analytics that we maintain around data, understand where the holes are and where the risks are, and bring that information to the decision makers so that you can take proactive measures to improve your business outcomes, especially in a time of crisis, but even after all of this is over, to improve your ongoing business operation. So this is not just something that's limited to the current state of the world; it's good hygiene around data and will help all of your data-reliant initiatives be more successful.


So this is an interesting example of an engagement we recently did with a tier-one trauma hospital in the United States that had an immediate need for collaboration between their clinicians and their administrators for the purposes of reporting, both to the federal government and internally, on how they were contending with the COVID-19 response. The first thing they needed was a collaborative environment where they could establish: what are the key KPIs? What are the metrics that we'll measure? Do we understand what they are, what the formulas are and how we'll calculate that information? What rules are required around how data gets pulled into these metrics? And what systems does this data reside in? For this customer, it was spread across many different applications, not all from the same vendor. So that data also had to be harmonized and brought together in a meaningful way.


Then we needed to replicate that data in near real time into their data lake so they could have very rapid response reporting, where they could track on an ongoing basis exactly how they were doing, with a common understanding of their metrics and KPIs, so that they know at all times where they are as we fight through this pandemic. And that was a turnkey solution delivered in roughly six weeks, and the value to the client was extremely high because of the urgency of the situation they faced.



Watch Now:  Watch the full playlist of videos from the Unlocked Virtual Summit



So how do we go about supporting the data journey? We at Syniti have a singular solution that we refer to as the Syniti Knowledge Platform. It's one solution for multiple personas. It uses intelligent technologies like machine learning and natural language processing, and it focuses on success across that entire data journey. It is also delivered as a cloud-native solution so that we can stand it up quickly and deliver value quickly, as we discussed in the previous scenarios. At the core of our application strategy is that it starts with the business strategy. Start with the end in mind: what is the goal that we're trying to achieve? And do those goals align with our use of governance, our access to information, our understanding of the metadata that supports the applications below, so that we can drive down into initiatives like migration, quality, master data management and others, but then capture that knowledge so that we can drive it back up and relate it to what we achieve, as well as reuse that knowledge in future data journey engagements?


So this is very much a cyclical solution. You may start at the bottom. You may say, "It sounds good, but I am facing a data quality issue with my vendors. And I need to start there." That's fine. You don't have to always start at the top. You can start with a project, but know that when we start with that project, we will capture that knowledge. We will retain the metadata and the understanding and decisions that have been made so that in the future, as you mature your strategy, as you relate it to key KPIs, you have that lineage, you have that linkage between your projects, your outcomes and your business strategy.


So let's talk about an example. We worked with a customer that was using only ETL (extract, transform, load) technology to go through a migration to their next-generation application. When they looked at the project, they thought, well, all I need is an ETL tool. I can extract data, I can transform data and I can load data. Isn't that all I need to move data from point A to point B? The answer is: if all you're doing is lifting and shifting data, maybe that does work, but there's a lot of complexity associated with it. And there are a lot of decisions, again, that are made along the way.


In this case, they had rules like, for example, "remediate null MHDHB value." Now, unless you're steeped in particular structures and information, you may not know what that means. I didn't know what it meant the first time I saw it. It's a technical description of an implementation in an ETL tool. What you really want to understand is that a material must have a total shelf life. That makes sense. Materials expire. If I use expired materials in a product I produce, maybe that creates a health risk to consumers of that product. There are a lot of reasons why we need to understand the rules that sit behind decisions about transforming data. So the goal is to capture this knowledge in a non-technical way, not just in detailed code, but in plain English, or whatever your language is, to say, "What is this thing? Who approved it? Do I still need it? Where is it enforced? Where should I care about this in the future? And how could I reuse it?"
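One way to picture the idea of pairing a technical check with its business context is to store the plain-English meaning, the owner and the enforcement points right alongside the code. The sketch below is only an illustration of that pattern, not Syniti's implementation; MHDHB is SAP's material-master field for total shelf life, while the rule ID, owner and enforcement names are hypothetical.

```python
# Sketch: capture a data rule with its business context, not just its
# ETL implementation. MHDHB is SAP's material-master "total shelf life"
# field; the owner and enforcement points below are hypothetical examples.

from dataclasses import dataclass
from typing import Callable

@dataclass
class DataRule:
    rule_id: str
    plain_english: str              # what the rule means to the business
    owner: str                      # who approved it
    enforced_in: list               # where it applies, for future reuse
    check: Callable[[dict], bool]   # the technical implementation

shelf_life_rule = DataRule(
    rule_id="MAT-001",                                   # hypothetical ID
    plain_english="A material must have a total shelf life.",
    owner="Global Material Master Team",                 # hypothetical owner
    enforced_in=["migration", "ongoing data quality"],
    check=lambda material: material.get("MHDHB") not in (None, 0),
)

# A material record missing its shelf life fails the check.
print(shelf_life_rule.check({"MATNR": "100042", "MHDHB": None}))  # → False
print(shelf_life_rule.check({"MATNR": "100042", "MHDHB": 365}))   # → True
```

Because the rule object carries who owns it and where it is enforced, the same check can travel from the migration project into ongoing data quality monitoring instead of being rediscovered each time.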


As another clear example, we had a data migration customer that was facing a lot of complexity around their payment terms. As a matter of fact, they had 210 payment terms within their applications for various vendors and buyers of their products. So through the migration process, we worked with the business to understand: what do you really need? What's driving and running the business? And how do we get rid of payment terms that are not beneficial to you as a company and do not fit the requirements, or I would say standards, that you are trying to establish as a business? We were able to get those 210 payment terms down to 11. And in the process, we also came to understand who owned each decision. Was it a global or local decision? What applications rely on this information? And of course, we had the technical code to do the checks and conversions.


So going forward, they were able to drive that into a data quality initiative. They wanted to monitor actively these systems to make sure that if a new payment term showed up and it didn't fit the business rules or requirements, that it was flagged and that somebody was notified. Or even better, let's start managing the creation of the payment term so that we can prevent things that violate our rules in the first place. First time right. Let's make sure that what we enter into our system follows the rules and the decisions and the policies and the business strategy that we have established as a company.


So rule reuse, again, is a very powerful thing. Being able to reuse this information means you're not paying for it once, twice, three, five times across all of your project silos, but capturing it and carrying it from project to project. And in the meantime, your data, your data quality, your understanding and your trust in that information, the lineage of where information is coming from and how it's being sourced, is well understood and well managed.


So Syniti has been very successful in doing this across global Fortune 500 companies around the world. We've had over 3,500 successful go-lives with these applications, across 10 different industries and over 160 different systems, and we've been at it a long time. We like to say that we focus on one thing, and that's data, but it's also outcomes related to how data is used. And so, we work very closely through our consultancy and through the implementation of our software to understand: what does our customer want to achieve? What is the business strategy? How do we achieve a return on investment faster and use automation and the smarts we've accumulated over our long history to ensure both that customer satisfaction is at very high levels and that the customer achieves their goals and continues to manage data as the asset that it is?


Thank you for joining me today. I appreciate the time and I look forward to engaging with you in person rather than virtually, hopefully in the near future. Take care.



Subscribe Here:  Listen to all the episodes in the Syniti Podcast Series

