Posts

6 Reasons Companies Ignore Data Quality Issues

When lean businesses encounter data quality issues, managers may be tempted to leverage existing CRM platforms or similar tools to try to meet the perceived data cleansing needs. They might also default to reinforcing existing business processes and educating users in support of good data. While these approaches might be a piece of the data quality puzzle, it would be naive to think they will resolve the problem. In fact, ignoring the problem while trying half-hearted approaches can actually amplify the issue you’ll eventually have to deal with later. So why do they do it? Here are some of the reasons we have heard for why businesses stick their heads in the proverbial data quality sand:

1. “We don’t need it. We just need to reinforce the business rules.”

Even in companies that run the tightest of ships, reinforcing business rules and standards won’t prevent all your problems. First, not all data quality errors are attributable to lazy or untrained employees. Consider nicknames, multiple legitimate addresses and variations on foreign spellings, to name just a few. Plus, while getting your process and team in line is always a good habit, it still leaves the challenge of cleaning up what you’ve already got.

2. “We already have it. We just need to use it.”

Stakeholders often mistakenly think that data quality tools are inherent in existing applications or are a modular function that can be added on. Managers with sophisticated CRM or ERP tools in place may find it particularly hard to believe that their expensive investment doesn’t account for data quality. While customizing or extending existing ERP applications may take you part of the way, we constantly talk to companies that have used up valuable time, funds and resources trying to squeeze a sufficient data quality solution out of one of their other software tools, and it rarely goes well.

3. “We have no resources.”

When human, IT and financial resources are maxed out, the thought of adding a major initiative such as data quality can seem foolhardy. Even defining business requirements is challenging unless a knowledgeable data steward is on board. With no clear approach, some businesses tread water instead of mounting a formal assault. It’s important to keep in mind, though, that procrastinating on a data quality issue can cost more resources in the long run, because the time it takes staff to navigate problem-riddled data takes a serious toll on efficiency.

4. “Nobody cares about data quality.”

Unfortunately, when it comes to data quality, there is often only one lone voice on the team, advocating for something that no one else really seems to care about. The key is to find the people who get it. They are there; the problem is they are rarely asked. They are usually in the trenches, trying to work with the data or struggling to keep up with its maintenance. They are not empowered to change any systems to resolve the data quality issues and may not even realize the extent of those issues, but they definitely care, because it impacts their ability to do their job.

5. “It’s in the queue.”

Businesses may recognize the importance of data quality but just can’t think about it until after some other major implementation, such as a data migration, integration or warehousing project. It’s hard to know where data quality fits into the equation, and when and how that tool should be implemented, but it’s a safe bet that the time for data quality is before records move to a new environment. Put another way: garbage in = garbage out. Unfortunately for these companies, the unfamiliarity of a new system or process compounds the challenge of cleansing data errors that have migrated from the old system.

6. “I can’t justify the cost.”

One of the biggest challenges we hear about in our industry is the struggle to justify a data quality initiative with an ROI that is difficult to quantify. However, just because you can’t capture the cost of bad data in a single number doesn’t mean it isn’t affecting your bottom line. If you are faced with the dilemma of ‘justifying’ a major purchase but can’t find the figures to back it up, try to justify doing nothing. It may be easier to argue against sticking your head in the sand than to fight ‘for’ the solution you know you need.

Is your company currently sticking its head in the sand when it comes to data quality? What other reasons have you heard?

Remember, bad data triumphs when good managers do nothing.

8 Ways to Save Your Data Quality Project

Let’s face it, if data quality were easy, everyone would have good data and it wouldn’t be such a hot topic. On the contrary, despite all the tools and advice out there, selecting and implementing a comprehensive data quality solution still presents some hefty challenges. So how does a newly appointed Data Steward NOT mess up the data quality project? Here are a few pointers on how to avoid failure.

1. DON’T FORGET THE LITTLE PEOPLE

As with other IT projects, the top challenge for data quality projects is securing business stakeholder engagement throughout the process. But this doesn’t just mean C-level executives. Stakeholders for a data quality initiative should also include department managers and even end-users within the company who must deal with the consequences of bad data as well as the impact of system changes. Marketing, for example, relies on data accuracy to reach the correct audience and maintain a positive image. Customer Service depends on completeness and accuracy of a record to meet their specific KPIs. Finance, logistics and even manufacturing may need to leverage the data for effective operations or even to feed future decisions. When it comes to obtaining business buy-in, it is critical for Data Stewards to think outside the box regarding how the organization uses (or could use) the data and then seek input from the relevant team members. While the instinct might be to avoid decision by committee, in the end, it’s not worth the risk of developing a solution that does not meet business expectations.

2. BEWARE OF THE “KITCHEN SINK” SOLUTION

The appeal of an ‘umbrella’ data management solution can lure both managers and IT experts, offering the ease and convenience of one-stop shopping. In fact, contact data quality is often an add-on toolset offered by a major MDM or BI vendor, included simply to check the box. However, when your main concern is contact data, be sure to measure all your options against a best-of-breed standard before deciding on a vendor. That means understanding the difference between match quality and match quantity, determining the intrinsic value (for your organization) of integrated data quality processes, and not overlooking features (or quality) that might seem like nice-to-haves now but which, down the line, can make or break the success of your overall solution. Once you know the standard you are looking for with regard to contact deduplication, address validation and single customer view, you can effectively evaluate whether those larger-scale solutions have the granularity needed to achieve the best possible contact data cleansing for your company. While building that broader data strategy is a worthy goal, now is the time to be careful not to throw data quality out with the proverbial bathwater.

3. JUST BECAUSE YOU CAN, DOESN’T MEAN YOU SHOULD

When it comes to identifying the right contact data quality solution, most companies not only compare vendors to one another but also consider developing a solution in-house. In fact, if you have a reasonably well-equipped IT department (or consultant team), it is entirely possible that an in-house solution will appear cheaper to develop, and several factors can cause organizations to ‘lean’ in that direction, including the desire for ‘more control’ over the data or to eliminate security and privacy concerns.

There is a flip side, however, to these perceived advantages that begs to be considered before jumping in. First, ask yourself: does your team really have the knowledge AND bandwidth necessary to pull this off? Contact data cleansing is both art and science. Best-of-breed applications have been developed over years of trial and error and come with very deep knowledge bases and sophisticated match algorithms that can take a data quality project from 80% accuracy to 95% or greater. When you are dealing with millions or even billions of records, that extra percentage matters. Keep in mind that even the best-intentioned developers may be all too eager to prove they can build a data quality solution, without much thought as to whether or not they should. And even if the initial investment is less expensive than a purchased solution, how much revenue is lost (or not gained) by diverting resources to this initiative rather than to something more profitable? In-house solutions can be viable, as long as they are chosen for the right reasons and nothing is sacrificed in the long run.

4. NEVER USE SOMEONE ELSE’S YARDSTICK

Every vendor you evaluate will basically tell you to measure by the benchmarks they perform best at. So the only way to truly make an unbiased decision is to know ALL the benchmarks, decide for yourself which matter most to your company, and not be fooled by the fine print. For example:

  • The number of duplicates found is often touted as a key measure of an application’s efficacy, but that figure is only valuable if they are all TRUE duplicates. Check this in an actual trial on your own data and go for the tool that delivers the greater number of TRUE duplicates while minimizing false matches.
  • Speed matters too, but make sure you know the run speeds on your data and on your equipment.
  • More ‘versatile’ solutions are great, as long as your users will really be able to take advantage of all the bells and whistles.
  • Likewise, the volume of records processed should cover you for today and for what you expect to be processing in the next two to five years, as this solution is not something you want to implement and then change within a short time frame. Hence, scalability matters as well.

So, use your own data file, test several software options and compare the results in your own environment, with your own users. Also remember those intangibles, like how long it will take to get the solution up and running, how quickly users can be trained, the quality of its reports, etc. These very targeted parameters should be the measure of success for your chosen solution, not what anyone else dictates.

5. MIND YOUR OWN BUSINESS (TEST CASES, THAT IS)

Not all matching software is created equal, and the only way to effectively determine which software will address your specific needs is to develop test cases that serve as relevant and appropriate examples of the kinds of data quality issues your organization is experiencing. These should be used as the litmus test to determine which applications can best resolve those examples. Be detailed in developing these test cases so you can get down to the granular features in the software which address them. Here are a few examples to consider:

  • Do you have contact records with phonetic variations in their names?
  • Are certain fields prone to missing or incorrect data?
  • Do your datasets consistently have data in the wrong fields (e.g. names in address lines, postal codes in city fields, etc.)?
  • Is business name matching a major priority?
  • Do customers often have multiple addresses?

Once you have identified a specific list of recurring challenges within your data, pull several real-world examples from your actual database and use them in any data sample you send to vendors for trial cleansing. When reviewing the results, make sure the solutions you are considering can find these matches on a trial. Each test case will require specific features and strengths that not all data quality software offers. Without this granular level of information about the names, addresses, emails, zip codes and phone numbers that are in your system, you will not be able to fully evaluate whether a given piece of software can resolve them or not.

6. REMEMBER IT’S NOT ALL BLACK AND WHITE

Contact data quality solutions are often presented as binary: they either find the match or they don’t. In fact, as we mentioned earlier, some vendors will tout the number of matches found as the key benchmark of efficacy. The problem with this perception is that matching is not black and white. There is always a gray area of matches that might be the same but can’t be confirmed without inspecting each match pair, so it is important to anticipate how large your gray area will be and have a plan for addressing it. This is where the false match/true match discussion comes into play.

True matches are just what they sound like, while false matches are contact records that look and sound alike to the matching engine but are, in fact, different. While it’s great when a software package can find lots of matches, the scary part is deciding what to do with them. Do you merge and purge them all? What if they are false matches? Which one do you treat as the master record? What information will you lose? What other consequences flow from an incorrect decision?

The bottom line is: know how your chosen data quality vendor or solution will address the gray area. Ideally, you’ll want a solution that allows the user to set the threshold of match strictness. A mass marketing mailing may err on the side of removing records in the gray area to minimize the risk of mailing dupes whereas customer data integration may require manual review of gray records to ensure they are all correct. If a solution doesn’t mention the gray area or have a way of addressing it, that’s a red flag indicating they do not understand data quality.

7. DON’T FORGET ABOUT FORMAT

Most companies do not have the luxury of one nice, cleanly formatted database where everyone follows the rules of entry. In fact, most companies have data stored in a variety of places with incoming files muddying the waters on a daily basis. Users and customers are creative in entering information. Legacy systems often have inflexible data structures. Ultimately, every company has a variety of formatting anomalies that need to be considered when exploring data cleansing tools. To avoid finding out too late, make sure to pull together data samples from all your sources and run them during your trial. The data quality solution needs to handle data amalgamation from systems with different structures and standards. Otherwise, inconsistencies will migrate and continue to cause systemic quality problems.

8. DON’T BE SHORT-SIGHTED

Wouldn’t it be nice if, once data is cleansed, the record set remained clean and static? It would be nice, but it wouldn’t be realistic. On the contrary, information constantly evolves, even in the most closed-loop system. Contact records represent real people with changing lives and, as a result, decay by at least 4 percent per year through deaths, moves, name changes, postal address changes or even contact preference updates. Business-side changes such as acquisitions and mergers, system changes, upgrades and staff turnover also drive data decay. The post-acquisition company often faces the task of either hybridizing systems or migrating data into the chosen solution. Project teams must not only consider record integrity, but also update business rules and filters that can affect data format and cleansing standards.

Valid data entered into the system during the normal course of business (either by CSRs or by customers themselves) also contributes to ongoing changes within the data. New forms and data elements may be added by marketing and will need to be accounted for in the database. Incoming lists or big data sources will muddy the water. Expanding sales will reach new audiences and languages, providing data in formats you haven’t anticipated. Remember, the only constant in data quality is change. If you begin with this assumption, you dramatically increase your project’s likelihood of success. Identify the ways your data changes over time so you can plan ahead and establish a solution or set of business processes that will scale with your business.

Data quality is hard. Unfortunately, there is no one-size-fits-all approach, and there isn’t a single vendor that can solve all your data quality problems. However, by being aware of some of the common pitfalls and doing a thorough and comprehensive evaluation of any vendors involved, you can get your initiative off to the right start and give yourself the best possible chance of success.

What I Learned About Data Quality From Vacation

Over the 12 hours it took us to get from NY to the beaches of North Carolina, I had plenty of time to contemplate how our vacation was going to go. I mentally planned our week and tried to anticipate the best ways for us to ‘relax’ as a family. What relaxes me is not having to clean up. So to facilitate this, I set about implementing a few ‘business rules’ so that we could manage our mess in real time, which, I knew deep down, would be better for everyone. The irony of this, as it relates to my role as Director of Marketing for a data quality company, did not escape me, but I didn’t realize there was fodder for a blog post here until I realized that business rules actually can work. Really and truly. This is how.

1. We Never Got Too Comfortable.

We were staying in someone else’s house and it wasn’t our stuff. It dawned on me that we take much more liberty with our own things than we apparently do with someone else’s, and I believe this applies to data as well. Some departments feel like they are the ‘owners’ of specific data. I know from direct experience that marketing, in many cases, takes responsibility for customer contact data, and as a result we often take liberties, knowing ‘we’ll remember what we changed’ or ‘we can always deal with it later’. The reality is, there are lots of other people who use and interact with that data, and each business user would benefit from following a “Treat It Like It’s Someone Else’s” approach.

2. Remember the Buck Stops With You.

In our rental, there was no daily cleaning lady and we didn’t have the freedom of leaving it messy when we left (in just a mere 7 days). So essentially, the buck stopped with us. Imagine how much cleaner your organization’s data would be if each person who touched it took responsibility for leaving it in good condition. Business rules that communicate to each user that they will be held accountable for the integrity of each data element along with clarity on what level of maintenance is expected, can help develop this sense of responsibility.

3. Maintain a Healthy Sense of Urgency.

On vacation, we had limited time before we’d have to atone for any messy indiscretions. None of us wanted to face a huge mess at the end of the week so it made us more diligent about dealing with it on the fly. To ‘assist’ the kids with this, we literally did room checks and constantly reminded each other that we had only a few days left – if they didn’t do it now, they’d have to do it later. Likewise, if users are aware that regular data audits will be performed and that they will be the ones responsible for cleaning up the mess, the instinct to proactively manage data may be just a tad stronger.

So when it comes to vacation (and data quality), there is good reason not to put off important cleansing activities that can be made more manageable by simply doing them regularly in small batches.

Keep your SQL Server data clean – efficiently!

Working with very large datasets (for example, when identifying duplicate records using matching software) can frequently throw up performance problems if you are running queries that return large volumes of data. However, there are some tips and tricks that you can use to ensure your SQL code works as efficiently as possible.

In this blog post, I’m going to focus on just a few of these – there are many other useful methods, so feel free to comment on this blog and suggest additional techniques that you have seen deliver benefit.

Physical Ordering of Data and Indices

Indices and the actual physical order of your database can be very important. Suppose, for example, that you are using matching software to run a one-off internal dedupe, comparing all records using several different match keys. Let’s assume that one of those keys is zip or postal code, and that matching on it is significantly slower than on the other keys.

If you put your data into the physical postal code/zip order, then your matching process may run significantly faster since the underlying disk I/O will be much more efficient as the disk head won’t be jumping around (assuming that you’re not using solid state drives).  If you are also verifying the address data using post office address files, then again having it pre-ordered by postal code/zip will be a big benefit.

So how would you put your data into postcode order ready for processing?

There are a couple of options:

  • Create a clustered index on the postcode/zip field – this will cause the data to be stored in postcode/zip order.
  • If the table is in use and already has a clustered index, then the first option won’t be possible. However you may still see improved overall performance if you run a “select into” query pulling out the fields required for matching, and ordering the results by postal code/zip. Select this data into a working table and then use that for the matching process – having added any other additional non-clustered indices needed.
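
As a rough sketch of both options in T-SQL (the table and column names here are hypothetical, not from any particular system):

    -- Option 1: cluster the table itself on postcode, so rows are stored in postcode order
    -- (only possible if the table doesn't already have a clustered index)
    CREATE CLUSTERED INDEX IX_Customers_Postcode ON dbo.Customers (Postcode);

    -- Option 2: copy just the fields needed for matching into a working table,
    -- then cluster the working table on postcode to get the physical ordering
    SELECT CustomerID, LastName, FirstName, Address1, Town, Postcode
    INTO dbo.MatchingWork
    FROM dbo.Customers;

    CREATE CLUSTERED INDEX IX_MatchingWork_Postcode ON dbo.MatchingWork (Postcode);
    -- add any other non-clustered indices the matching process needs here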

Avoid SELECT *

Only select the fields you need. SELECT * is potentially very inefficient when working with large databases, due to the large amount of memory needed. If you only need a couple of fields (where those fields are in a certain range), and those fields are indexed, then selecting only those fields allows the index to be scanned and the data returned directly. If you use SELECT *, then the DBMS has to join the index with the main data table, which is going to be a lot slower on a large dataset.
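
A minimal illustration, assuming a hypothetical Customers table with an index whose key or included columns cover LastName and Postcode:

    -- Potentially slow: every column is returned, so the engine must go back
    -- to the main data table for each qualifying row
    SELECT * FROM dbo.Customers WHERE Postcode LIKE 'AB1%';

    -- Faster: only indexed fields are requested, so the query can be
    -- answered from the index alone
    SELECT LastName, Postcode FROM dbo.Customers WHERE Postcode LIKE 'AB1%';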

Clustered Index and Non-clustered Indices

Generally when working with large tables, you should ensure that your table has a clustered index on the primary key (a clustered index ensures that the data is ordered by the index – in this case the primary key).

For the best performance, clustered indices ought to be rebuilt at regular intervals to minimise disk fragmentation, especially if there are a lot of transactions occurring. Note that non-clustered indices will also be rebuilt at the same time, so if you have numerous indices the rebuild can be time-consuming.
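
For example (index and table names are hypothetical):

    -- Rebuild a specific index
    ALTER INDEX IX_Customers_Postcode ON dbo.Customers REBUILD;

    -- Or rebuild every index on the table in one go; bear in mind this can be
    -- time-consuming if the table carries numerous indices
    ALTER INDEX ALL ON dbo.Customers REBUILD;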

Use Appropriate Non-clustered Indices to Boost Query Efficiency

Non-clustered indices can assist with the performance of your queries – by way of example, non-clustered indices may benefit the following types of query:

  • Columns that contain a large number of distinct values, such as a combination of last name and first name. If there are very few unique values, most queries will not use the index because a table scan is typically more efficient.
  • Queries that do not return large result sets.
  • Columns frequently involved in the search criteria of a query (the WHERE clause) that return exact matches.
  • Decision-support-system applications for which joins and grouping are frequently required. Create multiple non-clustered indexes on columns involved in join and grouping operations, and a clustered index on any foreign key columns.
  • Covering all columns from one table in a given query. This eliminates accessing the table or clustered index altogether.

In terms of the best priority for creating indices, I would recommend the following:

1.) fields used in the WHERE condition

2.) fields used in table JOINS

3.) fields used in the ORDER BY clause

4.) fields used in the SELECT section of the query.
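
As an illustrative sketch (again with hypothetical names), an index designed around those priorities might look like the following; the key columns serve the WHERE clause and the INCLUDE clause covers the fields returned by the SELECT, so the base table need not be touched at all:

    CREATE NONCLUSTERED INDEX IX_Customers_LastName_FirstName
    ON dbo.Customers (LastName, FirstName)   -- columns used in the WHERE clause
    INCLUDE (Postcode, Town);                -- columns only needed in the SELECT list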

Also make sure that you use the tools within SQL Server to view the query plan for expensive queries and use that information to help refine your indices to boost the efficiency of the query plan.

Avoid Using Views

Views on active databases generally perform more slowly, so try to avoid them. Also bear in mind that if you create indices on a view and the data in the base tables changes in some way, then the indices on both the base table and the view will need updating, which creates an obvious performance hit. In general, views are useful in data warehouse scenarios where the main usage of the data is reporting and querying rather than frequent updates.

Make use of Stored Procedures in SQL Server

Stored procedure code is compiled and cached, which should lead to performance benefits. That said, you need to be aware of parameter sniffing and design your stored procedures in such a way that SQL Server doesn’t cache an inefficient query execution plan. There are various techniques that can be used:

  • Optimising for specific parameter values
  • Recompiling on every execution
  • Copying parameters into local variables

For those interested, there’s a more in-depth but easy-to-follow description of these techniques on the following page of SQLServerCentral.com:

http://www.sqlservercentral.com/blogs/practicalsqldba/2012/06/25/sql-server-parameter-sniffing/
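
As a brief, hedged sketch of what those three techniques can look like in T-SQL (procedure, table and parameter names are hypothetical):

    -- 1) Optimise for a specific, typical parameter value
    CREATE PROCEDURE dbo.GetCustomersByPostcode @Postcode varchar(10)
    AS
    SELECT CustomerID, LastName, Postcode
    FROM dbo.Customers
    WHERE Postcode LIKE @Postcode + '%'
    OPTION (OPTIMIZE FOR (@Postcode = 'AB1'));
    GO

    -- 2) Recompile on every execution, so no plan is ever reused:
    --    add OPTION (RECOMPILE) to the statement instead of OPTIMIZE FOR

    -- 3) Copy the parameter into a local variable, so the optimiser uses
    --    average statistics rather than sniffing the first value passed in
    CREATE PROCEDURE dbo.GetCustomersByPostcodeLocal @Postcode varchar(10)
    AS
    DECLARE @LocalPostcode varchar(10) = @Postcode;
    SELECT CustomerID, LastName, Postcode
    FROM dbo.Customers
    WHERE Postcode LIKE @LocalPostcode + '%';
    GO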

Queries to compare two tables or data sources

When using matching software to identify matches between two different data sources, you may encounter scenarios where one of the tables is small relative to another, very large, table, or where both tables are of similar sizes. We have found that some techniques for comparing across the two tables run fine where both tables are not too large (say under ten million records), but do not scale if one or both of the tables are much larger than that. Our eventual solution gets a little too detailed to describe effectively here, but feel free to contact us for information about how we solved it in our matchIT SQL application.

And Finally

Finally, I’d recommend keeping an eye on the disks housing your SQL Server database files: ensure that there’s at least 30% storage space free and that the disks are not highly fragmented; checking this regularly helps maintain performance.

In summary, by making the effort to optimise the performance of your data cleaning operations, you will reduce the load on your database server, allow regular use of the applications necessary to keep your data clean and, as a result, keep your users happy.

helpIT Systems is Driving Data Quality

For most of us around the US, the Department of Motor Vehicles is a dreaded place, bringing with it a reputation for long lines, mountains of paperwork and drawn-out processes. As customers, we loathe the trip to the DMV, and though we may not give it much thought while standing in line, the reality is that poor data quality is a common culprit behind some of these DMV woes. While it may seem unlikely that an organization as large and bureaucratic as the DMV can right the ship, DMVs around the country are today fighting back with calculated investments in data quality.

While improving the quality of registered driver data is not a new concept, technology systems implemented 15-20 years ago have long been a barrier for DMVs to actually take corrective action. However, as more DMVs begin to modernize their IT infrastructure, data quality projects are becoming more of a reality. Over the past year, helpIT has begun work with several DMVs to implement solutions designed to cleanse driver data, eliminate duplicate records, update addresses and even improve the quality of incoming data.

From a batch perspective, setting up a solution to cleanse the existing database paves the way for DMVs to implement other types of operational efficiencies like putting the license renewal process online, offering email notification of specific deadlines and reducing the waste associated with having (and trying to work with) bad data.

In addition to cleaning up existing state databases, some DMVs are taking the initiative a step further and working with helpIT to take more proactive measures by incorporating real-time address validation into their systems. This ‘real-time data quality’ creates a firewall of sorts, facilitating the capture of accurate data by DMV representatives while you provide it (via phone or at a window). With typedown technology embedded directly within DMV data entry forms, if there is a problem with your address, or you accidentally forgot to provide information that affects accuracy, like your apartment number or a street directional (North vs. South), the representatives are empowered to prompt for clarification.

Getting your contact data accurate from the start means your new license is provided immediately, without you having to make another visit, or call and wait on hold for 30 minutes just to resolve a problem that could have been no more than a simple typo.

Having met several DMV employees over the past year, it’s obvious that they want you to have an excellent experience. Better data quality is a great place to start. Even while DMV budgets are slashed year after year, modest investments in data quality software are yielding big results in customer experience.

 

If you want to learn more about improving the quality of your data, contact us at 866.332.7132 for a free demo of our comprehensive suite of data quality products.

Creating Your Ideal Test Data

Every day we work with customers to begin the process of evaluating helpIT data quality software (along with other vendors they are looking at). That process can be daunting for a variety of reasons, from identifying the right vendors to settling on an implementation strategy, but one of the big hurdles early in the process is running an initial set of data through the application.

Once you’ve gotten a trial of a few applications (hopefully including helpIT’s) and you are poised to start your evaluation to determine which one will generate the best result, you’ll need to develop a sample data set to run through the software. This is an important step not to be overlooked, because you want to be sure that the software you invest in can deliver the highest quality matches, so you can effectively dedupe your database and, most importantly, TRUST that the resulting data is as clean as it possibly can be, with the least possible wiggle room. So how do you create the ideal test data?

The first word of advice – use real data.

Many software trials come preinstalled with sample or demo data designed primarily to showcase the features of the software. While this sample data can give you examples of generic match results, it will not be a clear reflection of your match results. This is why it is best to run an evaluation of the software on your own data whenever possible. Using the guidelines below, we suggest identifying a real dataset that is representative of the challenges you will typically see within your actual database. That dataset will tell you whether the software can find your more challenging matches, and how well it can do so.

For fuzzy matching features, you may like to consider whether the data that you test with includes these situations:

  • phonetic matches (e.g. Naughton and Norton)
  • reading errors (e.g. Horton and Norton)
  • typing errors (e.g. Notron, Noron, Nortopn and Norton)
  • one record has title and initial and the other has first name with no title
    (e.g. Mr J Smith and John Smith)
  • one record has missing name elements (e.g. John Smith and Mr J R Smith)
  • names are reversed (e.g. John Smith and Smith, John)
  • one record has missing address elements (e.g. one record has the village or house
    name and the other address just has the street number or town)
  • one record has the full postal code and the other a partial postal code or no postal code

When matching company names data, consider including the following challenges:

  • acronyms e.g. IBM, I B M, I.B.M., International Business Machines
  • one record has missing name elements e.g.
  1. The Crescent Hotel, Crescent Hotel
  2. Breeze Ltd, Breeze
  3. Deloitte & Touche, Deloitte, Deloittes.

You should also ensure that you have groups of records where the data that matches exactly varies for different pairs within the group. For example, in a group of three records for the same person, one pair might agree exactly on last name and postal code while another pair agrees on the address but differs in the postal code.

If you don’t have all of these scenarios represented, you can doctor your real data to create them, as long as you start with real records that are as close as possible to the test cases and make one or at most two changes to each record. In the real world, matching records will have something in common – not every field will be slightly different.

With regard to size, it’s better to work with a reasonable sample of your data than a whole database or file; otherwise the mass of information risks obscuring important details, and test runs take longer than they need to. We recommend that you take two selections from your data: one for a specific postal code or geographic area, and one (if possible) for an alphabetical range by last name. Join these selections together and then eliminate all the exact matches – if you can’t do this easily, one of the solutions that you’re evaluating can probably do it for you.
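
If your data sits in SQL Server, a rough sketch of that kind of sample-building might look like the following (table, column and range values are purely hypothetical):

    -- Pull one geographic slice and one alphabetical slice into a test table
    SELECT CustomerID, FirstName, LastName, Address1, Town, Postcode
    INTO dbo.TestSample
    FROM dbo.Customers
    WHERE Postcode LIKE 'AB%'                  -- one postal area
       OR LastName BETWEEN 'M' AND 'NZZZ';     -- one last-name range

    -- Remove the exact duplicates, keeping one row per identical combination
    ;WITH Ranked AS (
        SELECT *, ROW_NUMBER() OVER (
                   PARTITION BY FirstName, LastName, Address1, Town, Postcode
                   ORDER BY CustomerID) AS rn
        FROM dbo.TestSample
    )
    DELETE FROM Ranked WHERE rn > 1;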

Ultimately, you should have a reasonably sized sample without too many obvious matches, but one that contains a good number of fuzzier matches (e.g. matches where the first character of the postal code or last name differs between two records that otherwise match, matches with phonetic variations of the last name, etc.).

__________________________________________________________________________

For more information on data quality vendor evaluations, please download our Practical Guide to Data Quality Vendor Selection.

Real-Time ERP Data? Yes.

Over the past several months our findIT S2 application has been gaining some significant traction as a real-time duplicate prevention and record linking solution on websites, CRM and ERP applications.

Just last month, helpIT added our first Epicor client seeking these same capabilities directly within their ERP. The acquisition of findIT S2 comes on the heels of their Epicor ERP purchase, validating our belief that there is a strategic opportunity for organizations to take advantage of a real-time data quality firewall, even for those already utilizing some of the world’s leading strategic CRM and ERP applications.

In each interaction with our prospects and customers we are finding new ways that the power of real-time matching is helping companies. Some unprecedented ways include:

• Providing a single customer view in both CRM and ERP applications, despite technical incompatibilities between systems
• Linking captured web leads to credit bureau information to identify applicable credit offers
• Preventing the overextension of credit to customers by finding terms previously offered

The most common scenario, of course, is organizations that understand that preventing duplication can lead to very significant improvements in efficiency and decisioning compared to retroactive data quality resolution.

helpIT systems is proactively working with a range of enterprise partners to extend the value of exceptional data quality through our best-in-class front-end and back-end matching technologies.

To explore partnership or direct opportunities, please contact Josh Buckler at 866.628.2448 or connect with him via Twitter @bucklerjosh.