Garbage In, Garbage Out: Episode 1

Bad data input creates problems at every point downstream. To help eliminate it, let's start at the beginning: the creation of the garbage data itself. Welcome to the first installment of our Garbage In Garbage Out series, which explores how problem data gets generated and offers some useful tools to prevent it.

One theme runs through it all: human error. By identifying the common mistakes made at each entry point, we can look at ways to tighten up workflows and data entry processes.

Let's look at three ways data gets created at laboratories:

  • Accessioning: In many labs, an accessioner begins the process by creating a “case”
  • Electronically: Through interfaces, a request can be transmitted to a lab’s LIS
  • Custom intake: By using a specially created intake solution, information can be transferred

In the case of accessioning, the process typically starts with a paper requisition. Human error comes into play during entry because there are often no prompts to ensure that all order information is captured accurately. A single errant keystroke or dropdown selection, when entering a location for example, can introduce a mistake: a like-sounding name such as Texas Pain Institute may be selected instead of Texas Pain Clinic. Both may be in the system, but they are different entities. Later, we'll explore ways to prevent these kinds of mistakes.
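One way to catch this kind of mistake at entry time is to flag locations whose names closely resemble the one just selected, and ask the accessioner to confirm. Here is a minimal sketch using Python's standard-library difflib; the location names and the similarity cutoff are illustrative assumptions, not values from any real LIS.

```python
import difflib

# Hypothetical location list; names are illustrative only.
LOCATIONS = ["Texas Pain Clinic", "Texas Pain Institute", "Austin Family Practice"]

def similar_locations(selected, locations, cutoff=0.6):
    """Return other locations whose names closely resemble the selection,
    so the user can be prompted to confirm before committing the order."""
    matches = difflib.get_close_matches(selected, locations, n=3, cutoff=cutoff)
    return [m for m in matches if m != selected]

# Selecting "Texas Pain Clinic" would surface "Texas Pain Institute"
# as a near-duplicate worth a confirmation prompt.
warnings = similar_locations("Texas Pain Clinic", LOCATIONS)
```

The cutoff is a tuning knob: too high and near-duplicates slip through, too low and every selection triggers a prompt.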

In the second scenario, orders arrive electronically but still need to be inspected for inaccuracies before the test is sent on to be performed. As with accessioning entry, the requester can select the wrong location when choosing from a long list. Consequently, versions of the same tools described for scenario 1 can help here as well.

Programmatic errors in custom intake processes are a third type of issue. An example is input arriving in a file format other than HL7, such as comma-delimited data: a stray comma inside a field value will be misread as a field separator and cause an error unless it is quoted, escaped, or removed during processing. Solutions to these issues come down to tightening up the parsing code, which can take some trial and error.
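The comma problem above can be made concrete with a short sketch. The row below is a hypothetical intake record (the field layout is an assumption for illustration): a naive split on commas breaks the patient name into two columns, while a proper CSV parser honors the quoting and keeps the comma inside the field.

```python
import csv
import io

# A hypothetical intake row: the patient name contains a comma,
# which a naive split would break into an extra column.
raw = '12345,"Smith, John",1985-04-02,Lipid Panel\n'

# Naive parsing miscounts the fields: 5 columns instead of 4,
# because the quoted name has been split apart.
naive = raw.strip().split(",")

# The csv module honors the quoting and keeps the comma in the field:
row = next(csv.reader(io.StringIO(raw)))
# row -> ['12345', 'Smith, John', '1985-04-02', 'Lipid Panel']
```

Using a real CSV parser rather than hand-rolled string splitting is usually the cheapest way to harden a custom intake pipeline.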

A lot of doom and gloom here, so time for some solutions!

Many workflow and LIS configuration options that could eliminate much of this human error go unused in some environments. Solutions are found in tools built into most LIS packages: order entry rules and macros. Each increases data entry accuracy. Order entry rules restrict which values can be selected and define what must be entered before an order can be submitted. Macros are saved functions that are easily repeatable and always produce the same result, which helps assure accuracy.

With order entry rules, when a provider requests a test with the wrong diagnostic code for a given situation, the entry cannot be completed until the values are corrected. Steering users in the right direction at submission time reduces the number of failed orders.
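To illustrate the idea, here is a minimal sketch of an order entry rule check. The rule table, field names, test names, and diagnosis codes are all hypothetical assumptions for illustration; a real LIS would configure such rules through its own interface rather than code like this.

```python
# Hypothetical rule table: which diagnosis codes justify each test.
# Codes and test names are illustrative, not from any real LIS.
ALLOWED_DIAGNOSES = {
    "Lipid Panel": {"E78.5", "Z13.220"},
    "HbA1c": {"E11.9", "R73.09"},
}

REQUIRED_FIELDS = ("patient_id", "ordering_provider", "diagnosis_code")

def validate_order(order):
    """Return a list of problems; an empty list means the order may be submitted."""
    problems = [f"missing {f}" for f in REQUIRED_FIELDS if not order.get(f)]
    test = order.get("test")
    dx = order.get("diagnosis_code")
    if test in ALLOWED_DIAGNOSES and dx and dx not in ALLOWED_DIAGNOSES[test]:
        problems.append(f"{dx} does not support {test}")
    return problems

# A mismatched diagnosis is caught before the order can be submitted:
order = {"patient_id": "A100", "ordering_provider": "Dr. Lee",
         "test": "Lipid Panel", "diagnosis_code": "E11.9"}
issues = validate_order(order)
```

The point is that the check runs at submission time, so the bad value never makes it downstream.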

Workflow modifications can be made as well: changes to the way providers enter their orders in the EMR (for interfaces) or to how orders are entered at the laboratory end. Common-sense steps that can be added at data entry points will be covered in future articles in this series.

All of these measures, once put into place, pay off in time-saving dividends down the road. In our next installment, we'll share more hints to help laboratory operations run smoothly.