Recently there was a question about cleaning up a found set on one of the FileMaker discussion forums. When a question of this nature arises, it’s typically some variation on “How can I remove [or delete] duplicate entries?” But this was the opposite: For a given found set of customers, how can I omit those whose Zip codes only appear once in the found set?
In other words, keep the records whose Zips appear multiple times and banish the others.
Off the top of my head, I suggested…
Sort by Zip code, then loop through the found set from top to bottom… using GetNthRecord() test the current record’s Zip code against the previous record and also against the next record. If both tests are negative, omit, otherwise go to next record (and of course exit after last).
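The neighbor-comparison logic above can be sketched in Python (a rough analogue of the FileMaker script, with illustrative record data of my own invention, not the demo file's):

```python
# Sketch of the GetNthRecord approach: sort by Zip, then keep a record
# only if its Zip matches the previous or the next record's Zip.
def keep_multi_zip(records):
    """Return records whose Zip appears more than once, via neighbor checks."""
    recs = sorted(records, key=lambda r: r["zip"])   # must sort by Zip first
    kept = []
    for i, rec in enumerate(recs):
        prev_match = i > 0 and recs[i - 1]["zip"] == rec["zip"]
        next_match = i < len(recs) - 1 and recs[i + 1]["zip"] == rec["zip"]
        if prev_match or next_match:                 # otherwise "omit record"
            kept.append(rec)
    return kept

customers = [
    {"id": 1, "zip": "94102"},
    {"id": 2, "zip": "10001"},
    {"id": 3, "zip": "94102"},
]
print([r["id"] for r in keep_multi_zip(customers)])  # → [1, 3]
```

Note that each record is examined against its neighbors, so the method only works on a found set sorted by Zip.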
As it turned out, it was a one-time cleanup task, and my suggestion was good enough. But I had a nagging feeling there were better-performing ways to go about this, and today’s demo file, Anti-deduping, part 1, presents four different methods. I encourage you to download it, experiment, and add your own methods or variations… perhaps you’ll come up with a faster approach, in which case, needless to say, I hope you’ll post a comment at the end of this article.
The Four Approaches
- GetNthRecord
- ValueCount + FilterValues
- PatternCount
- Position
If your found set is small, say 1K or 2K records, it won’t matter much which method you use, but as the found set size increases, it becomes clear that each method is faster than its predecessor.
Also, when doing speed comparisons in FileMaker, one needs to consider whether caching is skewing the results. In this demo, I found the timings of the different methods to be consistent, regardless of which order I ran the tests in, or whether I quit and restarted FileMaker between each test.
Another consideration is whether the files are hosted (across a LAN or WAN) or local. I have found performance results to be fairly consistent regardless of the hosting setup… e.g., in my testing, the GetNthRecord approach takes 16 seconds to process 5K records across a WAN, and 15 seconds to do so locally. Unless otherwise specified, all times referred to in this article refer to tests conducted on a local file.
Basic Operation of the Demo
1. Generate a found set (there are 20K records in the demo, so that’s what you’ll get if you click “All”)
2. Optionally sort (more on this below)
3. Click one of these buttons
Okay, let’s look at each method.
GetNthRecord
This was my initial stab at solving the challenge… the “off the top of my head” suggestion described previously. Unfortunately, it’s not going to win any performance prizes.
The Other Three Methods
Here is the basic approach used in the remaining three methods:
1. Use a summary list field, SummaryListZip…
…to generate a stack of Zip codes corresponding to the current found set and sort order (or lack thereof). Incidentally, you can easily view the contents of SummaryListZip by clicking here:
2. Push the contents of SummaryListZip into a variable, $$summaryList:
3. Loop through the found set and process the records:
Also, whereas the GetNthRecord method must be sorted on the Zip field to work, the remaining three methods do not require sorting to work… in fact as we’ll see in just a minute, they’re much faster when the found set is unsorted.
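The shared summary-list pattern can be sketched in Python (an assumed analogue of the demo's scripts, with hypothetical names; the per-record test is pluggable, which is exactly how the three methods differ):

```python
# Analogue of the summary-list pattern: build the "stack" of Zips once,
# then loop the found set testing each record's Zip against that stack.
def anti_dedupe(records, appears_again):
    # 1. SummaryListZip equivalent: one Zip per record, in found-set order
    summary_list = "\n".join(r["zip"] for r in records)
    # 2. $$summaryList equivalent: captured once, outside the loop
    # 3. loop and keep only records whose Zip recurs in the list
    return [r for r in records if appears_again(summary_list, r["zip"])]

# Any of the three tests can be plugged in; here, a simple count of
# whole-line matches stands in for ValueCount + FilterValues:
def count_test(summary_list, zip_code):
    return summary_list.split("\n").count(zip_code) > 1

rows = [{"zip": z} for z in ["10001", "94102", "10001", "60601"]]
print([r["zip"] for r in anti_dedupe(rows, count_test)])  # → ['10001', '10001']
```

The key point is that the list of Zips is materialized once, so the loop never has to reach into other records the way GetNthRecord does.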
ValueCount + FilterValues
When processing 10K records, this method is twice as fast as GetNthRecord.
PatternCount
With 10K records, this method is 5x faster than ValueCount + FilterValues.
Position
Here our logical test looks for a second occurrence of a given Zip code. There might be more than two occurrences, but all we need to know is whether a second one exists. Also note that for this test we “omit record” when the result is false, whereas in all the previous methods we did so when the result of the test was true.
And with 10K records, this method is twice as fast as PatternCount.
Sorted vs. Unsorted
The first method we looked at, GetNthRecord, only works if the found set is sorted. But the other three methods work whether the found set is sorted or not… except… things take considerably longer when the found set is sorted…
…and, the more “granular” the sort, the longer it takes. For example, on 5K records, here are timings for the Position method:
- Unsorted: 1 second
- State sort: 3 seconds
- Zip sort: 5 seconds
- ID sort: 11 seconds
Interestingly, records in the customer table are in the same order when unsorted as they are when sorted (ascending) on ID, the table’s primary key. This raises the question: Does it take FileMaker longer to walk a sorted found set than an unsorted one?
This question can be answered by running this script…
…on 20K records, either unsorted or sorted on the State, Zip or ID fields. In all cases the script takes either zero seconds or one second to complete.
Another question: Does it take longer for FileMaker to evaluate SummaryListZip and/or populate $$summaryList when the found set is sorted?
Stepping through the Position script with the debugger on, and with 20K records sorted by ID, this does not seem to be the case. The highlighted step completes almost instantly.
So what the heck is going on? It appears that when the found set is sorted, it takes FileMaker longer to compare the current record against the contents of $$summaryList, so each “If” test within the loop takes a bit longer than it would if the found set were unsorted. Exactly how much longer it takes depends on how granular the sort is.
Is There A Workaround?
Of course. You do know about FileMaker’s undocumented “Fix Sort Slowness” feature, don’t you?
Kidding. I’m just kidding. There is no such feature, but the effect can be achieved as follows:
- Unsort the found set
- Process the found set
- Sort with “Perform without dialog” checked and no sort order specified
This will restore your previous sort order (thank you Ray Cologon for this very cool trick).
Well, that’s about it for today. In today’s demo we looked at techniques that work well when the field contents to be anti-deduped are of fixed length. In part 2 we’ll expand the techniques to work with variable-length field contents.