The good news is that I have syncing working with FileMaker. Creating a syncing strategy turned out to be a bit trickier than I thought. Most of the difficulty involved the overall workflow of the data and the iPads. For instance: we will have two iPads per trench this year, and I needed to work with the trench supervisors to figure out just how they intended to use the two together. Since we will have four trenches open this year, I needed to know whether I was going to have to sync eight iPads or just four, and whether the two for each trench had to be in sync with each other. I think we have worked out a strategy for two-iPad use, but I will write about that separately. Right now I want to write about the rationale behind the syncing strategy that I chose and how it was implemented. The files will be uploaded soon.
There are various methods to sync databases, and when you read the various FileMaker documents and the discussion boards, you can see that there is an endless number of variables that can play into this. Ignoring for a moment that the use of FileMaker Go (FMG) on the iPad adds a level of complexity and looking at just two desktop databases, FileMaker summarizes the various methods available here.
The standard commercial package to use for this sort of thing is WorldSync’s SyncDek, which is priced in the thousands. There are various places on the web that describe how to sync databases by building something yourself, including this page from FileMaker, written for FileMaker 2 and last updated in 2007.
When the iPad is added to the mix, all bets are off. FMG can’t host a database, so your desktop won’t be able to connect to the iPad. While FMG can import records, it cannot export them (except for saving and sending records, which won’t work well for us). There are descriptions and examples for sending data between the iPad and a desktop database on page 7 of the Development Guide, but I don’t think these will work for me. I will have some records that can be edited on both the desktop and the iPad at the same time. They will have to be compared, which is something that the Development Guide doesn’t account for. And the comparison would have to happen on the iPad, not the desktop, so it would be slower and more cumbersome. Lastly, there is the issue of a lost connection, something mentioned by SeedCode in their demo of a forthcoming framework named GoZync.
What I have decided upon is a technique described in the FileMaker forums here. The computer that serves the database to the house (well, tent) group on the site is the one used by the finds registrar. I connect to that database with my laptop, as do our ceramics people and environmental group. I will get the iPads from the teams, copy their databases from the iPads to the server computer, run a script in the master database that imports and parses the data, and then replace each iPad database with a fresh copy of the master database. If I take all four iPads, copy their databases over, run the script four times, and then copy the newly synced master to each iPad, they will leave the tent with all five copies of the database containing exactly the same data.
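As a rough illustration, that round trip can be sketched in Python. Everything here is a stand-in: the files hold line-based "records" instead of FileMaker data, and the merge function is a placeholder for the real import-and-parse script.

```python
import shutil
from pathlib import Path

def run_master_import_script(master: Path, incoming: Path) -> None:
    # Placeholder for the FileMaker import-and-parse script; here the
    # "merge" is just a union of line-based records for illustration.
    records = set(master.read_text().splitlines())
    records |= set(incoming.read_text().splitlines())
    master.write_text("\n".join(sorted(records)))

def sync_round_trip(master: Path, ipad_dbs: list[Path]) -> None:
    """Pull each iPad copy into the master, then push the merged
    master back to every iPad, so all copies leave identical."""
    for db in ipad_dbs:
        run_master_import_script(master, db)   # copy in, import, parse
    for db in ipad_dbs:
        shutil.copy(master, db)                # replace with fresh master
```

After the second loop, every device holds the same data, which is the whole point of running the merge on a single master rather than pairwise between iPads.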
The PARP:PS databases are syncing at the record level, not the field level. This is done mostly to save development time. I felt that I could create a record-level syncing process faster than I could design one in which the creation or modification of each field is recorded with a timestamp, especially since iOS doesn’t yet have the ability to sync its clock to a time server.
I had to adjust several core things about the database. First, and central to the whole concept of a relational database, I had to rethink my serial numbers, often referred to as recids. If I just had FileMaker issue sequential serial numbers to each record, then each iPad would create an SU record with serial number SU145. That won’t work as a unique number. I could have built the name of the iPad into the serial number scheme, but I found something better at FileMaker Custom Functions: the UUID scheme. I added three new functions to the database. One creates the UUID, a unique identifier composed of the creation timestamp in UNIX time, a serial number, and the MAC address of the device creating the record. Two other functions (UUID.GetTimeStamp and UUID.GetNICAddress) then parse that data back out into fields of its own.
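A Python sketch of what those three custom functions do. The separator and exact field layout here are my assumptions for illustration; the real UUID.New at FileMaker Custom Functions packs the same three components in its own format.

```python
import time
import uuid
from itertools import count

_serial = count(1)  # per-device serial counter

def uuid_new() -> str:
    """Build an identifier from the UNIX creation timestamp,
    a serial number, and this device's MAC address."""
    ts = int(time.time())
    serial = next(_serial)
    mac = uuid.getnode()            # MAC address as an integer
    return f"{ts}-{serial}-{mac:012x}"

def uuid_get_timestamp(u: str) -> int:
    # Mirrors UUID.GetTimeStamp: recover the creation time
    return int(u.split("-")[0])

def uuid_get_nic_address(u: str) -> str:
    # Mirrors UUID.GetNICAddress: recover the device's MAC address
    return u.split("-")[2]
```

Because the MAC address is baked in, two iPads creating their 145th SU record on the same second still produce distinct identifiers.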
The sync relies on a handful of new or modified fields:

- UUID (Text) Auto-Enters the calculation value from the UUID.New function
- Synced (Text) Auto-Enters ‘n’, which changes to ‘y’ after syncing, and sometimes to ‘i’ (for ignore) during the script
- LastSyncTimestamp (Timestamp) a global field. This might not work on networked databases, but it should work with local files and syncing.
- ModificationDate had to be changed from a date field to a timestamp field.
- ModifiedAfterSync (calculation) If ( ModificationDate > LastSyncTimestamp ; "y" ; "n" )
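In Python terms, those flag fields behave roughly like this. The field names follow the list above; the dataclass itself is just an illustration, not anything FileMaker exposes.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SyncFields:
    """Per-record sync bookkeeping, mirroring the fields above."""
    uuid: str
    modification_ts: datetime  # the timestamp-typed ModificationDate
    synced: str = "n"          # 'n' -> 'y' after a sync, 'i' = ignore

def modified_after_sync(rec: SyncFields, last_sync: datetime) -> str:
    # Mirrors: If ( ModificationDate > LastSyncTimestamp ; "y" ; "n" )
    return "y" if rec.modification_ts > last_sync else "n"
```

The comparison is why ModificationDate had to become a timestamp: a date-only field cannot tell an edit made this morning from one made after this afternoon's sync.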
The syncing itself happens in several steps:

- The first step imports all of the records from the AuditTrail table, which records every addition and edit to the database, as long as each field has the right script triggers. Once this is imported, I can reconstruct any data entry.
- Starting with the Trench table, a script imports the data from the iPad into a shadow table in the database, an exact copy of the Trench table. From there, I can perform three searches that cover the different possible scenarios.
- The first search is for data created on the iPad but not yet existing in the desktop database. Those records are imported wholesale into the Trench table. When importing, I have the database keep the same UUID rather than create a new one.
- The second search is for data that has been edited since the last sync on the iPad but not on the desktop. Those records are imported from the shadow table, matching on the UUID field and replacing the remaining contents. Since the whole database on the iPad will be replaced with a copy that includes anything edited on the desktop, there is no need for the reverse search.
- The last search is for records that have been edited on both the iPad and the desktop. The database displays those records, highlights the differences, and allows you to reconcile them before continuing.
- The shadow table is then cleared for the next database.
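The three searches amount to a record-level merge, which can be sketched in Python. Dicts keyed by UUID stand in for the FileMaker tables, and the two sets stand in for finds on the ModifiedAfterSync flag; none of this is FileMaker's actual machinery.

```python
def merge_shadow(master: dict, shadow: dict,
                 ipad_modified: set, desktop_modified: set) -> dict:
    """Merge the iPad's shadow copy into the master, record by record.
    Returns conflicting record pairs for manual reconciliation."""
    conflicts = {}
    for uid, rec in shadow.items():
        if uid not in master:
            master[uid] = rec                    # 1: created on iPad only
        elif uid in ipad_modified and uid in desktop_modified:
            conflicts[uid] = (master[uid], rec)  # 3: edited on both sides
        elif uid in ipad_modified:
            master[uid] = rec                    # 2: edited on iPad only
        # Edited on desktop only: nothing to do, since the iPad copy
        # is replaced wholesale with the master afterwards.
    shadow.clear()                               # ready for the next iPad
    return conflicts
```

Only case 3 needs a human in the loop; the other cases resolve automatically because one side is guaranteed to be unchanged since the last sync.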
I am trying to add a couple of other safety mechanisms into the scripts. For example, the scripts should only work on data for the current year. Older data is not touched. And as I mentioned above, the audit trail is working and can now be called upon from inside any field.
I don’t have the database ready to upload yet. So far I only have the trench and SU tables syncing. I need to add the rest of the syncable tables to the scripts before I am ready for the Pompeii team to test. I will hopefully be able to post the file in a few days.