You can download the current version of the PARP:PS database here.

I blogged about the database I used at the start of the 2011 field season. This post describes the database as I used it at the end of the field season. The syncing in the earlier database worked well for about a week and a half; then I discovered a couple of problems.

Early in the season we had a day of rain at the site. We tried working through it but ended up leaving the site after lunch. After we got back to the apartments I synced the iPad child databases to the parent database, and things were fine. For the rest of the afternoon the teams worked on cleaning up their records and adding captions to their photographs while connected to the parent database over the network. The next morning I put copies of the parent database back on the iPads, and at lunchtime all of the records that had been edited the previous afternoon were imported as new records. Since I have four iPads and four child databases, that meant four attempts to add the same records to the database. Not at all what I intended.

There was another workflow issue as well. The syncing that we do is based on a salesman model. That is: each salesman has an iPad and enters their own sales. One salesman shouldn't be editing another salesman's records; only the home office can do that. In my case the trench supervisors are the salesmen and the laptop at the Porta Stabia gate is the home office. But this year we have a new storage space for finds, away from both the trenches and the gate, and it makes much more sense for our registrar to inventory those finds near the storage space. Since both she and the trenches have the power to edit the small finds table, the potential for data loss if both edited the same records in the same morning was too much for me to accept.

So I rewrote the scripts to catch the one real error and the one potential error. This involved some changes to the data structure as well. I added two fields to each table: _DeviceCreated and _DeviceModified. _DeviceCreated was easy to implement, as FileMaker allows you to auto-enter some data when the record is created; I told it to use Get ( SystemPlatform ) & "-" & Get ( HostName ). The platform returns -1, 1, or 2 for a desktop/laptop and 3 for an iOS device; the host name is the network name of the computer creating the record. _DeviceModified was trickier, since FileMaker won't auto-enter data into a field when the record is modified except for the modification date/time, name, or account name. I also needed the _DeviceModified field to always have data, so I initially set it to auto-enter If ( IsEmpty ( _DeviceModified ) ; _DeviceCreated ; _DeviceModified ). Since I already have script triggers attached to all editable fields in the database, I added two steps to the StoreAuditOnSave script that handles the audit trail: Set Variable [$DevModName; Value: Get ( ActiveFieldTableName ) & "::" & "_DeviceModified"] and Set Field By Name [$DevModName; Get ( SystemPlatform ) & "-" & Get ( HostName )].
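Pulled out of the prose, those pieces look like this (the formatting is mine; the calculations and script steps are just the ones described above):

    # Auto-enter calculation on _DeviceCreated (evaluates when the record is created)
    Get ( SystemPlatform ) & "-" & Get ( HostName )

    # Auto-enter calculation on _DeviceModified (seeds the field so it always has data)
    If ( IsEmpty ( _DeviceModified ) ; _DeviceCreated ; _DeviceModified )

    # Steps added to the StoreAuditOnSave script trigger
    Set Variable [ $DevModName ; Value: Get ( ActiveFieldTableName ) & "::" & "_DeviceModified" ]
    Set Field By Name [ $DevModName ; Get ( SystemPlatform ) & "-" & Get ( HostName ) ]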

Once that was done, I could mark the records in the shadow table that have the same modification date and modification device as the corresponding records in the parent database. This is done in the shadow table only, through two fields (MatchModificationDate and MatchModificationDevice). I also reordered the script steps, so the syncing steps are now as follows:

x_1_2 Import New xxxx

Imports the records from the child database into the parent shadow database

x_2_2 Ignore Matching Records

Checks for records that have the same modification date and device between the real and shadow tables. If all of them are the same, the script skips to the end of the section, deletes the shadow records, and moves on to the next step. Otherwise, it puts an 'i' in the sync field so the later steps ignore those records. A sketch of how this could be wired up appears below.
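One possible implementation (the Real relationship from the shadow table to the real table, matched on UUID, and the exact script structure are my assumptions, not taken from the file):

    # Hypothetical calculations for the two match fields in the shadow table
    MatchModificationDate:   If ( ModificationDate = Real::ModificationDate ; 1 ; 0 )
    MatchModificationDevice: If ( _DeviceModified = Real::_DeviceModified ; 1 ; 0 )

    # x_2_2 sketch: flag shadow records that match their real counterparts
    Go to Record/Request/Page [ First ]
    Loop
      If [ Shadow::MatchModificationDate and Shadow::MatchModificationDevice ]
        Set Field [ Shadow::sync ; "i" ]
      End If
      Go to Record/Request/Page [ Next ; Exit after last ]
    End Loop
    # If every shadow record ends up flagged, nothing changed on this child:
    # delete the shadow records and move on to the next step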

x_3_2 Find records created on child

Finds records in the shadow table that are new. Those are imported wholesale.
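In FileMaker terms, "new" can be expressed as a find that omits any shadow record whose UUID already exists in the real table; a sketch, again assuming a Real relationship matched on UUID:

    # x_3_2 sketch: find shadow records with no counterpart in the real table
    Enter Find Mode [ Pause: Off ]
    Set Field [ Shadow::UUID ; "*" ]      # request 1: any shadow record with a UUID
    New Record/Request
    Set Field [ Real::UUID ; "*" ]        # request 2: omit those with a related real record
    Omit Record
    Perform Find []
    # The resulting found set is what gets imported wholesale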

x_4_2 Find records edited on both parent and child

This step finds records that have been edited on two different children. When one child's edit is synced, it becomes part of the parent record and ModifiedAfterSync is set to 'y'. When the steps run with the next child, the script flags both versions as changed since the last sync and moves to the resolve screen so the data can be checked.
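A sketch of that check (ModifiedAfterSync is the field described above; the Resolve layout name and the script structure are my assumptions):

    # x_4_2 sketch: a shadow record whose parent was also modified
    # after the last sync is a conflict
    Enter Find Mode [ Pause: Off ]
    Set Field [ Real::ModifiedAfterSync ; "y" ]    # search shadow records via the relationship
    Perform Find []
    If [ Get ( FoundCount ) > 0 ]
      Go to Layout [ "Resolve" ]                   # layout name is an assumption
      Pause/Resume Script [ Indefinitely ]         # stop so the conflicts can be reviewed
    End If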

x_5_2 Find records edited on child and not parent

This step has the potential to cause the most problems, since the old parent record is overwritten; that is why I moved it to the end. It uses an import routine that updates matching records (matched on their UUID fields). If a field is added to the main table but not the shadow table, this type of import can produce unpredictable results, so check the import script step carefully.
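Roughly, the import step is FileMaker's "update matching records" mode with UUID as the match field; a sketch, with the field mapping details left out:

    # x_5_2 sketch: update real records from the shadow table, matched on UUID
    Import Records [ With dialog: Off ;
        Update matching records in found set ; UUID as the match field ]
    # Every shadow field should map to its real counterpart; a field that
    # exists in the real table but not the shadow table is what causes trouble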

x_6_2 Remove shadow xxxx data

This clears the records from the shadow table.

99_99 Finished Syncing

After the scripts are run once (by selecting Scripts > Sync2 > 1_1_2 Import Audit Trail), the script asks whether you have more databases to sync. If you say yes, it does nothing and you can sync the next database. If you say no, it resets the 'synced' field to 'y' and the LastSyncTimestamp to the current date and time. At that point the file can be closed and copied to the devices for more data entry.
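The end of that run might look something like this (the dialog wording and the table holding the two flags are my assumptions):

    # 99_99 sketch: final bookkeeping after the last child is synced
    Show Custom Dialog [ "Sync" ; "Do you have more databases to sync?" ]  # buttons: Yes / No
    If [ Get ( LastMessageChoice ) = 2 ]                                   # 2 = the "No" button
      Set Field [ Prefs::synced ; "y" ]                 # Prefs table is an assumption
      Set Field [ Prefs::LastSyncTimestamp ; Get ( CurrentTimeStamp ) ]
    End If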

The old sync scripts are still there, but hidden.

These scripts work on an open parent database and a closed child database, so all of the child records are imported. The script steps then filter out everything except the current year's records, which means previous years' records should only be edited on the parent database on the server computer. What I would like to do next is have the parent database actually open the child database and search each table for records that were created in the current year. That way the parent database would import only the current year's records, which would make the whole process run more quickly.
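That search would just be a find in each child table; a sketch, with the table and date field names (Trenches, DateCreated) as placeholders:

    # Hypothetical: restrict a child table to records created this year
    Enter Find Mode [ Pause: Off ]
    Set Field [ Trenches::DateCreated ; Year ( Get ( CurrentDate ) ) ]  # a bare year in a date field matches all dates in that year
    Perform Find []
    # Importing only this found set skips prior seasons' records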

I have tried to speed things up in other ways. When the media table is imported into the media shadow table, the container fields are not imported. And when the x_5_2 script runs and matching records are updated, the container fields are left alone.

There is more to say about the changes in the database, but the syncing issues were the most important for us and the part of the database that received the most development attention.
