Welcome to our first multi-lingual blog post, coming to you in English, Spanish, Italian, and "Digital". Today, we discuss how to perform live bulk uploads, updates, and deletions of over 300,000 records on FileMaker Server/FileMaker Cloud in about 35 minutes or less, safely and without incurring any downtime. Hold on to your hats, this one is definitely for the "square pegs in the round holes".
The problem
Whether on premise or in the cloud, multi-user databases often require routine loading, updating, or unloading of large data sets. Oftentimes these data feeds originate in other systems and arrive as flat files, such as .csv or fixed-length ASCII files. The go-to method for such updates is to take the database offline and perform all mass imports in single-user mode, but what if that isn't an option?
Some applications are "Always On", 24 hours a day. In a scenario where three work shifts access your application around the clock, the maximum available downtime may be less than one hour, sometimes as little as 30 minutes. In some extreme scenarios, downtime may not be an option at all.
What are the challenges presented by typical import processes? There are several factors to consider:
1. An import locks a table
Operations such as importing records or performing a "replace all" values in a field place an exclusive lock on the entire table. On a 300,000 record import, that can mean the table will remain locked for a prolonged period of time until the import is complete.
2. Concurrent imports will actually run slower
If there are multiple such large imports to perform into different tables within a limited timespan, running two or more concurrent imports to save time isn't advisable either, as performance for each individual import may actually suffer. There may also be unintended consequences caused by unknown dependencies such as joins between the tables (for example, auto-entered/computed fields being populated with erroneous information due to incomplete calculations).
3. An import can corrupt startup restoration logs
Compounding the problem is the issue of transaction logs used by the database server for startup restoration. When performing a large import, Claris recommends disabling transaction logging to avoid potential database corruption. Unfortunately, disabling transaction logging requires a soft reboot of the database engine, which defeats the purpose of avoiding server downtime.
4. A server-side import script takes too long
Although a safer practice, in our tests a server-side scheduled import of 300,000+ records into a single table can take upwards of 48 hours to complete, even for a narrow table of less than 40 fields. If you have three such tables to populate (approximately 1 million records to upload) and a limited time window in which to do it, that math simply doesn't add up.
5. Time may be of the essence
While similar results can be achieved using certain sleight-of-hand techniques such as "sliding window" table switching (which may be achievable via table graph manipulation and the separation model), for any OLTP system under maintenance window constraints, the bottleneck continues to be how long it takes to upload the data to the server. For situations where the data needs to be online "now", some heavy lifting is required.
The solution – first in 'Digital':
Implement a client-side RPC Bot with chunking and throttling algorithms
Rather than a bulk import, we decided to perform INSERT operations using stored procedures. The outcome was highly successful, as we were able to quickly and safely populate our table with a large data set (nearly 300,000 rows across 34 columns) in just over 33 minutes (even when testing with an iPad!).
Using this approach, we are able to perform the equivalent of a mass "import" live and without interruption (no table locking, no taking the file offline, etc.). The process is also high performance, with numerous throughput tests across multiple hosts (both virtual and bare metal) consistently averaging 145-150 rows per second.
For more full-featured table maintenance functionality, we extended our RPC Queue to include selective UPDATE operations, selective DELETE operations, and optional TRUNCATE-table operations. Based on numerous tests across multiple hosts (both virtual and bare metal), total updates and total deletions of every row in the entire table only took slightly longer to execute than the original insert of all rows.
Here's an overview of the techniques we employ:
Input Stream: We use a Buffered Input Stream to read the entire data file and ingest it into a temporary (global) text column within a working table. Reading our test 170MB text file and inserting its contents into a column takes us less than 30 seconds.
Chunking: We developed a data chunking algorithm to parse the data file into payloads that can be parametrized for consumption by stored procedures.
Looping: To expedite data processing, a For-Loop iterates over the chunks, firing off one stored procedure per chunk in rapid fire sequence.
Stored Procedures: A stored procedure iterates over each payload, processing one row at a time. The host executes multiple stored procedures in parallel.
Throttling: To guard against crashing the server's script engine or overtaxing the server's resources, we developed a self-regulating, queue throttling mechanism for managing the client-side queue.
Truncating: For situations where the table must be re-populated from scratch, the table can optionally be truncated.
Interruptibility: Throughout execution, our client-side RPC Queue can be paused, resumed, or aborted.
Robotic Process Automation: In addition to on-demand tasks, we also enable time-based scheduling of tasks using "Install OnTimer Script". This converts your device running FileMaker client into a basic Robotic Process Automation station.
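As a minimal sketch of that scheduling hook (the script name and interval are placeholders, not taken from the demo file):

# Turn the client into a basic RPA station by re-running the queue dispatcher every hour
Install OnTimer Script [ "Run RPC Queue" ; Interval: 3600 ]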
Now for the details:
Input Stream
If the source data file has been delivered as a BLOB file inserted into a binary column, the file is first exported to the Documents directory on the local file system, from which it can then be read back in as plain text using File Input Stream commands.
Chunking
The practice of data chunking is well understood and commonly employed for data synchronization and deduplication use cases. For our algorithm, chunks are capped at 560,000 characters, which is just over half of FileMaker's maximum parameter length for stored procedures (current limit is 1 million characters).
Stored Procedures
Likewise, most database platforms support stored procedures. We leverage the FileMaker Server Script Engine (FMSE), which implements this functionality across the entire server product line and can be invoked via Perform Script on Server.
Queuing
The missing piece of the puzzle on the FileMaker platform (at least as of this writing) has been RPC queuing. Whether client-side or server-side, there is currently no native support for RPC queuing/stored procedure queuing on the FileMaker platform. Scheduled Tasks/Cron Jobs are available, but that is not the same as a queue.
Because it is easier to configure, monitor, manage (and interrupt if necessary), our solution implements a self-monitoring, interruptible, client-side queue. This means a dedicated workstation is required for the entire duration of a task.
Load monitoring, self-throttling
Our throttle is adaptive, regulating the queue by introducing a pause of variable duration depending on the server's backlog.
On servers supporting version 2 of FileMaker's Admin REST API, the pause length is determined by RPC load monitoring. On hosts that do not support or are not configured to accept Admin API calls, payload-based throttling is implemented.
Pause duration ranges from 2 seconds to 2 minutes based on either host load monitoring or chunk size (estimated stored procedure duration).
Next, the solution in English:
Implement a client-side PSoS Script Queue with batching and pausing
Rather than run an import, we decided to instead CREATE the records using the Perform Script On Server (PSoS) script step. The outcome was highly successful, as we were able to quickly and safely populate our 34-field table with nearly 300,000 records in just over 33 minutes.
Using this approach, we are able to perform the equivalent of a mass record "import" live and without interruption (no table locking, no taking files offline, etc.). The process is also high performance, with numerous tests across multiple FileMaker Servers (both virtual and bare metal) consistently averaging 145-150 records per second.
For more full-featured table maintenance functionality, we extended our RPC Queue's looping scripts to include selective UPDATE operations, selective DELETE operations, and an optional TRUNCATE-table step. Based on our tests, total updates and total deletions of every record in the entire table only took slightly longer to execute than the original insert of all records.
Here's an overview of the techniques we use:
Read From Data File: We used the Read From Data File step to read the entire data file and save it to a global text field in a working table.
Chunking/Batching: We wrote calculations and script steps to chop the data file into multiple smaller batches that can be sent to the server as script parameters.
Looping: To expedite record processing, a Loop repeats from 1 to the maximum number of data chunks, firing off one PSoS script per chunk in rapid fire sequence.
Perform Script On Server: On the server, a subscript loops over each data batch received, processing one record at a time. The server can run multiple of these Perform Script on Server sessions at the same time.
Throttling: To guard against crashing the server's script engine or overtaxing the server's resources, we developed a self-regulating mechanism to pause/resume the client-side queue.
Truncating: For situations where the table must be wiped and have all the records reloaded from scratch, the script can optionally truncate the table.
Interruptibility: If necessary, the client-side script queue can be paused, resumed, or aborted.
Now for the details:
Read From Data File
If the source data file has been delivered as a document in a container field, the file is first exported to the Documents folder on the client computer, from which it can then be read back into a text field (as plain text) using Data File script steps. The actual reading of the data file and inserting into a text global field takes less than 30 seconds.
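A minimal sketch of that step sequence, assuming the source file sits in a container field (Staging::SourceFile) and lands in a global text field (Staging::gData); the table, field, and file names are placeholders:

Set Variable [ $path ; Value: Get ( DocumentsPath ) & "ReleasableAircraft.txt" ]
# Prefix the path with file:/filewin:/filemac: if your platform requires it
Export Field Contents [ Staging::SourceFile ; "$path" ]
Open Data File [ "$path" ; Target: $fileId ]
# Omitting the byte amount reads from the current position to the end of the file
Read from Data File [ File ID: $fileId ; Target: Staging::gData ; Read as: UTF-8 ]
Close Data File [ File ID: $fileId ]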
Chunking/Batching
When it comes to running scripts on the server for large data sets, you have to "spoon feed" it the data in smaller bites or the server will choke trying to process the entire thing in one shot. Data chunking is a batching solution commonly used for data synchronization.
The maximum parameter length for FileMaker's Perform Script On Server command is 1 million characters. For safety and manageability, the batch/chunk sizes our script creates are capped at 560,000 characters, just slightly above 50% of max.
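Here is a much-simplified sketch of the batching loop, assuming return-delimited rows in a global field (Staging::gData) and a server script named "Process Chunk"; the field, script, and variable names are illustrative only, and the sketch assumes no single row is longer than the cap:

Set Variable [ $cap ; Value: 560000 ]
Set Variable [ $total ; Value: Length ( Staging::gData ) ]
Set Variable [ $start ; Value: 1 ]
Loop
  Exit Loop If [ $start > $total ]
  # Take up to $cap characters, then trim back to the last complete row (¶-delimited)
  Set Variable [ $raw ; Value: Middle ( Staging::gData ; $start ; $cap ) ]
  Set Variable [ $cut ; Value: If ( $start + $cap > $total ; Length ( $raw ) ; Position ( $raw ; ¶ ; Length ( $raw ) ; -1 ) ) ]
  Set Variable [ $chunk ; Value: Left ( $raw ; $cut ) ]
  Perform Script on Server [ "Process Chunk" ; Parameter: $chunk ; Wait for completion: Off ]
  Set Variable [ $start ; Value: $start + $cut ]
End Loop

In the real queue, the dispatch loop also pauses between batches (see the throttling sketch below) rather than firing every batch unchecked.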
Perform Script On Server
Most database platforms support running scripts on the server, also known as stored procedures. The FileMaker Server Script Engine (FMSE) implements this functionality across the entire server product line and can be invoked via Perform Script on Server.
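For illustration, here is a minimal sketch of what the server-side subscript might look like for the INSERT case, assuming tab-delimited columns within each return-delimited row; the layout and field names are hypothetical. The UPDATE and DELETE variants would first find the matching record by its unique ID rather than creating a new one.

# Runs on the server via Perform Script on Server; the script parameter is one data batch
Set Variable [ $rows ; Value: Get ( ScriptParameter ) ]
Set Variable [ $count ; Value: ValueCount ( $rows ) ]
Set Variable [ $i ; Value: 0 ]
Go to Layout [ "Aircraft" ]
Loop
  Exit Loop If [ $i ≥ $count ]
  Set Variable [ $i ; Value: $i + 1 ]
  # Convert the tab-delimited row into a value list of columns
  Set Variable [ $cols ; Value: Substitute ( GetValue ( $rows ; $i ) ; Char ( 9 ) ; ¶ ) ]
  New Record/Request
  Set Field [ Aircraft::SerialNumber ; GetValue ( $cols ; 1 ) ]
  Set Field [ Aircraft::Model ; GetValue ( $cols ; 2 ) ]
  # ...one Set Field per mapped column...
  Commit Records/Requests [ With dialog: Off ]
End Loop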
Queuing
As of this writing, the missing piece of the puzzle on the FileMaker platform has been RPC queuing. Whether client-side or server-side, there is currently no native support on the FileMaker platform for creating and managing a list of on-demand Perform Script On Server sessions. Script Scheduling is available, but that is not the same as a PSoS queue.
The PSoS Queue we developed is self-monitoring and interruptible. Because it's easier to set up, monitor, manage (and cancel if necessary), it has been implemented client-side. This means that for running these table update processes, a dedicated FileMaker workstation (aka a FileMaker Bot) is required.
Load monitoring, self-throttling
By automatically lengthening or shortening the script pause, our script queue manages the load placed on the server as well as possible.
On servers supporting version 2 of FileMaker's Admin REST API, the pause length is determined by monitoring the list of pending PSoS scripts. On hosts that do not support or aren't set up to accept Admin API calls (e.g., they have no SSL certificate), the duration of the script pause is calculated from the size of the data chunks.
Pause lengths range from 2 seconds to 2 minutes, based on either host load monitoring or chunk size (estimated PSoS script duration).
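As a rough sketch of the payload-based branch (the constants and the scaling formula are illustrative only; the real queue uses Admin API load monitoring when it is available):

# Scale the pause with the size of the batch just dispatched, clamped to 2-120 seconds
Set Variable [ $min ; Value: 2 ]
Set Variable [ $max ; Value: 120 ]
Set Variable [ $pause ; Value: Ceiling ( $max * Length ( $chunk ) / 560000 ) ]
Set Variable [ $pause ; Value: Max ( $min ; Min ( $pause ; $max ) ) ]
Pause/Resume Script [ Duration (seconds): $pause ]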
Demo file instructions
We've provided a full working demo file. To walk you through the setup process, our file imitates FileMaker's native import dialog window, including options for selecting which fields to exclude and which fields to match on.
It all begins with importing the latest version of the FAA's Releasable Aircraft Database, then reading the text from that file into a text field.
On the next screen, choosing a destination layout populates a portal with a list of target fields and allows you to match the columns from your data file (on the left) against the fields from your selected table (on the right).
Above the center column, a popover menu allows you to select between "INSERT" (creates new records), "UPDATE" (updates existing matching records), and "DELETE" (deletes matching records only). The middle column then allows you to choose which field(s) to match on (for updates/deletes only) and which field(s) to exclude.
On the following screen, you'll have the option to schedule your task on a recurring basis or to run it immediately as a one-time task. A third option, scheduling one-time tasks for a later date, could also have been added, but we didn't think it was necessary for demo purposes.
All the core functionality is open-sourced. Simply click on the "Get Source Code" button to export a minimal version of our demo that you can fully examine.
Download our demo file here.
User name: Developer
Password: filemaker (lower case)
(Note: Requires FileMaker Pro 18 or FileMaker Go 18, and FileMaker Server 18 or FileMaker Cloud 1.18+)
Caveats – aka 'what could go wrong?'
While we experienced only FMSE overload during development (no crashes or permanent damage), we must always recommend caution, especially when working with large data sets. Here's a short (but not exhaustive) list of possible challenges.
1. FMSE overload
If the number of PSoS scripts running in parallel exceeds the server's maximum capacity setting, the server's FMSE can be overrun, leading to never-ending scripts that must be manually terminated. This can lead not just to incomplete data uploads, but also to potential locking/freezing, or even permanent damage, of your database(s).
2. Database resources overload
If the number of PSoS scripts running in parallel exceeds the server's processing capacity, the entire database server may crash. This can lead to damaged database files that may need to be recovered.
Recommendations
1. Test on a copy of your production database
Don't run this on your live database until you've tested it with your own import data files and tables. Ideally, you'd have an entirely separate test server to try these types of modifications before rolling them into production.
2. Make backups (plural)
Backup, backup, backup. Even after thoroughly testing this offline, make sure to have several full backups of your FileMaker databases before running this in production.
3. Upgrade your server's resources
If you are managing this many records, and fast data processing is important to you, it may be worth upgrading your server's horsepower. In a cloud or virtual environment, this may be as simple as allocating more CPU, RAM, and storage to your instance. If you're running a bare metal machine, even installing more RAM and a faster SSD is a relatively cheap and painless process these days.
4. Increase your server's PSoS threshold
Though not supported in FileMaker Cloud, the on-premise FileMaker Server Script Engine (FMSE) is configurable. Depending on your host's configuration (number of CPU cores, memory, and throughput capabilities), it may be possible for your database server to handle more than 100 concurrent script sessions.
For the on-premise edition of FileMaker Server, the default concurrent stored procedures (PSoS) threshold is 100 sessions. If your server has the resources, it may be advantageous to increase the FMSE threshold such that your host can process more payloads in parallel by running a greater number of concurrent script sessions. Modifying this setting does not require any sort of reboot, and it can be easily changed back after you've completed your task.
On FileMaker Cloud, the remote procedure threshold is not configurable, and is dependent on your software license.
How to increase your FileMaker Server's FMSE threshold
The threshold can be raised either at the server's command line or remotely via FileMaker's Admin REST API. Below is a sketch of the REST approach, issued from a FileMaker client:
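The sketch assumes the "maxPSOS" key exposed by the Admin API v2 general-configuration endpoint, along with a placeholder host name and placeholder credentials; verify the endpoint and key name against your server version's Admin API documentation.

# 1) Sign in to the Admin API to obtain a session token
Set Variable [ $base ; Value: "https://yourserver.example.com/fmi/admin/api/v2" ]
Set Variable [ $body ; Value: "{}" ]
Insert from URL [ Select ; With dialog: Off ; Target: $auth ; $base & "/user/auth" ; cURL options: "-X POST --user admin:yourpassword --header \"Content-Type: application/json\" --data @$body" ]
Set Variable [ $token ; Value: JSONGetElement ( $auth ; "response.token" ) ]
# 2) Raise the maximum number of simultaneous script (PSoS) sessions
Set Variable [ $json ; Value: JSONSetElement ( "{}" ; "maxPSOS" ; 150 ; JSONNumber ) ]
Insert from URL [ Select ; With dialog: Off ; Target: $result ; $base & "/server/config/general" ; cURL options: "-X PATCH --header \"Authorization: Bearer " & $token & "\" --header \"Content-Type: application/json\" --data @$json" ]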
About our test and results:
- We tested with the FAA's Releasable Aircraft Database Download data file.
- Tested on FileMaker Server 18 and FileMaker Cloud 1.18.
- Virtual servers were running Windows Server 2012/2016 on t2.large AWS EC2 instances.
- Bare metal server was a Late 2012 quad-core i7 Mac mini with 16 GB RAM.
- All hosts used default server cache and default server statistics intervals.
- Network performance: latency between 5 ms to 40 ms.
- Update tasks are searching for records by unique ID.
- Tests were conducted using narrow tables (containing ~35 non-computed columns).
- Test fields are all alpha-numeric (text, date, time, etc.), no container/binary data.
- Test field content is relatively short (no novel-length values).
- FileMaker Pro/Go robots used default cache settings.
- FileMaker Pro/Go robots used WiFi or hard-wired Gigabit ethernet.
Finally, in binary
In closing: if at first you don't succeed...