What is the Platform Science TMS Integration?


The Platform Science TMS Integration (hereafter referred to as “the integration”) is a background service that syncs TMS data to the Platform Science platform and retrieves updates, messages, and actions from it, enabling near real-time, bi-directional workflow, data flow, and messaging between driver applications and back-office systems.  Loads assigned to drivers in the TMS are automatically created and transmitted to driver devices, alongside configurable workflow actions and forms, and the actions a driver takes are automatically retrieved from the platform, captured, and synced to the corresponding events in the TMS. 


How does it work?

The integration’s components all serve one of two primary functions: pushing TMS data to the Platform Science platform, or pulling driver events, data inputs, and documents back into the TMS.  Together, these two functions provide bi-directional syncing of TMS data between back-office systems and driver devices. 

 

Included in the integration are several processes that monitor the TMS database for relevant changes, such as a new load being created or assigned to a driver.  How these monitors work depends largely on the TMS being used, but in general, database-level change tracking is used to detect updates to certain types of assets such as locations, tractors, and users.  For loads, the integration hooks into TMS-level update routines that are triggered whenever back-office users update a load.  When changes are detected, the integration compares the last known state of that data against its state in the platform and, when needed, transmits those changes to Platform Science so they are reflected immediately on the appropriate driver’s device. 
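The compare-and-push pass described above can be sketched roughly as follows. This is an illustrative outline only: the entity names, the fetch/push helpers, and the in-memory state cache are assumptions, not the integration's actual API, and the real implementation relies on TMS-specific change tracking rather than a full scan.

```python
# Hypothetical sketch of the change-detection loop: compare current TMS
# rows against their last known state and push only the rows that changed.
def sync_changed_entities(fetch_current, last_known, push_update):
    changed = []
    for key, row in fetch_current().items():
        if last_known.get(key) != row:   # new or modified since the last pass
            push_update(key, row)        # transmit the change to the platform
            last_known[key] = row        # remember the new state
            changed.append(key)
    return changed
```

Keeping a last-known snapshot means an unchanged row costs only a comparison, so repeated passes are cheap even when nothing has changed.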


Load information input by the driver, workflow actions, documents, and other data are captured from driver devices through event consumer queues exposed by the Platform Science platform.  As drivers complete trips and perform workflow steps, Platform Science populates those events into the appropriate event queue along with all of their corresponding data.  The integration monitors these queues in real time, retrieving data as soon as it is available, applying it directly to the TMS, and reflecting the updates to back-office staff within seconds. 
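The queue-draining half of this flow can be sketched as below. The queue client and the apply step are placeholders for illustration; the real integration consumes Platform Science's hosted event consumer queues, not an in-process queue.

```python
import queue

# Illustrative sketch of consuming an event queue and applying each
# event to the TMS as soon as it is available.
def drain_events(event_queue, apply_to_tms):
    applied = 0
    while True:
        try:
            event = event_queue.get_nowait()
        except queue.Empty:
            break                 # nothing left; wait for the next poll
        apply_to_tms(event)       # e.g. actualize a stop or file a document
        applied += 1
    return applied
```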

 

How is the Integration installed? 


The integration runs as a Windows Service and can be installed on nearly any Windows machine with network access to the underlying TMS database.  The service’s files contain everything it needs to run, avoiding the need to install any underlying frameworks or SDKs ahead of time. 


The service makes use of a suite of pre-defined database scripts that retrieve data from your TMS database to display to drivers, and later take driver-entered data and events and update the TMS accordingly.  These scripts will need to be installed with the assistance of a database administrator. 


ASR will work closely with your team throughout the installation process, helping to determine the best installation location, guiding database administrators through scripting deployment, and configuring the integration for your environment. 


Settings


The integration uses many configuration values, in both the application and the database, to determine how it should behave.  Those values come from two sources: a JSON configuration file in the application’s working directory for application configuration, and the PS.Settings database table for database scripting configuration.  Any configuration value can be updated to tweak the integration’s behavior or to enable or disable certain features as desired. 


Application Settings


The settings below are the values in the application configuration file that are most likely to be updated depending on the needs of your business.  This is not an exhaustive list; many configuration values will most likely remain static (e.g., stored procedure names for various functions). 


Geofences.UseDefaultPolygon
    Determines whether a default polygon geofence should be used when a task requires a polygon geofence but polygon points are not provided.
    Default: False

Geofences.DefaultPolygonPoints
    The number of points in the generated default polygon shape.  Minimum 3.
    Default: 4

Geofences.DefaultPolygonRadiusMiles
    The radius of the default polygon shape in miles.  Decimal values accepted.
    Default: 0.25

MonitoringQueue.JobQueueInterval
    Controls how frequently the integration polls the database for new or updated load / trip information.  Value is in seconds.
    Default: 5

AdHocSettings.SchemaTableCleanupEnabled
    Enables automated cleanup of integration auditing and history tables older than a configurable number of days.  The number of days of history to keep is configured in the database settings table.
    Default: False

MasterDataConfig.AssetEnabled
    Enables syncing of Assets (tractors and trailers) from the local TMS to the Platform Science platform.
    Default: True

MasterDataConfig.DriverEnabled
    Enables syncing of Drivers from the local TMS to the Platform Science platform.
    Default: True

MasterDataConfig.LocationEnabled
    Enables syncing of Locations from the local TMS to the Platform Science platform.
    Default: True

MasterDataConfig.MessageEnabled
    Enables syncing of Messages from the local TMS to the Platform Science platform.
    Default: True

MasterDataConfig.UserEnabled
    Enables syncing of non-driver users from the local TMS to the Platform Science platform.
    Default: True
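Assuming a conventional nested JSON layout (the actual file may be structured differently), the settings above might appear in the configuration file like this:

```json
{
  "Geofences": {
    "UseDefaultPolygon": false,
    "DefaultPolygonPoints": 4,
    "DefaultPolygonRadiusMiles": 0.25
  },
  "MonitoringQueue": {
    "JobQueueInterval": 5
  },
  "AdHocSettings": {
    "SchemaTableCleanupEnabled": false
  },
  "MasterDataConfig": {
    "AssetEnabled": true,
    "DriverEnabled": true,
    "LocationEnabled": true,
    "MessageEnabled": true,
    "UserEnabled": true
  }
}
```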

 

Database Settings

The settings below are configured at the database level.  They can be altered by updating the corresponding record in the PS.Settings table or, if a value has not been added yet, by adding a new record for it.  By default, the PS.Settings table is empty on initial integration installation.  This is expected behavior; the procedure that reads configuration values supplies a default value when a setting is not present in the table. 


RetainWorkflowTaskDays
    The number of days that Workflow Task information should be retained in auditing tables when Schema Table Cleanup is enabled in application settings.
    Default: 90

EnableAutoDepart
    Enables processing of automated geofence departure events.
    Default: False

ArriveCreatesDeadhead
    For TMS systems that use deadheads, creates a deadhead event on an Arrive instead of a Start if one is not present on the load.
    Default: False

StartActualizesDepart
    Actualizes both arrive and depart on a Start event.
    Default: False

UseTerminalOffset
    (TMWSuite-specific) Uses the Terminal Label Extra String 2 as an offset to the Date/Time Received from the Device.
    Default: False

DeadheadByTractor
    Uses the last known Tractor position as the preferred starting point for Deadheads.
    Default: True

UpdateShiftOnLogin
    Determines whether device login events should set the login time on the user's closest pending shift.
    Default: False

UpdateShiftOnLogout
    Determines whether device logout events should set the logout time on the user's closest open shift.
    Default: False
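The default-fallback read described earlier (the PS.Settings table starts empty, and the reading procedure supplies a default when a setting is absent) can be sketched as follows. SQLite stands in for the real TMS database here, and the table and column names merely mirror PS.Settings for illustration.

```python
import sqlite3

def get_setting(conn, name, default):
    """Return the stored value for a setting, or the supplied default
    if no row for that setting has been added yet."""
    row = conn.execute(
        "SELECT SettingValue FROM Settings WHERE SettingName = ?", (name,)
    ).fetchone()
    return row[0] if row else default

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Settings (SettingName TEXT PRIMARY KEY, SettingValue TEXT)")

# The table starts empty, just like a fresh install: the default applies.
print(get_setting(conn, "EnableAutoDepart", "False"))   # -> False

# Adding a record overrides the default.
conn.execute("INSERT INTO Settings VALUES ('EnableAutoDepart', 'True')")
print(get_setting(conn, "EnableAutoDepart", "False"))   # -> True
```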


How and when does the TMS get updated?

At the time of writing, the primary source of TMS updates is Workflow Tasks performed by drivers as they progress through Jobs.  Each Job (i.e., Load) in Platform Science has one or more Steps (i.e., Stops) associated with it, and each Step can have one or more Workflow Tasks to complete as part of that Step.  The most common Workflow Tasks are Arrives and Departs. 


As soon as a driver completes a particular task on their device, either manually or automatically in the case of geofenced arrivals and departures, the device will send a notification of that task’s completion to the Platform Science platform, including information about the task, when it was completed, and any corresponding metadata.  That task is then populated into the corresponding event consumer queue to be consumed as soon as the client is ready to handle it. 


The integration continually monitors these event consumer queues, pulling down and processing events as soon as they are available.  The result is near real-time syncing of a load’s status between the driver’s device and the TMS, with updates reflected to back-office staff within seconds of the event occurring. 


What data is transferred from the TMS to driver devices?

The data that is read from the underlying TMS database can be roughly separated into two categories: master data consisting of largely static entities (such as locations, users, assets, vehicles, etc.) and load data that is used to convey to the driver everything that they need to fully complete a load. 


Master Data is stored in the Platform Science platform, where it can be referenced on Jobs and managed through the Platform Science web interface.  The integration monitors for any changes to those entities and updates the platform accordingly.  This currently consists of: 

  • Assets (tractors, trailers) 
  • Drivers 
  • Locations 
  • Users (back-office users who can access the platform’s web interface) 

Load Data is dynamically populated and pushed through the platform to driver devices as loads are created, assigned, and updated.  This includes everything about a load that your drivers need to know, and it is highly configurable to allow for different business and operational requirements where needed.  This currently consists of (but is not limited to): 

  • Stops 
  • Appointment Times 
  • Workflow Actions (currently Arrive and Depart from stops) 
  • Commodities 
  • Location information including geofences 
  • Shipment Details 
  • Shipping Documents 
  • Driver Alerts 
  • Additional Remarks 
  • … and much more! 

 

How do dispatches get sent to drivers?


Here’s a step-by-step overview of the process to get dispatches out to drivers. 



  1. Newly created or updated dispatches are fed into the PS.JobsQueue table by the procedure PS.usp_SaveUpdatedTrip. 
        a. Exactly where this takes place depends on the TMS.  For example, updates in TMW are triggered by the system procedure update_move_postprocessing, which is invoked whenever loads are created or updated in the TMW interface. 
  2. The integration continually monitors the PS.JobsQueue table for new entries.  Any that it finds are marked with a “Processing” flag and pulled into the integration to be built into Jobs by the procedure PS.usp_GetUpdatedJobs. 
        a. At the time of writing, PS.usp_GetUpdatedJobs is the source of most “decision making” about what data is returned for a Job and how it is structured. 
  3. The integration checks the Platform Science API to see if a matching Job already exists.  If it does, that Job is updated with the information retrieved from the database.  Otherwise, a new Job is created. 
  4. The integration then updates the PS.JobsQueue table based on each Job’s processing status. 
        a. Successfully processed Jobs are removed from the queue. 
        b. Jobs that encountered an error during processing only have their Processing flag reset, causing them to be retrieved and processed again on the next pass. 
        c. All Jobs are added to (or updated in) the PS.JobsHistory table for auditing, regardless of processing status.
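One pass over this queue-processing flow can be sketched as below. The in-memory list, the job_exists check, and the create/update callbacks stand in for the real PS.JobsQueue table and the Platform Science API; only the retry-and-audit shape is taken from the steps above.

```python
# Hypothetical sketch of one processing pass: update existing Jobs,
# create new ones, audit everything, and keep failures queued for retry.
def process_jobs_queue(pending, job_exists, create_job, update_job, history):
    remaining = []
    for job in pending:                      # entries already flagged Processing
        try:
            if job_exists(job["id"]):
                update_job(job)              # matching Job found: update it
            else:
                create_job(job)              # no match: create a new Job
            history[job["id"]] = "success"   # audit regardless of outcome
        except Exception:
            history[job["id"]] = "error"
            remaining.append(job)            # picked up again on the next pass
    return remaining
```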
     

How do messages get sent to drivers?


This is largely dependent on the underlying TMS and the messaging components it is using.  In general, the process will roughly adhere to the following steps. 


  1. The integration invokes the stored procedure PS.usp_GetMessages on an interval.  This procedure’s job is to examine the underlying database and return any new outbound messages that have not been sent. 
            a. Where this procedure looks varies based on the messaging components being used.  For something like TotalMail, for example, the procedure uses database-level change tracking to monitor TotalMail tables for new messages. 
  2. Any new messages are pulled into the integration, built as Messages and created in the Platform Science API.  Creating a message “sends” it to the driver and links it into existing conversation threads where appropriate. 
     

How are messages sent by drivers returned to my TMS?


  1. The integration subscribes one of its monitoring components to the “messaging” event consumer queue.  All messages sent from driver devices are put into this queue by Platform Science. 
            a. ASR and other third-party vendors subscribe to “external” event consumer queues that are intended only for that vendor’s use.  These external queues are duplicates of their “primary” counterparts, allowing the primary to remain available for use by the client themselves and avoiding any overlap or contention with the third-party vendor.  For the messaging queue, this will be named something like "external.<vendor name>.messaging” (e.g. "external.asr.messaging”) 
  2. Messages are deserialized and inserted into the database (or other repository) by the stored procedure PS.usp_UpdateMessage. 
            a. This procedure audits messages and their parts in several different tables: PS.Messages, PS.MessageFields, PS.Conversations and PS.MessageRecipients. 
            b. As with other processes, exactly where driver messages are inserted is entirely dependent on the TMS and the messaging components it is using. 
  3. After message audit, the stored procedure PS.usp_ProcessFormMessage is called to perform any customized post-processing. 
     

How are Workflow actions retrieved and applied to the TMS?


Workflow actions (tasks) are processed by two separate components in the integration: one that retrieves and audits all workflow tasks, and another that separately monitors the auditing table and processes any audited tasks that have not yet been processed.  This slight difference in handling compared to other, similar processes is largely due to the complexity and importance of workflow actions: we want workflow tasks audited and removed from the event consumer queue as quickly as possible, and we don’t want any failures in their processing to interrupt that flow. 


  1. The integration subscribes one of its monitoring components to the “workflow” event consumer queue.  All workflow tasks performed by drivers are put into this queue by Platform Science. 
            a. ASR and other third-party vendors subscribe to “external” event consumer queues that are intended only for that vendor’s use.  These external queues are duplicates of their “primary” counterparts, allowing the primary to remain available for use by the client themselves and avoiding any overlap or contention with the third-party vendor.  For the workflow queue, this will be named something like “external.<vendor name>.workflow” (e.g. "external.asr.workflow").
  2. Workflow tasks are deserialized and inserted into the database by the stored procedure PS.usp_AuditWorkflowTask. 
            a. This procedure audits workflow tasks in two tables: PS.WorkflowTaskHistory and PS.WorkflowTaskFields. 


A separate component handles processing Workflow tasks. 


  1. PS.usp_ProcessWorkflowTasks is invoked on an interval.  This procedure iterates over tasks in PS.WorkflowTaskHistory and processes them in the order they were received. 
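The two-component split described above can be sketched as follows. The structures and names here are assumptions (an in-memory list stands in for PS.WorkflowTaskHistory); the point is the decoupling: auditing drains the queue quickly, while processing runs on its own interval and can fail without losing anything.

```python
import queue

audit_table = []            # stands in for PS.WorkflowTaskHistory

def audit_consumer(workflow_queue):
    """Stage 1: drain the queue quickly, recording tasks without
    processing them, so queue consumption is never blocked."""
    while True:
        try:
            task = workflow_queue.get_nowait()
        except queue.Empty:
            break
        audit_table.append({"task": task, "processed": False})

def process_audited_tasks(apply_to_tms):
    """Stage 2: run on an interval, applying unprocessed tasks in the
    order they were received; a failure here never loses the audit."""
    for entry in audit_table:
        if not entry["processed"]:
            apply_to_tms(entry["task"])
            entry["processed"] = True
```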
     

How are form messages retrieved and inserted into the TMS?


Form Messages are very similar to standard Messages in the way that they are handled and the scripts that are invoked to do so.  The primary difference is that Form Messages typically represent some kind of action being taken (or data being gathered) that isn’t related to a Job (load).  A good example of this is a Time Off Request form. 


  1. The integration subscribes one of its monitoring components to the "form_submission” event consumer queue.  All form messages sent from driver devices are put into this queue by Platform Science. 
            a. ASR and other third-party vendors subscribe to “external” event consumer queues that are intended only for that vendor’s use.  These external queues are duplicates of their “primary” counterparts, allowing the primary to remain available for use by the client themselves and avoiding any overlap or contention with the third-party vendor.  For the form submission queue, this will be named something like “external.<vendor name>.form_submission” (e.g. "external.asr.form_submission").
  2. Form Messages are deserialized and inserted into the database (or other repository) by the stored procedure PS.usp_UpdateMessage. 
            a. This procedure audits form messages and their parts in two tables, PS.Messages and PS.MessageFields. 
  3. After message audit, the stored procedure PS.usp_ProcessFormMessage is called to perform any customized post-processing. 
            a. This will normally constitute any processing done on a form.  PS.usp_UpdateMessage's primary job is to ensure that form messages are correctly audited. 
     

How can I further customize the Integration for my needs?


ASR Solutions specializes in working closely with your business to provide custom technology solutions that work the way you need them to.  Combined with the impressive suite of features Platform Science offers, covering a wide range of needs for logistics operations across multiple markets, the integration can be expanded and customized to perform nearly any function your business may require. 


For more information, feel free to email us at sales@asr-solutions.com, or call our toll-free number at 888-427-7835. Our team will be more than happy to work with you and ensure that your needs are met.