A Universal Scheduler for Emails, Slack Messages, Blog Posts, and More with callin.io

LittleRiverDog
(@littleriverdog)
Topic starter

Introduction

The initial concept for this project was to develop an email scheduler. This would enable any of my callin.io scenarios to schedule email dispatches at any future point. For instance, sending reminder emails to potential subscribers 30 days in advance.

Subsequently, I realized that the functionality I was building could be applied to any future task, not just sending emails. This includes sending Slack messages, posting on Facebook or Twitter, updating a CRM, and more. The possibilities are extensive.

I aimed for a solution that was…

  • Universal : So it can be invoked and reused from any scenario.
  • Flexible : Supporting any future action type, not limited to sending emails.
  • Unlimited : Capable of scheduling any task at any point in the future.
  • Simple : Once configured, it should be straightforward to use and easy to comprehend.
  • As low cost as possible : In terms of callin.io operations (refer to Part 3 for further details).

I believe the outcome is quite elegant and meets most of these objectives.

The operation of this system is best illustrated with a straightforward example: delaying emails until a specified future time. This is a frequent request in the community forum, for which a robust solution has been lacking. Hopefully, this will be helpful.

Note : The solution requires a fundamental understanding of concepts like webhooks and JSON formatting, so it might not be suitable for callin.io beginners. However, all necessary files and blueprints for the presented example are provided. Therefore, most users should be able to implement this without significant difficulty. Refer to Part 3 for a step-by-step setup guide.

This post is divided into three sections. The first gives an overview of how the system works in practice. The second digs into the technical details of how the scheduler is built, which is more advanced; however, the scheduler only needs to be set up once and can then be used universally, so that complexity is a one-time effort. The final section contains concluding thoughts and outlines the steps required to get everything operational.

There's a considerable amount to cover, so please grab a coffee and let's begin…

Part 1 : A Simple Email Scheduler

To demonstrate the practical application of this system, I've developed a simple email scheduler. This allows any scenario to schedule an email for future delivery—days, weeks, or even months from now.

There are three components to implementing this: the scenario initiating the email request, the scenario responsible for sending the email, and a basic data store facilitating communication between the two. In this example, a Google Sheet will serve as the data store, though any other suitable data store could be employed.

We will examine these components in reverse order, as the implementation details are dictated by the component performing the core action—in this case, sending the emails.

Note : The provided example is in its developmental stage, featuring numerous additional checks, status logging, etc. Additionally, the steps are segmented for clarity. I discuss the associated costs later in Part 3, but a production-ready version would likely be more streamlined.

The Backend Scenario

This is the final piece of the puzzle—the scenario that actually dispatches the emails. It is implemented as a simple webhook.

The scenario does not need to be aware of the scheduling intricacies. Its sole function is to send an email. The complexities of scheduling the webhook invocation are managed by the scheduler (detailed in Part 2).

To send an email, the scenario requires three pieces of information:

  1. The recipient's email address
  2. The email subject line
  3. The email body (formatted as HTML)

This information is stored in the data store (covered in the next section), so the webhook must be provided with a key to enable it to retrieve this data. For this purpose, we will utilize the callin.io UUID (Universally Unique Identifier) as it is guaranteed to be unique.
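
To make that contract concrete, here is a minimal sketch (in Python, purely illustrative) of the call that is eventually made to this backend. The field name Unique_ID matches the data structure defined in Part 3, step 11; the URL is a placeholder for your own webhook.

    import requests

    # Placeholder for the webhook URL created for the backend scenario.
    BACKEND_WEBHOOK_URL = "https://hook.eu2.make.com/your-backend-webhook"

    def trigger_backend(unique_id: str) -> None:
        """Call the backend webhook with nothing but the job's UUID.
        The backend looks everything else up (recipient, subject, body)
        in the data store using this key."""
        response = requests.post(BACKEND_WEBHOOK_URL, json={"Unique_ID": unique_id}, timeout=30)
        response.raise_for_status()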

Here's a depiction of the complete backend scenario:

Let's review it step-by-step:

  1. This is the webhook trigger. The only data provided by the trigger is the UUID.
  2. Next, it utilizes the UUID to retrieve the relevant email details from the Google Sheet data store (see the subsequent section).
  3. It checks whether the job has been cancelled – a lot can happen over 30 days 🙂
  4. If the job is still active, it proceeds to send the email.
  5. Upon successful dispatch, it logs the time and date in the data store.
  6. If the email sending fails, it also logs this event in the data store.
  7. Finally, if the job was canceled, it updates the data store to confirm that no email was sent.

Note that the final webhook responses are beneficial for debugging but would typically be omitted in a production environment, as no external process monitors these results. All error logging is handled via the Google Sheet.
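
For readers who find branching logic easier to follow as code, here is a rough Python sketch of the same flow, using an in-memory dictionary in place of the Google Sheet and a stub in place of the email module. The column names and result strings are illustrative, not prescriptive.

    from datetime import datetime, timezone

    # In-memory stand-in for the Google Sheet, keyed by UUID (illustration only;
    # the real scenario uses the Google Sheets modules instead).
    FAKE_SHEET = {
        "00000000-0000-0000-0000-000000000001": {
            "Cancelled": False,
            "Email_Address": "someone@example.com",
            "Email_Subject": "Reminder",
            "Email_Body": "<p>Hello!</p>",
            "Result": "",
        }
    }

    def send_email(address: str, subject: str, body_html: str) -> None:
        """Stand-in for the email module; swap in your own mail provider."""
        print(f"Sending '{subject}' to {address}")

    def handle_backend_webhook(payload: dict) -> dict:
        """Rough equivalent of steps 1-7 of the backend scenario."""
        unique_id = payload["Unique_ID"]            # 1. only the UUID arrives
        job = FAKE_SHEET[unique_id]                 # 2. fetch the email details by UUID
        if job["Cancelled"]:                        # 3. the job may have been cancelled
            job["Result"] = "Cancelled - not sent"  # 7. record that nothing was sent
            return {"status": "cancelled"}
        try:
            send_email(job["Email_Address"], job["Email_Subject"], job["Email_Body"])  # 4. send it
            job["Result"] = f"Sent {datetime.now(timezone.utc).isoformat()}"           # 5. log success
            return {"status": "sent"}
        except Exception as exc:                    # 6. sending failed: log the error instead
            job["Result"] = f"Failed: {exc}"
            return {"status": "failed"}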

The Datastore

This serves as a repository for storing information. While callin.io's built-in data stores could be used, they are quite limited, so I generally avoid them. For this example, a Google Sheet is employed due to its cost-effectiveness and widespread availability. Feel free to use alternatives like Airtable.

This is how I've structured the data. There are numerous ways to accomplish this, so feel free to deviate from this example as needed.

  1. The initial four columns are for control purposes, allowing a quick overview of each email's status.
  2. The subsequent three columns contain the three components of the email message. The entries here are for testing purposes.
  3. The final column is where the backend scenario records its results (Steps 5, 6, and 7 in the backend scenario). If it's blank, the email has not yet been sent.
  4. The Cancelled column enables the prevention of an email from being sent (refer to Step 3 in the backend scenario).
  5. If an email address is invalid (as demonstrated here), the last column will display the error. Email validation (with a regex, for example) could have been implemented, but capturing the failure at Step 4 proved simpler.

The Frontend Application

This refers to the scenario that needs to dispatch a delayed email. The main body of that scenario can be doing anything; that part is not critical. We assume that by this stage the scenario has the recipient's address, the subject line, the body content, and the required delay.

My example picks up from that point.

This is what it needs to accomplish:

  1. This step is solely for testing purposes, as it's where the test emails are generated. It involves setting a recipient, subject, body, and delay, mirroring the production scenario's requirements.
  2. The scenario then stores all this information in the aforementioned data store for retrieval by the backend webhook.
  3. It then constructs a simple JSON package containing the required delay and the webhook URL of the backend scenario. These two pieces of information are all the scheduler requires; it doesn't need to know the specifics of what is being scheduled. The complete details are covered in Part 2, and a sketch of this package follows the list below.
  4. It then invokes the scheduler webhook, transmitting the JSON data.
  5. If the invocation fails, it takes no action. The absence of an update in the data store will indicate that the email was never dispatched. In testing with hundreds of emails, this call has proved highly reliable, so failures here are rare.
  6. If the invocation is successful, it updates the data store.
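
Here is a minimal sketch of steps 3 and 4 above, assuming the four-field data structure described in Part 3, step 11 (Unique_ID, Target_Webhook, Delay_Mins, Job_Type); both URLs are placeholders for your own webhooks.

    import uuid
    import requests

    SCHEDULER_WEBHOOK_URL = "https://hook.eu2.make.com/your-scheduler-webhook"  # placeholder
    BACKEND_WEBHOOK_URL = "https://hook.eu2.make.com/your-backend-webhook"      # placeholder

    def schedule_job(delay_mins: int, job_type: str = "Email") -> str:
        """Build the small JSON package and hand it to the scheduler."""
        unique_id = str(uuid.uuid4())        # the same UUID stored alongside the email details
        payload = {
            "Unique_ID": unique_id,
            "Target_Webhook": BACKEND_WEBHOOK_URL,
            "Delay_Mins": str(delay_mins),   # the data structure declares this field as Text
            "Job_Type": job_type,
        }
        response = requests.post(SCHEDULER_WEBHOOK_URL, json=payload, timeout=30)
        response.raise_for_status()          # steps 5 and 6: record the outcome in your data store
        return unique_id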

Summary So Far

This covers all the necessary steps each time you wish to use this system to schedule a new activity:

  1. Create a backend webhook to perform the desired activity – sending an email, posting to a blog, updating Twitter, etc.
  2. Establish a data store to hold the necessary information.
  3. Incorporate four steps into your main scenario.

Simply repeat this process as needed.

Part 2 : A Universal Scheduler

This is where the core functionality resides. Remember, you only need to create the scheduler once, and I provide the blueprints and an example data store to simplify the process as much as possible.

So, it might be time to refresh your coffee and prepare for the details…

The Workers

The scheduler itself functions as another webhook. You may recall that it receives a simple JSON file containing the specified delay and the webhook URL of the backend scenario that will actually send the emails (as explained in Part 1).

The scheduler's task is to generate a new worker scenario, schedule it with the required delay, and configure it to simply call the backend scenario's webhook URL.

Note : I refer to these temporary scenarios generated by the scheduler as worker scenarios. This is a personal notation for clarity; they possess no extraordinary characteristics.

To maintain organization, I keep all these workers in a separate folder, as there will be as many worker scenarios as there are pending emails. My current folder structure is as follows:

  1. The folder is named Scheduler Workers and currently contains 12 worker scenarios.
  2. The top 9 scenarios are still active, indicating that the emails have not yet been sent.
  3. The bottom 3 scenarios are disabled, signifying that these emails have been dispatched. These are automatically removed after 24 hours to prevent the folder from continuously growing.

Don't be concerned if the scheduler names appear confusing; everything will become clear as I explain how the scheduler operates.

The Main Scheduler Process

Let's dive into the details…

  1. This webhook receives the JSON file from the frontend scenario. The JSON file contains the required delay and the URL of the backend.
  2. The first action is to store these details in its own data store (this is a separate data store from the one discussed in Part 1). This is covered in the next section.
  3. It then generates a blueprint for the worker scenario it will create. Refining this part took several attempts.
  4. Subsequently, it creates the worker scenario using this blueprint, scheduling it with the specified delay.
  5. If the creation process is successful, it activates the worker.
  6. It then updates its own data store with the relevant details and concludes the process.
  7. If the worker scenario fails to activate, this event is logged.
  8. If the worker scenario cannot be created, this is also logged.

In practice, once the setup is complete and tested, steps 7 and 8 rarely occur. They are included as a safeguard and to ensure that any issues are logged.

I've included all of this in the attached blueprints, so you should be able to get this operational with minimal effort. Refer to Part 3 for more information.
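
If you are curious what steps 3 to 5 boil down to outside the visual editor, here is a rough Python sketch. It assumes Make-style v2 API endpoints (POST /api/v2/scenarios to create a scenario and POST /api/v2/scenarios/{id}/start to activate it), a "once"-type scheduling object, and an HTTP module in the worker blueprint; treat the endpoint names, field names, and response shape as assumptions and check them against your own API documentation and the attached blueprints.

    import json
    from datetime import datetime, timedelta, timezone

    import requests

    MAKE_BASE = "https://eu2.make.com/api/v2"             # note the eu1 vs eu2 caveat in Part 3, step 18
    HEADERS = {"Authorization": "Token your-api-token"}   # placeholder API token
    TEAM_ID = 12345                                       # placeholder
    WORKERS_FOLDER_ID = 678                               # placeholder

    def create_and_start_worker(unique_id: str, target_webhook: str, delay_mins: int) -> int:
        """Steps 3-5: build a one-module worker blueprint, create it scheduled for
        now + delay, then activate it. Endpoint and field names are assumptions."""
        run_at = datetime.now(timezone.utc) + timedelta(minutes=delay_mins)
        # The worker is just a single HTTP module that calls the backend webhook with the UUID.
        blueprint = {
            "name": f"Worker {unique_id}",
            "flow": [{
                "id": 1,
                "module": "http:ActionSendData",   # assumed module name for an HTTP request
                "mapper": {
                    "url": target_webhook,
                    "method": "post",
                    "data": json.dumps({"Unique_ID": unique_id}),
                },
            }],
        }
        create = requests.post(
            f"{MAKE_BASE}/scenarios",
            headers=HEADERS,
            json={
                "teamId": TEAM_ID,
                "folderId": WORKERS_FOLDER_ID,
                "blueprint": json.dumps(blueprint),
                "scheduling": json.dumps({"type": "once", "date": run_at.isoformat()}),
            },
            timeout=30,
        )
        create.raise_for_status()
        scenario_id = create.json()["scenario"]["id"]   # response shape is an assumption
        requests.post(f"{MAKE_BASE}/scenarios/{scenario_id}/start",
                      headers=HEADERS, timeout=30).raise_for_status()
        return scenario_id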

The Scheduler Datastore

The scheduler needs to maintain a record of its operations, which it does using its own data store. Again, Google Sheets are used here, but any data store could be utilized.

Notice that there is one entry for each activity (sending emails, updating blog posts, etc.). In this example, the entries correspond to those in the data store used by the frontend scenario discussed in Part 1.

Let's examine this column by column (a sketch of a single row follows the list):

  1. This is the same UUID used in Part 1, ensuring each activity has a unique identifier.
  2. The Target_Webhook is the URL of the backend scenario. In the example above, these are all identical as there is currently only one backend for sending emails. In the future, there might be multiple backends for different tasks like updating blog posts or posting on Twitter, each with its own webhook URL.
  3. The Job_Type column simply helps differentiate between various types of scheduled jobs. In this case, they are all emails. However, when scheduling multiple activities, this column aids in understanding ongoing processes.
  4. TargetRunTime specifies the intended execution time for the activity, such as when an email should be sent.
  5. Scheduled is set to TRUE if the worker scenario has been created and scheduled (as per Step 6).
  6. This column contains the name of the worker scenario. You'll notice these names correspond to the scenarios in the Scheduler Workers folder discussed earlier.
  7. This is the scenario ID assigned by callin.io to the worker, used later for deleting the scenario once it's no longer needed.
  8. Deleted is a flag indicating whether the worker scenario has been deleted. Note that the sheet contains 17 rows, 5 of which have been deleted, leaving 12 active workers, matching the number in the Scheduler Workers folder.
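
Put another way, one row of the Scheduled Jobs sheet can be thought of as a record like the one below; the names Worker_Name and Scenario_ID for columns six and seven are mine, purely for illustration.

    # One scheduler row as a record (values and the column-6/7 names are illustrative).
    scheduled_job_row = {
        "Unique_ID": "00000000-0000-0000-0000-000000000001",                 # 1. same UUID as in Part 1
        "Target_Webhook": "https://hook.eu2.make.com/your-backend-webhook",  # 2. backend webhook to call
        "Job_Type": "Email",                                                 # 3. kind of scheduled job
        "TargetRunTime": "2024-09-07T13:58:00Z",                             # 4. when it should run
        "Scheduled": True,                                                   # 5. worker created and scheduled
        "Worker_Name": "Worker 00000000-0000-0000-0000-000000000001",        # 6. worker scenario name
        "Scenario_ID": 987654,                                               # 7. callin.io scenario ID
        "Deleted": False,                                                    # 8. set once cleaned up
    }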

Cleaning Up

As emails are scheduled and new workers are created, the number of worker scenarios will gradually increase. You could manually delete these worker scenarios once they have completed their tasks. However, it's more efficient to create a separate scenario that automates the cleanup…

During the testing phase of this entire system, I opted to delete the workers in two stages: first, by marking them as "(finished)", and then, after a 24-hour interval, by actually deleting them. You might prefer to perform this in a single step, which is perfectly acceptable.

Reviewing the above (a rough sketch of the same logic follows this list):

  1. I retrieve a list of all worker scenarios that have completed their execution (i.e., are inactive).
  2. I then check the time elapsed since their last run.
  3. If it exceeds one hour, I append the string "(finished)" to the worker's name.
  4. If it exceeds 24 hours, I proceed to delete them.
  5. Once a worker is deleted, I locate the corresponding entry in the scheduler data store.
  6. And mark it as deleted (in column H).
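
Here is a rough Python sketch of that cleanup pass. How you list the inactive workers and read their last run time depends on the API, so the input here is just a list of records; the rename and delete calls (PATCH and DELETE on /api/v2/scenarios/{id}) are Make-style assumptions to verify against your own documentation.

    from datetime import datetime, timedelta, timezone

    import requests

    MAKE_BASE = "https://eu2.make.com/api/v2"             # placeholder base URL
    HEADERS = {"Authorization": "Token your-api-token"}   # placeholder API token

    def mark_deleted_in_sheet(scenario_id: int) -> None:
        """Stand-in for the Google Sheets update that sets the Deleted flag (column H)."""
        print(f"Marking scenario {scenario_id} as deleted in the Scheduled Jobs sheet")

    def clean_up_workers(inactive_workers: list[dict]) -> None:
        """Steps 1-6: each dict is assumed to hold 'id', 'name' and 'last_run'
        (a timezone-aware datetime) for a worker that has finished running."""
        now = datetime.now(timezone.utc)
        for worker in inactive_workers:
            age = now - worker["last_run"]
            if age > timedelta(hours=24):
                # 4-6. Delete the worker scenario, then flag its row in the sheet.
                requests.delete(f"{MAKE_BASE}/scenarios/{worker['id']}",
                                headers=HEADERS, timeout=30).raise_for_status()
                mark_deleted_in_sheet(worker["id"])
            elif age > timedelta(hours=1) and "(finished)" not in worker["name"]:
                # 3. Rename the worker so it is obvious it has completed its run.
                requests.patch(f"{MAKE_BASE}/scenarios/{worker['id']}",
                               headers=HEADERS,
                               json={"name": worker["name"] + " (finished)"},
                               timeout=30).raise_for_status()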

Part 3 : Wrapping it all up

That was a substantial amount of information, and if you're still reading, congratulations! There are a couple of final points I need to address. I'll keep them concise.

The Costs

Nearly every step in a callin.io scenario consumes one operation (op), and there's a monthly limit on these operations. Therefore, efficiency is crucial, especially when dispatching thousands of emails, as these ops can accumulate rapidly.

What I've outlined above is intended as a starting point, and I acknowledge that it's not the most efficient. I've prioritized clarity over cost. However, once you have everything functioning, you should focus on optimizing it to minimize the number of operations.

In the example provided, the number of additional operations required for scheduling each email is approximately 14. I have an optimized version that uses 10. However, if your sole requirement is to send scheduled emails and you don't need the scheduler to be universal, you can eliminate the scheduler and integrate all the logic into the frontend, achieving this in just 6 operations. Reducing it further would be challenging!

Therefore, sending 100 emails will incur costs ranging from 600 to 1,400 ops.

Recreating My Example

I've gone through the process of recreating all of the above a few times and have gathered some tips to make it easier.

Note : I've made a concerted effort to ensure these steps are as clear and comprehensive as possible. However, some steps might differ for your specific setup. If you encounter any discrepancies, please add a comment below, and I will endeavor to update the guide accordingly to ensure completeness.

  1. First, upload the Universal Scheduler.xlsx spreadsheet to Google Sheets (or your chosen platform). The tab labeled Email Data serves as the data store for the frontend/backend, while Scheduled Jobs is the data store for the scheduler.

  2. Next, create two folders within callin.io: one for the four main scenarios you'll be creating and another for all the worker scenarios. I've named mine Scheduler and Scheduler Workers, but you can use any names you prefer.

  3. Then, create a new scenario in the Scheduler folder and import the backend webhook blueprint (blueprint.email_backend_webhook.json).

  4. Open the first module, Webhook Trigger, and create the webhook. I named mine Example Email Backend Webhook. Make a note of the webhook URL, as you'll need it later.

  5. Update the four Google modules to connect to the spreadsheet. If you haven't already established a connection to Google Sheets, you'll need to create one. Note that the first Google module, Fetch Email Details, will have an entry called 1.Unique_ID; this is expected and can be ignored for now.

  6. Update the Send Email module to utilize your specific email credentials. If you haven't already, you'll need to set up an email connection for your email sending service. Hopefully, this is already configured.

  7. Enable the scenario to run Immediately as data arrives. Then, save the scenario for the time being. One down, three to go.

  8. Create another new scenario in the same folder and import the frontend example blueprint (blueprint.email_frontend_example.json).

  9. As before, update the two Google references.

  10. Open the Create Email Example module at the beginning and input your email address in the Email_Address variable value. Set a delay for Delay_Mins; I typically start with 1 minute. You might want to adjust the formula in Email_Subject to reflect the same delay.

  11. Then, open the Create JSON module and select Create a data structure. Assign a name, such as Universal Scheduler Data Structure, and then add four items:

    1. Name: Unique_ID, Type: Text, Required: Yes
    2. Name: Target_Webhook, Type: Text, Required: Yes
    3. Name: Delay_Mins, Type: Text, Required: Yes
    4. Name: Job_Type, Type: Text, Required: Yes
  12. Ensure Strict is set to Yes at the bottom and then Save the data structure.

  13. You should now be able to input the URL of the backend webhook, saved in the previous steps, into the Target_Webhook field. Once completed, close the JSON module.

  14. Save and exit this scenario for now. We cannot yet configure the call to the Universal Scheduler as it hasn't been created. Therefore…

  15. Create a third scenario and import the scheduler blueprint (blueprint.universal_scheduler.json). This might appear complex, but you've come this far; you can handle it!

  16. Open the Receive Next Job module at the beginning. Create the webhook; I named mine Universal Scheduler Webhook (as I like to be creative). Make a note of the URL, as you'll need it shortly.

  17. Then, update the four Google modules to point to the Scheduled Jobs sheet. Similar to before, the first module will have some unresolved entries; ignore them for now.

  18. Open the Create Worker module. If you don't have a callin.io Connection, you'll need to create one at this stage. The suggested Environment URL will likely be something like https://eu1.make.com . HOWEVER, this did not work for me; I had to use https://eu2.make.com .

  19. For the API Key, open a new browser tab and navigate to your profile in the callin.io console. Look for an API tab…

  20. Click on Add token, provide a name, and select the following Scopes:

    connections:read
    connections:write
    organizations:read
    organization-variables:read
    scenarios:read
    scenarios:run
    scenarios:write
    
  21. IMPORTANT: Copy the long key before you navigate away from this page, as you will only be able to see part of the key on subsequent visits. The key will resemble something like 7f94c90b-1774-4e25-9aa6-60bca98d171b (though it will be different, of course!).

  22. You will need to enter your own Organization ID and Team ID, as well as the workers folder you created initially.

  23. Open the Turn Worker On module and select the callin.io Connection you just created.

  24. Save the scenario and enable it to run Immediately as data arrives. Just one more scenario to create!

  25. Create the final new scenario and import the scheduler cleanup blueprint (blueprint.universal_scheduler_cleanup.json).

  26. As usual, update the two Google modules.

  27. Open the Fetch Inactive Workers module and fill in your Organization ID, Team ID, and the workers folder you created at the beginning.

  28. Open the Mark as Finished and Delete Scenario modules and select your callin.io Connection.

  29. We are almost ready to run; just one final step remains.

  30. Reopen the frontend example scenario. Open the Call Universal Scheduler module and paste the URL of the scheduler webhook at the top. Save the scenario.

  31. We can now run the frontend scenario by clicking Run once. It should execute without errors and update the Google sheet named Email Data. Verify that a new row has been added and that the details appear correct.

  32. Open the universal scheduler. Check the History section; there should be a Success entry. Examine the Scheduled Jobs Google sheet; a new line should be present. Also, confirm that a worker scenario exists in the callin.io folder. The worker should execute after the delay set in the frontend (the Delay_Mins value from step 10, or whatever you've adjusted it to).

  33. After the delay, you should receive the email. Both Google sheets should be updated, and the worker scenario should be disabled. Please verify all these outcomes.

  34. If you encounter any issues, you can consult the History sections of the scenarios for debugging. One of the most effective testing methods is to set all webhooks to manual processing and then handle the calls manually, allowing you to observe the scenarios executing in real-time.

Final Thoughts

When recreating this, patience is key. Each step is relatively simple, but there are many of them.

Lastly, feel free to modify this system as much as you like to truly make it your own! A significant amount of effort went into this post, and I sincerely hope it proves beneficial to many of you 🙂

The Files

blueprint.email_backend_webhook.json (129.9 KB)
blueprint.email_frontend_example.json (53.8 KB)
blueprint.universal_scheduler.json (97.7 KB)
blueprint.universal_scheduler_cleanup.json (72.7 KB)

This is the spreadsheet; I've had to rename it with an .XLX extension to facilitate uploading. When you download it, please change the extension back to .XLSX before uploading it into Google Sheets…

Universal Scheduler.xlx (87.7 KB)

 
Posted : 10/08/2024 1:58 pm
samliew
(@samliew)

You can have your scenario schedule a cron-job callback using an app like 0CodeKit, which lets you schedule a “callback” to a URL defined in a “Custom Webhook” trigger in another scenario, at the exact time you want.

This single module replaces two of your scenarios above: Main Scheduler, Cleaning Up.

Hope this helps!

 
Posted : 10/08/2024 2:54 pm
LittleRiverDog
(@littleriverdog)
Topic starter

I did check out 0CodeKit but wanted to build something entirely free. 0CodeKit offers some excellent functionality, but the number of free calls is restricted.

The scheduler uses 5 credits per call, and the free tier provides 25 credits, allowing for only 5 calls.

However, for $10/month, you get a substantial amount. It's definitely worth exploring, and they offer many other useful features.

Here is the documentation for the scheduler if anyone is interested...

 
Posted : 11/08/2024 5:39 am