AWS CONNECTOR
Sign up with Airtable: https://airtable.com/signup
Specify the table name.
Now the Airtable workspace/base is prepared. Let's configure the OAuth app.
The last step is to configure the clientId and clientSecret in the Appmixer Backoffice for the appmixer:airtable connector.
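The Backoffice stores the connector configuration as plain key/value pairs. As an illustrative sketch (placeholder values, not real credentials), the appmixer:airtable configuration would contain:

```json
{
    "clientId": "<your Airtable OAuth client ID>",
    "clientSecret": "<your Airtable OAuth client secret>"
}
```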
The AWS SNS Connector allows you to send SMS messages using Amazon Simple Notification Service (SNS). However, there are certain limitations and prerequisites that must be met for successful message delivery.
Before using the AWS SNS Connector to send SMS messages, ensure the following:
SNS Sandbox Registration:
If your AWS account is in SNS SMS Sandbox Mode, SMS messages can only be sent to verified phone numbers added in the AWS Console.
To register a phone number, follow the instructions in the AWS SNS documentation.
Region Compatibility:
The SNS SMS service may be region-dependent. Ensure the selected region supports SNS SMS services.
Messages might only be delivered if the selected region matches your AWS account’s region.
Spending Limit:
AWS imposes a default spending limit of $1 USD for SMS.
If the limit is reached, no SMS will be sent.
Check and increase the limit in the AWS SNS console under SMS settings.
IAM Permissions:
To enable SMS delivery logs, you must have the necessary IAM permissions.
Without proper permissions, viewing SMS failures or debugging will not be possible.
Check IAM policies and request permissions if needed.
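As a rough sketch (the exact actions and resources may differ for your setup), an IAM policy that permits reading SNS SMS delivery status logs in CloudWatch could look like this:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:GetLogEvents",
                "logs:FilterLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
```

In production you would typically narrow the Resource to the specific SNS log groups rather than using a wildcard.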
When an SMS is published via SNS, a 200 OK response is returned. However, this only confirms that SNS received the request. It does not confirm message delivery.
To track the status of an SMS:
Enable CloudWatch Logs for SNS SMS (requires permissions).
Use AWS SNS Delivery Status Logs to check failures.
Verify Phone Number:
Check if the recipient's number is added in the AWS SNS SMS sandbox.
Check Spend Limit:
Go to AWS SNS Console → Text Messaging (SMS) → Spend Limit.
Enable Logs for Debugging:
Navigate to AWS CloudWatch Logs and enable SNS delivery status logs.
Ensure IAM permissions allow access to logs.
Confirm Region Settings:
Use a region that supports SNS SMS delivery.
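To make the checks above concrete, here is a minimal sketch of the request SNS receives when an SMS is published, using boto3. The phone number and region are placeholders, and the publish call itself is left commented out since it requires AWS credentials and, in sandbox mode, a verified recipient number:

```python
# Parameters for an SNS SMS publish call. In sandbox mode, PhoneNumber
# must be a number you have verified in the AWS console.
params = {
    "PhoneNumber": "+15551234567",  # placeholder, E.164 format
    "Message": "Hello from Appmixer",
    "MessageAttributes": {
        "AWS.SNS.SMS.SMSType": {
            "DataType": "String",
            "StringValue": "Transactional",  # prioritizes delivery over cost
        }
    },
}

# Uncomment to actually publish (requires boto3 and AWS credentials):
# import boto3
# sns = boto3.client("sns", region_name="us-east-1")  # region must support SMS
# response = sns.publish(**params)
# A 200 OK response only means SNS accepted the request, not that the
# SMS was delivered; check CloudWatch delivery logs for the final status.
```

Note that each successful publish call still counts against the default $1 USD spending limit.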
The AWS SNS Connector itself functions correctly. However, message delivery depends on AWS account settings, sandbox verification, spending limits, region settings, and logging permissions. By following the prerequisites and debugging steps above, users can ensure successful SMS delivery through AWS SNS.
For further assistance, consult AWS support or refer to the official SNS documentation.
Issue: SMS not received — Cause: phone number not verified in the sandbox — Solution: add the number to the SNS sandbox.
Issue: 200 OK but no SMS — Cause: the AWS SNS SMS spend limit is exhausted — Solution: increase the spending limit in the AWS console.
Issue: SMS delivery failure — Cause: region mismatch — Solution: use the correct region for the SMS service.
Issue: unable to view logs — Cause: insufficient IAM permissions — Solution: request permissions to enable CloudWatch logs.
Guides to Configuring OAuth2 Applications
A universal guide outlining the common steps involved in setting up any OAuth2 application, including registering your application, obtaining client credentials, and defining redirect URIs.
In order to offer the Azure Cognitive Services components to your customers, you have to set an Azure key and endpoint through the Backoffice first.
Sign up with Asana: https://app.asana.com
To create an OAuth app with Asana, follow the official documentation: https://developers.asana.com/docs/oauth#register-an-application.
Take note of the application's client ID and the client secret (which should be protected as a password).
In this step you will need to specify the App name and Redirect URL. The App name can be anything you want. The Redirect URL should be the URL where Asana will redirect the user after they authorize your app. This depends on your environment, usually in the following format: https://api.qa.appmixer.com/auth/asana/callback
The last step is to configure the clientId and clientSecret in the Appmixer Backoffice for the appmixer:asana connector.
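As with the other connectors, the Backoffice configuration is a pair of keys; an illustrative sketch with placeholder values for appmixer:asana:

```json
{
    "clientId": "<your Asana OAuth client ID>",
    "clientSecret": "<your Asana OAuth client secret>"
}
```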
Cloudflare List Integrations allow you to manage IP lists efficiently. You can add or remove IPs from Cloudflare IP Lists and specify the TTL (Time To Live), which automatically removes the IP after a set time period.
You can access custom lists in your Cloudflare account by navigating to Manage Account > Configurations > Lists. The Cloudflare Lists connector supports only IP lists.
To authenticate the Cloudflare List connector, a Global API Key is required. To obtain this key, log in to your Cloudflare account and navigate to My Profile > API Tokens. Click on "View" to get the token:
In the Cloudflare connector authentication window, use your Cloudflare account email and the Global API Key to log in.
Account | Rule Policies | Edit
Account | Account Settings | Edit
Zone | Zone WAF | Edit
You can find all the necessary information on how to register an OAuth app in Jira in their official guide.
Access the Atlassian Developer Console and create a new app. Click Create and select OAuth 2.0 integration.
Click on the "Register an app" button.
When you create an OAuth 2.0 (3LO) app, it's private by default. This means that only you can install and use it. If you want to distribute your app to other users, you must enable sharing.
Please follow the instructions in the official guide to set up distribution.
You need to specify the scopes that your app will use. For Jira, you will need to select the following scopes:
read:me
read:jira-work
write:jira-work
manage:jira-project
manage:jira-configuration
read:jira-user
report:personal-data
This is how the permissions should look:
After you have created the app, you will see the overview of the app. Here you can find the client ID and client secret.
The last step is to configure the clientId and clientSecret in the Appmixer Backoffice for the appmixer:jira connector.
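As an illustrative sketch (placeholder values taken from the app overview page), the appmixer:jira configuration would contain:

```json
{
    "clientId": "<your Jira OAuth 2.0 (3LO) client ID>",
    "clientSecret": "<your Jira OAuth 2.0 (3LO) client secret>"
}
```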
In 2020, Google introduced application verification for apps that use Google APIs to access Google users' data. This makes it impossible to use the Appmixer Google modules in production without the verification process. This verification has to be done by you.
Most of the Google modules need what Google marks as Sensitive or Restricted Scope in order to work. For example, the Gmail module needs https://www.googleapis.com/auth/gmail.compose scope to create a new email and https://www.googleapis.com/auth/gmail.readonly to get new emails.
Originally, the Gmail module was the only one affected by this new Google policy, but new scopes keep being marked as restricted. Later, Google shut down the older Spreadsheets API, and with the new API, the only way to get a list of a user's spreadsheets is to use the Drive API and the restricted scope https://www.googleapis.com/auth/drive.metadata.readonly.
The last (up to now) module affected by this policy is the Drive module.
Go to the Google developer console at https://console.developers.google.com/ and create a new project.
When the project is created, add some APIs.
Add the Gmail API.
The Google Sheets API.
The Google Drive API.
The Google Calendar API.
The Analytics API.
The next step is the OAuth consent screen.
User Type - Internal vs External. The Internal user type allows you to use the Google modules without the app verification process mentioned at the beginning of this page. However, the modules will work only for accounts within your organization.
We are going to choose External for the purpose of this tutorial.
On the next page, leave the scopes empty.
You can add test users, but you can do this later. Only users added as test users will be able to authenticate!
Here is the OAuth consent screen configured. The app is in testing mode and it is external (for users outside your organization). No more than 100 accounts (the user cap) can be used with this application.
We need a client ID and a client secret.
Choose the Web application type and give it a name and authorize redirect URI.
For the purpose of this tutorial, we are going to use localhost:2200, which is the default port for the Appmixer API when running on localhost. If you are creating an OAuth application for your production/QA environment, the URI of your server goes here. The suffix /auth/google/callback remains the same.
You will get your Client ID and Client Secret.
They have to be inserted into Appmixer. You can use the Backoffice to do that.
By default, the callback URL passed to Google is in the form of <your-api-url>/auth/appmixer:google/callback. However, in some cases you may need to pass a different callback URL. You can do this by inserting a callbackUrl value in the service configuration:
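For illustration (the domain is a placeholder, and your credentials come from the Google developer console), the appmixer:google service configuration could contain:

```json
{
    "clientId": "<your Google OAuth client ID>",
    "clientSecret": "<your Google OAuth client secret>",
    "callbackUrl": "https://your-custom-domain.com/auth/appmixer:google/callback"
}
```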
In order to use Google API Webhooks, you have to verify your domain ownership. More about domain verification can be found here. If you use the Appmixer Self-Managed package and you run Appmixer on your own servers using your custom domain, you can use CNAME records to verify your domain ownership. Otherwise, if you are an Appmixer Hosted customer, you can use the HTML file method to verify your Appmixer tenant domain (api.YOUR_TENANT.appmixer.cloud) at Google.
When you use a Google Webhook component without a verified domain, you will receive the Unauthorized WebHook callback channel error.
First, open the Google developers console and your Appmixer project and follow the instructions in the next picture.
Add your Appmixer tenant API URL (api.YOUR_TENANT.appmixer.cloud) and continue to verify the domain ownership. Click 'Take me there' and then 'Add a property'.
Again, use api.YOUR_TENANT.appmixer.cloud. Then download the HTML verification file.
After you download your HTML verification file, upload it via the Appmixer Backoffice interface to the Public Files section on the left. When you are done you should see your file listed:
Click the 'Verify' button to finish the verification process.
Now, you can go back to the developer console and add the api.YOUR_TENANT.appmixer.cloud domain once more. This time, because it's already verified, it will appear in the list of your domains.
The new domain has to be added to the OAuth consent screen as well.
Visit https://developers.facebook.com/ and create a new OAuth application (or use your existing one).
Add the Facebook Login for Business and Marketing API products.
Copy the App ID and App Secret and paste them into your Appmixer Backoffice -> Connector Configuration page under the appmixer:facebookbusiness configuration ID as clientId and clientSecret, respectively:
Add your Appmixer tenant domains as your App domains:
Note: replace YOUR_TENANT with the name of your hosted tenant. If you're a customer on an Appmixer Self-Managed plan, you can run Appmixer on completely custom domains; use your own domains instead.
Set the OAuth redirect URI under Facebook Login for Business / Settings to https://api.YOUR_TENANT.appmixer.cloud/auth/facebookbusiness/callback:
The first thing you need is an app created in the developer portal. Navigate to https://developer.blackboard.com and sign up for an account. When you are logged in, you can see a dashboard with a list of your applications and a button to create a new one:
Note that listed applications have an Application ID. We will need it when we register our application in the Blackboard instance. Click on the + button to create a new application. A simple form appears with Application name, description and domains inputs. After you fill in the fields, create the new application.
Log in with admin privileges, and go to the System Admin tab in the top menu. After that, look under Integrations for REST API Integrations:
Next, you will see a list with all your API Integrations. Click on Create Integration to add a new one:
After that, a small form will appear. One of the inputs is the Application ID which, as we showed in the first step, is available in the applications list at the developer portal. Next, browse and select a user. We recommend using the same user you are logged in with. For End User Access and Authorized To Act As User, select Yes in both options. After that, click on Submit to finish the registration.
Finally, we need to set the OAuth clientId and clientSecret in the Backoffice to finish the integration. To obtain these values, go to the applications list in the developer portal. Find your application and click on the 3 dots at the end of the row. Then click on the Manage Keys option:
After that, you will see a list of your application keys. Click on the + sign to create a new key. Your key will be created and its information will be shown to you:
Be sure to store this information somewhere safe, as you won't be able to get it again.
Next, log into the Backoffice and go to Services. Create a configuration for appmixer:blackboard. Finally, add your Application ID as clientId and the Secret as clientSecret. Also set serverUrl to your instance URL:
And you are done! You are ready to use the Blackboard module.
Go to the Apps and Keys page (you can also get there from the Docs)
Add a new app:
Get the Client ID and Client Secret and set the redirect URI:
clientId: Integration Key
clientSecret: Secret Key
Set the Redirect URI to https://[YOUR_API_BASE]/auth/docusign/callback, for example https://api.appmixer.com/auth/docusign/callback
Then set the clientId and clientSecret in the service configuration in the Backoffice.
Configure the Dropbox connector.
Open the Dropbox App Console.
Log in with your Dropbox account credentials.
Set the Redirect URI to https://[YOUR_API_BASE]/auth/dropbox/callback, for example https://api.appmixer.com/auth/dropbox/callback.
To allow additional users, click on "Enable Additional Users".
Configure the App Key as clientId and the App Secret as clientSecret in the service settings within the Backoffice, under the configuration ID appmixer:dropbox.
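As an illustrative sketch (placeholder values from the Dropbox App Console), the appmixer:dropbox configuration would contain:

```json
{
    "clientId": "<your Dropbox App Key>",
    "clientSecret": "<your Dropbox App Secret>"
}
```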
To learn more about configuring connectors, please visit the Connector Configuration section.
Ensure that the following permissions are selected:
account_info.read
files.content.read
files.content.write
files.metadata.read
To authenticate the Imperva connector, an API Key is required. To obtain this key, log in to your Imperva account and navigate to Account > My Profile > API Keys. Click on "Add API Key" to generate a new key. Enter a name (and optionally other parameters) for the API Key and click on "Create".
Take note of the API ID and API Key. You will need these to authenticate the connector.
You can also view the API Key details in the API Keys section.
In the Imperva connector authentication window, use your Imperva API ID and API Key to log in.
There are three optional configuration parameters for the SetBlockIPRule action of the Imperva connector:
blockIpMaxIpsPerRule (optional, default 20) — The maximum number of IP addresses that can be added to a block rule.
blockIpMaxParallelRequests (optional, default 5) — The maximum number of parallel requests that can be made to block IP addresses.
blockIpMaxRulesInImperva (optional, default 500) — The maximum number of custom rules that can be added to Imperva. See https://docs.imperva.com/bundle/cloud-application-security/page/rules/create-rule.htm
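Put together as a sketch (all three keys shown with their default values), the appmixer:imperva configuration in the Backoffice could look like this:

```json
{
    "blockIpMaxIpsPerRule": 20,
    "blockIpMaxParallelRequests": 5,
    "blockIpMaxRulesInImperva": 500
}
```

Since these are the defaults, you only need to set a key when you want to override it.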
https://www.activecampaign.com/
Sign in to your ActiveCampaign account. Once you are logged in, locate the "Settings" option, which can be found in the bottom left corner of the homepage.
Within the settings page, you will see a section labeled 'Developer'. Click on this section to navigate to the Developer page. On the 'Developer' page, you will see your API URL and your API key.
Use the API key to authenticate ActiveCampaign components in the Appmixer Designer.
Unfortunately, since August 20, 2018 Highrise no longer accepts signups.
We keep the module in Appmixer for older customers who have their own OAuth2 application. If you are a new Appmixer customer, you won't be able to create the OAuth2 application and therefore can't use the Highrise module. In that case, you should remove it from the platform.
Requests to the Highrise API contain headers with a User-Agent property. This is set to Appmixer by default, but it can be overwritten using the Backoffice:
Sign up with Freshdesk: https://www.freshdesk.com
Navigate to Profile Settings -> API. Copy the API token on the right side. If you don't see the API key, please refer to the Freshdesk documentation for more information.
Use the copied token as the API key along with the Domain when connecting Freshdesk components in Appmixer. In this example the domain is acme-help and the API key is the copied token.
Follow the next link:
You need to register one more URL in order to use the OneDrive Javascript file picker:
And select both of these options:
And save the changes:
The next step is creating clientId and clientSecret:
Unfortunately, the secret has an expiration date; you will have to create a new one in 2 years:
Use the furthest date possible:
You will get a new secret, copy the Value into the clipboard:
This Value is the clientSecret (not to be confused with the Secret ID). These secrets need to be added into the Backoffice:
And add new keys:
The clientId is the Application (client) ID from the main page:
Then go to the API permissions section:
Choose Microsoft Graph:
Delegated permissions:
Type-in Files:
And select these two:
Then type-in offline and select this permission:
The final set of permissions:
https://linkedin.com
You need to start by creating a new app on the LinkedIn Developer platform. Visit https://developer.linkedin.com/ and log into your account. Once logged in, navigate to "MyApps", then click on "Create App". Fill in all necessary information like app name, app logo, app description, app privacy policy URL etc. You also need to fill the "Associated LinkedIn page". This is the LinkedIn page that will be associated with the app. If you don't have a LinkedIn page, you can create one by clicking on the "Create a LinkedIn Page" link.
Navigate to the "Auth" section where you will find the OAuth 2.0 settings. Here you can set the "Authorized Redirect URLs". Take note of the application's client ID and the client secret (which should be protected as a password).
You will need to specify the Redirect URL in this step. The Redirect URL should be the URL where LinkedIn will redirect the user after they authorize your app. This depends on your environment, usually in the following format: https://api.ACME.com/auth/linkedin/callback
Configure scopes: Navigate to the "Products" section, request access to "Share on LinkedIn" and "Sign In with LinkedIn using OpenID Connect", once approved (it's instant approval), you will be able to see them on the top of the screen:
Set the clientId and clientSecret in the Appmixer Backoffice for appmixer:linkedin.
Verification enables additional scopes to be used in the LinkedIn Appmixer connector: "Advertising API", "Lead Sync API", "Conversions API" and "LinkedIn Ad Library". These scopes are not used in the current version of the LinkedIn connector, therefore verification is not required. You can still perform the verification by sending the verification link generated in the "Settings" tab to the Super Admin of the page associated with the App.
Process estimated time: instant approval.
Configure the Jotform connector.
Go to the JotForm website.
Log in with your JotForm account credentials.
Navigate to Account Settings: click on your avatar/initials in the top right corner of the page and choose "Settings" from the dropdown menu, or go directly to https://jotform.com/myaccount/api.
Here you might see a list of existing API keys if you've created any before. To generate a new one, click the "Create New Key" button.
In the authentication pop-up enter your API key.
Sign up with Mailchimp: https://mailchimp.com/
Now you need to add an OAuth application. Navigate to: Profile > Extras > Registered apps.
Click on the "Register an app" button.
Now save the OAuth client and copy the client ID and secret. You will need them later.
The last step is to configure the clientId, clientSecret and subdomain in the Appmixer Backoffice for the appmixer:mailchimp connector.
Sign up with Monday: https://auth.monday.com/users/sign_up_new
Navigate to your Monday Developer Center -> My access tokens. Create a new token and copy it.
Use the copied token as the apiKey when connecting Monday components in Appmixer.
Register an OAuth2 application in a similar way as a usual Microsoft Azure App, with a few changes:
you don't need to configure OneDrive file picker
configure API permissions for Dynamics 365, see below
Go to the API permissions section:
Choose Dynamics CRM:
Delegated permissions:
The final set of permissions:
Creating CRM Dataverse environment
In order to use the Microsoft Dynamics 365 CRM connector, you'll need a Dataverse environment alongside your Microsoft account. You can review your environments in PowerPlatform administration. If you don't have any environment there, you can start with a trial. From the list of available apps, select Dynamics 365 Sales. Once you complete the sign-up process, the CRM environment will be created.
Once created you can view details of the environment and see Environment URL address which you will need when connecting a Microsoft Dynamics 365 CRM component to your account.
Note that in order to interact with Lead entities, you'll need to install the Sales module into your Dataverse environment. Some entities like Contact or Account come with a Dataverse environment by default, but that is not the case for the Lead entity.
Additional configuration
The Microsoft Dynamics 365 CRM connector has additional settings when it comes to caching the Dataverse environment schema and data. These settings can be configured in the Backoffice the same way as clientId or clientSecret:
listLookupOptionsCacheTTL — Cache TTL for inspector variables in milliseconds. Default value: 1 minute. Determines how long the values of various Lookup fields are stored in the Appmixer cache. This affects values fetched into select boxes in the inspector for fields like Currency, Contact, etc.
listOutportCacheTTL — Cache TTL for outports in milliseconds. Default: 1 hour. Determines how long Appmixer caches the structure of the Dataverse instance (environment) for output ports. If you add a new table or change an existing one (e.g. by changing a column type), this is the time you need to wait before the changes are propagated into the components' outPort schema.
listInspectorCacheTTL — Cache TTL for the inspector in milliseconds. Default: 1 hour. Similar to listOutportCacheTTL. Determines how long Appmixer caches the structure of the Dataverse instance (environment) for the inspector. If you add a new table or change an existing one (e.g. by changing a column type), this is the time you need to wait before the changes are propagated into the component's inspector.
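As a sketch, the three TTL keys with their default values expressed in milliseconds (1 minute = 60000 ms, 1 hour = 3600000 ms) would look like this in the Backoffice configuration:

```json
{
    "listLookupOptionsCacheTTL": 60000,
    "listOutportCacheTTL": 3600000,
    "listInspectorCacheTTL": 3600000
}
```

Lower these values while developing if you change Dataverse tables often and want the changes reflected sooner.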
The YouTube API.
Sign up with Pipedrive: https://www.pipedrive.com
Navigate to My Account -> Personal Preferences -> API. Copy the API token.
Use the copied token as the apiKey when connecting Pipedrive components in Appmixer.
https://platform.openai.com
OpenAI Account: The first thing you need to do is have an OpenAI account. You can sign up for free.
After signing up, go to https://platform.openai.com/api-keys.
Create a new API key: you will find a button or link labeled "Create a new key" or "Generate API keys". Click on it.
Keep this key safe and secure. It should not be shared with anyone or exposed in client-side code, as it could be used to misuse your OpenAI services.
Use the API key to authenticate OpenAI components in the Appmixer Designer.
You will need to create a new Redmine installation. Go to https://www.redmine.org/projects/redmine/wiki/RedmineInstall and follow the instructions. Alternatively, you can use a Docker image: https://hub.docker.com/_/redmine or a hosted solution.
Once you have your Redmine application make sure you have the REST API enabled. Go to Administration -> Settings -> API and check the "Enable REST web service" option.
After that, you will need to create a new API key. Go to Administration -> Users -> Edit user and create a new API key.
You will enter this API key and also the URL of your Redmine installation when connecting each Redmine component.
You need to create a developer account first (more about the different HubSpot account types can be found at https://developers.hubspot.com/docs/api/account-types).
Create a new application.
In order to get rid of a warning that your users will see before installing the HubSpot module, you have to verify your identity as an app provider.
You can manage the Contact & Support settings.
Or skip for now.
Set the application name, logo, and description.
Go to the Auth tab and copy&paste these values into the Backoffice.
Create Hubspot configuration in the Backoffice.
Go back to Hubspot and set the redirect URL, replace acme.com with your domain and keep the suffix /auth/hubspot/callback
Adding scopes.
Hit Save.
One more thing you will need is the HubSpot API key: go back to the Apps page and hit the Get HubSpot API key button.
Then copy&paste the generated key and put it into the Backoffice.
Your domain is unverified. Until your domain is set and verified, users will see a warning when installing your app.
Users will encounter this warning during the installation or management of the app. To eliminate this warning, the verification process must be completed. HubSpot mandates the configuration of a DNS TXT record as the sole method for verification.
To configure and verify a custom HubSpot app for a hosted instance of Appmixer, such as ACME.appmixer.cloud, you will need to establish a custom domain for the Appmixer instance (for example, ACME.com). Without this custom domain, domain verification will not be possible.
First, you need to set a QuickBooks key and endpoint through the Backoffice in order to provide the QuickBooks components to your customers. Go to https://developer.intuit.com/app/developer/dashboard to create a new app.
Select the name and scope.
Then you will receive your clientId and clientSecret. Also add the redirect URL (https://api.appmixer.com/auth/quickbooks/callback).
Go to the Appmixer Backoffice -> Configuration. Add a new configuration: appmixer:quickbooks. Add your clientId and clientSecret keys.
All trigger components (for example NewInvoice) use QuickBooks webhooks. In order to use these components in Appmixer, you have to register Appmixer's QuickBooks URI in QuickBooks first and set the webhook key in the Appmixer Backoffice.
Go to https://developer.intuit.com/app/developer/dashboard then choose your Appmixer app. Navigate to the "Webhooks" section either for the sandbox or production environment and add a new webhook.
Fill the Endpoint URL field with https://api.acme.appmixer.cloud/plugins/appmixer/quickbooks/webhooks (replace acme with your subdomain) and select the events you want to listen to.
For detailed steps, see QuickBooks documentation.
Make sure to copy the Verifier Token and paste it into the Appmixer Backoffice -> Configuration -> appmixer:quickbooks -> webhookVerifierToken field.
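Taken together, the appmixer:quickbooks configuration in the Backoffice ends up with three keys; an illustrative sketch with placeholder values:

```json
{
    "clientId": "<your QuickBooks client ID>",
    "clientSecret": "<your QuickBooks client secret>",
    "webhookVerifierToken": "<verifier token from the QuickBooks webhook settings>"
}
```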
To integrate Appmixer with Salesforce, you are going to need an app in your Salesforce account. You can either create a new one or use one that already exists.
To create an app, you have to log into the Salesforce Developer portal. Once logged in, in the left menu, look for Platform Tools -> Apps -> App Manager. This will show you a list of your existing apps and provide the option to create a new one via New Connected App on the top right side:
In the next screen, you will be requested to fill out some basic data about your new app. Look out for the API (Enable OAuth Settings) section. Tick the Enable OAuth Settings checkbox, and fill out the callback URL and scopes as shown in this screenshot:
Check the Require Secret for Web Server Flow, Require Secret for Refresh Token Flow and Enable Authorization Code and Credentials Flow checkboxes.
With your app ready, you can get the client ID and secret required to integrate Salesforce into Appmixer. Go to the App Manager and search for your app in the list. In the dropdown on the right side, click the View option.
In the next screen, look for the API (Enable OAuth Settings) section and click on the Manage Consumer Details button. You should now see your client ID (shown as Consumer Key) and secret (shown as Consumer Secret):
Now we can provide these values to Appmixer. Log into the Backoffice and go to the Configuration section. Create a configuration service for Salesforce as follows:
Finally, set the clientId (Consumer Key in Salesforce) and the clientSecret (Consumer Secret in Salesforce) like this:
The Salesforce OAuth authentication offers several behaviors when users try to authenticate. This is the summary of the available options (taken from the Salesforce OAuth docs):
login — The authorization server must prompt the user for reauthentication, forcing the user to log in again.
consent — The authorization server must prompt the user for reapproval before returning information to the client.
select_account — If present, take one of the following actions.
If zero or one hint is available and the user is logged in, show the approval page without prompting for login.
If zero or one hint is available and the user isn’t logged in, prompt for login.
If more than one hint is available, show the account chooser.
By default, we use the login option, but you can configure it through the Backoffice.
Log into the Backoffice and go to the Services module. Then add a new service like this:
Add a promptType key and set it to one of the values supported by Salesforce:
The default login URL is https://login.salesforce.com; this can be changed with a baseUrl key.
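Putting the pieces together, an illustrative appmixer:salesforce configuration might look like this (placeholder credentials; the baseUrl shown is the Salesforce sandbox login URL, as an example of overriding the default):

```json
{
    "clientId": "<Consumer Key from Salesforce>",
    "clientSecret": "<Consumer Secret from Salesforce>",
    "promptType": "consent",
    "baseUrl": "https://test.salesforce.com"
}
```

Omit promptType and baseUrl to keep the defaults (login and https://login.salesforce.com).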
Log in to your Schoology server. Look at the top navigation bar and go to Tools > School Management.
In the new screen, click on Integrations in the left menu. Now click on the API tab. You should see your consumer key and secret. The secret is hidden, so you have to use the Reveal Existing Secret option.
Log into the Backoffice and go to the Services section. Add a new service configuration named appmixer:schoology
. After it is created, add the following configuration values:
Here is the explanation for each value:
baseUrl: The API base URL. It is usually https://api.schoology.com/v1, but there are some cases where it could be different.
consumerKey and consumerSecret: The consumer key and secret we got from the Schoology server.
domain: The subdomain part of your server URL. For example, if your server URL is https://example.schoology.com, then the domain value is just example.
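The values above can be sketched as the following appmixer:schoology configuration (placeholder key and secret, with the example domain from the text):

```json
{
    "baseUrl": "https://api.schoology.com/v1",
    "consumerKey": "<consumer key from the Schoology server>",
    "consumerSecret": "<consumer secret from the Schoology server>",
    "domain": "example"
}
```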
Appmixer offers a built-in component for taking Screenshots.
It uses an external screenshot API. If you want to have this module on the platform, you have to create your own account with the screenshot service and set the API token in the Backoffice.
Since version 4.2.1 you can configure the string that appears on the Trello OAuth page. If you don't do this, the OAuth page will show Appmixer by default.
Configure the Shopify connector.
Sign Up or Log In: If you haven't already, go to the Shopify Partner Dashboard and sign up for an account. If you have an account, just log in.
Create a New App: Once logged in, navigate to the "Apps" section using the dashboard menu. Click on "Create app" to get started, then select the "Create app manually" option.
Note the clientId and clientSecret.
Next, click on "Choose distribution" and select the "Public distribution" option.
Go to the "Configuration" section from the main menu.
Set the Redirect URI to https://[YOUR_API_BASE]/auth/shopify/callback, for example https://api.appmixer.com/auth/shopify/callback.
Set the event version to the latest (2024-01).
Embed app in Shopify admin: false
Set the compliance webhooks (required for the submission):
Customer data request endpoint: https://[YOUR_API_BASE]/plugins/appmixer/shopify/customers/data_request
Customer data erasure endpoint: https://[YOUR_API_BASE]/plugins/appmixer/shopify/customers/redact
Shop data erasure endpoint: https://[YOUR_API_BASE]/plugins/appmixer/shopify/shop/redact
Go to the "API Access" section from the main menu. In the "Access requests" area, request access for "Protected customer data access". Fill in the details in the "Select your data use and reasons" section:
Fill in the form in the "Provide your data protection details" section:
Configure the App Key as clientId and the App Secret as clientSecret in the service settings within the Backoffice, under the configuration ID appmixer:shopify. Also, add the appStoreInstallRedirectUri pointing to your instance, for example https://my.master-hamster-87765.appmixer.cloud/.
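Put together, the appmixer:shopify service configuration in the Backoffice could look roughly like this (all values are placeholders; the key names follow this guide):

```json
{
    "clientId": "YOUR_SHOPIFY_CLIENT_ID",
    "clientSecret": "YOUR_SHOPIFY_CLIENT_SECRET",
    "appStoreInstallRedirectUri": "https://my.master-hamster-87765.appmixer.cloud/"
}
```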
Please note that only a 'developer' shop instance can be authenticated with your OAuth application unless your app is verified. 'Production' shop instances cannot authenticate with an unverified app. This guide does not cover all the requirements for the app verification process.
App admin URL:
Once the app admin account is created and verified, you need to create an instance.
This can take a few minutes to complete.
To configure webhooks in ServiceNow to receive notifications when a new record is created, follow these steps. This involves creating an Outbound REST Message, setting up a Business Rule to capture record creation, and configuring the webhook.
Navigate to System Web Services > Outbound > REST Message.
Click on the New button to create a new REST Message.
Fill in the fields such as:
Name: e.g., AppmixerNotifications.
Click Submit and then open the newly created REST Message.
Click on the HTTP Methods tab and create a new HTTP Method.
Fill in the details such as:
Name: e.g., events.
HTTP Method: POST.
Endpoint: Ensure this is filled with your webhook listener’s URL.
Optionally, add default HTTP Request Headers (e.g., Content-Type) and customize the Request Body or Query Parameters if needed.
Save the HTTP Method.
Navigate to the table for which you want to capture new record creation. For example, if it’s the Incident table, go to Incident > All.
Right-click on the form header and select Configure > Business Rules.
Alternatively, you can navigate to Activity Subscriptions > Administration > Business Rules.
Click on the New button to create a new Business Rule.
Set the fields as follows:
Name: e.g., Incident Rules.
Table: Select the appropriate table (e.g., Incident).
When: Insert.
Under Advanced, check the box Advanced to open the script editor.
In the Script section, add the following code:
Note that AppmixerNotifications should be the same as the name of the Outbound REST Message created in the previous step.
The code above handles only new records in the Incident table. If you want to track new items in other tables, create additional Business Rules and send the notifications to the Appmixer endpoint, where the type of the event uses a format like '<table>.insert':
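The Business Rule script itself is not reproduced here; as a rough sketch (not the exact code shipped with this guide), a server-side script forwarding new records through the Outbound REST Message could look like the following. The payload fields are illustrative:

```javascript
// ServiceNow Business Rule script (server side, runs on Insert).
// Forwards the newly created record to the 'AppmixerNotifications'
// Outbound REST Message configured in the previous step.
(function executeRule(current, previous /* null when async */) {
    try {
        var r = new sn_ws.RESTMessageV2('AppmixerNotifications', 'events');
        r.setRequestHeader('Content-Type', 'application/json');
        r.setRequestBody(JSON.stringify({
            type: 'incident.insert',                 // '<table>.insert'
            sys_id: current.getUniqueValue(),
            short_description: current.getValue('short_description')
        }));
        r.executeAsync();
    } catch (ex) {
        gs.error('Appmixer notification failed: ' + ex.message);
    }
})(current, previous);
```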
To learn more about configuring connectors, please visit the Connector Configuration section.
Now your developer instance is ready and accessible. Note that when creating a new account, your instance ID (dev180380 in this case) will be different.
Endpoint: https://<YOUR_API_BASE>/plugins/appmixer/servicenow/events, for example https://api.appmixer.com/plugins/appmixer/servicenow/events.
You'll need a Snowflake account. If you don't have one, you can sign up for a free trial on the Snowflake website.
You will need the following details to connect to your Snowflake database:
Account Name: This is often part of your Snowflake URL. You can find it in the Snowflake web interface after logging into your account.
Username: The username of the Snowflake user account you wish to use.
Password: The password for that Snowflake user.
Database Name: The name of the database you want to connect to.
Warehouse: The name of the virtual warehouse to use for running queries.
Account:
Your account identifier is usually part of your Snowflake URL.
When you log into Snowflake, the URL looks like https://abc12345.us-east-1.snowflakecomputing.com. Here, abc12345.us-east-1 is your account identifier.
Username and Password:
The credentials you use to log into the Snowflake web interface.
Database and Warehouse:
After logging into Snowflake, navigate to the Databases and Warehouses tabs to see available options.
Choose the appropriate database and warehouse for your needs.
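As a small illustration of the account-identifier rule above, a helper like the following (the function name is our own, not part of any SDK) extracts the identifier from a Snowflake URL:

```javascript
// Extract the Snowflake account identifier from the login URL.
// For 'https://abc12345.us-east-1.snowflakecomputing.com' the
// identifier is 'abc12345.us-east-1'.
function accountFromUrl(url) {
    const match = /^https:\/\/(.+)\.snowflakecomputing\.com/.exec(url);
    if (!match) throw new Error('Not a Snowflake URL');
    return match[1];
}

accountFromUrl('https://abc12345.us-east-1.snowflakecomputing.com');
// → 'abc12345.us-east-1'
```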
Go to https://api.slack.com/apps and create a new app.
Name your application and select a workspace.
Then you receive your clientID, clientSecret and signingSecret.
After that, create an App-Level Token with the following scope: authorizations:read. This token is used to verify the user's authorization for incoming Events API requests from Slack.
Copy and paste those values into the Backoffice.
clientID is the Client ID from the Slack app.
clientSecret is the Client Secret from the Slack app.
authToken is the App-Level Token from the Slack app.
signingSecret is the Signing Secret from the Slack app (just under Client Secret).
The next thing is adding the redirect URL and scopes.
Example redirect URL: https://api.YOUR_SUBDOMAIN.appmixer.cloud/auth/slack/callback
And add these scopes.
In order to use the application across workspaces, you have to distribute it.
Some components (New Channel Message or New User) use the Slack Events API (https://api.slack.com/events-api). In order to use these components in Appmixer, you have to register Appmixer's Slack URI in Slack first.
Go to https://api.slack.com/apps then choose your Appmixer app.
Go to your app settings (https://api.slack.com/apps/{your-app-id}/event-subscriptions)
The Request URL has to point to your Appmixer backend server. The rest of the URL /plugins/appmixer/slack/events
has to remain like this. Your Appmixer backend server has to be running before you enter the URL. Slack will immediately try to connect to that URL.
Then you need to subscribe to these events:
message.channels, message.groups for the New Channel Message component
team_join for the New User component
Don't forget to hit Save Changes:
You can now use the appmixer.slack.list.NewChannelMessageRT component in a flow.
In Appmixer 4.2 the Slack module was upgraded. We removed all the legacy API calls and switched to Slack's newest API. For some time, it was possible to use the Appmixer Slack module with both legacy Slack apps and the new Slack apps. But the legacy apps are deprecated and will retire in August 2021 (https://api.slack.com/legacy/workspace-apps). If you use a legacy Slack app, you either have to create a new Slack app or migrate the existing one.
In order to migrate your Slack legacy app, go to the OAuth & Permissions section.
Then scroll down to Scopes and hit the Update Scopes button.
Ignore the first page, because that's all about scopes for bots.
Scroll all the way down and hit continue. You get to a second page with user scopes.
Select channels:read, channels:history, channels:write, groups:read, groups:write, groups:history, chat:write and users:read.
Verify the scopes and migrate the app.
Then confirm. As of version 4.2.1, the Appmixer Slack module is updated and ready for the new Slack apps.
Flows that use the SendPrivateChannelMessage component won't work. The component will appear unauthenticated. The reason is a different OAuth scope in the new Slack app version. In this case, the user has to authenticate the component again.
https://twilio.com
First, you'll need to create an account on the Twilio website.
After signing up, go to https://console.twilio.com/
Copy the Account SID and Auth Token from the "Account Info". Alternatively you can navigate to the "API Keys & Tokens" section, SID and Auth Token are available there.
Use the Account SID and Auth Token to authenticate the Twilio components in the Appmixer Designer.
When creating a Typeform OAuth2 application (https://admin.typeform.com/account#/section/apps), you can choose between access tokens that expire after a week and access tokens that do not expire.
The token expiration is new in Typeform. Appmixer components before version 4.2 cannot handle it.
If you're registering the OAuth application for version 4.2 or newer, you can decide whether you want to use tokens that expire. If you go with Unlimited, you don't have to set anything in the Backoffice; it is the default behavior. On the other hand, if you want to use tokens that expire, you have to go to the Backoffice and set useRefreshToken to true for the appmixer:typeform service.
Then you can use the One Week configuration.
appmixer.utils.ai
Appmixer AI utilities allow you to introduce AI functionality into your integrations and automations, or simply extend your product with AI capabilities via Appmixer's No-Code Studio.
After installing the appmixer.utils.ai module, go to the Backoffice to configure the OpenAI apiKey for the module:
Additionally, you can also override the default OpenAI models for the SendPrompt (default is "gpt-4o") and GenerateImage (default is "dall-e-3") components:
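For illustration, the module's Backoffice configuration might contain values along these lines (the apiKey key comes from this guide; the model-override key names shown here are placeholders, so check the Backoffice for the exact names):

```json
{
    "apiKey": "YOUR_OPENAI_API_KEY",
    "sendPromptModel": "gpt-4o",
    "generateImageModel": "dall-e-3"
}
```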
Configure the Twitter connector.
Navigate to the Twitter Developer Platform and sign up if you haven't already.
You might be required to apply for a developer account where you'll need to provide some information about your intended use of Twitter’s API.
After logging in to your Twitter Developer account, navigate to the Developer Dashboard.
Twitter should automatically create a default project and application for you. If it doesn't, click "Create Project" and follow the prompts to set up a new one. After creating the project, you'll be able to create an application. Navigate to your project and select "Add App."
Update application details, such as the name, description, and icon, by clicking the "Edit" button. Next, configure the authentication settings.
Set the "App permissions" to "Read and write" and "Type of App" to "Web App"
Set the Redirect URI to https://[YOUR_API_BASE]/auth/twitter/callback, for example https://api.appmixer.com/auth/twitter/callback.
Get the Client Id and Client Secret from the Keys and Tokens page.
Configure the Client Id as clientId and the Client Secret as clientSecret in the service settings within the Backoffice, under the configuration ID appmixer:twitter.
To learn more about configuring connectors, please visit the Connector Configuration section.
Controls
The Digest Component collects incoming data entries and releases them as a batch based on a threshold, a schedule, or a manual trigger via a webhook. This component is useful for aggregating data over time and sending it at controlled intervals.
threshold (number): Number of entries required to trigger output. If set, output is triggered when this count is reached.
webhookUrl (string): A generated webhook URL that allows manual triggering of output.
minute (string): Specifies at which minutes of the hour the digest should run. Supports cron-like syntax.
hour (string): Specifies at which hours of the day the digest should run. Supports cron-like syntax.
dayMonth (string): Specifies on which days of the month the digest should run. Supports cron-like syntax.
dayWeek (string): Specifies on which days of the week the digest should run. Supports cron-like syntax.
timezone (string): Timezone for scheduling; defaults to GMT.
outputType (string): Format of the output data. Options: first (first entry), array (all entries as an array), object (one entry at a time), file (CSV file).
in: Receives data entries for aggregation.
out: Emits the aggregated data based on the configured condition (threshold, schedule, or webhook trigger).
Set a Threshold (Optional)
Define the threshold parameter to release data when a certain number of entries is reached.
Schedule Batch Processing (Optional)
Configure minute, hour, dayMonth, and dayWeek to process batches at specific times.
Manual Trigger via Webhook (Optional)
Use the webhookUrl to release entries manually at any time.
Select Output Type
Choose how the collected data should be sent (first, array, object, or file).
Supports multiple trigger methods: Data is released either via count (threshold), time (schedule), or manually (webhook).
Flexible scheduling: Uses cron-like syntax for precise control over execution times.
Different output formats: Users can choose between receiving data as an array, a single entry, or a downloadable CSV file.
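To make the scheduling fields concrete, a Digest that flushes at the top of every hour between 9:00 and 17:00 on weekdays, or as soon as 100 entries accumulate, could be configured roughly like this (the JSON shape is illustrative; in practice these values are set in the component's inspector):

```json
{
    "threshold": 100,
    "minute": "0",
    "hour": "9-17",
    "dayWeek": "1-5",
    "timezone": "GMT",
    "outputType": "array"
}
```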
The Condition Component evaluates expressions and directs data to either the true or false output port based on the result. It supports logical conditions using multiple operators and can process complex logical groupings.
expression (object): A structured condition with logical operators (AND, OR) and comparison rules.
=: Checks if two values are equal.
!=: Checks if two values are not equal.
>: Checks if a value is greater than another.
>=: Checks if a value is greater than or equal to another.
<: Checks if a value is less than another.
<=: Checks if a value is less than or equal to another.
%: Checks if a number is divisible by another (modulo).
empty: Checks if a value is empty.
notEmpty: Checks if a value is not empty.
contains: Checks if a value contains another value.
range: Checks if a value falls within a specified range.
regex: Validates if a value matches a given regular expression.
true: Fires if the condition evaluates to true.
false: Fires if the condition evaluates to false.
Define Conditions
Create conditions using logical grouping (AND, OR).
Choose an operator and provide a comparison value.
Evaluate Inputs
Inputs are compared using selected operators.
Logical groups (AND, OR) allow complex expressions.
Direct Output
If the condition evaluates to true, data is sent to the true port.
If the condition evaluates to false, data is sent to the false port.
Logical Groups: Conditions can be grouped using AND and OR for flexible logic handling.
Range Handling: Use rangeMin and rangeMax to check if a value is within a range.
Regex Support: Allows pattern matching for advanced validation.
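As an illustration of how such an expression could be evaluated, here is a minimal sketch. The expression shape and field names are our own simplification, not Appmixer's internal format; the operators follow the table above:

```javascript
// Evaluate one comparison rule against an input object.
function evalRule(rule, input) {
    const value = input[rule.field];
    switch (rule.operator) {
        case '=': return value === rule.value;
        case '!=': return value !== rule.value;
        case '>': return value > rule.value;
        case '>=': return value >= rule.value;
        case '<': return value < rule.value;
        case '<=': return value <= rule.value;
        case '%': return value % rule.value === 0;
        case 'empty': return value === undefined || value === null || value === '';
        case 'notEmpty': return !(value === undefined || value === null || value === '');
        case 'contains': return String(value).includes(rule.value);
        case 'range': return value >= rule.rangeMin && value <= rule.rangeMax;
        case 'regex': return new RegExp(rule.value).test(String(value));
        default: throw new Error('Unknown operator: ' + rule.operator);
    }
}

// A group combines rules (or nested groups) with AND/OR.
function evalExpression(expr, input) {
    const results = expr.rules.map(
        r => r.rules ? evalExpression(r, input) : evalRule(r, input));
    return expr.op === 'AND' ? results.every(Boolean) : results.some(Boolean);
}

// Example: (status = 'open' AND priority > 2) routes to the true port.
const expr = {
    op: 'AND',
    rules: [
        { field: 'status', operator: '=', value: 'open' },
        { field: 'priority', operator: '>', value: 2 }
    ]
};
const port = evalExpression(expr, { status: 'open', priority: 3 }) ? 'true' : 'false';
```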
Visit the Use App Events tutorial for more information.
The On App Event Component triggers when an event is received from an external application, allowing automation workflows to respond dynamically to incoming app events.
This component listens for app events and processes incoming data, making it ideal for integrating external systems and event-driven workflows.
event (string): The name of the event to listen for.
eventDataExample (string): An example JSON structure representing the event data.
out: Emits the received app event data.
Registers Webhook: Subscribes to the specified event using the Appmixer API.
Receives Event Data: Waits for an event from the external system.
Processes Data: Parses the received JSON payload and extracts structured data based on eventDataExample.
Emits Structured Output: Sends the processed event data to the out port.
data (object): The JSON payload received from the app event.
Webhook-Based Execution: Efficiently listens for app events using webhooks.
Dynamic Event Handling: Supports various app events based on configurations.
Ideal for Integrations: Enables event-driven automation between external apps and workflows.
This component is essential for workflows requiring real-time event processing and seamless app integration.
The Counter Component maintains an internal counter that increments with each received message. It can also be reset to a specified value.
count (number): The initial counter value (default: 0).
increment (number): The amount by which the counter increments (default: 1; can be negative for decrement).
item: Triggers the counter increment.
reset: Resets the counter to a specified value, or to its initial value if not provided.
count: Emits the updated counter value after each increment.
Set Initial Value and Increment
Configure the count parameter for the starting value.
Set increment to define how much the counter increases or decreases.
Increment Behavior
Each time a message arrives at the item port, the counter changes by the increment value.
Reset Counter
Sending a message to the reset port resets the counter to the specified value, or to its initial value if no value is provided.
Supports both increment and decrement operations.
State is maintained: The counter retains its value across executions until reset.
Useful for tracking counts in loops, event occurrences, or rate-limiting scenarios.
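The behavior described above can be sketched in a few lines (a simplified model, not the component's actual implementation):

```javascript
// Simplified model of the Counter component's ports.
class Counter {
    constructor(count = 0, increment = 1) {
        this.initial = count;
        this.count = count;
        this.increment = increment;
    }
    // "item" port: every message bumps the counter by the increment.
    item() {
        this.count += this.increment;
        return this.count; // emitted on the "count" port
    }
    // "reset" port: back to a given value, or the initial value.
    reset(value) {
        this.count = value !== undefined ? value : this.initial;
        return this.count;
    }
}

const c = new Counter(0, 2);
c.item();  // counter becomes 2
c.item();  // counter becomes 4
c.reset(); // back to the initial value, 0
```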
The Each Component iterates over a list and outputs a new message for each item. It allows processing list elements individually, enabling sequential execution of subsequent actions.
list (array / string): The list to iterate over. If a string is provided, it must be a valid JSON array.
in: Receives the list to iterate over.
item: Emits each item from the list with additional metadata.
done: Fires after all items have been processed, providing the total count and correlation ID.
item port:
index: The index of the current item in the list.
value: The value of the current item.
count: Total number of items in the list.
correlationId: Unique identifier for tracking batch processing.
done port:
count: The total number of processed items.
correlationId: Unique identifier for tracking batch execution.
Provide a List
Ensure the input list is an array or a valid JSON string representing an array.
Processing Items
Each item in the list is sent separately to the item port.
The done port is triggered after all items are processed.
Handling Index and Correlation ID
Each item is assigned an index and a correlationId to track processing.
Supports JSON Parsing: If list is a string, the component will attempt to parse it as JSON.
State Persistence: Keeps track of progress and resumes processing if interrupted.
Efficient for Iteration: Useful for batch processing workflows requiring individual handling of list elements.
The Join Component is used to merge multiple input connections. It forwards every received message to the output port without modification.
in: Receives messages from multiple sources.
out: Emits the received input messages as output.
Connect Multiple Inputs
Attach multiple input sources to the in port.
Forward Messages
Every received message is immediately sent to the out port.
Does Not Modify Messages: Messages are forwarded exactly as received.
Useful for Routing: Can be used to merge multiple data streams into one processing path.
The Switch component allows conditional branching of messages based on predefined expressions. It evaluates up to five conditions (out1 to out5) and routes messages accordingly. If none of the conditions match, the message is sent to the else output.
expression1 - expression5 (expression): Conditional expressions for the respective outputs (out1 to out5).
else (default): Messages that do not match any condition are sent here.
Each expression supports AND/OR logic, allowing complex conditions.
=: Equal to
!=: Not equal to
>: Greater than
>=: Greater than or equal to
<: Less than
<=: Less than or equal to
%: Modulo operation (divisibility check)
empty: Checks if the value is empty
notEmpty: Checks if the value is not empty
contains: Checks if a value contains a substring
range: Checks if a value falls within a specified range
regex: Matches a value against a regular expression
out1 - out5: The message is sent to the first matching output based on conditions.
else: If no conditions are met, the message is sent to this output.
Define Conditions
Configure up to 5 expressions (expression1 to expression5) for routing messages.
Set Operators
Use supported operators to define comparison logic.
Handle Default Cases
If none of the conditions match, the message is sent to else.
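A minimal sketch of the first-match routing described above (plain predicate functions stand in for the real expressions; this is not the component's actual implementation):

```javascript
// Route a message to the first matching output (out1..out5), else "else".
function route(message, expressions) {
    for (let i = 0; i < Math.min(expressions.length, 5); i++) {
        if (expressions[i] && expressions[i](message)) {
            return 'out' + (i + 1);
        }
    }
    return 'else';
}

const expressions = [
    m => m.priority > 5,       // expression1
    m => m.status === 'open'   // expression2
];
route({ priority: 7, status: 'open' }, expressions); // both match, first wins
```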
The OnStart Component is a trigger that fires once when the flow starts. It emits a timestamp indicating when the flow was initiated.
Automatic Trigger
The component automatically fires when the flow begins execution.
Start Time Emission
Outputs a started timestamp that can be used for logging, tracking, or scheduling purposes.
One-Time Execution: Triggers only once per flow execution.
Useful for Logging: Can be used to capture flow start times for analytics or debugging purposes.
The JoinEach Component is designed to work with the Each Component. It collects multiple incoming messages based on a specified count and outputs them as a single batch when all expected messages have arrived. If some messages do not arrive within a timeout period, it outputs the partial batch along with count information.
Set Up with Each Component
Ensure messages sent to in contain a correlationId matching the one from the Each component.
Define Expected Count
Send a message to the count port specifying how many messages the component should wait for.
Processing Messages
The component collects messages based on the correlationId.
When all expected messages arrive, they are sent as an array to the out port.
If not all messages arrive before the timeout, the timeout port emits the partial batch.
Works with Correlation IDs: Ensures messages from the same batch are correctly joined.
Timeout Handling: Prevents indefinite waiting if some messages fail to arrive.
Batch Processing: Useful for grouping messages into structured outputs for further processing.
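The batching logic can be sketched as follows (a simplified model; the real component uses an internal timer, which is replaced here by an explicit flush call):

```javascript
// Collect messages per correlationId; flush on reaching the expected
// count, or emit a partial batch on timeout.
class JoinEach {
    constructor() {
        this.batches = new Map(); // correlationId -> { expected, items }
    }
    // "count" port: how many messages to wait for.
    setCount(correlationId, expected) {
        const batch = this.batches.get(correlationId) || { expected: null, items: [] };
        batch.expected = expected;
        this.batches.set(correlationId, batch);
    }
    // "in" port: returns the full batch when complete, otherwise null.
    receive(correlationId, message) {
        const batch = this.batches.get(correlationId) || { expected: null, items: [] };
        batch.items.push(message);
        this.batches.set(correlationId, batch);
        if (batch.expected !== null && batch.items.length >= batch.expected) {
            this.batches.delete(correlationId);
            return { port: 'out', items: batch.items };
        }
        return null;
    }
    // Timeout: emit whatever arrived, with expected vs. arrived counts.
    flush(correlationId) {
        const batch = this.batches.get(correlationId);
        if (!batch) return null;
        this.batches.delete(correlationId);
        return { port: 'timeout', items: batch.items, count: batch.expected, arrived: batch.items.length };
    }
}

const j = new JoinEach();
j.setCount('batch-1', 2);
j.receive('batch-1', 'a');                // null, still waiting
const result = j.receive('batch-1', 'b'); // complete batch on the "out" port
```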
The SetVariable Component assigns variables dynamically based on input type and value. It allows storing values in various formats such as text, numbers, dates, booleans, and files.
Text (text): Stores a string value.
Textarea (textarea): Stores a longer text value.
Number (number): Stores a numeric value.
Date-Time (date-time): Stores a date and time.
Boolean (toggle): Stores a true/false value.
File (filepicker): Stores a selected file reference.
Provide Variables
Send an object containing variable names, types, and values to the in port.
Processing Variables
The component assigns the variables dynamically based on their types.
Retrieve Output
The processed variables are sent as an object through the out port.
Flexible Variable Storage: Supports multiple data types for dynamic variable assignments.
Useful for Workflow Automation: Allows temporary storage of values for later processing.
Outputs in Key-Value Format: The output is structured as an object containing assigned values.
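A rough sketch of the type-based assignment (the input shape is our own illustration; the type names follow the list above):

```javascript
// Coerce each variable according to its declared type and return a
// plain key-value object, mirroring the documented output format.
function setVariables(variables) {
    const out = {};
    for (const { name, type, value } of variables) {
        switch (type) {
            case 'text':
            case 'textarea':
            case 'filepicker':
                out[name] = String(value);
                break;
            case 'number':
                out[name] = Number(value);
                break;
            case 'date-time':
                out[name] = new Date(value).toISOString();
                break;
            case 'toggle':
                out[name] = Boolean(value);
                break;
            default:
                out[name] = value;
        }
    }
    return out;
}

const vars = setVariables([
    { name: 'greeting', type: 'text', value: 'hello' },
    { name: 'retries', type: 'number', value: '3' },
    { name: 'enabled', type: 'toggle', value: 1 }
]);
```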
Convert a CSV file to XLSX (Excel 2007+ XML Format).
Provide File ID:
Select the CSV file to be converted using its fileId.
Processing:
The component processes the file and converts it into XLSX format.
Retrieve Output:
The converted file is returned with a fileId and fileName for further use.
The CSV2HTML component converts a CSV file into an HTML table. This allows for easier visualization and sharing of structured data.
Select the CSV File:
Provide the fileId of the CSV file to be converted.
Processing:
The component will transform the CSV file into an HTML table.
Retrieve Output:
The converted HTML file's fileId and fileName will be returned.
The Archive component allows you to create a ZIP or TAR archive from a set of files. You can also append files to an existing archive, provided it matches the specified format.
Select Output Format:
Choose between zip or tar as the output format.
Provide File Name:
Specify a name for the archive file, ensuring it includes the appropriate extension (.zip or .tar).
Add Files to Archive:
Upload or select files using their fileId.
If an archiveFileID is provided, new files will be appended to the existing archive.
Processing:
The component processes and creates the archive, returning the generated file's fileId and fileName.
Batch File Compression: Combine multiple files into a single ZIP or TAR archive.
Incremental Archiving: Append files to an existing archive for ongoing data collection.
Data Backup & Export: Store or transfer multiple files efficiently as a single archive.
Overview
The HTML to CSV Converter component allows you to transform an HTML table file into a CSV format. This is useful for extracting structured data from HTML documents into a tabular format.
Input Parameters
Output Parameters
Instructions
Select the HTML File:
Provide the fileId of the HTML document that contains a table.
Processing:
The component extracts the table data from the HTML file and converts it into a structured CSV format.
Retrieve Output:
The component returns a new file with a fileId and fileName in CSV format.
The DataUri2File component converts a Data URI into a file. It extracts the file content and metadata from the Data URI and generates a downloadable file.
Provide a Data URI
Input a valid Data URI string that represents a file.
Specify a File Name (Optional)
Enter a desired file name with or without an extension.
If omitted, the extension is inferred from the file type.
Processing
The component extracts the file content from the Data URI and saves it as a downloadable file.
It returns the fileId and fileName of the newly created file.
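The parsing step can be sketched as follows (a simplified model; storing the result via Appmixer's file service is omitted, and the bytes are simply returned):

```javascript
// Extract the MIME type and decoded payload from a Data URI.
function parseDataUri(dataUri) {
    const match = /^data:([^;,]*)?(;base64)?,(.*)$/.exec(dataUri);
    if (!match) throw new Error('Invalid Data URI');
    const mimeType = match[1] || 'text/plain';
    const content = match[2]
        ? Buffer.from(match[3], 'base64')            // base64-encoded payload
        : Buffer.from(decodeURIComponent(match[3])); // URL-encoded payload
    return { mimeType, content };
}

const { mimeType, content } = parseDataUri('data:text/plain;base64,aGVsbG8=');
// mimeType is 'text/plain'; content holds the bytes of "hello"
```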
The CSV2JSON component converts a CSV file into a JSON file.
Provide File ID:
Select or upload the CSV file to be converted.
Processing:
The component processes the CSV file and converts it into JSON format.
Output:
The converted JSON file is available with its unique fileId and fileName.
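A naive sketch of the conversion (for illustration only; a production converter should use a real CSV parser to handle quoting and embedded delimiters):

```javascript
// First row becomes the keys; each following row becomes an object.
function csvToJson(csvText, delimiter = ',') {
    const lines = csvText.trim().split(/\r?\n/);
    const headers = lines[0].split(delimiter);
    return lines.slice(1).map(line => {
        const cells = line.split(delimiter);
        return Object.fromEntries(headers.map((h, i) => [h, cells[i]]));
    });
}

const json = csvToJson('name,age\nAlice,30\nBob,25');
// [{ name: 'Alice', age: '30' }, { name: 'Bob', age: '25' }]
```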
out: Fires when the flow starts, providing the start time.
started: The timestamp (ISO 8601 format) indicating when the flow started.
in: Receives individual messages to be aggregated. Each message must contain a correlationId.
count: Specifies the total number of messages expected. Must be provided before aggregation begins.
out: Fires when all expected messages arrive, emitting an array of collected messages.
timeout: Fires when the timeout is reached before all expected messages arrive, providing the partial batch and counts.
items: The array of collected messages (out port).
items: The array of messages that arrived before the timeout (timeout port).
count: The number of expected messages.
arrived: The number of messages that actually arrived.
in: Accepts an object containing variables and their values.
out: Emits the assigned variables as an object.
fileId (string, required): The ID of the file to convert.
fileId (string): The unique ID of the converted XLSX file.
fileName (string): The name of the converted file.
fileId (string, required): The unique identifier of the CSV file.
fileId (string): The ID of the generated HTML file.
fileName (string): The name of the generated file.
outputFormat (string, required): The format of the archive file. Options: zip, tar.
fileName (string, required): The name of the output archive file, including the extension.
files (object, optional): List of files to be added to the archive. Each file is represented by its fileId.
archiveFileID (string, optional): The ID of an existing archive file. If provided, new files will be appended to this archive.
fileId (string): The unique ID of the newly created or updated archive file.
fileName (string): The name of the archive file.
fileId (string, required): The ID of the HTML file to convert.
fileId (string): The unique ID of the converted file.
fileName (string): The name of the generated CSV file.
dataUri (string, required): A Data URI representation of a file.
fileName (string, optional): The desired name of the output file. If omitted, a default name (result.[EXT]) is assigned based on the file type.
fileId (string): The unique ID of the generated file.
fileName (string): The name of the resulting file.
fileId (string, required): The ID of the CSV file to convert.
fileId (string): The unique ID of the generated JSON file.
fileName (string): The name of the generated JSON file.
HTML to JSON Converter
Description: Convert an HTML table file into a JSON file.
Input Parameters
fileId (string): The ID of the file to be converted.
Output Parameters
fileId (string): The ID of the newly created JSON file.
fileName (string): The name of the converted JSON file.
JSON2CSV Component
Overview
The JSON2CSV component converts a JSON file into a CSV format. This enables easier data handling for spreadsheet applications and data processing tools that require CSV input.
Input Parameters
fileId (string, required): The ID of the JSON file to be converted.
Output Parameters
fileId (string): The ID of the newly generated CSV file.
fileName (string): The name of the generated CSV file.
Instructions
Upload JSON File: Provide the fileId of the JSON file you wish to convert.
Processing: The component processes the input JSON and converts it into a structured CSV format.
Retrieve Output: The resulting CSV file's fileId and fileName are returned as output.
JSON2XLSX Component
Overview
The JSON2XLSX component converts a JSON file into an XLSX (Excel 2007+ XML Format) file.
Input Parameters
fileId (string, required): The ID of the JSON file to convert.
Output Parameters
fileId (string): The ID of the generated XLSX file.
fileName (string): The name of the XLSX file.
Instructions
Select the Input JSON File:
Upload or provide the fileId of the JSON file to be converted.
Processing:
The component converts the JSON file into XLSX format.
Retrieve the XLSX File:
The output provides the new file's fileId and fileName.
XLSX2CSV Component
The XLSX2CSV component converts an XLSX Excel 2007+ XML format file into a CSV file.
fileId (string, required): The ID of the XLSX file to convert.
fileId (string): The unique ID of the newly created CSV file.
fileName (string): The name of the converted CSV file.
Provide File ID:
Select an XLSX file using its fileId.
Conversion Process:
The component processes the XLSX file and converts it into CSV format.
Retrieve Output:
The converted file is returned with its fileId and fileName.
This component is useful for extracting tabular data from Excel files into a widely used CSV format for further processing.
The HTML2XLSX component converts an HTML table file into an XLSX (Excel 2007+ XML Format) file.
fileId (string, required): The ID of the HTML file containing the table to convert.
fileId (string): The unique ID of the converted XLSX file.
fileName (string): The name of the generated XLSX file.
Provide the File ID:
Select or upload an HTML file containing a table.
Conversion Process:
The component extracts tabular data from the HTML file and converts it into an Excel spreadsheet.
Retrieve the Output:
The resulting XLSX file is provided as output with a unique fileId and fileName.
The JSON2HTML component converts a JSON file into an HTML table.
fileId (string, required): The ID of the JSON file to be converted.
fileId (string): The unique ID of the converted HTML file.
fileName (string): The name of the output HTML file.
Provide the JSON File:
Select or upload a JSON file using its fileId.
Processing:
The component reads the JSON file and converts it into an HTML table.
Retrieve the Output:
The output file is provided as an HTML file with a fileId and fileName.
Overview
The AddRows component allows adding multiple rows to an existing dataset.
Input Parameters
dataset (array, required): The dataset to which rows will be added.
rows (array, required): An array of row objects to be added.
Output Parameters
dataset (array): The updated dataset with the new rows.
Instructions
Provide the Dataset:
Ensure the dataset is passed in the correct array format.
Add Rows Data:
Provide multiple rows in an array where each object represents a row.
Process & Output:
The component adds the rows and outputs the updated dataset.
Overview
The AddRow component allows adding a single row to a dataset.
Input Parameters
dataset
array
✅
The dataset to which the row will be added.
row
object
✅
The row object containing column values.
Output Parameters
dataset (array): The updated dataset with the new row.
Instructions
Provide the Dataset:
Ensure the dataset is passed in the correct array format.
Add Row Data:
Provide the new row as an object where keys match column names.
Process & Output:
The component adds the row and outputs the updated dataset.
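Treating the dataset as a list of row objects, the behavior of AddRows and AddRow can be modeled as follows. This is an illustrative sketch of the data shapes involved, not the components' internal code.

```python
def add_rows(dataset, rows):
    """Append multiple row objects to a dataset (a list of dicts)."""
    return dataset + rows

def add_row(dataset, row):
    """Append a single row object; keys should match the dataset's column names."""
    return dataset + [row]

data = [{"id": 1, "name": "Alice"}]
data = add_rows(data, [{"id": 2, "name": "Bob"}, {"id": 3, "name": "Carol"}])
data = add_row(data, {"id": 4, "name": "Dan"})
```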
Description
Create a new CSV file with an initial text content.
Input Parameters
filename (string, required): The name of the CSV file.
initialContent (string, required): CSV formatted content to initialize the new file.
delimiter (string, required): A character to use as a delimiter between columns.
Output Parameters
out (object): Generates an output port with headers.
Instructions
Provide File Name: Enter the desired CSV file name.
Set Delimiter: Define a delimiter (default: ,).
Enter Initial Content: Add pre-formatted CSV content.
Process CSV Creation: The component generates a CSV file.
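As a sketch of what the component produces, the following parses comma-delimited initial content and re-serializes it with the chosen delimiter. File naming and storage are intentionally omitted; this only illustrates the content transformation.

```python
import csv
import io

def create_csv_content(initial_content, delimiter=","):
    """Parse pre-formatted CSV text and rewrite it using the chosen delimiter."""
    rows = list(csv.reader(io.StringIO(initial_content)))
    buf = io.StringIO()
    csv.writer(buf, delimiter=delimiter, lineterminator="\n").writerows(rows)
    return buf.getvalue()

content = create_csv_content("name,email\nAlice,alice@example.com", delimiter=";")
```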
XLSX2JSON Component
Description: Convert an XLSX (Excel 2007+ XML format) file to JSON.
Input Parameters:
fileId (string, required): The ID of the file to be converted.
Output Parameters:
fileId (string): The unique ID of the converted JSON file.
fileName (string): The name of the generated JSON file.
Overview
The AddColumn component allows adding a new column to an existing dataset.
Input Parameters
dataset (array, required): The dataset to which the column will be added.
columnName (string, required): The name of the new column.
defaultValue (any, optional): Default value for the new column.
Output Parameters
dataset (array): The updated dataset with the new column.
Instructions
Provide the Dataset:
Ensure the dataset is passed in the correct array format.
Specify Column Details:
Enter the column name.
Optionally, provide a default value for all rows.
Process & Output:
The component adds the column and outputs the updated dataset.
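The column addition can be modeled in a couple of lines of Python. This is a sketch of the operation on a list-of-dicts dataset, not the component's implementation.

```python
def add_column(dataset, column_name, default_value=None):
    """Add a new column to every row, filled with an optional default value."""
    return [{**row, column_name: default_value} for row in dataset]

updated = add_column([{"id": 1}, {"id": 2}], "status", default_value="new")
```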
XLSX2HTML Component
The XLSX2HTML component converts an XLSX (Excel 2007+ XML) file into an HTML table format.
Input Parameters:
fileId (string, required): The ID of the XLSX file to convert.
Output Parameters:
fileId (string): The unique ID of the converted file.
fileName (string): The name of the generated HTML file.
Select the XLSX File
Choose an existing XLSX file by providing its fileId.
Processing
The component will convert the XLSX file into an HTML table.
Output
The resulting file ID and name will be returned for further use.
Description: The ExportCSV component exports structured data into a CSV file.
Define Data Source:
Provide structured data in an array format, ensuring proper key-value mapping.
Set Output File Name:
Choose a name for the CSV file, including the .csv extension.
Include Headers (Optional):
Enable Use Headers if the first row should contain column headers.
Processing:
The component processes the structured data and creates a CSV file.
The newly generated file is output with its fileId and fileName.
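The export step can be sketched with Python's standard csv module. This is a simplified model of structured data becoming CSV text, assuming an array of objects with matching keys; the component itself also handles file storage.

```python
import csv
import io

def export_csv(data, with_headers=True):
    """Serialize a list of dicts to CSV text; dict keys become the header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(data[0].keys()), lineterminator="\n")
    if with_headers:
        writer.writeheader()
    writer.writerows(data)
    return buf.getvalue()

csv_text = export_csv([{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}])
```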
Description: The DeleteRows component removes specific rows from a CSV file based on a defined filter.
Use Column Names (Optional):
If the first row of the CSV file contains column names, enable the Use Column Names toggle.
Otherwise, columns will be identified by their index.
Processing:
The component scans the CSV file for rows that match the given filter.
Any matching rows are removed, and a new CSV file is generated.
The DeleteColumns component removes specific columns from a CSV file. It allows users to define which columns to delete, using either column names (if headers exist) or column indices.
Determine Column Identification Method
If the first row contains column names, set withHeaders to true. This ensures the correct columns are deleted based on their names.
If column names are not available, columns are identified by index.
Select Columns to Remove
Specify the columns you want to delete.
Ensure the correct format is used for column identification.
Processing
The component removes the selected columns and generates a new CSV file.
The modified file is returned with a new fileId.
Output
The processed file can be downloaded or used as input for further data processing.
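The two identification modes can be sketched as follows: columns are resolved through the header row when one exists, or given directly as indices otherwise. This is an illustrative model, not the component's code.

```python
def delete_columns(rows, columns, with_headers=True):
    """Remove columns from CSV rows, identified by header name or by index."""
    if with_headers:
        # Resolve each named column to its position in the header row.
        drop = {rows[0].index(c) for c in columns}
    else:
        drop = set(columns)  # columns given as integer indices
    return [[v for i, v in enumerate(row) if i not in drop] for row in rows]

rows = [["id", "name", "secret"], ["1", "Alice", "x"]]
cleaned = delete_columns(rows, ["secret"])
```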
The GetRows component retrieves rows from a CSV file. It allows filtering of rows based on conditions and supports column headers for structured data extraction.
Use Column Headers (Optional)
If the first row of the CSV file contains column names, enable the Use Column Names toggle.
Otherwise, columns will be identified by their index.
Apply Row Filters (Optional)
Enable Filter Rows if you want to return only the rows matching specific criteria.
Filters can be defined based on column values.
Processing
The component extracts rows based on the provided criteria.
The processed data is returned as an array of rows.
Output
The retrieved rows can be used for further processing or stored for reference.
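The row filtering described above can be sketched as matching every column/value pair of the filter against each row. A simplified illustration, assuming rows are objects keyed by column name.

```python
def get_rows(rows, filters=None):
    """Return the rows matching every column/value pair in the filter."""
    if not filters:
        return rows
    return [r for r in rows if all(r.get(col) == val for col, val in filters.items())]

rows = [
    {"city": "Prague", "name": "Alice"},
    {"city": "Brno", "name": "Bob"},
    {"city": "Prague", "name": "Carol"},
]
matched = get_rows(rows, {"city": "Prague"})
```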
Description: The GetRow component finds the first row that matches the provided filter and returns the entire row.
Use Column Names (Optional):
If the first row of the CSV file contains column names, enable the Use Column Names toggle.
Otherwise, columns will be identified by their index.
Processing:
The component scans the CSV file for a row that matches the given filter.
Once found, it returns the entire row as an object.
Description: The GetCell component finds the first row that matches the provided filter and returns the row value under the specified column.
Use Column Names (Optional):
If the first row of the CSV file contains column names, enable the Use Column Names toggle.
Otherwise, columns will be identified by their index.
Processing:
The component scans the CSV file for a row that matches the given filter.
Once found, it returns the value of the specified column in that row.
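The first-match-then-select behavior of GetCell (and, minus the column selection, GetRow) can be modeled like this. An illustrative sketch only.

```python
def get_cell(rows, filters, column):
    """Return the value under `column` for the first row matching all filters."""
    for row in rows:
        if all(row.get(c) == v for c, v in filters.items()):
            return row.get(column)
    return None  # no row matched the filter

rows = [{"id": "1", "name": "Alice"}, {"id": "2", "name": "Bob"}]
value = get_cell(rows, {"id": "2"}, "name")
```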
Input Parameters:
data (array, required): The structured data to be converted into CSV format.
fileName (string, required): The name of the output CSV file, including the .csv extension.
withHeaders (boolean, required): Set to true if the first row should contain column headers.
Output Parameters:
fileId (string): The ID of the generated CSV file.
fileName (string): The name of the exported CSV file.
Input Parameters:
withHeaders (boolean, required): Set to true if the first row represents column names (CSV header) and should be used to identify columns.
Output Parameters:
fileId (string): The ID of the processed CSV file.
Input Parameters:
withHeaders (boolean, required): Set to true if the first row represents column names (CSV header) and should be used to identify columns.
Output Parameters:
fileId (string): The ID of the processed CSV file.
Input Parameters:
withHeaders (boolean, required): Set to true if the first row represents column names (CSV header) and should be used to identify columns.
filterRows (boolean, required): Enable filtering to return only rows that match specified conditions.
Output Parameters:
fileId (string): The ID of the processed CSV file.
rows (array): The extracted rows from the CSV file.
Input Parameters:
withHeaders (boolean, required): Set to true if the first row represents column names (CSV header) and should be used to identify columns.
Output Parameters:
fileId (string): The ID of the processed CSV file.
row (object): The full row that matches the filter.
Input Parameters:
withHeaders (boolean, required): Set to true if the first row represents column names (CSV header) and should be used to identify columns.
Output Parameters:
fileId (string): The ID of the processed CSV file.
cell (string): The value found in the specified column.
HTTP CONNECTOR
The ImportCSV component allows users to import a CSV file into a workflow. It supports specifying a delimiter and can process files from user input.
Input Parameters:
fileId (string, required): The ID of the CSV file to be imported.
filename (string, optional): The name of the CSV file.
content (string, optional): The content of the CSV file in raw string format.
delimiter (string, optional): A character used to separate columns in the CSV file. Default is ,.
Output Parameters:
fileId (string): The ID of the imported CSV file.
Select a CSV File
Use the file picker to choose a CSV file for import.
Alternatively, specify the file ID if it's already stored.
Define the Delimiter (Optional)
If the CSV file uses a delimiter other than a comma (,), specify it in the Delimiter field.
Processing
The component will parse the CSV file and make it available for further processing.
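The parsing step can be sketched with the standard csv module, showing how the delimiter setting affects the result. An illustration only; the component additionally resolves the file by its fileId.

```python
import csv
import io

def import_csv(content, delimiter=","):
    """Parse raw CSV text into a list of rows using the given delimiter."""
    return list(csv.reader(io.StringIO(content), delimiter=delimiter))

rows = import_csv("id;name\n1;Alice\n2;Bob", delimiter=";")
```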
The UpdateRows component updates all rows in a CSV file that match a provided filter. It allows users to modify values while maintaining the integrity of the dataset.
Input Parameters:
withHeaders (boolean, required): Set to true if the first row represents column names (CSV header) and should be used to identify the columns.
Output Parameters:
fileId (string): The ID of the updated CSV file.
Enable Column Header Usage (Optional)
If your CSV file contains column headers in the first row, enable withHeaders to update rows based on column names.
Otherwise, updates will be performed based on column indices.
Specify Filters and Updates
Define conditions that identify which rows should be updated.
Provide new values for selected columns.
Processing
The component updates all matching rows in the CSV file.
The modified file is output with a new fileId.
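The filter-and-update pass can be sketched as follows: every row matching all filter pairs receives the new values, all others pass through unchanged. An illustrative model, not the component's code.

```python
def update_rows(rows, filters, updates):
    """Apply `updates` to every row that matches all filter pairs."""
    result = []
    for row in rows:
        if all(row.get(c) == v for c, v in filters.items()):
            row = {**row, **updates}  # merge updated values over the original row
        result.append(row)
    return result

rows = [{"id": 1, "status": "open"}, {"id": 2, "status": "closed"}]
updated = update_rows(rows, {"status": "open"}, {"status": "done"})
```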
The HTTP DELETE component allows sending HTTP DELETE requests to external services, enabling users to remove resources via API calls. It provides response details, including body, headers, and status code.
url (string): Fully qualified URI of the target resource (required).
headers (object): JSON object containing request headers.
body (string/object): Optional data sent as the request body.
bodyBase64Encode (boolean): If enabled, encodes the request body in Base64 format.
responseEncoding (string): Encoding format for the response (default: utf8). Supported values: ascii, utf8, utf16le, base64, latin1, binary, hex, null (binary mode).
body: The response body from the external service.
headers: The response headers.
request: Request metadata.
statusCode: The HTTP status code of the response.
Ensure API Endpoint Supports DELETE Requests
The url must be a valid API endpoint accepting DELETE requests.
Set Necessary Headers
If authentication is required, pass headers (e.g., Authorization token).
Provide Optional Request Body
Some APIs accept a request body in DELETE requests.
Configure Response Encoding
Use utf8 (default) or other supported encodings.
Review API Response
Check statusCode and body for success or failure messages.
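The request the component issues is equivalent to a plain HTTP DELETE. A minimal sketch using Python's standard library, with a hypothetical endpoint and token for illustration; the request is constructed but not sent here.

```python
import urllib.request

# Hypothetical endpoint and bearer token, for illustration only.
req = urllib.request.Request(
    "https://api.example.com/items/42",
    method="DELETE",
    headers={"Authorization": "Bearer <token>"},
)
```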
appmixer.utils.email
Appmixer offers a built-in component for sending emails from the platform.
The SendEmail component uses our Mandrill API key by default. It is recommended to change this to your own Mandrill API key if you want to keep using Mandrill as the email service provider. You can do that from the Backoffice; the service ID is appmixer:utils:email in this case. You can also change the default From email address, which is set to no-reply@appmixer.com.
The SendEmail component is an example of how sending emails from Appmixer can be done. You may have your own mailing system you would like to use, or you may prefer a different third-party application. In such a case, you should remove the SendEmail component from the system and offer your users your own implementation.
The HTTP POST Component sends HTTP POST requests to external services. It supports both raw body and form-data submissions, making it suitable for various API interactions.
Supports both raw and form-data body types.
form-data mode allows file uploads along with other input types.
Proper headers must be set for application/json or multipart/form-data depending on the request type.
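The difference between the two body types can be sketched with the standard library. Note the simplification: the component's form-data mode uses multipart/form-data, while urlencode below produces application/x-www-form-urlencoded; the point is only that the two modes serialize the same fields differently and need different Content-Type headers.

```python
import json
import urllib.parse

payload = {"name": "Alice", "role": "admin"}

# raw mode: serialize the body yourself and send Content-Type: application/json
raw_body = json.dumps(payload)

# form mode (simplified): field/value pairs encoded as a form body
form_body = urllib.parse.urlencode(payload)
```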
The RenameColumn component renames a column in a CSV file. This is useful for standardizing column names before processing or integrating CSV data with other systems.
Specify CSV File and Delimiter
Provide the fileId of the CSV file containing the column to be renamed.
Set the correct delimiter (e.g., , or ;).
Rename the Column
Enter the current column name in columnName.
Specify the new column name in newColumnName.
Processing
The component will rename the specified column in the CSV file.
The modified file will be output with a new fileId.
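The rename operation can be sketched as rewriting only the header row of the CSV content. An illustrative model, assuming the first row holds the headers; file storage and fileId handling are omitted.

```python
import csv
import io

def rename_column(content, column_name, new_column_name, delimiter=","):
    """Rename a header column in CSV text (assumes the first row holds headers)."""
    rows = list(csv.reader(io.StringIO(content), delimiter=delimiter))
    rows[0] = [new_column_name if h == column_name else h for h in rows[0]]
    buf = io.StringIO()
    csv.writer(buf, delimiter=delimiter, lineterminator="\n").writerows(rows)
    return buf.getvalue()

result = rename_column("name,mail\nAlice,a@example.com", "mail", "email")
```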
The HTTP PUT Component sends HTTP PUT requests to external services. It is commonly used for updating or replacing resources via API calls.
Ensure API Endpoint Supports PUT Requests
The url must be a valid API endpoint accepting PUT requests.
Set Necessary Headers
If authentication is required, pass headers (e.g., Authorization token).
Provide Request Body
Use the body parameter to define the data being sent.
Configure Response Encoding
Use utf8 (default) or other supported encodings.
Review API Response
Check statusCode and body for success or failure messages.
PUT requests typically replace entire resources: Ensure that the provided data includes all necessary fields.
Proper headers must be set: Use Content-Type: application/json for JSON payloads.
url (string): Fully qualified URI of the target resource (required).
headers (object): JSON object containing request headers.
bodyType (string): Specifies the format of the request body. Options: raw, form-data.
body (string/object): Request body content for raw mode.
bodyFormData (object): Key-value pairs for form-data submissions.
bodyBase64Encode (boolean): If enabled, encodes the request body in Base64 format.
responseEncoding (string): Encoding format for the response (default: utf8). Supported values: ascii, utf8, utf16le, base64, latin1, binary, hex, null.
body: The response body from the external service.
headers: The response headers.
request: Request metadata.
statusCode: The HTTP status code of the response.
Input Parameters:
fileId (string, required): The ID of the CSV file where the column should be renamed.
delimiter (string, required): The character used to separate columns (default: ,).
columnName (string, required): The name of the column to be renamed.
newColumnName (string, required): The new name for the column.
Output Parameters:
fileId (string): The ID of the CSV file with the renamed column.
url (string): Fully qualified URI of the target resource (required).
headers (object): JSON object containing request headers.
body (string/object): Data sent as the request body.
bodyBase64Encode (boolean): If enabled, encodes the request body in Base64 format.
responseEncoding (string): Encoding format for the response (default: utf8). Supported values: ascii, utf8, utf16le, base64, latin1, binary, hex, null.
.
body: The response body from the external service.
headers: The response headers.
request: Request metadata.
statusCode: The HTTP status code of the response.
The Uptime Component periodically checks if a site is up or down. It sends a message when the site's availability status changes (i.e., the site goes from up to down or vice versa).
target (string): The URL of the site to check for uptime. Must start with http:// or https://.
up: Triggered when the site becomes available. Returns the target URL, status code, and downtime details.
down: Triggered when the site becomes unavailable. Returns the target URL and status code.
Set the Target URL
Provide a valid URL in the target field.
Monitoring Behavior
The component periodically checks the status of the target site.
If the site is down, it triggers the down port.
When the site comes back up, it triggers the up port with downtime duration details.
Ensure Proper Protocol
Only URLs with http:// or https:// are allowed.
Status Codes Handling:
A 5xx response is considered a down status.
Successful responses indicate the site is up.
State Persistence:
The component remembers the last known state of the site to detect status changes.
Time Tracking:
When a site goes down, the downtime duration is recorded and sent once it comes back up.
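The state-change logic described above can be sketched as a small state machine: remember the last known state, emit an event only on a transition, and track the timestamp at which the site went down. An illustrative model, not the component's code.

```python
class UptimeMonitor:
    """Track up/down transitions from successive HTTP status codes (sketch)."""

    def __init__(self):
        self.was_up = None      # last known state; None until the first check
        self.down_since = None  # timestamp of the moment the site went down

    def check(self, status_code, now):
        is_up = status_code < 500  # 5xx responses count as a down status
        event = None
        if self.was_up is True and not is_up:
            self.down_since = now
            event = ("down", None)
        elif self.was_up is False and is_up:
            event = ("up", now - self.down_since)  # report downtime duration
            self.down_since = None
        self.was_up = is_up
        return event

mon = UptimeMonitor()
first = mon.check(200, now=0)        # first check: state recorded, no event
down_event = mon.check(503, now=10)  # transition up -> down
up_event = mon.check(200, now=70)    # transition down -> up after 60s
```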
The HTTP Response Component is used to return an HTTP response to a webhook. It allows setting a response body, status code, and headers, enabling proper interaction with external services that expect a structured HTTP response.
body (string/object): Data to be sent as the response body.
statusCode (number): The status code of the response.
headers (string/object): The headers object associated with the response.
Set Response Body
Provide a valid JSON or string value as the response body.
Define Status Code
Choose an appropriate HTTP status code (e.g., 200 for success, 400 for client errors, 500 for server errors).
Specify Headers
Ensure proper content types (e.g., Content-Type: application/json) when sending structured data.
Ensure Valid JSON (If Required)
If Content-Type is application/json, the body should be a valid JSON object.
Handles JSON Parsing: Automatically parses headers and body if they are provided as JSON strings.
Error Handling:
If headers cannot be parsed, an error will be logged and the response will be canceled.
If Content-Type is application/json but the body is invalid JSON, an error is thrown.
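The validation rule can be sketched as follows: when the declared content type is JSON and the body is a string, parse it and let a parse failure abort the response. A simplified model of the behavior, not the component's implementation.

```python
import json

def build_response(body, status_code=200, headers=None):
    """Assemble a webhook response; a JSON content type must carry valid JSON."""
    headers = dict(headers or {})
    if headers.get("Content-Type") == "application/json" and isinstance(body, str):
        json.loads(body)  # raises ValueError if the body is not valid JSON
    return {"body": body, "statusCode": status_code, "headers": headers}

resp = build_response('{"ok": true}', 200, {"Content-Type": "application/json"})
```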
The HTTP PATCH component allows sending HTTP PATCH requests to external services. It is typically used to partially update resources via API calls. The connector processes the request and returns the response details, including body, headers, and status code.
url (string): Fully qualified URI of the target resource (required).
headers (object): JSON object containing request headers.
body (string/object): Data sent as the request body for updating the resource.
bodyBase64Encode (boolean): If enabled, encodes the request body in Base64 format.
responseEncoding (string): Encoding format for the response (default: utf8). Supported values: ascii, utf8, utf16le, base64, latin1, binary, hex, null (binary mode).
body: The response body from the external service.
headers: The response headers.
request: Request metadata.
statusCode: The HTTP status code of the response.
Ensure API Endpoint Supports PATCH Requests
The url must be a valid API endpoint accepting PATCH requests.
Set Necessary Headers
If authentication is required, pass headers (e.g., Authorization token).
Provide Request Body
Use the body parameter to define the data that needs to be updated.
Configure Response Encoding
Use utf8 (default) or other supported encodings.
Review API Response
Check statusCode and body for success or failure messages.
The HTTP GET component allows sending HTTP GET requests to external services. It is useful for retrieving data from APIs and other web resources. The connector processes the request and returns the response details, including body, headers, and status code.
url (string): Fully qualified URI of the target resource (required).
headers (object): JSON object containing request headers.
body (object): Query parameters as a JSON object of key/value pairs.
bodyBase64Encode (boolean): If enabled, encodes the request body in Base64 format.
responseEncoding (string): Encoding format for the response (default: utf8). Supported values: ascii, utf8, utf16le, base64, latin1, binary, hex, null (binary mode).
body: The response body from the external service.
headers: The response headers.
request: Request metadata.
statusCode: The HTTP status code of the response.
Ensure API Endpoint Supports GET Requests
The url must be a valid API endpoint accepting GET requests.
Set Necessary Headers
If authentication is required, pass headers (e.g., Authorization token).
Provide Query Parameters
Use the body parameter to define query parameters as a JSON object.
Configure Response Encoding
Use utf8 (default) or other supported encodings.
Review API Response
Check statusCode and body for success or failure messages.
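Since GET requests carry their parameters in the URL, the key/value pairs from the body parameter end up as a query string. A minimal sketch of that encoding, with a hypothetical endpoint:

```python
import urllib.parse

def build_get_url(url, params):
    """Append key/value query parameters to a GET request URL."""
    return url + "?" + urllib.parse.urlencode(params)

full_url = build_get_url("https://api.example.com/users", {"page": 2, "limit": 50})
```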
The Dynamic Webhook component allows users to subscribe to a webhook URL, triggering the connector when a request is received at that URL. It can be used to listen for incoming HTTP requests and process their data in real time.
Automatically generates a webhook URL upon activation.
Supports HTTP methods for subscribing (POST, PUT, GET, PATCH) and unsubscribing (DELETE, POST, PUT, GET, PATCH).
Can capture request data, headers, query parameters, and method details.
Allows configuring immediate response behavior.
onStartRequestUrl (string): URL to subscribe the webhook when the flow starts.
onStartRequestMethod (string): HTTP method to use when subscribing. Options: POST, PUT, GET, PATCH.
onStartRequestHeaders (object): HTTP headers to send when subscribing.
onStartRequestBody (object): Request body data to send when subscribing.
onStopRequestUrl (string): URL to unsubscribe the webhook when the flow stops.
onStopRequestMethod (string): HTTP method to use when unsubscribing. Options: DELETE, POST, PUT, GET, PATCH.
onStopRequestHeaders (object): HTTP headers to send when unsubscribing.
onStopRequestBody (object): Request body data to send when unsubscribing.
immediateResponse (boolean): Whether to send an immediate response when a request is received. Defaults to true.
method: The HTTP method of the incoming request.
data: The body of the incoming request.
query: Query parameters from the request URL.
headers: HTTP headers of the incoming request.
Configure Webhook Subscription
Provide the onStartRequestUrl and onStartRequestMethod to define how the webhook should be registered.
Set necessary headers and request body for subscription if required.
Listen for Incoming Requests
The webhook will trigger whenever a request is sent to the generated URL.
Captured request details (method, headers, body, query) will be forwarded to the output port.
Configure Webhook Unsubscription
Provide onStopRequestUrl and onStopRequestMethod to define how to remove the webhook upon stopping.
Use runtime expressions to pass response data from the start request.
Immediate Response Option
Enable immediateResponse if you want to return an automatic response to incoming requests.
If disabled, use a Response component later in the flow.
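The subscribe/unsubscribe lifecycle amounts to two plain HTTP requests made at flow start and flow stop. A sketch with hypothetical URLs; the requests are constructed here but not sent.

```python
import json
import urllib.request

def build_lifecycle_request(url, method, headers=None, body=None):
    """Build the subscribe/unsubscribe HTTP request (constructed only, not sent)."""
    data = json.dumps(body).encode() if body is not None else None
    return urllib.request.Request(url, data=data, method=method, headers=headers or {})

# On flow start: register the generated webhook URL with the external service.
subscribe = build_lifecycle_request(
    "https://api.example.com/hooks", "POST",
    headers={"Content-Type": "application/json"},
    body={"target_url": "https://acme.example.com/webhooks/abc123"},
)
# On flow stop: remove the registration again.
unsubscribe = build_lifecycle_request("https://api.example.com/hooks/abc123", "DELETE")
```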
The Webhook Trigger Component fires when an HTTP request is received on a dynamically generated webhook URL. It allows external services to send data to this webhook, which then triggers a workflow in response.
request: Fires when an HTTP request is received. Contains method, data, query, and headers.
url (string): The dynamically generated webhook URL.
generateInspector (boolean): If true, generates an inspector for configuration.
immediateResponse (boolean): If true, sends an immediate response with received data. If false, use the Response component to customize responses.
Retrieve Webhook URL
When the component is initialized, it generates a webhook URL.
Use this URL to receive HTTP requests from external sources.
Configure Immediate Response
Enable immediateResponse to automatically return the received data.
Disable immediateResponse to define a custom response later in the flow using the Response component.
Process Incoming Requests
When a request is received, it is sent to the request output port.
The data includes HTTP method, body, headers, and query parameters.
Custom Responses: If immediateResponse is false, ensure a Response component is added to handle replies.
Supports Any HTTP Method: Webhooks can receive GET, POST, PUT, DELETE, or any other HTTP method.
Dynamic URL Generation: Each instance generates a unique webhook URL.
The Request Approval Component allows human interaction within workflows by creating approval tasks that require manual decision-making. The component sends notifications to requesters and approvers and waits for an approval or rejection response.
This component is used to introduce human decision points into an automated workflow, allowing designated approvers to review and approve or reject tasks.
task: Receives task details including title, description, requester, approver, and deadline.
created: Emits task details when a new approval request is created.
approved: Emits task details when the task is approved.
rejected: Emits task details when the task is rejected.
due: Emits task details if the task reaches the decision deadline without a response.
title (string): Title of the approval request.
description (string): Description of the request.
requester (string): Email of the person requesting approval.
approver (string): Email of the person who must approve or reject the request.
decisionBy (string): Deadline by which a decision must be made (ISO 8601 format).
Receives Approval Request: Accepts task details and creates an approval request.
Sends Notifications: Notifies the requester and approver via email.
Waits for Response: Listens for an approval or rejection decision.
Processes Decision:
If approved, sends the task details to the approved port.
If rejected, sends the task details to the rejected port.
If the deadline passes without a decision, sends the task details to the due port.
Webhook Support: Uses webhooks to listen for approval decisions.
Time-Sensitive Workflow: Ensures decisions are processed before deadlines.
Workflow Control: Useful for integrating human validation in business processes.
This component is essential for workflows requiring human approval, such as purchase approvals, compliance checks, or document validations.
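The routing of a task to its output port can be sketched as a simple decision function. This is a simplified model of the rules described above (it is not the component's implementation): a decision routes to approved or rejected, and a missing decision past the deadline routes to due.

```python
from datetime import datetime, timezone

def route_task(decision, now, decision_by):
    """Pick the output port for an approval task (simplified routing rules)."""
    if decision is None:
        # No decision yet: overdue tasks go to the due port, otherwise keep waiting.
        return "due" if now > decision_by else None
    return "approved" if decision == "approve" else "rejected"

deadline = datetime(2024, 5, 1, tzinfo=timezone.utc)
port_a = route_task("approve", datetime(2024, 4, 30, tzinfo=timezone.utc), deadline)
port_b = route_task(None, datetime(2024, 5, 2, tzinfo=timezone.utc), deadline)
```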
The After All Component is used at the end of a test flow to ensure that all connected components have completed execution before proceeding. It collects and processes all incoming messages before sending them to the output.
This component is ideal for synchronizing multiple parallel branches in a workflow, ensuring all branches finish before continuing.
in: Receives messages from all connected components.
out: Sends messages after all connected components have completed execution.
timeout (number): Maximum time (in seconds) to wait before timing out. Default is 180 seconds.
Receives Messages: Captures messages from all connected components.
Tracks Completion: Checks if all expected messages have arrived.
Timeout Handling: If not all messages arrive within the specified time, it times out and reports missing components.
Sends Output: Once all messages are received (or upon timeout), sends the collected messages to the output port.
Ensures Completion: Guarantees that all necessary components finish before execution proceeds.
Timeout Protection: Prevents indefinite waiting by enforcing a timeout.
Parallel Synchronization: Ideal for workflows with multiple parallel branches requiring synchronization.
This component is essential for orchestrating complex test flows where all processes must complete before continuing execution.
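The completion tracking can be sketched as follows: compare the set of messages received so far against the expected branches, and report either completion, a timeout with the missing branches, or a still-waiting state. An illustrative model, not the component's code.

```python
def after_all(received, expected, elapsed, timeout=180):
    """Report completion status of parallel branches (simplified model)."""
    missing = [name for name in expected if name not in received]
    if not missing:
        return ("complete", [])
    if elapsed >= timeout:
        return ("timeout", missing)  # report which components never arrived
    return ("waiting", missing)

status, missing = after_all({"A": 1, "B": 2}, ["A", "B", "C"], elapsed=200)
```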
The Language module uses by default Appmixer credentials. To overwrite them with yours, go to Backoffice and add the following settings:
There is a built-in Tasks component, RequestApprovalEmail. This component is more of a template: something you can use to build your own version that better fits your platform. Even so, the component may be useful as-is.
This component sends one email to the requester and another email to the approver. It uses the Appmixer Mandrill account to do that. Those emails contain a few Appmixer-related settings (Reply-To, From Email, ...). These can be configured using the Backoffice.
The email contains an Appmixer logo by default; you can change that by setting the logoURL key in the Backoffice and pointing it to your own product logo.
Appmixer contains components that we use to test components. You can use them to test your own custom components. There is a ProcessE2EResults component which sends results from a test run (flow run) to a predefined list of email addresses.
It uses our Mandrill API KEY by default. You can change that in the Backoffice.
The Request Approval Email Component automates email notifications for approval tasks, allowing requesters and approvers to receive structured email alerts about approval requests.
This component sends approval request emails to designated approvers and notifies requesters when a decision is made. It is useful for workflows requiring manual review and approval steps.
Receives Approval Request: Accepts task details and creates an approval request.
Generates Email Notifications: Sends formatted email notifications to the approver and requester.
Handles Responses:
If approved, sends the task details to the approved port.
If rejected, sends the task details to the rejected port.
If the deadline passes without a decision, sends the task details to the due port.
Stores Dashboard Links: Provides a dashboard link in the email for approvers and requesters to manage tasks.
Uses Webhooks: Listens for approval decisions via webhook responses.
Email Customization: Emails contain task details, approval/rejection buttons, and dashboard links.
Automated Follow-ups: Ensures requesters and approvers receive timely notifications.
This component is essential for approval-based workflows that require structured email notifications and human validation processes.
The last thing is the PEOPLE_TASKS_DASHBOARD_URL environment variable that has to be set for the Appmixer engine. This URL will be used in the RequestApprovalEmail component to create links in the email that is sent to the approver. This has to point to a page in your application that will display the tasks. That is a page that implements the widget.
task: Receives task details including title, description, requester, approver, and deadline.
approved: Emits task details when the request is approved.
rejected: Emits task details when the request is rejected.
due: Emits task details if the task reaches the decision deadline without a response.
title (string): Title of the approval request.
description (string): Description of the request.
requester (string): Email of the person requesting approval.
approver (string): Email of the person who must approve or reject the request.
decisionBy (string): Deadline by which a decision must be made (ISO 8601 format).