Guides to Configuring OAuth2 Applications
A universal guide outlining the common steps involved in setting up any OAuth2 application, including registering your application, obtaining client credentials, and defining redirect URIs.
https://www.activecampaign.com/
Sign in to your ActiveCampaign account. Once you are logged in, locate the "Settings" option in the bottom left corner of the homepage.
Within the Settings page, click on the section labeled 'Developer' to navigate to the Developer page. There you will see your API URL and your API key.
Use the API key to authenticate ActiveCampaign components in the Appmixer Designer.


Sign up with Airtable: https://airtable.com/signup
Specify Table name
Now the Airtable workspace/base is prepared. Let's configure the OAuth app.
The last step is to configure clientId and clientSecret in the Appmixer Backoffice for appmixer:airtable.









To authenticate with AWS services Lambda, S3, and SNS, you need to provide Access Key Id and Secret Access Key. You can create a new user in the AWS IAM console (https://console.aws.amazon.com/iam/home) and assign the necessary permissions to that user.
The necessary permissions depend on the services and components you want to use. For example, if you want to use the New S3 Object component, you need to assign the AmazonS3ReadOnlyAccess policy to your user. If you want to use the Send SNS Message component, you need to assign the AmazonSNSFullAccess policy.
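If you want to confirm the credentials and attached policies before entering them into Appmixer, a minimal sanity check with the AWS SDK for Python might look like the following sketch (the key values and region are placeholders):

```python
# Hedged sketch: verify the IAM user's credentials outside of Appmixer.
# Assumes `pip install boto3` and the AmazonS3ReadOnlyAccess policy attached to the user.
import boto3

session = boto3.Session(
    aws_access_key_id="AKIA...",        # Access Key Id of the IAM user (placeholder)
    aws_secret_access_key="...",        # Secret Access Key of the IAM user (placeholder)
    region_name="us-east-1",
)

# With S3 read access, listing buckets should succeed.
s3 = session.client("s3")
print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])
```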
HTML to JSON Converter
Description: Convert an HTML table file into a JSON file.
Input Parameters
fileId (string) - The ID of the file to be converted.
Output Parameters
fileId (string) - The ID of the newly created JSON file.
fileName (string) - The name of the converted JSON file.
Unfortunately, since August 20, 2018 Highrise no longer accepts signups.
We keep the module in Appmixer for older customers who already have their OAuth2 application. If you are a new Appmixer customer, you won't be able to create the OAuth2 application and therefore use the Highrise module. In that case, you should remove it from the platform.
Requests to the Highrise API contain a User-Agent header. This is set to Appmixer by default, but it can be overridden using the Backoffice:
You will need to create a new Redmine installation. Go to https://www.redmine.org/projects/redmine/wiki/RedmineInstall and follow the instructions. Alternatively, you can use a Docker image: https://hub.docker.com/_/redmine or a hosted solution.
Once you have your Redmine application make sure you have the REST API enabled. Go to Administration -> Settings -> API and check the "Enable REST web service" option.
After that, you will need to create a new API key. Go to Administration -> Users -> Edit user and create a new API key.
You will enter this API key and the URL of your Redmine installation when connecting each Redmine component.
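To quickly confirm that the API key and Redmine URL are valid before using them in a component, a minimal request sketch (URL and key are placeholders) could look like this:

```python
# Hedged sketch: check a Redmine API key. Requires `pip install requests`.
import requests

REDMINE_URL = "https://redmine.example.com"   # URL of your Redmine installation (placeholder)
API_KEY = "your-redmine-api-key"              # placeholder

# The Redmine REST API accepts the key in the X-Redmine-API-Key header.
response = requests.get(
    f"{REDMINE_URL}/issues.json",
    headers={"X-Redmine-API-Key": API_KEY},
)
response.raise_for_status()
print(response.json()["total_count"], "issues visible to this key")
```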
Sign up with Asana:
To create an OAuth app with Asana, follow the official documentation.
Take note of the application's client ID and the client secret (which should be protected as a password).
You will need to specify the App name and Redirect URL in this step.
The App name can be anything you want.
The Redirect URL should be the URL where Asana will redirect the user after they authorize your app. This depends on your environment, usually in the following format: https://api.qa.appmixer.com/auth/asana/callback
The last step is to configure clientId and clientSecret in the Appmixer Backoffice for appmixer:asana.
The Cloudflare WAF integration allows you to actively block attacker IPs using the Cloudflare WAF infrastructure.
The Cloudflare WAF connector supports authentication via an Account API Token or a User API Token.
To create the Account API Token, go to the Manage Accounts > Account API Tokens, click on "Create Token". On the next screen, select the "Create Custom Token" option.
Name the token and select the permission:
Account | Rule Rulesets | Edit
To create the User API Token, go to the Profile > API Tokens, click on "Create Token". On the next screen, select the "Create Custom Token" option.
Name the token and select the permission:
Account | Rule Rulesets | Edit
In the Cloudflare connector authentication window, use your API Token to log in.
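If you want to check a User API Token before entering it into the connector, Cloudflare exposes a token verification endpoint; a minimal sketch (the token is a placeholder):

```python
# Hedged sketch: verify a Cloudflare User API Token. Requires `pip install requests`.
import requests

API_TOKEN = "your-cloudflare-api-token"   # placeholder

response = requests.get(
    "https://api.cloudflare.com/client/v4/user/tokens/verify",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
print(response.json())   # expect "success": true and a result with "status": "active"
```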
Configure the Dropbox connector.
Open the Dropbox App Console.
Log in with your Dropbox account credentials.
Set the Redirect URI to https://[YOUR_API_BASE]/auth/dropbox/callback, for example https://api.appmixer.com/auth/dropbox/callback
To allow additional users, click on "Enable Additional Users".
Configure the App Key as clientId and the App Secret as clientSecret in the service settings within the Backoffice, under the configuration ID appmixer:dropbox.
To learn more about configuring connectors, please visit the Connector Configuration section.
Ensure that the following permissions are selected:
account_info.read
files.content.read
files.content.write
files.metadata.read
Sign up with Freshdesk:
Navigate to Profile Settings -> API and copy the API key shown on the right side. If you don't see the API key, please refer to the Freshdesk documentation for more information.
Use the copied key as the API key, along with your domain, when connecting Freshdesk components in Appmixer. In this example, the domain is acme-help and the API key is the copied value.
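To double-check the domain and API key pair before connecting the components, a minimal request sketch (domain and key are placeholders) might look like this:

```python
# Hedged sketch: validate a Freshdesk domain + API key pair. Requires `pip install requests`.
import requests

DOMAIN = "acme-help"                  # the subdomain part of <domain>.freshdesk.com
API_KEY = "your-freshdesk-api-key"    # placeholder

# Freshdesk uses HTTP Basic auth with the API key as the username and "X" as the password.
response = requests.get(
    f"https://{DOMAIN}.freshdesk.com/api/v2/tickets",
    auth=(API_KEY, "X"),
)
print(response.status_code)           # 200 means the credentials are accepted
```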
https://platform.openai.com
OpenAI Account: The first thing you need to do is have an OpenAI account. You can sign up for free.
After signing up, go to https://platform.openai.com/api-keys.
Create a new API key: look for a button or link labeled "Create a new key" or "Generate API keys" and click it.
Keep this key safe and secure. It should not be shared with anyone or exposed in client-side code, as it could be used to misuse your OpenAI services.
Use the API key to authenticate OpenAI components in the Appmixer Designer.
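If you want to confirm that the key works before configuring it, a minimal request sketch (the key is a placeholder) could be:

```python
# Hedged sketch: confirm an OpenAI API key is valid. Requires `pip install requests`.
import requests

API_KEY = "sk-..."   # placeholder OpenAI API key

response = requests.get(
    "https://api.openai.com/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
response.raise_for_status()
print(len(response.json()["data"]), "models available to this key")
```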
Appmixer offers a built-in component for taking Screenshots.
It uses an external screenshot API. If you want to have this module on the platform, you have to create your own account with the screenshot service and set the API token in the Backoffice.
Configure the Jotform connector.
Go to the JotForm website.
Log in with your JotForm account credentials.
Navigate to Account Settings: click on your avatar/initials in the top right corner of the page and choose "Settings" from the dropdown menu, or go directly to the API section of your account settings.
Here, you might see a list of existing API keys if you've created any before. To generate a new one, click the "Create New Key" button.
In the authentication pop-up enter your API key.
appmixer.utils.ai
Appmixer AI utilities allow you to introduce AI functionality into your integrations and automations, or simply extend your product with AI capabilities via Appmixer's No-Code Studio.
After installing the appmixer.utils.ai module, go to the Backoffice to configure the OpenAI apiKey for the module:
Additionally, you can override the default OpenAI models for the SendPrompt (default "gpt-4o") and GenerateImage (default "dall-e-3") components:
appmixer.utils.email
Appmixer offers a built-in component for sending emails from the platform.
The SendEmail component uses our Mandrill API key by default. It is recommended to change that to your own Mandrill API key if you want to keep using Mandrill as the email service provider. You can do that from the Backoffice. The service ID is appmixer:utils:email in this case. You can also change the default "from" email address, which is set to [email protected].
The SendEmail component is an example of how sending emails from Appmixer can be done. You may have your own mailing system that you would like to use, or you may prefer a different third-party application. In such a case, you should remove the SendEmail component from the system and offer your users your own implementation.





























The CSV2HTML component converts a CSV file into an HTML table. This allows for easier visualization and sharing of structured data.
Input Parameters
fileId (string, required): The unique identifier of the CSV file.
Output Parameters
fileId (string): The ID of the generated HTML file.
fileName (string): The name of the generated file.
Select the CSV File:
Provide the fileId of the CSV file to be converted.
Processing:
The component will transform the CSV file into an HTML table.
Retrieve Output:
The converted HTML file’s fileId and fileName will be returned.
Convert a CSV file to XLSX (Excel 2007+ XML Format).
Input Parameters
fileId (string, required): The ID of the file to convert.
Output Parameters
fileId (string): The unique ID of the converted XLSX file.
fileName (string): The name of the converted file.
Provide File ID:
Select the CSV file to be converted using its fileId.
Processing:
The component processes the file and converts it into an XLSX format.
Retrieve Output:
The converted file is returned with a fileId and fileName for further use.
XLSX2JSON Component
Description: Convert an XLSX (Excel 2007+ XML format) file to JSON.
Input Parameters:
fileId (string, required): The ID of the file to be converted.
Output Parameters:
fileId (string): The unique ID of the converted JSON file.
fileName (string): The name of the generated JSON file.
Description
Create a new CSV file with an initial text content.
Input Parameters
filename (string, required): The name of the CSV file.
initialContent (string, required): CSV formatted content to initialize the new file.
delimiter (string, required): A character to use as a delimiter between columns.
Output Parameters
out (object): Generates an output port with headers.
Instructions
Provide File Name: Enter the desired CSV file name.
Set Delimiter: Define a delimiter (default: ,).
Enter Initial Content: Add pre-formatted CSV content.
Process CSV Creation: The component generates a CSV file.
The JSON2HTML component converts a JSON file into an HTML table.
Input Parameters
fileId (string, required): The ID of the JSON file to be converted.
Output Parameters
fileId (string): The unique ID of the converted HTML file.
fileName (string): The name of the output HTML file.
Provide the JSON File:
Select or upload a JSON file using its fileId.
Processing:
The component reads the JSON file and converts it into an HTML table.
Retrieve the Output:
The output file is provided as an HTML file with a fileId and fileName.
Overview
The AddRow component allows adding a single row to a dataset.
Input Parameters
dataset (array, required): The dataset to which the row will be added.
row (object, required): The row object containing column values.
Output Parameters
dataset (array): The updated dataset with the new row.
Instructions
Provide the Dataset:
Ensure the dataset is passed in the correct array format.
Add Row Data:
Provide the new row as an object where keys match column names.
Process & Output:
The component adds the row and outputs the updated dataset.
JSON2XLSX Component
Overview
The JSON2XLSX component converts a JSON file into an XLSX (Excel 2007+ XML Format) file.
Input Parameters
fileId (string, required): The ID of the JSON file to convert.
Output Parameters
fileId (string): The ID of the generated XLSX file.
fileName (string): The name of the XLSX file.
Instructions
Select the Input JSON File:
Upload or provide the fileId of the JSON file to be converted.
Processing:
The component converts the JSON file into an XLSX format.
Retrieve the XLSX File:
The output provides the new file's fileId and fileName.
JSON2CSV Component
Overview
The JSON2CSV component converts a JSON file into a CSV format. This enables easier data handling for spreadsheet applications and data processing tools that require CSV input.
Input Parameters
fileId (string, required): The ID of the JSON file to be converted.
Output Parameters
fileId (string): The ID of the newly generated CSV file.
fileName (string): The name of the generated CSV file.
Instructions
Upload JSON File: Provide the fileId of the JSON file you wish to convert.
Processing: The component processes the input JSON and converts it into a structured CSV format.
Retrieve Output: The resulting CSV file's fileId and fileName are returned as output.
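For reference, the transformation the component performs is conceptually similar to flattening an array of JSON objects into CSV rows; the following sketch is an illustration only, not the component's actual implementation:

```python
# Illustration only: conceptual JSON -> CSV flattening, not the component's code.
import csv
import io
import json

records = json.loads('[{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}]')

output = io.StringIO()
writer = csv.DictWriter(output, fieldnames=list(records[0].keys()))
writer.writeheader()
writer.writerows(records)

print(output.getvalue())
# name,age
# Alice,30
# Bob,25
```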
XLSX2HTML Component
The XLSX2HTML component converts an XLSX (Excel 2007+ XML) file into an HTML table format.
Input Parameters
fileId (string, required): The ID of the XLSX file to convert.
Output Parameters
fileId (string): The unique ID of the converted file.
fileName (string): The name of the generated HTML file.
Select the XLSX File
Choose an existing XLSX file by providing its fileId.
Processing
The component will convert the XLSX file into an HTML table.
Output
The resulting file ID and name will be returned for further use.
Overview
The AddRows component allows adding multiple rows to an existing dataset.
Input Parameters
dataset (array, required): The dataset to which rows will be added.
rows (array, required): An array of row objects to be added.
Output Parameters
dataset (array): The updated dataset with the new rows.
Instructions
Provide the Dataset:
Ensure the dataset is passed in the correct array format.
Add Rows Data:
Provide multiple rows in an array where each object represents a row.
Process & Output:
The component adds the rows and outputs the updated dataset.
The Apigee connector allows you to block requests from specific IP addresses. To make the integration work, you need to
register Google OAuth2 application
perform setup in Apigee instance
In 2020, Google introduced application verification for apps that use Google APIs to access Google users' data. This makes it impossible to use the Appmixer Google modules in production without the verification process. Such a verification process has to be done by you.
Most of the Google modules need what Google marks as Sensitive or Restricted Scope in order to work. For example, the Gmail module needs https://www.googleapis.com/auth/gmail.compose scope to create a new email and https://www.googleapis.com/auth/gmail.readonly to get new emails.
www.googleapis.com/auth/cloud-platform (sensitive scope)
The connector requires the following IAM permissions. These are the end-user permissions needed to use the connector:
apigee.keyvaluemapentries.get
apigee.keyvaluemapentries.create
apigee.keyvaluemapentries.update
apigee.environments.list (Optional. Used to display a list of Apigee environments, allowing the user to easily select the desired environment in the Designer.)
apigee.keyvaluemaps.create (Optional. If this permission is not granted, you will need to manually create a Key Value Map (KVM) in Apigee; see the KVM storage section for instructions. If the permission is granted, the connector will automatically create the KVM.)
Go to the Google developer console at https://console.developers.google.com/ and create a new project.
Next, enable the required APIs for your project. Go to your project's APIs & Services > Enable APIs & services and click on the Enable APIs and Services button. Enable the following APIs:
Apigee API: apigee.googleapis.com
Api HUB API: apihub.googleapis.com
Service Networking: servicenetworking.googleapis.com
Compute Engine: compute.googleapis.com
Cloud Key Management Service (KMS): cloudkms.googleapis.com
source: Step 1: Enable required APIs
The next step is the OAuth consent screen.
User Type - Internal vs External. The Internal user type allows you to use the Google modules without the app verification process mentioned at the beginning of this page. However, the modules will work only for accounts within your organization.
We are going to choose External for the purpose of this tutorial.
On the next page, leave the scopes empty.
You can add test users, but you can do this later. Only users added as test users will be able to authenticate!
Here is the OAuth consent screen configured. The app is in testing mode and it is external (for users outside your organization). No more than 100 accounts (the user cap) can be used with this application.
We need a client ID and a client secret.
Choose the Web application type and give it a name and an authorized redirect URI.
For the purpose of this tutorial, we are going to use localhost:2200, which is the default port for the Appmixer API when running on localhost. If you are creating an OAuth application for your production/QA environment, the URI of your server goes here. The suffix /auth/google/callback will remain the same.
You will get your Client ID and Client Secret.
They have to be inserted into Appmixer. You can use the Backoffice to do that.
By default, the callback URL passed to Google is in the form of <your-api-url>/auth/apigee/callback. However, in some cases you may need to pass a different callback URL. You can do this by inserting a callbackUrl value in the service configuration:
In order to use Google API Webhooks, you have to verify your domain ownership. More about domain verification can be found here. If you use the Appmixer Self-Managed package and you run Appmixer on your own servers using your custom domain, you can use CNAME records to verify your domain ownership. Otherwise, if you are an Appmixer Hosted customer, you can use the HTML file method to verify your Appmixer tenant domain (api.YOUR_TENANT.appmixer.cloud) at Google.
First, open the Google developers console and your Appmixer project and follow the instructions in the next picture.
Add your Appmixer tenant API URL (api.YOUR_TENANT.appmixer.cloud) and continue to verify the domain ownership. Click 'Take me there' and then 'Add a property'.
Again, use api.YOUR_TENANT.appmixer.cloud. Then download the HTML verification file.
After you download your HTML verification file, upload it via the Appmixer Backoffice interface to the Public Files section on the left. When you are done you should see your file listed:
Click the 'Verify' button to finish the verification process.
Now, you can go back to the developer console and add the api.YOUR_TENANT.appmixer.cloud domain once more. But this time, because it's already verified, it will appear in the list of your domains.
The new domain has to be added to the OAuth consent screen as well.
Shared flows in Apigee allow you to create reusable policies that can be applied across multiple API proxies. Blocking IP Shared Flow is a shared flow that blocks requests from specific IP addresses.
To install the Blocking IP Shared Flow and apply the Shared flow in any Proxy API, follow these steps:
Get the bundle:
Upload the bundle: Go to Apigee and navigate to the Shared Flows section, then click on the Upload Bundle icon to upload a new shared flow bundle.
Deploy the shared flow
Once deployed, go to your API proxy and add the shared flow to the preflow of the proxy endpoint.
API Proxy > Open your proxy > DEVELOP
Navigate to Policies and click on the plus icon to add a policy.
Select the Flow Callout
Enter any name and display name, for example FC-blocking-ip-shared-flow
Select the shared flow deployed in the previous step: blocking-ip-shared-flow
Hit Create.
To apply the policy, go to Proxy Endpoints and select the PreFlow tab.
Click on the plus icon to add the policy step.
Select the policy you just created: FC-blocking-ip-shared-flow
Save the changes, deploy the API proxy and you are done.
The Blocking IP Shared Flow uses the Apigee Key Value Map (KVM) to store the blocked IP addresses.
To create a KVM manually, follow these steps (a hedged API alternative is sketched after these steps):
Go to Management > Environments > {env} > Key Value Maps
Click on the Create Key Value Map button
Name the KVM apigee-blocked-ips
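Alternatively, if the account you use has the apigee.keyvaluemaps.create permission, the KVM can be created through the Apigee API. The following is a hedged sketch (ORG, ENV and the access token are placeholders):

```python
# Hedged sketch: create the "apigee-blocked-ips" Key Value Map via the Apigee API.
# Assumes `pip install requests` and a Google Cloud access token with Apigee permissions.
import requests

ORG = "my-gcp-project"        # Apigee organization (usually the GCP project ID) - placeholder
ENV = "eval"                  # Apigee environment name - placeholder
ACCESS_TOKEN = "ya29...."     # e.g. output of `gcloud auth print-access-token` - placeholder

response = requests.post(
    f"https://apigee.googleapis.com/v1/organizations/{ORG}/environments/{ENV}/keyvaluemaps",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"name": "apigee-blocked-ips", "encrypted": True},
)
print(response.status_code, response.json())
```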
Cloudflare List Integrations allow you to manage IP lists efficiently. You can add or remove IPs from Cloudflare IP Lists and specify the TTL (Time To Live), which automatically removes the IP after a set time period.
You can access custom lists in your Cloudflare account by navigating to Manage Account > Configurations > Lists. Cloudflare Lists connector only supports IP lists.
The Cloudflare Lists connector supports authentication via an Account API Token, a User API Token, or the Global API Key (deprecated).
To create the Account API Token, go to the Manage Accounts > Account API Tokens, click on "Create Token". On the next screen, select the "Create Custom Token" option.
Name the token and select the permission:
Account | Account Filter Lists | Edit
To create the User API Token, go to the Profile > API Tokens, click on "Create Token". On the next screen, select the "Create Custom Token" option.
Name the token and select the permission:
Account | Account Filter Lists | Edit
To get the Global API Key, go to My Profile > API Tokens and click on "View" to reveal the key:
In the Cloudflare connector authentication window, use your Cloudflare account email and the Global API Key or your API Token to log in. Please note that when you use the API Token (User or Account), don't enter the email, just leave the email input empty.
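To check that a token can see the account's lists before using it in the connector, a minimal sketch (account ID and token are placeholders):

```python
# Hedged sketch: list the custom lists in a Cloudflare account. Requires `pip install requests`.
import requests

ACCOUNT_ID = "your-cloudflare-account-id"   # placeholder
API_TOKEN = "your-api-token"                # placeholder (Account or User API Token)

response = requests.get(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/rules/lists",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
# The "result" array contains the lists; the connector works with lists of kind "ip".
print(response.json())
```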
Sign up with Mailchimp: https://mailchimp.com/
Now you need to add an OAuth application. Navigate to: Profile > Extras > Registered apps.
Click on the "Register an app" button.
Now save the OAuth client and copy the client ID and secret. You will need them later.
The last step is to configure clientId, clientSecret and subdomain in the Appmixer Backoffice for the appmixer:mailchimp connector.
Register an OAuth2 application in the same way as a usual Microsoft Azure app, with a few changes:
you don't need to configure OneDrive file picker
configure API permissions for Dynamics 365, see below
Go to the API permissions section:
Choose Dynamics CRM:
Delegated permissions:
The final set of permissions:
Creating CRM Dataverse environment
In order to use the Microsoft Dynamics 365 CRM connector, you'll need a Dataverse environment alongside your Microsoft account. You can review your environments in PowerPlatform administration. If you don't have any environment there, you can start with a trial. From the list of available apps, select Dynamics 365 Sales. Once you complete the sign-up process, the CRM environment will be created.
Once created you can view details of the environment and see Environment URL address which you will need when connecting a Microsoft Dynamics 365 CRM component to your account.
Note that in order to interact with Lead entities, you'll need to install the Sales module into your Dataverse environment. Some entities like Contact or Account come with a Dataverse environment by default, but this is not the case for the Lead entity.
Additional configuration
Microsoft Dynamics 365 CRM connector has additional settings when it comes to caching Dataverse environment schema and data. These settings can be configured in the Backoffice the same way as clientId or clientSecret:
listLookupOptionsCacheTTL - Cache TTL for inspector variables in milliseconds. Default value: 1 minute. Determines how long the values of various Lookup fields are stored in the Appmixer cache. This affects values fetched into select boxes in the inspector for fields like Currency, Contact, etc.
listOutportCacheTTL - Cache TTL for outports in milliseconds. Default: 1 hour. Determines how long Appmixer caches the structure of the Dataverse instance (environment) for output ports. If you add a new table or change an existing table (e.g. by changing a column type), this is the time you need to wait before the changes are propagated into the components' outPort schema.
listInspectorCacheTTL - Cache TTL for the inspector in milliseconds. Default: 1 hour. Similar to listOutportCacheTTL. Determines how long Appmixer caches the structure of the Dataverse instance (environment) for the inspector. If you add a new table or change an existing table (e.g. by changing a column type), this is the time you need to wait before the changes are propagated into the component's inspector.
First, you need to set a QuickBooks key and endpoint through the Backoffice in order to provide the QuickBooks components to your customers.
Go to the https://developer.intuit.com/app/developer/dashboard to create a new app.
Select the name and scope:
Then you receive your clientID and clientSecret.
Also add the redirect URL. Example: https://api.YOUR_SUBDOMAIN.appmixer.cloud/auth/quickbooks/callback.
Go to the Appmixer BackOffice -> Configuration. Add a new configuration: appmixer:quickbooks. Add your clientId and clientSecret keys.
All trigger components (for example NewInvoice) use QuickBooks webhooks. In order to use these components in Appmixer, you have to register Appmixer's QuickBooks URI in QuickBooks first and set the webhook key in the Appmixer BackOffice.
Go to https://developer.intuit.com/app/developer/dashboard and choose your Appmixer app. Navigate to the "Webhooks" section either for the sandbox or production environment and add a new webhook.
Fill the Endpoint URL field with https://api.acme.appmixer.cloud/plugins/appmixer/quickbooks/webhooks (replace acme with your subdomain) and select the event triggers you want to listen to.
For detailed steps, see QuickBooks documentation.
Make sure to copy the Verifier Token and paste it to the Appmixer BackOffice -> Configuration -> appmixer:quickbooks -> webhookVerifierToken field.
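The Verifier Token is what QuickBooks uses to sign webhook payloads; Appmixer performs the verification for you once webhookVerifierToken is configured. For illustration only, the check is typically an HMAC-SHA256 of the raw payload compared against the intuit-signature header, roughly like this sketch:

```python
# Illustration only: Appmixer verifies QuickBooks webhooks internally once
# webhookVerifierToken is set in the Backoffice. This sketch shows the general idea.
import base64
import hashlib
import hmac

def is_valid_quickbooks_webhook(raw_body: bytes, intuit_signature: str, verifier_token: str) -> bool:
    """Compare the intuit-signature header with a base64 HMAC-SHA256 of the payload."""
    digest = hmac.new(verifier_token.encode(), raw_body, hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode()
    return hmac.compare_digest(expected, intuit_signature)
```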
https://linkedin.com
You need to start by creating a new app on the LinkedIn Developer platform. Visit https://developer.linkedin.com/ and log into your account. Once logged in, navigate to "MyApps", then click on "Create App". Fill in all necessary information like app name, app logo, app description, app privacy policy URL etc. You also need to fill the "Associated LinkedIn page". This is the LinkedIn page that will be associated with the app. If you don't have a LinkedIn page, you can create one by clicking on the "Create a LinkedIn Page" link.
Navigate to the "Auth" section where you will find OAuth 2.0 settings. Here, you can set the "Authorized Redirect URLs". Take note of the application's client ID and the client secret (which should be protected as a password)
You will need to specify the Redirect URL in this step.
The Redirect URL should be the URL where LinkedIn will redirect the user after they authorize your app. This depends on your environment, usually in the following format: https://api.ACME.com/auth/linkedin/callback
Configure scopes: Navigate to the "Products" section, request access to "Share on LinkedIn" and "Sign In with LinkedIn using OpenID Connect", once approved (it's instant approval), you will be able to see them on the top of the screen:
Set clientId and clientSecret in the Appmixer Backoffice for appmixer:linkedin.
Verification enables additional scopes to be used in the LinkedIn Appmixer connector: "Advertising API", "Lead Sync API", "Conversions API" and "LinkedIn Ad Library". These scopes are not used in the current version of the LinkedIn connector, therefore verification is not required. You can still perform the verification by sending the verification link generated in the "Settings" tab to the Super Admin of the page associated with the App.
Process estimated time: instant approval.
When creating a Typeform OAuth2 application (https://admin.typeform.com/account#/section/apps), you can choose between access tokens that expire in a week and access tokens that do not expire.
The token expiration is new in Typeform. Appmixer components before version 4.2 cannot handle it.
If you're registering the OAuth application for version 4.2 or newer, you can decide whether you want to use tokens that expire or not. If you decide to go with Unlimited, you don't have to set anything in the Backoffice; it is the default behavior. On the other hand, if you want to use tokens that expire, you have to go to the Backoffice and set useRefreshToken to true for the appmixer:typeform service.
Then you can use the One Week configuration.
Go to the Apps and Keys page (you can also get there from the Docs)
Add a new app:
Get the ClientId, ClientSecret and set the redirect URI:
clientId: Integration Key
clientSecret: Secret Keys
set the Redirect URI to https://[YOUR_API_BASE]/auth/docusign/callback, for example https://api.appmixer.com/auth/docusign/callback
And set the clientId, clientSecret to the service configuration in the Backoffice.
Configure the Twitter connector.
Navigate to the Twitter Developer Platform and sign up if you haven't already.
You might be required to apply for a developer account where you'll need to provide some information about your intended use of Twitter’s API.
After logging in to your Twitter Developer account, navigate to the Developer Dashboard.
Twitter should automatically create a default project and application for you. If it doesn't, click "Create Project" and follow the prompts to set up a new one. After creating the project, you'll be able to create an application. Navigate to your project and select "Add App."
Update application details, such as the name, description, and icon, by clicking the "Edit" button. Next, configure the authentication settings.
Set the "App permissions" to "Read and write" and "Type of App" to "Web App"
Set the Redirect URI to https://[YOUR_API_BASE]/auth/twitter/callback, for example https://api.appmixer.com/auth/twitter/callback
Get the Client Id and Client Secret from the Keys and Tokens page
Configure the Client ID as clientId and the Client Secret as clientSecret in the service settings within the Backoffice, under the configuration ID appmixer:twitter.
To learn more about configuring connectors, please visit the Connector Configuration section.
https://twilio.com
First, you'll need to create an account on the Twilio website.
After signing up, go to https://console.twilio.com/
Copy the Account SID and Auth Token from the "Account Info" section. Alternatively, you can navigate to the "API Keys & Tokens" section, where the SID and Auth Token are also available.
Use the Account SID and Auth Token to authenticate the Twilio components in the Appmixer Designer.
Sign up with Monday: https://auth.monday.com/users/sign_up_new
Navigate to your Monday Developer Center -> My access tokens. Create a new token and copy it.
Use the copied token as apiKey when connecting Monday components in Appmixer.
Log in to your Schoology server. Look at the top navigation bar and go to Tools > School Management.
In the new screen, click on Integrations in the left menu. Now click on the API tab. You should see your consumer key and secret. The secret is hidden, so you have to use the Reveal Existing Secret option.
Log into the Backoffice and go to the Services section. Add a new service configuration named appmixer:schoology. After it is created, add the following configuration values:
Here is the explanation for each value:
baseUrl: The API base URL. Usually it is https://api.schoology.com/v1, but in some cases it could be different.
consumerKey and consumerSecret: The consumer key and secret obtained from the Schoology server.
domain: The subdomain part of your server URL. For example, if your server URL is https://example.schoology.com, then the domain value is just example.
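Schoology uses two-legged OAuth 1.0a with the consumer key and secret. If you want to confirm the values before saving them in the Backoffice, a hedged sketch of an authenticated request (key, secret and endpoint are placeholders) might look like this:

```python
# Hedged sketch: sign a Schoology API request with the consumer key/secret.
# Assumes `pip install requests requests-oauthlib`.
import requests
from requests_oauthlib import OAuth1

BASE_URL = "https://api.schoology.com/v1"
CONSUMER_KEY = "your-consumer-key"        # placeholder
CONSUMER_SECRET = "your-consumer-secret"  # placeholder

# Two-legged OAuth 1.0a: no resource owner token is used.
auth = OAuth1(CONSUMER_KEY, client_secret=CONSUMER_SECRET)
response = requests.get(f"{BASE_URL}/schools", auth=auth)
print(response.status_code, response.text[:200])
```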
Visit https://developers.facebook.com/ and create a new OAuth application (or use your existing one).
Add the Facebook Login for Business and Marketing API products.
Copy the App ID and App Secret and paste them into your Appmixer Backoffice -> Connector Configuration page under the appmixer:facebookbusiness configuration ID as clientId and clientSecret, respectively:
Add your Appmixer tenant domains as your App domains:
Note: replace YOUR_TENANT with the name of your hosted tenant. If you're a customer on an Appmixer Self-Managed plan, you can have completely custom domains that Appmixer runs on. Use your own domains instead.
Set the OAuth redirect URI under Facebook Login for Business / Settings to https://api.YOUR_TENANT.appmixer.cloud/auth/facebookbusiness/callback:
Since version 4.2.1 you can configure the string that appears on the Trello OAuth page.
If you don't do this, the OAuth page will show Appmixer by default.
The first thing you need is an app created in the developer portal. Navigate to the Blackboard developer portal and sign up for an account. When you are logged in, you can see a dashboard with a list of your applications and a button to create a new one:
Note that the listed applications have an Application ID. We will need it when we register our application in the Blackboard instance. Click on the + button to create a new application. A simple form appears with Application name and description inputs, and a domains input for your domains. After you fill in the fields, create the new application.
Log in with admin privileges, and go to the System Admin tab in the top menu. After that, look under Integrations for REST API Integrations:
Next, you will see a list with all your API Integrations. Click on Create Integration to add a new one:
After that, a small form will be shown. One of the inputs is the Application ID which, as shown in the first step, is available in the applications list at the developer portal. Next, browse and select a user. We recommend using the same user you are logged in with. Under End User Access and Authorized To Act As User, select Yes for both options. After that, click on Submit to finish the registration.
Finally, we need to set the OAuth clientId and clientSecret in the Backoffice to finish the integration. To obtain these values, go to the applications list in the developer portal. Find your application and click on the 3 dots at the end of the row. Then click on the Manage Keys option:
After that, you will see a list of your application keys. Click on the + sign to create a new key. Your key will be created and its information will be shown to you:
Be sure to store this information somewhere safe, as you won't be able to get it again.
Next, log into the Backoffice and go to Services. Then create a configuration for appmixer:blackboard. Finally, add your Application ID as clientId and the Secret as clientSecret. Also set serverUrl to your instance URL:
And you are done! You are ready to use the Blackboard module.
Create a Calendly Developer account:
Go to the Calendly developer portal and sign up.
⚠️ Note: This is not your Calendly user account. It’s a separate developer account.
Create a new OAuth application:
Provide a name for your application.
Select the kind of application:
Web
Select your environment:
Sandbox (recommended for development)
Production (use when going live with customer data)
Sandbox: HTTP with localhost domain allowed
Example:
Production: Must use HTTPS.
👉 You can add multiple redirect URIs, separated by commas. 👉 For Appmixer, set the redirect URI to:
Example:
Once the app is created, you’ll see:
Client ID
Client Secret
⚠️ Important: These values will only be shown once. Copy them immediately!
Go to the Appmixer BackOffice -> Configuration.
Add a new configuration: acme:calendly.
Add your clientId and clientSecret keys with values.
To edit an existing OAuth app in Calendly:
Click the menu icon → Edit.
Editable attributes:
Application Name
Kind of App
Environment Type
Redirect URI
⚠️ Note: Client Secret and Webhook Signing Key cannot be retrieved again after creation.
To authenticate the Imperva connector, an API Key is required. To obtain this key, log in to your Imperva account and navigate to Account > My Profile > API Keys. Click on "Add API Key" to generate a new key. Enter a name (and optionally other parameters) for the API Key and click on "Create".
Take note of the API ID and API Key. You will need these to authenticate the connector.
You can also view the API Key details in the API Keys section.
In the Imperva connector authentication window, use your Imperva API ID and API Key to log in.
There are three optional configuration parameters for the SetBlockIPRule action of the Imperva connector:
blockIpMaxIpsPerRule (optional, default: 20): The maximum number of IP addresses that can be added to a block rule.
blockIpMaxParallelRequests (optional, default: 5): The maximum number of parallel requests that can be made to block IP addresses.
blockIpMaxRulesInImperva (optional, default: 500): The maximum number of custom rules that can be added to Imperva. See https://docs.imperva.com/bundle/cloud-application-security/page/rules/create-rule.htm
You can find all the necessary information on how to register an OAuth app in Jira in the Atlassian developer documentation.
Open the Atlassian developer console and create a new app. Click Create and select OAuth 2.0 integration.
Click on the "Register an app" button.
When you create an OAuth 2.0 (3LO) app, it's private by default. This means that only you can install and use it. If you want to distribute your app to other users, you must enable sharing.
Please follow the instructions in the Atlassian documentation to set up distribution.
You need to specify the scopes that your app will use. For Jira, you will need to select the following scopes:
read:me
read:jira-work
write:jira-work
manage:jira-project
manage:jira-configuration
read:jira-user
report:personal-data
This is how the permissions should look:
After you have created the app, you will see the overview of the app. Here you can find the client ID and client secret.
The last step is to configure clientId and clientSecret in the Appmixer Backoffice for the appmixer:jira connector.
You need to create a developer account first (more about different Hubspot accounts can be found here)
Create a new application.
In order to get rid of a warning that your users will see before installing the Hubspot module, you have to verify your identity as an app provider.
You can manage the Contact & Support settings.
Or skip for now.
Set the application name, logo, and description.
Go to the Auth tab and copy & paste these values into the Backoffice.
Create the Hubspot configuration in the Backoffice.
Go back to Hubspot and set the redirect URL, replace acme.com with your domain and keep the suffix /auth/hubspot/callback
Adding scopes.
Hit Save.
One more thing you will need is the Hubspot API key: go back to the Apps page and hit the Get HubSpot API key button.
Then copy & paste the generated key into the Backoffice.
"Your domain is unverified" - until your domain is set and verified, users will see a warning when installing your app.
Users will encounter this warning during the installation or management of the app. To eliminate this warning, the verification process must be completed. HubSpot mandates the configuration of a DNS TXT record as the sole method for verification.
To configure and verify a custom HubSpot app for a hosted instance of Appmixer, such as ACME.appmixer.cloud, you will need to establish a custom domain for the Appmixer instance (for example, ACME.com). Without this custom domain, domain verification will not be possible.
Configure the Shopify connector.
Sign Up or Log In: If you haven't already, go to the Shopify Partner Dashboard and sign up for an account. If you have an account, just log in.
Create a New App: Once logged in, navigate to the "Apps" section using the dashboard menu. Click on "Create app" to get started and select "Create app manually".
Note the clientId and clientSecret. Next, click on "Choose distribution".
Select the "Public distribution" option
Go to the "Configuration" section from the main menu
Set the Redirect URI to https://[YOUR_API_BASE]/auth/shopify/callback, for example https://api.appmixer.com/auth/shopify/callback
Set the event version to the latest (2024-01)
Embed app in Shopify admin: false
Set the compliance webhooks (required for the submission):
Customer data request endpoint: https://[YOUR_API_BASE]/plugins/appmixer/shopify/customers/data_request
Customer data erasure endpoint: https://[YOUR_API_BASE]/plugins/appmixer/shopify/customers/redact
Shop data erasure endpoint: https://[YOUR_API_BASE]/plugins/appmixer/shopify/shop/redact
Go to the "API Access" section from the main menu. In the "Access requests" request access for the "Protected customer data access" Fill details in the "Select your data use and reasons" section:
Fill the form in the "Provide your data protection details" section:
Configure the App Key as clientId and the App Secret as clientSecret in the service settings within the Backoffice, under the configuration ID appmixer:shopify. Also, add the appStoreInstallRedirectUri pointing to your instance, for example https://my.master-hamster-87765.appmixer.cloud/
To learn more about configuring connectors, please visit the Connector Configuration section.
Please note that only a 'developer' shop instance can be authenticated with your OAuth application unless your app is verified. 'Production' shop instances cannot authenticate with an unverified app. This guide does not cover all the requirements for the app verification process.
You'll need a Snowflake account. If you don't have one, you can sign up for a free trial on the Snowflake website.
You will need the following details to connect to your Snowflake database:
Account Name: This is often part of your Snowflake URL. You can find it in the Snowflake web interface after logging into your account.
Username: The username of the Snowflake user account you wish to use.
Password: The password for that Snowflake user.
Database Name: The name of the database you want to connect to.
Warehouse: The name of the virtual warehouse to use for running queries.
Account:
Your account identifier is usually part of your Snowflake URL.
When you log into Snowflake, the URL looks like https://abc12345.us-east-1.snowflakecomputing.com.
Here, abc12345.us-east-1 is your account identifier.
Username and Password:
The credentials you use to log into the Snowflake web interface.
Database and Warehouse:
After logging into Snowflake, navigate to the Databases and Warehouses tabs to see available options.
Choose the appropriate database and warehouse for your needs.
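If you want to verify the connection details before entering them into the components, a minimal sketch with the Snowflake Python connector (all values are placeholders) could be:

```python
# Hedged sketch: test Snowflake connection details.
# Assumes `pip install snowflake-connector-python`.
import snowflake.connector

conn = snowflake.connector.connect(
    account="abc12345.us-east-1",   # account identifier from your Snowflake URL (placeholder)
    user="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    database="YOUR_DATABASE",
    warehouse="YOUR_WAREHOUSE",
)

cursor = conn.cursor()
cursor.execute("SELECT CURRENT_VERSION()")
print(cursor.fetchone())
conn.close()
```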
Go to the Discord Developer Portal: https://discord.com/developers/applications and create a new Application.
Click “New Application”
Give it a name, accept their Terms of Service and click “Create”
Open your new application
To obtain your OAuth credentials, go to OAuth2 tab:
Client ID: shown on the “Client information” card as “Client ID”
Client Secret: click “Reset Secret” to generate a secret and copy it
You’ll paste these into the Appmixer Backoffice later.
You must add the redirect/callback URL so Discord can return users to Appmixer after authorization.
Go to OAuth2 tab
In “Redirects”, add your Appmixer URL. Example:
https://api.YOUR_SUBDOMAIN.appmixer.cloud/auth/discord/callback
Note: Replace YOUR_SUBDOMAIN with your actual Appmixer subdomain. The path “/auth/discord/callback” follows the same pattern as other Appmixer connectors.
To have the events (sending messages, reacting to events) come from a bot, add one:
Click “Reset Token” to reveal the Bot Token and copy it
Keep the token secure. You’ll paste it in the Backoffice as well.
Check the settings below in the Authorization Flow; the recommended setup is:
Public Bot: ON
Requires OAuth2 Code Grant: ON
Server Members Intent: ON
The rest can be turned off.
How to configure scopes:
Installation → Default Install Settings:
User Install: applications.commands
Guild Install: applications.commands, bot
Copy and paste the values into the Backoffice under the Discord connector configuration (appmixer:discord):
clientId → "Client ID" from Discord
clientSecret → “Client Secret” from Discord
botToken → “Bot Token”
After saving, return to your flow in Appmixer. Click “Connect account” for the Discord connector, complete the authorization, and you’re ready to use Discord components in your flows.
The Join Component is used to merge multiple input connections. It forwards every received message to the output port without modification.
Connect Multiple Inputs
Attach multiple input sources to the in port.
Forward Messages
Every received message is immediately sent to the out port.
Does Not Modify Messages: Messages are forwarded exactly as received.
Useful for Routing: Can be used to merge multiple data streams into one processing path.
The DataUri2File component converts a Data URI into a file. It extracts the file content and metadata from the Data URI and generates a downloadable file.
Provide a Data URI
Input a valid Data URI string that represents a file.
Specify a File Name (Optional)
Enter a desired file name with or without an extension.
If omitted, the extension is inferred from the file type.
Processing
The component extracts the file content from the Data URI and saves it as a downloadable file.
Returns the fileId and fileName of the newly created file.
The Counter Component maintains an internal counter that increments with each received message. It can also be reset to a specified value.
Set Initial Value and Increment
Configure the count parameter for the starting value.
Set increment to define how much the counter increases or decreases.
Increment Behavior
Each time a message arrives at the item port, the counter increases by the increment value.
Reset Counter
Sending a message to the reset port resets the counter to the specified value or its initial value if no value is provided.
Supports both increment and decrement operations.
State is maintained: The counter retains its value across executions until reset.
Useful for tracking counts in loops, event occurrences, or rate-limiting scenarios.
Overview
The HTML to CSV Converter component allows you to transform an HTML table file into a CSV format. This is useful for extracting structured data from HTML documents into a tabular format.
Input Parameters
Output Parameters
Instructions
Select the HTML File:
Provide the fileId of the HTML document that contains a table.
Processing:
The component extracts the table data from the HTML file and converts it into a structured CSV format.
Retrieve Output:
The component returns a new file with fileId and fileName in CSV format.
XLSX2CSV Component
The XLSX2CSV component converts an XLSX Excel 2007+ XML format file into a CSV file.
Provide File ID:
Select an XLSX file using its fileId.
Conversion Process:
The component processes the XLSX file and converts it into CSV format.
Retrieve Output:
The converted file is returned with its fileId and fileName.
This component is useful for extracting tabular data from Excel files into a widely used CSV format for further processing.
The CSV2JSON component converts a CSV file into a JSON file.
Provide File ID:
Select or upload the CSV file to be converted.
Processing:
The component processes the CSV file and converts it into JSON format.
Output:
The converted JSON file is available with its unique fileId and fileName.
The SetVariable Component assigns variables dynamically based on input type and value. It allows storing values in various formats such as text, numbers, dates, booleans, and files.
Text (text) – Stores a string value.
Textarea (textarea) – Stores a longer text value.
Number (number) – Stores a numeric value.
Date-Time (date-time) – Stores a date and time.
Boolean (toggle) – Stores a true/false value.
File (filepicker) – Stores a selected file reference.
Provide Variables
Send an object containing variable names, types, and values to the in port.
Processing Variables
The component assigns the variables dynamically based on their types.
Retrieve Output
The processed variables are sent as an object through the out port.
Flexible Variable Storage: Supports multiple data types for dynamic variable assignments.
Useful for Workflow Automation: Allows temporary storage of values for later processing.
Outputs in Key-Value Format: The output is structured as an object containing assigned values.
The DeleteColumns component removes specific columns from a CSV file. It allows users to define which columns to delete, using either column names (if headers exist) or column indices.
Determine Column Identification Method
If the first row contains column names, set withHeaders to true. This ensures the correct columns are deleted based on their names.
If column names are not available, columns are identified by index.
Select Columns to Remove
Specify the columns you want to delete.
Ensure the correct format is used for column identification.
Processing
The component removes the selected columns and generates a new CSV file.
The modified file is returned with a new fileId.
Output
The processed file can be downloaded or used as input for further data processing.
The OnStart Component is a trigger that fires once when the flow starts. It emits a timestamp indicating when the flow was initiated.
out Port
Automatic Trigger
The component automatically fires when the flow begins execution.
Start Time Emission
Outputs a started timestamp that can be used for logging, tracking, or scheduling purposes.
One-Time Execution: Triggers only once per flow execution.
Useful for Logging: Can be used to capture flow start times for analytics or debugging purposes.
The HTML2XLSX component converts an HTML table file into an XLSX (Excel 2007+ XML Format) file.
Provide the File ID:
Select or upload an HTML file containing a table.
Conversion Process:
The component extracts tabular data from the HTML file and converts it into an Excel spreadsheet.
Retrieve the Output:
The resulting XLSX file is provided as output with a unique fileId and fileName.
The GetRows component retrieves rows from a CSV file. It allows filtering of rows based on conditions and supports column headers for structured data extraction.
Use Column Headers (Optional)
If the first row of the CSV file contains column names, enable the Use Column Names toggle.
Otherwise, columns will be identified by their index.
Apply Row Filters (Optional)
Enable Filter Rows if you want to return only the rows matching specific criteria.
Filters can be defined based on column values.
Processing
The component extracts rows based on the provided criteria.
The processed data is returned as an array of rows.
Output
The retrieved rows can be used for further processing or stored for reference.
Description: The ExportCSV component exports structured data into a CSV file.
Define Data Source:
Provide structured data in an array format, ensuring proper key-value mapping.
Set Output File Name:
Choose a name for the CSV file, including the .csv extension.
Include Headers (Optional):
Enable Use Headers if the first row should contain column headers.
Processing:
The component processes the structured data and creates a CSV file.
The newly generated file is output with its fileId and fileName.
Description: The GetCell component finds the first row that matches the provided filter and returns the row value under the specified column.
Use Column Names (Optional):
If the first row of the CSV file contains column names, enable the Use Column Names toggle.
Otherwise, columns will be identified by their index.
Processing:
The component scans the CSV file for a row that matches the given filter.
Once found, it returns the value of the specified column in that row.
The UpdateRows component updates all rows in a CSV file that match a provided filter. It allows users to modify values while maintaining the integrity of the dataset.
Enable Column Header Usage (Optional)
If your CSV file contains column headers in the first row, enable withHeaders to update rows based on column names.
Otherwise, updates will be performed based on column indices.
Specify Filters and Updates
Define conditions that identify which rows should be updated.
Provide new values for selected columns.
Processing
The component updates all matching rows in the CSV file.
The modified file is outputted with a new fileId.
Description: The GetRow component finds the first row that matches the provided filter and returns the entire row.
Use Column Names (Optional):
If the first row of the CSV file contains column names, enable the Use Column Names toggle.
Otherwise, columns will be identified by their index.
Processing:
The component scans the CSV file for a row that matches the given filter.
Once found, it returns the entire row as an object.
Overview
The AddColumn component allows adding a new column to an existing dataset.
Input Parameters
Output Parameters
Instructions
Provide the Dataset:
Ensure the dataset is passed in the correct array format.
Specify Column Details:
Enter the column name.
Optionally, provide a default value for all rows.
Process & Output:
The component adds the column and outputs the updated dataset.
Description: The DeleteRows component removes specific rows from a CSV file based on a defined filter.
Use Column Names (Optional):
If the first row of the CSV file contains column names, enable the Use Column Names toggle.
Otherwise, columns will be identified by their index.
Processing:
The component scans the CSV file for rows that match the given filter.
Any matching rows are removed, and a new CSV file is generated.
The ImportCSV component allows users to import a CSV file into a workflow. It supports specifying a delimiter and can process files from user input.
Select a CSV File
Use the file picker to choose a CSV file for import.
Alternatively, specify the file ID if it's already stored.
Define the Delimiter (Optional)
If the CSV file uses a delimiter other than a comma (,), specify it in the Delimiter field.
Processing
The component will parse the CSV file and make it available for further processing.
The RenameColumn component renames a column in a CSV file. This is useful for standardizing column names before processing or integrating CSV data with other systems.
Specify CSV File and Delimiter
Provide the fileId of the CSV file containing the column to be renamed.
Set the correct delimiter (e.g., ,, ;, \t).
Rename the Column
Enter the current column name in columnName.
Specify the new column name in newColumnName.
Processing
The component will rename the specified column in the CSV file.
The modified file will be outputted with a new fileId.
Component input/output reference:
Join ports:
in: Receives messages from multiple sources.
out: Emits the received input messages as output.
DataUri2File parameters:
dataUri (string, required): A Data URI representation of a file.
fileName (string, optional): The desired name of the output file. If omitted, a default name (result.[EXT]) is assigned based on the file type.
fileId (string, output): The unique ID of the generated file.
fileName (string, output): The name of the resulting file.
Counter parameters and ports:
count (number): The initial counter value (default: 0).
increment (number): The amount by which the counter increments (default: 1, can be negative for decrement).
item (input port): Triggers the counter increment.
reset (input port): Resets the counter to a specified value or its initial value if not provided.
count (output port): Emits the updated counter value after each increment.
HTML2CSV parameters:
fileId (string, required): The ID of the HTML file to convert.
fileId (string, output): The unique ID of the converted file.
fileName (string, output): The name of the generated CSV file.
XLSX2CSV parameters:
fileId (string, required): The ID of the XLSX file to convert.
fileId (string, output): The unique ID of the newly created CSV file.
fileName (string, output): The name of the converted CSV file.
CSV2JSON parameters:
fileId (string, required): The ID of the CSV file to convert.
fileId (string, output): The unique ID of the generated JSON file.
fileName (string, output): The name of the generated JSON file.
SetVariable ports:
in: Accepts an object containing variables and their values.
out: Emits the assigned variables as an object.
DeleteColumns parameters:
withHeaders (boolean, required): Set to true if the first row represents column names (CSV header) and should be used to identify columns.
fileId (string, output): The ID of the processed CSV file.
OnStart ports:
out: Fires when the flow starts, providing the start time.
started (output): The timestamp (ISO 8601 format) indicating when the flow started.
HTML2XLSX parameters:
fileId (string, required): The ID of the HTML file containing the table to convert.
fileId (string, output): The unique ID of the converted XLSX file.
fileName (string, output): The name of the generated XLSX file.
GetRows parameters:
withHeaders (boolean, required): Set to true if the first row represents column names (CSV header) and should be used to identify columns.
filterRows (boolean, required): Enable filtering to return only rows that match specified conditions.
fileId (string, output): The ID of the processed CSV file.
rows (array, output): The extracted rows from the CSV file.
ExportCSV parameters:
data (array, required): The structured data to be converted into CSV format.
fileName (string, required): The name of the output CSV file, including the .csv extension.
withHeaders (boolean, required): Set to true if the first row should contain column headers.
fileId (string, output): The ID of the generated CSV file.
fileName (string, output): The name of the exported CSV file.
GetCell parameters:
withHeaders (boolean, required): Set to true if the first row represents column names (CSV header) and should be used to identify columns.
fileId (string, output): The ID of the processed CSV file.
cell (string, output): The value found in the specified column.
UpdateRows parameters:
withHeaders (boolean, required): Set to true if the first row represents column names (CSV header) and should be used to identify the columns.
fileId (string, output): The ID of the updated CSV file.
GetRow parameters:
withHeaders (boolean, required): Set to true if the first row represents column names (CSV header) and should be used to identify columns.
fileId (string, output): The ID of the processed CSV file.
row (object, output): The full row that matches the filter.
AddColumn parameters:
dataset (array, required): The dataset to which the column will be added.
columnName (string, required): The name of the new column.
defaultValue (any, optional): Default value for the new column.
dataset (array, output): The updated dataset with the new column.
DeleteRows parameters:
withHeaders (boolean, required): Set to true if the first row represents column names (CSV header) and should be used to identify columns.
fileId (string, output): The ID of the processed CSV file.
ImportCSV parameters:
fileId (string, required): The ID of the CSV file to be imported.
filename (string, optional): The name of the CSV file.
content (string, optional): The content of the CSV file in raw string format.
delimiter (string, optional): A character used to separate columns in the CSV file. Default is ,.
fileId (string, output): The ID of the imported CSV file.
RenameColumn parameters:
fileId (string, required): The ID of the CSV file where the column should be renamed.
delimiter (string, required): The character used to separate columns (default: ,).
columnName (string, required): The name of the column to be renamed.
newColumnName (string, required): The new name for the column.
fileId (string, output): The ID of the CSV file with the renamed column.






DocuSign OAuth redirect URI examples: http://localhost:1234 or https://[YOUR_API_BASE]/auth/docusign/callback (e.g., https://api.appmixer.com/auth/docusign/callback).


blockIpMaxIpsPerRule
No
20
The maximum number of IP addresses that can be added to a block rule.
blockIpMaxParallelRequests
No
5
The maximum number of parallel requests that can be made to block IP addresses.
blockIpMaxRulesInImperva
No
500
The maximum number of custom rules that can be added to Imperva. See https://docs.imperva.com/bundle/cloud-application-security/page/rules/create-rule.htm

The HTTP GET component allows sending HTTP GET requests to external services. It is useful for retrieving data from APIs and other web resources. The connector processes the request and returns the response details, including body, headers, and status code.
url
string
Fully qualified URI of the target resource (required).
headers
object
JSON object containing request headers.
body
object
Query parameters as a JSON object of key/value pairs.
bodyBase64Encode
boolean
If enabled, encodes request body in Base64 format.
responseEncoding
string
Encoding format for the response (default: utf8). Supported values: ascii, utf8, utf16le, base64, latin1, binary, hex, null (binary mode).
body
The response body from the external service.
headers
The response headers.
request
Request metadata.
statusCode
The HTTP status code of the response.
Ensure API Endpoint Supports GET Requests
The url must be a valid API endpoint accepting GET requests.
Set Necessary Headers
If authentication is required, pass headers (e.g., Authorization token).
Provide Query Parameters
Use the body parameter to define query parameters as a JSON object.
Configure Response Encoding
Use utf8 (default) or other supported encodings.
Review API Response
Check statusCode and body for success or failure messages.
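As a rough illustration (plain Node.js, not the component's internal implementation; the URL, header, and query values are placeholders), the same request could be written like this:

// Sketch of an equivalent GET request in plain Node.js (Node 18+ with the built-in fetch).
const url = new URL('https://api.example.com/items');
url.search = new URLSearchParams({ status: 'open' }).toString(); // the component's "body" input = query parameters

const response = await fetch(url, {
    method: 'GET',
    headers: { Authorization: 'Bearer YOUR_TOKEN' } // set only if the API requires authentication
});

console.log(response.status);                       // corresponds to the statusCode output
console.log(Object.fromEntries(response.headers));  // corresponds to the headers output
console.log(await response.text());                 // corresponds to the body output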
The HTTP PUT Component sends HTTP PUT requests to external services. It is commonly used for updating or replacing resources via API calls.
url
string
Fully qualified URI of the target resource (required).
headers
object
JSON object containing request headers.
body
string / object
Data sent as the request body.
bodyBase64Encode
boolean
If enabled, encodes request body in Base64 format.
responseEncoding
string
Encoding format for the response (default: utf8). Supported values: ascii, utf8, utf16le, base64, latin1, binary, hex, null.
body
The response body from the external service.
headers
The response headers.
request
Request metadata.
statusCode
The HTTP status code of the response.
Ensure API Endpoint Supports PUT Requests
The url must be a valid API endpoint accepting PUT requests.
Set Necessary Headers
If authentication is required, pass headers (e.g., Authorization token).
Provide Request Body
Use the body parameter to define the data being sent.
Configure Response Encoding
Use utf8 (default) or other supported encodings.
Review API Response
Check statusCode and body for success or failure messages.
PUT requests typically replace entire resources: Ensure that the provided data includes all necessary fields.
Proper headers must be set: Use Content-Type: application/json for JSON payloads.
The HTTP DELETE component allows sending HTTP DELETE requests to external services, enabling users to remove resources via API calls. It provides response details, including body, headers, and status code.
url
string
Fully qualified URI of the target resource (required).
headers
object
JSON object containing request headers.
body
string / object
Optional data sent as the request body.
bodyBase64Encode
boolean
If enabled, encodes request body in Base64 format.
responseEncoding
string
Encoding format for the response (default: utf8). Supported values: ascii, utf8, utf16le, base64, latin1, binary, hex, null (binary mode).
body
The response body from the external service.
headers
The response headers.
request
Request metadata.
statusCode
The HTTP status code of the response.
Ensure API Endpoint Supports DELETE Requests
The url must be a valid API endpoint accepting DELETE requests.
Set Necessary Headers
If authentication is required, pass headers (e.g., Authorization token).
Provide Optional Request Body
Some APIs accept a request body in DELETE requests.
Configure Response Encoding
Use utf8 (default) or other supported encodings.
Review API Response
Check statusCode and body for success or failure messages.
The HTTP POST Component sends HTTP POST requests to external services. It supports both raw body and form-data submissions, making it suitable for various API interactions.
url
string
Fully qualified URI of the target resource (required).
headers
object
JSON object containing request headers.
bodyType
string
Specifies the format of the request body. Options: raw, form-data.
body
string / object
Request body content for raw mode.
bodyFormData
object
Key-value pairs for form-data submissions.
bodyBase64Encode
boolean
If enabled, encodes request body in Base64 format.
responseEncoding
string
Encoding format for the response (default: utf8). Supported values: ascii, utf8, utf16le, base64, latin1, binary, hex, null.
body
The response body from the external service.
headers
The response headers.
request
Request metadata.
statusCode
The HTTP status code of the response.
Supports both raw and form-data body types.
form-data mode allows file uploads along with other input types.
Proper headers must be set for application/json or multipart/form-data depending on the request type.
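The two body modes can be pictured with the following sketch (plain Node.js with placeholder URLs and values; this is not the component's internal code):

// Sketch: raw JSON body vs. form-data body (Node 18+; URLs and values are placeholders).
const rawResponse = await fetch('https://api.example.com/items', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }, // required for raw JSON payloads
    body: JSON.stringify({ name: 'Example item' })
});

const form = new FormData();
form.append('description', 'Example item');
form.append('attachment', new Blob(['hello world']), 'hello.txt'); // file upload next to regular fields

const formResponse = await fetch('https://api.example.com/uploads', {
    method: 'POST',
    body: form // the multipart/form-data Content-Type header is set automatically
});

console.log(rawResponse.status, formResponse.status);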
To integrate Appmixer with Salesforce, you are going to need an app in your Salesforce account. You can either create a new one or use one that already exists.
To create an app, log in to the Salesforce Developer portal. Once logged in, in the left menu, go to Platform Tools -> Apps -> App Manager. This shows a list of your existing apps and lets you create a new one via the New Connected App button in the top right corner:
In the next screen, you will be requested to fill out some basic data about your new app. Look out for the API (Enable OAuth Settings) section. Tick the Enable OAuth Settings checkbox, and fill out the callback URL and scopes as shown in this screenshot:
Check the Require Secret for Web Server Flow, Require Secret for Refresh Token Flow and Enable Authorization Code and Credentials Flow checkboxes.
With your app ready, you can get the client ID and secret required to integrate Salesforce with Appmixer. Go to the App Manager and find your app in the list. In the dropdown on the right side, click the View option.
On the next screen, look for the API (Enable OAuth Settings) section and click the Manage Consumer Details button. You should now see your client ID (shown as Consumer Key) and secret (shown as Consumer Secret):
Now we can provide these values to Appmixer. Log into the Backoffice and go to the Configuration section. Create a configuration service for Salesforce as follows:
Finally, set the clientId (Consumer Key in Salesforce) and the clientSecret (Consumer Secret in Salesforce) like this:
Salesforce OAuth authentication offers several behaviors when users try to authenticate. Here is a summary of the available options (taken from the Salesforce OAuth docs):
login—The authorization server must prompt the user for reauthentication, forcing the user to log in again.
consent—The authorization server must prompt the user for reapproval before returning information to the client.
select_account—If present, take one of the following actions.
If zero or one hint is available and the user is logged in, show the approval page without prompting for login.
If zero or one hint is available and the user isn’t logged in, prompt for login.
If more than one hint is available, show the account chooser.
By default, we use the login option, but you can configure it through the Backoffice.
Log in to the Backoffice and go to the Services module. Then add a new service like this:
Add a promptType key and set it to one of the values supported by Salesforce:
The default login URL is https://login.salesforce.com; this can be changed by setting a baseUrl key (see the illustration below).
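Purely as an illustration (the exact shape of the service entry depends on your Backoffice setup), the keys involved are promptType and, optionally, baseUrl:

// Illustrative Salesforce service configuration values (example values only)
const salesforceServiceConfig = {
    promptType: 'consent',                   // one of: login, consent, select_account (default: login)
    baseUrl: 'https://test.salesforce.com'   // optional; overrides the default https://login.salesforce.com
};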
The Switch component allows conditional branching of messages based on predefined expressions. It evaluates up to five conditions (out1 to out5) and routes messages accordingly. If none of the conditions match, the message is sent to the else output.
Each expression supports AND/OR logic, allowing complex conditions.
Define Conditions
Configure up to 5 expressions (expression1 to expression5) for routing messages.
Set Operators
Use supported operators to define comparison logic.
Handle Default Cases
If none of the conditions match, the message is sent to else.
The JoinEach Component is designed to work with the Each Component. It collects multiple incoming messages based on a specified count and outputs them as a single batch when all expected messages have arrived. If some messages do not arrive within a timeout period, it outputs the partial batch along with count information.
Set Up with Each Component
Ensure messages sent to in contain a correlationId matching the one from the Each component.
Define Expected Count
Send a message to the count port specifying how many messages the component should wait for.
Processing Messages
The component collects messages based on the correlationId.
When all expected messages arrive, they are sent as an array to the out port.
If not all messages arrive before the timeout, the timeout port emits the partial batch.
Works with Correlation IDs: Ensures messages from the same batch are correctly joined.
Timeout Handling: Prevents indefinite waiting if some messages fail to arrive.
Batch Processing: Useful for grouping messages into structured outputs for further processing.
The Dynamic Webhook component allows users to subscribe to a webhook URL, triggering the connector when a request is received at that URL. It can be used to listen for incoming HTTP requests and process their data in real time.
Automatically generates a webhook URL upon activation.
Supports HTTP methods for subscribing (POST, PUT, GET, PATCH) and unsubscribing (DELETE, POST, PUT, GET, PATCH).
Can capture request data, headers, query parameters, and method details.
Allows configuring immediate response behavior.
Configure Webhook Subscription
Provide the onStartRequestUrl and onStartRequestMethod to define how the webhook should be registered.
Set necessary headers and request body for subscription if required.
Listen for Incoming Requests
The webhook will trigger whenever a request is sent to the generated URL.
Captured request details (method, headers, body, query) will be forwarded to the output port.
Configure Webhook Unsubscription
Provide onStopRequestUrl and onStopRequestMethod to define how to remove the webhook upon stopping.
Use runtime expressions to pass response data from the start request.
Immediate Response Option
Enable immediateResponse if you want to return an automatic response to incoming requests.
If disabled, use a Response component later in the flow.
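A hypothetical configuration could look like the sketch below; the external API URL, header, and payload are invented placeholders, and the property names follow the input parameters documented later in this document:

// Hypothetical Dynamic Webhook settings (external API and values are placeholders).
const dynamicWebhookSettings = {
    onStartRequestUrl: 'https://api.example.com/webhooks',
    onStartRequestMethod: 'POST',
    onStartRequestHeaders: { Authorization: 'Bearer YOUR_TOKEN' },
    // The subscription body would typically include the generated webhook URL
    // and the events to listen for (the exact shape depends on the external API):
    onStartRequestBody: { events: ['item.created'] },
    onStopRequestUrl: 'https://api.example.com/webhooks/SUBSCRIPTION_ID', // use a runtime expression to fill the real ID
    onStopRequestMethod: 'DELETE',
    immediateResponse: true // return an automatic response; set to false and use a Response component otherwise
};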
The Each Component iterates over a list and outputs a new message for each item. It allows processing list elements individually, enabling sequential execution of subsequent actions.
Provide a List
Ensure the input list is an array or a valid JSON string representing an array.
Processing Items
Each item in the list is sent separately to the item port.
The done port is triggered after all items are processed.
Handling Index and Correlation ID
Each item is assigned an index and a correlationId to track processing.
Supports JSON Parsing: If list is a string, it will attempt to parse it as JSON.
State Persistence: Keeps track of progress and resumes processing if interrupted.
Efficient for Iteration: Useful for batch processing workflows requiring individual handling of list elements.
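For example, given the three-item list below, the item port fires three times and the done port once; the shapes mirror the output parameters listed later in this document (the correlation ID value is illustrative):

// Illustration of what the Each component emits for a three-item list.
const list = ['alice@example.com', 'bob@example.com', 'carol@example.com'];

// One message per list element on the "item" port:
list.forEach((value, index) => {
    console.log({ index, value, count: list.length, correlationId: 'BATCH-1' });
});

// A single message on the "done" port after the last item:
console.log({ count: list.length, correlationId: 'BATCH-1' });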
The Condition Component evaluates expressions and directs data to either the true or false output port based on the result. It supports logical conditions using multiple operators and can process complex logical groupings.
Define Conditions
Create conditions using logical grouping (AND, OR).
Choose an operator and provide a comparison value.
Evaluate Inputs
Inputs are compared using selected operators.
Logical groups (AND, OR) allow complex expressions.
Direct Output
If the condition evaluates to true, data is sent to the true port.
If the condition evaluates to false, data is sent to the false port.
Logical Groups: Conditions can be grouped using AND and OR for flexible logic handling.
Range Handling: Use rangeMin and rangeMax to check if a value is within a range.
Regex Support: Allows pattern matching for advanced validation.
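The condition itself is configured in the Designer rather than written by hand; purely as a hypothetical illustration of how AND/OR grouping combines the operators listed later in this document, a grouped condition can be read like this:

// Hypothetical illustration only -- not the component's actual schema.
// Reads as: status equals 'open' AND (priority > 3 OR tags contains 'urgent').
const groupedCondition = {
    AND: [
        { input: 'status', operator: '=', value: 'open' },
        {
            OR: [
                { input: 'priority', operator: '>', value: 3 },
                { input: 'tags', operator: 'contains', value: 'urgent' }
            ]
        }
    ]
};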
The HTTP PATCH component allows sending HTTP PATCH requests to external services. It is typically used to partially update resources via API calls. The connector processes the request and returns the response details, including body, headers, and status code.
Ensure API Endpoint Supports PATCH Requests
The url must be a valid API endpoint accepting PATCH requests.
Set Necessary Headers
If authentication is required, pass headers (e.g., Authorization token).
Provide Request Body
Use the body parameter to define the data that needs to be updated.
Configure Response Encoding
Use utf8 (default) or other supported encodings.
Review API Response
Check statusCode and body for success or failure messages.
Visit the tutorial for more information.
The On App Event Component triggers when an event is received from an external application, allowing automation workflows to respond dynamically to incoming app events.
This component listens for app events and processes incoming data, making it ideal for integrating external systems and event-driven workflows.
Registers Webhook: Subscribes to the specified event using the Appmixer API.
Receives Event Data: Waits for an event from the external system.
Processes Data:
Parses the received JSON payload.
Extracts structured data based on eventDataExample.
Emits Structured Output: Sends the processed event data to the out port.
Webhook-Based Execution: Efficiently listens for app events using webhooks.
Dynamic Event Handling: Supports various app events based on configurations.
Ideal for Integrations: Enables event-driven automation between external apps and workflows.
This component is essential for workflows requiring real-time event processing and seamless app integration.
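For instance, a hypothetical eventDataExample for a "user.registered" event could be the following JSON string (entirely illustrative):

// Hypothetical eventDataExample value: a JSON string describing the expected payload shape.
const eventDataExample = JSON.stringify({
    userId: '12345',
    email: 'user@example.com',
    plan: 'pro'
});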
In order to offer the Azure Cognitive Services components to your customers, you have to set an Azure key and endpoint through the Backoffice first.
expression1 - expression5
expression
Conditional expressions for respective outputs (out1 to out5).
else
default
Messages that do not match any condition are sent here.
=
Equal to
!=
Not equal to
>
Greater than
>=
Greater than or equal to
<
Less than
<=
Less than or equal to
%
Modulo operation (divisibility check)
empty
Checks if the value is empty
notEmpty
Checks if the value is not empty
contains
Checks if a value contains a substring
range
Checks if a value falls within a specified range
regex
Matches a value against a regular expression
out1 - out5
Message is sent to the first matching output based on conditions.
else
If no conditions are met, the message is sent to this output.
in
Receives individual messages to be aggregated. Each message must contain a correlationId.
count
Specifies the total number of messages expected. Must be provided before aggregation begins.
out
Fires when all expected messages arrive, emitting an array of collected messages.
timeout
Fires when the timeout is reached before all expected messages arrive, providing the partial batch and counts.
items
The array of collected messages.
items
The array of messages that arrived before the timeout.
count
The number of expected messages.
arrived
The number of messages that actually arrived.
onStartRequestUrl
string
URL to subscribe the webhook when the flow starts.
onStartRequestMethod
string
HTTP method to use when subscribing. Options: POST, PUT, GET, PATCH.
onStartRequestHeaders
object
HTTP headers to send when subscribing.
onStartRequestBody
object
Request body data to send when subscribing.
onStopRequestUrl
string
URL to unsubscribe the webhook when the flow stops.
onStopRequestMethod
string
HTTP method to use when unsubscribing. Options: DELETE, POST, PUT, GET, PATCH.
onStopRequestHeaders
object
HTTP headers to send when unsubscribing.
onStopRequestBody
object
Request body data to send when unsubscribing.
immediateResponse
boolean
Whether to send an immediate response when a request is received. Defaults to true.
method
The HTTP method of the incoming request.
data
The body of the incoming request.
query
Query parameters from the request URL.
headers
HTTP headers of the incoming request.
list
array / string
The list to iterate over. If a string is provided, it must be a valid JSON array.
in
Receives the list to iterate over.
item
Emits each item from the list with additional metadata.
done
Fires after all items have been processed, providing the total count and correlation ID.
index
The index of the current item in the list.
value
The value of the current item.
count
Total number of items in the list.
correlationId
Unique identifier for tracking batch processing.
count
The total number of processed items.
correlationId
Unique identifier for tracking batch execution.
expression
object
A structured condition with logical operators (AND, OR) and comparison rules.
=
Checks if two values are equal.
!=
Checks if two values are not equal.
>
Checks if a value is greater than another.
>=
Checks if a value is greater than or equal to another.
<
Checks if a value is less than another.
<=
Checks if a value is less than or equal to another.
%
Checks if a number is divisible by another (modulo).
empty
Checks if a value is empty.
notEmpty
Checks if a value is not empty.
contains
Checks if a value contains another value.
range
Checks if a value falls within a specified range.
regex
Validates if a value matches a given regular expression.
true
Fires if the condition evaluates to true.
false
Fires if the condition evaluates to false.
url
string
Fully qualified URI of the target resource (required).
headers
object
JSON object containing request headers.
body
string / object
Data sent as the request body for updating the resource.
bodyBase64Encode
boolean
If enabled, encodes request body in Base64 format.
responseEncoding
string
Encoding format for the response (default: utf8). Supported values: ascii, utf8, utf16le, base64, latin1, binary, hex, null (binary mode).
body
The response body from the external service.
headers
The response headers.
request
Request metadata.
statusCode
The HTTP status code of the response.
event
string
The name of the event to listen for.
eventDataExample
string
An example JSON structure representing the event data.
out
Emits the received app event data.
data
object
The JSON payload received from the app event.












The Archive component allows you to create a ZIP or TAR archive from a set of files. You can also append files to an existing archive, provided it matches the specified format.
outputFormat
string
✅
The format of the archive file. Options: zip, tar.
fileName
string
✅
The name of the output archive file, including the extension.
files
object
❌
List of files to be added to the archive. Each file is represented by its fileId.
archiveFileID
string
❌
The ID of an existing archive file. If provided, new files will be appended to this archive.
fileId
string
The unique ID of the newly created or updated archive file.
fileName
string
The name of the archive file.
Select Output Format:
Choose between zip or tar as the output format.
Provide File Name:
Specify a name for the archive file, ensuring it includes the appropriate extension (.zip or .tar).
Add Files to Archive:
Upload or select files using their fileId.
If an archiveFileID is provided, new files will be appended to the existing archive.
Processing:
The component processes and creates the archive, returning the generated file’s fileId and fileName.
Batch File Compression: Combine multiple files into a single ZIP or TAR archive.
Incremental Archiving: Append files to an existing archive for ongoing data collection.
Data Backup & Export: Store or transfer multiple files efficiently as a single archive.
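As an illustration, appending two files to an existing ZIP archive could use inputs like these (the IDs are placeholders, and the exact shape of the files list may differ from this sketch):

// Illustrative Archive inputs (file IDs are placeholders).
const archiveInput = {
    outputFormat: 'zip',
    fileName: 'export.zip',
    files: [{ fileId: 'FILE_ID_1' }, { fileId: 'FILE_ID_2' }], // each entry identified by its fileId
    archiveFileID: 'EXISTING_ARCHIVE_ID'                       // omit to create a new archive instead of appending
};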
The Digest Component collects incoming data entries and releases them as a batch based on a threshold, a schedule, or a manual trigger via a webhook. This component is useful for aggregating data over time and sending it at controlled intervals.
threshold
number
Number of entries required to trigger output. If set, output is triggered when this count is reached.
webhookUrl
string
A generated webhook URL that allows manual triggering of output.
minute
string
Specifies at which minutes of the hour the digest should run. Supports cron-like syntax.
hour
string
Specifies at which hours of the day the digest should run. Supports cron-like syntax.
dayMonth
string
Specifies on which days of the month the digest should run. Supports cron-like syntax.
dayWeek
string
Specifies on which days of the week the digest should run. Supports cron-like syntax.
timezone
string
Timezone for scheduling, defaults to GMT.
outputType
string
Format of the output data. Options: first (first entry), array (all entries as an array), object (one entry at a time), file (CSV file).
in
Receives data entries for aggregation.
out
Emits the aggregated data based on the configured condition (threshold, schedule, or webhook trigger).
Set a Threshold (Optional)
Define the threshold parameter to release data when a certain number of entries are reached.
Schedule Batch Processing (Optional)
Configure minute, hour, dayMonth, and dayWeek to process batches at specific times.
Manual Trigger via Webhook (Optional)
Use the webhookUrl to release entries manually at any time.
Select Output Type
Choose how the collected data should be sent (first, array, object, or file).
Supports multiple trigger methods: Data is released either via count (threshold), time (schedule), or manually (webhook).
Flexible scheduling: Uses cron-like syntax for precise control over execution times.
Different output formats: Users can choose between receiving data as an array, a single entry, or a downloadable CSV file.
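For example, to release all collected entries as an array every weekday at 09:00 GMT, the settings could look roughly like this (cron-style values; illustrative only):

// Illustrative Digest settings: every weekday at 09:00 GMT, output as an array.
const digestSettings = {
    minute: '0',
    hour: '9',
    dayMonth: '*',
    dayWeek: '1-5',   // Monday through Friday
    timezone: 'GMT',
    outputType: 'array'
};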
App admin url: https://developer.servicenow.com/dev.do#!/home
Once the app admin account is created and verified you need to create an instance.
It will take a few minutes to complete.
Now your developer instance is ready and accessible: https://dev180380.service-now.com/. Note that when creating a new account, your instance ID (dev180380 in this case) will be different.
To configure webhooks in ServiceNow to receive notifications when a new record is created, follow these steps. This involves creating an Outbound REST Message, setting up a Business Rule to capture record creation, and configuring the webhook.
Navigate to System Web Services > Outbound > REST Message.
Click on the New button to create a new REST Message.
Fill in the fields such as:
Name: e.g., AppmixerNotifications.
Endpoint: https://<YOUR_API_BASE>/plugins/appmixer/servicenow/events for example https://api.appmixer.com/plugins/appmixer/servicenow/events
Click Submit and then open the newly created REST Message.
Click on the HTTP Methods tab and create a new HTTP Method.
Fill in the details such as:
Name: e.g., events.
HTTP Method: POST.
Endpoint: Ensure this is filled with your webhook listener’s URL.
Optionally, add default HTTP Request Headers (e.g., Content-Type) and customize the Request Body or Query Parameters if needed.
Save the HTTP Method.
Navigate to the table for which you want to capture new record creation. For example, if it’s the Incident table, go to Incident > All.
Right-click on the form header and select Configure > Business Rules.
Alternatively, you can navigate to Activity Subscriptions -> Administration -> Business Rules.
Click on the New button to create a new Business Rule.
Set the fields as follows:
Name: e.g., incident events.
Table: Select the appropriate table (e.g., Incident).
When: after, Insert, Update, Delete. This setting will trigger an event in Appmixer whenever a record is inserted, updated, or deleted in the table.
Under Advanced, check the box Advanced to open the script editor.
You need to be a user with admin permissions to perform this step.
In the Script section, add the following code. Please note that AppmixerNotifications must match the name of the Outbound REST Message specified in the previous step, and events should correspond to the event name defined in the Outbound REST Message.
(function executeRule(current, previous /*null when async*/) {

    function serializeGlideRecordToJSON(gr) {
        const obj = {};
        const fields = gr.getFields();
        for (let i = 0; i < fields.size(); i++) {
            const field = fields.get(i);
            const fieldName = field.getName();
            // skipping sys_id and other sensitive fields if necessary
            if (fieldName !== 'sys_id' && fieldName !== 'sys_updated_on' && fieldName !== 'sys_created_on') {
                obj[fieldName] = gr.getValue(fieldName);
            }
        }
        return obj;
    }

    const operation = current.operation();

    // Uncomment this line for tracing,
    // messages can be seen in System Logs > System Log > All
    //gs.info("Webhook Rule Fired for " + gs.getProperty('instance_name'));

    const instance = gs.getProperty('instance_name');
    const tableName = current.getTableName();

    // Make sure this matches the name of your Outbound REST Message:
    // `AppmixerNotifications` is the name of the Outbound REST Message,
    // `events` is the name of the HTTP method within the `AppmixerNotifications` REST Message mapped to the Appmixer plugin url.
    const rq = new sn_ws.RESTMessageV2('AppmixerNotifications', 'events');

    const requestBody = {
        'type': instance + '.' + tableName + '.' + operation,
        'data': serializeGlideRecordToJSON(current)
    };
    rq.setRequestBody(JSON.stringify(requestBody));
    rq.execute();

})(current, previous);

To see messages triggered by the Business Rule, go to System Logs > System Log > All.
To see messages triggered by the sn_ws.RESTMessageV2 within the Business Rule, go to System Logs > Outbound HTTP Requests
Follow steps on: https://www.servicenow.com/docs/bundle/zurich-platform-security/page/integrate/authentication/task/configure-api-key.html
After that is done, you should have:
Created Inbound Authentication Profile
Created REST API Key
Created REST API Access Policy
Main requirements in the setup:
Inbound Authentication Profile
The Auth Parameter should be set to Type: Auth Header (Parameter Name: x-sn-apikey)
REST API Access Policy
All checkboxes except for Global and Apply to all versions should be checked
The Inbound authentication profile table should have these two Authentication profiles:
API Key
On-call subscription basic authentication
The names of the authentication profiles may differ if you named them differently during creation. The main point is that this REST API Access Policy must work with both API Key AND Basic Authentication; otherwise you will receive a 401 error.
The AWS SNS Connector allows you to send SMS messages using Amazon Simple Notification Service (SNS). However, there are certain limitations and prerequisites that must be met for successful message delivery.
Before using the AWS SNS Connector to send SMS messages, ensure the following:
SNS Sandbox Registration:
If your AWS account is in SNS SMS Sandbox Mode, SMS messages can only be sent to verified phone numbers added in the AWS Console.
To register a phone number, add it as a verified destination in the AWS SNS console.
Region Compatibility:
The SNS SMS service may be region-dependent. Ensure the selected region supports SNS SMS services.
Messages might only be delivered if the selected region matches your AWS account’s region.
Spending Limit:
AWS imposes a default spending limit of $1 USD for SMS.
If the limit is reached, no SMS will be sent.
Check and increase the limit in the AWS SNS console under SMS settings.
IAM Permissions:
To enable SMS delivery logs, you must have the necessary IAM permissions.
Without proper permissions, viewing SMS failures or debugging will not be possible.
Check IAM policies and request permissions if needed.
When an SMS is published via SNS, a 200 OK response is returned.
However, this only confirms that SNS received the request. It does not confirm message delivery.
To track the status of an SMS:
Enable CloudWatch Logs for SNS SMS (requires permissions).
Use AWS SNS Delivery Status Logs to check failures.
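For reference, publishing an SMS directly with the AWS SDK for JavaScript v3 looks roughly like the sketch below (standard SDK usage, not the connector's internal code; the region and phone number are placeholders). It also illustrates why a MessageId alone does not prove delivery:

// Sketch: publishing an SMS with the AWS SDK for JavaScript v3 (credentials come from the environment).
import { SNSClient, PublishCommand } from '@aws-sdk/client-sns';

const sns = new SNSClient({ region: 'us-east-1' }); // must be a region that supports SNS SMS

const result = await sns.send(new PublishCommand({
    PhoneNumber: '+15555550100', // must be a verified number while the account is in the SMS sandbox
    Message: 'Hello from SNS'
}));

// A MessageId only confirms that SNS accepted the request, not that the SMS was delivered.
console.log(result.MessageId);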
Verify Phone Number:
Check if the recipient's number is added in the AWS SNS SMS sandbox.
Check Spend Limit:
Go to AWS SNS Console → Text Messaging (SMS) → Spend Limit.
Enable Logs for Debugging:
Navigate to AWS CloudWatch Logs and enable SNS delivery status logs.
Ensure IAM permissions allow access to logs.
Confirm Region Settings:
Use a region that supports SNS SMS delivery.
The AWS SNS Connector itself functions correctly. However, message delivery depends on AWS account settings, sandbox verification, spending limits, region settings, and logging permissions. By following the prerequisites and debugging steps above, users can ensure successful SMS delivery through AWS SNS.
For further assistance, consult AWS support or refer to the official SNS documentation.
Sign in to your Figma account at
Navigate to the
You need a Figma account to create applications (free account works)
⚠️ Note: Some API features require a Figma Professional or Organization plan.
Access Developer Settings:
Go to Account Settings → Connected apps
Or visit: https://www.figma.com/developers/apps
Create New App:
Click "Create new app"
Fill in the application details:
App name (e.g., "Appmixer Design Integration")
App description - Describe your integration's purpose
App website - Your organization's website (optional)
OAuth Configuration:
In your app settings, configure OAuth details
Set your callback URL for OAuth flow
Set Callback URL:
Add your OAuth redirect URI in the "Callback URL" field
For Appmixer, set the redirect URI to:
Example:
OAuth Scopes:
Figma OAuth provides access to:
File access - Read and write files
Team access - Access team information
Webhook access - Register and manage webhooks
After creating your application, you'll find:
Client ID - Your OAuth application identifier
Client Secret - Your OAuth application secret
⚠️ Important: Keep your Client Secret secure and never expose it in client-side code!
For server-to-server integrations, you can use personal access tokens:
Generate Personal Access Token:
Go to Account Settings → Personal access tokens
Click "Create new token"
Give it a descriptive name
Select appropriate scopes:
File content - Read file structure and content
File comments - Read and write comments
File dev resources - Access dev mode resources
Token Security:
Copy the token immediately (it won't be shown again)
Store securely and treat like a password
Tokens inherit your account permissions
Figma supports webhooks for real-time file updates:
Create Webhook:
Use the
POST to https://api.figma.com/v2/webhooks
Webhook Events:
FILE_UPDATE - File content changes
FILE_DELETE - File deletion
FILE_VERSION_UPDATE - New version created
LIBRARY_PUBLISH - Library component published
Webhook Payload Example:
Go to the Appmixer BackOffice -> Configuration.
Add new configuration: acme:figma.
Add your clientId and clientSecret keys with values from Figma.
Rate Limits:
Personal tokens: 1000 requests per hour
OAuth apps: 1000 requests per hour per user
Burst limit: 120 requests per minute
File access: Additional limits on large files
⚠️ File Access:
OAuth apps can only access files the user has permission to view
Team files require appropriate team membership
Private files require explicit sharing
⚠️ API Limitations:
Read-only API (cannot modify file content via API)
Comments and dev resources can be modified
File exports have size and complexity limits
⚠️ Webhook Delivery:
Webhooks may be delayed during high traffic
Implement proper retry logic for failed deliveries
Validate webhook signatures for security
Sign in to your Square account at
Navigate to the
Create a developer account if you don't have one already
⚠️ Note: You need a Square business account to access production features. Sandbox is available for development.
Create a new application:
Go to
Click "Create your first application" or "+" to add a new app
Fill in the application details:
Application Name (e.g., "My Payment Integration")
Application Description - Describe your integration's purpose
Choose Environment:
Sandbox - For development and testing
Production - For live transactions with real money
Navigate to OAuth Tab:
In your application dashboard, click on "OAuth"
Configure OAuth settings for secure authentication
Set Redirect URIs:
Add your callback URL in the "Redirect URL" field
For Appmixer, set the redirect URI to:
Example:
Configure Permissions:
Select the Square API permissions your app needs:
PAYMENTS_READ - Read payment information
PAYMENTS_WRITE - Process payments
ORDERS_READ - Read order information
ORDERS_WRITE - Create and update orders
CUSTOMERS_READ - Read customer profiles
CUSTOMERS_WRITE - Create and update customers
INVENTORY_READ - Read inventory information
ITEMS_READ - Read catalog items
After configuring your application, you'll find your API credentials:
Application ID - Your OAuth application identifier
Application Secret - Your OAuth application secret
Access Token - For server-to-server API calls (if using personal access token)
⚠️ Important: Keep your Application Secret secure and never expose it in client-side code!
If your integration needs real-time updates:
Navigate to Webhooks:
In your application dashboard, click on "Webhooks"
Click "Add Endpoint"
Configure Webhook Settings:
Notification URL: Your endpoint to receive webhook notifications
Events: Select events you want to subscribe to:
payment.created
payment.updated
order.created
order.updated
customer.created
customer.updated
Webhook Security:
Square signs webhook requests with a signature
Use the provided webhook signature key to verify authenticity
Go to the Appmixer BackOffice -> Configuration.
Add new configuration: acme:square.
Add your clientId (Application ID) and clientSecret (Application Secret) keys with values.
Test OAuth Flow:
Use your Application ID to initiate OAuth authorization
Square authorization URL: https://connect.squareup.com/oauth2/authorize
Required parameters: client_id, redirect_uri, response_type=code, scope
API Testing:
Square API base URLs:
Sandbox: https://connect.squareupsandbox.com/
Production: https://connect.squareup.com/
Include Authorization: Bearer <access_token> header in requests
Test with endpoints like /v2/locations to verify authentication
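A quick way to confirm that a token works against the sandbox is a call like the one below (plain Node.js; the token is a placeholder):

// Sketch: verifying a Square sandbox access token by listing locations.
const response = await fetch('https://connect.squareupsandbox.com/v2/locations', {
    headers: { Authorization: 'Bearer YOUR_SANDBOX_ACCESS_TOKEN' }
});

console.log(response.status);       // 200 indicates the token is valid
console.log(await response.json()); // locations available to the authorized account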
Rate Limits:
API calls: 10 requests per second per access token
Webhooks: No specific rate limits, but implement proper handling
Payment processing: Additional limits may apply based on your Square account
Follow the next link:
You need to register one more URL in order to use the OneDrive Javascript file picker:
And select both of these options:
And save the changes:
The next step is creating clientId and clientSecret:
Unfortunately, the secret has an expiration date; you will have to create a new one within 2 years:
Use the furthest date possible:
You will get a new secret, copy the Value into the clipboard:
This Value is the clientSecret. The Appmixer clientId is the Application (client) ID, not the Secret ID. These values need to be added to the Backoffice:
And add new keys:
The clientId is the _Application (client) ID f_rom the main page:
Then go to the API permissions section:
Choose Microsoft Graph:
Delegated permissions:
Type-in Files:
And select these two:
Then type-in offline and select this permission:
The final set of permissions:
To enable custom tenant functionality for Microsoft connections, follow these steps:
Navigate to your Appmixer admin panel (back office) and go to Connector Configuration.
Find the "appmixer:microsoft" connector in the list
If it doesn't exist, add it by clicking the Add Configuration button
Click on "Show Configuration Values"
Click "Add Item" to create a new configuration entry
Set the following values:
Key: microsoftTenant
Value: true
Click Confirm to save the configuration
Note: This configuration enables the tenant field in the Microsoft connection interface.
Return to your flow and locate a Microsoft connector
Click "Connect account"
You should now see a "Tenant" field in the connection form
Enter your Tenant ID in the field
Custom Tenant ID: Enter your organization's specific tenant identifier
Default Behavior: Enter common to use Microsoft's default multi-tenant behavior
💡 Tip: If you're unsure about your Tenant ID, you can find it in the Azure Active Directory overview page in the Azure portal.
Simply remove the "microsoftTenant" from the Configuration Values or set its value to false
Go to and create a new app.
Name your application and select a workspace.
Then you receive your clientID, clientSecret and signingSecret.
After that, create an App-Level Token with the following scope: authorizations:read. This token is used to verify the user's authorization for incoming Events API requests from Slack.
Copy and paste those values into the Backoffice.
clientID is the Client ID from the Slack app.
clientSecret is the Client Secret from the Slack app.
authToken is the App-Level Token from the Slack app.
signingSecret is the Signing Secret from the Slack app (just under Client Secret).
The next thing is adding the redirect URL and scopes.
Example redirect URL: https://api.YOUR_SUBDOMAIN.appmixer.cloud/auth/slack/callback
In order to be able to send messages as bot to channels, you have to add these scopes in the OAuth & Permissions section, specifically in the Bot Token Scopes section.
In order to be able to work with channels and groups as user, you have to add these scopes in the OAuth & Permissions section, specifically in the User Token Scopes section.
In order to use the application across workspaces, you have to distribute it.
Some components (New Channel Message or New User) use the Slack Events API. In order to use these components in Appmixer, you have to register Appmixer's Slack URI in Slack first.
Go to then choose your Appmixer app.
Go to your app settings ()
The Request URL has to point to your Appmixer backend server. The rest of the URL /plugins/appmixer/slack/events has to remain like this. Your Appmixer backend server has to be running before you enter the URL. Slack will immediately try to connect to that URL.
Then you need to subscribe to these events:
message.channels, message.groups for the New Channel Message component
team_join for the New User component
Don't forget to hit Save Changes:
You can now use the appmixer.slack.list.NewChannelMessageRT component in a flow.
In Appmixer 4.2 the Slack module was upgraded. We removed all the legacy API calls and switched to the newest Slack API. For some time, it was possible to use the Appmixer Slack module with both legacy Slack apps and the new Slack apps. But the legacy apps are deprecated and will retire in August 2021. If you use a legacy Slack app, you either have to create a new Slack app or migrate the existing one.
In order to migrate your legacy Slack app, go to the OAuth & Permissions section.
Then scroll down to Scopes and hit the Update Scopes button.
Ignore the first page, because that's all about scopes for bots.
Scroll all the way down and hit continue. You get to a second page with user scopes.
Select channels:read, channels:history, channels:write, groups:read, groups:write, groups:history, chat:write and users:read.
Verify the scopes and migrate the app.
And confirm. The Appmixer Slack module since version 4.2.1 is already updated and ready for the new Slack apps.
Flows that use the SendPrivateChannelMessage component won't work. The component will appear unauthenticated. The reason is a different OAuth scope in the new Slack app version. In this case, the user has to authenticate the component again.
Sign in to your Strava account at
Navigate to
You need an existing Strava athlete account to create API applications
⚠️ Note: API applications are tied to your Strava athlete account.
Create a new application:
Click "Create & Manage Your App"
Fill in the application details:
Application Name (e.g., "My Fitness Integration")
Category - Select the most appropriate category for your app
Club - Optional, select a club if applicable
Website - Your application's website URL
Application Description - Describe what your integration does
Application Icon:
Upload an icon for your application (optional but recommended)
Supported formats: PNG, JPG (recommended size: 124x124 pixels)
Authorization Callback Domain:
Enter your callback domain in the "Authorization Callback Domain" field
For Appmixer, use your API domain without the full path:
Example:
OAuth Redirect URI:
Strava will construct the full callback URL
The complete redirect URI will be:
Example:
After creating your application, you'll see your API credentials:
Client ID - Your application's unique identifier
Client Secret - Your application's secret key
⚠️ Important: Keep your Client Secret secure and never expose it in client-side code!
Strava uses the following OAuth scopes:
read - Read public profile information
read_all - Read all profile information (including private activities)
profile:read_all - Read all profile information
profile:write - Update profile information
activity:read - Read public activities
activity:read_all - Read all activities (including private)
activity:write - Create and update activities
⚠️ Note: Strava has strict guidelines about data usage; ensure you comply with them.
Go to the Appmixer BackOffice -> Configuration.
Add new configuration: acme:strava.
Add your clientId and clientSecret keys with values.
Rate Limits:
Default: 100 requests per 15 minutes, 1000 requests per day
Higher limits: Available upon request for approved applications
Rate limit headers are included in API responses
For Public Applications:
Submit for Review:
Applications requesting higher rate limits need approval
Provide detailed information about your use case
Include screenshots and documentation
Compliance Requirements:
Follow Strava's
Implement proper data handling and privacy measures
Display Strava attribution where required
SMS not received
Phone number not verified in sandbox
Add number to SNS sandbox
200 OK but no SMS
No balance in the AWS SNS SMS spend limit
Increase spending limit in AWS console
SMS delivery failure
Region mismatch
Use the correct region for SMS service
Unable to view logs
Insufficient IAM permissions
Request permissions to enable CloudWatch logs
Figma OAuth redirect URI: https://[YOUR_API_BASE]/auth/figma/callback (e.g., https://api.appmixer.com/auth/figma/callback)
Figma webhook payload example:
{
  "event_type": "FILE_UPDATE",
  "file_key": "ABC123",
  "timestamp": "2023-01-01T12:00:00Z"
}

Square OAuth redirect URI: https://[YOUR_API_BASE]/auth/square/callback (e.g., https://api.appmixer.com/auth/square/callback)
Strava authorization callback domain: [YOUR_API_DOMAIN] (e.g., api.appmixer.com). Full OAuth redirect URI: https://[YOUR_API_DOMAIN]/auth/strava/callback (e.g., https://api.appmixer.com/auth/strava/callback)

chat:write
Send messages as @BOT
chat:write.customize
Send messages as @BOT with a customized username and avatar
chat:write.public
Send messages to channels @BOT isn't a member of

Create or upgrade to a Pinterest Business account at Pinterest Business
Navigate to the Pinterest Developers Portal
Sign in with your Pinterest Business account credentials
⚠️ Note: You need a Pinterest Business account to access the Pinterest API. Personal accounts cannot create apps.
Create a new app:
Go to My Apps in the developer portal
Click "Create app"
Fill in the application details:
App name (e.g., "My Pinterest Integration")
App description - Describe what your integration does
Website URL - Your application's website
Platform - Select "Web" for most integrations
App Configuration:
App type: Choose between "Public" or "Internal"
Linked Business Account: Link to your Pinterest Business account
App Purpose: Specify how you'll use Pinterest data
Navigate to App Settings:
In your app dashboard, go to "OAuth" or "Settings" section
Configure OAuth redirect URIs
Set Redirect URIs:
Add your callback URL in the "Redirect URIs" field
For Appmixer, set the redirect URI to:
https://[YOUR_API_BASE]/auth/pinterest/callback
Example: https://api.appmixer.com/auth/pinterest/callback
Configure Scopes:
Select the permissions your app needs:
read_public - Read public boards and pins
write_public - Create public boards and pins
read_secret - Read secret boards and pins (if authorized)
write_secret - Create secret boards and pins (if authorized)
read_users - Read user profile information
write_users - Update user profile information
After configuring your app, you'll find your API credentials:
App ID (Client ID) - Your application's unique identifier
App secret (Client Secret) - Your application's secret key
⚠️ Important: Keep your App Secret secure and never expose it in client-side code!
For Public Apps:
Submit for Review:
Public apps require Pinterest review before going live
Provide detailed use case description
Include screenshots and flow documentation
Specify which scopes you need and why
Review Requirements:
Your app must comply with Pinterest's API policies
Demonstrate legitimate use of requested permissions
Show how user data will be handled responsibly
Go to the Appmixer BackOffice -> Configuration.
Add new configuration: acme:pinterest.
Add your clientId (App ID) and clientSecret (App Secret) keys with values.
Rate Limits:
Pinterest implements rate limiting per user and per app
Specific limits depend on your app's approval status and usage
Rate limit information is included in API response headers
⚠️ Business Account Required:
Only Pinterest Business accounts can create and manage apps
Personal accounts cannot access the Pinterest API
Convert your personal account to business if needed
⚠️ Content Policy:
Pinterest has strict content policies and community guidelines
All programmatically created content must comply with these rules
Violations can result in API access suspension
⚠️ Token Management:
Access tokens have expiration times
Implement proper token refresh logic
Store tokens securely and never expose them
⚠️ Attribution:
Display appropriate Pinterest branding when required
Follow Pinterest's brand guidelines
Include proper attribution for Pinterest content
Sign in to your Intercom account at Intercom
Navigate to the Intercom Developer Hub
Go to Your Apps section in the Developer Hub
⚠️ Note: You need an existing Intercom workspace to create developer apps.
Create a new app:
Click "New app" or "Create app"
Provide an App name (e.g., "My Integration")
Add an App description explaining what your integration does
Select your Workspace from the dropdown
App Configuration:
Choose "Internal integration" for private use within your workspace
Or choose "Public app" if you plan to distribute it publicly
Navigate to Authentication:
In your app dashboard, go to the "Authentication" section
Select "OAuth" as the authentication method
Set up OAuth Configuration:
Redirect URLs: Add your callback URL
Scopes: Select the permissions your app needs:
read_conversations - Read conversations and messages
write_conversations - Create and update conversations
read_contacts - Read contact information
write_contacts - Create and update contacts
read_teams - Read team information
And other scopes as needed for your integration
👉 For Appmixer, set the redirect URI to:
https://[YOUR_API_BASE]/auth/intercom/callback
Example: https://api.appmixer.com/auth/intercom/callback
After configuring OAuth, you'll find your credentials in the Authentication section:
Client ID - Found in the OAuth section
Client Secret - Found in the OAuth section
⚠️ Important: Keep your Client Secret secure and never expose it in client-side code!
If your integration needs real-time updates:
Go to Webhooks section:
Navigate to "Webhooks" in your app dashboard
Click "Create webhook"
Webhook Configuration:
Webhook URL: Your endpoint to receive webhook notifications
Topics: Select events you want to subscribe to:
conversation.created
conversation.updated
contact.created
contact.updated
And others as needed
Webhook Security:
Note the Webhook Secret for verifying webhook signatures
Use this to validate that webhooks are coming from Intercom
Go to the Appmixer BackOffice -> Configuration.
Add new configuration: acme:intercom.
Add your clientId and clientSecret keys with values.
Install your app:
In the app dashboard, click "Install app"
This will install the app to your selected workspace
Generate Access Token (for testing):
Go to "Configure" -> "Authentication"
Generate a test access token if needed for development
API Testing:
Use the generated credentials to test API calls
Intercom API base URL: https://api.intercom.io/
Include Authorization: Bearer <access_token> header in requests
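For example, a quick authenticated test call could look like this (plain Node.js; the token is a placeholder, and the /me endpoint returns the app and admin associated with the token):

// Sketch: a simple authenticated call to the Intercom API.
const response = await fetch('https://api.intercom.io/me', {
    headers: {
        Authorization: 'Bearer YOUR_ACCESS_TOKEN',
        'Intercom-Version': '2.10', // pin the API version, as recommended above
        Accept: 'application/json'
    }
});

console.log(response.status, await response.json());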
When ready for production:
App Review (for public apps):
Submit your app for Intercom's review process
Provide detailed information about your app's functionality
Include privacy policy and terms of service links
App Store Listing (optional):
If approved, your app can be listed in the Intercom App Store
Configure app store listing with screenshots and descriptions
⚠️ Rate Limits:
Intercom has rate limits based on your plan
Standard: 1,000 requests per minute
See Intercom Rate Limits for details
⚠️ API Versions:
Intercom uses API versioning via headers
Always specify the API version: Intercom-Version: 2.10
⚠️ Scopes:
Request only the minimum scopes needed for your integration
Users will see all requested permissions during OAuth flow
Sign in to your Linear workspace at Linear
You need admin permissions in your Linear workspace to create applications
Navigate to your workspace settings
⚠️ Note: Only workspace administrators can create OAuth applications in Linear.
Navigate to API Settings:
Go to Settings → Account → API
Or visit: https://linear.app/[your-workspace]/settings/api
Create New Application:
Click "Create new" under OAuth applications
Fill in the application details:
Application name (e.g., "Appmixer Integration")
Description - Brief description of your integration
Website URL - Your organization's website
Callback URLs - OAuth redirect endpoints
Set Callback URLs:
Add your OAuth callback URL in the "Callback URLs" field
For Appmixer, set the redirect URI to:
https://[YOUR_API_BASE]/auth/linear/callback
Example: https://api.appmixer.com/auth/linear/callback
Configure Scopes:
Linear uses specific scopes to control access:
read - Read access to issues, projects, teams, and users
write - Write access to create and update issues and comments
admin - Administrative access (use with caution)
Application Details:
Application name: Your integration name
Description: Purpose of your integration
Website: Your application or company website
After creating your application, you'll receive:
Client ID - Your OAuth application identifier
Client Secret - Your OAuth application secret (keep this secure!)
⚠️ Important: Store your Client Secret securely and never expose it in client-side code!
Linear supports webhooks for real-time updates:
Create Webhook:
In API settings, go to "Webhooks" section
Click "Create webhook"
Configure Webhook Settings:
URL: Your endpoint to receive webhook notifications
Label: Descriptive name for the webhook
Secret: Optional signing secret for verification
Select Resources:
Choose which resources trigger webhooks:
Issue - Issue creation, updates, deletion
Comment - Comment creation and updates
Project - Project changes
ProjectUpdate - Project status updates
For server-to-server integrations, you can use personal API keys:
Generate Personal API Key:
Go to Settings → Account → API
Click "Create key" under Personal API keys
Add a descriptive label
Copy the generated key immediately (it won't be shown again)
⚠️ Note: Personal API keys have the same permissions as your user account.
Go to the Appmixer BackOffice -> Configuration.
Add new configuration: acme:linear.
Add your clientId and clientSecret keys with values from Linear.
Test OAuth Flow:
Linear authorization URL: https://linear.app/oauth/authorize
Required parameters: client_id, redirect_uri, response_type=code, scope
API Testing:
Linear GraphQL API endpoint: https://api.linear.app/graphql
Include Authorization: Bearer <access_token> header
Test with a simple query like fetching the viewer:
{
viewer {
id
name
email
}
}
Linear uses GraphQL exclusively:
Common Queries:
Issues: issues, issue
Teams: teams, team
Projects: projects, project
Users: users, user
Common Mutations:
Create Issue: issueCreate
Update Issue: issueUpdate
Create Comment: commentCreate
Example Issue Query:
{
issues(first: 10) {
nodes {
id
title
description
state {
name
}
assignee {
name
}
}
}
}
Rate Limits:
Complexity-based limiting: Each query has a complexity score
Maximum complexity: 1000 points per query
Rate limit: 2000 requests per hour per access token
Burst limit: 120 requests per minute
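Putting the pieces above together, the viewer query can be sent to the GraphQL endpoint like this (plain Node.js; the access token is a placeholder):

// Sketch: sending the viewer query to Linear's GraphQL endpoint.
const response = await fetch('https://api.linear.app/graphql', {
    method: 'POST',
    headers: {
        Authorization: 'Bearer YOUR_ACCESS_TOKEN',
        'Content-Type': 'application/json'
    },
    body: JSON.stringify({ query: '{ viewer { id name email } }' })
});

console.log(await response.json()); // { data: { viewer: { id, name, email } } }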
In 2020, Google introduced application verification for apps that use Google APIs to access Google users' data. This makes it impossible to use the Appmixer Google modules in production without going through the verification process, which has to be done by you.
Most of the Google modules need what Google marks as Sensitive or Restricted Scope in order to work. For example, the Gmail module needs https://www.googleapis.com/auth/gmail.compose scope to create a new email and https://www.googleapis.com/auth/gmail.readonly to get new emails.
Originally, the Gmail module was the only one affected by this Google policy, but new scopes keep being marked as restricted. Later, Google shut down the older Spreadsheets API, and with the new API the only way to get a list of a user's spreadsheets is to use the Drive API and the restricted scope https://www.googleapis.com/auth/drive.metadata.readonly.
The most recent module affected by this policy (so far) is the Drive module.
Go to the Google developer console at https://console.developers.google.com/ and create a new project.
When the project is created, add some APIs.
Add the Gmail API.
The Google Sheets API.
The Google Drive API.
The Google Calendar API.
The Analytics API.
The YouTube API.
The next step is the OAuth consent screen.
User Type - Internal vs External. The Internal user type allows you to use the Google modules without the app verification process mentioned at the beginning of this page. However, the modules will work only for accounts within your organization.
We are going to choose External for the purpose of this tutorial.
On the next page, leave the scopes empty.
You can add test users, but you can do this later. Only users added as test users will be able to authenticate!
Here is the OAuth consent screen configured. The app is in testing mode and is external (for users outside your organization). No more than 100 accounts (the user cap) can be used with this application.
We need a client ID and a client secret.
Choose the Web application type and give it a name and authorize redirect URI.
For the purpose of this tutorial, we are going to use localhost:2200, which is the default port for the Appmixer API when running on localhost. If you are creating an OAuth application for your production/QA environment, the URI of your server goes here. The /auth/google/callback suffix remains the same.
You will get your Client ID and Client Secret.
They have to be inserted into the Appmixer. You can use the Backoffice to do that.
By default, the callback URL passed to Google is in the form <your-api-url>/auth/appmixer:google/callback. However, in some cases you may need to pass a different callback URL. You can do this by inserting a callbackUrl value in the service configuration:
In order to use Google API Webhooks, you have to verify your domain ownership. More about domain verification can be found here. If you use the Appmixer Self-Managed package and you run Appmixer on your own servers using your custom domain, you can use CNAME records to verify your domain ownership. Otherwise, if you are an Appmixer Hosted customer, you can use the HTML file method to verify your Appmixer tenant domain (api.YOUR_TENANT.appmixer.cloud) at Google.
First, open the Google developers console and your Appmixer project and follow the instructions in the next picture.
Add your Appmixer tenant API URL (api.YOUR_TENANT.appmixer.cloud) and continue to verify the domain ownership. Click 'Take me there' and then 'Add a property'.
Again, use api.YOUR_TENANT.appmixer.cloud. Then download the HTML verification file.
After you download your HTML verification file, upload it via the Appmixer Backoffice interface to the Public Files section on the left. When you are done you should see your file listed:
Click the 'Verify' button to finish the verification process.
Now, you can go back to the developer console and add the api.YOUR_TENANT.appmixer.cloud domain once more. But this time, because it's already verified, it will appear in the list of your domains.
The new domain has to be added to the OAuth consent screen as well.
