PostgreSQL is a go-to database for managing structured data like users, payments, and analytics. But modern applications rarely rely on just one system. You might store flexible user sessions in MongoDB, run localized operations in MySQL, use MS SQL for legacy systems, or manage separate PostgreSQL instances for different regions or tenants.
Traditionally, pulling related data from multiple databases required custom backend logic, complicated ETL workflows, and multiple API calls, all just to stitch records together manually.
With API Maker’s Find & Join feature, you skip all that. It lets you join data across PostgreSQL and any other supported database using simple, JSON-based REST API queries. You get clean, unified responses, with no sync scripts, no backend coding, and no hassle.
PostgreSQL is trusted for its performance, data integrity, and advanced query features. It’s a favorite for everything from core business logic to real-time analytics. But in real-world architectures, it rarely stands alone. Many teams combine PostgreSQL with databases like MongoDB for document-based data, MySQL for legacy systems, or SQL Server for enterprise integrations.
The problem? Joining data across these systems typically requires custom APIs, data syncing workflows, or a backend service layer, all of which are time-consuming, error-prone, and difficult to maintain.
API Maker’s Find & Join feature solves this problem. It lets you connect PostgreSQL with any other supported database, SQL or NoSQL, and run powerful cross-database queries using a consistent, schema-based API interface. You can filter, join, and retrieve deeply nested fields from different systems in one clean request, all with a simple JSON query, no code needed.
This guide walks you through using PostgreSQL as the source for cross-database joins. For a full overview of how Find & Join works across any setup, check out the Find & Join Feature page.
To perform cross-database joins with PostgreSQL as the source, you need to define the relationships directly within your API Maker schema. This schema configuration tells API Maker how different tables and databases relate to one another, allowing it to resolve joins dynamically at runtime.
For example, if your PostgreSQL `orders` table references a `customers` collection in MongoDB (or a table in MySQL, SQL Server, etc.), you can describe that relationship in your schema without writing backend logic. API Maker takes care of resolving those connections under the hood, regardless of the database engine involved.
Once the schema is configured, you can filter, join, and retrieve nested fields across all connected databases in a single request.
For full details on schema configuration, refer to the Schema Setup Documentation.
Let’s say you have an `orders` table in PostgreSQL, and you want each order to reference a customer record stored in another database, like MongoDB, MySQL, or even another PostgreSQL instance.
With API Maker, you can declare this relationship directly in your schema, no code, no sync scripts. Here's a sample schema configuration for connecting `customer_id` in `orders` to the `_id` field in a MongoDB `customers` collection:
customer_id: <ISchemaProperty> {
    __type: EType.string,
    instance: "Mongo",
    database: "crm_db",
    table: "customers",
    column: "_id"
}
Here’s what each property does:

- `__type`: Matches the type of your foreign key (`customer_id`) with the target field (`_id`)
- `instance`: Specifies the external database engine: `Mongo`, `MySQL`, `Postgres`, etc.
- `database`: The name of the connected database
- `table`: The target table or collection you're referencing
- `column`: The field you're joining against in the external system

Once this relationship is defined, you can reference nested fields like `customer_id.name` or `customer_id.status` directly in API responses.

This works with all supported database engines, letting you use PostgreSQL as the source while joining data from MongoDB, MySQL, SQL Server, or any other integrated system.
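To make this concrete, here is a small Python sketch of how a client might build that kind of dot-notation filter. It follows the query syntax shown elsewhere in this guide; the `/api/orders` endpoint and the `"active"` status value are illustrative assumptions, not part of any real schema:

```python
import json
from urllib.parse import urlencode

# Filter orders by a field that lives in the joined MongoDB
# customers collection, using dot notation on the foreign key.
# The field path and value here are illustrative.
find_filter = {"customer_id.status": "active"}

# API Maker accepts the filter as JSON in the `find` query
# parameter; urlencode handles the URL escaping for us.
query_string = urlencode({"find": json.dumps(find_filter)})
url = f"/api/orders?{query_string}"
print(url)
```

Encoding the JSON filter this way keeps the request valid even when the filter contains braces, quotes, or spaces.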
Imagine a setup where:

- PostgreSQL has a `projects` table
- MongoDB has a `teams` collection
- The `projects.team_id` field links to the MongoDB `teams._id` field

With API Maker's Find & Join, you can seamlessly query and filter PostgreSQL records based on data from MongoDB, all via RESTful APIs using JSON.
Let’s say you want to fetch all projects where the associated team (from MongoDB) operates in the EMEA region.
Request:
GET /api/projects?find={ "team_id.region": "EMEA" }
This GET request uses dot notation (`team_id.region`) to apply a condition on a MongoDB field, even though the primary data (projects) comes from PostgreSQL.
Response:
{
  "success": true,
  "statusCode": 200,
  "data": [
    {
      "id": 301,
      "name": "Cloud Migration",
      "team_id": "64ec9a12e84a6b2c915a19fe",
      "status": "active",
      "start_date": "2025-06-01"
    },
    {
      "id": 302,
      "name": "Infrastructure Upgrade",
      "team_id": "64ec9a12e84a6b2c915a19fe",
      "status": "planning",
      "start_date": "2025-07-15"
    }
  ]
}
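Because the join is resolved server-side, client code only has to parse a single response. Here is a minimal Python sketch that reads the sample payload above; no second query against MongoDB is needed:

```python
import json

# Sample response from the EMEA query above, as returned by the API.
response_text = """
{
  "success": true,
  "statusCode": 200,
  "data": [
    {"id": 301, "name": "Cloud Migration",
     "team_id": "64ec9a12e84a6b2c915a19fe",
     "status": "active", "start_date": "2025-06-01"},
    {"id": 302, "name": "Infrastructure Upgrade",
     "team_id": "64ec9a12e84a6b2c915a19fe",
     "status": "planning", "start_date": "2025-07-15"}
  ]
}
"""

payload = json.loads(response_text)

# The records arrive pre-filtered on the MongoDB side of the join,
# so the client simply reads fields from one unified result set.
project_names = [project["name"] for project in payload["data"]]
print(project_names)
```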
Now let’s say you want the projects (from PostgreSQL) whose engineering teams (from MongoDB) have at least 10 members, sorted by the team's creation date and limited to the three newest results.
Endpoint:
POST /api/projects/query
Content-Type: application/json
Request Body:
{
  "find": {
    "team_id.size": { "$gte": 10 },
    "team_id.department": "engineering"
  },
  "limit": 3,
  "sort": {
    "team_id.created_at": -1
  }
}
Response:
{
  "success": true,
  "statusCode": 200,
  "data": [
    {
      "id": 305,
      "name": "AI Model Deployment",
      "team_id": "64efab45c03a8d22b38fb141",
      "status": "in-progress",
      "start_date": "2025-08-01"
    },
    {
      "id": 303,
      "name": "Platform Refactor",
      "team_id": "64efab45c03a8d22b38fb141",
      "status": "active",
      "start_date": "2025-06-10"
    }
  ]
}
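A Python sketch of how a client might send that POST query. The host in the comment is a placeholder for your own API Maker instance, and the `requests` call is shown only as a hedged example of the wire format:

```python
import json

# The same query body as the example above: filter on joined MongoDB
# team fields, sort by the team's creation date, return three projects.
query = {
    "find": {
        "team_id.size": {"$gte": 10},
        "team_id.department": "engineering",
    },
    "limit": 3,
    "sort": {"team_id.created_at": -1},
}

body = json.dumps(query)

# With the third-party `requests` library installed, the call could
# look like this (the host below is a placeholder, not a real URL):
#   requests.post("https://<your-api-maker-host>/api/projects/query",
#                 data=body,
#                 headers={"Content-Type": "application/json"})
print(body)
```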
These examples show how API Maker eliminates the need for custom backend logic or data stitching. You can filter, sort, and paginate PostgreSQL records using live MongoDB values, with one consistent query pattern across all databases.
Modern applications demand up-to-the-second data, whether for powering live dashboards, enabling fast user support, or triggering business logic like fraud checks. But when PostgreSQL is just one part of a larger system that includes MongoDB, MySQL, or other databases, achieving real-time visibility across them can be a challenge.
That’s where API Maker’s Find & Join comes in.
Instead of syncing databases with ETL pipelines or juggling multiple APIs, you can query data from PostgreSQL and any other connected database, all in a single REST call.
API Maker runs each query on-demand, resolving relationships across systems at runtime, no sync lag, no stale data. Just accurate, real-time information every time your API is called.
When you're joining PostgreSQL with other databases using API Maker, a well-structured schema can significantly improve performance, maintainability, and query efficiency.
Here’s how to get the most out of your setup:
Ensure that your join keys (like `user_id`, `sku`, `order_id`) have the same data type in both PostgreSQL and the external database, whether that’s MongoDB, MySQL, or another SQL engine. Mismatched types (e.g., integer vs. string) can cause joins to fail or return empty results.
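The pitfall is easy to reproduce in plain Python: an integer key never equals its string form, which is exactly why an equality join on mismatched types comes back empty. A toy in-memory illustration with made-up rows:

```python
# Toy illustration of why mismatched key types break joins.
# The PostgreSQL side stores user_id as an integer, while the
# external system stores it as a string.
pg_rows = [{"user_id": 42, "total": 99.5}]
external_rows = [{"user_id": "42", "plan": "pro"}]

# A naive equality join finds no matches, because 42 != "42".
strict = [
    (p, e) for p in pg_rows for e in external_rows
    if p["user_id"] == e["user_id"]
]

# Normalizing both sides to one type (here, str) fixes the join.
normalized = [
    (p, e) for p in pg_rows for e in external_rows
    if str(p["user_id"]) == str(e["user_id"])
]

print(len(strict), len(normalized))
```

Keeping the declared types aligned in your schema avoids relying on any runtime normalization at all.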
For faster queries, add indexes to the fields you’re using for joins, on both the PostgreSQL side and the external system. API Maker automatically takes advantage of these indexes during live runtime joins.
While deep joins (e.g., across three or more databases or nested relationships) are fully supported, they can slow down response time. For performance-critical APIs, aim to keep population depth to 1–2 levels when possible.
When defining cross-database joins in your API Maker Table Schema, use meaningful naming conventions and inline comments to explain what’s being joined. This is especially helpful when working with multiple databases or team members.
Platforms like Firebase, Supabase, and Appwrite are great for quickly building backends, but when it comes to querying across multiple databases, especially with PostgreSQL as your primary source, they hit clear limitations.
API Maker, on the other hand, is purpose-built for flexible, real-time, cross-database querying. Here’s how it differs:
With API Maker, you can join PostgreSQL with other sources like MongoDB, MySQL, SQL Server, or even another PostgreSQL instance, all in real time, via a single API call.
In contrast, Firebase and Supabase only support their own native storage, while Appwrite doesn’t support joins across databases at all.
API Maker uses a schema-driven approach to define relationships between databases. This means joins are automatically resolved at runtime, no custom backend logic required.
Supabase offers in-database joins within Postgres, but lacks external join support. Firebase and Appwrite aren’t designed for relational logic at all.
Every API Maker endpoint supports joins, filters, sorting, and population across related fields, even when those fields come from different databases.
Supabase's PostgREST only handles Postgres-native joins. Firebase’s Firestore and Appwrite offer basic queries but can’t perform complex filtering or relationship-based lookups.
API Maker lets you mix and match relational data from PostgreSQL with NoSQL sources like MongoDB, giving you a broader, more flexible backend architecture.
By contrast, Supabase and Appwrite are tightly bound to SQL engines, and Firebase is limited to its document model.
Can I keep PostgreSQL as my core database and still join it with other databases?
Yes, and that’s exactly what API Maker is built for. You can keep PostgreSQL as your core database and seamlessly join it with MongoDB, MySQL, Oracle, or even another PostgreSQL instance. No syncing, no migrations, just define the relationship in the schema and start querying.
Can I join data from more than two databases in a single query?
Absolutely. Whether your data lives in 2 or 5 different databases, API Maker lets you query across all of them at once. Define the joins in your schema, and then use a simple REST or POST API to pull in everything you need, all in one response.
How does API Maker keep cross-database results consistent?
Data consistency is handled for you. As long as your join fields (like `user_id`, `order_id`, etc.) are correctly indexed, API Maker automatically resolves the joins at runtime, with accurate, real-time results. No custom checks or backend logic needed.
What if my join fields use different data types?
No worries, API Maker handles type differences for you. For example, you can join a UUID field from PostgreSQL with a string `_id` from MongoDB. The platform normalizes and maps types automatically, so you don’t have to do manual conversions.
Does this work with on-premise or self-hosted PostgreSQL?
Yes. API Maker supports both cloud-hosted and on-premise PostgreSQL instances. As long as the database is accessible over the network, you can integrate it with other data sources in real time, perfect for hybrid or legacy environments.
Does Find & Join improve performance compared to manual stitching?
Definitely. Instead of writing multiple queries or building custom join logic, you get a single, optimized response with all the data you need, already filtered and joined. That means faster response times, fewer API calls, and better performance for frontend apps and mobile clients.