Configure Nodinite Pickup Service Logging Agent for PostgreSQL Integration

PostgreSQL Configuration Example

Effortlessly integrate your enterprise systems by configuring the Nodinite Pickup Logging Service to fetch JSON Log Events directly from your PostgreSQL database. This guide empowers you to set up secure, scalable, and reliable log event collection for your organization, with robust validation and table design for compliance and analytics.

✅ Seamless integration with PostgreSQL for real-time log event collection
✅ Secure and compliant long-term storage of log data
✅ Scalable configuration for enterprise environments
✅ Automated validation and error handling for reliable operations

%% The diagram below illustrates how the Pickup Service interacts with PostgreSQL tables, processes messages, and stores them in the Nodinite Log Database.
graph LR
    subgraph "PostgreSQL"
        roS(fal:fa-database Database Table)
    end
    subgraph "Nodinite instance"
        roPS(fal:fa-truck-pickup Pickup Service)
        roS --> roPS
        roPS --> |Long term storage|roDB(fal:fa-database Log Database)
    end

The diagram above illustrates how the Pickup Service interacts with PostgreSQL tables, processes messages, and stores them in the Nodinite Log Database.

PostgreSQLs

The PostgreSQLs section lets you manage all PostgreSQL database sources for your log events.

You configure these sources in the Settings.json file. The default path is:

C:\Program Files\Nodinite\Logging Agent - Pickup Service\Settings.json
{
  ...
  "PostgreSQLs": [
    {
      "Enabled": true,
      "LogApi": "https://localhost/Nodinite/Dev/LogAPI/",
      "UseConfigurationDatabase": false,
      "ConfigurationDatabaseConnectionString": null,
      "ConnectionString": "Server=name.databases.appdomain.cloud;Port=31902;Database=ibmclouddb;User Id=ibm_cloud_user;Password=yoursecretpassword;SSLMode=Prefer;Trust Server Certificate=true",
      "MaxRowsLimit": 500,
      "ReplaceRules": [
        {
          "Name": "Fix Endpoint real customer id to {customerId}",
          "ReplaceName": false,
          "ReplaceUri": true,
          "Pattern": "/([0-9]{4,})$",
          "Group1Value": "{customerId}"
        }
      ]
    }
  ]
  ...                      
}

PostgreSQLs is an array of PostgreSQL configurations. Each entry defines how the Log Agent connects to and processes messages from a specific PostgreSQL database. This setup ensures your Nodinite JSON Log Events are reliably collected and managed.

| Property | Description | Value Example | Comment |
|---|---|---|---|
| ConnectionString | PostgreSQL connection string | Server=replaceme.databases.appdomain.cloud;Port=31902;Database=ibmclouddb;User Id=ibm_cloud_id;Password=yoursecretpassword;SSLMode=Prefer;Trust Server Certificate=true | |
| MaxRowsLimit | Maximum number of rows to fetch on each loop | 500 (default) | Larger values may impact memory usage |
| Enabled | See the Shared Configuration section for more info | | |
| LogAPI | See the Shared Configuration section for more info | | |
| UseConfigurationDatabase | See the Shared Configuration section for more info | | |
| ConfigurationDatabaseConnectionString | See the Shared Configuration section for more info | | |
| ReplaceRules | See the shared Replace Rules section for more info | | A worked example follows this table |
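
To see what the ReplaceRules entry from the example above does, here is a rough illustration using PostgreSQL's regexp_replace function. The sample URI is an assumed value, and the actual substitution is performed by the Pickup Service against the Log Event, not inside the database.

-- Rough illustration of the replace rule; the sample URI is an assumed value.
SELECT regexp_replace(
    'https://api.example.local/customers/12345', -- assumed endpoint URI
    '/([0-9]{4,})$',                             -- Pattern from the ReplaceRules entry
    '/{customerId}'                              -- mimics substituting the captured digits with Group1Value
);
-- Result: https://api.example.local/customers/{customerId}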

Important

You must restart the Nodinite Pickup Logging Service for configuration changes to take effect.

PostgreSQL table

If you use a PostgreSQL database to store Log Events, your custom logging solution must define the following LogEvents table:

| Column | Datatype | Purpose |
|---|---|---|
| LogEvent | json | Your JSON encoded Log Event goes here |
| Id | bigint | Automatically created identity; the Pickup Service must know which row to flag as invalid if the value in LogEvent is not valid |
| ValidationFailed | boolean | Automatically set to false when inserted; may be set to true during processing if errors are detected |
| ValidationFailedText | text | Updated if errors occur validating the LogEvent during processing |
| Created | timestamp with time zone | Automatically created during insert; useful for troubleshooting |

Info

Invalid Log Events will have ValidationFailed set to true. You must remove these entries manually, for example as sketched below.
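
The statements below are a minimal sketch of that manual cleanup: first review the rows that failed validation together with the reason recorded by the Pickup Service, then delete them. Adapt the predicate and any retention rules to your own housekeeping routines.

-- Review Log Events that failed validation, including the recorded reason
SELECT "Id", "ValidationFailedText", "Created"
FROM public."LogEvents"
WHERE "ValidationFailed" = true;

-- Remove the invalid rows once they have been reviewed
DELETE FROM public."LogEvents"
WHERE "ValidationFailed" = true;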

-- Table: public."LogEvents"

-- DROP TABLE public."LogEvents";

CREATE TABLE public."LogEvents"
(
    "LogEvent" json NOT NULL,
    "Id" bigint NOT NULL GENERATED BY DEFAULT AS IDENTITY ( INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 9223372036854775807 CACHE 1 ),
    "ValidationFailed" boolean NOT NULL DEFAULT false,
    "ValidationFailedText" text NULL,   
    "Created" timestamp with time zone NOT NULL DEFAULT now(),
    CONSTRAINT "LogEvents_pkey" PRIMARY KEY ("Id")
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;

ALTER TABLE public."LogEvents"
    OWNER to admin;

GRANT ALL ON TABLE public."LogEvents" TO usergoeshere;

-- Repeat the grants as needed

Info

Change the value for 'usergoeshere' to your actual user.
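
To show how a custom logging solution feeds this table, here is a minimal INSERT sketch. The JSON body is a placeholder with assumed field names; use the structure of your own Nodinite JSON Log Events. Id, ValidationFailed, and Created are filled in by their column defaults.

-- Minimal sketch: insert a JSON encoded Log Event; the JSON body is a placeholder with assumed field names.
INSERT INTO public."LogEvents" ("LogEvent")
VALUES ('{"LogText": "Order received", "EndPointUri": "https://api.example.local/customers/12345"}'::json);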


Next Step

Configure