
Performance Impact of Custom Logging

Does Custom Logging impact integration performance?

Properly implemented Custom Logging has minimal performance impact (<5ms overhead)—asynchronous logging to intermediate storage decouples log writes from integration processing, preventing latency and throughput degradation.

✅ Minimal Overhead (<5ms)

Asynchronous logging architecture:

  1. Integration code writes JSON Log Event to in-memory queue (<1ms)
  2. Background thread writes to intermediate storage (queue/file/database) (2-4ms, non-blocking)
  3. Pickup Service asynchronously fetches logs (zero impact on integration)

Result: Integration processing continues immediately—no waiting for network calls, disk I/O, or Nodinite HTTP responses.
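
A minimal C# sketch of this pattern, assuming a hypothetical NodiniteLogEvent shape and a file share as the intermediate storage; the in-memory handoff uses System.Threading.Channels and a background task drains the queue:

using System;
using System.IO;
using System.Text.Json;
using System.Threading.Channels;
using System.Threading.Tasks;

// Hypothetical minimal shape of a JSON Log Event; adjust to your actual contract.
public record NodiniteLogEvent(string LogAgentValueId, string EndPointName, string Body);

public sealed class AsyncLogWriter
{
    // Unbounded in-memory queue: the integration thread only pays for an enqueue (<1ms).
    private readonly Channel<NodiniteLogEvent> _queue = Channel.CreateUnbounded<NodiniteLogEvent>();
    private readonly string _folder;

    public AsyncLogWriter(string intermediateStorageFolder)
    {
        _folder = intermediateStorageFolder;
        _ = Task.Run(DrainAsync); // background writer; never blocks the integration
    }

    // Called from integration code; returns immediately.
    public void Log(NodiniteLogEvent logEvent) => _queue.Writer.TryWrite(logEvent);

    // Background task writes each event to intermediate storage (a file share here);
    // the Pickup Service fetches these files later, independently of the integration.
    private async Task DrainAsync()
    {
        await foreach (var logEvent in _queue.Reader.ReadAllAsync())
        {
            var path = Path.Combine(_folder, $"{Guid.NewGuid():N}.json");
            await File.WriteAllTextAsync(path, JsonSerializer.Serialize(logEvent));
        }
    }
}

From integration code, logging is then a single non-blocking call such as writer.Log(new NodiniteLogEvent("OrderService", "Orders.Inbound", orderJson)).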

Example throughput:

  • Without logging: 1,000 orders/minute
  • With async Custom Logging: 995 orders/minute (0.5% impact)

✅ No Network Latency

The integration doesn't call the Nodinite Web API directly; it writes to local or nearby storage (file share, Azure queue in the same region, local database).

Network call latency avoided: 50-200ms per log event.

✅ Handles Burst Traffic

During high-volume periods (10,000 orders/hour), logs queue up in intermediate storage, and the Pickup Service ingests them at a controlled rate without overwhelming Nodinite or blocking integrations.

⚠️ Significant Overhead (50-200ms per log)

Synchronous logging (calling the Log API directly):

  1. Integration code calls Nodinite HTTP API
  2. Waits for HTTP response (50-200ms network + processing time)
  3. Integration resumes processing

Result: 50-200ms added to every integration transaction.
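
For contrast, a sketch of this synchronous anti-pattern (the endpoint URL is a placeholder, not the actual Log API route); every transaction waits for the full HTTP round trip before processing continues:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class SynchronousLogging
{
    private static readonly HttpClient Http = new HttpClient();

    // Anti-pattern: the integration awaits the Nodinite HTTP response inline,
    // adding 50-200ms (plus any retries) to every transaction.
    public static async Task LogAsync(string logEventJson)
    {
        var content = new StringContent(logEventJson, Encoding.UTF8, "application/json");
        var response = await Http.PostAsync("https://nodinite.example.com/logapi/logevent", content); // placeholder URL
        response.EnsureSuccessStatusCode(); // a failure here fails or blocks the integration itself
    }
}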

Example throughput:

  • Without logging: 1,000 orders/minute
  • With sync Custom Logging: 400 orders/minute (60% degradation)

⚠️ Failures Block Integration

If the Nodinite Web Server is unavailable (maintenance, network issues), synchronous log calls fail, and the integration must handle errors, retry, or lose logs.

Not recommended for production.

Optimizing Performance

Use Asynchronous Appenders

Log4Net asynchronous appender (the AsyncForwardingAppender type comes from the Log4Net.Async NuGet package, not core log4net):

<!-- Requires the Log4Net.Async NuGet package -->
<appender name="AsyncForwardingAppender" type="Log4Net.Async.AsyncForwardingAppender, Log4Net.Async">
  <appender-ref ref="NodiniteFileAppender" />
  <bufferSize value="128" />
  <fix value="All" />
</appender>

Serilog async writing:

.WriteTo.Async(a => a.AzureServiceBus(...))

Benefit: Log calls return in <1ms; background threads handle the I/O.
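
A fuller sketch of the async setup, assuming the Serilog.Sinks.Async, Serilog.Sinks.File, and Serilog.Formatting.Compact packages, with a file share as intermediate storage; the share path and the enriched property are illustrative, not required names:

using Serilog;
using Serilog.Formatting.Compact;

// Everything inside WriteTo.Async runs on a background worker thread,
// so Log.Information(...) returns almost immediately.
Log.Logger = new LoggerConfiguration()
    .Enrich.WithProperty("ApplicationName", "Order.Integration")   // illustrative property
    .WriteTo.Async(a => a.File(
        new CompactJsonFormatter(),
        @"\\fileshare\nodinite-logs\log-.json",                     // illustrative file share path
        rollingInterval: RollingInterval.Day))
    .CreateLogger();

var orderId = "ORD-1001";
Log.Information("Order {OrderId} received", orderId);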

Batch Log Writes

Serilog batching:

.WriteTo.AzureServiceBus(
    connectionString: "...",
    queueName: "nodinite-logs",
    batchPostingLimit: 50,
    period: TimeSpan.FromSeconds(5))

Benefit: 50 log events are sent in a single batch instead of 50 individual writes, which reduces overhead.

Write to Fast Storage

Storage performance comparison:

| Storage Type | Write Latency | Recommendation |
|---|---|---|
| In-memory queue | <1ms | Best (combined with background flush) |
| Local SSD file | 1-3ms | Excellent for on-premises |
| Azure Service Bus | 3-8ms | Excellent for cloud |
| Network file share | 5-15ms | Good (ensure low-latency network) |
| Database INSERT | 10-30ms | Good (use async commits) |
| HTTP call to Log API | 50-200ms | ⚠️ Not recommended (synchronous) |

Choose fast storage in the same datacenter/region as the integration.

Filter Low-Value Logs

Don't log every trivial event—focus on business-valuable logs:

  • Log: Order received, payment processed, shipment created, errors, failures
  • Don't log: Health check pings, internal retry attempts (unless failed), debug traces in production

Benefit: Reduced log volume = lower overhead, faster search, lower costs.
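
One way to apply this filtering in Serilog, as a sketch using the same packages as above; the health-check source context and the RequestPath property are assumptions about your code, not fixed conventions:

using Serilog;
using Serilog.Events;
using Serilog.Filters;

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()                                    // drop Debug/Verbose traces in production
    .MinimumLevel.Override("Microsoft", LogEventLevel.Warning)     // silence framework noise
    .Filter.ByExcluding(Matching.FromSource("HealthChecks"))       // assumed source context for health-check pings
    .Filter.ByExcluding(Matching.WithProperty<string>("RequestPath", p => p == "/health"))
    .WriteTo.Async(a => a.File(@"\\fileshare\nodinite-logs\log-.json")) // same intermediate storage as above
    .CreateLogger();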

Real-World Performance Examples

Financial Services Company

Environment: 20 custom .NET microservices, 500K API calls/day

  • Before Custom Logging: 1,200 requests/second average throughput
  • After Async Custom Logging (Serilog → Azure Service Bus): 1,190 requests/second average throughput
  • Impact: 0.8% degradation, <5ms overhead per request

Acceptable trade-off for business intelligence value.

E-Commerce Company

Environment: Node.js integrations processing 100K orders/day

  • Attempted Synchronous Logging (direct Log API calls): 40% throughput drop, 200ms added per order
  • Switched to Async Logging (Winston → file share → Pickup Service): <1% impact, 3ms overhead per order

Async logging essential for performance.

Monitoring Performance Impact

Metrics to track:

  • Integration throughput - Messages/orders/transactions per minute (before vs after logging)
  • Processing latency - End-to-end transaction time (P50, P95, P99 percentiles)
  • Appender buffer size - If async buffers fill up, investigate (increase buffer or reduce log volume)
  • Pickup Service lag - If intermediate storage accumulates backlog, scale Pickup Service or reduce log volume

Acceptable impact: <5% throughput degradation, <10ms P95 latency increase.
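
A rough way to check per-call overhead in a test harness, as a sketch; it assumes Log.Logger was configured with the async sink shown earlier, and actual numbers will vary with storage and hardware:

using System;
using System.Diagnostics;
using Serilog;

// Measure average per-call logging overhead over a burst of calls.
const int iterations = 10_000;
var stopwatch = Stopwatch.StartNew();

for (var i = 0; i < iterations; i++)
{
    Log.Information("Order {OrderId} received", i); // an async appender should make this return in <1ms
}

stopwatch.Stop();
Console.WriteLine($"Average overhead per log call: {stopwatch.Elapsed.TotalMilliseconds / iterations:F3} ms");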


Related Topics:
Asynchronous Logging Architecture
Pickup Service Configuration
Intermediate Storage Options

See all FAQs: Troubleshooting Overview

Next Step

Back to Custom Logging Overview