Can I log to multiple destinations simultaneously (Nodinite + Splunk + local files)?
Yes. Appender-based logging frameworks can write to multiple destinations simultaneously without any code changes: log once, and the event is delivered to Nodinite, Splunk, local files, CloudWatch, and custom destinations in parallel.
Why Log to Multiple Destinations
✅ Migration Safety
During platform migrations, log to both old and new systems:
- Nodinite (new centralized logging)
- Splunk (existing logs for comparison)
- Local files (backup during transition)
Zero risk: verify that Nodinite logs match Splunk before decommissioning the old system.
✅ Compliance Requirements
Meet regulatory requirements demanding multiple log retention locations:
- Nodinite (operational visibility, business intelligence)
- Immutable archive (S3 Glacier, Azure Blob cold storage for 7-year retention)
✅ Hybrid Architectures
Support multi-team environments with different tools:
- Nodinite (integration team uses for correlation, BPM)
- Splunk (security team uses for audit trails, intrusion detection)
- Local files (developers use for debugging in dev/test)
✅ Redundancy
Ensure log availability even if one destination fails:
- Nodinite primary destination
- Local file share as backup (if Nodinite temporarily unavailable)
How Appenders Enable Multiple Destinations
Appender frameworks (Log4Net, Serilog, Log4J, Winston) support multiple sinks/appenders configured declaratively:
.NET Framework Example (Log4Net)
app.config:
```xml
<log4net>
  <!-- Appender 1: Nodinite via file share -->
  <appender name="NodiniteFileAppender" type="log4net.Appender.RollingFileAppender">
    <file value="\\fileserver\nodinite-logs\myapp\" />
    <appendToFile value="true" />
    <!-- The file value is a folder, so roll by date into uniquely named files -->
    <rollingStyle value="Date" />
    <staticLogFileName value="false" />
    <datePattern value="yyyyMMdd'.json'" />
    <layout type="Nodinite.Log4Net.JsonLayout, Nodinite.Log4Net" />
  </appender>
  <!-- Appender 2: Local files for debugging -->
  <appender name="LocalFileAppender" type="log4net.Appender.RollingFileAppender">
    <file value="C:\Logs\myapp.log" />
    <appendToFile value="true" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <!-- Appender 3: Splunk via HTTP Event Collector (appender type name depends on the Splunk log4net package in use) -->
  <appender name="SplunkAppender" type="Splunk.Logging.HttpEventCollectorAppender, Splunk.Logging.Log4Net">
    <url value="https://splunk.company.com:8088" />
    <token value="YOUR-SPLUNK-TOKEN" />
  </appender>
  <root>
    <level value="INFO" />
    <appender-ref ref="NodiniteFileAppender" />
    <appender-ref ref="LocalFileAppender" />
    <appender-ref ref="SplunkAppender" />
  </root>
</log4net>
```
Code remains identical:
log.Info("Order processed", new { OrderId = "PO-12345", CustomerId = "ACME" });
Result: the log event is written to all three destinations: the Nodinite file share, the local file, and the Splunk HTTP endpoint.
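A minimal bootstrap sketch tying the configuration to the code above (the class and method names are illustrative): log4net loads the `<log4net>` section through the assembly-level `XmlConfigurator` attribute, and each class obtains its logger from `LogManager`.

```csharp
using log4net;
using log4net.Config;

// Read the <log4net> section from app.config; Watch=true picks up config edits at runtime.
[assembly: XmlConfigurator(Watch = true)]

public class OrderProcessor
{
    // One logger per type; the <root> element fans each event out to all three appenders.
    private static readonly ILog log = LogManager.GetLogger(typeof(OrderProcessor));

    public void Process(string orderId, string customerId) =>
        log.Info(new { Message = "Order processed", OrderId = orderId, CustomerId = customerId });
}
```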
.NET Core Example (Serilog)
Program.cs:
```csharp
Log.Logger = new LoggerConfiguration()
    // Sink 1: Nodinite via Azure Service Bus
    .WriteTo.AzureServiceBus(
        connectionString: "Endpoint=sb://...",
        queueName: "nodinite-logs")
    // Sink 2: Local files
    .WriteTo.File("C:\\Logs\\myapp.log", rollingInterval: RollingInterval.Day)
    // Sink 3: Application Insights
    .WriteTo.ApplicationInsights(telemetryConfig, TelemetryConverter.Traces)
    .CreateLogger();
```
Code remains identical:
```csharp
Log.Information("Order {OrderId} processed", "PO-12345");
```
Result: the log event is written to three destinations: the Azure Service Bus queue (Nodinite), the local file, and Application Insights.
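One caveat with batching sinks such as Azure Service Bus and Application Insights: they buffer events in memory, so flush on shutdown or the final entries may never leave the process. Serilog's standard call:

```csharp
// Flush buffered sinks before the process exits so no tail-end events are lost.
Log.CloseAndFlush();
```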
Java Example (Log4J with SLF4J)
log4j2.xml:
```xml
<Configuration>
  <Appenders>
    <!-- Appender 1: Nodinite via RabbitMQ -->
    <RabbitMQ name="NodiniteAppender"
              host="rabbitmq.company.com"
              queue="nodinite-logs">
      <JsonLayout />
    </RabbitMQ>
    <!-- Appender 2: Local file -->
    <File name="FileAppender" fileName="logs/myapp.log">
      <PatternLayout pattern="%d{ISO8601} [%t] %-5level %logger{36} - %msg%n" />
    </File>
    <!-- Appender 3: Splunk -->
    <Splunk name="SplunkAppender"
            url="https://splunk.company.com:8088"
            token="YOUR-TOKEN" />
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="NodiniteAppender" />
      <AppenderRef ref="FileAppender" />
      <AppenderRef ref="SplunkAppender" />
    </Root>
  </Loggers>
</Configuration>
```
Code remains identical:
```java
logger.info("Order {} processed for customer {}", orderId, customerId);
```
Different Formats Per Destination
Send different formats to each destination:
- Nodinite: JSON Log Event format (for business intelligence, Search Fields, BPM)
- Splunk: Plain text with timestamp (for security team's existing queries)
- Local files: Verbose debug format (for developers)
Same log call, different serialization per appender.
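An illustrative Serilog sketch of per-destination formats (the file paths are hypothetical): the same event is serialized as compact JSON for machine consumption via `CompactJsonFormatter` from the Serilog.Formatting.Compact package, and as verbose human-readable text for developers.

```csharp
using Serilog;
using Serilog.Formatting.Compact;

Log.Logger = new LoggerConfiguration()
    // Structured JSON, one event per line (for downstream tooling)
    .WriteTo.File(new CompactJsonFormatter(), "C:\\Logs\\myapp.json")
    // Verbose text template (for developers reading the raw file)
    .WriteTo.File("C:\\Logs\\myapp-debug.log",
        outputTemplate: "{Timestamp:yyyy-MM-dd HH:mm:ss.fff} [{Level:u3}] {Message:lj}{NewLine}{Exception}")
    .CreateLogger();
```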
Performance Considerations
- ✅ Asynchronous appenders - Background threads write to destinations, no blocking
- ✅ Batching - Serilog/Log4J batch writes to reduce overhead
- ⚠️ Synchronous HTTP appenders - Can add latency (Splunk HTTP), prefer async
Best practice: Use asynchronous appenders for all destinations to avoid integration performance impact.
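A minimal Serilog sketch of this best practice, using the Serilog.Sinks.Async wrapper (any sink can be wrapped; the file path is hypothetical):

```csharp
using Serilog;

Log.Logger = new LoggerConfiguration()
    // Async wraps the sink with a background worker and a bounded queue,
    // so logging calls return without waiting on file or network I/O.
    .WriteTo.Async(a => a.File("C:\\Logs\\myapp.log"))
    .CreateLogger();
```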
Configuration Best Practices
- ✅ Externalize configuration - Use config files (app.config, appsettings.json, log4j2.xml), not hardcoded settings
- ✅ Environment-specific - Dev logs to console + local files, Production logs to Nodinite + archive
- ✅ Failure handling - If one destination fails (e.g., Splunk down), others continue
- ✅ Monitoring - Alert if appenders fail (check Pickup Service backlog, Splunk ingestion metrics); the sketch below shows one way to surface sink errors
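On the monitoring point: Serilog, for example, deliberately swallows sink errors so a failing destination never crashes the application, and exposes them through its SelfLog facility instead. A minimal sketch for surfacing those errors:

```csharp
using System;
using Serilog.Debugging;

// Route internal sink errors (e.g., Splunk endpoint unreachable) to stderr,
// where a log collector or alerting agent can pick them up.
SelfLog.Enable(Console.Error);
```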
Related Topics:
- Log4Net Appender Documentation
- Serilog Multiple Sinks
- Asynchronous Logging Architecture
- See all FAQs: Troubleshooting Overview
Next Step
Back to Custom Logging Overview