Ready to run instance(s) Category
Nodinite BizTalk Server Monitoring Agent empowers you to monitor BizTalk Server Ready to run instances for all BizTalk Applications in your group. Instantly detect issues, automate actions, and maintain business continuity.
- Ready to run instances in BizTalk are listed within Nodinite as resources named 'Ready to run instance(s)'.
- Nodinite mirrors the Application structure, providing a 1:1 mapping with BizTalk Applications.
- Ready to run instances are grouped by the Category Ready to run instance(s) for streamlined management.

Here's an example of a Monitor View filtered by the 'Ready to run instance(s)' category.
Note
All User operations within Nodinite are Log Audited, supporting your security and corporate governance compliance policies.
Understanding BizTalk Server Ready to Run Instances
Ready to Run Instances represent a critical state in BizTalk Server's message processing pipeline. These are service instances—orchestrations, messaging instances, or isolated host processes—that have been scheduled for execution by the BizTalk engine and are waiting in the host queue for an available thread to begin processing.
What Does "Ready to Run" Mean?
When BizTalk receives a message or activates an orchestration, it doesn't execute immediately. Instead, the instance is placed in a "Ready to Run" queue where it waits for:
- Available thread resources in the host instance thread pool
- Throttling conditions to clear (if BizTalk is rate-limiting processing)
- Host instance startup (if host instances were stopped or recycling)
- Message dequeuing from the MessageBox database work queues
Normal behavior: Instances transition through Ready to Run state in milliseconds to a few seconds as threads become available. The queue acts as a buffer between message arrival and processing.
Why Monitoring Ready to Run Instances is Critical
Excessive Ready to Run instances indicate processing bottlenecks where message arrival rate exceeds processing capacity. This creates cascading problems:
- Processing Delays – Messages waiting in queues for extended periods, violating SLAs and delaying business transactions
- Throttling Activation – BizTalk's host throttling mechanism may engage when queue depths exceed thresholds, further slowing processing
- MessageBox Pressure – Ready to Run instances store state in the MessageBox database, increasing query overhead and transaction log growth
- Thread Starvation – All available threads busy processing, preventing new instances from starting
- Resource Saturation – High CPU, memory, or database contention preventing efficient instance scheduling
- Host Instance Instability – Extreme queue depths can cause host instance crashes or Out of Memory conditions
Types of Ready to Run Instances
- Orchestration Instances – Orchestrations activated and waiting to execute first shape
- Messaging Instances – Messages in receive pipeline, send pipeline, or port processing waiting for threads
- Isolated Adapter Instances – SOAP, HTTP, or other isolated adapter host instances queued for execution
Normal vs. Problematic Accumulation
Normal Ready to Run patterns:
- Burst processing – Batch file drops create temporary spikes during nightly processing windows
- Startup queues – Brief accumulation after host instance restart as backlog clears
- Controlled throttling – Intentional rate limiting during high-load periods (BizTalk working as designed)
- Seconds-duration queuing – Instances spending <5 seconds in Ready to Run during steady-state operations
Problematic Ready to Run patterns:
- Sustained high counts – Hundreds/thousands of instances queued for minutes or hours
- Linear growth – Queue depth increasing over time without clearing, indicating processing can't keep up with arrival rate
- Long-duration waiting – Individual instances in Ready to Run state for >30 seconds
- All hosts affected – Multiple host instances across different BizTalk applications showing queues simultaneously
- Post-incident accumulation – Large queues persisting after backend system recovery or host instance restart
Root causes of excessive Ready to Run instances:
- Insufficient host instance resources – Too few host instances or insufficient thread pool configuration
- CPU/memory saturation – Server hardware bottlenecks preventing thread execution
- Database performance – Slow MessageBox queries or disk I/O causing dequeue delays
- Downstream system latency – Slow backend web services, databases, or file systems creating processing backpressure
- Large message processing – Very large messages consuming threads for extended periods
- Inefficient orchestration logic – Complex transformations or synchronous calls blocking threads
Nodinite evaluates both count (how many instances are queued) and time (how long instances wait in queue), triggering alerts when thresholds are breached. This enables early detection of throughput bottlenecks before they cascade into system-wide failures.
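As a rough illustration of this dual-threshold idea, the sketch below combines queue depth and the oldest queue wait into a single status. The function name, threshold values, and status strings are illustrative assumptions, not Nodinite's implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Thresholds:
    warning_count: int
    error_count: int
    warning_wait: timedelta
    error_wait: timedelta

def evaluate(queued_since: list[datetime], t: Thresholds, now: datetime) -> str:
    """Combine queue depth and the longest queue wait into one status."""
    count = len(queued_since)
    longest_wait = max((now - ts for ts in queued_since), default=timedelta(0))
    if count > t.error_count or longest_wait > t.error_wait:
        return "Error"
    if count > t.warning_count or longest_wait > t.warning_wait:
        return "Warning"
    return "OK"

# 120 instances queued, the oldest waiting 45 seconds:
now = datetime.now()
waits = [now - timedelta(seconds=45)] + [now] * 119
thresholds = Thresholds(warning_count=100, error_count=300,
                        warning_wait=timedelta(seconds=30),
                        error_wait=timedelta(minutes=2))
print(evaluate(waits, thresholds, now))  # Warning
```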
What are the key features for monitoring BizTalk Server Ready to run instances?
Nodinite's Ready to Run instance monitoring provides dual-threshold evaluation combined with real-time visibility into BizTalk's processing queues, enabling proactive capacity management and bottleneck detection:
- Dual-Threshold Evaluation – Intelligent monitoring using both count-based (queue depth) and time-based (queue wait time) thresholds to detect different bottleneck patterns—capacity issues vs. thread starvation.
- Real-Time Queue Visibility – View queued instances to identify which services, orchestrations, or message types are accumulating, helping pinpoint bottleneck sources.
- Application-Specific Configuration – Tailor threshold settings per BizTalk Application to accommodate different throughput characteristics—high-volume messaging vs. low-volume orchestrations.
- Early Warning System – Detect processing slowdowns before they cascade into throttling, host crashes, or SLA violations.
- Capacity Planning Data – Historical queue depth patterns reveal when infrastructure scaling (more hosts, faster hardware, database optimization) is needed.
What is evaluated for BizTalk Ready to run instances?
The monitoring agent continuously queries BizTalk's MessageBox database to assess queue depths and instance wait times across all applications. Nodinite evaluates instances against both count and time thresholds, providing comprehensive throughput health assessment:
| State | Status | Description | Actions |
|---|---|---|---|
| Unavailable | Resource not available | Evaluation of the 'BizTalk Ready to run Instances' is not possible due to network or security-related problems | Review prerequisites |
| Error | Error threshold is breached | More Ready to run instances exist than allowed by the Error threshold | Details, Edit thresholds |
| Warning | Warning threshold is breached | More Ready to run instances exist than allowed by the Warning threshold | Details, Edit thresholds |
| OK | Within user-defined thresholds | The number of Ready to run instances is within the user-defined thresholds | Details, Edit thresholds |
Tip
You can reconfigure the evaluated state using the Expected State feature on every Resource within Nodinite. For example, batch processing applications with predictable nightly queue spikes can expect Warning states during processing windows without generating false alarms.
Actions
When Ready to Run instances accumulate beyond thresholds or instances wait in queues longer than expected, immediate investigation is required to prevent processing backlogs and SLA violations. Nodinite provides Remote Actions for rapid queue diagnostics.
These actions enable operations teams to view queued instances and identify bottleneck sources without accessing BizTalk servers directly. All actions are audit logged for compliance tracking.
Available Actions for Ready to Run Instances
The following Remote Actions are available for the Ready to run instance(s) Category:

Ready to run instance Actions Menu in Nodinite Web Client.
Details
When alerts indicate excessive Ready to Run instances or prolonged queue wait times, the Details view provides critical diagnostic information about which instances are queued and why processing may be delayed. This interface provides visibility into BizTalk's internal scheduling queues without requiring Group Hub page access.
What you can see:
- Instance details – Service name, service type, instance ID, message type
- Queue timestamp – When instance entered Ready to Run state (critical for time-based alerting)
- Processing host – Which host instance queue holds the waiting instance
- Instance state – Current scheduling state and subscription details
- Queue position – Relative ordering in host queue (if available)
When to use this view:
- Bottleneck diagnosis – Identify which orchestrations or message types are accumulating in queues
- Capacity planning – Determine if specific services need dedicated host instances or more resources
- Throttling investigation – Verify if instances are queued due to BizTalk throttling conditions
- Host instance health checks – Confirm queues clear properly after host recycling
- SLA monitoring – Track how long critical messages wait before processing begins
- Performance tuning – Identify inefficient orchestrations or pipelines consuming excessive thread time
Common diagnostic patterns:
- Single service type dominates – One orchestration or adapter consuming all threads → needs isolated host or optimization
- All host instances queued – System-wide resource constraint (CPU, memory, database) → infrastructure scaling required
- Specific message types stuck – Routing or subscription issues preventing message delivery → configuration error
- Cyclical queue spikes – Predictable batch processing patterns → adjust thresholds or schedule batch windows
- Queues never clear – Arrival rate permanently exceeds processing capacity → add host instances or optimize code
Tip
Compare Ready to Run counts across multiple BizTalk Applications. If only one application shows high queues, the bottleneck is likely application-specific (inefficient code, slow backend). If all applications are queued, the bottleneck is infrastructure-level (server CPU, database, network).
To access instance details, press the Action button and select the Details menu item:

Action button menu with 'Details' option.
The modal displays comprehensive information about all queued instances:

Details modal showing Ready to Run instances with queue timestamps and service information.
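For an ad-hoc cross-check outside the Nodinite Web Client, BizTalk also exposes service instances through WMI. The Windows-only sketch below (Python with the third-party wmi package, run on a BizTalk server) counts instances per service whose ServiceStatus is 1, which is documented as Ready To Run; treat the namespace, property names, and status code as assumptions to verify against your BizTalk Server version.

```python
# Ad-hoc cross-check: count Ready To Run instances per service via WMI.
# Assumes the root\MicrosoftBizTalkServer namespace and that
# MSBTS_ServiceInstance.ServiceStatus = 1 means "Ready To Run" --
# verify against your BizTalk Server version before relying on it.
from collections import Counter
import wmi  # pip install wmi (Windows only)

biztalk = wmi.WMI(namespace=r"root\MicrosoftBizTalkServer")
ready_to_run = biztalk.query(
    "SELECT ServiceName, HostName FROM MSBTS_ServiceInstance WHERE ServiceStatus = 1"
)

per_service = Counter(inst.ServiceName for inst in ready_to_run)
for service, count in per_service.most_common():
    print(f"{count:6d}  {service}")
```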
Edit thresholds
Ready to Run instance monitoring uses dual-threshold evaluation—both count-based (queue depth) and time-based (queue wait duration)—to detect different bottleneck patterns. This comprehensive approach enables detection of both capacity exhaustion (high counts) and thread starvation (long wait times).
When to adjust thresholds:
Count thresholds (queue depth):
- After infrastructure changes – More host instances, faster servers, database optimization → increase thresholds
- Following application optimization – Faster orchestrations, streamlined pipelines → lower thresholds appropriate
- For batch processing windows – Predictable nightly spikes require higher thresholds during batch hours
- Based on host instance configuration – More threads per host → can handle deeper queues before saturation
- To match business SLAs – If a 100-message queue clears in seconds, that may be acceptable for your SLA
Time thresholds (queue wait duration):
- Based on message urgency – Real-time transactions (5-10 second threshold) vs. batch processing (5-10 minute threshold)
- For SLA enforcement – Set Warning at 50% of SLA, Error at 80% of SLA to enable intervention before violations
- After performance tuning – Faster processing → lower time thresholds to detect anomalies earlier
- For different message types – EDI/B2B with tight SLAs need lower thresholds than internal reporting messages
- During troubleshooting – Temporarily lower thresholds to increase alert sensitivity during investigations
Application-specific vs. global:
- Set global defaults for typical messaging-heavy applications with similar throughput
- Configure per-application overrides for applications with unique patterns (see the sketch after this list):
- High-volume EDI processing (thousands queued = normal)
- Low-volume orchestrations (10+ queued = problem)
- Real-time API integrations (any queue >30 seconds = SLA risk)
- Batch processing apps (spike during scheduled windows)
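A hypothetical representation of this global-default-plus-override pattern is sketched below; the application names and numbers are invented for illustration and do not reflect Nodinite's configuration format.

```python
from datetime import timedelta

# Hypothetical global defaults with per-application overrides;
# names and values are illustrative only.
GLOBAL_DEFAULT = {"warning_count": 100, "error_count": 300,
                  "warning_wait": timedelta(seconds=30),
                  "error_wait": timedelta(minutes=2)}

APPLICATION_OVERRIDES = {
    "EDI.B2B":              {"warning_count": 500, "error_count": 1500},
    "Orders.Orchestration": {"warning_count": 10, "error_count": 25},
    "Realtime.API":         {"warning_wait": timedelta(seconds=10),
                             "error_wait": timedelta(seconds=30)},
}

def thresholds_for(application: str) -> dict:
    """Merge an application's overrides on top of the global defaults."""
    return {**GLOBAL_DEFAULT, **APPLICATION_OVERRIDES.get(application, {})}

print(thresholds_for("Realtime.API")["warning_wait"])  # 0:00:10
print(thresholds_for("Unknown.App")["warning_count"])  # 100
```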
Threshold tuning methodology:
- Establish baseline – Monitor queue depths and wait times during normal operations for 1-2 weeks
- Identify peak patterns – Note daily/weekly/monthly peaks (month-end processing, business hours vs. overnight)
- Set Warning thresholds – 20-30% above normal peak values to detect early anomalies (see the sketch after this list)
- Set Error thresholds – At capacity limits where host instances begin struggling (thread exhaustion, memory pressure)
- Account for growth – Add headroom for business volume increases (seasonal, acquisition, new customers)
- Validate with stakeholders – Confirm thresholds align with business SLAs and acceptable processing delays
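The sketch below walks through the baseline, Warning, and growth-headroom steps with made-up numbers; the 25% peak margin sits inside the 20-30% range above, while the 20% growth headroom is an example value you would replace with your own forecast.

```python
def derive_count_thresholds(samples: list[int], capacity_limit: int,
                            peak_margin: float = 0.25,
                            growth_headroom: float = 0.20) -> tuple[int, int]:
    """Warning ~25% above the observed peak plus growth headroom;
    Error at the measured capacity limit where host instances struggle."""
    normal_peak = max(samples)
    warning = int(normal_peak * (1 + peak_margin) * (1 + growth_headroom))
    error = max(capacity_limit, warning + 1)  # Error must sit above Warning
    return warning, error

# Two weeks of 5-minute queue-depth samples might peak at ~180 instances:
print(derive_count_thresholds(samples=[20, 45, 180, 60, 30],
                              capacity_limit=1000))  # (270, 1000)
```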
Thresholds can be managed through the Actions menu or via [Remote Configuration][] for bulk adjustments.
To manage the Ready to run instance(s) threshold for the selected BizTalk Server Application, press the Action button and select the Edit thresholds menu item:

Action button menu with Edit thresholds option.
The modal allows you to configure both time-based and count-based alert thresholds:

Dual-threshold configuration for comprehensive queue monitoring.
Time-based evaluation
Time-based evaluation detects instances stuck in queues longer than expected, indicating thread starvation, processing bottlenecks, or infrastructure resource constraints. Nodinite tracks how long each instance waits in Ready to Run state before thread assignment, triggering alerts when wait times exceed configured thresholds.
Time-based evaluation is always active. If you don't want time-based alerting, set thresholds long enough to avoid false alerts (e.g., 24 hours for applications without strict SLAs), or use the Expected State feature to accept Warning states as normal.
Why time thresholds matter more than count:
Queue count alone can be misleading—1000 instances queued for 2 seconds each (high throughput, healthy) is very different from 10 instances queued for 10 minutes each (severe bottleneck). Time thresholds detect actual processing delays impacting business SLAs.
Example time threshold configurations:
- Real-time API integrations (sub-second SLAs): Warning 5 seconds, Error 15 seconds
- B2B/EDI messaging (minutes-level SLAs): Warning 30 seconds, Error 2 minutes
- Order processing orchestrations (5-minute SLA): Warning 2 minutes, Error 4 minutes
- Batch file processing (hourly processing windows): Warning 10 minutes, Error 30 minutes
- Reporting/analytics workflows (no strict SLA): Warning 30 minutes, Error 2 hours
Diagnostic value of time-based alerts:
- Sudden spike from seconds to minutes → Recent infrastructure change or new bottleneck introduced
- Gradual increase over days/weeks → Growing message volumes or degrading backend system performance
- Specific time windows → Daily pattern suggests resource contention (backups, batch jobs, business hours)
- All applications affected → BizTalk infrastructure issue (server CPU, MessageBox database, network)
- Single application → Application-specific problem (slow orchestration, inefficient pipeline, backend latency)
Warning
If instances wait in Ready to Run queues for minutes or hours, messages are not being processed. This directly impacts business operations—orders not fulfilled, invoices not sent, integrations not executing. Time-based alerts enable intervention before complete processing failure.
Tip
Set Warning thresholds at 50% of your SLA target, Error thresholds at 80% of SLA. This provides early warning for remediation before SLA violations occur. Example: 10-minute SLA → Warning 5 minutes, Error 8 minutes.
| State | Name | Data Type | Description |
|---|---|---|---|
| Warning | Warning TimeSpan | Timespan, e.g. 00:13:37 (13 minutes 37 seconds) | If any Ready to Run instance has been waiting in queue longer than this timespan, a Warning alert is raised. Format: days.hours:minutes:seconds (e.g., 0.00:05:00 = 5 minutes) |
| Error | Error TimeSpan | Timespan, e.g. 01:10:00 (1 hour 10 minutes) | If any Ready to Run instance has been waiting in queue longer than this timespan, an Error alert is raised. Format: days.hours:minutes:seconds (e.g., 0.00:15:00 = 15 minutes) |
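A small sketch of the 50%/80% rule from the tip above, rendered in the days.hours:minutes:seconds format used in this table; the helper names and percentages are illustrative defaults you can adjust.

```python
from datetime import timedelta

def sla_to_time_thresholds(sla: timedelta, warning_pct: float = 0.5,
                           error_pct: float = 0.8) -> tuple[timedelta, timedelta]:
    """Warning at 50% of the SLA, Error at 80%, per the tip above."""
    return sla * warning_pct, sla * error_pct

def as_timespan(td: timedelta) -> str:
    """Render a timedelta in the days.hours:minutes:seconds form used above."""
    total = int(td.total_seconds())
    days, rem = divmod(total, 86400)
    hours, rem = divmod(rem, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{days}.{hours:02d}:{minutes:02d}:{seconds:02d}"

warning, error = sla_to_time_thresholds(timedelta(minutes=10))
print(as_timespan(warning), as_timespan(error))  # 0.00:05:00 0.00:08:00
```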
Count-based evaluation
Count-based evaluation detects excessive queue depths, indicating message arrival rate exceeds processing capacity. High queue counts signal infrastructure bottlenecks, insufficient host instances, or resource saturation requiring capacity expansion.
What queue counts reveal:
- 0-50 instances – Normal steady-state for most applications, healthy processing
- 50-200 instances – Moderate load, monitor for clearing patterns
- 200-500 instances – High load, investigate if sustained for >5 minutes
- 500-1000 instances – Severe backlog, likely infrastructure constraint
- 1000+ instances – Critical backlog, immediate intervention required to prevent host crashes
Note: High-volume applications (millions of messages/day) may have different baselines. Establish your application's normal patterns through historical monitoring.
How to set count thresholds:
- Measure processing capacity – Determine max sustained throughput (messages/second) your host instances can handle
- Calculate queue tolerance – If processing 100 msg/sec, a 500-message queue clears in 5 seconds (acceptable). The same queue at 10 msg/sec takes 50 seconds (problem); see the sketch after this list.
- Set Warning threshold – At queue depth representing 30-60 seconds of processing backlog
- Set Error threshold – At queue depth representing 2-5 minutes of backlog or risk of host memory pressure
- Account for burst tolerance – Batch processing creates temporary spikes; thresholds should accommodate expected bursts
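The sketch below works through the queue-tolerance and threshold steps with the 100 msg/sec example from above; the 45-second and 3-minute backlog targets are example values inside the 30-60 second and 2-5 minute ranges suggested.

```python
def clearance_seconds(queue_depth: int, throughput_per_sec: float) -> float:
    """How long the current backlog takes to clear at the measured throughput."""
    return queue_depth / throughput_per_sec

def count_thresholds_from_throughput(throughput_per_sec: float,
                                     warning_backlog_sec: int = 45,
                                     error_backlog_sec: int = 180) -> tuple[int, int]:
    """Warning at ~30-60 s of backlog, Error at ~2-5 min, per the steps above."""
    return (int(throughput_per_sec * warning_backlog_sec),
            int(throughput_per_sec * error_backlog_sec))

print(clearance_seconds(500, 100))            # 5.0  -> acceptable
print(clearance_seconds(500, 10))             # 50.0 -> problem
print(count_thresholds_from_throughput(100))  # (4500, 18000)
```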
Example count threshold configurations:
- Low-volume orchestrations (10-50 msg/hour): Warning 20, Error 50
- Medium-volume messaging (500-1000 msg/hour): Warning 100, Error 300
- High-volume EDI/B2B (5000-10000 msg/hour): Warning 500, Error 1500
- Very high-volume integration hub (50000+ msg/hour): Warning 2000, Error 5000
- Batch processing applications (overnight file drops): Warning 1000, Error 3000
Diagnostic patterns from queue counts:
- Steady high count – Sustained processing capacity issue, need more host instances or optimization
- Linearly increasing count – Arrival rate > processing rate, will eventually exhaust resources (add hosts urgently)
- Cyclical spikes – Predictable patterns (business hours, batch windows) → adjust thresholds or schedule capacity
- Sudden spike then gradual clear – Burst processing (healthy), ensure queue clears within acceptable time
- Queue never clears – Fundamental capacity mismatch, infrastructure scaling required immediately
Warning
Sustained high queue counts (500+ for >10 minutes) risk:
- BizTalk throttling – Host throttling state activated, further slowing processing
- MessageBox bloat – Queue state stored in database, degrading query performance
- Host instance crashes – Memory exhaustion from queued instance metadata
- Cascading failures – Backpressure affects upstream systems, dependent applications
Tip
Capacity planning rule of thumb: When Warning threshold is breached regularly (>10% of time), plan infrastructure scaling within 30-60 days. When Error threshold is breached, scaling is needed immediately to prevent production incidents.
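One simple way to apply this rule of thumb is to measure the fraction of monitoring samples above the Warning threshold, as in the sketch below; the sample values are made up.

```python
def warning_breach_ratio(queue_depth_samples: list[int],
                         warning_threshold: int) -> float:
    """Fraction of samples where the queue depth exceeded the Warning threshold."""
    breaches = sum(1 for depth in queue_depth_samples if depth > warning_threshold)
    return breaches / len(queue_depth_samples)

# Made-up day of 5-minute samples against a Warning threshold of 100:
samples = [40, 60, 150, 220, 90, 70, 130, 50, 45, 80]
print(f"{warning_breach_ratio(samples, warning_threshold=100):.0%}")
# 30% -> breached more than 10% of the time, plan scaling within 30-60 days
```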
Infrastructure scaling options when queues accumulate:
- Add host instances – Deploy additional instances of bottlenecked hosts across existing servers
- Add servers – Provision new BizTalk servers and distribute hosts
- Optimize code – Refactor slow orchestrations, pipelines, or custom components
- Isolate workloads – Move high-volume services to dedicated host instances
- Optimize MessageBox – Database maintenance, indexing, file group optimization
- Backend optimization – Improve downstream system performance to reduce processing time
- Implement throttling – Controlled rate limiting at message source to prevent overload
| State | Name | Data Type | Description |
|---|---|---|---|
| Warning | Warning Count | Integer | If the total number of Ready to Run instances exceeds this value, a Warning alert is raised. Set at a queue depth representing 30-60 seconds of processing backlog for early detection. |
| Error | Error Count | Integer | If the total number of Ready to Run instances exceeds this value, an Error alert is raised. Set at a critical queue depth indicating capacity exhaustion or risk of host instability. |
Next Step
Related Topics
BizTalk Monitoring Agent
Administration
Monitoring Agents
Add or manage a Monitoring Agent Configuration