
When using the 23R2 Dispatch Console for manual resource assignment for a customer who does not use PSO and has 4,000 resources, is it advisable to use multiple datasets? What advantage would the use of multiple datasets bring without PSO? Is there a concern that so many resources under one dataset would cause performance issues in the Dispatch Console even without using PSO? Couldn't filters on Resource Group and Region sufficiently limit the number of resources hitting the Dispatch Console, even with all 4,000 resources in one dataset?

Even without PSO I would definitely use multiple datasets to avoid performance issues. How are these 4,000 resources split - countries/regions/business units, etc.? How many Service Organizations or sites are you using or planning to use? How many resources would a dispatcher typically be responsible for?


@Alexander Heinze 

Thanks Alexander,

Those 4,000 resources are for Crown, all in the USA, split across 73 “branches”, each of which is a site, and those branches are grouped into 10 regions. Each branch/site handles its own dispatch of around 50 technicians. Reframing my questions, I would ask: what is the recommended maximum number of resources per dataset when not using PSO? And, to understand the potential performance impact, why would a Dispatch Console that is filtered to show only a few dozen resources suffer a performance problem even with an underlying dataset of thousands of resources?


@dakius, we are working on benchmarking figures as we speak. I would, however, note that the figures you mention above are probably the highest by far that we have seen from a manual scheduling perspective thus far. The benchmarking exercise will result in some kind of recommendation from our end, but this is still in flight. Lastly, applying filters in the Dispatch Console as the default population behaviour will improve performance and reduce the load time significantly.


Hi @Björn Kleist, @Alexander Heinze,
Do you have an update on the benchmark numbers for Dispatch Console?

There is a situation where the customer is experiencing a visibility issue in the Gantt when “Hide non-working time” is enabled. However, this happens only in their Production environment and not in any of the lower environments, which are on the same version, 24.1.5. Client framework versions are also the same (24.1.12). The assumed cause for this issue is the large amount of data in their Prod environment compared to the other environments. So, they would like to know how many activities and resources can be loaded into a single dataset.


Hi @Thanuja, 

The benchmark exercise is still ongoing. What you are describing, however, needs to be handled through a support case. If there is such a support case, we should investigate the issue and see whether it is related to performance or not. I don't think it sounds like a performance issue if the Gantt is not struggling to load data, but rather that you are experiencing some kind of problem with “Hide non-working time”.


Hi @Björn Kleist,

Thank you for the response. There is indeed a support case raised by the customer. I have already initiated discussions with the Scheduling and Framework Product Development teams. Their recommendation was to upgrade to the latest version, as some improvements have been made to the hide non-working time algorithm since 24R1 SU5, which could resolve this issue.
