
29 July 2019

RPA Reporting and Analytics 103: Queues



I have some good news! There’s a way for you to cut your custom logging by up to 50% but still capture the same data. In my first blog post, I mentioned that some fields were only present in Robot execution logs when queues were used. Let me just give you a quick refresher of what those are:

 

1. Transaction Start - every time a transaction within a process starts (you’ll only see this when using queues)

2. Transaction End - every time a transaction within a process ends (you’ll only see this when using queues)

 

What do queues have to do with transactions? Let me first explain what a queue is in the context of UiPath (not to be confused with a queue when programming), and why it’s so incredibly helpful for reporting and analytics.

 

One of the most common use cases for RPA is data entry, usually centered around Excel. Imagine that you have an Excel sheet with 1000 rows, and your Robot goes through the sheet row by row. Now, let’s say that processing each row takes the Robot an hour, meaning it would take 1000 Robot hours in total to get through that sheet. That’s a lot of hours for one Excel sheet, especially if the process needs to run more than once. But what if you could have multiple Robots processing that sheet together? With five Robots assigned, you would cut your processing time by 80%!
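The arithmetic behind that 80% figure is simple. A quick sketch, using the numbers from the example above:

```python
total_hours = 1000                        # 1000 rows × 1 hour per row
robots = 5                                # Robots working the sheet in parallel
wall_clock_hours = total_hours / robots   # elapsed time with the work split evenly
reduction = 1 - wall_clock_hours / total_hours

print(wall_clock_hours)  # 200.0 hours of elapsed time
print(reduction)         # 0.8, i.e. an 80% reduction
```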

 

How does that work?

 

You can turn each row in that Excel sheet into a queue item (called a ‘transaction’ once it's in progress), and put it into a UiPath queue! A queue, in its simplest definition, is basically a container on UiPath Orchestrator that serves as a holding space for all of those transactions. The first transaction that gets added will be the first one that gets passed back out for processing, provided no other ordering mechanism is used.
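That first-in, first-out behavior is the same as a classic FIFO container. A minimal Python sketch of the idea (the real queue lives on Orchestrator, not in your workflow’s memory):

```python
from collections import deque

queue = deque()
for row in ["row 1", "row 2", "row 3"]:  # each Excel row becomes a queue item
    queue.append(row)

first = queue.popleft()  # FIFO: the first item added comes back out first
print(first)             # row 1
```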

 

Queues have a bunch of really cool features like priorities, due dates, and more that can affect the order items get processed in. At their best (and if used correctly) queues are your key to scaling your deployment.

 

One of the best use cases for a queue is invoice processing since you have a bunch of invoices that need to be handled in the same way (or at least in a defined number of ways). For example, all invoices under $10,000 should be immediately approved, invoices between $10,000 and $100,000 should be checked for required fields, and any invoice above $100,000 should be rejected and sent to a human for review.
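Those routing rules boil down to a three-way branch. As a sketch (the function name and return values are mine, purely for illustration):

```python
def route_invoice(amount):
    """Decide what happens to an invoice based on its amount."""
    if amount < 10_000:
        return "approve"        # under $10,000: approve immediately
    elif amount <= 100_000:
        return "check_fields"   # $10,000–$100,000: verify required fields
    else:
        return "human_review"   # above $100,000: reject and escalate

print(route_invoice(5_000))    # approve
print(route_invoice(50_000))   # check_fields
print(route_invoice(250_000))  # human_review
```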

 


 

Let’s think about a few key operational metrics you want to capture for this process:

• How many invoices were processed

• How long it took on average to process each invoice

• How many invoices were rejected

• How many times the process errored out

 

Some other things you’ll probably want to capture along the way for business metrics might include:

• Invoice number

• Invoice vendor

• Invoice amount

• Various invoice fields such as department, currency, contact person, etc.

• Average turnaround time for human review of rejected invoices

 

To get this information, you’re going to have to do a hefty amount of custom logging for each invoice you process. If you’ve elected to send your Robot execution logs to a target, you’ll see this add up really quickly. A new JSON log will be sent every time you have a log message activity in your workflow, which in this case is at least five logs per invoice. You’ll have to think about scale and performance impacts a lot sooner if you follow that route, which is why in Reporting and Analytics 102 I cautioned against using the log message activity. If you use the add log fields activity, you’ll definitely cut down on the number of JSON logs sitting in your backend, but you’ll still have to account for and keep track of more variables than you’d need with a queue.

 

Now we can come back to those transaction fields I mentioned: Orchestrator keeps track of the start and end of processing for each transaction, so the only thing you need to do to track processing time is take the difference between the two. That means you’re adding zero custom fields instead of two, and you have two fewer variables to hold in memory. This kind of post-processing can be done easily in any business intelligence (BI) tool, so you have nothing extra to add to your workflow. Yay for memory management!
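For instance, if your BI tool exports each transaction’s start and end as timestamps, the post-processing is one subtraction per transaction. A sketch, assuming a hypothetical export format (use whatever field names your log target actually stores):

```python
from datetime import datetime

# Hypothetical export: one record per transaction with its start/end timestamps.
transactions = [
    {"id": "t1", "start": "2019-07-29T09:00:00", "end": "2019-07-29T09:45:00"},
    {"id": "t2", "start": "2019-07-29T09:05:00", "end": "2019-07-29T10:20:00"},
]

# Duration in minutes: Transaction End minus Transaction Start.
durations = [
    (datetime.fromisoformat(t["end"]) - datetime.fromisoformat(t["start"])).total_seconds() / 60
    for t in transactions
]
average_minutes = sum(durations) / len(durations)
print(average_minutes)  # 60.0
```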

 

Keeping tons of variables in scope until the end of a workflow can get expensive memory-wise, so the fewer variables you need to save for the add log fields activity, the better.

 

Let’s break down some of the metrics I listed earlier in this same fashion so you can see just how helpful queues can be:

 

| KPI | Queue | Custom log |
| --- | --- | --- |
| Average time to process each invoice | Average(Transaction End – Transaction Start) (+0) | Need to track start time and end time of processing for each invoice (+2) |
| # invoices processed | Count of transactions with status “successful.” No extra fields needed if using the REFramework* for development (+0) | Need a field to keep track of # of invoices (+1) |
| # rejected invoices | Count of business exceptions* (+0) | Need a field to keep track of # rejected invoices (+1) |
| # errors | Count of application exceptions* (+0) | Need a field to either keep track of # errors or log each error (+at least 1) |
| **Total** | **0** | **5** |

 

That’s five extra fields right there that you wouldn’t need if you were using a queue. *Keep in mind, these numbers are only valid if you’re using the REFramework, our development framework best suited for transactional processes and designed to work with queues from Orchestrator. The framework has built-in modules to record the final status of each transaction, letting you classify it as successful, failed with an application exception (meaning something went wrong in the workflow), or failed with a business exception (for example, the invoice is over $100,000 so it can’t be processed).
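Counting transactions by final status is then a simple aggregation. A sketch, assuming your export includes each transaction’s final status string (the names below mirror the three REFramework outcomes, but check how your log target spells them):

```python
from collections import Counter

# Hypothetical export of final transaction statuses.
statuses = ["Successful", "Successful", "BusinessException",
            "ApplicationException", "Successful"]

counts = Counter(statuses)
invoices_processed = counts["Successful"]         # of invoices processed
invoices_rejected = counts["BusinessException"]   # of rejected invoices
process_errors = counts["ApplicationException"]   # of errors
```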

 

Beyond that, the framework also includes built-in error handling and takes care of a lot of pre-processing you might have to do since it reads in values from a pre-configured Excel config file where you can specify file paths or other important variables you might need to reference inside of your workflow. I can’t stress enough just how important using the REFramework is if you’re going to work with a queue. If you want to leverage it correctly, please take a look at our Level 3 – RPA Developer Advanced Training on our free online UiPath Academy. It’s your best bet to correctly scale and use queues.

 


 

There’s another built-in feature of queues that can be helpful for bypassing custom logging: every queue item has a feature called Output where you can store any business variables. It works much the same way as add log fields, in that when you use the Set Transaction Status activity, there’s a popup where you add a field name and then a value.

 

If you’re using our Orchestrator APIs to get data for reporting purposes, you’re pretty much done here. You can use the API to get the business variables and their values. However, for those of you querying SQL directly or using NoSQL to hold your logs, I’d suggest continuing to use add log fields for these types of business variables.
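For example, once you’ve fetched queue items from the Orchestrator OData API (the `/odata/QueueItems` endpoint), pulling the Output business variables out of the response is a short loop. The exact response shape can vary by Orchestrator version, so treat the field names below as an assumption to verify against your own API responses:

```python
def extract_outputs(response):
    """Collect the Output dictionaries from a QueueItems response.
    The 'value' and 'Output' field names follow the OData convention as I've
    seen it; confirm them against your Orchestrator version."""
    return [item.get("Output") or {} for item in response.get("value", [])]

# A trimmed, made-up response, purely for illustration:
sample = {"value": [
    {"Id": 1, "Status": "Successful", "Output": {"InvoiceNumber": "INV-001", "Amount": 9500}},
    {"Id": 2, "Status": "Failed", "Output": None},
]}
outputs = extract_outputs(sample)
print(outputs[0]["InvoiceNumber"])  # INV-001
```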

 

Since queues are so important and provide quite a bit of information, we have separate dashboards all about queues in each of our out-of-the-box (OOTB) dashboard options and on Monitoring! I really like the queues page on Monitoring because it provides two important metrics for each queue:

 

1. AHT – Average Handling Time, the average time it takes to process each item in the queue.

2. Estimated Completion, how long Orchestrator thinks processing the rest of the queue will take based on the AHT per item.
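The Estimated Completion figure is essentially a linear projection from the AHT. Orchestrator’s exact formula isn’t spelled out here, so this is just a sketch of the idea:

```python
def estimated_completion_minutes(remaining_items, aht_minutes):
    # Linear projection: items left in the queue times the average handling time.
    return remaining_items * aht_minutes

print(estimated_completion_minutes(120, 1.5))  # 180.0 minutes of processing left
```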

 

You’ll also get to see whether queue items were processed on time based on the priority that was set, referred to as the SLA (service level agreement), and you can drill down into each queue to see individualized stats.

 


 

A final note about queues – they’re rarely used when it comes to attended automation since all of the processing is being done on a case-by-case basis alongside a human. Those of you focusing on attended automation will probably want to put queues on hold for now, but there are still some important metrics that you can track. I’ll dive into metrics for attended automation in my next post, so hold on to your questions until then!

 


 

Michelle Yurovsky is a Senior Analytics Product Manager at UiPath.


by Michelle Yurovsky

TOPICS: RPA Tutorial, RPA Analytics
