Two types of inference configurations are available in DL Workbench.
1. Single Inference
While creating the project, you ran a baseline inference with batch size = 1 and number of parallel streams = 1. Refer to the Run Single Inference guide to learn how to run single inference with a different batch size and number of parallel streams.
2. Group Inference
DL Workbench provides a graphical interface to find the optimal configuration of batches and parallel requests on a given machine. Refer to the Run Group Inference guide to learn how to run group inference in DL Workbench. The guide also shows how to find the performance sweet spot in the throughput and latency tradeoff.
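Conceptually, a group inference sweep tries several (batch size, parallel streams) combinations, measures throughput and latency for each, and picks the highest-throughput configuration that still meets a latency target. The sketch below illustrates that selection logic only; the `measure` function is a hypothetical stand-in, since real numbers would come from benchmarking the model on the target machine in DL Workbench.

```python
def measure(batch_size: int, streams: int) -> tuple[float, float]:
    """Hypothetical stand-in benchmark returning (throughput in FPS,
    latency in ms). In practice these values come from actual inference
    runs on the target hardware, not from a formula."""
    throughput = batch_size * streams * 10.0    # toy model: more work in flight, more FPS
    latency = 5.0 * batch_size + 2.0 * streams  # toy model: latency grows with both
    return throughput, latency

def find_sweet_spot(batches, streams_options, max_latency_ms):
    """Return the (batch, streams, fps, ms) combination with the highest
    throughput whose latency stays under the cap, or None if none qualifies."""
    best = None
    for b in batches:
        for s in streams_options:
            fps, ms = measure(b, s)
            if ms <= max_latency_ms and (best is None or fps > best[2]):
                best = (b, s, fps, ms)
    return best

best = find_sweet_spot(batches=[1, 2, 4, 8], streams_options=[1, 2, 4],
                       max_latency_ms=40.0)
print(best)  # highest-throughput config under the 40 ms latency cap
```

With the toy numbers above, the sweep skips the largest batch because it blows the latency budget, illustrating why the best configuration is a tradeoff rather than simply the largest batch size.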