
How can I approach system-wide, per-process resource monitoring on Windows Server 2016 Datacenter?

Context: My team hosts multiple applications on a single Windows Server 2016 box. For capacity-planning purposes, I want to profile CPU and RAM usage per application over time. Most of these apps are .NET running under IIS, a few are .NET jobs launched through Task Scheduler, and some are Python scripts run on a schedule or ad hoc by another process. None of them is guaranteed to be running at any given time (so I don’t believe I can rely on a given PID).

What I’ve tried so far: In Performance Monitor, I’ve created some test Data Collector Sets for the % Processor Time counter under the Process object, with the following instance selections:

  • Total
  • All Instances
  • A subset of multiple running applications

In all cases, the results I get back only show the system-wide total % Processor Time, not a per-process breakdown.
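For reference, my understanding is that the per-process counters I’m after look something like the following. This is only a sketch of what I take to be an equivalent command-line collection with typeperf (wrapped in Python, since that’s what I’m comfortable in); the counter paths, interval, and output location are my assumptions, not a validated setup:

    # Sketch only: collect per-process CPU and working set to CSV with typeperf.
    # The "(*)" wildcard should yield one column per process instance rather
    # than a single system-wide total; paths and intervals are placeholders.
    import subprocess

    subprocess.run(
        [
            "typeperf",
            r"\Process(*)\% Processor Time",   # CPU per process instance
            r"\Process(*)\Working Set",        # resident memory per process
            "-si", "15",                       # sample every 15 seconds
            "-sc", "240",                      # 240 samples = about an hour
            "-f", "CSV",
            "-o", r"C:\PerfLogs\per_process.csv",
            "-y",                              # overwrite the output file without prompting
        ],
        check=True,
    )

Even if that is roughly right, I still don’t see how to get from there to the per-application view I describe next.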

Problem: I’m looking for a report, chart, log, or anything else that breaks the profiled metrics down by process. That is, for each time slice, I want to see “ApplicationA used N % CPU, ApplicationB used M % CPU”, and so on.
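To make the shape of that data concrete, here’s a throwaway polling sketch of the per-slice breakdown I mean. psutil and the 15-second interval are purely illustrative choices on my part, not tooling I’ve settled on:

    # Sketch only: print one line per process per time slice with CPU % and RSS.
    import datetime
    import time

    import psutil

    INTERVAL = 15  # seconds per time slice

    # Prime the per-process CPU counters; the first cpu_percent() reading
    # after a Process object is created is always 0.0.
    for p in psutil.process_iter():
        try:
            p.cpu_percent(None)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

    while True:
        time.sleep(INTERVAL)
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        for p in psutil.process_iter(["name", "cpu_percent", "memory_info"]):
            info = p.info
            if info["cpu_percent"] is None or info["memory_info"] is None:
                continue  # skip processes we couldn't read (access denied)
            print(f"{stamp}  {info['name']:<30}"
                  f"  CPU {info['cpu_percent']:5.1f}%"
                  f"  RSS {info['memory_info'].rss / 2**20:8.1f} MiB")

What I don’t know is whether hand-rolling something like this is sensible, or whether Performance Monitor / Data Collector Sets already do this properly and I’m just configuring them wrong.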

All the searches I’ve done so far show me how to do two things:

  1. Profile any metric for a single process, creating one DCS per process; and
  2. Profile the system-wide total for any metric.

This leads me to wonder whether I’m approaching this the wrong way; I’m extending my skill set into DevOps and infrastructure planning, so many of my assumptions are naive. Hence, my question is a Choose Your Own Adventure two-parter:

How do I:

A) Log resource usage for all processes on the system, so that I can identify what EACH one needs in terms of average and peak usage? (The aggregation sketch after option B shows the kind of summary I have in mind.)

OR:

B) Approach this in a sensible, systematic manner according to the current best practices?
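For (A) specifically, the end state I’m picturing is a reduction of whatever raw samples I collect into average and peak figures per application, roughly like the sketch below. pandas, the file name, and the column names are all hypothetical stand-ins for whatever the collector actually emits:

    # Sketch only: reduce raw per-process samples to average and peak usage.
    # "per_process_samples.csv" and its columns (process, cpu_percent, rss_mib)
    # are hypothetical placeholders.
    import pandas as pd

    samples = pd.read_csv("per_process_samples.csv")

    summary = samples.groupby("process").agg(
        avg_cpu=("cpu_percent", "mean"),
        peak_cpu=("cpu_percent", "max"),
        avg_rss_mib=("rss_mib", "mean"),
        peak_rss_mib=("rss_mib", "max"),
    )

    print(summary.sort_values("peak_cpu", ascending=False))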