Building on your basic experience with Cumulative Flow Diagrams (CFDs) on recent projects, we'll first take a closer look at the "link" between CFDs and Little's Law, discuss the underlying required conditions (assumptions), and explore workflow policies that support those conditions. If those conditions don't hold, there must be implications, right? From there, we'll look at analysis performed on several "real" project data sets (each representing 15–18 months), starting with a preliminary data distribution table and then refining it to derive t-shirt sizes based on actuals and initial "low-effort to produce" but meaningful (probabilistic) SLAs. Could this help you develop policies to guide the "sizing" of work items, or help determine the effectiveness of changes you make to those policies going forward?
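(As a reminder, Little's Law relates the averages of these flow measures — work in progress = throughput × lead time — and it only holds under certain conditions, such as a reasonably stable system over the interval measured.) To make the idea concrete, here is a minimal Python sketch, using made-up lead-time data, of how a distribution table, probabilistic SLAs, and rough actuals-based t-shirt sizes could be read straight off the observed data; the values, bins, and percentile cut-offs are illustrative assumptions, not the session's prescribed numbers.

```python
import numpy as np

# Made-up lead times (days) for completed work items; in practice these
# would come from your own project data set.
lead_times = np.array([2, 3, 3, 4, 5, 5, 6, 7, 8, 9, 11, 13, 16, 21, 34])

# A crude data distribution table: how many items fell into each lead-time bin.
counts, edges = np.histogram(lead_times, bins=[0, 5, 10, 20, 40])
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:>2}-{hi:<2} days: {n} items")

# A probabilistic SLA such as "85% of items finish within N days" is simply
# the 85th percentile of the observed lead times.
for p in (50, 70, 85, 95):
    print(f"{p}th percentile: {np.percentile(lead_times, p):.1f} days")

# Rough, actuals-based t-shirt sizes derived from the same percentiles
# (the cut-offs chosen here are illustrative, not prescriptive).
small, medium, large = np.percentile(lead_times, [50, 85, 95])
def t_shirt(days):
    if days <= small:  return "S"
    if days <= medium: return "M"
    if days <= large:  return "L"
    return "XL"

print([t_shirt(d) for d in lead_times])
```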
We'll continue the refinement process by looking next at percentiles, and then at parametric statistics, including the benefits of applying a data transformation to data sets that are log-normally distributed. Do you need to consider whether your data is normally distributed at all, and how would you find out whether it really makes a difference in your context?
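To illustrate why the normality question matters, here is a small sketch, again with made-up cycle-time data, of checking the distribution before and after a log transformation; the use of SciPy's Shapiro–Wilk test is my assumption of a convenient check, not necessarily the one used in the session.

```python
import numpy as np
from scipy import stats

# Made-up cycle times (days); lead and cycle time data is often right-skewed.
cycle_times = np.array([2, 3, 3, 4, 5, 5, 6, 7, 8, 9, 11, 13, 16, 21, 34],
                       dtype=float)

# Shapiro-Wilk test: a small p-value is evidence the data is not normal.
_, p_raw = stats.shapiro(cycle_times)

# If the data is roughly log-normal, its logarithm should look much closer to
# normal, which makes parametric statistics (means, standard deviations,
# control limits) far more meaningful.
log_times = np.log(cycle_times)
_, p_log = stats.shapiro(log_times)

print(f"raw data p-value: {p_raw:.3f}   log-transformed p-value: {p_log:.3f}")
```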
Along the way, we'll explore the resulting control charts and learn how they can help identify outliers and provide a basis for determining which "problem" work items might actually be "normal" (a frequent occurrence) and which might be truly "unusual" (unlikely to occur). Would this help you develop policies and processes for managing "problem" work items through your workflow, or in developing specific, direct risk mitigation strategies or tactics? We'll close by plotting trends from the various analyses performed (counts, average lead and cycle times, standard deviations, distraction frequencies, etc.) and then pull it all together to see how the analysis above might help determine expected throughput and forecast completion times for your projects.
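To ground those two closing ideas, here is a brief sketch, under the same made-up data assumptions as above, of (a) deriving control limits on the log scale to separate statistically "normal" items from truly "unusual" ones, and (b) a simple Monte Carlo completion forecast from sampled weekly throughput; the backlog size, throughput samples, and 3-sigma limits are all illustrative assumptions, not the session's exact method.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Control limits on log-transformed cycle times (illustrative data) ---
cycle_times = np.array([2, 3, 3, 4, 5, 5, 6, 7, 8, 9, 11, 13, 16, 21, 34],
                       dtype=float)
log_times = np.log(cycle_times)
mean, sd = log_times.mean(), log_times.std(ddof=1)

# Items beyond ~3 standard deviations on the log scale are candidates for
# "truly unusual"; everything inside the limits is, statistically, "normal".
upper_limit = np.exp(mean + 3 * sd)
unusual = cycle_times[cycle_times > upper_limit]
print(f"upper control limit ≈ {upper_limit:.1f} days; unusual items: {unusual}")

# --- Monte Carlo throughput forecast (illustrative weekly throughput) ---
weekly_throughput = np.array([3, 5, 4, 6, 2, 5, 4, 7, 3, 5])
backlog = 40  # remaining work items to complete

def weeks_to_finish(backlog, samples, rng):
    done, weeks = 0, 0
    while done < backlog:
        done += rng.choice(samples)   # resample a historical week at random
        weeks += 1
    return weeks

trials = np.array([weeks_to_finish(backlog, weekly_throughput, rng)
                   for _ in range(10_000)])
print("50th / 85th percentile forecast:",
      np.percentile(trials, 50), "/", np.percentile(trials, 85), "weeks")
```

Reading the 85th percentile of the simulated trials gives the same kind of probabilistic statement as the SLAs above, just at the project level rather than per work item.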