1 About Slick

Slick is a decision analysis tool that presents the outcomes of potential policy options across various states of nature. The App can present multiple performance metrics simultaneously and can account for uncertainty in the states of nature. Slick is interactive, allowing users to filter results live in order to explore robustness and performance.

While Slick can be applied to any decision analysis context, it was specifically designed to investigate the performance of fisheries management procedures tested by management strategy evaluation (MSE).

Importantly, the App is platform agnostic: results arising from any MSE framework that are formatted in a compatible Slick object can be loaded and visualized in the App. The MSE R packages DLMtool and MSEtool are Slick-compatible and include tools to convert MSE results to the Slick format. For more information on developing Slick objects, please see the developer's guide.

2 Purpose of this document

This document:

  • Describes the background to Slick
  • Explains how to access and use the Slick App
  • Describes Slick outputs

3 Quick Start

For a demonstration of Slick, go to the online App hosted here.

4 Introduction

4.1 Management Strategy Evaluation

Management Strategy Evaluation (MSE) is an approach for establishing simple rules for managing a resource and then simulation testing their robustness to various hypothetical scenarios for system dynamics (Butterworth and Punt 1999; Cochrane et al. 1998).

Often referred to as Management Procedures (MPs, also known as Harvest Strategies), these rules typically use streamlined data to generate management advice, such as a Total Allowable Catch (TAC).
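
As a toy illustration only (not tied to any particular MSE framework, and using hypothetical names and values), an MP can be thought of as a function that converts a simple data input, such as a recent index of abundance, into a TAC recommendation:

# Hypothetical index-ratio MP: scale last year's TAC by the ratio of the
# recent abundance index to a target index level, capping increases at 20%
index_ratio_mp <- function(last_tac, recent_index, target_index) {
  last_tac * min(recent_index / target_index, 1.2)
}

index_ratio_mp(last_tac = 1000, recent_index = 0.8, target_index = 1.0)  # returns 800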

In fisheries, MSE differs substantially from conventional stock assessment in how models of fisheries dynamics are used to derive management advice. In conventional stock assessment, fisheries dynamics models are used to directly derive management advice, for example setting a TAC commensurate with the fishing mortality rate at maximum sustainable yield. MSEs typically use a greater number of fitted fisheries dynamics models (‘operating models’) that span a much wider range of uncertainties in order to test the robustness of MPs. The focus in MSE is robustness, accounting for feedbacks between management options and the system, rather than establishing a single ‘best’ model of the resource.

Consequently, MSE allows managers and stakeholders to establish a comparatively simple management rule (an MP), understand its performance and have confidence that it can perform adequately even in the face of uncertainties in system dynamics.

Punt et al. (2014) provide a comprehensive summary of the history of MSE implementations.

4.2 Slick Presentation of MSE Results

MSEs have four axes over which results are generally presented:

  1. operating models (a state of nature or scenario for real system dynamics)
  2. management procedures (MP - a management option, also known as a harvest strategy)
  3. performance metrics (also known as a cost function or utility measure; e.g. probability of not overfishing, long-term yields)
  4. uncertainty within an operating model (multiple simulations for each discrete state of nature)

Slick allows users to filter operating models, performance metrics and management procedures in order to explore robustness and characterize performance. Importantly, Slick is MSE-platform agnostic: provided MSE practitioners format their results in a compatible Slick object, these can be loaded into the App.

Slick presents MSE results in 11 Figures designed to inform decision making by revealing the absolute and comparative performance of candidate management procedures.

5 Accessing the App

5.1 Online

Slick is freely available online.

5.2 Offline

You can also run the App locally on your computer. To do so, install the R package (https://github.com/Blue-Matter/Slick) and use the Slick() function:

library(Slick)
Slick()
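
If the package is not already installed, one option is to install it directly from GitHub, for example using the remotes package:

# install the Slick package from its GitHub repository (one-time setup)
# install.packages("remotes")  # if the remotes package is not already available
remotes::install_github("Blue-Matter/Slick")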

6 Using the App

6.1 Filtering

On the right-hand side of the App is a list of Filters that allow the user to change the operating models, management procedures and performance metrics for which results are presented. When the user makes changes to the Filter checkboxes, a red ‘FILTER’ button appears and must be pressed in order to update the results that are presented. If you see the red ‘FILTER’ button, it means that you are looking at results that do not necessarily correspond with the current selection of filters.

6.2 Home

The Home panel contains a dropdown menu of example Slick objects that may be loaded. Alternatively, the user can load their own custom Slick file. A Slick example must be selected, or a file loaded, before continuing.

The Home panel includes summary text describing the case study and also a series of tables that provide greater detail about the included management procedures, operating models and performance metrics.

In Slick there are three varieties of performance metric: deterministic, stochastic and projection, which are presented in different ways (a sketch of their dimensional structure follows this list).

  • Deterministic performance metrics are reported as a single number per management procedure and operating model. These are typically used to provide a summary of performance over many simulations for a given year, for example the probability of yield in 2035 exceeding current yield. Deterministic performance metrics are used to provide top-level performance summaries in the spider, zigzag and rail plots.

  • Stochastic performance metrics are essentially the same as deterministic performance metrics but are reported by simulation and as such express uncertainty in outcomes for each management procedure and operating model. An example of a stochastic performance metric could be yield relative to current yield in 2035. Stochastic performance metrics are used in boxplots to express uncertainty in performance outcomes.

  • Projection performance metrics are the same as stochastic performance metrics but are available for all projection years. Projection metrics present the evolution of performance over time and are used in Slick Kobe plots.
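
The three varieties differ mainly in the dimensions over which values are reported. A minimal sketch of that structure, using plain R arrays with illustrative dimensions (not the exact layout of a Slick object):

n_om <- 3; n_mp <- 4; n_sim <- 100; n_yr <- 30   # operating models, MPs, simulations, projection years

det   <- array(NA, dim = c(n_om, n_mp))               # deterministic: one value per OM and MP
stoch <- array(NA, dim = c(n_sim, n_om, n_mp))        # stochastic: one value per simulation, OM and MP
proj  <- array(NA, dim = c(n_sim, n_om, n_mp, n_yr))  # projection: stochastic values for every projection year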

6.3 Spider

The first results page provides a top-sheet overview of performance among candidate MPs. Individual spider plots provide MP-specific performance outcomes and a larger spider plot provides direct comparisons among candidate MPs. These are deterministic (point value) performance metrics that are scaled from 0 to 100, where 100 indicates better performance.

Each MP-specific spider plot includes a value that is the mean score among all selected performance metrics.

There is an option to present performance metrics on a relative scale, in which case, for each metric, the range of values is renormalized such that the highest value shown for any MP is 100 and the lowest is zero.
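
A minimal illustration of this min-max rescaling for a single metric (hypothetical values, not Slick's internal code):

scores <- c(MP1 = 0.42, MP2 = 0.77, MP3 = 0.61)                         # raw metric values
rescaled <- 100 * (scores - min(scores)) / (max(scores) - min(scores))
round(rescaled)                                                         # MP1 = 0, MP2 = 100, MP3 = 54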

As with all results pages, a summary panel at the top of the page highlights any outstanding results.

These plots can include a large number of performance metrics. However, where possible it is preferable to select a small number (e.g. 6 or fewer): when spider plots include a large number of metrics, the order in which they are presented can determine the apparent size of the shaded area.

6.4 Zigzag

Zigzag graphs provide an alternative comparison of candidate MP performance. These are essentially the spokes of the spider plots unfurled and presented along a straight axis. Performance among the various metrics increases along the x-axis, with aggregate mean performance among all metrics presented as a large point at the top of the plot.

The values of the mean performance among all metrics are also presented in a performance table below the figure.

6.5 Rail

As a complement to the spider and zigzag figures, the same information can be presented in floating bars that better characterize the range of performance outcomes among candidate MPs.

6.6 Kobe

A standard diagnostic for sustainable exploitation is the Kobe plot, which describes MP biomass performance relative to a target level on the x-axis and exploitation rate performance relative to a target on the y-axis. A single Kobe-like plot summarizes the outcomes of the MSE projection in the final projection year. This plot helps to summarize long-term biomass and exploitation performance to better highlight contrast in sustainability among MPs.

Uncertainty in performance outcomes is expressed with white vertical and horizontal lines that represent the percentiles of a given interquantile range. By default, the 90% interquantile range (the 5th to 95th percentiles) is selected, but the user can adjust this range using the percentile slider to the top-right of the Kobe plot.
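
For example, the bounds of the default 90% interquantile range correspond to the 5th and 95th percentiles of the simulated outcomes (a generic illustration with simulated values, not Slick's internal code):

set.seed(1)
b_rel <- rlnorm(100, meanlog = 0, sdlog = 0.3)    # simulated biomass relative to the target level
quantile(b_rel, probs = c(0.05, 0.95))            # bounds of the 90% interquantile range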

6.7 Kobe Time

An extension of the Kobe plot, this figure distinguishes between the biological status of the stock and its probable trajectory, showing status in each individual year of the projection (vs. only the final year, as in the Kobe plot of the prior tab).

6.8 Line

In many decision making contexts there is a state variable of interest (e.g. population numbers) that, like performance metrics, has a projected future. Unlike performance metrics, state variables also have a historical reconstruction that provides important context for projected outcomes.

Here the median lines for each MP are plotted together for comparison, and individual plots for each MP additionally show uncertainty.

6.9 Slope

An alternative summary of Kobe-type biomass and exploitation metrics is provided that attempts to rank candidate MPs to further highlight critical differences in the main fisheries tradeoff of maximizing both catch and population abundance.

6.10 Boxplot

Box plots focus on the uncertainty in performance outcomes among candidate MPs, where possible highlighting those that obtain the best performance.

6.11 Boxplot OM

An important feature of MSE is that it focuses on the robustness of MPs across various operating models. The boxplot OM panel shows trade-offs disaggregated by operating model to help identify those scenarios that are pinch points for MP performance.

6.12 Spider OM

A spider plot array among MPs and operating models reveals scenarios that affect the absolute and relative performance of candidate MPs.

6.13 Line OM

State variable projections are provided across a range of operating models and MPs.

7 Acknowledgements

Slick development was funded by the Ocean Foundation, with support from The Pew Charitable Trusts. Many thanks to Shana Miller, Grantly Galland and Sara Pipernos for their feedback and suggestions.

The prototype figure designs were developed by 5W Designs. Many thanks also to 5W for their helpful feedback on the Shiny App.

The Slick App, manuals and example MSE objects were coded by Blue Matter Science Ltd.

8 References

Butterworth, D.S., Punt, A.E., 1999. Experiences in the evaluation and implementation of management procedures. ICES Journal of Marine Science, 56: 985-998. http://dx.doi.org/10.1006/jmsc.1999.0532

Cochrane, K.L., Butterworth, D.S., De Oliveira, J.A.A., Roel, B.A., 1998. Management procedures in a fishery based on highly variable stocks and with conflicting objectives: experiences in the South African pelagic fishery. Rev. Fish. Biol. Fisher. 8, 177-214.

Punt, A.E., Butterworth, D.S., de Moor, C.L., De Oliveira, J.A.A., Haddon, M., 2014. Management strategy evaluation: best practices. Fish and Fisheries, 17(2): 303-334. https://doi.org/10.1111/faf.12104

9 Acronyms