Shiny App
The {ordinalsimr} package combines a Shiny interface with simulation functions for two-group ordinal outcomes. The app is designed to help users compare test performance under user-defined assumptions and to summarize Type I error, Type II error, and power.
In the app, users can set:
- Number of simulation iterations
- One or more sample sizes
- Allocation between groups (for example, 1:1)
- Outcome category probabilities for each group
- Which statistical tests to include
The app also provides progress tracking for longer simulation runs, optional Type I error runs for each group, plots, and downloadable outputs.
Data Generation Process
Data generation is handled by `assign_groups()` and orchestrated in repeated runs by `run_simulations()`. At each iteration:

- `assign_groups()` samples group membership (`y`) using the specified allocation probabilities.
- It then samples ordinal outcomes (`x`) within each group using `prob0` (group 0) and `prob1` (group 1).
- `run_simulations()` repeats this process for each requested sample size and iteration count.
This design keeps the data-generating mechanism explicit and directly tied to user-entered assumptions.
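The mechanism can be sketched in a few lines of base R. The function below is illustrative only (its name and defaults are not the package's internals), but the argument names `prob0` and `prob1` mirror the description above:

```r
# Illustrative sketch of the two-group ordinal data-generating mechanism.
# This is NOT the package implementation; assign_groups() plays this role
# in {ordinalsimr}.
simulate_ordinal_data <- function(n,
                                  ratio = c(0.5, 0.5),
                                  prob0 = c(0.4, 0.3, 0.2, 0.1),
                                  prob1 = c(0.1, 0.2, 0.3, 0.4)) {
  # Sample group membership (0/1) from the allocation probabilities
  y <- sample(0:1, size = n, replace = TRUE, prob = ratio)
  # Sample ordinal outcome categories within each group
  k <- length(prob0)
  x <- integer(n)
  x[y == 0] <- sample(seq_len(k), sum(y == 0), replace = TRUE, prob = prob0)
  x[y == 1] <- sample(seq_len(k), sum(y == 1), replace = TRUE, prob = prob1)
  data.frame(x = factor(x, levels = seq_len(k), ordered = TRUE), y = y)
}

set.seed(1)
dat <- simulate_ordinal_data(200)
table(dat$y)
```

Keeping the two category-probability vectors separate makes the alternative hypothesis explicit: power runs use differing `prob0`/`prob1`, while Type I error runs set them equal.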
Statistical Tests Used
For each simulated dataset, `ordinal_tests()` computes p-values for the selected methods. By default, all implemented methods are run:

- Wilcoxon rank-sum test (`stats::wilcox.test`)
- Fisher's exact test (simulation-based p-value; `stats::fisher.test`)
- Chi-squared test without continuity correction (`stats::chisq.test(correct = FALSE)`)
- Chi-squared test with continuity correction (`stats::chisq.test(correct = TRUE)`)
- Proportional odds model (`rms::lrm`)
- Coin independence test (`coin::independence_test` to fit the test, then `coin::pvalue` to extract the p-value)
The test set can be restricted in both the app and function calls, which is useful for targeted method comparisons.
These methods provide complementary views of group differences for ordinal endpoints: rank-based tests focus on distributional shift, contingency-table tests assess association between group and category counts, and the proportional-odds model summarizes effects in an ordered logistic framework.
Using them side-by-side is helpful in simulation studies because Type I error and power can change with sample size, allocation imbalance, and outcome distribution shape.
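As a minimal sketch of what one iteration computes, the {stats}-based tests from the list above can be run by hand on a single simulated dataset. The proportional-odds and coin-based tests are omitted here to avoid extra dependencies; in the package, `ordinal_tests()` bundles the full set:

```r
# One simulated dataset: 4 ordinal categories, two groups with a
# deliberately strong distributional shift.
set.seed(42)
n <- 120
y <- sample(0:1, n, replace = TRUE)
x <- integer(n)
x[y == 0] <- sample(1:4, sum(y == 0), TRUE, prob = c(0.4, 0.3, 0.2, 0.1))
x[y == 1] <- sample(1:4, sum(y == 1), TRUE, prob = c(0.1, 0.2, 0.3, 0.4))
tab <- table(x, y)

p_values <- c(
  wilcoxon   = suppressWarnings(wilcox.test(x ~ y)$p.value),
  fisher     = fisher.test(tab, simulate.p.value = TRUE)$p.value,
  chisq      = chisq.test(tab, correct = FALSE)$p.value,
  # Note: the continuity correction only affects 2x2 tables, so on this
  # 4x2 table the corrected and uncorrected results coincide.
  chisq_corr = chisq.test(tab, correct = TRUE)$p.value
)
round(p_values, 4)
```

Collecting the p-values into a named vector like this is the natural shape for downstream summaries, since rejection rates per method are just column means of an iterations-by-methods matrix.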
Practical Script-Based Workflow
An explanation of how to use the core simulation functions in a script-based workflow is provided in the Coding Simulations vignette. The same functions that power the app can be used in scripts for more customized analyses, batch runs, or integration with other workflows.
This workflow mirrors the app logic: generate group/outcome data repeatedly, compute test p-values, then summarize results across iterations.
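That loop can be written out directly. The sketch below is a hypothetical, dependency-free version for a single test and sample size; in the package, `run_simulations()` generalizes this across tests, sample sizes, and iteration counts, and `calculate_power_t2error()` handles the summarizing:

```r
# Script-level sketch: repeat data generation, collect p-values, then
# estimate power as the rejection rate at alpha = 0.05.
set.seed(7)
n_iter <- 500
n <- 100
prob0 <- c(0.4, 0.3, 0.2, 0.1)  # group 0 category probabilities
prob1 <- c(0.1, 0.2, 0.3, 0.4)  # group 1 category probabilities

p_wilcox <- replicate(n_iter, {
  y <- sample(0:1, n, replace = TRUE)
  x <- integer(n)
  x[y == 0] <- sample(1:4, sum(y == 0), TRUE, prob = prob0)
  x[y == 1] <- sample(1:4, sum(y == 1), TRUE, prob = prob1)
  suppressWarnings(wilcox.test(x ~ y)$p.value)
})

power_estimate <- mean(p_wilcox < 0.05)
power_estimate
```

Setting `prob1 <- prob0` in the same loop turns the rejection rate into a Type I error estimate, which is exactly the optional null-scenario run the app offers.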
Package Architecture
The package is structured in three connected layers:
- Simulation core: `assign_groups()`, `ordinal_tests()`, and `run_simulations()` define data generation, test evaluation, and iteration over simulation settings. The `run_simulations_in_background()` function wraps `run_simulations()` to run simulations in background processes for the Shiny app.
- Computation helpers: functions such as `calculate_power_t2error()` and `calculate_t1_error()` summarize simulation results.
- Shiny modules: the app server (`app_server`) wires together modular components for data entry, simulation triggers, background execution, progress updates, plotting, and export/report generation.
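To illustrate the kind of summary the computation-helper layer produces, the following hedged sketch estimates Type I error by simulating under identical category probabilities in both groups, so every rejection is a false positive (this is a stand-in for, not the implementation of, `calculate_t1_error()`):

```r
# Type I error check: both groups share the same category probabilities,
# so the rejection rate should hover near the nominal alpha = 0.05.
set.seed(11)
probs <- c(0.25, 0.25, 0.25, 0.25)  # identical in both groups (null)
p_null <- replicate(1000, {
  y <- sample(0:1, 80, replace = TRUE)
  x <- sample(1:4, 80, replace = TRUE, prob = probs)
  suppressWarnings(wilcox.test(x ~ y)$p.value)
})
t1_error <- mean(p_null < 0.05)
t1_error
```

A rejection rate far above 0.05 in such a run would flag a test as anticonservative for that sample size and outcome distribution, which is one of the comparisons the app is built to surface.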
This separation helps keep methods transparent while allowing the app and script-based workflows to use the same simulation engine.
Bug reports and feature requests can be submitted as issues at https://github.com/NeuroShepherd/ordinalsimr/issues
