Guides

MUD Moderation Policy and Moderation Playbooks

MUDs present unique moderation challenges where the boundary between in-character (IC) roleplay and out-of-character (OOC) harassment is often ambiguous, and text-based interaction lacks the visual cues of graphical games. This guide provides a technical implementation framework for immortals and administrators to build enforceable moderation infrastructure, including database schemas for incident logging, command frameworks for warnings, and calibration protocols to ensure consistent enforcement across distributed staff teams.

4-6 hours initial setup, ongoing calibration required · 7 steps
[Diagram: example technical workflow structure for a moderation pipeline]
1

Define the IC/OOC Boundary Matrix

Create a technical specification distinguishing in-character roleplay conflict from out-of-character harassment. Document specific thresholds: meta-gaming (using OOC information IC), targeted OOC profanity in public channels, and persistent unwanted contact across alternate characters (alts). Codify these as boolean checks in your policy documentation to remove subjective interpretation during enforcement.

⚠ Common Pitfalls

  • Using subjective terms like 'excessive' without quantitative limits
  • Failing to address alt-character harassment where players use multiple accounts to circumvent blocks
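The boolean checks described above can be sketched directly in C. The struct fields and numeric thresholds here (three unwanted contacts, two alts) are illustrative assumptions to replace with your own documented limits, not part of any existing codebase:

```c
#include <stdbool.h>

/* Illustrative boundary matrix: each field is one objective check from
 * the policy document, so enforcement is a mechanical evaluation rather
 * than a judgment call. */
struct incident_report {
    bool used_ooc_info_ic;      /* meta-gaming: OOC knowledge applied IC */
    bool ooc_profanity_public;  /* targeted profanity on a public OOC channel */
    int  unwanted_contacts;     /* contacts after an explicit "stop" request */
    int  distinct_alts_used;    /* alts used to reach the same target */
};

/* True when the report crosses from IC conflict into OOC harassment. */
static bool crosses_ooc_boundary(const struct incident_report *r)
{
    return r->used_ooc_info_ic
        || r->ooc_profanity_public
        || r->unwanted_contacts >= 3    /* assumed threshold */
        || r->distinct_alts_used >= 2;  /* block evasion via alts */
}
```

Because every branch is a yes/no test against a documented limit, two immortals evaluating the same report should always reach the same conclusion.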
2

Architect the Sanction Escalation Ladder

Design a tiered system from temporary mute (15 minutes) to permanent siteban. Include technical specifications for strike decay (warnings expire after 90 days) and provisions for severe violations requiring immediate escalation (doxxing, real-world threats). Store this configuration in a machine-readable format to enable automated enforcement checks.

sanction_ladder.json
{
  "tiers": [
    {"level": 1, "action": "mute", "duration": "15m", "strike_decay": "90d"},
    {"level": 2, "action": "channel_ban", "duration": "24h", "scope": ["chat","newbie"]},
    {"level": 3, "action": "temp_ban", "duration": "7d", "appealable": true},
    {"level": 4, "action": "permanent_ban", "scope": "account", "appeal_cooldown": "30d"}
  ],
  "immediate_ban": ["doxxing", "server_exploit", "real_world_threats"]
}

⚠ Common Pitfalls

  • Infinite accumulation of minor warnings leading to disproportionate bans
  • Lack of differentiation between account-level and IP-level sanctions
3

Implement the Incident Logging Schema

Deploy a structured logging system capturing timestamp, involved parties, room VNUM or zone location, raw log excerpts, and action taken. For Diku/Circle derivatives, extend the existing syslog with structured output; for LPMuds, create a secure log directory readable only by admin level (UID mudadmin). Include cryptographic hashing of log excerpts to ensure integrity during appeals.

incident_schema.sql
CREATE TABLE moderation_incidents (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
  reporter VARCHAR(50) NOT NULL,
  subject VARCHAR(50) NOT NULL,
  room_vnum INTEGER,
  log_excerpt TEXT,
  action_taken VARCHAR(100),
  acting_immortal VARCHAR(50),
  hash VARCHAR(64) -- SHA-256 of log_excerpt for integrity verification
);

⚠ Common Pitfalls

  • Storing logs in player-accessible directories or backup rotations
  • Failing to verify log integrity before appeals hearings
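The 64-character `hash` column holds a hex-encoded SHA-256 digest. Computing the digest itself is left to a crypto library (for example OpenSSL's `SHA256()`, not shown here); this helper only performs the hex encoding for storage:

```c
#include <stdio.h>

/* Hex-encode a 32-byte SHA-256 digest into the 64-character string
 * stored in the `hash` column of moderation_incidents. */
static void digest_to_hex(const unsigned char digest[32], char out[65])
{
    for (int i = 0; i < 32; i++)
        sprintf(out + i * 2, "%02x", digest[i]);
    out[64] = '\0';
}
```

During an appeal, recompute the digest of the stored `log_excerpt`, hex-encode it, and compare it to the recorded `hash` before treating the excerpt as evidence.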
4

Deploy the Warning Command Framework

Code in-game commands (e.g., 'warn <player> <level> <reason>') that auto-populate the incident log and notify the player with specific policy citation. Include confirmation prompts for sanctions above level 2 to prevent misclicks. Ensure the command checks for wizinvis or masking states that could hide the moderator's identity.

warn_command.c
ACMD(do_warn) {
  char arg[MAX_INPUT_LENGTH], lvl_str[MAX_INPUT_LENGTH];
  struct char_data *victim;
  int level;

  /* Syntax: warn <player> <level> <reason...> */
  argument = two_arguments(argument, arg, lvl_str);
  skip_spaces(&argument);

  if (!*arg || !*lvl_str || !*argument) {
    send_to_char(ch, "Usage: warn <player> <level> <reason>\r\n");
    return;
  }
  level = atoi(lvl_str);

  if (!(victim = get_char_vis(ch, arg, NULL, FIND_CHAR_WORLD))) {
    send_to_char(ch, "Warning: player not found.\r\n");
    return;
  }

  /* Accountability: refuse to act while the immortal is invisible */
  if (GET_INVIS_LEV(ch) > 0) {
    send_to_char(ch, "Drop wizinvis before issuing warnings.\r\n");
    return;
  }

  /* Confirmation gate for severe sanctions */
  if (level > 2 && !PLR_FLAGGED(ch, PLR_CONFIRMED)) {
    send_to_char(ch, "Retype to confirm: warn %s %d %s\r\n", arg, level, argument);
    SET_BIT(PLR_FLAGS(ch), PLR_CONFIRMED);
    return;
  }

  log_incident(ch, victim, level, argument);
  send_to_char(victim, "[SYSTEM] Warning level %d issued: %s\r\n", level, argument);
  REMOVE_BIT(PLR_FLAGS(ch), PLR_CONFIRMED);
}

⚠ Common Pitfalls

  • Commands lacking audit trails of which immortal issued the warning
  • Allowing warnings to be issued while the immortal is invisible or masked, preventing accountability
5

Design the Appeals Workflow

Create a documented path for banned players to request review via email or web form, not in-game (which they cannot access). Specify response timeframes (72 hours for initial response) and the composition of the appeals panel (minimum 2 immortals not involved in the original incident, plus one from a different time zone). Store appeal decisions in a separate table linked to the original incidents.

⚠ Common Pitfalls

  • Permitting banned players to create new accounts to appeal (implement IP/device fingerprint checks)
  • Staff discussing appeals in the same channels as active moderation, creating bias
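A minimal appeals table, following the SQLite conventions of the incident schema in step 3; the column names and decision vocabulary here are suggestions, not a fixed standard:

appeals_schema.sql
```sql
CREATE TABLE moderation_appeals (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  incident_id INTEGER NOT NULL REFERENCES moderation_incidents(id),
  submitted_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  channel VARCHAR(20),            -- 'email' or 'web_form'
  panel_members TEXT NOT NULL,    -- must exclude the acting immortal
  decision VARCHAR(20),           -- 'upheld', 'reduced', 'overturned'
  decision_rationale TEXT,
  decided_at DATETIME
);
```

The foreign key back to `moderation_incidents` keeps the appeal record, the original log excerpt, and its integrity hash reviewable as one unit.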
6

Establish Moderator Calibration Protocols

Institute bi-weekly case review sessions where immortals discuss borderline scenarios and vote on hypothetical enforcement. Maintain a shared decision log to ensure consistency across time zones and staff rotations. New immortals must shadow senior staff for 2 weeks before issuing independent sanctions.

⚠ Common Pitfalls

  • Senior admins overriding sanctions without documentation in the shared log
  • Moderators operating in isolation without cross-checking similar precedents
7

Secure the Staff Communication Channel

Migrate sensitive moderation discussions out of in-game channels (which may be compromised by wizinvis bugs or logging exploits) to encrypted external platforms (Signal, Matrix, or private Discord with strict access controls). Document what information can be shared (facts only, not speculation) and require 2FA for all staff accounts.

⚠ Common Pitfalls

  • Discussing active cases in channels with non-staff or former staff present
  • Failing to back up decision rationale before purging old chat logs

What you built

Moderation policy in MUDs is technical infrastructure, not merely rules text. By implementing structured logging, command-level enforcement gates, and calibration protocols, you create an auditable system that withstands the scrutiny of appeals and prevents the inconsistent enforcement that erodes player trust in persistent text-based communities. Regular review of your escalation ladder and incident patterns will identify systemic issues before they require reactive intervention.