Cross-MUD Discovery & Directories with a Directory Data Pipeline
MUD directories suffer from stale listings and unchecked promotional claims. This guide provides the architectural components to build a sustainable discovery platform: automated telnet verification, weighted community reviews with in-game confirmation, and faceted search for filtering by roleplay intensity. The implementation uses PostgreSQL for flexible metadata storage and Discord for moderation feedback loops.

Design extensible MUD endpoint schema
Create tables storing host:port combinations, connection protocols (telnet/SSL/TLS), codebase identifiers (Diku, LP, MOO), and JSONB extensibility for custom server metadata. Include soft-delete flags and last_verified timestamps to track crawler results without losing historical data.
CREATE TABLE mud_endpoints (
    id SERIAL PRIMARY KEY,
    host VARCHAR(255) NOT NULL,
    port INTEGER CHECK (port > 0 AND port < 65536),
    protocol VARCHAR(10) DEFAULT 'telnet',
    codebase VARCHAR(50),
    metadata JSONB DEFAULT '{}',
    last_verified TIMESTAMP,
    is_soft_deleted BOOLEAN DEFAULT FALSE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX idx_verified ON mud_endpoints(last_verified);
⚠ Common Pitfalls
- Storing player passwords or sensitive auth data in directory tables
- Over-normalizing flexible game metadata into rigid columns
Implement telnet liveness crawler
Build an async Python service using telnetlib3 to attempt connections, capture initial banners and MOTD snippets, and distinguish connection timeouts from refused connections. Store connection latency and banner hashes to detect when server content changes significantly.
import asyncio
import hashlib
import telnetlib3
async def probe_mud(host, port):
    try:
        reader, writer = await telnetlib3.open_connection(
            host, port, connect_minwait=2.0
        )
        banner = await asyncio.wait_for(reader.read(1024), timeout=5.0)
        writer.close()
        # hashlib gives a digest that is stable across runs, unlike
        # the built-in hash(), which is randomly salted per process
        banner_hash = hashlib.sha256(banner.encode()).hexdigest()
        return {"status": "online", "banner_hash": banner_hash}
    except asyncio.TimeoutError:
        return {"status": "timeout"}  # host may be up but slow; retry later
    except ConnectionRefusedError:
        return {"status": "refused"}  # port actively closed
    except OSError:
        return {"status": "offline"}
⚠ Common Pitfalls
- Getting IP banned by MUD hosts for aggressive probing
- Misinterpreting connection refused vs timeout as permanent offline status
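The first pitfall above calls for a politeness layer. A minimal sketch, assuming a probe coroutine like probe_mud from this section (any async callable with the same signature works): a semaphore caps concurrent probes and each slot pauses before freeing up, so no host sees a burst of connections. The names probe_all, max_concurrent, and delay are illustrative, not part of telnetlib3.

```python
import asyncio

# Sketch of a crawler politeness layer: a semaphore caps concurrent
# probes, and each slot sleeps before releasing so connections are
# spaced out. `probe` is any async callable like probe_mud(host, port).
async def probe_all(targets, probe, max_concurrent=5, delay=1.0):
    sem = asyncio.Semaphore(max_concurrent)

    async def throttled(host, port):
        async with sem:
            result = await probe(host, port)
            await asyncio.sleep(delay)  # pause before freeing the slot
            return (host, port, result)

    # gather preserves input order in its result list
    return await asyncio.gather(*(throttled(h, p) for h, p in targets))
```

Tuning max_concurrent per destination network, rather than globally, is a further refinement if many listings share a host.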
Automate HTTP link validation
Create a scheduled job that HEAD-checks all external URLs (homepages, Discord invites, forums) using aiohttp with rotating user agents. Implement exponential backoff for 429 responses and flag permanent redirects that may indicate domain squatting.
import aiohttp
async def check_links(urls):
    # aiohttp expects a ClientTimeout object, not a bare integer
    timeout = aiohttp.ClientTimeout(total=10)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        for url in urls:
            try:
                async with session.head(
                    url, allow_redirects=True
                ) as resp:
                    if resp.status >= 400:
                        yield (url, "dead")
            except Exception:
                yield (url, "error")
⚠ Common Pitfalls
- False positives from Cloudflare-protected sites
- Following redirect chains into expired domain parking pages
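The exponential backoff mentioned above can be sketched as a retry-delay schedule: the delay doubles per attempt up to a cap, with jitter so parallel checkers do not retry in lockstep. The name backoff_delay and the base/cap defaults are illustrative choices, not library API.

```python
import random

# Sketch of an exponential-backoff schedule for 429 responses.
# Returns the number of seconds to sleep before the next attempt:
# doubling per attempt, clamped to `cap`, with jitter applied.
def backoff_delay(attempt, base=1.0, cap=60.0):
    delay = min(cap, base * (2 ** attempt))
    return delay * (0.5 + random.random() / 2)  # jitter: [0.5, 1.0) x delay
```

A link checker would sleep for backoff_delay(n) after the n-th consecutive 429 from a host before re-issuing the HEAD request.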
Build controlled vocabulary tagging
Design a faceted taxonomy for immutable tags (codebase type, world genre) versus subjective tags (RP intensity: none/light/heavy/full). Use PostgreSQL array types or join tables with tag categories to enable efficient filtering without string parsing.
CREATE TABLE mud_tags (
    id SERIAL PRIMARY KEY,
    endpoint_id INTEGER REFERENCES mud_endpoints(id),
    category VARCHAR(50),
    tag_value VARCHAR(100),
    UNIQUE(endpoint_id, category, tag_value)
);
CREATE INDEX idx_tag_category ON mud_tags(category, tag_value);
⚠ Common Pitfalls
- Allowing free-text tags that drift into synonyms (fantasy vs medieval)
- Subjective ratings without calibration between reviewers
Implement verified community reviews
Require in-game verification codes generated by a trusted bot character on each MUD to confirm reviewers actually play there. Weight ratings by account age and implement Bayesian averaging to prevent review bombing from newly created accounts.
CREATE TABLE mud_reviews (
    id SERIAL PRIMARY KEY,
    endpoint_id INTEGER REFERENCES mud_endpoints(id),
    reviewer_discord_id VARCHAR(32),
    verification_code VARCHAR(64) UNIQUE,
    rp_intensity INTEGER CHECK (rp_intensity BETWEEN 1 AND 5),
    content TEXT,
    weight DECIMAL DEFAULT 1.0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
⚠ Common Pitfalls
- Server owners farming verification codes for fake reviews
- Weighting systems that penalize legitimate new players
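The Bayesian averaging mentioned above can be sketched as pulling each listing's weighted mean toward a site-wide prior until enough weighted votes accumulate, which blunts review bombing from fresh accounts. prior_mean and prior_weight are tuning assumptions, not columns in the schema; ratings arrive as (score, weight) pairs drawn from mud_reviews.

```python
# Sketch of Bayesian averaging for review scores: the displayed score
# is a blend of a site-wide prior and the listing's weighted ratings.
# With few (or low-weight) votes the prior dominates; with many votes
# the listing's own mean takes over.
def bayesian_average(ratings, prior_mean=3.0, prior_weight=10.0):
    ratings = list(ratings)  # allow generators; we iterate twice
    total_weight = sum(w for _, w in ratings)
    weighted_sum = sum(score * w for score, w in ratings)
    return (prior_weight * prior_mean + weighted_sum) / (prior_weight + total_weight)
```

Under these defaults a single 5-star review with weight 10 moves the displayed score only to 4.0, while a listing with no reviews shows the prior of 3.0.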
Deploy faceted search indexing
Index tags, review scores, and connection status in PostgreSQL using GIN indexes on JSONB metadata fields, or sync to Elasticsearch for complex filtering. Ensure search results prioritize recently verified online MUDs over historical listings.
CREATE INDEX idx_metadata_gin ON mud_endpoints
    USING GIN(metadata);
-- NOW() is not IMMUTABLE, so it cannot appear in a partial index
-- predicate; index last_verified and apply the 7-day window in queries
CREATE INDEX idx_active_codebase ON mud_endpoints(codebase, last_verified)
    WHERE is_soft_deleted = FALSE;
⚠ Common Pitfalls
- Indexing stale offline servers as top results
- Complex joins causing query timeouts with large datasets
Integrate Discord feedback webhooks
Configure Discord incoming webhooks to notify moderation channels when automated checks detect offline servers, new reviews require approval, or conflicting tags are reported. Use Discord threads for community discussion on contested listings.
import requests
def notify_moderation(webhook_url, message):
    payload = {"content": f"Directory Alert: {message}"}
    # a timeout keeps a slow webhook from hanging the caller
    requests.post(webhook_url, json=payload, timeout=10)
⚠ Common Pitfalls
- Webhook rate limits during bulk updates
- Notification fatigue causing moderators to ignore alerts
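Discord answers rate-limited webhook calls with a 429 status and a Retry-After header; a small wrapper can honor that instead of silently dropping alerts during bulk updates. This is a sketch: post_with_retry and the injectable session argument (pass the requests module or a requests.Session()) are illustrative choices, not Discord API requirements.

```python
import time

# Sketch: retry a webhook POST, sleeping for the Retry-After value on
# 429 instead of hammering the endpoint. `session` is any object with
# a requests-style .post(url, json=...) method.
def post_with_retry(webhook_url, payload, session, max_retries=3):
    resp = None
    for _ in range(max_retries):
        resp = session.post(webhook_url, json=payload)
        if resp.status_code != 429:
            return resp
        time.sleep(float(resp.headers.get("Retry-After", "1")))
    return resp  # still rate limited after all retries
```

Because the HTTP client is injected, the retry logic can be exercised in tests with a stub session and no network.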
Create curation maintenance workflows
Build admin dashboards showing listings not verified in 30+ days, bulk edit tools for tag normalization, and automated weekly reports of connection success rates by codebase type. Schedule weekly crawler runs and monthly manual audit queues.
SELECT host, port, last_verified
FROM mud_endpoints
WHERE last_verified < NOW() - INTERVAL '30 days'
  AND is_soft_deleted = FALSE;
⚠ Common Pitfalls
- Over-automation removing human judgment on edge cases
- Dashboards requiring authentication too complex for volunteer curators
What you built
Discovery directories fail when automation replaces curation. Schedule weekly telnet crawls to catch intermittent outages, monthly manual reviews of borderline tags, and quarterly audits of the controlled vocabulary. Maintain a changelog of tag definitions visible to the community to prevent drift in subjective ratings like roleplay intensity.