CompleteNoobs Docker Image Creation
Please select a Licence from the LICENCE_HEADERS page and place it at the top of your page.
If no Licence is selected, the default CC0 Licence applies.
When you edit this page, you agree to release your contribution under the CC0 Licence.
More information about the CC0 licence: you can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission.
Full legal text (Statement of Purpose, Waiver, Public License Fallback, Limitations and Disclaimers): https://creativecommons.org/publicdomain/zero/1.0/legalcode
Versions
- 0.1 - initial version, many bugs
- 0.2 - CompleteNoobs Docker Image (current)
Complete Noobs Docker Wiki Tutorial
Known errors
- This mainly works - the Popular Pages and Contribution Scores extensions still need fixing
- The XML updater requires more work - it is currently an idea placeholder
Step 1: Prerequisites
- Ubuntu 24.04
- Docker installed and running
- Your user added to the docker group:
sudo usermod -aG docker $USER
(then log out and log back in)
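To confirm Docker is working before you start, you can optionally run a quick sanity check:
docker --version
docker run --rm hello-world
If hello-world prints its greeting without sudo, your group membership is set up correctly.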
Step 2: Create All Files
2.1: Dockerfile
nano Dockerfile
Copy this exactly:
FROM mediawiki:1.44
# MediaWiki 1.44 is pinned instead of latest because the YouTube and PageNotice extensions are confirmed to work with it
# Install dependencies
RUN apt-get update && apt-get install -y \
    mariadb-server \
    python3 \
    python3-requests \
    python3-bs4 \
    python3-pygments \
    curl \
    wget \
    unzip \
    nano \
    git \
    && apt-get clean
# Copy scripts
COPY download_latest_xml.py /usr/src/download_latest_xml.py
COPY setup_wiki.sh /usr/src/setup_wiki.sh
COPY update_xml.sh /usr/src/update_xml.sh
COPY entrypoint.sh /entrypoint.sh
# Make executable
RUN chmod +x /usr/src/setup_wiki.sh /entrypoint.sh /usr/src/update_xml.sh
# Download XML
RUN python3 /usr/src/download_latest_xml.py
# Setup wiki
RUN /usr/src/setup_wiki.sh
EXPOSE 80
VOLUME /var/lib/mysql
VOLUME /var/www/html/images
ENTRYPOINT ["/entrypoint.sh"]
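The Dockerfile COPYs four scripts, so the build context needs all five files side by side. Before building, your working directory should contain exactly:
Dockerfile
download_latest_xml.py
entrypoint.sh
setup_wiki.sh
update_xml.sh
The next sections create each script in turn.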
2.2: XML Download Script
nano download_latest_xml.py
#!/usr/bin/env python3
import re
import sys
import requests
from bs4 import BeautifulSoup
BASE_URL = "https://xml.completenoobs.com/xmlDumps/"
def parse_date_from_dump(dump_name):
    # Dump directories are named DD_MM_YY.Noobs
    match = re.match(r'(\d{2})_(\d{2})_(\d{2})\.Noobs', dump_name)
    if match:
        day, month, year = match.groups()
        year_int = int(year)
        full_year = 2000 + year_int if year_int <= 49 else 1900 + year_int
        return (full_year, int(month), int(day))
    return (0, 0, 0)
def get_available_dumps():
    try:
        response = requests.get(BASE_URL, timeout=30)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, 'html.parser')
        dumps = [link.get('href').rstrip('/') for link in soup.find_all('a')
                 if re.match(r'\d{2}_\d{2}_\d{2}\.Noobs/$', link.get('href', ''))]
        return sorted(dumps, key=parse_date_from_dump, reverse=True)
    except Exception as e:
        print(f"Error fetching dumps: {e}")
        return []
def get_dump_files(dump):
    try:
        response = requests.get(f"{BASE_URL}{dump}/", timeout=30)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, 'html.parser')
        files = [link.get('href') for link in soup.find_all('a')
                 if link.get('href', '').endswith('.xml')]
        return sorted(files, reverse=True)
    except Exception as e:
        print(f"Error fetching dump files: {e}")
        return []
def download_file(url, filename):
    try:
        print(f"Downloading {filename}...")
        response = requests.get(url, stream=True, timeout=60)
        response.raise_for_status()
        total_size = int(response.headers.get('content-length', 0))
        downloaded = 0
        with open(filename, 'wb') as f:
            for chunk in response.iter_content(chunk_size=8192):
                if chunk:
                    f.write(chunk)
                    downloaded += len(chunk)
                    if total_size > 0:
                        progress = (downloaded / total_size) * 100
                        print(f"\rProgress: {progress:.1f}%", end='', flush=True)
        print()
        return True
    except Exception as e:
        print(f"Error downloading {filename}: {e}")
        return False
def main():
    print("Fetching available XML dumps...")
    dumps = get_available_dumps()
    if not dumps:
        print("No dumps found!")
        sys.exit(1)
    newest_dump = dumps[0]
    print(f"Latest dump: {newest_dump}")
    files = get_dump_files(newest_dump)
    if not files:
        print("No XML files found in latest dump!")
        sys.exit(1)
    newest_xml = files[0]
    xml_url = f"{BASE_URL}{newest_dump}/{newest_xml}"
    local_filename = "/tmp/completenoobs_dump.xml"
    if download_file(xml_url, local_filename):
        print(f"Successfully downloaded {newest_xml}")
        with open("/tmp/dump_info.txt", "w") as f:
            f.write(f"{newest_dump}/{newest_xml}")
    else:
        print("Failed to download XML dump!")
        sys.exit(1)
if __name__ == "__main__":
    main()
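If you want to test the downloader on the host before baking it into the image (optional; this assumes python3-requests and python3-bs4 are installed locally), run:
python3 download_latest_xml.py
ls -lh /tmp/completenoobs_dump.xml /tmp/dump_info.txt
These are the same two files the Dockerfile's RUN step produces inside the image.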
2.3: Main Setup Script
nano setup_wiki.sh
#!/bin/bash
set -e
echo "Setting up CompleteNoobs Wiki..."
# Initialize the MariaDB data directory on first run
if [ ! -d "/var/lib/mysql/mysql" ]; then
    mysql_install_db --user=mysql --datadir=/var/lib/mysql
fi
service mariadb start
# Wait for MariaDB to accept connections
for i in {1..30}; do
    if mysql -e "SELECT 1;" &>/dev/null; then
        echo "MariaDB ready!"
        break
    fi
    sleep 2
done
# Set up the database and the wiki user
mysql -e "CREATE DATABASE IF NOT EXISTS completenoobs_wiki CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"
mysql -e "CREATE USER IF NOT EXISTS 'wikiuser'@'127.0.0.1' IDENTIFIED BY 'wikipass';"
mysql -e "GRANT ALL PRIVILEGES ON completenoobs_wiki.* TO 'wikiuser'@'127.0.0.1';"
mysql -e "CREATE USER IF NOT EXISTS 'wikiuser'@'localhost' IDENTIFIED BY 'wikipass';"
mysql -e "GRANT ALL PRIVILEGES ON completenoobs_wiki.* TO 'wikiuser'@'localhost';"
mysql -e "FLUSH PRIVILEGES;"
# Install MediaWiki
cd /var/www/html
php maintenance/install.php \
    --dbtype=mysql \
    --dbserver=127.0.0.1 \
    --dbname=completenoobs_wiki \
    --dbuser=wikiuser \
    --dbpass=wikipass \
    --server="http://localhost:8080" \
    --scriptpath="" \
    --lang=en \
    --pass=AdminPass123! \
    "CompleteNoobs Wiki" \
    "admin"
# Download and install extensions
cd extensions/
git clone https://gerrit.wikimedia.org/r/mediawiki/extensions/PageNotice --branch REL1_44 || echo "PageNotice download failed, continuing..."
git clone https://gerrit.wikimedia.org/r/mediawiki/extensions/YouTube --branch REL1_44 || echo "YouTube download failed, continuing..."
cd /var/www/html
# Configure LocalSettings.php
cat >> LocalSettings.php << 'EOF'
# Basic settings
$wgEnableUploads = true;
$wgUseImageMagick = true;
$wgImageMagickConvertCommand = "/usr/bin/convert";
$wgDefaultSkin = "vector-2022";
$wgAllowExternalImages = true;
# Debug (can be removed later)
$wgShowExceptionDetails = true;
$wgDebugLogFile = "/tmp/mediawiki-debug.log";
# PageNotice extension (if available)
if ( file_exists( "$IP/extensions/PageNotice/extension.json" ) ) {
    wfLoadExtension( 'PageNotice' );
}
# YouTube extension (if available)
if ( file_exists( "$IP/extensions/YouTube/extension.json" ) ) {
    wfLoadExtension( 'YouTube' );
}
# SyntaxHighlight (usually bundled)
if ( file_exists( "$IP/extensions/SyntaxHighlight_GeSHi/extension.json" ) ) {
    wfLoadExtension( 'SyntaxHighlight_GeSHi' );
    $wgPygmentizePath = '/usr/bin/pygmentize';
}
EOF
# Import the XML dump if the download step produced one
if [ -f "/tmp/completenoobs_dump.xml" ]; then
    echo "Importing XML dump..."
    if php maintenance/importDump.php --uploads < /tmp/completenoobs_dump.xml; then
        echo "XML import completed!"
    else
        echo "XML import had warnings"
    fi
    # Basic maintenance
    php maintenance/update.php --quick || echo "update.php finished with errors, continuing..."
    php maintenance/rebuildrecentchanges.php || echo "rebuildrecentchanges.php finished with errors, continuing..."
    php maintenance/initSiteStats.php || echo "initSiteStats.php finished with errors, continuing..."
    if [ -f "/tmp/dump_info.txt" ]; then
        echo "Import: $(cat /tmp/dump_info.txt)" > /var/www/html/.last_import
        echo "Date: $(date)" >> /var/www/html/.last_import
    fi
else
    echo "No XML dump found - starting with empty wiki"
fi
# Copy the update script to an accessible location
cp /usr/src/update_xml.sh /var/www/html/update_xml.sh
chmod +x /var/www/html/update_xml.sh
# Create a user-friendly update wrapper
cat > /var/www/html/check_updates.sh << 'UPDATE_WRAPPER_EOF'
#!/bin/bash
echo "=== CompleteNoobs Wiki Update Checker ==="
echo ""
echo "This tool checks for new content from the CompleteNoobs XML repository"
echo "and imports ONLY new pages, preserving all your local edits."
echo ""
/var/www/html/update_xml.sh
UPDATE_WRAPPER_EOF
chmod +x /var/www/html/check_updates.sh
# Create a simple status script
cat > /var/www/html/check_status.sh << 'STATUS_EOF'
#!/bin/bash
cd /var/www/html
echo "=== Wiki Status ==="
echo "Pages: $(mysql --user=wikiuser --password=wikipass completenoobs_wiki -e "SELECT COUNT(*) FROM page;" -s -N 2>/dev/null || echo "Error")"
echo "Users: $(mysql --user=wikiuser --password=wikipass completenoobs_wiki -e "SELECT COUNT(*) FROM user WHERE user_id > 0;" -s -N 2>/dev/null || echo "Error")"
echo ""
echo "=== Extensions ==="
if [ -d "extensions/PageNotice" ]; then
    echo "PageNotice: Installed"
else
    echo "PageNotice: Not installed"
fi
if [ -d "extensions/YouTube" ]; then
    echo "YouTube: Installed"
else
    echo "YouTube: Not installed"
fi
if [ -d "extensions/ContributionScores" ]; then
    echo "ContributionScores: Installed"
else
    echo "ContributionScores: Not installed"
fi
echo ""
echo "=== Update System ==="
if [ -f ".last_import" ]; then
    echo "Current version: $(grep 'Import:' .last_import | cut -d' ' -f2)"
    echo "Import date: $(grep 'Date:' .last_import | cut -d' ' -f2-)"
else
    echo "No version info available"
fi
echo ""
echo "To check for updates: docker exec -it completenoobs_wiki /var/www/html/check_updates.sh"
STATUS_EOF
chmod +x /var/www/html/check_status.sh
echo ""
echo "Setup completed!"
echo "Admin: admin / AdminPass123!"
echo "Update scripts installed:"
echo "- /var/www/html/check_updates.sh (user-friendly)"
echo "- /var/www/html/update_xml.sh (direct)"
# Final counts
PAGES=$(mysql --user=wikiuser --password=wikipass completenoobs_wiki -e "SELECT COUNT(*) FROM page;" -s -N 2>/dev/null || echo "0")
echo "Pages imported: $PAGES"
service mariadb stop
2.4: XML Update Script
nano update_xml.sh
#!/bin/bash
set -e
echo "=== CompleteNoobs Wiki XML Updater ==="
echo "This will check for new and updated pages"
echo ""
# Function to check that we are running inside the wiki container
check_environment() {
    if [ ! -f "/var/www/html/LocalSettings.php" ]; then
        echo "Error: This script must be run inside the wiki container"
        exit 1
    fi
}
# Function to get the currently imported XML version
get_current_version() {
    if [ -f "/var/www/html/.last_import" ]; then
        grep "Import:" /var/www/html/.last_import | cut -d' ' -f2
    else
        echo "none"
    fi
}
# Function to ask the dump server for the newest available XML
check_for_updates() {
    python3 - << 'PYTHON_EOF'
import re
import sys
import requests
from bs4 import BeautifulSoup
BASE_URL = "https://xml.completenoobs.com/xmlDumps/"
def parse_date_from_dump(dump_name):
    match = re.match(r'(\d{2})_(\d{2})_(\d{2})\.Noobs', dump_name)
    if match:
        day, month, year = match.groups()
        year_int = int(year)
        full_year = 2000 + year_int if year_int <= 49 else 1900 + year_int
        return (full_year, int(month), int(day))
    return (0, 0, 0)
def get_latest_dump():
    try:
        response = requests.get(BASE_URL, timeout=30)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, 'html.parser')
        dumps = [link.get('href').rstrip('/') for link in soup.find_all('a')
                 if re.match(r'\d{2}_\d{2}_\d{2}\.Noobs/$', link.get('href', ''))]
        if dumps:
            latest = sorted(dumps, key=parse_date_from_dump, reverse=True)[0]
            # Get the XML files from the latest dump
            response = requests.get(f"{BASE_URL}{latest}/", timeout=30)
            response.raise_for_status()
            soup = BeautifulSoup(response.text, 'html.parser')
            files = [link.get('href') for link in soup.find_all('a')
                     if link.get('href', '').endswith('.xml')]
            if files:
                newest_xml = sorted(files, reverse=True)[0]
                print(f"{latest}/{newest_xml}")
                sys.exit(0)
    except Exception as e:
        print(f"ERROR: {e}", file=sys.stderr)
        sys.exit(1)
    print("ERROR: No dumps found", file=sys.stderr)
    sys.exit(1)
get_latest_dump()
PYTHON_EOF
}
# Function to download the new XML
download_xml() {
    local dump_info="$1"
    local dump_dir=$(echo "$dump_info" | cut -d'/' -f1)
    local xml_file=$(echo "$dump_info" | cut -d'/' -f2)
    local url="https://xml.completenoobs.com/xmlDumps/${dump_info}"
    echo "Downloading: $xml_file"
    echo "From: $dump_dir"
    echo "URL: $url"
    if wget -O /tmp/new_dump.xml "$url" --progress=bar:force 2>&1; then
        echo "Download successful!"
        return 0
    else
        echo "Download failed!"
        return 1
    fi
}
# Function to back up the current database
backup_database() {
    echo "Creating database backup..."
    TIMESTAMP=$(date +%Y%m%d_%H%M%S)
    mysqldump --user=wikiuser --password=wikipass completenoobs_wiki > /tmp/wiki_backup_${TIMESTAMP}.sql
    echo "Backup created: /tmp/wiki_backup_${TIMESTAMP}.sql"
}
# Function to analyze and import changes
analyze_and_import() {
    echo "Analyzing differences between XML and local wiki..."
    # Create the analysis and import script for MediaWiki 1.44+
    cat > /tmp/analyze_import.php << 'PHP_EOF'
<?php
require_once '/var/www/html/maintenance/Maintenance.php';
class AnalyzeAndImport extends Maintenance {
    private $db;
    private $new_pages = [];
    private $changed_pages = [];
    private $unchanged_pages = [];
    public function __construct() {
        parent::__construct();
        $this->addDescription('Analyze and selectively import from XML dump');
    }
    public function execute() {
        $this->db = new mysqli('127.0.0.1', 'wikiuser', 'wikipass', 'completenoobs_wiki');
        // MediaWiki 1.35+ uses the slots and content tables;
        // get the existing pages with their latest content
        $query = "
            SELECT p.page_title, p.page_id, c.content_address, c.content_sha1
            FROM page p
            JOIN revision r ON p.page_latest = r.rev_id
            JOIN slots s ON r.rev_id = s.slot_revision_id
            JOIN slot_roles sr ON s.slot_role_id = sr.role_id
            JOIN content c ON s.slot_content_id = c.content_id
            WHERE p.page_namespace = 0 AND sr.role_name = 'main'
        ";
        $result = $this->db->query($query);
        if (!$result) {
            $this->error("Database query failed: " . $this->db->error);
            return;
        }
        $existing = [];
        while ($row = $result->fetch_assoc()) {
            // Fetch the actual text content
            $text_content = $this->getTextContent($row['content_address']);
            $existing[$row['page_title']] = [
                'id' => $row['page_id'],
                'content' => $text_content,
                'sha1' => $row['content_sha1']
            ];
        }
        // Parse the XML and compare
        $xml = simplexml_load_file('/tmp/new_dump.xml');
        foreach ($xml->page as $page) {
            $title = str_replace(' ', '_', (string)$page->title);
            $xml_content = (string)$page->revision->text;
            if (!isset($existing[$title])) {
                // New page
                $this->new_pages[$title] = $xml_content;
            } else {
                // Compare content hashes for efficiency; content_sha1 is
                // stored in base-36, so convert the hex sha1 before comparing
                $xml_sha1 = \Wikimedia\base_convert(sha1($xml_content), 16, 36, 31);
                if ($existing[$title]['sha1'] !== $xml_sha1) {
                    // Content is different
                    $this->changed_pages[$title] = [
                        'local' => $existing[$title]['content'],
                        'xml' => $xml_content,
                        'page_id' => $existing[$title]['id']
                    ];
                } else {
                    $this->unchanged_pages[] = $title;
                }
            }
        }
        // Display a summary
        $this->output("\n=== Update Analysis ===\n");
        $this->output("New pages to import: " . count($this->new_pages) . "\n");
        $this->output("Changed pages found: " . count($this->changed_pages) . "\n");
        $this->output("Unchanged pages: " . count($this->unchanged_pages) . "\n");
        // Save the analysis for review
        file_put_contents('/tmp/update_analysis.json', json_encode([
            'new' => array_keys($this->new_pages),
            'changed' => array_keys($this->changed_pages),
            'unchanged' => $this->unchanged_pages
        ], JSON_PRETTY_PRINT));
        // Show changed pages with a preview (limit to the first 20 for readability)
        if (count($this->changed_pages) > 0) {
            $this->output("\n=== Changed Pages ===\n");
            $count = 0;
            $total_changed = count($this->changed_pages);
            foreach ($this->changed_pages as $title => $data) {
                $count++;
                if ($count <= 20) {
                    $this->output("\n$count. $title\n");
                    // Create a simple preview (first 100 chars)
                    $xml_preview = substr($data['xml'], 0, 100);
                    // Save the full diff to a file
                    $safe_title = preg_replace('/[^a-zA-Z0-9_-]/', '_', $title);
                    $diff_file = "/tmp/diff_{$safe_title}.txt";
                    file_put_contents($diff_file, "=== FULL DIFF FOR: $title ===\n\n");
                    file_put_contents($diff_file, "--- LOCAL VERSION ---\n", FILE_APPEND);
                    file_put_contents($diff_file, $data['local'] . "\n\n", FILE_APPEND);
                    file_put_contents($diff_file, "--- XML VERSION ---\n", FILE_APPEND);
                    file_put_contents($diff_file, $data['xml'], FILE_APPEND);
                    $this->output("   Preview: " . $xml_preview . "...\n");
                }
            }
            if ($total_changed > 20) {
                $this->output("\n... and " . ($total_changed - 20) . " more changed pages.\n");
                $this->output("All diff files saved to /tmp/diff_*.txt\n");
            }
        }
        // Interactive selection
        if (count($this->new_pages) > 0 || count($this->changed_pages) > 0) {
            $this->output("\n=== Import Options ===\n");
            $this->output("1. Import new pages only (preserve all local changes)\n");
            $this->output("2. Import new pages + update ALL changed pages (overwrites local changes)\n");
            $this->output("3. Selective import (choose which updates to apply)\n");
            $this->output("4. Cancel (no changes)\n");
            // Save state for the import script WITHOUT the XML object
            $import_data = [
                'new_pages' => $this->new_pages,
                'changed_pages' => $this->changed_pages,
                'xml_file' => '/tmp/new_dump.xml' // save the path instead of the object
            ];
            file_put_contents('/tmp/import_data.ser', serialize($import_data));
        } else {
            $this->output("\nNo changes detected. Your wiki is up to date!\n");
        }
    }
    private function getTextContent($address) {
        // Handle the different content storage formats in MW 1.35+
        if (strpos($address, 'tt:') === 0) {
            // Text table reference
            $text_id = substr($address, 3);
            $result = $this->db->query("SELECT old_text FROM text WHERE old_id = $text_id");
            if ($row = $result->fetch_assoc()) {
                return $row['old_text'];
            }
        } elseif (strpos($address, 'es:') === 0) {
            // External storage - would need special handling
            return "[External storage content]";
        }
        // Direct content
        return $address;
    }
}
$maintClass = AnalyzeAndImport::class;
require_once RUN_MAINTENANCE_IF_MAIN;
PHP_EOF
    cd /var/www/html
    php /tmp/analyze_import.php
}
# Function to perform the selected import
perform_import() {
    local choice=$1
    cat > /tmp/do_import.php << 'PHP_EOF'
<?php
require_once '/var/www/html/maintenance/Maintenance.php';
class DoImport extends Maintenance {
    public function __construct() {
        parent::__construct();
        $this->addOption('mode', 'Import mode', true, true);
    }
    public function execute() {
        $mode = $this->getOption('mode');
        $data = unserialize(file_get_contents('/tmp/import_data.ser'));
        // Load the XML file
        $xml = simplexml_load_file($data['xml_file']);
        $new_imported = 0;
        $updated = 0;
        // Import new pages (always, for modes 1-3)
        if ($mode != '4') {
            foreach ($data['new_pages'] as $title => $content) {
                $this->importPage($title, $content, $xml);
                $new_imported++;
                $this->output("Imported new page: $title\n");
            }
        }
        // Handle changed pages based on the mode
        if ($mode == '2') {
            // Update all changed pages
            foreach ($data['changed_pages'] as $title => $info) {
                $this->output("Updating: $title\n");
                if ($this->reimportPage($title, $xml)) {
                    $updated++;
                    $this->output("Updated page: $title\n");
                } else {
                    $this->output("Failed to update: $title\n");
                }
            }
        } elseif ($mode == '3') {
            // Selective update
            $this->output("\n=== Selective Import Mode ===\n");
            $this->output("For each changed page, choose:\n");
            $this->output("  y = yes, update this page\n");
            $this->output("  n = no, keep local version\n");
            $this->output("  d = show diff file\n");
            $this->output("  a = update all remaining pages\n");
            $this->output("  s = skip all remaining pages\n\n");
            $update_all = false;
            $skip_all = false;
            foreach ($data['changed_pages'] as $title => $info) {
                if ($skip_all) {
                    $this->output("Skipped: $title\n");
                    continue;
                }
                if ($update_all) {
                    if ($this->reimportPage($title, $xml)) {
                        $updated++;
                        $this->output("Updated: $title\n");
                    }
                    continue;
                }
                $this->output("\nPage: $title\n");
                $this->output("Action (y/n/d/a/s): ");
                $handle = fopen("php://stdin", "r");
                $line = trim(fgets($handle));
                while ($line == 'd') {
                    // Show the diff
                    $safe_title = preg_replace('/[^a-zA-Z0-9_-]/', '_', $title);
                    $diff_file = "/tmp/diff_{$safe_title}.txt";
                    if (file_exists($diff_file)) {
                        system("head -50 $diff_file");
                        $this->output("\n[Showing first 50 lines - full file at $diff_file]\n");
                    }
                    $this->output("Action (y/n/a/s): ");
                    $line = trim(fgets($handle));
                }
                if ($line == 'a') {
                    $update_all = true;
                    if ($this->reimportPage($title, $xml)) {
                        $updated++;
                        $this->output("Updated: $title\n");
                    }
                } elseif ($line == 's') {
                    $skip_all = true;
                    $this->output("Skipped: $title\n");
                } elseif ($line == 'y') {
                    if ($this->reimportPage($title, $xml)) {
                        $updated++;
                        $this->output("Updated: $title\n");
                    } else {
                        $this->output("Failed to update: $title\n");
                    }
                } else {
                    $this->output("Skipped: $title\n");
                }
            }
        }
        $this->output("\n=== Import Complete ===\n");
        $this->output("New pages imported: $new_imported\n");
        $this->output("Pages updated: $updated\n");
    }
    private function importPage($title, $content, $xml) {
        // Build a single-page XML file for importDump.php
        $tempFile = '/tmp/single_page_' . md5($title) . '.xml';
        $singlePage = new SimpleXMLElement('<?xml version="1.0" encoding="utf-8"?><mediawiki></mediawiki>');
        if ($xml->siteinfo) {
            $siteinfo = $singlePage->addChild('siteinfo');
            foreach ($xml->siteinfo->children() as $child) {
                $siteinfo->addChild($child->getName(), (string)$child);
            }
        }
        // Find and add the page
        foreach ($xml->page as $page) {
            if (str_replace(' ', '_', (string)$page->title) == $title) {
                $newPage = $singlePage->addChild('page');
                foreach ($page->children() as $child) {
                    if ($child->getName() == 'revision') {
                        $revision = $newPage->addChild('revision');
                        foreach ($child->children() as $revChild) {
                            $revision->addChild($revChild->getName(), (string)$revChild);
                        }
                    } else {
                        $newPage->addChild($child->getName(), (string)$child);
                    }
                }
                break;
            }
        }
        $singlePage->asXML($tempFile);
        exec("php /var/www/html/maintenance/importDump.php < $tempFile 2>&1");
        unlink($tempFile);
    }
    private function reimportPage($title, $xml) {
        // To update an existing page, delete it and then reimport it
        $db = new mysqli('127.0.0.1', 'wikiuser', 'wikipass', 'completenoobs_wiki');
        // Delete the existing page row
        $safe_title = $db->real_escape_string(str_replace(' ', '_', $title));
        $db->query("DELETE FROM page WHERE page_title = '$safe_title' AND page_namespace = 0");
        $db->close();
        // Now import the new version
        foreach ($xml->page as $page) {
            if (str_replace(' ', '_', (string)$page->title) == $title) {
                $tempFile = '/tmp/update_page_' . md5($title) . '.xml';
                $singlePage = new SimpleXMLElement('<?xml version="1.0" encoding="utf-8"?><mediawiki></mediawiki>');
                if ($xml->siteinfo) {
                    $siteinfo = $singlePage->addChild('siteinfo');
                    foreach ($xml->siteinfo->children() as $child) {
                        $siteinfo->addChild($child->getName(), (string)$child);
                    }
                }
                $newPage = $singlePage->addChild('page');
                foreach ($page->children() as $child) {
                    if ($child->getName() == 'revision') {
                        $revision = $newPage->addChild('revision');
                        foreach ($child->children() as $revChild) {
                            $revision->addChild($revChild->getName(), (string)$revChild);
                        }
                    } else {
                        $newPage->addChild($child->getName(), (string)$child);
                    }
                }
                $singlePage->asXML($tempFile);
                $result = exec("php /var/www/html/maintenance/importDump.php < $tempFile 2>&1", $output, $return);
                unlink($tempFile);
                return ($return === 0);
            }
        }
        return false;
    }
}
$maintClass = DoImport::class;
require_once RUN_MAINTENANCE_IF_MAIN;
PHP_EOF
    cd /var/www/html
    php /tmp/do_import.php --mode="$choice"
}
# Main execution
main() {
    check_environment
    # Start MariaDB if it is not running
    service mariadb status > /dev/null 2>&1 || service mariadb start
    # Wait for MariaDB
    for i in {1..30}; do
        if mysql -e "SELECT 1;" &>/dev/null; then
            break
        fi
        sleep 1
    done
    CURRENT=$(get_current_version)
    echo "Current version: $CURRENT"
    echo ""
    echo "Checking for updates..."
    # "|| true" keeps set -e from aborting before we can report a failure
    LATEST=$(check_for_updates 2>/dev/null || true)
    if [ -z "$LATEST" ] || [[ "$LATEST" == *"ERROR"* ]]; then
        echo "Failed to check for updates"
        exit 1
    fi
    echo "Latest available: $LATEST"
    echo ""
    # Always proceed to analysis even if the versions match
    # (there might be content updates in the same version)
    echo "Proceeding with content analysis..."
    echo ""
    # Back up the database
    backup_database
    # Download the new XML
    if ! download_xml "$LATEST"; then
        echo "Failed to download new XML"
        exit 1
    fi
    # Analyze the differences
    analyze_and_import
    # Check whether there are changes to import
    if [ -f "/tmp/import_data.ser" ]; then
        echo ""
        read -p "Choose option (1-4): " -n 1 -r
        echo
        if [[ $REPLY =~ ^[1-4]$ ]]; then
            if [ "$REPLY" != "4" ]; then
                perform_import "$REPLY"
                # Update the version info
                echo "Import: $LATEST" > /var/www/html/.last_import
                echo "Date: $(date)" >> /var/www/html/.last_import
                # Rebuild indices
                echo "Rebuilding indices..."
                php maintenance/rebuildrecentchanges.php
                php maintenance/initSiteStats.php
            else
                echo "Update cancelled"
            fi
        else
            echo "Invalid option. Update cancelled"
        fi
    fi
    # Clean up temp files
    rm -f /tmp/import_data.ser /tmp/update_analysis.json /tmp/diff_*.txt /tmp/analyze_import.php /tmp/do_import.php 2>/dev/null
    echo ""
    echo "Done!"
}
# Run the main function
main "$@"
2.5: Entrypoint Script
nano entrypoint.sh
#!/bin/bash
echo "Starting CompleteNoobs Wiki..."
service mariadb start
# Wait for MariaDB to accept connections
for i in {1..30}; do
    if mysql -e "SELECT 1;" &>/dev/null; then
        echo "MariaDB ready!"
        break
    fi
    sleep 1
done
echo ""
echo "CompleteNoobs Wiki ready at: http://localhost:8080"
echo "Admin login: admin / AdminPass123!"
echo ""
echo "Features:"
echo "- Complete wiki content imported from XML"
echo "- License notices on all pages (via PageNotice)"
echo "- SyntaxHighlight for code blocks"
echo "- YouTube video embedding"
echo "- Contribution Scores special page (planned - not yet installed)"
echo "- XML update system (preserves local edits)"
echo ""
echo "To check for updates: docker exec -it completenoobs_wiki /var/www/html/check_updates.sh"
echo ""
# Hand off to Apache in the foreground (keeps the container running)
apache2-foreground
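Before building, it is worth sanity-checking that the scripts at least parse, since a typo here only surfaces minutes into the build:
bash -n setup_wiki.sh
bash -n update_xml.sh
bash -n entrypoint.sh
python3 -m py_compile download_latest_xml.py
bash -n and py_compile only check syntax; they do not execute anything.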
Step 3: Build and Run
3.1: Build the Image
docker build -t completenoobs/wiki:latest .
This will take several minutes.
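When the build finishes, confirm the image exists:
docker images completenoobs/wiki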
3.2: Run the Container
docker run -d -p 8080:80 \
  -v completenoobs_mysql:/var/lib/mysql \
  -v completenoobs_images:/var/www/html/images \
  --name completenoobs_wiki completenoobs/wiki:latest
- NOTE: The above docker run command is fine for quick testing. If you want to be able to export your wiki's database to an XML file that you can back up and share, use the method in the expanding info box below instead.
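Whichever method you use, you can confirm the container came up and watch the first-boot output with:
docker ps --filter name=completenoobs_wiki
docker logs -f completenoobs_wiki
(Press Ctrl+C to stop following the logs; the container keeps running.)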
If you want to be able to export your MediaWiki database to a dated XML file, use this docker run method:
To export your MediaWiki database to a dated XML file (e.g., 20250901.xml) and save it to the host's ~/wiki-container directory, run the export script inside the Docker container and use a volume mount to write the file to the host.
Step 1: Create Host Directory
On the host, ensure the ~/wiki-container directory exists:
mkdir -p ~/wiki-container
Step 2: Run Container with Volume Mount
If your container isn't already using a volume for ~/wiki-container, stop and remove it, then restart it with a volume mount:
docker stop completenoobs_wiki
docker rm completenoobs_wiki
docker run -d -p 8080:80 \
  -v ~/wiki-container:/export \
  -v completenoobs_mysql:/var/lib/mysql \
  -v completenoobs_images:/var/www/html/images \
  --name completenoobs_wiki \
  completenoobs/wiki:latest
Step 3: Run Export Script in Container
Access the container’s shell:
docker exec -it completenoobs_wiki bash
Inside the container, run the export script to create a dated XML file:
DATE=$(date +%Y%m%d)
php /var/www/html/maintenance/run.php dumpBackup.php --full --output=file:/export/$DATE.xml
exit
This writes the file (e.g., 20250901.xml) to /export in the container, which maps to ~/wiki-container on the host.
Step 4: Verify the File
On the host, check for the XML file:
ls ~/wiki-container
You should see a file like 20250901.xml.
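As a quick sanity check that it is a real MediaWiki export, look at the first few lines; the file should open with a <mediawiki> root element:
head -5 ~/wiki-container/*.xml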
Notes
- If you encounter permission issues, ensure the container's user has write access to /export (run this inside the container):
chmod -R 777 /export
- The script must be run inside the container, as it requires MediaWiki’s environment and database access.
- If your container uses a different volume setup, adjust the mount point accordingly.
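You can also run the whole export from the host in one line, with no interactive shell - convenient for a cron job. This assumes the container was started with the /export mount from Step 2 above:
docker exec completenoobs_wiki bash -c 'php /var/www/html/maintenance/run.php dumpBackup.php --full --output=file:/export/$(date +%Y%m%d).xml'
The single quotes make $(date ...) expand inside the container rather than on the host, keeping the command self-contained.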
Step 4: Test Everything
4.1: Check the Wiki
- Visit: http://localhost:8080
- You should see the PageNotice at the top
- Login with: admin / AdminPass123!
Restore the CompleteNoobs Main Page:
By default, the fresh MediaWiki install replaces the Main Page with content like "MediaWiki has been installed." To revert to the CompleteNoobs version, undo that revision:
1. Go to http://localhost:8080 in your browser.
2. On the Main Page, click View History (top-right corner).
3. Find the top revision by MediaWiki default and click Undo.
4. Scroll down and click Save changes to revert the Main Page.
Change Admin Password:
By default, the admin user's password is AdminPass123!. It is highly recommended to change this immediately. You can do this either through the wiki's web interface or directly in the Docker terminal.
Method 1: Change Password via Web Interface
This is the easiest method. You can change your password directly from the wiki itself.
1. Log in to your wiki with the default credentials: admin / AdminPass123!.
2. Once logged in, click your username (admin) in the top-right corner of the page.
3. From the drop-down menu, select Preferences.
4. On the Preferences page, go to the Password tab.
5. Enter the current password (AdminPass123!), then enter your new password twice.
6. Click Change password.
Your password is now changed, and you will need to use the new one for future logins.
Method 2: Change Password via Terminal (No-Email Reset)
If you have forgotten the password or prefer to use the command line, you can reset it directly inside the Docker container using a MediaWiki maintenance script.
1. Access the container's shell with the following command from your host machine:
docker exec -it completenoobs_wiki bash
2. Once inside the container, use the changePassword.php maintenance script to set a new password. Running it through run.php is the modern, recommended way to invoke MediaWiki maintenance scripts. Change NEWPASSWORD to your new password:
php /var/www/html/maintenance/run.php changePassword.php --user=admin --password=NEWPASSWORD
3. Type exit to leave the container's shell.
The admin password has now been reset. You can log in to your wiki with the new password.
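If you prefer not to open an interactive shell at all, the same reset works as a one-liner from the host:
docker exec completenoobs_wiki php /var/www/html/maintenance/run.php changePassword.php --user=admin --password=NEWPASSWORD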
4.2: Test Extensions
- YouTube: Edit any page and add a <youtube>VIDEO_ID</youtube> tag (replace VIDEO_ID with a real YouTube video ID; see the combined sample below)
- PageNotice: Should already be visible at the top
- SyntaxHighlight: Add code blocks with <syntaxhighlight lang="python">code here</syntaxhighlight>
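As a combined smoke test, paste something like the following wikitext into any page and preview it (VIDEO_ID is a placeholder - substitute a real video ID):
<youtube>VIDEO_ID</youtube>
<syntaxhighlight lang="python">
print("SyntaxHighlight works")
</syntaxhighlight>
If both render, the extensions are loaded correctly.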
4.3: Check Status
docker exec completenoobs_wiki /var/www/html/check_status.sh
Step 5: XML Update Operations
- NOTE: update_xml.sh still needs a lot of work; it is just an idea placeholder for now.
5.1: Check for Updates (Interactive)
docker exec -it completenoobs_wiki /var/www/html/check_updates.sh
This will:
- Check the CompleteNoobs XML repository for new dumps
- Compare with your current version
- Ask for confirmation before updating
- Import ONLY new pages (preserves your edits)
- Create a backup before making changes
5.2: Force Update (Non-Interactive)
The update menu expects an option number (1-4), so pipe in option 1 to import new pages only without prompting:
docker exec completenoobs_wiki bash -c "echo '1' | /var/www/html/update_xml.sh"
5.3: Manual Update Process
# 1. Enter container
docker exec -it completenoobs_wiki bash
# 2. Check current version
cat /var/www/html/.last_import
# 3. Run update
/var/www/html/check_updates.sh
# 4. Exit container
exit
Step 6: Troubleshooting Commands
Access container shell:
docker exec -it completenoobs_wiki bash
Edit configuration:
docker exec -it completenoobs_wiki nano /var/www/html/LocalSettings.php
Check logs:
docker logs completenoobs_wiki
View the MediaWiki debug log:
docker exec completenoobs_wiki tail -f /tmp/mediawiki-debug.log
Complete restart:
docker stop completenoobs_wiki
docker rm completenoobs_wiki
docker run -d -p 8080:80 \
  -v completenoobs_mysql:/var/lib/mysql \
  -v completenoobs_images:/var/www/html/images \
  --name completenoobs_wiki completenoobs/wiki:latest
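A full restart like this does not lose content: the database and uploaded images live in the named volumes, which survive docker rm. You can list them with:
docker volume ls
Only removing the volumes themselves (docker volume rm) would actually wipe the wiki data.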
Expected Results
- Working wiki with imported CompleteNoobs content
- PageNotice visible at top of all pages
- All extensions functional
- Text editor (nano) available in container
- Utility scripts for maintenance
- XML update system that preserves local edits
Need to add
- A way for users to back up their local custom wiki - an XML exporter (the dated-export method in Step 3 partly covers this)