Date: February 16, 2026
Prepared for: Forums on Fire Owner & Appnate Team
Project: Website Migration & Merger
This document outlines the technical plan for migrating Forums on Fire content (currently on Duda CMS) to Appnate.com (WordPress). The migration involves extracting all pages, blog posts, and media from Duda, importing them into WordPress, and setting up 301 redirects to preserve SEO.
Key Challenge: Duda CMS does not provide a native export-to-WordPress feature. Migration requires API extraction or web scraping, followed by import via the WordPress REST API or the WP All Import plugin.
forumsonfire.com is not resolving via DNS; it may be an internal/staging URL, or the domain needs verification.

⚠️ Needed from the Forums on Fire owner:
1. Confirmation of the correct domain/URL
2. Duda admin access credentials
3. Duda API credentials (Business Tools → API Access)
4. A complete inventory of pages/posts to migrate
Duda provides a comprehensive REST API for content extraction.
API Base URL: https://api.duda.co/api/
Key Endpoints for Export:
| Endpoint | Method | Description |
|---|---|---|
| /sites/multiscreen/{site_name} | GET | Site details |
| /sites/multiscreen/{site_name}/pages | GET | List all pages |
| /sites/multiscreen/{site_name}/content | GET | Content Library data |
| /sites/multiscreen/{site_name}/blog/posts | GET | List blog posts |
| /sites/multiscreen/{site_name}/blog/posts/{post_id} | GET | Individual blog post with content |
Authentication: Basic Auth with API username/password from Duda Dashboard (Business Tools → API Access)
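As a quick sanity check before running the full export, the credentials can be verified with a single authenticated request. A minimal sketch (site name and credentials are placeholders):

```python
import requests

DUDA_API = "https://api.duda.co/api"
# Placeholder credentials from Duda Dashboard → Business Tools → API Access
auth = ("YOUR_API_USERNAME", "YOUR_API_PASSWORD")

# Fetch basic site details; a 200 response confirms the credentials and site name are valid
resp = requests.get(f"{DUDA_API}/sites/multiscreen/your-site-name", auth=auth)
resp.raise_for_status()
print(resp.json())
```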
Blog Post Object Fields:
- id - Unique post identifier
- title - Post title
- content - HTML content
- publication_date - Publish timestamp
- featured_image - Main image URL
- author - Author info
- categories - Post categories
- tags - Post tags
- slug - URL slug
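For reference, these Duda fields map onto the WordPress REST API post fields used later in this plan roughly as follows. This is an illustrative sketch, not an exhaustive mapping; categories, tags, and the featured image must be converted to WordPress IDs during import:

```python
# Rough Duda → WordPress field mapping used during import (illustrative, not exhaustive)
DUDA_TO_WP_FIELDS = {
    "title": "title",
    "content": "content",                 # HTML is accepted as-is by the WP REST API
    "publication_date": "date",
    "slug": "slug",
    "featured_image": "featured_media",   # upload via /media first, then pass the media ID
    "categories": "categories",           # resolve names to WordPress category IDs
    "tags": "tags",                       # resolve names to WordPress tag IDs
    "author": None,                       # map manually to an existing WordPress user
}
```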
Limitations:
- API access requires a Business plan or higher
- Rate limits apply (the API returns 429 errors when exceeded; see the retry sketch below)
- Page content (non-blog) requires the Page Elements API for full extraction
- Images are referenced by URL and must be downloaded separately
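Because of the rate limits, the export script may want to retry on 429 responses rather than fail outright. A minimal sketch of that pattern (the delays and retry count are arbitrary choices, not Duda-documented values):

```python
import time
import requests

def get_with_retry(url, auth, max_retries=5):
    """GET a Duda API URL, backing off and retrying when rate-limited (HTTP 429)."""
    for attempt in range(max_retries):
        resp = requests.get(url, auth=auth)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Honor Retry-After if present, otherwise use simple exponential backoff
        wait = int(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError(f"Still rate-limited after {max_retries} attempts: {url}")
```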
Duda's support portal mentions export capabilities, but:
- There is no direct WordPress export format
- Exports may be limited to HTML/assets as a ZIP
- Exported content requires manual restructuring
If API access is unavailable:
- Scrape public pages using Python (BeautifulSoup/Scrapy), as in the sketch below
- Extract the HTML content, then clean and format it
- Download all images separately
- More labor-intensive, but works without API access
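A minimal scraping sketch, assuming the public pages are reachable once the correct domain is confirmed. The content selector is a guess and will need adjusting after inspecting the real Duda markup:

```python
import requests
from bs4 import BeautifulSoup

def scrape_page(url):
    """Fetch one public page and return its title, main HTML content, and image URLs."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    title = soup.title.string.strip() if soup.title and soup.title.string else url
    # Assumption: page content lives inside <main>; fall back to <body> otherwise
    main = soup.find("main") or soup.body
    images = [img.get("src") for img in main.find_all("img") if img.get("src")]
    return {"url": url, "title": title, "html": str(main), "images": images}

if __name__ == "__main__":
    page = scrape_page("https://forumsonfire.com/about")  # placeholder URL
    print(page["title"], "-", len(page["images"]), "images found")
```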
API Base URL: https://appnate.com/wp-json/wp/v2/
Key Endpoints:
| Endpoint | Method | Description |
|---|---|---|
| /posts | POST | Create new post |
| /pages | POST | Create new page |
| /media | POST | Upload image/file |
| /categories | POST | Create category |
| /tags | POST | Create tag |
Create Post Request:
POST /wp/v2/posts
Authorization: Basic base64(username:app_password)
Content-Type: application/json
{
"title": "Post Title",
"content": "<p>HTML content here</p>",
"status": "publish",
"date": "2024-01-15T10:00:00",
"slug": "post-url-slug",
"categories": [1, 2],
"tags": [3, 4],
"featured_media": 123
}
Authentication Requirements:
- Application Passwords (WordPress 5.6+), or
- JWT Authentication plugin
- Admin user credentials
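A quick way to confirm an Application Password works before running the importer is to request the authenticated user's own profile. A minimal sketch (username and password are placeholders generated under Users → Profile → Application Passwords):

```python
import requests

WP_API = "https://appnate.com/wp-json/wp/v2"
# Placeholder Application Password credentials for an admin user
auth = ("wp_admin_user", "xxxx xxxx xxxx xxxx xxxx xxxx")

# /users/me returns the authenticated user's profile; a 200 confirms auth works
resp = requests.get(f"{WP_API}/users/me", auth=auth)
resp.raise_for_status()
print(resp.json()["name"], "authenticated OK")
```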
Plugin: WP All Import (Free + Pro version)
URL: https://wordpress.org/plugins/wp-all-import/
Process:
1. Export Duda content to a CSV/XML file
2. Upload it to WordPress via WP All Import
3. Map fields using the drag-and-drop interface
4. The import handles images, custom fields, etc.
Advantages:
- No coding required
- Visual field mapping
- Handles large imports in batches
- Can be re-run to update existing posts
Pro Version Features ($99):
- Import images from URLs
- Custom field support
- Scheduled/recurring imports
- Direct URL import
For small sites (<20 pages):
- Copy content directly
- Re-create pages in the WordPress editor
- Upload images manually
- Time-consuming but simple
Based on typical Duda site complexity, the recommended path is API export followed by REST API import, supported by the three scripts below: a Duda exporter, a WordPress importer, and a redirect generator.
#!/usr/bin/env python3
"""
duda_exporter.py - Export content from Duda CMS via API
Usage: python duda_exporter.py --site your-site-name --user API_USER --password API_PASS --output ./export
"""
import requests
import json
import os
import argparse
from urllib.parse import urlparse
from pathlib import Path
class DudaExporter:
def __init__(self, api_user, api_pass, site_name):
self.base_url = "https://api.duda.co/api"
self.auth = (api_user, api_pass)
self.site_name = site_name
def get_site_info(self):
"""Get basic site information"""
url = f"{self.base_url}/sites/multiscreen/{self.site_name}"
response = requests.get(url, auth=self.auth)
response.raise_for_status()
return response.json()
def get_pages(self):
"""Get all pages from the site"""
url = f"{self.base_url}/sites/multiscreen/{self.site_name}/pages"
response = requests.get(url, auth=self.auth)
response.raise_for_status()
return response.json()
def get_content_library(self):
"""Get content library data"""
url = f"{self.base_url}/sites/multiscreen/{self.site_name}/content"
response = requests.get(url, auth=self.auth)
response.raise_for_status()
return response.json()
def get_blog_posts(self):
"""Get all blog posts"""
url = f"{self.base_url}/sites/multiscreen/{self.site_name}/blog/posts"
response = requests.get(url, auth=self.auth)
if response.status_code == 404:
return [] # No blog configured
response.raise_for_status()
return response.json()
def get_blog_post(self, post_id):
"""Get individual blog post with full content"""
url = f"{self.base_url}/sites/multiscreen/{self.site_name}/blog/posts/{post_id}"
response = requests.get(url, auth=self.auth)
response.raise_for_status()
return response.json()
def download_image(self, image_url, output_dir):
"""Download an image and return local path"""
try:
response = requests.get(image_url, stream=True)
response.raise_for_status()
# Get filename from URL
parsed = urlparse(image_url)
filename = os.path.basename(parsed.path)
if not filename:
filename = f"image_{hash(image_url)}.jpg"
local_path = os.path.join(output_dir, filename)
with open(local_path, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
f.write(chunk)
return local_path
except Exception as e:
print(f"Failed to download {image_url}: {e}")
return None
def export_all(self, output_dir):
"""Export all content to output directory"""
os.makedirs(output_dir, exist_ok=True)
os.makedirs(os.path.join(output_dir, 'images'), exist_ok=True)
export_data = {
'site_info': self.get_site_info(),
'pages': self.get_pages(),
'content_library': self.get_content_library(),
'blog_posts': []
}
# Get full blog posts
posts_list = self.get_blog_posts()
for post_summary in posts_list:
if 'id' in post_summary:
full_post = self.get_blog_post(post_summary['id'])
export_data['blog_posts'].append(full_post)
# Download featured image if exists
if 'featured_image' in full_post and full_post['featured_image']:
img_path = self.download_image(
full_post['featured_image'],
os.path.join(output_dir, 'images')
)
full_post['local_featured_image'] = img_path
# Save JSON export
with open(os.path.join(output_dir, 'duda_export.json'), 'w') as f:
json.dump(export_data, f, indent=2)
# Create CSV for WP All Import
self.create_wp_import_csv(export_data, output_dir)
return export_data
def create_wp_import_csv(self, data, output_dir):
"""Create CSV file for WP All Import"""
import csv
# Blog posts CSV
if data['blog_posts']:
with open(os.path.join(output_dir, 'blog_posts.csv'), 'w', newline='', encoding='utf-8') as f:
writer = csv.writer(f)
writer.writerow([
'title', 'content', 'date', 'slug',
'featured_image', 'categories', 'tags', 'status'
])
for post in data['blog_posts']:
writer.writerow([
post.get('title', ''),
post.get('content', ''),
post.get('publication_date', ''),
post.get('slug', ''),
post.get('featured_image', ''),
','.join(post.get('categories', [])),
','.join(post.get('tags', [])),
'publish'
])
print(f"Export complete! Files saved to {output_dir}")
def main():
parser = argparse.ArgumentParser(description='Export Duda CMS content')
parser.add_argument('--site', required=True, help='Duda site name')
parser.add_argument('--user', required=True, help='Duda API username')
parser.add_argument('--password', required=True, help='Duda API password')
parser.add_argument('--output', default='./duda_export', help='Output directory')
args = parser.parse_args()
exporter = DudaExporter(args.user, args.password, args.site)
exporter.export_all(args.output)
if __name__ == '__main__':
main()
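Example invocation (site name and credentials are placeholders): `python duda_exporter.py --site forums-on-fire --user API_USER --password API_PASS --output ./duda_export`. This produces duda_export.json, blog_posts.csv, and an images/ folder in the output directory.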
#!/usr/bin/env python3
"""
wp_importer.py - Import content to WordPress via REST API
Usage: python wp_importer.py --site https://appnate.com --user WP_ADMIN --password APP_PASSWORD --input ./export
"""
import requests
import json
import os
import base64
import argparse
from pathlib import Path
import mimetypes
class WordPressImporter:
def __init__(self, site_url, username, app_password):
self.site_url = site_url.rstrip('/')
self.api_url = f"{self.site_url}/wp-json/wp/v2"
# Basic auth with application password
credentials = f"{username}:{app_password}"
self.auth_header = base64.b64encode(credentials.encode()).decode()
self.headers = {
'Authorization': f'Basic {self.auth_header}',
'Content-Type': 'application/json'
}
def upload_media(self, file_path, alt_text=''):
"""Upload an image to WordPress media library"""
filename = os.path.basename(file_path)
mime_type, _ = mimetypes.guess_type(file_path)
with open(file_path, 'rb') as f:
file_data = f.read()
headers = {
'Authorization': f'Basic {self.auth_header}',
'Content-Type': mime_type or 'image/jpeg',
'Content-Disposition': f'attachment; filename="{filename}"'
}
response = requests.post(
f"{self.api_url}/media",
headers=headers,
data=file_data
)
response.raise_for_status()
media = response.json()
# Update alt text if provided
if alt_text:
requests.post(
f"{self.api_url}/media/{media['id']}",
headers=self.headers,
json={'alt_text': alt_text}
)
return media['id']
def create_category(self, name, slug=None):
"""Create a category and return its ID"""
data = {'name': name}
if slug:
data['slug'] = slug
# Check if exists first
response = requests.get(
f"{self.api_url}/categories",
headers=self.headers,
params={'slug': slug or name.lower().replace(' ', '-')}
)
existing = response.json()
if existing:
return existing[0]['id']
response = requests.post(
f"{self.api_url}/categories",
headers=self.headers,
json=data
)
response.raise_for_status()
return response.json()['id']
def create_tag(self, name, slug=None):
"""Create a tag and return its ID"""
data = {'name': name}
if slug:
data['slug'] = slug
# Check if exists first
response = requests.get(
f"{self.api_url}/tags",
headers=self.headers,
params={'slug': slug or name.lower().replace(' ', '-')}
)
existing = response.json()
if existing:
return existing[0]['id']
response = requests.post(
f"{self.api_url}/tags",
headers=self.headers,
json=data
)
response.raise_for_status()
return response.json()['id']
def create_post(self, title, content, **kwargs):
"""Create a blog post"""
data = {
'title': title,
'content': content,
'status': kwargs.get('status', 'draft') # Start as draft for review
}
if kwargs.get('date'):
data['date'] = kwargs['date']
if kwargs.get('slug'):
data['slug'] = kwargs['slug']
if kwargs.get('featured_media'):
data['featured_media'] = kwargs['featured_media']
if kwargs.get('categories'):
data['categories'] = kwargs['categories']
if kwargs.get('tags'):
data['tags'] = kwargs['tags']
response = requests.post(
f"{self.api_url}/posts",
headers=self.headers,
json=data
)
response.raise_for_status()
return response.json()
def create_page(self, title, content, **kwargs):
"""Create a page"""
data = {
'title': title,
'content': content,
'status': kwargs.get('status', 'draft')
}
if kwargs.get('slug'):
data['slug'] = kwargs['slug']
if kwargs.get('parent'):
data['parent'] = kwargs['parent']
response = requests.post(
f"{self.api_url}/pages",
headers=self.headers,
json=data
)
response.raise_for_status()
return response.json()
def import_from_duda_export(self, export_dir):
"""Import from Duda export JSON"""
export_file = os.path.join(export_dir, 'duda_export.json')
with open(export_file, 'r') as f:
data = json.load(f)
imported = {
'posts': [],
'pages': [],
'media': []
}
# Import blog posts
for post in data.get('blog_posts', []):
print(f"Importing post: {post.get('title', 'Untitled')}")
# Upload featured image if exists
featured_media = None
if post.get('local_featured_image') and os.path.exists(post['local_featured_image']):
try:
featured_media = self.upload_media(post['local_featured_image'])
imported['media'].append(featured_media)
except Exception as e:
print(f" Warning: Could not upload image: {e}")
# Create categories
category_ids = []
for cat in post.get('categories', []):
try:
cat_id = self.create_category(cat)
category_ids.append(cat_id)
except:
pass
# Create tags
tag_ids = []
for tag in post.get('tags', []):
try:
tag_id = self.create_tag(tag)
tag_ids.append(tag_id)
except:
pass
# Create the post
try:
wp_post = self.create_post(
title=post.get('title', 'Untitled'),
content=post.get('content', ''),
date=post.get('publication_date'),
slug=post.get('slug'),
featured_media=featured_media,
categories=category_ids,
tags=tag_ids,
status='draft' # Review before publishing
)
imported['posts'].append({
'duda_id': post.get('id'),
'wp_id': wp_post['id'],
'title': wp_post['title']['rendered']
})
print(f" ✓ Created WordPress post ID: {wp_post['id']}")
except Exception as e:
print(f" ✗ Failed to import: {e}")
# Save import report
report_path = os.path.join(export_dir, 'import_report.json')
with open(report_path, 'w') as f:
json.dump(imported, f, indent=2)
print(f"\nImport complete! Report saved to {report_path}")
return imported
def main():
parser = argparse.ArgumentParser(description='Import content to WordPress')
parser.add_argument('--site', required=True, help='WordPress site URL')
parser.add_argument('--user', required=True, help='WordPress username')
parser.add_argument('--password', required=True, help='WordPress application password')
parser.add_argument('--input', required=True, help='Duda export directory')
args = parser.parse_args()
importer = WordPressImporter(args.site, args.user, args.password)
importer.import_from_duda_export(args.input)
if __name__ == '__main__':
main()
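Example invocation (URL and credentials are placeholders): `python wp_importer.py --site https://appnate.com --user WP_ADMIN --password "xxxx xxxx xxxx xxxx" --input ./duda_export`. Posts are created as drafts so they can be reviewed before publishing, and the script writes an import_report.json mapping Duda IDs to WordPress post IDs.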
#!/usr/bin/env python3
"""
redirect_generator.py - Generate 301 redirect rules
Outputs: .htaccess rules, nginx config, or WordPress Redirection plugin format
"""
import csv
import json
import argparse
def generate_htaccess(mappings, output_file):
"""Generate Apache .htaccess redirect rules"""
with open(output_file, 'w') as f:
f.write("# 301 Redirects - Forums on Fire to Appnate\n")
f.write("RewriteEngine On\n\n")
for old_url, new_url in mappings.items():
# Remove domain, keep path
old_path = old_url.replace('https://forumsonfire.com', '').replace('http://forumsonfire.com', '')
f.write(f"RewriteRule ^{old_path.lstrip('/')}$ {new_url} [R=301,L]\n")
print(f"Generated {output_file}")
def generate_nginx(mappings, output_file):
"""Generate nginx redirect rules"""
with open(output_file, 'w') as f:
f.write("# 301 Redirects - Forums on Fire to Appnate\n")
f.write("# Add this to your nginx server block\n\n")
for old_url, new_url in mappings.items():
old_path = old_url.replace('https://forumsonfire.com', '').replace('http://forumsonfire.com', '')
f.write(f"rewrite ^{old_path}$ {new_url} permanent;\n")
print(f"Generated {output_file}")
def generate_wp_redirection(mappings, output_file):
"""Generate CSV for WordPress Redirection plugin"""
with open(output_file, 'w', newline='') as f:
writer = csv.writer(f)
writer.writerow(['source', 'target', 'regex', 'http_code', 'match', 'hits', 'title'])
for old_url, new_url in mappings.items():
old_path = old_url.replace('https://forumsonfire.com', '').replace('http://forumsonfire.com', '')
writer.writerow([old_path, new_url, 0, 301, 'url', 0, ''])
print(f"Generated {output_file}")
def main():
parser = argparse.ArgumentParser(description='Generate redirect rules')
parser.add_argument('--input', required=True, help='JSON file with URL mappings')
parser.add_argument('--format', choices=['htaccess', 'nginx', 'wp-redirection'], default='htaccess')
parser.add_argument('--output', default='redirects')
args = parser.parse_args()
with open(args.input, 'r') as f:
mappings = json.load(f)
if args.format == 'htaccess':
generate_htaccess(mappings, f"{args.output}.htaccess")
elif args.format == 'nginx':
generate_nginx(mappings, f"{args.output}.conf")
elif args.format == 'wp-redirection':
generate_wp_redirection(mappings, f"{args.output}.csv")
if __name__ == '__main__':
main()
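Example invocation (assuming a mapping file named url_mappings.json, described below): `python redirect_generator.py --input url_mappings.json --format htaccess --output redirects`.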
Create a JSON mapping file (e.g. url_mappings.json, passed to the script via --input) that maps old URLs to new URLs:
{
"https://forumsonfire.com/": "https://appnate.com/forums-on-fire/",
"https://forumsonfire.com/about": "https://appnate.com/about/",
"https://forumsonfire.com/blog/post-title": "https://appnate.com/blog/post-title/",
"https://forumsonfire.com/contact": "https://appnate.com/contact/"
}
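Once the Duda export has run, a first pass at this mapping for blog posts can be generated from the export JSON instead of being written by hand. A minimal sketch; the /blog/{slug} URL patterns on both sides are assumptions that must be confirmed against the live sites:

```python
import json

# Build a draft URL mapping for blog posts from the exporter's JSON output
with open("duda_export/duda_export.json") as f:
    export = json.load(f)

mappings = {}
for post in export.get("blog_posts", []):
    slug = post.get("slug")
    if slug:
        # Assumed URL structures; verify against the real old and new permalinks
        mappings[f"https://forumsonfire.com/blog/{slug}"] = f"https://appnate.com/blog/{slug}/"

with open("url_mappings.json", "w") as f:
    json.dump(mappings, f, indent=2)
print(f"Wrote {len(mappings)} draft redirects to url_mappings.json")
```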
| Phase | Duration | Dependencies |
|---|---|---|
| Phase 1: Audit & Planning | 1-2 days | API credentials, site access |
| Phase 2: Content Export | 1 day | Export script, API access |
| Phase 3: WordPress Prep | 1 day | Staging environment |
| Phase 4: Content Import | 1-3 days | Depends on content volume |
| Phase 5: QA & Testing | 2-3 days | All content imported |
| Phase 6: Redirects Setup | 1 day | URL mapping complete |
| Phase 7: Go-Live | 1 day | All testing passed |
| Phase 8: Monitoring | 1 week | Post-launch |
Factors that can extend the timeline:
- Large volume of content (100+ pages/posts)
- Complex custom functionality to recreate
- eCommerce migration required
- Multiple languages/localization
- Approval delays between phases
| Risk | Impact | Likelihood | Mitigation |
|---|---|---|---|
| API access unavailable | Major | Medium | Fall back to web scraping |
| Content formatting issues | Medium | High | Manual review and editing |
| Broken internal links | Medium | High | Find/replace after import |
| Lost SEO rankings | Major | Medium | Proper 301 redirects |
| Missing images | Medium | Medium | Pre-download all media |
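For the broken-internal-links risk, an alternative to find/replace after import is to rewrite internal links in the exported HTML before importing. A minimal sketch, assuming old links point at forumsonfire.com and should point at the corresponding appnate.com paths (where paths change, the redirect mapping should be applied instead of a plain domain swap):

```python
import re

def rewrite_internal_links(html):
    """Point old forumsonfire.com links at appnate.com before importing the content."""
    # Assumed straight domain swap; refine with the agreed URL mapping where paths differ
    return re.sub(r"https?://(www\.)?forumsonfire\.com", "https://appnate.com", html)

# Example: rewrite links in every exported blog post's HTML content
# for post in export_data["blog_posts"]:
#     post["content"] = rewrite_internal_links(post.get("content", ""))
```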
| Risk | Impact | Likelihood | Mitigation |
|---|---|---|---|
| Form data loss | Medium | Low | Export submissions first |
| DNS propagation delays | Low | Medium | Plan 24-48h buffer |
| Plugin compatibility | Medium | Low | Test on staging first |
| Mobile layout issues | Medium | Medium | Test responsive design |
Before migration:
- Export all form submissions from Duda (see the form data loss risk above)

During migration:
- Test everything on staging; import posts as drafts and review before publishing anything

After migration:
- Verify the 301 redirects, watch for broken links/404s, and monitor traffic during the one-week post-launch window (Phase 8)
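As part of the post-migration checks, the redirect mapping can be verified automatically. A minimal sketch, assuming the url_mappings.json file from the redirects section:

```python
import json
import requests

# Verify that every old URL 301-redirects to its mapped new URL
with open("url_mappings.json") as f:
    mappings = json.load(f)

for old_url, new_url in mappings.items():
    resp = requests.get(old_url, allow_redirects=False, timeout=15)
    location = resp.headers.get("Location", "")
    ok = resp.status_code == 301 and location.rstrip("/") == new_url.rstrip("/")
    status = "OK  " if ok else "FAIL"
    print(f"{status} {old_url} -> {resp.status_code} {location}")
```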