I'd been putting it off for over a year. I had two websites I had made in the past that were never going to change again.
One was the former version of this blog: nine years of posts, hosted for the moment on a full WordPress install. Every so often I would get my hosting company's automated message that they had auto-upgraded my WordPress version. I didn't need to worry about broken links (there were likely plenty), and I didn't need to worry about preserving URL paths (Google would find and re-index the pages at their new home). But I didn't want to keep paying WordPress hosting fees for a site I was never going to update again.
The other site was one of Heidi's: http://sabbatical.vicarofbolingbrook.net/ It was a great site for a very specific moment in time, and it was also our first test of the Ghost hosting platform. Ghost has a phenomenal editor, and if you use their hosting, you get a polished all-in-one setup: a solid theme, a generous traffic allotment, and tons of other features. It served Heidi's blog well while we were on sabbatical at the end of 2014 -- but it was not worth the $15/mo it would take to keep it on Ghost.
How I got the sites downloaded
Browsers' "Save Website" functionality is always lacking. They either save into a proprietary format that doesn't transfer well (including to future/past versions of the same browser), or they save just that one page without following links or rewriting those links to point at each other.
Fortunately, there's an open source tool called HTTrack Website Copier. Tools with websites like this make me think one of two things: 1) it's immediately legit because it looks like a single programmer made it, or 2) it's made by hackers and downloading the app will get every secret on my computer stolen. It IS fully published on GitHub, however, so nefarious code is much less likely. Just to be safe, I installed it on one of my local virtual machines with a different OS, which I use for all trial software.
Downloading the websites worked like a charm, giving me a folder of files with correct relative links between them. I could open them from my desktop, from a remote server - anywhere - and they worked as I expected.
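If you want to try the same thing, a basic HTTrack run from the command line looks roughly like this -- the domain is a placeholder, and the filter pattern may need adjusting for your own site:

```shell
# Mirror a site into ./mirror, following links and rewriting them
# so the copy works offline. blog.example.com is a placeholder.
httrack "https://blog.example.com/" \
    -O ./mirror \
    "+*.blog.example.com/*" \
    -v
```

The `+` pattern keeps HTTrack from wandering off onto other domains while it follows links; `-O` sets the output folder and `-v` shows progress as it crawls.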
Then came the question of where to put them ...
Amazon S3 Static Website Hosting
I'd mentally filed a note several years ago when Amazon announced they were supporting website hosting from S3 buckets. For those who have never heard of S3, it is one of the building blocks of Amazon Web Services. It's where you put files. It doesn't care what kind or what extension ... in fact, I'm not even sure it cares that the file has an extension at all. You just dump files there - sometimes for use in other things, sometimes as backups - and they can be given permissions anywhere from super-secret, to open for the world, to anywhere in between.
Chad Thompson, a multi-talented developer in Iowa (perhaps best known for creating VagrantPress), wrote an article several years ago about how to configure S3 to do the web serving. It worked like a charm.
I uploaded, I set the permissions, I edited the DNS CNAME records, and I logged out for the night. I had downloaded two complete websites, migrated them, and re-served them within 30 minutes (probably 10 of which were spent reading the tutorials and documentation).
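Those steps can be sketched with the AWS CLI. The bucket name here matches the site's hostname (which S3 requires for CNAME-based hosting); the region in the endpoint is just an example, and newer AWS accounts may need the bucket's "Block Public Access" settings relaxed before the public-read ACL will take effect:

```shell
# Create a bucket named after the site's hostname (substitute your own).
aws s3 mb s3://sabbatical.vicarofbolingbrook.net

# Upload the mirrored folder and make the objects world-readable.
aws s3 sync ./mirror s3://sabbatical.vicarofbolingbrook.net --acl public-read

# Turn on static website hosting for the bucket.
aws s3 website s3://sabbatical.vicarofbolingbrook.net \
    --index-document index.html \
    --error-document 404.html

# Finally, point a DNS CNAME at the bucket's website endpoint, e.g.:
#   sabbatical.vicarofbolingbrook.net
#     -> sabbatical.vicarofbolingbrook.net.s3-website-us-east-1.amazonaws.com
```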
Anytime a church or organization is ready to switch websites, this is one of the best, cheapest, and easiest ways to avoid losing the old one.
Then again, sometimes it's OK to let websites die. :)