CDN-ing my Ghost blog

Out of pure pessimism after reading a Reddit post about unwanted traffic caused by image assets, the engineer in me wanted to make sure everything I was building followed best practices, so that even when things blow up, my servers are minimally affected.
My self-hosted services run on a Raspberry Pi cluster as a Docker Swarm, meaning even this Ghost instance runs as a Docker container. As an Amazonian, I wanted to use AWS for anything I can't solve with consumer-grade hardware and a consumer-grade internet plan, so I decided that S3 and CloudFront as a CDN for my Ghost assets was the way to go. Since I'm already using Route53 as my DNS solution, integration with CloudFront, ACM for SSL, etc. should be a bit easier. Or so I thought.
After digging through some open-source options, I landed on Ghost-Storage-S3 since it had more recent commits than the alternatives. Note that I don't really have a clue how stable each of these solutions is, but the implementation looked simple enough that I'd be able to change things as needed.
The repository also suggests ways to integrate the S3 storage adapter without having to rebuild the Docker image, so I decided to give it a go.
On my NFS mount, already accessible to my entire Pi fleet (and mounted as a volume in the Ghost container), I cloned the repo and built the package with npm:
$ git clone https://github.com/abstractvector/Ghost-Storage-S3
$ cd Ghost-Storage-S3/
$ npm run build
Considering my Pis were fresh Raspbian installs with essentially only Docker installed on top, npm threw a build error:
[webpack-cli] Error: Cannot find module 'path-browserify'
Require stack:
- /.../Ghost-Storage-S3/webpack.config.js
- /usr/share/nodejs/webpack-cli/lib/webpack-cli.js
- /usr/share/nodejs/webpack-cli/lib/bootstrap.js
- /usr/share/nodejs/webpack-cli/bin/cli.js
- /usr/share/nodejs/webpack/bin/webpack.js
at Module._resolveFilename (node:internal/modules/cjs/loader:1134:15)
at Function.resolve (node:internal/modules/helpers:188:19)
at Object.<anonymous> (/home/jilouis/mount/ghost/Ghost-Storage-S3/webpack.config.js:18:31)
at Module._compile (node:internal/modules/cjs/loader:1356:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1414:10)
at Module.load (node:internal/modules/cjs/loader:1197:32)
at Module._load (node:internal/modules/cjs/loader:1013:12)
at Module.require (node:internal/modules/cjs/loader:1225:19)
at require (node:internal/modules/helpers:177:18)
at WebpackCLI.tryRequireThenImport (/usr/share/nodejs/webpack-cli/lib/webpack-cli.js:224:22) {
code: 'MODULE_NOT_FOUND',
requireStack: [
'.../Ghost-Storage-S3/webpack.config.js',
'/usr/share/nodejs/webpack-cli/lib/webpack-cli.js',
'/usr/share/nodejs/webpack-cli/lib/bootstrap.js',
'/usr/share/nodejs/webpack-cli/bin/cli.js',
'/usr/share/nodejs/webpack/bin/webpack.js'
]
}
This meant playing some whack-a-mole and installing the missing dependencies:
$ npm install path-browserify
$ npm run build
While npm was busy, I was working on setting up an S3 bucket in parallel.
In the AWS console, I created a new S3 bucket ghost-assets-<my-account-id> with default settings.
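The same step can be sketched with the AWS CLI; the region and the ACCOUNT_ID variable below are placeholders for your own values:

```shell
# Create the assets bucket; outside us-east-1, a LocationConstraint
# matching the region is required.
aws s3api create-bucket \
  --bucket "ghost-assets-${ACCOUNT_ID}" \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2
```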
I already had an ACM certificate for my domain, but you can create one in just a few clicks. I made sure my cert covered the wildcard subdomain *.seattlebubbles.com, with a plan to use cdn.seattlebubbles.com for the CloudFront distribution.
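If you prefer the CLI, a sketch of the certificate request; note that certificates attached to CloudFront must be issued in us-east-1, regardless of where your bucket lives:

```shell
# Request a DNS-validated wildcard certificate; ACM then gives you
# validation records to add to the Route53 hosted zone.
aws acm request-certificate \
  --domain-name "*.seattlebubbles.com" \
  --validation-method DNS \
  --region us-east-1
```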

The next step was to create a CloudFront distribution, choosing the S3 bucket created above as the Origin domain (it shows up as a drop-down item).
Origin access control settings should be enabled; click "Create new OAC" as well. We'll update the bucket policy to match later.
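With OAC, CloudFront needs explicit read access to the bucket. A sketch of the bucket policy the console suggests, with the account and distribution IDs left as placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::ghost-assets-<my-account-id>/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::<my-account-id>:distribution/<distribution-id>"
        }
      }
    }
  ]
}
```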
Under Viewer protocol policy, change the setting to Redirect HTTP to HTTPS, point the distribution at the ACM certificate, and add the custom CNAME (cdn.seattlebubbles.com in my case).

I also enabled WAF, though this is optional. I kept every other option at its default or recommended value.
And lastly, we point a record in the Route53 hosted zone at the CloudFront distribution.
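This can also be done from the CLI as an alias record; a sketch, where the hosted zone ID and distribution domain are placeholders (Z2FDTNDATAQYW2 is the fixed hosted zone ID used for all CloudFront alias targets):

```shell
# UPSERT an alias A record pointing the cdn subdomain at the distribution.
aws route53 change-resource-record-sets \
  --hosted-zone-id "${HOSTED_ZONE_ID}" \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "cdn.seattlebubbles.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "<distribution-id>.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```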

Now back to the S3 adapter!
$ npm run build
ERROR in /.../Ghost-Storage-S3/src/index.ts
./src/index.ts 133:6-9
[tsl] ERROR in /.../Ghost-Storage-S3/src/index.ts(133,7)
TS2322: Type 'string' is not assignable to type 'ObjectCannedACL | undefined'.
err..
Time to make some changes to the code. Add the following line to the imports in src/index.ts:
import { ObjectCannedACL } from '@aws-sdk/client-s3'
Then modify the PutObjectCommandInput's ACL property from
ACL: this.acl
to
ACL: ObjectCannedACL.private
$ npm run build
> @abstractvector/ghost-storage-s3@0.1.0 build
> webpack
assets by chunk 127 KiB (id hint: vendors)
asset 574.index.js 52.7 KiB [emitted] (id hint: vendors)
asset 631.index.js 41 KiB [emitted] (id hint: vendors)
asset 563.index.js 33.4 KiB [emitted] (id hint: vendors)
asset index.js 1.63 MiB [emitted] (name: main)
asset 897.index.js 16.9 KiB [emitted]
asset 791.index.js 14.5 KiB [emitted]
asset 789.index.js 12.6 KiB [emitted]
asset 610.index.js 8.43 KiB [emitted]
asset 109.index.js 4.33 KiB [emitted]
asset 819.index.js 3.32 KiB [emitted]
orphan modules 1.03 MiB [orphan] 716 modules
runtime modules 2.43 KiB 8 modules
built modules 1.75 MiB [built]
modules by path ./node_modules/moment/ 659 KiB
modules by path ./node_modules/moment/locale/*.js 486 KiB 133 modules
+ 2 modules
modules by path ./node_modules/@smithy/ 257 KiB
cacheable modules 203 KiB 30 modules
+ 2 modules
modules by path ./node_modules/@aws-sdk/ 791 KiB
cacheable modules 188 KiB 25 modules
+ 2 modules
modules by path ./node_modules/fast-xml-parser/src/ 62.4 KiB 11 modules
+ 15 modules
webpack 5.98.0 compiled successfully in 9054 ms
Great, it finally compiled! We should now see index.js within the lib directory:
$ pwd
/.../Ghost-Storage-S3/lib
$ ls
109.index.js 563.index.js 574.index.js 610.index.js 631.index.js 789.index.js 791.index.js 819.index.js 897.index.js index.js
I didn't write this part up, but we also need to create an IAM user with a policy granting the following permissions, ideally scoped to your bucket's ARNs rather than a wildcard:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:PutObjectVersionAcl",
        "s3:ListBucket",
        "s3:DeleteObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::ghost-assets-<my-account-id>",
        "arn:aws:s3:::ghost-assets-<my-account-id>/*"
      ]
    }
  ]
}
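For completeness, a sketch of creating that user from the CLI, assuming the policy above is saved as policy.json (the user and policy names are placeholders):

```shell
# Create the user, attach the inline policy, and mint access keys
# for Ghost to use.
aws iam create-user --user-name ghost-s3-uploader
aws iam put-user-policy --user-name ghost-s3-uploader \
  --policy-name ghost-s3-access \
  --policy-document file://policy.json
aws iam create-access-key --user-name ghost-s3-uploader
```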
Now it's time to wire everything together. I'm using Portainer to manage my Docker Swarm, so I'll be adding some environment variables to my compose file:
environment:
  ...
  adapters__storage__active: s3
  adapters__storage__s3__accessKeyId: iamUserAccessKey
  adapters__storage__s3__secretAccessKey: iamUserSecretAccessKey
  adapters__storage__s3__region: yourS3BucketRegion
  adapters__storage__s3__bucket: yourS3BucketName
  adapters__storage__s3__assetUrl: https://cdn.seattlebubbles.com
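The double underscores are how Ghost expresses nested configuration keys as environment variables, so the variables above are equivalent to this fragment of config.production.json:

```json
{
  "adapters": {
    "storage": {
      "active": "s3",
      "s3": {
        "accessKeyId": "iamUserAccessKey",
        "secretAccessKey": "iamUserSecretAccessKey",
        "region": "yourS3BucketRegion",
        "bucket": "yourS3BucketName",
        "assetUrl": "https://cdn.seattlebubbles.com"
      }
    }
  }
}
```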
We also need to bind mount the index.js file:
volumes:
  - /.../Ghost-Storage-S3/lib/index.js:/var/lib/ghost/content/adapters/storage/s3/index.js
Finally, it's time to try uploading an image to my blog post:

And we can see that our bucket is now being populated! Lastly, verifying the URL of the very image above:
https://cdn.seattlebubbles.com/2025/02/image.png
We can see that the image is now coming from our CloudFront + S3 setup!
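One quick way to confirm the asset is actually served through CloudFront is to inspect the response headers; the x-cache header should read "Hit from cloudfront" (or "Miss from cloudfront" on the first request):

```shell
# Fetch only the headers and look for CloudFront's cache markers.
curl -sI https://cdn.seattlebubbles.com/2025/02/image.png \
  | grep -i -E 'x-cache|server'
```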
Reach out to me on Discord (meldavy) with any questions!