
Web Revamp - PART 1


I wanted to learn new things and set up a framework for serving static web pages. The easy way out of course would be to just use OpenBSD httpd and some type of templating, but that would’ve been too easy and for me it definitely falls under the “been there, done that” department.

This first part is more or less documenting what I’ve accomplished so far on my test setup, which still lacks some features I’ll detail at the end.

In practice my setup took the following form:

user <-> relayd <-> httpd <-> ipfs

Requirements #

  1. To be able to use a static site generator and git on my local machine for the web content.
  2. To be able to easily sync the content to the server machine and publish it.
  3. To use httpd and acme-client for handling the X.509 certificates.
  4. To use NSD as the glue between the front-end and backend resources.
  5. To use relayd to create the gates to httpd and the backend.
  6. To serve the content through a private IPFS gateway.

Static Generator: hugo, git #

For the static generation I chose Hugo, as it allows you to use Markdown for the web pages and has nice themes for formatting and laying out the content. It seemed like a good fit, as I am familiar with git and it is natural to work on the content in Markdown, even though Markdown lacks definition lists.
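A rough sketch of standing up the Hugo site and the git repository looks like the following; the theme URL, theme name and post name are just placeholders, and depending on the Hugo version the configuration file may be called hugo.toml instead of config.toml:

$ hugo new site www && cd www
$ git init
$ git submodule add https://example.org/some/theme.git themes/some-theme
$ echo 'theme = "some-theme"' >> config.toml
$ hugo new posts/hello-world.md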

To see the results of your editing, you can preview the content with Hugo’s built-in web server:

hugo server -D

For more information, please turn to the Hugo website.

Syncing content: hugo, rsync #

Syncing content is relatively easy with rsync from the public directory.

$ hugo -D
$ rsync -Pav public/ dest:public/
$ ipfs add -r public

Publishing Content: IPFS, NSD #

Publishing content requires interacting with IPFS and updating the relevant DNS TXT record to point to the right recursive resource on IPFS.

IPFS, a.k.a. the InterPlanetary File System, is a distributed file system under heavy development. I’ve set up the IPFS gateway through the OpenBSD package, which handles things like privsep and daemon control in an easily accessible form. It is far from being ready for serious production use, but I am using it as a backend to host the content for this website.
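Getting the gateway up through the package system is roughly the following; note that the package and rc script names here are assumptions on my part, so check pkg_info and /etc/rc.d for the real ones:

$ doas pkg_add go-ipfs        # package name assumed
$ doas rcctl enable go_ipfs   # rc script name assumed
$ doas rcctl start go_ipfs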

So in practice, to publish content you must have:

  • A recursive IPFS resource pin hash, which you have sourced from the ipfs add.
  • A CNAME record, which points to the front-end server’s TLS interface IP(s).
  • A _dnslink TXT record, which points to the IPFS resource.

Finding out the right hash to point to the site’s entry point is still a bit of a hack. I grep for the recursive resources and peek at the content with w3m. This is probably not the right way to do it, but it gets the job done.

$ ipfs pin ls | fgrep recursive
$ w3m http://127.0.0.1:8080/ipfs/Qmd4YnkQgU3G2qDDw8Bnq5vmzTDWUxSHwmTo4mABXhhM2m
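One way around the grepping, assuming the installed ipfs supports the -Q (--quieter) flag, is to capture the root hash directly at add time:

$ HASH=$(ipfs add -Q -r public)
$ echo "dnslink=/ipfs/$HASH"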

So for example for my test resource, I created the following records:

test            IN CNAME srv2
_dnslink.test   IN TXT "dnslink=/ipfs/Qmd4YnkQgU3G2qDDw8Bnq5vmzTDWUxSHwmTo4mABXhhM2m"

In practice, the CNAME resolves to the server’s IPv4 and IPv6 addresses, on which relayd is listening for incoming connections on ports 80 and 443.
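The address side of the chain can be sanity-checked with dig; both queries should come back with the front-end server’s addresses via the CNAME:

$ dig test.huttu.net A +short
$ dig test.huttu.net AAAA +short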

The mind-blowing thing is that the actual resource to be requested is the pinned hash of the recursive resource you have added to IPFS with ipfs add. So far I’ve only managed to use ipfs add to add resources and then remove the earlier ones as follows:

$ ipfs pin ls | fgrep recursive
$ ipfs pin rm HASH
$ ipfs repo gc

The documentation is a bit unclear on how this maintenance should be done, and I didn’t manage to figure out what ipfs pin update actually does.
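Until that becomes clearer, a rough sketch of automating the swap (my own guesswork, assuming the node pins nothing but this one site recursively and that -Q is available) could look like this:

#!/bin/sh -e
# add the new tree, then drop every other recursive pin
NEW=$(ipfs add -Q -r public)
for OLD in $(ipfs pin ls --type=recursive | awk '{print $1}'); do
        [ "$OLD" = "$NEW" ] || ipfs pin rm "$OLD"
done
ipfs repo gc
echo "point _dnslink at /ipfs/$NEW"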

So once you have managed to find the right resource hash and added that record to your DNS, you will be able to query it as follows:

$ dig _dnslink.test.huttu.net TXT +short
"dnslink=/ipfs/Qmd4YnkQgU3G2qDDw8Bnq5vmzTDWUxSHwmTo4mABXhhM2m"

Front-end: httpd, acme, relayd #

The actual relayd setup below will have to be done in two parts:

  1. Before the acme-challenge has been accepted and the certificates have been issued.
  2. After the certificates exist and can be referenced with the tls keypair parameter.

This is a bit of a kludge, but I got this idea from Aaron D. Parks.
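For reference, the acme-client side is plain acme-client.conf(5) material; something along these lines, with the certificate paths picked so that the tls keypair "test.huttu.net" used later in relayd should find them (assuming I remember the default lookup locations correctly):

authority letsencrypt {
        api url "https://acme-v02.api.letsencrypt.org/directory"
        account key "/etc/acme/letsencrypt-privkey.pem"
}

domain test.huttu.net {
        domain key "/etc/ssl/private/test.huttu.net.key"
        domain full chain certificate "/etc/ssl/test.huttu.net.crt"
        sign with letsencrypt
}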

Relayd Setup: PART 1 #

Let's Encrypt ACME challenge <-> relayd public port 80 <-> httpd private port 80

I’m using the following httpd configuration to handle all the incoming ACME requests, as it allows multiple sites to be hosted in the backend. In addition, all other plain-HTTP requests are redirected to the TLS interface provided by relayd with the help of the Host header.

local="lo0"

# Handle all the incoming challenges, no matter what
# the domain name is.
server "*" {
        listen on $local port 80
        location "/.well-known/acme-challenge/*" {
                root "/acme"
                request strip 2
        }
        location "*" {
                block return 301 "https://$HTTP_HOST$REQUEST_URI"
        }
}
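The configuration can be syntax-checked and httpd (re)started before moving on to relayd:

$ doas httpd -n
$ doas rcctl enable httpd
$ doas rcctl restart httpd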

Since httpd is only listening on lo0, the relevant relayd configuration to relay the plain HTTP requests to it is as follows:

public="vio0"
private="lo0"

table <acme-challenge> { $private }
table <static-sites> { $private }

log connection
log state changes

http protocol "http" {
        pass request quick path "/.well-known/acme-challenge/*" \
                forward to <acme-challenge>
}

relay "http" {
        listen on $public port http
        protocol "http"
        forward to <acme-challenge> port 80
}
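With this first stage in place, relayd can be checked and started the same way, after which acme-client should be able to pass the HTTP challenge for the configured domain:

$ doas relayd -n
$ doas rcctl enable relayd
$ doas rcctl start relayd
$ doas acme-client -v test.huttu.net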

Relayd Setup: PART 2 #

user <-> relayd public port 443 <-> ipfs gw port 8080

Once the key pairs have been generated and are working, we can add the rest of the pathway to the relayd configuration. This will take the user’s request for an IPFS resource, hand it over to the IPFS gateway, and deliver the result back to the user.

http protocol "https" {
        return error
        match request header "Host" value "test.huttu.net" tag "REQ_OK"
        match request header "Host" value "dev.huttu.net" tag "REQ_OK"
        match request method "GET" tag "REQ_OK"
        block request
        pass tagged "REQ_OK"
        match header log "Host"
        match header log "User-Agent"
        match header log "Referer"
        match url log 
        tls keypair "test.huttu.net"
        tcp { nodelay, socket buffer 65536, backlog 100 }
        tls no tlsv1.0
        tls ciphers "HIGH:!aNULL"
        match request tag "static"
        match response tagged "static" header set "cache-control" value \
                "max-age=3600" 
        match response header set "Referrer-Policy" value "strict-origin"
        match response header set "X-Frame-Options" value "SAMEORIGIN"
        match response header set "X-XSS-Protection" value "1; mode=block"
        match response header set "Strict-Transport-Security" value "max-age=31536000; includeSubDomains"
        match response header set "Content-Security-Policy" value "default-src 'self'; img-src *; script-src 'unsafe-inline' *; connect-src *; style-src-elem 'unsafe-inline' *"
        match response header set "Permissions-Policy" value "allowlist 'self'"
        match response header "X-Content-Type-Options" value "nosniff" tag "NOT_200"
        block response tagged "NOT_200"
}

relay "https" {
        listen on $public port https tls
        protocol "https"
        forward to <static-sites> port 8080
}
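After reloading relayd with the second stage in place, the whole pathway can be verified from the client side; the response should come back from the IPFS gateway with the extra headers set by relayd:

$ doas relayctl reload
$ curl -sI https://test.huttu.net/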

ToDo: A Call for Help :) #

  1. The IPFS gateway is a public one and it adds a lot of insecurity to the whole setup.
     • Have to figure out what the benefits of running a private swarm might be.
     • Have to figure out how cleaning up the old content should happen in an automated manner.
       • DNS TTL is one thing to consider with content updates, as well as keeping the pinned content in check.
  2. Tighten up the relayd configuration in terms of: