
Single Page Applications (SPAs) have been my go-to choice for building front-end applications for several years now. They are becoming easier to build every day, and building them helps me better isolate where code for business logic or presentation needs to live. While SPAs are not a silver bullet, they fit a wide range of use cases and are a necessary tool for full-stack and front-end engineers today.

While they are easy enough to build, I have yet to settle on a favorite way to serve them in production. I've used static site hosting services like Surge, built a lightweight server with Express and served it from Heroku, and even uploaded my build output to AWS S3 for hosting. Recently, I worked with a client who already had infrastructure on the Google Cloud Platform (GCP) and therefore needed a way to host my SPA there.

There is information online for how to serve an SPA from a GCP Bucket, but I felt the information for my exact use case was incomplete and scattered across several blog posts and wikis. In this article, I will talk about my use case where I built a React SPA with webpack 4 and deployed it to a GCP Bucket.

Please note that you could use Create React App to build your SPA, but it is important to know how your underlying tools work so you can debug them when you run into errors, which is inevitable in development.

The Technical Requirements

In order to follow along, you will need a domain, a GCP account with admin permission to buckets, and an SPA built with webpack to deploy. If you don't have an SPA already set up, don't worry, we have a sample repo for you to use on GitHub.

Build Process

I am going to use webpack 4 to build my sample SPA. To keep the setup as simple as possible, but also development and production friendly, I have broken the webpack configuration into three files: common, dev, and prod. This allows for the configuration of the necessary entry, output, loaders, resolvers, and basic plugins in the common file, along with environment-specific plugins in their own webpack configurations. The entire source can be found in the spa-deploy-demo repo, but you will find the contents of the webpack files that I am using below.

webpack.common.js

const path = require('path')
const HtmlWebPackPlugin = require('html-webpack-plugin')

module.exports = {
  entry: {
    index: './src/index.tsx'
  },
  output: {
    filename: 'main.js',
    path: path.resolve(__dirname, 'dist'),
    publicPath: '/'
  },
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        use: 'awesome-typescript-loader',
        exclude: /node_modules/
      },
      {
        test: /\.html$/,
        use: 'html-loader'
      }
    ]
  },
  resolve: {
    extensions: ['.css', '.tsx', '.ts', '.js'],
    modules: [path.resolve(__dirname, 'src'), 'node_modules']
  },
  plugins: [
    new HtmlWebPackPlugin({
      template: './src/index.html',
      filename: './index.html'
    })
  ]
}

webpack.dev.js

const merge = require('webpack-merge')
const common = require('./webpack.common.js')
const BrowserSyncPlugin = require('browser-sync-webpack-plugin')

module.exports = merge(common, {
  devtool: 'inline-source-map',
  devServer: {
    contentBase: './dist',
    historyApiFallback: true,
    port: 3003
  },
  plugins: [
    new BrowserSyncPlugin({
      host: 'localhost',
      port: '8080',
      proxy: 'http://localhost:3003',
      reload: false
    })
  ]
})

webpack.prod.js

const merge = require('webpack-merge')
const UglifyJSPlugin = require('uglifyjs-webpack-plugin')
const common = require('./webpack.common.js')

module.exports = merge(common, {
  devtool: 'source-map',
  plugins: [
    new UglifyJSPlugin({
      sourceMap: true
    }),
  ]
})

The common webpack configuration is as minimal as possible for this demo. I prefer to write TypeScript because I do not want to configure Babel, and awesome-typescript-loader is my go-to loader for TypeScript. I am also using html-loader and html-webpack-plugin so that my index.html is included in the build process and the output directory can be added to the .gitignore list.

The development configuration sets up BrowserSync, which makes it easier to develop for multiple device types at once, and uses inline-source-map to aid debugging.

The production configuration sets up uglifyjs-webpack-plugin to help reduce the size of the codebase. You will notice that I have not configured code splitting; that is because it felt like over-engineering the webpack config for such a simple example. If your app is substantial enough, you should definitely add code splitting to your webpack configuration, but configuring it is outside the scope of this blog post.

Now that you have the webpack configuration setup, you will need the necessary scripts in your package.json file for your development and production environments.

package.json scripts excerpt

"scripts": {
  "build": "webpack -p --config webpack.prod.js",
  "start": "webpack-dev-server --config webpack.dev.js",
},

In my case, the start script is used to run my webpack-dev-server and uses the development webpack config. The build script runs the production webpack config and stores the output in the dist folder.
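With these scripts in place, local development and production builds are each a single command (assuming yarn is your package manager; the npm equivalents work the same way):

```shell
# Start the webpack-dev-server (proxied by BrowserSync on port 8080)
yarn start

# Produce the minified production bundle in ./dist
yarn build
```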

Setting up the Infrastructure

When setting up your infrastructure you will need to do two things: configure your DNS and create the storage bucket. We will configure the DNS first so there is proper time for it to propagate.

When updating your DNS configuration, you will need to add a CNAME record with your desired subdomain name and the value c.storage.googleapis.com.
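Once the record is in place, you can check whether it has propagated with dig (substituting your own subdomain for the www.maybe.com placeholder used later in this article):

```shell
# Query the CNAME record; once propagated, it should resolve
# to c.storage.googleapis.com.
dig CNAME www.maybe.com +short
```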

[Screenshot: DNS configuration]

To create your storage bucket, first log in to console.cloud.google.com. If you do not currently have a Google Cloud project, you will need to create one and enter the appropriate billing information. Don't worry, the hosting charges for Cloud Storage are very cheap.

Once you are logged in, navigate to the "Storage" service underneath the "Storage" header in the navigation menu.

[Screenshot: Navigating to Storage]

On this page, click the "Create bucket" button. When entering the name of the bucket, use the URL you created in your DNS configuration. For example, if you created the DNS record www on your domain maybe.com, the name of your bucket would be www.maybe.com. This also works if you want to host your SPA on a subdomain like spa.maybe.com.

[Screenshot: Create bucket form]

When you click the "Create" button at the bottom of the page, you will receive a prompt for domain ownership verification. Click on the provided link and follow the directions for domain ownership verification.
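If you prefer the command line over the console, the same bucket can be created with gsutil once your domain ownership has been verified (www.maybe.com is the example name from above; substitute your own):

```shell
# Create a bucket whose name matches the DNS record for the site.
# This only succeeds if the domain has been verified for your account.
gsutil mb gs://www.maybe.com
```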

Now that your bucket is created, you are ready to run your production build and upload the files to your bucket.

First Deployment

For the first deployment, we are going to do everything manually. Begin by opening your terminal to your SPA and run yarn build. Once the build process is complete, go back to the GCP console and upload the build output to your bucket. If you are using the spa-deploy-demo repo the index.html, main.js, and main.js.map files from the dist folder will need to be uploaded.

[Screenshot: Uploading files]

Although the files are uploaded, they are not yet publicly available. In the bucket listing, make sure the "Share publicly" checkbox is checked. In addition, you will need to set up the website configuration. To do this, navigate back to the bucket listing page. You should see your new bucket in the listing table. On the right, click the three-dot menu and then "Edit website configuration". In the modal that pops up, be sure to enter index.html for both the main page and the not found page. The reason is that we are using React Router and want it to handle not-found routes.

[Screenshot: Website configuration]

Congratulations, your SPA is now live. If you deployed the spa-deploy-demo repo, you will be able to navigate through the hello, goodbye, and not found routes.

Scripting Deployments

Now that you have your SPA up and running, more than likely there will be updates you would like to make. However, it is cumbersome to perform the build, log in, upload the files, and share them publicly again. The good news is that this is relatively easy to script using the google-cloud-sdk.

To install the google-cloud-sdk on a Mac, I find that Homebrew is the easiest solution. Use the following command to install it:

$ brew cask install google-cloud-sdk

If you are developing on a Linux distro or Windows, Google has documentation for the best way for you to install it as well.

Once you have the google-cloud-sdk installed, you need a way to authorize it with your project. You can either do this by running gcloud init or you can create a service account in your GCP project. I personally recommend creating a service account so that it will be easier to work with multiple projects in the future.

To create a service account, navigate in the GCP console to IAM & admin, and then the Service accounts page. Once there, click the "Create service account" button. In the modal form that pops up, enter SPADeploy as the name. For the role, scroll down to "Storage" and select "Storage Admin". Make sure the "Furnish a new private key" checkbox is checked and the "JSON" key type is selected. After hitting create, you will be prompted to download the key. Be sure to save it in your home folder as SPASample.json.
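The console steps above can also be done from the command line; a rough equivalent might look like this (PROJECT_ID and the spa-deploy account name are placeholders, substitute your own):

```shell
# Create the service account
gcloud iam service-accounts create spa-deploy --display-name "SPADeploy"

# Grant it the Storage Admin role on the project
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member "serviceAccount:spa-deploy@PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/storage.admin"

# Download a JSON key to your home folder
gcloud iam service-accounts keys create ${HOME}/SPASample.json \
  --iam-account "spa-deploy@PROJECT_ID.iam.gserviceaccount.com"
```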

Once you have your key downloaded, you can use the following script to deploy your SPA:

bin/deploy-gcloud.sh

#!/bin/bash

yarn build

export BUCKET_URI=xxxxxxxxxxxxxxxxxxxxxxxxx
export KEY_FILE=SPASample.json
export GCLOUD_PROJECT=xxxxxxxxxxxxxxxx

echo ${BUCKET_URI}

# Authorize and set project
${HOME}/google-cloud-sdk/bin/gcloud auth activate-service-account --key-file ${HOME}/${KEY_FILE}
${HOME}/google-cloud-sdk/bin/gcloud config set project ${GCLOUD_PROJECT}

# Copy Files
${HOME}/google-cloud-sdk/bin/gsutil cp ./dist/index.html gs://${BUCKET_URI}/index.html
${HOME}/google-cloud-sdk/bin/gsutil cp ./dist/main.js gs://${BUCKET_URI}/main.js
${HOME}/google-cloud-sdk/bin/gsutil cp ./dist/main.js.map gs://${BUCKET_URI}/main.js.map

# Make Files Publicly Accessible
${HOME}/google-cloud-sdk/bin/gsutil acl ch -u AllUsers:R gs://${BUCKET_URI}/index.html
${HOME}/google-cloud-sdk/bin/gsutil acl ch -u AllUsers:R gs://${BUCKET_URI}/main.js
${HOME}/google-cloud-sdk/bin/gsutil acl ch -u AllUsers:R gs://${BUCKET_URI}/main.js.map

# Edit the website configuration
${HOME}/google-cloud-sdk/bin/gsutil web set -m index.html -e index.html gs://${BUCKET_URI}

In this script, the production build is the first thing executed. After the build is complete, environment variables are set. BUCKET_URI is the name of the bucket you created. KEY_FILE is the name of the key file you saved, in case you changed it. GCLOUD_PROJECT is the project id of your GCP project; you can find it by trying to change projects in the top left of the GCP console.
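If you are unsure of the project id, the google-cloud-sdk can list every project your account has access to:

```shell
# List all projects visible to the authenticated account,
# including their PROJECT_ID values
gcloud projects list
```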

After the variables, gcloud and gsutil from the google-cloud-sdk are used to obtain authorization with the key file, copy the build artifacts, make them publicly accessible, and ensure the website configuration is correct. Make sure this script has execute permissions on your machine by running:

$ chmod +x ./bin/deploy-gcloud.sh

You can then run the deploy script with:

$ ./bin/deploy-gcloud.sh

Congratulations, you now have an SPA hosted on Google Cloud at a low cost, with a scripted deployment.

Further Considerations

Now that you know how to build and deploy your SPA to a GCP Bucket, you may think your job is done. However, there are other details to consider. One technical limitation of hosting your SPA directly from a GCP Bucket is that it is only accessible via HTTP. To protect your users, you will need to set up a load balancer so you can serve them via HTTPS (which will be covered in another blog post).

Also, all of the infrastructure for your SPA was created via a GUI. There are many tools that allow you to script this setup so you can keep your infrastructure in version control. This is extremely helpful if you are building an API, webhooks, databases, etc., to support the functionality of your SPA.

Finally, while it is nice to have the ability to run your deployment script at will from your local environment, it would be best if your SPA was deployed as part of a continuous integration (CI) process. This allows you to ensure your SPA builds correctly, tests pass (which you should be writing if you aren't), and deploys without failure in a repeatable and predictable manner so that when it comes time to ship your updates there are no surprises.

Happy coding!