Static Website

Building my First Website

I used Hugo, Google Domains and Analytics, and AWS (S3, Route 53, Certificate Manager, CloudFront) to build and host my first static website.



My 2021 New Year’s Resolution was to have an online portfolio that would help me stay organized and honestly just have the right to flex with my friends.

This post is unlike my other content, which has more visual creativity, but I think my first experience with website hosting deserves a write-up. Without any prior knowledge, I spent about 40 hours getting this to work. So painful. :( But now that I actually understand it, I could probably manage it in under 2 hours next time.

Creating a Website with Hugo

I chose Hugo to build my website because I was first introduced to it via its theme gallery. I started by downloading a template and following instructions on editing sample content. As I got more familiar with the framework, I made tweaks to the theme’s HTML files to fit my preferences. All that to say, Hugo has a friendly learning curve and is pretty flexible!

While there are many professional blogs that list more pros and cons for Hugo, here are my own based on my experience.



Purchasing a Domain Name

I used Google Domains because I like the idea of having my domain integrated with my gmail account. I’ll never have password issues, and it’s easy to access. If I weren’t so attached to my Google account, I would have bought a domain name with Route 53 in AWS because it’s easier and quicker to integrate with all the other AWS services that I would use to host my website. Rest assured that you’re able to transfer the domain between Google and Amazon with a lag of about 2 days, so there aren’t any consequences with choosing one over the other.

I used this Instant Domain Search website to find which domain name was available for purchase. I personally wanted a .com extension because it was the cheapest at $12/year and easiest to remember. I also preferred a catchy phrase over my own name because I’d probably lose over half my search audience who would’ve typed “serena” instead of “sarina.” #struggles

Hosting the website

I chose AWS to host my website because AWS has a free tier and pay-as-you-go pricing instead of a flat rate typically starting at $5/month. My website does not have much content and I don’t expect heavy traffic, so I could easily be under $1/month with the free tier and still under $5/month afterwards.

At a high level, hosting a website with AWS requires:

  1. An S3 bucket that stores the website files
  2. A CloudFront distribution that serves them from a CDN
  3. A Certificate Manager (ACM) certificate that enables HTTPS
  4. A Route 53 hosted zone that routes the domain

S3

AWS S3 is a very cheap cloud storage service that supports static website hosting. A general purpose bucket with standard pricing would cost pretty much nothing for storage and maybe a few cents for all the requests. For storage, AWS charges $0.023 per GB, and my website currently has around 20 MB. So wow, just $0.00046. A static website only uses GET requests to retrieve static objects like HTML files and images from the S3 bucket, which costs about $0.0005 per 1,000 requests (and maybe $0.00001 per page visit if each page makes around 20 GET requests). Pricing for requests differs slightly by region, but the difference for me is negligible.

When I created my S3 bucket with the same name as my website domain, I made the following configurations in the Properties tab.

  1. Under “Static Website Hosting”, select Enable.
  2. Under “Hosting type”, select Host a static website.
  3. Under “Index document”, type index.html. You should have this file in your bucket root, and this serves as the home or default page.
  4. Under “Error document”, type 404.html. You should have this file in your bucket root, and this serves as the error page.
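If you prefer the command line, the same configuration can be applied with the AWS CLI. This is a sketch: it assumes the AWS CLI is installed and configured, and example.com stands in for your actual bucket/domain name.

```shell
# Enable static website hosting on the bucket (equivalent to the
# console steps above); example.com is a placeholder bucket name
aws s3 website s3://example.com \
    --index-document index.html \
    --error-document 404.html

# Confirm the website configuration took effect
aws s3api get-bucket-website --bucket example.com
```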

If you added your website files into your S3 bucket, enabled static website hosting, and made the bucket public, you would actually be able to view your website using the S3 bucket’s URL. However, it’s insecure and inefficient to have the public access your bucket directly. This led me to use CloudFront, which receives users' requests and efficiently sends the files from its cache or the S3 bucket.
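For reference, making the bucket publicly readable comes down to a bucket policy like this sketch (example.com is a placeholder bucket name; with CloudFront in front, you'd typically restrict direct access instead):

```shell
# Write a public-read bucket policy; example.com is a placeholder
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example.com/*"
  }]
}
EOF

# Apply it with:
#   aws s3api put-bucket-policy --bucket example.com --policy file://policy.json
```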


CloudFront

AWS CloudFront is a Content Delivery Network (CDN), which means that users from all over the world can view my website with low latency and high availability. This is possible because CloudFront will cache static content in its multiple edge locations so that a viewer would retrieve files from a geographically close area as opposed to a single spot in the world. For example, my relative in Taiwan sending a request to somewhere in Asia is faster than to the U.S. where I live. CloudFront also provides extra security features, such as converting all HTTP requests to HTTPS. CloudFront charges by the amount of data transferred out, starting at $0.085/GB in the U.S. and decreasing as you get to higher volumes of data.

I used this guide to set up most of my AWS infrastructure, with the exception of linking Google Domains to Route 53. I created 2 CloudFront distributions, one for each of my two domain names. These distributions share the same origin, which is the S3 bucket containing my files. They also have a configured behavior that redirects HTTP to HTTPS, which costs $0.0100 per 10,000 requests instead of $0.0075 per 10,000. Nowadays, having HTTPS is worth it; otherwise browsers or applications might block your link or mark it as unsafe.

While I followed the majority of this guide, I also want to point out that I used a feature called Lambda@Edge, which runs a function that makes Hugo’s URLs prettier. The function I used came from this source. You only get charged for the compute time the Lambda uses, which is very little in this case.

Certificate Manager (SSL certificate)

When creating a CloudFront Distribution, you will need to attach an SSL certificate that will authenticate the website’s identity and encrypt data sent between the server and browser, turning HTTP to HTTPS. AWS Certificate Manager (ACM) creates them at no additional cost. It’s easy to set up, but verifying with Google Domains instead of Route 53 was confusing and inconvenient.

I created 1 certificate covering both of my domain names. The process required validating that the domain names belonged to me. After hours of trying to figure out how to use either email or DNS validation, I found this post with the clearest instructions on validating in Google Domains. Basically, I had to copy the Name, CNAME, and Value for each of my domains from ACM in a particular format into the Custom Resource Records section in Google Domains. A CNAME (canonical name) record aliases one name to another; adding these records validated my SSL certificate and allowed me to finish creating my CloudFront distribution.
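The certificate request can also be made from the CLI. A sketch with placeholder domain names (note that certificates used by CloudFront must be requested in us-east-1):

```shell
# Request one certificate covering both names with DNS validation;
# example.com and www.example.com are placeholders
aws acm request-certificate \
    --domain-name example.com \
    --subject-alternative-names www.example.com \
    --validation-method DNS \
    --region us-east-1

# describe-certificate returns the CNAME name/value pairs to paste into
# Google Domains' Custom Resource Records; wait for status ISSUED
aws acm describe-certificate \
    --certificate-arn <certificate-arn> \
    --region us-east-1
```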

Route 53

Next, I set up a hosted zone on AWS Route 53. Route 53 is a Domain Name System (DNS) service that routes the domain names end users see to the IP addresses of the servers hosting the website. My website only uses 1 hosted zone, which costs $0.50/month plus an additional $0.40 per million queries.

A hosted zone is an Amazon Route 53 concept. A hosted zone is analogous to a traditional DNS zone file; it represents a collection of records that can be managed together, belonging to a single parent domain name.

In my hosted zone, I started with 3 records: A (Address), NS (Name Server), and SOA (Start of Authority). The 1st A record had my domain name and routed traffic to my CloudFront distribution’s domain name. I created a 2nd A record for my other domain name that also routed traffic to CloudFront. The NS record contained 4 name servers, which indicate which servers (Route 53’s, in this case) hold all the DNS records. Since everything except my domain name lived in AWS, I copied the 4 name servers from the NS record into the DNS section under Use custom name servers in Google Domains.
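The A record can be expressed as a Route 53 change batch. A sketch with placeholder names (Z2FDTNDATAQYW2 is the fixed hosted-zone ID that Route 53 uses for all CloudFront alias targets; the domain and distribution names are placeholders):

```shell
# Build a change batch for an alias A record pointing the domain at
# CloudFront; example.com and d1234abcd.cloudfront.net are placeholders
cat > a-record.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "d1234abcd.cloudfront.net.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

# Apply it with:
#   aws route53 change-resource-record-sets \
#       --hosted-zone-id <your-hosted-zone-id> --change-batch file://a-record.json
```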

Deploying changes

Last but not least, I created (or more accurately, copied and pasted) a Makefile using the one provided in this same guide. With a Makefile, I can type make deploy in my root directory and trigger the following commands:

  1. Build the Hugo website
  2. Copy new files to the S3 bucket
  3. Remove files from bucket that are not present in the newly generated site
  4. Refresh changes in the CDN via invalidation (remove old files from the cache to prevent displaying stale content)
  5. Notify Google and Bing that a new sitemap of your site is available
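The 5 steps above map roughly onto a Makefile like this sketch. Everything here (bucket name, distribution ID, site URL) is a placeholder, and the sitemap pings mirror what the guide's Makefile does:

```make
# Deploy sketch; substitute your own bucket, distribution ID, and URL
S3_BUCKET    = s3://example.com
DISTRIBUTION = E1234567890ABC
SITE_URL     = https://example.com

deploy:
	hugo                                           # 1. build the site into public/
	aws s3 sync public/ $(S3_BUCKET) --delete      # 2-3. upload new files, delete stale ones
	aws cloudfront create-invalidation --distribution-id $(DISTRIBUTION) --paths "/*"  # 4. refresh the CDN
	curl -s "https://www.google.com/ping?sitemap=$(SITE_URL)/sitemap.xml"  # 5. ping Google
	curl -s "https://www.bing.com/ping?sitemap=$(SITE_URL)/sitemap.xml"    #    and Bing
```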

A minor note that CloudFront’s invalidation requests have no additional charge for the first 1,000 paths each month but cost $0.005 per additional path requested.

And the website is ready to go!

Linking to GitHub Pages

I started my website by only posting write-ups and linking to my GitHub code, but I really wanted to show my actual projects. Fortunately, I found out that GitHub does static website hosting with GitHub Pages! In the Settings tab of any repository, there should be a section called GitHub Pages. To get started, I set my main branch as the Source (which defaults to none), and then I added a custom domain name. This added a CNAME file to the repository’s root directory.

Then I went to Route 53 and added a new CNAME record that pointed to my GitHub Pages URL. After 24 hours, which gives GitHub enough time to verify the domain name, I went back to the repository settings and checked “Enforce HTTPS”. Now you know that when you visit this website, all write-ups are built with Hugo and hosted on AWS, while demos are hosted via GitHub Pages. It’s crazy how I never realized there was so much routing going on until I went through this process.
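The GitHub Pages record is a plain CNAME rather than an alias record. A sketch of that change batch, where the subdomain and GitHub username are placeholders I made up for illustration:

```shell
# CNAME a subdomain at GitHub Pages; demos.example.com and
# username.github.io are placeholders
cat > cname-record.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "demos.example.com.",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "username.github.io" }]
    }
  }]
}
EOF

# Apply it with:
#   aws route53 change-resource-record-sets \
#       --hosted-zone-id <your-hosted-zone-id> --change-batch file://cname-record.json
```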

Adding Google Analytics

I added this section a month after I launched my website. I wasn’t planning on doing any sort of logging until I had more content and traffic, but I found that Google Analytics was free. Actually, they offer a freemium model where small businesses and individuals like me get web analytics for free, while bigger businesses can pay $150,000 for advanced features and heavier traffic (more data). Intended for measuring marketing effectiveness, Google Analytics provides insight on user acquisition, engagement, monetization, and retention. That’s more than I need.

I followed this easy 6-minute tutorial on YouTube to set up my working Google Analytics account and tracking. It involved signing up for free via my Gmail (the same one holding my website domain), adding a data stream for the web, and copying and pasting a simple script (called a Global Site tag) in the HTML of all the pages I wanted to track. Following the tutorial, I visited my website using an incognito tab and became the first tracked user to visit!

As someone who wants to know which content people find interesting, I mostly check total page views, a user engagement metric. Google Analytics shows a bar chart of all my pages ranked by view count in descending order. Another potentially useful metric is traffic acquisition, which tells me how users ended up on my website. Because I don’t have SEO (yet?), the majority of my traffic comes from users who intentionally type the website into their address bar or click a shared link directly.

I plan to stick with Google Analytics now that I realize it’s surprisingly easy to set up and use, and free of charge. The other option I had for logging was to have AWS CloudFront write access logs into a designated S3 bucket. Not completely free of charge, this would still be an insignificant cost (<$0.01) from the S3 storage, but it would also take some effort and additional money to use Amazon Athena to query and analyze the logs in S3. The one upside of CloudFront logging is being able to monitor the number of HTTP requests to troubleshoot my AWS bill, but please don’t let that happen. In my mind, Google Analytics is the better choice.

Total Costs

Since I’m currently using the free tier for my first 12 months, I’m paying $0 - $2 per month for AWS. This cost will increase as I add more projects and get more traffic. Here’s my estimated breakdown for a portfolio static website.