How to refactor a chain of asynchronous callbacks in JavaScript

It is a fact that callbacks are widely used in JavaScript for asynchronous code, so the following scenario is quite common:

const mongoose = require('mongoose')
const post = require('./database/models/post')

post.create({
    title: 'My first blog post',
    description: 'Blog post description',
    content: 'Lorem ipsum content.'
  }, (error, post) => {
    if (error) {
      console.log(error)
    } else {
      //DO SOMETHING
    }
  })

The complexity comes when, on success, we need to chain another asynchronous function which also receives a callback, and so on. In this case, our code can end up more and more complex and unreadable.

See below an example of creating, then searching and finally deleting elements from a MongoDB database, with Mongoose:

post.create({
    title: 'My first blog post',
    description: 'Blog post description',
    content: 'Lorem ipsum content.'
  }, (error, createdPost) => {
    if (error) {
      console.log(error)
    } else {
      console.log('Created post: ' + JSON.stringify(createdPost))
      // Note: the callback arguments are renamed so they don't shadow
      // the post model, which is what find and deleteMany belong to
      post.find({title: 'My first blog post'}, (error, posts) => {
        if (error) {
          console.log(error)
        } else {
          console.log('Retrieved posts: ' + JSON.stringify(posts))
          post.deleteMany({title: 'My first blog post'}, (error, result) => {
            if (error) {
              console.log(error)
            } else {
              console.log('Deleted posts: ' + JSON.stringify(result))
            }
          })
        }
      })
    }
  })

The point is that there should be an easy way to get rid of the nested indentation, while being flexible enough to run N commands. In addition, the new design should address some caveats, like the lack of extensibility for adding more complex error handling or more complex success handling.

Functions with the same specification

Obviously, create, find and deleteMany are functions with similar arguments and design.

In other languages, every execution of any of those functions would be modeled as a Command. In fact, in Java, we would likely create an interface like MongoDBCommand. However, in JavaScript, there is no need to do that, as we can just pass functions as parameters.

const runCommands = commands => {
  const {action, data, onError, onSuccess} = commands.shift();

  action(data, (error, result) => {
    if (error) {
      if (onError) {
        onError(error)
      }
      console.log(error)
    } else {
      onSuccess(result);
      // Run the remaining commands, if any
      if (commands.length > 0) {
        runCommands(commands);
      }
    }
  })
}

const title = 'My first blog post';
runCommands([
  {
    action: post.create.bind(post),
    data: {
      title,
      description: 'Blog post description',
      content: 'Lorem ipsum content.'
    },
    onSuccess: post => {
      console.log('Created post: ' + JSON.stringify(post))
    }
  },
  {
    action: post.find.bind(post),
    data: {title},
    onSuccess: post => {
      console.log('Retrieved posts: ' + JSON.stringify(post))
    }
  },
  {
    action: post.deleteMany.bind(post),
    data: {title},
    onSuccess: post => {
      console.log('Deleted posts: ' + JSON.stringify(post))
    }
  },
])

In the previous code, the command executor function, named runCommands, chains the execution of commands recursively. It defines the callback function, so it can easily decorate both the error and the success handling. It can also provide any extra common functionality, like event logging.

Obviously, all functions must share a common design; in this example:

  • All functions receive a first argument with the data to be processed, i.e. the element to be inserted, searched or deleted.
  • All functions receive a second argument with the callback, which receives two arguments: error and data.

Functions with different specifications but the same callback definition

We may face cases where the functions are heterogeneous (different arguments) but share the same callback specification.

In such a scenario, we could still apply the same design, considering the following changes (see the sketch after this list):

  • The command's data property would receive an array of parameters, that is, all parameters but the callback.
  • A new property, the callback index, would be part of every command. It would contain the argument index of the callback.
  • The executor would create the callback in the same way as in the previous example, and then insert it into the data array at the specified index.
  • The resulting array would be applied to the function by using JavaScript's apply function.
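A minimal sketch of that generalized executor could look as follows. Note this is just an illustration of the bullets above, not an existing API: the callbackIndex property and the command structure are assumptions.

const runCommands = commands => {
  const {action, data, callbackIndex, onError, onSuccess} = commands.shift();

  // Same callback as in the previous example
  const callback = (error, result) => {
    if (error) {
      if (onError) {
        onError(error)
      }
      console.log(error)
    } else {
      onSuccess(result);
      if (commands.length > 0) {
        runCommands(commands);
      }
    }
  };

  // Insert the callback into the arguments array at the specified index,
  // then apply the whole array to the heterogeneous function
  const args = [...data];
  args.splice(callbackIndex, 0, callback);
  action.apply(null, args);
}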

However, in this case, you may not be really happy with the resulting code. The lack of readability (the callback index is not obvious at the call site) is a trade-off you may not be willing to accept.

How to Deploy a Dockerized web app or service to Google Compute Engine (GCE) in Google Cloud Platform (GCP)

By the time I am writing this, I have just deployed a web app into Google Cloud Platform for the first time, using Google Compute Engine. This post intends to help beginners do the same.

After having worked in IT for many years, I am not proud of the way I got it working, since I had to connect to the instance with Google Cloud Shell and create the Docker image from there. I am sure that any DevOps engineer who has ever set up a CD pipeline, or anyone with minimal experience with Google Cloud Platform, would blame me for that, since there must be at least 10 ways of doing this in an automated and pragmatic way for a proof of concept, with zero cost.

Any feedback is therefore welcome through the comments section below, but please keep in mind that this post is aimed at beginners and keeps a pragmatic approach.

Find below a simple step-by-step tutorial:

1. Create a Google account

In case you don't have one yet, you will need to create a Google account.

2. Enable Google Cloud Platform

Once you are logged in, create a free account in Google Cloud Platform and enable it; you may get some free credits to spend.

Once the service is added, you can access the Google Cloud Console. Note that if you just created the account, a default project will have been created, but you may create a new one.

Go to the top bar and click on the name of the project. A pop-up will be displayed showing the Project Id. Please take note of the Project Id of the project you want to use; you will need it later.

3. Enable billing for your Google account

Even if you use the free initial credits, you will need to enable billing in the Billing section and provide card details.

Bonus point: if you care a lot about your credits, go to the Billing section, open the Budgets and Alerts page and set up some alerts. You will receive emails when you reach defined percentages of your budget.

4. Setup your local environment

Install Docker. You will need it for building your Docker Image and testing it locally.

5. Build a Docker image

Dockerize your web app or service, i.e., create a Dockerfile that describes how the Docker container must be set up to run your web app or service.
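As a minimal sketch, assuming a Java web app packaged as a WAR and the Jetty base image mentioned later in this post (the WAR path and name are hypothetical):

# Base image with Jetty 9.4 on Java 11
FROM jetty:9.4.14-jre11
# Deploy the (hypothetical) WAR as the root web app
COPY target/my-app.war /var/lib/jetty/webapps/root.war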

Build your Docker image, in case you didn't do it before (note the trailing dot, which sets the build context to the current directory):

docker build -t <IMAGE_NAME>:<TAG_VERSION> .

6. Create a New VM Instance

Launch the Google Cloud Console and, in the Compute Engine section, access the VM instances page. From there, click on the Create instance button at the top.

A new page is displayed. Note the following points:

  • Region and Zone have an impact on users' network latency and on cost. Some regions may not have any shared CPUs available, so if you try to create a shared instance there, you will get an error. Regarding cost, some regions are cheaper than others. For Europe, europe-west4 is cheap (but it didn't have any shared CPUs available, so I chose a different one for my proof of concept).
  • Machine Type: the more CPUs and memory, the more you pay. On a budget, you can click on the Customize link and then select 0 cores, which means 1 shared CPU, so your app will be sharing resources with other apps. The minimum RAM is 0.6GB for shared CPUs, but you may need more (remember you are sharing). Shared CPUs don't offer GPUs, so that's another point to consider.
  • We tick Deploy a container image to this VM instance, which changes the Boot Disk to a container-optimized OS image that includes the Docker command line tool.
  • Container image: if you published your Docker image to either a Docker registry or Google Cloud Registry, you can just specify the image here and you won't need any additional configuration. In my case, I was testing my image in my local Docker, so I just set an image in that field, like jetty:9.4.14-jre11, which is the base of my Docker image.
  • Tick Allocate a buffer for STDIN and Allocate a pseudo-TTY, just in case you want to connect to the container (but you won't need this for a proof of concept).
  • Note that the instance has a 10GB boot disk with an image optimized for containers. The more space you provision, the more you pay, but 10GB is the minimum disk size anyway.
  • If you want to invoke your web app or service from outside the network, check Allow HTTP traffic and/or Allow HTTPS traffic, depending on your web app or service, in order to set up the firewall.

Click Create and your new VM instance should be up and running shortly.
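Alternatively, if you prefer the command line, the Google Cloud SDK can create an instance with a container in a single command (instance name, zone and image are placeholders):

gcloud compute instances create-with-container <INSTANCE_NAME> --zone <ZONE> --container-image <IMAGE>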

7. Deploy your web app or service to your VM instance

Note that this step is not needed if you pushed your Docker image to either a Docker registry or Google Cloud Registry and specified the Docker image path when creating the VM instance in the previous step.

In the Google Cloud Console, in the Compute Engine section, access the VM instances page. From there, click on the SSH button on the row for the created instance. A new SSH terminal will open and you will be able to work on your instance.

I am not proud of this 🙁. From there, I just cloned my project with git, built the Docker image again (in the same way I did in the previous step), and just ran the image:

docker run --rm -p <EXTERNAL_PORT>:<INTERNAL_CONTAINER_PORT> <IMAGE_NAME>:<TAG_VERSION>

8. Allow incoming calls in the Google Cloud VPC firewall

The app is up and running, but it is not accessible from outside yet. In the Google Cloud Console, in the VPC Network section, access the Firewall Rules page and add a rule for incoming requests to the <EXTERNAL_PORT> exposed by Docker in the previous step. You can specify IP ranges, or just 0.0.0.0/0 if you don't really need to restrict them.
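The same rule can also be created from the command line (the rule name is a placeholder):

gcloud compute firewall-rules create <RULE_NAME> --allow tcp:<EXTERNAL_PORT> --source-ranges 0.0.0.0/0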

How to Deploy a Dockerized web application or service to Google Kubernetes Engine (GKE) in Google Cloud Platform (GCP)

By the time I am writing this, I have just deployed a web app into Google Cloud Platform for the first time, using Google Kubernetes Engine. This post intends to help beginners do the same.

Any feedback is welcome through the comments section below, even more so if you have plenty of experience with GCP and GKE and think any of the steps could be improved, but please keep in mind that this post is aimed at beginners and keeps a pragmatic approach.

Find below a simple step-by-step tutorial:

1. Create a Google account

In case you don't have one yet, you will need to create a Google account.

2. Enable Google Cloud Platform

Once you are logged in, create a free account in Google Cloud Platform and enable it; you may get some free credits to spend.

Once the service is added, you can access the Google Cloud Console. Note that if you just created the account, a default project will have been created, but you may create a new one.

Go to the top bar and click on the name of the project. A pop-up will be displayed showing the Project Id. Please take note of the Project Id of the project you want to use; you will need it later.

3. Enable billing for your Google account

Even if you use the free initial credits, you will need to enable billing in the Billing section and provide card details.

Bonus point: if you care a lot about your credits, go to the Billing section, open the Budgets and Alerts page and set up some alerts. You will receive emails when you reach defined percentages of your budget.

4. Setup your local environment

Install Docker. You will need it for building your Docker Image, testing it locally, and pushing the image to Google Container Registry.

Install the Google Cloud SDK. You will need it to set up Docker and, alternatively, to run some commands instead of using the Google Cloud Console.

Set up your Google Cloud SDK with your Google Cloud Project Id:

gcloud config set project <PROJECT_ID>

Install the Kubernetes Command Line tool.

 gcloud components install kubectl

5. Build a Docker image and push it to your private Google Cloud Container Registry

Dockerize your web app or service, i.e., create a Dockerfile that describes how the Docker container must be set up to run your web app or service.

Configure Docker to use gcloud as a credential helper. This will allow Docker to push the image to your private Google Cloud Container Registry:

gcloud auth configure-docker

Build your Docker image, in case you didn't do it before. The tag format is very important, as it tells Docker where to store the image in your private Google Cloud Container Registry:

docker build -t <HOST>/<PROJECT_ID>/<APP_NAME>:<TAG_VERSION> .

The four current options for <HOST> are gcr.io, us.gcr.io, eu.gcr.io and asia.gcr.io, depending on the registry location you want to use (you can read this help page for more details).

Note that if you already built your image previously, you still need to create a new tag for that image with the previous format:

docker tag <IMAGE_ID> <HOST>/<PROJECT_ID>/<APP_NAME>:<TAG_VERSION>

Push your image to your private Google Cloud Container Registry.

docker push <HOST>/<PROJECT_ID>/<APP_NAME>:<TAG_VERSION>

By the time I am writing this, I am not aware of any feature in the Google Cloud Console that allows you to push to your private Google Cloud Container Registry, so the command line is the way I went.
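You can check that the image arrived by listing the images in the registry:

gcloud container images list --repository=<HOST>/<PROJECT_ID>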

6. Create a new cluster in GCP for your new web app or service

Although you can do this with the Google Cloud SDK, for beginners I would recommend doing it from the Google Cloud Console. Launch it and, in the Kubernetes Engine section, access the Clusters page. From there, click on Create cluster.

On the left side, select Standard Cluster.

Note the following points:

  • Location type is Zonal, since we are on a budget with a low-demand project (note that a region contains several zones, and the Regional option replicates nodes in all zones of the selected region).
  • Zone: remember, some regions are cheaper than others. For Europe, the europe-west4 region is cheap.
  • Default pool: for this proof of concept, one single pool is OK (remember you can change it later). The more CPUs and memory, the more you pay. On a budget, you can click on the Customize link and then select 0 cores, which means 1 shared CPU, so your app will be sharing resources with other apps. The minimum RAM is 0.6GB for shared CPUs, but you may need more (remember you are sharing). Shared CPUs don't offer GPUs, so that's another point to consider.

We won't change the Advanced Options, so you don't need to change them either, but it is good to have a look at them.

Just click on Create and wait until the cluster is created.
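For reference, a similar cluster can be created from the command line (name, zone and machine type are placeholders):

gcloud container clusters create <CLUSTER_NAME> --zone <ZONE> --num-nodes 1 --machine-type <MACHINE_TYPE>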

7a. Deploy your web app or service to Kubernetes with the Google Cloud Console

You can run this task with kubectl or with the Google Cloud Console. I actually did it with kubectl (shown in the next section), but I would recommend doing it with the Google Cloud Console, since it is more descriptive.

From the Google Cloud Console, go to the Kubernetes Engine section, open the Clusters page, and click the Deploy button at the top of the page (if you have more than one cluster, you may need to do it from the details page of the selected cluster). Since you will actually create a new Workload, you can also do it from the Workloads page.

Click Existing container image, since you previously uploaded your image, and click the Select button in order to select your image.


One important environment variable to add is Port, to be set to the internal port used by the container for receiving requests (called <INTERNAL_CONTAINER_PORT> in the next section).

Once you have finished editing the container, click Done. You can also add more containers to the deployment, but for the current example one is enough. Click Continue to display the next page.


Just set the application name (the name for the given deployment), any Kubernetes labels you may need, select the cluster, and click Deploy. Once it is finished, the container in the node will be running the web app or service included in your Docker image.

Expose the deployment of the web app or service to the Internet through a Load Balancer.

From the Google Cloud Console, go to the Kubernetes Engine section, open the Workloads page, select the recently created deployment and, from the deployment details page, click the Actions button at the top of the page and then the Expose button.

Set a mapping from the internal container port (called <INTERNAL_CONTAINER_PORT> in the next section), specified in the deployment, to the external port that will be exposed by the load balancer (called <EXTERNAL_PORT> in the next section). Click Done, then select Load Balancer in the Service Type field, and then click Expose. A new Service will be created (you can see it on the Services page, in the Kubernetes Engine section).


7b. Deploy your web app or service to Kubernetes with the Kubernetes Command Line Tool

This is the way I did it, but I think that Google Cloud Console is much better for beginners.

Set up the Kubernetes command line tool to connect to your new cluster in Google Kubernetes Engine:

gcloud container clusters get-credentials <CLUSTER_NAME>

Create a new Kubernetes Deployment, specifying the internal port of the container.

kubectl run <DEPLOYMENT_NAME> --image=<HOST>/<PROJECT_ID>/<APP_NAME>:<TAG_VERSION> --port <INTERNAL_CONTAINER_PORT>

Expose the deployment of the web app or service to the Internet through a Load Balancer:

kubectl expose deployment <DEPLOYMENT_NAME> --type=LoadBalancer --port <EXTERNAL_PORT> --target-port <INTERNAL_CONTAINER_PORT>
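Once the Service is provisioned, you can find the external IP assigned by the load balancer (the Service created by kubectl expose is named after the deployment):

kubectl get service <DEPLOYMENT_NAME>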

Selecting source code repository tools

Some context

As part of my adventure as an entrepreneur, I came up with an idea which could be great for a digital marketplace, so I considered launching a Startup and developing products for validating that idea.

However, despite having participated in many projects, it was only when I started working on my own idea that I got involved in selecting the right tool for managing it: a source code repository tool.

The source control versioning system: Git

Regarding the source control versioning system I would use, there was no doubt: Git.

After using other old-fashioned centralized SCMs like SourceSafe, CVS and SVN in the past, working with Git had been a great experience over the last years, mostly in the UK.

I even remember having used it within a company that managed its source with SVN, being one of only 2 devs using Git thanks to the git-svn tool, provided by git-scm. That was my first approach to Git, and it gave me so much flexibility to work simultaneously with several versions of the code (tickets, issues, proofs of concept) that it empowered me to reach another level as a developer.

The next projects were started with pure Git, and I got much better at conflict resolution.

Yeah, I know there are other distributed version control systems, such as Mercurial. However, I have had no reason, so far, to consider any source control versioning system other than Git; partially because it has become nearly a standard, but mostly because I am still learning something new about it after using it for the last 6 years.

The repository tool

It was now time to choose a repository tool for Git. These were my main requirements:

  • Free: I would choose a free pricing tier, at least until the team reached 2 or 3 devs, so all the other requirements should be offered by a free pricing tier.
  • Git support: of course, as the SCM would be Git.
  • Private repositories: the code of the main projects wouldn't be public, so privacy was really a key point for me. The more private repositories included in the free tier, the better.
  • Basic issue tracker: even if at a later stage I might consider issue tracking tools like Jira, Version One or Crocagaile, I needed a basic issue tracker from day zero.

At that stage, I was not focused on CI/CD support. That would become a key point a bit later, depending on the hosting tools for my projects, but not at that stage.

The top 3 providers at that time were GitHub, Bitbucket and GitLab.

GitHub

I was really happy with GitHub. In fact, all my personal public projects were hosted on GitHub, and the experience was great.

It was also the most used repository tool.

However, by the time I selected the tool, long before it was purchased by Microsoft, GitHub did not offer private repositories in its free tier, and that was a red line for me.

GitLab

GitLab's free tier provided unlimited private repositories, so it suited my requirements well.

Unfortunately, the issue tracker was provided as part of the workflow features, so it was not offered in the free tier.

Atlassian Bitbucket: the selected tool

Bitbucket was the only tool offering unlimited private repositories and a basic issue tracker in its free tier. In addition, it offered wiki support similar to the one I had enjoyed with GitHub.

It was the only tool that fulfilled all my requirements, so Bitbucket was obviously the selected one.

However, later on, once other collaborators joined the team, the Bitbucket issue tracker turned out to lack agile features:

  • It lacked a Kanban board.
  • It lacked Scrum support.
  • Board extensions did not offer a good user experience for sorting issues and setting priorities. In fact, they had quite a few issues related to sorting tickets.

However, the Bitbucket issue tracker worked well while the project was very small and I was the only participant; in fact, I didn't actually need any board to follow progress.

Once my partner joined the team, a better tool was required to give her more visibility of the current status of the project.

However, that is another story.

Setting up your own domain in NameCheap with a mailbox hosted in Hostalia

Some context

As part of my adventure as an entrepreneur, and after launching the Startup, the team decided to strengthen branding by creating mailboxes with our custom domain.

After contracting a basic professional email hosting plan with Hostalia, which gives us a 2GB email inbox with unlimited aliases and great customer service for as little as €0.99 a month, we wanted to set up those mailboxes with a specific domain previously purchased in NameCheap. This was the point where I realized that NameCheap's customer service was not that great.

Set up your mailbox in Hostalia's admin panel

Access your Hostalia admin panel and, in the Mail Management section, select your mailbox and link it to an external domain name, e.g. my-domain.com.

Once your domain is linked, you should see it listed below, so you can click DNS Management from there or from the left menu.

You can now see all the DNS entries that would be required if you hosted both mail and web in Hostalia. You will need them for setting up DNS resolution.

Setup DNS records in NameCheap

Make sure the NameServers property in the Domain home setup screen contains the value "NameCheap Basic DNS".

In the "Advanced DNS" configuration, in the "Host Records" section, add all the records required for the mail inbox. Copy the values from Hostalia's DNS entries displayed in the previous step to the corresponding new NameCheap DNS entries:

Type  Hostalia's DNS entry    NameCheap's Host value of the DNS entry
A     my-domain.com           @
A     imap.my-domain.com      imap
A     mx.my-domain.com        mx
A     pop3.my-domain.com      pop3
A     smtp.my-domain.com      smtp
A     webmail.my-domain.com   webmail
TXT   my-domain.com           @

Once the DNS records are propagated, everything should work as expected.


Setting up your own domain in Aerobatic with NameCheap

Some context

As part of my adventure as an entrepreneur, I created a web app in JavaScript, with ReactJS. In order to boost development, I decided to use Create React App (CRA). After deploying my web app to Aerobatic (see the previous post, Deploying my CRA website), the next step was setting up my custom domain in NameCheap.

Note that there may be better ways to do this step, since I am not an expert in DNS resolution.

Buy and register domain in NameCheap

After selecting the name of the product related to the website, I purchased the domain in NameCheap.

I must confess that I wouldn't go for NameCheap if I had to do it again. Basically, I experienced a lack of support with other domains when I wanted to have different providers for mail hosting and web hosting, and since I was out of my comfort zone, it was really troublesome.

In addition, it looked like NameCheap DNS resolution does not support apex (i.e. "naked") domains, a feature I would likely use for certain domains.

Register domain in Aerobatic

You can set your domain up via the command line (with the Aerobatic CLI) or through the Aerobatic console.

Via the Aerobatic CLI:

aero domain --name my-domain.com --subdomain www

Via the console:

Name         my-domain
Domain Name  my-domain.com
Subdomain    www

You have to wait until Aerobatic sends you an email with the name and value of a CNAME record, required for verifying ownership of the domain.

Once you have the values, it is time to change your setup in NameCheap.

Setup DNS records in NameCheap

Make sure the NameServers property in the Domain home setup screen contains the value "NameCheap Basic DNS".

In the "Advanced DNS" configuration, in the "Host Records" section, add a new entry with the CNAME record (name and value) required for ownership verification.

You should then receive another email from Aerobatic saying that your SSL certificate has been provisioned.

In addition, once Aerobatic provisions your domain in AWS, they will send you an email with the CNAME records required for *.my-domain.com and www.my-domain.com, including the CloudFront URL for the mapping. E.g.:

Type          Name                Value
CNAME Record  www.my-domain.com   cloudfront-id.cloudfront.net
CNAME Record  *.my-domain.com     cloudfront-id.cloudfront.net

Finally, add the following DNS entries in the "Advanced DNS" configuration, in the "Host Records" section:

Type          Host        Value
CNAME Record  www         cloudfront-id.cloudfront.net
CNAME Record  www--stage  cloudfront-id.cloudfront.net

Once the DNS records are propagated, everything should work as expected.

Hosting a Create-React-App (CRA) website

Some context

As part of my adventure as an entrepreneur, I created a web app in JavaScript, with ReactJS. In order to boost development, I decided to use Create React App (CRA). It was on version 1 when I started using it, but it paid off quickly.

I had been developing my web app with Create React App (CRA) for a while, using a local environment on localhost for testing, so I started looking for a way to host it.

Some technical details about the website

Pure Create React App

My web app had been developed with Create React App. It was a SPA developed respecting the CRA guidelines, so I had no intention of ejecting it, so far.

Heterogeneous routing

Most of my app used client routing, with React Router and React Router Dom using browserHistory.

Those pages displayed a spinner while the configuration had not yet been loaded from the REST API; otherwise, they displayed content according to the configuration.

Other parts of the app used static routing, dispatching static content with ReactJS.

There were also some static resources (like robots.txt), which should be reachable.

SEO: Dynamic HTML meta tags in every page

Every page had different HTML meta tags, depending on its content, which were injected using React Helmet.
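As an illustration, a page could inject its own tags roughly like this (the component and the values are hypothetical):

import React from 'react'
import { Helmet } from 'react-helmet'

const ProductPage = ({ product }) => (
  <div>
    <Helmet>
      {/* Per-page title and description, driven by content */}
      <title>{product.name}</title>
      <meta name="description" content={product.summary} />
    </Helmet>
    {/* page content */}
  </div>
)

export default ProductPage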

Which sort of hosting to use? PaaS or IaaS?

In order to choose between PaaS and IaaS, I took the following points into consideration:

  • Our technical capacity was very limited. Basically, one single person was implementing the web app, the mobile apps, the backend platform and the database, and doing many more tasks related to the Startup.
  • Our experience with IaaS and the cloud was limited at the time we were making the decision.

We therefore went for PaaS providers.

PaaS: extra bonuses

  • HTTPS with auto-renewal of certificates
  • CDN
  • Pipeline for deploying new versions
  • Support for different environments

Considering the platforms proposed in the CRA documentation

The CRA documentation described the deployment process for several providers.

Regarding Azure, Firebase, S3 and CloudFront: at this stage, we were not considering big products like Azure, Google Cloud or AWS. We preferred a third-party product, specific to hosting web apps through a CDN with minimal configuration; since our experience with the big platforms was limited, we actually wanted simplicity at a fixed cost much more than flexibility with dynamic pricing.

First versions: Heroku

I deployed the first versions of my app to Heroku.

At that moment, I only had client routing with React Router and React Router DOM using browserHistory, and the web app worked as expected.

Everything changed when I started using React Helmet to inject different HTML meta tags into every dynamic page, so I could get a better SEO ranking, and I added static routing for certain JavaScript pages (with ReactJS).

Somehow, Helmet was not injecting the HTML meta tags properly for every page.

Next approach: Surge

Since Surge did not support client-side routing, in order to use it I had to prerender my CRA app.

After trying react-snapshot and react-snap, I could generate all the static pages only with react-snapshot, but even after setting a generous snapshotDelay parameter, it looked like the static page generation didn't play well with the spinner and the loading of configuration from the REST API, so all pages were rendered with an infinite spinner.

At this point I realized that it could be really hard to set up static server rendering with my app, so I had to look for a provider that supported client-side routing. Therefore, GitHub Pages could not be considered, since it does not support client-side routing. Additionally, there was no mention of client-side routing for JavaScript apps in the Now documentation, so I moved on.

In addition, Surge did not fit, as it did not serve my robots.txt file or other static resources.

First successful deployment: Netlify

Netlify was really awesome, since its native support for client-side routing worked just by adding a _redirects file with the index.html redirection.
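For reference, the SPA fallback in _redirects is a single rule that rewrites every path to index.html with a 200 status:

/*    /index.html   200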

Just by adding a .netlify file with my site id to my build folder, everything worked as expected.

I kept Netlify for a long time, but there was something missing: multiple environments.

Netlify and password-protected stage environment

Before the release of the product, I had foreseen the following scenario:

  • A production environment with some specific content.
  • A stage environment, password-protected, with the current web app.

Netlify did not support multiple environments but, at the very least, I needed to password-protect my website, so it could serve as a pre-release stage environment, available online to the rest of the team (mostly my marketing partner).

Netlify pricing strategy

However, by the time I was using it, the free tier didn't allow me to password-protect the website; I would have had to upgrade my plan and pay $300 a month to get it. $300 a month was quite an expensive cost for the Startup's website, since the early stages of the product wouldn't bring enough income to make it affordable.

So the Netlify pricing strategy was clearly unsuitable for early-stage website products.

The ideal hosting: Aerobatic

It was a long time before I found Aerobatic. There was no free tier, but a 30-day trial, and I had to pay for certificates and a custom domain name, but the price was as low as $15 a month (versus the $300 a month I would have had to pay to Netlify), so the pricing suited a Startup web app much better.

There would of course be more performance considerations to keep in mind but, at that stage, I just had some requirements, a web app to deploy and a minimal budget to spend.

What I got in Aerobatic for $15 a month:

  • 2 environments (called stages in Aerobatic): stage and prod.
  • A password-protected stage environment.
  • Auto-renewed Amazon certificates for the custom domain.

In order to get that, I made some changes to my web app:

  1. A new .env.stage file was created, with the environment variables required for the new stage environment (see the sketch after this list). I then had .env (development), .env.stage and .env.production.
  2. Since I could not change the NODE_ENV variable without ejecting, I just created support for feature flags (a new REACT_APP_FEATURE_FLAGS property), so flags were different in the stage and production environments.
  3. Finally, I renamed the CRA build script to build:production, and I created a new build:stage script with "env-cmd .env.stage npm run build:production".
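A minimal .env.stage could look like this (the flag values are hypothetical; CRA only exposes variables prefixed with REACT_APP_):

REACT_APP_FEATURE_FLAGS=experimentalSearch,betaCheckout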

My package.json

"scripts": {
    ...
    "build:production": "react-scripts build",
    "build:stage": "env-cmd .env.stage npm run build:production",
    ...
},

My Aerobatic deployment commands for the stage environment looked like:

npm run build:stage             
aero deploy --stage stage

While the commands for production were:

npm run build:production
aero deploy --stage production

Trackmania Nations Forever: forever

If you love arcade car video games, you are a bit crazy and you want a great free game, this is your game.

10 years ago I found this game while searching for free racing games.

Trackmania Nations Forever is actually madness: you drive a race car through impossible tracks; tracks with multiple jumps, loops and obstacles, where you don't even know where to go on your first attempt.

It has different game modes (single player, play with friends and online play), track creation, and a great online community.

The thing is that, some days ago, I discovered that the game is still operative, the free download is still available and, after giving it a go, there are still online players!

It is amazing that a game released in 2007 is still fun and still keeps a community 12 years later.

My first computer and video consoles

My teenage years were not involved with technology, contrary to some of my friends and schoolmates.

Video consoles

The first thing that comes to my mind in relation to computer science is all the time I spent playing with my cousins at their places in Cartagena, from 10 years old onward.

Sega Master System II – Alex Kidd in Miracle World


I remember having played on so many Saturdays with Jesús and Rufino, some of my cousins, on the awesome Sega Master System II and its amazing game Alex Kidd in Miracle World, an incredible platform game that was on the same level as the famous Super Mario Bros. 3, most likely the best platform game ever. It was, at that time, without any doubt, the best Sega game ever.

NES (Nintendo) – Super Mario Bros. 3


I also keep some good memories of times with my cousin José David, playing with a NES (Nintendo Entertainment System), or rather a clone video console (likely a NASA), and its 200 or 300 preinstalled games.

Sega Master System II – Sonic the Hedgehog


It wasn't until I was 16 years old that I purchased my first Sega Master System II, spending nearly all of my savings, this time with a preinstalled Sonic The Hedgehog, barely a couple of years before starting University, at the same time some of my friends bought the recently released PlayStation One. To be honest, I liked Sonic's graphics engine, but the playability of Alex Kidd was in a league of its own.

Computers

Logo

My first approach to computer science and computers happened when I was 12 years old, when I attended some classes at school and learned a bit of the Logo programming language. That was pretty much the first time I used a computer, a PC with a 5 1/4" floppy disk drive, and I was so impressed by what that turtle could draw on the computer screen.

Sinclair Spectrum +2A and Amstrad CPC 264


Some time later, when I was 13, I would enjoy a Sinclair Spectrum +2A at a good friend's place and, a couple of years later, an Amstrad CPC 264. I still remember the desperation of waiting for the computer to load video games.


My first computer: an 80386 SX

When I was 14, I asked my parents to buy me a computer, an Amstrad CPC that was being sold in a nearby store, but unfortunately, I was unlucky.

However, everything changed when, while my friends enjoyed their powerful 80486s and got into the DVD era, I got lucky: my father brought home from work a decommissioned computer, an IBM PS/2 56SX, an 80386 SX with a 20MHz processor.


Learning was never so cool. It had MS-DOS 3.3, with two 3.5" floppy disks, and a hard drive of barely 8MB, with only 2MB free. I was so excited to play PC FUTBOL, a game for managing football teams, that I tried to compress the whole hard drive, and since even then there was not enough space, I just deleted all the content on the hard drive, including the operating system. It sounds funny now, but how desperate I was when I tried to start the computer without any success.

Learning the hard way

How was it possible? After such a long time, I had a computer, but I couldn't install anything. Happiness came back when I discovered that I had 2 disk partitions, since there was a hidden partition that had been used in my father's company, so wow! I had 40MB now. I started discovering computer science there: Windows 3.1, MS-DOS 6.1, WordPerfect, etc., and some games like PC FUTBOL, The Secret of Monkey Island and, mainly, the game that taught me all the F1 tracks by heart and took so much sleep time from me: Formula One Grand Prix, by MicroProse.

Formula One Grand Prix

I can't explain why I loved it so much. I didn't care about the graphics, or the simulation engine, or the fact that I didn't have any analogue input device, but there was such excitement in passing all the cars.

Internet and much more

By that time some of my friends spoke about the Internet, especially one of them, Javier, who knew about encryption, hacking, etc. I didn't have a clue about what he often said, but I started feeling interested in the network of networks.

However, I was light years behind the knowledge others had by that time about video consoles, computers, programming languages or the Internet, but something had grown in me: the seed of the knowledge that came later, the interest in technology.

Hello world!

As you may know, "Hello world!" is usually the first program that developers write when learning a new programming language; basically, the starting point for something more important, the beginning of a learning curve. So this is the beginning of my personal blog.

I hope this blog becomes a hotchpotch of the experiences I have lived since I started working, back in 2002. And today is such a special day, the best day for starting a personal blog.

This day is special for 2 reasons: first, today is my birthday; and second, today is the International Day of Gamers. This last point may mean nothing to most of you, but it may be a sign for me.

But only time will tell.