Technically Speaking

ExpressJS and building a solid NodeJS stack


We at Orpheus Digital decided to try out a NodeJS stack. After much research, we went for the most famous option – ExpressJS. Currently, PHP on the CodeIgniter framework drives almost all our server-side systems, and that stack performs well. However, CodeIgniter's decline in popularity pressured us to adopt a more modern tech stack.

Despite this, we do not see ourselves deprecating the CodeIgniter stack anytime soon. It is fast, reliable and incredibly easy to set up. NodeJS will serve as another choice when developing a server-side application – it will exist beside our tried and tested CodeIgniter solution.

Although this is our first in-depth dive into NodeJS, we already have several tutorials that use NodeJS stacks, such as using GCE to run a NodeJS app.

Easier ExpressJS development

By default, an ExpressJS server does not pick up changes to application files while it is running, so we have to stop and restart the server manually after every edit. To avoid this, we can use a tool called nodemon. A server started with nodemon automatically restarts whenever files are saved.

Installing nodemon is quite easy:

npm install -g nodemon

Adding nodemon is not compulsory, but it is an incredible time saver!
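nodemon also slots neatly into the project's npm scripts, so the auto-reloading server can be started with npm run dev. A minimal package.json fragment sketching this (the dev script name is our own convention; bin/www is the entry point generated by express-generator):

```json
{
  "scripts": {
    "start": "node bin/www",
    "dev": "nodemon bin/www"
  }
}
```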

ExpressJS file structure

ExpressJS does not enforce a strict directory structure, so we have total freedom over file management. First, we will use the express-generator tool to set up an ExpressJS project.

npx express-generator -h
npx express-generator --view=hbs --git ej-server

This creates a folder called ej-server with a directory structure similar to the one shown below.

An example directory structure created using the express-generator tool

Next, we need to download the dependencies (which will create a node_modules folder).

cd ej-server
npm install
nodemon bin/www

The advantage of this method is that it sets up some default behaviour out of the box:

  • Adds support for a public directory where static resources can be stored. The generated boilerplate already has support for serving these static files!
  • Two sample routes (a / route and a /users route) along with their route handlers
  • A Handlebars-based templating system

An advantage of our previous CodeIgniter stack was that API calls and the backend UI could be handled in the same repository. We will add similar functionality to the ExpressJS stack as well.

Now that we have created our stack and files, let’s look at a way to deploy the ExpressJS application.

Deploy ExpressJS app on cPanel

Most cPanel-based shared hosting services also let advanced users serve NodeJS apps. We will look into how this can be done.

If your cPanel account supports this feature, you will see an icon like this:

Click on the “Create Application” button to create a new NodeJS application. You will be greeted with a form similar to the one shown below:

cPanel page to set up a new NodeJS app

This feature does not provide a file uploader; you have to upload the NodeJS files manually using FTP or Git. This upload location is the one you have to specify in the ‘Application root’ property.

Running npm install

After uploading the files, you need to run npm install before the application can work. Copy the command provided on the Node.js Application page for entering the application's virtual environment. After pasting it into an SSH terminal, you should be inside the virtual environment, where you can execute your npm install command.

Serve a public folder

Since this application will also serve HTML pages, those pages need to be able to access static content like CSS and JS files. ExpressJS has a built-in function to facilitate this:

express.static(path.join(__dirname, 'public'))

Serve as a subfolder

When it comes to serving on a production server, you may have to serve the app from a child URL of the top-level domain – for example, from a subfolder such as /app rather than from the domain root. This is a production-specific problem. The best way to go about it is to redesign the routing hierarchy: a parent route handles all requests to the subfolder, i.e. /app, and forwards them to child routes. This method is detailed in this article.

Remember that if you opt for this method, the public folder will also be served as a sub-route of /app. Therefore, to access a CSS file in the public folder from an HTML file, you will have to load the resource from /app/stylesheets/style.css instead of stylesheets/style.css.
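The prefix-routing idea can be sketched, independently of ExpressJS, as a plain function that strips the /app prefix before dispatching to child routes (a simplified illustration of the concept, not the article's actual router code; in a real ExpressJS app this would be an express.Router mounted at /app):

```javascript
// Simplified sketch: route URLs under the /app subfolder to child handlers.
// In a real ExpressJS app, an express.Router mounted at '/app' does this.
const childRoutes = {
  '/': () => 'home page',
  '/users': () => 'users page',
};

function route(url) {
  if (!url.startsWith('/app')) {
    return null; // outside our subfolder: not handled by this app
  }
  // Strip the '/app' prefix so the child routes stay unchanged
  const childPath = url.slice('/app'.length) || '/';
  const handler = childRoutes[childPath];
  return handler ? handler() : null;
}
```

Static assets follow the same rule, which is why a stylesheet ends up at /app/stylesheets/style.css.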


cPanel uses an application called Phusion Passenger to serve NodeJS applications. It uses a technique called reverse port binding to serve the NodeJS app; an in-depth look at this method and its consequences is available in the Passenger documentation.

An implication of this architecture is that the NodeJS application can make only one createServer().listen() call, otherwise Passenger gets confused. NodeJS applications served this way cannot choose their own port: Passenger ignores any specified port and instead listens on a random Unix domain socket.

Another common mistake is specifying app.js as the application startup file. This property should always name the script that makes the initial createServer().listen() call.

Technically Speaking

TestFlight from ReactNative: How do you do it?


You have finally completed your awesome React Native app after following the initial setup in our Technically Speaking article, Using GitHub with Expo and Vanilla React Native. The next logical step is to test the app on your friends' and family's phones. On Android this is relatively straightforward: you can simply generate an APK and distribute it from your website. For iOS, however, you have to use the TestFlight platform to install the app on iPhones and iPads.

Before you get started, make sure you have an Apple Developer account, a macOS device and an iPhone or iPad.

Firstly, we need to create an IPA build.

Setting up the developer account

Make sure you have signed in at least once to iTunes Connect using the same account you will be using to upload the app to the App Store.

To upload your app to the App Store, we will be using a macOS app called Transporter. Also make sure you enrol in the Apple Developer Program.

Creating an IPA build

Specifically, we need an archive build to deploy on TestFlight. For this, we can run the Expo command:

expo build:ios -t archive

Expo will then create an IPA file – you can upload this either to the App Store or to TestFlight.

The CLI will prompt you to provide other information as well:

  • Will you provide your own Apple Distribution Certificate?
  • Will you provide your own Apple Push Notifications service key?
  • Will you provide your own Apple Provisioning Profile?

For all these questions, you can opt to let Expo handle things for you.

If you receive an error like this, just re-install Expo by running:

npm i -g exp
Expo terminal window showing a common gotcha

If the problem still exists, make sure you are on the latest version of expo by running:

npm install -g expo-cli

Examples of other errors that occurred and how to resolve them:

To resolve this, simply delete the .fastlane folder at the path given in the error message.

Uploading IPA to Transporter APP

First, you need to create an App Store listing for the app. To do this, visit App Store Connect and choose to add a “New App”. Specify the name and other details, and select the bundle ID you specified during the IPA creation process.

Finally! This is the last step before you get to see the app on your iOS home screen, and it is simple: drag the IPA file and drop it onto the Transporter app.

If everything went well, you will see a nice blue Deliver button. Press it!

Press the ‘Deliver’ button to submit the IPA to the App Store and TestFlight

Setting up external testing on TestFlight

You can now view the app on App Store Connect. Open your app listing; the page will have a tab named “TestFlight”. Before you can deploy the app to external testers, you need to fill in some required information.

Fill in the required TestFlight information

The Builds section will then show that your app is being processed. Afterwards, you can start testing by supplying the testers' Apple ID email addresses.

Technically Speaking

UPchieve – Adding Subject Categories

Previously on Technically Speaking, we discussed setting up the UPchieve platform and making fundamental configurations to both the web frontend and the server. In this article, we discuss further configuration options.

Before we can add categories, we have to remove the existing ones. The system derives its category list from the categories under which questions are added.

The easiest way to do this is to use the mongo command line:


use upchieve
// assuming the questions are stored in a collection named 'questions'
db.questions.deleteMany({})

Alternatively, we can use the Edu Admin dashboard.

UPchieve Edu admin – add a question view

But make sure you have logged in to the UPchieve web client as an Admin user first.

After clearing the question collection, we can add our own questions using the Edu Admin dashboard. However, this on its own will not update all mentions of the categories on the site; some mentions have to be updated by hand.

Changes to the web client

Firstly, we need to update the categories that appear in the subject selection area of the student dashboard. For this, we need to update the topics.js file in the web client source.
When creating the topics and subtopics in topics.js, make sure the displayName and the key name are the same. (This might be a bug.)
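To make the quirk concrete, here is a hypothetical fragment in the shape of such a topics object, with each key equal to its displayName (the topic names and the exact object shape are illustrative assumptions, not UPchieve's real file):

```javascript
// Hypothetical topics sketch: each key matches its displayName, which
// appears to be required (possibly a bug, as noted above).
const topics = {
  math: {
    displayName: 'math',
    subtopics: { algebra: { displayName: 'algebra' } },
  },
};

// Quick consistency check for the key === displayName rule
const consistent = Object.entries(topics).every(
  ([key, topic]) => key === topic.displayName
);
```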

Changes to the server-side

We need to manually update the questions model at;

/models/Question.js (line 24 onwards)

The categories and subcategories here do not refer to the category and subcategory on the student dashboard! These refer to certain areas of the subject. For example: if there is a super category on the student dashboard called “IAL”, with a subcategory called “Maths”, the Question.js category should be “science” and the subcategory should be, let's say, “calculus”.
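That mapping can be pictured as a small lookup table (the names below reuse the article's IAL/Maths example; the table structure itself is an illustrative assumption, not UPchieve code):

```javascript
// Illustrative mapping between dashboard labels and Question.js fields.
// Dashboard: super category 'IAL' with subcategory 'Maths'
// Question.js: category 'science' with subcategory 'calculus'
const questionCategoryFor = {
  IAL: {
    Maths: { category: 'science', subcategory: 'calculus' },
  },
};

const mapped = questionCategoryFor['IAL']['Maths'];
```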

/models/Session.js (line 6)
/controllers/TrainingCtrl.js (line 7 onwards)

Then, to modify the certification entries that are stored in the database alongside the user records, we have to modify:

/models/User.js (line 236 onwards)

If everything has worked fine, you should be able to see the new categories throughout the server API and web frontend.

Technically Speaking

UPchieve remote tutoring platform deployment

With the ongoing Covid-19 situation, platforms that specialize in remote learning and education distribution have become invaluable. UPchieve is one such open source platform: it connects volunteer tutors and students so that they can meet and learn. Communication methods include an interactive whiteboard and audio calling.

UPchieve also has iOS and Android apps, built on the React Native platform.

In a previous Technically Speaking installment, we outlined the steps needed to deploy a NodeJS web app, using the UPchieve/web app as the sample, onto the Google Compute Engine. In this article, we talk about how the server portion of the platform can be deployed on the Google Compute Engine.

This NodeJS stack comes with a twist – we will be using nginx as a reverse proxy in front of NodeJS. This extra server setup has an advantage: NodeJS itself runs without requiring root permissions. Instead, nginx handles HTTP/S access for us and routes requests locally to the NodeJS server.

As a prerequisite, we assume that you have already created a new virtual instance on the Google Compute Engine.


Install Applications

Initially, the VM will not have the software we need, so we must populate it ourselves. For our scenario, we are going to need the following software:

  • git
  • NodeJS
  • MongoDB
  • make
  • certbot (for SSL connectivity)
  • nginx

The commands to install these tools on a Debian 9 system are given below:

# Install git
sudo apt-get install git

# To install nodeJS, we need to install curl
sudo apt-get install curl software-properties-common
curl -sL | sudo bash -
sudo apt-get install nodejs

# Install mongodb
# Update 02 Jul 20: run `sudo apt-get install wget` if wget is not present
wget -qO - | sudo apt-key add -
echo "deb stretch/mongodb-org/4.2 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.2.list
sudo apt-get update
sudo apt-get install -y mongodb-org

# Install make
sudo apt-get update
sudo apt-get install build-essential

# Install nginx
sudo apt update
sudo apt install nginx
systemctl enable nginx # enable the nginx service at boot

To install some of the tools on macOS (maybe as a development environment):

brew tap mongodb/brew
brew install [email protected]

Clone the Repository

We need to pull the source from the remote GitHub server and store it on our server locally. The open source UPchieve server source can be pulled from its GitHub repository.

# Clones the repository onto a folder named 'server'
git clone server
cd server

Starting Applications

Next, we will start the services necessary for our UPchieve server to perform its first run and setup. First, we will start the MongoDB background service.

For Linux systems:

sudo systemctl daemon-reload
sudo systemctl start mongod
systemctl status mongod # check if the service works

If the service has been set up successfully, you will see an output similar to this:

UPchieve requires MongoDB service to run properly

For macOS:

brew services stop [email protected]
brew services start [email protected]

mongod --config /usr/local/etc/mongod.conf --fork

# check if MongoDB is running
ps aux | grep -v grep | grep mongod

Next, we need to set up the UPchieve server and databases.

# setup database and install dependencies
cd server
bash bin/setup # if there is an error, run npm rebuild
node init
npm run dev # start upchieve server
# if you get a New Relic error, run
# cp node_modules/newrelic/newrelic.js newrelic.js
# if you get a bcrypt error, run `npm rebuild`
# if you still get the bcrypt error, run `npm install bcrypt`

You should be able to check whether the server is working at this point. Open your browser and visit:

http://<VM IP Address>:3000/eligibility/school/search?q=test

If it works, you might want to open a new shell (the current one is running the node server) to execute the remaining commands.


We need a few more changes to make the application server production-ready. At the moment, we have to type in the IP address and port number to access the server. Although this might be acceptable, since consumers of the application do not access the server directly, it is not recommended. Besides, there are no SSL facilities on the server.

To set up free SSL using Let's Encrypt, you can refer to our previous article, which outlines the procedure for the Google Compute Engine as well.

Configure nginx

We will use nginx as a reverse proxy for our NodeJS server. Assuming that nginx is already installed, we need to configure it.

sudo nano /etc/nginx/nginx.conf

Add a server in this file to listen on port 80 (the HTTP port). We do this by adding an entry inside the http block (make sure to fill in your own server name):

	server {
		listen 80;
		server_name <DOMAIN NAME>;

		location / {
			proxy_set_header   X-Forwarded-For $remote_addr;
			proxy_set_header   Host $http_host;
			proxy_pass         http://localhost:3000;
		}
	}
Now you can test the server using the domain name instead of the IP address and port number:

http://<DOMAIN NAME>/eligibility/school/search?q=test

Configure SSL

Before configuring SSL, we may have to stop both our NodeJS and nginx servers:

sudo systemctl stop nginx
ps aux | grep -i node # to find our node processes and PIDs
kill -9 <PID> # here PID is the ID of the node process

Since we have nginx running as a proxy, the certbot usage is slightly different and tailored for an nginx environment:

sudo apt-get install certbot python-certbot-nginx
sudo certbot --nginx # automates the editing of nginx configuration file

sudo systemctl start nginx # start nginx service
cd server
npm run dev # start our NodeJS server

Depending on the selections you made during the SSL configuration, you would be able to access the server on both http and https at this point.

http://<DOMAIN NAME>/eligibility/school/search?q=test
https://<DOMAIN NAME>/eligibility/school/search?q=test


The protocol is used by the server to trigger request notifications on the volunteer dashboard and in the session chat system. It should not be confused with WebSockets – these are two different protocols. We will be configuring WebSockets separately.

By default, the NodeJS server listens for these requests on port 3001, but we need to route them through our nginx server if we are to enable SSL for these requests.

Our game plan to cover all these grounds is to:

  1. Add a destination (upstream) server entry pointing at our NodeJS server on port 3001 (i.e. http://localhost:3001)
  2. Add a reverse proxy for the location / (this specific location is defined by the protocol). The proxy will take care of other requirements such as the HTTP upgrade
  3. Publicize an SSL-supported port, 3002, that can be accessed externally by our web app

We cannot use 3000 or 3001 in place of 3002 without changing NodeJS config code, as those are the ports the NodeJS server itself is listening on. Instead, we define an unused port, 3002.

http {
  upstream upstream-nodejs { # NodeJS destination
    server localhost:3001;
  }
  # other stuff...
  server {
    # Add SSL support to port 3002, which will be publicized
    listen 443 ssl; listen 3002 ssl; # managed by Certbot
    # Other SSL properties...
    location / {
      # listen for this location on port 3002
      proxy_pass              http://upstream-nodejs;
      proxy_redirect          off;
      proxy_http_version      1.1;
      proxy_set_header        Upgrade                 $http_upgrade;
      proxy_set_header        Connection              "upgrade";
      proxy_set_header        Host                    $host;
      proxy_set_header        X-Real-IP               $remote_addr;
      proxy_set_header        X-Forwarded-For         $proxy_add_x_forwarded_for;
    }
  }
}

Configure Websockets

WebSockets also require the HTTP upgrade method. In contrast to the approach we followed above, we use an nginx map block to describe this.

Next, we specify /whiteboard/ as the URI on which to listen for WebSocket requests. Requests to this URI will automatically be referred to the back-end WebSockets server on NodeJS.

The specialty of this NodeJS server is that it listens on port 3000 – which, if you remember, is also the port on which NodeJS processes normal HTTP requests. The difference is the URI: only requests sent to /whiteboard/ are upgraded to the WebSocket protocol. This is quite sensible, as WebSockets operates on the same ports as HTTP/S, 80 and 443.

http {
  map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
  }
  upstream websocket {
    server localhost:3000;
  }
  # other stuff...
  server {
    location / {
      # stuff
    }
    # add right after the location definition
    location /whiteboard/ { # this URI is where WebSockets is used
      proxy_pass http://websocket;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "Upgrade";
      proxy_set_header Host $host;
    }
  }
}

Configure Database Backups

A method to back up and restore data is crucial in production environments. MongoDB has built-in tools to carry out these procedures.

To backup the entire database:

# Install `zip` utility to compress the exported folder for download
sudo apt install zip unzip

sudo mongodump -d upchieve -o /home/backups/
# this creates a backup directory (upchieve) with JSON/BSON dumps of the collections
cd /home/backups
sudo zip -r upchieve.zip upchieve # the zip file name is arbitrary

The resulting zip file can be downloaded from the SSH terminal by clicking ‘Download File’ under the cog icon.

To restore the entire database, extract the zip file to reveal the upchieve folder. Then:

mongorestore -d upchieve upchieve

UPchieve Server Gotchas

Git ignores certain configuration files, like config.js. Therefore, if you need to update them, you have to edit the copy on the production server; updating the git repository will have no effect.

That is all for today's Technically Speaking discussion. Although we focused specifically on UPchieve, we hope this document will serve as a summary of setting up any server based on a similar stack (nginx and NodeJS). We hope you will use your newly learnt knowledge to improve the accessibility of education in the current times. Don't hesitate to leave any comments or suggestions below!

Technically Speaking

Using Google Compute Engine to run a Node.js app

In this new installment of Technically Speaking, we bring you another article on the features of the Google Cloud platform. This article will walk you through the procedure for setting up a Node.js server app from a GitHub repository and deploying it on a Google Compute Engine instance.

As the GitHub repository, we are going to select UPchieve/web. This is the front-end web app portion of a homework help platform called UPchieve. We think this is a highly relevant platform to know about today: platforms that encourage remote learning are becoming increasingly important in light of the global pandemic. We are hoping to release an article detailing the deployment of the server portion as well, so that you or your educational institute can continue delivering material.


You will be expected to create the virtual machine instance on the Google Compute Engine dashboard, which is a straightforward and very quick procedure. The walkthrough uses a VM instance running Debian 9 (the OS can be selected during the creation process), but the idea should be the same regardless of the platform.


First, you need an SSH connection to the VM instance. There are several ways to get one, but we will go the default route and use the browser-based SSH terminal. It can be accessed by going to the VM Instances page and clicking on the SSH button as shown in the screenshot.

Screenshot of the VM Instances page showing the location of the SSH button

If you have more than one instance, make sure you select the correct one!

Installing Git

This is the first thing we have to do if we are going to deal with GitHub repositories. The method will be quite familiar to the average Linux user:

sudo apt update
sudo apt install git
git --version

The last command is of course to verify that the installation was successful.

Installing Node.js

To install Node.js, we need to add its PPA to our system before installation. To do this, we first install curl.

sudo apt-get install curl software-properties-common
curl -sL | sudo bash -
sudo apt-get install nodejs

Make sure you replace the version number in the command with the one you want (preferably the latest version) before you execute it.

Cloning the repository

We will clone the repo onto the instance. This is done using:

git clone
cd web

If you plan on cloning from a private repository, use the same command; the only difference is that you will be prompted for your GitHub login details.

Installing Dependencies

Next, we need to install the app's dependencies. This simply means we are going to create and populate the app's node_modules folder. All of this is automated (thank God!) and can be executed by running:

npm install


You are ready to run the Node.js app! To run a development build, simply run:

npm run dev

If you got to this point in the procedure, you should, at the very least, be able to run the app on localhost.

Configuring the Firewall

To allow external access to our VM instance, so that anyone can view our awesome app, we need to configure the firewall. By default, the Google Compute Engine restricts external access to the instance.

To do this, go to the network interface details in the VPC Network section by clicking on the following menu option from the VM Instances page:

Click on the Firewall rules option in the right-hand drawer, then click the Create Firewall rule button at the top.

Give the rule any name you want, and make sure the other options match the screenshot:

The options to select for our firewall

You should now be able to access the Node.js app at <externalIp>:8080, where <externalIp> is the external IP mentioned on the VM Instances page. If it still does not work, make sure that both of these options are unchecked on the VM instance details page:

If they are not, click on the Edit button at the top to edit the details.

Configuring the App

Depending on the app you choose to deploy, there might be certain configuration options to modify. In the case of the UPchieve/web app, we need to point it to our UPchieve server. To do this, we edit the environment files, which are used to provide the server location, etc.

vim .env.development

The command above edits the environment variables used in development mode. For production mode, please use .env.production instead. If you are new to vim, you can watch this crash course in vim.

Edit the file so that it points to the server you want.
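For illustration, files like .env.development hold simple key=value pairs that name the server; the variable name below is a hypothetical placeholder, so check the repository's own environment files for the real keys:

```
# hypothetical key name - see the repository's .env files for the real ones
SERVER_ADDRESS=https://<DOMAIN NAME>:3000
```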

You can run the development build again and see the changes. My deployed app looks a little like this:

UPchieve web app up and running!

Deploying a Production Build

The UPchieve web app is based on the VueJS platform, so we follow the VueJS production build procedure. To create a production build, run:

npm run build

This creates a dist folder that can be served via HTTP. To do this, we run:

npm install -g serve
sudo serve -s -l 80 dist

This serves the app over the default HTTP port (80), so you can simply enter the IP address without explicitly mentioning the port. To serve on this port, you need elevated access.

In practice, however, we will have to serve our application on both the HTTP (80) and HTTPS (443) ports. We can specify multiple ports for the serve command:

sudo serve -s -l 80 -l 443 dist

Setting up SSL

At this point, if you tried accessing the HTTPS version of the app, you would immediately get an SSL protocol error. If you have a rough idea of how SSL works, this shouldn't come as a surprise: we need to install an SSL certificate.

The SSL procedure is a bit of a lengthy affair. There are certificate authorities (CAs) that grant paid certificates as well as free ones. You can get a free certificate from Let's Encrypt. The recommended way of setting up a Let's Encrypt certificate is using Certbot; they have detailed instructions on their site. In brief, you just have to run two commands:

sudo apt-get install certbot
sudo certbot certonly --standalone

Certbot will ask some questions, including prompting for an email address and a domain or subdomain (if you are registering a subdomain, enter the subdomain itself).

The last command generates a public key and a private key; the paths to both keys are listed at the end. To add SSL support to our serve command, we add some extra parameters referencing the keys.

sudo serve -s -l 80 -l 443 --ssl-cert /etc/letsencrypt/live/<domain>/fullchain.pem --ssl-key /etc/letsencrypt/live/<domain>/privkey.pem dist

Both of these paths will be listed out for you, and <domain> should be replaced by the domain or subdomain you selected.

Renewing SSL certificates (Update 07 July 2020)

Due to the procedure used to install the SSL certificates, certbot's auto-renewal feature sometimes does not work. However, the certificates can be renewed manually using the command below:

sudo certbot renew

This renews all installed certificates on the instance. Before executing this command, you may need to stop any applications using ports 80 and 443.

You now have a production-ready instance of UPchieve on your cloud VM. There are a few more problems, however. How do you make sure the script runs continuously even when the SSH terminal is closed? How do you ensure the service starts at boot, so that the server comes back up after a restart? These are questions of their own and deserve their own article!

Technically Speaking

Using GitHub with Expo and Vanilla React Native


One of the major disadvantages of using Expo appears when you have to stop using it – when you come across a feature Expo is missing. The only option left is to eject from the Expo workflow and move to a vanilla react-native stack; any SDKs you borrowed from Expo can be added back using ExpoKit. This immediately sounds like a lot of work, so we will discuss a workflow that integrates both Expo and react-native using GitHub.

About the Project

I am writing this while adapting one of my projects to this new workflow – to say the least, I am learning as I am writing and writing as I am learning. This will serve as a personal reference while also sharing my process with fellow developers, so that they can run where I crawled.

The project started as a test project but quickly evolved into a financially backed contract. I had not integrated GitHub into the initial project either; what I had was two separate folders, one for the react-native workflow and one for the Expo workflow. Modifications that involved a native module were done in the react-native distribution, while Expo was used for UI/UX modifications and other logic that did not require the custom modules to function. The custom module I wanted to use was react-native-nfc-manager – Expo does not support NFC features as of now.

Expo is a tool at Orpheus that we cannot live without. Because of this, we have a bunch of other tutorials that delve into great depth about Expo, like the implementation of material design in Expo-based apps.

Using expo eject

A feature introduced in SDK 34 is the ability to eject to what Expo terms a bare workflow. Initially, you could only eject to an ExpoKit workflow. Not having used that workflow personally, I will refrain from commenting on its functionality; but seeing that the folks at Expo are phasing it out and recommending that new users eject to the bare workflow, I would say ExpoKit was not successful at its job. What happens to any Expo-based APIs and libraries? The process of moving to the bare workflow also migrates the Expo libraries already in use – an automatic process.

The beautiful thing about the new bare workflow is that it allows you to keep using the Expo client as long as no native code runs. Expo provides an API call to detect whether the app is running in an Expo mobile client or as a vanilla app.
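As an illustration of that detection, the expo-constants package exposes an appOwnership value; the sketch below guards the require so it also runs outside an Expo project (the fallback behaviour is our own assumption, added for illustration):

```javascript
// Sketch: detect whether the app is running inside the Expo client.
// Assumes the expo-constants package; falls back gracefully when it is
// absent (e.g. when this snippet runs outside an Expo project).
let appOwnership = null;
try {
  // In an Expo project: 'expo' (Expo client), 'standalone', or 'guest'
  appOwnership = require('expo-constants').default.appOwnership;
} catch (e) {
  // expo-constants not installed: treat this as a vanilla environment
}
const runningInExpoClient = appOwnership === 'expo';
```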

To eject, simply run,

expo eject
A screenshot showing the manual steps of the eject command

Do not forget to select “Bare” from the options that come up.

To run the project as an Expo app, simply run:

expo start

Adding native plugins

The next logical step is to add a plugin that is based on native code – in other words, a plugin that could not be added in a managed Expo environment.

The plugin we will use to test this is react-native-nfc-manager. To add it, we use the yarn add command as shown below:

yarn add react-native-nfc-manager --save

To run the app as a vanilla React Native app on a mobile connected via USB, simply execute the following command –

yarn android
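As a rough sketch of how the plugin is then initialised in code (the method names isSupported and start come from the react-native-nfc-manager README – treat them as assumptions and check the version you install), the manager object can be injected so the logic stays testable without the native module:

```javascript
// `nfc` is expected to look like the default export of
// react-native-nfc-manager: isSupported() and start() return promises.
async function initNfc(nfc) {
  const supported = await nfc.isSupported();
  if (supported) {
    // Only start the manager when the device actually has NFC hardware.
    await nfc.start();
  }
  return supported;
}
```

In the app you would call something like initNfc(NfcManager) once, early in the component lifecycle, and disable the NFC features when it resolves to false.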

Committing to GitHub

Another question that arises at this point of the process is which folders should be committed to GitHub. The easiest method is to use an online .gitignore generator, which lets you select your technology stack and produces a gitignore file. For your easiness, we have already selected and produced a .gitignore file covering the macOS, Windows and React Native stacks.

The most significant change is that certain files from the android and ios folders (created during the eject process) are now also synced. Build-related files and folders are excluded from commits, since they are auto-generated from the source.
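As an illustration (the exact entries are an assumption – the generated files vary between react-native versions, so prefer a generated file), the React Native portion of such a .gitignore typically looks something like this:

```
# macOS
.DS_Store

# node.js dependencies, re-created by `npm install`
node_modules/

# Android/IntelliJ build output (auto-generated from source)
android/app/build/
android/.gradle/
*.iml

# iOS build output and CocoaPods artifacts
ios/build/
ios/Pods/
```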

Cloning from GitHub

After cloning the repository to a local folder, we need to install the libraries and instantiate Expo (basically, re-creating the node_modules folder). To do this, you will only require one command:

npm install

Library and script downloads will take some time. After that, we can run it on Expo using:

expo start

Or if we want to check native code and run vanilla React Native, simply run:

yarn android
yarn ios

Sometimes the react-native builder fails to recognize the Android SDK directory. In that case you can set an environment variable pointing to the directory (command shown for Windows):

set ANDROID_HOME=C:\Users\<UserName>\AppData\Local\Android\sdk

The placeholder <UserName> should be replaced with the user’s Windows profile directory name.
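On macOS or Linux, the equivalent would be an export in the shell. The path below is the default Android Studio install location on macOS – an assumption, so adjust it to wherever your SDK actually lives:

```shell
# Point the react-native builder at the Android SDK.
export ANDROID_HOME="$HOME/Library/Android/sdk"
# On many Linux setups the SDK lives at "$HOME/Android/Sdk" instead.
echo "$ANDROID_HOME"
```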

We do not have to perform expo eject as our commits already contain an ‘ejected’ workspace.

This concludes the Technically Speaking post for today. Hope you learnt something from the post!

Technically Speaking

React Native Apps styled with Material Design

React Native is the development platform to pick if you are a startup thinking about developing for both iOS and Android. Apart from a ton of source material and community support, development happens in JavaScript – a language that is easy to grasp and that developers are quite likely to have come across at least once. Facebook is its main contributor and initiator.

Orpheus Digital recently moved from Cordova to React Native. Cordova basically serves web pages in an embedded browser. This has serious performance bottlenecks, especially on budget or mid-range devices. However, Cordova had been around longer, so its community was larger and more varied. That advantage slowly eroded as React Native matured and collected a strong community of its own.

The strong community resulted in intricate tool kits to make development easier. One of these tools is Expo. We at Orpheus Digital are fans of this tool. This toolkit makes coding production-ready mobile apps a breeze. It takes care of all the mundane tasks and provides infinitely helpful extras like OTA debugging. Code-wise there is not much that differs from vanilla React Native development.

StoreBoss is the first app we created using this stack. The link #ReactNative will get you to the other articles we have regarding this mobile development stack.

Let’s assume that you have already set up React Native and Expo (a straightforward process documented on their respective websites). We will now move onto the material design framework that we, at Orpheus Digital, also use!

React Native Paper

This is the Material Design framework we will be using. Installing this framework is as straightforward as it can get:

yarn add react-native-paper

If you are not using Expo, you would also have to run,

yarn add react-native-vector-icons
react-native link react-native-vector-icons

This adds the library to our project. Now we will use it in our code to render some Material Designed UI elements.

The React Native Code

Startup Code

We like to isolate the starter code from the actual app. So we will have two JS files: one named App.js which will contain our starter code and another called MainScreen.js (in a folder called src) which will contain our first actual UI.

import * as React from 'react';
import { Platform, StatusBar } from 'react-native';
import { DefaultTheme, Provider as PaperProvider } from 'react-native-paper';
import MainScreen from './src/MainScreen';

These import statements all reflect the aforementioned structure of our app. PaperProvider is the root component that our Main component should be wrapped under for the theme to work.

We will also look into defining our own colors for the app, hence the import statement for DefaultTheme.

const theme = {
  ...DefaultTheme,
  roundness: 2,
  colors: {
    ...DefaultTheme.colors,
    primary: '#3498db',
    accent: '#f1c40f',
  },
};

The spread operator (triple-dot notation) allows you to modify only the theme parameters you want while keeping the other parameters unchanged. We have defined our own roundness parameter, and primary & accent colors.

Next, we will code our Main component.

export default function Main() {
  return (
    <PaperProvider theme={theme}>
      <StatusBar barStyle={Platform.OS === 'ios' ? "dark-content" : "light-content"} />
      <MainScreen />
    </PaperProvider>
  );
}

We specify the custom theme we created earlier as a parameter for the PaperProvider. The StatusBar element refers to the area of the UI where the time, battery status, etc. are shown. We use a neat trick with the Platform React Native API to make that area darker or lighter based on the OS, for improved legibility.

MainScreen Code

Next we will lay out our Appbars and buttons for our main UI screen.

import React from 'react';
import { Text, View, Alert } from 'react-native';
import { Button, Appbar } from 'react-native-paper';

The good thing about the Paper framework is that the components need to be imported as required. This is a good thing for two different reasons:

  1. Decreases overhead – you import only what you need for that view
  2. Improves flexibility – maybe you want to use a button from another library in just one screen
export default class MainScreen extends React.Component {
    constructor(props) {
        super(props);
        this.state = {};
    }
    render() {
        return ([
            <Appbar.Header key="1">
                <Appbar.Action icon="filter" onPress={() => {
                    Alert.alert("Alert", "You pressed on the filter button!");
                }} />
            </Appbar.Header>,
            <View key="2" style={{margin: 10}}>
                <Button icon="camera" mode="contained" onPress={() => Alert.alert("Alert", "You pressed on the contained button!")}>
                    Press me
                </Button>
                <Button icon="camera" mode="outlined" onPress={() => Alert.alert("Alert", "You pressed on the outlined button!")} style={{marginTop: 10}}>
                    Press me
                </Button>
            </View>
        ]);
    }
}

This is our entire MainScreen.js. A relatively new feature in React Native is the ability to use a key parameter on the components returned from the render() function to string together different components without having to wrap them in a parent element. The individual components are instead provided as an array.

Also note how the Appbar component is consumed.

Now, the project can be run using the start command

yarn start

The output app will look similar to this on the two platforms. The main reason we like the React Native Paper library when opting for a material design interface is how the styling adapts to a more iOS-like theme when the app is run on iOS.

The interaction with the buttons reveals the ripple effects.

A screenshot from the React Native project running on an Android environment
Our test app running on Android
A screenshot from the React Native project running on an iOS environment
Our test app running on iOS

Thus, we conclude the Technically Speaking post for today. The project can be accessed from our GitHub repository.