Effective attendance management software can significantly reduce the workload of both the HR and finance departments. Automated attendance management goes a step further, taking reporting out of human hands entirely.
Amzy caters to both these requirements. Companies with thousands of employees can easily utilise existing attendance capture devices such as fingerprint and RFID scanners. There is no need to rip out existing infrastructure; the Amzy attendance management system will integrate right in.
Amzy web application
Amzy is a SaaS product that runs in any web browser, be it desktop or mobile. This makes it perfect for modern workplaces where working from home is often required.
Different companies have different ways of calculating overtime, so Amzy comes with the unique ability to change its calculation algorithms to suit each company. On top of this, Amzy can also cater to the needs of individual managers by producing tailor-made reports and graphs.
Orpheus Digital has already deployed Amzy for several companies, so there are clients who can back up our claims!
Attendance data capture
Download data from existing attendance capture devices such as fingerprint scanners, RFID scanners, etc., and upload it to the Amzy portal
Use IoT devices to upload attendance data directly to Amzy in real time
Dashboards and tables
See employee stats and any other detail at a glance using rich graphical dashboards.
In addition, there are all kinds of table views for more detailed data:
Attendance reporting and leave management
It is incredibly easy to use the calendar view to manage employee leaves, and even public holidays. And guess what, you can do that right on your mobile phone too!
Report generation is one of the key value areas when it comes to Amzy. Amzy can generate any type of report, regardless of its complexity. The reports are generated as Excel spreadsheets that can be easily printed or processed further.
Security and data protection
Amzy uses industry-standard SSL certificates. To use the system, all users must sign in with a password unique to each user. Super admins can grant different levels of permission to each user in the system. The system encrypts sensitive data, such as bank details, at the database level. Furthermore, Amzy has an innovative action log that records every action along with the user who performed it:
This gives a whole new level of accountability!
I want it!
Like what you see? Fill in your details below and send us a message for a demo or to share a cuppa!
Fuel queues in Sri Lanka were just starting around March and April. After countless hours spent in petrol queues, there had to be a better way. There was significant variation between the queues at different locations: certain sheds had close to no vehicles while others had long queues. A solution could be a way to check the queues at each location before stepping out of the house, showing which shed would get you the required fuel type with the least waiting time.
Enter where.lk, a community-driven fuel location web app for everyone in Sri Lanka. All you need to do is log in with your Google account and select the type of essential item you need (e.g. octane 95, diesel, super diesel, etc.). A map with all the locations selling that item opens up.
Requirements of the fuel locator
The solution had to tick a few boxes to be practical and usable:
There should be very little friction to use the solution
It should be available to as large a consumer base as possible
The status of as many petrol sheds as possible should be visible at a glance
Support should be readily available
Easy and inexpensive to maintain
The solution was to give the user an idea about the queue at a particular shed, including its length. The best and most accurate way to obtain this information would be from the shed itself. However, this also means there would be a significant process of introducing the app to shed owners/managers and educating them on using it. The practicality of such an implementation is highly doubtful; it would be extremely time-consuming and expensive.
Fuel locator app
A much easier approach would be to let the users themselves rate the sheds. The rating has to communicate the queue length as well as the actual availability of the fuel at the time the rating is made. However, it would be inconvenient to fill in a huge form. Rating a shed should be an incredibly swift process; otherwise users will not be motivated to make that effort.
After several iterations of testing and user feedback, the UI/ UX above materialised. The rating smileys instantly convey the status of the shed that the user clicked on while inviting them to provide their own rating.
The last updated time, fuel type selector and the navigate button were added later on; in fact, the last updated time was a critical feature that was requested.
The last updated time was considered so crucial that it had to be represented on the marker itself too. The border around the shed marker represents the updated time. A solid border denotes a recently updated location.
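As a sketch of how such a recency indicator could be computed (the thresholds and style names below are our own illustrative choices, not the actual values where.lk uses):

```javascript
// Sketch: derive a marker border style from the minutes elapsed since the
// last rating. Thresholds here are illustrative assumptions.
function markerBorder(minutesSinceUpdate) {
  if (minutesSinceUpdate <= 30) return 'solid';   // recently updated
  if (minutesSinceUpdate <= 180) return 'dashed'; // getting stale
  return 'dotted';                                // probably outdated
}

console.log(markerBorder(10));  // solid
console.log(markerBorder(500)); // dotted
```

The exact cut-offs matter less than the idea: the border degrades visibly as the information ages, so users can judge trustworthiness without opening the shed card.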
The user can click on the map to add a shed at that point. The user will see a form like the one below. The coordinate field is autofilled with the coordinates of the clicked point; the user should fill in the name and rating fields. The mobile number is optional, but if provided it shows up as a call button on the shed card.
Behind the scenes
Although we recently switched our default backend to an ExpressJS-powered stack (as explained in our previous Technically Speaking article), we decided to use a Laravel-based backend here. Shared hosting permits the deployment of Laravel applications without much performance loss, whereas deploying ExpressJS on shared hosting is still experimental in certain edge cases. Since Laravel is, after all, PHP, it has first-class support on shared hosting.
The Laravel app functions both as an API as well as an admin panel. The admin panel is greatly helpful in reducing the time taken to verify and validate new places and updates.
The best option for the web app was React: fast development, easy deployment and client-side caching. Client-side caching saves bandwidth on the server, as the user only needs to download the app resources once (unless there is an update). This saves the user's bandwidth too!
Efficacy
Daily ratings submitted by users can be averaged to get an idea of the overall rating for that fuel type on that day. This kind of statistic can be used to determine how the availability of the fuel has changed throughout a time period. We can use these statistics to determine the correctness of the solution – if an approach based on community-driven ratings is successful.
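A minimal sketch of this daily averaging, assuming a simplified rating shape of { fuelType, value, timestamp } (the real where.lk schema may differ):

```javascript
// Sketch: average the ratings submitted for one fuel type within one day.
// The rating object shape is an assumption made for illustration.
function dailyAverage(ratings, fuelType, dayStart, dayEnd) {
  const relevant = ratings.filter(
    (r) => r.fuelType === fuelType && r.timestamp >= dayStart && r.timestamp < dayEnd
  );
  if (relevant.length === 0) return null; // no data for that day
  return relevant.reduce((sum, r) => sum + r.value, 0) / relevant.length;
}

// Example: two octane-95 ratings of 3 and 5 average to 4 for the day.
const sample = [
  { fuelType: 'octane95', value: 3, timestamp: 10 },
  { fuelType: 'octane95', value: 5, timestamp: 20 },
  { fuelType: 'diesel', value: 1, timestamp: 15 },
];
console.log(dailyAverage(sample, 'octane95', 0, 100)); // 4
```

Repeating this per day and per fuel type yields exactly the time series described above.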
The ratings database was connected to Grafana so that the resulting graphs could be displayed beautifully. We can use the data to generate graphs like the one above for any of the fuel types.
We can infer from the graph above that there was a sudden drop in availability from first of May onwards. This makes sense because that was when the private fuel tank owners commenced a strike. In this way, if the drops and rises tally with the real world status, we can say that the approach, in fact, was effective.
We at Orpheus Digital made a decision to try out a NodeJS stack. After much research, we decided to go with the most popular option: ExpressJS. Currently, PHP on the CodeIgniter framework drives almost all our server-side systems, and that stack performs well. However, the decline in CodeIgniter's popularity pressured us to adopt a more modern tech stack.
Despite this, we do not see ourselves deprecating the CodeIgniter stack anytime soon. It is quite fast, reliable and incredibly easy to set up. NodeJS will serve as another choice when developing a server-side application; it will exist beside our tried and tested CodeIgniter solution.
Although this is our first in-depth dive into NodeJS, we have several tutorials involving NodeJS stacks, such as using GCE to run a NodeJS app.
Easier ExpressJS development
By default, updating application files will not restart the server during ExpressJS development; we need to stop and start the server manually. To avoid this, we can use a tool called nodemon. Servers started with nodemon automatically reload when files are saved.
Installing nodemon is easy:
npm install -g nodemon
Adding nodemon is not compulsory. However, it is an incredible time saver!
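One common convenience (the script names are our own choice, and bin/www assumes the express-generator layout used later in this article) is to register nodemon in package.json:

```json
{
  "scripts": {
    "start": "node bin/www",
    "dev": "nodemon bin/www"
  }
}
```

Then `npm run dev` starts the auto-reloading server, while `npm start` keeps the plain production behaviour.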
ExpressJS file structure
ExpressJS does not enforce a strict directory structure, so we have total freedom over file management. Firstly, we will use the express-generator tool to set up an ExpressJS project; for example (the --view flag selects the Handlebars templating used below):
npx express-generator --view=hbs ej-server
This creates a folder called ej-server, which has a directory structure similar to the one shown below
Next, we need to download the dependencies (which will create a node_modules folder).
cd ej-server
npm install
nodemon bin/www
The advantage of this method is that it sets up some default behaviour out of the box:
Adds support for a public directory where static resources can be stored. The generated boilerplate already has support for serving these static files!
Two sample routes (/ route and a /users route) along with route handling
Handlebars-based templating system
An advantage of our previous CodeIgniter stack was that API calls and the backend UI could be handled on the same repository. We will add similar functionality for the ExpressJS stack as well.
Now that we have created our stack and files, let’s look at a way to deploy the ExpressJS application.
Deploy ExpressJS app on cPanel
Most cPanel-based shared hosting services also let advanced users serve NodeJS apps. We will look into how this can be done.
If your cPanel account supports this feature, you will see an icon like this:
Click on the “Create Application” button to create a new NodeJS application. You will be greeted with a form similar to the one shown below:
This feature does not provide a file uploader; you have to upload the NodeJS files manually using FTP or Git. That upload location is what you have to specify in the 'Application root' property.
Running npm install
After uploading the files, you should run npm install before the application can work. Copy the command for entering the application's virtual environment, which is provided on the Node.js Application page. After pasting it into an SSH terminal, you should be inside the virtual environment of the application, where you can execute your npm install command.
Serve a public folder
Since this application will also serve HTML pages, the pages need to be able to access static content like CSS and JS. ExpressJS has a built-in function to facilitate this:
express.static(path.join(__dirname, 'public'))
Serve as a subfolder
When it comes to serving on a production server, you may have to serve the app from a child URL of the top-level domain. For example, if the domain is example.org, the application will have to be served from example.org/app. This is a production-specific problem. The best way to handle it is to redesign the routing hierarchy: a parent route takes care of requests to the subfolder, i.e. /app, and all requests to this subfolder are routed to child routes of the parent. This method is detailed in this article.
What should be remembered is that if you opt for this method, the public folder will also be served as a sub-route of /app. Therefore, to access a CSS file in the public folder from an HTML file, you will have to load the resource from /app/stylesheets/style.css instead of stylesheets/style.css.
An implication of this architecture is that the NodeJS application can have only one createServer().listen() call; otherwise Passenger gets confused. NodeJS applications served this way cannot specify their port: Passenger ignores any specified ports and instead listens on a random Unix domain socket.
Another common mistake is specifying app.js as the application startup file. This property should always point to the script that makes the initial createServer().listen() call.
You have finally completed your awesome React Native app after the initial setup following our Technically Speaking article, Using GitHub with Expo and Vanilla React Native. Your next logical step is to test your app on your friends' and family's mobiles. On Android this is relatively straightforward: you can simply generate an APK and distribute it via your website. For iOS, however, you have to use the TestFlight platform to install the app on iPhones and iPads.
Before you get started, make sure you have an Apple Developer account, a macOS device and an iPhone or iPad.
Firstly, we need to create an IPA build.
Setting up the developer account
Make sure you have signed in at least once to iTunes Connect using the same account you will be using to upload the app to the App Store.
We specifically need an archive build to deploy on TestFlight. For this, we can run the Expo command:
expo build:ios -t archive
Expo will then create an IPA file, which you can upload either to the App Store or TestFlight.
The CLI will prompt you to provide other information as well:
Will you provide your own Apple Distribution Certificate?
Will you provide your own Apple Push Notifications service key?
Will you provide your own Apple Provisioning Profile?
For all of these questions, you can opt to let Expo handle them for you.
If you receive an error like this, just re-install Expo by running
npm i -g exp
If the problem still exists, make sure you are on the latest version of expo by running:
npm install -g expo-cli
Examples of other errors we encountered and how to resolve them:
To resolve this, simply delete the .fastlane folder at the path given in the error message.
Uploading the IPA to the Transporter app
Firstly, you need to create an App Store listing for the app. To do this, visit https://appstoreconnect.apple.com/ and choose to add a "New App". Specify the name and other details, and select the bundle ID you specified during the IPA creation process.
Finally! This is the last step before you get to see the app on your iOS home screen, and it is simple: drag the IPA file and drop it onto the Transporter app.
If everything went well, you can see a nice blue deliver button. Press this!
Setting up external testing on TestFlight
You can now view the app on App Store Connect. Open your app listing; this page will have a tab named "TestFlight". Before you can deploy the app to external testers, you need to fill in some required information.
Then the Builds section will show that your app is being processed. Afterwards, you can start internal testing by supplying the testers' Apple account email addresses.
Previously on Technically Speaking, we discussed setting up the UPchieve platform and making fundamental configurations to both the web frontend and the server. In this article, we discuss further configuration options.
Before we can add categories, we have to remove the existing ones. The system derives its category list from the categories that questions are added under.
The easiest way to do this is to use the mongo command line:
mongo
use upchieve
db.question.remove({})
Or we can use the Edu Admin dashboard by going to
http://<server>.example.org/edu
But make sure you have logged in as an Admin user to the UPchieve web client first.
After we have cleared the question collection, we can add our own questions using the Edu Admin dashboard. However, this procedure on its own will not update all mentions of the categories on the site; some we have to update ourselves.
Changes to the web client
Firstly, we need to update the categories that appear in the subject selection area of the student dashboard. We need to update the topics.js file at:
/src/utils/topics.js
When creating the topics and subtopics in topics.js, make sure the displayName and the key name are the same. (This might be a bug.)
Changes to the server-side
We need to manually update the questions model at:
/models/Question.js (line 24 onwards)
The categories and subcategories here do not refer to the category and subcategory on the student dashboard! These refer to certain areas of the subject itself. For example, if there is a supercategory on the student dashboard called "IAL" with a subcategory called "Maths", the Question.js category should be "science" and the subcategory should be, let's say, "calculus".
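To make the distinction concrete, here is a hypothetical illustration (the values come from the example above; the variable names are invented, not taken from UPchieve's code):

```javascript
// Dashboard taxonomy: what students see when picking a subject.
const dashboardEntry = { category: 'IAL', subcategory: 'Maths' };

// Question.js taxonomy: the academic area a question actually belongs to.
const questionEntry = { category: 'science', subcategory: 'calculus' };

// The two systems are independent; relating them is a content decision,
// not something the code enforces.
console.log(dashboardEntry.category, '->', questionEntry.category);
```

Keeping this mapping written down somewhere saves confusion when new subjects are added later.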
With the ongoing Covid-19 situation, platforms that specialize in remote learning and education distribution have become invaluable. UPchieve is one such open-source platform; it connects a volunteering tutor with a student to learn together. The communication methods include an interactive whiteboard and audio calling.
UPchieve also has iOS and Android apps, built on the React Native platform.
In a previous Technically Speaking installment, we outlined the steps needed to deploy a NodeJS web app, the UPchieve/web front end, onto the Google Compute Engine. In this article, we talk about how the server portion of the platform can be deployed on the Google Compute Engine.
This NodeJS stack comes with a twist – we will be using nginx as a reverse proxy server for NodeJS. This extra server setup has an advantage – NodeJS itself will run without requiring root permissions. Instead, nginx will handle http/s access for us and route requests locally to the NodeJS server.
As a prerequisite, we assume that you have already created a new virtual machine instance on the Google Compute Engine.
Procedure
Install Applications
Initially, a fresh VM does not come with any of the software we need, so we must populate it ourselves. For our scenario, we are going to need the following:
git
NodeJS
MongoDB
make
certbot (for SSL connectivity)
nginx
The commands to install these tools on a Debian 9 system are given below:
# Install git
sudo apt-get install git
# To install nodeJS, we need to install curl
# https://tecadmin.net/install-latest-nodejs-npm-on-debian/
sudo apt-get install curl software-properties-common
curl -sL https://deb.nodesource.com/setup_13.x | sudo bash -
sudo apt-get install nodejs
# Install mongodb
#Update 02Jul20 : Run `sudo apt-get install wget` if wget is not there
wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | sudo apt-key add -
echo "deb http://repo.mongodb.org/apt/debian stretch/mongodb-org/4.2 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.2.list
sudo apt-get update
sudo apt-get install -y mongodb-org
# Install make
sudo apt-get update
sudo apt-get install build-essential
# Install nginx
sudo apt update
sudo apt install nginx
sudo systemctl enable nginx # enable nginx to start at boot
To install some of the tools on macOS (maybe as a development environment):
We need to pull the source from the remote GitHub server and store it on our server locally. The open source UPchieve server source can be pulled from its GitHub repository.
# Clones the repository onto a folder named 'server'
git clone https://github.com/UPchieve/server.git server
cd server
Starting Applications
Next, we will start the servers necessary for our UPchieve server to perform its first run/ setup. First, we will start the MongoDB background service.
For Linux systems:
sudo systemctl daemon-reload
sudo systemctl start mongod
systemctl status mongod # check if the service works
If the service has been set up successfully, you will see an output similar to this:
For macOS:
brew services stop [email protected]
brew services start [email protected]
mongod --config /usr/local/etc/mongod.conf --fork
# check if MongoDB is running
ps aux | grep -v grep | grep mongod
Next, we need to set up the UPchieve server and databases.
# setup database and install dependencies
cd server
bash bin/setup # if there is an error, run npm rebuild
node init
npm run dev # start upchieve server
# if you get a New Relic error, run
# cp node_modules/newrelic/newrelic.js newrelic.js
# if you get a bcrypt error, run `npm rebuild`
# if you still get the bcrypt error, run `npm install bcrypt`
You should be able to check if the server is working at this point. Open your browser and open the page at
http://<VM IP Address>:3000/eligibility/school/search?q=test
If it works, you might want to open a new shell (the current shell will be running the node server) to execute the other commands.
Production-ready!
We need a few more changes to make the application server ready for production. At the moment, we need to type in the IP address and the port number to access the server. Although this could be acceptable, since consumers of the application would not access the server directly, it is not recommended. Besides, there are no SSL facilities on the server.
We will use nginx as a reverse proxy for our NodeJS server. Assuming that nginx is already installed, we need to configure it.
sudo nano /etc/nginx/nginx.conf
Add a server entry in this file to listen on port 80 (the HTTP port). We do this by adding an entry inside the http block (make sure to replace server.example.com with your server name):
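A minimal sketch of such an entry (the server name is a placeholder, and port 3000 matches the NodeJS server from earlier):

```nginx
server {
    listen 80;
    server_name server.example.com;

    location / {
        # forward everything to the local NodeJS server
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

After editing, `sudo nginx -t` checks the syntax before the configuration is reloaded.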
Before starting configuration of SSL, we might have to stop both our NodeJS and nginx servers
sudo systemctl stop nginx
ps aux | grep -i node # to find our node processes and PIDs
kill -9 <PID> # here PID is the ID of the node process
Since we have nginx running as a proxy, the certbot usage is slightly different and tailored for an nginx environment:
sudo apt-get install certbot python-certbot-nginx
sudo certbot --nginx # automates the editing of nginx configuration file
sudo systemctl start nginx # start nginx service
cd server
npm run dev # start our NodeJS server
Depending on the selections you made during the SSL configuration, you would be able to access the server on both http and https at this point.
The socket.io protocol is used by the server to trigger request notifications on the volunteer dashboard and the session chat system. This should not be confused with WebSockets; they are two different protocols, and we will configure WebSockets separately.
By default, the NodeJS server listens for socket.io-based requests on port 3001. But we need to route them through our nginx server if we are to enable SSL for socket.io requests.
Our game plan to cover all these grounds is to:
Add an upstream destination pointing at our NodeJS socket.io server (http://localhost:3001)
Add a reverse proxy for the location /socket.io/ (this specific location is defined by the socket.io protocol). This proxy will take care of other socket.io requirements, such as the HTTP upgrade
Publicize an SSL-supported port, 3002, that can be accessed externally by our web app
We cannot use 3000 or 3001 in place of 3002 without changing NodeJS configuration, as those are the ports the NodeJS server itself listens on. Instead, we define an unused port, 3002.
http {
upstream upstream-nodejs { # NodeJS socket.io destination
server 127.0.0.1:3001; # OR localhost:3001
}
    # other stuff...

    server {
        # ... existing server configuration ...

        # Add SSL support to port 3002, which will be publicized
        listen 443 ssl; listen 3002 ssl; # managed by Certbot
        # Other SSL properties...

        location /socket.io/ {
            # listen for this location on port 3002
            proxy_pass http://upstream-nodejs;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
Configure Websockets
WebSockets also requires the HTTP upgrade mechanism. In contrast to the socket.io method we followed, we use an nginx map block to describe this.
Next, we specify /whiteboard/ as the URI on which we listen for WebSocket requests. Requests to this URI will automatically be forwarded to the back-end WebSockets server on NodeJS.
The specialty of this NodeJS server is that it listens on port 3000. If you remember, this is also the port on which NodeJS processes normal HTTP requests; the difference is the URI. Only requests sent to /whiteboard/ will be upgraded to the WebSocket protocol. This is quite sensible, as WebSockets operates on the same ports as HTTP/S (80 and 443).
http {
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream websocket {
server localhost:3000;
}
    # other stuff...

    server {
        # ... existing server configuration ...

        location /socket.io/ {
            # socket.io stuff
        }

        # add right after the socket.io location definition
        location /whiteboard/ { # this URI is where WebSockets is used
            proxy_pass http://websocket;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade; # uses the map above
            proxy_set_header Host $host;
        }
    }
}
Configure Database Backups
A method to back up and restore data is crucial when it comes to production environments. MongoDB has built-in tools to carry out these procedures.
To backup the entire database:
# Install `zip` utility to compress the exported folder for download
sudo apt install zip unzip
sudo mongodump -d upchieve -o /home/backups/
# this will create a backup directory (upchieve) with JSON or BSON of the collections
cd /home/backups
sudo zip -r db_29Jul20.zip upchieve
The resultant zip file can be downloaded from the SSH terminal by clicking on ‘Download File’ in the cog icon.
To restore the entire database, the zip file should be extracted to show the upchieve folder. Then;
mongorestore -d upchieve upchieve
UPchieve Server Gotchas
Git ignores certain configuration files like config.js. Therefore, if you need to update them, you have to edit the copy on the production server; updating the Git repository will have no effect.
That has been all for today’s Technically Speaking discussion. Although we focused specifically on UPchieve, we hope this document will act as a summary of setting up any server based on a similar stack (nginx and NodeJS). Hope you will use your newly learnt knowledge for improving the accessibility of education in the current times. Don’t hesitate to leave any comments or suggestions below!
In this new installment of Technically Speaking, we bring you another article on the features of the Google Cloud platform. This article will walk you through setting up a Node.js server app from a GitHub repository and deploying it on a Google Compute Engine instance.
As the GitHub repository, we are going to select UPchieve/web. This is the front-end web app portion of a homework help platform called UPchieve. We think this is a highly relevant platform to know about today: platforms that encourage remote learning are becoming increasingly important in light of the global pandemic. We are hoping to release an article detailing the deployment of the server portion, so you or your educational institute can continue delivering material.
Requirements
You will be expected to create the virtual machine instance on the Google Compute Engine dashboard, which is a straightforward and very quick procedure. The walkthrough uses a VM instance running Debian 9 (the OS can be selected during the creation process), but the idea should be the same regardless of the platform.
Procedure
First, you need to access an SSH connection to the VM instance. There are several ways to do it but we will go the default way and select a browser-based SSH terminal. This can be accessed by going to the VM Instances page and clicking on the SSH button as shown in the screenshot.
If you have more than one instance, make sure you select the correct one!
Installing Git
This is the first thing we have to do if we are going to deal with GitHub repositories. The method would be quite familiar for the average Linux user:
Make sure you replace the version number with the version number you want (preferably the latest version) before you execute the command.
Cloning the repository
We will clone the repo onto the instance. This is done using:
git clone https://github.com/UPchieve/web.git
cd web
If you plan on cloning from a private repository, use the same command; the difference is that you will be prompted for your GitHub login details.
Installing Dependencies
Next, we need to install the app's dependencies. This simply means that we are going to create and populate the app's node_modules folder. All of this is automated (thank God!) and can be executed by running:
npm install
Deployment
You are ready to run the Node.js app! To run a development build, simply run:
npm run dev
If you got to this point in the procedure, you should at least be able to run the app on localhost.
Configuring the Firewall
To allow external access to our VM instance, so that anyone can view our awesome app, we need to configure the firewall. By default, the Google Compute Engine restricts external access to the instance.
To do this, go to the Network interfaces detail of the VPC Network Compute section by clicking on the following menu option from the VM Instances page:
Click on the Firewall rules option in the right hand side drawer. Then click on Create Firewall rule button on the top.
Give the rule any name you want, and make sure the other options match the screenshot:
You should be able to access the Node.js app using <externalIp>:8080, where <externalIp> is the external IP mentioned on the VM Instances page. If it still does not work, make sure that both these options are unchecked on the VM instance's details page:
If they are not, click on the Edit button at the top to edit the details.
Configuring the App
Depending on the app you choose to deploy, there might be certain configuration options to modify. In the case of the UPchieve/web app, we need to point it at our UPchieve server. To do this, we edit the environment files, which are used to provide the server location, etc.
vim .env.development
The command above can be run to edit the environment variables used when running in the development mode. For production mode, please use .env.production instead. If you are new to vim, you can watch this crash course in vim.
Edit the file so that it points to the server you want.
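As a sketch only: the variable name below is hypothetical, so check the .env files in the UPchieve/web repository for the actual keys it expects.

```
# Hypothetical example; the real key names live in the repo's .env files
VUE_APP_SERVER_ROOT=https://server.example.org
```

Whatever the key is called, the value should be the public URL of the UPchieve server you deployed.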
You can run the development build again and see the changes. My deployed app has an end-result that looks a little like…this
Deploying a Production Build
The UPchieve web app is based on the VueJS platform. Therefore, we follow VueJS production build generation options. To create a production build, run:
npm run build
This would create a dist folder that can be served via HTTP. To do this, we need to run:
npm install -g serve
sudo serve -s -l 80 dist
This will serve the app over the default HTTP port (80) so you can simply enter the IP address without having to explicitly mention the port. To serve on this port, you need elevated access.
But in practice, we will have to serve our application on both the HTTP and HTTPS (443) ports. We can specify multiple ports for the serve command:
sudo serve -s -l 80 -l 443 dist
Setting up SSL
At this point, if you tried accessing the HTTPS version of the app, you would immediately get an SSL protocol error. If you have a rough idea about how SSL works, this shouldn’t come as a surprise for you. We need to install an SSL certificate.
The SSL procedure is a bit of a lengthy affair. There are certificate authorities (CAs) that grant paid certificates as well as free ones. You can get a free certificate from Let's Encrypt. The recommended way of setting up an SSL certificate from Let's Encrypt is using Certbot, and they have detailed instructions on their site. In brief, you just have to run two commands, along the lines of the following (this assumes the standalone method, with ports 80 and 443 free):
sudo apt-get install certbot
sudo certbot certonly --standalone
Certbot will ask some questions, including prompting for an email address and a domain/subdomain (if you are registering a subdomain, enter the subdomain, e.g. app.example.com).
The last command will generate a public key and a private key, and the paths to both will be listed at the end. To add SSL support to our serve command, we need to add extra parameters referencing the keys; for example (using serve's --ssl-cert and --ssl-key flags, with <domain> replaced by the domain or subdomain you selected):
sudo serve -s -l 80 -l 443 --ssl-cert /etc/letsencrypt/live/<domain>/fullchain.pem --ssl-key /etc/letsencrypt/live/<domain>/privkey.pem dist
Both of these paths will be listed out for you by Certbot.
Renewing SSL certificates (Update 07 July 2020)
Due to the procedure used to install the SSL certificates, Certbot's auto-renewal feature may not always work. However, the certificate can be renewed manually using the command below:
sudo certbot renew
This will renew all installed certificates on the instance. Before executing this command, stop any applications using ports 80 and 443.
Now you have a production-ready instance of UPchieve on your cloud VM. There are a few remaining problems, however. How do you make sure the script keeps running even when the SSH terminal is closed? How do you ensure the service starts at startup, so the server comes back up after a restart? These are questions of their own and deserve their own article!
One of the major disadvantages of using Expo appears when you have to stop using it: when you come across a feature Expo is missing. The only option left is to eject from the Expo workflow and move to a vanilla react-native stack. Any SDKs you borrowed from Expo can be added back using ExpoKit. This immediately sounds like a lot of work, so we will discuss a workflow to integrate both Expo and react-native using GitHub.
About the Project
I am writing this while I am in the process of adapting one of my projects to this new workflow. To say the least, I am learning as I write, and writing as I learn. This will serve as a personal reference while also sharing my process with fellow developers, so that they can run where I crawled.
The project started as a test project but quickly evolved into a financially backed contract. I had not integrated GitHub into the initial project either. What I had was two separate folders: one for the react-native workflow and one for the Expo workflow. Modifications that involved a native module were done in the react-native distribution, while Expo was used for the UI/UX modifications and other logic that did not require the custom modules to function. The custom module I wanted to use was react-native-nfc-manager; Expo does not support NFC features as of now.
A feature introduced from SDK 34 upwards is the ability to eject to what Expo terms a bare workflow. Initially you could only eject to an ExpoKit workflow. Not having used that workflow personally, I will refrain from commenting on its functionality. But seeing that the folks at Expo are phasing it out and recommending that new users eject to the bare workflow, I would say ExpoKit was not successful at its job. What happens to any Expo-based APIs and libraries? The process of moving to the bare workflow automatically migrates the Expo libraries already in use.
The beautiful thing about the new bare workflow is that it still allows you to use the Expo client, as long as no native code is executed. Expo provides an API to detect whether the app is running in the Expo mobile client or as a vanilla app.
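In practice, this check reads the appOwnership value exposed by the expo-constants package, which is 'expo' when running inside the Expo client. A minimal sketch of the guard (the helper function name is ours, not part of any API):

```javascript
// In the real app you would read the value from expo-constants:
//   import Constants from 'expo-constants';
//   const ownership = Constants.appOwnership; // 'expo' | 'standalone' | 'guest'
// The gating logic itself is a pure function we can test in isolation:
function canUseNativeModules(appOwnership) {
  // Custom native modules (e.g. NFC) only work outside the Expo client
  return appOwnership !== 'expo';
}

console.log(canUseNativeModules('expo'));       // inside the Expo client → false
console.log(canUseNativeModules('standalone')); // ejected / vanilla build → true
```

Wrapping any NFC calls behind such a check lets the same codebase run in both environments.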
To eject, simply run,
expo eject
Do not forget to select “Bare” from the options that come up.
To run the project as an Expo app, simply run:
expo start
Adding native plugins
The next logical step is to add a plugin that is based on native code. In other words, a plugin that could not be added in a managed Expo environment.
The plugin we will use to test this is react-native-nfc-manager. To add it, we use the yarn add command as shown below:
yarn add react-native-nfc-manager
To run the app as a vanilla React Native app on a mobile connected via USB, simply execute the following command:
yarn android
Committing to GitHub
Another question that arises at this point is which folders should be committed to GitHub. The easiest method is to use the gitignore.io online service, which lets you select your technology stack and generates a gitignore file for it. For your convenience, we have already generated a .gitignore file for React Native projects on Windows or macOS stacks: https://www.gitignore.io/api/macos,windows,reactnative
The most significant change is that certain files from the android and ios folders (created during the eject process) are now also synced. Commits will exclude build-related files and folders, since these are auto-generated from the source.
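A few representative patterns from such a .gitignore (abbreviated and paraphrased here, not verbatim; the generated file is much longer):

```gitignore
# OS noise
.DS_Store
Thumbs.db

# Node dependencies – recreated with npm install
node_modules/

# Android build output and IDE files
android/app/build/
*.iml

# iOS build output
ios/build/
```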
Cloning from GitHub
After cloning the repository to a local folder, we need to install the libraries and instantiate Expo (basically, re-creating the node_modules folder). To do this, you only need one command:
npm install
Library and script downloads will take some time. After that, we can run the app on Expo using:
expo start
Or, if we want to check the native code and run vanilla React Native, simply run:
yarn android
yarn ios
Sometimes the react-native builder fails to recognize the Android SDK directory. In that case you can set an environment variable pointing to the directory (command shown for Windows):
set ANDROID_HOME=C:\Users\<UserName>\AppData\Local\Android\sdk
The placeholder <UserName> should be replaced with the user’s Windows profile directory name.
We do not have to perform expo eject, as our commits already contain an ‘ejected’ workspace.
This concludes the Technically Speaking post for today. Hope you learnt something from the post!
React Native is the development platform to choose if you are a startup thinking about developing for both iOS and Android. Apart from a wealth of source material and community support, development happens in JavaScript, a language that is easy to grasp; most developers are likely to have come across it at least once. Facebook is its main contributor and initiator.
Orpheus Digital recently moved from Cordova to React Native. Cordova basically serves web pages in an embedded browser, which creates serious performance bottlenecks, especially on budget or mid-range devices. However, Cordova had been around longer, so its community was larger and more varied. That advantage slowly eroded as React Native matured and gathered a strong community of its own.
The strong community has produced intricate toolkits that make development easier. One of these is Expo, and we at Orpheus Digital are fans of it. This toolkit makes coding production-ready mobile apps a breeze: it takes care of the mundane tasks and provides infinitely helpful extras like OTA debugging. Code-wise, there is not much that differs from vanilla React Native development.
StoreBoss is the first app we created on this stack. The link #ReactNative will take you to our other articles about this mobile development stack.
Let’s assume that you have already set up React Native and Expo (a straightforward process documented on their respective websites). We will now move on to the Material Design framework that we, at Orpheus Digital, also use!
React Native Paper
This is the Material Design framework we will be using. Installing this framework is as straightforward as it can get:
yarn add react-native-paper
If you are not using Expo, you would also have to run:
yarn add react-native-vector-icons
react-native link react-native-vector-icons
This adds the library to our project. Now we will use it in our code to render some Material Design UI elements.
The React Native Code
Startup Code
We like to isolate the starter code from the actual app. So we will have two JS files: one named App.js, which will contain our starter code, and another called MainScreen.js (in a folder called src), which will contain our first actual UI.
/*App.js*/
import * as React from 'react';
import { Platform, StatusBar } from 'react-native';
import { DefaultTheme, Provider as PaperProvider } from 'react-native-paper';
import MainScreen from './src/MainScreen';
These import statements all reflect the aforementioned structure of our app. PaperProvider is the root component that our Main component must be wrapped in for the theme to work.
We will also look into defining our own colors for the app, hence the import statement for DefaultTheme.
The ellipsis (triple-dot spread notation) allows you to modify only the theme parameters you want while keeping the other parameters unchanged. We have defined our own roundness parameter, and primary & accent colors.
We pass the custom theme we created earlier as a prop to the PaperProvider. The StatusBar element refers to the area of the UI where the time, battery status, etc. are shown. Using the Platform React Native API, we apply a neat trick that makes that content darker or lighter depending on the OS, for improved legibility.
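Putting the pieces described above together, the rest of App.js would look roughly like this. This is a sketch reconstructed from the description; the roundness and color values are placeholders, so substitute your own:

```javascript
/* App.js (continued) – sketch; roundness and colors are placeholder values */
const theme = {
  ...DefaultTheme,            // spread keeps every parameter we don't override
  roundness: 4,               // our own corner roundness
  colors: {
    ...DefaultTheme.colors,
    primary: '#3f51b5',       // placeholder primary color
    accent: '#ff4081',        // placeholder accent color
  },
};

export default function App() {
  return (
    <PaperProvider theme={theme}>
      {/* darker or lighter status-bar content depending on the OS */}
      <StatusBar
        barStyle={Platform.OS === 'ios' ? 'dark-content' : 'light-content'}
      />
      <MainScreen />
    </PaperProvider>
  );
}
```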
MainScreen Code
Next we will layout our Appbars and buttons for our main UI screen.
/*MainScreen.js*/
import React from 'react';
import { Text, View, Alert } from 'react-native';
import { Button, Appbar } from 'react-native-paper';
The good thing about the Paper framework is that the components need to be imported as required. This is a good thing for two different reasons:
Decreases overhead – you import only what you need for that view
Improves flexibility – maybe you want to use a button from another library in just one screen
/*MainScreen.js*/
export default class MainScreen extends React.Component {
constructor (props) {
super(props);
this.state = {
}
}
render () {
return ([
(
<Appbar.Header key="1">
<Appbar.Content
title={<Text>Hello</Text>}
subtitle={<Text>Subtitle</Text>}
/>
<Appbar.Action icon="filter" onPress={() => {
Alert.alert("Alert", "You pressed on the filter button!");
}}/>
</Appbar.Header>
),
(
<View key="2" style={{margin: 10}}>
<Button icon="camera" mode="contained" onPress={() => Alert.alert("Alert", "You pressed on the contained button!")}>
Press me
</Button>
<Button icon="camera" mode="outlined" onPress={() => Alert.alert("Alert", "You pressed on the outlined button!")} style={{marginTop: 10}}>
Press me
</Button>
</View>
)
]);
}
}
This is our entire MainScreen.js. A relatively new feature in React is the ability to return an array of components from the render() function, each with a key prop, to string together different components without having to wrap them in a parent element.
Also note how the Appbar component is consumed.
Now, the project can be run using the start command:
yarn start
The output app will look similar on the two different platforms. The main reason we like the React Native Paper library when opting for a Material Design interface is how the styling adapts to a more iOS-like theme when the app runs on iOS.
The interaction with the buttons reveals the ripple effects.
Thus, we conclude the Technically Speaking post for today. The project can be accessed from our GitHub repository.
Sri Imports is a rapidly expanding flooring and tiling company in Australia. They have been in business since 2015 and have grown steadily since. They had a website that was set up at the start of the business. Though it looked quite up-to-date at the time, four years without any design updates had left the site stagnant. They needed a WordPress-powered website that suited today’s design aesthetics and web usage.
Welcome the new Sri Imports website. We created the new website from the ground up using the WordPress platform. This meant that the site, once hosted, could easily be managed by Mr Sri (the owner) himself. They already had a solid online marketing campaign which utilized ads on Gumtree and the Sri Imports Facebook page. We created the new WordPress-powered website with these practices in mind, to enhance their effectiveness.
Praveen took on this project armed with previous experience of sites created on the WordPress platform by us. We made sure to include large displays of photos. It’s all about the appearance and seeing for oneself the quality and beauty of the flooring.
The company offered a wide range of flooring options, which meant the options had to be categorized and ordered in an accessible way. For this reason, we opted for a hidden dropdown menu in the navigation bar. All flooring options and items had thumbnails attached to them, so the user could preview the options before ever stepping into the physical shop.
Social media integration is key these days. The company already had an active Facebook fan page. We added a Messenger chat button so that prospective clients can get in touch instantly.