Notes from 11/3–17/3/19

Hum Qing Ze
11 min read · Mar 17, 2019


I spent quite a lot of time this week at FOSSAsia! Thoroughly enjoyed myself and learned so much. Here are some collective notes taken by my friends and me.

Getting started with Git and GitHub: the complete beginner’s guide

Only looking at this because I’m trying to find better ways to introduce GitHub to people, thanks to OpenSUTD.

Arthur Schopenhauer: The Two Things That Stop Us From Being Happy

In fact, thought, in some instances of boredom and pain, does nothing but augment what causes dissatisfaction. Quite often, it’s not as simple as thinking about something else to get away from what you don’t want to face. We don’t always have control over that.

When stated, it’s quite evident that the mind and the body work together, that they have a feedback loop that connects them, but in reality, we often ignore this at our own peril.

Dissatisfaction exists whether we want it to, but how we deal with it makes all the difference.

Interesting that this was not about overcoming pain but about acknowledging other aspects of life that make pain seem meek.

Buckminster Fuller Rails Against the “Nonsense of Earning a Living”: Why Work Useless Jobs When Technology & Automation Can Let Us Live More Meaningful Lives

“We keep inventing jobs because of this false idea that everybody has to be employed at some kind of drudgery…. He must justify his right to exist.”

This hit me right there. I remember having a conversation with my aunt, and she said that automation was coming for her job. I told her that she ought to be happy! Move on to something more meaningful or challenging… or simply retire and relax. Work is inherently meaningless, and the sort of work she describes to me doesn’t even sound enjoyable.

“We should be excited about automation,” she went on, “because what it could potentially mean is more time to educate ourselves, more time creating art, more time investing in and investigating the sciences.” However that might be achieved, through subsidized health, education, and basic services, new New Deal and Civil Rights policies, a Universal Basic Income, or some creative synthesis of all of the above, it will not produce a utopia — no political solution is up that task. But considering the benefits of subsidizing our humanity, and the alternative of letting its value decline, it seems worth a shot to try what economist Bill Black calls the “progressive policy core,” which, coincidentally, happens to be “centrist in terms of the electorate’s preferences.”

Asymptotic Analysis Explained with Pokémon: A Deep Dive into Complexity Analysis

Asymptotic Analysis is the evaluation of the performance of an algorithm in terms of just the input size (N), where N is very large. It gives you an idea of the limiting behavior of an application, and hence is very important to measure the performance of your code.

Complexity: using the highest-order term of an algorithm’s run time as the term of significance

Tools for complexity analysis:

  1. Big O — the upper bound on the complexity of an algorithm
  2. Big Omega — to define the asymptotic lower bound on the performance of an algorithm (best cases)
  3. Big Theta — a tight bound on the behaviour of an algorithm, defining the upper and lower bounds for a function
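To see why only the highest-order term matters, here’s a tiny sketch of my own (not from the article), assuming an algorithm that performs N² + N basic operations:

```python
def exact_steps(n):
    # Suppose an algorithm performs n**2 + n basic operations.
    return n ** 2 + n

# As n grows, the lone n term becomes negligible next to n**2,
# so the run time is summarised as O(N^2).
for n in (10, 1_000, 100_000):
    print(n, exact_steps(n) / n ** 2)
```

The ratio approaches 1, which is exactly why the lower-order term gets dropped.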

Space complexity: how much memory the algorithm will take up

We may trade in some space to optimise on time. One way is through a hash table: lookups take O(1) time but the table takes O(N) space.
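A quick sketch of that space-for-time trade (my own example, not from the article), using duplicate detection:

```python
def has_duplicate_quadratic(items):
    # O(N^2) time, O(1) extra space: compare every pair.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_hashed(items):
    # O(N) time, O(N) extra space: a hash set remembers what we've seen,
    # so each membership check is O(1) on average.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Same answer either way; the hashed version spends memory to avoid the nested loop.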

Sorting

  1. Bubble sort — compares adjacent elements of an array and swaps them if they are out of order. The elements gradually bubble up to their correct positions in the array

Time complexity: nested loop structure resulting in N²+N iterations, so it is of O(N²) complexity.

Space complexity: it swaps elements in place, one pair at a time, so it doesn’t actually use any external memory. So it is of O(1) complexity.

2. Insertion sort — traverses the array from the start and inserts each element into its proper place within the already-sorted portion

Time complexity: a nested while loop within a for loop. The while loop runs j+1 times, and j is dependent on i. Once again there are N²+N iterations, so it is of O(N²) complexity.

Space complexity: it only re-arranges the numbers in the original array and doesn’t use external memory either. It is of O(1) complexity.
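The same idea as a sketch (mine, following the usual for-loop-plus-while-loop structure the note describes):

```python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger elements of the sorted prefix one slot right,
        # then drop the key into the gap.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr
```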

3. Merge sort — divide the array into two equal sub-arrays, conquer by sorting the smaller arrays and combine the two sorted halves

I remember reading about this idea in Dijkstra’s handbook

So this algorithm recursively divides the array into smaller halves

Splits the elements into two and copies them into a temporary buffer (this takes O(N), considering there are N elements in the array). Then the while loop iterates over the shorter sub-array (this also takes O(N)).

This means merging is a linear time algorithm.

Next is the merge_sort function

This is a recursive function: it calls itself. There are two ways to analyse its time complexity:

i. Recursion tree analysis

Each node represents a subproblem, and the value at each node is the time spent on that subproblem. The array is split into two and subsequently into powers of 2 (e.g. N/2, N/4, N/8), so with N = 2^X the tree has X = log_2(N) levels. Each level takes O(N) because it works on (number of nodes) * (N/2^level) elements (e.g. 2*O(N/2) = O(N) and 4*O(N/4) = O(N)). Therefore the complexity is O(N log(N)).

ii. the Master Method

Work done at the root is f(n), and the work done at the leaves depends on the height of the tree. The number of leaf nodes at the last level is a^{log_b(n)} = n^{log_b(a)}.

Here there are two competing functions that take different amounts of time: division, which takes aT(n/b), and conquering, which takes f(n).

So if dividing takes more time, the leaves are the dominant part and the result is the work done at the leaves. If conquering takes more time, the result is the amount of work done at the root, and you can ignore the work done at the leaves. If both take equally long, the result is the work done at any one level times the height of the tree.

Space complexity: you are mostly concerned with the temporary buffer array used to store the sub-arrays. So it depends on the depth of the recursion tree, log_2(N), and the size of the array, N. That gives N + log_2(N); taking the dominant term results in O(N).
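Pulling the divide and conquer steps together, a sketch of merge sort (my own, not the article’s code):

```python
def merge_sort(arr):
    # Divide: split until sub-arrays have one element (log2(N) levels).
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    # Conquer: merge the two sorted halves into a temporary buffer.
    # Each element is copied once, so merging is linear, O(N).
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The `merged` buffer is the O(N) auxiliary space the note above is talking about.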

4. Binary search — with the precondition that the array is already sorted

A recursive algorithm that keeps dividing the array into half.

This means that out of N = 1000 elements you just need to check about 10 (log_2(1000)) to find what you need.
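A sketch that counts the comparisons (mine, assuming the standard iterative formulation on a sorted array):

```python
def binary_search(arr, target):
    # Each iteration halves the search range, so at most
    # floor(log2(N)) + 1 comparisons are needed.
    lo, hi = 0, len(arr) - 1
    checks = 0
    while lo <= hi:
        checks += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid, checks
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, checks
```

On a sorted array of 1000 elements, even the worst case finishes in 10 checks.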

Business

In brief: Singapore-based hostel startup nabs seed funding for global expansion

Tribe Theory got 1M

So who is Tribe Theory? I like the name though.

Wow, so it’s a hostel for ‘hackpackers’: a curated, business-friendly startup community.

We want to create a place where entrepreneurs have a comfortable and affordable stay, meet like-minded people, work effectively and be inspired. We offer affordable yet soulful accommodation in the heart of every capital city. A space where you can work, engage with others who are on the same entrepreneurial journey and get some much-needed and ever-elusive sleep.

Climate

Humanity’s fight against climate change is failing. One technology can change that.

This huge increase in production is thanks to a process called “enhanced oil recovery,” and it’s the largest current market for carbon dioxide. The oil we use to produce energy is typically found in a porous, rocky layer of the Earth’s crust. When an oil field is first discovered, the initial drilling is easy. But after the first easy pickings are sucked out, oil companies need to flood the field with water to push out more of the fossil fuel. Because water and oil don’t mix, however, only a limited amount of the total available oil makes it to the surface even then. Compressed CO2 solves the problem. The gas can get into hard-to-reach crevices of the rocky layer and dissolve the oil there (much like a detergent removes stains from your clothing) flushing out more of it to the surface.

Use CO2 to extract more oil. I am very confused now. Either way, carbon capture technologies are touted as the only solution to climate change. We need to reverse the buildup of carbon dioxide in the atmosphere or… we’re doomed.

Blockchain

Understanding Hyperledger Fabric — January 2019

Useful to keep abreast of things. Doesn’t seem like much has changed other than the proposed ideas for 2019, which seem to focus on increasing the commercialisability of Hyperledger products.

X-Force Red Blockchain Testing

This is super interesting! Consulting for blockchain with an emphasis on security.

Testing the entire environment includes reviewing web and mobile applications that interact with the blockchain technology, APIs, ingress and egress points in the blockchain, public key infrastructure (PKI), user certificates, configuration and networks.

There is so much to learn in this ecosystem, it’s both motivating and intimidating.

“Is the Block Chain a Solution Looking for a Problem?” — Keynote Remarks by Ravi Menon, Managing Director, Monetary Authority of Singapore at CordaDay Singapore 2019 on 7 March 2019

Ironic that this page is not https.

I enjoyed how he explained blockchain technology.

The blockchain is not only, or even mostly, about crypto currencies or tokens.

The blockchain is essentially a technology to establish consensus in a decentralised system.

It allows diverse entities to collaborate and execute transactions without trusted central parties. It does this by recording and sharing data across all the nodes of the blockchain network so that everyone in the network can see and verify the data.

The crypto token is an economic incentive to encourage miners to perform the computational work required to keep the network running. This is important for public networks, but may not be required in private consortium networks.

Indeed, the first generation blockchains that relied on proof-of-work consensus among unidentified parties suffered from these drawbacks.

But today, most of the use cases are for business-to-business transactions using private, permissioned blockchain networks.

In networks of known participants, governed by agreements that are enforceable in the physical world, consensus models are being developed that are faster, more efficient, and more scalable.

The purists may quibble that these new generation networks do not meet all the characteristics of a blockchain. But what matters is what works and what the use cases are.

This Cryptocurrency Miner Says It Solved Bitcoin’s Power Problem

Put miners at hydropower plants in Australia.

Am I missing something? How does this solve the power problem? It just uses a different source.

Blockchain-Ethereum

MythX API Developer Guide

Note to self: learn about what these APIs do and what are some common applications

Joe Lubin at SXSW: Ethereum Will Handle Millions of Transactions Per Second Within Two Years

“Currently, the economy is 80 trillion dollars. When blockchain fully ramifies in 10–15 years, the economy will probably be 10 times larger, and blockchain will probably be a significant part of that. In dollar terms, it’s a lot! We think this technology allows people to build collaboratively and not competitively. Blockchain enables the automation of trust and guaranteed execution of agreements. What industries need trust and agreements? All of them.”

“Layer 2 scalability is here already, and it’s making blockchain very useful. In Layer 2 over the next 18–24 months, we’ll probably have millions of transactions per second.”

At this point really people can say anything. What I’m thinking is that his role is really to inspire confidence in the ecosystem. One thing I like is that this might really be the only way open source might survive in the future. Cheers to Consensys! I really like the people there too!

Scaling Civility: How to Preserve Ethereum’s Most Crucial Strength

How about… let’s learn from what Red Hat does? I’m really curious why I read so much about open source communities but the OG communities like GNU don’t seem to come up as often. Where are these communities?

Security

The OSINT-ification of ISIS on the Dark Web

So what is OSINT?

Open-source intelligence (OSINT) is a recent phenomenon that involves performing analysis of information that is freely available on the open Internet using a combination of various application tools, techniques, and websites to uncover identities or unlinked disaggregated information. Before we get started with this OSINT stuff, please be advised that it takes a lot of time to research thoroughly and be forewarned that you may find some things that could be very disturbing.

Get Started with MakerDAO CDPs and Dai.js

Life Optimisation

What Happens When You Spend a Year Using Science to Improve Your Brain

Key concept being tested here: neuroplasticity

The intervention was a mixture of brain stimulation — which basically worked by knocking out a part of the brain, like strapping down a dominant part to force you to strengthen the other one — and practicing keeping your attention on a pretty boring stimulus. I finally found this zone where I was relaxed and engaged, and it felt totally different. I asked them at the end of the test whether they’d given me the same version. I felt like I had all the time in the world and it was so easy.

The article… actually doesn’t really say what happens.

What I Wish I Knew Before I Learned to Code

What I got was that coding, like any learning journey, is a humbling experience. Coding is challenging because there is usually a very definitive solution to the problem you are facing, and there are several external stumbling blocks that make it harder to reach.

Trying Things

Build a simple chat app with node.js and socket.io

Almost immediately faced an issue just installing the npm packages.

The nodemon package came up as 404 not found, so I had to run npm cache clean --force to clear the cache before npm install --save-dev nodemon, and it worked after that.

Oh, so then I was supposed to set up the socket.io part of it and it broke. But this time the mistake was quite hilarious: basically I just put some scripts into the .ejs file without setting up the usual HTML head and body tags.

Interesting lesson on how the client-server dynamic works. The client requests a page from the node server and the server provides the data. In this case, the server provides the HTML page displaying the chat.

Note to self: take a deeper look at socket.io and Express. They really do simplify quite a lot of things.

Overall pleasant learning experience. I think I’ve a better idea of how these pieces fit together.

Building an Employee Churn Model in Python to Develop a Strategic Retention Plan

Putting this here because it’s based on a demonstration dataset, which means it’s more of a trial run showing some tools to use when analysing people data.

A commonly used method is the kernel density plot — a non-parametric way to estimate the PDF of a random variable.

Uses some baseline algorithms: Logistic Regression, Random Forest, SVM, KNN, Decision Tree Classifier, Gaussian NB

Proposes giving a risk score based on the parameters that predict employees leaving best.
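The kernel density idea above can be sketched by hand (a minimal illustration of mine, not the article’s code; the toy `tenure` sample is made up):

```python
import numpy as np

def kernel_density(data, grid, bandwidth=1.0):
    # Average a Gaussian kernel centred on each observation;
    # the result is a smooth, non-parametric estimate of the PDF.
    diffs = (grid[:, None] - data[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs ** 2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / bandwidth

# Hypothetical "years at company" sample and evaluation grid.
tenure = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 7.0, 8.0])
grid = np.linspace(-5, 15, 400)
density = kernel_density(tenure, grid)
```

Since it estimates a PDF, the curve is non-negative and integrates to roughly 1 over the grid; libraries like seaborn wrap the same idea in a single plotting call.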
