Notes from 28/12–14/1

Hum Qing Ze
7 min read · Jan 19, 2020

So I spent my time from 28th December to 14th Jan in Delhi. Came back and conducted a blockchain workshop. Suddenly, we’re halfway through January.

COOLEST THINGS I LEARNED IN 2019

Surprised by how similarly we read; I'd already heard of about half of the things he mentioned. Maybe I should do this too!

I coached 101 CEOs, founders, VCs and other executives in 2019: These are the biggest takeaways

“We’re all just big, complicated bags of emotion walking around.”

Power comes with the ability to receive a “no”

“Maybe this isn’t what 1:1s are about. Maybe it’s about really listening…”

Learning to manage your focus, not your time

Detachment is not non-attachment

Chat with an ex-Government Artificial Intelligence Consultant

Brenda Hali gave a really friendly introduction to the habits one might need to take on in order to thrive in such an industry.

Artificial Intelligence is evolving rapidly, and it doesn’t matter what your seniority is, the sector where you apply your skills, or the budget you have to develop your projects. To keep your work and skillset relevant, you should never stop learning, build a network, and learn how to communicate efficiently.

Best of 2019 Lists

I like the idea, and they even gave us an idea of how they put the lists together.

CBInsights newsletter

Blockchain, quantum computing, Ethereum, CRISPR, 5G, stablecoins, edge computing, smart cities, geoengineering.

AI

Trade and Invest Smarter — The Reinforcement Learning Way

Basically a short course on using TensorTrade. Worth giving it a shot!

Neural Architecture Search — Limitations and Extensions

  1. Learning with a latent dynamics model — PlaNet learns from a series of hidden or latent states instead of images to predict the latent state moving forward.

Latent dynamics models are being more commonly used now since researchers argue that “the simultaneous training of a latent dynamics model in conjunction with a provided reward will create a latent embedding sensitive to factors of variation relevant to the reward signal and insensitive to extraneous factors of the simulated environment used during training.”

2. Model-based planning — PlaNet works without a policy network and instead makes decisions based on continuous planning.

Model-based reinforcement learning attempts to have agents learn how the world behaves in general. Instead of directly mapping observations to actions, this allows an agent to explicitly plan ahead, to more carefully select actions by “imagining” their long-term outcomes. The benefit of taking a model-based approach is that it’s much more sample efficient — meaning that it doesn’t learn each new task from scratch.
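The “imagining long-term outcomes” idea can be sketched with a toy planner. This is not PlaNet itself (PlaNet plans with the cross-entropy method over a learned latent model); it is a minimal random-shooting stand-in, with a hand-written dynamics function and made-up reward standing in for the learned model, so the planning loop itself stays visible.

```python
import random

# Toy stand-in for a learned dynamics model: state is a 1-D position and an
# action nudges it. In PlaNet this would be a learned latent dynamics network.
def model_step(state, action):
    return state + action          # predicted next state

def reward(state):
    return -abs(state - 10.0)      # closer to the (made-up) goal at 10 is better

def plan(state, horizon=5, candidates=200, actions=(-1.0, 0.0, 1.0)):
    """Random-shooting planner: imagine many action sequences with the model,
    score each by total predicted reward, return the first action of the best."""
    best_seq, best_ret = None, float("-inf")
    for _ in range(candidates):
        seq = [random.choice(actions) for _ in range(horizon)]
        s, ret = state, 0.0
        for a in seq:
            s = model_step(s, a)
            ret += reward(s)
        if ret > best_ret:
            best_seq, best_ret = seq, ret
    return best_seq[0]

random.seed(0)
state = 0.0
for _ in range(15):                # re-plan at every step (receding horizon)
    state = model_step(state, plan(state))
print(round(state, 1))             # the agent ends up near the goal
```

No policy network anywhere: the action comes out of re-planning against the model at every step, which is the sense in which PlaNet “works without a policy network.”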

3. Transfer learning — The Google AI team trained a single PlaNet agent to solve all six different tasks.

After the first game, the PlaNet agent already had a rudimentary understanding of gravity and dynamics and was able to re-use knowledge in next games. As a result, PlaNet was often 50 times more efficient than previous techniques that learned from scratch.

Game Theory in Artificial Intelligence

Game Theory can be divided into 5 main types of games:

  • Cooperative vs Non-Cooperative Games: In cooperative games, participants can form alliances to maximise their chances of winning (eg. negotiations). In non-cooperative games, participants cannot form alliances (eg. wars).
  • Symmetric vs Asymmetric Games: In a symmetric game, all participants have the same goals, and only the strategies they use to achieve them determine who wins (eg. chess). In asymmetric games, the participants have different or conflicting goals.
  • Perfect vs Imperfect Information Games: In perfect information games, all players can see the other players’ moves (eg. chess). In imperfect information games, the other players’ moves are hidden (eg. card games).
  • Simultaneous vs Sequential Games: In simultaneous games, the players take actions concurrently. In sequential games, each player is aware of the other players’ previous actions (eg. board games).
  • Zero-Sum vs Non-Zero-Sum Games: In zero-sum games, one player’s gain is exactly another player’s loss. In non-zero-sum games, multiple players can benefit from another player’s gains.

Applied

  • Multi-Agent RL

Modelling systems with a large number of agents can become a really difficult task. That’s because increasing the number of agents exponentially increases the number of possible ways the agents can interact with each other.

In these cases, modelling Multi-Agent Reinforcement Learning (MARL) models with Mean Field Scenarios (MFS) might be the best solution. Mean Field Scenarios can reduce the complexity of MARL models by assuming a priori that all agents have similar reward functions.
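The mean-field trick can be shown in miniature. This is an illustrative sketch (all numbers and the bucketing scheme are made up, not from any paper): instead of a Q-function over the joint action of all N agents, each agent keeps Q(own action, mean action of the others), so the table stays tiny no matter how many agents there are.

```python
# Mean-field sketch: compress the other N-1 agents' actions into their mean,
# discretised into coarse buckets, so Q is indexed by (own_action, mean_bucket)
# rather than by the exponential joint action space.
N = 100
ALPHA, GAMMA = 0.1, 0.9

def mean_of_others(joint, i):
    others = joint[:i] + joint[i + 1:]
    return sum(others) / len(others)

def bucket(mean, buckets=10):
    return min(int(mean * buckets), buckets - 1)

Q = {}   # Q[(own_action, mean_bucket)] -> value

def q_update(own_a, mean_b, reward, next_mean_b):
    best_next = max(Q.get((a, next_mean_b), 0.0) for a in (0, 1))
    old = Q.get((own_a, mean_b), 0.0)
    Q[(own_a, mean_b)] = old + ALPHA * (reward + GAMMA * best_next - old)

joint = [1 if k % 4 == 0 else 0 for k in range(N)]   # 25 agents play action 1
m = mean_of_others(joint, 0)
q_update(own_a=1, mean_b=bucket(m), reward=1.0, next_mean_b=bucket(m))
print(len(Q))   # one (action, bucket) entry, not an entry per joint action
```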

  • Adversary training in Generative Adversarial Networks (GANs).

This process resembles quite closely the dynamics of a game. In this game, our players (the two models) challenge each other: the first creates fake samples to confuse the other, while the second tries to get better and better at telling the real samples from the fakes.

This game is then repeated iteratively and in each iteration, the learning parameters are updated in order to reduce the overall loss.

This process keeps going until a Nash Equilibrium is reached (the two models become proficient at performing their tasks and neither is able to improve any further).
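The tug-of-war can be illustrated without any neural networks. Below is a toy two-player minimax game, not an actual GAN: the payoff f(x, y) = x² − y² + x·y is a made-up convex-concave function whose unique Nash equilibrium sits at (0, 0), where neither player can improve by moving alone. One player descends its gradient while the other ascends, mirroring the generator/discriminator updates.

```python
# Toy minimax game: player x minimises f(x, y), player y maximises it.
# f(x, y) = x**2 - y**2 + x*y  (illustrative choice; equilibrium at (0, 0))

def grad_x(x, y):
    return 2 * x + y        # d f / d x

def grad_y(x, y):
    return x - 2 * y        # d f / d y

x, y, lr = 1.0, 1.0, 0.1
for _ in range(200):        # simultaneous gradient descent / ascent steps
    x, y = x - lr * grad_x(x, y), y + lr * grad_y(x, y)

print(round(x, 4), round(y, 4))   # both players settle near the equilibrium
```

With a bilinear payoff like f(x, y) = x·y this simple scheme would orbit the equilibrium instead of converging, which is a tiny preview of why real GAN training is famously unstable.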

Blockchain

Coincodecap

For inspiration when running workshops

Ethereum’s true killer app: Endogenous Political Reform

Useful idea but far from reality

Ethereum

What is MetaMask? Really… What is it?

I suppose this will be useful to include in the workshop. MetaMask isn’t really complicated to use at all.

How we used Ethereum and DAI to create, tokenise and settle a ‘self-executing’ smart invoice

PoC for trading

Ethereum’s Istanbul Fork — Technical Explanation

Got to stay abreast of things!

How did Hyperledger Fabric blockchain development improve in 2019?

Great work, but hard to use for chill projects. Hyperledger has so far done a great job of demonstrating what blockchain can do really easily, and the user experience is super impressive. But you still need to run some sort of node yourself.

Need to try the VS Code extension for Hyperledger Fabric.

Data

NLP: Building Text Summarizer — Part 1

Finding the sentences that are most similar to the rest of the text and picking the top-ranked ones.
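That similarity-ranking idea can be sketched in a few lines. This is an illustrative stand-in, not the article's actual code: score each sentence by its total bag-of-words cosine similarity to every other sentence, then keep the top-k in their original order.

```python
import math
import re
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def summarize(text, k=2):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    bags = [Counter(re.findall(r"\w+", s.lower())) for s in sentences]
    # Each sentence's score is its total similarity to every other sentence.
    scores = [sum(cosine(bags[i], bags[j]) for j in range(len(bags)) if j != i)
              for i in range(len(bags))]
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]   # keep original order

text = ("Cats sleep a lot. Cats also sleep during the day. "
        "The stock market closed early. Dogs sleep a lot too.")
print(summarize(text, k=2))
```

The off-topic stock-market sentence shares almost no words with the rest, so it scores lowest and gets dropped, which is the whole trick behind this style of extractive summarisation.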

Design

stonly

Great idea once again: it simply automates the process of explaining your product to users. Totally useful for discoverSUTD.

userguiding

Expensive but there’s definitely a market

Neumorphism in user interfaces

something really interesting to play with!

Development

How to Build a Complete Back End System with Serverless

Everything done off AWS

Product

My product management toolkit (40): managing time

Fig. 2 — Make time consists of four steps — Taken from: Jake Knapp and John Zeratsky, Make Time

  • Highlight — Choose a single activity to prioritise and protect in your calendar.
  • Laser — Beat distraction to make time for your Highlight.
  • Energise — Use the body to recharge the brain.
  • Reflect — Take a few notes before you go to bed, adjust and improve your system based on your reflections.

Fig. 3 — Three criteria for choosing your Highlight — Taken from: Jake Knapp and John Zeratsky, Make Time

  • Urgency — What’s the most pressing thing I have to do today?
  • Satisfaction — At the end of the day, which Highlight will bring me the most satisfaction?
  • Joy — When I reflect on today, what will bring me the most joy?

Tools

Parsr Transforms PDF, Documents and Images into Enriched Structured Data

This might be surprisingly useful

A diagramming tool for systems

Basically draw.io

Build a corporate R package for pleasure and profit

Time for an Analytics Edge package?

Knowflow

concept maps

How to deploy a static website for free in just 3 minutes straight from your Google Drive, using Fast.io

Another alternative to Github Pages.

The Cornell Note-taking System

1. Record: During the lecture, use the note-taking column to record the lecture using telegraphic sentences.
2. Questions: As soon after class as possible, formulate questions based on the notes in the right-hand column. Writing questions helps to clarify meanings, reveal relationships, establish continuity, and strengthen memory. Also, the writing of questions sets up a perfect stage for exam-studying later.
3. Recite: Cover the note-taking column with a sheet of paper. Then, looking at the questions or cue-words in the question and cue column only, say aloud, in your own words, the answers to the questions, facts, or ideas indicated by the cue-words.
4. Reflect: Reflect on the material by asking yourself questions, for example: “What’s the significance of these facts? What principle are they based on? How can I apply them? How do they fit in with what I already know? What’s beyond them?”
5. Review: Spend at least ten minutes every week reviewing all your previous notes. If you do, you’ll retain a great deal for current use, as well as for the exam.
