90th Monthly Technical Session

Sean Li · Published in henngeblog · 6 min read · Mar 9, 2022

Every month at HENNGE, we hold a mini internal conference-like session for sharing knowledge and ideas called Monthly Technical Session — MTS for short. The 90th MTS was held on Zoom on 21 January 2022, including talks from members of some of our development teams and interns who were participating in our Global Internship Programme at the time.

Learning iOS as an Android Developer

Charles talking about iOS development

Charles gave the first presentation about his foray into iOS development. Charles has been working mainly as an Android developer over the years but recently has dipped his toes into various technologies such as Flutter and, as per the title of the talk, iOS and Swift.

He recounted some of the difficulties he faced, such as using Xcode and the concepts it relies on, and the lack of learning resources at an appropriate level. Many of the tutorials he encountered were targeted at people with no programming knowledge. Ideally, he would have liked more resources aimed at people who already have a development background.

Additionally, he found that Apple’s documentation was not as user-friendly as Google’s, and there were some additional pain points such as the difficulty of releasing software on iOS compared to Android.

However, there were some similarities he did not expect, with Swift and Kotlin sharing a lot of language concepts. He believes that now is a decent time to be learning iOS as the frameworks are improving and becoming more similar across platforms.

Spotify Backstage

Yoel talking about Spotify Backstage

Next up was Yoel, who gave a talk on Spotify Backstage, an “open platform for building developer portals”. He described the reason why the team at Spotify decided to develop the tool:

As an organization grows, information fragmentation may occur among software engineers, making it harder for them to produce meaningful work. Backstage helps alleviate this by centralizing information in one place so that it is easy to index and access.

The platform provides a centralized location to manage and find documentation about the products and services being developed at an organization. One of its features is software templates, which make it easy to create new projects that are aligned with the organization’s best practices.

He then quickly ran us through their public demo, which you can find at https://demo.backstage.io. There is a list of services, each containing TechDocs for documentation, API definitions, proposals and more. Yoel also showed us an example plug-in that can be used to document the tooling and methods that an organization is already using for quick reference.

Fooling Neural Networks: A Brief Introduction to Adversarial Examples

Simon talking about neural networks and adversarial examples

Following this, Simon gave an introduction to adversarial attacks against modern machine learning systems:

The success and wide adoption of neural-network-based systems has led to a strong reliance on them in critical scenarios like security and self-driving cars. Research shows how even well-trained AI systems can be fooled into wrong predictions on the simplest of tasks by showing them a slightly altered input image that looks normal to the human eye. Not publicising your AI algorithms and training data can help, but because of the vast range of possible inputs there are not many ways to make AI resilient to these attacks, so research is being done to create more robust systems.

He told us that as neural networks, machine learning, and artificial intelligence become more widely adopted, there is a growing incentive to maliciously interfere with systems built on these technologies.

He gave some examples of this kind of interference, such as minor, specialized adjustments applied to an image that are unnoticeable to the human eye but can completely fool a machine learning model. Discovering and inventing countermeasures so that systems can withstand such interference is an area that is currently important and ripe for research.
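
As a rough illustration of the idea (and not code from Simon’s talk), below is a minimal sketch of the Fast Gradient Sign Method, one well-known way to craft such perturbations; the model, input tensors, and perturbation budget are all hypothetical placeholders.

```python
# A minimal sketch of the Fast Gradient Sign Method (FGSM), assuming a pretrained
# PyTorch classifier and input images scaled to [0, 1]. Everything here (model,
# epsilon, shapes) is a placeholder rather than code from the talk.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a slightly perturbed copy of `image` that tries to flip the prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a tiny amount in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Hypothetical usage: the adversarial image looks identical to a human,
# yet the model's prediction can change completely.
# adversarial = fgsm_attack(model, images, labels)
# print(model(images).argmax(1), model(adversarial).argmax(1))
```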

Analyzing Discourse on Contraception in Filipino Reddit Communities using Python

Dani talking about analyzing discourse on Reddit

After a short break, we continued the session with a talk from Dani related to sexual health education in the Philippines, something he is clearly passionate about:

Sexual health education has been a significant topic in the Philippines, considering the lack of concrete integration of sexual health into the country’s education system as well as the taboo nature of the topic in this predominantly Catholic country. We use Python to analyze the discourse on topics related to sex and contraception in Filipino Reddit communities.

He described how he worked on the project with a partner to scrape Reddit posts and comments in the communities of r/SafeSexPH and r/Philippines using PRAW (the Python Reddit API Wrapper). They mined keywords, manually labeled them, and preprocessed the text before using various methods of text analysis, including WordCloud and VADER Sentiment Analysis.
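
As a rough sketch of that pipeline (not their actual code), scraping the two subreddits and scoring matched posts with VADER could look something like this; the credentials, keyword list, and post limit are placeholders.

```python
# A sketch of the scraping-and-sentiment pipeline described above, using PRAW
# and VADER. Credentials, keywords, and the post limit are placeholders;
# this is not the code from Dani's project.
import praw
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="contraception-discourse-study",
)
analyzer = SentimentIntensityAnalyzer()
keywords = ["contraception", "condom", "pill"]  # hypothetical keyword list

results = []
for name in ("SafeSexPH", "Philippines"):
    for post in reddit.subreddit(name).new(limit=100):
        text = f"{post.title} {post.selftext}".lower()
        if any(keyword in text for keyword in keywords):
            # VADER's compound score runs from -1 (most negative) to +1 (most positive).
            score = analyzer.polarity_scores(text)["compound"]
            results.append((name, post.id, score))

positive = sum(score > 0.05 for _, _, score in results)
print(f"{positive} of {len(results)} matched posts lean positive")
```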

They found that the posts and comments were generally positive, with the negative comments tending to complain about the taboo surrounding the topic in the country. He hypothesized that the positive results may be due to the generally younger demographic present on Reddit.

React Native: an average consumer perspective

Jesson talking about working with React Native

Jesson followed up with a talk about working with React Native:

I mainly talk about cross-platform development and how it works internally, especially React Native. I briefly talk about what React Native is, its internals, and some frameworks developers should expect to use when developing with it. Finally, I give some pros and concerns about React Native and some speculation about the future of cross-platform development.

He described his experiences, including some of the challenges he faced. For example, he found it quite hard to debug his application in React Native compared to React. For managing state, he recommended Redux as he found it really useful. It has up-to-date documentation and Redux Toolkit provides an opinionated and straightforward set of principles for implementation.

He encountered several bugs and unfixed issues and found the documentation to be lacklustre and incomplete. He found many limitations with React Native and ended by saying that he does not see a clear winner between it and Flutter, though he expects both to improve in the future.

Deepfake Videos in the Wild: Analysis and Detection

Ken talking about deepfake analysis and detection

Rounding off the session was Ken, who gave the final presentation on detecting deepfakes, “synthetic media in which a person in an existing image or video is replaced with someone else’s likeness”:

In the last few years, deepfakes have emerged as a new form of fake news. This has led to the creation of a new research field that creates machine learning models to detect them. However, current datasets used in the field do not accurately reflect what’s out there on the web, so much work is yet to be done!

Ken is currently researching automatic fake news detection using natural language processing (NLP). Fake news comes in many forms, including text, images, audio, and video, making deepfakes an important part of his research.

He gave us a brief overview of how deepfakes can be generated, for example using Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), along with some examples of software that can be used to create them.
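
As a toy illustration of the adversarial idea behind GANs (and not a deepfake pipeline from the talk), a generator and a discriminator can be trained against each other as sketched below; the network sizes and the stand-in “real” images are hypothetical.

```python
# A toy sketch of the generator-versus-discriminator setup behind GANs.
# Network sizes and the stand-in "real" batch are hypothetical; real deepfake
# generators are far more elaborate than this.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # e.g. flattened 28x28 images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Stand-in batch of "real" images, scaled to [-1, 1] to match the Tanh output.
real_images = torch.rand(32, image_dim) * 2 - 1
fake_images = generator(torch.randn(32, latent_dim))

# Discriminator step: learn to label real images 1 and generated ones 0.
d_loss = criterion(discriminator(real_images), torch.ones(32, 1)) + \
         criterion(discriminator(fake_images.detach()), torch.zeros(32, 1))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: learn to make the discriminator label fakes as real.
g_loss = criterion(discriminator(fake_images), torch.ones(32, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```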

He then introduced research that looked into detecting deepfakes, where models were trained on a dataset created specifically for research. These models were tested with deepfakes “from the wild” and were found to perform poorly. He covered some ways that such detection models could be improved to combat the issues they currently face.

No MTS is complete without its Beer Bash, which we held on Zoom 🍻🍺

A screenshot of a Zoom party


Sean Li · henngeblog
began life in the UK, now working on software in Tokyo