Python, Big Data and not Enough Memory

Beware of the man who won’t be bothered with details. — William Feather, Sr. Goal: In this essay I want to shed some light on the details of working with large datasets (read: arrays) in Python and numerical code, when the data grows too large for local memory or the overall computation becomes too slow using Python’s standard libraries. ...

August 31, 2023 · 1027 words

All Your Data are Belong to Us!

Recently we have seen OpenAI, a company excessively funded by Microsoft, trialing a Large Language Model (LLM) called chatGPT. Guess what? They are selling you your own data. ...

February 11, 2023 · 356 words

The Annotated Transformer Revisited

In this article we take an illustrated, annotated look at the Transformer published in “Attention is all you need” in 2017 by Vaswani, Shazeer, Parmar, et al. The Transformer architecture was groundbreaking, achieving 28.4 BLEU on the WMT 2014 English-to-German translation task with comparatively little training. Even though it has since been eclipsed by “Reformer: The Efficient Transformer”, published by Nikita Kitaev, Łukasz Kaiser and Anselm Levskaya this year (2020), it is still worth looking at the fundamental idea of this comparatively “simple network architecture […] based solely on attention mechanisms”. ...

February 22, 2020 · 980 words

Getting started with offline Voice Recognition

With voice assistants becoming ubiquitous, we are getting used to things listening to us. In this article we explore how to turn a headless Raspberry Pi Zero W into our own offline voice assistant. ...

January 26, 2020 · 1193 words

Getting started with Coral Edge TPUs

After reading up on some nice stats on Google’s Coral Edge TPUs by Sam Sterckval at @RacoonsGroup, I got curious. Let’s see how this works. ...

January 25, 2020 · 888 words