Welcome to this week's edition of “Weekend readings”, a series in which we revisit the week's most interesting stories in the world of IT.
Most of the things we recommend here are related to software. However, the infrastructure world can be just as fascinating (for some, even more fascinating) than the software one. It turns out that many algorithms can be sped up (not in terms of asymptotic complexity, but in terms of execution time) by using dedicated hardware. We have seen this happen with Bitcoin, which nowadays is mined mostly on ASIC circuits. The same pattern is now being applied to machine learning. Google has built its own computer chip – the TPU, or Tensor Processing Unit – designed for running deep neural networks. The TPU reportedly outperforms standard processors by 30 to 80 times.
I still remember when, as a young analyst working for a telco company, I first heard the term "idempotency". First I stared in surprise, then I laughed, and finally I checked a dictionary. The word really existed, and it described perfectly the situation I had encountered: when someone retried a prepaid top-up (due to a timeout), the prepaid account got recharged twice. Later it turned out that idempotency and retries form a bigger issue with a much broader scope than I had thought at the beginning. The use cases are all around us. I strongly encourage you to read the article dedicated to this subject.
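The double-recharge bug above is usually fixed by making the operation idempotent, for example by deduplicating on a request identifier. Here is a minimal sketch of that idea; the names (`Account`, `recharge`, `request_id`) are invented for illustration, not taken from any real billing system:

```python
# Minimal sketch of a retry-safe recharge using an idempotency key.
# All names here are hypothetical, for illustration only.

class Account:
    def __init__(self):
        self.balance = 0
        self._seen_requests = set()  # idempotency keys already processed

    def recharge(self, amount, request_id):
        # If this request was already handled, do nothing: a retry
        # (e.g. after a client-side timeout) must not recharge twice.
        if request_id in self._seen_requests:
            return self.balance
        self._seen_requests.add(request_id)
        self.balance += amount
        return self.balance

acct = Account()
acct.recharge(10, request_id="req-1")
acct.recharge(10, request_id="req-1")  # retried with the same key
print(acct.balance)  # 10, not 20
```

In a real system the seen-request set would live in durable storage shared by all service instances, but the contract is the same: retrying a request must leave the system in the same state as issuing it once.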
This link may not look like such a big deal, but let's analyze it:
- on the right, there is a book about machine learning
- on the left, there is a development environment very often used for machine learning, called Jupyter. There is also some simple code performing classification (one of the basic machine learning tasks)
- the title of the tweet says "The new NBA."
The last point may not be clear until we look at who is tweeting this: Mark Cuban. He is the owner of an NBA team (the Dallas Mavericks) and has an estimated net worth of 3.2 billion dollars. And he is learning machine learning in Python. This may follow from his recent prediction that the world's first trillionaire will make their fortune in AI.
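For readers curious what "simple code doing classification" in a notebook might look like, here is a toy sketch of a nearest-neighbour classifier. The data points and function names are invented for illustration; real notebooks would typically use a library such as scikit-learn instead:

```python
# A toy 1-nearest-neighbour classifier: the flavour of "simple
# classification code" one might run in a Jupyter notebook.

def classify(point, training_data):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        # Squared Euclidean distance (ordering is the same as Euclidean).
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data, key=lambda example: dist(point, example[0]))
    return nearest[1]

# Hypothetical labelled examples: (features, label) pairs.
training = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.8), "small"),
    ((5.0, 5.5), "large"),
    ((6.0, 5.0), "large"),
]

print(classify((1.1, 0.9), training))  # small
print(classify((5.5, 5.2), training))  # large
```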
Yes, this is the exact same news Paweł has already covered, but it is surely worth mentioning twice with an alternative source. It seems that not only Cuban believes the next trillion is in AI. Companies are pouring billions of dollars into AI research, in both software and hardware. More or less a year ago [NVIDIA released its AI chip], and now Google is building the hardware side for its TensorFlow framework, which has become the market standard according to Gartner.
Have you ever wondered what data platform architecture runs behind Facebook or Netflix? How do they handle petabytes of data effectively? Do they use any special software? Read this blog post by Michelle Wetzler to find out that even the biggest businesses in the world rely on well-known platforms and services to deal with huge amounts of data.
Imagine a world where all the data necessary for analytics is at hand. Imagine people playing with open data to find solutions to the world's biggest problems – disasters, poverty, wars. Hard to imagine, isn't it? But there are cases proving that the democratization of data could bring us some serious gains. Here is one of them: read how Dustin Cabal used open data and a data visualization tool to analyze dams in the U.S. from a safety perspective.