Artificial Intelligence. Do you know what AI can do for your business?

Artificial Intelligence (AI), Machine Learning, Neural Networks … most of us know these words. They’re bandied about as the ‘next big things’ that promise to revolutionize the way we do business. This might well be true, but many people still aren’t aware of exactly what these technologies are, how they’re already impacting our lives and how they have the potential to transform the way we do business.

Most of us know Siri and Alexa, the personal assistants from Apple and Amazon that help us out in an increasing number of ways. We know that these are AI-powered assistants, but few of us know about the many algorithms, neural networks, random forests and gradient-boosted models that make these assistants what they are and contribute to their growing sophistication and usefulness. Julia Medvid, Senior Client Partner at ITMAGINATION, and Łukasz Dylewski, Head of Data Science, have extensive experience in introducing ITMAGINATION clients to AI and helping them benefit from this game-changing technology. Here, Julia and Łukasz explain what you need to know about AI.

A definition of AI

In the early 1980s, two academics, Avron Barr and Edward Feigenbaum, proposed a definition of Artificial Intelligence (which we commonly refer to today as AI).

Artificial Intelligence (AI) is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior – understanding language, learning, reasoning, solving problems and so on.

The Handbook of Artificial Intelligence, Volume 1

Today, we think of AI as a collection of algorithms and programming systems. One of its key capabilities is the ability to perform tasks that were previously considered possible only for the human brain. For example, AI can solve problems that a human would solve by taking several related decisions, based on reasoning and on prior ‘human’ experience (e.g. knowing which actions have the highest probability of success).

The main attributes of AI are its ‘ability’ to learn and to use the information that it has received or been exposed to, transforming it into new data. Among the most common uses of AI so far are processing, understanding and responding to spoken and written inputs (Natural Language Processing), image recognition, and outcome prediction based on historical data. In the business world, this outcome prediction is known as predictive analytics and is used by companies to interpret customer needs and create informed sales forecasts.

What is an algorithm?

The first computers operated on specific configurations of simple commands. Although the complexity has increased dramatically, algorithms are still little more than a process or set of rules to be followed in calculations or other problem-solving operations. Every computer program or application that we use works on the same principle: if a command is given, the action associated with that command should be performed.
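As a minimal sketch of this principle, consider the following Python fragment; the command names and messages are invented purely for illustration:

```python
# A minimal sketch of the command -> action principle described above.
# The command names and actions are hypothetical, purely for illustration.

def run_command(command: str) -> str:
    actions = {
        "open_file": "Opening the file...",
        "save_file": "Saving the file...",
        "print_document": "Sending the document to the printer...",
    }
    # If the command is not built into the algorithm, the program
    # has no way of knowing what to do and reports an error.
    return actions.get(command, "Error: unrecognized command")

print(run_command("save_file"))    # Saving the file...
print(run_command("order_pizza"))  # Error: unrecognized command
```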

Algorithms in ‘conventional’ computing

An algorithm that is not constructed correctly will not bring about useful outcomes for a user. Computers, programs and apps are not capable of evaluating a situation and proposing a suitable outcome. If the appropriate command and action are not built into the algorithm, then the computer or application has no way of knowing what action to perform. In the past, this might have led to a dreaded blue screen or error messages popping up on screen. These are a computer’s reaction to being given a request that it does not recognize.

Imagine a situation where you give a robot the following instructions:

Before you cross the road, look left and then right. If there are no cars coming, proceed to cross the street.

The robot processes the message. It looks left. It looks right. There are no cars coming. But there is a cyclist coming from the left. But a cyclist is not a car, and the robot has not been taught to consider cyclists. As such, it proceeds to cross the street and causes a collision with the cyclist. It would proceed to do so again and again (probably causing several accidents in the process) until the programmer changes the algorithm.

This example shows that an algorithm will only perform as well as it has been written. If something is not built into the algorithm, there can be no expectation that the computer, robot or application will anticipate what the algorithm’s author really meant (i.e. to check that the road is clear of all vehicles and moving objects before proceeding).
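The same failure can be sketched in a few lines of code; the check below is hypothetical and only looks for cars, exactly as the instruction was written:

```python
# A toy illustration of the road-crossing rule above. The robot only checks
# for cars, exactly as instructed, so a cyclist slips through the check.

def safe_to_cross(approaching_objects: list) -> bool:
    # The rule as written: look left and right, cross if no CARS are coming.
    return not any(obj == "car" for obj in approaching_objects)

print(safe_to_cross(["car"]))      # False - the robot waits
print(safe_to_cross(["cyclist"]))  # True  - collision! the rule never mentioned cyclists

# A human author would have meant "no vehicles or moving objects at all",
# but the algorithm only does what it was explicitly told to do.
```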

Algorithms and Machine Learning

But what if a machine could learn? This is the long-held dream of scientists and academics all around the world. This is the prospect of Machine Learning.

With Machine Learning, the outcome of the scenario above would be the same, at least in the first instance – the robot would not recognize the oncoming cyclist as a threat and would proceed to cross the road. There would be a collision. However, with Deep Learning, the robot would be able to record the collision with a cyclist as an experience. Next time, faced with the same set of circumstances, the robot would know that an oncoming cyclist poses a threat of collision and would ‘learn’ to avoid the cyclist.

It’s not that the robot itself is learning from its experiences. It’s that the set of algorithms is being built out to factor in different data. The presence of the cyclist is a form of data or pattern and the detection of a certain pattern then becomes associated with a certain behavior or outcome (i.e. a collision). This is the evolution from a simple algorithm or set of algorithms, to neural networks.

Neural networks

Neural networks are made up of connected elements, artificial neurons, arranged in three layers: the input layer, the hidden layer and the output layer. At the input layer, parameters are defined, and each parameter is given a weighting. This weighting helps define how much influence that input has on the outcome. Next, the data from all of the inputs is passed through the formula that has been defined by the programmer. This is the hidden layer, and it is here that any computing or calculation, as defined by the programmer, takes place. If the result exceeds a predefined value, the neural network produces a numerical output, which could be a positive, negative or neutral result.
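To make this concrete, here is a minimal, hand-written sketch of a single ‘neuron’ passing data through these layers; the input values, weights and threshold are invented for illustration, and real networks contain many neurons per layer:

```python
import math

# Inputs are multiplied by weights, summed in a hidden neuron, and squashed
# into a value between 0 and 1; a threshold then decides the final output.
# The weights and input values here are made up purely for illustration.

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

inputs = [0.8, 0.2, 0.5]     # input layer: three parameters
weights = [0.9, -0.3, 0.4]   # how much influence each input has

hidden = sigmoid(sum(i * w for i, w in zip(inputs, weights)))  # hidden layer

output = 1 if hidden > 0.5 else 0  # output layer: positive or negative result
print(hidden, output)
```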

[Figure: An AI neural network]

The idea of the neural network was inspired by the human brain. Simply put, our senses take in different inputs, our brain computes them, and then we provide an appropriate response.

The nose smells cookies, the eyes see cookies, the mouth tastes cookies.

The brain computes the inputs from these different senses.

The eyes light up and the mouth says “yummy cookies”.

How do neural networks ‘learn’?

The value of a neural network lies in its ability to ‘learn’. It doesn’t follow one specific strategy but, instead, follows a method of trial and error. The programmer enters various patterns at the input level, and the neural network modifies its weightings based on how far the outputs differ from the results expected for those patterns. In this way, the neural network can be trained towards a desired function.

For example, a system can be taught to identify objects within pictures. The system is ‘fed’ a number of different images of a specific object in a variety of different environments and contexts. Eventually, it is able to recognize and identify the object in new images. With minimal human intervention (e.g. to correct mistakes), it amasses ‘experience’ and improves its accuracy. As an example of this, check out how ITMAGINATION has developed an AI-based solution that enables brands to monitor the quality of product-placement photographs posted to Instagram by influencers.
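As a rough sketch of this trial-and-error process, the toy loop below nudges a single weighting whenever the output differs from the desired output; real networks adjust millions of weights in the same spirit, and the example data here is invented:

```python
# Learning by trial and error: one weight is nudged whenever the output
# differs from the desired output (the same idea behind gradient descent).

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, desired output) pairs
weight = 0.1                                     # initial guess
learning_rate = 0.05

for _ in range(200):                 # repeat the trial-and-error loop
    for x, desired in examples:
        output = weight * x          # the network's current guess
        error = desired - output     # how far off it was
        weight += learning_rate * error * x  # correct the weighting accordingly

print(round(weight, 3))  # converges towards 2.0, the relationship hidden in the data
```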

What do you really need to understand and know about neural networks?

Practically speaking, neural networks are modern machines that process numbers. They don’t understand that they’re dealing with photographs and recognizing faces, or that they are driving cars. Although the algorithms are sophisticated, and the volumes of data can be huge, neural networks still work on the basis that data is input, data is processed, and an output is produced.

Despite their apparent sophistication, neural networks are not able to improvise. They can operate in unpredictable situations (data sets), but they are not capable of coming up with original solutions to problems. A simple neural network can be made to run on almost any type of computer; it doesn’t require any kind of special equipment. If, however, you want to run the type of neural network that detects and recognizes objects within images, or that understands, analyzes and responds to natural language, then much more processing power is required, typically from the kind of graphics processors used to render high-quality graphics in high-end gaming machines or by 3D designers. This processing power is needed to perform the calculations on the data fed into the neurons.

Classic algorithms and machine learning

‘Regular’ algorithms are built to provide a predefined result or outcome. For example, if a developer writes code aimed at calculating the size of an apartment based on a plan of the apartment, then it is necessary to write out each of the steps involved in making that calculation: division, addition, subtraction and so on. Following this series of steps, the algorithm is able to calculate the size of the apartment. The programmer can also adjust the algorithm to optimize it or to make changes based on external factors.
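A minimal sketch of this classic approach might look like the following; the room dimensions are invented, and every step of the calculation is spelled out by the programmer:

```python
# The 'classic' approach: every step of the calculation is written out
# explicitly by the programmer. The room dimensions are invented.

rooms = [
    {"name": "living room", "width": 5.2, "length": 4.0},
    {"name": "bedroom",     "width": 3.5, "length": 4.0},
    {"name": "kitchen",     "width": 2.8, "length": 3.0},
]

total_area = 0.0
for room in rooms:
    total_area += room["width"] * room["length"]  # multiply, then add, step by step

print(f"Apartment size: {total_area:.1f} m2")
```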

Where machine learning is involved, however, Data Scientists do not write out the precise calculation formula at all. Instead, the Data Scientist takes care of the complexity, the connections and the overall architecture of the network, and the neural network independently learns to improve the accuracy of the output value. For example, the Data Scientist enters ten thousand apartment floorplans, each of which includes measurements and dimensions, and the model begins to estimate what the output should be. A relatively simple neural network could do this with easy-to-identify numerical values as the input (e.g. data in Excel); a more sophisticated one might extract the values from design files or meta information. When the system produces outputs that differ dramatically from the desired result, the Data Scientist makes corrections to ‘teach’ the neural network how to produce more accurate results, adjusting the analytical model or its hyperparameters.
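By contrast, a rough sketch of the machine-learning approach is shown below; instead of writing out the formula, we give a simple model a set of example floorplans (represented here by invented numerical features) and let it learn the relationship itself:

```python
import numpy as np

# Instead of coding the calculation, show the model many (features -> size)
# examples and let it discover the relationship. The 'floorplan features'
# are invented values (room count, total wall length) standing in for the
# measurements a real system would extract from design files.

features = np.array([   # one row per apartment: [number of rooms, total wall length in m]
    [2, 35.0],
    [3, 44.0],
    [4, 60.0],
    [5, 68.0],
])
sizes = np.array([50.0, 65.0, 88.0, 102.0])  # known apartment sizes in m2

# Fit a simple linear model (least squares): the 'formula' is learned, not written.
X = np.hstack([features, np.ones((len(features), 1))])  # add a constant term
coefficients, *_ = np.linalg.lstsq(X, sizes, rcond=None)

new_apartment = np.array([3, 50.0, 1.0])  # features of an unseen floorplan
print(f"Estimated size: {new_apartment @ coefficients:.1f} m2")
```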

Whereas a classic algorithm performs functions exactly as it is programmed to do and provides clear, predictable results, a machine-learning-based system independently learns the relationships between the different input values and how they combine to produce the expected result. In the long run, this saves a huge amount of time and effort: what the neural network is able to ‘learn’ independently would require huge amounts of classic programming time and resources.

How can AI be used?

The scope of what AI can be used for is wide and varied. It can enhance and optimize the way we use technologies that are already familiar to us, and it is being applied to technologies that have yet to reach mainstream use. The diagram below shows just a few of the most commonly referenced AI use cases.

[Figure: Common AI use cases. Source: PwC]

AI in our daily lives

For those still learning about AI, some of the cases cited in the diagram above might still feel a little abstract and distant from our daily lives. Here are some examples that you will almost certainly be familiar with:

1. Voice assistants on smartphones and home speakers.

In these scenarios, AI is used to detect that a request is being made (“Hey Siri”, “OK, Google”, “Alexa”), to process our voice request, to find relevant data or information that addresses that request, and to convey the information back to us in a way that we will understand and (hopefully) find useful. These virtual assistants help us find the best route to work, remind us of things we might need to buy, and serve as interfaces to a growing number of third-party services (e.g. Amazon shopping, Spotify and more). The neural networks behind these services were given a level of initial training before release, but every request now made to Siri, Alexa or Google is used to improve the way the AI processes and responds to requests.
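The sketch below is a deliberately simplified, hypothetical illustration of that detect-process-respond pipeline; real assistants use large speech and language models rather than the toy keyword matching shown here:

```python
# A highly simplified, hypothetical sketch of the assistant pipeline:
# detect the wake phrase, interpret the request, and reply.

WAKE_PHRASES = ("hey siri", "ok google", "alexa")

def handle_utterance(utterance: str) -> str:
    text = utterance.lower()
    if not text.startswith(WAKE_PHRASES):
        return ""                         # no wake phrase: the assistant stays silent
    request = text.split(" ", 2)[-1]      # crudely strip the wake phrase
    if "route" in request or "traffic" in request:
        return "The fastest route to work takes 25 minutes."
    if "buy" in request or "shopping" in request:
        return "I've added it to your shopping list."
    return "Sorry, I can't help with that yet."

print(handle_utterance("Alexa, what's the traffic on my route to work?"))
```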

2. Chatbots on webpages.

Many webpages, especially e-commerce and service provider websites, feature avatars that serve as the face of a chatbot (typically hovering around the bottom-right of the screen as we scroll, or offered as an option in the ‘help’ section). The more sophisticated incarnations of such bots are able to decipher our questions and provide us with usable answers from a database of accumulated or taught knowledge. Such bots are commonly built on deep neural networks using a framework such as TensorFlow.

Not all bots are created equal. Some are better than others at processing natural language: some are equally good at deciphering short and long questions, while others are more limited and respond primarily to key words. This variance in sophistication explains why we still get frustrating responses like “I’m sorry, I don’t understand your question” or “please try to write in a more concise manner”. The difference isn’t just about the way the bot has been ‘built’; it also depends on the initial level of teaching and correction the bot has received, and on the volume of questions and feedback it has been exposed to (and can therefore learn from). If there’s a silver lining to a frustrating bot experience, it’s that the questions you entered should help refine the bot so that it’s more useful in the future.
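As an illustration of the simplest, keyword-driven style of bot mentioned above, here is a toy sketch; the intents and answers are invented, and a production bot would replace this lookup with a trained model (e.g. one built with TensorFlow):

```python
# A toy sketch of keyword-based intent matching, the simplest style of bot.
# The intents and answers below are invented purely for illustration.

INTENTS = {
    "delivery": ("delivery", "shipping", "arrive", "track"),
    "returns":  ("return", "refund", "exchange"),
    "pricing":  ("price", "cost", "discount"),
}

ANSWERS = {
    "delivery": "Standard delivery takes 2-4 working days.",
    "returns":  "You can return any item within 30 days.",
    "pricing":  "Current prices and discounts are listed on the product page.",
}

def reply(question: str) -> str:
    words = question.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in words for keyword in keywords):
            return ANSWERS[intent]
    return "I'm sorry, I don't understand your question."

print(reply("When will my order arrive?"))  # matches the 'delivery' intent
print(reply("Do you sell gift cards?"))     # falls through to the fallback reply
```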

3. Intelligent camera filters.

Have you ever noticed how your digital camera or the camera on your smartphone is able to detect that you are taking a photograph in the dark or in an environment exposed to large amounts of sunlight? In such scenarios, a camera will typically adjust itself to create a more balanced effect and provide you with a better image. Some cameras are even ‘smart’ enough to recognize that you are taking a selfie and will proactively show you a version of the image that ‘magically’ removes blemishes from your skin, smooths your complexion, removes wrinkles and stray hairs, and adds a little sparkle to your eyes.

In this context, AI uses a variety of inputs (such as light levels) as well as object detection within images to identify elements such as blemishes, eyes, teeth and hair. Relying on a huge volume of examples that have been ‘fed’ into the system, the filters are able to suggest enhancements with a degree of confidence that the user will find them preferable to the original.
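A minimal sketch of the exposure logic hinted at here might look as follows; real cameras rely on trained models and many more signals, and the pixel values below are invented:

```python
# Measure the average brightness of the scene and adjust the image accordingly.
# Real cameras use trained models and many more signals; this is a toy heuristic.

def average_brightness(pixels: list) -> float:
    # pixels: grayscale values between 0 (black) and 255 (white)
    return sum(pixels) / len(pixels)

def adjust_exposure(pixels: list) -> list:
    brightness = average_brightness(pixels)
    if brightness < 60:       # scene detected as dark: brighten it
        factor = 1.5
    elif brightness > 190:    # scene detected as overexposed: tone it down
        factor = 0.7
    else:                     # balanced scene: leave it alone
        factor = 1.0
    return [min(255, int(p * factor)) for p in pixels]

dark_photo = [20, 35, 50, 40, 30]
print(adjust_exposure(dark_photo))  # a brighter version of the same pixels
```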

AI in the future, with ITMAGINATION

AI is much more than a technology buzzword or hot topic. Businesses that aspire to be successful in the future must start becoming masters of AI today. It can be difficult to know where to start and to ensure that your time, money and resources are invested in a way that will provide rapid and effective returns. ITMAGINATION can help.

ITMAGINATION has proven experience in applying AI for business benefit.

Two of ITMAGINATION’s most-recent use cases with AI are:

An AI-based solution for retail that empowers retailers to reduce the time spent by customers in line at stores. By making use of existing CCTV infrastructure, the AI solution developed by ITMAGINATION is able to recognize when the store is becoming busier and determine what action the store staff need to take to avoid customer dissatisfaction and optimize sales. The system is also able to integrate historical data and external factors (such as public holidays, weather, etc.) to predict the level of activity within the store. Based on these factors, it suggests optimal staffing levels and recommends which products to stock and where to display them to achieve maximum exposure.

A solution that enhances banks’ Know Your Customer and Anti Money Laundering capabilities. ITMAGINATION has deployed a solution for one of Poland’s leading banks that uses AI and machine learning to analyze millions of client transactions and activities simultaneously and in real time to identify possible rogue activities and even predict when and where they might happen. The system functions 24/7 and dramatically enhances the bank’s ability to be compliant with applicable regulations while preserving human time for high-priority or value-adding initiatives.

If you want your business to thrive in the AI era, talk to ITMAGINATION.

Learn it. Know it. Done.
