Early AI adopters may gain “insurmountable advantage”

- McKinsey Global Institute, 2018

 

Artificial Intelligence is the most influential general-purpose technology of our era. With the rise of computing power and the enormous growth of data storage, machine learning gives companies tremendous opportunities to optimise and automate many business processes.

 

How you can benefit from AI

 

 
[Logos: ICE, Hong Kong university, UvA]
 

 

What is AI?

Artificial Intelligence (AI) is a broad, complex field, filled with its own jargon and definitions. So let's first define AI. We believe that whenever a computer is able to make autonomous decisions based upon predictions derived from statistical models, it classifies as Artificial Intelligence.

We are interested in AI’s impact on business processes. We have selected the most important themes in order to help you identify sustainable business opportunities.

What AI can do for you

Most of the AI that is used commercially automates simple, repetitive tasks in any type of business process. The most effective AI models have been developed in the following fields:


• Data analytics & prediction: the field where customer or business data is used to make in-depth analyses, or to predict future behaviour or states.


• Computer vision: this field focuses mainly on image and video recognition; handwriting analysis also falls under computer vision. Most of these models make use of deep learning.


• Natural Language Processing: the ability of a computer to extract information from language, such as analysing the sentiment of a written or spoken text.
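As a small illustration of that last field, here is a minimal sentiment-analysis sketch in Python using the open-source TextBlob library; the example sentences and the choice of library are illustrative assumptions, not an AIcompany product.

```python
# A minimal sentiment-analysis sketch with TextBlob (one of many possible libraries).
from textblob import TextBlob

reviews = [
    "The delivery was fast and the support team was wonderful.",
    "The product broke after two days, very disappointing.",
]

for text in reviews:
    polarity = TextBlob(text).sentiment.polarity  # ranges from -1.0 (negative) to 1.0 (positive)
    label = "positive" if polarity > 0 else "negative"
    print(f"{label:>8}  ({polarity:+.2f})  {text}")
```

Polarity scores like these can, for example, be aggregated over thousands of customer reviews to track satisfaction over time.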

Please don’t hesitate to contact us to find out where AI can improve your company!

 
 
AIcompany: the field of AI
 
  • Further elaboration on the field of AI↓




    Originating from Computer Science, Machine Learning (ML) is considered a field within AI and (Advanced) Data Analytics that allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so. We define AI as whenever a non-human agent is able to make autonomous decisions based upon predictions derived from statistical models.

    Deep Learning (DL) is a subfield of ML and is responsible for the most exciting capabilities in diverse areas like natural language processing, image recognition and robotics.

    In recent years, investments from big companies have shifted the field of AI from being mostly research-driven to becoming a big and impactful component of a huge number of industries. The major arms race among many businesses takes place in the realm of data. The utility that companies can create with the huge amounts of data they acquire is what can give them an edge over competitors.

    While the tech giants are fighting this big data 'war', AI is benefiting many companies in different industries. AIcompany is interested in the possibilities that AI brings on a smaller scale. With the democratisation of many AI applications, it has never been so accessible for businesses to enjoy the power of AI. Please let us know if you want more information, or if you are ready to start implementing AI and need help!

 

Our team

Our team of young IT professionals operates across many different industries. Being Amsterdam-based, we are able to help companies around the world with their AI implementations. Our team culture: be dedicated, have fun, be creative, and never be afraid to do things differently.


 

Get inspired by innovations in your industry!

Select your industry to see new machine learning solutions and assess whether your company is ready!

 
 
  • Health Care
  • Retail & Marketing
  • Finance
  • Media
  • Transportation
  • ICT

 

4 emerging trends that will transform the field of artificial intelligence in 2018

Nowadays we see that technology alone is rarely enough to unlock sustainable business growth. When a new technology is combined with a ‘new way of doing business’, true value is created. Through our work and research, we have identified four emerging trends in artificial intelligence for 2018. Executives should learn to shape the outcome rather than just react to it.

  • Read about the 4 trends ↓

    1. Artificial Intelligence seen as a commodity
    In recent years, many tech giants (Google, Microsoft Azure, IBM) have invested heavily in general-purpose Machine Learning and Deep Learning. In 2018, more SME businesses will learn how to use their solutions and full-service platforms. These giants have managed to optimise Computer Vision and Natural Language Processing to such a degree that they will most likely outperform any other (smaller) player in this field. With the help of APIs, they will take over the general-purpose machine learning industry in 2018 (market share up to 85%).

    2. Democratisation of ‘click’ Machine Learning
    In 2017 there was exponential growth in the use of so-called ‘click, drag and drop’ tools: highly specialised solutions that can generate business value without a single line of code, with a polished, highly visual look and feel.

    These tools (such as Orange, Dataiku, and Exploratory) allow users to model a solution without writing code. The threshold to start having fun with machine learning just decreased…

    3. “Guys, these geeks are infiltrating...”
    In 2015 and 2016, many organisations started so-called ‘AI labs’ and ‘Advanced Analytics’ teams. These teams used to operate in a highly centralised set-up so that they could learn from each other and work across different departments. However, organisations have learned that the actual best practice is to embed data scientists and analytics team members within individual business units.

    4. Predictive Analytics on the loose
    As mentioned, many organisations have invested heavily in new ways to apply advanced analytics and predict certain outcomes. In 2018 we will see more and more autonomous agents that use the outcomes of these predictive analytics tools to plan and interact directly with customers.

    In simple terms, the data and tools now available will not just convince someone to do something, but will actually do it themselves! Two examples (a small code sketch of example A follows below):
    A) Customer data is used to identify which customers are most likely to unsubscribe, and those customers are instantly offered personalised deals in order to keep them.
    B) Products that are connected to the internet, such as tires in the future, will not only provide the car and garage with information on their wear and tear, but also make and set appointments for maintenance and replacement.
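    As a minimal sketch of example A, the snippet below trains a simple churn model and automatically flags risky customers for a personalised offer. The in-memory data set, column names, and the 0.5 threshold are our own illustrative assumptions, not a specific AIcompany implementation.

```python
# Predict which customers are likely to unsubscribe and act on the prediction.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: who unsubscribed in the past.
history = pd.DataFrame({
    "months_active":   [24, 3, 18, 2, 30, 5],
    "support_tickets": [0, 4, 1, 5, 0, 3],
    "unsubscribed":    [0, 1, 0, 1, 0, 1],   # target we want to predict
})

model = LogisticRegression().fit(
    history[["months_active", "support_tickets"]], history["unsubscribed"]
)

# Score current customers and trigger a retention action for the riskiest ones.
current = pd.DataFrame({"months_active": [4, 26], "support_tickets": [3, 0]})
churn_risk = model.predict_proba(current)[:, 1]

for customer, risk in zip(current.index, churn_risk):
    if risk > 0.5:                            # illustrative threshold
        print(f"Customer {customer}: churn risk {risk:.0%} -> send personalised deal")
```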

Why do data scientists pre-process and transform data?

Data pre-processing is not considered the sexiest part of machine learning; however, it is important to transform raw data into a format that can be processed more easily and effectively for the user's purpose.

Data pre-processing describes any type of processing performed on raw data sets in order to prepare them for further procedures (e.g. the use of machine learning).

  • Read about data transformation ↓

    Luckily for us, almost all of the necessary tools and algorithms for data pre-processing already exist and can be applied directly to your data sets.

    To begin, raw data sets are split into a training set and a test set. This is important for verifying that the model performs properly: if you used 100% of your data set to train the model, you could not use that same data to test the model's accuracy, and gathering new data is often quite expensive. We recommend a 60/20/20 ratio: 60% of the data is used to train, 20% to validate the model, and 20% to test.
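    As a minimal sketch of such a 60/20/20 split with scikit-learn (the file name customer_data.csv and the churned target column are hypothetical placeholders for your own data set):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical raw data set with a 'churned' target column.
data = pd.read_csv("customer_data.csv")
X, y = data.drop(columns=["churned"]), data["churned"]

# First hold out 20% of the rows as the final test set ...
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42
)
# ... then split the remaining 80% into 60% training and 20% validation
# (0.25 of the remaining 80% equals 20% of the original data).
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=42
)

print(len(X_train), len(X_val), len(X_test))  # roughly a 60/20/20 split
```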

    To further elaborate on why we pre-process raw data, we provide a few examples; a short code sketch follows the list.
    • Sampling: an algorithm that selects a representative subset from a large population of raw data. In order to solve a problem, you might not need all of the raw data; the goal is to obtain a representative sample. In the real world, we see that sampling is typically used when it is too expensive and time-consuming to process all of the data.
    • Data cleaning: deals with incomplete, missing, or noisy data. In real-world organisations we often see data warehouse quality issues, and missing values in a data set might hurt the performance of a machine learning model. A common solution is to fill the gap with the mean, median, or mode (most frequent) value of that feature. Deleting the entire observation or data group is often unnecessary, because the remaining values can still do a great deal to enhance the performance of your model.
    • Transformation: manipulates raw data to produce a single input such as 1 or 0.
    • Dummy features: allow you to differentiate categories. In order for category features to be useful in regression analysis, all of the features need to be numerical. However, you might want to include an attribute or nominal-scale feature such as ‘Country’ or ‘Type of Education’. Dummy features don’t have a causal relation to the output, but they are created to ‘trick’ the regression algorithm into correctly analysing the features of interest. Say you have three countries: the USA, the Netherlands, and Hong Kong. Labelling those countries with a number (1, 2, and 3) would not mean anything, since you can’t subtract country 1 from country 3. In this case, you would add two additional columns, each of which can be either 1 or 0 (take, for instance, Hong Kong and the USA). Specifying that an observation happened in the Netherlands would then result in 0,0 (and in Hong Kong in 1,0).
    • Normalisation: organises data for more efficient access. Data is scaled to fall within a small, specified range. The goal of standardisation or normalisation is to make an entire set of values have a particular property. Let's look at income as an example. Suppose that the minimum and maximum values for the feature income are €98,000 and €128,000, respectively. If these numbers are large compared to the other features, they would automatically give more ‘weight’ to the income values. We would like to map income to the range 0.0–1.0. By min-max normalisation, a value v is transformed to (v − min)/(max − min) × (new_max − new_min) + new_min; an income of €124,160, for instance, becomes (124,160 − 98,000)/(128,000 − 98,000) × (1.0 − 0.0) + 0.0 ≈ 0.872.
    • Feature extraction: identifies specific data that is significant in some particular context. Feature extraction involves reducing the amount of resources required to describe a large data set. When performing analysis on complex data, a common problem is that too many features are involved: analysis with a large number of features generally requires a large amount of memory and computational power. It may also cause a classification algorithm to overfit (see AIcompany bible, learning pages), generalising poorly to new input data.
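    To make a few of these steps concrete, the sketch below applies data cleaning (mean imputation), dummy features, and min-max normalisation with pandas and scikit-learn; the tiny data set and column names are our own illustrative assumptions.

```python
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler

# Tiny illustrative data set: a nominal 'country' feature and a numeric 'income'
# feature with one missing value.
df = pd.DataFrame({
    "country": ["USA", "The Netherlands", "Hong Kong", "USA"],
    "income":  [98_000, 124_160, 128_000, None],
})

# Data cleaning: fill the missing income with the column mean instead of
# deleting the whole observation.
df[["income"]] = SimpleImputer(strategy="mean").fit_transform(df[["income"]])

# Dummy features: one-hot encode 'country' so the nominal scale becomes numeric;
# drop_first leaves two 0/1 columns, analogous to the two-column country example above.
df = pd.get_dummies(df, columns=["country"], drop_first=True)

# Normalisation: min-max scale income into the range 0.0-1.0
# (e.g. 124,160 -> (124,160 - 98,000) / (128,000 - 98,000) ≈ 0.872).
df[["income"]] = MinMaxScaler().fit_transform(df[["income"]])

print(df)
```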
    Data pre-processing and filtering usually take quite some time, especially because most data quality issues are neglected during mining or data warehousing processes. Data scientists spend their time on data preparation to ensure we all end up with a well-performing model (the “garbage in, garbage out” principle).


Machine Learning Fundamentals

Download the Bible and expand your theoretical knowledge. We also provide an extensive report on key learnings when it comes down to implementation. Check out our open assessment to test if you are ready to implement.

Need a second opinion?

We help you validate and define a machine learning business case. We help you with ideation, validating machine learning models, vendor selection, roadmaps, service blueprints, and legal counselling.

Partnership opportunities

Great things are achieved by a series of small things brought together. We can help you with deploying and implementing machine learning applications. We are here to help. Please do not hesitate to ask.