How to join the AI revolution?


Artificial intelligence is evolving from hype into an enabler of sustainable business transformation. However, very few organizations succeed in implementing end-to-end artificial intelligence solutions. Reinventing your customer experience and business model with the help of AI requires strong capabilities:

  1. Creativity to design technical assets and combine data with real-life business challenges

  2. Hands-on implementation experience and a thorough understanding of market opportunities

This is what we do. We have extensive experience in educating, implementing, and advising on AI-related topics.

Are you interested in seeing how your business can benefit? Take advantage of our free pilot.
We are here to help.


How you can benefit from AI





Our team

Our team of young IT professionals operates across many different industries. Based in Amsterdam, we help companies around the world with their AI implementations. Our team culture: be dedicated, have fun, be creative, and never be afraid to try doing things differently.

4 emerging trends that will transform the field of artificial intelligence in 2018

Nowadays we see that technology alone is rarely enough to unlock sustainable business growth: true value is created when a new technology is combined with a new way of doing business. Through our work and research, we have identified four emerging trends in artificial intelligence for 2018. Executives should learn to shape the outcome rather than just react to it.


    1. Artificial intelligence as a commodity
    In recent years, many tech giants (Google, Microsoft Azure, IBM) invested heavily in general-purpose machine learning and deep learning. In 2018, more SME businesses will learn how to use their solutions and full-service platforms. These giants have optimized computer vision and natural language processing to the point where they will most likely outperform any other (smaller) player in this field. With the help of APIs, they will take over the general-purpose machine learning industry in 2018 (a market share of up to 85%).

    2. Democratisation of ‘click’ machine learning
    2017 saw explosive growth in the use of so-called ‘click, drag and drop’ tools: highly specialised solutions that can generate business value without anyone having to write code. They typically also come with a polished look and feel and strong visualisations.

    These tools (such as Orange, Dataiku, and Exploratory) allow users to model a solution without writing code. The threshold to start having fun with machine learning just dropped…

    3. “Guys, these geeks are infiltrating...”
    In 2015 and 2016, many organisations set up so-called ‘AI labs’ and ‘advanced analytics’ teams. These teams used to operate in a highly centralised set-up so that members could learn from each other and work across different departments. However, organisations have since learned that the better practice is to embed data scientists and analytics team members within individual business units.

    4. Predictive analytics on the loose
    As mentioned, many organisations invested heavily in new ways to apply advanced analytics and predict certain outcomes. In 2018 we will see more and more autonomous agents that use the output of these predictive analytics tools to plan and interact directly with customers.

    In simple terms, the data and tools now available will not just convince someone to do something; they will actually do it themselves. Two examples:
    A) Customer data is used to identify which customers are most likely to unsubscribe, and those customers are instantly offered personalised deals to keep them on board.
    B) Products connected to the internet, such as future tyres, will not only provide the car and the garage with information on their wear and tear, but also book appointments for maintenance and replacement.


Why do data scientists pre-process and transform data?

Data pre-processing is not considered the sexiest part of machine learning; however, it is important for transforming raw data into a format that can be processed more easily and effectively for the user's purpose.

Data pre-processing describes any type of processing performed on raw data sets in order to prepare them for further procedures (e.g. the use of machine learning).


    Luckily for us, almost all of the necessary tools and algorithms for data pre-processing already exist and can be applied directly to your data sets.

    To begin, raw data sets are split into training and test sets. This is important for verifying that the model performs properly: if you used 100% of your data set to train the model, you could not use that same data to give an honest estimate of the model's accuracy, and gathering new data is often quite expensive. We recommend a 60/20/20 split: 60% of the data is used to train the model, 20% to validate it, and 20% to test it.
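As an illustration, a minimal sketch of such a 60/20/20 split in plain Python (the function name and fixed seed are our own choices; libraries such as scikit-learn offer ready-made equivalents):

```python
import random

def train_val_test_split(rows, ratios=(0.6, 0.2, 0.2), seed=42):
    """Shuffle the rows and split them into train/validation/test sets."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # fixed seed keeps the split reproducible
    n = len(rows)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    train = rows[:n_train]
    val = rows[n_train:n_train + n_val]
    test = rows[n_train + n_val:]
    return train, val, test

data = list(range(100))  # stand-in for 100 observations
train, val, test = train_val_test_split(data)
print(len(train), len(val), len(test))  # 60 20 20
```

Training happens only on the first set; the validation set guides model choices, and the test set is touched once, at the very end, for the honest accuracy estimate.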

    To further illustrate why we pre-process raw data, here are a few examples.
    • Sampling: selecting a representative subset from a large population of raw data. To solve a problem, you might not need all of the raw data; the goal is to obtain a representative sample. In the real world, sampling is typically used when it is too expensive or time-consuming to process all of the data.
    • Data cleaning: dealing with incomplete, missing, or noisy data. In real-world organizations we often see data warehouse quality issues, and missing values in a data set can hurt the performance of a machine learning model. A common solution is to fill in the mean, median, or mode (most frequent) value of the feature. Deleting the entire observation or data group is often unnecessary, because the remaining values can still do a great deal to enhance the performance of your model.
    • Transformation: manipulates raw data to produce a single input such as 1 or 0.
    • Dummy features: allow you to differentiate categories. For category features to be useful in regression analysis, all features need to be numerical. However, you might want to include a nominal-scale attribute such as ‘Country’ or ‘Type of Education’. Dummy features have no causal relation to the output; they are created to ‘trick’ the regression algorithm into correctly analysing the features of interest. Say you have three countries: USA, the Netherlands, and Hong Kong. Labelling those countries with a number (1, 2, and 3) would not mean anything, since you can’t subtract country 1 from country 3. Instead, you add two additional columns (say, Hong Kong and USA), each of which can take the value 1 or 0. An observation from the Netherlands would then be encoded as (0, 0), and one from Hong Kong as (1, 0).
    • Normalisation: scaling data to fall within a small, specified range. The goal of standardisation or normalisation is to make an entire set of values share a particular property. Let’s look at income as an example. Suppose that the minimum and maximum values for the feature income are €98,000 and €128,000, respectively. If these numbers are large compared to other features, they would automatically give more ‘weight’ to the income numbers, so we would like to map income to the range 0.0–1.0. Min-max normalisation maps a value v to (v − min) / (max − min) × (new_max − new_min) + new_min; for example, an income of €124,160 is transformed to (124,160 − 98,000) / (128,000 − 98,000) × (1.0 − 0.0) + 0.0 = 0.872.
    • Feature extraction: identifying data that is significant in a particular context. Feature extraction reduces the amount of resources required to describe a large data set. When analysing complex data, a common problem is that too many features are involved: analysis with a large number of features generally requires a large amount of memory and computational power, and it may also cause a classification algorithm to overfit the model (see the AIcompany bible, learning pages), generalising poorly to new input data.
    Data pre-processing and filtering usually take quite some time, especially because most data quality issues are neglected during mining or data warehousing processes. Data scientists spend their time on data preparation to ensure we all end up with a well-performing model (the “garbage in, garbage out” principle).
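To make the list above concrete, here is a minimal sketch of three of these steps (mean imputation for data cleaning, the three-country dummy-feature example, and min-max normalisation) in plain Python. All function names are our own, and in practice libraries such as pandas and scikit-learn provide these transformations out of the box:

```python
def impute_mean(values):
    """Data cleaning: replace missing values (None) with the mean of the observed ones."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def one_hot(country):
    """Dummy features for three countries using two columns (Hong Kong, USA);
    the Netherlands is the baseline, encoded as (0, 0)."""
    return (1 if country == "Hong Kong" else 0,
            1 if country == "USA" else 0)

def min_max(v, lo, hi, new_lo=0.0, new_hi=1.0):
    """Min-max normalisation: map v from [lo, hi] onto [new_lo, new_hi]."""
    return (v - lo) / (hi - lo) * (new_hi - new_lo) + new_lo

incomes = impute_mean([98_000, None, 128_000, 110_000])  # None -> mean of the rest
print(incomes)
print(one_hot("The Netherlands"))                    # (0, 0)
print(one_hot("Hong Kong"))                          # (1, 0)
print(round(min_max(124_160, 98_000, 128_000), 3))   # 0.872, as in the text
```

Each helper mirrors one bullet above; chaining them over every column of a raw data set is essentially what a pre-processing pipeline does.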


AI positions 2.png

Defining Artificial Intelligence

Artificial Intelligence (AI) is an umbrella term. It's a broad, complex field, filled with its own jargon and definitions. So let's unravel this field and see what it has to offer under the hood.

First, we'll start with our definition of AI: we believe that whenever a computer is able to make autonomous decisions based on predictions derived from statistical models, it classifies as Artificial Intelligence.


    As displayed in our circle diagram, the fields closest to and most overlapping with AI are machine learning and deep learning. Originating from computer science, Machine Learning (ML) is considered a field within AI and (advanced) data analytics that allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so. Deep Learning (DL) is a subfield of ML and is responsible for the most exciting capabilities in diverse areas like natural language processing, image recognition, and robotics.

    Business AI. In recent years, investments from big companies have shifted the field of AI from being mostly research-driven to becoming a big and impactful component of a huge number of industries. The major arms race among many businesses takes place in the realm of data: the utility that companies can create with the huge amounts of data they acquire is what can give them an edge over competitors. While the tech giants are fighting this big-data ‘war’, AI is benefiting many companies in different industries.

    AIcompany is interested in the possibilities that AI brings on a smaller scale. With the democratisation of many AI applications, it has never been so accessible for businesses to enjoy the power of AI. Please let us know if you want more information, or if you are ready to start implementing AI and need help!

    At AIcompany, we are interested in AI’s impact on business processes. We have selected the most important themes in order to help you identify sustainable business opportunities. Most of the AI that is used commercially automates mindless and simple tasks in business processes. The most effective AI models have been developed in the fields of:
    • Data analytics and prediction
      The field where customer or business data is used to make in-depth analyses, or to predict future behaviour or states.
    • Computer vision
      This field focuses mainly on image and video recognition; handwriting analysis also falls under computer vision. Most of these models make use of deep learning.
    • Natural Language Processing
      The ability of a computer to extract information from language, such as analysing the sentiment of a piece of text, written or spoken.
    Please don’t hesitate to contact us to find out where AI can improve your company!
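As a toy illustration of that last field, here is a naive lexicon-based sentiment scorer. The word lists and scoring rule are entirely our own invention for this sketch; real sentiment analysis uses trained models rather than hand-written lists:

```python
# Tiny hand-made word lists; a real system would use a trained model.
POSITIVE = {"great", "good", "excellent", "love", "happy"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "slow"}

def sentiment(text):
    """Return a score in [-1, 1]: positive-word count minus negative-word
    count, divided by the total number of words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

print(sentiment("great service and excellent support"))  # positive (> 0)
print(sentiment("terrible delivery, very slow"))         # negative (< 0)
```

Even this crude approach shows the core idea: turning a piece of language into a number a business process can act on.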

Take advantage of our free pilot

Our promise to you: we co-create a proof-of-concept AI model, free of charge, so you can explore how intelligent automation can innovate your business case. We are able to design and build a first proof-of-concept model for free because of our portfolio of standardized models and vendors.

How does it work?

  1. Get in touch to meet the team and get the administrative matters out of the way
  2. In our follow-up meeting, we'll make the business case and set the scope of the project
  3. In a two-hour demo session, we present the proof of concept and discuss the impact and next steps
  4. Now it's time to decide:
    Is the case viable? Together we estimate the costs and design the future


What to expect?

  • A kick-ass demo
  • Initial results of a real model with your own data
  • Model performance scores
  • Insight into future challenges
  • Clear and on-point communication

What not to expect?

  • Live solutions
  • Detailed impact analysis of changes required in your environment
  • Legal/compliance advice
  • Training or education
  • Complete configuration of solution
