(Approx. 6 minutes reading time) This article is the 2nd in a new educational content series from Maxa AI exploring the intersection of enterprise business performance and machine intelligence.
A quick AI orientation
Artificial intelligence (AI) can be described as any technique that enables computers to mimic human behavior. Machine learning (ML) is a subset of AI: it allows computers to learn without being explicitly programmed. Although machine learning lacks the marketing punch of its AI parent, ML is truly what people are excited about.
Traditionally, software programmers write code, feed it inputs, and try to produce desired results. ML turns this upside-down: feed in inputs (i.e. lots of data) and a desired result, and a machine learning algorithm will learn, i.e. it codifies the relationship between inputs and results. That learned relationship is called a "model". Genius!
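As a toy illustration of that inversion (a minimal sketch, not how production ML systems are built): instead of hand-coding a rule, we hand the machine example inputs and desired results and let a simple learning routine recover the relationship itself. Here, ordinary least squares recovers the slope and intercept of a hidden line.

```python
# Toy "learning" example: we never write down the rule y = 2x + 1.
# We only supply inputs and desired results; the routine codifies
# the relationship between them -- the "model".

def fit_line(xs, ys):
    """Learn slope and intercept from (input, result) pairs via least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Inputs and desired results -- the relationship is never programmed.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]          # secretly follows y = 2x + 1

model = fit_line(xs, ys)       # the learned "model": (slope, intercept)
print(model)                   # -> (2.0, 1.0)
```

Real ML algorithms learn far richer relationships than a straight line, but the economics are the same: the expensive part is no longer writing the rule, it is gathering data and framing the problem.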
Despite these dynamics, the cost of ML is still largely driven by… humans. Humans write ML algorithms. Human data scientists shape data and tune models, created using ML algorithms, to solve problems. And domain experts make sure that all this wizardry drives business value.
Let’s explore five factors that are making AI a lot more affordable:
#1: The open source community joined the party
In the early days of AI, many expert firms developed their own ML algorithms, or “algos”. These crown jewels were crafted at great cost.
In the last five years, the open source community has been contributing better and better algorithms, which now dominate the ML landscape. Every week, new algos are released to the global community, often sponsored by the West Coast's – and China's – Big Tech.
But wait a minute – why would they give away the keys to the kingdom? Among other things, strong free algos lead to better AI applications, which in turn feed a handful of giant cloud platforms, differentiate them in a commoditized cloud environment, help build more attractive social media ecosystems, and create stickiness with future talent hires.
#2: Automating the automator
Anyone who looks at the org chart of a traditional AI firm may frown at the sheer number of data scientists. Data scientists operate the heavy machinery of AI. Like other trades and professions that mature, data science is being split into its abundance of low-value, tedious manual tasks, which are ripe for software automation, and its high-value, conceptual, problem-solving competencies.
Many retailers, entertainment providers and transportation services have become software-centric companies. AI itself is becoming a software play (and less so a data science competition), bringing the economies of scale of software platforms. Ultimately, we believe that two species will dominate in the evolution of AI providers – productized software companies with a niche focus, and consulting firms called in for integration support or custom projects.
#3: Munching & processing data
Most executives have basic knowledge of "the cloud", private clouds, data lakes, data warehouses, etc. They typically remember two things: expensive price tags, and a feeling that a techno-dude in California really liked nature metaphors. The reality is that in the last few years, aggregating complex enterprise data and storing it in one usable, secure repository has become quite cheap.
So what about processing, crunching – isn’t that expensive?
ML algorithms are notorious for tearing through computational power. Systems’ ability to process gargantuan amounts of data and calculations has long been a true limiting factor for ML advancement. Fun fact: neural networks – futuristic ML algorithms that loosely mirror how our brain functions – were conceived in the 1940s and 1950s, but became mainstream only recently.
Over a decade ago, in the quest for more power, geeks realized that a certain type of specialized electronic circuit designed to deal with heavy graphics, common in video games, performed particularly well for running advanced ML algos: the GPU – or graphics processing unit – took off as an AI crossover superstar. If you were a serious AI firm, you made seven-figure purchases of GPUs.
However, in recent years, the main cloud platform providers have caught up, letting AI players compete with traditional AI setups without owning any specialized hardware. As a crude example: with custom software, skilled know-how and a credit card, a niche player can blow large corporate GPU farms out of the water.
#4: Long-term maintenance
ML models tuned to your business break over time – mostly for legitimate reasons. You may have heard the breezy expression "model drift", which refers to the poor or degrading performance of a model as the relationships between real-world business data variables evolve (think COVID-19!). Unlike most AI systems built on a time-and-materials basis, quality productized solutions address this issue from day one and eliminate the long-term hidden costs of ownership.
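The monitoring behind that promise can be surprisingly simple in principle. Here is a hedged sketch (function names and the tolerance threshold are illustrative, not any particular product's API): track the model's recent prediction error against its error at deployment time, and flag retraining when performance degrades past a tolerance.

```python
# Illustrative drift check (hypothetical names and thresholds): compare a
# model's recent prediction error against its baseline error at deployment
# and flag drift when performance degrades past a tolerance factor.

def mean_abs_error(predictions, actuals):
    """Average absolute gap between what the model said and what happened."""
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

def drift_detected(baseline_error, recent_error, tolerance=1.5):
    """Flag drift when recent error exceeds the baseline by the tolerance factor."""
    return recent_error > baseline_error * tolerance

# At deployment the model tracked demand closely...
baseline = mean_abs_error([100, 102, 98], [101, 101, 99])   # small error
# ...months later (e.g. a COVID-19 demand shift) it misses badly.
recent = mean_abs_error([100, 102, 98], [140, 75, 130])     # large error

print(drift_detected(baseline, recent))  # -> True
```

Production systems use more sophisticated statistics, but the cost argument is the check itself: when monitoring is built in from day one, drift triggers a scheduled retrain rather than an expensive emergency consulting engagement.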
#5: AI as an unexpected Samaritan
This factor deals less with AI’s affordability, and more with acknowledging its collateral benefits.
AI systems can greatly enhance business efficiency and productivity, and in the right context can “pay for themselves”.
There are also more subtle upsides. Deploying ML requires state-of-the-art horsepower, brainpower, data management, and a flexible environment for translating complex business problems into software code. With this impressive arsenal at your disposal, why not use it for traditional (AI-free) business analytics – the unaddressed questions that frustrate you as a leader?
Here is a concrete example, in the words of a CEO we work with: “For the longest time, our management team has tried to calculate a few indicators that would really help drive our sales performance. It’s just too complicated for us, and beyond our financial and BI tools.” As part of an AI solution, this may be a relatively small ask to fulfill, with high perceived value. The “small” bolt-on deliverable can make AI look like a superhero and a Samaritan.