AI series part one: How artificial intelligence and machine learning have advanced with data proliferation
The concept of artificial intelligence has gained prominence in our daily conversations over the last two decades or so, but it existed long before. Forbes begins its timeline of AI in 1308, when “Catalan poet and theologian Ramon Llull publishe[d] Ars generalis ultima (The Ultimate General Art), further perfecting his method of using paper-based mechanical means to create new knowledge from combinations of concepts.”
Llull was onto something big, and yet conceptually simple: input and output. Through machine learning (systems taking in data and weighing that data in order to adjust their responses), we can produce a system capable of artificial intelligence, i.e., the ability for a machine to do what a brain does: synthesize information and take resulting action. The two ideas are closely related, though not identical; artificial intelligence is the broader concept of “smart” machines, and machine learning is the “application of artificial intelligence”: the machine is fed data and produces learnings without human intervention (besides the data feed).
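To make that input-and-output loop concrete, here is a minimal sketch in Python; the numbers and the one-weight model are invented purely for illustration, not drawn from any real system. The machine starts with no opinion, compares each response it produces against the data it is fed, and adjusts its internal weighting accordingly, with no human intervention beyond the data feed.

```python
# Minimal sketch of machine learning's core loop: take in data, weigh it,
# adjust responses. All values here are made up for illustration.

inputs = [1.0, 2.0, 3.0, 4.0]    # data fed into the machine
targets = [2.1, 3.9, 6.2, 8.1]   # responses we want it to learn (roughly 2x)

weight = 0.0                     # the machine's adjustable "opinion"
learning_rate = 0.01

for epoch in range(200):
    for x, y in zip(inputs, targets):
        prediction = weight * x               # produce a response
        error = prediction - y                # compare it to reality
        weight -= learning_rate * error * x   # adjust the weighting

print(f"learned weight: {weight:.2f}")  # settles near 2.0, no human tuning
```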
Of course, the greater and more complex the data and the learning model, the richer the potential learnings become. As data grows exponentially (a 2016 IBM Marketing Cloud report detailed the growth: 90 percent of the data in the world had been created in the previous two years alone, at 2.5 quintillion bytes of data per day), so, too, do our learning capabilities and the presence of artificial intelligence in our businesses. Today, we are not yet in Terminator territory with artificial intelligence, but the proliferation of data and capabilities in this area has certainly set off alarm bells for some scientists concerned with its moral consequences.
Today, a car’s global positioning system can learn your movement patterns, adjusting its directions to match your personal route rather than the recommended one. A predictive model could tell you which product to purchase next based on your past years’ worth of purchases, or just your past week’s. The potential is near infinite for every industry, fuel and convenience retail included. Your prices, costs, volumes, brands, constraints, rules, and exceptions can all be fed into systems that weigh the data and produce appropriate recommended outputs, as the sketch below illustrates.
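Here is a hedged sketch of such a system; every field name, weight, and constraint below is invented for this example, and a production pricing engine would weigh far more inputs with far more sophistication. The shape is what matters: data in, weighted, constrained by business rules, recommendation out.

```python
# Hypothetical sketch: weigh pricing inputs and emit a recommended price.
# The weights, thresholds, and margin rule are invented for illustration.

def recommend_price(cost, competitor_price, recent_volume, floor_margin=0.03):
    # Weigh the inputs: lean toward the market price, anchored to cost.
    market_pull = 0.6 * competitor_price + 0.4 * (cost * 1.10)

    # Let weak recent volume pull the recommendation down slightly.
    volume_adjustment = -0.01 if recent_volume < 1000 else 0.0
    candidate = market_pull + volume_adjustment

    # Apply a business rule as a hard constraint:
    # never price below cost plus a minimum margin.
    return max(candidate, cost * (1 + floor_margin))

# Example: a site with cost $2.50/gal, a competitor at $2.89, soft volume.
print(round(recommend_price(cost=2.50, competitor_price=2.89,
                            recent_volume=850), 3))  # -> 2.824
```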
But there’s a challenge: more data doesn’t always mean better data, and no data matters unless you can actually use it to make decisions. Because there is a level of subjectivity to human decision-making and a level of objectivity to algorithmic decision-making (which can, eventually, give way to subjectivity of its own), there is a delicate and necessary balance to strike between algorithmic learning and data inputs on the one hand and your own business judgment on the other. Finding your sweet spot is a learning experience, too.
In our next post, part two in this three-part series, we will work through the ways machine learning and artificial intelligence can positively affect your fuel pricing strategies, and how you can incorporate data into your decisions without plunging your market into volatility.