What is artificial in artificial intelligence?

The hype and the reality of AI, and why organisations - both private and government - may be missing the fundamentals.

Is the effort that AI/machine learning requires worthwhile?

When a new technology starts getting noticed, industry hype usually follows. The hype is created partly by the vendor and consulting community looking for new business, partly by professionals in the industry wanting to keep up with and comment on the latest trends, and partly by companies with the ambition to be seen moving with the times and becoming early adopters.

Some technologies even have their lifecycles traced by Gartner, which coined the term “hype cycle”. All this energy and enthusiasm sometimes ignores "the gap from an academic paper to reality and the application to an engineered product."

In fact, it does not matter whether the technology is new, a revival, or a new twist on an older one.

A case in point is electric cars, popular in the 1880s and then superseded by internal combustion engines, and now the focus of a number of automobile brands.

It is the same with artificial intelligence, which was founded as an academic discipline in 1956, though some trace it back to Alan Turing in 1950. Speech recognition, for instance, started at Bell Labs in 1952, although significant advances have been more recent.

The hype contributes both to the diffusion of information and misinformation alike as the technology evolves from the lab to mainstream.

The revival of AI can be traced to 2012 and an online contest, the ImageNet Challenge. ImageNet is an online database of millions of manually tagged images, labelled with words such as “cat” or “dog”.

The competition's challenge is to achieve visual recognition: using software to categorise and classify objects in millions of images.

Competitors are encouraged to share their techniques. In 2015, systems using the “deep learning” technique surpassed the human benchmark of a 95 per cent success rate in correctly labelling an image.

It is machine learning that powers AI and, for instance, makes “AlphaGo” go: the Google project created an AI that defeated one of the world’s most skilled human players at the game of Go.


Progress in machine learning has been possible through a technique called “deep learning”, which comes in many flavours; the most widely used is “supervised learning” with “neural networks”, which loosely mimic the human brain. Labelled examples are used to train the system: very simplistically, the input data is filtered and weighted, and the weights are gradually adjusted until the system gives the right response when presented with a particular input.
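That adjust-the-weights loop can be sketched in a few lines of Python. This is a toy illustration, not from the article: a single artificial neuron learning the logical AND function from labelled examples, with all names and numbers made up for the example.

```python
# A minimal sketch of supervised learning: a single neuron gradually
# adjusts its weights until it gives the right answer for labelled inputs.

def train_neuron(examples, epochs=50, lr=0.1):
    """Perceptron-style training: nudge the weights toward the right label."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), label in examples:
            # Weight the inputs, then threshold to get a 0/1 response
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - out            # how wrong was the response?
            w[0] += lr * err * x1        # gradually adjust the weights
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Labelled training data: the AND function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
```

Real deep learning stacks many such units in layers and adjusts millions of weights, but the principle is the same: labelled data in, errors measured, weights nudged.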

Machine learning is a subset of AI. It is like a giant look-up table to which great computational power is applied, but it is stupid: it only becomes intelligent when humans identify an outcome to support decision-making.

Think of concentric circles in a Venn diagram: the outer circle is AI; the middle circle is machine learning, a subset of AI; and the innermost circle is deep learning, one flavour of machine learning.

The expectation is that AI will lead to the automation of jobs and, further, to new activities and sources of employment, as well as unemployment. What is certain is that all jobs will be impacted by AI in some way. So maybe we have to start thinking about job reconfiguration rather than disappearance.

For now, humans are feeding the AI ‘machine’ in many ways:

  • Writing queries to organise the data from indiscriminate big-data loads

  • Creating and engineering algorithms to make sense of the data and discover data patterns

  • Creating mathematical models to sift through data to produce an accurate look-up table, and to ensure it is reliable enough to translate into value, which is intrinsically the ability to predict.

And that is the end game of all this hard, intellectual work: accuracy in prediction that creates value.
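Very simplistically, the three activities above form a pipeline: query and filter the raw load, fit a model, then use it to predict. A toy sketch, with entirely made-up sales data:

```python
# Hypothetical "big data" load: (month, units_sold, note) records,
# including a broken one that a query must filter out.
raw = [(1, 12, "ok"), (2, 15, "ok"), (3, None, "sensor error"), (4, 21, "ok")]

# Step 1: a "query" that keeps only usable records
clean = [(m, u) for m, u, note in raw if u is not None]

# Step 2: fit a straight line (least squares by hand) as the "model"
n = len(clean)
mx = sum(m for m, _ in clean) / n
my = sum(u for _, u in clean) / n
slope = sum((m - mx) * (u - my) for m, u in clean) / \
        sum((m - mx) ** 2 for m, _ in clean)
intercept = my - slope * mx

# Step 3: the value is the prediction, e.g. expected sales in month 5
forecast = slope * 5 + intercept
```

The prediction is only as good as the data kept and the model chosen, which is exactly why the business questions have to come first.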

The models created are rarely interchangeable between companies. And when all of the above activities are done, the work starts again, because the models need to be updated as the data changes and the experimentation with data evolves. It is not one and done!

As Apoorv Saxena says, AI still requires a tremendous amount of data to train. A five-year-old can look at two cats and suss out that they are not dogs. In his view, today’s AI systems are nowhere close to replicating how the human mind learns, and this remains a challenge for the foreseeable future.

Algorithms, mathematical models, and statistical regression are the skills of data scientists. Remember when they were called statisticians performing data mining, and not so long ago analytics experts, and going way back, data warehouse specialists?

Is the effort that AI/machine learning requires worthwhile? Why? What would the purpose be? Companies investing in AI presumably have great expectations, a view of why AI matters for their competitive advantage, and a judgement on whether it justifiably merits investment and is a priority (or not).

According to this survey, most executives have not yet seen substantial effects from AI on their offerings and processes; however, they have high expectations for the next five years in the areas of IT, supply chain management, and customer-facing activities (see Exhibit 1 from Is your business ready for artificial intelligence?).

If we accept that:

  • There is no doubt about the potential of AI

  • AI has experienced a big revival due to the availability of computational power and big-data in its many forms: transactional, voice, from sensors in IoT, etc

  • Applying analytics to big data is how AI produces value, by offering accurate predictions using techniques such as machine learning, both supervised and unsupervised

Then the creation of value goes hand in hand with understanding the business: what data to collect, what subset of data to keep and mine, and what models to build.

But wait, aren’t all these steps linked to the fundamental need for a company to know what questions are looking for answers?

Of course, conversations with data scientists can help, but it is still a joint effort, not a delegation.

Back to the old adage “what do you want to achieve?”

Because, after all that effort, there is no value if the findings are not followed by some action.

Claudia Vidal is an independent director for Skills4Work Inc, an advisory board member (industry representative) for ITP's (IT Professionals NZ) Accreditation Board; a member of the advisory board of the Strategic CIO Programme (Business School, the University of Auckland), and external committee member for Te Wānanga o Aotearoa. She is an editorial advisory board member of CIO New Zealand. She is a senior technology leader with specific strengths in business strategies enabled by digital, and programme delivery. Reach her at c.vidal@onpoint.net.nz

