I was reading Dieter Bron’s excellent piece on Google’s AI reinvention, and it struck me how extraordinarily ambitious and visionary Sundar Pichai’s undertaking was. My colleague, Gurjeet Singh, picked up on the move almost two years ago, but Google’s subsequent headlong rush into the breach – through both hardware and software – represents a historic transformation.

Here is one of the most valuable companies in the world turning on a dime to grasp an opportunity that feels, to many, both real and abstract.

Therein lies the problem for the CEOs of other, equally impressive organizations. AI is real and yet abstract.

  • It is real in the sense that their boards are asking about it.
  • It is real in the sense that they have innovation scouts in the major fintech capitals writing reports, calculating how much money was raised and at what valuations.
  • It is real in the sense that pesky “AI” upstarts are doing more with less and seemingly without sleeping.
  • It is real in the sense that not a day goes by that a major press outlet is not reporting on some breakthrough in the AI space.

But it is not real enough for them to turn the wheel hard (left or right carries too much baggage these days, so choose your own direction) and start to pivot the ship.

These leaders still see their businesses through the lens that got them to where they are today. The average age of a Fortune 500 CEO is 57, which means that while they have had to navigate and leverage technology to get where they are, they are not digital natives. Thus AI feels – like the rise of the Internet, mobile, and the cloud – like a big deal, but not an existential threat.

And so the abstract nature of the challenge lingers.

As someone who has seen his fair share of technology waves I can relate to the problem. As someone who has a front row seat for this one, I want to impart some advice.

  1. See AI for the double-edged sword that it is.

    AI is real, it delivers order-of-magnitude performance improvements over the state of the art, and it does so across almost any discipline. That is the good side.

    AI, used against you, especially by another entity with similar resources, represents a fight your company won’t walk away from.

    Here again, my use of the term AI feels “abstract” and that criticism is fair. It is a term, nothing more.

    Let me explain. Think of your business’s most pressing problem. The one that unlocks massive value, that you have struggled with for years, eking out 1-3% improvement. That is the opportunity and the risk.

    Consider risk models at financial institutions. They are a core element of the business – the engine of these organizations. If, as a financial institution, you could leverage AI to manage risk more effectively, more granularly, and with more inputs, you would be able to materially impact the competitive landscape. This is where you must find a way to apply AI to the problem, because if your top competitor, or your brand-new competitor, figures this out, they will hold every advantage.

    Financial institutions are but a single example; we see similar dynamics in pharma, manufacturing, and healthcare.

    Get to the sword first.
  2. See the organizational change that comes with the technology

    Ultimately, AI is a technology. We can debate whether AI is the wave to end all waves, but we know it is a big one – the odds of Google, Amazon, Microsoft, Facebook, Baidu, and IBM all being wrong about the same thing are rather small. Each one has “bet the company” on AI.

    Technology, however, is in service of some larger objective. Whatever that objective may be, the CEO must acknowledge that this technology, AI, will bring with it massive organizational change.

    To use the millennial term, it would be a massive fail for an enterprise to do the work and make the investment to build out capabilities in the AI space, only to find it has not prepared for the organizational changes AI will bring.

    This isn’t a discussion about job loss (although that is a significant part of it); it is a discussion about how to create an organization that can keep pace with, understand, monitor, and – most importantly – act and react in concert with the outputs of these systems. This will be a challenge of the same calibre as the technological one.

    Start thinking about it now.  
  3. Understand the difference between Justification and Transparency

    There is no abdication of responsibility associated with intelligent technologies. Sundar was clear on this point in his interview and it should be equally clear in every CEO’s mind. There is no excuse of “the machine did x.”

    Understanding what the machine is doing is paramount. Transparency is knowing what algorithm was used, which parameters were used in the algorithm, and, even, why. Justification is an understanding of what it did and why, in a way that you can explain to a reporter, shareholder, congressional committee, or regulator. The difference is material and goes beyond some vague promise of explainable AI.

    While much has been written on deep learning, the majority of the work is in academic settings. There is a reason for this: neural networks (convolutional, recurrent, or otherwise) are a black box. There are very few settings in the real world, potentially none, where performance is more important than understanding.

    Consider something as simple as a churn predictor. If one approach yields a 50% better prediction, you still need to understand why. If you don’t – and it turns out the improvement is driven by redlining certain neighborhoods based on socio-economic indicators – there will be explaining to do. And this is one of the more benign examples; the ethical considerations of which customers get offered which services are another can of worms. Additionally, if you don’t understand what a technology is doing, you are vulnerable when it begins to malfunction: without knowing how it works, you cannot effectively identify the reasons for the failure or the ways of correcting it. (A rough sketch of what such a check might look like appears after this list.)

    Go beyond transparency; your job likely depends on it.
  4. Take a realistic view of your data problem

    I am intimately aware of the challenges presented by today’s data landscape. Siloed, distributed, unstructured, unlabeled – there are as many terms as there are “big data” vendors. That doesn’t change the simple fact that your data will never be perfect. Ever.

    The inclination is to wait until everything is just right before attacking the analytics portion. As noted above, waiting has real costs in the emerging world of AI. If your data is really a mess, get focused on it. If it is sort of a mess, get going with AI.

    Data is like fuel, more refined is better, but any will do when you are trying to get away from Jason Voorhees.
  5. Think about scale

    Scale is another one of those words, like “big data” that has been stripped of its meaning by overuse and misuse. Fundamentally, however, scale is about having the capacity to grow, unfettered by constraint.

    When you think about intelligence in your organization, and you think about the organizational implications, you have to think about scale.

    Scale matters because scaling an organization in the AI era is different.

    Scale comes in multiple dimensions. It can be computational resource scale. It can be database scale. It can be headcount. But there is a different dimension of scale that is frequently overlooked – and that is UX scale.

    UX scale is the ability to build interfaces that provide the biggest leverage to a broad swath of the organization.

    To impact the entire organization, AI needs to be packaged in something familiar to your users. That is generally an application. Larry Ellison got on board with this concept the other day, indicating he was going to bake it into everything Oracle does (he had better get the memo on Justification vs. Transparency). He plans to use the trusted vehicle of an application to carry his AI vision into the market. That is probably the right answer.

    AI needs UI.
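To make the “understand why” point from the Justification discussion above a little less abstract, here is a minimal sketch of the kind of first-pass check an analytics team might run on a churn model: permutation importance, which measures how much held-out performance drops when each input is scrambled. Everything in it is an assumption for illustration – the customers.csv file, the column names, and the choice of a gradient-boosted model are hypothetical – and a real justification or fairness review would go much further.

```python
# Illustrative sketch only: which inputs actually drive a churn model's lift?
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical customer table: behavioural fields plus a geographic proxy.
# (customers.csv and these column names are assumptions, not real data.)
df = pd.read_csv("customers.csv")
features = ["monthly_spend", "support_tickets", "tenure_months", "zip_income_index"]
X, y = df[features], df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Permutation importance: how much does held-out accuracy drop when each
# input is shuffled? A large drop for the geographic proxy would suggest
# the model's "better prediction" is redlining by another name.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
ranked = sorted(zip(features, result.importances_mean), key=lambda t: -t[1])
for name, drop in ranked:
    print(f"{name:>20s}: {drop:.3f}")
```

If the socio-economic proxy dominates that ranking, the 50% better prediction from the churn example deserves a much harder look before anyone takes credit for it.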

There is quite a bit to chew on here, but if I were to try and distill it, I would do so as follows:

Make AI less abstract by attacking that mission-critical objective inside your organization. Reduce the abstraction, and don’t forget the attendant advice in the process.

Good luck, now get going!


