
My Takeaways from Engage 2023

For the AI-curious, Coforge’s Engage 2023 proved to be a fascinating day and a half of listening to speakers who are true experts in their fields and interacting with clients and colleagues at various stages of their own AI journeys. Engage 2023 was this year’s edition of Coforge’s annual customer event and was held at the luxurious Ritz-Carlton Grande Lakes in Orlando. This was my first year attending, and I was taken aback by the candor, curiosity, and warmth of the participants. Some were clearly further along the journey and were willing and open to engage with those who were not. Here’s a quick summary of what I learned from Engage 2023.

1. The Power of AI is in ‘Human+’

One theme that emerged across sessions is that the true power of AI will be to augment, not replace, humans. As Lee Robertson put it, “AI is not coming for your job, but AI-powered humans are.” In my opinion, jobs will be lost to AI, especially in roles with rote, repetitive tasks. The hope, though, is that the humans doing those tasks move on to more value-additive work.

The flip side, however, is a better experience for consumers. Lee shared a personal story of lost baggage and the number of frustrating chatbot and human interactions it took him, over a few weeks (I think), to finally reach a resolution. He then shared a demo built by the team at BCG using a GenAI-powered chatbot, which resolved the same issue in a matter of minutes.

Let’s get to the Human+ part. David Truog shared an example of how AI is assisting doctors in screening patients for cancer. The most interesting part of this example was that the doctor plus AI was more successful than either the doctor or the AI alone, thereby making the case for Human+. Other examples included co-pilots for wealth managers that could read and understand vast amounts of contextual information about their clients and help the wealth managers create customized strategies, or a co-pilot for a restaurant manager that kept track of inventory, managed staff scheduling, and surfaced the top priorities the manager needed to handle personally.

The speed at which AI is adopted in our day-to-day lives is contingent upon how much trust we as humans are willing to place in it. Until you, as a user, are fully comfortable letting AI make a cancer diagnosis or fly your plane, having a highly skilled human acting in concert with AI is the way to go. Explainability is another concept key to driving AI adoption. Unlike a piece of code, which produces deterministic outcomes, an AI model is probabilistic and has learned from a corpus of data. While you can have visibility into its inputs, you may not always understand its process. So, while a Human + AI cancer diagnosis may be easy to accept, an AI-only credit denial might still be hard.
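The deterministic-versus-probabilistic distinction can be made concrete with a minimal sketch (my own illustration, not from any of the talks; the function names and toy vocabulary are invented). A conventional function always maps the same input to the same output, while a language model samples its next token from a probability distribution, so repeated runs can legitimately disagree:

```python
import random

# Deterministic code: the same input always yields the same output.
def apply_fee(balance: float) -> float:
    return round(balance * 0.98, 2)

# A toy "language model": given a prompt, it samples the next word from a
# probability distribution, so repeated calls can produce different answers.
NEXT_WORD_PROBS = {"the cat sat on the": {"mat": 0.6, "sofa": 0.3, "roof": 0.1}}

def sample_next_word(prompt: str, rng: random.Random) -> str:
    words, weights = zip(*NEXT_WORD_PROBS[prompt].items())
    return rng.choices(words, weights=weights, k=1)[0]

assert apply_fee(100.0) == apply_fee(100.0)  # always identical

rng = random.Random()
samples = {sample_next_word("the cat sat on the", rng) for _ in range(50)}
# Over many draws we typically see more than one plausible continuation.
```

This is why "visibility into the inputs" is not the same as understanding the process: the distribution, not a traceable rule, determines each output.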

Data is your differentiator.

R “Ray” Wang and David Truog both made strong cases that companies with access to more high-quality data will have an edge in the AI arms race. Large language models have already consumed vast amounts of publicly available data. Incremental data inputs, thereafter, are likely to be privately owned, and the cost to acquire that data will be exponentially higher. Most companies aren’t looking to open, general-purpose AI models to solve their business problems. The consensus seems to be to adapt these models to a private data set or corpus of knowledge, so that the probabilistic outcomes are somewhat contained. So, companies that can rely on historical data (with all the possible mapping issues) and continue to generate vast amounts of new data “at the edge” will have an advantage. Ray gave examples of data at the edge: today, a driver delivering a package might generate data such as the delivery address, the time of delivery, and the person taking delivery. Capturing richer signals, such as the weather at the time of delivery or the tone of the recipient, could make AI models more effective and lead to further innovation.

In a classic example, Ray asked if anyone in the audience actually fills in a product warranty card. About three people in a room of almost 200 raised their hands. The issue he wanted to highlight is that, as the manufacturer of a product like a toaster, the first time you hear from one of your customers is likely when they have a complaint. You have no idea who they are, where they live, or how they use your product. You spend time and money to resolve their complaint, which eats into your margins on an already low-margin product. Instead, he proposed a subscription service for toast: rather than buying a toaster, you subscribe to ‘toast as a service.’ An IoT toaster, coupled with a sign-up form at the outset, gives the manufacturer the ability to monitor performance, understand causes of failure, tie failures back to specific suppliers, understand usage patterns, see the impact of different weather conditions on performance…you get the point. All of a sudden, the manufacturer is in a position to use this data to create other subscription models or sell performance data to third parties; essentially, monetizing a data flow. This was a unique perspective that Ray brought to the table: thinking of processes as data flows and looking for ways to monetize them.

2. Strong governance is critical.

David Truog did an amazing job of going back to first principles on GenAI and dispelled several common misconceptions. My biggest takeaway from his session was this: “GenAI models ‘merely’ generate probabilistic auto-completions.” The outputs, while powerful and often accurate, are not reasoned logically. Quoting David: “There is no inference. There is no conclusion. There is no explanation.” Given all of this, it is critical that organizations set guardrails around using GenAI in their business processes. The provenance of data (source, authenticity, quality), the types of LLMs used, the ‘temperature’ settings, and the need for humans in the loop to review outputs are all factors that need to be considered.
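For readers unfamiliar with the ‘temperature’ setting mentioned above, here is a minimal sketch of the underlying math (a standard technique, not code from any speaker’s demo; the logit values are invented). Temperature rescales a model’s raw scores before they are converted to probabilities: low values concentrate probability on the top candidate, making outputs more predictable, while high values flatten the distribution, making sampling more varied:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into sampling probabilities.

    Low temperature sharpens the distribution toward the top-scoring
    token (more deterministic); high temperature flattens it (more
    varied, 'creative' sampling).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)
hot = softmax_with_temperature(logits, 2.0)
# cold gives almost all probability to the first token; hot spreads it out.
```

This is why temperature belongs on a governance checklist: the same model, prompt, and data can behave very differently depending on this one dial.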

Several speakers touched upon the ethical and legal questions that remain unanswered with respect to the widespread use of AI. The ethical risks include bias (gender, racial, etc.) and discrimination in model outputs, lack of transparency, use in widespread surveillance, and rampant copyright/trademark infringement. David talked about a recent lawsuit filed by Getty Images against Stability AI, the company behind the Stable Diffusion AI art generator, for allegedly copying more than 12 million images from its database without prior approval or compensation.

It remains to be seen what impact GenAI will have on the upcoming 2024 presidential election. The Cambridge Analytica scandal showed that the outcome of an election could be swung one way or the other by sharply targeted advertising driven by sophisticated analytics. Add to that the ability to create deepfake videos, images, and hyper-personalized messages, and the average person is going to have a hard time distinguishing fact from fiction.

3. Culture of innovation

Recognizing that the power of AI lies in augmenting humans rather than replacing them is important because it helps drive adoption internally. Prof. Pattie Maes’ fascinating discussion, titled “Will AI live up to its promise?”, focused on aspects of AI I had not even considered before. AI’s potential to amplify human intelligence goes beyond decision-making, information processing, and creativity to seemingly ‘softer’ areas like attention, motivation, memory, and learning. It’s important for companies to message this appropriately to their employees, to dispel fears about job loss and instead foster a culture of innovation. Lee, in his presentation, noted that 70% of the ‘building blocks’ of an AI journey are people, processes, and culture.

Therefore, it is incumbent upon leaders to set up sandboxes and allow innovation to flourish organically, under a framework of governance. My colleagues Deepak Bagchi and Sudharshan Seshadri demonstrated Coforge’s Quasar platform, which lets organizations create these sandboxes so teams can test AI use cases in a controlled environment. In the demo, they covered capabilities like choosing the right type of LLM for the specific use case at hand and using the Playground functionality to build use cases with the platform’s low-code framework. A safe environment like this allows teams to develop and test their understanding of AI and its applicability to their specific businesses. Helen Johnson, a Coforge client and panelist, said it best when asked to sum up her advice to companies kickstarting an AI program: “Play, but be safe.”

4. The possibilities seem endless.

Prof. Maes really opened my eyes to the endless possibilities of AI as she spoke of some of the work being done at the MIT Media Lab. For example, using AI as a teaching aid to create new and innovative ways to capture a student’s attention: their research showed that you are more likely to retain knowledge taught to you by someone you idolize than by a random stranger. So, imagine learning about the theory of relativity from an avatar of Einstein himself, or engaging in a virtual conversation with Vincent van Gogh to learn about his life and sources of inspiration.

Another amazing use case was using AI to tackle the memory loss that comes with old age (or just in general). We heard that OpenAI was partnering with Jony Ive (former head of design at Apple) to create AI wearables. Imagine a discreet headset that listened in on all of your conversations and prompted you with inputs when you needed them, just like a 24/7 personal assistant.

One of the speakers asked whether AI would be the next ‘iPhone moment’ or ‘Metaverse moment.’ All leading indicators point to an iPhone moment: the technology becomes ubiquitous, an almost irreplaceable part of everyone’s lives, and the use cases go far beyond what was originally envisaged. Whether you choose to take baby steps or giant strides, the way forward includes, at a minimum, some basic governance guardrails, a communication program that involves and excites your teams, a culture that rewards safe experimentation and innovation, and an understanding of how Human+ brings about the best possible outcomes.

I will conclude by borrowing a quote from David Truog’s presentation. He, in turn, quoted Justin Trudeau who said, “The pace of change has never been this fast, yet it will never be this slow again.”

Let’s engage.