Jim Snabe, Chairman of Siemens and former co-CEO of SAP, on digital transformation, AI, COVID-19 and Tech for Life

Alexandra Mousavizadeh
9 min read · May 14, 2020


“The situation we find ourselves in drives home the need for a compass for the
responsible use of technology. One in which true north, as it were, is set by purpose, not profit alone.”

Jim Snabe is a man who certainly knows a huge amount about digital transformation. The Danish businessman is currently Chairman of Siemens and a former co-CEO of SAP, which makes him one of Europe’s key industrialists. He also finds time to work with the VC firm Balderton Capital, as well as serving on the Board of Trustees of the World Economic Forum.

Jim has also recently co-authored a book with Lars Thinggaard, entrepreneur and CEO of Milestone Systems, called Tech For Life — putting trust back into technology. He describes it as “a rallying cry to leaders and influencers in the tech community to take control of the use of technology. It provides them with the tools needed to harness technological change and channel it to develop, create and manage technology in a responsible way to improve the state of the world.”

At the Tortoise Global AI Summit on May 15th 2020, Jim will be speaking on the panel “The business reset: AI’s role in the recovery.”

He will discuss whether automation and the movement towards robotics will accelerate as a consequence of the COVID-19 outbreak.

You can get your tickets for the event here. It is free to register.

A couple of years ago you told the FT: “I have a hypothesis — which is starting to be proved — that we are entering a phase where traditional companies, who are present in the physical world, begin to exploit digital ways of working to help develop their business.” In the ongoing process of digital transformation that you described then, how important is Artificial Intelligence?

As traditional companies have digitized over the last few years, AI has increasingly served as the accelerant in their transition. At Siemens, some 25,000 engineers now work seamlessly with both software and hardware, and AI is found in almost every product. AP Moeller Maersk is digitizing what was once a manual, labor-intensive industry: transportation. It is seeing efficiencies from the digitization of processes and the insourcing of software capabilities, while gaining deeper insight into the supply chain made possible by IoT-enabled containers, for example.

This digital transformation needs to be seen in the broader context of value creation, where much greater value is now being extracted from business data than from personal data. Compare the sharing of pictures with friends and family across social media with the deep insights now gained through business data. Consider the worth of the data gleaned from a better understanding of energy use — which allows us to optimize for greater efficiency, embrace renewable forms of energy and reduce carbon emissions, all the while making energy more affordable. That’s real value. I call it Industrial AI.

The value of Industrial AI is also being unlocked in transport, food production and healthcare, to name but a few sectors. Digitization is reducing delay, cost and environmental impact in manufacturing and transport, minimizing waste in food value chains, and giving us greater diagnostic insight and accuracy in healthcare.

Underpinning all this value creation is Industrial AI: the great enabler of converting real-world data into decisions and actions. Industrial AI allows businesses to find new relevance in their physical assets and data. It helps them extract even greater efficiency from wind turbines, make autonomy possible in vehicles, and reduce the carbon emissions of the ships that enable supply chains across the world.

From the perspective of the multinational companies you work with — what do you see as the main barriers to adopting AI? Are they technical, cultural or a mix of the two?

Fundamentally, most of the barriers I have seen are political or ethical — not
technological. And these types of barriers, though visible, can be hard to remove. Consider the energy meter. In the hands of a responsible company, it provides valuable insight into energy use, allowing the efficiency savings I mentioned earlier.

However, in the wrong hands, that same energy meter allows a bad actor to understand who is at home, when, and even what they are doing. AI should force organizations to consider how they are using data, and the willingness of their customers to share that data. Is AI being employed to further entrench data monopolies, or to maximize value for society? And even if AI is being used to save lives, how far do we go in this pursuit? In the Tech for Life book, we ask three questions of leaders, practitioners and innovators in tech:
• How do we use data without losing privacy?
• How do we use platforms without creating monopolies?
• How do we use AI without losing trust and control?

The final question is critical, because it defines the role of human beings. In my opinion, human beings must remain accountable for, and thus in control of, AI. To do so, we must be open about its use and we must solve the issue of AI explainability.

Clearly, our understanding of the role of AI in various applications can differ, based on its relative importance. It is interesting, though perhaps unnecessary, for listeners to understand how Spotify selects the next song in a queue. But in Industrial AI, and in particular in healthcare, explainability is required in order for humans to be accountable for the decisions and actions recommended by AI.

Finally, one barrier to the wider adoption of AI is the greater focus on security which it demands of those that use it. System security has always been a challenge, but it will be even more so in the future as AI finds its way into every business process and onto every device.

Your new book is called Tech For Life — putting trust back into technology. How has that trust been eroded, and how can we move to once again see technology as something that empowers us rather than something to fear?

While technology has always been a force for good, it is also open to abuse, misuse and malicious intent. And with the benefit of historical perspective, many of the noble uses to which it has initially been put have given rise to unwelcome and unforeseen consequences.

Technology is developing at such a rate that most of the issues it raises along the way — such as privacy — are not being adequately addressed. We trust the mailman or mailwoman to deliver letters to our homes without opening them. Yet smart speakers are listening to every word uttered in our homes, while our emails are read by those that provide the service in order to make money on advertising. In our digital world, we seem to have accepted a level of intrusion that previous generations would not have permitted. A younger generation of tech users — those currently in their teens — doesn’t seem to care. It should.

Worse, many people are now unable to discern between misinformation and fact. The inability to differentiate between a piece of political advertising, an innocent meme, or fake news on social media has seen elections influenced and democracy undermined. This is dangerous and, in many instances, seems to be driven by dubious business models that treat the customer as the product.

The key question Tech for Life asks is not what is possible to do with tech, but what is desirable. Considering — and properly addressing — this question will allow us to regain trust in technology.

How do you feel COVID-19 has impacted the process of digital transformation, and in particular the adoption of AI by European enterprises?

If anything, the COVID-19 pandemic has accelerated the adoption of technology and the AI that increasingly underpins it. Lockdown has forced organizations to adopt new technology, and it has required people to work in entirely new ways.

I detect a shift from thinking in terms of ‘just in time’ to ‘just in case’, where organizations are reconsidering the resilience of their supply chains and, indeed, the strength of their wider business operations. This can only result in greater reliance on Industrial AI, as more complex supply chains and business processes need to be managed and will serve up more data that needs to be acted upon. I also expect recovery investment to go primarily into digital transformation, as manufacturers automate processes to ready themselves for when the next crisis hits, whether that is a pandemic, climate-related disruption or political upheaval.

Of course, just like the broader digital transformation, the issue of responsible use of technology also underpins the current technological response to COVID-19. The same debate around privacy versus efficacy is seen in the development of tracing apps that aim to manage the transmission of the virus by tracking our movements on our smartphones. Two distinct approaches have emerged: placing the captured data in a central database, or leaving it on our devices.

Again, it is a matter of design and intent. Technology can — and will — help us solve the pandemic. Putting sufficient testing procedures in place and learning how people are moving around will allow us to focus on those most at risk and keep them safe.

Yet whether stored in the cloud or on our devices, this data must be used
responsibly. Following Tech for Life’s principles for the responsible use of technology makes a debate around privacy and efficacy unnecessary.

Do you think the crisis will accelerate the process of businesses putting a sense of purpose at the heart of their strategies?

I have noticed two particular types of companies emerge during this crisis: those focused solely on the short term, and those looking to the longer term. The former considers the traditional bottom line above everything, slashing costs, laying off people and attracting understandable public criticism as a result.

Organizations with a view to the longer term are instead using the crisis to develop, refine and demonstrate a higher sense of purpose. They are playing their part in the fight against the virus, of course: repurposing supply chains, modifying manufacturing processes or pivoting research efforts to help contain it. Most importantly, they are innovating and mapping out a future built around multiple stakeholders. The COVID-19 crisis is driving a shift towards stakeholder value over financial worth alone. This can only be a good thing.

People get understandably confused in a crisis and are tempted to think in terms of the short term alone. Doing so is dangerous, and results in a loss of orientation and lack of clarity about purpose. Instead, organizations need to strengthen their strategic orientation and commitment to where they are going, establishing a North Star that can be followed by all their people. This, in turn, will empower and encourage them to figure out how to deal with the practicalities of the current day-to-day situation.

In the current crisis, it is these people who are on an organization’s frontline — they are the heroes, not the managers. But they need a purpose to help guide their priorities and decisions.

Your book was written before the crisis. Is there anything in it that you would have written differently? Or maybe some points that you would now want to expand on?

As we call on technology to help us out of the COVID-19 pandemic, these challenging times only confirm how important it is that technology is used responsibly. Indeed, the subject is more relevant than ever.

Global efforts by Big Tech and governments alike to establish the scope of tracing apps capture this need well. The two distinct schools of thought about how the data these apps collect is stored are a case in point. Reading the Tech for Life book and understanding its aims will help readers understand which approach is best.

The situation we find ourselves in drives home the need for a compass for the
responsible use of technology. One in which true north, as it were, is set by purpose, not profit alone.

In turn, our development and use of technology must focus on four areas:
• It must be relevant, in the same way that the UN’s Sustainable Development Goals are a call to immediate action.
• It must maximize value for society as a whole.
• It must respect human rights and maintain our privacy.
• It must be open and inclusive so that all may benefit.

Importantly, this has to be a bottom-up approach. It must be a movement: not a regulation or knee-jerk reaction. In the time we currently find ourselves in, the principles of Tech for Life are needed more than ever.

You can get your tickets for the Global AI Summit here. It is free to register.
