
Service Lunch Smart Services: Transformation of the service business of Swiss industrial companies

With Boris Ricken, AWK Group

COVID-19 has posed enormous challenges to Swiss industrial companies over the past two years. The service sector has been particularly hard hit, as it relies on personal interactions with customers. At the same time, digital technologies have changed the service business.

In his presentation, Boris Ricken shed light on the implications of these developments for Swiss industrial companies. He showed how long-term trends in service provision have been reinforced, e.g., local service provision in combination with central products and services. On this basis, different fields of action were elaborated, among others for new digital services and business models.

The presentation was accompanied by a lively discussion and input from the participants. The Smart Services expert group is a very active platform for sharing and growing knowledge and expertise in this field.

Contact person: Jürg Meierhofer

Expert Group Meeting Big Data and AI

The Expert Group Meeting of the Big Data and AI Expert Group (led by Luca Furrer, Trivadis, and Kurt Stockinger, ZHAW) on Tuesday, March 22 was still under the influence of Covid and was held online. In an MS Teams meeting with around 15 participants, we followed two interesting talks on the challenge of integrating data from different systems and harmonizing them.

First, Andreas Imthurn and Dominik Auer from actesy AG presented their product for quickly integrating and modeling data using a standardized model. They introduced a use case from the banking industry in which data from different entities were integrated within a short time span, made possible by the proprietary data model used by actesy. An interesting discussion about the approach followed, analyzing the advantages and disadvantages of proprietary models, and some additional conversations were initiated.

Second, Philip Kraus from Trivadis Part of Accenture talked about how knowledge graphs can be leveraged for data exchange between Swiss hospitals and research institutions. This innovative approach makes it possible to integrate data collected in different systems and make them compatible without losing information. The presented solution enables interoperability while keeping flexibility and data security in mind.

Both talks sparked an interesting round of questions and discussion and will hopefully lead to further contacts. It was exciting to hear about two different approaches to integrating various data sources.

Towards fair algorithms

By Markus Christen, Karin Lange, Christoph Heitz, Michele Loi

Meeting of Expert Group Data Ethics, March 7, 2022

The Data Ethics Expert Group is advancing its expertise in algorithm and data ethics: two timely topics were discussed in the Expert Group meeting on March 7: fair algorithms and the demand for consulting on ethical and societal issues of the data business. Furthermore, an Ethos study on the digital responsibility of enterprises was presented to the community, and two events for 2022 were prepared at the meeting. Sixteen people participated in the online meeting.

Algorithms are increasingly used to support or even automate decisions with a relevant impact on people's lives: access to credit, insurance premiums, and even access to certain resources are increasingly shaped by tools that use machine learning. This raises the issue of the “fairness” of these decisions, in particular the need to avoid discrimination against certain groups of people. Members of the Expert Group are involved in two ongoing research projects on this matter. One project – “Socially acceptable AI and fairness trade-offs in predictive analytics” – is part of the National Research Programme 77 on Digital Transformation; the other – “Algorithmic Fairness in data-based decision making: Combining ethics and technology” – is funded by Innosuisse. Christoph Heitz from ZHAW presented an overview of the ongoing research.

Both projects share the same goal: the development of an integrated methodology for creating socially fair algorithms. To this end, both philosophical questions (e.g., what is “fair” in a given context?) and computer science problems (e.g., how to implement fairness technically in an algorithm) have to be resolved. One intermediate result of the project so far is the “fairness lab” – a toolbox in which data scientists can experience the consequences of implementing different fairness definitions in a machine learning algorithm.

Interested people can gain more insight on this topic in a special workshop that accompanies the upcoming Swiss Conference on Data Science on June 22. During the workshop “How to Develop Fair Algorithms?” participants with a background in data science …

… learn how to combine data-based prediction models with fairness requirements;

… learn how algorithmic (un)fairness is defined and measured in a practical context;

… learn how to construct fair decision algorithms while still harvesting the benefit of a good prediction model; and they

… will apply the methodology to concrete use cases and examples.
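One of the simplest fairness notions in this field is demographic (statistical) parity: a decision algorithm should produce positive decisions at equal rates across protected groups. The sketch below is purely illustrative – the function name and toy data are our own and do not come from the fairness lab or the research projects:

```python
def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-decision rates between group 0 and group 1.

    y_pred: list of 0/1 decisions (e.g. loan approvals)
    group:  list of 0/1 protected-group memberships, same length
    A value of 0 means both groups receive positive decisions at the same rate.
    """
    rate = lambda g: (
        sum(p for p, m in zip(y_pred, group) if m == g) / group.count(g)
    )
    return abs(rate(0) - rate(1))

# Toy data: the model approves 80% of group 0 but only 40% of group 1.
decisions = [1, 1, 1, 1, 0,  1, 1, 0, 0, 0]
groups    = [0, 0, 0, 0, 0,  1, 1, 1, 1, 1]
print(demographic_parity_difference(decisions, groups))  # ≈ 0.4
```

Demographic parity is only one of several competing definitions (equalized odds and predictive parity are others), and part of the research question is precisely which definition is appropriate in a given context.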

Ethics consulting for companies

In a second talk, Sophia Ding, managing consultant at the AWK Group, presented the emerging need of businesses to better understand data ethics. According to a 2022 online survey (N=225, source: https://fh-hwz.ch/content/uploads/2022/01/Trendradar-2022.pdf), dealing with data is the number one reason why companies face ethical issues in their projects. Around 70% of respondents reported potentially problematic projects. The majority of these projects concern the handling of data, for example questionable data evaluations. Much less common are measures for monitoring employees and the use of controversial technologies such as facial or voice recognition.

According to Ding, the lack of specific regulations for data-driven AI systems could lead to a focus on ethical principles as guidance for self-regulation. Thus, companies with mature data science departments are starting to take an interest in ethical questions around data-driven systems. The demand is primarily driven by compliance and risk management, less by data science teams.

An expression of this increasing interest in data ethics is a study by the Ethos Foundation, presented by Jean-Henry Morin of the University of Geneva. The study “Corporate digital responsibility of SMI Expanded Index companies” from January 2022 (source: https://www.ethosfund.ch/en/news/ethos-publishes-its-first-study-on-the-digital-responsibility-of-swiss-companies) analyzed 48 companies. The results show that companies still lack transparency about their digital practices and that their preparation for issues such as ethics in artificial intelligence is still at an early stage.

Upcoming events

The Expert Group will thus put a stronger focus on making current research on data ethics known and applicable for businesses. Two events are planned in this respect. One event on June 16 will focus on “Tools for Ethical Decision-Making”. At the event, the new “digital trust” label of the Swiss Digital Initiative will be presented. The label is awarded after a thorough audit process in which an application must meet 35 mandatory criteria across four dimensions. Sarah Gädig from the Swiss Digital Initiative will introduce the label and the criteria – just a few days after the label is presented for the first time at the WEF in Davos at the end of May. A second event, planned for fall 2022, will focus on the role of ethics in sustainable data business. This event will be organized jointly with the Digital Society Initiative of the University of Zurich and Economiesuisse. Stay tuned!

Databooster – To support SMEs

HEPIA, HES-SO, OPI, and the NTN Innovation Booster Databooster joined forces to support SMEs on their way from a rough idea to a funded research project. On 1 March 2022, a joint event was organized at HEPIA in which 30 interested people from industry took part.

After the welcome by Hélène Gache (Directrice at OPI – Office de Promotion des Industries et des Technologies) and Claire Baribaud (Directrice at HEPIA – Haute école du paysage, d’ingénierie et d’architecture de Genève), Nabil Abdennadher (Professor of Computer Science at HEPIA) presented the Databooster objectives and innovation process to the audience. He pointed out that the NTN Innovation Booster supports the preliminary phase of open innovation, before an innovation project begins.

Two success stories of the last year were presented by SMEs.

First, Andreas Seonbuchner (CEO and partner of CitizenTalk) showcased his journey within the Databooster – starting with initial idea discussions with a research group and securing suitable team partners through a call for participation to the community. An interdisciplinary team including potential customers proceeded to shape his idea further. The clarity needed to assess implementation options was gained through an Innocheck for a feasibility study (together with a university of applied sciences). Finally, a consortium was founded for an Innosuisse project that has already been accepted.

Thereafter, Sami Jaballah, co-founder and CEO of DNEXT Intelligence SA, described his success with the Databooster: with two matched partners and a solid framework that shaped his idea, he is now preparing the Innosuisse project submission. At one stage, Sami admitted, he was unsure about his idea. However, thanks to the competence and expertise the Databooster provided, he was able to solidify his relatively vague idea into a mature, structured concept.

After these two presentations, an open discussion on various topics followed, such as the difference between Innovation Booster and Innosuisse Innocheck, confidentiality and IPR, funding model and budget allocation, the definition of innovation, etc. A delicious aperitif concluded the event. 
Many thanks to everyone involved in organizing the event. We are looking forward to many Calls for Participation from the attendees.

The Potential of Swarm Learning (HPE)

The Expert Group meeting took place virtually on February 15, 2022, with 12 participants.

Tim Geppert from ZHAW opened the meeting and introduced Alexander Volk, Hartmut Schultze and Roger Fontana from HPE. He also introduced the speaker for the upcoming meeting, Andrew Knox, who will introduce the group to the basics of differential privacy.

Afterwards, Alexander Volk, Hartmut Schultze and Roger Fontana, all HPE experts in swarm learning, presented the concept of swarm learning. An overview of the content of the talk is given in the following sections. The meeting closed with a discussion of potential use cases for this technology.

HPE Swarm Learning: Reduce bias in your ML models (and enjoy the side benefits)

By Alexander Volk, Hartmut Schultze, Raymond Freppel, Roger Fontana

The convergence of algorithmic advances, data proliferation, and tremendous increases in computing power and storage has propelled AI from hype to reality.[1] The quality of the results of AI applications is related to the underlying algorithm and the accessible training data. Both can include bias, resulting in an (unintentional) inclination toward certain types of results. One early example is St. George’s Hospital Medical School, where between 1982 and 1986 a computer-based assessment system denied entry to around 60 women and ethnic-minority applicants, screening out women and men with “foreign-sounding names” based on historical admission trends.[2] This is only one example of many.

Machine learning is here to stay, and it is up to all of us who are involved in creating and applying ML to reduce biases. To understand the impact of HPE Swarm Learning on the reduction of bias, it helps to take a short look back at the history of model training.

Initially, Local Learning was performed: local models were trained at each data source. This approach suffered from local data bias and produced inaccurate, suboptimal models delivering inferior results. Next came Centralized Learning, which reduced bias and improved model accuracy because larger data sets were used for training. However, aggregating data in a central location posed new challenges around data privacy, data ownership, and data movement. Federated Learning then alleviated the challenges of centralized learning: a central custodian aggregates the learnings from multiple data sources while preserving privacy. The central custodian, however, poses challenges around resilience, scalability, ownership, and the central power given to such an entity. Local high-availability features may be used for resilience, but if the central aggregator goes down, training stops, and scalability is limited to what the client-server model in a predominantly star topology can support. It’s time for a new, modern approach to machine learning.[3]

The HPE Swarm Learning solution is a decentralized, privacy-preserving, collaborative machine learning framework that operates at the data source. Users of the easy-to-use API, whose architecture is based on blockchain, can benefit from data accessibility across geographies and organizations, leading to larger data sets and increased model accuracy. Since Swarm Learning avoids copying or moving training data, it works on heterogeneous infrastructure and therefore represents an efficient way to scale resiliently.
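To make the contrast with federated learning concrete, the toy sketch below illustrates only the core idea: nodes train locally on private data and exchange nothing but model parameters, which the peers merge among themselves without a central aggregator. This is our own simplified illustration, not HPE's API – the actual framework additionally coordinates the swarm via blockchain, which is omitted here:

```python
def local_step(w, data, lr=0.05):
    """One gradient-descent step for 1-D linear regression y = w * x
    on a node's private data (a list of (x, y) pairs)."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def swarm_round(node_weights, node_data):
    """Each node trains locally, then all peers average their parameters.
    Only the weights travel between nodes; the raw data never moves."""
    trained = [local_step(w, d) for w, d in zip(node_weights, node_data)]
    merged = sum(trained) / len(trained)   # decentralized merge step
    return [merged] * len(trained)         # every node adopts the merged model

# Two nodes (think: two hospitals) hold disjoint data from the same law y = 2x.
data_a = [(1.0, 2.0), (2.0, 4.0)]
data_b = [(3.0, 6.0), (4.0, 8.0)]
weights = [0.0, 0.0]
for _ in range(50):
    weights = swarm_round(weights, [data_a, data_b])
print(round(weights[0], 3))  # 2.0 – both nodes learn the shared slope
```

The merge shown here is a plain parameter average, as in federated averaging; the decisive difference from federated learning is that no fixed central custodian performs it.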

Does Swarm Learning work?

In June 2021, a study demonstrated Swarm Learning’s feasibility: classifiers[4] trained with Swarm Learning achieved higher accuracy than those developed at individual sites. In addition, Swarm Learning fulfils local confidentiality regulations by design.[5]

While healthcare and life sciences have several use cases well suited to Swarm Learning, it can be applied across all industries. The federal government sector can apply Swarm Learning to detect anomalies and threats, as well as for research collaboration. Financial services organizations apply Swarm Learning to financial fraud detection. And in manufacturing, Swarm Learning provides predictive maintenance capabilities.

If you want to learn more about a solution that enlarges your dataset without moving or duplicating raw data, while preserving your data sovereignty and privacy, you can always reach out to HPE. We are here to help.

Further Links:

Nature.com (2021). “A safer way to share health data“

Technical White Paper Swarm Learning


[1] Michael Chui, Vishnu Kamalnath, Brian McCarthy (2020). An executive’s guide to AI. McKinsey.

[2] Nitin Aggarwal (2020). Biases in Machine Learning. towardsdatascience.com

[3] Arshad Khan (2022). HPE Swarm Learning: Increase accuracy and reduce bias in AI models. community.hpe.com

[4] In data science, a classifier is a type of machine learning algorithm used to assign a class label to a data input – for example, an image recognition classifier that labels an image.

[5] Warnat-Herresthal, S., Schultze, H., Shastry, K.L. et al. Swarm Learning for decentralized and confidential clinical machine learning. Nature 594, 265–270 (2021). https://doi.org/10.1038/s41586-021-03583-3

Challenges in Applied Computer Vision

By Philipp Schmid, CSEM, Andrea Dunbar, CSEM and Jakob Olbrich, PwC

Meeting of Expert Group Machine Learning Clinic, February 11 2022

What do expensive mechanical watches, sand, e-waste and cockpits have in common? All pose tough challenges in computer vision. Human eyes are very hard to outperform with cameras and image processing. What people accomplish with their visual sense every day is amazing, and recreating these capabilities remains a complex challenge for computer vision.

At this first in-person meeting of the year, the expert group focused on various real-world vision problems. The event was hosted by PwC at their inspiring location in Oerlikon. Four speakers set the stage for great discussions, followed by a lively seated Apéro.

Lukas Schaupp, PwC «Detecting e-Waste»
The number of electronic devices people dispose of is growing exponentially – not just smartphones, laptops and earphones, but also larger household items like dishwashers, toasters and vacuum cleaners. As raw-material prices skyrocket, automated recycling of e-waste is becoming attractive. Lukas demonstrated strategies to localize and classify different electronic devices in bulk on a conveyor belt.

Andrea Dunbar, CSEM «AI at the Edge – Safety in the next generation Cockpits»
There are multiple reasons for and advantages to processing at the edge. Andrea demonstrated this impressively with the use case of next-generation cockpits. Pilot drowsiness detection and, more importantly, high-accuracy eye-gaze detection (±1°) at rates of up to 60 frames per second are only possible at the edge. What is already reality in the flight simulator today will soon be introduced in every car for the safety of our roads.

Francesco Cicala, PwC «Automatic image thresholding for semantic segmentation»
The quality of concrete depends heavily on the right mixture of sand and pebbles. In the future, a smartphone app should be able to classify the correct mix by assessing the size of the sand and pebbles. Francesco introduced a powerful method to extend Otsu’s thresholding technique into a locally adaptive threshold map for the whole image. The method is robust, fully explainable, and requires no labels. In a next phase it will be extended with a U-Net algorithm to improve accuracy.
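As background, Otsu's classic method picks a single global threshold by maximizing the between-class variance of the grayscale histogram; the locally adaptive extension presented at the meeting is not reproduced here. A minimal global-Otsu sketch (our own illustration, on a 256-level histogram):

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level t maximizing between-class variance
    when pixels <= t form one class and pixels > t the other."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))

    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0
    for t in range(levels):
        w0 += hist[t]               # class 0: pixels at or below t
        if w0 == 0:
            continue
        w1 = total - w0             # class 1: pixels above t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:  # keep the first maximizer
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "image": dark sand grains around 40, bright pebbles around 200.
pixels = [38, 40, 42, 44, 40] * 20 + [198, 200, 202, 204, 200] * 20
print(otsu_threshold(pixels))  # 44 – the first level fully separating the modes
```

A locally adaptive variant would apply this criterion per region and interpolate the results into a threshold map, which is the direction of the extension described above.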

David Honzatko, CSEM «Photometric stereo in defect detection»
Swiss Made symbolizes perfect quality, and requirements in the watch industry are especially demanding. The small parts are highly reflective and complex in shape, and defects can appear at any position. The key to an automated defect detection solution is photometric stereo. David presented a dome setup that can project up to 108 illumination directions. To reduce the hardware requirements while keeping the performance, David presented a new data augmentation technique that boosts the training of any deep learning architecture processing the images.

A full evening of new insights and tough challenges in the field of computer vision. Thanks to everyone for the great participation, and especially to the host for the amazing location and the local Apéro.

Operational ML for Service Engineers: Successes and Pitfalls

By Lilach Goren Huber, Thomas Palmé, Manuel Arias Chao (all ZHAW), Maik Hadorn, Roche

Smart Maintenance Expert Group Meeting 20.01.2022

Once more, we met online for an interesting presentation followed by vivid discussions and networking. Yes, online networking!

We started by proudly introducing our new industrial lead: Dr. Maik Hadorn, International Product Manager at Roche Diagnostics. Welcome Maik, we are honored to benefit from your expertise!

Next, Niels Uitterdijk, CTO and founder of Amplo, exposed us not only to success stories but also to challenges and pitfalls on the way to successful machine-learning-based predictive maintenance. As usual in our EG, this included concrete use case examples, this time from several different application fields.

After an intense Q&A session (we were 29 attendees!) we switched from Zoom to Wonder, where we had the chance to meet and network with group members. Similarly to previous meetings of our EG, this worked out really well!

We look forward to the next meeting – this time, finally, face to face.

Grounding sleep health interventions on objective data

By Ulrich Reimer, OST

Meeting of Expert Group Digital Health, 17 December 2021

Background:

Sleep disorders are frequent and often associated with stress and mental health problems such as depression. In industrialised countries the prevalence averages around 10%, ranging from 6% to 19% across European countries. Since this is a huge societal and economic problem, various members of the expert group aim to develop personal digital assistants that support people in improving their sleep by giving advice on behavioral changes. This should not only improve sleep but also reduce the need for medication. We have coined the activities around this topic “digital sleep health”. As opposed to existing approaches, which aim at treating people with already manifest sleep disorders, we target people whose sleep problems have not (yet) developed into sleep disorders. We will focus on two kinds of sleep problems: excessive daytime sleepiness/hypersomnolence and insomnia.

Current and planned activities:

The group’s activities on digital sleep health proceed in stages. In the ongoing first stage, Clinic Barmelweid and Helsana collected and analyzed data on medication that is highly indicative of sleep problems. The goal of this stage is to obtain more accurate figures on how prevalent sleep problems are in the Swiss population. Ramin Khatami (Barmelweid) and Roman Sauter (Helsana) reported on this ongoing study at the meeting. Final results are expected in spring 2022.

The next stage in the digital sleep health activity will be a workshop in spring with a range of experts to discuss approaches for measuring sleep problems, especially sleepiness, in an objective way that does not require a (costly) stay in a sleep lab. This can be done, for example, with an app together with appropriate sensors. Based on these measurements, alternative treatments can then be devised. Insomnia, the opposite of hypersomnolence, is a second aspect which might be addressed in the same workshop or in a follow-up event.

The third stage will then be about setting up project proposals to get funding for developing solutions along the ideas from the workshop.

In a nutshell:

Digital sleep health is about

  • developing solutions to support people with sleep problems – sleepiness/hypersomnolence and insomnia – that have not yet manifested into pathological sleep disorders;
  • developing means to objectively measure sleepiness/hypersomnolence and insomnia outside the sleep lab;
  • developing alternative treatments that do not require medication but aim at behavioral changes;
  • grounding sleep health interventions on objective data, i.e. suggesting personalized behavioral changes on the basis of the measured objective data.


No Time To Die

By Nicolas Lenz, Litix GmbH

The organization of the 11th meeting of the Spatial Data Analytics Expert Group included some unexpected twists. After postponing the original meeting in September, we also had to switch to an online format at short notice on the new date. Although the excitement couldn’t quite compete with a real agent movie, we were at least pleased that we could finally welcome a large number of participants.

The real excitement came from the announced content. Dr. Joachim Steinwendner from FFHS had offered to host the meeting and had prepared a program on the topic of GIS and health. The two announced talks were titled after Bond movies. They addressed the interface between GIS and health, once from a pharmacological point of view and once from the perspective of geoinformatics.

PD Dr. Stefan Weiler focused on the first view. In his talk “On Her Majesty’s Secret Service” he presented the role of geodata in medicine with numerous illustrations (e.g. the Corona dashboards). Joachim Steinwendner then changed the perspective in his talk “The World Is Not Enough”. He asked the audience to imagine a GIS in which the coordinate system did not map the world, but rather the human body.
The meeting ended with an informal exchange on wonder.me. Plans were made for future collaborations – or at least for the next visit to the cinema.

NFT Symposium

By Michael Lustenberger

The 14th meeting of the expert group “Blockchain Technology in Interorganisational Collaboration” was a special event: the expert group hosted the NFT Symposium in Winterthur on 2 December with more than 100 participants.

NFTs – non-fungible tokens – have become a major topic, not only in the art space but also beyond. This technological development, based on blockchain technology, makes it possible to establish ownership of digital assets. The most famous NFT art piece (Everydays by Beeple) was sold for 69 million USD. Swiss Post has issued so-called crypto-stamps, and NFTs were even given the top spot on ArtReview’s annual Art Power 100 list. Reasons enough to take a closer look at this novel phenomenon.

The event started with a talk by an art historian, Dr. Yvonne Schweizer from the University of Berne. She showed that it is always worthwhile to look back at what has happened in the past: the idea of digital scarcity is by no means new, and earlier artists already found ways to establish digital ownership, e.g., with contracts. Interestingly, Yvonne also showed that there is currently a split between the traditional art market (mainly frequented by the “boomer” generation) and the crypto market (Generation Z).

After this enlightening talk, Dr. Daniel Diemers gave us a glimpse into the future. NFTs are not merely important in the art market; they are the building blocks for the metaverse (as envisioned, for example, by the company Meta, previously Facebook). They allow for the creation of new digital worlds in which users trade and interact with each other. Already today, such digital worlds (Decentraland, Sandbox) are worth billions. In the metaverse we will see the convergence of several different technologies (AI, VR, AR, blockchain), and all of this is guided by NFTs.

After the symposium, the participants exchanged ideas and their astonishment on this new technology during an apéro.