
Geospatial insights for all – from unique applications to future trends

By Nicolas Lenz, Litix, Stefan Keller, OST, and Reik Leiterer, ExoLabs

The Expert Group Spatial Data Analytics used the occasion of the 2022 General Assembly of the Data Innovation Alliance in Zurich to organise an expert meet-up beforehand – and 18 experts from research and industry took the opportunity to participate. The aim of the event was, on the one hand, to identify topics of particular interest to the spatial data community, which will be taken up at dedicated events in 2023. On the other hand, current trends in the field of geodata and applications and solutions related to geodata were presented and discussed. The meeting concluded with the presentation of exciting data sets and tools that play a central role in the participants' current work.

In the area of trends, possible thematic clusters of particular interest were outlined, developments in methodological approaches were presented, and new solutions and applications were discussed.


In the context of the UN's Sustainable Development Goals (SDGs), the Disaster Mitigation and Response theme complex stands out – themes that are also of central importance in Switzerland, and where geodata and their use and analysis are key to protecting the environment, infrastructure, and the population. This is linked to the wide field of Location Intelligence, e.g., visualizing (you all know heat maps, don't you?) and analysing volumes of spatial data (often linked with non-spatial data) to enable holistic planning, insights for problem-solving, and advanced spatio-temporal forecasting.
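As a minimal illustration of the heat-map idea, here is a short sketch in Python (assuming NumPy and Matplotlib; the coordinates are purely synthetic stand-ins for real geodata such as incident or sensor locations) that renders a point-density heat map:

```python
import numpy as np
import matplotlib.pyplot as plt

# Two synthetic clusters of lon/lat-like points around hypothetical hotspots
rng = np.random.default_rng(42)
points = np.vstack([
    rng.normal(loc=(8.54, 47.37), scale=0.02, size=(500, 2)),
    rng.normal(loc=(8.60, 47.42), scale=0.01, size=(200, 2)),
])

# Bin the points into a 2D histogram and render it as a heat map
plt.hist2d(points[:, 0], points[:, 1], bins=50, cmap="hot")
plt.colorbar(label="points per cell")
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.title("Density heat map of synthetic point data")
plt.show()
```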

Regarding data acquisition and evaluation, many new sensors, algorithms, and software packages are currently being developed in the field of 3D representation. This applies not only to the functionalities in existing solutions (e.g., 3D-GIS), but also to the linking of spatially explicit information with, e.g., the classic 3D model approaches in infrastructure planning (BIM) – with which we have gained another concept in the spatial universe: GeoBIM.

A lot of data opens up new possible approaches – and more and more use is being made of Machine Learning (ML) methods. But ML has very specific requirements for the data to unfold its full potential. One way to meet these requirements is to generate so-called Synthetic Data. Synthetic data can not only compensate for an insufficient data basis, but also anonymise data in such a way that an exchange beyond the boundaries of one's own organisational unit becomes possible, even when working with sensitive information.
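As a rough sketch of the basic idea (a toy example, not any particular synthetic-data product; the columns and distributions are invented), the snippet below fits a multivariate Gaussian to numeric data and samples new rows that preserve its means and correlations:

```python
import numpy as np

def synthesize(real: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Sample synthetic rows preserving the means and covariances of `real`."""
    rng = np.random.default_rng(seed)
    mean = real.mean(axis=0)
    cov = np.cov(real, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Hypothetical sensitive data: age and income of 1000 customers
rng = np.random.default_rng(1)
age = rng.normal(45, 12, 1000)
income = 1500 * age + rng.normal(0, 10_000, 1000)  # correlated with age
real = np.column_stack([age, income])

synthetic = synthesize(real, n_samples=1000)
print(np.corrcoef(real, rowvar=False)[0, 1])       # correlation in real data
print(np.corrcoef(synthetic, rowvar=False)[0, 1])  # roughly preserved
```

Note that sampling from a fitted distribution alone does not guarantee anonymity; production-grade generators add stronger privacy safeguards.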

Also very exciting are the developments around SaaS applications and No-Code platforms, which will certainly lead to a strong increase in the use of spatial data. With the Metaverse, an additional field of development has opened in the last few months, which enables the spatialisation and visualization of our online activities. Hype, bubble, or opportunity – we will see.

New ideas, research projects and exciting applications were discussed in the subsequent exchange session: from the pooling of freely available data (by Nicolas Lenz – Litix) and the integration of cloud computing services into locally running applications (by Dominique Weber – WSL), via interactive platforms for joint work on requirements in the development and planning of spatial systems (by Luis Gisler – cividi), to the power of customized machine learning tools in applied research (by László István Etesi – FHNW/ATELERIS). Our thanks to the presenters for these exciting insights!

You missed the Expert Day? Don't worry, there will be another one next year, along with several other exciting events on the topic of Spatial Data Analytics. Simply visit the website – and join the meet-ups, where you can exchange ideas and initiate new collaborations with experts from research and industry. We look forward to seeing you!

The majority of AI projects fail, right?

By Mark Pfaendler, La Mobilière

The transition towards data-driven business models has become highly relevant to most companies and industries. However, according to a Gartner publication back in 2018, managers find that 85% of AI use cases have not lived up to expectations or, worse, are deemed unsuccessful. Given that this number is rather high, we were eager to validate the finding within our Expert Group "Data Driven Business Models" during the last meetup of this year, the Expert Day 2022. To bring our experts together, we organized a well-attended workshop with 25 experts and practitioners from both industry and academia.

Our main undertaking was first to discuss the high failure rate reported by Gartner, and then the drivers behind successful AI projects. For the latter, we developed an AI project assessment framework covering five dimensions (Strategy, Talent & Leadership, Ways of Working, Data & Governance, Technology & Tooling). Each dimension comprises a set of best practices serving as prerequisites for running AI projects successfully, which D ONE has collected over years of working with numerous clients from different industries. In this blog post, we summarize the most important talking points and key takeaways.

First things first: our experts were skeptical about the reported AI project failure rate of 85%. In their opinion, failure very much depends on how it is defined and on the perspective that the responsible team and its stakeholders take on failure in general. This is something we want to understand better, so we will collect responses from a wider audience within our network.

The main goal of the workshop, however, was to discuss the AI project assessment framework. As an exercise, we asked the participants to apply the framework to their own project experience and to prioritize the set of best practices per dimension. The following table summarizes the collected results.

To wrap up, our 3 key takeaways from this workshop are:

  1. An AI project failure rate of 85% has been heavily challenged – this needs further investigation.
  2. According to our respondents, managing stakeholders & their expectations seems to be the strongest driver of AI project success.
  3. The project assessment framework was well received, which allows us to go into more detail next time and to start conducting a survey that collects solid results worth sharing with the public.

The workshop was the perfect opportunity to sound out and refine the presented AI project assessment framework with industry experts who deal with success stories and challenges day by day. With this, we would like to thank all participants for the active discussion and their contribution to making this workshop a success. We will keep you posted on the next steps!

LEU – A new currency for Zurich

Alternative currencies, the need for a more localized, circular economy, and the topic of universal basic income have all taken up more and more space in discussions about the future of our economy. This sparked the interest of our Expert Group, and thankfully we were able to invite just the right guest speaker to give us insights from a hands-on project tackling those topics: Malik El Bay.

At the 2022 Data Innovation Alliance Expert Day, which took place on the 9th of November at the Gleisarena in Zurich, the Expert Group "Blockchain Technology in Interorganisational Collaboration" had the opportunity to learn about LEU. The LEU is a blockchain-based currency and at the same time a basic income that one receives by actively participating in the community. Although the LEU is based on blockchain technology, it has little to do with speculative cryptocurrencies: in order to receive LEU, you have to meet regularly with members of the community, and you can spend LEU in local businesses in Zurich to promote the local economy.

Malik guided the Expert Group through an overview of LEU. First, he introduced the Polkadot and Kusama ecosystem, which forms the backbone of the currency by providing its security. Encointer, the chain on which LEU runs, is a parachain of Kusama and profits from the shared security that the Kusama validators provide.

Coining a new currency that is distributed as a basic income comes with the challenge of ensuring that each person can receive only one income per distribution round. To guarantee this, LEU is distributed only to attendants of real-life meetings. Encointer plans to expand by finding communities all over the world that want to implement their own local currencies. The current experiment with LEU in Zurich is attracting attention from media outlets and users and is gaining traction. This has led to a big push in demand for the new currency, which comes with its own challenges that were also presented.

The session concluded with a Q&A that brought forward a diverse range of questions. Whether a technical inquiry or a question about the potential to fly to different time zones in order to receive more local currencies, they all spawned enthusiastic discussions.

Because of the highly interesting presentation and the engaged audience, the event was a big success and surely left many attendants with a new view of what a currency can do for a community.

Expert Group Meeting Big Data & AI Technologies

By Kurt Stockinger, ZHAW and Thierry Bücheler, Oracle

On November 9, 2022 the Expert Group meeting was held directly before the General Assembly of the Data Innovation Alliance, which gave the group an extra boost.

The first talk, given by Thierry Bücheler from Oracle, was titled "De-buzz AI". It described various AI use cases from customer projects with Oracle and introduced some of Oracle's AI technology solutions in the context of current Data Science (DS) projects across industries. The main focus was on "real-world" challenges and soft factors: Thierry discussed the importance of provisioning, managing, and sharing persistent DS notebooks; how to combine data silos; the collaboration between data scientists/ML experts and domain specialists who are not necessarily IT-savvy; the import/export of code (e.g., Python) to GUI interfaces and visualizations for non-coders; lifecycle management and efficient re-use of models; automatic data pre-processing; the connection to business workflows and goals; as well as the connection of open-source software and libraries with "enterprise-hard" (secure, available) proprietary platforms for scaling and distributed collaboration. He also did a deep dive on anomaly detection based on a modified MSET2 and on which "advanced" ML tools you can get out of the box nowadays without much coding.
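MSET2 itself is proprietary, but the general idea behind such detectors can be sketched with a generic residual-based approach (a simplified stand-in, not Oracle's method; the rolling-mean baseline, window size, and threshold are arbitrary choices): model the normal signal, then flag points whose deviation exceeds a threshold.

```python
import numpy as np

def detect_anomalies(signal: np.ndarray, window: int = 20, k: float = 4.0) -> np.ndarray:
    """Flag indices whose residual to a rolling-mean baseline exceeds k sigma."""
    kernel = np.ones(window) / window
    baseline = np.convolve(signal, kernel, mode="same")  # stand-in for a learned model
    residual = signal - baseline
    threshold = k * residual.std()
    return np.flatnonzero(np.abs(residual) > threshold)

# Synthetic sensor signal with one injected spike
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20, 1000)) + rng.normal(0, 0.1, 1000)
signal[700] += 2.0
print(detect_anomalies(signal))  # should include index 700
```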

The second talk was given by Kurt Stockinger from Zurich University of Applied Sciences, titled "Talking to Data: Building Natural Language Interfaces for Databases". First, the challenge of querying large databases was introduced: end users need to be proficient in database languages such as SQL or SPARQL. Afterwards, Kurt introduced two different approaches for automatically translating natural language questions into either SQL or SPARQL. The first approach, called Bio-SODA, is based on a pattern-matching algorithm and does not use machine learning. The second approach, called ValueNet, is based on complex neural network architectures leveraging state-of-the-art transformers. The algorithms have been applied to query several databases from the areas of astrophysics and bioinformatics, but also from industry. Finally, Kurt showed how ValueNet has been applied to a World Cup database with information on games dating back to the first World Cup in Uruguay in 1930. The demo was built in collaboration with the ZHAW Institute of Information Technology and the ZHAW Centre for Artificial Intelligence.
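To make the pattern-matching idea concrete, here is a deliberately tiny sketch (not Bio-SODA or ValueNet; the question templates, table, and column names are invented) that maps known question patterns onto SQL skeletons:

```python
import re

# Each entry pairs a question template with a parameterized SQL skeleton
PATTERNS = [
    (re.compile(r"how many games did (\w+) win", re.I),
     "SELECT COUNT(*) FROM games WHERE winner = '{0}';"),
    (re.compile(r"which teams played in (\d{4})", re.I),
     "SELECT DISTINCT home_team, away_team FROM games WHERE year = {0};"),
]

def to_sql(question: str) -> str:
    for pattern, template in PATTERNS:
        match = pattern.search(question)
        if match:
            return template.format(*match.groups())
    raise ValueError(f"no pattern matches: {question!r}")

print(to_sql("How many games did Uruguay win?"))
# SELECT COUNT(*) FROM games WHERE winner = 'Uruguay';
print(to_sql("Which teams played in 1930?"))
# SELECT DISTINCT home_team, away_team FROM games WHERE year = 1930;
```

Real systems, of course, must resolve synonyms, schema mappings, and literal values rather than relying on a fixed template list.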

More information about using AI technology to access data can be found on the website of the European Union project INODE – Intelligent Open Data Exploration.

Successful 5th Smart Services Summit

By Jürg Meierhofer, ZHAW

Since its origin in 2018, the Smart Services Summit has created a lively and relevant community of practitioners and researchers, with a relevant focus topic every year. Several implementation projects and other value-creating initiatives have emerged from this network.

On October 21, we got together for the 5th time, bringing industrial and academic experts together to share ideas, this time under the focus topic "smart services creating sustainability". The summit was hosted by Oracle at their excellent location in the Circle convention center at Zurich Airport. We had a rich and extensive conference program and were able to establish excellent new contacts for future co-creation.

We will take this forward to the 6th edition in 2023. The relevant topics are far from exhausted; indeed, they never run out, as a dynamically changing context constantly gives rise to new opportunities and urgent needs for generating benefits with smart, service-oriented approaches.

Overwhelming Co-Creation at the Workshop “Data Driven Innovation”

By Jürg Meierhofer, ZHAW, and Philipp Schmid, CSEM

Almost 30 attendees actively participated in the workshop "Data Driven Innovation". After an insightful introduction by Philipp Schmid to the promising transition from predictive maintenance to predictive quality, Jürg Meierhofer gave insights into approaches for economic value creation. Building on this, the attendees gathered in small breakout groups and elaborated data-driven value creation patterns on their own case study examples. Some very interesting project ideas emerged from this workshop that could be pursued further. The databooster process provides a very helpful platform for developing these ideas into implementation projects.

Contracts for Advanced Services

By Jürg Meierhofer, ZHAW

In the late afternoon of September 27, 2022, the Expert Group Smart Services gathered at ZHAW in Zürich. We enjoyed a presentation by Shaun West about the topic “Contracts for Advanced Services”.

Shaun West provided hints and tips on how to design and deliver advanced services based on expert know-how and best practice. This is relevant for firms that are integrating digital services with their traditional product and service offerings.

When selling advanced services, the conceptual and contractual complexities of such contracts are all too often underestimated.  Experience shows that this is especially true when selling into traditional B2B markets. The developing and longer-term nature of advanced services and the need for collaboration between seller and buyer should be reflected in the contract.  For example, the traditional approach of using ‘specification and data sheets within specified operating parameters’ for service contracts will need to be replaced with contractual structures reflecting the dynamic, evolving nature of advanced service contracts.

This creates challenges for both sellers and buyers of advanced services: traditional mind-sets must be overcome, high-level outcomes and measures for advanced services have to be agreed, flexible and adaptable contractual frameworks should be developed, and collaborative structures are required in the contracts.

Databooster as a guest at Digital Health – everything revolves around data!

By Philipp Schmid, CSEM

The topic of the 4th Digital Health Lab Day of the ZHAW was "Smart Healthcare & Digital Innovation". More than 200 participants and exhibitors travelled to the historic Sulzerareal in Winterthur that Monday. In the epicentre of last century's mechanical engineering, everything revolved for once around health and, with it, above all around data and digitalisation. Exciting keynotes, 9 inspiring startup pitches, and 7 smart healthcare workshops combined with a panel discussion and a poster exhibition – the programme was varied and exciting. Especially in the digital health field, the offering of the NTN Innovation Booster – Databooster is meeting with great interest. We look forward to many new innovation ideas!

Shaping Value Creation by Smart Services at Mobiliar Forum Thun Workshop

By Jürg Meierhofer, ZHAW

Data Science, machine learning, artificial intelligence etc. are hot topics and undisputedly deserve a lot of attention. However, we always need to pay attention to whether and how we create value for the diverse actors in the ecosystem. For businesses, this means primarily economic value, but by far not only. Data-driven solutions also need to address other value dimensions of individuals, e.g., social or emotional values. The discipline of "Smart Service Engineering" provides us with a set of tools and applicable procedures to achieve this. In the CAS Smart Service Engineering / Data Product Design, we work with these tools and apply them directly to case studies that are self-chosen by groups of participants.


After three months of the course, we had the wonderful opportunity to take our well-prepared case studies to the very inspiring environment of the castle of Thun, where we were made very welcome by our host Fabrizio Laneve, the lively and energetic manager of the Mobiliar Forum Thun. Brilliantly moderated by Ina Goller, the groups successfully developed the value creation of their smart service concepts further – with a strong focus on value creation in the ecosystem, considering all relevant actors. Thanks to these two consecutive workshop days, accompanied by a nice dinner and an overnight stay in the castle, we not only advanced our service concepts significantly, but also learned a lot about methodology and, on top of that, greatly strengthened our team spirit.


The Potential of Differential Privacy (decentriq)

The Expert Group meeting took place virtually on June 26, 2022.

Tim Geppert from ZHAW opened the meeting and introduced Andrew Knox from decentriq. Andrew introduced the group to the basics of Differential Privacy by giving an intuitive understanding of the concept.

The following paragraphs summarize this introduction (a reference to further information can be found below).

To better understand how differential privacy works, we will use the example of a collaboration between a clothing brand and a digital newspaper. The first thing the brand wants to do with the digital newspaper's data is understand how many users exist with interests similar to those of the brand's customers. Running these computations without any privacy control could easily allow the brand to single out specific newspaper customers, as well as to learn more than it is supposed to know about the reading habits of individual brand customers.

What differential privacy says is that, for a given output, you are limited in how sure you can be that a given input caused it. This limitation of privacy leakage is achieved by adding some noise in the process of answering each question. Practically, this means that the (noisy) answer to the question the brand is asking will be (almost) the same even if any single user were removed from the dataset completely. Consequently, the clothing brand can never know whether the result it got came from a dataset that included a specific user, effectively protecting the privacy of any specific individual. The tuning part comes into play when we talk about the amount of noise added to each answer.

The amount of noise is determined by the parameter ε (epsilon). The lower the ε, the noisier (and more private) the data is. However, a differentially private system does not only add noise; it can also use the knowledge of ε to optimize the utility of the data by factoring the noise into the aggregate calculations. Determining the right ε in a differentially private system is a non-trivial task, because it requires the data owner to be knowledgeable about the privacy risks that a specific ε entails and about the level of risk they are comfortable undertaking.
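As a minimal illustration of how ε controls the noise, the sketch below implements the classic Laplace mechanism for a counting query (a textbook example, not decentriq's implementation). For a count, the sensitivity is 1 – adding or removing one user changes the true answer by at most 1 – so the noise scale is 1/ε:

```python
import numpy as np

def private_count(true_count: int, epsilon: float, seed=None) -> float:
    """Answer a counting query with Laplace noise scaled to sensitivity/epsilon."""
    rng = np.random.default_rng(seed)
    sensitivity = 1.0  # one user changes a count by at most 1
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical query: how many newspaper readers match the brand's interests?
true_count = 1042
for epsilon in (0.1, 1.0, 10.0):
    print(epsilon, round(private_count(true_count, epsilon, seed=0), 1))
# Smaller epsilon -> larger noise scale -> noisier but more private answers
```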

Following the talk, the participants discussed the opportunities and challenges of this privacy-enhancing technology as well as possible industry use cases. A key takeaway was that differential privacy allows organizations to make more informed decisions about their data privacy, but the privacy/utility trade-off still exists.

If you would like more information about differential privacy, read the full introductory article by decentriq at https://blog.decentriq.com/differential-privacy-as-a-way-to-protect-first-party-data/, which provides additional insights into the limitations and features of differential privacy.