
Tag: Big Data & AI Technologies

Expert Day

Selected expert groups from the data innovation alliance will present themselves at this half-day event. Current projects, trends and potential collaborations will be presented, discussed and worked on in interactive sessions. 

Location
FHNW Campus Brugg-Windisch, 100m from Brugg station
Bahnhofstrasse 6, 5210 Windisch, Room 5.0A52
https://maps.app.goo.gl/SZdfXaTgGWbSvyvn7

13:30 Welcome & Registration
14:00 Welcome speech by FHNW: Eyes on Human-Data Interaction by Prof. Dr. Arzu Çöltekin, FHNW
14:15 Keynote: Lessons learned on scaling after 1 year of GenAI by Dr. Marcin Pietrzyk, co-founder and CEO of Unit8
14:45 Expert Group Break-out (1)
15:30 Coffee Break
16:00 Expert Group Break-out (2)
17:00 Apéro
17:45 Closing

Human data interaction via visual analytics

Prof. Dr. Arzu Çöltekin, FHNW

Arzu Çöltekin is a professor of Human-Computer Interaction, Visualization and Extended Reality, and leads the Institute of Interactive Technologies at the University of Applied Sciences and Arts Northwestern Switzerland. She is also a research affiliate with the Seamless Astronomy group at the Harvard-Smithsonian Center for Astrophysics in Cambridge, USA, collaborating on scientific data analysis and visualization research. She chairs the international Extended Reality and Visual Analytics working group of the ISPRS, co-chairs the Commission on Geovisualization of the ICA, and is a council member of the International Society for Digital Earth (ISDE). Her interdisciplinary work covers topics related to information science, visual analytics, visualization and cartography, virtual/augmented reality, gaze-contingent displays, eye tracking, vision (perception and cognition), and human-computer interaction.

Smart Maintenance – Hybrid approaches to intelligent maintenance

Part 1: Predictive Maintenance @ABB: a Technology Company’s Point of View

Kai Hencken is Corporate Research Fellow for “Physical and Statistical Modeling” at the ABB Corporate Research Center, Baden-Dättwil, Switzerland. He holds a Ph.D. and a habilitation in theoretical physics from the University of Basel, where he is currently a lecturer. He joined the ABB Corporate Research Center in Baden-Dättwil in 2005 as a member of the theoretical physics group. His research interest is the combination of physical modeling, data analytics, and statistical methods to solve problems related to industrial devices. He works predominantly on developing diagnostics and prognostics approaches for different products, covering the range from sensors and signal processing to mathematical methods in prognostics.

Predictive maintenance is one of the main application areas of the Industrial Internet of Things. The wide deployment of sensors and their connectivity makes it possible to collect large amounts of data from devices in the field, and the exponential increase in computing power together with recent developments in data analytics and machine learning makes the application of advanced algorithms feasible. We are also facing changes in how maintenance work is done and how its importance is perceived.

ABB is a technology company providing devices and solutions in the areas of electrification and automation. Many of its offerings in the area of digitalization, and specifically predictive maintenance, are geared towards its own products. This leads to topics that are specific to these cases in addition to the common ones.

In my talk, I will discuss some of these issues and how they can be addressed. The domain knowledge and simulation capabilities within the company are among the big assets of any manufacturer, and reusing them for predictive maintenance solutions is an important aspect. For highly reliable products, failure data will remain scarce even for a large installed base; this is a major bottleneck for any data-driven approach and needs to be overcome. The focus of many of the solutions developed is to provide monitoring and diagnostics capabilities, yet the prognostics aspect and the proposal of actions to remedy potential problems are often more important for the final customer. Examples are taken predominantly from electrification and motion devices.

Part 2: Discussion of combining physics and domain knowledge with AI for intelligent maintenance and operation

AI in Finance and Insurance – The Future of Financial Data Analytics
  • 14:45 – 14:55 – Intro and introductions
  • 14:55 – 15:30 – Presentation: Nicole Königstein, Chief Data Scientist, Head of AI & Quant Research, Wyden Capital AG: “Financial Time Series Prediction in the Age of Transformers” + open discussion and Q&A
  • 15:30 – 16:00 – Break
  • 16:00 – 16:30 – Presentation: Guillaume Raille, Engagement Director & Data Scientist at Unit8 SA: “LLMs beyond Chatbots: Unveiling the challenges of advanced LLM applications based on real-world use cases” + open discussion and Q&A
  • 16:30 – 17:00 – Discussion on topics that might be relevant for the DIA in the future and how to organize
Spatial Data Analytics – AI-driven Probability Maps

In the field of environmental monitoring, artificial intelligence (AI) coupled with probability maps emerges as a powerful tool for comprehensively understanding and managing ecological systems. By harnessing machine learning algorithms, intricate patterns within environmental datasets can be discerned with unprecedented accuracy. Based on this understanding, probability maps can be calculated to offer valuable insights into the likelihood of various environmental events. These maps serve as crucial decision-making aids for policymakers, conservationists, and researchers alike, enabling proactive measures to mitigate ecological threats and promote sustainable practices.
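As a toy illustration of the idea (a hypothetical two-feature logistic model, not taken from any project mentioned here), a probability map can be computed cell by cell from co-registered feature grids:

```python
import math

def event_probability(rainfall_mm, slope_deg, w_rain=0.03, w_slope=-0.08, bias=-2.0):
    """Hypothetical logistic model: more rainfall raises, steeper slope lowers,
    the probability of the modeled event (e.g. inundation)."""
    z = bias + w_rain * rainfall_mm + w_slope * slope_deg
    return 1.0 / (1.0 + math.exp(-z))

def probability_map(rain_grid, slope_grid):
    """Turn two co-registered feature grids into a per-cell probability grid."""
    return [
        [event_probability(r, s) for r, s in zip(rain_row, slope_row)]
        for rain_row, slope_row in zip(rain_grid, slope_grid)
    ]

rain = [[10.0, 120.0], [60.0, 200.0]]   # rainfall per cell, in mm
slope = [[5.0, 5.0], [20.0, 2.0]]       # terrain slope per cell, in degrees
pmap = probability_map(rain, slope)
```

In a real system, the weights would of course be learned from monitoring data rather than fixed by hand, and the grids would come from harmonized multi-sensor inputs.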

Schedule:

  • 14:45 – 14:55 – Intro and introductions (Dr. László István Etesi, Prof. Gerd Simons, FHNW)
  • 14:55 – 15:30 – Probability Maps: Current research & applications in the field of environmental monitoring
  • 15:30 – 16:00 – Break
  • 16:00 – 16:30 – Challenges around robust multi-sensor & multi-scale data integration and harmonization
  • 16:30 – 17:00 – Discussion of ideas which can be evaluated further within the framework of the Innovation Booster
Workshop Governance for Growth with Data & AI

14:45 – 15:30
1. Growing with Data and AI, dealing with Governance. Introduction to the objectives of the new Expert Group in planning (Philipp Kuntschik, adesso Schweiz AG & Dr. Sarah Seyr, HSLU)
2. Transparency and Privacy for Data Governance – Can we have both? (Dr. Omran Ayob, SUPSI)
3. AI Maturity Framework (Frank Seifert, adesso Schweiz AG)

15:30 – 16:00 – Break

16:00 – 17:00
Maintaining integrity along the Data and AI value chain (interactive session):
– Explore data governance practices from data collection to data protection.
– Discuss algorithmic health and management of AI systems.
– Engage in user-centered communication and transparency strategies.

Experts, Experts, Experts…

The Data Innovation Alliance’s second Expert Day in March 2023 was a hub of activity as experts from four key areas – Smart Maintenance, NLP & AI Technology, Spatial Data, and Smart Services – gathered to share their insights and mingle with researchers and industry professionals. The event kicked off with leaders from each Expert Group discussing their plans for 2023, generating a wealth of innovative ideas for joint events and initiatives and paving the way for exciting collaborations in the (near) future.

But that’s not all! The NLP and Digital Health groups are teaming up to bring you joint events that will revolutionize the way we approach data. And with the next Expert Day set for August 2023, featuring four expert groups once again, get ready for even more ground-breaking discussions and initiatives, organized jointly with other Innovation Boosters. Keep an eye on our events calendar for more information.

While the keynote speech may not have met expectations in terms of insights, it set the stage for what was to come – dynamic discussions and collaborations in the expert group break sessions. To ensure everyone had access to the wealth of information shared, short summaries of the discussions were written by participants in each room.

In short, the second Expert Day was a superb success, bringing together a diverse group of experts to debate their ideas and shape the future of data innovation.

Smart Services for Sustainability – Circular Servitization by Jürg Meierhofer

The Smart Services for Sustainability – Circular Servitization discussion was a dynamic conversation among highly experienced individuals from different industries. They explored how value is created in business ecosystems, focusing on both individual and organizational perspectives.

It was inspiring to have diverse industry representatives in the same room and to create a common understanding. Starting from economic value creation, the group extended its scope to ecological factors. An intense discussion arose about how environmental value can be created without negatively impacting economic value. Several participants stated that economic value creation is still the predominant requirement, meaning that in many cases even a slight reduction of economic value for the sake of ecological value would be treated with suspicion. As sustainability becomes increasingly relevant and regulations loom, the balance between economic and ecological value may shift in the near future.

Overall, the Smart Services for Sustainability – Circular Servitization discussion was thought-provoking and left participants eager to continue exploring the intersection of business and sustainability.

Spatial Data by Reik Leiterer

In a room buzzing with ideas, each data expert chimed into the discussion about the creation of a platform that would benefit cantons, individuals, and service providers. There was a shared understanding that it might not be possible to cater to everyone’s needs and that a simpler visualization and analytics approach may be the way forward. However, some uncertainties still remained, such as identifying where the necessary data is available and how it can be integrated, setting limits, and ensuring that data is not misinterpreted. Despite these challenges, the group remained enthusiastic about the potential benefits of the platform and is looking forward to overcoming these obstacles.

NLP & AI Technology by Lina Scarborough

The group opened the floor with how chatbots are great at answering questions – but what happens when users don’t know where to begin asking? This is a common issue in legal situations, where the average client may not have the background needed to understand what information is required. Retrieval-augmented language models like KATIE have emerged as a solution to this problem. These models use grounded reasoning and promote a chain of thought to handle complex queries, creating context for users who may not know which subset of questions to ask.
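KATIE’s internals are not described here, but the retrieval-augmented pattern itself can be sketched with a toy word-overlap retriever and a grounded prompt (all documents and names below are hypothetical):

```python
import re

def words(text):
    """Lowercased word set; a crude stand-in for a real retriever's index."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query and keep the top k."""
    q = words(query)
    return sorted(documents, key=lambda d: len(q & words(d)), reverse=True)[:k]

def build_prompt(query, documents):
    """Ground the answer in retrieved passages and ask for step-by-step reasoning."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer step by step:"

docs = [
    "A tenant may terminate the lease with three months notice.",
    "The deposit must be returned within thirty days.",
    "Court fees are set by the cantonal fee schedule.",
]
prompt = build_prompt("How much notice must a tenant give to terminate?", docs)
```

A production system would use dense embeddings rather than word overlap, but the shape is the same: retrieve first, then let the language model reason over the retrieved context.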

With the rise of machine-generated text, it’s becoming more difficult to distinguish between human and machine-generated content. While probabilistic token selection and frameworks like SCARECROW can help scrutinize machine-generated text, it can still be difficult, if not impossible, to identify. However, GPTZero, an app that scores statistical properties of a text, claims to be able to detect whether an essay was written by ChatGPT or a human – for instance, ChatGPT generally makes redundancy errors whereas humans make grammatical mistakes. A complementary idea is to watermark generated text by biasing the sampling method, creating a statistical fingerprint that a detector can check. These approaches aim to maintain the integrity of human-generated content in the face of increasing machine-generated text.
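As a toy illustration of the sampling-watermark idea (a simplified version of published greenlist schemes, not any particular detector’s actual method): a hash of the previous token splits the vocabulary in half, the watermarking generator samples only from the “green” half, and the detector counts the green fraction:

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(100)]

def green_list(prev_token, fraction=0.5):
    """Seed a PRNG with a hash of the previous token and mark half the vocabulary green."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate_watermarked(length, rng):
    """Stand-in 'model' that always samples from the green list of the previous token."""
    text = ["tok0"]
    for _ in range(length):
        text.append(rng.choice(sorted(green_list(text[-1]))))
    return text

def green_fraction(tokens):
    """Detector: fraction of tokens lying in the green list of their predecessor."""
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

rng = random.Random(42)
marked = generate_watermarked(200, rng)
unmarked = ["tok0"] + [rng.choice(VOCAB) for _ in range(200)]
# By construction the watermarked text has green fraction 1.0, while
# unwatermarked text hovers around 0.5 - a detectable statistical fingerprint.
```

A real scheme only softly boosts green tokens to preserve text quality, so the detector uses a significance test on the green fraction rather than an exact threshold.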

The discussion then flowed into a lively and engaging presentation on how AI technology can make the tricky SQL “minefield” as easy to navigate as a soccer player scoring a goal – literally, by demonstrating SQL prompts on the soccer World Cup!

Smart Maintenance by Melanie Geiger

The five use case presentations highlighted the versatility of data technology in different applications, showcasing how it can be adapted to meet various needs. With input data ranging from domain knowledge to error log data, these use cases demonstrated how AI models can process and analyze complex data sets to provide valuable insights and decision support.

One of the key themes that emerged was the use of AI for condition-based maintenance, specifically anomaly detection and fault diagnosis. By leveraging ML algorithms, these use cases were able to detect potential issues and predict equipment failures, enabling timely maintenance and preventing downtime.
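The simplest form of such anomaly detection can be sketched as a z-score rule over a single sensor channel (the signal below is made up; the presented use cases are far richer):

```python
import statistics

def detect_anomalies(readings, threshold=2.5):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(readings)
    std = statistics.stdev(readings)
    return [i for i, x in enumerate(readings) if abs(x - mean) / std > threshold]

# Vibration-like sensor signal with one obvious fault spike at index 6
signal = [0.9, 1.1, 1.0, 0.95, 1.05, 1.0, 9.5, 1.02, 0.98, 1.0]
faults = detect_anomalies(signal)
# faults == [6]
```

Production systems replace the global mean with rolling statistics or learned models so that slow drifts and regime changes are not flagged as faults.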

The highlight of the event was not only the apéro treats, but the opportunity to engage with the 60 participants and learn about their projects, challenges, solutions, and ideas for collaboration. Many attendees seemed to share this sentiment: numerous participants were still engrossed in conversation at the end of the event, and some discussions had to be continued elsewhere. Those who wish to follow up on these conversations can do so at SDS2023. On a more low-key note, maybe you wanted to add someone on LinkedIn and send them a message. Here you go, this is your reminder!

Our conclusion of the event: the Alliance has many experts in various subtopics of data-driven value creation, but only together can we move faster.

Expert Day & General Assembly

Immerse in one of four expert groups

  • Spatial Data Analytics
  • Big Data and AI Technologies
  • Blockchain Technology in Interorganisational Collaboration
  • Data-Driven Business Models

and exchange expertise. Get inspired by the keynote and network during the coffee break. For data innovation alliance members, the event is followed by the General Assembly and an apéro. Let’s foster the community for Applied Data Science at this event.

Agenda

  • 13:15 Welcome
  • 13:15 – 14:45 Expert Groups in 4 Breakout Rooms
  • 14:45 – 15:30 Coffee break
  • 15:30 – 16:00 Keynote – Erika Meins, La Mobilière: «Using the Force of Analytics for Responsible Digital Interactions»
  • 16:00 – 16:15 Break
  • 16:15 – 17:15 General Assembly (formal part)
  • 17:15 Apero

Detailed Program

13:15 – 14:45 Expert Groups (running in parallel):

Expert Group: Spatial Data Analytics – Geospatial insights for all – from unique applications to future trends
The Power of Where – this frequently used statement underscores the importance of spatial data and spatial data analytics. All people interested in spatial data are invited to actively participate and/or get an entertaining insight into the world of geospatial data.  Take the opportunity to make new contacts and exchange ideas with experts from industry and research.
In this open event, we will take a tour of your favourite datasets, look at the most unusual and fun applications, and discuss together trends in geospatial data and future challenges. Of course, current infrastructure topics such as low code platforms (GEE & friends), new machine learning concepts and applications (image segmentation, tiny ML & Co) and data creation/access developments (Open Data & GDPR) will not be missed. Intellectual nourishment is guaranteed.

Expert Group: Big Data and AI Technologies
De-buzz AI, Thierry Bücheler, Oracle
«AI» is discussed as the solution for many problems on almost all levels – from exec boards down to the deepest and darkest hacker hide-outs. But what is it, really?
This short impulse will try to de-buzz AI to a certain extent by using real-world examples across industries, supporting the following theses:
– AI is not really about “intelligence” today
– Only rarely is it about developing algorithms
– And it is also not about bringing together data from different sources technically
So what’s the focus in real-world applications right now? What are some examples where «AI» makes a difference?
Talking to Data: Building Natural Language Interfaces for Databases, Kurt Stockinger, ZHAW
Information systems are the core of modern enterprises and scientific exploration. They are often based on fundamental research developed in the database community. While enterprise data is typically stored in relational databases, data-intensive scientific disciplines such as bioinformatics often store their data in graph databases. To query these databases, end-users need to know the formal query languages SQL or SPARQL as well as the logical structure of the databases. However, even for technology experts it is very challenging to write the right queries to retrieve the desired data. Hence, many end-users are effectively unable to query their databases.
In this talk we discuss how to build intelligent information systems that enable end-users to talk to their data much as they would to another human. The major goal is to combine artificial intelligence with human intelligence for novel ways of data exploration. In particular, we will show how we have built various natural language interfaces for databases using pattern-based and machine learning-based approaches to significantly increase the productivity of scientists and knowledge workers when interacting with data. We demonstrate that our system INODE (Intelligent Open Data Exploration), which we have been building as part of a European Union project with 9 partners across Europe, is uniquely accessible to a wide range of users, from large scientific communities to the public. Finally, we elaborate on the lessons learned when developing such a system and discuss how the technology can be enhanced by researchers or knowledge workers for exploring their own databases in natural language.
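The pattern-based flavor of such an interface can be sketched in a few lines (hypothetical rules for illustration, not INODE’s actual grammar): question shapes are matched against templates that emit SQL.

```python
import re

# Hypothetical patterns mapping question shapes to SQL templates
PATTERNS = [
    (re.compile(r"how many (\w+)", re.I), "SELECT COUNT(*) FROM {0}"),
    (re.compile(r"list all (\w+) where (\w+) is (\w+)", re.I),
     "SELECT * FROM {0} WHERE {1} = '{2}'"),
]

def to_sql(question):
    """Translate a natural-language question via the first matching pattern."""
    for pattern, template in PATTERNS:
        m = pattern.search(question)
        if m:
            return template.format(*m.groups())
    return None  # no pattern matched; a real system would fall back to an ML model

sql = to_sql("How many proteins are in the database?")
# sql == "SELECT COUNT(*) FROM proteins"
```

Machine learning-based approaches replace the hand-written patterns with models trained on question/query pairs, which is what makes them generalize beyond a fixed set of question shapes.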

Expert Group: Blockchain Technology in Interorganisational Collaboration
Zurich has a new blockchain-based local currency: the “LEU”, which has already been discussed in the press.
The LEU is a currency and at the same time also a basic income that one receives by actively participating in the community. Although the LEU is based on blockchain technology, it has little to do with speculative cryptocurrencies: to receive the LEU, you have to meet regularly with members of the community. You can spend the LEU in local businesses in Zurich to promote the local economy.
The association Encointer will present the exciting LEU project and talk about the current developments and possibilities of such an alternative currency.
After a short introduction, we will cover the following topics:
 – Introduction to the Polkadot and Kusama ecosystem
 – Parachains and their advantages
 – Proof of personhood and Sybil attacks
 – Encointer vision, protocols and global communities
 – Status quo and field report on the LEU in Zurich
At the end we have some time to install the Encointer wallet and interact with it.

Expert Group: Data-Driven Business Models
Data-driven business models have become relevant to companies and organizations. According to Gartner, back in 2018, 85% of AI use cases were not successful or did not live up to expectations. Where do we stand today? What potential in data-driven business models have we still not addressed – and why not?
In this workshop, we will discuss success factors & challenges of AI projects. Among experts and practitioners we will exchange our experiences and share insights. As a take away, we will be equipped with a set of hands-on best practices, ready to be applied in our environments.

15:30 – 16:00 – Keynote Speech
Erika Meins, La Mobilière, Head of Mobiliar Lab for Analytics at ETH Zurich: “Using the Force of Analytics for Responsible Digital Interactions”

Virtual Reality to reduce stress, telematics to prevent road accidents or augmented reality to improve collaboration? Erika Meins illustrates some of the opportunities of advanced analytics and new digital technologies for society – and provides a brief look at the dark side.

ONLINE – Expert Group Meeting – Big Data & AI

Reserve the date! The next meeting of the Big Data & AI Expert Group will be in March.

How Knowledge Graphs Empower Interoperability in Life-Science
Philippe Kraus, Trivadis

Starting from the need to harmonize complex, multi-dimensional life-science resources from manifold sources, we found in knowledge graphs a flexible yet expressive approach to meeting multi-faceted challenges. We will talk about the experience we gained together with our client on their journey to produce and consume knowledge graphs in a challenging, strictly regulated environment.

Biography:
Philip Krauss is a Consultant Semantic Web & Big Data at Trivadis – part of Accenture. Trivadis brings years of experience in the conceptual design and implementation of Semantic Knowledge Graphs to the table, trains project groups involved in RDF technologies and supports implementations.

Integration of different data sources as an essential basis for Big Data technologies – actesy’s unique way to integrate data in a new and audited way
Andreas Imthurn and Sandro Secci (actesy)

Intermarket Bank AG, a subsidiary of Erste Group specializing in professional factoring solutions, has acquired a multinational corporation as a new factoring client. The «onboarding» of this customer was to be completed in 4 weeks based on a reusable SaaS platform. Successful cooperation in the factoring sector, where receivables management is outsourced to a financial institution, requires cross-company business processes: the connection and exchange of data such as accounts receivable/accounts payable lists and open item lists, the purchase of receivables, the triggering of payments during advance payments on purchased receivables, and so on. The new system by actesy enables the company to offer its customers a range of services to meet their needs with complete data governance.

In this talk, we are also going to introduce you to actesy and the USPs of our unique end-to-end digitalization suite. We will focus especially on the integration challenges of new systems and data sources and how these obstacles can be minimized for Big Data and AI use cases.

Biography:
Dr. Andreas Imthurn is CEO and co-founder of actesy AG. He holds a double degree in Law and Business Administration and a Ph.D. in Law from the University of St. Gallen on the topic of PSD2/Open Banking. During his studies, he founded the US startup Joinesty, which he led as COO and Managing Director Europe until the end of 2018. He was also a member of the Board of Directors and Lead Legal Counsel of GUS Schweiz AG. Finally, he teaches upcoming entrepreneurs at the Berliner Hochschule für Technik how to start successful companies.

Sandro Secci is CIO and co-founder of actesy AG, responsible for development and innovation. He has over 30 years of IT experience with migration, ERP and general IT projects. Mr. Secci studied IT sciences at the Politecnico di Milano and holds an MSc in Computer Science. He was a member of the management board of GUS Schweiz AG. Prior to that, he worked as an independent IT consultant for more than 14 years.

To register please use the form below.

Thank you
Luca and Kurt

Expert Group “Big Data & AI Tech” meeting 10.09.2021

By Kurt Stockinger, ZHAW

In our latest expert group meeting, the following talks were presented:

Methods of Statistical Disclosure Control applied on Microdata
Simon Würsten, SBB

Big Data and AI Technologies on Microsoft Azure Cloud
Gerald Reif, IPT

Reproducible Data Science
Luca Furrer, Trivadis

First, Simon Würsten from SBB introduced various methods of data anonymization. Each of the presented methods can be considered a trade-off between anonymization strength and expressiveness of the data (i.e., minimizing disclosure risk while maximizing data utility). For instance, some methods randomly change data values while others reshuffle values between different attributes. Depending on which type of data analysis is performed, the appropriate anonymization method can be chosen, along with a report about its strength. The presented approaches have a very high potential to be used in data-sensitive areas such as health care or e-government. The technology is ready to be used, for instance, in a PoC by other Alliance members (see the R library sdcMicro).
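One of the simplest such methods, additive noise, illustrates the trade-off directly (a stdlib Python sketch of the general idea, not sdcMicro itself): the stronger the perturbation, the lower the disclosure risk but also the lower the data utility.

```python
import random
import statistics

def add_noise(values, relative_sd, rng):
    """Perturb each value with Gaussian noise scaled to the data's spread.
    Larger relative_sd -> lower disclosure risk, lower data utility."""
    sd = statistics.pstdev(values) * relative_sd
    return [v + rng.gauss(0, sd) for v in values]

rng = random.Random(0)
incomes = [52_000, 61_000, 58_000, 300_000, 49_000, 55_000]
weak = add_noise(incomes, 0.01, rng)    # high utility, high disclosure risk
strong = add_noise(incomes, 0.50, rng)  # low disclosure risk, degraded utility
```

Note that even strong noise may not protect the obvious outlier here, which is why practical SDC combines several methods and reports the residual risk.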

Next, Gerald Reif from IPT presented the big data and AI architecture blueprint on the Microsoft Azure Cloud. Currently, one of the most widely used approaches is the lambda architecture, which consists of three layers: (1) the Speed Layer for real-time stream processing, (2) the Batch Layer for processing large amounts of stored data, and (3) the Serving Layer for presenting and reacting to the analysis results. There is a clear trend towards combining and consolidating big data and machine learning technology from Apache Spark and Azure PaaS services. The advantage of the combined solution is the bleeding-edge open-source technology of Apache Spark coupled with the enterprise features and user management functionality of Microsoft.
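The three layers can be sketched in a few lines of plain Python (illustrative only, not tied to any Azure service): the batch layer periodically recomputes exact views over the full store, the speed layer counts what arrived since, and the serving layer merges both to answer queries.

```python
stored_events = [("page_view", 1)] * 1000 + [("purchase", 1)] * 40

def batch_layer(events):
    """Periodically recompute exact totals from the full historical store."""
    totals = {}
    for kind, n in events:
        totals[kind] = totals.get(kind, 0) + n
    return totals

class SpeedLayer:
    """Keep incremental counts for events arriving since the last batch run."""
    def __init__(self):
        self.delta = {}
    def ingest(self, kind, n=1):
        self.delta[kind] = self.delta.get(kind, 0) + n

def serving_layer(batch_view, speed_view):
    """Answer queries by merging the (stale) batch view with the real-time delta."""
    merged = dict(batch_view)
    for kind, n in speed_view.items():
        merged[kind] = merged.get(kind, 0) + n
    return merged

batch_view = batch_layer(stored_events)
speed = SpeedLayer()
speed.ingest("purchase")
speed.ingest("page_view", 3)
view = serving_layer(batch_view, speed.delta)
# view["purchase"] == 41, view["page_view"] == 1003
```

In the blueprint discussed in the talk, the batch and speed roles are typically filled by Spark batch and streaming jobs, with a serving store in front.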

Finally, Luca Furrer from Trivadis provided insights into the latest tools for making data science experiments reproducible. In principle, three different aspects need to be reproducible: data, code/models, and parameters. Promising tools for these aspects are dvc, mlflow, and git. Their advantage is that data scientists can easily keep the history of code and data and track the results of various machine learning experiments along with the chosen parameters. The tools integrate well with each other through git.
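The essence of what such tools record per run, a fingerprint of the data, the exact parameters, and the resulting metrics, can be illustrated with a stdlib stand-in (this is not the mlflow or dvc API, just the underlying idea):

```python
import hashlib
import json

def run_experiment(data, params):
    """Toy 'training' step: a metric that depends on data and parameters."""
    score = sum(data) * params["lr"]
    return {"score": round(score, 6)}

def record_run(data, params, metrics):
    """Log everything needed to reproduce a run: a data fingerprint,
    the exact parameters, and the resulting metrics."""
    return {
        "data_sha256": hashlib.sha256(json.dumps(data).encode()).hexdigest(),
        "params": params,
        "metrics": metrics,
    }

data = [0.2, 0.4, 0.4]
params = {"lr": 0.1, "epochs": 5}
run = record_run(data, params, run_experiment(data, params))
# Re-running with identical data and parameters yields an identical record
assert run == record_run(data, params, run_experiment(data, params))
```

dvc adds exactly this kind of content hashing for large data files, mlflow the parameter/metric tracking, and git the code history; together they cover all three aspects named above.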

The presentations were followed by lively discussions about the methods, the architectures, and experiences of using them in real life. One of the main questions concerned the experience of operating machine learning models in production over longer periods of time. A typical phenomenon is that big data and AI technology is often used successfully in proofs of concept, but there is little information about how the approaches “pass the test of time” in real production environments.

As part of a future event – and possibly in collaboration with the expert group on machine learning – we are planning to report on the experience of using machine learning models in production. Typical questions to be addressed are: What models should be deployed? How often should models be deployed? When should re-training be done? How do we handle rapidly changing data? How do models degrade over time and what can we do to mitigate model degradation?

Expert Group Meeting – Big Data & AI Tech

The Expert Group will come together in September at the following location:
Trivadis AG, room Zuse, Sägereistrasse 29, 8152 Glattbrugg

The topics of discussion are as follows:

Methods of Statistical Disclosure Control applied on Microdata
Simon Würsten, SBB

Simon will talk about how he applied methods of anonymization on microdata, his experiences and possible complications regarding big data.

Big Data and AI Technologies on Microsoft Azure Cloud
Gerald Reif, IPT

Modern cloud providers enable powerful AI and Big Data technologies, platforms and tools. We will have a look at the underlying concepts and specific implementations and services on Microsoft’s Azure Cloud.

Reproducible Data Science
Luca Furrer, Trivadis

Luca will discuss the different aspects of a reproducible data science process and explain why he thinks the question of reproducibility should be considered for every AI project. Furthermore, he will present a set of open-source tools which can be useful to achieve reproducibility.

Expert Group Meeting – Software & Tools

We are happy to invite you to our next expert group meeting hosted by SAP on April 12 at 14:00, Althardstrasse 80, 8105 Regensdorf. Go to the 5th floor and room ‘Demo room 6’.

 

13:30 – 14:00:  Check-In & Welcome Coffee

14:00 – 15:00: SAP Database as a Service (DBaaS)

Matthias Kupczak & Michael Probst, SAP Solution Advisor

SAP HANA Platform capabilities continue to expand, making it an ideal platform for innovating next-generation applications. Such applications require not only transactional capabilities but also advanced analytical capabilities and the ability to process structured, unstructured and streaming data. In this talk, Matthias will highlight the cloud-enabled SAP HANA capabilities with regard to Database Services, Data Integration Services, Application Development Services and Analytical Processing Services, supported by live demonstrations and pointers to where you can gain hands-on experience yourself.

15:00 – 16:00:  SAP Data Orchestration across Enterprise Data

Matthias Kupczak & Michael Probst, SAP Solution Advisor

SAP Data Hub lets you integrate data, orchestrate analytical data processing, and manage metadata across your enterprise data sources and data lakes. It also lets you build powerful pipelines, as well as manage, share and distribute data. For IoT or data science & machine learning scenarios, SAP Data Hub comes with many prebuilt machine learning capabilities (such as TensorFlow integration) that you can use to build data scenarios seamlessly and put them into production in your enterprise data landscape. This session will also include many live demonstrations and will give you the insights you need to bring your data science projects to life.

16:00 – 17:00+: Networking Apéro

 

Please fill out the Doodle to indicate whether or not you can come.

https://doodle.com/poll/f5rs75ada5r5dziv

Looking forward to seeing you soon

 
