AI concrete
Interview with Jonas Rietsch from adigi GmbH, Parkstein
1. Please briefly introduce yourself. What is your background, how did you come to AI?
My name is Jonas Rietsch. I first studied physics in Erlangen and had my first contact with machine learning during my master's degree. In my master's thesis I worked on the classification of sleep phases, a form of time series classification. At that time I was already interested in Natural Language Processing and thus came across adigi during my job search. I have been employed there as an ML Engineer for three years.
2. Which company and product / service are we talking about specifically?
Adigi is a B2B service provider for travel agencies. The agencies forward customer requests to us, which are processed automatically using AI, relieving travel consultants of routine work.
3. Where is AI being used in the company?
The use of AI is the basis for our business model.
4. What significance does AI play for this?
Therefore, AI plays a very large role. Nevertheless, manual steps are still necessary in some cases at present.
5. What algorithms / type of AI do you use?
We use neural networks exclusively: more precisely, pre-trained BERT-like transformers, which we continue to train in an unsupervised way. In addition, we also use smaller networks that are trained "from scratch" in a supervised manner.
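To make this more concrete, here is a minimal, purely illustrative sketch of continued unsupervised (masked-language-model) training of a pre-trained German BERT-like model on domain texts with the Hugging Face libraries. The model name and the example texts are our own assumptions, not adigi's actual setup.

```python
# Illustrative sketch only: domain-adapt a pre-trained BERT-like model via
# masked-language-model training on (hypothetical) travel-request texts.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import Dataset

# Hypothetical domain texts; in practice: large amounts of real customer requests.
texts = {"text": ["2 Personen, 1 Woche Mallorca im Juli, Hotel mit Pool",
                  "Suche guenstigen Staedtetrip nach Rom im Oktober"]}

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-german-cased")

dataset = Dataset.from_dict(texts).map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain_adapted_bert", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()  # masked-language-model training on the domain texts
```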
6. What added value does the AI provide for the user?
Our solution enables travel agencies to process typical requests, such as for package tours, very quickly and to generate very specific offers. As a result, a higher booking rate can be achieved overall than with purely manual processing.
7. Could you have solved the problem with a traditional algorithm without AI? If no: Why was an AI necessary?
No. The data is very unstructured: the "needs" have to be extracted from the text queries, for example the travel period, the number of people, and preferences. Spelling errors can also occur, and place or hotel names in particular would be hurdles. So a rule-based approach would not be possible because of this unstructured nature.
8. What hurdles were there in implementing the AI and how did you overcome them?
There are and were several hurdles. For one thing, detection alone is not enough; it is also important for us to be able to assess the reliability of an AI prediction, so that we can decide whether a human needs to check the result. The raw model output (e.g. softmax values) is not sufficient for this. In addition, entities also need to be normalized; errors can happen during this process and have to be quantified. In general, measuring the quality of the different steps is not trivial. Another problem is the versioning of models and datasets, since typical Git version management only works to a limited extent here. Test-driven development methods also cannot be transferred to machine learning projects easily. We are actively working on solving such problems, for example using hard-coded tests for validation and fuzzy matching against a list of possible values. If scores are not in the green zone, a human intervenes.
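As an illustration of the fuzzy-matching idea mentioned here, the following small sketch (our own assumption of how such a normalization step can look, using only the Python standard library) matches a possibly misspelled destination against a list of known values and escalates to a human whenever the score is not "in the green zone". The list and the threshold are hypothetical.

```python
# Minimal sketch (assumed implementation, not adigi's code): normalise a possibly
# misspelled entity against known values and flag low-confidence matches for review.
from difflib import get_close_matches, SequenceMatcher

KNOWN_DESTINATIONS = ["Mallorca", "Gran Canaria", "Kreta", "Antalya"]  # hypothetical list
GREEN_ZONE = 0.85                                                      # assumed threshold

def normalise_destination(raw: str):
    candidates = get_close_matches(raw, KNOWN_DESTINATIONS, n=1, cutoff=0.0)
    if not candidates:
        return None, 0.0, True                       # nothing to match against: human review
    best = candidates[0]
    score = SequenceMatcher(None, raw.lower(), best.lower()).ratio()
    needs_human_review = score < GREEN_ZONE
    return best, score, needs_human_review

print(normalise_destination("Malorca"))    # close match, likely accepted automatically
print(normalise_destination("Xyz Beach"))  # low score, escalated to a human
```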
9. Where does the data you use for training come from?
The queries we receive generate enough data for supervised learning. In combination with this, the pre-trained models we build on have been trained on large amounts of unstructured text.
10. Did your software developers have prior experience using AI?
The developers working on AI are specialized in it.
11. How did you design the deployment?
We use a micro-service architecture for the individual elements. These are deployed in the cloud.
12. What are your next steps? For example, is the model re-trained on a regular basis?
Re-training happens irregularly, for example, when there is a change to the data, its preparation or the model. Since we mainly fine-tune here, the costs for this are kept within limits. We are constantly developing our models, evaluating alternatives, and working to solve the problems mentioned above, among other things.
13. What would have supported you in your intention to use AI? E.g., advanced training, GPU computing power, memory....
The ability to pre-train our own language model: because of our domain-specific texts, such pretraining would be helpful, but it requires a lot of computing power. Advanced training would definitely be interesting for us, especially with practical content such as deployment in the cloud. For experienced software engineers, beginner-friendly training courses on AI would also be useful.
Interview with Philipp Olenberg from Krones AG, Regensburg/Neutraubling
1. Please briefly introduce yourself. What is your background, how did you come to AI?
I first studied mechatronics in the bachelor's program and then information technology in the master's program. In my studies, I already had contact with AI and data science through lectures. At work, I have had a lot to do with cloud and AI through the project management of digitization projects. I recently became Head of Artificial Intelligence at Krones AG.
2. What company and product/service are you specifically talking about?
Krones uses AI at various points in the value chain - both for internal optimizations and for our products. A concrete example is our Linatronic AI, an inspection unit that has been able to significantly reduce the false rejection rate through Deep Learning.
3. What is the importance of AI for this?
The importance of AI for Krones is growing steadily. It is an important building block of the digital transformation.
4. What algorithms / type of AI do you use?
We use the full range of AI methods. Both symbolic, knowledge-based AI as well as machine learning and deep learning are used, depending on the use case. We use not only supervised and unsupervised learning, but also reinforcement learning (RL).
5. What added value does AI provide for the user?
Through AI, we support our customers in decision-making processes and relieve them of numerous activities.
For example, smart maintenance strategies allow our customers to maintain more efficiently themselves or to purchase support from Krones. Our goal is to maximize line output, reduce scrap, and shorten unplanned downtime. Here, AI also helps us keep production quality high, directly influencing output.
6. Could you have solved the problem with a traditional algorithm without AI? If no: Why was AI necessary?
Some applications could not be solved without AI. One example is scaled control processes that have too many process parameters for normal controllers. We addressed this with RL and initial field test experiences were successful.
In other cases, there were traditional solutions, but they have been significantly outperformed by AI and therefore superseded. These include visual quality inspection, which has been significantly improved by Deep Learning algorithms.
7. What hurdles were there in implementing the AI and how did you overcome them?
For visual quality inspection in production, we have very high real-time requirements for our models, as large numbers of items have to be analyzed every second.
These requirements influence, for example, the decision where to deploy the model, which model type to choose, and how to optimize the model, since all processes on a target system compete for the same resources.
These trade-offs have to be re-evaluated from use case to use case.
In addition, the risk of a total failure or a wrong decision/misclassification must be evaluated and considered accordingly in the design.
With RL, there was also the fact that, unlike trivial use in video games for example, one cannot make an arbitrary number of runs to learn the correct behavior. Especially not in a production system. To do agent training, we had to create simulation environments and digital twins. Machine learning was also used for this purpose.
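To illustrate the principle of training an agent entirely in simulation, here is a toy sketch with a made-up, heavily simplified "digital twin" of a single process parameter and tabular Q-learning. It is not Krones' setup; all numbers and the plant dynamics are invented, and the point is only the agent-environment training loop.

```python
# Toy example: a simulated process parameter that drifts around a target value,
# and a tabular Q-learning agent trained purely against this simulation.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 11, 3          # discretised deviation from target; actions: -1, 0, +1
TARGET = N_STATES // 2

def step(state, action):
    """Toy plant dynamics: the action shifts the state, plus some process noise."""
    drift = action - 1 + rng.integers(-1, 2)
    next_state = int(np.clip(state + drift, 0, N_STATES - 1))
    reward = -abs(next_state - TARGET)   # best reward when on target
    return next_state, reward

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate

for episode in range(500):
    state = int(rng.integers(N_STATES))
    for _ in range(50):
        action = int(rng.integers(N_ACTIONS)) if rng.random() < epsilon else int(Q[state].argmax())
        next_state, reward = step(state, action)
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print("Learned policy (action index per state):", Q.argmax(axis=1))
```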
8. Where does the data you use for training come from?
Fortunately, we can generate our own training data. However, there are always challenges with standardization, quality and harmonization of the data, and with metadata and data descriptions.
9. Did your software developers have prior experience in using AI?
It varies a lot among us in the team. Some colleagues already had prior experience from their job, studies or doctorate. In some cases, however, they have also taught themselves the content. We place a lot of emphasis on training and continuing education.
10. How did you design the deployment?
Since the use cases and target systems (e.g. cloud or edge) are different, the deployment also looks different.
For example, we have deployed AI models as Docker containers, but cloud services such as AWS SageMaker also provide ways to deploy ML models as serverless functions.
11. What are your next steps? For example, will the model be periodically re-trained?
Issues such as data drift and concept drift need to be addressed preemptively and measures for this need to be thought of at the design stage.
Therefore, we have designed monitoring & operations processes from the beginning, which allow us to monitor the quality and performance of deployed models.
If necessary, retraining can then be initiated.
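One common way to implement such monitoring is a statistical comparison of live input data against the training distribution. The following sketch is our assumption of how such a drift check can look, not Krones' implementation; it uses a two-sample Kolmogorov-Smirnov test and an assumed alert threshold.

```python
# Sketch of a simple data-drift check: compare recent feature values against the
# reference distribution seen at training time and raise an alert on drift.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)   # reference distribution
live_feature = rng.normal(loc=0.4, scale=1.1, size=1_000)       # recent production data (drifted)

statistic, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:                                               # assumed alert threshold
    print(f"Data drift suspected (KS={statistic:.3f}, p={p_value:.4f}) -> consider retraining")
else:
    print("No significant drift detected")
```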
12. What would have supported you in your plan to use AI? E.g., advanced training, GPU computing power, storage space....
Computing power and memory are already covered in our case and therefore would not have been necessary. However, since AI is very fast-moving, it is important to keep up with the times. That's why we are always open to networking and exchange, for example on best practices, research approaches, application areas and practical experiences.
Interview with Reinis Vicups and Timo Walter from TIKI GmbH, Weiden i.d. Obpf.
1. Please briefly introduce yourself. What is your background, how did you get into AI?
My name is Reinis Vicups and I am a co-founder and CTO at the Technological Institute for Applied Artificial Intelligence (TIKI). During my studies at TU Riga in Latvia, I worked for Siemens. There I worked early on with precursors of machine learning (ML), e.g. Petri nets, and dealt a lot with automation. Through assignments abroad, I ended up in Nuremberg and have thus been in Germany for 20 years. I moved from Siemens to Samhammer AG and worked there as a developer, architect and project manager. Around 2013 a new project on text analysis and clustering came up at Samhammer, which is how I got into ML. After a couple of years, most of the internal use cases had been implemented successfully, but we didn't stop doing ML there. Funded research projects were then the origin of the spin-off of TIKI from Samhammer AG. One goal was to share AI research results with Bavarian SMEs.
And I am Timo Walter, Machine Learning Engineer at TIKI. I first studied electrical engineering & information technology at the OTH Amberg-Weiden as part of a dual study program with BHS Corrugated Maschinen- und Anlagenbau GmbH. There I took a liking to software development and therefore did a master's in computer science, again as a dual program, in Regensburg. Afterwards, however, I wanted to do "more" than just software development and bring about real change. I could immediately identify with the vision of TIKI and have been here for almost four years now.
2. What is the company about?
TIKI was founded in 2017 and the shareholders are Samhammer AG, Krones AG and Zollner AG. One of the founding impulses also came from the Bavarian Ministry of Economics and the University of Bayreuth. The TIKI business model is to build productive AI applications for our shareholders and selected customers as well as to integrate them into their respective productive environments. In order to solve this task effectively and sustainably, TIKI has built its own AI development environment in the form of the Data Science Platform (DSP) and operates it on its own infrastructure.
In the next step of TIKI development, we will make our AI processes & expertise available to the open market, in the form of joint projects with third-party customers.
Due to our success in building productive AI applications, we currently have a strong demand from some large Bavarian companies who would like to become shareholders in TIKI.
3. Where is AI used in the company?
The name of our company (TIKI) speaks for itself. All activities are based on and focused on productive AI applications. For each use case, we are very quickly able to assess whether AI is worthwhile or whether other solutions lead to the goal. Once we have established the feasibility of AI, we are able to develop the productive AI application very quickly. This is where our slogan "To productive AI in 90 days" comes from. Speed plays a very central role for us.
We do not limit ourselves to individual algorithms or industries. TIKI works horizontally on the production of ML systems. The process of AI development and deployment should become a routine, everyday matter.
Other tasks in the IT world today are very standardized in their process. This is not yet the case with AI, because AI is multi-layered and "magical." We want to help make AI development and use in the future as easy as developing e-commerce projects, for example.
4. What algorithms / type of AI do you use?
We don't have any restrictions on that. We cover everything from classification to clustering to regression and anomaly detection. In the exploration phase, we try out a great many algorithms and finally use the one that adds the most value. This is made possible by our in-house developed DSP (AI development environment), which uses massive parallel processing to train many models and chooses the best algorithm through an empirical comparison. Our AI development environment is heterogeneous. So, depending on the step, different technology is used, always focusing on which tools do the best job for each task.
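The following simplified sketch illustrates the "train many candidates in parallel and pick the best empirically" idea with scikit-learn and cross-validation. It is of course far removed from the DSP itself; the candidate models and the synthetic data are our own assumptions.

```python
# Simplified illustration of empirical model selection: cross-validate several
# candidate algorithms on the same (synthetic) data and keep the best one.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1_000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

scores = {
    name: cross_val_score(model, X, y, cv=5, n_jobs=-1).mean()   # n_jobs=-1: folds in parallel
    for name, model in candidates.items()
}
best = max(scores, key=scores.get)
print(scores, "-> best:", best)
```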
5. What added value does AI offer the user?
Of course, that depends very much on the specific project. A concrete example is our project with the discount grocer Netto Marken-Discount from Maxhütte-Haidhof. This involved the detection of anomalies in sales volumes. In everyday sales, for example, it can happen that a product is suddenly only sold in very small quantities or not at all. The possible reasons for this are numerous. The product may be sold out, incorrectly sorted or simply hidden by other products or inventory.
From the receipt data, we can extract daily forecasts of expected sales for each item per store. If anomalies occur, the store employee is informed so that causes can be investigated and remedied.
So, even though the reasons for the anomaly are chaotic, unknown and unpredictable, and do not appear in the receipt data, we manage to generate added value for the customer. This lies in drawing attention to the anomaly. The cause analysis and solution can then be easily carried out by the employee.
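As a toy illustration of this forecast-based anomaly detection (entirely synthetic numbers, not Netto data; the alerting rule is an assumption), the following sketch flags items whose actual sales fall far below their forecast so that store staff can investigate the cause.

```python
# Toy sketch: flag items whose actual daily sales fall far below the forecast.
import numpy as np

rng = np.random.default_rng(1)
forecast = rng.poisson(lam=30, size=10).astype(float)   # expected units sold per item today
actual = forecast + rng.normal(0, 3, size=10)           # normal day-to-day fluctuation
actual[3] = 1.0                                         # item 3: suddenly almost no sales

relative_shortfall = (forecast - actual) / np.maximum(forecast, 1)
ALERT_THRESHOLD = 0.5                                   # assumed: >50 % below forecast is suspicious
for item, shortfall in enumerate(relative_shortfall):
    if shortfall > ALERT_THRESHOLD:
        print(f"Item {item}: sold {actual[item]:.0f} of expected {forecast[item]:.0f} -> notify store staff")
```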
6. What hurdles did you face in implementing AI and how did you overcome them?
In general, the biggest hurdle for us is preparing the domain-specific data so that you can use it for training. It is estimated that feature engineering accounts for about 75% of the total processing time. The complexity arises from the fact that an understanding of the domain and context must first be created. Data must also often be collected from multiple heterogeneous sources and transformed for training.
To effectively address these challenges, we have developed specific transformation techniques in recent years. This enables us to solve even the most complex challenges in the shortest possible time.
7. Did your software developers have prior experience in the use of AI?
Some had experience from previous jobs, but not to the depth we need today. In our view, deep ML expertise is also not a mandatory entry criterion. We have developed special onboarding methods that enable us to provide new employees with the necessary know-how within two to three months. In addition to technical expertise, we consider a high level of intrinsic motivation and the ability to familiarize oneself quickly with new topics to be decisive.
8. What would have supported you in your intention to use AI? E.g., advanced training, GPU computing power, storage space....
What we lack are technical discussion partners for applied machine learning. We are looking for exchanges with companies that build many productive ML systems. A community for practical use of AI, where it is not about the same basics and abstract questions all the time, but about symmetric exchange on hands-on experiences, problems and their solutions, and technical and algorithmic details.
9. Is there anything else you'd like to say that you haven't had the opportunity to say so far?
Anyone interested in productive AI topics is welcome to contact us. As mentioned, we are eager to exchange ideas and discuss in depth.
Our goal is to initiate a symposium for applied AI solutions in medium-sized businesses in the Amberg/Weiden area. For this purpose we are looking for partners who would like to drive this forward with us.
You can reach Mr. Reinis Vicups at reinis.vicups@tiki-institut.com and Mr. Timo Walter at timo.walter@tiki-institut.com.
Interview with Dr. Thomas Weig from ams OSRAM, Regensburg site
1. Please briefly introduce yourself. What is your background, how did you come to AI?
My name is Thomas Weig, I am originally from the Lake Constance region, and I studied physics, first in Augsburg for my bachelor's degree and then in Stuttgart for my international master's degree. Afterwards, I did my PhD in Freiburg at the Fraunhofer Institute for Applied Solid State Physics, where I worked on ultrashort pulse generation with blue laser diodes. Through joint projects, contact with OSRAM was established at an early stage. My dissertation covered many areas, from simulation, fabrication and characterization to the data analysis of lasers. I then joined OSRAM as a data analyst, driven by a fascination with cross-functional work and data analysis that I experienced during my PhD. At OSRAM, I performed data analysis along the entire process chain, which was very exciting. Since 2019, I have been leading the Data Science area in development in the Opto Semiconductors division at ams OSRAM.
2. What product / service are you specifically talking about?
In our team, we do not work on algorithms or software that are directly applied in the product at the customer's site, but we support development and manufacturing with our work and enable new products through "data-driven" development.
3. Where is AI being used at ams OSRAM?
There are several exciting areas: optimizations, e.g. to increase efficiency, automation of time-consuming business processes, and virtual metrology. The goal of the latter is to predict certain product properties from production parameters. For some new product innovations the testing effort increases, and in some cases certain tests are hardly physically possible at all. Algorithms that can predict these properties very reliably without actually performing the tests can help tremendously.
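A minimal sketch of the virtual-metrology idea, with synthetic data and an assumed gradient-boosting regressor rather than ams OSRAM's actual models: a product property is predicted from process parameters so that physical tests can be reduced.

```python
# Hedged illustration of virtual metrology: learn to predict a product property
# from process parameters (synthetic data; feature meanings are assumptions).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
process_params = rng.normal(size=(2_000, 5))     # e.g. temperatures, times, doses (hypothetical)
product_property = 2.0 * process_params[:, 0] - 1.5 * process_params[:, 2] + rng.normal(0, 0.3, 2_000)

X_train, X_test, y_train, y_test = train_test_split(process_params, product_property, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("R^2 on held-out lots:", round(r2_score(y_test, model.predict(X_test)), 3))
```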
4. What role does AI play in this?
The importance of AI is growing for us. Currently, Deep Learning, for example, is only used in a few solutions; for the most part, classic machine learning algorithms are still in use. For us, the solution to the problem always comes first. We are developing more and more solutions with the help of newer AI methods where we experience limitations with classic methods.
5. What algorithms / type of AI do you use?
On the one hand, as mentioned, we use many classical methods such as linear regressions, decision trees and clustering algorithms, but also Bayesian methods and reinforcement learning (RL). A special feature in our case is that we also include causal research. For example, we also consider causal issues in conjunction with RL. After all, a production flow is a chain of many processes with many interactions. When you analyze the data from that, you find many correlations, but some of them are only spurious correlations, with no direct causal relationship behind them. Our vision is to have a causal model in which RL can be used to intervene. The product properties at the end (e.g. luminous flux, color, ...) are already causally influenced by process steps at the beginning of the chain, so you could intervene early in the first steps to optimize the properties.
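The following toy simulation (our own example, not ams OSRAM data) illustrates the point about spurious correlations: a downstream reading correlates strongly with the final product property only because both are driven by an early process step, and the correlation essentially vanishes once that step is controlled for.

```python
# Synthetic illustration: correlation is not causation along a process chain.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
early_step = rng.normal(size=n)                            # causal process parameter at the start
later_reading = early_step + rng.normal(0, 0.5, n)         # downstream measurement, also driven by early_step
luminous_flux = 2.0 * early_step + rng.normal(0, 0.5, n)   # final product property

print("corr(later_reading, flux):", round(np.corrcoef(later_reading, luminous_flux)[0, 1], 2))

# Regress out the early step from both variables; the remaining ("partial")
# correlation is close to zero, i.e. the downstream reading has no causal effect of its own.
def residual(x, given):
    slope = np.cov(x, given)[0, 1] / np.var(given)
    return x - slope * given

partial = np.corrcoef(residual(later_reading, early_step), residual(luminous_flux, early_step))[0, 1]
print("partial corr given early_step:", round(partial, 2))
```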
6. Your department works in a supportive way for other departments. How do you convince them of the added value of AI?
Most of the time, we don't have to! Engineers come to us with their ideas on how they could be more productive, more efficient, or achieve higher quality results through Data Science. This comes from our deeply rooted engineering mindset in the company. Most of the time, however, AI is needed to implement the ideas, so we don't actually have to do any convincing. The problem then is rather to have the data available in sufficiently high quality to implement the project quickly.
7. What hurdles did you face in implementing AI and how did you overcome them?
The biggest issue is definitely collecting large amounts of data in sufficient quality. In addition, we mainly work with manufacturing data from different plants, locations and products. Moreover, production parameters are not static; changes are virtually part of everyday life. So in addition to the pure collection of data, data drift must also be taken into account. We are currently expanding the infrastructure and improving collaboration between the individual stakeholders to address this issue. This includes, for example, the establishment of a data lake.
8. Where does the data you use for training come from?
Our data is all self-generated on the systems.
9. Did your software developers have prior experience in using AI?
It varies. Some of the newer team members have experience and also PhDs in AI, but many longer-serving team members tend to have backgrounds in physics and math with a soft spot for data. So we are a very heterogeneous team that also benefits from very good domain knowledge.
10. What would have supported you in your intention to use AI? E.g., advanced training, GPU computing power, storage space....
As mentioned earlier, the biggest hurdle is data. But a unified solution would also be helpful for deployment. We are generally not constrained by computing power.
11. How did you design the deployment?
Deployment turns out to be very diverse depending on the use case. For non-production-critical elements, it can simply be deployed on servers with microservices and used via a web interface. Otherwise, another solution must be found for reasons of reliability alone. In production operations, we therefore mainly rely on tools that we already used before for classical algorithms. Although these were not initially designed for AI, they have been expanded accordingly and meet our requirements.
Overall, we are deploying more and more in the cloud - also in combination with the data lake mentioned above.
12. What are the next steps? What is the vision for the future?
From a strategic perspective, we definitely want to improve data quality and ensure greater collaboration and synchronization with IT and production. The data infrastructure with Data Lake will then enable us to work on further use cases.
Closer integration with production will also help generate more ideas and find solutions.
In addition, we will drive forward the topic of causality and continue to research in this direction.
Interview with Dr. Christian Heining from up2parts, Weiden
1. Please briefly introduce yourself. What is your background, how did you come to AI?
My name is Christian Heining, I am Chief Innovation Officer at up2parts GmbH and responsible for Product Innovation, Data Science and Machine Learning. Originally, I graduated as a technomathematician and actually had nothing to do with machine learning during my education, but there was a lot of overlap, for example with statistics, computer science and mechanical engineering. So what I do today actually started during my studies. During my doctorate, I worked on simulation, CAx and algorithm development, and that's how I first came into contact with data processing, in a form that would probably be called data engineering today. After that, I worked in product development for a few years, where I also trained the first simple predictive models. At that time, the terms "Data Science" and "Machine Learning" were also slowly emerging in the industry. I then worked as Head of Research and Development at BAM GmbH in Weiden, Germany, and since the spin-off of up2parts I have been CIO here. Machine learning is now an essential part of my everyday professional life.
2. What company and product/service are you specifically talking about?
up2parts is a young software company from Weiden, Germany, which now has around 70 employees. We develop software solutions for manufacturing, especially for machining. We support manufacturing companies in their daily work to automate their processes, for example in the calculation of components, and to make implicit knowledge explicitly available. Our end product is a web-based service solution in which AI is an important component.
3. Where is AI being used in the company?
At our company, AI is part of the products and is made available to users via our solutions, for example through web services.
4. What significance does AI play in this regard?
Overall, the importance is great, but also highly dependent on the respective subtask. Let's take an example application with the task of automatically generating routings from CAD models. It must therefore be predicted how the article will be manufactured. This process consists of many sub-problems, which also require a lot of domain knowledge. However, some of the input data is so complex and unstructured that purely model-based methods are not possible. Overall, however, we only use AI where it is necessary. Machine learning is not an end in itself for us.
5. What algorithms / type of AI do you use?
Since we have to solve many individual problems, the range of algorithms used is wide. Much of it is supervised machine learning with classification or regression, but we also use clustering methods. No deep learning is used, as we have too little data for that. In part, we also use public, pre-trained models from the literature, e.g. autoencoders. Due to the data situation, we are currently also working with synthetically generated data for one of our problems. We also always try to evaluate new methods and keep up with current scientific developments. Therefore, we are actively involved in various research projects.
6. What added value does AI offer the user?
The greatest added value is that we manage to make the user's personal, implicit manufacturing knowledge explicitly available in an automated way. The users who work with the CAD models are mostly real manufacturing experts with dedicated expertise. They need to understand the CAD model, for example, but also how manufacturing works in the field. We provide some core functionality, but the customer primarily works with their own data. This means that the AI provides suggestions, but these can also be edited by the customer. In a sense, he generates his own new labels. The machine learning models are automatically retrained every week. This allows the system to learn how the user works in the form of algorithms and automation. In the process, the user can also interact collaboratively with colleagues, as our system supports multiple users. Less qualified personnel can then also benefit from this, which will be a great advantage in view of the shortage of skilled workers. In this way, we make knowledge that is only implicitly available in the company available in a scalable way.
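Conceptually, the loop described here can be sketched as follows. This is an assumed, heavily simplified illustration, not up2parts' implementation: the model suggests, the expert may correct, the corrected label is stored, and a weekly job retrains on the accumulated data. Feature and label meanings are hypothetical.

```python
# Sketch of a human-in-the-loop retraining cycle with synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(200, 8))        # hypothetical features derived from CAD models
y_labeled = rng.integers(0, 3, size=200)     # e.g. which machining process to use

model = RandomForestClassifier(random_state=0).fit(X_labeled, y_labeled)

def handle_new_part(features, expert_correction=None):
    """Model suggests a label, the expert may override; either way a fresh label is collected."""
    suggestion = int(model.predict(features.reshape(1, -1))[0])
    final_label = expert_correction if expert_correction is not None else suggestion
    return suggestion, final_label

def weekly_retrain(X_new, y_new):
    """Triggered automatically once a week with the customer's accumulated corrections."""
    global model, X_labeled, y_labeled
    X_labeled = np.vstack([X_labeled, X_new])
    y_labeled = np.concatenate([y_labeled, y_new])
    model = RandomForestClassifier(random_state=0).fit(X_labeled, y_labeled)

# Example: the model suggests, the expert overrides with label 2, the pair is stored and retrained on.
features = rng.normal(size=8)
suggestion, label = handle_new_part(features, expert_correction=2)
weekly_retrain(features.reshape(1, -1), np.array([label]))
```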
7. What hurdles were there in implementing AI and how did you overcome them?
We built the system from scratch, so a lot comes together in the beginning and sometimes you run into dead ends and fail. It was also not at all clear in the beginning what methods we would use, what the architecture and infrastructure would look like, etc. From a machine learning perspective, of course, missing data is always problematic. We maintain good contact with customers like BAM, which has helped us a lot and also gives us deep insights into the manufacturing domain. Some customers also lack an understanding of the importance of data and necessary data was not recorded for a long time, for example. However, this is currently getting better and better.
8. Where does the data that you use for training come from?
The basis is data from our partner companies. We use this as the basis for training with customer data - separated, of course, by security mechanisms. So the user automatically continues to train his "personalized" and individual AI in live operation.
9. Did your software developers have previous experience in the use of AI?
It is important for us to have cross-functional teams, because we always have a product for manufacturing employees as a goal. Therefore, it is important to combine many domains starting from manufacturing knowledge to Machine Learning in one team. However, the ML algorithms are only one part of the overall solution. In the same way, knowledge of Docker, microservices, continuous deployment and cloud technologies, for example, is important. Many of our employees have a background in mathematics, physics or software engineering, but some newer employees also bring knowledge of ML from their studies. But using ML models in reality is usually much more complex than you are used to from your studies, for example because the data sets are not prepared. Especially at the beginning, we built up the necessary machine learning knowledge through further training.
10. How did you design the deployment?
Our solution is based entirely on microservices implemented with Docker or Kubernetes and running in the cloud. In total, the infrastructure contains about 40-50 microservices, many of which of course perform more "typical" application tasks and have nothing to do with ML. Each service has its own CI/CD pipeline, so we can and do deliver on a daily basis.
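As an illustration of what one such ML microservice might look like in Python, here is a minimal FastAPI service that holds a model and exposes a prediction endpoint; packaged in a Docker image it would become one of the many services mentioned above. This is our own sketch, not up2parts' code: the endpoint, feature format and placeholder model are assumptions.

```python
# Minimal sketch of an ML prediction microservice (hypothetical endpoint and model).
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.dummy import DummyClassifier
import numpy as np

app = FastAPI()

# In a real service the trained model artifact would be loaded from disk (e.g. via joblib);
# a trivial placeholder model keeps this sketch self-contained.
model = DummyClassifier(strategy="most_frequent").fit(np.zeros((4, 3)), [0, 1, 1, 1])

class PartFeatures(BaseModel):
    features: list[float]          # e.g. numeric features derived from a CAD model

@app.post("/predict")
def predict(part: PartFeatures):
    label = model.predict([part.features])[0]
    return {"suggested_process": int(label)}

# Local test run (assuming this file is service.py):  uvicorn service:app --port 8000
```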
11. What are your next steps? For example, is the model re-trained on a regular basis?
The models are re-trained every week at the respective customer with their current data, this happens fully automatically and without any intervention by us. In addition, we of course continuously improve our infrastructure and the models. At the moment, the predictions still have to be checked by humans. However, this fuzziness is very difficult to remove, as individual users would evaluate the situations differently. But we are working on being able to push automation even further in the future. Other than that, we also want to make the models explainable or interpretable, keyword xAI (explainable AI), so that the user can better evaluate the predictions. From a domain perspective, we want to understand and analyze the CAD models even better, since they are very complex data.
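One simple xAI technique that points in this direction is permutation importance. The following sketch uses synthetic data and is our example rather than up2parts' chosen method; it shows which input features a trained model actually relies on.

```python
# Illustrative explainability sketch: permutation importance on a synthetic classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")   # higher = model relies on it more
```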
Interview with Robin Griehl of umlaut energy, Aachen
1. Please briefly introduce yourself. What is your background, how did you come to AI?
My name is Robin Griehl, I studied industrial engineering in Hamburg with a focus on energy technology and dealt with topics such as AI and machine learning (ML) in my master's degree. This is how I ended up at umlaut energy, where I was able to directly co-author a study on AI use cases for power grid operators. Through umlaut energy, I then also got the opportunity to write my master's thesis in the field of ML on a customer project. In it, I developed a predictive maintenance algorithm for solar inverters that determined the probability of fan failure.
After that, I stayed at umlaut energy and have been here for almost two years now, working on the topics of digitalization in grid engineering and the identification of AI use cases for power grid operators.
What I find very exciting about our projects is that we always work directly on the energy transition. All the topics you see in the news end up with the energy supplier and grid operator. That's exactly where our projects come in - so we are involved in the very current and socially important topics.
2. What services does umlaut energy offer?
We really start from scratch by first identifying AI use cases. These are then elaborated into a strategy and a prototype is made. If the customer so desires, we can also supervise the go-live, i.e., put the models into production.
The main use cases have turned out to be projects for various forecasts, e.g. of power generation and consumption. But the prediction of prices on the power exchange, or of unusually high river levels that can cause failures of electrical equipment, is also a relevant task for AI.
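A strongly simplified sketch of such a forecast, with synthetic data instead of real grid measurements: lag features are built from a load time series and a regressor predicts the next day. The chosen lags and model are our own assumptions.

```python
# Toy consumption forecast: lag features from a synthetic load series, next-day prediction.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
days = np.arange(730)
load = 100 + 20 * np.sin(2 * np.pi * days / 365) + 10 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 3, 730)

LAGS = [1, 2, 7]                                           # assumed: yesterday, two days ago, last week
X = np.column_stack([np.roll(load, lag) for lag in LAGS])[max(LAGS):]
y = load[max(LAGS):]

model = GradientBoostingRegressor(random_state=0).fit(X[:-30], y[:-30])
mae = np.abs(model.predict(X[-30:]) - y[-30:]).mean()
print(f"Mean absolute error on the last 30 days: {mae:.1f} (same units as the load)")
```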
Another important topic is predictive maintenance. More and more sensor technology is being installed in operating equipment and the power grids. The amount of data that can be used for intelligent maintenance is therefore increasing overall.
In this context, the term data governance is very important. You have to clarify how the data is collected, who is responsible for the data, how and by whom it is evaluated and further processed. Only then can you actually devote yourself to AI models.
Predictive maintenance is also very interesting for power grid operators because there is the Incentive Regulation Ordinance, which creates financial framework conditions that require them to become a little more efficient every year. AI-based predictions are needed to intelligently maintain operating equipment and use it in a cost-efficient way. This field will definitely grow a lot in the coming years.
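As a toy illustration of such a predictive-maintenance model, loosely inspired by the fan-failure example mentioned earlier but with entirely synthetic data and assumed features: a classifier estimates a failure probability per asset so that maintenance can be prioritized where the risk is highest.

```python
# Sketch of a failure-probability model on synthetic sensor features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
# Hypothetical features per inverter fan: operating hours, average temperature, vibration level.
X = rng.normal(size=(1_000, 3)) * [5_000, 10, 1] + [20_000, 45, 2]
risk = 0.0004 * (X[:, 0] - 20_000) + 0.05 * (X[:, 1] - 45) + 0.8 * (X[:, 2] - 2)
y = (risk + rng.normal(0, 1, 1_000) > 1.0).astype(int)      # 1 = failed within the next period

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
new_fan = np.array([[28_000, 55, 3.1]])                      # an aged, hot, vibrating fan
print("Estimated failure probability:", round(model.predict_proba(new_fan)[0, 1], 2))
```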
3. What is the significance of AI for the energy industry?
The topic of decarbonization of the energy system, i.e. the energy turnaround, is highly topical. Of course, the well-known problem with renewable energies is that they are not always available, but are subject to strong fluctuations. This poses challenges for power grid operators.
At the same time, more and more people are using electromobility and, keyword sector coupling, more and more heat is also being generated via electricity. These developments are leading to completely new electricity consumption profiles.
So we have strongly fluctuating profiles not only on the generation side, but also on the consumption side. This leads to completely new problems that have to be dealt with.
In our view, this is precisely what AI is absolutely necessary for, because in the future we will be dealing with very complex systems and very large volumes of data.
4. What algorithms / type of AI do you use?
This is very application and project specific.
We notice that you often get very far with simple methods. Also, explainability of predictions is often important, for which small, simple models are better because they are easier to interpret.
Especially when we are dealing with critical infrastructure, we cannot simply use black-box models.
5. What is the added value of AI for the customer?
The biggest added value is clearly increased cost efficiency. This ranges from more efficient maintenance of equipment to more clever management of power consumption, so that the cost to the end customer is reduced and the services that are useful to the grid can be provided.
If we think about this further in terms of system management, volumes of data come into play that are so large and complex that they can no longer be analyzed manually, but require AI tools.
We often start with smaller projects. These then bring the added value of integrating AI into the company processes in the first place and gaining experience with such projects. Because in the end, it is still necessary to react to the model forecasts. These quick wins are very helpful for this.
6. What hurdles did you face in implementing AI and how did you overcome them?
Digitization is not yet very advanced in parts of the energy industry and among power grid operators. The primary focus is on physical assets that are not very digitized, and data is not collected or is scattered in data silos. Often, the right amount of data of the right quality for large Deep Learning models is simply missing.
That's why we often start our projects with a strategy concept and strengthen the topics of data governance and data literacy for our customers.
In addition, the issue of data protection must always be considered. At the moment, for example, there are again debates about data from smart meters, the rollout of which has been slowed down for a long time for data protection reasons. We must therefore always take into account the GDPR and compatibility with data protection in our projects and use cases.
7. Where does the data you use for training come from?
The data always comes specifically from the current customer project. That's why missing data and the quality of existing data often remain a hurdle.
8. Did your software developers have prior experience in using AI?
We are growing strongly as a team in all areas. Most colleagues have a background in industrial engineering or electrical engineering, and some also have prior experience in the energy industry or AI. From our point of view, however, a lack of prior AI experience or machine learning skills is not an exclusion criterion. Project-relevant knowledge of the energy industry can also be learned later, e.g. via a Jumpstarter program.
9. When forecasting river levels, it is conceivable that changes in the climate could lead to a data drift. How is this taken into account? For example, is the model regularly retrained?
That always depends on the individual project and our order. Often this ends with the delivery of a prototype. For longer-term projects, drifts must of course be detected, for example through monitoring.
10. Would you like to say anything else that you haven't had the opportunity to say so far?
We are very happy to exchange ideas with the research community and try to give students and young professionals the opportunity to get a hands-on look at applications of AI, for example, through internships and thesis supervision.
In addition, we are also interested in exchanging ideas with other companies in the field of AI to further complement our knowledge in this area and discover possible future collaborations.
We also recently published our final report on the Data4Grid project commissioned by the German Energy Agency. The project provides a good overview of the main challenges and opportunities for AI deployment at power grid operators. In addition to identifying suitable AI use cases, the three most promising use cases were also prototyped as part of a start-up challenge.
The report with the project results can be found here: Link
Interview with Manuel Gollner of the GO! Institut
1. Please briefly introduce yourself. What does your career look like?
My career is characterized by a passion for innovation and change. My life took a decisive turn in 2000 when I left my position as a civil engineer to join an Internet startup. I wanted to be part of the exciting Internet industry, which was in its early stages at the time.
My first encounter with Artificial Intelligence (AI) and data then took place in 2012, when I co-founded a startup that would enable context-sensitive advertising using image recognition. We were looking into machine learning and labeling, and even had conversations with Google, which was working on a similar product at the time.
However, we had to realize that the topic was a challenge in terms of both algorithms and availability of data, which we were not yet able to master at the time. From today's perspective, I would describe us as rather naive. We had completely underestimated data. Nevertheless, it was an instructive experience. This later gave rise to various consulting and training projects in the field of digitization, which ultimately led to the founding of the GO! Institute in 2019. Our focus is on the importance of data for innovation.
2. What company and what service are we specifically talking about?
The GO! Institute is a combination of change and innovation consulting focused on helping organizations establish a positive data culture. Our clients are often chief data officers (CDOs) or heads of business intelligence (BI) departments who have great ideas and initiatives, but often encounter employees who are unaware of the value of data. Many managers also don't know how to effectively carry the topic into the business departments.
So we help bring the topic to the breadth of the company so that data and AI can play supporting roles for our customers in the near future. We help answer the questions: How can I make the topic of data palatable? How can I break down (data) silos? What do I need to train and teach in order to make data usable as quickly as possible? In our view, the topic of data is actually even more important than the topic of AI.
In the end, concrete goals are to be achieved, often with customized training and information events that make the topic of data visible and attractive to all employees.
3. With such "soft" factors, one question is obvious: Why all this? Isn't it enough to employ individual data specialists?
Our customers can often implement data projects themselves. However, not all of them succeed in translating the added value of data into the language of the business departments. We help to explain complicated things in a simple way in order to establish a uniform understanding of the topic of data. At the end of such a process, it would be great if there were also data managers in the departments who understood data and could evaluate the results of data scientists and classify them competently in terms of added value for the department. That would then be a living data culture.
The main goal of our customers is therefore to sensitize and activate the entire organization for the topic of data. Because without data there is no AI - this is a realization that I have also deeply internalized since 2012. Many employees are simply not aware of the potential of data and AI.
4. What are problems you encounter in companies?
A first step is often to break down silos. Here, too, you have to mediate, because some silo managers don't want that at all.
This can be achieved, for example, via podcasts or short video interviews on the topic of data and internal company projects. Specifically, we prefer to identify the "small lighthouses" that have brought incremental but real, data-driven improvements.
In doing so, we lay the groundwork for new ideas and desires for data innovation in the other departments. We also try to activate networking around data with other departments. The more data, the more potential for innovation.
Caution is advised, however, when people are forced to take an interest in the topic. That works too, but the trick is rather to activate the intrinsic motivation of employees and managers. The topic of data is extremely exciting and should be presented as such. If a senior manager becomes aware of the relevance of data and the added value for his own goals and his own area, then interest will follow all by itself. And then there are no more problems - at least data-cultural ones.
5. What significance does the progress of AI play for your business model?
The question is difficult to answer for our offering. Algorithms attributed to artificial intelligence have already changed our world massively and will continue to do so - more massively and even faster than we can currently imagine. So dealing with the topic of AI is not a question of differentiation, but rather a question of survival. The only thing is that I can't train AI without data. Nobody comes to us who wants to use AI, but rather someone who wants to anchor awareness of the elementary importance of data in their own organization. Currently, there is still work to be done - so I am not worried about us. Nor about whether humans will be replaced by AI. But if you don't deal with AI, you will be replaced by a human who knows how to use AI. We are convinced of that.
6. On your website you say "We need to be agile." Why?
Today, you often don't know which project will be successful. At the same time, digitalization is creating global competition and disruptors from many directions. This is accelerated once again by the topic of AI.
So as an entrepreneur, I have to be innovative, but at the same time not jump on every trend. This is a dilemma. Because I can only build up expertise and experience in topics such as data or AI with in-house projects. However, the costs and benefits can quickly drift apart.
You have to constantly check and be ready in good time to drop a project or change direction completely, in other words to act agilely. For me, agility means not losing: either you win or you learn. In small increments this is not difficult; it becomes problematic when a complete business model is threatened. Not even the tech giants are exempt from this where AI is concerned. A good example is Google's current reaction to ChatGPT from OpenAI and its partner Microsoft: although Google invented the underlying technology, Microsoft is now the one exploiting it commercially. This example shows how quickly even the biggest tech companies can be disrupted. Now it will be exciting to see how agile Google really is and how it will react.
With digitalization, you don't know where change is coming from these days. Digitalization is ultimately data-driven, and your own data means innovation potential. Those who don't take advantage of that are at a disadvantage. This should be clear to everyone in the company.
7. What measures can I take as an employee to strengthen the data culture in the company? And what measures can I take in a management position?
To strengthen the topic of data culture in the company, you should first identify fellow campaigners with a passion and sense of mission for the topic. It's also good to work in a small team to publicize the relevance of data internally piece by piece.
It can also help to pick up on current trends on the topic of data and digitization. E.g. What is happening in the field and in the industry? What basics do I need? The communications department can also be involved here to create short articles or videos, for example. The message should be: The topic is important, we deal with it as a company, we have competencies (or are currently building them) and any of your questions are welcome.
We also recommend getting feedback on data projects from employees who have nothing to do with data. This often helps you to quickly recognize whether the projects seem to make sense and to achieve interest and connection to the topic through participation.
A manager should try to integrate data as vividly as possible and point out how important the topic is. Of course, this creates work, but it also creates transparency and deepens the knowledge and the handling of data and its interpretation. One possibility, for example, is to have team members create statistics of the week, which are then presented and discussed in the team. There are many variations - even errors can be built into such data to make the topic more playful. Using interactive dashboards, making decisions based on experience and proven by data - or disproven by data - that would be the silver bullet.
And if it hasn't been done yet, you should definitely establish a central place where all the data converges and is collected. You can compare this with Lego bricks: first of all, you need bricks (data) so that you can build something. For those who prefer a more practical approach: predictive maintenance, i.e. the prediction of a certain behavior of a machine, only works if you have access to historical data and the resulting events. Without this treasure trove of data, current data is of little use in predicting future behavior. This may sound trivial to many, but in some companies it is rather unknown among the broad workforce. However, such "small" insights often already fundamentally change the view of the relevance of data for the improvement of various processes.
8. What prior experience and starting point is needed for such training?
The basic prerequisite is, of course, the desire in the company to allow data to be an objective voice, to use data and to search for data in the company. This requires a responsible person who not only has the authority and the mandate to develop the topic, but is also equipped with the corresponding capacities.
For a training itself, care should be taken to pick up the participants where they are. Of course, I need to know in advance who I want to address, what level of knowledge the group has, and what competencies and skills related to data should be taught. Accordingly, the content is then designed in such a way that preparation is not necessary. Especially with a cultural topic, the primary goal is to get people excited about a topic - this also applies to data culture. If I have to prepare first, then that would be more of an initial demotivation that must be avoided at all costs.
9. Would you like to say something else that you haven't had the opportunity to say yet?
I would like to emphasize how important it is to have an awareness of the fundamental importance of one's own data for future innovations, sound decisions and accurate predictions. In every company, a comprehensive and clean data collection should be established, preferably managed and maintained by internal knowledge holders and accompanied by a positive data culture throughout the organization.
Whether the GO! Institute can help you with this, of course I cannot say for sure. However, we would be happy to discuss this possibility in a meeting and find out together how we can support you. After all, we are always excited to talk about the fascinating world of data and happy to share our experiences!
Interview with Dr. Johann Neidl from HORSCH
1. Please briefly introduce yourself. What is your background, how did you come to AI?
My name is Johann Neidl, I have been working for HORSCH since 2013. Previous positions were at Siemens Automotive (sales, project management, strategic planning), Grammer (purchasing, supplier management) and at the University of Applied Sciences in Landshut (professorship for procurement and quality management). Since 2019, I have been responsible for the newly created area of digitalization at HORSCH. This includes the units IT, SAP & Applications as well as the newly founded unit d.LAB. The topic of digitalization has been in focus for about 4 years and is of strategic importance. The application of artificial intelligence is a component of the strategic orientation here. The goal is to make products and processes more efficient and to further develop them in a customer-oriented manner.
2. Who is HORSCH and what does it have to do with digitalization?
HORSCH is a family-run company with currently about 3,000 employees. We are a manufacturer of agricultural machinery and currently have four product units: tillage, seeding, precision seeding and crop protection. In terms of manufacturing and sales, we operate globally.
When it comes to digitalization, we are primarily pursuing two directions.
- To digitalize/automate manual, recurring processes, using modern technologies.
- To equip our products with digital solutions in order to generate maximum added value for our customers.
We took our first steps in the field of AI by working on use-cases as part of the "AI Transfer Plus" program. We then successively built up the structure and know-how to drive forward the topic of digitization, and in particular the verification and application of new technologies such as AI.
3. Where is AI used in your company?
As mentioned, we want to digitize/automate manual, recurring activities as much as possible. In doing so, we carefully examine whether the use of AI makes sense and brings added value.
Here are a few application examples where we are currently using AI:
One topic area is automating the creation and maintenance of master data, such as the creation of estimated prices, or the definition of customs tariff numbers. Depending on the use-case, we completely eliminate work processes, such as in the determination of the estimated price, or use the results as support for decision-making, such as the determination of customs tariff numbers. The optimization potential and efficiency gains are consistently very high.
Another use-case was the development of a dispo AI for materials planning. The goal here was to increase the degree of automation in the processing of parts disposition and thus to create free space for the buyers to deal with more value-adding activities than continuously performing similar disposition tasks day in and day out.
Through our Horsch Connect - IoT project, we are collecting machine-relevant data on the basis of which we want to handle topics such as predictive maintenance, spare parts forecasts, remote diagnostics, etc. in the future.
For each use-case, we critically examine to what extent it makes sense to use AI and whether the results from the AI model are valid, what risk we take and how we design the continuous training process of the AI model.
4. Which algorithms / type of AI do you use?
We use neural networks as well as simple regression and classification models. Depending on the complexity of the problem, we also choose simpler ML models to save computing power. The choice is highly dependent on the project.
5. What added value does AI offer the user?
As already mentioned, our goal is to realize efficiency gains using new methods and approaches. This is largely the case when AI solutions are applied to eliminate or partially automate processes. It always depends on the respective use-case.
Furthermore, we want to provide our customers with product-specific and agronomic information to help them make better and faster decisions. In this regard, we likewise use AI-supported solutions.
6. What were the hurdles in implementing AI and how did you overcome them?
The biggest hurdle was to quickly build up the necessary know-how. With the support of our partner trinnovative, we developed a concept on how to overcome this hurdle as quickly as possible in order to be able to implement the planned topics.
Another hurdle was and is the topic of data availability and data quality. In order to be able to address an issue in a targeted manner, the first step is to check whether the necessary data is available at all and in what quality, otherwise you put a lot of effort into something where you realize too late that it cannot be implemented in the planned form.
Another insight we have gained is that initial results and successes (POCs) can be realized very quickly. However, getting a solution ready for series production so that the entire organization can work with it with the usual stability and security always takes a longer period of time than expected.
7. Did your software developers have previous experience in the use of AI?
No, we built up the AI area from scratch.
The first step was to focus on digitization in general. To start with, we motivated interested employees from different areas to work on the topic of digitization in the form of a "virtual team (loose bunch)".
This "virtual team" (no fixed structure, based on voluntariness) met regularly and dealt with topics such as the importance and impact of digitization for HORSCH, relevant use-cases, first POC, bringing the topic into the organization, etc. The second step was to set up a small "virtual team".
In the second step, we then permanently installed a small team (d.LAB) with the task of driving relevant digitization topics forward. In the course of this, we focused more and more on the topic of AI. The virtual team continues to exist and acts as an idea generator, discussion platform and, in particular, as a transformer into the respective areas.
At the same time, we are working with several start-ups in order to use this channel to accelerate the implementation of projects, the development of expertise and the training of employees.
8. What would have supported you in your intention to use AI? E.g., advanced training, GPU computing power, storage space....
A big help for us was the AI Transfer Plus program implemented by the state government. This allowed us to quickly build up necessary know-how based on real use-cases. In general, it is important to quickly build up your own know-how when dealing with the topic of AI. External support is very useful in the beginning to quickly gain momentum. However, for strategically oriented processing, one's own expertise is indispensable.
Using existing networks and clusters is also an effective means of getting started quickly.
9. What are your next steps, what are your plans for the future?
Over the next few months, we will successively roll out and further develop the aforementioned topics such as the dispo AI, automation of master data systems, IoT applications, a translation platform, etc.
Our roadmap, in which we have defined the strategic direction, serves as the basis for this. Topics such as autonomy, further automation of processes, and IoT topics are the focus here, with the aim of generating maximum benefits for our customers.
10. Is there anything else you would like to say that you have not had the opportunity to say so far?
Our most important insight of the last few years is that if you don't deal with AI, you miss out on a lot of potential for optimization and further development. The important thing is to have a strategy for what you want to achieve and which questions and topics are the focus, so as not to burn resources unnecessarily.