What is the configuration when the product has an AI?

In my previous article, Export Control and Machine Learning, I wrote about how difficult it can be to determine how export rules apply in complex cases like Federated Learning. This series of articles was triggered by the excellent podcast ‘Maintaining the Physics Model within AI & ML’ (Part 1 and Part 2) by Arnaud Hubaux from ASML and Max Gravel from IpX, and by Arnaud’s video Taming the AI Demon to sustain innovation.

This article will explore the question: What is the configuration of the product, and how do you maintain it when AI is involved?

So what is the configuration, how do you record that configuration, and how do you deal with the fact that, while the AI is in use, it keeps learning and changing its outcomes? How do you deal with changes to the product, but also to regulations like Export Control?

What about the fact that the application context of such an AI has an impact on the training set, and therefore on the outcome and performance of the AI? How do you deal with updates to this? Do you need to make application context-dependent configurations of the AI?

Is the AI part of the configuration of the product?

Like any component or assembly, an AI is part of the product, and therefore part of the configuration of the product. That statement might be oversimplifying things, because it can be more complex. What if the AI runs in the Cloud and can support multiple devices/systems in the field? Is the function the AI provides an integral part of the device/system, or is it a service that can be used but that the device/system can also work without? In the first case, it makes sense to define the AI as part of the product baseline. But if it is a service that might be used or not, or can be subscribed to, it can also be seen as a separate product or service. In that case the AI does not need to be part of the product baseline; it needs its own baseline, linked to the product via the interface it provides.
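To make the distinction concrete, here is a minimal sketch of the two options in Python. All class and field names are illustrative assumptions of mine, not taken from any CM standard or tool:

```python
from dataclasses import dataclass, field

@dataclass
class BaselineItem:
    item_id: str
    version: str

@dataclass
class ProductBaseline:
    product_id: str
    version: str
    items: list = field(default_factory=list)       # components in this baseline
    interfaces: list = field(default_factory=list)  # links to external baselines

# Case 1: the AI is an integral component -> part of the product baseline.
embedded = ProductBaseline("device-X", "2.3")
embedded.items.append(BaselineItem("ai-model", "1.4"))

# Case 2: the AI is a cloud service -> its own baseline, linked via an interface.
service = ProductBaseline("ai-service", "1.4",
                          items=[BaselineItem("ml-model", "1.4")])
device = ProductBaseline("device-X", "2.3")
device.interfaces.append(("ai-service-api", service.product_id, service.version))
```

Either way, the AI is under configuration control; the choice only determines which baseline owns it and what the link between baselines looks like.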


In any case, if the AI is used, a record needs to be in place for the actual configuration of each instance, indicating which version of the AI is being used and which training dataset was used to train it. An older version of the AI could, under certain conditions, produce different outcomes than a newer version, which can be relevant in case of incidents. Likewise, different training datasets can lead to different outcomes of the AI. This can have an impact on accountability, e.g. if the owner/user of the device/system did not upgrade to the newer version when it became available.
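A minimal sketch of what such a status-accounting record could look like; the schema and field names are my own assumptions, not a standard:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AsMaintainedRecord:
    device_id: str
    ai_version: str           # version of the AI actually running on the instance
    training_dataset_id: str  # dataset used to train that version
    recorded_at: str

def record_configuration(device_id: str, ai_version: str,
                         dataset_id: str) -> AsMaintainedRecord:
    """Capture which AI version and training dataset a device is using."""
    return AsMaintainedRecord(
        device_id=device_id,
        ai_version=ai_version,
        training_dataset_id=dataset_id,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )

# After an incident, the record shows whether the device was still running
# an older AI version or an older training dataset at the time.
print(record_configuration("device-42", "ai-1.3", "trainset-2021-06"))
```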

Is the training dataset used to train the ML Model part of the configuration of the product?

The training dataset is a vital part of building a usable ML Model, so it needs to be under configuration control. The question is whether the training dataset is part of the baseline of the product, or part of a different baseline, like the tools you use to assemble your product. The training dataset is used to train the ML Model, but it is not necessarily shipped with the product, and therefore not part of the actual baseline of the product. Compare it to using a wrench to tighten a bolt, or a compiler to compile your code into an executable. The bolt is part of the configuration of the product; the wrench is not, but it is part of a different baseline and linked to the product configuration as an enabling item via the Process Plan/Bill of Process and its Operations/Work Instructions. The same goes for the code and executable, which are part of the product configuration, while the compiler is not. The same compiler or wrench can be used to compile or assemble other products.
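As a rough illustration of that separation, the structure below keeps the dataset out of the product baseline but traceable through the Bill of Process; all identifiers are invented for the example:

```python
# What ships with the product: the trained model, not the dataset.
product_baseline = {
    "product": "device-X 2.3",
    "items": ["ml-model 1.4", "runtime 5.0"],
}

# The Bill of Process links the dataset in as an enabling item,
# just like a wrench or a compiler: controlled, but not shipped.
bill_of_process = {
    "operation": "train-ml-model",
    "produces": "ml-model 1.4",          # output lands in the product baseline
    "enabling_items": [
        "trainset-2021-06",              # under configuration control
        "training-pipeline 3.2",
    ],
}

# Traceability: from the shipped model back to the dataset that trained it.
assert bill_of_process["produces"] in product_baseline["items"]
```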

How do you deal with changing regulations like Export Control?

Like other regulations, Export Control rules can change, sometimes very quickly. Where you were allowed to export certain information to certain countries one day, the next day it might no longer be permitted. That also means that if you use an AI as part of your product and have Federated Learning in place, for instance, such a change can have an immediate effect on whether or not you may send an update to a device located in a banned country. As indicated earlier, each device must know its location and user, or you must be able to verify this before sending the update; otherwise, you might violate the Export Control rules. This capability must be in place from the start, because it allows you to make sure that updates are only sent to authorized receivers. In any case, status accounting must record that the user and location were verified before the update was sent.
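A hedged sketch of such a gate: the country codes are placeholders, and the verification call is a stub standing in for whatever trusted registry of device locations and users you actually maintain:

```python
BANNED_COUNTRIES = {"XX", "YY"}  # placeholders; driven by current Export Control rules

def verify_device(device_id: str) -> dict:
    """Stub: in practice, query a trusted registry for the device's
    current location and whether its user is authorized."""
    return {"country": "DE", "user_authorized": True}

def send_update(device_id: str, payload: bytes, audit_log: list) -> bool:
    """Gate an update behind location/user verification and record the check."""
    status = verify_device(device_id)
    allowed = status["user_authorized"] and status["country"] not in BANNED_COUNTRIES
    # Status accounting: record that location and user were verified,
    # and whether the update actually went out.
    audit_log.append({"device": device_id, "check": status, "sent": allowed})
    if allowed:
        ...  # transmit payload to the device here (omitted)
    return allowed

log: list = []
send_update("device-42", b"model-update", log)
print(log)
```

The important design choice is that the audit record is written whether or not the update is sent, so the status accounting survives a rule change either way.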

What about Impact Analysis when the product has an AI?

In Part 2 of ‘Maintaining the Physics Model within AI & ML’ Max Gravel pointed out that (paraphrased):

“Impact analysis must take into account the “personality of the machine”! This is redefining end-to-end thinking!”

A product that has an AI, or uses an AI to function or to improve its function, has a lot of dependencies. Different features and options have an impact on hardware and software settings, which in turn have an impact on the behavior of the device. Knowing all these dependencies is key to successful impact analysis. This starts in MBSE (Model-Based Systems Engineering), where the functions, logical components, and hardware components and their dependencies are identified and defined. These dependencies need to be maintained in such a model so that impact analysis can find the relevant dependencies for a specific change, as sketched below.
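As a simple illustration (not any vendor’s actual tooling), a dependency graph exported from an MBSE model can be walked to find everything a change touches; the items below are made up:

```python
from collections import deque

# Edges point from an item to the items that depend on it,
# as they might be exported from an MBSE model.
dependents = {
    "option:high-precision": ["sw:controller-config"],
    "sw:controller-config": ["ml:process-model"],
    "ml:process-model": ["behavior:positioning"],
    "hw:sensor-A": ["ml:process-model"],
}

def impact_of(changed_item: str) -> set:
    """Breadth-first walk to find everything affected by a change."""
    affected, queue = set(), deque([changed_item])
    while queue:
        for dep in dependents.get(queue.popleft(), []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

# Changing the sensor ripples through the ML model into device behavior.
print(impact_of("hw:sensor-A"))
```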

If an AI consists of multiple physics models, domain models, and ML models, changing one of them can have a big impact on the behavior of the AI. At the same time, changes need to be processed very efficiently: especially in production environments, customers cannot wait days or even hours for a fix. Ensuring control while also delivering speed is a big challenge.

Can a change to the AI make it a different product?

In my previous post, Export Control and Machine Learning, I wrote: “Imagine that over time the autonomous driving capability is updated and reaches level 4 or 5 (full autonomous capability)? Is it then still the same car as it was when it was initially sold with only level 1 autonomous capabilities? Should the car be re-certified to drive completely autonomously?” So the question is: when is it no longer the same type/product? This is also a question governments need to address as more and more AI capabilities are developed and deployed.


With the introduction of the Boeing 737 MAX, several things went wrong. In ‘Lessons from the Boeing 737 Max Crisis’, Octavian Thor Pleter, Associate Professor at the University Politehnica of Bucharest, Faculty of Aerospace Engineering, writes:

“By nature, Boeing 737 is different. Pilots can veto any automated system in all 737 families, except for 737 MAX and except for the robot called MCAS. When this robot takes over, it could kill everyone without any way for the human pilots to intervene.”

And:

“MCAS is not only a robot who failed twice due to a faulty sensor. MCAS is an overturn of an aircraft control philosophy. This raises a worrying question: how could FAA agree to extend an airworthiness type certificate issued in 1967 for Boeing 737-100 and 737-200 down to 737 MAX, since 737 MAX includes a revolutionary flight control philosophy?”

In other words, if you make a plane and change its behavior from pilot-operated to computer-operated, you need a new type certification. The same should apply to cars with increasing autonomous driving capability, and to any other device/product that can pose safety risks and whose behavior changes because software or AI is applied. Likewise, if you replace the engine and powertrain of your fossil-fueled car with an electric motor, powertrain, and batteries, you need to apply for a type certificate to be allowed to drive the car on public roads. The appearance of the car did not change, but its behavior and specification changed significantly. This type certification can be done through self-certification, as long as the relevant government bodies are properly involved and have ways to check and audit the self-certification process.

Conclusion

Much is still unclear about how to deal with the effects of AI on Configuration Management. I also think governments will have a significant impact in the years to come, as more laws and regulations dictate how to deal with AI in various products.

Please share your thoughts and let’s start the discussion.

