Deep Neural Classifiers in Safety-Critical Applications: Safety concerns and mitigation methods

Deep learning has demonstrated tremendous potential in solving complex computational tasks such as person re-identification, optical character recognition, and object detection. Although deep learning models achieve high performance on various synthetic and real-life datasets, the absence of functional safety standards in this field hinders the development of practical solutions for safety-critical applications. This dissertation therefore emphasizes the safety aspect of deep learning, dissecting the problem and proposing potential solutions.

The first objective of this study is to investigate classification, the fundamental component of most deep learning algorithms, from a safety perspective. The aim is to identify and categorize faults and their underlying causes in a typical visual classification system. The research systematically categorizes faults across three key phases: training, evaluation, and inference. Eight distinct safety concerns are then defined, and the existing mitigation methods for each fault are discussed to evaluate their effectiveness and limitations. Furthermore, potential solutions directed at these limitations are presented. This list can be used alongside other resources to build a safety case for utilizing deep learning methods in safety-critical applications.

The second objective delves deeper into the training phase and explores faults related to the training dataset, aiming to enhance existing mitigation methods with safety in mind. Improved algorithms are introduced to mitigate label noise, detect outlier data, and bridge the domain gap. These problems have been analyzed from various perspectives to find practical approaches to addressing them. The proposed methods utilize low-cost extra resources to improve overall performance, and the tradeoff between cost and performance was a central focus of these studies. The proposed methods were compared against state-of-the-art alternatives on public benchmarks to evaluate their performance.

The AI tools used in this thesis and the purpose of their use are described below:

<b>OpenAI ChatGPT (GPT-3.5). Purpose of use and the part in which it was used:</b>

ChatGPT was used primarily to find and correct grammatical mistakes, inconsistencies, and incoherent text throughout the thesis. I wrote the initial text and processed it with ChatGPT at the sub-chapter level. The prompt was “check for grammatical mistakes and enhance the text to prevent inconsistencies and improve coherency without changing the overall writing style”. Afterward, I manually checked the results, removed any artifacts, and reverted unnecessary changes that did not suit my writing style.

Moreover, ChatGPT generated the descriptions of the tools and datasets used in this thesis (e.g., the ResNet architecture or the Clothing1M dataset) based on the information given by their respective authors on the original webpages. The prompt was “generate a description for this tool/database for my doctoral thesis based on the provided information”. Similarly, I double-checked the results to make sure the information was correct and the text matched my writing style.

Finally, the "Preface" section of the manuscript relied heavily on ChatGPT. My original text was processed multiple times by ChatGPT to produce a sophisticated, dramatic opening to the thesis. The prompt was “make the written text more sophisticated and dramatic while keeping it to the same length”.

I am aware that I am fully responsible for the entire content of the thesis, including the parts generated by AI, and I accept responsibility for any violations of the ethical standards of publication.
