
2025, Vol. 13, Issue 4

Research Article

31 December 2025, pp. 89-99
Abstract
Ensuring the reproducibility of model experiments is essential for the practical deployment of AI-based anomaly detection in manufacturing processes, yet existing tools leave experiment tracking and reproduction procedures to manual user effort. To address this problem, this study proposes an experiment management framework for manufacturing-process anomaly detection that natively integrates MLflow-based experiment tracking into a no-code workflow engine. The proposed framework is designed so that hyperparameters, performance metrics, model artifacts, and environment dependency information are recorded automatically as the user composes components in a GUI environment, and its architecture, which references models by version identifiers, allows internally trained models and externally imported models to be managed through the same procedure. By providing the recording of, and access to, experiment evidence as a system default in a no-code environment, this work contributes to building a trustworthy AI operating environment for manufacturing sites.
Ensuring reproducibility of model experiments is essential for the practical deployment of AI-based anomaly detection in manufacturing processes; however, existing tools often require manual effort to record and align experiment evidence. This study proposes an experiment management framework for manufacturing anomaly detection that natively integrates MLflow-based experiment tracking into a no-code workflow engine. The framework enables automatic logging of hyperparameters, performance metrics, model artifacts and environment dependency information via GUI-based component composition. We evaluate the framework under an operational reproducibility setting using experiment-tracking completeness and evidence-confirmation cost metrics, demonstrating improved reproducibility support compared with a code-based approach and a general-purpose no-code platform. The proposed design provides reproducibility evidence as a system default, facilitating trustworthy AI operations in manufacturing.

Information
  • Publisher: The Society of Convergence Knowledge
  • Publisher (Ko): 융복합지식학회
  • Journal Title: The Society of Convergence Knowledge Transactions
  • Journal Title (Ko): 융복합지식학회논문지
  • Volume: 13
  • No.: 4
  • Pages: 89-99