
  1. Article

    I/O separation scheme on Lustre metadata server based on multi-stream SSD

    As the price of NAND-flash storage decreases, large-scale backend distributed file systems are being constructed as all-flash storage without HDDs. In fact, the performance of an SSD can sharply decrease due t...

    Cheongjun Lee, Jaehwan Lee, Chungyong Kim, Jiwoo Bang, Eun-Kyu Byun in Cluster Computing (2023)

  2. Article

    Towards enhanced I/O performance of a highly integrated many-core processor by empirical analysis

    Optimized for parallel operations, Intel’s second generation Xeon Phi processor, code-named Knights Landing (KNL), is actively utilized in high performance computing systems based on its highly integrated core...

    Cheongjun Lee, Jaehwan Lee, Donghun Koo, Chungyong Kim, Jiwoo Bang in Cluster Computing (2023)

  3. Article

    Comprehensive techniques of multi-GPU memory optimization for deep learning acceleration

    This paper presents a comprehensive suite of techniques for optimized memory management in multi-GPU systems to accelerate deep learning application execution. We employ a hybrid utilization of GPU and CPU mem...

    Youngrang Kim, Jaehwan Lee, Jik-Soo Kim, Hyunseung Jei, Hongchan Roh in Cluster Computing (2020)

  4. Article

    Towards an optimized distributed deep learning framework for a heterogeneous multi-GPU cluster

    This paper presents a novel “Distributed Deep Learning Framework” for a heterogeneous multi-GPU cluster that can effectively improve overall resource utilization without sacrificing training accuracy. Specificall...

    Youngrang Kim, Hyeonseong Choi, Jaehwan Lee, Jik-Soo Kim in Cluster Computing (2020)

  5. Article

    On the role of message broker middleware for many-task computing on a big-data platform

    We have designed and implemented a new data processing framework called “Many-task computing On HAdoop” (MOHA) which aims to effectively support fine-grained many-task applications that can show another type o...

    Cao Ngoc Nguyen, Jaehwan Lee, Soonwook Hwang, Jik-Soo Kim in Cluster Computing (2019)

  6. Chapter and Conference Paper

    Keep and Learn: Continual Learning by Constraining the Latent Space for Knowledge Preservation in Neural Networks

    Data is one of the most important factors in machine learning. However, even if we have high-quality data, there is a situation in which access to the data is restricted. For example, access to the medical dat...

    Hyo-Eun Kim, Seungwook Kim, Jaehwan Lee in Medical Image Computing and Computer Assis… (2018)

  7. Article

    Adaptive hybrid storage systems leveraging SSDs and HDDs in HPC cloud environments

    Cloud computing should inherently support various types of data-intensive workloads with different storage access patterns. This makes a high-performance storage system in the Cloud an important component. Eme...

    Donghun Koo, Jik-Soo Kim, Soonwook Hwang, Hyeonsang Eom, Jaehwan Lee in Cluster Computing (2017)

  8. Article

    Enhancement of a WLAN-Based Internet Service

    A wireless LAN (WLAN)-based Internet service, called NESPOT, of Korea Telecom (KT), the biggest telecommunication and Internet service company in Korea, has been operational since early 2002. As the numbers of su...

    Youngkyu Choi, Sekyu Park, Sunghyun Choi, Go Woon Lee in Mobile Networks and Applications (2005)