Suyoung Lee

suyounglee424 [at] gmail [dot] com

su-young.lee [at] samsung [dot] com

Welcome to my homepage. My name is Suyoung Lee, and I currently work as a staff engineer on the Language Intelligence Team at Samsung Research. I completed my Ph.D. in Electrical Engineering at KAIST under the supervision of Prof. Youngchul Sung at the Smart Information Systems Research Lab (SISReL); my previous advisor was Prof. Sae-Young Chung. My research interest is in making reinforcement learning more practical, with a focus on improving sample efficiency, designing better exploration methods, fostering generalization to unseen tasks, and refining offline reinforcement learning techniques.

Google Scholar  /  Github  /  CV

Publications
Decision ConvFormer: Local Filtering in MetaFormer is Sufficient for Decision Making
Jeonghye Kim, Suyoung Lee, Woojun Kim, and Youngchul Sung
International Conference on Learning Representations (ICLR), 2024, Spotlight presentation (366/7262 = 5.0%).
Foundation Models for Decision Making (FMDM) Workshop at NeurIPS, 2023.
pdf

We propose Decision ConvFormer, a new decision maker for offline RL based on MetaFormer with three convolution filters as its token mixer; it excels at decision making by capturing local associations and shows enhanced generalization capability.
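To make the "local filtering" idea concrete, here is a minimal sketch (my own assumptions, not the official implementation) of a depthwise 1D convolution token mixer that could replace self-attention inside a MetaFormer block; the window size, causal handling, and names are illustrative.

```python
# Minimal sketch of a local convolution token mixer (illustrative, not the
# paper's code): each embedding channel is filtered independently along the
# token axis, and look-ahead outputs are dropped to keep the mixer causal.
import torch
import torch.nn as nn

class ConvTokenMixer(nn.Module):
    def __init__(self, embed_dim: int, window: int = 6):
        super().__init__()
        # Depthwise 1D convolution over the token (time) axis.
        self.conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=window,
                              groups=embed_dim, padding=window - 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, embed_dim)
        y = self.conv(x.transpose(1, 2))       # convolve along the token axis
        y = y[..., : x.shape[1]]               # keep only causal outputs
        return y.transpose(1, 2)

x = torch.randn(2, 9, 64)                      # e.g., (return, state, action) x 3 steps
print(ConvTokenMixer(64)(x).shape)             # torch.Size([2, 9, 64])
```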

Parameterizing Non-Parametric Meta-Reinforcement Learning Tasks via Subtask Decomposition
Suyoung Lee, Myungsik Cho, and Youngchul Sung
Neural Information Processing Systems (NeurIPS), 2023.
pdf / code

We enhance the generalization capability of meta-reinforcement learning on tasks with non-parametric variability by decomposing the tasks into elementary subtasks and conducting virtual training.

Adaptive Intrinsic Motivation with Decision Awareness
Suyoung Lee and Sae-Young Chung
Decision Awareness in Reinforcement Learning Workshop at ICML, 2022.
pdf

We propose an intrinsic reward coefficient adaptation scheme that is aware of the effect of the intrinsic motivation and adjusts the coefficient online to maximize the extrinsic return.
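As a toy illustration (not the paper's algorithm), the sketch below adapts the intrinsic reward coefficient online with a sign-based hill-climbing rule, using the extrinsic return as the objective; the shaped-reward form and the update rule are simplifying assumptions of mine.

```python
# Toy sketch: adapt the intrinsic reward coefficient beta online so that
# the extrinsic return keeps improving (illustrative, not the paper's method).

def shaped_reward(r_ext: float, r_int: float, beta: float) -> float:
    """Reward the agent actually optimizes: extrinsic plus scaled intrinsic."""
    return r_ext + beta * r_int

def adapt_beta(beta: float, last_step: float, prev_ext_return: float, ext_return: float):
    """Keep moving beta in the same direction if the extrinsic return improved
    after the last change; otherwise reverse the direction of the change."""
    step = last_step if ext_return >= prev_ext_return else -last_step
    return max(0.0, beta + step), step

# Example: the last change was +0.01 but the extrinsic return dropped,
# so beta is nudged back down (to roughly 0.49).
beta, step = adapt_beta(0.5, 0.01, prev_ext_return=12.0, ext_return=10.5)
print(beta, step)
```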

Improving Generalization in Meta-RL with Imaginary Tasks from Latent Dynamics Mixture
Suyoung Lee and Sae-Young Chung
Neural Information Processing Systems (NeurIPS), 2021.
pdf / code

We train an RL agent with imaginary tasks generated from mixtures of learned latent dynamics to generalize to unseen test tasks.
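A minimal sketch of the mixing idea, under my own simplifying assumptions (not the paper's exact procedure): an imaginary task latent is drawn as a random convex combination of latents inferred from training tasks, and would then condition a learned dynamics model to generate imaginary rollouts.

```python
# Illustrative sketch: create an "imaginary" task latent by mixing the
# latents of training tasks with random convex (Dirichlet) weights.
import numpy as np

rng = np.random.default_rng(0)

def mix_task_latents(task_latents: np.ndarray) -> np.ndarray:
    """task_latents: (num_tasks, latent_dim). Returns one mixed latent."""
    weights = rng.dirichlet(np.ones(task_latents.shape[0]))  # weights sum to 1
    return weights @ task_latents

train_latents = rng.normal(size=(4, 8))       # latents of 4 training tasks
imaginary = mix_task_latents(train_latents)   # would condition imaginary rollouts
print(imaginary.shape)                        # (8,)
```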

Sample-Efficient Deep Reinforcement Learning via Episodic Backward Update
Suyoung Lee, Sungik Choi, and Sae-Young Chung
Neural Information Processing Systems (NeurIPS), 2019.
pdf / code

We propose a computationally efficient recursive deep reinforcement learning algorithm that allows sparse and delayed rewards to propagate directly through all transitions of the sampled episode.
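Below is a rough sketch of the backward target-generation idea, assuming per-step target-network Q-values for one sampled episode and a diffusion coefficient beta that mixes the freshly propagated return into those values; names and details are illustrative rather than the published pseudocode.

```python
# Rough sketch of backward target generation over one sampled episode
# (illustrative, not the exact published algorithm). q_next[t] holds the
# target network's Q-values at the state reached after step t.
import numpy as np

def backward_targets(rewards, actions, q_next, gamma=0.99, beta=0.5):
    """rewards, actions: length-T arrays; q_next: (T, num_actions)."""
    T = len(rewards)
    q_tilde = q_next.astype(float)
    y = np.zeros(T)
    y[T - 1] = rewards[T - 1]                  # terminal step: no bootstrap
    for t in range(T - 2, -1, -1):             # walk the episode backward
        # Diffuse the return just computed for step t+1 into the Q-value of
        # the action actually taken there, so delayed reward flows backward
        # through every transition of the episode.
        a_next = actions[t + 1]
        q_tilde[t, a_next] = beta * y[t + 1] + (1 - beta) * q_tilde[t, a_next]
        y[t] = rewards[t] + gamma * q_tilde[t].max()
    return y

rng = np.random.default_rng(0)
print(backward_targets(rng.normal(size=5), rng.integers(0, 3, size=5),
                       rng.normal(size=(5, 3))))
```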

Awards

Outstanding Ph.D. Dissertation Award - Thesis: Meta-Reinforcement Learning with Imaginary Tasks, KAIST EE, 2024.

Qualcomm-KAIST Innovation Awards 2018 - paper competition award for graduate students, Qualcomm, 2018.

Un Chong-Kwan Scholarship Award - for the achievement of excellence in the 2017 entrance examination, KAIST EE, 2017.

Education

2022~2024: Ph.D. in Electrical Engineering, KAIST, Daejeon, Korea (advisor: Prof. Youngchul Sung).

2019~2022: Ph.D. in Electrical Engineering, KAIST, Daejeon, Korea (advisor: Prof. Sae-Young Chung).

2017~2019: M.S. in Electrical Engineering, KAIST, Daejeon, Korea (advisor: Prof. Sae-Young Chung).

2012~2017: B.S. in Electrical Engineering, KAIST, Daejeon, Korea.

2010~2012: Hansung Science High School, Seoul, Korea.

2007~2009: Tashkent International School, Tashkent, Uzbekistan.

Teaching

2020 fall : TA, EE326 Introduction to Information Theory and Coding, KAIST.

2020 spring : TA, EE210 Probability and Introductory Random Processes, KAIST.

2019 fall : TA, EE105 Electrical Engineering: Changing the World, KAIST.

2019 spring : TA, EE405 Electronics Design Lab. Network of Smart Things, KAIST.

2018 fall : TA, EE807 Special Topics in Electrical Engineering. Deep Reinforcement Learning and AlphaGo, KAIST. (Received the Outstanding TA Award for this course)

2018 spring : TA, EE405 Electronics Design Lab. Network of Smart Systems, KAIST.

Academic Activities

KAIST EE Graduate School REEsearch Party (invited talk): academic seminar by doctoral graduates who won outstanding thesis awards, Apr. 2024.

Conference reviewer: ICML 2021-2024, NeurIPS 2021-2023, ICLR 2024.

Program committee of FMDM workshop at NeurIPS 2023.

How I try to live

I view life as a meta-reinforcement learning task, reminiscent of the MuJoCo Ant direction task. Everyone has their own unique, albeit often obscured, optimal life direction T. The objective of life is to maximize the cumulative reward r = M·T = |M||T|cos θ, the dot product of our chosen direction M (how we decide to live) and the unseen true direction T, where θ is the angle between them. I was fortunate to have guidance from two professors who instilled in me the importance of both minimizing the angle |θ| and maximizing the magnitude |M|.


Website template from here.