Publications

Interpretability


The Importance of Prompt Tuning for Automated Neuron Explanations

  • J. Lee*, T. Oikarinen*, A. Chatha, K.C. Chang, Y. Chen, T.W. Weng
  • NeurIPS 2023 ATTRIB workshop

Label-Free Concept Bottleneck Models

CLIP-Dissect: Automatic Description of Neuron Representations in Deep Vision Networks


Robustness


Corrupting Neuron Explanations of Deep Visual Features

  • D. Srivastava, T. Oikarinen, T.W. Weng
  • ICCV 2023, code

Robust Deep Reinforcement Learning through Adversarial Loss


Applied ML


GraphMDN: Leveraging Graph Structure and Deep Learning to Solve Inverse Problems

  • T. Oikarinen, D. Hannah, S. Kazerounian
  • IJCNN 2021, code

Landslide Geohazard Assessment with Convolutional Neural Networks using Sentinel-2 Imagery Data

  • S. Ullo, M. Langenkamp, T. Oikarinen, M.P. Del Rosso, A. Sebastianelli, F. Piccirillo, S. Sica
  • IEEE IGARSS 2019, code

Deep Convolutional Network for Animal Sound Classification and Source Attribution using Dual Audio Recordings

  • T. Oikarinen, K. Srinivasan, O. Meisner, J. Hyman, S. Parmar, A. Fanucci-Kiss, R. Desimone, R. Landman, G. Feng
  • The Journal of the Acoustical Society of America, 2019, code