Nandan Kumar Jha

PhD student at NYU CCS

New York University

About me

I am a PhD candidate at the Center for Cybersecurity, New York University (NYU), advised by Prof. Brandon Reagen. I am broadly interested in cryptographically secure privacy-preserving machine learning (PPML) and work at the intersection of deep learning and applied cryptography (homomorphic encryption and multiparty computation) as part of the DPRIVE project. My research focuses on developing architectures and algorithms that make neural network computation on encrypted data efficient.

Early in my PhD, I worked on designing nonlinearity-efficient CNNs: I developed ReLU-optimization techniques (DeepReDuce, ICML'21) and proposed methods for redesigning existing CNNs (DeepReShape, TMLR'24) for end-to-end private-inference efficiency.

My current research focuses on the privacy and security of large language models (LLMs). Specifically, I am investigating the role of nonlinearity in GPT models (see our preliminary findings at ATTRIB@NeurIPS'24), with the aim of designing GPT models with fewer nonlinearities for efficient private inference.

I have also served as an invited reviewer for NeurIPS'23 and '24, ICLR'24, CVPR'24, and ICML'24. If you are interested in collaborating, please feel free to email me!

Interests
  • Privacy-preserving Machine Learning (PPML)
  • Efficient Design of LLMs for Privacy and Security
  • Cryptographic Methods for Secure Neural Network Computation
Education
  • Ph.D. in Privacy-preserving Deep Learning, 2020 - present

    New York University

  • M.Tech. (Research Assistant) in Computer Science and Engineering, 2017 - 2020

    Indian Institute of Technology Hyderabad

  • B.Tech. in Electronics and Communication Engineering, 2009 - 2013

    National Institute of Technology Surat

Recent Publications

(2024). ReLU's Revival: On the Entropic Overload in Normalization-Free Large Language Models. In ATTRIB (NeurIPS) Workshop.

(2024). DeepReShape: Redesigning Neural Networks for Efficient Private Inference. In TMLR 2024.

(2023). Characterizing and Optimizing End-to-End Systems for Private Inference. In ASPLOS 2023.

(2021). CryptoNite: Revealing the Pitfalls of End-to-End Private Inference at Scale. arXiv preprint.

(2021). Circa: Stochastic ReLUs for Private Deep Learning. In NeurIPS 2021.

Experience

Seagate Technology
Electrical Design Engineer
Sep 2015 – Jul 2017, Bangalore, India

Responsibilities included:

  • Designing power-delivery circuits for M.2 solid-state drives
  • Electrical characterization of DRAM and NAND modules
  • Signal-integrity verification of the DRAM/NAND datapath

IIT Bombay
Project Research Assistant
Nov 2014 – Jun 2015, Mumbai, India
Worked on the deployment of wireless broadband in rural areas using TV white space (unused licensed spectrum in the UHF band).
