Omri Avrahami

I am a Ph.D. candidate at the School of Computer Science and Engineering at the Hebrew University of Jerusalem, under the joint supervision of Prof. Dani Lischinski and Dr. Ohad Fried.

I am also a Research Intern at NVIDIA Research. Previously, I was a Research Intern at Google AI (Google Research) in 2023 and at Meta AI Research (FAIR) in the summer of 2022.

My research interests include machine learning, computer vision, and generative models. More specifically, I am interested in developing new tools for content synthesis and editing, an area popularly known as Generative AI.


Break-A-Scene: Extracting Multiple Concepts from a Single Image

SIGGRAPH Asia, 2023

Given a single image containing multiple concepts, annotated with loose segmentation masks, our method learns a distinct token for each concept and uses natural-language guidance to re-synthesize the individual concepts, or combinations of them, in various contexts.

Blended-NeRF: Zero-Shot Object Generation and Blending in Existing Neural Radiance Fields

ICCVW, 2023
Ori Gordon, Omri Avrahami, Dani Lischinski

Given an existing NeRF scene, our pipeline trains a NeRF generator, guided by a similarity loss defined by a language-image model such as CLIP, to synthesize a new object inside a user-specified region of interest (ROI).
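The CLIP-guided similarity loss mentioned above can be sketched minimally: the rendered view and the text prompt are each mapped to an embedding, and the loss is one minus their cosine similarity. This is only an illustrative numpy sketch, not the paper's implementation; the random 512-dimensional vectors stand in for embeddings that a real language-image model would produce.

```python
import numpy as np

def clip_similarity_loss(image_emb: np.ndarray, text_emb: np.ndarray) -> float:
    """CLIP-style guidance loss: 1 - cosine similarity between the embedding
    of a rendered view and the embedding of the guiding text prompt.
    Minimizing it pushes the rendering toward the prompt's semantics."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_emb = text_emb / np.linalg.norm(text_emb)
    return float(1.0 - image_emb @ text_emb)

# Hypothetical stand-in embeddings (CLIP ViT-B/32 embeddings are 512-d).
rng = np.random.default_rng(0)
img_emb = rng.normal(size=512)
txt_emb = rng.normal(size=512)

loss = clip_similarity_loss(img_emb, txt_emb)     # large for unrelated vectors
aligned = clip_similarity_loss(img_emb, img_emb)  # 0 when embeddings coincide
```

In the actual pipeline such a loss would be backpropagated through a differentiable renderer into the generator's weights; the sketch only shows the loss itself.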

SpaText: Spatio-Textual Representation for Controllable Image Generation

CVPR, 2023

We propose a new method for text-to-image generation with open-vocabulary scene control.

Blended Latent Diffusion

Omri Avrahami, Ohad Fried, Dani Lischinski

We present an accelerated solution to the task of local text-driven editing of generic images, where the desired edits are confined to a user-provided mask.

Blended Diffusion for Text-driven Editing of Natural Images

CVPR, 2022
Omri Avrahami, Dani Lischinski, Ohad Fried

We introduce a solution for performing local (region-based) edits in generic natural images, based on a natural language description along with an ROI mask.
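The core idea of confining edits to a region of interest can be sketched as a per-step blend: inside the mask, keep the generated content; outside it, keep the (appropriately noised) source image. This is a simplified numpy illustration of that blending operation, not the full diffusion pipeline; the toy arrays stand in for real image tensors.

```python
import numpy as np

def blend_step(x_generated: np.ndarray,
               x_source_noised: np.ndarray,
               mask: np.ndarray) -> np.ndarray:
    """Blend generated content (inside the ROI mask) with the noised source
    image (outside it), so the edit stays confined to the masked region."""
    return mask * x_generated + (1.0 - mask) * x_source_noised

# Toy 4x4 single-channel example with a 2x2 ROI in the top-left corner.
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0
x_gen = np.ones((4, 4))        # stand-in for the model's generated sample
x_src = np.full((4, 4), 0.5)   # stand-in for the noised source image
out = blend_step(x_gen, x_src, mask)
```

In the method itself a blend of this kind is applied repeatedly during the denoising process, which is what keeps the background faithful to the input image while the foreground follows the text description.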

GAN Cocktail: mixing GANs without dataset access

ECCV, 2022
Omri Avrahami, Dani Lischinski, Ohad Fried

We tackle the problem of model merging under two constraints that often arise in the real world: (1) no access to the original training data, and (2) no increase in the size of the neural network.