Omri Avrahami

I am a Computer Science Ph.D. student at the School of Computer Science and Engineering at the Hebrew University of Jerusalem, under the joint supervision of Prof. Dani Lischinski and Dr. Ohad Fried.

I am currently a Research Intern at Snap Research. Previously, I had the privilege of working as a Research Intern at NVIDIA Research during the winter of 2024, at Google AI (Google Research) in 2023, and at Meta AI Research (FAIR) in the winter of 2022.

My research interests include machine learning, computer vision, and generative models. More specifically, I am interested in developing new tools for content synthesis and editing, popularly known as Generative AI.


Publications

Stable Flow: Vital Layers for Training-Free Image Editing

arXiv, 2024

A training-free method that performs various image editing operations, including non-rigid editing, object addition, object removal, and global scene editing.

Click2Mask: Local Editing with Dynamic Mask Generation

AAAI, 2025
Omer Regev, Omri Avrahami, Dani Lischinski

An image editing method that, given a click and a prompt, infers the desired area to edit.

DiffUHaul: A Training-Free Method for Object Dragging in Images

SIGGRAPH Asia, 2024

Given an image with an object, our method can seamlessly relocate it within the scene.

PALP: Prompt Aligned Personalization of Text-to-Image Models

SIGGRAPH Asia, 2024

Prompt-aligned personalization allows rich and complex scene generation, including all elements of a conditioning prompt.

The Chosen One: Consistent Characters in Text-to-Image Diffusion Models

SIGGRAPH, 2024

Given a text prompt describing a character, our method distills a representation that enables consistent depiction of the same character in novel contexts.

Break-A-Scene: Extracting Multiple Concepts from a Single Image

SIGGRAPH Asia, 2023

Given a single image with multiple concepts, annotated by loose segmentation masks, our method can learn a distinct token for each concept, and use natural language guidance to re-synthesize the individual concepts or combinations of them in various contexts.

Blended-NeRF: Zero-Shot Object Generation and Blending in Existing Neural Radiance Fields

ICCVW, 2023
Ori Gordon, Omri Avrahami, Dani Lischinski

Given a NeRF scene, our pipeline trains a NeRF generator model, guided by a similarity loss defined by a language-image model such as CLIP, to synthesize a new object inside a user-specified ROI.

SpaText: Spatio-Textual Representation for Controllable Image Generation

CVPR, 2023

We suggest a new method for text-to-image generation using open-vocabulary scene control.

Blended Latent Diffusion

SIGGRAPH, 2023
Omri Avrahami, Ohad Fried, Dani Lischinski

We present an accelerated solution to the task of local text-driven editing of generic images, where the desired edits are confined to a user-provided mask.

Blended Diffusion for Text-driven Editing of Natural Images

CVPR, 2022
Omri Avrahami, Dani Lischinski, Ohad Fried

We introduce a solution for performing local (region-based) edits in generic natural images, based on a natural language description along with an ROI mask.

GAN Cocktail: mixing GANs without dataset access

ECCV, 2022
Omri Avrahami, Dani Lischinski, Ohad Fried

We tackle the problem of model merging under two constraints that often come up in the real world: (1) no access to the original training data, and (2) no increase in the size of the neural network.