Ori Yoran

Computer Science PhD candidate (2nd year) at Tel-Aviv University, advised by Prof. Jonathan Berant, and NLP research lead at Ask-AI.

About

PhD candidate researching Natural Language Processing. I am most excited about building and understanding systems that reason in order to perform complex, useful tasks such as multi-hop question answering and multi-modal reasoning. My most recent work focuses on improving the performance, efficiency, and robustness of retrieval-augmented large language models.

In industry, I am a founding team member and research lead at Ask-AI, where we are developing a generative AI sidekick that reasons over enterprise data. I was also fortunate to intern at Microsoft and AI2 during my studies.

Publications

Ori Yoran, Tomer Wolfson, Ori Ram, Jonathan Berant

Making Retrieval-Augmented Language Models Robust to Irrelevant Context

ICLR

Ori Yoran, Tomer Wolfson, Ben Bogin, Uri Katz, Daniel Deutch, Jonathan Berant

Answering Questions by Meta-Reasoning over Multiple Chains of Thought

 EMNLP 

Roi Cohen, Eden Biran, Ori Yoran, Amir Globerson, Mor Geva

Evaluating the Ripple Effects of Knowledge Editing in Language Models

 TACL 

Samuel Amouyal, Tomer Wolfson, Ohad Rubin, Ori Yoran, Jonathan Herzig, Jonathan Berant

QAMPARI: an Open-Domain Question Answering Benchmark for Questions with Many Answers from Multiple Paragraphs

arXiv

Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, Omer Levy

SCROLLS: Standardized CompaRison Over Long Language Sequences

 EMNLP 

Alon Talmor, Ori Yoran, Ronan Le Bras, Chandra Bhagavatula, Yoav Goldberg, Yejin Choi, Jonathan Berant

CommonsenseQA 2.0: Exposing the Limits of AI through Gamification

NeurIPS

Ori Yoran, Alon Talmor, Jonathan Berant

Turning Tables: Generating Examples from Semi-structured Tables for Endowing Language Models with Reasoning Skills

 ACL 

Alon Talmor*, Ori Yoran*, Amnon Catav*, Dan Lahav*, Yizhong Wang, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi, Jonathan Berant

MultiModalQA: Complex Question Answering over Text, Tables and Images

ICLR

News

2024

January: Ask-AI raises $11M in Series A funding.

2023

December: Attending EMNLP in Singapore.

October: Published a new paper on how we can make retrieval-augmented language models robust to irrelevant context (Update: accepted to ICLR).

July: Part of the effort to release the RippleEdits benchmark for evaluating whether knowledge-editing methods can capture complex “ripple effects” (Update: accepted to TACL).

July: Attending ACL in Toronto.

April: Published our paper on how meta-reasoning over multiple chains of thought can improve current systems for complex open-domain question answering (Update: accepted to EMNLP).

2022

November: Started my PhD! My research will focus on the reasoning abilities of tool- and retrieval-augmented large language models.

May: Part of the effort to release QAMPARI, a benchmark for evaluating the ability of current open-domain question answering systems to answer questions with many answers.

January: Part of the effort to release the SCROLLS benchmark for evaluating the reasoning skills of language models over long texts (Update: accepted to EMNLP).

2021

December: Joined the Ask-AI founding team, which raised a $9M seed round to build a generative AI sidekick that reasons over text-heavy enterprise data.

October: Graduated from my MSc (magna cum laude).

July: Published our work on using gamification to challenge the commonsense reasoning of LLMs (Update: accepted as an oral presentation at NeurIPS).

July: Published my MSc thesis work on how we can use tabular data to automatically generate training examples that endow language models with reasoning skills (Update: accepted to ACL).

April: Published our work introducing the MultiModalQA dataset for evaluating multi-modal question answering (Update: accepted to ICLR).

2020

October: Started an internship with the AI2 gamification team, researching how simulated games can be used for data collection.

August: Started my MSc thesis, advised by Prof. Jonathan Berant, on enhancing the reasoning skills of language models.
