A Raycaster project.

This experiment aimed to expand understanding of old and new museum collections, draw attention to the interdependence of collecting, and shed light on how collections form knowledge of cultural heritage on a larger scale.

In collaboration with the Queens Museum, and with the support of the Knight Foundation and NEW INC, we developed an accessible online platform showcasing a dynamic image of mass-produced collectibles and the people who engage with them, open for others to add their own. In short, we set out to create a democratic collection.

Using a digital platform, machine learning, and web-scraping technologies, we aimed to highlight alternative forms of collections and speculate on how they might be displayed and designed in the future.

This project focused on commemorative souvenirs from the 1939 and 1964 New York World's Fairs, which took place in the park where the museum stands today. The Fairs aimed to represent the whole world in one physical space through their nation states and, in 1964, also their corporations.

︎ Medium post on the research
This project has two platforms:

1- an online display of the global collection:

︎ Visit website 

2- a physical display at the Queens Museum, presenting a collection of machine-learning-generated plates. For this phase we used image recognition, image classification, and text-to-image technology to explore how we can program a machine that “observes” the plate collection and generates new plates on its own.

︎ View demo
Developed together with: Ziv Schneider. Additional credits: Regina Cantu de Alba (3D post processing), Sam Lavigne (web development), Eran Hadas (words to sentences generator) & Eyal Gruss (Text 2 Image Generator).

Date: October 2018
Platform: Web and physical installation.

Featured: Queens Museum International Biennial 2018
Supported by: NEW INC, Knight Foundation and Queens Museum.


I. As a start, we developed an online scraping tool to automatically locate relevant images from e-commerce websites based on keyword variations (“world’s fair plate”, “WF NYC plates”, etc.), to compare with the types of plates in the museum’s original index.
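As a minimal sketch of this step, the snippet below crosses fair-related terms with item words to produce keyword variations and builds search URLs for them. The specific term lists and URL parameters are illustrative assumptions, not the project’s actual scraper.

```python
from itertools import product
from urllib.parse import urlencode

# Illustrative term lists (assumptions, not the project's real keywords).
TERMS = ["world's fair", "WF NYC", "1939 world fair", "1964 worlds fair"]
ITEMS = ["plate", "plates", "souvenir plate"]

def keyword_variations(terms=TERMS, items=ITEMS):
    """Cross every fair-related term with every item word."""
    return [f"{t} {i}" for t, i in product(terms, items)]

def search_urls(query, site="ebay"):
    """Build an e-commerce search URL (endpoints are simplified)."""
    bases = {
        "ebay": "https://www.ebay.com/sch/i.html",
        "etsy": "https://www.etsy.com/search",
    }
    params = {"_nkw": query} if site == "ebay" else {"q": query}
    return bases[site] + "?" + urlencode(params)

queries = keyword_variations()
print(len(queries))            # 12 keyword variations
print(search_urls(queries[0]))
```

A real pipeline would fetch each URL, download the listing images, and match them against the museum’s index.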
II. To overcome the small number of source images (in this case 26) available to train our system, and to improve the results and create a much more accurate map of the items, we created a 3D representation of each plate and rendered a synthetic dataset. We used photogrammetry, a 3D scanning method, to compute each reconstruction.
III. We then imported the models into Unity3D and rendered each one under different lighting and backdrop settings. These choices were inspired by the images people tend to upload to Etsy and eBay to represent the items they are selling.
We generated thousands of images and could then train the machine learning system. Another outcome of this process is a 3D documentation of the collection, available to the museum for future use.
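The render sweep in steps II–III can be sketched as a parameter grid: every reconstructed plate crossed with lighting, backdrop, and camera-angle variations. The specific values below are assumptions chosen only to show how 26 plates multiply into thousands of synthetic images.

```python
import itertools

# Hypothetical sweep parameters (illustrative, not the project's actual settings).
PLATES = [f"plate_{i:02d}" for i in range(26)]           # 26 reconstructions
LIGHT_INTENSITIES = [0.6, 1.0, 1.4]                       # 3 lighting setups
BACKDROPS = ["wood_table", "white_cloth", "carpet", "newspaper"]  # Etsy/eBay-style
CAMERA_ANGLES = list(range(0, 360, 30))                   # 12 yaw angles

def render_plan():
    """One dict per planned synthetic image: plate + scene parameters."""
    return [
        {"plate": p, "light": l, "backdrop": b, "yaw": a}
        for p, l, b, a in itertools.product(
            PLATES, LIGHT_INTENSITIES, BACKDROPS, CAMERA_ANGLES)
    ]

plan = render_plan()
print(len(plan))  # 26 * 3 * 4 * 12 = 3744 synthetic images
```

Each entry in the plan would drive one Unity3D render; the resulting labeled images form the training set.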


In step I there were also moments in which we learned about what we consider to be a “real” part of the collection, what we perceive to be a public form of culture, and the benefit that lies in merging the two. We wished to explore this space further and push the machine learning technology in order to offer a new kind of interpretation of the existing collection. We used image classification to explore what the machine “sees” as a neutral observer, in a way that undermines how humans collect memories. Then we generated sentences out of the extracted keywords.
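The keywords-to-sentences step can be sketched as follows: take the labels a classifier returns for a plate and compose them into a short sentence. This is a toy template stand-in for the words-to-sentences generator credited above; the labels and template are assumptions for illustration.

```python
import random

def labels_to_sentence(labels, seed=None):
    """Compose classifier labels into a short descriptive sentence."""
    rng = random.Random(seed)
    subject, *rest = rng.sample(labels, k=min(3, len(labels)))
    if rest:
        return f"A {subject} with {' and '.join(rest)}."
    return f"A {subject}."

# Example labels an image classifier might extract from a souvenir plate.
labels = ["porcelain plate", "blue rim", "globe emblem", "gold lettering"]
print(labels_to_sentence(labels, seed=1))
```

In the project, sentences like these were then fed to the text-to-image generator to produce new plates.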

Printed machine-learning-generated plates:

UX slide:

︎   ︎