Visual Resources Update September 2021
ART100 virtual gallery platform
Visual Resources is working with graduate students Samuel Shapiro and Iheanyichukwu Onwuegbucha to use a virtual gallery platform called Artsteps in ART100 this semester. Students will select artwork in or around Princeton and will work in groups to create an exhibition with a cohesive narrative. We are excited to assess the performance of this software and may continue to use it in ART100. Unfortunately, copyright concerns require the gallery to be restricted to those associated with the course.
Our new transcription project: The Syrian Expedition notebooks
We have launched our new crowdsourced transcription project, focused on the notebooks and diaries of the Howard Crosby Butler Syrian Expeditions Archive (1899, 1904/5, 1909). This project aims to transcribe the writings that include descriptions of people and places missing from the published volumes. We intend to publish the results as a dataset and digital collection, as well as create an interactive map to tell the story of the expeditions across time and space. Each location will include the photographs, drawings, and descriptions the expedition team produced at the site. This will illustrate not only the exceptional nature of these travels, but also the process of this method of archaeological surveying. The handwriting is challenging to decipher, but we already have fourteen transcribers from across the globe on board!
Interesting projects and resources
Unsilencing the Archives: The Laborers of the Tell en-Nasbeh Excavations (1926-1935) is a unique and insightful online exhibition by the Badè Museum of Biblical Archaeology highlighting the work of archaeological excavators. Many of the workers at Tell en-Nasbeh also worked at the Princeton Antioch excavation.
Can artificial intelligence catalog art? In "Explain Me the Painting: Multi-Topic Knowledgeable Art Description Generation" by Zechen Bai, Yuta Nakashima, and Noa Garcia (Osaka University), the authors explore whether machine learning can generate descriptions of paintings, a question highly pertinent to our field. (Presented at the International Conference on Computer Vision, 2021.)
As always, please reach out if you would like help sourcing an image.