Robots Reading Vogue
Robots Reading Vogue is an innovative demonstration of how researchers, librarians, and students can ‘read’ one of the most prolific magazines of all time in new ways. Experimenting with data mining on over 2,700 covers, 400,000 pages, and 6 terabytes of data from Vogue, the team behind the project has put together a stunning runway show of a website that models how big data techniques and methodologies can be applied to the humanities.
With a neat, well-thought-out design, the website splits its content across four sections: ‘Home’, ‘Experiments’, ‘Bibliography’, and ‘About’. ‘Home’ gives the audience a brief overview of the project as well as a cross-section of the experiments that have been conducted on the Vogue back catalogue. ‘Experiments’, which will be covered in more detail below, opens a drop-down menu listing much the same set of experiments as the home page. The ‘Bibliography’ section threads together the raw experimental data with analysis: peer-reviewed articles and presentations from the two researchers (Lindsay King and Peter S. Leonard), commentary from journalists, and a list of further readings that aficionados can consult to learn more about Vogue. ‘About’ offers a slightly more detailed discussion of the project’s origins and the motivations behind creating it.
The intricacy of Robots Reading Vogue can be seen in the individual experiments (with plenty more still to come, according to the site) that apply different data mining techniques to analysing Vogue. ‘Word Vectors’, from Sydney Bowen, uses a methodology known as word embedding, tasking an algorithm with analysing the term ‘beauty’ and a series of related words to trace how notions of beauty have shifted over time. Bowen demonstrates that ‘beauty’ has steadily transformed from an ephemeral, hard-to-define innate characteristic into something that can be produced by consuming products and services marketed towards a particular aesthetic ideal. In ‘Cover Averages’, the researchers superimposed Vogue covers decade by decade from 1900 to 2000, creating a striking visual record of cover patterns over time. These visualisations show how distinctive covers were in the mid-twentieth century, when the combined covers display no discernible pattern, whereas those from the 1970s and 1980s reveal a preference for a particular kind of cover model with the same positioning, gaze, and head angle. Linked to this experiment is ‘Colormetric Space’, which uses software called ImagePlot to generate an impressive chart of every Vogue cover ever produced, allowing analysis on the basis of hue, saturation, and brightness. The ‘n-gram Search’ function lets users take a more active role in directing how the robots read Vogue, plugging in keywords to compare their frequency against other keywords across the entire catalogue. The resulting line charts are separated by editor and by decade, with the ability to switch between several metrics: words per million, percent of texts, number of words, and number of texts.
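To make the ‘words per million’ metric concrete, the per-decade keyword tally behind an n-gram chart can be sketched roughly as follows. All data and names here (`issues`, `ngram_rate`) are invented for illustration; this is not the project’s actual pipeline.

```python
from collections import defaultdict

def ngram_rate(issues, keyword):
    """Words-per-million frequency of a keyword, grouped by decade.

    `issues` is a list of (year, text) pairs -- a stand-in for the
    real OCR'd page text, which this sketch does not have access to.
    """
    totals = defaultdict(int)   # total word count per decade
    hits = defaultdict(int)     # keyword occurrences per decade
    for year, text in issues:
        decade = (year // 10) * 10
        words = text.lower().split()
        totals[decade] += len(words)
        hits[decade] += sum(1 for w in words if w == keyword.lower())
    return {d: hits[d] / totals[d] * 1_000_000 for d in sorted(totals)}

# a tiny mock corpus: two 1950s issues, one 1970s issue
sample = [
    (1952, "the new look in beauty and beauty products"),
    (1955, "travel fashion and the beauty of paris"),
    (1971, "denim denim everywhere this season"),
]
print(ngram_rate(sample, "beauty"))
```

The real site additionally normalises by editor tenure and offers the other three metrics, but each is a variation on this same counting scheme.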
‘Topic Modelling’ takes the opposite approach, letting the algorithms scan the entire catalogue for statistically significant clusters of frequently co-occurring words, producing word and phrase clouds as well as charts tracing when each topic appeared. In ‘Advertisements’, visitors can see which forms of advertising appeared in the magazine, how frequently, and when. The advertisements are broken down into all advertisements, cars, cosmetics, stores, and tobacco; clicking on each category reveals the specific companies advertising in the publication along with the total, average, weighted average, and standard deviation of their advertisements over the entire life of Vogue. ‘Statistics’ is a simple chart providing figures on circulation, the ratio of articles to advertisements, price per issue, and number of pages per year. Meanwhile, ‘Student Work’ shows off the ways in which Yale University students have applied data mining techniques to the Vogue archive. At the moment there are only three projects, but each is utterly fascinating: the first detects patterns in Vogue covers across periods of social change; another uses pictorial analysis of facial expression and body language to trace how the female image has changed over time; and the final project investigates fashion photography through the lens of skin colour and the particular patterns featured in the magazine over time. Returning to the other experiments, ‘FabricSpace’ is an ingenious examination of the clustering and hierarchy of the vocabulary describing particular materials and fabric types, generating a material history over time. ‘Take a Memo’ is a delightful side-track, less an experiment than a procedural text generator that attempts to mimic the distinctive voice and style of Vogue’s iconic editor Diana Vreeland.
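Topic models infer their clusters from exactly the kind of word co-occurrence described above. A minimal sketch of the raw co-occurrence counting such models build on might look like this; the documents and function name are invented, and this is a simplification, not a claim about which topic-modelling algorithm the project itself uses.

```python
from collections import Counter
from itertools import combinations

def cooccurrence(docs):
    """Count how often each pair of words appears in the same document.

    Topic models turn this co-occurrence structure into weighted word
    clusters; here we only tally the raw pair counts that feed them.
    """
    pairs = Counter()
    for doc in docs:
        # sorted(set(...)) gives each unique word once, in a stable order,
        # so every pair is counted under one canonical (a, b) ordering
        words = sorted(set(doc.lower().split()))
        pairs.update(combinations(words, 2))
    return pairs

# three mock article snippets
docs = [
    "silk gown evening gown",
    "silk evening wrap",
    "tweed suit daytime",
]
counts = cooccurrence(docs)
print(counts[("evening", "silk")])  # pair seen together in two documents
```

Words that co-occur far more often than chance would predict (here, ‘evening’ and ‘silk’) are the seeds of a topic; the site’s word and phrase clouds visualise the strongest such clusters.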
Finally, ‘Slice Histograms’ produces visualisations of colour patterns: a series of charts assembled into a timelapse that are, in and of themselves, quite striking.
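The colour experiments above all reduce a cover to hue, saturation, and brightness values. A hedged sketch of that summary step, using Python’s standard `colorsys` module, might read as follows; the pixel data is mock, the function name is invented, and real covers would be loaded with an imaging library.

```python
import colorsys

def cover_color_stats(pixels):
    """Average hue, saturation, and brightness of a cover's RGB pixels.

    `pixels` is a list of (r, g, b) tuples in 0-255. Note that naively
    averaging hue (a circular quantity) is a simplification kept here
    for brevity.
    """
    n = len(pixels)
    h = s = v = 0.0
    for r, g, b in pixels:
        hh, ss, vv = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        h += hh
        s += ss
        v += vv
    return (h / n, s / n, v / n)

# a tiny mock "cover": half saturated red, half white
mock = [(255, 0, 0)] * 2 + [(255, 255, 255)] * 2
print(cover_color_stats(mock))  # -> (0.0, 0.5, 1.0)
```

Plotting such per-cover statistics over the archive’s full run is what lets charts like ‘Colormetric Space’ arrange every cover by hue, saturation, and brightness.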
The evocatively named Robots Reading Vogue is the avant-garde of using algorithmic tools to analyse one of the most iconic publications in modern history. It demonstrates how big data, alongside being in style, can generate important insights in fields as diverse as gender studies, economics, behavioural science, visual studies, art history, and computer science.