Three Days of Publications

Friday N°4, October 25th, 2018

After postponing my visit to Groningen – first of all because I would have had to go to Leeuwarden anyway, and second because of the sheer wealth of the Boeles family archive (anyone out there looking for materials to write a nice solid biography?) – I first wanted to get a better grip on Johannes Braun, so as to know what I would really want to look at among Boeles’ papers. Since my visit to the Utrechts Archief on that account was not all that successful (well, at least I now know what Braun’s heterodoxy consisted in: he was suspected of being a Unitarian[1] and an enemy of the social order[2]), I decided that this week should be used to test my data model a bit more thoroughly.

What’s to be tested?

The basic idea behind the model is that it should be able to capture reception processes diachronically. This is why NodeGoat intrigued me from the start – this is a core feature of the program – and I have kept using it to my great satisfaction ever since (thanks a lot, Pim and Geert!). To be able to capture these reception processes I first introduced the object category of “publications” for everything that was intended to be published.

 

The “publication” category in my NodeGoat database

To track references more specifically than just by a global descriptor “Refers to”, I then introduced the sub-object category of “Reference”, which allows me to capture who is referenced, which publication is quoted (if any), and which journal or other serial source is referred to (if any). To be able to run co-citation analyses later on, each reference is also assigned the page number on which it occurs in the citing publication.

A publication with its references

I decided to keep references as simple as possible. This means they are basically singular: each reference points to exactly one person, at most one publication, and at most one journal. They are not unique, as two references may contain exactly the same data – which happens when someone is quoted with the same work more than once on the same page. While this captures most of the relevant details very well, the model is admittedly a bit prone to proliferating references whenever things get complicated. If, for instance, a person such as Adrien Reland is referred to with two editions of his book “De religione Mohammedica” (that of 1705 and that of 1718) in one sentence, in the language of the model this translates into two references to Reland on the same page, one to the 1705 edition and one to the 1718 one. Especially in cases where many people and works are referred to on the same page, entering all these data can get quite tiresome.
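To make the structure explicit, here is a minimal Python sketch of the model as just described – purely illustrative, with class and field names of my own choosing rather than anything NodeGoat uses internally:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Minimal sketch of the object/sub-object model described above;
# names are my own shorthand, not NodeGoat's internal representation.

@dataclass
class Reference:
    person: str                         # exactly one person is referenced
    publication: Optional[str] = None   # at most one publication quoted
    journal: Optional[str] = None       # at most one journal/serial referred to
    page: Optional[int] = None          # page in the citing publication (for co-citation analysis)

@dataclass
class Publication:
    title: str
    year: Optional[int] = None
    references: List[Reference] = field(default_factory=list)

# Reland cited with two editions of "De religione Mohammedica" in one
# sentence becomes two separate (non-unique) references on the same page:
item = Publication("Some article on early modern orientalism", 2005)
item.references.append(Reference("Adrien Reland", "De religione Mohammedica (1705)", page=12))
item.references.append(Reference("Adrien Reland", "De religione Mohammedica (1718)", page=12))
```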

Named Entity Recognition, 100% handcrafted

And this is because I formulated the concept while working with 18th-century learned journals, knowing for sure that none of the data I needed for my analysis was available as standardized linked open data. So although I work with databases all the time – they supply me with the digitized images and texts and sometimes even allow full-text searches pointing me to the references to be picked up – entering the data and identifying persons, places, and texts remains work to be done by hand. As this is really nothing other than plain Named Entity Recognition, in theory it might be done automatically. But I suspect I would need a fairly advanced AI to process data from these source materials, as they abound with non-standardized orthography, obsolete name variants, arbitrary abbreviations, misspellings, and even outright mistakes such as mistaken identities or confused dates.
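For comparison, this is roughly what an off-the-shelf automated attempt would look like – a hypothetical sketch using spaCy’s small English model, which of course has never been trained on early modern orthography or Latinized name variants, which is exactly why I do not expect it to replace the manual work any time soon:

```python
import spacy  # assumes spaCy and the small English model are installed

# Stock NER as a point of comparison only: a generic model has never seen
# "Relandus", "Adriaan Reelant" or abbreviated 18th-century journal titles,
# so its output would still need heavy manual checking and disambiguation.
nlp = spacy.load("en_core_web_sm")

text = ("Relandus, in his De religione Mohammedica (Utrecht 1705), "
        "answered the objections raised in the Acta Eruditorum.")

doc = nlp(text)
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. PERSON, WORK_OF_ART, GPE – if recognized at all
```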

Starting from the other end of time

So what I did this week was to test the methodology and the model on a sample of the more recent texts I also have to process – those that refer to my four main persons from today, or from not so long ago (compared to the 18th century, at least). Starting from the other end of time, I simply used JSTOR, with an advanced search set to “All content” and “Relandus” as search parameter. This returned 68 results, 53 of which I can access in full text with my JSTOR account and 15 of which I can’t. With what was left of the week after finishing all the other tasks scheduled for these days, I ended up with around 18 hours for a first go at these results. This took me through 31 of the 68, not quite 50% of the sample. The average time spent on each item was around 34 minutes, which I would consider not too bad for a start. For a start, because – as with every new data set – there were many more new identities to be identified than already known ones that merely had to be related to the new items. These 31 items, most of which were journal articles published between 1883 and 2010, netted me a total of 88 new persons to be identified and added to the database, along with another 11 new publications cited within the items. They also brought with them 23 new institutions, such as the journals they were published in, and 25 new publishing houses. All in all, the 31 items from the 68 search results netted me a total of 178 new database objects, or more than 5 per item on average. This means that it currently takes me about 6 minutes to identify an unknown entity and add it to the database, at least in a form I can work with. As related items usually share at least some of their entities – which is why they are related, of course – the percentage of new objects to be added declines over time against those already known, which should speed up the processing. Well, I’ll know better next week after having gone through the remaining 37 search results…
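For transparency, the back-of-the-envelope arithmetic behind these averages – assuming, as I do in my tally, that the 31 processed items themselves also count as new publication objects, which is what makes the total of 178 add up:

```python
# Quick check of the numbers above (my own tallying, not output of the database).
hours_spent = 18
items_done = 31
new_objects = 31 + 88 + 11 + 23 + 25   # items + persons + publications + institutions + publishers

minutes = hours_spent * 60
print(minutes / items_done)      # ≈ 34.8 minutes per item
print(new_objects)               # 178 new database objects
print(new_objects / items_done)  # ≈ 5.7 objects per item
print(minutes / new_objects)     # ≈ 6.1 minutes per new entity
```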

And what about references?

One of the publications with richer reference profiles

But if references are the key relational feature of the data model, how many references did I get out of these 31 search result items? That’s the really interesting number, isn’t it? And it is… 93. That is exactly three useful references per item on average, but in this case the average is deceptive. The majority of these texts made only a single relevant reference, and then there was a smaller group counting between 7 and 11 references each. This is of course due to some publications centring on a closely related field of inquiry – in this case studies of early modern orientalism and/or early modern Dutch academia – and others focusing on more distantly related fields, such as linguistics or biblical geography. This is not bad as such, and already produces useful visualizations. But it also means that, on average, 11 minutes of work are needed to generate one meaningful reference. So this might be a point to ponder again: is the method, as it stands, really suited to the project? Will it work out like this? Seems like a bit more testing is needed. Looks like next week has just got a new objective…
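The same quick calculation for the reference yield:

```python
# Reference yield from the same 18 hours of work.
references = 93
items_done = 31
minutes = 18 * 60

print(references / items_done)  # exactly 3.0 references per item
print(minutes / references)     # ≈ 11.6 minutes per meaningful reference
```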

Sample of items, linked by references to other objects in the database


[1] Utrechts Archief, 17.36, Uittreksel uit de resoluties van de Staaten van Groningen betreffende de theologische geschilpunten tussen de Groninger professoren Samuel Desmarets en Jacobus Alting, 1669, en Johannes à Marck en Johannes Braun, 1681–1690, p. [3]: March 1st, 1689.

[2] Utrechts Archief, 17.36, Uittreksel uit de resoluties van de Staaten van Groningen betreffende de theologische geschilpunten tussen de Groninger professoren Samuel Desmarets en Jacobus Alting, 1669, en Johannes à Marck en Johannes Braun, 1681–1690, p. [1]–[2]: May 14th, 1686.

