Early in 2016, after nearly 20 years of experience in software development projects, starting this little blog seemed like a good idea to keep track of the tangled trails taken on the go in a technologised world. A year before, I had started a new freelance consulting business called qantr.com, the idea being to provide objective and reasonable advice about organisational change and technology options to institutions and companies on their path towards digital transformation. My first customers were the Berlin Philharmonic, a truly epic and innovative orchestra, the Max-Planck-Institute for the History of Science, which developed groundbreaking concepts in digital humanities, and ISR AG, a consulting firm specialized in case management, analytics, and big data. Together with ISR we started to develop an Industry 4.0 solution, primarily for the automotive industry.
Notes and Ideas

May, 2016

Earlier this year the Berliner Philharmoniker became part of the new Performing Arts project launched by the Google Cultural Institute. Eric Schmidt and GCI Director Amit Sood came here in person to show a 360° recording of Beethoven's 9th conducted by Sir Simon Rattle. The GCI is based on Google Maps, and it is best known for making amazing art collections available online in high resolution. In his talk Eric Schmidt said that the GCI will be free forever, as it aims at democratizing access to precious achievements of the human mind. But despite the unquestionable beauty of the images, it remained unclear to me why the assets appear disconnected from any deeper knowledge they may actually be associated with. For an organisation like Alphabet, maintaining a cultural graph (ideally based on a formal model like the CIDOC CRM) should be a feasible undertaking (unlike for national institutions constrained by lack of money and dedication). It is simply sad that the GCI only offers flat metadata. A properly designed graph model would make it simple to search, navigate, and explore the collections using rich and semantically precise queries, and would even allow for computing transitive closures. One could look for 'a piece which was composed by someone in the Romantic period and performed somewhere in Asia between 2010 and 2015 by a North American orchestra'. Or one could try to find out 'whether J. S. Bach's Partitas were influenced by music from outside Europe' (which, by the way, actually seems to be the case). The knowledge about cultural heritage objects is much more solid than anything you find in other knowledge bases (YAGO, DBpedia, Freebase) because it is produced and maintained by scientists. Unfortunately, it is highly heterogeneous and distributed, leaving myriads of co-references unrecognized.
Putting those loose ends together is a real challenge, one that may eventually yield new insights into how human achievements, things, persons, events, and ideas cause and influence one another.
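The kind of queries sketched above can be illustrated with a tiny in-memory triple store. Everything in this sketch is hypothetical — the entities, properties, and facts are invented placeholders, not real CIDOC CRM data or a real query engine — but it shows what a compound semantic query and a transitive closure over an 'influenced_by' relation look like once the knowledge is modelled as a graph rather than flat metadata:

```python
# Toy triple store: a set of (subject, predicate, object) facts.
# All entities and relations below are invented for illustration.
from collections import deque

triples = {
    ("Symphonie fantastique", "composed_by", "Berlioz"),
    ("Berlioz", "active_in_period", "Romantic"),
    ("Symphonie fantastique", "performed_in", "Seoul"),
    ("Seoul", "located_in", "Asia"),
    ("Partita No. 2", "composed_by", "J. S. Bach"),
    ("Partita No. 2", "influenced_by", "French dance suites"),
    ("French dance suites", "influenced_by", "Ottoman court music"),
}

def objects(subject, predicate):
    """All objects o such that (subject, predicate, o) is in the graph."""
    return {o for s, p, o in triples if s == subject and p == predicate}

def transitive_closure(subject, predicate):
    """Follow a predicate transitively, e.g. chains of influence."""
    seen, queue = set(), deque([subject])
    while queue:
        for o in objects(queue.popleft(), predicate):
            if o not in seen:
                seen.add(o)
                queue.append(o)
    return seen

# Compound query: 'pieces composed by someone active in the Romantic
# period and performed somewhere located in Asia'.
romantic_in_asia = {
    s for s, p, o in triples if p == "composed_by"
    and "Romantic" in objects(o, "active_in_period")
    and any("Asia" in objects(place, "located_in")
            for place in objects(s, "performed_in"))
}
print(romantic_in_asia)  # {'Symphonie fantastique'}

# Transitive closure: everything that (directly or indirectly)
# influenced the Partita.
print(transitive_closure("Partita No. 2", "influenced_by"))
```

In a real system the same queries would be expressed declaratively (e.g. in SPARQL over an RDF graph), but the principle is identical: precise relations plus transitive traversal turn a pile of records into answerable questions.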
February, 2016

I really wonder why orchestras don't share their data, which is public anyway. Almost every classical orchestra in the world maintains its own top-secret database of well-known composers and works (let alone artists and instruments). Often a single orchestra maintains multiple master datasets separately in different divisions.
Scientific communities would rather avoid wasting their scarce resources on maintaining datasets redundantly. How come that in the chronically underfunded cultural space, money which could well be used for producing 'content' is instead wasted on having lots of people key in the same character strings into inexpertly designed databases? Couldn't one properly designed open ontology solve this global problem? Such a repository could be built and used cooperatively; it could be extended as needed (more entries) and enriched (more languages, appellation conventions, deeper metadata) by skilled community members. Most cultural institutions are funded and supervised by public authorities, which often simply means cutting money. The institutions are left alone with a mandate to care only for their own stuff. How could we eventually get over this?
IuT Research Record
- 2010-2011 Research Scholar at Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, technical director of the German Digital Library.
- 2009 Research Scholar at Leibniz Institute for Evolution and Biodiversity (@Berlin Natural History Museum), work package leader of an EU/US digital library project (biodiversityheritagelibrary.org).
- 2006-2007 Research Scholar at Max-Planck-Institute for the History of Science: text mining and semantic clustering.
- 2002-2004 Research Scholar at Hermann-von-Helmholtz Center at Humboldt-University Berlin: research database development.
Talks (given in German)
- 2. IT-Konferenz Tech IT Out, 15 September 2014, Akademie des Deutschen Buchhandels/Literaturhaus München – IT-Strategien und -Infrastrukturen für Medienhäuser: "Push und Pull – Technologietrends (und -folgen) im Licht des (Publishing-) Business".
- 5. Paid-Content-Konferenz, 17 September 2013, Akademie des Deutschen Buchhandels: "Conversion-Optimierung von der Paywall bis zum Kaufabschluss".
- DDB (Deutsche Digitale Bibliothek) presentation in the Bundestag (session of the Committee on Culture) on 25 January 2012 (it directly led to a cross-party resolution to develop the DDB on a permanent basis; see plenary protocol 17/155 of the session on 26 January 2012).
- DDB presentation for Minister of State Neumann at the Federal Chancellery on 20 December 2012.
- DDB presentation at the BMWi (hosted by Dr. Gördeler and advisors) on 14 December 2012.
- Expert hearing on concept, architecture, and implementation status on 12 December 2011 (with Prof. Gradmann, Dr. Martin Doerr, Prof. Thaller, Michael Christen): one full day of presentation and discussion, complementing the documentation as a basis for the experts' reports (all of which subsequently praised the system as an excellent piece of engineering).
- Interview: http://www.tagesspiegel.de/wissen/die-bibliothek-kommt-nach-hause/4338564.html
- Talk: Die Deutsche Digitale Bibliothek aus technischer Sicht. 11. Oracle Bibliotheken Summit, 27-28 October 2010, Weimar, Germany. http://www.oracle.com/global/de/events/2010/local/sun/weimar/5_sessions.html
- Workshop: Das technische Konzept der Deutschen Bibliothek. Arbeitskreis "Entwicklung digitaler Bibliotheken" (Gesellschaft für Informatik), 8 October 2010, Karlsruhe, Germany. http://www.emisa2010.kit.edu/59.php
Publications (partly in German)
- Kai Stalmann, Projekt "Deutsche Digitale Bibliothek": Rationale zur IAIS CORTEX Konzeption, den Datenmodellen und Mappings: http://www.iais.fraunhofer.de/fileadmin/user_upload/Abteilungen/NM/pdfs/DDB_IAIS-CORTEX_Rationale20111221.pdf.
- Kai Stalmann, Marion Borowski and Sven Becker: Preparing the Ground for the German Digital Library, in: ERCIM News, Special Theme: ICT for Cultural Heritage, Number 86, July 2011, publ. by the European Research Consortium for Informatics and Mathematics, www.ercim.eu.
- Kai Stalmann, Reinhard Budde, Robert Mertens, Christoph Tornau, Dennis Wegener, Volker Heydegger, Thorsten Wunderlich, Bernd Ingenbleek, Florian Schulz: Kultur im Wissensnetz - Die Architektur der Deutschen Digitalen Bibliothek, in: GI Gesellschaft für Informatik, Mitteilungen der GI-Fachgruppe "Entwicklungsmethoden für Informationssysteme und deren Anwendung", Jg. 31, Heft 2, Juni 2011
- Sven Becker, Marion Borowski, Melanie Gnasa, Kai Stalmann, Stefan Wrobel: eHumanities: Intelligent Analysis and Information System for Humanities and Culture. GI Jahrestagung (2) 2010: 552-556. http://dblp.uni-trier.de/rec/bibtex/conf/gi/BeckerBGSW10
- Kai Stalmann: BHL-Europe - Improving interoperability of and access to European biodiversity digital libraries (talk). 2nd LIBER-EBLIDA Workshop on the digitization of library material in Europe The Hague, Netherlands, 19-21 October 2009. http://www.libereurope.eu/files/Digitisation%20Programme%20Online-final.pdf
© 2016 - Kai Stalmann - Berlin - Germany