
From May 29th until June 2nd 2016, the 13th Extended Semantic Web Conference took place in Crete, Greece. CrowdTruth was represented by Oana Inel, who presented her paper “Machine-Crowd Annotation Workflow for Event Understanding across Collections and Domains”, and by Benjamin Timmermans, who presented his paper “Exploiting disagreement through open-ended tasks for capturing interpretation spaces”, both in the PhD Symposium.


The Semantic Web group at the Vrije Universiteit Amsterdam was very well represented, with plenty of papers at the workshops and in the main conference. The paper on CLARIAH by Rinke, Albert and Kathrin, among others, won the best paper award at the Humanities & Semantic Web workshop. Here are some of the topics and papers that we found interesting during the conference.
EMSASW: Workshop on Emotions, Modality, Sentiment Analysis and the Semantic Web
In the Workshop on Emotions, Modality, Sentiment Analysis and the Semantic Web, Hassan Saif gave a keynote talk titled “Sentiment Analysis in Social Streams, the Role of Context and Semantics”. He explained that sentiment analysis is nothing more than extracting the polarity of an opinion. With Web 2.0, sharing opinions has become easier, increasing the potential of sentiment analysis. To find these opinions, opinion mining first has to be performed, which is an integral part of sentiment analysis. Hassan compared several semantic solutions for sentiment analysis: SentiCircles, which does not rely on the structure of texts but on semantic representations of words in a context-term vector; Sentilo, an unsupervised, domain-independent semantic framework for sentence-level sentiment analysis; and sentic computing, a multi-disciplinary tool for concept-level sentiment analysis that uses both contextual and conceptual semantics of words and can achieve high performance on well-structured, formal text.
Jennifer Ling and Roman Klinger presented their work titled “An Empirical, Quantitative Analysis of the Differences between Sarcasm and Irony”. They explained the differences between irony and sarcasm quite clearly. Irony can be split into verbal irony, the use of words for a meaning other than the literal one, and situational irony, a situation where things happen opposite to what is expected. They made clear that sarcasm is an ironic utterance designed to cut or give pain; it is nothing more than a subtype of verbal irony. In tweets, they found that ironic and sarcastic tweets contain significantly fewer sentences than normal tweets.
PhD Symposium
Chiara Ghidini and Simone Paolo Ponzetto organized a very nice PhD Symposium. They took care to assign each student mentors who work in related domains, which made their feedback highly relevant and valuable. In this sense, we would like to thank our mentors Chris Biemann, Christina Unger, Lyndon Nixon and Matteo Palmonari for helping us improve our papers and for providing feedback during our presentations.
It was very nice to see that events are of high interest to the Semantic Web community. Marco Rovera presented his PhD proposal “A Knowledge-Based Framework for Events Representation and Reuse from Historical Archives”, which aims to extract semantic knowledge about events from historical data and make it available to different applications. It was nice to see that projects such as Agora and the Simple Event Model (SEM), both developed at VU Amsterdam, were mentioned in his work.
Another very interesting research project, on using human computation and crowdsourcing to solve problems that are still very difficult for computers, was presented by Amna Basharat: “Semantics Driven Human-Machine Computation Framework for Linked Islamic Knowledge Engineering”. She envisioned hybrid human-machine workflows in which the skills and background knowledge of crowds and experts, together with automated approaches, improve the efficiency and reliability of semantic annotation tasks in specialized domains.
Vocabularies, Schemas and Ontologies
Céline Alec, Chantal Reynaud and Brigitte Safar presented their work “An Ontology-driven Approach for Semantic Annotation of Documents with Specific Concepts”. This is a collaboration with the weather company, in which they use machine learning to classify things you can, but also cannot, do at a venue. This results in both positive and negative annotations. To achieve this, domain experts manually annotated documents and target concepts as either positive or negative. These target concepts were based on an ontology of tourist destinations with descriptive classes.
Open Knowledge Extraction Challenge
This year, the Open Knowledge Extraction Challenge was composed of two tasks, and two submissions were selected for each task.
Task 1: Entity Recognition, Linking and Typing for Knowledge Base Population
- Mohamed Chabchoub, Michel Gagnon and Amal Zouaq: Collective Disambiguation and Semantic Annotation for Entity Linking and Typing. Their approach combines the output of Stanford NER with the output of DBpedia Spotlight as the basis for various heuristics to improve the results (e.g., filtering verb mentions, and merging mentions of a given concept by always choosing the longest span; a toy sketch of this merging heuristic follows this list). For the mentions that were not disambiguated, they query DBpedia to extract the entity linked to each such mention, while for entities that have no types, they take the Stanford type and translate it to the DUL typing. In the end, their system outperformed Stanford NER by about 20% on the training set, and similarly outperformed the semantic annotators.
- Julien Plu, Giuseppe Rizzo and Raphaël Troncy: Enhancing Entity Linking by Combining Models. Their system is built on top of the ADEL system presented in last year's challenge. The new system architecture is composed of various models that are combined to improve entity recognition and linking. Combining various models is indeed a very good approach, since it is very difficult, if not impossible, to choose one model that performs well across all datasets and domains.
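To make the longest-span heuristic concrete, here is a minimal Python sketch of merging overlapping entity mentions by always keeping the longest span. The function and data are our own illustration, not code from the paper.

```python
# Toy illustration of a longest-span merging heuristic for entity mentions,
# in the spirit of Chabchoub et al.; names and data are ours, not theirs.

def merge_overlapping_mentions(mentions):
    """Keep only the longest mention among overlapping candidate spans.

    `mentions` is a list of (start, end, text) tuples, e.g. produced by
    combining an NER tool with an entity linker.
    """
    # Longest spans first, so they win any overlap.
    ordered = sorted(mentions, key=lambda m: m[1] - m[0], reverse=True)
    kept = []
    for start, end, text in ordered:
        overlaps = any(start < k_end and end > k_start
                       for k_start, k_end, _ in kept)
        if not overlaps:
            kept.append((start, end, text))
    return sorted(kept)

# "Barack Obama" (chars 0-12) wins over the shorter "Obama" (chars 7-12).
print(merge_overlapping_mentions([(7, 12, "Obama"), (0, 12, "Barack Obama")]))
# -> [(0, 12, 'Barack Obama')]
```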
Task 2: Class Induction and Entity Typing for Vocabulary and Knowledge Base Enrichment
- Stefano Faralli and Simone Paolo Ponzetto: Open Knowledge Extraction Challenge (2016): A Hearst-like Pattern-Based Approach to Hypernym Extraction and Class Induction. They introduced WebisaDB, a large database of hypernymy relations extracted from the Web. In addition, they combined WordNet and OntoWordNet to extract the most suitable class for the extracted hypernyms using WebisaDB.
- Lara Haidar-Ahmad, Ludovic Font, Amal Zouaq and Michel Gagnon: Entity Typing and Linking using SPARQL Patterns and DBpedia. As a take-home message, their results show a strong need for (1) better linkage between the DBpedia resources and the DBpedia ontology and (2) turning some DBpedia resources into classes.
Semantic Sentiment Analysis Challenge
This challenge consisted of two tasks: one on polarity detection over one million Amazon reviews in 20 domains, and one on entity extraction over 5,000 sentences in two domains.
- Efstratios Sygkounas, Xianglei Li, Giuseppe Rizzo and Raphaël Troncy: The SentiME System at the SSA Challenge. They used a bag of five classifiers to classify the sentiment polarity. This bagging has been shown to result in better stability and accuracy of the classification. Four-fold cross-validation was used, where in each fold the ratio of positive and negative examples was preserved.
- Soufian Jebbara and Philipp Cimiano: Aspect-Based Sentiment Analysis Using a Two-Step Neural Network Architecture. They retrieved word embeddings using a skip-gram model trained on the Amazon reviews dataset, and used the Stanford POS tagger with 46 tags. Sentics were retrieved from SenticNet, resulting in five sentics per word: pleasantness, attention, sensitivity, aptitude and polarity. They found that these sentics improve the accuracy of the classification and allow for fewer training iterations. The polarity was retrieved using SentiWordNet and used as a training feature. The results were limited because there was not enough training data.
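Skip-gram embeddings like those used above are straightforward to reproduce; here is a minimal sketch using gensim's Word2Vec. The corpus and hyperparameters are illustrative, not those of the paper.

```python
# Minimal sketch of training skip-gram word embeddings on review-like text
# with gensim (4.x API). Corpus and hyperparameters are illustrative only.
from gensim.models import Word2Vec

sentences = [
    ["this", "camera", "takes", "great", "pictures"],
    ["the", "battery", "life", "is", "terrible"],
    ["great", "battery", "and", "a", "great", "screen"],
]

# sg=1 selects the skip-gram architecture (sg=0 would be CBOW).
model = Word2Vec(sentences, sg=1, vector_size=50, window=5,
                 min_count=1, epochs=50)

print(model.wv["great"][:5])           # first dimensions of one embedding
print(model.wv.most_similar("great"))  # nearest words in embedding space
```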
IN-USE AND INDUSTRIAL TRACK
Mauro Dragoni presented his paper “Enriching a Small Artwork Collection through Semantic Linking”, a very nice project that highlights some of the issues that small museums and small museum collections encounter: data loss, no exposure, no linking to other collections, and no multilinguality. One of the issues they identified, poor linking to other collections, is also one of the main targets of our DIVE+ project and system, which creates an event-centric browser for linking and browsing across cultural heritage collections. Working with small or local museums is very difficult due to poor data quality, quantity and data management. Attracting outside visitors is also very cumbersome, since such museums have no real exposure and collection owners need to translate the data into multiple languages. As part of the Verbo-Visual-Virtual project, this research investigates how to combine NLP with Semantic Web technologies to improve access to cultural information.
Rob Brennan presented the work on “Building the Seshat Ontology for a Global History Databank”, an expert-curated body of knowledge about human history. They used an ontology to model uncertain temporal variables, and coding conventions in a wiki-like syntax to deal with uncertainty and disagreement. This allows each expert to record their own interpretation of history: different types of brackets are used to indicate varying degrees of certainty and confidence. In the tool, however, they do not show all the possible values, just the likely ones. Three graphs were used for this model: the real geospatial data, the provenance and the annotations. Different user roles are supported in their tool, which they plan to use to model trust and the reliability of their data.
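The paper describes the actual Seshat conventions in detail; as a rough illustration of the idea, a parser for a simplified, hypothetical bracket syntax could look like this.

```python
import re

def parse_value(raw):
    """Parse a toy, hypothetical uncertainty-aware coding syntax.

    This is our simplification of the idea, not the actual Seshat syntax:
      "5000"         -> a single agreed value
      "[4000:6000]"  -> an uncertain range
      "{5000; 8000}" -> disagreeing expert values
    """
    raw = raw.strip()
    range_match = re.fullmatch(r"\[(\d+):(\d+)\]", raw)
    if range_match:
        low, high = map(int, range_match.groups())
        return {"kind": "uncertain", "low": low, "high": high}
    if raw.startswith("{") and raw.endswith("}"):
        return {"kind": "disputed",
                "values": [int(v) for v in raw[1:-1].split(";")]}
    return {"kind": "exact", "value": int(raw)}

for example in ["5000", "[4000:6000]", "{5000; 8000}"]:
    print(example, "->", parse_value(example))
```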
NATURAL LANGUAGE PROCESSING AND INFORMATION RETRIEVAL
In the paper “Towards Monitoring of Novel Statements in the News”, Michael Färber stated that the increasing amount of information currently available on the web makes it imperative to search for novel information, not only relevant information. The approach extracts novel statements in the form of RDF triples, where novelty is measured with regard to an existing KB and semantic novelty classes. One limitation of the system, left for future work, is that it does not consider the timeline: old articles could be considered novel if their information is not in the KB.
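The core novelty test, checking whether a candidate triple already exists in the KB, is easy to sketch. Below is a minimal version against the public DBpedia endpoint using SPARQLWrapper; the function name and example triple are our illustration, and the paper's pipeline does considerably more (e.g., classifying the kind of novelty).

```python
# Minimal sketch of a KB-based novelty check: a statement is novel if the
# corresponding triple is absent from the knowledge base.
from SPARQLWrapper import SPARQLWrapper, JSON

def is_novel(subject, predicate, obj,
             endpoint="https://dbpedia.org/sparql"):
    """Return True if the triple is not yet in the KB."""
    sparql = SPARQLWrapper(endpoint)
    sparql.setQuery(f"ASK {{ <{subject}> <{predicate}> <{obj}> }}")
    sparql.setReturnFormat(JSON)
    exists = sparql.query().convert()["boolean"]
    return not exists

# Already in DBpedia, so not novel:
print(is_novel("http://dbpedia.org/resource/Amsterdam",
               "http://dbpedia.org/ontology/country",
               "http://dbpedia.org/resource/Netherlands"))
```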
As a side note, we also consider novelty detection an extremely relevant task given the overwhelming amount of information available, and we took the first steps towards tackling this problem by combining NLP methods and crowdsourcing (see Crowdsourcing Salient Information from News and Tweets, LREC 2016).
The paper “Efficient Graph-based Document Similarity” by Christian Paul, Achim Rettinger, Aditya Mogadala, Craig Knoblock and Pedro Szekely deals with assessing the similarity or relatedness of documents. They rank documents by relevance and similarity, first searching for surface forms of words in the document collection and then looking for co-occurrences of words in documents. They integrate semantic technologies (DBpedia, Wikidata, xLisa) to address problems arising from language ambiguity, heterogeneous data (news articles, tweets), and poor or missing metadata for images and videos, among others.
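Once documents are annotated with KB entities, even a very simple set overlap already yields an entity-level relatedness signal. The sketch below shows this baseline; the paper's method goes further and exploits the graph structure of the KB.

```python
# Baseline illustration of entity-based document relatedness: compare the
# sets of KB entities (e.g., DBpedia URIs) linked in each document. The
# paper's method additionally exploits graph paths between the entities.
def entity_jaccard(entities_a, entities_b):
    a, b = set(entities_a), set(entities_b)
    return len(a & b) / len(a | b) if a | b else 0.0

doc1 = {"dbr:Barack_Obama", "dbr:White_House", "dbr:United_States"}
doc2 = {"dbr:Barack_Obama", "dbr:United_States", "dbr:Congress"}
print(entity_jaccard(doc1, doc2))  # 0.5
```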
Amparo E. Cano presented the work on “Semantic Topic Compass – Classification based on Unsupervised Feature Ambiguity Gradation”. For classification they used lexical features such as n-grams, entities and Twitter features, as well as semantic features from DBpedia. The feature space of a topic is semantically represented under the hypothesis that words have a similar meaning if they occur in a similar context. Related words for a given topic are found using Wikipedia articles. They found that enriching the data with semantic features improved the recall of the classification. For the evaluation, three annotators classified the data; items on which they did not agree were removed from the dataset.
SEMANTIC DATA MANAGEMENT, BIG DATA, SCALABILITY
“Implicit Entity Linking in Tweets” by Sujan Perera, Pablo Mendes, Adarsh Alex, Amit Sheth and Krishnaprasad Thirunarayan presents a new approach to linking implicit entities by exploiting the facts and the known context around given entities. To achieve this, they use a temporal factor to disambiguate the entities present in tweets, i.e., they identify the domain entities that are relevant at time t.
Keynotes
On Tuesday, Jim Hendler gave a keynote speech titled “Wither OWL in a knowledge-graphed, Linked-Data World?”. The topic of the talk was the question whether OWL is dead or not. In 2010 he claimed that semantics were coming to search. Some of the companies from back then, like Siri, had success, but many did not. SPARQL has been adopted in the supercomputing field, but that community is not yet a fan of RDF. Many large companies are also using semantic concepts, but not OWL: they are simply not linking their ontologies. Schema.org now appears in 40% of Google crawls. It is simple, and this is good, because it is used in 10 billion pages; its simplicity keeps the usage consistent.
Ontologies and OWL are like Sauron's tower: if you let one inconsistency in, it may fall over completely. The RDFS view is different: it does not matter if things mean different things, it is just about linking things together. In the Web 3.0 there are many use cases for ontologies in web apps at web scale. There is a lot of data but few semantics. This explains why RDFS and SPARQL are used, but not why OWL is not. The problem is that we cannot talk about the right things in OWL.
On Thursday, Eleni Pratsini, Lab Director of the Smarter Cities Technology Center at IBM Research Ireland, gave a keynote on “Semantic Web in Business – Are we there yet?”. Her work focuses on advancing science and technology to improve the overall sustainability of cities. Applying Semantic Web technologies in smart cities could be the main way to understand a city's needs and to empower it to take smart decisions for its population and environment.
We both pitched our doctoral consortium papers at the minute-of-madness session and presented them in the poster session. You can read more about Oana's presentation here, and Benjamin's presentation here.
By Oana Inel and Benjamin Timmermans