Krajnc Aleksandra, Iacono Lucas, Kirschbichler Stephan, Klein Christoph, Breitfuss D, Steidl T, Pucher J
2023
This study investigates the kinematics of vehicle occupants on the passenger seat in reclined and upright seated positions. Thirty-nine volunteers (12 female and 27 male) were tested in 30 kph and 50 kph braking and steering manoeuvres. Eleven manoeuvres were conducted with each volunteer in aware and unaware states. A sedan modified with a belt-integrated seat was used. The kinematics was recorded with a video-based system and, additionally, with acceleration and angular velocity sensors. Interaction with the seat was measured with pressure mats, and muscle activity was recorded in upper-body and lower-body muscles. This publication focuses on the occupant kinematics and its processing with a linear mathematical model. Kinematics and the respective corridors are predicted for given age, gender, and anthropometric data.
Malinverno Luca, Barros Vesna, Ghisoni Francesco, Visonà Giovanni, Kern Roman, Nickel Philip, Ventura Barbara Elvira, Simic Ilija, Stryeck Sarah, Manni Francesca, Ferri Cesar, Jean-Quartier Clair, Genga Laura, Schweikert Gabriele, Lovric Mario, Rosen-Zvi Michal
2023
Understanding the inner workings of machine-learning models has become a crucial point of discussion regarding the fairness and reliability of artificial intelligence (AI). In this perspective, we reveal insights from recently published scientific works on explainable AI (XAI) within the biomedical sciences. Specifically, we speculate that the COVID-19 pandemic is associated with the rate of publications in the field. Current research efforts seem to be directed more toward explaining black-box machine-learning models than toward designing novel interpretable architectures. Notably, an inflection period in the publication rate was observed in October 2020, when the quantity of XAI research in the biomedical sciences surged upward significantly. While a universally accepted definition of explainability is unlikely, ongoing research efforts are pushing the biomedical field toward improving the robustness and reliability of applied machine learning, which we consider a positive trend.
Repolusk Tristan, Veas Eduardo Enrique
2023
Suzipu notation, also called banzipu notation, was predominantly used during the Song dynasty in China and is still actively performed in the Xi’an Guyue music tradition. In this paper, the first tool for creating a machine-readable digital representation of suzipu notation with a focus on optical music recognition (OMR) is proposed. This contribution serves two purposes: i) creating the basis for the future development of OMR methods for suzipu notation; and ii) facilitating the digitization of musical sources written in suzipu notation. In summary, these purposes promote the preservation and understanding of cultural heritage through digitization.
Geiger Bernhard, Schuppler Barbara
2023
Given the development of automatic-speech-recognition-based techniques for creating phonetic annotations of large speech corpora, there has been a growing interest in investigating the frequencies of occurrence of phonological and reduction processes. Since most studies have analyzed these processes separately, they did not provide insights into their co-occurrences. This paper introduces graph-theoretic methods for the analysis of pronunciation variation in a large corpus of Austrian German conversational speech. More specifically, we investigate how reduction processes that are typical for spontaneous German in general co-occur with phonological processes typical for the Austrian German variety. Whereas our concrete findings are of special interest to scientists investigating variation in German, the approach presented opens new possibilities for analyzing pronunciation variation in large corpora of different speaking styles in any language.
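To illustrate the co-occurrence analysis described in this abstract, the following minimal sketch counts how often two pronunciation processes occur within the same word token. The data, process labels, and annotation format are hypothetical placeholders, not the corpus's actual scheme:

```python
from collections import Counter
from itertools import combinations

# Toy word tokens, each annotated with the set of pronunciation
# processes it exhibits (hypothetical labels, for illustration only).
tokens = [
    {"schwa-elision", "final-devoicing"},
    {"schwa-elision", "l-vocalisation"},
    {"schwa-elision", "final-devoicing"},
    {"final-devoicing"},
]

# Nodes are processes; edge weights count how often two processes
# co-occur within the same token.
edges = Counter()
for processes in tokens:
    for pair in combinations(sorted(processes), 2):
        edges[pair] += 1

print(edges[("final-devoicing", "schwa-elision")])  # -> 2
```

The resulting weighted co-occurrence graph can then be analyzed with standard graph measures such as degree or community structure.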
Grill-Kiefer Gerhard, Schröcker Stefan, Krasser Hannes, Körner Stefan
2023
The optimal and sustainable design of complex processes in manufacturing companies requires a structured approach to problem solving. Owing to the broad range of tasks and the competing objectives involved, this applies in particular to supply chain management in the automotive industry. Using a process structured into several steps, a computational model for optimizing the total costs of the parts supply process was successfully developed and applied. With the involvement of the relevant departments, this model ensures the holistic optimization of the total supply costs and the execution of efficient planning loops in day-to-day operations. Data quality plays a particularly important role here.
Siddiqi Shafaq, Qureshi Faiza, Lindstaedt Stefanie, Kern Roman
2023
Outlier detection in non-independent and identically distributed (non-IID) data refers to identifying unusual or unexpected observations in datasets that do not follow an independent and identically distributed (IID) assumption. This presents a challenge in real-world datasets where correlations, dependencies, and complex structures are common. In recent literature, several methods have been proposed to address this issue; each method has its own strengths and limitations, and the selection depends on the data characteristics and application requirements. However, there is a lack of a comprehensive categorization of these methods in the literature. This study addresses this gap by systematically reviewing methods for outlier detection in non-IID data published from 2015 to 2023. This study focuses on three major aspects: data characteristics, methods, and evaluation measures. In data characteristics, we discuss the differentiating properties of non-IID data. Then we review the recent methods proposed for outlier detection in non-IID data, covering their theoretical foundations and algorithmic approaches. Finally, we discuss the evaluation metrics proposed to measure the performance of these methods. Additionally, we present a taxonomy for organizing these methods and highlight the application domains of outlier detection in non-IID categorical data, outlier detection in federated learning, and outlier detection in attribute graphs. We provide a comprehensive overview of datasets used in the selected literature. Moreover, we discuss open challenges in outlier detection for non-IID data to shed light on future research directions. By synthesizing the existing literature, this study contributes to advancing the understanding and development of outlier detection techniques in non-IID data settings.
Müllner Peter, Lex Elisabeth, Schedl Markus, Kowald Dominik
2023
State-of-the-art recommender systems produce high-quality recommendations to support users in finding relevant content. However, through the utilization of users' data for generating recommendations, recommender systems threaten users' privacy. To alleviate this threat, differential privacy is often used to protect users' data by adding random noise. This, however, leads to a substantial drop in recommendation quality. Therefore, several approaches aim to improve the trade-off between accuracy and user privacy. In this work, we first give an overview of threats to user privacy in recommender systems, followed by a brief introduction to the differential privacy framework that can protect users' privacy. Subsequently, we review recommendation approaches that apply differential privacy, and we highlight research that improves the trade-off between recommendation quality and user privacy. Finally, we discuss open issues, e.g., the relation between privacy and fairness, and users' different needs for privacy. With this review, we hope to provide other researchers with an overview of the ways in which differential privacy has been applied to state-of-the-art collaborative filtering recommender systems.
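The noise-adding mechanism mentioned above can be sketched in a few lines. The following is a generic illustration of the Laplace mechanism (not the specific protocol of any surveyed approach): noise with scale sensitivity/ε is added to each rating before release, drawing the Laplace sample as the difference of two exponential variates:

```python
import random

def privatize(rating, epsilon, sensitivity=1.0, rng=random):
    # Laplace mechanism: adding Laplace(0, sensitivity/epsilon) noise
    # makes the released rating epsilon-differentially private. A Laplace
    # variate equals the difference of two i.i.d. exponential variates.
    scale = sensitivity / epsilon
    noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return rating + noise

rng = random.Random(0)
noisy = [privatize(4.0, epsilon=1.0, rng=rng) for _ in range(10_000)]
mean = sum(noisy) / len(noisy)
# The noise is zero-mean, so the average of many noisy releases stays
# close to the true rating of 4.0, while each single release is protected.
```

A smaller ε means a larger noise scale and stronger privacy, which is exactly the accuracy-privacy trade-off the review discusses.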
Müllner Peter, Lex Elisabeth, Schedl Markus, Kowald Dominik
2023
User-based KNN recommender systems (UserKNN) utilize the rating data of a target user’s k nearest neighbors in the recommendation process. This, however, increases the privacy risk of the neighbors since their rating data might be exposed to other users or malicious parties. To reduce this risk, existing work applies differential privacy by adding randomness to the neighbors’ ratings, which reduces the accuracy of UserKNN. In this work, we introduce ReuseKNN, a novel differentially-private KNN-based recommender system. The main idea is to identify small but highly reusable neighborhoods so that (i) only a minimal set of users requires protection with differential privacy, and (ii) most users do not need to be protected with differential privacy, since they are only rarely exploited as neighbors. In our experiments on five diverse datasets, we make two key observations: Firstly, ReuseKNN requires significantly smaller neighborhoods, and thus, fewer neighbors need to be protected with differential privacy compared to traditional UserKNN. Secondly, despite the small neighborhoods, ReuseKNN outperforms UserKNN and a fully differentially private approach in terms of accuracy. Overall, ReuseKNN leads to significantly less privacy risk for users than in the case of UserKNN.
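The reuse idea can be illustrated with a toy sketch (not the paper's actual algorithm): among the most similar candidates, neighbors that were already exploited before are preferred, so the set of users that ever needs differential-privacy protection stays small:

```python
from collections import Counter

def pick_neighbors(ranked_candidates, usage, k, pool=4):
    # From the `pool` most similar candidates, prefer users already
    # exploited as neighbors (highest reuse count first); Python's sort
    # is stable, so ties keep the similarity order.
    chosen = sorted(ranked_candidates[:pool],
                    key=lambda u: usage[u], reverse=True)[:k]
    for u in chosen:
        usage[u] += 1
    return chosen

usage = Counter()
# Similarity-ranked candidate lists for three target users (toy data).
for ranked in (["a", "b", "c", "d"],
               ["b", "a", "d", "c"],
               ["c", "a", "b", "d"]):
    pick_neighbors(ranked, usage, k=2)

# Only users ever exploited as neighbors would need DP protection.
exploited = {u for u, n in usage.items() if n > 0}
print(sorted(exploited))  # -> ['a', 'b']
```

In this toy run only two of the four candidate users are ever used as neighbors, mirroring the paper's observation that most users are rarely exploited and hence need no noise.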
Moscati Marta, Wallman Christian, Reiter-Haas Markus, Kowald Dominik, Lex Elisabeth, Schedl Markus
2023
Integrating the ACT-R Framework with Collaborative Filtering for Explainable Sequential Music Recommendation
Geiger Bernhard, Jahani Alireza, Hussain Hussain, Groen Derek
2023
In this work, we investigate Markov aggregation for agent-based models (ABMs). Specifically, if the ABM models agent movements on a graph, if its ruleset satisfies certain assumptions, and if the aim is to simulate aggregate statistics such as vertex populations, then the ABM can be replaced by a Markov chain on a comparably small state space. This equivalence between a function of the ABM and a smaller Markov chain allows us to reduce the computational complexity of the agent-based simulation from being linear in the number of agents to being constant in the number of agents and polynomial in the number of locations. We instantiate our theory for a recent ABM for forced migration (Flee). We show that, even though the rulesets of Flee violate some of our necessary assumptions, the aggregated Markov chain-based model, MarkovFlee, achieves comparable accuracy at substantially reduced computational cost. Thus, MarkovFlee can help NGOs and policy makers forecast forced migration in certain conflict scenarios in a cost-effective manner, contributing to fast and efficient delivery of humanitarian relief.
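The aggregation idea can be sketched as follows: instead of moving individual agents, a population vector over locations is multiplied by a transition matrix, so the per-step cost is independent of the number of agents. The matrix below is an illustrative toy, not Flee's actual ruleset:

```python
# Toy 3-location transition matrix (rows sum to 1). T[i][j] is the
# per-step probability of moving from location i to location j;
# location 2 is absorbing (e.g., a destination camp).
T = [
    [0.8, 0.15, 0.05],
    [0.0, 0.9,  0.1],
    [0.0, 0.0,  1.0],
]

def step(populations, T):
    # One aggregate step: p'_j = sum_i p_i * T[i][j]. The cost depends
    # on the number of locations, not on the number of agents.
    n = len(T)
    return [sum(populations[i] * T[i][j] for i in range(n))
            for j in range(n)]

pops = [1000.0, 0.0, 0.0]  # all 1000 agents start at location 0
for _ in range(5):
    pops = step(pops, T)

print(round(sum(pops)))  # -> 1000 (total population is conserved)
```

Simulating ten times as many agents only changes the initial vector, not the runtime, which is the computational advantage the abstract describes.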
Rohrhofer Franz Martin, Posch Stefan, Gößnitzer Clemens, Geiger Bernhard
2023
This paper empirically studies commonly observed training difficulties of Physics-Informed Neural Networks (PINNs) on dynamical systems. Our results indicate that fixed points, which are inherent to these systems, play a key role in the optimization of the physics loss function embedded in PINNs. We observe that the loss landscape exhibits local optima that are shaped by the presence of fixed points. We find that these local optima contribute to the complexity of the physics loss optimization, which can explain common training difficulties and the resulting nonphysical predictions. Under certain settings, e.g., initial conditions close to fixed points or long simulation times, we show that those optima can even attain lower loss than that of the desired solution.
Posch Stefan, Gößnitzer Clemens, Rohrhofer Franz Martin, Geiger Bernhard, Wimmer Andreas
2023
The turbulent jet ignition concept using prechambers is a promising solution to achieve stable combustion at lean conditions in large gas engines, leading to high efficiency at low emission levels. Due to the wide range of design and operating parameters for large gas engine prechambers, the preferred method for evaluating different designs is computational fluid dynamics (CFD), as testing in test bed measurement campaigns is time-consuming and expensive. However, the significant computational time required for detailed CFD simulations, owing to the complexity of solving the underlying physics, also limits their applicability. In optimization settings similar to the present case, i.e., where the evaluation of the objective function(s) is computationally costly, Bayesian optimization has largely replaced classical design of experiments. Thus, the present study deals with the computationally efficient Bayesian optimization of large gas engine prechamber designs using CFD simulations. Reynolds-averaged Navier-Stokes simulations are used to determine the target values as a function of the selected prechamber design parameters. The results indicate that the chosen strategy is effective in finding a prechamber design that achieves the desired target values.
Rohrhofer Franz Martin, Posch Stefan, Gößnitzer Clemens, García-Oliver José M., Geiger Bernhard
2023
Flamelet models are widely used in computational fluid dynamics to simulate thermochemical processes in turbulent combustion. These models typically employ memory-expensive lookup tables that are predetermined and represent the combustion process to be simulated. Artificial neural networks (ANNs) offer a deep learning approach that can store this tabular data using a small number of network weights, potentially reducing the memory demands of complex simulations by orders of magnitude. However, ANNs with standard training losses often struggle with underrepresented targets in multivariate regression tasks, e.g., when learning minor species mass fractions as part of lookup tables. This paper seeks to improve the accuracy of an ANN when learning multiple species mass fractions of a hydrogen (H2) combustion lookup table. We assess a simple yet effective loss weight adjustment that outperforms the standard mean-squared-error optimization and enables accurate learning of all species mass fractions, even of minor species for which the standard optimization completely fails. Furthermore, we find that the loss weight adjustment leads to more balanced gradients in the network training, which explains its effectiveness.
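One plausible form of such a loss weight adjustment (an illustrative sketch; the paper's exact weighting may differ) scales each species' squared error by the inverse of its typical magnitude, so minor species are not drowned out by major ones:

```python
def weighted_mse(pred, true, weights):
    # Mean of per-target squared errors, each scaled by its weight.
    return sum(w * (p - t) ** 2
               for w, p, t in zip(weights, pred, true)) / len(pred)

# Mass fractions of two major species and one minor species (toy values).
true = [0.70, 0.29, 1e-4]
pred = [0.71, 0.28, 2e-4]

# Inverse-squared-magnitude weights put all species on a comparable
# relative scale (one possible choice, hypothetical for illustration).
weights = [1.0 / t ** 2 for t in true]

plain = sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred)
# Under plain MSE the minor species' squared error (1e-8) is invisible
# next to the majors' (1e-4); under the weighted loss it contributes on
# the same relative footing, so its gradient is no longer negligible.
```

This also hints at the gradient-balancing effect the abstract mentions: the weight rescales the gradient of each target to a comparable size.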
Hoffer Johannes G., Ranftl Sascha, Geiger Bernhard
2023
We consider the problem of finding an input to a stochastic black box function such that the scalar output of the black box function is as close as possible to a target value in the sense of the expected squared error. While the optimization of stochastic black boxes is classic in (robust) Bayesian optimization, the current approaches based on Gaussian processes predominantly focus either on (i) maximization/minimization rather than target value optimization or (ii) on the expectation, but not the variance of the output, ignoring output variations due to stochasticity in uncontrollable environmental variables. In this work, we fill this gap and derive acquisition functions for common criteria such as the expected improvement, the probability of improvement, and the lower confidence bound, assuming that aleatoric effects are Gaussian with known variance. Our experiments illustrate that this setting is compatible with certain extensions of Gaussian processes, and show that the thus derived acquisition functions can outperform classical Bayesian optimization even if the latter assumptions are violated. An industrial use case in billet forging is presented.
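A simplified version of target value optimization (not the paper's derived acquisition functions) scores each candidate by the expected squared error to the target, which for a Gaussian output decomposes into squared bias plus epistemic plus aleatoric variance; all posterior values below are hypothetical:

```python
def expected_squared_error(mu, var_epistemic, var_aleatoric, target):
    # For Gaussian output y with mean mu and total variance
    # var_epistemic + var_aleatoric:
    # E[(y - t)^2] = (mu - t)^2 + var_epistemic + var_aleatoric.
    return (mu - target) ** 2 + var_epistemic + var_aleatoric

# Hypothetical GP posterior (mean, epistemic variance) at three
# candidate inputs; known aleatoric variance 0.1; target value 5.0.
candidates = {"x1": (4.9, 0.50), "x2": (5.3, 0.01), "x3": (6.0, 0.05)}
scores = {x: expected_squared_error(m, v, 0.1, 5.0)
          for x, (m, v) in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # -> x2: slightly biased, but far more certain than x1
```

Note how the criterion trades off closeness of the mean to the target against both sources of variance, which is the gap the paper's acquisition functions address more rigorously.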
Disch Leonie, Pammer-Schindler Viktoria
2023
Many knowledge-intensive tasks - where learning is required and expected - are now computer-supported. Consequently, interaction design has the opportunity to support the learning that is necessary to complete a task. In our work, we specifically use knowledge construction theory to model learning. In this position paper, we elaborate on three overarching goals: I) identifying (computational) measurement methods that operationalize knowledge construction theory, II) using these measurement methods to evaluate and compare user interface design elements, and III) adapting user interfaces using knowledge about which design elements support which step of knowledge construction - gained through II) - together with user models. Our prior and ongoing work targets two areas, namely open science (knowledge construction is necessary to understand scientific texts) and data analytics (knowledge construction is necessary to develop insights based on data).
Wolfbauer Irmtraud, Bangerl Mia Magdalena, Maitz Katharina, Pammer-Schindler Viktoria
2023
In Rebo at Work, chatbot Rebo helps apprentices to reflect on a work experience and associate it with their training’s learning objectives. Rebo poses questions that motivate the apprentice to look at a work experience from different angles, pondering how it went, the problems they encountered, what they learned from it, and what they take away for the future. We present preliminary results of a 9-month field study (analysis of 90 interactions of the first 6 months) with 51 apprentices in the fields of metal technology, mechatronics, and electrical engineering. During reflection with Rebo at Work, 98% of apprentices were able to identify their work experience as a learning opportunity and reflect on that, and 83% successfully connected it with a learning objective. This shows that self-monitoring of learning objectives and reflection on work tasks can be guided by a conversational agent and motivates further research in this area.
Adilova Linara, Geiger Bernhard, Fischer Asja
2023
The information-theoretic framework promises to explain the predictive power of neural networks. In particular, the information plane analysis, which measures mutual information (MI) between input and representation as well as representation and output, should give rich insights into the training process. This approach, however, was shown to strongly depend on the choice of estimator of the MI. The problem is amplified for deterministic networks if the MI between input and representation is infinite. Thus, the estimated values are defined by the different approaches for estimation, but do not adequately represent the training process from an information-theoretic perspective. In this work, we show that dropout with continuously distributed noise ensures that MI is finite. We demonstrate in a range of experiments that this enables a meaningful information plane analysis for a class of dropout neural networks that is widely used in practice.
Berger Katharina, Rusch Magdalena, Pohlmann Antonia, Popowicz Martin, Geiger Bernhard, Gursch Heimo, Schöggl Josef-Peter, Baumgartner Rupert J.
2023
Digital product passports (DPPs) are an emerging technology and are considered enablers of sustainable and circular value chains, as they support sustainable product management (SPM) by gathering and containing product life cycle data. However, some life cycle data are considered sensitive by stakeholders, resulting in a reluctance to share such data. This contribution provides a concept illustrating how data science and machine learning approaches enable electric vehicle battery (EVB) value chain stakeholders to carry out confidentiality-preserving data exchange via a DPP. This, in turn, can help overcome the reluctance to share data, consequently facilitating sustainability data management on a DPP for an EVB. The concept development comprised a literature review to identify data needs for sustainable EVB management, data management challenges, and potential data science approaches for data management support. Furthermore, three explorative focus group workshops and follow-up consultations with data scientists were conducted to discuss the identified data science approaches. This work complements the emerging literature on digitalization and SPM by exploring the specific potential of data science and machine learning approaches to enable sustainability data management and reduce the reluctance to share data. Furthermore, the concept is of practical relevance, as it may provide practitioners with new impulses regarding DPP development and implementation.
Hobisch Elisabeth, Völkl Yvonne, Geiger Bernhard, Saric Sanja, Scholger Martina, Helic Denis, Koncar Philipp, Glatz Christina
2023
(extended abstract)
Kowald Dominik, Mayr Gregor, Schedl Markus, Lex Elisabeth
2023
A Study on Accuracy, Miscalibration, and Popularity Bias in Recommendation
Iacono Lucas, Pacios David, Vázquez-Poletti José Luis
2023
A sustainable agricultural system focuses on technologies and methodologies applied to supply a variety of sufficient, nutritious, and safe foods at an affordable price to feed the world population. To meet this goal, farmers and agronomists need crop health metrics to monitor farms and to detect problems such as diseases or droughts early. They can then apply the necessary measures to correct crop problems and maximize yields. Large datasets of multispectral images and cloud computing are a must for obtaining such metrics. Cameras on drones and satellites collect large multispectral image datasets. The cloud allows for storing the image datasets and executing services that extract crop health metrics such as the Normalized Difference Vegetation Index (NDVI). NDVI cloud computation generates new research challenges, such as which cloud service allows a given amount of images to be computed at minimum cost. This article presents Serverless NDVI (SNDVI), a novel serverless-computing-based framework for NDVI computation. The main goal of our framework is to minimize the economic costs related to the use of a public cloud while computing NDVI from large datasets. We deployed our application using AWS Lambda and Amazon S3, and then performed a validation experiment. The experiment consisted of executing the framework to extract NDVI from a dataset of multispectral images collected with the Landsat 8 satellite. We then evaluated the overall framework performance in terms of execution time and economic cost. The experimental results allowed us to determine that the framework fulfils its objective and that serverless computing services are a potentially convenient option for NDVI computation from large image datasets.
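The per-pixel NDVI computation at the core of such a framework is straightforward; the reflectance values below are toy numbers standing in for a Landsat 8 scene, which would in practice be large arrays processed inside the serverless functions:

```python
def ndvi(nir, red, eps=1e-12):
    # NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1;
    # eps guards against division by zero on dark pixels.
    return (nir - red) / (nir + red + eps)

# Toy 2x2 near-infrared and red reflectance bands.
nir_band = [[0.5, 0.6], [0.4, 0.3]]
red_band = [[0.1, 0.2], [0.4, 0.1]]

ndvi_map = [[ndvi(n, r) for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir_band, red_band)]
print(round(ndvi_map[0][0], 3))  # -> 0.667 (healthy vegetation)
```

Each Lambda invocation would apply exactly this band arithmetic to one image tile, which is why the workload parallelizes so naturally on serverless platforms.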
Jantscher Michael, Gunzer Felix, Kern Roman, Hassler Eva, Tschauner Sebastian, Reishofer Gernot
2023
Recent advances in deep learning and natural language processing (NLP) have opened many new opportunities for automatic text understanding and text processing in the medical field. This is of great benefit as many clinical downstream tasks rely on information from unstructured clinical documents. However, for low-resource languages like German, the use of modern text processing applications that require a large amount of training data proves to be difficult, as only a few datasets are available, mainly due to legal restrictions. In this study, we present an information extraction framework that was initially pre-trained on real-world computed tomographic (CT) reports of head examinations, followed by domain-adaptive fine-tuning on reports from different imaging examinations. We show that in the pre-training phase, the semantic and contextual meaning of one clinical reporting domain can be captured and effectively transferred to foreign clinical imaging examinations. Moreover, we introduce an active learning approach with an intrinsic strategic sampling method to generate highly informative training data with low human annotation cost. We see that the model performance can be significantly improved by an appropriate selection of the data to be annotated, without the need to train the model on a specific downstream task. With a general annotation scheme that can be used not only in the radiology field but also in a broader clinical setting, we contribute to a more consistent labeling and annotation process that also facilitates the verification and evaluation of language models in the German clinical setting.
Gabler Philipp, Geiger Bernhard, Schuppler Barbara, Kern Roman
2023
Superficially, read and spontaneous speech—the two main kinds of training data for automatic speech recognition—appear to be equivalent: both are pairs of texts and acoustic signals. Yet, spontaneous speech is typically harder to recognize. This is usually explained by different kinds of variation and noise, but there is a more fundamental deviation at play: for read speech, the audio signal is produced by recitation of the given text, whereas in spontaneous speech, the text is transcribed from a given signal. In this review, we embrace this difference by presenting a first introduction of causal reasoning into automatic speech recognition, and by describing causality as a tool to study speaking styles and training data. After breaking down the data generation processes of read and spontaneous speech and analysing the domain from a causal perspective, we highlight how data generation by annotation must affect the interpretation of inference and performance. Our work discusses how various results from the causality literature regarding the impact of the direction of data generation mechanisms on learning and prediction apply to speech data. Finally, we argue how a causal perspective can support the understanding of models in speech processing regarding their behaviour, capabilities, and limitations.
Trügler Andreas, Scher Sebastian, Kopeinik Simone, Kowald Dominik
2023
The use of data-driven decision support by public agencies is becoming more widespread and already influences the allocation of public resources. This raises ethical concerns, as it has adversely affected minorities and historically discriminated groups. In this paper, we use an approach that combines statistics and data-driven approaches with dynamical modeling to assess long-term fairness effects of labor market interventions. Specifically, we develop and use a model to investigate the impact of decisions made by a public employment authority that selectively supports job-seekers through targeted help. The selection of who receives what help is based on a data-driven intervention model that estimates an individual’s chances of finding a job in a timely manner and rests upon data that describes a population in which skills relevant to the labor market are unevenly distributed between two groups (e.g., males and females). The intervention model has incomplete access to the individual’s actual skills and can augment this with knowledge of the individual’s group affiliation, thus using a protected attribute to increase predictive accuracy. We assess this intervention model’s dynamics over time, especially fairness-related issues and trade-offs between different fairness goals, and compare it to an intervention model that does not use group affiliation as a predictive feature. We conclude that in order to quantify the trade-off correctly and to assess the long-term fairness effects of such a system in the real world, careful modeling of the surrounding labor market is indispensable.
Edtmayer Hermann, Brandl Daniel, Mach Thomas, Schlager Elke, Gursch Heimo, Lugmair Maximilian, Hochenauer Christoph
2023
Increasing demands on indoor comfort in buildings and urgently needed energy efficiency measures require optimised HVAC systems in buildings. To achieve this, more extensive and accurate input data are required. This is difficult or impossible to accomplish with physical sensors. Virtual sensors, in turn, can provide these data; however, current virtual sensors are either too slow or too inaccurate to do so. The aim of our research was to develop a novel digital-twin workflow providing fast and accurate virtual sensors to solve this problem. To achieve a short calculation time and accurate virtual measurement results, we coupled a fast building energy simulation with an accurate computational fluid dynamics (CFD) simulation. We used measurement data from a test facility as boundary conditions for the models and managed the coupling workflow with a customised simulation and data management interface. The corresponding simulation results were extracted for the defined virtual sensors and validated with measurement data from the test facility. In summary, the results showed that the total computation time of the coupled simulation was less than 10 min, compared to 20 h for the corresponding CFD models. At the same time, the accuracy of the simulation over five consecutive days showed a mean absolute error of 0.35 K for the indoor air temperature and 1.2 % for the relative humidity. This shows that the novel coupled digital-twin workflow for virtual sensors is fast and accurate enough to optimise HVAC control systems in buildings.
Müllner Peter
2023
Recommender systems process an abundance of user data to generate recommendations that fit each individual user well. This utilization of user data can pose severe threats to user privacy, e.g., the inadvertent leakage of user data to untrusted parties or other users. Moreover, this data can be used to reveal a user’s identity, or to infer very private information such as gender. Instead of the plain application of privacy-enhancing techniques, which could lead to decreased accuracy, we tackle the problem itself, i.e., the utilization of user data. With this, we aim to equip recommender systems with means to provide high-quality recommendations that respect users’ privacy.
Lacic Emanuel, Duricic Tomislav, Fadljevic Leon, Theiler Dieter, Kowald Dominik
2023
Uptrendz: API-Centric Real-Time Recommendations in Multi-Domain Settings
Hoffer Johannes Georg, Geiger Bernhard, Kern Roman
2023
This research presents an approach that combines stacked Gaussian processes (stacked GP) with target vector Bayesian optimization (BO) to solve multi-objective inverse problems of chained manufacturing processes. In this context, GP surrogate models represent individual manufacturing processes and are stacked to build a unified surrogate model that represents the entire manufacturing process chain. Using stacked GPs, epistemic uncertainty can be propagated through all chained manufacturing processes. To perform target vector BO, acquisition functions make use of a noncentral χ-squared distribution of the squared Euclidean distance between a given target vector and the surrogate model output. In BO of chained processes, one can either use a single unified surrogate model that represents the entire joint chain, or use a surrogate model for each individual process and cascade the optimization from the last process to the first. Literature suggests that a joint optimization approach using stacked GPs overestimates uncertainty, whereas a cascaded approach underestimates it. For improved target vector BO results for chained processes, we present an approach that combines methods which under- or overestimate uncertainties in an ensemble for rank aggregation. We present a thorough analysis of the proposed methods and evaluate them on two artificial use cases and on a typical manufacturing process chain: preforming and final pressing of an Inconel 625 superalloy billet.
Mara Martina, Ratz Linda, Krieg Klara, Schedl Markus, Rekabsaz Navid
2023
Biases in algorithmic systems have led to discrimination against historically disadvantaged groups, including the reinforcement of outdated gender stereotypes. While a substantial body of research addresses biases in algorithms and underlying data, little is known about whether and how users themselves bring in bias. We contribute to the latter strand of research by investigating users’ replication of stereotypical gender representations in online search queries. Following Prototype Theory, we define the disproportionate mention of a gender that does not conform to the prototypical representative of a searched domain (e.g., “male nurse”) as an indication of bias. In a pilot study with 224 US participants and an online experiment with 400 UK participants, we find clear evidence of gender biases in formulating search queries. We also report the effects of an educative text on user behaviour and highlight the wish of users to learn about bias-mitigating strategies in their interactions with search engines.
Žlabravec Veronika, Strbad Dejan, Dogan Anita, Lovric Mario, Janči Tibor, Vidaček Filipec Sanja
2022
Evangelidis Thomas, Giassa Ilektra-Chara , Lovric Mario
2022
Identifying hit compounds is a principal step in early-stage drug discovery. While many machine learning (ML) approaches have been proposed, in the absence of binding data, molecular docking is the most widely used option to predict binding modes and score hundreds of thousands of compounds for binding affinity to the target protein. Docking's effectiveness is critically dependent on the protein-ligand (P-L) scoring function (SF), thus re-scoring with more rigorous SFs is a common practice. In this pilot study, we scrutinize the PM6-D3H4X/COSMO semi-empirical quantum mechanical (SQM) method as a docking pose re-scoring tool on 17 diverse receptors and ligand decoy sets, totaling 1.5 million P-L complexes. We investigate the effect of explicitly computed ligand conformational entropy and ligand deformation energy on SQM P-L scoring in a virtual screening (VS) setting, as well as molecular mechanics (MM) versus hybrid SQM/MM structure optimization prior to re-scoring. Our results indicate that there is no obvious benefit from computing ligand conformational entropies or deformation energies and that optimizing only the ligand's geometry on the SQM level is sufficient to achieve the best possible scores. Instead, we leverage machine learning (ML) to implicitly include the missing entropy terms in the SQM score using ligand topology, physicochemical, and P-L interaction descriptors. Our new hybrid scoring function, named SQM-ML, is transparent and explainable, and achieves on average a 9% higher AUC-ROC than PM6-D3H4X/COSMO and a 3% higher AUC-ROC than Glide SP, but with consistent and predictable performance across all test sets, unlike the former two SFs, whose performance is considerably target-dependent and sometimes resembles that of a random classifier. The code to prepare and train SQM-ML models is available at https://github.com/tevang/sqm-ml.git and we believe it will pave the way for a new generation of hybrid SQM/ML protein-ligand scoring functions.
Steger Sophie, Rohrhofer Franz Martin, Geiger Bernhard
2022
Despite extensive research, physics-informed neural networks (PINNs) are still difficult to train, especially when the optimization relies heavily on the physics loss term. Convergence problems frequently occur when simulating dynamical systems with high-frequency components, chaotic or turbulent behavior. In this work, we discuss whether the traditional PINN framework is able to predict chaotic motion by conducting experiments on the undamped double pendulum. Our results demonstrate that PINNs do not exhibit any sensitivity to perturbations in the initial condition. Instead, the PINN optimization consistently converges to physically correct solutions that violate the initial condition only marginally, but diverge significantly from the desired solution due to the chaotic nature of the system. In fact, the PINN predictions primarily exhibit low-frequency components with a smaller magnitude of higher-order derivatives, which favors lower physics loss values compared to the desired solution. We thus hypothesize that the PINNs "cheat" by shifting the initial conditions to values that correspond to physically correct solutions that are easier to learn. Initial experiments suggest that domain decomposition combined with an appropriate loss weighting scheme mitigates this effect and allows convergence to the desired solution.
Gursch Heimo, Körner Stefan, Thaler Franz, Waltner Georg, Ganster Harald, Rinnhofer Alfred, Oberwinkler Christian, Meisenbichler Reinhard, Bischof Horst, Kern Roman
2022
Refuse separation and sorting is currently done by recycling plants that are manually optimised for a fixed refuse composition. Since the refuse compositions constantly change, these plants deliver either suboptimal sorting performances or require constant monitoring and adjustments by the plant operators. Image recognition offers the possibility to continuously monitor the refuse composition on the conveyor belts in a sorting facility. When information about the refuse composition is combined with parameters and measurements of the sorting machinery, the sorting performance of a plant can be continuously monitored, problems detected, optimisations suggested and trends predicted. This article describes solutions for multispectral and 3D image capturing of refuse streams and evaluates the performance of image segmentation models. The image segmentation models are trained with synthetic training data to reduce the manual labelling effort thus reducing the costs of the image recognition introduction. Furthermore, an outlook on the combination of image recognition data with parameters and measurements of the sorting machinery in a combined time series analysis is provided.
Xue Yani, Li Miqing, Arabnejad Hamid, Suleimenova, Geiger Bernhard, Jahani Alireza, Groen Derek
2022
In the context of humanitarian support for forcibly displaced persons, camps play an important role in protecting people and ensuring their survival and health. A challenge in this regard is to find optimal locations for establishing a new asylum-seeker/unrecognized refugee or IDPs (internally displaced persons) camp. In this paper we formulate this problem as an instantiation of the well-known facility location problem (FLP) with three objectives to be optimized. In particular, we show that AI techniques and migration simulations can be used to provide decision support on camp placement.
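A minimal sketch of the facility location subproblem mentioned above, using a greedy k-median heuristic over hypothetical settlement data. The paper's actual approach couples AI techniques with migration simulations and optimizes three objectives, which this toy single-objective example does not capture; all names and data here are illustrative.

```python
import math

def total_cost(camps, settlements):
    """Sum over settlements of population * distance to the nearest camp."""
    return sum(pop * min(math.dist(s, c) for c in camps)
               for s, pop in settlements)

def greedy_camps(candidates, settlements, k):
    """Greedily add, one at a time, the candidate site that most
    reduces the total population-weighted travel distance."""
    chosen, remaining = [], list(candidates)
    for _ in range(k):
        best = min(remaining,
                   key=lambda c: total_cost(chosen + [c], settlements))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

For example, with settlements of 100 and 50 people at opposite ends of a corridor, the heuristic places the first camp next to the larger settlement, the second next to the smaller one.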
Pammer-Schindler Viktoria, Lindstaedt Stefanie
2022
Digital competences are taken for granted in the field of strategic management; AI literacy, however, is not. In this article, we discuss what basic understanding of artificial intelligence (AI) is important for decision-makers in strategic management, and what context-specific and strategic knowledge is needed beyond it. Digital competences for a large share of occupational groups are on everyone's lips, and rightly so. At the level of decision-makers in strategic management, however, they fall short; to the necessary extent they are largely a given: digital information management, the ability to communicate and collaborate digitally, and the ability to use digital technologies for knowledge acquisition, learning, and the support of creative processes (list of these typical digital competences taken from [1]). The situation is different when it comes to specialised knowledge about modern computing technologies, such as methods of automated data analytics and artificial intelligence, the Internet of Things, blockchain methods, etc. (list based on Fig. 3 in [2]). The literature does treat this knowledge as necessary within organisations [2], but usually with the focus that it should be covered by specialists. In addition, and this is the first main thesis of this commentary, we argue that decision-makers in strategic management need foundational knowledge in these technical areas in order to be able to assess these technologies with regard to their impact on their own company and its business environment. This article takes a closer look at the necessary foundational knowledge regarding artificial intelligence (AI), which we refer to here as "AI literacy".
Rüdisser Hannah, Windisch Andreas, Amerstorfer U. V., Möstl C., Amerstorfer T., Bailey R. L., Reiss M. A.
2022
Interplanetary coronal mass ejections (ICMEs) are one of the main drivers for space weather disturbances. In the past, different approaches have been used to automatically detect events in existing time series resulting from solar wind in situ observations. However, accurate and fast detection still remains a challenge when facing the large amount of data from different instruments. For the automatic detection of ICMEs we propose a pipeline using a method that has recently proven successful in medical image segmentation. Comparing it to an existing method, we find that while achieving similar results, our model outperforms the baseline regarding training time by a factor of approximately 20, thus making it more applicable for other datasets. The method has been tested on in situ data from the Wind spacecraft between 1997 and 2015 with a True Skill Statistic of 0.64. Out of the 640 ICMEs, 466 were detected correctly by our algorithm, producing a total of 254 false positives. Additionally, it produced reasonable results on datasets with fewer features and smaller training sets from Wind, STEREO-A, and STEREO-B with TSSs of 0.56, 0.57, and 0.53, respectively. Our pipeline manages to find the start of an ICME with a mean absolute error (MAE) of around 2 hr and 56 min, and the end time with a MAE of 3 hr and 20 min. The relatively fast training allows straightforward tuning of hyperparameters and could therefore easily be used to detect other structures and phenomena in solar wind data, such as corotating interaction regions.
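For reference, the True Skill Statistic quoted above can be computed directly from a binary confusion matrix; a one-function sketch (the function name is ours):

```python
def true_skill_statistic(tp, fn, fp, tn):
    """TSS = hit rate minus false-alarm rate; 1 is perfect detection,
    0 is no skill, negative values are worse than chance."""
    return tp / (tp + fn) - fp / (fp + tn)
```

Note that reproducing the paper's value of 0.64 would require the true-negative count, which depends on how non-ICME intervals are segmented.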
Stipanicev Drazenka, Repec Sinisa, Vucic Matej, Lovric Mario, Klobucar Goran
2022
In order to prevent the spread of COVID-19, contingency measures in the form of lockdowns were implemented all over the world, including in Croatia. The aim of this study was to detect whether those severe, imposed restrictions of social interactions were reflected in the water quality of rivers receiving wastewaters from urban areas. A total of 18 different pharmaceuticals (PhACs) and illicit drugs (IDrgs), as well as their metabolites, were measured for 16 months (January 2020–April 2021) at 12 different locations in the Sava and Drava Rivers, Croatia, using UHPLC coupled to mass spectrometry. This period encompassed two major COVID-19 lockdowns (March–May 2020 and October 2020–March 2021). Several PhACs more than halved in river water mass flow during the lockdowns. The results of this study confirm that the COVID-19 lockdowns caused lower cumulative concentrations and mass flows of measured PhACs/IDrgs in the Sava and Drava Rivers. This was not influenced by the increased use of drugs for the treatment of COVID-19, such as antibiotics and steroidal anti-inflammatory drugs. The decreases in measured PhACs/IDrgs concentrations and mass flows were more pronounced during the first lockdown, which was stricter than the second.
Maitz Katharina, Fessl Angela, Pammer-Schindler Viktoria, Kaiser Rene_DB, Lindstaedt Stefanie
2022
Artificial intelligence (AI) is by now used in many different work settings, including the construction industry. As new technologies change business and work processes, one important aspect is to understand how potentially affected workers perceive and understand the existing and upcoming AI in their work environment. In this work, we present the results of an exploratory case study with 20 construction workers in a small Austrian company about their knowledge of and attitudes toward AI. Our results show that construction workers’ understanding of AI as a concept is rather superficial, diffuse, and vague, often linked to physical and tangible entities such as robots, and often based on inappropriate sources of information, which can lead to misconceptions about AI and AI anxiety. Learning opportunities for promoting (future) construction workers’ AI literacy should be accessible and understandable for learners at various educational levels and encompass aspects such as i) conveying the basics of digitalization, automation, and AI to enable a clear distinction between these concepts, ii) building on the learners’ actual realm of experience, i.e., taking into account their focus on physical, tangible, and visible entities, and iii) reducing AI anxiety by elaborating on the limits of AI.
Fessl Angela, Maitz Katharina, Paleczek Lisa, Divitini Monica, Rouhani Majid, Köhler Thomas
2022
At the beginning of the COVID-19 pandemic, a sudden shift from mainly face-to-face teaching and learning to exclusively online teaching and learning took place and posed challenges especially for in-service teachers at all types of schools. But pre-service teachers, i.e. students who are preparing themselves to become future teachers, are also challenged by the new profile of competencies demanded. Suddenly, all teachers had to orient themselves in a completely digital world of teaching in which acquiring digital competences was no longer an option but a real necessity. We are investigating which digital competences are necessary as a prerequisite for pre- and in-service teachers in the current COVID-19 pandemic to ensure high-quality teaching and learning (Schaarschmidt et al., 2021). Based upon the European DigComp 2.1 (Carretero et al., 2017) and DigCompEdu (Redecker, 2017) frameworks and the Austrian Digi.kompP (Virtuelle PH, 2021) framework, we adapted a curriculum from most recent research, tailored to the specific needs of our European-level target group. That curriculum addresses individual digital media competence (two modules) and media didactic competence (three modules). For each of these modules, we developed competence-based learning goals (Bloom et al., 1956; Krathwohl & Anderson, 2010; Fessl et al., 2021) that serve as a focal point of what the learner should be able to do after his/her specific learning experience. The learning content will be prepared as micro-learning units to be lightweight and flexible, as time constraints are known to be challenging for any professional development. In three sequentially conducted workshops (Sept. 2021, Nov. 2021, Feb. 2022), we discuss the curriculum and the learning goals with different stakeholders (researchers, teachers, teacher-students, education administrators).
Preliminary results of the first two workshops show that our developed curriculum and the digital competences specified are crucial for successful online teaching. In our presentation, we will summarize the results of all three workshops, discuss the theoretical underpinnings of our overall approach, and provide insights on how we plan to convey the digital competences developed to educators using learning strategies such as micro-learning and reflective learning.
Fessl Angela, Maitz Katharina, Paleczek Lisa, Köhler Thomas, Irnleitner Selina, Divitini Monica
2022
The COVID-19 pandemic initiated a fundamental change in learning and teaching in (higher) education (HE). On short notice, traditional teaching in HE suddenly had to be transformed into online teaching. This shift into the digital world posed a great challenge to in-service teachers at schools and universities, as well as to pre-service teachers, as the acquisition of digital competences was no longer an option but a real necessity. The previously rather hidden or even neglected importance of teachers’ digital competences for successful teaching and learning became manifest and clearly visible. In this work, we investigate the digital competences necessary to ensure high-quality teaching and learning in and beyond the current COVID-19 pandemic. Based upon the European DigComp 2.1 (Carretero et al., 2017) and DigCompEdu (Redecker, 2017) frameworks, the Austrian Digi.kompP framework (Virtuelle PH, 2021), and the recommendations given by German education authorities (KMK 2017; KMK 2021; HRK 2022), we developed a curriculum consisting of 5 modules: 2 for individual digital media competence and 3 for media didactic competence. For each module, competence-oriented learning goals and corresponding micro-learning contents were defined to meet the needs of teachers while considering their time constraints. Based on three online workshops, the curriculum and the corresponding learning goals were discussed with university teachers, pre-service teachers, and policymakers. The content of the curriculum was perceived as highly relevant for these target groups; however, some adaptations were required. From the university teachers’ perspective, we got feedback that they were overwhelmed with the situation and urgently needed digital competences. Policymakers suggested that further education regarding digital competences needs to offer a systematic exchange of experiences with peers.
From the perspective of in-service teachers, it was stated that teacher education should focus more on digital competences and tools. In this paper, we will present the results of the workshop series that informed the design process of the DIGIVID curriculum for teaching professionals.
Disch Leonie, Fessl Angela, Pammer-Schindler Viktoria
2022
The uptake of open science resources needs knowledge construction on the side of the readers/receivers of scientific content. The design of technologies surrounding open science resources can facilitate such knowledge construction, but this has not been investigated yet. To do so, we first conducted a scoping review of literature, from which we draw design heuristics for knowledge construction in digital environments. Subsequently, we grouped the underlying technological functionalities into three design categories: i) structuring and supporting collaboration, ii) supporting the learning process, and iii) structuring, visualising and navigating (learning) content. Finally, we mapped the design categories and associated design heuristics to core components of popular open science platforms. This mapping constitutes a design space (design implications), which informs researchers and designers in the HCI community about suitable functionalities for supporting knowledge construction in existing or new digital open science platforms.
Santa Maria Gonzalez Tomas, Vermeulen Walter J.V., Baumgartner Rupert J.
2022
The process of developing sustainable and circular business models is quite complex and thus hinders their wider implementation in the market. Further understanding and guidelines for firms are needed. Design thinking is a promising problem solving approach capable of facilitating the innovation process. However, design thinking does not necessarily include sustainability considerations, and it has not been sufficiently explored for application in business model innovation. Given the additional challenges posed by the need for time-efficiency and a digital environment, we have therefore developed a design thinking-based framework to guide the early development of circular business models in an online and efficient manner. We propose a new process framework called the Circular Sprint. This encompasses seven phases and contains twelve purposefully adapted activities. The framework development follows an Action Design Research approach, iteratively combining four streams of literature, feedback from sixteen experts and six workshops, and involved a total of 107 participants working in fourteen teams. The present paper describes the framework and its activities, together with evaluations of its usefulness and ease-of-use. The research shows that, while challenging, embedding sustainability, circularity and business model innovation within a design thinking process is indeed possible. We offer a flexible framework and a set of context-adaptable activities that can support innovators and practitioners in the complex process of circular business model innovation. These tools can also be used for training and educational purposes. We invite future researchers to build upon and modify our framework and its activities by adapting it to their required scenarios and purposes. A detailed step-by-step user guide is provided in the supplementary material.
Hochstrasser Carina, Herburger Michael, Plasch Michael, Lackner Ulrike, Breitfuß Gert
2022
Short-term disruptions and long-term changes increasingly cause disturbances in inter-organisational logistics. Resilient structures must therefore be built up and monitored through data-driven decisions. However, since it is not sufficient in the current business environment to generate and process only one's own information and data sets, data-exchange concepts such as data circles must be developed. The aim of this paper is to examine stakeholders' needs and requirements for a data circle in the application areas of logistics and resilience. To this end, a mixed-methods approach was carried out, comprising a stakeholder analysis and the development of use cases by means of qualitative (workshops and expert interviews) and quantitative (online survey) methods.
Mirzababaei Behzad, Pammer-Schindler Viktoria
2022
Large-scale learning scenarios as well as the ongoing pandemic situation underline the importance of educational technology in order to support scalability and spatial as well as temporal flexibility in all kinds of learning and teaching settings. Educational conversational agents build on a long research tradition in intelligent tutoring systems and other adaptive learning technologies, but rely on the more recent interaction paradigm of conversational interaction. In this paper, we describe a tutorial conversational agent, called GDPRAgent, which teaches a lesson on the European General Data Protection Regulation (GDPR). This regulation governs how personal data must be treated in Europe. Instructionally, the agent’s dialogue structure follows a basic GDPR curriculum and uses Bloom’s revised taxonomy of learning objectives in order to teach GDPR topics. This overall design of the dialogue structure allows inserting more specific adaptive tutorial strategies. From a learner perspective, learners experience a complete one-on-one tutorial session in which they receive relevant content (are “being taught”) as well as experience active learning elements such as doing quizzes or summarising content. Our prototype, therefore, illustrates a move away from the dichotomy between content and the activity of teaching/learning in educational technology.
Mirzababaei Behzad, Pammer-Schindler Viktoria
2022
This paper reports a between-subjects experiment (treatment group N = 42, control group N = 53) evaluating the effect of a conversational agent that teaches users to give a complete argument. The agent analyses a given argument for whether it contains a claim, a warrant and evidence, which are understood to be essential elements in a good argument. The agent detects which of these elements is missing, and accordingly scaffolds the argument completion. The experiment includes a treatment task (Task 1) in which participants of the treatment group converse with the agent, and two assessment tasks (Tasks 2 and 3) in which both the treatment and the control group answer an argumentative question. We find that in Task 1, 36 out of 42 conversations with the agent are coherent. This indicates good interaction quality. We further find that in Tasks 2 and 3, the treatment group writes a significantly higher percentage of argumentative sentences (task 2: t(94) = 1.73, p = 0.042, task 3: t(94) = 1.7, p = 0.045). This shows that participants of the treatment group used the scaffold, taught by the agent in Task 1, outside the tutoring conversation (namely in the assessment Tasks 2 and 3) and across argumentation domains (Task 3 is in a different domain of argumentation than Tasks 1 and 2). The work complements existing research on adaptive and conversational support for teaching argumentation in essays.
Breitfuß Gert, Disch Leonie, Santa Maria Gonzalez Tomas
2022
The present paper aims to validate commonly used business analysis methods to obtain input for an early phase business model regarding feasibility, desirability, and viability. The research applies a case study approach, exploring the early-phase development of an economically sustainable business model for an open science discovery platform.
Ebel Martin, Santa Maria Gonzalez Tomas, Breitfuß Gert
2022
Business model patterns are a common tool in business model design. We provide a theoretical foundation for their use within the framework of analogical reasoning as an important cognitive skill for business model innovation. Based on 12 innovation workshops with students and practitioners, we discuss scenarios of pattern card utilization and provide insights on its evaluation.
Wolfbauer Irmtraud, Pammer-Schindler Viktoria, Maitz Katharina, Rosé Carolyn P.
2022
We present a script for conversational reflection guidance embedded in reflective practice. Rebo Junior, a non-adaptive conversational agent, was evaluated in a 12-week field study with apprentices. We analysed apprentices’ interactions with Rebo Junior in terms of reflectivity, and measured the development of their reflection competence via reflective essays at three points in time during the field study. Reflection competence, a key competency for lifelong professional learning, becomes significantly higher by the third essay, after repeated interactions with Rebo Junior (paired-samples t-test t13=3.00, p=.010 from Essay 1 to Essay 3). However, we also observed a significant decrease in reflectivity in the Rebo Junior interactions over time (paired-samples t-test between the first and eighth interaction: t7=2.50, p=.041). We attribute this decline to i) the novelty of Rebo Junior wearing off (novelty effect) and ii) the apprentices learning the script and experiencing subsequent frustration due to the script not fading over time. Overall, this work i) informs future design through the observation of consistent decreases in engagement over 8 interactions with static scaffolding, and ii) contributes a reflection script applicable for reflection on tasks that resemble future expected work tasks, a typical setting in lifelong professional learning, and iii) indicates increased reflection competence after repeated reflection guided by a conversational agent.
Liu Xinglan, Hussain Hussain, Razouk Houssam, Kern Roman
2022
Graph embedding methods have emerged as effective solutions for knowledge graph completion. However, such methods are typically tested on benchmark datasets such as Freebase, but show limited performance when applied to sparse knowledge graphs with orders of magnitude lower density. To compensate for the lack of structure in a sparse graph, low-dimensional representations of textual information such as word2vec or BERT embeddings have been used. This paper proposes a BERT-based method (BERT-ConvE) to exploit transfer learning of BERT in combination with a convolutional network model, ConvE. Compared to existing text-aware approaches, we effectively make use of the context dependency of BERT embeddings through optimizing the feature extraction strategies. Experiments on ConceptNet show that the proposed method outperforms strong baselines by 50% on knowledge graph completion tasks. The proposed method is suitable for sparse graphs, as also demonstrated by empirical studies on the ATOMIC and sparsified-FB15k-237 datasets. Its effectiveness and simplicity make it appealing for industrial applications.
De Freitas Joao Pedro, Berg Sebastian, Geiger Bernhard, Mücke Manfred
2022
In this paper, we frame homogeneous-feature multi-task learning (MTL) as a hierarchical representation learning problem, with one task-agnostic and multiple task-specific latent representations. Drawing inspiration from the information bottleneck principle and assuming an additive independent noise model between the task-agnostic and task-specific latent representations, we limit the information contained in each task-specific representation. It is shown that our resulting representations yield competitive performance for several MTL benchmarks. Furthermore, for certain setups, we show that the trained parameters of the additive noise model are closely related to the similarity of different tasks. This indicates that our approach yields a task-agnostic representation that is disentangled in the sense that its individual dimensions may be interpretable from a task-specific perspective.
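The additive independent noise model described above can be illustrated with the classical Gaussian-channel capacity formula: the larger the learned noise on a task-specific representation, the less information it can carry beyond the task-agnostic one. This is a didactic sketch under strong assumptions (per-dimension treatment, isotropic Gaussian noise), not the paper's training code, and the function names are ours.

```python
import numpy as np

def gaussian_channel_capacity(signal_var, noise_var):
    """Per-dimension capacity (in nats) of the additive Gaussian channel
    z_task = z_shared + noise: shrinking the noise variance raises the
    amount of task-specific information that can pass through."""
    return 0.5 * np.log(1.0 + signal_var / noise_var)

def task_specific(z_shared, noise_std, rng):
    """Task-specific representation as the shared one plus i.i.d. noise."""
    return z_shared + rng.normal(scale=noise_std, size=z_shared.shape)
```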
Müllner Peter , Schmerda Stefan, Theiler Dieter, Lindstaedt Stefanie , Kowald Dominik
2022
Data and algorithm sharing is an imperative part of data- and AI-driven economies. The efficient sharing of data and algorithms relies on the active interplay between users, data providers, and algorithm providers. Although recommender systems are known to effectively interconnect users and items in e-commerce settings, there is a lack of research on the applicability of recommender systems for data and algorithm sharing. To fill this gap, we identify six recommendation scenarios for supporting data and algorithm sharing, where four of these scenarios substantially differ from the traditional recommendation scenarios in e-commerce applications. We evaluate these recommendation scenarios using a novel dataset based on interaction data of the OpenML data and algorithm sharing platform, which we also provide for the scientific community. Specifically, we investigate three types of recommendation approaches, namely popularity-, collaboration-, and content-based recommendations. We find that collaboration-based recommendations provide the most accurate recommendations in all scenarios. Moreover, the recommendation accuracy strongly depends on the specific scenario, e.g., algorithm recommendations for users are a more difficult problem than algorithm recommendations for datasets. Finally, the content-based approach generates the least popularity-biased recommendations that cover the most datasets and algorithms.
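Of the three approaches compared above, the popularity-based baseline is the simplest to sketch. The following toy example (hypothetical function name, toy interaction pairs rather than the OpenML dataset) ranks items by global interaction count and filters out items the user has already seen:

```python
from collections import Counter

def popularity_recommend(interactions, user, n=3):
    """Given (user, item) interaction pairs, rank items by global
    interaction count, excluding items the user already interacted with."""
    counts = Counter(item for _, item in interactions)
    seen = {item for u, item in interactions if u == user}
    ranked = [item for item, _ in counts.most_common() if item not in seen]
    return ranked[:n]
```

Because such a recommender ignores user context entirely, it tends to concentrate recommendations on already-popular items, which is the popularity bias the abstract contrasts with the content-based approach.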
Salhofer Eileen, Liu Xinglan, Kern Roman
2022
State of the art performances for entity extraction tasks are achieved by supervised learning, specifically, by fine-tuning pretrained language models such as BERT. As a result, annotating application-specific data is the first step in many use cases. However, no practical guidelines are available for annotation requirements. This work supports practitioners by empirically answering the frequently asked questions (1) how many training samples to annotate? (2) which examples to annotate? We found that BERT achieves up to 80% F1 when fine-tuned on only 70 training examples, especially on the biomedical domain. The key features for guiding the selection of high-performing training instances are identified to be pseudo-perplexity and sentence length. The best training dataset constructed using our proposed selection strategy shows an F1 score that is equivalent to a random selection with twice the sample size. The requirement of only a small number of training data implies cheaper implementations and opens the door to a wider range of applications.
Jean-Quartier Claire, Mazón Miguel Rey, Lovric Mario, Stryeck Sarah
2022
Research and development are facilitated by sharing knowledge bases, and the innovation process benefits from collaborative efforts that involve the collective utilization of data. Until now, most companies and organizations have produced and collected various types of data, and stored them in data silos that still have to be integrated with one another in order to enable knowledge creation. For this to happen, both public and private actors must adopt a flexible approach to achieve the necessary transition to break data silos and create collaborative data sharing between data producers and users. In this paper, we investigate several factors influencing cooperative data usage and explore the challenges posed by participation in cross-organizational data ecosystems by performing an interview study among stakeholders from private and public organizations in the context of the project IDE@S, which aims at fostering cooperation in data science in the Austrian federal state of Styria. We highlight technological and organizational requirements of data infrastructure, expertise, and practices towards collaborative data usage.
Malev Olga, Babic Sanja, Cota Anja Sima, Stipaničev Draženka, Repec Siniša, Drnić Martina, Lovric Mario, Bojanić Krunoslav, Radić Brkanac Sandra, Čož-Rakovac Rozelindra, Klobučar Göran
2022
This study focused on short-term whole organism bioassays (WOBs) on fish (Danio rerio) and crustaceans (Gammarus fossarum and Daphnia magna) to assess the negative biological effects of water from the major European River Sava, and on the comparison of the obtained results with in vitro toxicity data (ToxCast database) and the Risk Quotient (RQ) methodology. Pollution profiles of five sampling sites along the River Sava were assessed by simultaneous chemical analysis of 562 organic contaminants (OCs), of which 476 were detected. At each sampling site, the pharmaceuticals/illicit drugs category showed the highest cumulative concentration, followed by the categories industrial chemicals, pesticides and hormones. An exposure-activity ratio (EAR) approach based on ToxCast data highlighted steroidal anti-inflammatory drugs, antibiotics, antiepileptics/neuroleptics, industrial chemicals and hormones as the compounds with the highest biological potential. Summed EAR-based predictions of toxicity showed a good correlation with the toxicity of the sampling sites estimated using WOBs. WOBs did not exhibit increased mortality but showed various sub-lethal biological responses that depended on the pollution intensity of the sampling site as well as on species sensitivity. Exposure of G. fossarum and D. magna to river water induced lower feeding rates and increased GST activity and TBARS levels. Zebrafish (D. rerio) embryos exhibited a significant decrease in heartbeat rate, failures in pigmentation formation, as well as inhibition of ABC transporters. Nuclear receptor activation was indicated as the biological target of greatest concern based on the EAR approach. A combined approach of short-term WOBs, with a special emphasis on sub-lethal endpoints, and chemical characterization of water samples compared against in vitro toxicity data from the ToxCast database and RQs can provide comprehensive insight into the negative effects of pollutants on aquatic organisms.
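The exposure-activity ratio (EAR) divides a measured environmental concentration by the concentration at which in-vitro activity occurs (e.g., an AC50 from ToxCast); summing EARs per site yields the toxicity estimate that is compared against the bioassays. A sketch with made-up numbers:

```python
def exposure_activity_ratios(measured, ac50):
    """Per-compound EAR = measured concentration / in-vitro active
    concentration, plus the site-level sum. Units must match; the
    example values below are illustrative, not from the study."""
    ears = {c: measured[c] / ac50[c] for c in measured if c in ac50}
    return ears, sum(ears.values())

site = {"carbamazepine": 2.0, "diclofenac": 1.0}      # hypothetical ug/L
activity = {"carbamazepine": 4.0, "diclofenac": 0.5}  # hypothetical AC50s
per_compound, site_total = exposure_activity_ratios(site, activity)
```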
Razouk Houssam, Kern Roman
2022
Digitalization of causal domain knowledge is crucial, especially since the inclusion of causal domain knowledge in data analysis processes helps to avoid biased results. To extract such knowledge, Failure Mode Effect Analysis (FMEA) documents represent a valuable data source. Originally, FMEA documents were designed to be exclusively produced and interpreted by human domain experts. As a consequence, these documents often suffer from data consistency issues. This paper argues that due to the transitive perception of the causal relations, discordant and merged information cases are likely to occur. Thus, we propose to improve the consistency of FMEA documents as a step towards more efficient use of causal domain knowledge. In contrast to other work, this paper focuses on the consistency of causal relations expressed in the FMEA documents. To this end, based on an explicit scheme of types of inconsistencies derived from the causal perspective, novel methods to enhance the data quality in FMEA documents are presented. Data quality improvement will significantly improve downstream tasks, such as root cause analysis and automatic process control.
Lovric Mario, Antunović Mario, Šunić Iva, Vuković Matej, Kecorius Simon, Kröll Mark, Bešlić Ivan, Godec Ranka, Pehnec Gordana, Geiger Bernhard, Grange Stuart K, Šimić Iva
2022
In this paper, the authors investigated changes in mass concentrations of particulate matter (PM) during the Coronavirus Disease 2019 (COVID-19) lockdown. Daily samples of the PM1, PM2.5 and PM10 fractions were measured at an urban background sampling site in Zagreb, Croatia, from 2009 to late 2020. For the purpose of meteorological normalization, the mass concentrations were fed alongside meteorological and temporal data to Random Forest (RF) and LightGBM (LGB) models tuned by Bayesian optimization. The models' predictions were subsequently de-weathered by meteorological normalization using repeated random resampling of all predictive variables except the trend variable. Three pollution periods in 2020 were examined in detail: January and February as the pre-lockdown period, the month of April as the lockdown period, and June and July as the "new normal". An evaluation using normalized mass concentrations of particulate matter and analysis of variance (ANOVA) was conducted. The results showed that no significant differences were observed for PM1, PM2.5 and PM10 in April 2020 compared to the same period in 2018 and 2019. No significant changes were observed for the "new normal" either. The results thus indicate that the reduction in mobility during the COVID-19 lockdown in Zagreb, Croatia, did not significantly affect particulate matter concentrations in the long term.
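The meteorological normalization step can be sketched as repeated resampling of the weather predictors (keeping the trend/time variable fixed) and averaging the model's predictions. The toy linear model below is a stand-in for the paper's tuned RF/LGB models:

```python
import random

def deweather(model, rows, met_keys, n_resamples=200, seed=0):
    """For each observation, repeatedly substitute meteorological values
    drawn from the whole record while keeping non-met variables (e.g. the
    trend) fixed, then average the predictions. Sketch of the resampling
    idea; the paper applies it to tuned Random Forest / LightGBM models."""
    rng = random.Random(seed)
    out = []
    for row in rows:
        preds = []
        for _ in range(n_resamples):
            sample = dict(row)
            donor = rng.choice(rows)  # met conditions from a random date
            for key in met_keys:
                sample[key] = donor[key]
            preds.append(model(sample))
        out.append(sum(preds) / len(preds))
    return out
```

After normalization, concentration differences driven purely by weather largely cancel out, leaving the trend-related signal.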
Sousa Samuel, Kern Roman
2022
Deep learning (DL) models for natural language processing (NLP) tasks often handle private data, demanding protection against breaches and disclosures. Data protection laws, such as the European Union’s General Data Protection Regulation (GDPR), thereby enforce the need for privacy. Although many privacy-preserving NLP methods have been proposed in recent years, no categories to organize them have been introduced yet, making it hard to follow the progress of the literature. To close this gap, this article systematically reviews over sixty DL methods for privacy-preserving NLP published between 2016 and 2020, covering theoretical foundations, privacy-enhancing technologies, and analysis of their suitability for real-world scenarios. First, we introduce a novel taxonomy for classifying the existing methods into three categories: data safeguarding methods, trusted methods, and verification methods. Second, we present an extensive summary of privacy threats, datasets for applications, and metrics for privacy evaluation. Third, throughout the review, we describe privacy issues in the NLP pipeline in a holistic view. Further, we discuss open challenges in privacy-preserving NLP regarding data traceability, computation overhead, dataset size, the prevalence of human biases in embeddings, and the privacy-utility tradeoff. Finally, this review presents future research directions to guide successive research and development of privacy-preserving NLP models.
Koutroulis Georgios, Mutlu Belgin, Kern Roman
2022
In prognostics and health management (PHM), the task of constructing comprehensive health indicators (HIs) from huge amounts of condition monitoring data plays a crucial role. HIs may influence both the accuracy and reliability of remaining useful life (RUL) prediction, and ultimately the assessment of the system's degradation status. Most existing methods assume a priori an oversimplified degradation law of the investigated machinery, which in practice may not appropriately reflect reality. Especially for safety-critical engineered systems with a high level of complexity that operate under time-varying external conditions, degradation labels are not available, and hence, supervised approaches are not applicable. To address the above-mentioned challenges for extrapolating HI values, we propose a novel anticausal-based framework with reduced model complexity, by predicting the cause from the causal model's effects. Two heuristic methods are presented for inferring the structural causal models. First, the causal driver is identified from a complexity estimate of the time series, and second, the set of effect-measuring parameters is inferred via Granger causality. Once the causal models are known, off-line anticausal learning with only a few healthy cycles ensures strong generalization capabilities that help obtain robust online predictions of HIs. We validate and compare our framework on NASA's N-CMAPSS dataset with real-world operating conditions as recorded on board a commercial jet, which are utilized to further enhance the CMAPSS simulation model. The proposed framework with anticausal learning outperforms existing deep learning architectures by reducing the average root-mean-square error (RMSE) across all investigated units by nearly 65%.
Steger Sophie, Geiger Bernhard, Smieja Marek
2022
We connect the problem of semi-supervised clustering to constrained Markov aggregation, i.e., the task of partitioning the state space of a Markov chain. We achieve this connection by considering every data point in the dataset as an element of the Markov chain's state space, by defining the transition probabilities between states via similarities between corresponding data points, and by incorporating semi-supervision information as hard constraints in a Hartigan-style algorithm. The introduced Constrained Markov Clustering (CoMaC) is an extension of a recent information-theoretic framework for (unsupervised) Markov aggregation to the semi-supervised case. Instantiating CoMaC for certain parameter settings further generalizes two previous information-theoretic objectives for unsupervised clustering. Our results indicate that CoMaC is competitive with the state of the art.
Schweimer Christoph, Gfrerer Christine, Lugstein Florian, Pape David, Velimsky Jan, Elsässer Robert, Geiger Bernhard
2022
Online social networks are a dominant medium in everyday life to stay in contact with friends and to share information. In Twitter, users can connect with other users by following them, who in turn can follow back. In recent years, researchers studied several properties of social networks and designed random graph models to describe them. Many of these approaches either focus on the generation of undirected graphs or on the creation of directed graphs without modeling the dependencies between reciprocal (i.e., two directed edges of opposite direction between two nodes) and directed edges. We propose an approach to generate directed social network graphs that creates reciprocal and directed edges and considers the correlation between the respective degree sequences. Our model relies on crawled directed graphs in Twitter, on which information with respect to a topic is exchanged or disseminated. While these graphs exhibit a high clustering coefficient and small average distances between random node pairs (which is typical in real-world networks), their degree sequences seem to follow a chi-squared distribution rather than a power law. To achieve high clustering coefficients, we apply an edge rewiring procedure that preserves the node degrees. We compare the crawled and the created graphs, and simulate certain algorithms for information dissemination and epidemic spreading on them. The results show that the created graphs exhibit very similar topological and algorithmic properties as the real-world graphs, providing evidence that they can be used as surrogates in social network analysis. Furthermore, our model is highly scalable, which enables us to create graphs of arbitrary size with almost the same properties as the corresponding real-world networks.
Hoffer Johannes Georg, Ofner Andreas Benjamin, Rohrhofer Franz Martin, Lovric Mario, Kern Roman, Lindstaedt Stefanie , Geiger Bernhard
2022
Most engineering domains abound with models derived from first principles that have been proven to be effective for decades. These models are not only a valuable source of knowledge, but they also form the basis of simulations. The recent trend of digitization has complemented these models with data in all forms and variants, such as process monitoring time series, measured material characteristics, and stored production parameters. Theory-inspired machine learning combines the available models and data, reaping the benefits of established knowledge and the capabilities of modern, data-driven approaches. Compared to purely physics- or purely data-driven models, the models resulting from theory-inspired machine learning are often more accurate and less complex, extrapolate better, or allow faster model training or inference. In this short survey, we introduce and discuss several prominent approaches to theory-inspired machine learning and show how they were applied in the fields of welding, joining, additive manufacturing, and metal forming.
Reichel Robert, Gursch Heimo, Kröll Mark
2022
The trend in healthcare of switching from paper-based records to digital forms lays the foundation for the electronic processing of health data. This article describes the technical fundamentals for the semantic preparation and analysis of textual content in the medical domain. The special characteristics of medical texts make the extraction and aggregation of relevant information more challenging than in other application areas. In addition, there is a need for specialized methods, particularly in the area of anonymization and pseudonymization of personal data. Nevertheless, the use of computational linguistics methods in combination with advancing digitalization holds enormous potential to support healthcare personnel.
Windisch Andreas, Gallien Thomas, Schwarzmueller Christopher
2022
Dyson-Schwinger equations (DSEs) are a non-perturbative way to express n-point functions in quantum field theory. Working in Euclidean space and in Landau gauge, for example, one can study the quark propagator Dyson-Schwinger equation in the real and complex domain, given that a suitable and tractable truncation has been found. When aiming to solve these equations in the complex domain, that is, for complex external momenta, one has to deform the integration contour of the radial component in the complex plane of the loop momentum expressed in hyper-spherical coordinates. This has to be done in order to avoid poles and branch cuts in the integrand of the self-energy loop. Since the nature of Dyson-Schwinger equations is such that they have to be solved in a self-consistent way, one cannot analyze the analytic properties of the integrand after every iteration step, as this would not be feasible. In these proceedings, we suggest a machine learning pipeline based on deep learning (DL) approaches to computer vision (CV), as well as deep reinforcement learning (DRL), that could solve this problem autonomously by detecting poles and branch cuts in the numerical integrand after every iteration step and by suggesting suitable integration contour deformations that avoid these obstructions. We sketch out a proof of principle for both of these tasks, that is, the pole and branch cut detection, as well as the contour deformation.
Gashi Milot, Gursch Heimo, Hinterbichler Hannes, Pichler Stefan, Lindstaedt Stefanie , Thalmann Stefan
2022
Predictive Maintenance (PdM) is one of the most important applications of advanced data science in Industry 4.0, aiming to facilitate manufacturing processes. To build PdM models, sufficient data, such as condition monitoring and maintenance data of the industrial application, are required. However, collecting maintenance data is complex and challenging as it requires human involvement and expertise. Due to time constraints, motivating workers to provide comprehensive labeled data is very challenging, and thus maintenance data are mostly incomplete or even completely missing. In addition, many condition monitoring datasets exist, but only very few small labeled maintenance datasets can be found. Hence, our proposed solution can provide additional labels and offer new research possibilities for these datasets. To address this challenge, we introduce MEDEP, a novel maintenance event detection framework based on the Pruned Exact Linear Time (PELT) approach, promising a low false-positive (FP) rate and high accuracy in general. MEDEP can help to automatically detect performed maintenance events from deviations in the condition monitoring data. A heuristic method is proposed as an extension to the PELT approach, consisting of the following two steps: (1) a mean threshold for multivariate time series and (2) a distribution threshold analysis based on the complexity-invariant metric. We validate and compare MEDEP on the Microsoft Azure Predictive Maintenance dataset and on data from a real-world use case in the welding industry. The proposed approach achieved superior performance with an FP rate of around 10% on average and high sensitivity and accuracy.
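The Pruned Exact Linear Time (PELT) core that MEDEP builds on can be sketched as a penalized optimal-partitioning search with candidate pruning. This minimal version uses a squared-error segment cost and omits the paper's two heuristic thresholding extensions:

```python
def pelt_changepoints(y, penalty):
    """Exact penalized changepoint search with PELT pruning.
    Segment cost is the sum of squared deviations from the segment mean."""
    n = len(y)
    s1, s2 = [0.0], [0.0]  # prefix sums of y and y^2
    for v in y:
        s1.append(s1[-1] + v)
        s2.append(s2[-1] + v * v)

    def cost(i, j):  # cost of segment y[i:j] in O(1)
        m = j - i
        mean = (s1[j] - s1[i]) / m
        return (s2[j] - s2[i]) - m * mean * mean

    F = [0.0] + [float("inf")] * n  # optimal cost up to index t
    last = [0] * (n + 1)            # last changepoint before t
    candidates = [0]
    for t in range(1, n + 1):
        F[t], last[t] = min((F[s] + cost(s, t) + penalty, s) for s in candidates)
        # pruning: a candidate that satisfies this can never be optimal again
        candidates = [s for s in candidates if F[s] + cost(s, t) <= F[t]]
        candidates.append(t)

    cps, t = [], n  # backtrack the changepoints
    while last[t] > 0:
        t = last[t]
        cps.append(t)
    return sorted(cps)
```

A mean shift large enough to outweigh the penalty is reported as a changepoint; a constant series yields none.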
Lacic Emanuel, Kowald Dominik
2022
In this industry talk at ECIR'2022, we illustrate how to build a modern recommender system that can serve recommendations in real-time for a diverse set of application domains. Specifically, we present our system architecture that utilizes popular recommendation algorithms from the literature such as Collaborative Filtering, Content-based Filtering as well as various neural embedding approaches (e.g., Doc2Vec, Autoencoders, etc.). We showcase the applicability of our system architecture using two real-world use-cases, namely providing recommendations for the domains of (i) job marketplaces, and (ii) entrepreneurial start-up founding. We strongly believe that our experiences from both research- and industry-oriented settings should be of interest for practitioners in the field of real-time multi-domain recommender systems.
Lacic Emanuel, Fadljevic Leon, Weissenböck Franz, Lindstaedt Stefanie , Kowald Dominik
2022
Personalized news recommender systems support readers in finding the right and relevant articles in online news platforms. In this paper, we discuss the introduction of personalized, content-based news recommendations on DiePresse, a popular Austrian online news platform, focusing on two specific aspects: (i) user interface type, and (ii) popularity bias mitigation. Therefore, we conducted a two-week online study that started in October 2020, in which we analyzed the impact of recommendations on two user groups, i.e., anonymous and subscribed users, and three user interface types, i.e., on a desktop, mobile and tablet device. With respect to user interface types, we find that the probability of a recommendation to be seen is the highest for desktop devices, while the probability of interacting with recommendations is the highest for mobile devices. With respect to popularity bias mitigation, we find that personalized, content-based news recommendations can lead to a more balanced distribution of news articles' readership popularity in the case of anonymous users. Apart from that, we find that significant events (e.g., the COVID-19 lockdown announcement in Austria and the Vienna terror attack) influence the general consumption behavior of popular articles for both anonymous and subscribed users.
Kowald Dominik, Lacic Emanuel
2022
Multimedia recommender systems suggest media items, e.g., songs, (digital) books and movies, to users by utilizing concepts of traditional recommender systems such as collaborative filtering. In this paper, we investigate a potential issue of such collaborative-filtering based multimedia recommender systems, namely popularity bias, which leads to the underrepresentation of unpopular items in the recommendation lists. Therefore, we study four multimedia datasets, i.e., LastFm, MovieLens, BookCrossing and MyAnimeList, that we each split into three user groups differing in their inclination to popularity, i.e., LowPop, MedPop and HighPop. Using these user groups, we evaluate four collaborative filtering-based algorithms with respect to popularity bias on the item and the user level. Our findings are three-fold: firstly, we show that users with little interest in popular items tend to have large user profiles and thus are important data sources for multimedia recommender systems. Secondly, we find that popular items are recommended more frequently than unpopular ones. Thirdly, we find that users with little interest in popular items receive significantly worse recommendations than users with medium or high interest in popularity.
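The user grouping described above can be sketched as ranking users by the mean popularity of the items they interacted with and splitting the ranking into thirds; the data and the even three-way split are illustrative assumptions, not the paper's exact thresholds:

```python
def split_by_popularity_inclination(profiles, item_popularity):
    """Split users into LowPop/MedPop/HighPop thirds by the average
    popularity of the items in their profile. `profiles` maps a user id
    to a list of item ids; `item_popularity` maps item id to a count."""
    inclination = {
        u: sum(item_popularity[i] for i in items) / len(items)
        for u, items in profiles.items()
    }
    ranked = sorted(inclination, key=inclination.get)
    k = len(ranked) // 3
    return {"LowPop": ranked[:k],
            "MedPop": ranked[k:2 * k],
            "HighPop": ranked[2 * k:]}
```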
Ofner Andreas Benjamin, Kefalas Achilles, Posch Stefan, Geiger Bernhard
2022
This article introduces a method for the detection of knock occurrences in an internal combustion engine (ICE) using a 1-D convolutional neural network trained on in-cylinder pressure data. The model architecture is based on expected frequency characteristics of knocking combustion. All cycles were reduced to 60° CA long windows with no further processing applied to the pressure traces. The neural networks were trained exclusively on in-cylinder pressure traces from multiple conditions, with labels provided by human experts. The best-performing model architecture achieves an accuracy of above 92% on all test sets in a tenfold cross-validation when distinguishing between knocking and non-knocking cycles. In a multiclass problem where each cycle was labeled by the number of experts who rated it as knocking, 78% of cycles were labeled perfectly, while 90% of cycles were classified at most one class from ground truth. They thus considerably outperform the broadly applied maximum amplitude of pressure oscillation (MAPO) detection method, as well as references reconstructed from previous works. Our analysis indicates that the neural network learned physically meaningful features connected to engine-characteristic resonances, thus verifying the intended theory-guided data science approach. Deeper performance investigation further shows remarkable generalization ability to unseen operating points. In addition, the model proved to classify knocking cycles in unseen engines with increased accuracy of 89% after adapting to their features via training on a small number of exclusively non-knocking cycles. The algorithm takes below 1 ms to classify individual cycles, effectively making it suitable for real-time engine control.
Hoffer Johannes Georg, Geiger Bernhard, Kern Roman
2022
The avoidance of scrap and the adherence to tolerances is an important goal in manufacturing. This requires a good engineering understanding of the underlying process. To achieve this, real physical experiments can be conducted. However, they are expensive in time and resources, and can slow down production. A promising way to overcome these drawbacks is process exploration through simulation, where the finite element method (FEM) is a well-established and robust simulation method. While FEM simulation can provide high-resolution results, it requires extensive computing resources to do so. In addition, the simulation design often depends on unknown process properties. To circumvent these drawbacks, we present a Gaussian Process surrogate model approach that accounts for real physical manufacturing process uncertainties and acts as a substitute for expensive FEM simulation, resulting in a fast and robust method that adequately depicts reality. We demonstrate that active learning can be easily applied with our surrogate model to improve computational resources. On top of that, we present a novel optimization method that treats aleatoric and epistemic uncertainties separately, allowing for greater flexibility in solving inverse problems. We evaluate our model using a typical manufacturing use case, the preforming of an Inconel 625 superalloy billet on a forging press.
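The active-learning loop described above can be sketched as: fit a surrogate on a few expensive simulation runs, then repeatedly evaluate the candidate where the surrogate is most uncertain. The 1-NN surrogate with distance-based uncertainty below is a toy stand-in for the paper's Gaussian Process:

```python
def active_learning_loop(expensive_sim, candidates, n_init=3, n_iter=5):
    """Query-by-uncertainty loop. `expensive_sim` stands in for a costly
    FEM run; the nearest-neighbor 'surrogate' and its distance-based
    uncertainty are illustrative placeholders for a GP mean/variance."""
    X = candidates[:n_init]
    y = [expensive_sim(x) for x in X]

    def predict(x):
        d, yy = min((abs(x - xi), yi) for xi, yi in zip(X, y))
        return yy, d  # mean = nearest label, uncertainty = distance

    for _ in range(n_iter):
        pool = [c for c in candidates if c not in X]
        if not pool:
            break
        nxt = max(pool, key=lambda c: predict(c)[1])  # most uncertain point
        X.append(nxt)
        y.append(expensive_sim(nxt))
    return X, y
```

Each iteration spends one expensive evaluation where the surrogate knows least, which is how the paper saves computational resources relative to space-filling designs.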
Ross-Hellauer Anthony, Cole Nicki Lisa, Fessl Angela, Klebel Thomas, Pontika, Nancy, Reichmann Stefan
2022
Open Science holds the promise to make scientific endeavours more inclusive, participatory, understandable, accessible and re-usable for large audiences. However, making processes open will not per se drive wide reuse or participation unless also accompanied by the capacity (in terms of knowledge, skills, financial resources, technological readiness and motivation) to do so. These capacities vary considerably across regions, institutions and demographics. Those advantaged by such factors will remain potentially privileged, putting Open Science's agenda of inclusivity at risk of propagating conditions of ‘cumulative advantage’. With this paper, we systematically scope existing research addressing the question: ‘What evidence and discourse exists in the literature about the ways in which dynamics and structures of inequality could persist or be exacerbated in the transition to Open Science, across disciplines, regions and demographics?’ Aiming to synthesize findings, identify gaps in the literature and inform future research and policy, our results identify threats to equity associated with all aspects of Open Science, including Open Access, Open and FAIR Data, Open Methods, Open Evaluation, Citizen Science, as well as its interfaces with society, industry and policy. Key threats include: stratifications of publishing due to the exclusionary nature of the author-pays model of Open Access; potential widening of the digital divide due to the infrastructure-dependent, highly situated nature of open data practices; risks of diminishing qualitative methodologies as ‘reproducibility’ becomes synonymous with quality; new risks of bias and exclusion in means of transparent evaluation; and crucial asymmetries in the Open Science relationships with industry and the public, which privileges the former and fails to fully include the latter.
BDVA Task Force, Duricic Tomislav
2022
The session will explore the importance of data-driven AI for the financial sector by comparing the highly innovative and revolutionary world of Fintech companies with Financial Institutions, highlighting the peculiarities of the sector such as the paradigm of ethical AI. The session will cover topics related to Open Innovation Hubs and acceleration programs, to highlight the importance of innovation and the opportunities of Fintechs, mentioning as well the VDIH (Virtualized Digital Innovation Hub), an innovative service developed within the INFINITECH project, a digital finance flagship H2020 project. Moreover, the findings and insights of the Whitepaper of the Task Force "AI and Big Data for the Financial Sector" will be presented, emphasizing market trends, vision, and the innovation impact of novel technologies on the financial sector. The session will end with a keynote speech by a representative from the Fintech District, the largest open ecosystem within the Italian fintech community, deepening the evolution of the fintech sector and sharing future insights and opportunities.
Amjad Rana Ali, Liu Kairen, Geiger Bernhard
2022
In this work, we investigate the use of three information-theoretic quantities (entropy, mutual information with the class variable, and a class selectivity measure based on Kullback-Leibler (KL) divergence) to understand and study the behavior of already trained fully connected feedforward neural networks (NNs). We analyze the connection between these information-theoretic quantities and classification performance on the test set by cumulatively ablating neurons in networks trained on MNIST, FashionMNIST, and CIFAR-10. Our results parallel those recently published by Morcos et al., indicating that class selectivity is not a good indicator for classification performance. However, looking at individual layers separately, both mutual information and class selectivity are positively correlated with classification performance, at least for networks with ReLU activation functions. We provide explanations for this phenomenon and conclude that it is ill-advised to compare the proposed information-theoretic quantities across layers. Furthermore, we show that cumulative ablation of neurons with ascending or descending information-theoretic quantities can be used to formulate hypotheses regarding the joint behavior of multiple neurons, such as redundancy and synergy, with comparably low computational cost. We also draw connections to the information bottleneck theory for NNs.
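The cumulative-ablation protocol can be sketched as: sort neurons by the chosen information-theoretic quantity, remove them one by one, and record test performance after each removal. Both `evaluate` and the scores below are placeholders for a trained network's actual quantities:

```python
def cumulative_ablation_curve(evaluate, neuron_scores, ascending=True):
    """Cumulatively ablate neurons in order of a per-neuron score and
    record performance after each removal. `evaluate` maps the set of
    ablated neuron ids to an accuracy; `neuron_scores` maps neuron id
    to, e.g., entropy or mutual information with the class variable."""
    order = sorted(neuron_scores, key=neuron_scores.get,
                   reverse=not ascending)
    ablated, curve = set(), []
    for neuron in order:
        ablated.add(neuron)
        curve.append(evaluate(frozenset(ablated)))
    return order, curve
```

Comparing the curves for ascending versus descending order is what supports the paper's hypotheses about redundancy and synergy among neurons.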
Mirzababaei Behzad, Pammer-Schindler Viktoria
2021
This article discusses the usefulness of Toulmin's model of arguments for structuring an assessment of different types of wrongness in an argument. We discuss the usability of the model within a conversational agent that aims to support users in developing a good argument. Within the article, we present a study and the development of classifiers that identify the existence of the structural components of a good argument, namely a claim, a warrant (underlying understanding), and evidence. Based on a dataset (three sub-datasets with 100, 1,026 and 211 responses each) in which users argue about the intelligence or non-intelligence of entities, we have developed classifiers for these components: The existence and direction (positive/negative) of claims can be detected with a weighted average F1 score over all classes (positive/negative/unknown) of 0.91. The existence of a warrant (with warrant/without warrant) can be detected with a weighted F1 score over all classes of 0.88. The existence of evidence (with evidence/without evidence) can be detected with a weighted average F1 score of 0.80. We argue that these scores are high enough to be of use within a conditional dialogue structure based on Bloom's taxonomy of learning, and we show by argument an example conditional dialogue structure that allows us to conduct coherent learning conversations. While in our described experiments we show how Toulmin's model of arguments can be used to identify structural problems with argumentation, we also discuss how Toulmin's model could be used in conjunction with a content-wise assessment of correctness, especially of the evidence component, to identify more complex types of wrongness in arguments, where argument components are not well aligned. Owing to progress in argument mining and conversational agents, a next challenge could be developing agents that support learning argumentation.
These agents could identify more complex types of wrongness in arguments that result from wrong connections between argumentation components.
Geiger Bernhard
2021
(extended abstract)
Gursch Heimo, Pramhas Martin, Bernhard Knopper, Daniel Brandl, Markus Gratzl, Schlager Elke, Kern Roman
2021
In the project COMFORT (Comfort Orientated and Management Focused Operation of Room condiTions), the comfort of office rooms is investigated with simulations and data-driven methods. While the data-driven methods rely on measurement data, the simulation requires extensive descriptions of the office rooms, which largely coincide with the information captured in the Building Information Model (BIM). Despite great progress in recent years, the integration of BIM and simulation is not yet fully automated. Using the case study of adding a storey to an office building of Thomas Lorenz ZT GmbH, the transfer of BIM data to Building Energy Simulation (BES) and Computational Fluid Dynamics (CFD) simulations is investigated. For the building under investigation, the entire planning process was carried out on the basis of the BIM. This allowed the permit planning, the tender planning for all trades including quantity take-offs, and execution drawings such as site, formwork and reinforcement plans to be derived from the model, and the building services model to be linked with the architectural and structural planning models at an early stage. Starting from the BIM, the necessary data could be handed over to the BES in IFC format. However, the software used could not yet perform an automatic transfer, so manual post-processing of the rooms was required. For the CFD simulation, only selected rooms were considered, since the additional effort for the transfer in STEP format is still very large under normal processing of the BIM. The free air volume must be modelled separately in the BIM and certain geometric boundary conditions must be fulfilled. Likewise, information on heat sources and furniture must be available at a very high level of planning detail.
The exchange of boundary conditions at the interfaces between air and envelope still had to be done manually. In terms of their validity, the BES and CFD simulation results are to be regarded as identical to those from conventional, manually created simulation models. An automatic transfer of parameter values currently still fails due to the lack of interpretability and assignability in the simulation software. In the future, the establishment of IFC 4 and additional Industry Foundation Classes (IFC) parameters should make it easier to store the required data in the model in a structured way. Particular attention should be paid to the integration of room book data into BIM, since this information is of great use not only for simulation. These information integrations are not limited to a one-time transfer, but aim at an integration that automatically propagates changes between BIM, simulation and related areas.
Wolfbauer Irmtraud
2021
Use Case & Motivation: Styrian SMEs need an online learning platform for their apprentices in mechatronics, metal and electrical engineering. Research opportunities: * Apprentices as a target group are under-researched * Designing a computer-mediated learning intervention in the overlap between workplace learning and educational settings * Contributing to research on reflection guidance technologies * Developing the first reflection guidance chatbot
Mirzababaei Behzad, Pammer-Schindler Viktoria
2021
This article discusses the usefulness of Toulmin’s model of arguments for structuring the assessment of different types of wrongness in an argument. We discuss the usability of the model within a conversational agent that aims to support users in developing a good argument. Within the article, we present a study and the development of classifiers that identify the existence of the structural components of a good argument, namely a claim, a warrant (underlying understanding), and evidence. Based on a dataset (three sub-datasets with 100, 1,026, and 211 responses each) in which users argue about the intelligence or non-intelligence of entities, we have developed classifiers for these components: the existence and direction (positive/negative) of claims can be detected with a weighted average F1 score over all classes (positive/negative/unknown) of 0.91. The existence of a warrant (with warrant/without warrant) can be detected with a weighted average F1 score over all classes of 0.88. The existence of evidence (with evidence/without evidence) can be detected with a weighted average F1 score of 0.80. We argue that these scores are high enough to be of use within a conditional dialogue structure based on Bloom’s taxonomy of learning, and we show by example a conditional dialogue structure that allows us to conduct coherent learning conversations. While in the described experiments we show how Toulmin’s model of arguments can be used to identify structural problems with argumentation, we also discuss how it could be used in conjunction with a content-wise assessment of correctness, especially of the evidence component, to identify more complex types of wrongness in arguments, where argument components are not well aligned. Given the progress in argument mining and conversational agents, a next challenge could be developing agents that support learning argumentation.
These agents could identify more complex types of wrongness in arguments that result from wrong connections between argumentation components.
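The abstract describes a conditional dialogue structure driven by which argument components a classifier detects. A minimal sketch of such a branching policy is shown below; the function name, the prompts, and the boolean inputs are illustrative assumptions, not the authors' actual agent (which uses trained classifiers rather than given flags).

```python
def next_prompt(has_claim: bool, has_warrant: bool, has_evidence: bool) -> str:
    """Pick the next tutoring prompt from the detected argument components
    (claim, warrant, evidence), requesting one missing component at a time.
    A sketch of a conditional dialogue structure; prompts are hypothetical."""
    if not has_claim:
        return "What is your position on the question?"
    if not has_warrant:
        return "Why do you hold that position?"
    if not has_evidence:
        return "Can you give an example or evidence supporting your reasoning?"
    return "Your argument has a claim, a warrant, and evidence. Well done."

# Example: a user stated a claim, but gave no warrant or evidence yet.
reply = next_prompt(True, False, False)
```

In an actual system, the three flags would come from the component classifiers described in the abstract, and each prompt would move the conversation one level up the taxonomy.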
Reiter-Haas Markus, Kopeinik Simone, Lex Elisabeth
2021
In this paper, we study the moral framing of political content on Twitter. Specifically, we examine differences in moral framing in two datasets: (i) tweets from US-based politicians annotated with political affiliation and (ii) COVID-19 related tweets in German from followers of the leaders of the five major Austrian political parties. Our research is based on recent work that introduces an unsupervised approach to extract framing bias and intensity in news using a dictionary of moral virtues and vices. In this paper, we use a more extensive dictionary and adapt it to German-language tweets. Overall, in both datasets, we observe a moral framing that is congruent with the public perception of the political parties. In the US dataset, Democrats have a tendency to frame tweets in terms of care, while loyalty is a characteristic frame for Republicans. In the Austrian dataset, we find that the followers of the governing conservative party emphasize care, which is a key message and moral frame in the party’s COVID-19 campaign slogan. Our work complements existing studies on moral framing in social media. Also, our empirical findings provide novel insights into moral-based framing on COVID-19 in Austria.
Oana Inel, Duricic Tomislav, Harmanpreet Kaur, Lex Elisabeth, Nava Tintarev
2021
Online videos have become a prevalent means for people to acquire information. Videos, however, are often polarized, misleading, or contain topics on which people have different, contradictory views. In this work, we introduce natural language explanations to stimulate more deliberate reasoning about videos and raise users’ awareness of potentially deceiving or biased information. With these explanations, we aim to support users in actively deciding and reflecting on the usefulness of the videos. We generate the explanations through an end-to-end pipeline that extracts reflection triggers so users receive additional information to the video based on its source, covered topics, communicated emotions, and sentiment. In a between-subjects user study, we examine the effect of showing the explanations for videos on three controversial topics. Besides, we assess the users’ alignment with the video’s message and how strong their belief is about the topic. Our results indicate that respondents’ alignment with the video’s message is critical to evaluate the video’s usefulness. Overall, the explanations were found to be useful and of high quality. While the explanations do not influence the perceived usefulness of the videos compared to only seeing the video, people with an extreme negative alignment with a video’s message perceived it as less useful (with or without explanations) and felt more confident in their assessment. We relate our findings to cognitive dissonance since users seem to be less receptive to explanations when the video’s message strongly challenges their beliefs. Given these findings, we provide a set of design implications for explanations grounded in theories on reducing cognitive dissonance in light of raising awareness about online deception.
Duricic Tomislav, Volker Seiser, Lex Elisabeth
2021
We perform a cross-platform analysis in which we study how linking YouTube content on a Reddit conspiracy forum impacts the language used in user comments on YouTube. Our findings show a slight change in user language, in that it becomes more similar to the language used on Reddit.
Duricic Tomislav, Kowald Dominik, Schedl Markus, Lex Elisabeth
2021
Homophily describes the phenomenon that similarity breeds connection, i.e., individuals tend to form ties with other people who are similar to themselves in some aspect(s). Similarity in music taste can undoubtedly influence who we make friends with and shape our social circles. In this paper, we study homophily on the online music platform Last.fm with respect to user preferences towards listening to mainstream (M), novel (N), or diverse (D) content. Furthermore, we draw comparisons with homophily based on listening profiles derived from the artists users have listened to in the past, i.e., artist profiles. Finally, we explore the utility of users' artist profiles as well as of the features describing M, N, and D for the task of link prediction. Our study reveals that: (i) users with a friendship connection share similar music taste based on their artist profiles; (ii) on average, a measure of how diverse the music two users listen to is serves as a stronger predictor of friendship than measures of their preferences towards mainstream or novel content, i.e., homophily is stronger for D than for M and N; (iii) some user groups, such as high-novelty-seekers (explorers), exhibit strong homophily but lower-than-average artist profile similarity; (iv) using M, N, and D achieves link prediction accuracy comparable to using artist profiles, but the combination of both feature sets yields the best accuracy results; and (v) using combined features does not add value if graph-based features such as common neighbors are available, making M, N, and D features primarily useful in a cold-start user recommendation setting for users with few friendship connections. The insights from this study …
Egger Jan, Pepe Antonio, Gsaxner Christina, Jin Yuan, Li Jianning, Kern Roman
2021
Deep learning belongs to the field of artificial intelligence, where machines perform tasks that typically require some kind of human intelligence. Deep learning tries to achieve this by drawing inspiration from the learning of a human brain. Similar to the basic structure of a brain, which consists of (billions of) neurons and connections between them, a deep learning algorithm consists of an artificial neural network, which resembles the biological brain structure. Mimicking the learning process of humans with their senses, deep learning networks are fed with (sensory) data, like texts, images, videos or sounds. These networks outperform the state-of-the-art methods in different tasks and, because of this, the whole field has seen exponential growth in recent years, resulting in well over 10,000 publications per year. For example, the search engine PubMed alone, which covers only a subset of all publications in the medical field, already provides over 11,000 results in Q3 2020 for the search term ‘deep learning’, and around 90% of these results are from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain and, in the near future, it will potentially become difficult to obtain an overview of even a subfield. However, there are several review articles about deep learning that focus on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks like object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-survey of selected reviews on deep learning across different scientific disciplines and to outline the research impact they have already had within a short period of time.
The categories (computer vision, language processing, medical informatics, and additional works) have been chosen according to the underlying data sources (image, language, medical, mixed). In addition, we review the common architectures, methods, pros, cons, evaluations, challenges, and future directions for every sub-category.
Pammer-Schindler Viktoria, Prilla Michael
2021
A substantial body of human-computer interaction literature investigates tools that are intended to support reflection, e.g. under the header of quantified self or in computer-mediated learning. These works describe the issues that are reflected on by users in terms of examples, such as reflecting on financial expenditures, lifestyle, professional growth, etc. A coherent concept is missing. In this paper, the reflection object is developed based on activity theory, reflection theory and related design-oriented research. The reflection object is both what is reflected on and what is changed through reflection. It constitutes the link between reflection and other activities in which the reflecting person participates. By combining these two aspects—what is reflected on and what is changed—into a coherent conceptual unit, the concept of the reflection object provides a frame to focus on how to support learning, change and transformation, which is a major challenge when designing technologies for reflection.
Leski Florian, Fruhwirth Michael, Pammer-Schindler Viktoria
2021
The increasing volume of available data and the advances in analytics and artificial intelligence hold the potential for new business models also in offline-established organizations. To successfully implement a data-driven business model, it is crucial to understand the environment and the roles that need to be fulfilled by actors in the business model. This partner perspective is overlooked by current research on data-driven business models. In this paper, we present a structured literature review in which we identified 33 relevant publications. Based on this literature, we developed a framework consisting of eight roles and two attributes that can be assigned to actors as well as three classes of exchanged values between actors. Finally, we evaluated our framework through three cases from one automotive company collected via interviews in which we applied the framework to analyze data-driven business models for which our interviewees are responsible.
Lovric Mario, Duricic Tomislav, Tran Thi Ngoc Han, Hussain Hussain, Lacic Emanuel, Morten A. Rasmussen, Kern Roman
2021
Methods for dimensionality reduction are showing significant contributions to knowledge generation in high-dimensional modeling scenarios throughout many disciplines. By achieving a lower dimensional representation (also called embedding), fewer computing resources are needed in downstream machine learning tasks, thus leading to a faster training time, lower complexity, and statistical flexibility. In this work, we investigate the utility of three prominent unsupervised embedding techniques (principal component analysis—PCA, uniform manifold approximation and projection—UMAP, and variational autoencoders—VAEs) for solving classification tasks in the domain of toxicology. To this end, we compare these embedding techniques against a set of molecular fingerprint-based models that do not utilize additional pre-processing of features. Inspired by the success of transfer learning in several fields, we further study the performance of embedders when trained on an external dataset of chemical compounds. To gain a better understanding of their characteristics, we evaluate the embedders with different embedding dimensionalities, and with different sizes of the external dataset. Our findings show that the recently popularized UMAP approach can be utilized alongside known techniques such as PCA and VAE as a pre-compression technique in the toxicology domain. Nevertheless, the generative model of VAE shows an advantage in pre-compressing the data with respect to classification accuracy.
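The pre-compression idea described above can be sketched with a plain PCA (via SVD) feeding a simple classifier in the embedded space. The two-class toy data and the nearest-centroid classifier are illustrative assumptions, standing in for the study's molecular fingerprints and downstream models.

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA via SVD of the centered data; returns (mean, components)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_transform(X, mean, components):
    """Project data onto the principal components (the embedding)."""
    return (X - mean) @ components.T

rng = np.random.default_rng(0)
# Toy stand-in for fingerprints: two classes shifted along many features.
X = np.vstack([rng.normal(0.0, 1.0, (100, 50)),
               rng.normal(3.0, 1.0, (100, 50))])
y = np.array([0] * 100 + [1] * 100)

# Pre-compress from 50 to 2 dimensions before classifying.
mean, comps = pca_fit(X, n_components=2)
Z = pca_transform(X, mean, comps)

# Nearest-centroid classification in the compressed space.
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(Z[:, None] - centroids[None], axis=-1), axis=1)
accuracy = float((pred == y).mean())
```

The same pipeline shape applies when PCA is swapped for UMAP or a VAE encoder: fit the embedder (possibly on an external dataset, as in the transfer setting above), transform, then train the classifier on the embedding.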
Hoffer Johannes Georg, Geiger Bernhard, Ofner Patrick, Kern Roman
2021
The technical world of today fundamentally relies on structural analysis in the form of design and structural mechanic simulations. A traditional and robust simulation method is the physics-based Finite Element Method (FEM) simulation. FEM simulations in structural mechanics are known to be very accurate; however, the higher the desired resolution, the more computational effort is required. Surrogate modeling provides a robust approach to address this drawback. Nonetheless, finding the right surrogate model and its hyperparameters for a specific use case is not a straightforward process. In this paper, we discuss and compare several classes of mesh-free surrogate models based on traditional and thriving Machine Learning (ML) and Deep Learning (DL) methods. We show that relatively simple algorithms (such as $k$-nearest neighbor regression) can be competitive in applications with low geometrical complexity and extrapolation requirements. With respect to tasks exhibiting higher geometric complexity, our results show that recent DL methods at the forefront of the literature (such as physics-informed neural networks) are complicated to train and to parameterize and thus require further research before they can be put to practical use. In contrast, we show that already well-researched DL methods such as the multi-layer perceptron are superior with respect to interpolation use cases and can be easily trained with available tools. With our work, we thus present a basis for the selection and practical implementation of surrogate models.
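As a rough illustration of the point that simple algorithms can make competitive surrogates, here is a minimal $k$-nearest neighbor regression surrogate over a smooth stand-in response surface. The sine/cosine function is an assumption replacing an actual (expensive) FEM evaluation, not data from the paper.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=5):
    """k-nearest neighbor regression: average the targets of the
    k closest training samples for each query point."""
    d = np.linalg.norm(X_query[:, None] - X_train[None], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

rng = np.random.default_rng(0)
# Stand-in for expensive FEM evaluations: a smooth 2D response surface.
X_train = rng.uniform(-1, 1, (500, 2))
y_train = np.sin(np.pi * X_train[:, 0]) * np.cos(np.pi * X_train[:, 1])

# Query the surrogate at unseen points and measure interpolation error.
X_test = rng.uniform(-1, 1, (100, 2))
y_true = np.sin(np.pi * X_test[:, 0]) * np.cos(np.pi * X_test[:, 1])
y_pred = knn_predict(X_train, y_train, X_test, k=5)
rmse = float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```

The weakness the paper notes also shows up here: a kNN surrogate only interpolates between stored samples, so queries outside the training domain (extrapolation, higher geometric complexity) degrade quickly.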
Iacono Lucas, Veas Eduardo Enrique
2021
AVL RACING and the Knowledge Visualization group of Know-Center GmbH are evaluating the performance of racing drivers using the latest wearable technologies, data analytics, and vehicle dynamics simulation software from AVL. The goal is to measure human factors with biosensors synchronized with vehicle data at a Driver-in-the-Loop (DiL) setup and the vehicle dynamics simulation software AVL VSM™ RACE.
Iacono Lucas, Veas Eduardo Enrique
2021
Know-Center is developing human-centered intelligent systems that detect cognitive, emotional and health-related states from action and perception by means of cognitive and health metrics. Models of human behavior and intention can be derived during different activities. The innovative set-up links the human telemetry (HT) system with activity monitors and synchronizes the data. This article details our system, composed of several wearable sensors, such as EEG, eye-tracker, ECG, EMG, and a data-logger, and the methodology used to perform our studies.
Pammer-Schindler Viktoria, Rosé Carolyn
2021
Professional and lifelong learning are a necessity for workers. This is true both for re-skilling from disappearing jobs, as well as for staying current within a professional domain. AI-enabled scaffolding and just-in-time and situated learning in the workplace offer a new frontier for future impact of AIED. The hallmark of this community’s work has been i) data-driven design of learning technology and ii) machine-learning enabled personalized interventions. In both cases, data are the foundation of AIED research and data-related ethics are thus central to AIED research. In this paper, we formulate a vision of how AIED research could address data-related ethics issues in informal and situated professional learning. The foundation of our vision is a secondary analysis of five research cases that offer insights related to data-driven adaptive technologies for informal professional learning. We describe the encountered data-related ethics issues. In our interpretation, we have developed three themes: Firstly, in informal and situated professional learning, relevant data about professional learning – to be used as a basis for learning analytics and reflection or as a basis for adaptive systems – is not only about learners. Instead, due to the situatedness of learning, relevant data is also about others (colleagues, customers, clients) and other objects from the learner’s context. Such data may be private, proprietary, or both. Secondly, manual tracking comes with high learner control over data. Thirdly, learning is not necessarily a shared goal in informal professional learning settings. From an ethics perspective, this is particularly problematic as much data that would be relevant for use within learning technologies hasn’t been collected for the purposes of learning. These three themes translate into challenges for AIED research that need to be addressed in order to successfully investigate and develop AIED technology for informal and situated professional learning.
As an outlook of this paper, we connect these challenges to ongoing research directions within AIED – natural language processing, socio-technical design, and scenario-based data collection - that might be leveraged and aimed towards addressing data-related ethics challenges.
Müllner Peter , Lex Elisabeth, Kowald Dominik
2021
In this position paper, we discuss the merits of simulating privacy dynamics in recommender systems. We study this issue at hand from two perspectives: Firstly, we present a conceptual approach to integrate privacy into recommender system simulations, whose key elements are privacy agents. These agents can enhance users' profiles with different privacy preferences, e.g., their inclination to disclose data to the recommender system. Plus, they can protect users' privacy by guarding all actions that could be a threat to privacy. For example, agents can prohibit a user's privacy-threatening actions or apply privacy-enhancing techniques, e.g., Differential Privacy, to make actions less threatening. Secondly, we identify three critical topics for future research in privacy-aware recommender system simulations: (i) How could we model users' privacy preferences and protect users from performing any privacy-threatening actions? (ii) To what extent do privacy agents modify the users' document preferences? (iii) How do privacy preferences and privacy protections impact recommendations and privacy of others? Our conceptual privacy-aware simulation approach makes it possible to investigate the impact of privacy preferences and privacy protection on the micro-level, i.e., a single user, but also on the macro-level, i.e., all recommender system users. With this work, we hope to present perspectives on how privacy-aware simulations could be realized, such that they enable researchers to study the dynamics of privacy within a recommender system.
Geiger Bernhard, Kubin Gernot
2021
This Special Issue aims to investigate the properties of the information bottleneck (IB) functional in its new context in deep learning and to propose learning mechanisms inspired by the IB framework. More specifically, we invited authors to submit manuscripts that provide novel insight into the properties of the IB functional, that apply the IB principle for training deep (i.e., multi-layer) machine learning structures such as NNs, and that investigate the learning behavior of NNs using the IB framework. To cover the breadth of the current literature, we also solicited manuscripts that discuss frameworks inspired by the IB principle but that depart from it in a well-motivated manner.
Gursch Heimo, Ganster Harald, Rinnhofer Alfred, Waltner Georg, Payer Christian, Oberwinkler Christian, Meisenbichler Reinhard, Kern Roman
2021
Refuse sorting is a key technology to increase the recycling rate and reduce the growth of landfills worldwide. The project KI-Waste combines image recognition with time series analysis to monitor and optimise processes in sorting facilities. The image recognition captures the refuse category distribution and particle size of the refuse streams in the sorting facility. The time series analysis focuses on insights derived from machine parameters and sensor values. The combination of results from the image recognition and the time series analysis creates a new holistic view of the complete sorting process and the performance of a sorting facility. This is the basis for comprehensive monitoring, data-driven optimisations, and performance evaluations supporting workers in sorting facilities. Digital solutions allowing the workers to monitor the sorting process remotely are very desirable, since the working conditions in sorting facilities are potentially harmful due to dust, bacteria, and fungal spores. Furthermore, the introduction of objective sorting performance measures enables workers to make informed decisions to improve the sorting parameters and react quicker to changes in the refuse composition. This work describes the ideas and objectives of the KI-Waste project, summarises techniques and approaches used in KI-Waste, gives preliminary findings, and closes with an outlook on future work.
Smieja Marek, Wolczyk Maciej, Tabor Jacek, Geiger Bernhard
2021
We propose a semi-supervised generative model, SeGMA, which learns a joint probability distribution of data and their classes and is implemented in a typical Wasserstein autoencoder framework. We choose a mixture of Gaussians as a target distribution in latent space, which provides a natural splitting of data into clusters. To connect Gaussian components with correct classes, we use a small amount of labeled data and a Gaussian classifier induced by the target distribution. SeGMA is optimized efficiently due to the use of the Cramer-Wold distance as a maximum mean discrepancy penalty, which yields a closed-form expression for a mixture of spherical Gaussian components and, thus, obviates the need of sampling. While SeGMA preserves all properties of its semi-supervised predecessors and achieves at least as good generative performance on standard benchmark data sets, it presents additional features: 1) interpolation between any pair of points in the latent space produces realistically looking samples; 2) combining the interpolation property with disentangling of class and style information, SeGMA is able to perform continuous style transfer from one class to another; and 3) it is possible to change the intensity of class characteristics in a data point by moving the latent representation of the data point away from specific Gaussian components.
Geiger Bernhard
2021
We review the current literature concerned with information plane (IP) analyses of neural network (NN) classifiers. While the underlying information bottleneck theory and the claim that information-theoretic compression is causally linked to generalization are plausible, empirical evidence has been found to be both supporting and conflicting. We review this evidence together with a detailed analysis of how the respective information quantities were estimated. Our survey suggests that compression visualized in IPs is not necessarily information-theoretic but is rather often compatible with geometric compression of the latent representations. This insight gives the IP a renewed justification. Aside from this, we shed light on the problem of estimating mutual information in deterministic NNs and its consequences. Specifically, we argue that, even in feedforward NNs, the data processing inequality need not hold for estimates of mutual information. Similarly, while a fitting phase, in which the mutual information between the latent representation and the target increases, is necessary (but not sufficient) for good classification performance, such a fitting phase need not be visible in the IP, depending on the specifics of mutual information estimation.
Rekabsaz Navi, Kopeinik Simone, Schedl Markus
2021
Societal Biases in Retrieved Contents: Measurement Framework and Adversarial Mitigation of BERT Ranker
Lex Elisabeth, Kowald Dominik, Seitlinger Paul, Tran Tran, Felfernig Alexander, Schedl Markus
2021
Psychology-informed Recommender Systems
Ruiz-Calleja Adolfo, Prieto Luis P., Ley Tobias, Rodrıguez-Triana Marıa Jesus, Dennerlein Sebastian
2021
Despite the ubiquity of learning in workplace and professional settings, the learning analytics (LA) community has paid significant attention to such settings only recently. This may be due to the focus on researching formal learning, as workplace learning is often informal, hard to grasp and not unequivocally defined. This paper summarizes the state of the art of Workplace Learning Analytics (WPLA), extracted from a two-iteration systematic literature review. Our in-depth analysis of 52 existing proposals not only provides a descriptive view of the field, but also reflects on researcher conceptions of learning and their influence on the design, analytics and technology choices made in this area. We also discuss the characteristics of workplace learning that make WPLA proposals different from LA in formal education contexts and the challenges resulting from this. We found that WPLA is gaining momentum, especially in some fields, like healthcare and education. The focus on theory is generally a positive feature in WPLA, but we encourage a stronger focus on assessing the impact of WPLA in realistic settings.
Wolf-Brenner Christof
2021
In his book Superintelligence, Nick Bostrom points to several ways the development of Artificial Intelligence (AI) might fail, turn out to be malignant, or even induce an existential catastrophe. He describes ‘Perverse Instantiations’ (PI) as cases in which AI figures out how to satisfy some goal through unintended ways. For instance, AI could attempt to paralyze human facial muscles into constant smiles to achieve the goal of making humans smile. According to Bostrom, cases like this ought to be avoided, since they include a violation of the human designers’ intentions. However, AI finding solutions that its designers have not yet thought of, and therefore could not have intended, is arguably one of the main reasons why we are so eager to apply it to a variety of problems. In this paper, I aim to show that the concept of PI is quite vague, mostly due to ambiguities surrounding the term ‘intention’. Ultimately, this text aims to serve as a starting point for a further discussion of the research topic, the development of a research agenda, and future improvement of the terminology.
Fessl Angela, Maitz Katharina, Dennerlein Sebastian, Pammer-Schindler Viktoria
2021
Clear formulation and communication of learning goals is an acknowledged best practice in instruction at all levels. Typically, in curricula and course management systems, dedicated places for specifying learning goals at course level exist. However, even in higher education, learning goals are typically formulated in a very heterogeneous manner. They are often not concrete enough to serve as guidance for students to master a lecture or to foster self-regulated learning. In this paper, we present a systematics for formulating learning goals for university courses, and a web-based widget that visualises these learning goals within a university's learning management system. The systematics is based on the revised version of Bloom's taxonomy of educational objectives by Anderson and Krathwohl. We evaluated both the learning goal systematics and the web-based widget in three lectures at our university. The participating lecturers perceived the systematics as easy to use and as helpful for structuring their course and the learning content. Students' perceived benefits lay in getting a quick overview of the lecture and its content as well as clear information regarding the requirements for passing the exam. By analysing the widget's activity log data, we could show that the widget helps students to track their learning progress and supports them in planning and conducting their learning in a self-regulated way. This work highlights how theory-based best practice in teaching can be transferred into a digital learning environment; at the same time, it highlights that a good non-technical systematics for formulating learning goals positively impacts teaching and learning.
Basirat Mina, Geiger Bernhard, Roth Peter
2021
Information plane analysis, describing the mutual information between the input and a hidden layer and between a hidden layer and the target over time, has recently been proposed to analyze the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must thus be estimated, resulting in apparently inconsistent or even contradicting results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
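The binning estimator discussed above can be written compactly as a plug-in estimate over a 2D histogram; the bin count, sample sizes, and test signals below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def binned_mutual_information(x, y, bins=10):
    """Estimate I(X;Y) in nats by discretizing both variables into
    equal-width bins and applying the plug-in (histogram) estimator."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint distribution over bins
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    nz = pxy > 0                              # avoid log(0) terms
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
noise = rng.normal(size=10_000)
mi_dependent = binned_mutual_information(x, x + 0.1 * noise)  # strongly related
mi_independent = binned_mutual_information(x, noise)          # unrelated
```

Note that the result depends on the bin count: for continuous-valued activations, finer binning inflates the estimate, which is exactly why the geometric interpretation above matters when reading information planes.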
Schweimer Christoph, Geiger Bernhard, Wang Meizhu, Gogolenko Sergiy, Mahmood Imran, Jahani Alireza, Suleimenova Diana, Groen Derek
2021
Automated construction of location graphs is instrumental but challenging, particularly in logistics optimisation problems and agent-based movement simulations. Hence, we propose an algorithm for automated construction of location graphs, in which vertices correspond to geographic locations of interest and edges to direct travelling routes between them. Our approach involves two steps. In the first step, we use a routing service to compute distances between all pairs of L locations, resulting in a complete graph. In the second step, we prune this graph by removing edges corresponding to indirect routes, identified using the triangle inequality. The computational complexity of this second step is O(L³), which enables the computation of location graphs for all towns and cities on the road network of an entire continent. To illustrate the utility of our algorithm in an application, we constructed location graphs for four regions of different size and road infrastructures and compared them to manually created ground truths. Our algorithm simultaneously achieved precision and recall values around 0.9 for a wide range of the single hyperparameter, suggesting that it is a valid approach to create large location graphs for which a manual creation is infeasible.
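The two-step construction can be sketched as follows. Euclidean distances stand in for the routing-service distances, and the tolerance parameter `tol` is an assumed name for the algorithm's single hyperparameter; both are illustrative, not the paper's exact formulation.

```python
import numpy as np

def build_location_graph(dist, tol=0.05):
    """Prune a complete distance graph to direct routes.

    An edge (i, j) is removed if some intermediate location k offers a
    route i -> k -> j that is at most (1 + tol) times the direct
    distance, i.e. the direct edge is (nearly) redundant by the
    triangle inequality.  dist is an (L, L) symmetric distance matrix;
    returns a boolean adjacency matrix.  The triple loop makes the
    pruning step O(L^3), matching the stated complexity.
    """
    L = dist.shape[0]
    adj = np.ones((L, L), dtype=bool)
    np.fill_diagonal(adj, False)
    for i in range(L):
        for j in range(i + 1, L):
            for k in range(L):
                if k in (i, j):
                    continue
                if dist[i, k] + dist[k, j] <= (1 + tol) * dist[i, j]:
                    adj[i, j] = adj[j, i] = False
                    break
    return adj

# Three collinear locations: the long direct edge 0-2 is indirect via 1.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
adj = build_location_graph(dist)
```

In the collinear example, the edge between the two outer locations is pruned because travelling via the middle location is no longer than the direct route, while the two short edges survive.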
Geiger Bernhard, Al-Bashabsheh Ali
2021
We derive two sufficient conditions for a function of a Markov random field (MRF) on a given graph to be an MRF on the same graph. The first condition is information-theoretic and parallels a recent information-theoretic characterization of lumpability of Markov chains. The second condition, which is easier to check, is based on the potential functions of the corresponding Gibbs field. We illustrate our sufficient conditions with several examples and discuss implications for practical applications of MRFs. As a side result, we give a partial characterization of functions of MRFs that are information preserving.
Kowald Dominik, Müllner Peter, Zangerle Eva, Bauer Christine, Schedl Markus, Lex Elisabeth
2021
Support the Underground: Characteristics of Beyond-Mainstream Music Listeners. EPJ Data Science
Schedl Markus, Bauer Christine, Reisinger Wolfgang, Kowald Dominik, Lex Elisabeth
2021
Listener Modeling and Context-Aware Music Recommendation Based on Country Archetypes
Müllner Peter, Kowald Dominik, Lex Elisabeth
2021
In this paper, we explore the reproducibility of MetaMF, a meta matrix factorization framework introduced by Lin et al. MetaMF employs meta learning for federated rating prediction to preserve users' privacy. We reproduce the experiments of Lin et al. on five datasets, i.e., Douban, Hetrec-MovieLens, MovieLens 1M, Ciao, and Jester. Also, we study the impact of meta learning on the accuracy of MetaMF's recommendations. Furthermore, in our work, we acknowledge that users may have different tolerances for revealing information about themselves. Hence, in a second strand of experiments, we investigate the robustness of MetaMF against strict privacy constraints. Our study illustrates that we can reproduce most of Lin et al.'s results. Moreover, we provide strong evidence that meta learning is essential for MetaMF's robustness against strict privacy constraints.
Kefalas Achilles, Ofner Andreas Benjamin, Pirker Gerhard, Posch Stefan, Geiger Bernhard, Wimmer Andreas
2021
The phenomenon of knock is an abnormal combustion occurring in spark-ignition (SI) engines and forms a barrier that prevents an increase in thermal efficiency while simultaneously reducing CO2 emissions. Since knocking combustion is highly stochastic, a cyclic analysis of in-cylinder pressure is necessary. In this study we propose an approach for efficient and robust detection and identification of knocking combustion in three different internal combustion engines. The proposed methodology includes a signal processing technique, called continuous wavelet transformation (CWT), which provides a simultaneous analysis of the in-cylinder pressure traces in the time and frequency domains. The resulting coefficients serve as input for a convolutional neural network (CNN) which extracts distinctive features and performs an image recognition task in order to distinguish between knocking and non-knocking cycles. The results revealed the following: (i) the CWT delivered a stable and effective feature space whose coefficients represent the unique time-frequency pattern of each individual in-cylinder pressure cycle; (ii) the proposed approach was superior to the state-of-the-art threshold value exceeded (TVE) method with a maximum amplitude pressure oscillation (MAPO) criterion, improving the overall accuracy by 6.15 percentage points (up to 92.62%); and (iii) the CWT + CNN method does not require calibrating threshold values for different engines or operating conditions as long as sufficient and diverse data are used to train the neural network.
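As a rough illustration of the first stage, here is a minimal numpy sketch of a continuous wavelet transform with a real-valued Morlet-style wavelet; the engine data, scales and normalisation used in the study are not public, so everything below is an assumption:

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Continuous wavelet transform with a real-valued Morlet-style
    wavelet: correlate the signal with scaled copies of the wavelet
    and stack the coefficient rows into a time-frequency map."""
    n = len(signal)
    coeffs = np.empty((len(scales), n))
    for r, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)           # wavelet support
        wavelet = np.cos(w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet /= np.sqrt(s)                       # scale normalisation
        coeffs[r] = np.convolve(signal, wavelet, mode="same")
    return coeffs

# A 1 kHz pressure-like oscillation sampled at 100 kHz:
fs = 100_000
t = np.arange(0, 0.01, 1 / fs)
trace = np.sin(2 * np.pi * 1_000 * t)
scalogram = morlet_cwt(trace, scales=np.arange(4, 64, 4))
print(scalogram.shape)
```

Each row of the resulting scalogram corresponds to one scale; the full coefficient map is what a CNN would consume as an image.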
Krusic Lucija, Schuppler Barbara, Hagmüller Martin, Kopeinik Simone
2021
Due to recent advances in digitalisation and the emergence of new technologies, the STEM job market keeps growing, leading to higher salaries and lower unemployment rates. Despite these advantages, a pressing economic need for qualified STEM personnel and many initiatives for increasing interest in STEM subjects, Austrian technical universities have consistently had issues with recruiting engineering students. Women in particular remain strongly underrepresented in STEM careers. Possible causes of this gender gap can be found in the effects of stereotype threat and the influence of role models, as stereotypical representations affect young people in various phases of their personal and professional development. As part of the project proposal “Gender differences in career choices: Does the language matter?”, we investigated gender biases that potential students of Austrian STEM universities might face, and conducted two pilot studies: i) an analysis of EFL textbooks used in Austrian high schools, and ii) an analysis of viewbooks used as promotional material for Austrian universities. We consider EFL (English as a foreign language) textbooks, which are widely used in teaching, particularly relevant, since each of these books includes a dedicated section on careers. In the first pilot study, we conducted a content analysis of eight textbooks, examining the personas presented in the context of careers and jobs for gender bias. While the results point to a nearly equal distribution of male and female personas (we found 9% more male characters), the personas were not equally distributed among the different careers. Female personas were commonly associated with traditionally female careers (“stay-at-home mom”, “housewife”), which can be classified as indoor and domestic, while male personas tended to be associated with more prestigious, outdoor occupations (“doctor”, “police officer”). STEM occupations were predominantly (80%) associated with the male gender.
Thus, the analysis of the Austrian EFL textbooks clearly points to the existence of gender stereotyping and gender bias in the relationship between gender and career choice. In the second pilot study, we analysed the symbolic portrayal of gender diversity in 52 Austrian university viewbooks, one for each bachelor programme at five universities covering fields such as STEM, economics and law. As part of the analysis, we compared the representations of male and female students and professors with the actual student and faculty body. Results show a rather equal numeric gender representation in the non-technical university viewbooks, but not in those of the technical universities analysed. The comparison to the real-life gender distribution of students revealed instances of underrepresentation of the male student body and overrepresentation of the female student body in technical university viewbooks (e.g., 15.4% underrepresentation of male students and 15.3% overrepresentation of female students in the TU Graz viewbooks). We consider this a positive finding, as we believe that a diverse and gender-neutral representation of people in educational and career information materials has the potential to encourage a desired change in prospective students' perception of STEM subjects and engineering sciences.
Kaiser Rene_DB
2021
Request for quotation (RFQ) is a process that typically requires a company to inspect specification documents shared by a potential customer. In order to create an offer, requirements need to be extracted from the specifications. In a collaborative research project, we investigate methods to support the document-centric knowledge work that offer engineers conduct when processing RFQs, and started to develop a software tool including artificial/assistive intelligence features, several of which are based on natural language processing (NLP). Based on our concrete application case, we have identified three aspects towards which intelligent, adaptive user interfaces may contribute: adaptation to specific workflow approaches, adaptation to user-specific annotation behaviour with respect to the automatic provision of suggestions, and support for the user to maintain concentration while conducting an everyday routine task. In a preliminary, conceptual research phase, we seek to discuss these ideas and develop them further.
Lovric Mario, Kern Roman, Fadljevic Leon, Gerdenitsch Johann, Steck Thomas, Peche Ernst
2021
In industrial electro galvanizing lines, the performance of the dimensionally stable anodes (Ti + IrOx) is a crucial factor for product quality. Ageing of the anodes causes a worsened zinc coating distribution on the steel strip and a significant increase in production costs due to a higher resistivity of the anodes. Up to now, the end of the anode lifetime has been detected by visual inspection every several weeks. The voltage of the rectifiers increases much earlier, indicating the deterioration of anode performance. Monitoring the rectifier voltage therefore has the potential for a premature determination of the end of anode lifetime. Anode condition is, however, only one of many parameters affecting the rectifier voltage. In this work we employed machine learning to predict expected baseline rectifier voltages for a variety of steel strips and operating conditions at an industrial electro galvanizing line. In the plating section the strip passes twelve “Gravitel” cells, and zinc from the electrolyte is deposited on the surface at high current densities. Data collected on one exemplary rectifier unit equipped with two anodes have been studied for a period of two years. The dataset consists of one target variable (rectifier voltage) and nine predictive variables describing electrolyte, current and steel strip characteristics. For predictive modelling, we used Random Forest Regression. Training was conducted on intervals after the plating cell was equipped with new anodes. Our results show a Normalized Root Mean Square Error of Prediction (NRMSEP) of 1.4% for the baseline rectifier voltage during good anode condition. When the anode condition was estimated as bad (by manual inspection), we observed a large distinctive deviation from the predicted baseline voltage. The gained information about the observed deviation can be used for early detection and classification of anode ageing, to recognize the onset of damage and reduce total operating cost.
Kraus Pavel, Bornemann Manfred, Alwert Kay, Matern Andreas, Reimer Ulrich, Kaiser Rene
2020
Until 2007, knowledge management (Wissensmanagement, WM) lacked a commonly shared conceptual and definitional foundation. Especially in economically difficult times, KM as a discipline must ensure its own clarity and rigour: fragmentation into different schools of thought weakens KM communication, adoption and further development. The DACH KM glossary appears in a new form, as a pragmatic synthesis of the glossaries of the 2007 practice handbook of the WM-Forum Graz and the 2009 DACH KM glossary, supplemented by additional sources.
Velimsky Jan, Schweimer Christoph, Tran Thi Ngoc Han, Gfrerer Christine
2020
In this paper, we investigate the information sharing patterns via Twitter for the social media networks of two ideologically divergent political parties, the Freedom Party (FPOE) and the NEOS, in the lead-up to and during the 2019 Austrian National Council Elections, and ask: 1) To what extent do the associated networks differ in their structure? 2) Which determinants affect the spreading behaviour of messages in the two networks, and which factors explain these differences? 3) What type of political news and information did verified users (e.g., news media or politicians) share ahead of the vote, and which role do these users play in the dissemination of messages in the respective networks? Analysing approximately 200,000 tweets, the study relies on qualitative and quantitative text analysis including sentiment analysis, on supervised classification of relevant attributes for the message spread combined with neural network models retrieving the retweet probabilities of source tweets, and on network analysis. In addition to notable differences between the two parties in network structure and Twitter usage, we find that verified users, as well as URLs, other media elements (videos or photos) and hashtags play an important role in the spreading of messages. We also reveal that negative sentiments have a higher retweetability than other sentiments. Interestingly, gender seems to matter in the network related to the FPOE, where male users get more retweets than female users.
Geiger Bernhard, Kubin Gernot
2020
Guest editorial for a special issue.
Gursch Heimo, Schlager Elke, Feichtinger Gerald, Brandl Daniel
2020
The comfort humans perceive in rooms depends on many influencing factors and is currently only poorly recorded and maintained. This is due to circumstances like the subjective nature of perceived comfort and the lack of sensors or data processing infrastructure. Project COMFORT (Comfort Orientated and Management Focused Operation of Room condiTions) researches the modelling of perceived thermal comfort of humans in office rooms. This begins with extensive, long-term measurements taken in a laboratory test chamber and in real-world office rooms. Data is collected from the installed building services engineering systems, from high-accuracy reference measurement equipment and from weather services describing the outside conditions. All data is stored in a specially developed central Data Management System (DMS), creating the basis for all research and studies in project COMFORT. The collected data is the key enabler for the creation of soft sensors describing comfort-relevant indices like predicted mean vote (PMV), predicted percentage of dissatisfied (PPD) and operative temperature (OT). Two different approaches are pursued, complementing and extending each other in the realisation of soft sensors. Firstly, a purely data-driven modelling approach generates models for soft sensors by learning the relations between explanatory and target variables in the collected data. Secondly, simulation-based soft sensors are derived from Building Energy Simulation (BES) and Computational Fluid Dynamic (CFD) simulations. The first result of the data-driven analysis is a solar Radiation Modelling (RM) component, capable of splitting global radiation into its direct horizontal and diffuse components. This is needed since only global radiation data is available for the investigated locations, but the global radiation needs to be divided into direct and diffuse radiation due to the hugely different thermal impacts of the two components on buildings.
The current BES and CFD simulations provide as their results soft sensors for comfort-relevant indices, which will be complemented by data-driven soft sensors in the remainder of the project.
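The RM component itself is not described in detail here; as an illustration of the kind of direct/diffuse split it performs, below is a sketch using the empirical Erbs correlation (diffuse fraction as a piecewise function of the clearness index kt). Function names and the example inputs are assumptions:

```python
import math

def erbs_diffuse_fraction(kt):
    """Diffuse fraction of global horizontal irradiance as a function
    of the clearness index kt, after the empirical Erbs correlation."""
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt ** 2
                - 16.638 * kt ** 3 + 12.336 * kt ** 4)
    return 0.165

def split_global_radiation(ghi, zenith_deg, solar_constant=1361.0):
    """Split measured global horizontal irradiance (W/m^2) into its
    direct horizontal and diffuse components."""
    cos_z = math.cos(math.radians(zenith_deg))
    kt = ghi / (solar_constant * cos_z)   # clearness index
    diffuse = erbs_diffuse_fraction(kt) * ghi
    return ghi - diffuse, diffuse          # (direct, diffuse)

direct, diffuse = split_global_radiation(ghi=600.0, zenith_deg=40.0)
print(round(direct, 1), round(diffuse, 1))
```

The two components can then drive thermal models separately, which is the point of the split described above.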
Dumouchel Suzane, Blotiere Emilie, Breitfuß Gert, Chen Yin, Di Donato Francesca, Eskevich Maria, Forbes Paula, Georgiadis Haris, Gingold Arnaud, Gorgaini Elisa, Morainville Yoann, de Paoli Stefano, Petitfils Clara, Pohle Stefanie, Toth-Czifra Erzebeth
2020
Social sciences and humanities (SSH) research is divided across a wide array of disciplines, sub-disciplines and languages. While this specialisation makes it possible to investigate the extensive variety of SSH topics, it also leads to a fragmentation that prevents SSH research from reaching its full potential. The TRIPLE project addresses these issues by developing an innovative discovery platform for SSH data, researchers' projects and profiles. Having started in October 2019, the project has already produced three main achievements, which are presented in this paper: 1) the definition of the main features of the GOTRIPLE platform; 2) its interoperability; 3) its multilingual, multicultural and interdisciplinary vocation. These results have been achieved thanks to different methodologies such as a co-design process, market analysis and benchmarking, monitoring and co-building. These preliminary results highlight the need to respect the diversity of practices and communities through coordination and harmonisation.
Ciura Krzesimir, Fedorowicz Joanna, Zuvela Petar, Lovric Mario, Kapica Hanna, Baranowski Pawel, Sawicki Wieslaw, Wong Ming Wah, Sączewski Jaroslaw
2020
Currently, rapid evaluation of the physicochemical parameters of drug candidates, such as lipophilicity, is in high demand owing to it enabling the approximation of the processes of absorption, distribution, metabolism, and elimination. Although the lipophilicity of drug candidates is determined using the shake-flask method (n-octanol/water system) or reversed phase liquid chromatography (RP-LC), more biosimilar alternatives to classical lipophilicity measurement are currently available. One of the alternatives is immobilized artificial membrane (IAM) chromatography. The present study is a continuation of our research focused on physicochemical characterization of biologically active derivatives of isoxazolo[3,4-b]pyridine-3(1H)-ones. The main goal of this study was to assess the affinity of isoxazolones to phospholipids using IAM chromatography and compare it with the lipophilicity parameters established by reversed phase chromatography. Quantitative structure–retention relationship (QSRR) modeling of IAM retention using differential evolution coupled with partial least squares (DE-PLS) regression was performed. The results indicate that in the studied group of structurally related isoxazolone derivatives, discrepancies occur between the retention under IAM and RP-LC conditions. Although some correlation between these two chromatographic methods can be found, lipophilicity does not fully explain the affinities of the investigated molecules to phospholipids. QSRR analysis also shows common factors that contribute to retention under IAM and RP-LC conditions. In this context, the significant influences of WHIM and GETAWAY descriptors in all the obtained models should be highlighted.
Lovric Mario, Meister Richard, Steck Thomas, Fadljevic Leon, Gerdenitsch Johann, Schuster Stefan, Schiefermüller Lukas, Lindstaedt Stefanie, Kern Roman
2020
In industrial electro galvanizing lines, aged anodes deteriorate the zinc coating distribution over the strip width, leading to an increase in electricity and zinc cost. We introduce a data-driven approach to the predictive maintenance of anodes to replace the cost- and labor-intensive manual inspection, which is still common for this task. The approach is based on parasitic resistance as an indicator of anode condition, which might be aged or mis-installed. The parasitic resistance is indirectly observable via the voltage difference between the measured voltage and the baseline (theoretical) voltage for a healthy anode. Here we calculate the baseline voltage by means of two approaches: (1) a physical model based on electrical and electrochemical laws, and (2) advanced machine learning techniques including boosting and bagging regression. The data was collected on one exemplary rectifier unit equipped with two anodes, studied over a total period of two years. The dataset consists of one target variable (rectifier voltage) and nine predictive variables used in the models, observing electrical current, electrolyte, and steel strip characteristics. For predictive modelling, we used Random Forest, Partial Least Squares and AdaBoost Regression. The model training was conducted on intervals where the anodes were in good condition and validated on other segments, which served as a proof of concept that bad anode conditions can be identified using the parasitic resistance predicted by our models. Our results show an RMSE of 0.24 V for the baseline rectifier voltage with a mean ± standard deviation of 11.32 ± 2.53 V for the best model on the validation set. The best-performing model is a hybrid version of a Random Forest which incorporates meta-variables computed from the physical model. We found that a large predicted parasitic resistance coincides well with the results of the manual inspection.
The results of this work will be implemented in online monitoring of anode conditions to reduce operational cost at a production site.
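The core idea, training a regressor on good-anode intervals and flagging large positive residuals as parasitic resistance, can be sketched as follows; the synthetic data, coefficients and threshold are assumptions, not the plant data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

# Synthetic stand-in for the plant data: nine predictors describing
# current, electrolyte and strip characteristics; target is voltage.
X_healthy = rng.normal(size=(500, 9))
voltage_healthy = (11.3 + 1.2 * X_healthy[:, 0] + 0.6 * X_healthy[:, 1]
                   + 0.05 * rng.normal(size=500))

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_healthy, voltage_healthy)    # train on good-anode intervals only

# At inference time, the parasitic-resistance indicator is the residual
# between the measured voltage and the predicted healthy baseline:
X_new = rng.normal(size=(50, 9))
baseline = model.predict(X_new)
measured = 11.3 + 1.2 * X_new[:, 0] + 0.6 * X_new[:, 1] + 1.5  # aged anode: +1.5 V
excess = measured - baseline
print(excess.mean() > 1.0)  # large positive residual flags anode ageing
```

A consistently large positive residual, rather than any single reading, would then trigger the maintenance alarm in an online-monitoring setting.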
Obermeier Melanie Maria, Wicaksono Wisnu Adi, Taffner Julian, Bergna Alessandro, Poehlein Anja, Cernava Tomislav, Lindstaedt Stefanie, Lovric Mario, Müller Bogota Christina Andrea, Berg Gabriele
2020
The expanding antibiotic resistance crisis calls for a more in-depth understanding of the importance of antimicrobial resistance genes (ARGs) in pristine environments. We therefore studied the microbiome associated with Sphagnum moss forming the main vegetation in undomesticated, evolutionarily old bog ecosystems. In our complementary analysis of culture collections, metagenomic data and a fosmid library from different geographic sites in Europe, we identified a low-abundance but highly diverse pool of resistance determinants, which targets an unexpectedly broad range of 29 antibiotics including natural and synthetic compounds. This derives both from the extraordinarily high abundance of efflux pumps (up to 96%) and from the unexpectedly versatile set of ARGs underlying all major resistance mechanisms. Multi-resistance was frequently observed among bacterial isolates, e.g. in Serratia, Rouxiella, Pandoraea, Paraburkholderia and Pseudomonas. In a search for novel ARGs, we identified the new class A β-lactamase Mm3. The native Sphagnum resistome, comprising a highly diversified and partially novel set of ARGs, contributes to the bog ecosystem's plasticity. Our results reinforce the ecological link between natural and clinically relevant resistomes and thereby shed light onto this link from the aspect of pristine plants. Moreover, they underline that diverse resistomes are an intrinsic characteristic of plant-associated microbial communities, which naturally harbour many resistances including genes with potential clinical relevance.
Rauter Romana, Lerch Anita, Lederer-Hutsteiner Thomas, Klinger Sabine, Mayr Andrea, Gutounig Robert, Pammer-Schindler Viktoria
2020
Barreiros Carla, Silva Nelson, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2020
Kern Roman, Al-Ubaidi Tarek, Sabol Vedran, Krebs Sarah, Khodachenko Maxim, Scherf Manuel
2020
Scientific progress in the area of machine learning, in particular advances in deep learning, has led to an increase in interest in eScience and related fields. While such methods achieve great results, an in-depth understanding of these new technologies and concepts is still often lacking, and domain knowledge and subject matter expertise play an important role. With regard to space science there is a vast variety of application areas, in particular regarding the analysis of observational data. This chapter aims at introducing a number of promising approaches to analyze time series data via query by example: any signal can be provided to the system, which then responds with a ranked list of datasets containing similar signals. Building on top of this ability, the system can then be trained using annotations provided by expert users, with the goal of detecting similar features and hence providing a semiautomated analysis and classification. A prototype built to work on MESSENGER data, based on existing background implementations by the Know-Center in cooperation with the Space Research Institute in Graz, is presented. Further, several representations of time series data that proved to be required for analysis tasks, as well as techniques for preprocessing, frequent pattern mining, outlier detection, and classification of segmented and unsegmented data, are discussed. Screenshots of the developed prototype, detailing various techniques for the presentation of signals, complete the discussion.
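A minimal sketch of the query-by-example idea: slide the query over a long series and rank window positions by the distance between z-normalised segments. The real system's similarity measure and indexing are not specified here, so this is only illustrative:

```python
import numpy as np

def znorm(x):
    """Z-normalise so matching is invariant to offset and amplitude."""
    return (x - x.mean()) / (x.std() + 1e-12)

def query_by_example(query, series, top_k=3):
    """Slide the query over the series and rank window positions by
    Euclidean distance between z-normalised segments (smaller = similar)."""
    m = len(query)
    q = znorm(np.asarray(query, float))
    dists = [(float(np.linalg.norm(q - znorm(series[i:i + m]))), i)
             for i in range(len(series) - m + 1)]
    return sorted(dists)[:top_k]

rng = np.random.default_rng(1)
series = rng.normal(size=2_000)
pattern = np.sin(np.linspace(0, 4 * np.pi, 64))
series[700:764] += 3 * pattern           # bury a known signature in noise
matches = query_by_example(pattern, series)
print(matches[0][1])                     # best match position, near 700
```

The ranked list of window positions plays the role of the ranked list of similar datasets described above; expert annotations could then turn such matches into labelled training data.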
Dennerlein Sebastian, Wolf-Brenner Christof, Gutounig Robert, Schweiger Stefan, Pammer-Schindler Viktoria
2020
Artificial intelligence (AI) has become a subject of public debate. AI-based advice supports us at school, in everyday shopping, in holiday planning and in media consumption, but it also deliberately manipulates our decisions or distorts our perception of reality through filter-bubble phenomena. One of the most recent controversies in Austria concerned the use of modern algorithms by the Austrian public employment service AMS. The so-called "AMS algorithm" is intended to support counsellors in deciding on support measures. When AI intervenes in human action to such a considerable extent, it requires careful assessment with regard to ethical principles. This is necessary to avoid unethical consequences. It is usually demanded that AI and algorithms be fair, meaning that they should not discriminate, and transparent, i.e., that they provide insight into how they work.
Fessl Angela, Pammer-Schindler Viktoria, Pata Kai, Mõttus Mati, Janus Jörgen, Ley Tobias
2020
This paper presents cooperative design as a method to address the need of SMEs to gain sufficient knowledge about new technologies in order for them to decide about adoption for knowledge management. We developed and refined a cooperative design method iteratively over nine use cases. In each use case, the goal was to match the SME's knowledge management needs with the offerings of new (to the SMEs) technologies. Where traditionally, innovation adoption and diffusion literature assumes new knowledge to be transferred from knowledgeable stakeholders to less knowledgeable stakeholders, our method is built on cooperative design. In this, the relevant knowledge is constructed by the SMEs who wish to decide upon the adoption of novel technologies through the cooperative design process. The presented method consists of an analysis stage based on activity theory and a design stage based on paper prototyping and design workshops. In all nine cases, our method led to a good understanding a) of the domain by researchers, validated by the creation of meaningful first-version paper prototypes, and b) of new technologies, validated by meaningful input to design and a plausible assessment of the technologies' benefit for the respective SME. Practitioners and researchers alike are invited to use the tools documented here to cooperatively match the domain needs of practitioners with the offerings of new technologies. The value of our work lies in providing a concrete implementation of the cooperative design paradigm that is based on an established theory (activity theory) for work analysis and established tools of cooperative design (paper prototypes and design workshops as media of communication); and a discussion based on nine heterogeneous use cases.
Geiger Bernhard, Fischer Ian
2020
In this short note, we relate the variational bounds proposed in Alemi et al. (2017) and Fischer (2020) for the information bottleneck (IB) and the conditional entropy bottleneck (CEB) functional, respectively. Although the two functionals were shown to be equivalent, it was empirically observed that optimizing bounds on the CEB functional achieves better generalization performance and adversarial robustness than optimizing those on the IB functional. This work tries to shed light on this issue by showing that, in the most general setting, no ordering can be established between these variational bounds, while such an ordering can be enforced by restricting the feasible sets over which the optimizations take place. The absence of such an ordering in the general setup suggests that the variational bound on the CEB functional is either more amenable to optimization or a relevant cost function for optimization in its own right, i.e., without justification from the IB or CEB functionals.
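Under one common parameterisation, and assuming the Markov chain Z – X – Y that both formulations impose, the equivalence of the two functionals mentioned above can be reconstructed as a one-line identity (a sketch; the exact sign and parameter conventions differ between the two papers):

```latex
% Markov chain Z -- X -- Y gives I(X,Y;Z) = I(X;Z), so by the chain rule
% I(X;Z \mid Y) = I(X;Z) - I(Y;Z). Hence, in one common parameterisation,
\mathcal{L}_{\mathrm{IB}}  = I(X;Z) - \beta\, I(Y;Z), \qquad
\mathcal{L}_{\mathrm{CEB}} = I(X;Z \mid Y) - \gamma\, I(Y;Z)
                           = I(X;Z) - (1+\gamma)\, I(Y;Z),
% i.e. the two functionals coincide for \beta = 1 + \gamma.
```

The note's point is that this equivalence of the functionals does not carry over to an ordering of their variational bounds in the general setting.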
Tschinkel Gerwald
2020
One classic issue associated with being a researcher nowadays is the multitude and magnitude of search results for a given topic. Recommender systems can help to fix this problem by directing users to the resources most relevant to their specific research focus. However, sets of automatically generated recommendations are likely to contain irrelevant resources, making user interfaces that provide effective filtering mechanisms necessary. This problem is exacerbated when users resume a previously interrupted research task, or when different users attempt to tackle one extensive list of results, since confusion as to which resources should be consulted can be overwhelming. The presented recommendation dashboard uses micro-visualisations to display the state of multiple filters in a data type-specific manner. This paper describes the design and geometry of micro-visualisations and presents results from an evaluation of their readability and memorability in the context of exploring recommendation results. Based on that, this paper also proposes applying micro-visualisations to extend the use of a desktop-based dashboard to the needs of small-screen, mobile multi-touch devices, such as smartphones. A small-scale heuristic evaluation was conducted using a first prototype implementation.
Žuvela Petar, Lovric Mario, Yousefian-Jazi Ali, Liu J. Jay
2020
Numerous industrial applications of machine learning feature critical issues that need to be addressed. This work proposes a framework to deal with these issues, such as competing objectives and class imbalance, in designing a machine vision system for the in-line detection of surface defects on glass substrates of thin-film transistor liquid crystal displays (TFT-LCDs). The developed inspection system comprises (i) feature engineering: extraction of only the defect-relevant features from images using two-dimensional wavelet decomposition, and (ii) training ensemble classifiers (proof of concept with a C5.0 ensemble, random forests (RF), and adaptive boosting (AdaBoost)). The focus is on cost sensitivity, increased generalization, and robustness to handle class imbalance and address multiple competing manufacturing objectives. Comprehensive performance evaluation was conducted in terms of accuracy, sensitivity, specificity, and the Matthews correlation coefficient (MCC) by calculating their 12,000 bootstrapped estimates. Results revealed significant differences (p < 0.05) between the three developed diagnostic algorithms. RF (accuracy of 83.37%, sensitivity of 60.62%, specificity of 89.72%, and MCC of 0.51) outperformed both AdaBoost (accuracy of 81.14%, sensitivity of 69.23%, specificity of 84.48%, and MCC of 0.50) and the C5.0 ensemble (accuracy of 78.35%, sensitivity of 65.35%, specificity of 82.03%, and MCC of 0.44) in all the metrics except sensitivity. AdaBoost exhibited stronger performance in detecting defective TFT-LCD glass substrates. These promising results demonstrated that the proposed ensemble approach is a viable alternative to manual inspections when applied to an industrial case study with issues such as competing objectives and class imbalance.
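As a sketch of step (i), a one-level 2-D Haar decomposition (here with unnormalised averages and differences) separates a defect-free background from a scratch via the energy of the detail sub-bands; the toy image and the feature choice below are illustrative assumptions:

```python
import numpy as np

def haar2d(img):
    """One level of 2-D Haar wavelet decomposition: returns the
    approximation (LL) and the horizontal/vertical/diagonal detail
    sub-bands (LH, HL, HH). Expects even image dimensions."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def defect_features(img):
    """Mean energy of each detail sub-band; scratches and spots
    concentrate energy in the details while a flat background does not."""
    _, lh, hl, hh = haar2d(img)
    return [float((b ** 2).mean()) for b in (lh, hl, hh)]

flat = np.ones((64, 64))
scratched = flat.copy()
scratched[:, 32] = 5.0                      # a vertical scratch
print(defect_features(flat), defect_features(scratched))
```

Feature vectors of this kind (one per image or image patch) would then feed the ensemble classifiers of step (ii).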
Malev Olga, Lovric Mario, Stipaničev Draženka, Repec Siniša, Martinović-Weigelt Dalma, Zanella Davor, Đuretec Valnea Sindiči, Barišić Josip, Li Mei, Klobučar Göran
2020
Chemical analysis of plasma samples of wild fish from the Sava River (Croatia) revealed the presence of 90 different pharmaceuticals/illicit drugs and their metabolites (PhACs/IDrgs). The concentrations of these PhACs/IDrgs in plasma were 10 to 1,000 times higher than their concentrations in river water. Antibiotics, allergy/cold medications and analgesics were the categories with the highest plasma concentrations. Fifty PhACs/IDrgs were identified as chemicals of concern based on the fish plasma model (FPM) effect ratios (ER) and their potential to activate evolutionarily conserved biological targets. Chemicals of concern were also prioritized by calculating exposure-activity ratios (EARs), where plasma concentrations of chemicals were compared to their bioactivities in the comprehensive ToxCast suite of in vitro assays. Overall, the applied prioritization methods indicated stimulants (nicotine, cotinine) and allergy/cold medications (prednisolone, dexamethasone) as having the highest potential biological impact on fish. The FPM pointed to psychoactive substances (hallucinogens/stimulants and opioids) and psychotropic substances in the cannabinoids category (i.e. CBD and THC). EAR confirmed the above and singled out additional chemicals of concern: the anticholesteremic simvastatin and the antiepileptic haloperidol. The present study demonstrates how the use of a combination of chemical analyses and bio-effects-based risk predictions with multiple criteria can help identify priority contaminants in freshwaters. The results reveal a widespread exposure of fish to complex mixtures of PhACs/IDrgs, which may target common molecular targets. While many of the prioritized chemicals occurred at low concentrations, their adverse effect on aquatic communities, due to continuous chronic exposure and additive effects, should not be neglected.
Duricic Tomislav, Hussain Hussain, Lacic Emanuel, Kowald Dominik, Lex Elisabeth, Helic Denis
2020
In this work, we study the utility of graph embeddings to generate latent user representations for trust-based collaborative filtering. In a cold-start setting, on three publicly available datasets, we evaluate approaches from four method families: (i) factorization-based, (ii) random-walk-based, (iii) deep-learning-based, and (iv) the Large-scale Information Network Embedding (LINE) approach. We find that, across the four families, random-walk-based approaches consistently achieve the best accuracy. Moreover, they yield highly novel and diverse recommendations. Furthermore, our results show that the use of graph embeddings in trust-based collaborative filtering significantly improves user coverage.
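The recommendation step described above — using latent user representations in a nearest-neighbour fashion — can be sketched as follows. The embedding vectors and interaction sets are toy placeholders standing in for, e.g., random-walk-based node embeddings of a trust network; this is a minimal illustration, not the authors' implementation:

```python
from math import sqrt

def cosine(u, v):
    # cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user, embeddings, likes, k=1, n=1):
    """Recommend up to n unseen items liked by the k nearest users
    in the latent embedding space."""
    neighbours = sorted(
        (u for u in embeddings if u != user),
        key=lambda u: cosine(embeddings[user], embeddings[u]),
        reverse=True,
    )[:k]
    seen = likes.get(user, set())
    scores = {}
    for nb in neighbours:
        for item in likes.get(nb, set()):
            if item not in seen:
                scores[item] = scores.get(item, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)[:n]

# toy latent user vectors (standing in for learned graph embeddings)
emb = {"alice": [1.0, 0.0], "bob": [0.9, 0.1], "carol": [0.0, 1.0]}
likes = {"alice": {"item1"}, "bob": {"item1", "item2"}, "carol": {"item3"}}
print(recommend("alice", emb, likes))  # → ['item2']
```

Because recommendations come from similar users rather than from item popularity, neighbours with unusual tastes naturally surface novel and diverse items, which matches the effect reported above.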
Havaš Auguštin, Dubravka, Šarac, Jelena, Lovric Mario, Živković, Jelena, Malev, Olga, Fuchs, Nives, Novokmet, Natalija, Turkalj, Mirjana, Missoni, Saša
2020
Maternal nutrition and lifestyle in pregnancy are important modifiable factors for both maternal and offspring's health. Although the Mediterranean diet has beneficial effects on health, recent studies have shown low adherence in Europe. This study aimed to assess Mediterranean diet adherence in 266 pregnant women from Dalmatia, Croatia, and to investigate their lifestyle habits and regional differences. Adherence to the Mediterranean diet was assessed through two Mediterranean diet scores. Differences in maternal characteristics (diet, education, income, parity, smoking, pre-pregnancy body mass index (BMI), physical activity, contraception) with regard to location and dietary habits were analyzed using the non-parametric Mann–Whitney U test. A machine learning approach was used to reveal other potential non-linear relationships. The results showed that adherence to the Mediterranean diet was low to moderate among the pregnant women in this study, with no significant mainland–island differences. The highest adherence was observed among wealthier women with generally healthier lifestyle choices. The most significant mainland–island differences were observed for lifestyle and socioeconomic factors (income, education, physical activity). The machine learning approach confirmed the findings of the conventional statistical method. We can conclude that adverse socioeconomic and lifestyle conditions were more pronounced in the island population, which, together with the observed non-Mediterranean dietary pattern, calls for more effective intervention strategies.
Reiter-Haas Markus, Wittenbrink David, Lacic Emanuel
2020
Finding the right job is a difficult task for anyone, as it usually depends on many factors like salary, job description, or geographical location. Students with almost no prior experience, especially, have a hard time on the job market, which is very competitive in nature. Additionally, students often suffer from a lack of orientation, as they do not know what kind of job is suitable for their education. At Talto, we realized this and have built a platform to help Austrian university students find their career paths, as well as to provide them with content that is relevant to their career possibilities. This is mainly achieved by guiding the students toward different types of entities that are related to their career, i.e., job postings, company profiles, and career-related articles. In this talk, we share our experiences with solving the recommendation problem for university students. One trait of the student-focused job domain is that the behaviour of the students differs depending on their study progression. At the beginning of their studies, they need study-specific career information and part-time jobs to earn additional money, whereas, when they are nearing graduation, they require information about their potential future employers and entry-level full-time jobs. Moreover, we observe seasonal patterns in user activity, in addition to the need to handle both logged-in and anonymous session users at the same time. To cope with the requirements of the job domain, we built hybrid models based on a microservice architecture that utilizes popular algorithms from the literature, such as collaborative filtering and content-based filtering, as well as various neural embedding approaches (e.g., Doc2Vec, autoencoders, etc.). We further adapted our architecture to calculate relevant recommendations in real-time (i.e., after a recommendation is requested), as individual user sessions in Talto are usually short-lived and context-dependent.
Here we found that the online performance of the utilized approach also depends on the location context [1]. Hence, the current location of a user on the mobile or web application impacts the expected recommendations. One optimization criterion on the Talto career platform is to provide relevant cross-entity recommendations, as well as to explain why those were shown. Recently, we started to tackle this by learning embeddings of entities that lie in the same embedding space [2]. Specifically, we pre-train word embeddings and link different entities by shared concepts, which we use for training the network embeddings. This embeds both the concepts and the entities into a common vector space, which results from considering the textual content as well as the network information (i.e., links to concepts). This way, different entity types (e.g., job postings, company profiles, and articles) are directly comparable and are suited for a real-time recommendation setting. Interestingly enough, with such an approach we also end up with individual words sharing the same embedding space. This, in turn, can be leveraged to enhance the textual search functionality of a platform, which is most commonly based on just a TF-IDF model. Furthermore, we found that such embeddings allow us to tackle the problem of explainability in an algorithm-agnostic way. Since the Talto platform utilizes various recommendation algorithms and continuously conducts A/B tests, an algorithm-agnostic explainability model is best suited to provide the students with meaningful explanations. As such, we will also go into the details of how we adapt our explanation model to not rely on the utilized recommendation algorithm.
Lacic Emanuel, Reiter-Haas Markus, Kowald Dominik, Reddy Dareddy Manoj, Cho Junghoo, Lex Elisabeth
2020
In this work, we address the problem of providing job recommendations in an online session setting, in which we do not have full user histories. We propose a recommendation approach, which uses different autoencoder architectures to encode sessions from the job domain. The inferred latent session representations are then used in a k-nearest-neighbor manner to recommend jobs within a session. We evaluate our approach on three datasets: (1) a proprietary dataset we gathered from the Austrian student job portal Studo Jobs, (2) a dataset released by XING after the RecSys 2017 Challenge and (3) anonymized job applications released by CareerBuilder in 2012. Our results show that autoencoders provide relevant job recommendations as well as maintain high coverage and, at the same time, can outperform state-of-the-art session-based recommendation techniques in terms of system-based and session-based novelty.
Dennerlein Sebastian, Wolf-Brenner Christof, Gutounig Robert, Schweiger Stefan, Pammer-Schindler Viktoria
2020
In society and politics, there is a rising interest in considering ethical principles in technological innovation, especially at the intersection of education and technology. We propose a first iteration of a theory-derived framework to analyze ethical issues in technology-enhanced learning (TEL) software development. The framework understands ethical issues as expressions of the overall socio-technical system, rooted in the interactions of human actors with technology, so-called socio-technical interactions (STIs). For guiding ethical reflection, the framework helps to explicate this human involvement and to elicit discussions of ethical principles on these STIs. Prompts in the form of reflection questions can be inferred to reflect on the technology functionality from relevant human perspectives, and in relation to a list of fundamental ethical principles. We illustrate the framework and discuss its implications for TEL.
Sedrakyan Gayane, Dennerlein Sebastian, Pammer-Schindler Viktoria, Lindstaedt Stefanie
2020
Our earlier research attempted to close the gap between learning-behavior-analytics-based dashboard feedback and learning theories by grounding the idea of dashboard feedback in learning science concepts such as feedback, learning goals, and the (socio-/meta-)cognitive mechanisms underlying learning processes. This work extends that research by proposing mechanisms for making those concepts and relationships measurable. The outcome is a complementary framework that allows identifying feedback needs and the timing for their provision in a generic context that can be applied to a certain subject in a given LMS. The research provides general guidelines for educators in designing educational dashboards, as well as a starting research platform in the direction of systematically matching learning science concepts with data and analytics concepts.
Klimashevskaia Anastasia, Geiger Bernhard, Hagmüller Martin, Helic Denis, Fischer Frank
2020
(extended abstract)
Hobisch Elisabeth, Scholger Martina, Fuchs Alexandra, Geiger Bernhard, Koncar Philipp, Saric Sanja
2020
(extended abstract)
Schrunner Stefan, Geiger Bernhard, Zernig Anja, Kern Roman
2020
Classification has been tackled by a large number of algorithms, predominantly following a supervised learning setting. Surprisingly little research has been devoted to the problem setting where a dataset is only partially labeled, including even instances of entirely unlabeled classes. Algorithmic solutions suited for such problems are especially important in practical scenarios, where the labelling of data is prohibitively expensive, or where the understanding of the data is lacking, including cases where only a subset of the classes is known. We present a generative method to address the problem of semi-supervised classification with unknown classes, following a Bayesian perspective. In detail, we apply a two-step procedure based on Bayesian classifiers and exploit information from a small set of labeled data in combination with a larger set of unlabeled training data, allowing the labeled dataset not to contain samples from all classes present. This represents a common practical application setup, where the labeled training set is not exhaustive. We show in a series of experiments that our approach outperforms state-of-the-art methods tackling similar semi-supervised learning problems. Since our approach yields a generative model, which aids the understanding of the data, it is particularly suited for practical applications.
Amjad Rana Ali, Geiger Bernhard
2020
In this theory paper, we investigate training deep neural networks (DNNs) for classification via minimizing the information bottleneck (IB) functional. We show that the resulting optimization problem suffers from two severe issues: First, for deterministic DNNs, either the IB functional is infinite for almost all values of network parameters, making the optimization problem ill-posed, or it is piecewise constant, hence not admitting gradient-based optimization methods. Second, the invariance of the IB functional under bijections prevents it from capturing properties of the learned representation that are desirable for classification, such as robustness and simplicity. We argue that these issues are partly resolved for stochastic DNNs, for DNNs that include a (hard or soft) decision rule, or by replacing the IB functional with related but better-behaved cost functions. We conclude that recent successes reported about training DNNs using the IB framework must be attributed to such solutions. As a side effect, our results indicate limitations of the IB framework for the analysis of DNNs. We also note that, rather than trying to repair the inherent problems of the IB functional, a better approach may be to design regularizers on the latent representation that enforce the desired properties directly.
Gogolenko Sergiy, Groen Derek, Suleimenova Diana, Mahmood Imran, Lawenda Marcin, Nieto De Santos Javier, Hanley John, Vukovic Milana, Kröll Mark, Geiger Bernhard, Elsaesser Robert, Hoppe Dennis
2020
Accurate digital twinning of the global challenges (GC) leads to computationally expensive coupled simulations. These simulations bring together not only different models, but also various sources of massive static and streaming data sets. In this paper, we explore ways to bridge the gap between traditional high performance computing (HPC) and data-centric computation in order to provide efficient technological solutions for accurate policy-making in the domain of GC. GC simulations in HPC environments give rise to a number of technical challenges related to coupling. Being intended to reflect current and upcoming situations for policy-making, GC simulations extensively use recent streaming data coming from external data sources, which requires changing traditional HPC systems operation. Another common challenge stems from the necessity to couple simulations and exchange data across data centers in GC scenarios. By introducing a generalized GC simulation workflow, this paper shows the commonality of the technical challenges for various GC and reflects on the approaches to tackle these technical challenges in the HiDALGO project.
Amjad Rana Ali, Bloechl Clemens, Geiger Bernhard
2020
We propose an information-theoretic Markov aggregation framework that is motivated by two objectives: 1) the Markov chain observed through the aggregation mapping should be Markov; 2) the aggregated chain should retain the temporal dependence structure of the original chain. We analyze our parameterized cost function and show that it contains previous cost functions as special cases, which we critically assess. We propose a simple optimization heuristic for deterministic aggregations and characterize the optimization landscape for different parameter values.
Breitfuß Gert, Fruhwirth Michael, Wolf-Brenner Christof, Riedl Angelika, Ginthör Robert, Pimas Oliver
2020
In the future, every successful company must have a clear idea of what data means to it. The necessary transformation into a data-driven company places high demands on companies and challenges management, organization and individual employees. In order to generate concrete added value from data, the collaboration of different disciplines, e.g. data scientists, domain experts and business people, is necessary. So far, few tools are available that facilitate the creativity and co-creation process among teams with different backgrounds. The goal of this paper is to design and develop a hands-on and easy-to-use card-based tool for the generation of data service ideas that supports the required interdisciplinary cooperation. Using a Design Science Research approach, we analysed 122 data service ideas and developed an innovation tool consisting of 38 cards. The first evaluation results show that the developed Data Service Cards are perceived as both helpful and easy to use.
Fruhwirth Michael, Breitfuß Gert, Pammer-Schindler Viktoria
2020
The availability of data sources and advances in analytics and artificial intelligence offer organizations the opportunity to develop new data-driven products, services and business models. However, this process is challenging for traditional organizations, as it requires knowledge and collaboration from several disciplines, such as data science, domain experts, or the business perspective. Furthermore, it is challenging to craft a meaningful value proposition based on data, and existing research provides little guidance. To overcome these challenges, we conducted a Design Science Research project to derive requirements from the literature and a case study, develop a collaborative visual tool and evaluate it through several workshops with traditional organizations. This paper presents the Data Product Canvas, a tool connecting data sources with user challenges and wishes through several intermediate steps. Thus, this paper contributes to the scientific body of knowledge on developing data-driven business models, products and services.
Koncar Philipp, Fuchs Alexandra, Hobisch Elisabeth, Geiger Bernhard, Scholger Martina, Helic Denis
2020
Spectator periodicals contributed to spreading the ideas of the Age of Enlightenment, a turning point in human history and the foundation of our modern societies. In this work, we study the spirit and atmosphere captured in spectator periodicals concerning important social issues of the 18th century by analyzing the text sentiment of those periodicals. Specifically, based on a manually annotated corpus of over 3,700 issues published in five different languages over a period of more than one hundred years, we conduct a three-fold sentiment analysis: First, we analyze the development of sentiment over time as well as the influence of topics and narrative forms on sentiment. Second, we construct sentiment networks to assess the polarity of perceptions between different entities, including periodicals, places and people. Third, we construct and analyze sentiment word networks to determine topological differences between words with positive and negative polarity, allowing us to draw conclusions on how sentiment was expressed in spectator periodicals. Our results depict a mildly positive tone in spectator periodicals, underlining the positive attitude towards important topics of the Age of Enlightenment, but also signaling stylistic devices used to disguise critique in order to avoid censorship. We also observe strong regional variation in sentiment, indicating cultural and historic differences between countries. For example, while Italy perceived other European countries as positive role models, French periodicals were frequently more critical towards other European countries. Finally, our topological analysis depicts a weak overrepresentation of positive sentiment words, corroborating our findings about a generally mildly positive tone in spectator periodicals. We believe that our work, based on the combination of sentiment analysis of spectator periodicals and the extensive knowledge available from literary studies, sheds interesting new light on these publications. Furthermore, we demonstrate the inclusion of sentiment analysis as another useful method in the digital humanist's distant reading toolbox.
Fruhwirth Michael, Ropposch Christiana, Pammer-Schindler Viktoria
2020
Purpose: This paper synthesizes existing research on tools and methods that support data-driven business model innovation and maps out relevant directions for future research. Design/methodology/approach: We carried out a structured literature review and collected and analysed a respectable but not excessively large number of 33 publications, due to the comparatively emergent nature of the field. Findings: Current literature on supporting data-driven business model innovation differs in the types of contribution (taxonomies, patterns, visual tools, methods, IT tools and processes), the types of thinking supported (divergent and convergent) and the elements of the business model addressed by the research (value creation, value capturing and value proposition). Research implications: Our review highlights the following as relevant directions for future research. Firstly, most research focuses on supporting divergent thinking, i.e. ideation. However, convergent thinking, i.e. evaluating, prioritizing, and deciding, is also necessary. Secondly, the complete procedure of developing data-driven business models, as well as the development of chains of tools related to it, has been under-investigated. Thirdly, scarcely any IT tools specifically support the development of data-driven business models. These avenues also highlight the necessity of integrating research on the specifics of data in business model innovation with research on innovation management, information systems and business analytics. Originality/value: This paper is the first to synthesize the literature on how to identify and develop data-driven business models.
Dumouchel Suzanne, Blotiere Emilie, Barbot Laure, Breitfuß Gert, Chen Yin, Di Donato Francesca, Forbes Paula, Petifils Clara, Pohle Stefanie
2020
SSH research is divided across a wide array of disciplines, sub-disciplines, and languages. While this specialisation makes it possible to investigate the extensive variety of SSH topics, it also leads to a fragmentation that prevents SSH research from reaching its full potential. Use and reuse of SSH research is suboptimal, and interdisciplinary collaboration possibilities are often missed, partially because of missing standards and referential keys between disciplines. Moreover, the reuse of data may paradoxically complicate relevant sorting and trust relationships. As a result, societal, economic and academic impacts are limited. Conceptually, there is a wealth of transdisciplinary collaborations, but in practice there is a need to help SSH researchers and research institutions connect and to support them, to prepare research data for these overarching approaches and to make the data findable and usable. The TRIPLE (Targeting Researchers through Innovative Practices and Linked Exploration) project is a practical answer to the above issues, as it aims at designing and developing the European discovery platform dedicated to SSH resources. Funded under the European Commission programme INFRAEOSC-02-2019 "Prototyping new innovative services", and carried by a consortium of 18 partners, TRIPLE will develop a fully multilingual and multicultural solution for the discovery and reuse of SSH resources. The project started in October 2019 for a duration of 42 months, with European funding of €5.6 million.
Dennerlein Sebastian, Tomberg Vladimir, Treasure-Jones, Tamsin, Theiler Dieter, Lindstaedt Stefanie , Ley Tobias
2020
Purpose: Introducing technology at work presents a special challenge, as learning is tightly integrated with workplace practices. Current design-based research (DBR) methods are focused on formal learning contexts and are often questioned for a lack of traceable research insights. This paper proposes a method that extends DBR by understanding tools as sociocultural artefacts, co-designing affordances and systematically studying their adoption in practice. Design/methodology/approach: The iterative practice-centred method allows the co-design of cognitive tools in DBR, makes assumptions and design decisions traceable and builds convergent evidence by consistently analysing how affordances are appropriated. This is demonstrated in the context of health-care professionals' informal learning and how they make sense of their experiences. The authors report an 18-month DBR case study of using various prototypes and testing the designs with practitioners through various data collection means. Findings: By considering the cognitive level in the analysis of appropriation, the authors came to an understanding of how professionals cope with pressure in the health-care domain (domain insight); a prototype with concrete design decisions (design insight); and an understanding of how memory and sensemaking processes interact when cognitive tools are used to elaborate representations of informal learning needs (theory insight). Research limitations/implications: The method is validated in one long-term and in-depth case study. While this was necessary to gain an understanding of stakeholder concerns, build trust and apply methods over several iterations, it also potentially limits generalizability. Originality/value: Besides generating traceable research insights, the proposed DBR method allows designing technology-enhanced learning support for working domains and practices. The method is applicable in other domains and in formal learning.
Kowald Dominik, Lex Elisabeth, Schedl Markus
2020
In this paper, we introduce a psychology-inspired approach to model and predict the music genre preferences of different groups of users by utilizing human memory processes. These processes describe how humans access information units in their memory by considering the factors of (i) past usage frequency, (ii) past usage recency, and (iii) the current context. Using a publicly available dataset of more than a billion music listening records shared on the music streaming platform Last.fm, we find that our approach provides significantly better prediction accuracy results than various baseline algorithms for all evaluated user groups, i.e., (i) low-mainstream music listeners, (ii) medium-mainstream music listeners, and (iii) high-mainstream music listeners. Furthermore, our approach is based on a simple psychological model, which contributes to the transparency and explainability of the calculated predictions.
Kowald Dominik, Schedl Markus, Lex Elisabeth
2020
Research has shown that recommender systems are typically biased towards popular items, which leads to less popular items being underrepresented in recommendations. The recent work of Abdollahpouri et al. in the context of movie recommendations has shown that this popularity bias leads to unfair treatment of both long-tail items and users with little interest in popular items. In this paper, we reproduce the analyses of Abdollahpouri et al. in the context of music recommendation. Specifically, we investigate three user groups from the Last.fm music platform that are categorized based on how much their listening preferences deviate from the most popular music among all Last.fm users in the dataset: (i) low-mainstream users, (ii) medium-mainstream users, and (iii) high-mainstream users. In line with Abdollahpouri et al., we find that state-of-the-art recommendation algorithms favor popular items also in the music domain. However, their proposed Group Average Popularity metric yields different results for Last.fm than for the movie domain, presumably due to the larger number of available items (i.e., music artists) in the Last.fm dataset we use. Finally, we compare the accuracy results of the recommendation algorithms for the three user groups and find that the low-mainstreaminess group receives the worst recommendations by a significant margin.
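The Group Average Popularity (GAP) metric of Abdollahpouri et al. mentioned above is the mean, over a group's users, of the average popularity of the items in each user's profile. A minimal sketch with made-up popularity values and tiny profiles (names are illustrative, not from the paper's data):

```python
def gap(group_profiles, popularity):
    """Group Average Popularity: the mean, over the users of a group,
    of the average popularity of the items in each user's profile."""
    per_user = [
        sum(popularity[item] for item in profile) / len(profile)
        for profile in group_profiles
    ]
    return sum(per_user) / len(per_user)

# made-up artist popularities (e.g. play counts) and user profiles
popularity = {"artist_a": 90, "artist_b": 50, "artist_c": 10}
low_mainstream = [["artist_c"], ["artist_b", "artist_c"]]
high_mainstream = [["artist_a"], ["artist_a", "artist_b"]]
print(gap(low_mainstream, popularity))   # → 20.0
print(gap(high_mainstream, popularity))  # → 80.0
```

Comparing the GAP of user profiles with the GAP of the recommendations a group receives is how the popularity bias described above is quantified: a recommender that inflates GAP for low-mainstream users is pushing popular items onto them.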
Dennerlein Sebastian, Pammer-Schindler Viktoria, Ebner Markus, Getzinger Günter, Ebner Martin
2020
Sustainably digitalizing higher education requires a human-centred approach. To address actual problems in teaching and learning and to increase acceptance, Technology Enhanced Learning (TEL) solutions must be co-designed with the affected researchers, teachers, students and administrative staff. We present research-in-progress on a sandpit-informed innovation process with an f2f marketplace of TEL research, problem mapping and team formation alongside a competitive call phase, followed by a cooperative phase in which funded interdisciplinary pilot teams co-design and implement TEL innovations. Pilot teams are supported by a University Innovation Canvas to document and reflect on their TEL innovation from multiple viewpoints.
Fuchs Alexandra, Geiger Bernhard, Hobisch Elisabeth, Koncar Philipp, More Jacqueline, Saric Sanja, Scholger Martina
2020
Feichtinger Gerald, Gursch Heimo, Schlager Elke, Brandl Daniel, Gratzl Markus
2020
Bhat Karthik Subramanya, Bachhiesl Udo, Feichtinger Gerald, Stigler Heinz
2020
India, as a ‘developing’ country, is in the unique situation of handling its transition towards carbon-free energy alongside its continuous economic development. With respect to the agreed COP 21 and SDG 2030 targets, India has drafted several energy strategies revolving around clean renewable energy. With multiple roadblocks to the development of large hydro power capacities within the country, India's long-term renewable goals focus strongly on renewable energy technologies such as solar photovoltaic (PV) and wind capacities. However, with a much slower rate of development in transmission infrastructure and the given situation of the regional energy systems in the Indian subcontinent, these significant changes in India could result in severe technical and economic consequences for the complete interconnected region. The investigations presented in this paper were conducted using ATLANTIS_India, a unique techno-economic simulation model developed at the Institute of Electricity Economics and Energy Innovation at Graz University of Technology, designed for the electricity system of the Indian subcontinent region. The model covers the electricity systems of India, Bangladesh, Bhutan, Nepal, and Sri Lanka, and is used to analyse a scenario in which around 118 GW of solar PV and wind capacity expansion is planned in India until the target year 2050. This paper presents the simulation approach as well as the simulated results and conclusions. The simulation results show the positive and negative techno-economic impacts of the discussed strategy on the overall electricity system, while suggesting possible solutions.
Fadljevic Leon, Maitz Katharina, Kowald Dominik, Pammer-Schindler Viktoria, Gasteiger-Klicpera Barbara
2020
This paper describes the analysis of the temporal behavior of 11- to 15-year-old students in a heavily instructionally designed adaptive e-learning environment. The e-learning system is designed to support students' acquisition of health literacy. The system adapts text difficulty to students' reading competence, grouping students into four competence levels. Content for the four levels of reading competence was created by clinical psychologists, pedagogues and medical students. The e-learning system consists of an initial reading competence assessment, texts about health issues, and learning tasks related to these texts. The research question we investigate in this work is whether temporal behavior is a differentiator between students despite the system's adaptation to students' reading competence, and despite students having comparatively little freedom of action within the system. Furthermore, we investigated the correlation of temporal behavior with performance. Unsupervised clustering clearly separates students into slow and fast students with respect to the time they take to complete tasks. Furthermore, topic completion time is linearly correlated with performance in the tasks. This means that we interpret working slowly in this case as diligence, which leads to more correct answers, even though the level of text difficulty matches students' reading competence. This result also points to the design opportunity of integrating advice on overarching learning strategies, such as working diligently instead of rushing through, into students' overall learning activity. This can be done either by teachers or via additional adaptive learning guidance within the system.
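The two analysis steps described above — separating students into fast and slow groups by unsupervised clustering, and checking the linear correlation of completion time with performance — can be sketched as follows. The data and the tiny one-dimensional 2-means are purely illustrative; the study's actual clustering setup is not specified here:

```python
def two_means_1d(xs, iters=20):
    """Tiny 1-D 2-means: label each value 0 (fast) or 1 (slow)."""
    lo, hi = min(xs), max(xs)  # initialize centroids at the extremes
    for _ in range(iters):
        groups = ([], [])
        for x in xs:
            groups[abs(x - hi) < abs(x - lo)].append(x)
        lo = sum(groups[0]) / len(groups[0])
        hi = sum(groups[1]) / len(groups[1])
    return [int(abs(x - hi) < abs(x - lo)) for x in xs]

def pearson(xs, ys):
    """Pearson correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

minutes = [3.0, 4.0, 3.5, 10.0, 11.0, 12.0]  # task completion times
correct = [2, 3, 2, 8, 9, 9]                 # correct answers per student
print(two_means_1d(minutes))  # → [0, 0, 0, 1, 1, 1]
print(pearson(minutes, correct) > 0.9)  # strongly positive, as in the finding
```

With well-separated data like this, the clustering splits cleanly into a fast and a slow group, and the positive correlation mirrors the paper's "slow but diligent" interpretation.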
Lex Elisabeth, Kowald Dominik, Schedl Markus
2020
In this paper, we address the problem of modeling and predicting the music genre preferences of users. We introduce a novel user modeling approach, BLLu, which takes into account the popularity of music genres as well as temporal drifts of user listening behavior. To model these two factors, BLLu adopts a psychological model that describes how humans access information in their memory. We evaluate our approach on a standard dataset of Last.fm listening histories, which contains fine-grained music genre information. To investigate performance for different types of users, we assign each user a mainstreaminess value that corresponds to the distance between the user's music genre preferences and the music genre preferences of the (Last.fm) mainstream. We adopt BLLu to model the listening habits and to predict the music genre preferences of three user groups: listeners of (i) niche, low-mainstream music, (ii) mainstream music, and (iii) medium-mainstream music that lies in between. Our results show that BLLu provides the highest accuracy for predicting music genre preferences, compared to five baselines: (i) group-based modeling, (ii) user-based collaborative filtering, (iii) item-based collaborative filtering, (iv) frequency-based modeling, and (v) recency-based modeling. Moreover, we achieve the most substantial accuracy improvements for the low-mainstream group. We believe that our findings provide valuable insights into the design of music recommender systems.
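The psychological model referred to in this line of work is the base-level learning (BLL) equation of the ACT-R cognitive architecture, which scores an item by the frequency and recency of its past usage. A minimal sketch (the decay of d = 0.5 is the usual ACT-R default; the listening events are made up):

```python
from math import log

def bll_score(timestamps, now, d=0.5):
    """Base-level learning (BLL) activation from ACT-R:
    ln( sum over past usages j of (now - t_j)^(-d) ).
    Frequent and recent usage yields a higher activation."""
    return log(sum((now - t) ** -d for t in timestamps))

# hypothetical listening events for two genres, given as seconds before 'now'
now = 100_000
rock = [now - s for s in (10, 100, 200, 5_000)]  # frequent and recent
jazz = [now - s for s in (40_000, 90_000)]       # sparse and long ago
print(bll_score(rock, now) > bll_score(jazz, now))  # → True
```

Ranking genres by this activation is what lets the model capture both past usage frequency and temporal drift with a single, transparent equation, which is the explainability advantage claimed above.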
Thalmann Stefan, Fessl Angela, Pammer-Schindler Viktoria
2020
Digitization is currently one of the major factors changing society and the business world. Most research has focused on the technical issues of this change, but employees, and especially the way they learn, are also changing dramatically. In this paper, we are interested in exploring the perspectives of decision makers in large manufacturing companies on current challenges in organizing learning and knowledge distribution in digitized manufacturing environments. Moreover, we investigated the change process and the challenges of implementing new knowledge and learning processes. To this purpose, we conducted 24 interviews with senior representatives of large manufacturing companies from Austria, Germany, Italy, Liechtenstein and Switzerland. Our exploratory study shows that decision makers perceive significant changes in manufacturing work practice due to digitization and currently plan changes in organizational training and knowledge distribution processes in response. Due to the lack of best practices, companies focus very much on technological advancements. The delivery of knowledge just in time, directly into work practice, is a favoured approach. Overall, digital learning services are growing, and new requirements regarding compliance, quality management and organisational culture arise.
Fruhwirth Michael, Rachinger Michael, Prlja Emina
2020
The modern economy relies heavily on data as a resource for advancement and growth. Data marketplaces have gained an increasing amount of attention, since they provide possibilities to exchange, trade and access data across organizations. Due to the rapid development of the field, the research on business models of data marketplaces is fragmented. We aimed to address this issue in this article by identifying the dimensions and characteristics of data marketplaces from a business model perspective. Following a rigorous process for taxonomy building, we propose a business model taxonomy for data marketplaces. Using evidence collected from a final sample of twenty data marketplaces, we analyze the frequency of specific characteristics of data marketplaces. In addition, we identify four data marketplace business model archetypes. The findings reveal the impact of the structure of data marketplaces as well as the relevance of anonymity and encryption for identified data marketplace archetypes.
Lovric Mario, Šimić Iva, Godec Ranka, Kröll Mark, Beslic Ivan
2020
Narrow city streets surrounded by tall buildings favour a general “canyon” effect in which pollutants strongly accumulate in a relatively small area because of weak or nonexistent ventilation. In this study, levels of nitrogen dioxide (NO2) and of elemental carbon (EC) and organic carbon (OC) mass concentrations in PM10 particles were determined to compare between seasons and different years. Daily samples were collected at one such street canyon location in the center of Zagreb in 2011, 2012 and 2013. By applying machine learning methods, we showed seasonal and yearly variations of mass concentrations for carbon species in PM10 and NO2, as well as their covariations and relationships. Furthermore, we compared the predictive capabilities of five regressors (Lasso, Random Forest, AdaBoost, Support Vector Machine and Partial Least Squares), with Lasso regression being the overall best performing algorithm. By showing the feature importance for each model, we revealed the true predictors per target. These measurements and this application of machine learning to pollutants were done for the first time at a street canyon site in the city of Zagreb, Croatia.
Kaiser Rene_DB, Thalmann Stefan, Pammer-Schindler Viktoria, Fessl Angela
2020
Organisations participate in collaborative projects that include competitors for a number of strategic reasons, even whilst knowing that this requires them to consider both knowledge sharing and knowledge protection throughout the collaboration. In this paper, we investigated which knowledge protection practices representatives of organisations employ in a collaborative research and innovation project that can be characterised as a co-opetitive setting. We conducted a series of 30 interviews and report the following seven practices in structured form: restrictive partner selection in operative project tasks, communication through a gatekeeper, limiting access to a central platform, hiding details of machine data dumps, not letting data leave a factory for analysis, a generic model that enables hiding usage parameters, and applying legal measures. When connecting each practice to a priori literature, we find that three practices focusing on collaborative data analytics tasks had not been covered so far.
Arslanovic Jasmina, Ajana Löw, Lovric Mario, Kern Roman
2020
Previous studies have suggested that artistic (synchronized) swimming athletes might show eating disorder symptoms. However, systematic research on eating disorders in artistic swimming is limited, and evidence on the nature and antecedents of the development of eating disorders in this specific population of athletes is still scarce. Hence, the aim of our research was to investigate eating disorder symptoms in artistic swimming athletes using the EAT-26 instrument, and to examine the relation of the incidence and severity of these symptoms to body mass index and body image dissatisfaction. Furthermore, we wanted to compare artistic swimmers with athletes of a non-leanness (but also aquatic) sport; therefore we also included a group of female water polo athletes of the same age. The sample consisted of 36 artistic swimmers and 34 female water polo players (both aged 13-16). To test for the presence of eating disorder symptoms, the EAT-26 was used. The Mann-Whitney U Test (MWU) was used to test for differences in EAT-26 scores. The EAT-26 total score and the Dieting subscale (one of the three subscales) showed significant differences between the two groups. The median value of the EAT-26 total score was higher in the artistic swimmers' group (C = 11) than in the water polo players' group (C = 8). A decision tree classifier was used to discriminate the artistic swimmers and female water polo players based on features from the EAT-26 and calculated features. The most discriminative features were the BMI, the Dieting subscale and the habit of post-meal vomiting. Our results suggest that artistic swimmers, at their typical competing age, show a higher risk of developing eating disorders than female water polo players and that they are also prone to dieting weight-control behaviours to achieve a desired weight. Furthermore, results indicate that purgative behaviours, such as binge eating or self-induced vomiting, might not be common weight-control behaviours among these athletes.
The results corroborate the findings that the sport environment in leanness sports might contribute to the development of eating disorders. The results are also in line with evidence that leanness-sport athletes are more at risk of developing restrictive than purgative eating behaviours, as the latter usually do not contribute to body weight reduction. As sport environment factors in artistic swimming include judging criteria that emphasize a specific body shape and performance, it is important to raise awareness of the mental health risks that such an environment might encourage.
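The decision-tree step described above can be sketched as follows; the group sizes mirror the study, but all feature values are synthetic assumptions, not the study's measurements.

```python
# Illustrative decision-tree sketch: discriminating the two athlete
# groups from BMI and EAT-26-style features. Data is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 35
# columns: BMI, Dieting subscale score, post-meal vomiting (0/1)
swimmers = np.column_stack([rng.normal(19, 1.2, n),
                            rng.normal(11, 3.0, n),
                            rng.integers(0, 2, n)])
waterpolo = np.column_stack([rng.normal(22, 1.5, n),
                             rng.normal(8, 3.0, n),
                             rng.integers(0, 2, n)])
X = np.vstack([swimmers, waterpolo])
y = np.array([0] * n + [1] * n)            # 0 = artistic swimmer

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(tree.feature_importances_.argmax())  # index of most telling feature
```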
Chiancone Alessandro, Cuder Gerald, Geiger Bernhard, Harzl Annemarie, Tanzer Thomas, Kern Roman
2019
This paper presents a hybrid model for the prediction of magnetostriction in power transformers by leveraging the strengths of a data-driven approach and a physics-based model. Specifically, a non-linear physics-based model for magnetostriction as a function of the magnetic field is employed, the parameters of which are estimated as linear combinations of electrical coil measurements and coil dimensions. The model is validated in a practical scenario with coil data from two different suppliers, showing that the proposed approach captures the different magnetostrictive properties of the two suppliers and provides an estimation of magnetostriction in agreement with the measurement system in place. It is argued that the combination of a non-linear physics-based model with few parameters and a linear data-driven model to estimate these parameters is attractive both in terms of model accuracy and because it allows training the data-driven part with comparably small datasets.
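The hybrid scheme above can be sketched in a few lines; the quartic response curve, the feature-to-parameter map, and all numbers are illustrative assumptions, not the paper's actual magnetostriction model.

```python
# Minimal sketch of the hybrid idea: a non-linear physics-style curve
# whose few parameters are estimated as linear combinations of coil
# features via least squares (the data-driven part).
import numpy as np

rng = np.random.default_rng(1)
coil_feats = rng.normal(size=(30, 3))       # coil measurements + dimensions
W_true = np.array([[0.8, -0.2, 0.1],        # hidden feature-to-parameter map
                   [0.05, 0.02, -0.01]])
params = coil_feats @ W_true.T              # "measured" (a, b) per coil

# data-driven part: least-squares fit of the linear map features -> params
W_hat, *_ = np.linalg.lstsq(coil_feats, params, rcond=None)

def magnetostriction(H, a, b):
    """Physics-based part: non-linear response to magnetic field H."""
    return a * H**2 + b * H**4

a, b = coil_feats[0] @ W_hat                # predicted parameters, coil 0
curve = magnetostriction(np.linspace(0.0, 1.0, 5), a, b)
print(curve.shape)
```

Because the parameter map is linear and low-dimensional, it can be fitted from the comparably small datasets the abstract mentions.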
Stanisavljevic Darko, Cemernek David, Gursch Heimo, Urak Günter, Lechner Gernot
2019
Additive manufacturing is becoming a more and more important production technology, mainly driven by the ability to realise extremely complex structures using multiple materials but without assembly or excessive waste. Nevertheless, like any high-precision technology, additive manufacturing is sensitive to interferences during the manufacturing process. These interferences – like vibrations – might lead to deviations in product quality, manifesting for instance in a reduced lifetime of a product or in application issues. This study targets the issue of detecting such interferences during a manufacturing process in an exemplary experimental setup. Collecting data with current sensor technology directly on a 3D printer enables a quantitative detection of interferences. The evaluation provides insights into the effectiveness of the realised application-oriented setup, the effort required for equipping a manufacturing system with sensors, and the effort for acquiring and processing the data. These insights are of practical utility for organisations dealing with additive manufacturing: the chosen approach for detecting interferences shows promising results, reaching interference detection rates of up to 100% depending on the applied data processing configuration.
Santos Tiago, Schrunner Stefan, Geiger Bernhard, Pfeiler Olivia, Zernig Anja, Kaestner Andre, Kern Roman
2019
Semiconductor manufacturing is a highly innovative branch of industry, where a high degree of automation has already been achieved. For example, devices tested to be outside of their specifications in electrical wafer test are automatically scrapped. In this paper, we go one step further and analyze test data of devices still within the limits of the specification, by exploiting the information contained in the analog wafermaps. To that end, we propose two feature extraction approaches with the aim to detect patterns in the wafer test dataset. Such patterns might indicate the onset of critical deviations in the production process. The studied approaches are: 1) classical image processing and restoration techniques in combination with sophisticated feature engineering and 2) a data-driven deep generative model. The two approaches are evaluated on both a synthetic and a real-world dataset. The synthetic dataset has been modeled based on real-world patterns and characteristics. We found both approaches to provide similar overall evaluation metrics. Our in-depth analysis helps to choose one approach over the other depending on data availability as a major aspect, as well as on available computing power and required interpretability of the results.
Lacic Emanuel, Reiter-Haas Markus, Duricic Tomislav, Slawicek Valentin, Lex Elisabeth
2019
In this work, we present the findings of an online study, where we explore the impact of utilizing embeddings to recommend job postings under real-time constraints. On the Austrian job platform Studo Jobs, we evaluate two popular recommendation scenarios: (i) providing similar jobs and, (ii) personalizing the job postings that are shown on the homepage. Our results show that for recommending similar jobs, we achieve the best online performance in terms of Click-Through Rate when we employ embeddings based on the most recent interaction. To personalize the job postings shown on a user's homepage, however, combining embeddings based on the frequency and recency with which a user interacts with job postings results in the best online performance.
Duricic Tomislav, Lacic Emanuel, Kowald Dominik, Lex Elisabeth
2019
User-based Collaborative Filtering (CF) is one of the most popular approaches to create recommender systems. CF, however, suffers from data sparsity and the cold-start problem since users often rate only a small fraction of available items. One solution is to incorporate additional information into the recommendation process, such as explicit trust scores that are assigned by users to others, or implicit trust relationships that result from social connections between users. Such relationships typically form a very sparse trust network, which can be utilized to generate recommendations for users based on people they trust. In our work, we explore the use of regular equivalence applied to a trust network to generate a similarity matrix that is used for selecting the k-nearest neighbors used for item recommendation. Two vertices in a network are regularly equivalent if their neighbors are themselves equivalent, and by using the iterative approach of calculating regular equivalence, we can study the impact of strong and weak ties on item recommendation. We evaluate our approach on cold-start users on a dataset crawled from Epinions and find that by using weak ties in addition to strong ties, we can improve the performance of a trust-based recommender in terms of recommendation accuracy.
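The iterative regular-equivalence computation mentioned above can be sketched on a toy trust graph; the update rule below follows a Newman-style formulation (an assumption here), and the adjacency matrix and damping factor are illustrative.

```python
# Iterative regular-equivalence similarity: two nodes are similar if
# their neighbours are similar, computed by repeating
#     sigma <- alpha * A @ sigma @ A.T + I
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # tiny 4-node trust network
alpha = 0.1                                  # small enough to converge
sigma = np.eye(4)
for _ in range(50):
    sigma = alpha * A @ sigma @ A.T + np.eye(4)

off = sigma - np.diag(np.diag(sigma))        # ignore self-similarity
# nodes 0 and 3 have identical neighbourhoods, so they score highest
print(round(off[0, 3], 3), round(off[0, 1], 3))
```

The resulting matrix `sigma` plays the role of the similarity matrix from which k-nearest neighbours would be selected.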
Lassnig Markus, Stabauer Petra, Breitfuß Gert, Müller Julian
2019
Numerous research results in the field of business model innovation have shown that over 90 percent of all business models of the last 50 years arose from a recombination of existing concepts. In principle, this also holds for digital business model innovations. Given the breadth of potential digital business model innovations, the authors wanted to know which model patterns have which significance in business practice. The digital transformation through new business models was therefore examined in an empirical study based on qualitative interviews with 68 companies. Seven suitable business model patterns were identified, classified with respect to their disruption potential from evolutionary to revolutionary, and the degree of realization in the companies was analysed. The highly condensed conclusion is that the topic of business model innovation through Industry 4.0 and digital transformation has arrived at the companies. However, the speed of implementation and the degree of novelty of the business model ideas vary greatly. The step-by-step further development of business models (evolutionary) is preferred by most companies, since the fundamental way in which the offering is delivered remains in place. In contrast, there are also companies that are already making radical changes affecting the entire business logic (revolutionary business model innovations). Accordingly, this article presents a clustering of business model innovators, from Hesitator via Follower and Optimizer to Leader in business model innovation.
Wolfbauer Irmtraud
2019
Presentation of PhD. Use case: an online learning platform for apprentices. Research opportunities: the target group is under-researched. 1. Computer usage & ICT self-efficacy; 2. Communities of practice, identities as learners; reflection guidance technologies; 3. Rebo, the reflection guidance chatbot.
Kowald Dominik, Lex Elisabeth, Schedl Markus
2019
Iacopo Vagliano, Fessl Angela, Franziska Günther, Thomas Köhler, Vasileios Mezaris, Ahmed Saleh, Ansgar Scherp, Simic Ilija
2019
The MOVING platform enables its users to improve their information literacy by training how to exploit data and text mining methods in their daily research tasks. In this paper, we show how it can support researchers in various tasks, and we introduce its main features, such as text and video retrieval and processing, advanced visualizations, and the technologies to assist the learning process.
Fessl Angela, Apaolaza Aitor, Gledson Ann, Pammer-Schindler Viktoria, Vigo Markel
2019
Searching on the web is a key activity for working and learning purposes. In this work, we aimed to motivate users to reflect on their search behaviour, and to experiment with different search functionalities. We implemented a widget that logs user interactions within a search platform, mirrors back search behaviours to users, and prompts users to reflect about it. We carried out two studies to evaluate the impact of such widget on search behaviour: in Study 1 (N = 76), participants received screenshots of the widget including reflection prompts while in Study 2 (N = 15), a maximum of 10 search tasks were conducted by participants over a period of two weeks on a search platform that contained the widget. Study 1 shows that reflection prompts induce meaningful insights about search behaviour. Study 2 suggests that, when using a novel search platform for the first time, those participants who had the widget prioritised search behaviours over time. The incorporation of the widget into the search platform after users had become familiar with it, however, was not observed to impact search behaviour. While the potential to support un-learning of routines could not be shown, the two studies suggest the widget’s usability, perceived usefulness, potential to induce reflection and potential to impact search behaviour.
Kopeinik Simone, Seitlinger Paul, Lex Elisabeth
2019
Kopeinik Simone, Lex Elisabeth, Kowald Dominik, Albert Dietrich, Seitlinger Paul
2019
When people engage in Social Networking Sites, they influence one another through their contributions. Prior research suggests that the interplay between individual differences and environmental variables, such as a person’s openness to conflicting information, can give rise to either public spheres or echo chambers. In this work, we aim to unravel critical processes of this interplay in the context of learning. In particular, we observe high school students’ information behaviour (search and evaluation of Web resources) to better understand a potential coupling between confirmatory search and polarisation and, in further consequence, to improve learning analytics and information services for individual and collective search in learning scenarios. In an empirical study, we had 91 high school students perform an information search in a social bookmarking environment. Gathered log data was used to compute indices of confirmatory search and polarisation, as well as to analyze the impact of social stimulation. We find confirmatory search and polarisation to correlate positively, and social stimulation to mitigate, i.e., reduce, the two variables’ relationship. From these findings, we derive practical implications for future work that aims to refine our formalism for computing confirmatory search and polarisation indices and to apply it in depolarizing information services.
Fruhwirth Michael, Pammer-Schindler Viktoria, Thalmann Stefan
2019
Data plays a central role in many of today's business models. With the help of advanced analytics, knowledge about real-world phenomena can be discovered from data. This may lead to unintended knowledge spillover through a data-driven offering. To properly consider this risk in the design of data-driven business models, suitable decision support is needed. Prior research on approaches that support such decision-making is scarce. We frame designing business models as a set of decision problems with the lens of Behavioral Decision Theory and describe a Design Science Research project conducted in the context of an automotive company. We develop an artefact that supports identifying knowledge risks, concomitant with design decisions, during the design of data-driven business models and verify knowledge risks as a relevant problem. In further research, we explore the problem in-depth and further design and evaluate the artefact within the same company as well as in other companies.
Silva Nelson, Madureira Luis
2019
Uncovering hidden suppliers and their complex relationships across the entire Supply Chain is quite complex. Unexpected disruptions, e.g. earthquakes, volcanoes, bankruptcies or nuclear disasters, have a huge impact on major Supply Chain strategies. It is very difficult to predict the real impact of these disruptions until it is too late. Small, unknown suppliers can hugely impact the delivery of a product. Therefore, it is crucial to constantly monitor for problems with both direct and indirect suppliers.
Schlager Elke, Gursch Heimo, Feichtinger Gerald
2019
Poster presenting the finally implemented Data Management System at Know-Center for the COMFORT project.
Feichtinger Gerald, Gursch Heimo
2019
Poster: general project presentation.
Monsberger Michael, Koppelhuber Daniela, Sabol Vedran, Gursch Heimo, Spataru Adrian, Prentner Oliver
2019
A lot of research is currently focused on studying user behavior indirectly by analyzing sensor data. However, only little attention has been given to the systematic acquisition of immediate user feedback to study user behavior in buildings. In this paper, we present a novel user feedback system which allows building users to provide feedback on the perceived sense of personal comfort in a room. To this end, a dedicated easy-to-use mobile app has been developed; it is complemented by a supporting infrastructure, including a web page for an at-a-glance overview. The obtained user feedback is compared with sensor data to assess whether building services (e.g., heating, ventilation and air-conditioning systems) are operated in accordance with user requirements. This serves as a basis to develop algorithms capable of optimizing building operation by providing recommendations to facility management staff or by automatic adjustment of operating points of building services. In this paper, we present the basic concept of the novel feedback system for building users and first results from an initial test phase. The results show that building users utilize the developed app to provide both positive and negative feedback on room conditions. They also show that it is possible to identify rooms with non-ideal operating conditions and that reasonable measures to improve building operation can be derived from the gathered information. The results highlight the potential of the proposed system.
Fuchs Alexandra, Geiger Bernhard, Hobisch Elisabeth, Koncar Philipp, Saric Sanja, Scholger Martina
2019
with contributions from Denis Helic and Jacqueline More
Lindstaedt Stefanie , Geiger Bernhard, Pirker Gerhard
2019
Big Data and data-driven modeling are receiving more and more attention in various research disciplines, where they are often considered as universal remedies. Despite their remarkable records of success, in certain cases a purely data-driven approach has proven to be suboptimal or even insufficient. This extended abstract briefly defines the terms Big Data and data-driven modeling and characterizes scenarios in which a strong focus on data has proven to be promising. Furthermore, it explains what progress can be made by fusing concepts from data science and machine learning with current physics-based concepts to form hybrid models, and how these can be applied successfully in the field of engine pre-simulation and engine control.
di Sciascio Maria Cecilia, Strohmaier David, Errecalde Marcelo Luis, Veas Eduardo Enrique
2019
Digital libraries and services enable users to access large amounts of data on demand. Yet, quality assessment of information encountered on the Internet remains an elusive open issue. For example, Wikipedia, one of the most visited platforms on the Web, hosts thousands of user-generated articles and undergoes 12 million edits/contributions per month. User-generated content is undoubtedly one of the keys to its success but also a hindrance to good quality. Although Wikipedia has established guidelines for the “perfect article,” authors find it difficult to assert whether their contributions comply with them and reviewers cannot cope with the ever-growing amount of articles pending review. Great efforts have been invested in algorithmic methods for automatic classification of Wikipedia articles (as featured or non-featured) and for quality flaw detection. Instead, our contribution is an interactive tool that combines automatic classification methods and human interaction in a toolkit, whereby experts can experiment with new quality metrics and share them with authors that need to identify weaknesses to improve a particular article. A design study shows that experts are able to effectively create complex quality metrics in a visual analytics environment. In turn, a user study evidences that regular users can identify flaws, as well as high-quality content based on the inspection of automatic quality scores.
di Sciascio Maria Cecilia, Brusilovsky Peter, Trattner Christoph, Veas Eduardo Enrique
2019
Information-seeking tasks with learning or investigative purposes are usually referred to as exploratory search. Exploratory search unfolds as a dynamic process where the user, amidst navigation, trial and error, and on-the-fly selections, gathers and organizes information (resources). A range of innovative interfaces with increased user control has been developed to support the exploratory search process. In this work, we present our attempt to increase the power of exploratory search interfaces by using ideas of social search—for instance, leveraging information left by past users of information systems. Social search technologies are highly popular today, especially for improving ranking. However, current approaches to social ranking do not allow users to decide to what extent social information should be taken into account for result ranking. This article presents an interface that integrates social search functionality into an exploratory search system in a user-controlled way that is consistent with the nature of exploratory search. The interface incorporates control features that allow the user to (i) express information needs by selecting keywords and (ii) to express preferences for incorporating social wisdom based on tag matching and user similarity. The interface promotes search transparency through color-coded stacked bars and rich tooltips. This work presents the full series of evaluations conducted to, first, assess the value of the social models in contexts independent to the user interface, in terms of objective and perceived accuracy. Then, in a study with the full-fledged system, we investigated system accuracy and subjective aspects with a structural model revealing that when users actively interacted with all of its control features, the hybrid system outperformed a baseline content-based–only tool and users were more satisfied.
Gursch Heimo, Cemernek David, Wuttei Andreas, Kern Roman
2019
The increasing potential of Information and Communications Technology (ICT) drives higher degrees of digitisation in the manufacturing industry. Such catchphrases as “Industry 4.0” and “smart manufacturing” reflect this tendency. The implementation of these paradigms is not merely an end in itself, but a new way of collaboration across existing department and process boundaries. Converting the process input, internal and output data into digital twins offers the possibility to test and validate parameter changes via simulations, whose results can be used to update guidelines for shop-floor workers. The result is a Cyber-Physical System (CPS) that brings together the physical shop-floor, the digital data created in the manufacturing process, the simulations, and the human workers. The CPS offers new ways of collaboration on a shared data basis: the workers can annotate manufacturing problems directly in the data, obtain updated process guidelines, and use knowledge from other experts to address issues. Although the CPS cannot replace manufacturing management as formalised through approaches such as Six Sigma or Advanced Process Control (APC), it is a new tool for validating decisions in simulation before they are implemented, allowing the guidelines to be improved continuously.
Geiger Bernhard, Koch Tobias
2019
In 1959, Rényi proposed the information dimension and the d-dimensional entropy to measure the information content of general random variables. This paper proposes a generalization of information dimension to stochastic processes by defining the information dimension rate as the entropy rate of the uniformly quantized stochastic process divided by minus the logarithm of the quantizer step size 1/m in the limit as m → ∞. It is demonstrated that the information dimension rate coincides with the rate-distortion dimension, defined as twice the rate-distortion function R(D) of the stochastic process divided by - log(D) in the limit as D ↓ 0. It is further shown that among all multivariate stationary processes with a given (matrix-valued) spectral distribution function (SDF), the Gaussian process has the largest information dimension rate, and the information dimension rate of multivariate stationary Gaussian processes is given by the average rank of the derivative of the SDF. The presented results reveal that the fundamental limits of almost zero-distortion recovery via compressible signal pursuit and almost lossless analog compression are different in general.
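In symbols, the two quantities the abstract defines can be written as follows (the notation H'(·) for the entropy rate is my shorthand, not necessarily the paper's):

```latex
% Information dimension rate: entropy rate of the process quantized
% with step size 1/m, scaled by \log m, in the limit m -> infinity.
d\bigl(\{X_t\}\bigr) \;=\; \lim_{m \to \infty}
    \frac{H'\!\bigl(\lfloor m X_t \rfloor / m\bigr)}{\log m}

% Rate-distortion dimension, shown in the paper to coincide with d:
d_R\bigl(\{X_t\}\bigr) \;=\; \lim_{D \downarrow 0} \frac{2\,R(D)}{-\log D}
```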
Kaiser Rene_DB
2019
Video content and technology is an integral part of our private and professional lives. We consume news and entertainment content, and besides communication and learning there are many more significant application areas. One area, however, where video content and technology is not (yet) utilized and exploited to a large extent are production environments in factories of the producing industries like the semiconductor and electronic components and systems (ECS) industries. This article outlines some of the opportunities and challenges towards better exploitation of video content and technology in such contexts. An understanding of the current situation is the basis for future socio-technical interventions where video technology may be integrated in work processes within factories.
Schweimer Christoph, Geiger Bernhard, Suleimenova Diana, Groen Derek, Gfrerer Christine, Pape David, Elsaesser Robert, Kocsis Albert Tihamér, Liszkai B., Horváth Zoltan
2019
Guerra Torres Jorge, Catania Carlos, Veas Eduardo Enrique
2019
Modern Network Intrusion Detection systems depend on models trained with up-to-date labelled data. Yet, labelling a network traffic dataset is especially expensive, since expert knowledge is required to perform the annotations. Visual analytics applications exist that claim to considerably reduce the labelling effort, but the expert still needs to ponder several factors before issuing a label, and most often the effect of bad labels (noise) on the final model is not evaluated. The present article introduces a novel active learning strategy that learns to predict labels in (pseudo) real time as the user performs the annotation. The system, called RiskID, presents several innovations: i) a set of statistical methods summarizes the information, which is illustrated in a visual analytics application; ii) this application interfaces with the active learning strategy for building a random forest model as the user issues annotations; iii) the (pseudo) real-time predictions of the model are fed back visually to scaffold the traffic annotation task. Finally, iv) an evaluation framework is introduced that represents a complete methodology for evaluating active learning solutions, including resilience against noise.
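The annotate-refit-suggest loop described above can be sketched as follows; this is a generic uncertainty-sampling sketch on synthetic data, not RiskID's actual code or query strategy.

```python
# Schematic of the annotation loop: after each expert label, a random
# forest is refit on the labelled pool and its predictions over the
# remaining connections serve as (pseudo) real-time label suggestions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))                    # connection features
true = (X[:, 0] + X[:, 1] > 0).astype(int)       # hidden ground truth

# seed with one labelled example per class
labelled = [int(np.flatnonzero(true == 0)[0]),
            int(np.flatnonzero(true == 1)[0])]
clf = RandomForestClassifier(random_state=0)
for _ in range(40):                              # simulate 40 annotations
    clf.fit(X[labelled], true[labelled])
    pool = np.setdiff1d(np.arange(len(X)), labelled)
    proba = clf.predict_proba(X[pool])[:, 1]
    # uncertainty sampling: query the connection the model is least sure of
    labelled.append(int(pool[np.abs(proba - 0.5).argmin()]))

acc = (clf.predict(X) == true).mean()
print(round(acc, 2))
```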
Jorge Guerra Torres, Veas Eduardo Enrique, Carlos Catania
2019
Labeling a real network dataset is especially expensive in computer security, as an expert has to ponder several factors before assigning each label. This paper describes an interactive intelligent system to support the task of identifying hostile behavior in network logs. The RiskID application uses visualizations to graphically encode features of network connections and promote visual comparison. In the background, two algorithms are used to actively organize connections and predict potential labels: a recommendation algorithm and a semi-supervised learning strategy. These algorithms, together with interactive adaptations to the user interface, constitute a behaviour recommendation. A study is carried out to analyze how the algorithms for recommendation and prediction influence the workflow of labeling a dataset. The results of a study with 16 participants indicate that the behaviour recommendation significantly improves the quality of labels. Analyzing interaction patterns, we identify a more intuitive workflow used when behaviour recommendation is available.
Luzhnica Granit, Veas Eduardo Enrique
2019
Proficiency in any form of reading requires a considerable amount of practice. With exposure, people get better at recognising words, because they develop strategies that enable them to read faster. This paper describes a study investigating recognition of words encoded with a 6-channel vibrotactile display. We train 22 users to recognise ten letters of the English alphabet. Additionally, we repeatedly expose users to 12 words in the form of training and reinforcement testing. Then, we test participants on exposed and unexposed words to observe the effects of exposure to words. Our study shows that, with exposure to words, participants did significantly improve on recognition of exposed words. The findings suggest that such a word exposure technique could be used during the training of novice users in order to boost the word recognition of a particular dictionary of words.
Remonda Adrian, Krebs Sarah, Luzhnica Granit, Kern Roman, Veas Eduardo Enrique
2019
This paper explores the use of reinforcement learning (RL) models for autonomous racing. In contrast to passenger cars, where safety is the top priority, a racing car aims to minimize the lap-time. We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry, and a continuous action space. To find out which RL methods better solve the problem and whether the obtained models generalize to driving on unknown tracks, we put 10 variants of deep deterministic policy gradient (DDPG) to race in two experiments: i) studying how RL methods learn to drive a racing car and ii) studying how the learning scenario influences the capability of the models to generalize. Our studies show that models trained with RL are not only able to drive faster than the baseline open source handcrafted bots but also generalize to unknown tracks.
Barreiros Carla, Pammer-Schindler Viktoria, Veas Eduardo Enrique
2019
We present a visual interface for communicating the internal state of a coffee machine via a tree metaphor. Nature-inspired representations have a positive impact on human well-being. We also hypothesize that representing the coffee machine as a tree stimulates emotional connection to it, which leads to better maintenance performance. The first study assessed the understandability of the tree representation, comparing it with icon-based and chart-based representations. An online survey with 25 participants indicated no significant mean error difference between representations. A two-week field study assessed the maintenance performance of 12 participants, comparing the tree representation with the icon-based representation. Based on 240 interactions with the coffee machine, we concluded that participants understood the machine states significantly better in the tree representation. Their comments and behavior indicated that the tree representation encouraged an emotional engagement with the machine. Moreover, the participants performed significantly more optional maintenance tasks with the tree representation.
Kowald Dominik, Traub Matthias, Theiler Dieter, Gursch Heimo, Lacic Emanuel, Lindstaedt Stefanie , Kern Roman, Lex Elisabeth
2019
Kowald Dominik, Lacic Emanuel, Theiler Dieter, Traub Matthias, Kuffer Lucky, Lindstaedt Stefanie , Lex Elisabeth
2019
Kowald Dominik, Lex Elisabeth, Schedl Markus
2019
Lex Elisabeth, Kowald Dominik
2019
Toller Maximilian, Santos Tiago, Kern Roman
2019
Season length estimation is the task of identifying the number of observations in the dominant repeating pattern of seasonal time series data. As such, it is a common pre-processing task crucial for various downstream applications. Inferring season length from a real-world time series is often challenging due to phenomena such as slightly varying period lengths and noise. These issues may, in turn, lead practitioners to dedicate considerable effort to preprocessing of time series data, since existing approaches either require dedicated parameter-tuning or their performance is heavily domain-dependent. Hence, to address these challenges, we propose SAZED: spectral and average autocorrelation zero distance density. SAZED is a versatile ensemble of multiple, specialized time series season length estimation approaches. The combination of various base methods selected with respect to domain-agnostic criteria and a novel seasonality isolation technique allows broad applicability to real-world time series of varied properties. Further, SAZED is theoretically grounded and parameter-free, with a computational complexity of O(n log n), which makes it applicable in practice. In our experiments, SAZED was statistically significantly better than every other method on at least one dataset. The datasets we used for the evaluation consist of time series data from various real-world domains, sterile synthetic test cases, and synthetic data that were designed to be seasonal and yet have no finite statistical moments of any order.
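To make the task concrete, the following is a minimal sketch of one autocorrelation-based season length estimator, in the spirit of the base methods SAZED ensembles. It is not SAZED itself: the function names and the simple peak-picking rule are assumptions for illustration only.

```python
def autocorr(x, lag):
    """Sample autocorrelation of the sequence x at the given lag (mean-centred)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var if var else 0.0

def estimate_season_length(x, max_lag=None):
    """Return the lag of the strongest local maximum of the autocorrelation
    function -- a crude stand-in for one of an ensemble's base estimators."""
    max_lag = max_lag or len(x) // 2
    acf = [autocorr(x, lag) for lag in range(1, max_lag + 1)]  # acf[k] is lag k+1
    best_lag, best_val = None, float("-inf")
    for lag in range(2, max_lag):
        a_prev, a, a_next = acf[lag - 2], acf[lag - 1], acf[lag]
        if a > a_prev and a > a_next and a > best_val:  # local ACF peak
            best_lag, best_val = lag, a
    return best_lag
```

On a clean periodic signal such as `[0, 1, 2, 1]` repeated, this returns the period 4; real data with drifting period lengths and noise is exactly where such a single estimator becomes unreliable, motivating an ensemble.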
Toller Maximilian, Geiger Bernhard, Kern Roman
2019
Distance-based classification is among the most competitive classification methods for time series data. The most critical component of distance-based classification is the selected distance function. Past research has proposed various different distance metrics or measures dedicated to particular aspects of real-world time series data, yet there is an important aspect that has not been considered so far: robustness against arbitrary data contamination. In this work, we propose a novel distance metric that is robust against arbitrarily “bad” contamination and has a worst-case computational complexity of O(n log n). We formally argue why our proposed metric is robust, and demonstrate in an empirical evaluation that the metric yields competitive classification accuracy when applied in k-Nearest Neighbor time series classification.
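The abstract does not spell out the metric's construction, so the sketch below illustrates only the general idea of contamination robustness with a generic capped per-sample distance inside k-NN; the capping scheme, names, and parameters here are assumptions, not the paper's metric.

```python
def capped_distance(x, y, cap=1.0):
    """Sum of per-sample absolute differences, each clipped at `cap`, so that
    a few arbitrarily 'bad' contaminated samples cannot dominate the distance.
    Illustrative only; the paper's robust metric is constructed differently."""
    return sum(min(abs(a - b), cap) for a, b in zip(x, y))

def knn_classify(query, train, k=1, cap=1.0):
    """k-Nearest Neighbor majority vote under the capped distance.
    `train` is a list of (series, label) pairs."""
    ranked = sorted(train, key=lambda item: capped_distance(query, item[0], cap))
    votes = {}
    for _, label in ranked[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

With one wildly contaminated sample (e.g. a spike of 100), the capped distance still assigns the query to the class its uncorrupted samples resemble, whereas an uncapped sum would be dominated by the outlier.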
Breitfuß Gert, Berger Martin, Doerrzapf Linda
2019
The Austrian Federal Ministry for Transport, Innovation and Technology created an initiative to fund the setup and operation of Living Labs to provide a vital innovation ecosystem for mobility and transport. Five Urban Mobility Labs (UML) located in four urban areas have been selected for funding (duration 4 years) and started operation in 2017. In order to mitigate the risk of a high dependency on public funding (which is mostly limited in time), the lab management teams face the challenge of developing a viable and future-proof UML Business Model. The overall research goal of this paper is to gain empirical insights on how a UML Business Model evolves in a long-term perspective and which success factors play a role. To answer the research question, a method mix of desk research and qualitative methods has been selected. In order to gain insight into the UML Business Models, two rounds of 10 semi-structured interviews (two responsible persons of each UML) are planned. The first round of interviews took place between July 2018 and January 2019. The second round of interviews is planned for 2020. Between the two rounds of the survey, a Business Model workshop is planned to share and create ideas for future Business Model developments. Based on the gained research insights, a comprehensive list of success factors and hands-on recommendations will be derived. This should help UML organizations in developing a viable Business Model in order to support sustainable innovations in transport and mobility.
Geiger Bernhard
2019
joint work with Tobias Koch, Universidad Carlos III de Madrid
Silva Nelson, Blascheck Tanja, Jianu Radu, Rodrigues Nils, Weiskopf Daniel, Raubal Martin, Schreck Tobias
2019
Visual analytics (VA) research provides helpful solutions for interactive visual data analysis when exploring large and complex datasets. Due to recent advances in eye tracking technology, promising opportunities arise to extend these traditional VA approaches. Therefore, we discuss foundations for eye tracking support in VA systems. We first review and discuss the structure and range of typical VA systems. Based on a widely used VA model, we present five comprehensive examples that cover a wide range of usage scenarios. Then, we demonstrate that the VA model can be used to systematically explore how concrete VA systems could be extended with eye tracking, to create supportive and adaptive analytics systems. This allows us to identify general research and application opportunities, and classify them into research themes. In a call for action, we map the road for future research to broaden the use of eye tracking and advance visual analytics.
Kaiser Rene_DB
2019
This paper gives a comprehensive overview of the Virtual Director concept. A Virtual Director is a software component automating the key decision-making tasks of a TV broadcast director. It decides how to mix and present the available content streams on a particular playout device, most essentially deciding which camera view to show and when to switch to another. A Virtual Director makes it possible to take decisions that respect individual user preferences and playout device characteristics. In order to take meaningful decisions, a Virtual Director must be continuously informed by real-time sensors which emit information about what is happening in the scene. From such (low-level) 'cues', the Virtual Director infers higher-level events, actions, facts and states which in turn trigger the real-time processes deciding on the presentation of the content. The behaviour of a Virtual Director, the 'production grammar', defines how decisions are taken, generally encompassing two main aspects: selecting what is most relevant, and deciding how to show it, applying cinematographic principles.
Thalmann Stefan, Gursch Heimo, Suschnigg Josef, Gashi Milot, Ennsbrunner Helmut, Fuchs Anna Katharina, Schreck Tobias, Mutlu Belgin, Mangler Jürgen, Huemer Christian, Lindstaedt Stefanie
2019
Current trends in manufacturing lead to more intelligent products, produced in global supply chains in shorter cycles, taking more and more complex requirements into account. To manage this increasing complexity, cognitive decision support systems that build on data analytics approaches and focus on the product life cycle stages seem a promising approach. With two high-tech companies (world market leaders in their domains) from Austria, we are approaching this challenge and jointly develop cognitive decision support systems for three real-world industrial use cases. Within this position paper, we introduce our understanding of cognitive decision support and present three industrial use cases, focusing on the requirements for cognitive decision support. Finally, we describe our preliminary solution approach for each use case and our next steps.
Stepputat Kendra, Kienreich Wolfgang, Dick Christopher S.
2019
With this article, we present the ongoing research project “Tango Danceability of Music in European Perspective” and the transdisciplinary research design it is built upon. Three main aspects of tango argentino are in focus—the music, the dance, and the people—in order to understand what is considered danceable in tango music. The study of all three parts involves computer-aided analysis approaches, and the results are examined within ethnochoreological and ethnomusicological frameworks. Two approaches are illustrated in detail to show initial results of the research model. Network analysis based on the collection of online tango event data and quantitative evaluation of data gathered by an online survey showed significant results, corroborating the hypothesis of gatekeeping effects in the shaping of musical preferences. The experiment design includes incorporation of motion capture technology into dance research. We demonstrate certain advantages of transdisciplinary approaches in the study of Intangible Cultural Heritage, in contrast to conventional studies based on methods from just one academic discipline.
Pammer-Schindler Viktoria
2019
This is a commentary of mine, created in the context of an open review process, selected for publication alongside the accepted original paper in a juried process, and published alongside the paper at the given DOI.
Xie Benjamin, Harpstead Erik, DiSalvo Betsy, Slovak Petr, Kharuffa Ahmed, Lee Michael J., Pammer-Schindler Viktoria, Ogan Amy, Williams Joseph Jay
2019
Winter Kevin, Kern Roman
2019
This paper presents the Know-Center system submitted for task 5 of the SemEval-2019 workshop. Given a Twitter message in either English or Spanish, the task is to first detect whether it contains hateful speech and second, to determine the target and level of aggression used. For this purpose our system utilizes word embeddings and a neural network architecture, consisting of both dilated and traditional convolution layers. We achieved average F1-scores of 0.57 and 0.74 for English and Spanish respectively.
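As a hedged illustration of the dilated convolutions the system combines with traditional ones (this is a toy one-dimensional sketch, not the submitted architecture; the kernel values and function name are made up):

```python
def dilated_conv1d(seq, kernel, dilation=1):
    """Valid 1-D convolution where kernel taps are `dilation` positions apart.
    With dilation > 1 the receptive field widens without adding parameters,
    which is why dilated layers help capture longer-range context in text."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # input positions covered by one application
    out = []
    for start in range(len(seq) - span + 1):
        out.append(sum(kernel[j] * seq[start + j * dilation] for j in range(k)))
    return out
```

With `dilation=1` this reduces to an ordinary convolution; with `dilation=2` the same two-tap kernel spans three input positions, e.g. `dilated_conv1d([1, 2, 3, 4, 5], [1, 1], dilation=2)` sums elements two apart.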
Maritsch Martin, Diana Suleimenova, Geiger Bernhard, Derek Groen
2019
Geiger Bernhard, Schrunner Stefan, Kern Roman
2019
Schrunner and Geiger have contributed equally to this work.
Adolfo Ruiz Calleja, Dennerlein Sebastian, Kowald Dominik, Theiler Dieter, Lex Elisabeth, Tobias Ley
2019
In this paper, we propose the Social Semantic Server (SSS) as a service-based infrastructure for workplace and professional Learning Analytics (LA). The design and development of the SSS has evolved over 8 years, starting with an analysis of workplace learning inspired by knowledge creation theories and its application in different contexts. The SSS collects data from workplace learning tools, integrates it into a common data model based on a semantically-enriched Artifact-Actor Network and offers it back for LA applications to exploit the data. Further, the SSS design promotes its flexibility in order to be adapted to different workplace learning situations. This paper contributes by systematizing the derivation of requirements for the SSS according to the knowledge creation theories, and the support offered across a number of different learning tools and LA applications integrated with it. It also shows evidence for the usefulness of the SSS extracted from four authentic workplace learning situations involving 57 participants. The evaluation results indicate that the SSS satisfactorily supports decision making in diverse workplace learning situations and allow us to reflect on the importance of the knowledge creation theories for such analysis.
Renner Bettina, Wesiak Gudrun, Pammer-Schindler Viktoria, Prilla Michael, Müller Lars, Morosini Dalia, Mora Simone, Faltin Nils, Cress Ulrike
2019
Fessl Angela, Simic Ilija, Barthold Sabine, Pammer-Schindler Viktoria
2019
Information literacy, the access to knowledge and the use of it, are becoming a precondition for individuals to actively take part in social, economic, cultural and political life. Information literacy must be considered a fundamental competency like the ability to read, write and calculate. Therefore, we are working on automatic learning guidance with respect to three modules of the information literacy curriculum developed by the EU (DigComp 2.1 Framework). In prior work, we have laid out the essential research questions from a technical side. In this work, we follow up by specifying the concept for micro learning and micro learning content units. This means that the overall intervention that we design is concretized as follows: The widget is initialized by assessing the learner's competence with the help of a knowledge test. This is the basis for recommending suitable micro learning content, adapted to the identified competence level. After the learner has read/worked through the content, the widget asks the learner a reflective question. The goal of the reflective question is to deepen the learning. In this paper we present the concept of the widget and its integration in a search platform.
Fruhwirth Michael, Breitfuß Gert, Müller Christiana
2019
The use of data in companies to analyse and answer a wide variety of questions is daily business. However, there is far more potential in data beyond process optimization and business intelligence applications. This article gives an overview of the most important aspects of transforming data into value, i.e. of developing data-driven business models. The characteristics of data-driven business models and the competences required for them are examined in more detail. Four case studies of Austrian companies provide insights into practice, and finally current challenges and developments are discussed.
Luzhnica Granit, Veas Eduardo Enrique
2019
Luzhnica Granit, Veas Eduardo Enrique
2019
This paper proposes methods of optimising alphabet encoding for skin reading in order to avoid perception errors. First, a user study with 16 participants using two body locations serves to identify issues in recognition of both individual letters and words. To avoid such issues, a two-step optimisation method of the symbol encoding is proposed and validated in a second user study with eight participants using the optimised encoding with a seven vibromotor wearable layout on the back of the hand. The results show significant improvements in the recognition accuracy of letters (97%) and words (97%) when compared to the non-optimised encoding.
Breitfuß Gert, Fruhwirth Michael, Pammer-Schindler Viktoria, Stern Hermann, Dennerlein Sebastian
2019
Increasing digitization is generating more and more data in all areas of business. Modern analytical methods open up these large amounts of data for business value creation. Expected business value ranges from process optimization such as reduction of maintenance work and strategic decision support to business model innovation. In the development of a data-driven business model, it is useful to conceptualise elements of data-driven business models in order to differentiate and compare between examples of a data-driven business model and to think of opportunities for using data to innovate an existing or design a new business model. The goal of this paper is to identify a conceptual tool that supports data-driven business model innovation in a similar manner: We applied three existing classification schemes to differentiate between data-driven business models based on 30 examples of data-driven business model innovations. Subsequently, we present the strengths and weaknesses of every scheme to identify possible blind spots for gaining business value out of data-driven activities. Following this discussion, we outline a new classification scheme. The newly developed scheme combines all positive aspects from the three analysed classification models and resolves the identified weaknesses.
Clemens Bloechl, Rana Ali Amjad, Geiger Bernhard
2019
We present an information-theoretic cost function for co-clustering, i.e., for simultaneous clustering of two sets based on similarities between their elements. By constructing a simple random walk on the corresponding bipartite graph, our cost function is derived from a recently proposed generalized framework for information-theoretic Markov chain aggregation. The goal of our cost function is to minimize relevant information loss, hence it connects to the information bottleneck formalism. Moreover, via the connection to Markov aggregation, our cost function is not ad hoc, but inherits its justification from the operational qualities associated with the corresponding Markov aggregation problem. We furthermore show that, for appropriate parameter settings, our cost function is identical to well-known approaches from the literature, such as “Information-Theoretic Co-Clustering” by Dhillon et al. Hence, understanding the influence of this parameter admits a deeper understanding of the relationship between previously proposed information-theoretic cost functions. We highlight some strengths and weaknesses of the cost function for different parameters. We also illustrate the performance of our cost function, optimized with a simple sequential heuristic, on several synthetic and real-world data sets, including the Newsgroup20 and the MovieLens100k data sets.
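The specific parameterized cost function of the paper is not reproduced here, but the underlying notion it connects to, relevant information loss, can be sketched: co-clustering rows and columns of a joint distribution loses mutual information, and a good co-clustering keeps that loss small. Function names and the toy distribution below are illustrative assumptions.

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits for a joint distribution given as a 2-D list of probabilities."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * log2(p / (px[i] * py[j]))
    return mi

def aggregated_joint(joint, row_clusters, col_clusters):
    """Sum joint probabilities within each (row-cluster, column-cluster) block."""
    nr, nc = max(row_clusters) + 1, max(col_clusters) + 1
    agg = [[0.0] * nc for _ in range(nr)]
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            agg[row_clusters[i]][col_clusters[j]] += p
    return agg

def info_loss(joint, row_clusters, col_clusters):
    """Relevant information lost by the co-clustering: I(X;Y) - I(X_hat;Y_hat)."""
    return mutual_information(joint) - mutual_information(
        aggregated_joint(joint, row_clusters, col_clusters))
```

For a block-diagonal joint distribution, clustering along the blocks loses nothing, while collapsing everything into one cluster loses the full mutual information.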
Lovric Mario, Molero Perez Jose Manuel, Kern Roman
2019
The authors present an implementation of the cheminformatics toolkit RDKit in a distributed computing environment, Apache Hadoop. Together with the Apache Spark analytics engine, wrapped by PySpark, resources from commodity scalable hardware can be employed for cheminformatic calculations and query operations with basic knowledge of Python programming and an understanding of resilient distributed datasets (RDD). Three use cases of cheminformatic computing in Spark on the Hadoop cluster are presented: querying substructures, calculating fingerprint similarity and calculating molecular descriptors. The source code for the PySpark‐RDKit implementation is provided. The use cases showed that Spark provides reasonable scalability depending on the use case and can be a suitable choice for datasets too big to be processed with current low-end workstations.
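The fingerprint-similarity use case can be sketched without the RDKit and PySpark dependencies: fingerprints are modelled as sets of on-bit positions and compared by Tanimoto similarity. In the paper's setting this map/filter step would run as a PySpark RDD transformation over RDKit-generated fingerprints; the pure-Python stand-in, function names, and toy library below are assumptions for illustration.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two fingerprints represented
    as sets of on-bit positions."""
    inter = len(fp_a & fp_b)
    union = len(fp_a) + len(fp_b) - inter
    return inter / union if union else 0.0

def similarity_search(query_fp, library, threshold=0.7):
    """Return (molecule id, similarity) pairs above the threshold.
    Distributed variants express this same filter as an RDD transformation."""
    return [(mol_id, tanimoto(query_fp, fp))
            for mol_id, fp in library
            if tanimoto(query_fp, fp) >= threshold]
```

Because each library entry is scored independently, the computation is embarrassingly parallel, which is what makes it a natural fit for Spark partitioning across a Hadoop cluster.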
Robert Gutounig, Romana Rauter, Susanne Sackl-Sharif , Sabine Klinger, Dennerlein Sebastian
2018
Digitalization is associated with different expectations, which can differ considerably between the organizational and the employee perspective. Clearly observable, in any case, is the increasing penetration of work processes by digital tools. Numerous health-straining factors are also known by now, arising for instance from the acceleration and intensification of work. Against this background, an explorative study in the health services sector investigated which new challenges employees and organizations face as a result of increasing digital media use. The interviews and the survey show that carrying out the work would no longer be conceivable without digital support, particularly regarding the documentation of data, but increasingly also regarding the work on the patients themselves. Ambivalences in the employees' perception are found throughout, e.g. easier access to data vs. a control regime imposed by the employer. Further identified fields for research on the effects and potentials of digital media use include, among others, digital literacy and participatory approaches to technology development.
Mutlu Belgin, Simic Ilija, Cicchinelli Analia, Sabol Vedran, Veas Eduardo Enrique
2018
Learning dashboards (LD) are commonly applied for monitoring and visual analysis of learning activities. The main purpose of LDs is to increase awareness, to support self-assessment and reflection and, when used in collaborative learning platforms (CLP), to improve the collaboration among learners. Collaborative learning platforms serve as tools to bring learners together, who share the same interests and ideas and are willing to work and learn together – a process which, ideally, leads to effective knowledge building. However, there are collaboration and communication factors which affect the effectiveness of knowledge creation – human, social and motivational factors, design issues, technical conditions, and others. In this paper we introduce a learning dashboard – the Visualizer – that serves the purpose of (statistically) analyzing and exploring the behaviour of communities and users. Visualizer allows a learner to become aware of other learners with similar characteristics and also to draw comparisons with individuals having similar learning goals. It also helps a teacher become aware of how individuals working in the groups (learning communities) interact with one another and across groups.
Fessl Angela, Wesiak Gudrun, Pammer-Schindler Viktoria
2018
Managing knowledge in periods of digital change requires not only changes in learning processes but also in knowledge transfer. For this knowledge transfer, we see reflective learning as an important strategy to keep the vast body of theoretical knowledge fresh and up-to-date, and to transfer theoretical knowledge to practical experience. In this work, we present a study situated in a qualification program for stroke nurses in Germany. In the seven-week study, 21 stroke nurses used a quiz on medical knowledge as an additional learning instrument. The quiz contained typical quiz questions (“content questions”) as well as reflective questions that aimed at stimulating nurses to reflect on the practical relevance of the learned knowledge. We particularly looked at how reflective questions can support the transfer of theoretical knowledge into practice. The results show that by playful learning and presenting reflective questions at the right time, participants reflected and related theoretical knowledge to practical experience.
2018
Vibrotactile skin-reading uses wearable vibrotactile displays to convey dynamically generated textual information. Such wearable displays have the potential to be used in a broad range of applications. Nevertheless, the reading process is passive, and users have no control over the reading flow. To compensate for this drawback, this paper investigates what kind of interactions are necessary for vibrotactile skin reading and the modalities of such interactions. An interaction concept for skin reading was designed by taking into account reading as a process. We performed a formative study with 22 participants to assess reading behaviour in word and sentence reading using a six-channel wearable vibrotactile display. Our study shows that word-based interactions in sentence reading are more often used and preferred by users compared to character-based interactions, and that users prefer gesture-based interaction for skin reading. Finally, we discuss how such wearable vibrotactile displays could be extended with sensors that would enable recognition of such gesture-based interaction. This paper contributes a set of guidelines for the design of wearable haptic displays for text communication.
Geiger Bernhard
2018
This short note presents results about the symmetric Jensen-Shannon divergence between two discrete mixture distributions p1 and p2. Specifically, for i=1,2, pi is the mixture of a common distribution q and a distribution p̃i with mixture proportion λi. In general, p̃1≠p̃2 and λ1≠λ2. We provide experimental and theoretical insight into the behavior of the symmetric Jensen-Shannon divergence between p1 and p2 as the mixture proportions or the divergence between p̃1 and p̃2 change. We also provide insight into scenarios where the supports of the distributions p̃1, p̃2, and q do not coincide.
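The quantities involved are easy to compute directly. The sketch below builds the two mixtures and evaluates their symmetric Jensen-Shannon divergence; the convention that λ weights the component p̃ (i.e., pi = (1−λi)·q + λi·p̃i) is an assumption about the note's notation, and the function names are illustrative.

```python
from math import log2

def kl(p, q):
    """Kullback-Leibler divergence D(p||q) in bits, with 0*log(0) taken as 0."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Symmetric Jensen-Shannon divergence: average KL to the midpoint mixture."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def mixture(q, p_tilde, lam):
    """p = (1 - lam)*q + lam*p_tilde, the mixture form considered in the note."""
    return [(1 - lam) * qi + lam * pi for qi, pi in zip(q, p_tilde)]
```

For example, mixing a uniform q with two disjoint components p̃1 and p̃2 at λ1 = λ2 = 0.5 gives two distinct mixtures whose JSD lies strictly between 0 and 1 bit, and the JSD shrinks toward 0 as the λi shrink, since both mixtures then approach the common q.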
Ross-Hellauer Anthony, Schmidt Birgit, Kramer Bianca
2018
As open access (OA) to publications continues to gather momentum, we should continuously question whether it is moving in the right direction. A novel intervention in this space is the creation of OA publishing platforms commissioned by funding organizations. Examples include those of the Wellcome Trust and the Gates Foundation, as well as recently announced initiatives from public funders like the European Commission and the Irish Health Research Board. As the number of such platforms increases, it becomes urgently necessary to assess in which ways, for better or worse, this emergent phenomenon complements or disrupts the scholarly communications landscape. This article examines ethical, organizational, and economic strengths and weaknesses of such platforms, as well as usage and uptake to date, to scope the opportunities and threats presented by funder OA platforms in the ongoing transition to OA. The article is broadly supportive of the aims and current implementations of such platforms, finding them a novel intervention which stands to help increase OA uptake, control costs of OA, lower administrative burden on researchers, and demonstrate funders’ commitment to fostering open practices. However, the article identifies key areas of concern about the potential for unintended consequences, including the appearance of conflicts of interest, difficulties of scale, potential lock-in, and issues of the branding of research. The article ends with key recommendations for future consideration which include a focus on open scholarly infrastructure.
Geiger Bernhard
2018
This entry for the 2018 MDPI English Writing Prize has been published as a chapter of "The Global Benefits of Open Research", edited by Martyn Rittman.
Fernández Alonso, Miguel Yuste, Kern Roman
2018
Collection of environmental datasets recorded with Tinkerforge sensors and used in the development of a bachelor thesis on the topic of frequent pattern mining. The data was collected in several locations in the city of Graz, Austria, as well as an additional dataset recorded in Santander, Spain.
Fessl Angela, Kowald Dominik, Susana López Sola, Ana Moreno, Ricardo Alonso, Maturana, Thalmann_TU Stefan
2018
Learning analytics deals with tools and methods for analyzing and detecting patterns in order to support learners while learning in formal as well as informal learning settings. In this work, we present the results of two focus groups in which the effects of a learning resource recommender system and a dashboard based on analytics for everyday learning were discussed from two perspectives: (1) knowledge workers as self-regulated everyday learners (i.e., informal learning) and (2) teachers who serve as instructors for learners (i.e., formal learning). Our findings show that the advantages of analytics for everyday learning are three-fold: (1) it can enhance the motivation to learn, (2) it can make learning easier and broadens the scope of learning, and (3) it helps to organize and to systematize everyday learning.
Pammer-Schindler Viktoria, Fessl Angela, Wertner Alfred
2018
Becoming a data-savvy professional requires skills and competences in information literacy, communication and collaboration, and content creation in digital environments. In this paper, we present a concept for automatic learning guidance in relation to an information literacy curriculum. The learning guidance concept has three components: Firstly, an open learner model in terms of an information literacy curriculum is created. Based on the data collected in the learner model, learning analytics is used in combination with a corresponding visualization to present the current learning status of the learner. Secondly, reflection prompts in the form of sentence starters or reflective questions adaptive to the learner model aim to guide learning. Thirdly, learning resources are suggested that are structured along learning goals to motivate learners to progress. The main contribution of this paper is to discuss what we see as the main research challenges with respect to existing literature on open learner modeling, learning analytics, recommender systems for learning, and learning guidance.
Iacopo Vagliano, Franziska Günther, Mathias Heinz, Aitor Apaolaza, Irina Bienia, Breitfuß Gert, Till Blume, Chrysa Collyda, Fessl Angela, Sebastian Gottfried, Hasitschka Peter, Jasmin Kellermann, Thomas Köhler, Annalouise Maas, Vasileios Mezaris, Ahmed Saleh, Andrzej Skulimowski, Thalmann Stefan, Markel Vigo, Wertner Alfred, Michael Wiese, Ansgar Scherp
2018
In the Big Data era, people can access vast amounts of information, but often lack the time, strategies and tools to efficiently extract the necessary knowledge from it. Research and innovation staff needs to effectively obtain an overview of publications, patents, funding opportunities, etc., to derive an innovation strategy. The MOVING platform enables its users to improve their information literacy by training how to exploit data mining methods in their daily research tasks. Through a novel integrated working and training environment, the platform supports the education of data-savvy information professionals and enables them to deal with the challenges of Big Data and open innovation.
Luzhnica Granit, Veas Eduardo Enrique, Caitlyn Seim
2018
This paper investigates the effects of using passive haptic learning to train the skill of comprehending text from vibrotactile patterns. The method of transmitting messages, skin-reading, is effective at conveying rich information but its active training method requires full user attention, is demanding, time-consuming, and tedious. Passive haptic learning offers the possibility to learn in the background while performing another primary task. We present a study investigating the use of passive haptic learning to train for skin-reading.
Luzhnica Granit, Veas Eduardo Enrique
2018
Sensory substitution has been a research subject for decades, yet its applicability outside of research remains very limited. This has created scepticism among researchers that full sensory substitution may not even be possible [8]. In this paper, we do not substitute the entire perceptual channel. Instead, we follow a different approach which reduces the captured information drastically. We present concepts and implementation of two mobile applications which capture the user's environment, describe it in the form of text and then convey its textual description to the user through a vibrotactile wearable display. The applications target users with hearing and vision impairments.
Barreiros Carla, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2018
In the context of the Internet of Things (IoT), every device has sensing and computing capabilities to enhance many aspects of human life. There are more and more IoT devices in our homes and at our workplaces, and they still depend on human expertise and intervention for tasks such as maintenance and (re)configuration. Using biophilic design and calm computing principles, we developed a nature-inspired representation, BioIoT, to communicate sensor information. This visual language contributes to the users' well-being and performance while being as easy to understand as traditional data representations. Our work is based on the assumption that if machines are perceived to be more like living beings, users will take better care of them, which ideally would translate into better device maintenance. In addition, the users' overall well-being can be improved by bringing nature to their lives. In this work, we present two use case scenarios under which the BioIoT concept can be applied and demonstrate its potential benefits in households and at workplaces.
Lex Elisabeth, Wagner Mario, Kowald Dominik
2018
In this work, we propose a content-based recommendation approach to increase exposure to opposing beliefs and opinions. Our aim is to help provide users with more diverse viewpoints on issues, which are discussed in partisan groups from different perspectives. Since due to the backfire effect, people's original beliefs tend to strengthen when challenged with counter evidence, we need to expose them to opposing viewpoints at the right time. The preliminary work presented here describes our first step into this direction. As illustrative showcase, we take the political debate on Twitter around the presidency of Donald Trump.
Kowald Dominik, Lex Elisabeth
2018
The micro-blogging platform Twitter allows its nearly 320 million monthly active users to build a network of follower connections to other Twitter users (i.e., followees) in order to subscribe to content posted by these users. With this feature, Twitter has become one of the most popular social networks on the Web and was also the first platform that offered the concept of hashtags. Hashtags are freely chosen keywords, which start with the hash character, to annotate, categorize and contextualize Twitter posts (i.e., tweets). Although hashtags are widely accepted and used by the Twitter community, the heavy reuse of hashtags that are popular in personal Twitter networks (i.e., own hashtags and hashtags used by followees) can lead to filter bubble effects and thus to situations in which only content associated with these hashtags is presented to the user. These filter bubble effects are also highly associated with the concept of confirmation bias, which is the tendency to favor and reuse information that confirms personal preferences. One example would be a Twitter user who is interested in political tweets of US president Donald Trump. Depending on the hashtags used, the user could either be stuck in a pro-Trump (e.g., #MAGA) or contra-Trump (e.g., #fakepresident) filter bubble. Therefore, the goal of this paper is to study confirmation bias and filter bubble effects in hashtag usage on Twitter by treating the reuse of hashtags as a phenomenon that fosters confirmation bias.
Gursch Heimo, Silva Nelson, Reiterer Bernhard , Paletta Lucas , Bernauer Patrick, Fuchs Martin, Veas Eduardo Enrique, Kern Roman
2018
The project Flexible Intralogistics for Future Factories (FlexIFF) investigates human-robot collaboration in intralogistics teams in the manufacturing industry, which form a cyber-physical system consisting of human workers, mobile manipulators, manufacturing machinery, and manufacturing information systems. The workers use Virtual Reality (VR) and Augmented Reality (AR) devices to interact with the robots and machinery. The right information at the right time is key for making this collaboration successful. Hence, task scheduling for mobile manipulators and human workers must be closely linked with the enterprise's information systems, offering all actors on the shop floor a common view of the current manufacturing status. FlexIFF will provide useful, well-tested, and sophisticated solutions for cyber-physical systems in intralogistics, with humans and robots making the most of their strengths, working collaboratively and helping each other.
Lacic Emanuel, Kowald Dominik, Lex Elisabeth
2018
In this paper, we present work-in-progress on applying user pre-filtering to speed up and enhance recommendations based on Collaborative Filtering. We propose to pre-filter users in order to extract a smaller set of candidate neighbors, who exhibit a high number of overlapping entities, and to compute the final user similarities based on this set. To realize this, we exploit features of the high-performance search engine Apache Solr and integrate them into a scalable recommender system. We have evaluated our approach on a dataset gathered from Foursquare and our evaluation results suggest that our proposed user pre-filtering step can help to achieve both a better runtime performance as well as an increase in overall recommendation accuracy.
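The pre-filtering step described above can be illustrated with a minimal sketch. Note that the actual system is built on Apache Solr; the function, data, and thresholds below are hypothetical and only demonstrate the idea of restricting similarity computation to candidate neighbors with sufficient overlap:

```python
def recommend_with_prefilter(target, interactions, min_overlap=2, k=2):
    """Sketch of user pre-filtering for user-based CF.

    interactions: dict mapping user -> set of consumed items.
    Only users sharing at least `min_overlap` items with the target
    become candidate neighbors; similarities (here: Jaccard) are then
    computed on that smaller candidate set only.
    """
    target_items = interactions[target]
    # Pre-filtering step: keep only users with enough overlapping items.
    candidates = [
        u for u, items in interactions.items()
        if u != target and len(items & target_items) >= min_overlap
    ]
    # Compute similarities on the reduced candidate set.
    sims = sorted(
        ((len(interactions[u] & target_items) /
          len(interactions[u] | target_items), u) for u in candidates),
        reverse=True,
    )
    neighbors = [u for _, u in sims[:k]]
    # Recommend items the neighbors consumed that the target has not.
    recs = set().union(*(interactions[u] for u in neighbors)) - target_items
    return neighbors, recs

interactions = {
    "alice": {"i1", "i2", "i3"},
    "bob":   {"i1", "i2", "i4"},
    "carol": {"i2", "i3", "i5"},
    "dave":  {"i9"},            # filtered out: no overlap with alice
}
neighbors, recs = recommend_with_prefilter("alice", interactions)
```

The speed-up comes from never scoring users like "dave", whose overlap with the target is below the threshold.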
Kowald Dominik, Lacic Emanuel, Theiler Dieter, Lex Elisabeth
2018
In this paper, we present preliminary results of AFEL-REC, a recommender system for social learning environments. AFEL-REC is built upon a scalable software architecture to provide recommendations of learning resources in near real-time. Furthermore, AFEL-REC can cope with any kind of data that is present in social learning environments such as resource metadata, user interactions or social tags. We provide a preliminary evaluation of three recommendation use cases implemented in AFEL-REC and we find that utilizing social data in the form of tags is helpful for not only improving recommendation accuracy but also coverage. This paper should be valuable for both researchers and practitioners interested in providing resource recommendations in social learning environments.
Cuder Gerald, Baumgartner Christian
2018
Cancer is one of the most rapidly rising diseases in modern society and is defined by an uncontrolled growth of tissue. This growth is caused by mutations at the cellular level. In this thesis, a data-mining workflow was developed to find the responsible genes among thousands of irrelevant ones in three microarray datasets of different cancer types by applying machine learning methods such as classification and gene selection. In this work, four state-of-the-art selection algorithms are compared with a more sophisticated method, termed Stacked-Feature Ranking (SFR), further increasing the discriminatory ability in gene selection.
Dennerlein Sebastian, Kowald Dominik, Lex Elisabeth, Ley Tobias, Pammer-Schindler Viktoria
2018
Co-creation methods for interactive computer systems design are by now widely accepted as part of the methodological repertoire in any software development process. As the community is becoming more and more aware of the fact that software is driven by complex, artificially intelligent algorithms, the question arises what "co-creation of algorithms", in the sense of users explicitly shaping the parameters of algorithms during co-creation, could mean, and how it would work. Algorithms are not tangible like features in a tool, and their desired effects are harder to explain or understand. Therefore, we propose an iterative simulation-based co-design approach that allows algorithms to be co-created together with domain professionals by making their assumptions and effects observable. The proposal is a methodological idea for discussion within the EC-TEL community, yet to be applied in research practice.
Duricic Tomislav, Lacic Emanuel, Kowald Dominik, Lex Elisabeth
2018
User-based Collaborative Filtering (CF) is one of the most popular approaches to create recommender systems. This approach is based on finding the most relevant k users from whose rating history we can extract items to recommend. CF, however, suffers from data sparsity and the cold-start problem since users often rate only a small fraction of available items. One solution is to incorporate additional information into the recommendation process such as explicit trust scores that are assigned by users to others or implicit trust relationships that result from social connections between users. Such relationships typically form a very sparse trust network, which can be utilized to generate recommendations for users based on people they trust. In our work, we explore the use of a measure from network science, i.e. regular equivalence, applied to a trust network to generate a similarity matrix that is used to select the k-nearest neighbors for recommending items. We evaluate our approach on Epinions and we find that we can outperform related methods for tackling cold-start users in terms of recommendation accuracy.
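Regular equivalence can be sketched with one common formulation from network science, the Katz-like recursion sigma = alpha * A * sigma + I. The toy fixed-point iteration below is only an illustration of that measure, not the paper's implementation, and the example network is hypothetical:

```python
def regular_equivalence(adj, alpha=0.25, iters=100):
    """Iterate sigma <- alpha * A @ sigma + I towards the fixed point
    (I - alpha*A)^-1; converges when alpha < 1 / largest eigenvalue of A.
    adj is a nested-list adjacency matrix of the trust network."""
    n = len(adj)
    identity = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    sigma = [row[:] for row in identity]
    for _ in range(iters):
        # prod = A @ sigma
        prod = [[sum(adj[i][k] * sigma[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
        # sigma = alpha * prod + I
        sigma = [[alpha * prod[i][j] + identity[i][j] for j in range(n)]
                 for i in range(n)]
    return sigma

# Toy trust network 0 - 1 - 2 (a path): similarity decays with distance,
# so even node pairs with no direct trust edge get a nonzero similarity.
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
sigma = regular_equivalence(path)
```

In the cold-start setting, rows of such a similarity matrix can stand in for rating-overlap similarity when selecting the k nearest neighbors, since they are nonzero even for users with very few ratings.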
Cicchinelli Analia, Veas Eduardo Enrique, Pardo Abelardo, Pammer-Schindler Viktoria, Fessl Angela, Barreiros Carla, Lindstaedt Stefanie
2018
This paper aims to identify self-regulation strategies from students' interactions with the learning management system (LMS). We used learning analytics techniques to identify metacognitive and cognitive strategies in the data. We define three research questions that guide our studies, analyzing i) self-assessments of motivation and self-regulation strategies using standard methods to draw a baseline, ii) interactions with the LMS to find traces of self-regulation in observable indicators, and iii) self-regulation behaviours over the course duration. The results show that the observable indicators can better explain self-regulatory behaviour and its influence on performance than preliminary subjective assessments.
Silva Nelson, Schreck Tobias, Veas Eduardo Enrique, Sabol Vedran, Eggeling Eva, Fellner Dieter W.
2018
We developed a new concept to improve the efficiency of visual analysis through visual recommendations. It uses a novel eye-gaze based recommendation model that aids users in identifying interesting time-series patterns. Our model combines time-series features and eye-gaze interests, captured via an eye-tracker. Mouse selections are also considered. The system provides an overlay visualization with recommended patterns, and an eye-history graph, that supports the users in the data exploration process. We conducted an experiment with 5 tasks where 30 participants explored sensor data of a wind turbine. This work presents results on pre-attentive features, and discusses the precision/recall of our model in comparison to final selections made by users. Our model helps users to efficiently identify interesting time-series patterns.
Fessl Angela, Wertner Alfred, Pammer-Schindler Viktoria
2018
In this demonstration paper, we describe a prototype that visualizes the usage of different search interfaces on a single search platform, with the goal of motivating users to explore alternative search interfaces. The underlying rationale is that the one-line input to search engines is by now so standard that we can assume users' search behavior to be habitualized. This means that users may be reluctant to explore alternatives even though these may be better suited to their context of use or search task.
di Sciascio Maria Cecilia, Brusilovsky Peter, Veas Eduardo Enrique
2018
Information-seeking tasks with learning or investigative purposes are usually referred to as exploratory search. Exploratory search unfolds as a dynamic process where the user, amidst navigation, trial-and-error and on-the-fly selections, gathers and organizes information (resources). A range of innovative interfaces with increased user control has been developed to support the exploratory search process. In this work we present our attempt to increase the power of exploratory search interfaces by using ideas of social search, i.e., leveraging information left by past users of information systems. Social search technologies are highly popular nowadays, especially for improving ranking. However, current approaches to social ranking do not allow users to decide to what extent social information should be taken into account for result ranking. This paper presents an interface that integrates social search functionality into an exploratory search system in a user-controlled way that is consistent with the nature of exploratory search. The interface incorporates control features that allow the user to (i) express information needs by selecting keywords and (ii) express preferences for incorporating social wisdom based on tag matching and user similarity. The interface promotes search transparency through color-coded stacked bars and rich tooltips. In an online study investigating system accuracy and subjective aspects with a structural model, we found that, when users actively interacted with all its control features, the hybrid system outperformed a baseline content-based-only tool and users were more satisfied.
Pammer-Schindler Viktoria, Thalmann Stefan, Fessl Angela, Füssel Julia
2018
Traditionally, professional learning for senior professionals is organized around face-2-face trainings. Virtual trainings seem to offer an opportunity to reduce costs related to travel and travel time. In this paper we present a comparative case study that investigates the differences between traditional face-2-face trainings in physical reality and virtual trainings via WebEx. Our goal is to identify how the way of communication impacts interaction between trainees, between trainees and trainers, and how it impacts interruptions. We present qualitative results from observations and interviews of three cases in different setups (traditional classroom, web-based with all participants co-located, web-based with all participants at different locations) and with overall 25 training participants and three trainers. The study is set within one of the Big Four global auditing companies, with advanced senior auditors as the learning cohort.
Kaiser Rene
2018
Production companies typically have not utilized video content and video technology in factory environments to a significant extent in the past. However, the current Industry 4.0 movement inspires companies to reconsider production processes and job qualifications for their shop floor workforce. Infrastructure and machines get connected to central manufacturing execution systems in digitization and datafication efforts. In the realm of this fourth industrial revolution, companies are encouraged to revisit their strategy regarding video-based applications as well. This paper discusses the current situation and selected aspects of opportunities and challenges of video technology that might enable added value in such environments.
Kaiser Rene
2018
This paper aims to contribute to the discussion on 360° video storytelling. It describes the 'Virtual Director' concept, an enabling technology that was developed to personalize video presentation in applications where multiple live streams are available at the same time. Users are supported in dynamically changing viewpoints, as the Virtual Director essentially automates the tasks of a human director. As research prototypes on a proof-of-concept maturity level, this approach has been evaluated for personalized live event broadcast, group video communication and distributed theatre performances. While on the capture side a 180° high-resolution panoramic video feed has been used in one of these application scenarios, so far only traditional 2D video screens were investigated for playout. The research question this paper aims to contribute to is how technology in general, and an adaptation of the Virtual Director concept in particular, could assist users in their needs when consuming 360° content, both live and recorded. In contexts when users do not want to enjoy the freedom to look into any direction, or when content creators want them to look in a certain direction, how could the interaction with and intervention of a Virtual Director be applied from a storytelling point of view?
Kowald Dominik
2018
Social tagging systems enable users to collaboratively assign freely chosen keywords (i.e., tags) to resources (e.g., Web links). In order to support users in finding descriptive tags, tag recommendation algorithms have been proposed. One issue of current state-of-the-art tag recommendation algorithms is that they are often designed in a purely data-driven way and thus lack a thorough understanding of the cognitive processes that play a role when people assign tags to resources. A prominent example is the activation equation of the cognitive architecture ACT-R, which formalizes activation processes in human memory to determine if a specific memory unit (e.g., a word or tag) will be needed in a specific context. It is the aim of this thesis to investigate if a cognitive-inspired approach, which models activation processes in human memory, can improve tag recommendations. For this, the relation between activation processes in human memory and usage practices of tags is studied, which reveals that (i) past usage frequency, (ii) recency, and (iii) semantic context cues are important factors when people reuse tags. Based on this, a cognitive-inspired tag recommendation approach termed BLLAC+MPr is developed based on the activation equation of ACT-R. An extensive evaluation using six real-world folksonomy datasets shows that BLLAC+MPr outperforms current state-of-the-art tag recommendation algorithms with respect to various evaluation metrics. Finally, BLLAC+MPr is utilized for hashtag recommendations in Twitter to demonstrate its generalizability in related areas of tag-based recommender systems. The findings of this thesis demonstrate that activation processes in human memory can be utilized to improve not only social tag recommendations but also hashtag recommendations. This opens up a number of possible research strands for future work, such as the design of cognitive-inspired resource recommender systems.
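The frequency and recency factors (i) and (ii) correspond to the base-level activation component of ACT-R's activation equation, which in the ACT-R literature is commonly written as

```latex
B_i = \ln\left( \sum_{j=1}^{n} t_j^{-d} \right)
```

where $t_j$ is the time elapsed since the $j$-th use of memory unit $i$ (e.g., a tag), $n$ is the number of past uses, and $d$ is a decay parameter, often set to 0.5. Frequent use raises the sum, while recent use keeps individual terms large, which is exactly how the two factors enter the recommendation model.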
Ross-Hellauer Anthony, Kowald Dominik, Lex Elisabeth
2018
Fruhwirth Michael, Breitfuß Gert, Pammer-Schindler Viktoria
2018
The increasing amount of generated data and advances in technology and data analytics are enablers and drivers for new business models with data as a key resource. Currently, established organisations struggle with identifying the value and benefits of data and lack know-how on how to develop new products and services based on data. There is very little research that is narrowly focused on data-driven business model innovation in established organisations. The aim of this research is to investigate existing activities within Austrian enterprises with regard to exploring data-driven business models and the challenges encountered in this endeavour. The outcome of this research-in-progress paper is a set of categories of challenges, related to organisation, business and technology, that established organisations in Austria face during data-driven business model innovation.
Cuder Gerald, Breitfuß Gert, Kern Roman
2018
Electric vehicles have enjoyed substantial growth in recent years. One essential part of ensuring their success in the future is a well-developed and easy-to-use charging infrastructure. Since charging stations generate a lot of (big) data, gaining useful information out of this data can help to push the transition to E-Mobility. In a joint research project, the Know-Center, together with has.to.be GmbH, applied data analytics methods and visualization technologies to the provided data sets. One objective of the research project is to provide a consumption forecast based on the historical consumption data. Based on this information, the operators of charging stations are able to optimize the energy supply. Additionally, the infrastructure data were analysed with regard to "predictive maintenance", aiming to optimize the availability of the charging stations. Furthermore, advanced prediction algorithms were applied to provide services to the end user regarding the availability of charging stations.
Andrusyak Bohdan, Kugi Thomas, Kern Roman
2018
The stock and foreign exchange markets are the two fundamental financial markets in the world and play a crucial role in international business. This paper examines the possibility of predicting the foreign exchange market via machine learning techniques, taking the stock market into account. We compare prediction models based on algorithms from the fields of shallow and deep learning. Our models of foreign exchange markets based on information from the stock market have been shown to be able to predict the future of foreign exchange markets with an accuracy of over 60%. This can be seen as an indicator of a strong link between the two markets. Our insights offer a chance of a better understanding guiding the future of market predictions. We found the accuracy depends on the time frame of the forecast and the algorithms used, where deep learning tends to perform better for farther-reaching forecasts.
Lacic Emanuel, Traub Matthias, Duricic Tomislav, Haslauer Eva, Lex Elisabeth
2018
A challenge for importers in the automobile industry is adjusting to rapidly changing market demands. In this work, we describe a practical study of car import planning based on the monthly car registrations in Austria. We model the task as a data-driven forecasting problem and we implement four different prediction approaches. One utilizes a seasonal ARIMA model, while another is based on an LSTM-RNN; both are compared to linear and seasonal baselines. In our experiments, we evaluate 33 different brands by predicting the number of registrations for the next month and for the year to come.
Lassnig Markus, Stabauer Petra, Breitfuß Gert, Mauthner Katrin
2018
Numerous research results in the field of business model innovation have shown that over 90% of all business models of the past 50 years emerged from a recombination of existing concepts. In principle, this also holds for digital business model innovations. Given the breadth of potential digital business model innovations, the authors wanted to know which model patterns carry which significance in business practice. Therefore, digital transformation through new business models was investigated in an empirical study based on qualitative interviews with 68 companies. Seven suitable business model patterns were identified, classified by their disruption potential from evolutionary to revolutionary, and the degree of their realization in the companies was analysed. The highly condensed conclusion is that the topic of business model innovation through Industry 4.0 and digital transformation has arrived at the companies. However, there are very different speeds of implementation and degrees of novelty in the business model ideas. The step-by-step further development of business models (evolutionary) is preferred by most companies, since the fundamental nature of the value proposition remains intact. In contrast, there are also companies that are already undertaking radical changes affecting the entire business logic. Accordingly, this article presents a clustering of business model innovators, from hesitators via followers and optimizers to leaders in business model innovation.
Wertner Alfred, Stern Hermann, Pammer-Schindler Viktoria, Weghofer Franz
2018
Voice control is a potentially very powerful tool and, in theory (basic voice input), should have been usable for 20 years. In the past, however, it has failed in industrial environments primarily due to immature hardware or even the need for an active external data connection. At Magna Steyr in Graz, order picking has so far been carried out with the help of scanners. This process could be supported very effectively by end-to-end voice control if it were simple, reliable and compliant to implement, and if it continued to treat the human as the central actor (keyword: human in the loop). Therefore, existing speech recognition systems for mobile platforms and suitable off-the-shelf hardware (smartphones and headsets) were selected and implemented as a prototype Android application ("Talk2Me"). The goal was to be able to make a statement about the usability of voice-controlled mobile applications in industrial environments. With the open-source speech recognition kit CMU Sphinx, combined with dictionaries specially adapted to the vocabulary of the modelled processes, we achieved a very good recognition rate without having to train the language model individually for each employee. Talk2Me innovatively shows how proven, inexpensive and readily available technology (smartphones with speech recognition as input and speech synthesis as output) can find its way into our everyday work.
d'Aquin Mathieu , Kowald Dominik, Fessl Angela, Thalmann Stefan, Lex Elisabeth
2018
The goal of AFEL is to develop, pilot and evaluate methods and applications, which advance informal/collective learning as it surfaces implicitly in online social environments. The project is following a multi-disciplinary, industry-driven approach to the analysis and understanding of learner data in order to personalize, accelerate and improve informal learning processes. Learning Analytics and Educational Data Mining traditionally relate to the analysis and exploration of data coming from learning environments, especially to understand learners' behaviours. However, studies have long demonstrated that learning activities also happen outside of formal educational platforms. This includes informal and collective learning usually associated, as a side effect, with other (social) environments and activities. Relying on real data from a commercially available platform, the aim of AFEL is to provide and validate the technological grounding and tools for exploiting learning analytics on such learning activities. This will be achieved in relation to cognitive models of learning and collaboration, which are necessary to the understanding of loosely defined learning processes in online social environments. Applying the skills available in the consortium to a concrete set of live, industrial online social environments, AFEL will tackle the main challenges of informal learning analytics through 1) developing the tools and techniques necessary to capture information about learning activities from (not necessarily educational) online social environments; 2) creating methods for the analysis of such informal learning data, based on combining feature engineering and visual analytics with cognitive models of learning and collaboration; 3) demonstrating the potential of the approach in improving the understanding of informal learning, and the way it can be better supported; and 4) evaluating all of the former in real-world, large-scale applications and platforms.
Kowald Dominik, Seitlinger Paul , Ley Tobias , Lex Elisabeth
2018
In this paper, we present the results of an online study with the aim to shed light on the impact that semantic context cues have on the user acceptance of tag recommendations. Therefore, we conducted a work-integrated social bookmarking scenario with 17 university employees in order to compare the user acceptance of a context-aware tag recommendation algorithm called 3Layers with the user acceptance of a simple popularity-based baseline. In this scenario, we validated and verified the hypothesis that semantic context cues have a higher impact on the user acceptance of tag recommendations in a collaborative tagging setting than in an individual tagging setting. With this paper, we contribute to the sparse line of research presenting online recommendation studies.
Koncar Philipp
2018
This synthetically generated dataset can be used to evaluate outlier detection algorithms. It has 10 attributes and 1000 observations, of which 100 are labeled as outliers. Two-dimensional combinations of attributes form differently shaped clusters: Attributes 0 & 1 form two circular clusters; Attributes 2 & 3 two banana-shaped clusters; Attributes 4 & 5 three point clouds; Attributes 6 & 7 two point clouds with different variances; and Attributes 8 & 9 three anisotropic clusters. The "outlier" column states whether an observation is an outlier or not. Additionally, the .zip file contains 10 stratified randomized train/test splits (70% train, 30% test).
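A two-attribute slice of such a benchmark can be reproduced with a short sketch. This is illustrative only: the actual dataset has ten attributes and was generated differently; the cluster centers, spreads, and counts below are made up:

```python
import random

def make_outlier_dataset(n_inliers=900, n_outliers=100, seed=42):
    """Sketch of a 2-D outlier benchmark: two Gaussian point clouds
    of inliers plus uniformly scattered outliers, with an 'outlier'
    label column (1 = outlier, 0 = inlier)."""
    rng = random.Random(seed)
    rows = []
    centers = [(-3.0, 0.0), (3.0, 0.0)]
    for _ in range(n_inliers):
        cx, cy = rng.choice(centers)
        # Inlier: Gaussian point cloud around one of the cluster centers.
        rows.append((rng.gauss(cx, 0.5), rng.gauss(cy, 0.5), 0))
    for _ in range(n_outliers):
        # Outlier: scattered uniformly over a much wider range.
        rows.append((rng.uniform(-10, 10), rng.uniform(-10, 10), 1))
    rng.shuffle(rows)
    return rows

data = make_outlier_dataset()
```

Keeping the label in the last column mirrors the dataset's "outlier" column, so standard precision/recall evaluation of a detector is straightforward.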
Lovric Mario
2018
The objects are numbered. The Y-variable is the boiling point. The other features are structural features of molecules. In the outlier column, outliers are assigned a value of 1. The data is derived from a published chemical dataset on boiling point measurements [1] and from public data [2]. Features were generated by means of the RDKit Python library [3]. The dataset was infused with known outliers (~5%) based on significant structural differences, i.e. polar and non-polar molecules. [1] Cherqaoui D., Villemin D. Use of a Neural Network to determine the Boiling Point of Alkanes. J Chem Soc Faraday Trans. 1994;90(1):97–102. [2] https://pubchem.ncbi.nlm.nih.gov/ [3] RDKit: Open-source cheminformatics; http://www.rdkit.org
Lovric Mario, Stipaničev Draženka , Repec Siniša , Malev Olga , Klobučar Göran
2018
Lacic Emanuel, Kowald Dominik, Reiter-Haas Markus, Slawicek Valentin, Lex Elisabeth
2018
In this work, we address the problem of recommending jobs to university students. For this, we explore the impact of using item embeddings for a content-based job recommendation system. Furthermore, we utilize a model from human memory theory to integrate the factors of frequency and recency of job posting interactions for combining item embeddings. We evaluate our job recommendation system on a dataset of the Austrian student job portal Studo using prediction accuracy, diversity as well as adapted novelty, which is introduced in this work. We find that utilizing frequency and recency of interactions with job postings for combining item embeddings results in a robust model with respect to accuracy and diversity, but also provides the best adapted novelty results.
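Combining item embeddings by frequency and recency can be sketched with a base-level-learning-style weighting from human memory theory. The function, decay value, and toy data below are illustrative assumptions, not the paper's exact scheme:

```python
import math

def combine_embeddings(interactions, embeddings, now, d=0.5):
    """Build a user profile vector by weighting each interacted item's
    embedding with a power-law recency weight t^(-d), where t is the
    time since the interaction. Repeated (frequent) interactions with
    the same item accumulate; recent ones get larger weights.

    interactions: list of (item_id, timestamp) pairs
    embeddings:   dict item_id -> list[float] (all same dimension)
    """
    dim = len(next(iter(embeddings.values())))
    profile = [0.0] * dim
    for item, ts in interactions:
        w = (now - ts) ** (-d)           # recency weight
        for i, x in enumerate(embeddings[item]):
            profile[i] += w * x          # frequency: repeats accumulate
    # L2-normalize so cosine scoring against job embeddings is cheap.
    norm = math.sqrt(sum(x * x for x in profile)) or 1.0
    return [x / norm for x in profile]

# Toy example: a recent interaction with "a" outweighs an old one with "b".
profile = combine_embeddings(
    [("a", 9.0), ("b", 1.0)],
    {"a": [1.0, 0.0], "b": [0.0, 1.0]},
    now=10.0,
)
```

Candidate job postings can then be ranked by cosine similarity between their embeddings and this profile vector.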
Hasani-Mavriqi Ilire, Kowald Dominik, Helic Denis, Lex Elisabeth
2018
In this paper, we study the process of opinion dynamics and consensus building in online collaboration systems, in which users interact with each other following their common interests and their social profiles. Specifically, we are interested in how user similarity and social status in the community, as well as the interplay of those two factors, influence the process of consensus dynamics. For our study, we simulate the diffusion of opinions in collaboration systems using the well-known Naming Game model, which we extend by incorporating an interaction mechanism based on user similarity and user social status. We conduct our experiments on collaborative datasets extracted from the Web. Our findings reveal that when users are guided by their similarity to other users, the process of consensus building in online collaboration systems is delayed. A suitable increase of the influence of user social status on their actions can in turn facilitate this process. In summary, our results suggest that achieving an optimal consensus-building process in collaboration systems requires an appropriate balance between those two factors.
Luzhnica Granit, Veas Eduardo Enrique
2018
Vibrotactile skin-reading uses wearable vibrotactile displays to convey dynamically generated textual information. Such wearable displays have the potential to be used in a broad range of applications. Nevertheless, the reading process is passive, and users have no control over the reading flow. To compensate for this drawback, this paper investigates which kinds of interactions are necessary for vibrotactile skin reading and the modalities of such interactions. An interaction concept for skin reading was designed by taking into account reading as a process. We performed a formative study with 22 participants to assess reading behaviour in word and sentence reading using a six-channel wearable vibrotactile display. Our study shows that word-based interactions in sentence reading are used more often and preferred by users compared to character-based interactions, and that users prefer gesture-based interaction for skin reading. Finally, we discuss how such wearable vibrotactile displays could be extended with sensors that would enable recognition of such gesture-based interaction. This paper contributes a set of guidelines for the design of wearable haptic displays for text communication.
Lovric Mario, Krebs Sarah, Cemernek David, Kern Roman
2018
The use of big data technologies has a deep impact on today's research (Tetko et al., 2016) and industry (Li et al., n.d.), but also on public health (Khoury and Ioannidis, 2014) and the economy (Einav and Levin, 2014). These technologies are particularly important for manufacturing sites, where complex processes are coupled with large amounts of data, for example in the chemical and steel industries. This data originates from sensors, processes, and quality testing. Typical applications of these technologies are related to predictive maintenance and the optimisation of production processes. The media have made "big data" a hot buzzword without going too deep into the topic. We noted a lack of understanding among users of the technologies and techniques behind it, which makes the application of such technologies challenging. In practice, the data is often unstructured (Gandomi and Haider, 2015), and many resources are devoted to cleaning and preparation, but also to understanding causalities and relevance among features. The latter requires domain knowledge, making big data projects challenging not only from a technical perspective but also from a communication perspective. Therefore, there is a need to rethink the big data concept among researchers and manufacturing experts, including topics like data quality, knowledge exchange, and the technology required. The scope of this presentation is to present the main pitfalls in applying big data technologies among users from industry, explain scaling principles in big data projects, and demonstrate common challenges in an industrial big data project.
Lovric Mario
2018
Today's data volume is increasing significantly, and "big data" is a strong buzzword in research nowadays. Therefore, chemistry students have to be well prepared for the coming age in which they not only run the laboratory but also act as modellers and data scientists. This tutorial covers the very basics of molecular modeling and data handling with Python and Jupyter Notebook. It is the first in a series aiming to cover the relevant topics in machine learning, QSAR and molecular modeling, as well as the basics of Python programming.
Santos Tiago, Kern Roman
2018
Semiconductor manufacturing processes critically depend on hundreds of highly complex process steps, which may cause critical deviations in the end product. Hence, a better understanding of wafer test data patterns, which represent stress tests conducted on devices in semiconductor material slices, may lead to an improved production process. However, the shapes and types of these wafer patterns, as well as their relation to single process steps, are unknown. In a first step to address these issues, we tailor and apply a variational auto-encoder (VAE) to wafer pattern images. We find that the VAE's generator allows for explorative wafer pattern analysis, and its encoder provides an effective dimensionality reduction algorithm which, in a clustering application, performs better than several baselines such as t-SNE and yields interpretable clusters of wafer patterns.
Urak Günter, Ziak Hermann, Kern Roman
2018
The task of federated search is to combine results from multiple knowledge bases into a single, aggregated result list, where the items typically range from textual documents to images. These knowledge bases are also called sources, and the process of choosing the actual subset of sources for a given query is called source selection. A scenario where these sources do not provide information about their content in a standardized way is called an uncooperative setting. In our work we focus on knowledge bases providing long-tail content, i.e., rather specialized sources offering a low number of relevant documents. These sources are often neglected in favor of more popular knowledge sources, both by today's Web users as well as by most of the existing source selection techniques. We propose a system for source selection which (i) could be utilized to automatically detect long-tail knowledge bases and (ii) generates aggregated search results that tend to incorporate results from these long-tail sources. Starting from the current state of the art, we developed components that allow adjusting the amount of contribution from long-tail sources. Our evaluation is conducted on the TREC 2014 Federated Web Search dataset. As this dataset also favors the most popular sources, systems that include many long-tail knowledge bases will yield low performance measures. Here, we propose a system where just a few relevant long-tail sources are integrated into the list of more popular knowledge bases. Additionally, we evaluated the implications of an uncooperative setting, where only minimal information about the sources is available to the federated search system. Here a severe drop in performance is observed once the share of long-tail sources is higher than 40%. Our work is intended to steer the development of federated search systems that aim at increasing the diversity and coverage of aggregated search results.
Rexha Andi, Kröll Mark, Ziak Hermann, Kern Roman
2018
The goal of our work is inspired by the task of associating segments of text with their real authors. In this work, we focus on analyzing the way humans judge different writing styles. This analysis can help to better understand this process and thus to simulate or mimic such behavior accordingly. Unlike the majority of the work done in this field (i.e., authorship attribution, plagiarism detection, etc.), which uses content features, we focus only on the stylometric, i.e. content-agnostic, characteristics of authors. We conducted two pilot studies to determine whether humans can identify authorship among documents with high content similarity. The first was a quantitative experiment involving crowd-sourcing, while the second was a qualitative one executed by the authors of this paper. Both studies confirmed that this task is quite challenging. To gain a better understanding of how humans tackle such a problem, we conducted an exploratory data analysis on the results of the studies. In the first experiment, we compared the decisions against content features and stylometric features; in the second, the evaluators described the process and the features on which their judgment was based. The findings of our detailed analysis could (i) help to improve algorithms such as automatic authorship attribution and plagiarism detection, (ii) assist forensic experts or linguists in creating profiles of writers, (iii) support intelligence applications in analyzing aggressive and threatening messages, and (iv) help editors check conformity with, for instance, a journal-specific writing style.
Babić Sanja, Barišić Josip, Stipaničev Draženka, Repec Siniša, Lovric Mario, Malev Olga, Čož-Rakovac Rozalindra, Klobučar GIV
2018
Quantitative chemical analyses of 428 organic contaminants (OCs) confirmed the presence of 313 OCs in sediment extracts from the river Sava, Croatia. Pharmaceuticals were present in higher concentrations than pesticides, confirming their increasing threat to freshwater ecosystems. Toxicity evaluation of the sediment extracts from four locations (Jesenice, Rugvica, Galdovo and Lukavec) using the zebrafish embryotoxicity test (ZET), accompanied by semi-quantitative histopathological analyses, exhibited good correlation with the cumulative number and concentrations of OCs at the investigated sites (10,048.6, 15,222.8, 1,247.6, and 9,130.5 ng/g, respectively) and proved its role as a good indicator of the toxic potential of complex contaminant mixtures. Toxicity prediction of sediment extracts and sediment was assessed using the Toxic Unit (TU) approach and PBT (persistence, bioaccumulation and toxicity) ranking. Also, prior-knowledge-informed chemical-gene interaction models were generated and graph mining approaches used to identify the OCs and genes most likely to be influential in these mixtures. The predicted toxicity of sediment extracts (TUext) for the sampled locations was similar to the results obtained by ZET and the associated histopathology, with the Rugvica sediment being the most toxic, followed by Jesenice, Lukavec and Galdovo. Sediment TU (TUsed) favoured OCs with a low octanol-water partition coefficient, like the herbicide glyphosate and the antibiotics ciprofloxacin and sulfamethazine, thus indicating locations containing higher concentrations of these OCs (Galdovo and Rugvica) as most toxic. The results suggest that comprehensive in silico sediment toxicity predictions should give equal attention to organic contaminants with either very low or very high log Kow.
Barreiros Carla, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2018
This paper describes a novel visual metaphor to communicate sensor information of a connected device. The Internet of Things aims to extend every device with sensing and computing capabilities. A byproduct is that even domestic machines become increasingly complex, tedious to understand and maintain. This paper presents a prototype instrumenting a coffee machine with sensors. The machine streams the sensor data, which is picked up by an augmented reality application serving a nature metaphor. The nature metaphor, BioAR, represents the status derived from the coffee machine sensors in the features of a 3D virtual tree. The tree is meant to pass for a living proxy of the machine it represents. The metaphor, shown either with AR or a simple holographic display, reacts to the user manipulation of the machine and its workings. A first user study validates that the representation is correctly understood, and that it inspires affect for the machine. A second user study validates that the metaphor scales to a large number of machines.
Breitfuß Gert, Berger Martin, Doerrzapf Linda
2018
The initiative "Urban Mobility Labs" (UML), driven by the Austrian Ministry of Transport, Innovation and Technology, was started to support the setup of innovative and experimental environments for the research, testing, implementation and transfer of mobility solutions. This happens by incorporating the scientific community, citizens, and stakeholders in politics and administration, as well as other groups. The emerging structural frame shall enhance the efficiency and effectiveness of the innovation process. In this paper, insights and an in-depth analysis of the approaches and experiences gained in the eight UML exploratory projects are outlined. These projects were analyzed, systematized, and enriched with further considerations. Furthermore, their knowledge growth as user-centered innovation environments was documented during the exploratory phase.
Bassa Akim, Kröll Mark, Kern Roman
2018
Open Information Extraction (OIE) is the task of extracting relations from text without the need for domain-specific training data. Currently, most of the research on OIE is devoted to the English language, but little or no research has been conducted on other languages, including German. We tackled this problem and present GerIE, an OIE parser for the German language. We started by surveying the available literature on OIE with a focus on concepts which may also apply to the German language. Our system is built upon the output of a dependency parser, on which a number of hand-crafted rules are executed. For the evaluation we created two dedicated datasets, one derived from news articles and one based on texts from an encyclopedia. Our system achieves F-measures of up to 0.89 for sentences that have been correctly preprocessed.
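A minimal, hypothetical illustration of the rule-over-dependency-parse idea: one hand-crafted subject-verb-object rule applied to a toy parse. GerIE's actual rules for German are far richer, and the relation labels here loosely follow Universal Dependencies rather than the paper's scheme:

```python
def extract_svo(tokens, deps):
    """Extract (subject, verb, object) triples from a dependency parse using one
    hand-crafted rule: an nsubj and an obj that attach to the same verb head."""
    triples = []
    for i, (head, rel) in enumerate(deps):
        if rel == "nsubj":
            verb = head
            for j, (h2, r2) in enumerate(deps):
                if h2 == verb and r2 == "obj":
                    triples.append((tokens[i], tokens[verb], tokens[j]))
    return triples

# Toy parse: deps[i] = (index of head token, dependency relation of token i).
tokens = ["GerIE", "extracts", "relations"]
deps = [(1, "nsubj"), (-1, "root"), (1, "obj")]
triples = extract_svo(tokens, deps)
```

Real systems chain many such rules and handle coordination, passives, and separable German verb prefixes, but each rule is a pattern match over the parse like this one.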
Neuhold Robert, Gursch Heimo, Cik Michael
2018
Data collection on motorways for traffic management operations is traditionally based on local measurement points and camera monitoring systems. This work looks into social media as an additional data source for the Austrian motorway operator ASFINAG. A data-driven system called Driver's Dashboard was developed to collect incident descriptions from social media sources (Facebook, RSS feeds), to filter relevant messages, and to fuse them with local traffic data. All collected texts were analysed for concepts describing road situations, linking the texts from the web and social media with traffic messages and traffic data. Due to the Austrian characteristics of social media use and road transportation, very few messages are available compared to other studies: 3,586 messages were collected within a five-week period, and 7.1% of these were automatically annotated as traffic-relevant by the system. An evaluation of these traffic-relevant messages showed that 22% of them were actually relevant for the motorway operator. Furthermore, the traffic-relevant messages for the motorway operator were analysed in more detail to identify correlations between message text and traffic data characteristics. A correlation between message text and traffic data was found in nine of eleven messages by comparing the speed profiles and traffic state data with the message text.
Rauter, R., Zimek, M.
2017
New business opportunities in the digital economy are established when datasets describing a problem, data services solving the said problem, the required expertise, and infrastructure come together. For most real-world problems, finding the right data sources, services, consulting expertise, and infrastructure is difficult, especially since the market players change often. The Data Market Austria (DMA) offers a platform to bring datasets, data services, consulting, and infrastructure offers to a common marketplace. The recommender system included in DMA analyses all offerings to derive suggestions for collaboration between them, like which dataset could be best processed by which data service. The suggestions should help the customers on DMA to identify new collaborations reaching beyond traditional industry boundaries and to get in touch with new clients or suppliers in the digital domain. Human brokers will work together with the recommender system to match different offers and set up data value chains solving problems in various domains. In its final expansion stage, DMA is intended to be a central hub for all actors participating in the Austrian data economy, regardless of their industrial and research domain, to overcome traditional domain boundaries.
Lukas Sabine, Pammer-Schindler Viktoria, Almer Alexander, Schnabel Thomas
2017
Köfler Armin, Pammer-Schindler Viktoria, Almer Alexander, Schnabel Thomas
2017
Rexha Andi, Kröll Mark, Ziak Hermann, Kern Roman
2017
In this pilot study, we tried to capture humans' behavior when identifying the authorship of text snippets. First, we selected textual snippets from the introductions of scientific articles written by single authors. We then presented a source and four target snippets to the evaluators and asked them to rank the target snippets from the most to the least similar in terms of writing style. The dataset is composed of 66 experiments, manually checked to ensure the evaluators had no obvious hints for the ranking. For each experiment, we have evaluations from three different evaluators. Each experiment is presented in a single line of the CSV file: first the metadata of the source article (journal, title, authorship, snippet), then the metadata for the four target snippets (journal, title, authorship, snippet, written by the same author, published in the same journal), and finally the ranking given by each evaluator. This task was performed on the CrowdFlower platform. The headers of the CSV file are self-explanatory. The TXT file contains a human-readable version of the experiments. For more information about the extraction of the data, please consider reading our paper "Extending Scientific Literature Search by Including the Author's Writing Style" @BIR: http://www.gesis.org/en/services/events/events-archive/conferences/ecir-workshops/ecir-workshop-2017
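A sketch of how such per-experiment rows might be processed, here computing whether all three evaluators agreed on the most similar target snippet. The column names and the miniature inline CSV are hypothetical stand-ins; the released file carries many more metadata columns per source and target:

```python
import csv
import io

# Hypothetical miniature version of the released CSV (one experiment per line).
sample = """experiment_id,evaluator_1_rank,evaluator_2_rank,evaluator_3_rank
1,"B,A,C,D","B,C,A,D","B,A,D,C"
2,"C,D,A,B","A,C,D,B","C,A,D,B"
"""

def top_choice_agreement(csv_text):
    """For each experiment, check whether all three evaluators ranked the
    same target snippet as most similar to the source."""
    agreement = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        tops = {row[col].split(",")[0]
                for col in ("evaluator_1_rank", "evaluator_2_rank", "evaluator_3_rank")}
        agreement[row["experiment_id"]] = len(tops) == 1
    return agreement

result = top_choice_agreement(sample)
```

On the toy sample, experiment 1 shows full agreement on the top choice while experiment 2 does not; the same loop works on the real file once the actual header names are substituted.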
Kowald Dominik
2017
Social tagging systems enable users to collaboratively assign freely chosen keywords (i.e., tags) to resources (e.g., Web links). In order to support users in finding descriptive tags, tag recommendation algorithms have been proposed. One issue of current state-of-the-art tag recommendation algorithms is that they are often designed in a purely data-driven way and thus lack a thorough understanding of the cognitive processes that play a role when people assign tags to resources. A prominent example is the activation equation of the cognitive architecture ACT-R, which formalizes activation processes in human memory to determine if a specific memory unit (e.g., a word or tag) will be needed in a specific context. It is the aim of this thesis to investigate if a cognitive-inspired approach, which models activation processes in human memory, can improve tag recommendations. For this, the relation between activation processes in human memory and usage practices of tags is studied, which reveals that (i) past usage frequency, (ii) recency, and (iii) semantic context cues are important factors when people reuse tags. Based on this, a cognitive-inspired tag recommendation approach termed BLL_AC+MPr is developed based on the activation equation of ACT-R. An extensive evaluation using six real-world folksonomy datasets shows that BLL_AC+MPr outperforms current state-of-the-art tag recommendation algorithms with respect to various evaluation metrics. Finally, BLL_AC+MPr is utilized for hashtag recommendations in Twitter to demonstrate its generalizability in related areas of tag-based recommender systems. The findings of this thesis demonstrate that activation processes in human memory can be utilized to improve not only social tag recommendations but also hashtag recommendations. This opens up a number of possible research strands for future work, such as the design of cognitive-inspired resource recommender systems.
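For reference, the base-level learning component of the ACT-R activation equation mentioned above is commonly written as follows (a standard form from the ACT-R literature, not quoted from the thesis itself):

```latex
B_i = \ln\!\left( \sum_{j=1}^{n} t_j^{-d} \right)
```

where $n$ is the number of past uses of memory unit $i$ (capturing frequency), $t_j$ is the time elapsed since the $j$-th use (capturing recency), and $d$ is a decay parameter, conventionally set to $0.5$.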
Breitfuß Gert, Kaiser Rene_DB, Kern Roman, Kowald Dominik, Lex Elisabeth, Pammer-Schindler Viktoria, Veas Eduardo Enrique
2017
Proceedings of the Workshop Papers of i-Know 2017, co-located with International Conference on Knowledge Technologies and Data-Driven Business 2017 (i-Know 2017), Graz, Austria, October 11-12, 2017.
di Sciascio Maria Cecilia, Sabol Vedran, Veas Eduardo Enrique
2017
Whenever users engage in gathering and organizing new information, searching and browsing activities emerge at the core of the exploration process. As the process unfolds and new knowledge is acquired, interest drifts occur inevitably and need to be accounted for. Despite the advances in retrieval and recommender algorithms, real-world interfaces have remained largely unchanged: results are delivered in a relevance-ranked list. However, it quickly becomes cumbersome to reorganize resources along new interests, as any new search brings new results. We introduce an interactive user-driven tool that aims at supporting users in understanding, refining, and reorganizing documents on the fly as information needs evolve. Decisions regarding visual and interactive design aspects are tightly grounded in a conceptual model for exploratory search. In other words, the different views in the user interface address stages of awareness, exploration, and explanation unfolding along the discovery process, supported by a set of text-mining methods. A formal evaluation showed that gathering items relevant to a particular topic of interest with our tool incurs a lower cognitive load compared to a traditional ranked list. A second study reports on usage patterns and usability of the various interaction techniques within a free, unsupervised setting.
d'Aquin Mathieu , Adamou Alessandro , Dietze Stefan , Fetahu Besnik , Gadiraju Ujwal , Hasani-Mavriqi Ilire, Holz Peter, Kümmerle Joachim, Kowald Dominik, Lex Elisabeth, Lopez Sola Susana, Mataran Ricardo, Sabol Vedran, Troullinou Pinelopi, Veas Eduardo Enrique
2017
More and more learning activities take place online in a self-directed manner. Therefore, just as the idea of self-tracking activities for fitness purposes has gained momentum in the past few years, tools and methods for awareness and self-reflection on one's own online learning behavior appear as an emerging need for both formal and informal learners. Addressing this need is one of the key objectives of the AFEL (Analytics for Everyday Learning) project. In this paper, we discuss the different aspects of what needs to be put in place in order to enable awareness and self-reflection in online learning. We start by describing a scenario that guides the work done. We then investigate the theoretical, technical and support aspects that are required to enable this scenario, as well as the current state of the research in each aspect within the AFEL project. We conclude with a discussion of the ongoing plans from the project to develop learner-facing tools that enable awareness and self-reflection for online, self-directed learners. We also elucidate the need to establish further research programs on facets of self-tracking for learning that are necessarily going to emerge in the near future, especially regarding privacy and ethics.
Ross-Hellauer Anthony, Deppe A., Schmidt B.
2017
Open peer review (OPR) is a cornerstone of the emergent Open Science agenda. Yet to date no large-scale survey of attitudes towards OPR amongst academic editors, authors, reviewers and publishers has been undertaken. This paper presents the findings of an online survey, conducted for the OpenAIRE2020 project during September and October 2016, that sought to bridge this information gap in order to aid the development of appropriate OPR approaches by providing evidence about attitudes towards and levels of experience with OPR. The results of this cross-disciplinary survey, which received 3,062 full responses, show that the majority (60.3%) of respondents believe that OPR as a general concept should be mainstream scholarly practice (although attitudes to individual traits varied, and open-identities peer review was not generally favoured). Respondents were also in favour of other areas of Open Science, like Open Access (88.2%) and Open Data (80.3%). Among respondents we observed high levels of experience with OPR, with three out of four (76.2%) reporting having taken part in an OPR process as author, reviewer or editor. There were also high levels of support for most of the traits of OPR, particularly open interaction, open reports and final-version commenting. Respondents were against opening reviewer identities to authors, however, with more than half believing it would make peer review worse. Overall satisfaction with the peer review system used by scholarly journals seems to vary strongly across disciplines. Taken together, these findings are very encouraging for OPR's prospects of moving mainstream, but indicate that due care must be taken to avoid a "one-size-fits-all" solution and to tailor such systems to differing (especially disciplinary) contexts. OPR is an evolving phenomenon, and hence future studies are to be encouraged, especially to further explore differences between disciplines and monitor the evolution of attitudes.
Seifert Christin, Bailer Werner, Orgel Thomas, Gantner Louis, Kern Roman, Ziak Hermann, Petit Albin, Schlötterer Jörg, Zwicklbauer Stefan, Granitzer Michael
2017
The digitization initiatives of the past decades have led to a tremendous increase in digitized objects in the cultural heritage domain. Although digitally available, these objects are often not easily accessible for interested users because of the distributed allocation of the content in different repositories and the variety in data structures and standards. When users search for cultural content, they first need to identify the specific repository and then need to know how to search within this platform (e.g., the usage of specific vocabulary). The goal of the EEXCESS project is to design and implement an infrastructure that enables ubiquitous access to digital cultural heritage content. Cultural content should be made available in the channels that users habitually visit and be tailored to their current context, without the need to manually search multiple portals or content repositories. To realize this goal, open-source software components and services have been developed that can either be used as an integrated infrastructure or as modular components suitable for integration in other products and services. The EEXCESS modules and components comprise (i) Web-based context detection, (ii) information retrieval-based, federated content aggregation, (iii) metadata definition and mapping, and (iv) a component responsible for privacy preservation. Various applications have been realized based on these components that bring cultural content to the user in content consumption and content creation scenarios. For example, content consumption is realized by a browser extension generating automatic search queries from the current page context and the focus paragraph and presenting related results aggregated from different data providers. A Google Docs add-on allows retrieval of relevant content aggregated from multiple data providers while collaboratively writing a document. These relevant resources can then be included in the current document as a citation, an image, or a link (with preview) without having to disrupt the current writing task for an explicit search in the various content providers' portals.
di Sciascio Maria Cecilia, Sabol Vedran, Veas Eduardo Enrique
2017
Whenever we gather or organize knowledge, the task of searching inevitably takes precedence. As exploration unfolds, it becomes cumbersome to reorganize resources along new interests, as any new search brings new results. Despite huge advances in retrieval and recommender systems from the algorithmic point of view, many real-world interfaces have remained largely unchanged: results appear in an infinite list ordered by relevance with respect to the current query. We introduce uRank, a user-driven visual tool for exploration and discovery of textual document recommendations. It includes a view summarizing the content of the recommendation set, combined with interactive methods for understanding, refining and reorganizing documents on the fly as information needs evolve. We provide a formal experiment showing that uRank users can browse the document collection and efficiently gather items relevant to particular topics of interest with significantly lower cognitive load compared to traditional list-based representations.
Müller-Putz G. R., Ofner P., Schwarz Andreas, Pereira J., Luzhnica Granit, di Sciascio Maria Cecilia, Veas Eduardo Enrique, Stein Sebastian, Williamson John, Murray-Smith Roderick, Escolano C., Montesano L., Hessing B., Schneiders M., Rupp R.
2017
The aim of the MoreGrasp project is to develop a non-invasive, multimodal user interface including a brain-computer interface (BCI) for intuitive control of a grasp neuroprosthesis to support individuals with high spinal cord injury (SCI) in everyday activities. We describe the current state of the project, including the EEG system, preliminary results of natural-movement decoding in people with SCI, the new electrode concept for the grasp neuroprosthesis, the shared-control architecture behind the system, and the implementation of a user-centered design.
Mohr Peter, Mandl David, Tatzgern Markus, Veas Eduardo Enrique, Schmalstieg Dieter, Kalkofen Denis
2017
A video tutorial effectively conveys complex motions, but may be hard to follow precisely because of its restriction to a predetermined viewpoint. Augmented reality (AR) tutorials have been demonstrated to be more effective. We bring the advantages of both together by interactively retargeting conventional, two-dimensional videos into three-dimensional AR tutorials. Unlike previous work, we do not simply overlay video, but synthesize 3D-registered motion from the video. Since the information in the resulting AR tutorial is registered to 3D objects, the user can freely change the viewpoint without degrading the experience. This approach applies to many styles of video tutorials. In this work, we concentrate on a class of tutorials which alter the surface of an object.
Guerra Jorge, Catania Carlos, Veas Eduardo Enrique
2017
This paper presents a graphical interface to identify hostile behavior in network logs. The problem of identifying and labeling hostile behavior is well known in the network security community. There is a lack of labeled datasets, which makes it difficult to deploy automated methods or to test the performance of manual ones. We describe the process of searching for and identifying hostile behavior with a graphical tool derived from an open-source Intrusion Prevention System, which graphically encodes features of network connections from a log file. A design study with two network security experts illustrates the workflow of searching for patterns descriptive of unwanted behavior and labeling occurrences therewith.
Veas Eduardo Enrique
2017
In our goal to personalize the discovery of scientific information, we built systems using visual analytics principles for the exploration of textual documents [1]. The concept was extended to explore the information quality of user-generated content [2]. Our interfaces build upon a cognitive model in which awareness is a key step of exploration [3]. In education-related circles, a frequent concern is that people increasingly need to know how to search, and that knowing how to search leads to finding information efficiently. The ever-growing information overabundance right at our fingertips demands a natural skill for developing and refining search queries to get better search results, or does it? Exploratory search is an investigative behavior we adopt to build knowledge by iteratively selecting interesting features that lead to associations between representative items in the information space [4,5]. Formulating queries has been shown to be more complicated for humans than recognizing information visually [6]. Visual analytics takes the form of an open-ended dialog between the user and the underlying analytics algorithms operating on the data [7]. This talk describes studies on exploration and discovery with visual analytics interfaces that emphasize transparency and control features to trigger awareness. We will discuss the interface design and the studies of visual exploration behavior.
di Sciascio Maria Cecilia, Mayr Lukas, Veas Eduardo Enrique
2017
Knowledge work, such as summarizing related research in preparation for writing, typically requires the extraction of useful information from scientific literature. Nowadays the primary source of information for researchers comes from electronic documents available on the Web, accessible through general and academic search engines such as Google Scholar or IEEE Xplore. Yet, the vast amount of resources makes retrieving only the most relevant results a difficult task. As a consequence, researchers are often confronted with loads of low-quality or irrelevant content. To address this issue we introduce a novel system, which combines a rich, interactive Web-based user interface and different visualization approaches. This system enables researchers to identify key phrases matching current information needs and spot potentially relevant literature within hierarchical document collections. The chosen context was the collection and summarization of related work in preparation for scientific writing, thus the system supports features such as bibliography and citation management, document metadata extraction and a text editor. This paper introduces the design rationale and components of PaperViz. Moreover, we report the insights gathered in a formative design study addressing usability.
Ross-Hellauer Anthony
2017
Background: “Open peer review” (OPR), despite being a major pillar of Open Science, has neither a standardized definition nor an agreed schema of its features and implementations. The literature reflects this, with numerous overlapping and contradictory definitions. While for some the term refers to peer review where the identities of both author and reviewer are disclosed to each other, for others it signifies systems where reviewer reports are published alongside articles. For others it signifies both of these conditions, and for yet others it describes systems where not only “invited experts” are able to comment. For still others, it includes a variety of combinations of these and other novel methods. Methods: Recognising the absence of a consensus view on what open peer review is, this article undertakes a systematic review of definitions of “open peer review” or “open review”, to create a corpus of 122 definitions. These definitions are systematically analysed to build a coherent typology of the various innovations in peer review signified by the term, and hence provide the precise technical definition currently lacking. Results: This quantifiable data yields rich information on the range and extent of differing definitions over time and by broad subject area. Quantifying definitions in this way allows us to accurately portray exactly how ambiguously the phrase “open peer review” has been used thus far, for the literature offers 22 distinct configurations of seven traits, effectively meaning that there are 22 different definitions of OPR in the literature reviewed. Conclusions: I propose a pragmatic definition of open peer review as an umbrella term for a number of overlapping ways that peer review models can be adapted in line with the aims of Open Science, including making reviewer and author identities open, publishing review reports and enabling greater participation in the peer review process.
Kowald Dominik, Lex Elisabeth
2017
In this paper, we study the imbalance between current state-of-the-art tag recommendation algorithms and the folksonomy structures of real-world social tagging systems. While algorithms such as FolkRank are designed for dense folksonomy structures, most social tagging systems exhibit a sparse nature. To overcome this imbalance, we show that cognitive-inspired algorithms, which model the tag vocabulary of a user in a cognitively plausible way, can be helpful. Our present approach does this by implementing the activation equation of the cognitive architecture ACT-R, which determines the usefulness of units in human memory (e.g., tags). In this sense, our long-term research goal is to design hybrid recommendation approaches, which combine the advantages of both worlds in order to adapt to the current setting (i.e., sparse vs. dense ones).
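The activation equation mentioned in this abstract can be illustrated by its base-level learning component. The following is a minimal sketch assuming the standard ACT-R formulation; the function name and example timestamps are illustrative, not taken from the paper:

```python
import math

def base_level_activation(ref_times, now, d=0.5):
    """Base-level learning term of the ACT-R activation equation:
    B_i = ln(sum_j (now - t_j)^(-d)), where the t_j are the past
    reference times of a memory unit (e.g., a tag) and d is the
    decay rate (0.5 is the conventional default)."""
    return math.log(sum((now - t) ** (-d) for t in ref_times))

# A tag used often and recently receives a higher activation than
# one used rarely and long ago.
recent = base_level_activation([95, 98, 99], now=100)
stale = base_level_activation([10, 50], now=100)
```

Frequency and recency of past tag usage thus jointly determine a tag's predicted usefulness, which is what makes the equation attractive for sparse folksonomies.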
Luzhnica Granit, Veas Eduardo Enrique
2017
This paper investigates sensitivity-based prioritisation in the construction of tactile patterns. Our evidence is obtained from three studies using a wearable haptic display with vibrotactile motors (tactors). Haptic displays intended to transmit symbols often suffer from a tradeoff between throughput and accuracy. For a symbol encoded with more than one tactor, simultaneous onset (spatial encoding) yields the highest throughput at the expense of accuracy. Sequential onset increases accuracy at the expense of throughput. Seeking to overcome these issues, we investigate aspects of prioritisation based on sensitivity applied to the encoding of haptic patterns. First, we investigate an encoding method using mixed intensities, where different body locations are simultaneously stimulated with different vibration intensities. We investigate whether prioritising the intensity based on sensitivity improves identification accuracy when compared to simple spatial encoding. Second, we investigate whether prioritising onset based on sensitivity affects the identification of overlapped spatiotemporal patterns. A user study shows that this method significantly increases the accuracy. Furthermore, in a third study, we identify three locations on the hand that lead to accurate recall. Thereby, we design the layout of a haptic display equipped with eight tactors, capable of encoding 36 symbols with only one or two locations per symbol.
Luzhnica Granit, Veas Eduardo Enrique, Stein Sebastian, Pammer-Schindler Viktoria, Williamson John, Murray-Smith Roderick
2017
Haptic displays are commonly limited to transmitting a discrete set of tactile motives. In this paper, we explore the transmission of real-valued information through vibrotactile displays. We simulate spatial continuity with three perceptual models commonly used to create phantom sensations: the linear, logarithmic and power model. We show that these generic models lead to limited decoding precision, and propose a method for model personalization adjusting to idiosyncratic and spatial variations in perceptual sensitivity. We evaluate this approach using two haptic display layouts: circular, worn around the wrist and the upper arm, and straight, worn along the forearm. Results of a user study measuring continuous value decoding precision show that users were able to decode continuous values with relatively high accuracy (4.4% mean error), circular layouts performed particularly well, and personalisation through sensitivity adjustment increased decoding precision.
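The perceptual models named in this abstract split a target amplitude between two neighbouring tactors so that a phantom sensation appears between them. The following is a minimal sketch of two generic pan laws under assumed forms (a linear and an energy-preserving power split); the logarithmic variant and the paper's personalised models are not reproduced here:

```python
import math

def phantom_intensities(beta, amplitude=1.0, model="linear"):
    """Split a target amplitude between two adjacent tactors so that
    a phantom sensation appears at relative position beta in [0, 1]
    between them (beta = 0 means fully at the first tactor)."""
    if model == "linear":
        a1, a2 = amplitude * (1 - beta), amplitude * beta
    elif model == "power":  # energy-preserving split
        a1, a2 = amplitude * math.sqrt(1 - beta), amplitude * math.sqrt(beta)
    else:
        raise ValueError(f"unknown model: {model}")
    return a1, a2
```

Under the power model the summed energy of the two tactors stays constant across positions, which is one common rationale for preferring it over a plain linear split.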
Dragoni Mauro, Federici Marco, Rexha Andi
2017
One of the most important opinion mining research directions falls in the extraction of polarities referring to specific entities (aspects) contained in the analyzed texts. The detection of such aspects may be very critical, especially when documents come from unknown domains. Indeed, while in some contexts it is possible to train domain-specific models for improving the effectiveness of aspect extraction algorithms, in others the most suitable solution is to apply unsupervised techniques by making such algorithms domain-independent. Moreover, an emerging need is to exploit the results of aspect-based analysis for triggering actions based on these data. This led to the necessity of providing solutions supporting both an effective analysis of user-generated content and an efficient and intuitive way of visualizing collected data. In this work, we implemented an opinion monitoring service comprising (i) a set of unsupervised strategies for aspect-based opinion mining together with (ii) a monitoring tool supporting users in visualizing analyzed data. The aspect extraction strategies are based on the use of semantic resources for performing the extraction of aspects from texts. The effectiveness of the platform has been tested on benchmarks provided by the SemEval campaign and compared with the results obtained by domain-adapted techniques.
Kern Roman, Falk Stefan, Rexha Andi
2017
This paper describes our participation in SemEval-2017 Task 10, named ScienceIE (Machine Reading for Scientists). We competed in Subtasks 1 and 2, which consist respectively of identifying all the key phrases in scientific publications and labeling them with one of three categories: Task, Process, and Material. These scientific publications are selected from the Computer Science, Material Sciences, and Physics domains. We followed a supervised approach for both subtasks by using a sequential classifier (CRF - Conditional Random Fields). For generating our solution we used a web-based application implemented in the EU-funded research project named CODE. Our system achieved an F1 score of 0.39 for Subtask 1 and 0.28 for Subtask 2.
Rexha Andi, Kern Roman, Ziak Hermann, Dragoni Mauro
2017
Retrieval of domain-specific documents became attractive for the Semantic Web community due to the possibility of integrating classic Information Retrieval (IR) techniques with semantic knowledge. Unfortunately, the gap between the construction of a full semantic search engine and the possibility of exploiting a repository of ontologies covering all possible domains is far from being filled. Recent solutions focused on the aggregation of different domain-specific repositories managed by third parties. In this paper, we present a semantic federated search engine developed in the context of the EEXCESS EU project. Through the developed platform, users are able to perform federated queries over repositories in a transparent way, i.e. without knowing how their original queries are transformed before being actually submitted. The platform implements a facility for plugging in new repositories and for creating, with the support of general purpose knowledge bases, knowledge graphs describing the content of each connected repository. Such knowledge graphs are then exploited for enriching queries performed by users.
Schrunner Stefan, Bluder Olivia, Zernig Anja, Kaestner Andre, Kern Roman
2017
In the semiconductor industry it is of paramount importance to check whether a manufactured device fulfills all quality specifications and is therefore suitable for being sold to the customer. The occurrence of specific spatial patterns within so-called wafer test data, i.e. analog electric measurements, might point to production issues. However, the shape of these critical patterns is unknown. In this paper, different kinds of process patterns are extracted from wafer test data by an image processing approach using Markov Random Field models for image restoration. The goal is to develop an automated procedure to identify visible patterns in wafer test data to improve pattern matching. This step is a necessary precondition for a subsequent root-cause analysis of these patterns. The developed pattern extraction algorithm yields a more accurate discrimination between distinct patterns, resulting in an improved pattern comparison compared to the original dataset. In a next step, pattern classification will be applied to improve production process control.
Lindstaedt Stefanie , Czech Paul, Fessl Angela
2017
A Lifecycle Approach to Knowledge Excellence various industries and use cases. Through their cognitive computing-based approach, which combines the strength of man and the machine, they are setting standards within both the local and the international research community. With their expertise in the field of knowledge management they are describing the basic approaches in this chapter.
Tschinkel Gerwald, Sabol Vedran
2017
When using classical search engines, researchers are often confronted with a number of results far beyond what they can realistically manage to read; when this happens, recommender systems can help, by pointing users to the most valuable sources of information. In the course of a long-term research project, research into one area can extend over several days, weeks, or even months. Interruptions are unavoidable, and, when multiple team members have to discuss the status of a project, it’s important to be able to communicate the current research status easily and accurately. Multiple type-specific interactive views can help users identify the results most relevant to their focus of interest. Our recommendation dashboard uses micro-filter visualizations intended to improve the experience of working with multiple active filters, allowing researchers to maintain an overview of their progress. Within this paper, we evaluate whether micro-visualizations help to increase the memorability and readability of active filters in comparison to textual filters. Five tasks, quantitative and qualitative questions, and separate views on the different visualisation types enabled us to gain insights into how micro-visualisations behave, which we discuss throughout the paper.
Mutlu Belgin, Veas Eduardo Enrique, Trattner Christoph
2017
In today's digital age, with an increasing number of websites, social/learning platforms, and different computer-mediated communication systems, finding valuable information is a challenging and tedious task, regardless of a person's discipline. However, visualizations have been shown to be effective in dealing with huge datasets: because they are grounded in visual cognition, people understand them and can naturally perform visual operations such as clustering, filtering and comparing quantities. But creating appropriate visual representations of data is also challenging: it requires domain knowledge, understanding of the data, and knowledge about task and user preferences. To tackle this issue, we have developed a recommender system that (i) generates visualizations based on a set of visual cognition rules/guidelines, and (ii) filters a subset considering user preferences. A user places interest in several aspects of a visualization: the task or problem it helps to solve, the operations it permits, or the features of the dataset it represents. This paper concentrates on characterizing user preferences, in particular: (i) the sources of information used to describe the visualizations, i.e. the content descriptors, and (ii) the methods to thereby produce the most suitable recommendations. We consider three sources corresponding to different aspects of interest: a title that describes the chart, a question that can be answered with the chart (and the answer), and a collection of tags describing features of the chart. We investigate user-provided input based on these sources, collected with a crowd-sourced study. Firstly, information-theoretic measures are applied to each source to determine the efficiency of the input in describing user preferences and visualization contents (user and item models). Secondly, the practicability of each input is evaluated with a content-based recommender system.
The overall methodology and results contribute methods for the design and analysis of visual recommender systems. The findings in this paper highlight the inputs which (i) effectively encode the content of the visualizations and the user's visual preferences/interests, and (ii) are more valuable for recommending personalized visualizations.
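The information-theoretic step described in this abstract can be illustrated with Shannon entropy over a descriptor source. The following is a minimal sketch under the assumption that entropy is a suitable measure; the abstract does not name the specific measures used:

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (in bits) of a token distribution, a simple
    proxy for how much information a source of content descriptors
    (e.g., titles, questions, or tags) carries."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A source whose descriptors are spread evenly over many distinct terms scores higher than one dominated by a few repeated terms, giving a rough ranking of how discriminative each source is for user and item models.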
Seitlinger Paul, Ley Tobias, Kowald Dominik, Theiler Dieter, Hasani-Mavriqi Ilire, Dennerlein Sebastian, Lex Elisabeth, Albert D.
2017
Creative group work can be supported by collaborative search and annotation of Web resources. In this setting, it is important to help individuals both stay fluent in generating ideas of what to search next (i.e., maintain ideational fluency) and stay consistent in annotating resources (i.e., maintain organization). Based on a model of human memory, we hypothesize that sharing search results with other users, such as through bookmarks and social tags, prompts search processes in memory, which increase ideational fluency, but decrease the consistency of annotations, e.g., the reuse of tags for topically similar resources. To balance this tradeoff, we suggest the tag recommender SoMe, which is designed to simulate search of memory from user-specific tag-topic associations. An experimental field study (N = 18) in a workplace context finds evidence of the expected tradeoff and an advantage of SoMe over a conventional recommender in the collaborative setting. We conclude that sharing search results supports group creativity by increasing ideational fluency, and that SoMe helps balance the evidenced fluency-consistency tradeoff.
Barreiros Carla, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2017
In our research we explore representing the state of production machines using a new nature metaphor, called BioIoT. The underlying rationale is to represent relevant information in an agreeable manner and to increase the machines’ appeal to operators. In this paper we describe a study with twelve participants in which sensory information of a coffee machine is encoded in a virtual tree. All participants considered the interaction with the BioIoT pleasant; and most reported feeling more inclined to perform machine maintenance and to “care” for the machine than with a classic state representation. The study highlights personalization, intelligibility vs. representational power, limits of the metaphor, and immersive visualization as directions for follow-up research.
Kaiser Rene_DB, Meixner Britta, Jäger Joscha
2017
Enabling interactive access to multimedia content and evaluating content-consumption behaviors and experiences involve several different research areas, which are covered at many different conferences. For four years, the Workshop on Interactive Content Consumption (WSICC) series offered a forum for combining interdisciplinary, comprehensive views, inspiring new discussions related to interactive multimedia. Here, the authors reflect on the outcome of the series.
Meixner Britta, Kaiser Rene_DB, Jäger Joscha, Ooi Wei Tsang, Kosch Harald
2017
(journal special issue)
Cemernek David, Gursch Heimo, Kern Roman
2017
The catchphrase “Industry 4.0” is widely regarded as a methodology for succeeding in modern manufacturing. This paper provides an overview of the history, technologies and concepts of Industry 4.0. One of the biggest challenges to implementing the Industry 4.0 paradigms in manufacturing is the heterogeneity of system landscapes and the integration of data from various sources, such as different suppliers and different data formats. These issues have been addressed in the semiconductor industry since the early 1980s and some solutions have become well-established standards. Hence, the semiconductor industry can provide guidelines for a transition towards Industry 4.0 in other manufacturing domains. In this work, the methodologies of Industry 4.0, cyber-physical systems and Big Data processes are discussed. Based on a thorough literature review and experiences from the semiconductor industry, we offer implementation recommendations for Industry 4.0 using the manufacturing process of an electronics manufacturer as an example.
Shao Lin, Silva Nelson, Schreck Tobias, Eggeling Eva
2017
The Scatter Plot Matrix (SPLOM) is a well-known technique for visual analysis of high-dimensional data. However, one problem of large SPLOMs is that typically not all views are potentially relevant to a given analysis task or user. The matrix itself may contain structured patterns across the dimensions, which could interfere with the investigation for unexplored views. We introduce a new concept and prototype implementation for an interactive recommender system supporting the exploration of large SPLOMs based on indirectly obtained user feedback from user eye tracking. Our system records the patterns that are currently under exploration based on gaze times, recommending areas of the SPLOM containing potentially new, unseen patterns for successive exploration. We use an image-based dissimilarity measure to recommend patterns that are visually dissimilar to previously seen ones, to guide the exploration in large SPLOMs. The dynamic exploration process is visualized by an analysis provenance heatmap, which captures the duration on explored and recommended SPLOM areas. We demonstrate our exploration process by a user experiment, showing that the indirectly controlled recommender system achieves higher pattern recall than fully interactive navigation using mouse operations.
Gursch Heimo, Cemernek David, Kern Roman
2017
In manufacturing environments today, automated machinery works alongside human workers. In many cases computers and humans oversee different aspects of the same manufacturing steps, sub-processes, and processes. This paper identifies and describes four feedback loops in manufacturing and organises them in terms of their time horizon and degree of automation versus human involvement. The data flow in the feedback loops is further characterised by features commonly associated with Big Data. Velocity, volume, variety, and veracity are used to establish, describe and compare differences in the data flows.
Hasitschka Peter, Sabol Vedran, Thalmann Stefan
2017
Industry 4.0 describes the digitization and interlinking of companies working together in a supply chain [1]. Thereby, the digitization and interlinking does not only affect the machines and IT infrastructure; the employees are affected as well [3]. Employees have to acquire more and more complex knowledge within a shorter period of time. To cope with this challenge, learning needs to be integrated into daily work practices, while the learning communities should map the organizational production networks [2]. Such learning networks support knowledge exchange and joint problem solving together with all involved parties [4]. However, in such communities not all involved actors are known, and hence support to find the right learning material and peers is needed. Nowadays, many different learning environments are used in industry. Their complexity makes it hard to understand whether a system provides an optimal learning environment. The large number of learning resources, learners and their activities makes it hard to identify potential problems inside a learning environment. Since the human visual system provides enormous power for discovering patterns from data displayed using a suitable visual representation [5], visualizing such a learning environment could provide deeper insights into its structure and the activities of the learners. Our goal is to provide a visual framework supporting the analysis of communities that arise in a learning environment. Such analysis may lead to the discovery of information that helps to improve the learning environment and the users’ learning success.
Geiger Manfred, Waizenegger Lena, Treasure-Jones Tamsin, Sarigianni Christina, Maier Ronald, Thalmann Stefan, Remus Ulrich
2017
Research on information system (IS) adoption and resistance has accumulated substantial theoretical and managerial knowledge. Surprisingly, the paradox that end users support and at the same time resist use of an IS has received relatively little attention. The investigation of this puzzle, however, is important to complement our understanding of resistant behaviours and consequently to strengthen the explanatory power of extant theoretical constructs on IS resistance. We investigate an IS project within the healthcare ...
Thalmann Stefan, Thiele Janna, Manhart Markus, Virnes Marjo
2017
This study explored the application scenarios of a mobile app called Ach So! for workplace learning of construction work apprentices. The mobile application was used for piloting new technology-enhanced learning practices in vocational apprenticeship training at construction sites in Finland and in a training center in Germany. Semi-structured focus group interviews were conducted after the pilot test periods. The interview data served as the data source for the concept-driven framework analysis that employed theoretical ...
Thalmann Stefan, Larrazábal Jorge, Pammer-Schindler Viktoria, Kreuzthaler Armin, Fessl Angela
2017
In times of globalization, the workforce also needs to be able to go global. This holds true especially for technical experts holding an exclusive expertise. Together with a global manufacturing company, we addressed the challenge of being able to send staff into foreign countries to manage technical projects in the foreign language. We developed a language learning concept that combines a language learning platform with conventional individual, but virtually conducted, coaching sessions. In our use case, we developed this ...
Thalmann Stefan, Pammer-Schindler Viktoria
2017
Current studies show, on the one hand, that humans continue to play a central role in industry. On the other hand, it is also clear that the number of employees working directly in production will decline. The change will be such that humans will handle fewer uniform processes but must instead be able to adapt to quickly changing work activities and to control individualized manufacturing processes. However, the reduction in staff also results in a reduction of redundancies. As a consequence, more responsibility is transferred to the individual, so wrong decisions have greater consequences and thus also entail a higher risk. The success of an Industry 4.0 campaign will therefore essentially depend on the adaptability of the employees.
Pammer-Schindler Viktoria, Fessl Angela, Weghofer Franz, Thalmann Stefan
2017
The digitalization of industry is currently viewed very much from a technological perspective. However, this changed working environment also poses manifold challenges for people, mainly concerning the learning of the knowledge required.
Stabauer Petra, Breitfuß Gert, Lassnig Markus
2017
Nowadays digitalization is on everyone’s mind and affecting all areas of life. The rapid development of information technology and the increasing pervasiveness of digitalization represent new challenges to the business world. The emergence of the so-called fourth industrial revolution and the Internet of Things (IoT) confronts existing firms with changes in numerous aspects of doing business. Not only are information and communication technologies changing production processes through increasing automation; digitalization can also affect products and services themselves. This could lead to major changes in a company’s value chain and as a consequence affects the company’s business model. In the age of digitalization, it is no longer sufficient to change single aspects of a firm’s business strategy; the business model itself needs to be the subject of innovation. This paper presents how digitalization affects business models of well-established companies in Austria. The results are demonstrated by means of two best practice case studies. The case studies were identified within an empirical research study funded by the Austrian Ministry for Transport, Innovation and Technology (BMVIT). The selected best practice cases present how digitalization affects a firm’s business model and demonstrate the transformation of the value creation process by simultaneously contributing to sustainable development.
de Reuver Mark, Tarkus Astrid, Haaker Timber, Breitfuß Gert, Roelfsema Melissa, Kosman Ruud, Heikkilä Marikka
2017
In this paper, we present two design cycles for an online platform with ICT-enabled tooling that supports business model innovation by SMEs. The platform connects the needs of the SMEs regarding BMI with tools that can help to solve those needs and questions. The needs are derived from our earlier case study work (Heikkilä et al. 2016), showing typical BMI patterns of SMEs’ needs, labelled as ‘I want to’s, about what an entrepreneur wants to achieve with business model innovation. The platform provides sets of integrated tools that can answer the typical ‘I want to’ questions that SMEs have with innovating their business models.
Pammer-Schindler Viktoria, Fessl Angela, Wiese Michael, Thalmann Stefan
2017
Financial auditors routinely search internal as well as public knowledge bases as part of the auditing process. Efficient search strategies are crucial for knowledge workers in general and for auditors in particular. Modern search technology evolves quickly; and features beyond keyword search, like faceted search or visual overviews of knowledge bases such as graph visualisations, emerge. It is therefore desirable for auditors to learn about new innovations and to explore and experiment with such technologies. In this paper, we present a reflection intervention concept that intends to nudge auditors to reflect on their search behaviour and to trigger informal learning by trying out new or less frequently used search features. The reflection intervention concept has been tested in a focus group with six auditors using a mockup. Foremost, the discussion centred on the timing of reflection interventions and how to raise motivation to achieve a change in search behaviour.
Pammer-Schindler Viktoria, Fessl Angela, Wesiak Gudrun, Feyertag Sandra, Rivera-Pelayo Verónica
2017
This paper presents a concept for in-app reflection guidance and its evaluation in four work-related field trials. By synthesizing across four field trials, we can show that computer-based reflection guidance can function in the workplace, in the sense of being accepted as technology, being perceived as useful and leading to reflective learning. This is encouraging for all endeavours aiming to transfer existing knowledge on reflection supportive technology from educational settings to the workplace. However, reflective learning in our studies was mostly visible to a limited depth in textual entries made in the applications themselves; and proactive reflection guidance technology like prompts was often found to be disruptive. We offer these two issues as highly relevant questions for future research.
Pammer-Schindler Viktoria, Rivera-Pelayo Verónica, Fessl Angela, Müller Lars
2017
The benefits of self-tracking have been thoroughly investigated in private areas of life, like health or sustainable living, but less attention has been given to the impact and benefits of self-tracking in work-related settings. Through two field studies, we introduced and evaluated a mood self-tracking application in two call centers to investigate the role of mood self-tracking at work, as well as its impact on individuals and teams. Our studies indicate that mood self-tracking is accepted and can improve performance if the application is well integrated into the work processes and matches the management style. The results show that (i) capturing moods and explicitly relating them to work tasks facilitated reflection, (ii) mood self-tracking increased emotional awareness and this improved cohesion within teams, and (iii) proactive reactions by managers to trends and changes in team members’ mood were key for acceptance of reflection and correlated with measured improvements in work performance. These findings help to better understand the role and potential of self-tracking in work settings and further provide insights that guide future researchers and practitioners to design and introduce these tools in a workplace setting.
Ginthör Robert, Lamb Reinhold, Koinegg Johann
2017
Data are the raw material and the basis for many companies and their future economic success in industry. This Radar issue ties in with the previously published Radar issues “Service Innovations” and “Digitalized Machines and Plants” and examines the technical possibilities and future developments of data-driven business in the context of the Green Tech Industries. Driven by advancing digitalization, the supply of structured and unstructured data in the different sectors of the economy is growing rapidly. In this context, internal as well as external data of different origins must be centrally collected, validated, combined and evaluated, and new insights and applications for a data-driven business must be generated from them.
Stern Hermann, Dennerlein Sebastian, Pammer-Schindler Viktoria, Ginthör Robert, Breitfuß Gert
2017
To specify the current understanding of business models in the realm of Big Data, we used a qualitative approach analysing 25 Big Data projects spread over the domains of Retail, Energy, Production, and Life Sciences, and various company types (SME, group, start-up, etc.). All projects have been conducted in the last two years at Austria’s competence center for Data-driven Business and Big Data Analytics, the Know-Center.
Reiter-Haas Markus, Slawicek Valentin, Lacic Emanuel
2017
Topps David, Dennerlein Sebastian, Treasure-Jones Tamsin
2017
There is increasing interest in Barcamps and Unconferences as an educational approach during traditional medical education conferences. Our group has now accumulated extensive experience in these formats over a number of years in different educational venues. We present a summary of observations and lessons learned about what works and what doesn't.
Ruiz-Calleja Adolfo, Prieto Luis Pablo, Jesús Rodríguez Triana María , Dennerlein Sebastian, Ley Tobias
2017
Despite the ubiquity of learning in the everyday life of most workplaces, the learning analytics community has only recently paid attention to such settings. One probable reason for this oversight is the fact that learning in the workplace is often informal, hard to grasp and not univocally defined. This paper summarizes the state of the art of Workplace Learning Analytics (WPLA), extracted from a systematic literature review of five academic databases as well as other known sources in the WPLA community. Our analysis of existing proposals focuses particularly on the role of different conceptions of learning and their influence on the design and technology choices of LA proposals. We end the paper by discussing opportunities for future work in this emergent field.
Wilsdon James , Bar-Ilan Judit, Frodemann Robert, Lex Elisabeth, Peters Isabella , Wouters Paul
2017
Lacic Emanuel, Kowald Dominik, Lex Elisabeth
2017
Recommender systems are acknowledged as an essential instrument to support users in finding relevant information. However, the adaptation of recommender systems to multiple domain-specific requirements and data models still remains an open challenge. In the present paper, we contribute to this sparse line of research with guidance on how to design a customizable recommender system that accounts for multiple domains with heterogeneous data. Using concrete showcase examples, we demonstrate how to set up a multi-domain system on the item and system level, and we report evaluation results for the domains of (i) LastFM, (ii) FourSquare, and (iii) MovieLens. We believe that our findings and guidelines can support developers and researchers of recommender systems to easily adapt and deploy a recommender system in distributed environments, as well as to develop and evaluate algorithms suited for multi-domain settings.
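The customizable design the abstract describes can be illustrated with a minimal sketch (the class and method names, and the toy domains, are ours, not the paper's): a single shared interface behind which each domain registers its own recommendation logic, so new domains plug in without touching the core.

```python
from typing import Callable, Dict, List

class MultiDomainRecommender:
    """One shared interface; each domain registers its own recommender."""

    def __init__(self) -> None:
        self._domains: Dict[str, Callable[[str, int], List[str]]] = {}

    def register(self, domain: str,
                 recommend_fn: Callable[[str, int], List[str]]) -> None:
        # New domains (e.g. music, movies, venues) plug in without core changes.
        self._domains[domain] = recommend_fn

    def recommend(self, domain: str, user: str, k: int = 10) -> List[str]:
        if domain not in self._domains:
            raise KeyError(f"no recommender registered for domain {domain!r}")
        return self._domains[domain](user, k)

# Toy per-domain recommenders standing in for real algorithms.
mdr = MultiDomainRecommender()
mdr.register("music", lambda user, k: ["track_a", "track_b"][:k])
mdr.register("movies", lambda user, k: ["film_x"][:k])
```

This keeps item-level heterogeneity (different data models per domain) behind a uniform system-level API.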
Kowald Dominik, Kopeinik Simone , Lex Elisabeth
2017
Recommender systems have become important tools to support users in identifying relevant content in an overloaded information space. To ease the development of recommender systems, a number of recommender frameworks have been proposed that serve a wide range of application domains. Our TagRec framework is one of the few examples of an open-source framework tailored towards developing and evaluating tag-based recommender systems. In this paper, we present the current, updated state of TagRec, and we summarize and reflect on four use cases that have been implemented with TagRec: (i) tag recommendations, (ii) resource recommendations, (iii) recommendation evaluation, and (iv) hashtag recommendations. To date, TagRec served the development and/or evaluation process of tag-based recommender systems in two large scale European research projects, which have been described in 17 research papers. Thus, we believe that this work is of interest for both researchers and practitioners of tag-based recommender systems.
Görögh Edit, Vignoli Michela, Gauch Stephan, Blümel Clemens, Kraker Peter, Hasani-Mavriqi Ilire, Luzi Daniela , Walker Mappet, Toli Eleni, Sifacaki Electra
2017
The growing dissatisfaction with the traditional scholarly communication process and publishing practices as well as increasing usage and acceptance of ICT and Web 2.0 technologies in research have resulted in the proliferation of alternative review, publishing and bibliometric methods. The EU-funded project OpenUP addresses key aspects and challenges of the currently transforming science landscape and aspires to come up with a cohesive framework for the review-disseminate-assess phases of the research life cycle that is fit to support and promote open science. The objective of this paper is to present first results and conclusions of the landscape scan and analysis of alternative peer review, altmetrics and innovative dissemination methods done during the first project year.
Kraker Peter, Enkhbayar Asuraa, Schramm Maxi, Kittel Christopher, Chamberlain Scott, Skaug Mike , Brembs Björn
2017
Görögh Edit, Toli Eleni, Kraker Peter
2017
Kopeinik Simone, Lex Elisabeth, Seitlinger Paul, Ley Tobias, Albert Dietrich
2017
In online social learning environments, tagging has demonstrated its potential to facilitate search, to improve recommendations and to foster reflection and learning. Studies have shown that shared understanding needs to be established in the group as a prerequisite for learning. We hypothesise that this can be fostered through tag recommendation strategies that contribute to semantic stabilization. In this study, we investigate the application of two tag recommenders that are inspired by models of human memory: (i) the base-level learning equation BLL and (ii) Minerva. BLL models the frequency and recency of tag use while Minerva is based on frequency of tag use and semantic context. We test the impact of both tag recommenders on semantic stabilization in an online study with 56 students completing a group-based inquiry learning project in school. We find that displaying tags from other group members contributes significantly to semantic stabilization in the group, as compared to a strategy where tags from the students' individual vocabularies are used. Testing for the accuracy of the different recommenders revealed that algorithms using frequency counts such as BLL performed better when individual tags were recommended. When group tags were recommended, the Minerva algorithm performed better. We conclude that tag recommenders, exposing learners to each other's tag choices by simulating search processes on learners' semantic memory structures, show potential to support semantic stabilization and thus, inquiry-based learning in groups.
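The BLL equation referenced here has a standard form in the ACT-R literature: a tag's activation is the log of the sum of its past uses, each decayed by a power law, so frequent and recent tags score highest. A minimal sketch (function names and toy data are ours; d = 0.5 is the decay constant commonly used in ACT-R):

```python
import math
from collections import defaultdict

def bll_scores(tag_history, now, d=0.5):
    """Base-level activation: B(tag) = ln(sum_j (now - t_j)^(-d)),
    summing over all past uses t_j of the tag (t_j < now)."""
    uses = defaultdict(list)
    for tag, t in tag_history:  # (tag, timestamp) pairs
        uses[tag].append(t)
    return {tag: math.log(sum((now - t) ** -d for t in ts))
            for tag, ts in uses.items()}

# Toy history: "photosynthesis" is both more frequent and more recent.
history = [("photosynthesis", 1), ("photosynthesis", 9), ("biology", 2)]
scores = bll_scores(history, now=10)
top_tag = max(scores, key=scores.get)
```

The recommender would then surface the highest-activation tags, drawn from either the individual's or the group's history as described above.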
Kowald Dominik, Pujari Suhbash Chandra, Lex Elisabeth
2017
Hashtags have become a powerful tool in social platforms such as Twitter to categorize and search for content, and to spread short messages across members of the social network. In this paper, we study temporal hashtag usage practices in Twitter with the aim of designing a cognitive-inspired hashtag recommendation algorithm we call BLL(I,S). Our main idea is to incorporate the effect of time on (i) individual hashtag reuse (i.e., reusing own hashtags), and (ii) social hashtag reuse (i.e., reusing hashtags which have been previously used by a followee) into a predictive model. For this, we turn to the Base-Level Learning (BLL) equation from the cognitive architecture ACT-R, which accounts for the time-dependent decay of item exposure in human memory. We validate BLL(I,S) using two crawled Twitter datasets in two evaluation scenarios. Firstly, only temporal usage patterns of past hashtag assignments are utilized and secondly, these patterns are combined with a content-based analysis of the current tweet. In both evaluation scenarios, we find not only that temporal effects play an important role for both individual and social hashtag reuse but also that our BLL(I,S) approach provides significantly better prediction accuracy and ranking results than current state-of-the-art hashtag recommendation methods.
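The core idea, individual plus social time-decayed reuse, can be sketched as a weighted mix of two BLL components. This is an illustrative reading, not the paper's exact formulation: the mixing weight beta and the function names are assumptions.

```python
import math

def activation(timestamps, now, d=0.5):
    """Time-decayed BLL activation of one hashtag from one evidence source."""
    return math.log(sum((now - t) ** -d for t in timestamps))

def bll_is(own_uses, followee_uses, now, beta=0.5):
    """Mix individual (I) hashtag reuse with social (S) reuse by followees;
    beta weights the two sources of evidence."""
    return (beta * activation(own_uses, now)
            + (1 - beta) * activation(followee_uses, now))

# A hashtag reused recently by the user and a followee outscores a stale one.
recent = bll_is(own_uses=[9], followee_uses=[8], now=10)
stale = bll_is(own_uses=[1], followee_uses=[2], now=10)
```

In the second evaluation scenario described above, such a temporal score would be combined with a content-based score for the current tweet.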
Traub Matthias, Gursch Heimo, Lex Elisabeth, Kern Roman
2017
New business opportunities in the digital economy are established when datasets describing a problem, data services solving the said problem, the required expertise and infrastructure come together. For most real-world problems, finding the right data sources, services, consulting expertise, and infrastructure is difficult, especially since the market players change often. The Data Market Austria (DMA) offers a platform to bring datasets, data services, consulting, and infrastructure offers to a common marketplace. The recommender system included in DMA analyses all offerings to derive suggestions for collaboration between them, like which dataset could be best processed by which data service. The suggestions should help the customers on DMA to identify new collaborations reaching beyond traditional industry boundaries to get in touch with new clients or suppliers in the digital domain. Human brokers will work together with the recommender system to match different offers and set up data value chains solving the problems in various domains. In its final expansion stage, DMA is intended to be a central hub for all actors participating in the Austrian data economy, regardless of their industrial and research domain, to overcome traditional domain boundaries.
Trattner Christoph, Elsweiler David
2017
Food recommenders have the potential to positively influence the eating habits of users. To achieve this, however, we need to understand how healthy recommendations are and the factors which influence this. Focusing on two approaches from the literature (single item and daily meal plan recommendation) and utilizing a large Internet-sourced dataset from Allrecipes.com, we show how algorithmic solutions relate to the healthiness of the underlying recipe collection. First, we analyze the healthiness of Allrecipes.com recipes using nutritional standards from the World Health Organisation and the United Kingdom Food Standards Agency. Second, we investigate user interaction patterns and how these relate to the healthiness of recipes. Third, we experiment with both recommendation approaches. Our results indicate that overall the recipes in the collection are quite unhealthy, but this varies across categories on the website. Users in general tend to interact most often with the least healthy recipes. Recommender algorithms tend to score popular items highly and thus on average promote unhealthy items. This can be tempered, however, with simple post-filtering approaches, which we show by experiment are better suited to some algorithms than others. Similarly, we show that the generation of meal plans can dramatically increase the number of healthy options open to users. One of the main findings is, nevertheless, that the utility of both approaches is strongly restricted by the recipe collection. Based on our findings we draw conclusions on how researchers should attempt to make food recommendation systems promote healthy nutrition.
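The post-filtering idea mentioned above can be sketched as a simple re-scoring step applied after the recommender has run. The blending weight alpha, the toy recipes, and their health scores are illustrative assumptions, not values from the paper.

```python
def health_aware_rerank(rec_scores, health_scores, alpha=0.5):
    """Blend recommender scores with a health score in [0, 1] (1 = healthiest);
    alpha controls how strongly health tempers popularity-driven rankings."""
    blended = {item: (1 - alpha) * score + alpha * health_scores[item]
               for item, score in rec_scores.items()}
    return sorted(blended, key=blended.get, reverse=True)

# Toy data: the popular recipe is unhealthy, the less popular one is healthy.
rec_scores = {"chocolate_lava_cake": 0.9, "kale_salad": 0.7}
health_scores = {"chocolate_lava_cake": 0.1, "kale_salad": 0.9}
reranked = health_aware_rerank(rec_scores, health_scores)
```

With alpha = 0 the original recommender ranking is preserved; raising alpha pushes healthier recipes up the list.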
Ziak Hermann, Kern Roman
2017
The combination of different knowledge bases in the field of information retrieval is called federated or aggregated search. It has several benefits over single-source retrieval but poses some challenges as well. This work focuses on the challenge of result aggregation, especially in a setting where the final result list should include a certain degree of diversity and serendipity. Both concepts have been shown to have an impact on how users perceive an information retrieval system. In particular, we want to assess if common procedures for result list aggregation can be utilized to introduce diversity and serendipity. Furthermore, we study whether blocking or interleaving for result aggregation yields better results. In a cross-vertical aggregated search the so-called verticals could be news, multimedia content or text. Block ranking is one approach to combine such heterogeneous results. It relies on the idea that the verticals are combined into a single result list as blocks of several adjacent items. An alternative approach for this is interleaving. Here the verticals are blended into one result list on an item-by-item basis, i.e. adjacent items in the result list may come from different verticals. To generate the diverse and serendipitous results we relied on a query reformulation technique which we showed to be beneficial for generating diversified results in previous work. To conduct this evaluation we created a dedicated dataset. This dataset served as a basis for three different evaluation settings on a crowdsourcing platform, with over 300 participants. Our results show that query-based diversification can be adapted to generate serendipitous results in a similar manner. Further, we discovered that both approaches, interleaving and block ranking, appear to be beneficial for introducing diversity and serendipity, though it seems that queries benefit from one approach or the other but not from both.
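The two aggregation strategies compared here are easy to state precisely. A minimal sketch (the vertical contents are invented): block ranking concatenates each vertical as a contiguous block, while interleaving merges them round-robin.

```python
from itertools import zip_longest

def block_ranking(verticals):
    """Append each vertical's results as one contiguous block."""
    return [item for block in verticals for item in block]

def interleave(verticals):
    """Blend verticals item by item (round-robin), so adjacent items in
    the final list may come from different verticals."""
    merged = []
    for row in zip_longest(*verticals):
        merged.extend(item for item in row if item is not None)
    return merged

# Toy verticals: a news vertical with two hits, a video vertical with one.
news = ["n1", "n2"]
video = ["v1"]
```

Real aggregators would also weight verticals by estimated relevance, but the structural difference between the two strategies is exactly this.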
Toller Maximilian, Kern Roman
2017
The in-depth analysis of time series has gained a lot of research interest in recent years, with the identification of periodic patterns being one important aspect. Many of the methods for identifying periodic patterns require the time series' season length as an input parameter. There exist only a few algorithms for automatic season length approximation. Many of these rely on simplifications such as data discretization. This paper presents an algorithm for season length detection that is designed to be sufficiently reliable to be used in practical applications. The algorithm estimates a time series' season length by interpolating, filtering and detrending the data. This is followed by analyzing the distances between zeros in the directly corresponding autocorrelation function. Our algorithm was tested against a comparable algorithm and outperformed it by passing 122 out of 165 tests, while the existing algorithm passed 83 tests. The robustness of our method can be jointly attributed to both the algorithmic approach and also to design decisions taken at the implementational level.
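The detrend-then-autocorrelate pipeline can be sketched with NumPy. This is a simplified reading of the described steps, assuming a single dominant season length and omitting the interpolation and filtering stages: one full period of the autocorrelation function (ACF) spans two zero crossings, so the season length can be read off their spacing.

```python
import numpy as np

def estimate_season_length(series):
    """Estimate season length from the spacing of zero crossings in the ACF."""
    x = np.asarray(series, dtype=float)
    t = np.arange(len(x))
    x = x - np.poly1d(np.polyfit(t, x, 1))(t)           # linear detrend
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0 .. n-1
    acf /= acf[0]                                       # normalise
    crossings = np.where(np.diff(np.sign(acf)) != 0)[0] # sign changes of ACF
    gaps = np.diff(crossings)                           # distances between zeros
    return int(round(2 * np.median(gaps)))              # two zero gaps per period

# Synthetic check: a sine wave with a true season length of 20 samples.
t = np.arange(200)
season = estimate_season_length(np.sin(2 * np.pi * t / 20))
```

Using the median gap keeps the estimate robust against spurious crossings from noise, which mirrors the robustness concern the abstract emphasises.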
Rexha Andi, Kröll Mark, Ziak Hermann, Kern Roman
2017
Our work is motivated by the idea to extend the retrieval of related scientific literature to cases where the relatedness also incorporates the writing style of individual scientific authors. Therefore we conducted a pilot study to answer the question whether humans can identify authorship once the topological clues have been removed. As a first result, we found out that this task is challenging, even for humans. We also found some agreement between the annotators. To gain a better understanding of how humans tackle such a problem, we conducted an exploratory data analysis. Here, we compared the decisions against a number of topological and stylometric features. The outcome of our work should help to improve automatic authorship identification algorithms and to shape potential follow-up studies.
Santos Tiago, Walk Simon, Helic Denis
2017
Modeling activity in online collaboration websites, such as StackExchange Question and Answering portals, is becoming increasingly important, as the success of these websites critically depends on the content contributed by its users. In this paper, we represent user activity as time series and perform an initial analysis of these time series to obtain a better understanding of the underlying mechanisms that govern their creation. In particular, we are interested in identifying latent nonlinear behavior in online user activity as opposed to a simpler linear operating mode. To that end, we apply a set of statistical tests for nonlinearity as a means to characterize activity time series derived from 16 different online collaboration websites. We validate our approach by comparing activity forecast performance from linear and nonlinear models, and study the underlying dynamical systems we derive with nonlinear time series analysis. Our results show that nonlinear characterizations of activity time series help to (i) improve our understanding of activity dynamics in online collaboration websites, and (ii) increase the accuracy of forecasting experiments.
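One standard family of nonlinearity tests (not necessarily the exact battery used in the paper) compares a statistic on the original series against phase-randomised surrogates, which preserve the power spectrum, and hence all linear structure, while destroying nonlinear structure. A sketch under those assumptions:

```python
import numpy as np

def trev(x, lag=1):
    """Time-reversal asymmetry statistic; near zero for linear Gaussian series."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

def phase_surrogate(x, rng):
    """Surrogate with the same power spectrum but randomised Fourier phases."""
    f = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(f))
    phases[0] = 0.0  # keep the DC component (mean) real
    return np.fft.irfft(np.abs(f) * np.exp(1j * phases), n=len(x))

def nonlinearity_p_value(x, n_surrogates=99, seed=0):
    """Rank-based surrogate test: a small p-value suggests nonlinearity."""
    rng = np.random.default_rng(seed)
    t0 = abs(trev(x))
    hits = sum(abs(trev(phase_surrogate(x, rng))) >= t0
               for _ in range(n_surrogates))
    return (hits + 1) / (n_surrogates + 1)

# The logistic map: a classic nonlinear series (odd length avoids the
# Nyquist bin, so surrogates preserve the spectrum exactly).
x = np.empty(501)
x[0] = 0.4
for i in range(500):
    x[i + 1] = 4 * x[i] * (1 - x[i])
p = nonlinearity_p_value(x)
```

Applied to activity time series, a consistently small p-value across sites would support the latent-nonlinearity finding reported above.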
Strohmaier David, di Sciascio Maria Cecilia, Errecalde Marcelo, Veas Eduardo Enrique
2017
Innovations in digital libraries and services enable users to access large amounts of data on demand. Yet, quality assessment of information encountered on the Internet remains an elusive open issue. For example, Wikipedia, one of the most visited platforms on the Web, hosts thousands of user-generated articles and undergoes 12 million edits/contributions per month. User-generated content is undoubtedly one of the keys to its success, but also a hindrance to good quality: contributions can be of poor quality because everyone, even anonymous users, can participate. Though Wikipedia has defined guidelines as to what makes the perfect article, authors find it difficult to assert whether their contributions comply with them and reviewers cannot cope with the ever growing amount of articles pending review. Great efforts have been invested in algorithmic methods for automatic classification of Wikipedia articles (as featured or non-featured) and for quality flaw detection. However, little has been done to support quality assessment of user-generated content through interactive tools that allow for combining automatic methods and human intelligence. We developed WikiLyzer, a toolkit comprising three Web-based interactive graphic tools designed to assist (i) knowledge discovery experts in creating and testing metrics for quality measurement, (ii) users searching for good articles, and (iii) users that need to identify weaknesses to improve a particular article. A case study suggests that experts are able to create complex quality metrics with our tool, and a user study reports on its usefulness for identifying high-quality content.
Ayris Paul, Berthou Jean-Yves, Bruce Rachel, Lindstaedt Stefanie , Monreale Anna, Mons Barend, Murayama Yasuhiro, Södegard Caj, Tochtermann Klaus, Wilkinson Ross
2016
The European Open Science Cloud (EOSC) aims to accelerate and support the current transition to more effective Open Science and Open Innovation in the Digital Single Market. It should enable trusted access to services, systems and the re-use of shared scientific data across disciplinary, social and geographical borders. This report approaches the EOSC as a federated environment for scientific data sharing and re-use, based on existing and emerging elements in the Member States, with light-weight international guidance and governance, and a large degree of freedom regarding practical implementation.
Lindstaedt Stefanie , Ley Tobias, Klamma Ralf, Wild Fridolin
2016
Recognizing the need for addressing the rather fragmented character of research in this field, we have held a workshop on learning analytics for workplace and professional learning at the Learning Analytics and Knowledge (LAK) Conference. The workshop has taken a broad perspective, encompassing approaches from a number of previous traditions, such as adaptive learning, professional online communities, workplace learning and performance analytics. Being co-located with the LAK conference has provided an ideal venue for addressing common challenges and for benefiting from the strong research on learning analytics in other sectors that LAK has established. Learning Analytics for Workplace and Professional Learning is now on the research agenda of several ongoing EU projects, and therefore a number of follow-up activities are planned for strengthening integration in this emerging field.
Rexha Andi, Kern Roman, Dragoni Mauro , Kröll Mark
2016
With different social media and commercial platforms, users express their opinions about products in textual form. Automatically extracting the polarity (i.e. whether the opinion is positive or negative) of a user can be useful for both actors: the online platform incorporating the feedback to improve their product as well as the client who might get recommendations according to his or her preferences. Different approaches for tackling the problem have been suggested, mainly using syntactic features. The "Challenge on Semantic Sentiment Analysis" aims to go beyond the word-level analysis by using semantic information. In this paper we propose a novel approach by employing the semantic information of a grammatical unit called the proposition. We try to derive the target of the review from the summary information, which serves as an input to identify the proposition in it. Our implementation relies on the hypothesis that the proposition expressing the target of the summary usually contains the main polarity information.
Atzmüller Martin, Alvin Chin, Trattner Christoph
2016
Dennerlein Sebastian, Treasure-Jones Tamsin, Lex Elisabeth, Ley Tobias
2016
Background: Teamworking, within and across healthcare organisations, is essential to deliver excellent integrated care. Drawing upon an alternation of collaborative and cooperative phases, we explored this teamworking and respective technological support within UK Primary Care. Participants used Bits&Pieces (B&P), a sensemaking tool for traced experiences that allows sharing results and mutually elaborating them: i.e. cooperating and/or collaborating.
Summary of Work: We conducted a two-month-long case study involving six healthcare professionals. In B&P, they reviewed organizational processes, which required the involvement of different professions in either collaborative and/or cooperative manner. We used system-usage data, interviews and qualitative analysis to understand the interplay of teamworking practice and technology.
Summary of Results: Within our analysis we mainly identified cooperation phases. In a f2f-meeting, professionals collaboratively identified subtasks and assigned individuals leading collaboration on them. However, these subtasks were undertaken as individual sensemaking efforts and finally combined (i.e. cooperation). We found few examples of reciprocal interpretation processes (i.e. collaboration): e.g. discussing problems during sensemaking or monitoring others' sensemaking outcomes to make suggestions.
Discussion: These patterns suggest that collaboration in healthcare often helps to construct a minimal shared understanding (SU) of subtasks to engage in cooperation, where individuals trust in others' competencies and autonomous completion. However, we also found that professionals with a positive collaboration history and deepened SU were willing to undertake subtasks collaboratively. It seems that acquiring such deepened SU of concepts and methods leads to benefits that motivate professionals to collaborate more.
Conclusion: Healthcare is a challenging environment requiring interprofessional work across organisations. For effective teamwork, a deepened SU is crucial and both cooperation and collaboration are required. However, we found a tendency of staff to rely mainly on cooperation when working in teams and not fully explore the benefits of collaboration.
Take Home Messages: To maximise the benefits of interprofessional working, tools for teamworking should support both cooperation and collaboration processes and scaffold the move between them.
Thalmann Stefan, Manhart Markus
2016
Organizations join networks to acquire external knowledge. This is especially important for SMEs since they often lack resources and are dependent on external knowledge to achieve and sustain competitive advantage. However, finding the right balance between measures facilitating knowledge sharing and measures protecting knowledge is a challenge. Whilst sharing is the raison d'être of networks, neglecting knowledge protection can also be detrimental to the network, e.g., leading to one-sided skimming of knowledge. We identified four practices SMEs currently apply to balance protection of competitive knowledge and knowledge sharing in the network: (a) share in subgroups with high trust, (b) share partial aspects of the knowledge base, (c) share with people with low proximities, and (d) share common knowledge and protect the crucial. We further found that the application of the practices depends on the maturity of the knowledge. Further, we discuss how the practices relate to organizational protection capabilities and how the network can provide IT to support the development of these capabilities.
Thalmann Stefan, Ilvonen Ilona, Manhart Markus , Sillaber Christian
2016
New ways of combining digital and physical innovations, as well as intensified inter-organizational collaborations, create new challenges to the protection of organizational knowledge. Existing research on knowledge protection is at an early stage and scattered among various research domains. This research-in-progress paper presents a plan for a structured literature review on knowledge protection, integrating the perspectives of the six base domains of knowledge, strategic, risk, intellectual property rights, innovation, and information technology security management. We define knowledge protection as a set of capabilities comprising and enforcing technical, organizational, and legal mechanisms to protect tacit and explicit knowledge necessary to generate or adopt innovations.
Cik Michael, Hebenstreit Cornelia, Horn Christopher, Schulze Gunnar, Traub Matthias, Schweighofer Erich, Hötzendorf Walter, Fellendorf Martin
2016
Guaranteeing safety during mega events has always played a role for organizers, their security guards and the action force. This work was realized to enhance safety at mega events and demonstrations without the necessity of fixed installations. Therefore a low cost monitoring system supporting the organization and safety personnel was developed using cell phone data and social media data in combination with safety concepts to monitor safety during the event in real time. To provide the achieved results in real time to the event and safety personnel an application for a Tablet-PC was established. Two representative events were applied as case studies to test and evaluate the results and to check response and executability of the app on site. Because data privacy is increasingly important, legal experts were closely involved and provided legal support.
Ziak Hermann, Kern Roman
2016
This work documents our approach at the Social Book Search Lab 2016, where we took part in the suggestion track. The main goal of the track was to create book recommendations for readers based only on their stated request within a forum. The forum entry contained further contextual information, like the user's catalogue of already read books and the list of example books mentioned in the user's request. The presented approach is mainly based on the metadata included in the book catalogue provided by the organizers of the task. With the help of a dedicated search index we extracted several potential book recommendations which were re-ranked by the use of an SVD-based approach. Although our results did not meet our expectations, we consider them a first iteration towards a competitive solution.
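An SVD-based re-ranking step of the kind described can be sketched with NumPy. The candidate-feature matrix and the profile vector below are toy assumptions, not the Lab's data: candidates are embedded in a low-rank latent space and ordered by cosine similarity to the user's profile.

```python
import numpy as np

def svd_rerank(candidates, profile, k=2):
    """Rank candidate books by cosine similarity to the user profile in a
    rank-k latent space from a truncated SVD of the candidate-feature matrix."""
    _, _, Vt = np.linalg.svd(candidates, full_matrices=False)
    basis = Vt[:k]                      # top-k latent directions
    C = candidates @ basis.T            # candidates in latent space
    q = profile @ basis.T               # profile in latent space
    sims = (C @ q) / (np.linalg.norm(C, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)            # best candidate first

# Rows: candidate books; columns: toy metadata features (e.g. subject terms).
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
profile = np.array([0.0, 1.0, 1.0])     # built from the user's read catalogue
order = svd_rerank(M, profile)
```

In the described pipeline, the candidates would come from the search index and the profile from the user's catalogue and example books.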
Luzhnica Granit, Öjeling Christoffer, Veas Eduardo Enrique, Pammer-Schindler Viktoria
2016
This paper presents and discusses the technical concept of a virtual reality version of the Sokoban game with a tangible interface. The underlying rationale is to provide spinal-cord injury patients who are learning to use a neuroprosthesis to restore their capability of grasping with a game environment for training. We describe as relevant elements to be considered in such a gaming concept: input, output, virtual objects, physical objects, activity tracking and personalised level recommender. Finally, we also describe our experiences with instantiating the overall concept with hand-held mobile phones, smart glasses and a head-mounted cardboard setup.
Silva Nelson, Shao Lin, Schreck Tobias, Eggeling Eva, Fellner Dieter W.
2016
We present a new open-source prototype framework to explore and visualize eye-tracking experiment data. Firstly, standard eye trackers are used to record raw eye gaze data points in user experiments. Secondly, the analyst can configure gaze analysis parameters, such as the definition of areas of interest, multiple thresholds or the labeling of special areas, and we upload the data to a search server. Thirdly, a faceted web interface for exploring and visualizing the users' eye gaze on a large number of areas of interest is available. Our framework integrates several common visualizations and it also includes new combined representations like an eye analysis overview and a clustered matrix that shows the attention time strength between multiple areas of interest. The framework can be readily used for the exploration of eye-tracking experiment data. We make available the source code of our prototype framework for eye-tracking data analysis.
Silva Nelson, Caldera Christian, Krispel Ulrich, Eggeling Eva, Sunk Alexander, Reisinger Gerhard, Sihn Wilfried, Fellner Dieter W.
2016
Value stream mapping is a lean management method for analyzing and optimizing a series of events for production or services. Even today the first step in value stream analysis – the acquisition of the current state map – is still carried out using pen & paper by physically visiting the production line. We capture a digital representation of what manufacturing processes look like in reality. The manufacturing processes can be represented and efficiently analyzed for future production planning as a future state map by using a meta description together with a dependency graph. With VASCO we present a tool which contributes to all parts of value stream analysis – from data acquisition, over analysis, planning and comparison, up to simulation of alternative future state maps. We call this a holistic approach to value stream mapping, including detailed analysis of lead time, productivity, space, distance, material disposal, energy and carbon dioxide equivalents – together with the resulting changes in calculated direct product costs.
Silva Nelson, Shao Lin, Schreck Tobias, Eggeling Eva, Fellner Dieter W.
2016
Effective visual exploration of large data sets is an important problem. A standard technique for mapping large data sets is to use hierarchical data representations (trees, or dendrograms) that users may navigate. If the data sets get large, so do the hierarchies, and effective methods for the navigation are required. Traditionally, users navigate visual representations using desktop interaction modalities, including mouse interaction. Motivated by the recent availability of low-cost eye-tracker systems, we investigate application possibilities to use eye-tracking for controlling the visual-interactive data exploration process. We implemented a proof-of-concept system for visual exploration of hierarchic data, exemplified by scatter plot diagrams which are to be explored for grouping and similarity relationships. The exploration includes usage of degree-of-interest based distortion controlled by user attention read from eye-movement behavior. We present the basic elements of our system, and give an illustrative use case discussion, outlining the application possibilities. We also identify interesting future developments based on the given data views and captured eye-tracking information.
Berndt Rene, Silva Nelson, Edtmayr Thomas, Sunk Alexander, Krispel Ulrich, Caldera Christian, Eggeling Eva, Fellner Dieter W., Sihn Wilfried
2016
Value stream mapping is a lean management method for analyzing and optimizing a series of events for production or services. Even today the first step in value stream analysis – the acquisition of the current state – is still carried out using pen & paper by physically visiting the production place. We capture a digital representation of what manufacturing processes look like in reality. The manufacturing processes can be represented and efficiently analyzed for future production planning by using a meta description together with a dependency graph. With our Value Stream Creator and explOrer (VASCO) we present a tool which contributes to all parts of value stream analysis – from data acquisition, over planning, comparison with previous realities, up to simulation of possible future states.
Gursch Heimo, Körner Stefan, Krasser Hannes, Kern Roman
2016
Painting a modern car involves applying many coats during a highly complex and automated process. The individual coats not only serve a decoration purpose but are also crucial for protection from damage due to environmental influences, such as rust. For an optimal paint job, many parameters have to be optimised simultaneously. A forecasting model was created, which predicts the paint flaw probability for a given set of process parameters, to help the production managers modify the process parameters to achieve an optimal result. The mathematical model was based on historical process and quality observations. Production managers who are not familiar with the mathematical concept of the model can use it via an intuitive Web-based Graphical User Interface (Web-GUI). The Web-GUI offers production managers the ability to test process parameters and forecast the expected quality. The model can be used for optimising the process parameters in terms of quality and costs.
Gursch Heimo, Kern Roman
2016
Many different sensing, recording and transmitting platforms are offered on today’s market for Internet of Things (IoT) applications. But taking and transmitting measurements is just one part of a complete system; long-term storage and processing of the recorded sensor values are also vital for IoT applications. Big Data technologies provide a rich variety of processing capabilities to analyse the recorded measurements. In this paper an architecture for recording, searching, and analysing sensor measurements is proposed. This architecture combines existing IoT and Big Data technologies to bridge the gap between recording, transmission, and persistency of raw sensor data on one side, and the analysis of data on Hadoop clusters on the other side. The proposed framework emphasises scalability and persistence of measurements as well as easy access to the data from a variety of different data analytics tools. To achieve this, a distributed architecture is designed offering three different views on the recorded sensor readouts. The proposed architecture is not targeted at one specific use case, but is able to provide a platform for a large number of different services.
Hasani-Mavriqi Ilire, Geigl Florian, Pujari Suhbash Chandra, Lex Elisabeth, Helic Denis
2016
In this paper, we study the process of opinion dynamics and consensus building in online collaboration systems, in which users interact with each other following their common interests and their social profiles. Specifically, we are interested in how user similarity and social status in the community, as well as the interplay of those two factors, influence the process of consensus dynamics. For our study, we simulate the diffusion of opinions in collaboration systems using the well-known Naming Game model, which we extend by incorporating an interaction mechanism based on user similarity and user social status. We conduct our experiments on collaborative datasets extracted from the Web. Our findings reveal that when users are guided by their similarity to other users, the process of consensus building in online collaboration systems is delayed. A suitable increase of the influence of user social status on their actions can in turn facilitate this process. In summary, our results suggest that achieving an optimal consensus building process in collaboration systems requires an appropriate balance between those two factors.
Czech Paul
2016
Needs, opportunities and challenges
Goldgruber Eva, Gutounig Robert, Schweiger Stefan, Dennerlein Sebastian
2016
Gutounig Robert, Goldgruber Eva, Dennerlein Sebastian, Schweiger Stefan
2016
Dennerlein Sebastian, Gutounig Robert, Goldgruber Eva , Schweiger Stefan
2016
There are many web-based tools like social networks, collaborative writing, or messaging tools that connect organizations in accordance with web 2.0 principles. Slack is such a web 2.0 instant messaging tool. As per its developer, it integrates the entire communication, file-sharing, real-time messaging, digital archiving and search in one place. Usage in line with these functionalities would reflect expected appropriation, while other usage would account for unexpected appropriation. We explored which factors of web 2.0 tools determine actual usage and how they affect knowledge management (KM). Therefore, we investigated the relation between the three influencing factors - proposed tool utility from the developer side, intended usage of key implementers, and context of application - and the actual usage in terms of knowledge activities (generate, acquire, organize, transfer and save knowledge). We conducted episodic interviews with key implementers in five different organizational contexts to understand how messaging tools affect KM by analyzing the appropriation of features. Slack was implemented with the intention to enable exchange between project teams, connect distributed project members, initiate a community of learners and establish a communication platform. Independent of the context, all key implementers agreed on knowledge transfer, organization and saving in accordance with Slack’s proposed utility. Moreover, results revealed that a usage intention of internal management does not lead to acquisition of external knowledge, and a usage intention of networking does not lead to generation of new knowledge. These results suggest that it is not the context of application, but the intended usage that mainly affects the tool's efficacy with respect to KM: i.e., intention seems to affect tool selection first, explaining commonalities with respect to knowledge activities (expected appropriation), and, subsequently, intention also affects unexpected appropriation beyond the developers’ tool utility.
A messaging tool is, hence, not only a messaging tool, but it is ‘what you make of it!’
Kowald Dominik, Lex Elisabeth, Kopeinik Simone
2016
In recent years, a number of recommendation algorithms have been proposed to help learners find suitable learning resources online. Next to user-centered evaluations, offline datasets have been used to investigate new recommendation algorithms or variations of collaborative filtering approaches. However, a more extensive study comparing a variety of recommendation strategies on multiple TEL datasets is missing. In this work, we contribute a data-driven study of recommendation strategies in TEL to shed light on their suitability for TEL datasets. To that end, we evaluate six state-of-the-art recommendation algorithms for tag and resource recommendations on six empirical datasets: a dataset from European Schoolnet's TravelWell, a dataset from the MACE portal, which features access to meta-data-enriched learning resources from the field of architecture, two datasets from the social bookmarking systems BibSonomy and CiteULike, a MOOC dataset from the KDD challenge 2015, and Aposdle, a small-scale workplace learning dataset. We highlight strengths and shortcomings of the discussed recommendation algorithms and their applicability to the TEL datasets. Our results demonstrate that the performance of the algorithms strongly depends on the properties and characteristics of the particular dataset. However, we also find a strong correlation between the average number of users per resource and the algorithm performance. A tag recommender evaluation experiment reveals that a hybrid combination of a cognitive-inspired and a popularity-based approach consistently performs best on all TEL datasets we utilized in our study.
Rexha Andi, Klampfl Stefan, Kröll Mark, Kern Roman
2016
To bring bibliometrics and information retrieval closer together, we propose to add the concept of author attribution into the pre-processing of scientific publications. Presently, common bibliographic metrics often attribute the entire article to all the authors, affecting author-specific retrieval processes. We envision a more fine-grained analysis of scientific authorship by attributing particular segments to authors. To realize this vision, we propose a new feature representation of scientific publications that captures the distribution of stylometric features. In a classification setting, we then seek to predict the number of authors of a scientific article. We evaluate our approach on a data set of ~6,100 PubMed articles and achieve best results by applying random forests, i.e., 0.76 precision and 0.76 recall averaged over all classes.
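The classification setup described above can be sketched as follows; note that the features and labels here are synthetic stand-ins, not the paper's actual stylometric feature set or PubMed data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic stand-in for per-article stylometric feature distributions
# (e.g. mean sentence length, type-token ratio, punctuation rate, ...)
n_articles, n_features = 300, 8
X = rng.normal(size=(n_articles, n_features))
# Toy target: number of authors (2-4), loosely tied to feature spread
y = np.clip((X.std(axis=1) * 3).astype(int), 2, 4)

# Random forest predicting the author count from the feature vector
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

pred = clf.predict(X[:5])
print(pred)
```

The essential idea is only that author count is treated as an ordinary multi-class label over a fixed-length stylometric feature vector.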
Fessl Angela, Pammer-Schindler Viktoria, Blunk Oliver, Prilla Michael
2016
Reflective learning has been established as a process that deepens learning in both educational and work-related settings. We present a literature review on various approaches and tools (e.g., prompts, journals, visuals) providing guidance for facilitating reflective learning. Research considered in this review coincides with the common understanding of reflective learning, has applied and evaluated a tool supporting reflection, and presents corresponding results. Literature was analysed with respect to timing of reflection, reflection participants, type of reflection guidance, and results achieved regarding reflection. From this analysis, we were able to derive insights, guidelines and recommendations for the design of reflection guidance functionality in computing systems: (i) ensure that learners understand the purpose of reflective learning, (ii) combine reflective learning tools with reflective questions either in the form of prompts or with peer-to-peer or group discussions, (iii) for work-related settings, consider the time with regard to when and how to motivate to reflect.
Rexha Andi, Kröll Mark, Kern Roman
2016
Monitoring (social) media represents one means for companies to gain access to knowledge about, for instance, competitors, products as well as markets. As a consequence, social media monitoring tools have been gaining attention to handle the amounts of data nowadays generated in social media. These tools also include summarisation services. However, most summarisation algorithms tend to focus on (i) first and last sentences respectively or (ii) sentences containing keywords. In this work we approach the task of summarisation by extracting 4W (who, when, where, what) information from (social) media texts. Presenting 4W information allows for a more compact content representation than traditional summaries. In addition, we depart from mere named entity recognition (NER) techniques to answer these four question types by including non-rigid designators, i.e. expressions which do not refer to the same thing in all possible worlds, such as “at the main square” or “leaders of political parties”. To do that, we employ dependency parsing to identify grammatical characteristics for each question type. Every sentence is then represented as a 4W block. We perform two different preliminary studies: selecting sentences that better summarise texts, achieving an F1-measure of 0.343, as well as a 4W block extraction, for which we achieve F1-measures of 0.932, 0.900, 0.803 and 0.861 for the “who”, “when”, “where” and “what” categories respectively. In a next step the 4W blocks are ranked by relevance. The top three ranked blocks, for example, then constitute a summary of the entire textual passage. The relevance metric can be customised to the user’s needs, for instance ranked by up-to-dateness, where the sentences’ tense is taken into account. In a user study we evaluate different ranking strategies including (i) up-to-dateness, (ii) text sentence rank, (iii) selecting the first and last sentences or (iv) coverage of named entities, i.e. based on the number of named entities in the sentence.
Our 4W summarisation method presents a valuable addition to a company’s (social) media monitoring toolkit, thus supporting decision making processes.
Pimas Oliver, Rexha Andi, Kröll Mark, Kern Roman
2016
The PAN 2016 author profiling task is a supervised classification problem on cross-genre documents (tweets, blog and social media posts). Our system makes use of concreteness, sentiment and syntactic information present in the documents. We train a random forest model to identify the gender and age of a document’s author. We report the evaluation results received by the shared task.
Trattner Christoph, Kuśmierczyk Tomasz, Rokicki Markus, Herder Eelco
2016
Historically, there have always been differences in how men and women cook or eat. The reasons for this gender divide have mostly gone in Western culture, but still there is qualitative and anecdotal evidence that men prefer heftier food, that women take care of everyday cooking, and that men cook to impress. In this paper, we show that these differences can also be observed quantitatively in a large dataset of almost 200 thousand members of an online recipe community. Further, we show that, using a set of 88 features, the gender of the cooks can be predicted with a fairly good accuracy of 75%, with preference for particular dishes, the use of spices and the use of kitchen utensils being the strongest predictors. Finally, we show the positive impact of our results on online food recipe recommender systems that take gender information into account.
Kern Roman, Klampfl Stefan, Rexha Andi
2016
This report describes our contribution to the 2nd Computational Linguistics Scientific Document Summarization Shared Task (CL-SciSumm 2016), which asked to identify the relevant text span in a reference paper that corresponds to a citation in another document that cites this paper. We developed three different approaches based on summarisation and classification techniques. First, we applied a modified version of an unsupervised summarisation technique, TextSentenceRank, to the reference document, which incorporates the similarity of sentences to the citation on a textual level. Second, we employed classification to select from candidates previously extracted through the original TextSentenceRank algorithm. Third, we used unsupervised summarisation of the relevant sub-part of the document that was previously selected in a supervised manner.
Trattner Christoph, Kuśmierczyk Tomasz, Nørvåg Kjetil
2016
Gursch Heimo, Ziak Hermann, Kröll Mark, Kern Roman
2016
Modern knowledge workers need to interact with a large number of different knowledge sources with restricted or public access. Knowledge workers are thus burdened with the need to familiarise themselves with and query each source separately. The EEXCESS (Enhancing Europe’s eXchange in Cultural Educational and Scientific reSources) project aims at developing a recommender system providing relevant and novel content to its users. Based on the user’s work context, the EEXCESS system can either automatically recommend useful content, or support users by providing a single user interface for a variety of knowledge sources. In the design process of the EEXCESS system, recommendation quality, scalability and security were the three most important criteria. This paper investigates the scalability aspect achieved by the federated design of the EEXCESS recommender system. This means that content in different sources is not replicated; its management is done in each source individually. Recommendations are generated based on the context describing the knowledge worker’s information need. Each source offers result candidates which are merged and re-ranked into a single result list. This merging is done in a vector representation space to achieve high recommendation quality. To ensure security, user credentials can be set individually by each user for each source. Hence, access to the sources can be granted and revoked for each user and source individually. The scalable architecture of the EEXCESS system handles up to 100 requests querying up to 10 sources in parallel without notable performance deterioration. The re-ranking and merging of results have a smaller influence on the system's responsiveness than the average source response rates. The EEXCESS recommender system offers a common entry point for knowledge workers to a variety of different sources, with response times only marginally higher than those of the individual sources on their own.
Hence, familiarisation with individual sources and their query language is not necessary.
Rexha Andi, Dragoni Mauro, Kern Roman, Kröll Mark
2016
Ontology matching in a multilingual environment consists of finding alignments between ontologies modeled by using more than one language. Such a research topic combines traditional ontology matching algorithms with the use of multilingual resources, services, and capabilities for easing multilingual matching. In this paper, we present a multilingual ontology matching approach based on Information Retrieval (IR) techniques: ontologies are indexed through an inverted index algorithm and candidate matches are found by querying such indexes. We also exploit the hierarchical structure of the ontologies by adopting the PageRank algorithm for our system. The approaches have been evaluated using a set of domain-specific ontologies belonging to the agricultural and medical domain. We compare our results with existing systems following an evaluation strategy closely resembling a recommendation scenario. The version of our system using PageRank showed an increase in performance in our evaluations.
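The inverted-index candidate-matching step described above can be illustrated with a minimal sketch; the concept identifiers and labels below are hypothetical, not taken from the evaluated agricultural or medical ontologies:

```python
from collections import defaultdict

# Toy ontology concepts with (possibly multilingual) textual labels
concepts = {
    "c1": "crop rotation rotation des cultures",
    "c2": "soil erosion érosion du sol",
    "c3": "plant disease maladie des plantes",
}

# Build an inverted index: token -> set of concept ids containing it
index = defaultdict(set)
for cid, label in concepts.items():
    for token in label.lower().split():
        index[token].add(cid)

def candidate_matches(query_label):
    """Score concepts by the number of label tokens shared with the query."""
    scores = defaultdict(int)
    for token in query_label.lower().split():
        for cid in index.get(token, ()):
            scores[cid] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(candidate_matches("soil erosion"))
```

In the actual system a full-text index (with weighting) plays this role, and the candidate ranking is further refined, e.g. with PageRank over the ontology structure; the sketch only shows the querying-the-index idea.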
Traub Matthias, Lacic Emanuel, Kowald Dominik, Kahr Martin, Lex Elisabeth
2016
In this paper, we present work-in-progress on a recommender system designed to help people in need find the best suited social care institution for their personal issues. A key requirement in such a domain is to assure and to guarantee the person's privacy and anonymity in order to reduce inhibitions and to establish trust. We present how we aim to tackle this barely studied domain using a hybrid content-based recommendation approach. Our approach leverages three data sources containing textual content, namely (i) metadata from social care institutions, (ii) institution specific FAQs, and (iii) questions that a specific institution has already resolved. Additionally, our approach considers the time context of user questions as well as negative user feedback to previously provided recommendations. Finally, we demonstrate an application scenario of our recommender system in the form of a real-world Web system deployed in Austria.
Lacic Emanuel
2016
Recommender systems are acknowledged as an essential instrument to support users in finding relevant information. However, adapting to different domain-specific data models is a challenge which many recommender frameworks neglect. Moreover, the advent of the big data era has posed the need for high scalability and real-time processing of frequent data updates, and thus has brought new challenges for the recommender systems’ research community. In this work, we show how different item, social and location data features can be utilized and supported to provide real-time recommendations. We further show how to process data updates online and capture a user’s real-time interest without recalculating recommendations. The presented recommendation framework provides a scalable and customizable architecture suited for providing real-time recommendations to multiple domains. We further investigate the impact of an increasing request load and show how the runtime can be decreased by scaling the framework.
Stanisavljevic Darko, Hasani-Mavriqi Ilire, Lex Elisabeth, Strohmaier M., Helic Denis
2016
In this paper we assess the semantic stability of Wikipedia by investigating the dynamics of Wikipedia articles’ revisions over time. In a semantically stable system, articles are infrequently edited, whereas in unstable systems, article content changes more frequently. In other words, in a stable system, the Wikipedia community has reached consensus on the majority of articles. In our work, we measure semantic stability using the Rank Biased Overlap method. To that end, we preprocess Wikipedia dumps to obtain a sequence of plain-text article revisions, whereas each revision is represented as a TF-IDF vector. To measure the similarity between consequent article revisions, we calculate Rank Biased Overlap on subsequent term vectors. We evaluate our approach on 10 Wikipedia language editions including the five largest language editions as well as five randomly selected small language editions. Our experimental results reveal that even in policy-driven collaboration networks such as Wikipedia, semantic stability can be achieved. However, there are differences in the velocity of the semantic stability process between small and large Wikipedia editions. Small editions exhibit faster and higher semantic stability than large ones. In particular, in large Wikipedia editions, a higher number of successive revisions is needed in order to reach a certain semantic stability level, whereas, in small Wikipedia editions, the number of needed successive revisions is much lower for the same level of semantic stability.
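The Rank Biased Overlap measure used above can be sketched in its truncated (finite-prefix) form; the persistence parameter p=0.9 and the toy rankings are illustrative choices, not necessarily the paper's settings:

```python
def rbo(list1, list2, p=0.9):
    """Truncated Rank Biased Overlap of two ranked lists.

    A_d is the proportion of overlap between the top-d prefixes; deeper
    ranks are discounted geometrically by the persistence parameter p.
    This finite-prefix form is a lower bound on the full (extrapolated)
    RBO score.
    """
    k = min(len(list1), len(list2))
    seen1, seen2 = set(), set()
    score = 0.0
    for d in range(1, k + 1):
        seen1.add(list1[d - 1])
        seen2.add(list2[d - 1])
        agreement = len(seen1 & seen2) / d
        score += (p ** (d - 1)) * agreement
    return (1 - p) * score

# Identical term rankings score higher than disjoint ones
print(rbo(["a", "b", "c"], ["a", "b", "c"]))  # ~0.271 at depth 3, p=0.9
print(rbo(["a", "b", "c"], ["x", "y", "z"]))  # 0.0
```

In the paper's setting, the two lists would be the terms of consecutive article revisions, ranked by TF-IDF weight.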
Kopeinik Simone, Kowald Dominik, Hasani-Mavriqi Ilire, Lex Elisabeth
2016
Classic resource recommenders like Collaborative Filtering treat users as just another entity, thereby neglecting non-linear user-resource dynamics that shape attention and interpretation. SUSTAIN, as an unsupervised human category learning model, captures these dynamics. It aims to mimic a learner’s categorization behavior. In this paper, we use three social bookmarking datasets gathered from BibSonomy, CiteULike and Delicious to investigate SUSTAIN as a user modeling approach to re-rank and enrich Collaborative Filtering following a hybrid recommender strategy. Evaluations against baseline algorithms in terms of recommender accuracy and computational complexity reveal encouraging results. Our approach substantially improves Collaborative Filtering and, depending on the dataset, successfully competes with a computationally much more expensive Matrix Factorization variant. In a further step, we explore SUSTAIN’s dynamics in our specific learning task and show that both memorization of a user’s history and clustering contribute to the algorithm’s performance. Finally, we observe that the users’ attentional foci determined by SUSTAIN correlate with the users’ level of curiosity, identified by the SPEAR algorithm. Overall, the results of our study show that SUSTAIN can be used to efficiently model attention-interpretation dynamics of users and can help improve Collaborative Filtering for resource recommendations.
Kraker Peter, Kittel Christopher, Enkhbayar Asuraa
2016
The goal of Open Knowledge Maps is to create a visual interface to the world’s scientific knowledge. The base for this visual interface consists of so-called knowledge maps, which enable the exploration of existing knowledge and the discovery of new knowledge. Our open source knowledge mapping software applies a mixture of summarization techniques and similarity measures on article metadata, which are iteratively chained together. After processing, the representation is saved in a database for use in a web visualization. In the future, we want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery. We want to enable people to guide each other in their discovery by collaboratively annotating and modifying the automatically created maps.
Mutlu Belgin, Sabol Vedran, Gursch Heimo, Kern Roman
2016
Graphical interfaces and interactive visualisations are typical mediators between human users and data analytics systems. HCI researchers and developers have to be able to understand both human needs and back-end data analytics. Participants of our tutorial will learn how visualisation and interface design can be combined with data analytics to provide better visualisations. In the first of three parts, the participants will learn about visualisations and how to appropriately select them. In the second part, restrictions and opportunities associated with different data analytics systems will be discussed. In the final part, the participants will have the opportunity to develop visualisations and interface designs under given scenarios of data and system settings.
Gursch Heimo, Wuttei Andreas, Gangloff Theresa
2016
Highly optimised assembly lines are commonly used in various manufacturing domains, such as electronics, microchips, vehicles, and electric appliances. In the last decades manufacturers have installed software systems to control and optimise their shop floor processes. Machine Learning can enhance those systems by providing new insights derived from the previously captured data. This paper provides an overview of Machine Learning fields and an introduction to manufacturing management systems. These are followed by a discussion of research projects in the field of applying Machine Learning solutions for condition monitoring, process control, scheduling, and predictive maintenance.
Santos Tiago, Kern Roman
2016
This paper provides an overview of current literature on time series classification approaches, in particular of early time series classification. A very common and effective time series classification approach is the 1-Nearest Neighbor classifier, with different distance measures such as the Euclidean or dynamic time warping distances. This paper starts by reviewing these baseline methods. More recently, with the gain in popularity of the application of deep neural networks to the field of computer vision, research has focused on developing deep learning architectures for time series classification as well. The literature in the field of deep learning for time series classification has shown promising results. Early time series classification aims to classify a time series with as few temporal observations as possible, while keeping the loss of classification accuracy at a minimum. Prominent early classification frameworks reviewed by this paper include, but are not limited to, ECTS, RelClass and ECDIRE. These works have shown that early time series classification may be feasible and performant, but they also show room for improvement.
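The 1-Nearest Neighbor baseline with a dynamic time warping distance can be sketched as follows; the training series and labels are toy data, not from the reviewed works:

```python
import math

def dtw(a, b):
    """Dynamic time warping distance between two 1-D series, O(len(a)*len(b))."""
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def nn1_classify(query, train):
    """1-NN under DTW: return the label of the closest training series."""
    return min(train, key=lambda item: dtw(query, item[0]))[1]

# Toy training set: (series, label) pairs
train = [([0, 1, 2, 3], "rising"), ([3, 2, 1, 0], "falling")]
print(nn1_classify([0, 0, 1, 2, 3], train))  # "rising" despite the time shift
```

DTW's tolerance to local time shifts is exactly why it outperforms plain Euclidean distance as a 1-NN measure in much of this literature.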
Kern Roman, Ziak Hermann
2016
Context-driven query extraction for content-based recommender systems faces the challenge of dealing with queries of multiple topics. In contrast to manually entered queries, this is a more frequent problem for automatically generated queries, for instance if the information need is inferred indirectly via the user's current context. Especially for federated search systems, where connected knowledge sources might react vastly differently to such queries, an algorithmic way to deal with such queries is of high importance. One such method is to split mixed queries into their individual subtopics. To gain insight into how a multi-topic query can be split into its subtopics, we conducted an evaluation where we compared a naive approach against more complex approaches based on word embedding techniques: one created using Word2Vec and one created using GloVe. To evaluate these two approaches we used the Webis-QSeC-10 query set, consisting of about 5,000 multi-term queries. Queries of this set were concatenated and passed through the algorithms with th
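An embedding-based query split of the kind discussed can be sketched with toy vectors; the hand-made 3-d embeddings, the greedy grouping rule and the threshold are illustrative stand-ins for pre-trained Word2Vec or GloVe vectors and the paper's actual algorithms:

```python
import math

# Toy embeddings (hypothetical 3-d vectors; a real system would look up
# pre-trained Word2Vec or GloVe vectors instead)
emb = {
    "python": [0.9, 0.1, 0.0], "programming": [0.8, 0.2, 0.0],
    "snake":  [0.1, 0.9, 0.0], "reptile":     [0.0, 0.8, 0.2],
}

def cos(u, v):
    """Cosine similarity of two vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def split_query(terms, threshold=0.5):
    """Greedily group query terms: a term joins the first existing group
    whose seed term is similar enough, otherwise it starts a new subtopic."""
    groups = []
    for t in terms:
        for g in groups:
            if cos(emb[t], emb[g[0]]) >= threshold:
                g.append(t)
                break
        else:
            groups.append([t])
    return groups

print(split_query(["python", "snake", "programming", "reptile"]))
# [['python', 'programming'], ['snake', 'reptile']]
```

The sketch shows only the core idea: terms whose embeddings are close in vector space are assumed to belong to the same subtopic of the mixed query.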