Eldorado Community: http://hdl.handle.net/2003/9 (2020-07-09T07:56:22Z)

Title: Detecting relevant differences in the covariance operators of functional time series - a sup-norm approach
URI: http://hdl.handle.net/2003/39181
Date: 2020-06-23
Authors: Dette, Holger; Kokot, Kevin
Abstract: In this paper we propose statistical inference tools for the covariance operators of functional
time series in the two sample and change point problem. In contrast to most of
the literature, the focus of our approach is not on testing the null hypothesis of exact equality
of the covariance operators. Instead we propose to formulate the null hypotheses in the
form that "the distance between the operators is small", where we measure deviations by
the sup-norm. We provide powerful bootstrap tests for these types of hypotheses, investigate
their asymptotic properties and study their finite sample properties by means of a
simulation study.

Title: Dekarbonisierung bis zum Jahr 2050? Klimapolitische Maßnahmen und Energieprognosen für Deutschland, Österreich und die Schweiz
URI: http://hdl.handle.net/2003/39180
Date: 2020-06-23
Authors: Frondel, Manuel; Thomas, Tobias
Abstract: In view of growing climate-policy challenges, many European countries aim for decarbonization by 2050, that is, a phase-out of the use of fossil fuels. Against this background, this article presents forecasts of energy demand and the energy mix for Germany, Austria and Switzerland for the year 2030, together with an outlook to 2050. A comparison of these countries' energy policies to date reveals substantial differences: while Germany has so far relied mainly on massive subsidies for alternative electricity generation technologies, Austria's approach has rather been to reduce energy consumption and greenhouse gas emissions through regulatory measures, in particular mandates and bans, but also subsidies. Switzerland, by contrast, has relied on the market-based instrument of a CO2 levy since 2008. The energy demand forecasts presented here suggest that, under a continuation of current policies, Germany and Austria in particular are unlikely to reach the long-term goal of far-reaching decarbonization, whereas Switzerland has already seen a noticeable decline in primary energy consumption. Against this background, the CO2 pricing of emissions in the transport and heating sectors recently adopted in Germany gains particular importance. Austria also intends to introduce CO2 pricing in these sectors. It remains to be seen, however, how consistently the market-based instrument of CO2 pricing will actually be pursued.

Title: Sequential change point detection in high dimensional time series
URI: http://hdl.handle.net/2003/39167
Date: 2020-06-03
Authors: Gösmann, Josua; Stoehr, Christina; Dette, Holger
Abstract: Change point detection in high dimensional data has found considerable interest
in recent years. Most of the literature designs methodology for a retrospective
analysis, where the whole sample is already available when the statistical inference begins.
This paper takes a different point of view and develops monitoring schemes for the
online scenario, where high dimensional data arrives steadily and the goal is to detect
changes as quickly as possible while controlling the probability of a type I error, i.e. of
a false alarm. We develop sequential procedures capable of detecting changes in the mean
vector of a successively observed high dimensional time series with spatial and temporal
dependence. The statistical properties of the methods are analyzed in the case where
both the sample size and the dimension converge to infinity. In this scenario it is shown that
the new monitoring schemes have asymptotic level α under the null hypothesis of no
change and are consistent under the alternative of a change in at least one component
of the high dimensional mean vector. Moreover, we also prove that the new detection
scheme identifies all components affected by a change. The finite sample properties of the
new methodology are illustrated by means of a simulation study and in the analysis of a
data example.
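The general idea of such an online monitoring scheme can be sketched numerically. The following is a deliberately simplified CUSUM-type detector with maximum aggregation over components; the standardization and the threshold are ad hoc illustrative assumptions, not the authors' exact statistic:

```python
import numpy as np

def monitor(initial, stream, threshold):
    """Sequentially monitor a d-dimensional stream for a change in the
    mean vector: per component, track the cumulative deviation from the
    baseline mean, aggregate by the maximum, and raise an alarm once the
    (roughly standardized) statistic exceeds the threshold."""
    mu0 = initial.mean(axis=0)          # baseline mean from the training sample
    m = len(initial)
    cum = np.zeros(initial.shape[1])
    for k, x in enumerate(stream, start=1):
        cum += x - mu0                  # cumulative deviation per component
        stat = np.abs(cum).max() / np.sqrt(m + k)
        if stat > threshold:
            return k                    # time of the alarm
    return None                         # no change detected

rng = np.random.default_rng(0)
d = 50
initial = rng.normal(size=(200, d))     # stable training sample
stream = rng.normal(size=(300, d))
stream[100:, 0] += 1.0                  # mean shift in one component from time 101 on
print(monitor(initial, stream, threshold=3.0))
```

With these settings the detector typically raises an alarm shortly after the shift at time 100, while the pre-change false alarm probability stays small.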
Our approach is based on a new type of monitoring scheme for one-dimensional data
which often turns out to be more powerful than the commonly used CUSUM and Page-CUSUM
methods, and the component-wise statistics are aggregated by the maximum
statistic. From a mathematical point of view we use Gaussian approximations for high
dimensional time series to prove our main results and derive extreme value convergence for
the maximum of the maximal increment of dependent Brownian motions. In particular
we show that the range of a Brownian motion on a given interval is in the domain of
attraction of the Gumbel distribution.

Title: A distribution free test for changes in the trend function of locally stationary processes
URI: http://hdl.handle.net/2003/39154
Date: 2020-05-27
Authors: Heinrichs, Florian; Dette, Holger
Abstract: In the common time series model X_{i,n} = μ(i/n) + ε_{i,n} with non-stationary errors we consider the problem of detecting a significant deviation of the mean function μ from a benchmark g(μ) (such as the initial value μ(0) or the average trend ∫₀¹ μ(t) dt). The problem is motivated by a more realistic modelling of change point analysis, where one is interested in identifying relevant deviations in a smoothly varying sequence of means (μ(i/n))_{i=1,...,n} and cannot assume that the sequence is piecewise constant. A test for this type of hypotheses is developed using an appropriate estimator for the integrated squared deviation of the mean function and the threshold. By a new concept of self-normalization adapted to non-stationary processes an asymptotically pivotal test for the hypothesis of a relevant deviation is constructed. The results are illustrated by means of a simulation study and a data example.

Title: K-sign depth: From asymptotics to efficient implementation
URI: http://hdl.handle.net/2003/39100
Date: 2020-04-30
Authors: Malcherczyk, Dennis; Leckey, Kevin; Müller, Christine H.
Abstract: The K-sign depth (K-depth) of a model parameter θ in a data set is the relative number of K-tuples among its residual vector that have alternating signs. The K-depth test based on K-depth, recently proposed by Leckey et al. (2019), is equivalent to the classical residual-based sign test for K = 2, but is much more powerful for K ≥ 3. This test has two major drawbacks. First, the computation of the K-depth is fairly time consuming, and second, the test requires knowledge about the quantiles of the test statistic which previously had to be obtained by simulation for each sample size individually. We tackle both of these drawbacks by presenting a limit theorem for the distribution of the test statistic and deriving an (asymptotically equivalent) form of the K-depth which can be computed efficiently. For K = 3, such a limit theorem was already derived in Kustosz et al. (2016a) by mimicking the proof for U-statistics. We provide here a much shorter proof based on Donsker's theorem and extend it to any K ≥ 3. As part of the proof, we derive an asymptotically equivalent form of the K-depth which can be computed in linear time. This alternative and the original implementation of the K-depth are compared with respect to their runtimes and absolute difference.

Title: Powerful generalized sign tests based on sign depth
URI: http://hdl.handle.net/2003/39099
Date: 2020-04-30
Authors: Leckey, Kevin; Malcherczyk, Dennis; Müller, Christine H.
Abstract: The classical sign test usually provides very bad power for certain alternatives. We present
a generalization which is similarly easy to comprehend but much more powerful. It is based on
K-sign depth, shortly denoted by K-depth. These so-called K-depth tests are motivated by
simplicial regression depth, but are not restricted to regression problems. They can be applied
as soon as the true model leads to independent residuals with median equal to zero. Moreover,
general hypotheses on the unknown parameter vector can be tested. Since they depend only
on the signs of the residuals, these test statistics are outlier robust. While the 2-depth test, i.e.
the K-depth test for K = 2, is equivalent to the classical sign test, K-depth tests with K ≥ 3
turn out to be more powerful in many applications. As we will briefly discuss, these tests are
also related to runs tests. A drawback of the K-depth test is its fairly high computational effort
when implemented naively. However, we show how this inherent computational complexity can
be reduced. In order to see why K-depth tests with K ≥ 3 are more powerful than the classical
sign test, we discuss the asymptotic behaviour of its test statistic for residual vectors with only
few sign changes, which is in particular the case for some nonfits that the classical sign test cannot
reject. In contrast, we also consider residual vectors with alternating signs, representing models
that fit the data very well. Finally, we demonstrate the good power of the K-depth tests for
quadratic regression.

Title: Market premia for renewables in Germany: The effect on electricity prices
URI: http://hdl.handle.net/2003/39098
Date: 2020-04-30
Authors: Frondel, Manuel; Kaeding, Matthias; Sommer, Stephan
Abstract: Due to the growing share of "green" electricity generated by renewable energy
technologies, the frequency of negative price spikes has substantially increased in
Germany. To reduce such events, in 2012, a market premium scheme (MPS) was introduced
as an alternative to feed-in tariffs for the promotion of green electricity. Drawing
on hourly day-ahead spot prices for the time period spanning 2009 to 2016 and
employing a nonparametric modeling strategy called Bayesian Additive Regression
Trees, this paper empirically evaluates the efficacy of Germany’s MPS. Via counterfactual
analyses, we demonstrate that the introduction of the MPS decreased the number
of hours with negative prices by some 70%.

Title: Efficient tests for bio-equivalence in functional data
URI: http://hdl.handle.net/2003/39097
Date: 2020-04-30
Authors: Dette, Holger; Kokot, Kevin
Abstract: We study the problem of testing the equivalence of functional parameters (such as the
mean or variance function) in the two sample functional data problem. In contrast to
previous work, which reduces the functional problem to a multiple testing problem for the
equivalence of scalar data by comparing the functions at each point, our approach is based
on an estimate of a distance measuring the maximum deviation between the two functional
parameters. Equivalence is claimed if the estimate for the maximum deviation does not
exceed a given threshold. A bootstrap procedure is proposed to obtain quantiles for the
distribution of the test statistic and consistency of the corresponding test is proved in the
large sample scenario. As the methods proposed here avoid the use of the intersection-union
principle, they are less conservative and more powerful than the currently available
methodology.

Title: Providing Information by Resource-Constrained Data Analysis
URI: http://hdl.handle.net/2003/39096
Date: 2020-04-30
Authors: Morik, Katharina; Rhode, Wolfgang
Abstract: The Collaborative Research Center SFB 876 (Providing Information by Resource-Constrained Data Analysis) brings together the research fields of data analysis (Data Mining, Knowledge Discovery in Data Bases, Machine Learning, Statistics) and embedded systems and enhances their methods such that information from distributed, dynamic masses of data becomes available anytime and anywhere. The research center approaches these problems with new algorithms respecting the resource constraints in the different scenarios. This Technical Report presents the work of the members of the integrated graduate school.

Title: Pivotal tests for relevant differences in the second order dynamics of functional time series
URI: http://hdl.handle.net/2003/39090
Date: 2020-04-20
Authors: van Delft, Anne; Dette, Holger
Abstract: Motivated by the need to statistically quantify differences between modern (complex) datasets
which commonly arise as high-resolution measurements of stochastic processes varying
over a continuum, we propose novel testing procedures to detect relevant differences between
the second order dynamics of two functional time series. In order to account for the between-function
dynamics that characterize this type of functional data, a frequency
domain approach is taken. Test statistics are developed to compare differences in the spectral
density operators and in the primary modes of variation as encoded in the associated eigenelements.
Under mild moment conditions, we show convergence of the underlying statistics to
Brownian motions and obtain pivotal test statistics via a self-normalization approach. The latter
is essential because the nuisance parameters can be unwieldy and their robust estimation
infeasible, especially if the two functional time series are dependent. Beyond these novel
features, the properties of the tests are robust to any choice of frequency band, enabling one also
to compare energy contents at a single frequency. The finite sample performance of the tests
is verified through a simulation study and illustrated with an application to fMRI data.

Title: Quantifying deviations from separability in space-time functional processes
URI: http://hdl.handle.net/2003/39075
Date: 2020-03-31
Authors: Dette, Holger; Dierickx, Gauthier; Kutta, Tim
Abstract: The estimation of covariance operators of spatio-temporal data is in many applications only computationally feasible under simplifying assumptions, such as separability of the covariance into strictly temporal and spatial factors. Powerful tests for this assumption have been proposed in the literature. However, as real-world systems such as climate data are notoriously inseparable, validating this assumption by statistical tests seems inherently questionable. In this paper we present an alternative approach: by virtue of separability measures, we quantify how strongly the data's covariance operator diverges from a separable approximation. Confidence intervals localize these measures with statistical guarantees. This method provides users with a flexible tool to weigh the computational gains of a separable model against the associated increase in bias. As separable approximations we consider the established methods of partial traces and partial products, and develop weak convergence principles for the corresponding estimators. Moreover, we also prove such results for estimators of optimal separable approximations, which are arguably of most interest in applications. In particular we present for the first time statistical inference for this object, which has previously been confined to estimation. Besides confidence intervals, our results encompass tests for approximate separability. All methods proposed in this paper are free of nuisance parameters and require neither computationally expensive resampling procedures nor the estimation of nuisance parameters.
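The partial-trace idea can be sketched numerically in the matrix case. Here the paper's operators are replaced by a pq x pq covariance matrix, and the dimensions, test matrices and the trace normalization are illustrative assumptions:

```python
import numpy as np

def partial_trace_approx(C, p, q):
    """Separable approximation C ≈ A ⊗ B of a pq x pq covariance matrix,
    with A and B obtained as partial traces over the second and first
    tensor factor, normalized so the traces match."""
    C4 = C.reshape(p, q, p, q)
    A = np.trace(C4, axis1=1, axis2=3)   # p x p: trace out the q-indices
    B = np.trace(C4, axis1=0, axis2=2)   # q x q: trace out the p-indices
    return np.kron(A, B) / np.trace(C)

rng = np.random.default_rng(1)
p, q = 4, 3
# a genuinely separable covariance C = A0 ⊗ B0
X = rng.normal(size=(p, p)); A0 = X @ X.T
Y = rng.normal(size=(q, q)); B0 = Y @ Y.T
C = np.kron(A0, B0)

approx = partial_trace_approx(C, p, q)
# deviation-from-separability measure: relative Frobenius distance
dev = np.linalg.norm(C - approx) / np.linalg.norm(C)
print(dev)   # essentially zero, since this C is exactly separable
```

For an inseparable C the same quantity is strictly positive, which is the kind of measure the confidence intervals above would localize.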
A simulation study underlines the advantages of our approach and its applicability is demonstrated by the investigation of German annual temperature data.

Title: Design admissibility and de la Garza phenomenon in multi-factor experiments
URI: http://hdl.handle.net/2003/39070
Date: 2020-03-24
Authors: Dette, Holger; Liu, Xin; Yue, Rong-Xian
Abstract: The determination of an optimal design for a given regression problem is an intricate
optimization problem, especially for models with multivariate predictors. Design
admissibility and invariance are main tools to reduce the complexity of the optimization
problem and have been successfully applied for models with univariate predictors.
In particular several authors have developed sufficient conditions for the existence of
saturated designs in univariate models, where the number of support points of the optimal
design equals the number of parameters. These results generalize the celebrated de
la Garza phenomenon (de la Garza, 1954) which states that for a polynomial regression
model of degree p − 1 any optimal design can be based on at most p points.
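The classical univariate phenomenon can be checked numerically. The sketch below compares the determinant of the information matrix for the saturated three-point design (the known D-optimal design for quadratic regression, p = 3, on [-1, 1]) against a five-point competitor; the competitor design is an arbitrary choice for illustration:

```python
import numpy as np

def info_det(points, weights):
    """Determinant of the information matrix M = sum_i w_i f(x_i) f(x_i)^T
    for quadratic regression with f(x) = (1, x, x^2)^T."""
    F = np.array([[1.0, x, x * x] for x in points])
    M = F.T @ (np.asarray(weights)[:, None] * F)
    return np.linalg.det(M)

# saturated design: equal weights on {-1, 0, 1} (D-optimal for degree 2)
d3 = info_det([-1.0, 0.0, 1.0], [1 / 3] * 3)
# a 5-point uniform competitor on [-1, 1]
d5 = info_det(np.linspace(-1.0, 1.0, 5), [1 / 5] * 5)
print(d3, d5)   # 4/27 ≈ 0.148 for the 3-point design vs. 0.0875
```

The three-point design yields the larger determinant, illustrating that adding support points beyond p does not help in this model.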
This paper provides - for the first time - extensions of these results for models
with a multivariate predictor. In particular we study a geometric characterization
of the support points of an optimal design to provide sufficient conditions for the
occurrence of the de la Garza phenomenon in models with multivariate predictors and
characterize properties of admissible designs in terms of admissibility of designs in
conditional univariate regression models.

Title: CO2-Bepreisung in den Sektoren Verkehr und Wärme: Optionen für eine sozial ausgewogene Ausgestaltung
URI: http://hdl.handle.net/2003/39066
Date: 2020-03-13
Authors: Frondel, Manuel
Abstract: The introduction of national CO2 pricing from the year 2021 onwards is a done deal: in the transport and heating sectors a national emissions trading system is to be established, in which CO2 prices are fixed for the years 2021 to 2025 and rise successively, starting at 25 euros per tonne. This entails higher cost burdens for consumers. To nevertheless gain broad acceptance for CO2 pricing, a promising approach would be to return the resulting revenues fully to consumers. Against this background, this article discusses three alternatives for redistributing the additional government revenues: a) a flat per-capita rebate for private households, b) lowering electricity costs by (i) financing the industry exemptions from the EEG levy through taxes and (ii) reducing the electricity tax, and c) targeted subsidies for particularly affected consumers, for instance in the form of an increase in housing benefit. With regard to relieving needy households, the third alternative would be the most accurately targeted. The remaining funds could be used to reduce the electricity tax, which is becoming increasingly obsolete from an ecological point of view. Although there are good reasons both for a per-capita rebate and for an electricity tax reduction, an electricity tax reduction has several advantages over a per-capita lump sum, in particular with regard to sector coupling and the transaction costs of the redistribution effort, which would be negligible for an electricity tax reduction.

Title: Tests based on sign depth for multiple regression
URI: http://hdl.handle.net/2003/39065
Date: 2020-03-13
Authors: Horn, Melanie; Müller, Christine H.
Abstract: The extension of simplicial depth to robust regression, the so-called simplicial regression depth,
provides an outlier robust test for the parameter vector of regression models. Since simplicial regression
depth often reduces to counting the subsets with alternating signs of the residuals, this led recently to
the notion of sign depth and the sign depth test. In this way, sign depth tests generalize the classical sign tests.
Since sign depth depends on the order of the residuals, one generally assumes that the D-dimensional
regressors (explanatory variables) can be ordered with respect to an inherent order. While the one-dimensional
real space possesses such a natural order, one cannot order these regressors that easily for
D > 1 because there exists no canonical order of the data in most cases.
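A minimal sketch of one way to impose such an order is a greedy nearest-neighbour tour, a cheap heuristic for a short Hamiltonian path; this is an illustrative stand-in, not necessarily the approximation studied in the paper:

```python
import numpy as np

def greedy_order(X):
    """Order the rows of X (n x D regressors) along a greedy
    nearest-neighbour tour: start at the first point, then repeatedly
    move to the closest unvisited point."""
    n = len(X)
    visited = [0]
    remaining = set(range(1, n))
    while remaining:
        last = X[visited[-1]]
        nxt = min(remaining, key=lambda j: np.linalg.norm(X[j] - last))
        visited.append(nxt)
        remaining.remove(nxt)
    return visited

rng = np.random.default_rng(2)
X = rng.normal(size=(8, 2))      # 8 two-dimensional regressors
order = greedy_order(X)
print(order)                     # a permutation of 0..7
```

The resulting permutation then fixes the residual order on which a sign depth statistic can be evaluated.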
For this scenario, we present orderings according to the Shortest Hamiltonian Path and an approximation
of it. We compare them with more naive approaches like taking the order in the data set or ordering
on the basis of a single quantity of the regressor. The comparison is based on the computational runtime,
stability of the order when transforming the data, as well as on the power of the resulting sign depth
tests for testing the parameter vector of different multiple regression models. Moreover, we compare the
power of our new tests with the power of the classical sign test and the F-test. Here, the sign depth
tests based on our distance-based approaches show power similar to that of the F-test for normally distributed
residuals with the additional benefit of being much more robust against outliers.

Title: An asymptotic test for constancy of the variance under short-range dependence
URI: http://hdl.handle.net/2003/39057
Date: 2020-03-06
Authors: Schmidt, Sara; Wornowizki, Max; Fried, Roland; Dehling, Herold
Abstract: We present a novel approach to test for heteroscedasticity of
a non-stationary time series that is based on Gini's mean difference of
logarithmic local sample variances. In order to analyse the large sample behaviour
of our test statistic, we establish new limit theorems for U-statistics
of dependent triangular arrays. We derive the asymptotic distribution of the
test statistic under the null hypothesis of a constant variance and show that
the test is consistent against a large class of alternatives, including multiple
structural breaks in the variance. Our test is applicable even in the case
of non-stationary processes, assuming a locally stationary mean function.
The performance of the test and its comparatively low computation time
are illustrated in an extensive simulation study. As an application, we analyse
data from civil engineering, monitoring crack widths in concrete bridge
surfaces.

Title: Statistical inference for high dimensional panel functional time series
URI: http://hdl.handle.net/2003/39020
Date: 2020-02-28
Authors: Zhou, Zhou; Dette, Holger
Abstract: In this paper we develop statistical inference tools for high dimensional functional
time series. We introduce a new concept of physically dependent processes in
the space of square integrable functions, which adopts the idea of basis decomposition
of functional data in these spaces, and derive Gaussian and multiplier bootstrap
approximations for sums of high dimensional functional time series. These results
have numerous important statistical consequences. As examples, we consider the development
of joint simultaneous confidence bands for the mean functions and the
construction of tests for the hypotheses that the mean functions in the spatial dimension
are parallel. The results are illustrated by means of a small simulation study
and in the analysis of Canadian temperature data.

Title: Are deviations in a gradually varying mean relevant? A testing approach based on sup-norm estimators
URI: http://hdl.handle.net/2003/38720
Date: 2020-02-19
Authors: Bücher, Axel; Dette, Holger; Heinrichs, Florian
Abstract: Classical change point analysis aims at (1) detecting abrupt changes
in the mean of a possibly non-stationary time series and at (2) identifying regions
where the mean exhibits a piecewise constant behavior. In many applications however,
it is more reasonable to assume that the mean changes gradually in a smooth
way. Those gradual changes may either be non-relevant (i.e., small), or relevant
for a specific problem at hand, and the present paper presents statistical methodology
to detect the latter. More precisely, we consider the common nonparametric
regression model X_i = μ(i/n) + ε_i with possibly non-stationary errors and propose
a test for the null hypothesis that the maximum absolute deviation of the
regression function μ from a functional g(μ) (such as the value μ(0) or the integral
∫₀¹ μ(t) dt) is smaller than a given threshold on a given interval [x₀, x₁] ⊆ [0, 1]. A
test for this type of hypotheses is developed using an appropriate estimator, say
d̂_{∞,n}, for the maximum deviation d_∞ = sup_{t∈[x₀,x₁]} |μ(t) − g(μ)|. We derive the
limiting distribution of an appropriately standardized version of d̂_{∞,n}, where the
standardization depends on the Lebesgue measure of the set of extremal points of
the function μ(·) − g(μ). A refined procedure based on an estimate of this set is
developed and its consistency is proved. The results are illustrated by means of a
simulation study and a data example.

Title: Providing Information by Resource-Constrained Data Analysis
URI: http://hdl.handle.net/2003/38571
Date: 2020-02-14
Authors: Morik, Katharina; Rhode, Wolfgang
Abstract: The Collaborative Research Center SFB 876 (Providing Information by Resource-Constrained Data Analysis) brings together the research fields of data analysis (Data Mining, Knowledge Discovery in Data Bases, Machine Learning, Statistics) and embedded systems and enhances their methods such that information from distributed, dynamic masses of data becomes available anytime and anywhere. The research center approaches these problems with new algorithms respecting the resource constraints in the different scenarios. This Technical Report presents the work of the members of the integrated graduate school.

Title: Explicit results on conditional distributions of generalized exponential mixtures
URI: http://hdl.handle.net/2003/38570
Date: 2020-02-14
Authors: Klüppelberg, Claudia; Seifert, Miriam Isabel
Abstract: For independent exponentially distributed random variables X_i, i ∈ ℕ, with distinct rates λ_i we consider sums ∑_{i∈A} X_i for A ⊆ ℕ, which follow generalized exponential mixture (GEM) distributions. We provide novel explicit results on the conditional distribution of the total sum ∑_{i∈ℕ} X_i given that a subset sum ∑_{j∈A} X_j exceeds a certain threshold value t > 0, and vice versa. Moreover, we investigate the characteristic tail behavior of these conditional distributions for t → ∞. Finally, we illustrate how our probabilistic results can be applied in practice by providing examples from both reliability theory and risk management.

Title: Prediction in locally stationary time series
URI: http://hdl.handle.net/2003/38530
Date: 2020-01-17
Authors: Dette, Holger; Wu, Weichi
Abstract: We develop an estimator for the high-dimensional covariance matrix of a locally
stationary process with a smoothly varying trend and use this statistic to derive consistent
predictors in non-stationary time series. In contrast to the currently available
methods for this problem, the predictor developed here does not rely on fitting an
autoregressive model and does not require a vanishing trend. The finite sample properties
of the new methodology are illustrated by means of a simulation study and a
data example.

Title: Detecting structural breaks in eigensystems of functional time series
URI: http://hdl.handle.net/2003/38386
Date: 2019-11-19
Authors: Dette, Holger; Kutta, Tim
Abstract: Detecting structural changes in functional data is a prominent topic in statistical
literature. However, not all trends in the data are important in applications, but only
those of large enough influence. In this paper we address the problem of identifying
relevant changes in the eigenfunctions and eigenvalues of covariance kernels of L²[0,1]-
valued time series. By self-normalization techniques we derive pivotal, asymptotically
consistent tests for relevant changes in these characteristics of the second order structure
and investigate their finite sample properties in a simulation study. The applicability of
our approach is demonstrated by analyzing German annual temperature data.

Title: Equivalence tests for binary efficacy-toxicity responses
URI: http://hdl.handle.net/2003/38379
Date: 2019-11-13
Authors: Möllenhoff, Kathrin; Dette, Holger; Bretz, Frank
Abstract: Clinical trials often aim to compare a new drug with a reference treatment in terms of efficacy and/or toxicity depending on covariates such as, for example, the dose level of the drug. Equivalence of these treatments can be claimed if the difference in average outcome is below a certain threshold over the covariate range. In this paper we assume that the efficacy and toxicity of the treatments are measured as binary outcome variables and we address two problems. First, we develop a new test procedure for the assessment of equivalence of two treatments over the entire covariate range for a single binary endpoint. Our approach is based on a parametric bootstrap, which generates data under the constraint that the distance between the curves is equal to the pre-specified equivalence threshold. Second, we address equivalence for bivariate binary (correlated) outcomes by extending the previous approach for a univariate response. For this purpose we use a 2-dimensional Gumbel model for binary efficacy-toxicity responses. We investigate the operating characteristics of the proposed approaches by means of a simulation study and present a case study as an illustration.

Title: Convergence of spectral density estimators in the locally stationary framework
URI: http://hdl.handle.net/2003/38260
Date: 2019-10-02
Authors: Kawka, Rafael
Abstract: Locally stationary processes are characterised by spectral densities that are functions
of rescaled time. We study the asymptotic properties of spectral density
estimators in the locally stationary framework. In particular, we show that for a
locally stationary process with time-varying spectral density function f(u, λ), standard
spectral density estimators consistently estimate the time-averaged spectral
density ∫_0^1 f(u, λ) du. This result is complemented by some illustrative examples
and applications including HAC-inference in the multiple linear regression model
and a simple visual tool for the detection of unconditional heteroskedasticity.2019-10-02T14:28:23ZSteuer versus Emissionshandel: Optionen für die Ausgestaltung einer CO2-BepreisungFrondel, Manuelhttp://hdl.handle.net/2003/382592019-10-03T01:40:47Z2019-10-02T14:27:25ZTitle: Steuer versus Emissionshandel: Optionen für die Ausgestaltung einer CO2-Bepreisung
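The "simple visual tool for the detection of unconditional heteroskedasticity" mentioned in the abstract of "Convergence of spectral density estimators in the locally stationary framework" could plausibly be a cumulative-sum-of-squares plot. The sketch below is only a minimal stand-in under that assumption, not the paper's actual statistic.

```python
import numpy as np

def cumulative_squares_curve(x):
    """Normalized cumulative sum of squares: under homoskedasticity this
    curve stays close to the diagonal u -> u, while systematic deviation
    hints at unconditional heteroskedasticity (time-varying variance)."""
    s = np.cumsum(x ** 2)
    return s / s[-1]

rng = np.random.default_rng(0)
n = 10_000
u = np.arange(1, n + 1) / n  # rescaled time

# Homoskedastic noise: the curve hugs the diagonal.
flat = cumulative_squares_curve(rng.standard_normal(n))

# Variance ramping up over rescaled time: the curve bends below the diagonal.
hetero = cumulative_squares_curve(rng.standard_normal(n) * (0.5 + 2.0 * u))

print(np.max(np.abs(flat - u)))    # small deviation
print(np.max(np.abs(hetero - u)))  # large deviation
```

Plotting both curves against the diagonal makes the comparison visual; the printed sup-distances summarize it numerically.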
Authors: Frondel, Manuel
Abstract: In the view of economists, greenhouse gas emissions in Europe can be
abated most cost-efficiently by extending the EU emissions trading system, which
so far covers only the energy sector and industry, to all sectors not yet included
in it. However, extending the trading system requires majorities within the
European Union. As long as this extension does not win the approval of all member
states, a national CO2 price could be considered for these sectors, which could in
principle be implemented in two ways: via emissions trading, established either as
a separate national trading system or through an opt-in of Germany's not yet covered
sectors into the existing EU emissions trading system, or via the introduction of a
national CO2 tax. The weighing of the advantages and disadvantages of the two
options undertaken in this paper, CO2 tax versus emissions trading, shows that a
CO2 tax has serious drawbacks, above all its lack of precision in meeting
prescribed emission targets.2019-10-02T14:27:25ZCognitive reflection and the valuation of energy efficiencyAndor, Mark A.Frondel, ManuelGerster, AndreasSommer, Stephanhttp://hdl.handle.net/2003/382582019-10-03T01:40:49Z2019-10-02T14:24:46ZTitle: Cognitive reflection and the valuation of energy efficiency
Authors: Andor, Mark A.; Frondel, Manuel; Gerster, Andreas; Sommer, Stephan
Abstract: Based on a stated-choice experiment among about 3,600 German household
heads on the purchase of electricity-using durables, this paper explores the impact
of cognitive reflection on consumers’ valuation of energy efficiency, as well as its
interaction with consumers’ response to the EU energy label. Using a standard
cognitive reflection test, our results indicate that consumers with low cognitive
reflection scores value energy efficiency less than those with high scores. Furthermore,
we find that consumers with a low level of cognitive reflection respond more
strongly to grade-like energy efficiency classes than to detailed information on
annual energy use.2019-10-02T14:24:46ZTwo-sample tests for relevant differences in the eigenfunctions of covariance operatorsAue, AlexanderDette, HolgerRice, Gregoryhttp://hdl.handle.net/2003/382562019-10-03T01:40:46Z2019-10-02T13:48:07ZTitle: Two-sample tests for relevant differences in the eigenfunctions of covariance operators
Authors: Aue, Alexander; Dette, Holger; Rice, Gregory
Abstract: This paper deals with two-sample tests for functional time series data, which have become widely
available in conjunction with the advent of modern complex observation systems. Here, particular interest
is in evaluating whether two sets of functional time series observations share the shape of their primary
modes of variation as encoded by the eigenfunctions of the respective covariance operators. To this end,
a novel testing approach is introduced that connects with, and extends, existing literature in two main
ways. First, tests are set up in the relevant testing framework, where interest is not in testing an exact
null hypothesis but rather in detecting deviations deemed sufficiently relevant, with relevance determined
by the practitioner and perhaps guided by domain experts. Second, the proposed test statistics rely on
a self-normalization principle that helps to avoid the notoriously difficult task of estimating the long-run
covariance structure of the underlying functional time series. The main theoretical result of this paper is
the derivation of the large-sample behavior of the proposed test statistics. Empirical evidence, indicating
that the proposed procedures work well in finite samples and compare favorably with competing methods,
is provided through a simulation study, and an application to annual temperature data.2019-10-02T13:48:07ZA generalized method of moments estimator for structural vector autoregressions based on higher momentsKeweloh, Alexander Saschahttp://hdl.handle.net/2003/382242019-09-12T08:00:42Z2019-09-11T15:31:03ZTitle: A generalized method of moments estimator for structural vector autoregressions based on higher moments
Authors: Keweloh, Alexander Sascha
Abstract: I propose a generalized method of moments estimator for structural vector
autoregressions with independent and non-Gaussian shocks. The shocks are
identified by exploiting information contained in higher moments of the
data. Extending the standard identification approach, which relies on the
covariance, to the coskewness and cokurtosis makes it possible to identify and
estimate the simultaneous interaction without any further restrictions. I
analyze the finite sample properties of the estimator and apply it to
illustrate the simultaneous interaction between economic activity, oil and
stock prices.2019-09-11T15:31:03ZEfficient model-based bioequivalence testingMöllenhoff, KathrinLoingeville, FlorenceBertrand, JulieNguyen, Thu ThuySharan, SatishSun, GuoyingGrosser, StellaZhao, LiangFang, LanyanMentré, FranceDette, Holgerhttp://hdl.handle.net/2003/382132019-09-11T01:40:51Z2019-09-10T09:07:58ZTitle: Efficient model-based bioequivalence testing
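The identification idea in the abstract of "A generalized method of moments estimator for structural vector autoregressions based on higher moments", that coskewness conditions pin down the rotation left free by the covariance, can be sketched in a two-dimensional toy model. The angle parametrization, the exponential shocks and the grid search below are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
theta0 = 0.5  # hypothetical true mixing angle (an assumption of this toy)

# Independent, skewed (non-Gaussian) structural shocks with unit variance.
e = rng.exponential(1.0, size=(2, n)) - 1.0

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

# Observed data: Cov(y) = I by construction, so second moments alone
# cannot distinguish theta0 from any other rotation angle.
y = rot(theta0) @ e

def coskewness_objective(t):
    u = rot(t).T @ y  # candidate structural shocks
    # Under independence these coskewness moments are zero:
    m1 = np.mean(u[0] ** 2 * u[1])
    m2 = np.mean(u[0] * u[1] ** 2)
    return m1 ** 2 + m2 ** 2

# Rotations repeat (up to permutation/sign of the shocks) with period pi/2,
# so a grid over [0, pi/2) suffices for this sketch.
grid = np.arange(0.0, np.pi / 2, 0.002)
theta_hat = grid[np.argmin([coskewness_objective(t) for t in grid])]
print(theta_hat)
```

A full GMM estimator would combine covariance, coskewness and (if needed) cokurtosis conditions with a weighting matrix; the grid search here only illustrates that the higher moments carry the identifying information.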
Authors: Möllenhoff, Kathrin; Loingeville, Florence; Bertrand, Julie; Nguyen, Thu Thuy; Sharan, Satish; Sun, Guoying; Grosser, Stella; Zhao, Liang; Fang, Lanyan; Mentré, France; Dette, Holger
Abstract: The classical approach to analyze pharmacokinetic (PK) data in bioequivalence studies
aiming to compare two different formulations is to perform noncompartmental analysis
(NCA) followed by two one-sided tests (TOST). In this regard the PK parameters AUC
and Cmax are obtained for both treatment groups and their geometric mean ratios are
considered. According to current guidelines by the U.S. Food and Drug Administration
and the European Medicines Agency the formulations are deemed to be similar if the
90% confidence interval for these ratios falls between 0.8 and 1.25. As NCA is not a
reliable approach in case of sparse designs, a model-based alternative has already been
proposed for the estimation of AUC and Cmax using non-linear mixed effects models.
Here we propose an alternative to the TOST, called the BOT, and evaluate it through a
simulation study for both NCA and model-based approaches. For products with high
variability in PK parameters, this method appears to yield type I errors closer to the
conventionally accepted significance level of 0.05, suggesting its potential use in situations
where conventional bioequivalence analysis is not applicable.2019-09-10T09:07:58ZA note on Herglotz’s theorem for time series on function spacesvan Delft, AnneEichler, Michaelhttp://hdl.handle.net/2003/382072019-09-07T01:40:48Z2019-09-06T13:29:27ZTitle: A note on Herglotz’s theorem for time series on function spaces
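The conventional acceptance rule described in the abstract of "Efficient model-based bioequivalence testing", a 90% confidence interval for the geometric mean ratio lying within [0.8, 1.25], can be sketched for log-normal PK data. The parallel-group design, the simulated numbers and the hard-coded t-quantile below are assumptions of this toy, not the paper's NCA or model-based workflow.

```python
import numpy as np

def tost_bioequivalence(log_test, log_ref, t_crit, lo=0.8, hi=1.25):
    """Average-bioequivalence sketch (parallel design): conclude equivalence
    if the 90% CI for the geometric mean ratio of the PK parameter
    (e.g. AUC) lies entirely inside [lo, hi]."""
    n1, n2 = len(log_test), len(log_ref)
    diff = log_test.mean() - log_ref.mean()
    s2 = (((n1 - 1) * log_test.var(ddof=1)
           + (n2 - 1) * log_ref.var(ddof=1)) / (n1 + n2 - 2))
    se = np.sqrt(s2 * (1.0 / n1 + 1.0 / n2))
    ci = (np.exp(diff - t_crit * se), np.exp(diff + t_crit * se))
    return ci, ci[0] > lo and ci[1] < hi

rng = np.random.default_rng(1)
# Hypothetical log-AUC values; true geometric mean ratio exp(0.02), about 1.02.
log_ref = rng.normal(4.00, 0.15, size=24)
log_test = rng.normal(4.02, 0.15, size=24)
t_crit = 1.679  # hard-coded t_{0.95, 46}; a 90% CI matches two 5% one-sided tests
ci, equivalent = tost_bioequivalence(log_test, log_ref, t_crit)
print(ci, equivalent)
```

The 90% (not 95%) interval is used because the TOST decision is equivalent to two one-sided 5% tests against the margins.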
Authors: van Delft, Anne; Eichler, Michael
Abstract: In this article, we prove Herglotz’s theorem for Hilbert-valued time series. This requires the notion of an operator-valued measure, which we shall make precise for our setting. Herglotz’s theorem for functional time series allows us to generalize existing results that are central to frequency domain analysis on the function space. In particular, we use this result to prove the existence of a functional Cramér representation of a large class of processes, including those with jumps in the spectral distribution and long-memory processes. We furthermore obtain an optimal finite-dimensional reduction of the time series under weaker assumptions than available in the literature. The results of this paper therefore enable Fourier analysis for processes whose spectral density operator does not necessarily exist.2019-09-06T13:29:27ZTesting for stationarity of functional time series in the frequency domainAue, Alexandervan Delft, Annehttp://hdl.handle.net/2003/382062019-09-07T01:40:47Z2019-09-06T13:27:32ZTitle: Testing for stationarity of functional time series in the frequency domain
Authors: Aue, Alexander; van Delft, Anne
Abstract: Interest in functional time series has spiked in the recent past with papers covering both methodology and applications being published at a much increased pace. This article contributes to the research in this area by proposing a new stationarity test for functional time series based on frequency domain methods. The proposed test statistic is based on joint dimension reduction via functional principal components analysis across the spectral density operators at all Fourier frequencies, explicitly allowing for frequency-dependent levels of truncation to adapt to the dynamics of the underlying functional time series. The properties of the test are derived both under the null hypothesis of stationary functional time series and under the smooth alternative of locally stationary functional time series. The methodology is theoretically justified through asymptotic results. Evidence from simulation studies and an application to annual temperature curves suggests that the test works well in finite samples.2019-09-06T13:27:32ZA note on quadratic forms of stationary functional time series under mild conditionsvan Delft, Annehttp://hdl.handle.net/2003/382052019-09-07T01:40:48Z2019-09-06T13:25:34ZTitle: A note on quadratic forms of stationary functional time series under mild conditions
Authors: van Delft, Anne
Abstract: We study the distributional properties of a quadratic form of a stationary functional time series under mild moment conditions. As an important application, we obtain consistency rates of estimators of spectral density operators and prove joint weak convergence to a vector of complex Gaussian random operators. Weak convergence is established based on an approximation of the form via transforms of Hilbert-valued martingale difference sequences. As a side-result, the distributional properties of the long-run covariance operator are established.2019-09-06T13:25:34ZSampling distributions of optimal portfolio weights and characteristics in low and large dimensionsBodnar, TarasDette, HolgerParolya, NestorThorsén, Erikhttp://hdl.handle.net/2003/382042019-09-07T01:40:46Z2019-09-06T13:23:21ZTitle: Sampling distributions of optimal portfolio weights and characteristics in low and large dimensions
Authors: Bodnar, Taras; Dette, Holger; Parolya, Nestor; Thorsén, Erik
Abstract: Optimal portfolio selection problems are determined by the (unknown) parameters of
the data generating process. If an investor wants to realise the position suggested by
an optimal portfolio, he/she needs to estimate the unknown parameters and to account
for the parameter uncertainty in the decision process. Most often, the parameters of
interest are the population mean vector and the population covariance matrix of the
asset return distribution. In this paper we characterise the exact sampling distribution
of the estimated optimal portfolio weights and their characteristics by deriving a
stochastic representation of this distribution. This approach possesses several
advantages: (i) it determines the sampling distribution of the estimated optimal
portfolio weights by expressions which can be used to draw samples from this
distribution efficiently; (ii) the derived stochastic representation provides an easy
way to obtain the asymptotic approximation of the sampling distribution. The latter
property is used to show that the high-dimensional asymptotic distribution of optimal
portfolio weights is multivariate normal and to determine its parameters. Moreover,
a consistent estimator of optimal portfolio weights and their characteristics is
derived under high-dimensional settings. Via an extensive simulation study, we
investigate the finite-sample performance of the derived asymptotic approximation and
study its robustness to the violation of the model assumptions used in the derivation of
the theoretical results.2019-09-06T13:23:21ZIdentifying shifts between two regression curvesDette, HolgerSankar Dhar, SubhraWu, Weichihttp://hdl.handle.net/2003/381962019-08-31T01:40:50Z2019-08-30T14:17:53ZTitle: Identifying shifts between two regression curves
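As a toy companion to the abstract of "Sampling distributions of optimal portfolio weights and characteristics in low and large dimensions", the sampling variability of estimated optimal portfolio weights can be visualized by repeatedly re-estimating global minimum-variance weights on simulated returns. The equicorrelated covariance and the sample sizes are illustrative assumptions, and this Monte Carlo stand-in is not the paper's stochastic representation.

```python
import numpy as np

def gmv_weights(returns):
    """Estimated global minimum-variance portfolio weights
    w = S^{-1} 1 / (1' S^{-1} 1), with S the sample covariance matrix."""
    S = np.cov(returns, rowvar=False)
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()

# Toy sampling-distribution experiment: re-estimate the weights on repeated
# samples to see their variability (parameter uncertainty).
rng = np.random.default_rng(7)
p, n = 5, 250
mu = np.zeros(p)
Sigma = 0.3 * np.ones((p, p)) + 0.7 * np.eye(p)  # equicorrelated assets

draws = np.array([
    gmv_weights(rng.multivariate_normal(mu, Sigma, size=n))
    for _ in range(500)
])
print(draws.mean(axis=0))  # centered near the equal-weight solution 1/p
print(draws.std(axis=0))   # sampling noise in the estimated weights
```

By symmetry the true GMV weights here are all 1/p, so the printed standard deviations isolate the estimation uncertainty an investor would face.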
Authors: Dette, Holger; Sankar Dhar, Subhra; Wu, Weichi
Abstract: This article studies the question whether two convex (concave) regression functions
modelling the relation between a response and a covariate in two samples differ by a shift
in the horizontal and/or vertical axis. We consider a nonparametric situation assuming
only smoothness of the regression functions. A graphical tool based on the derivatives
of the regression functions and their inverses is proposed to answer this question and
studied in several examples. We also formalize this question as a hypothesis testing
problem and develop a corresponding statistical test. The asymptotic properties of the
test statistic are investigated under the null hypothesis and local alternatives. In contrast
to most of the literature on comparing shape-invariant models, which requires
independent data, the procedure is applicable to dependent and non-stationary data.
We also illustrate the finite sample properties of the new test by means of a small
simulation study and a real data example.2019-08-30T14:17:53ZPrediction in regression models with continuous observationsDette, HolgerPepelyshev, AndreyZhigljavsky, Anatolyhttp://hdl.handle.net/2003/381952019-08-31T01:40:46Z2019-08-30T14:16:22ZTitle: Prediction in regression models with continuous observations
Authors: Dette, Holger; Pepelyshev, Andrey; Zhigljavsky, Anatoly
Abstract: We consider the problem of predicting values of a random process or field satisfying a linear model y(x) = θ⊤f(x) + ε(x), where the errors ε(x) are correlated. This is a common problem in kriging, where the case of discrete observations is standard. By focussing on the case of continuous observations, we derive expressions for the best linear unbiased predictors and their mean squared error. Our results are also applicable in the case where the derivatives of the process y are available and either a response or one of its derivatives needs to be predicted. The theoretical results are illustrated by several examples, in particular for the popular Matérn 3/2 kernel.2019-08-30T14:16:22ZVolatility forecasting accuracy for BitcoinKöchling, GerritSchmidtke, PhilippPosch, Peter N.http://hdl.handle.net/2003/381652019-08-06T01:40:49Z2019-08-05T12:52:14ZTitle: Volatility forecasting accuracy for Bitcoin
Authors: Köchling, Gerrit; Schmidtke, Philipp; Posch, Peter N.
Abstract: We analyse the quality of Bitcoin volatility forecasting of GARCH-type
models applying the commonly used volatility proxy based on squared daily
returns as well as a jump-robust proxy based on intra-day returns and vary
the degrees of asymmetry in robust loss functions. We construct model
confidence sets (MCS) which contain superior models with a high probability
and find them to be systematically smaller for asymmetric loss functions
and the jump-robust proxy. Our findings suggest a cautious use of GARCH
models in forecasting Bitcoin's volatility.2019-08-05T12:52:14ZOptimal designs for estimating individual coefficients in polynomial regression with no interceptDette, HolgerMelas, Viatcheslav B.Shpilev, Petrhttp://hdl.handle.net/2003/381372019-07-13T01:40:48Z2019-07-12T10:43:47ZTitle: Optimal designs for estimating individual coefficients in polynomial regression with no intercept
Authors: Dette, Holger; Melas, Viatcheslav B.; Shpilev, Petr
Abstract: In a seminal paper Studden (1968) characterized c-optimal designs in regression
models, where the regression functions form a Chebyshev system. He used these
results to determine the optimal design for estimating the individual coefficients in a
polynomial regression model on the interval [-1, 1] explicitly. In this note we identify
the optimal design for estimating the individual coefficients in a polynomial regression
model with no intercept (here the regression functions do not form a Chebyshev
system).2019-07-12T10:43:47ZFinancial risk measures for a network of individual agents holding portfolios of lighttailed objectsKlüppelberg, ClaudiaSeifert, Miriam Isabelhttp://hdl.handle.net/2003/380882019-06-14T13:10:06Z2019-06-07T13:25:12ZTitle: Financial risk measures for a network of individual agents holding portfolios of lighttailed objects
Authors: Klüppelberg, Claudia; Seifert, Miriam Isabel
Abstract: We investigate a financial network of agents holding portfolios of independent
light-tailed risky objects whose losses are asymptotically exponentially
distributed with distinct tail parameters. We show that the
asymptotic distributions of portfolio losses belong to the class of functional
exponential mixtures which we introduce in this paper. We also
provide statements for Value-at-Risk and Expected Shortfall risk measures
as well as for their conditional counterparts. Compared to heavy-tailed
settings, we establish important qualitative differences in the asymptotic
behavior of portfolio risks under a light-tail assumption, which have
to be accounted for in practical risk management.2019-06-07T13:25:12ZA new approach for open-end sequential change point monitoringGösmann, JosuaKley, TobiasDette, Holgerhttp://hdl.handle.net/2003/380812019-06-07T01:40:47Z2019-06-06T11:30:05ZTitle: A new approach for open-end sequential change point monitoring
Authors: Gösmann, Josua; Kley, Tobias; Dette, Holger
Abstract: We propose a new sequential monitoring scheme for changes in the parameters of
a multivariate time series. In contrast to procedures proposed in the literature which
compare an estimator from the training sample with an estimator calculated from the
remaining data, we suggest dividing the sample at each time point after the training
sample. Estimators from the samples before and after all separation points are then
continuously compared by calculating a maximum of norms of their differences. For
open-end scenarios our approach yields an asymptotic level-α procedure, which is consistent
under the alternative of a change in the parameter.2019-06-06T11:30:05ZWirtschaftliche Aktivität und Emissionen: Die UmweltkuznetskurveWagner, MartinKnorre, Fabianhttp://hdl.handle.net/2003/380762019-05-30T01:40:49Z2019-05-29T14:22:06ZTitle: Wirtschaftliche Aktivität und Emissionen: Die Umweltkuznetskurve
Authors: Wagner, Martin; Knorre, Fabian
Abstract: Since the beginning of the industrial revolution, the mean global temperature
has risen by about one degree Celsius. There is no doubt that this increase is driven
to a substantial extent by human activities, namely by emissions of carbon dioxide and
other greenhouse gases. What do the relationships between economic activity and
emissions look like? Do emissions necessarily rise with increasing economic activity?
In this chapter we examine some fundamental problems that arise in the statistical,
or more precisely econometric, analysis of these relationships. These problems are
symptomatic of relationships studied in economics and one reason why econometrics has
established itself as a discipline in its own right.2019-05-29T14:22:06ZLimit theorems for locally stationary processesKawka, Rafaelhttp://hdl.handle.net/2003/380462019-05-11T01:40:47Z2019-05-10T14:01:56ZTitle: Limit theorems for locally stationary processes
Authors: Kawka, Rafael
Abstract: We present limit theorems for locally stationary processes that have a one sided
time-varying moving average representation. In particular, we prove a central limit
theorem (CLT), a weak and a strong law of large numbers (WLLN, SLLN) and a
law of the iterated logarithm (LIL) under mild assumptions that are closely related
to those originally imposed by Dahlhaus and Polonik (2006).2019-05-10T14:01:56ZSome explicit solutions of c-optimal design problems for polynomial regressionDette, HolgerMelas, Viatcheslav B.Shpilev, Petrhttp://hdl.handle.net/2003/380392019-05-04T01:40:45Z2019-05-03T11:27:26ZTitle: Some explicit solutions of c-optimal design problems for polynomial regression
Authors: Dette, Holger; Melas, Viatcheslav B.; Shpilev, Petr
Abstract: In this paper we consider the optimal design problem for extrapolation and estimation
of the slope at a given point, say z, in a polynomial regression with no intercept.
We provide explicit solutions of these problems in many cases and characterize those
values of z, where this is not possible.2019-05-03T11:27:26ZOn scale estimation under shifts in the meanAxt, IevaFried, Rolandhttp://hdl.handle.net/2003/380142019-04-13T01:40:48Z2019-04-12T11:16:18ZTitle: On scale estimation under shifts in the mean
Authors: Axt, Ieva; Fried, Roland
Abstract: In many situations it is crucial to estimate the variance properly. Ordinary variance estimators
perform poorly in the presence of shifts in the mean. We investigate an approach
based on non-overlapping blocks, which yields good results in this change-point scenario.
We show the strong consistency and the asymptotic normality of such blocks-estimators
of the variance under rather general conditions. For estimation of the standard deviation
a blocks-estimator based on average standard deviations turns out to be preferable over
the square root of the average variances. We provide recommendations on the appropriate
choice of the block size and compare this blocks-approach with difference-based
estimators. If level shifts occur rather frequently even better results can be obtained by
adaptive trimming of the blocks under the assumption of normality.2019-04-12T11:16:18ZOptimal designs for model averaging in non-nested modelsAlhorn, KiraDette, HolgerSchorning, Kirstenhttp://hdl.handle.net/2003/379792019-04-04T01:40:58Z2019-04-03T15:30:14ZTitle: Optimal designs for model averaging in non-nested models
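The blocks idea summarized in the abstract of "On scale estimation under shifts in the mean" can be sketched directly: average per-block variances so that a level shift contaminates only the block containing the change point. The block length of 50 and the single-shift scenario below are arbitrary illustrative choices, not the paper's recommendations.

```python
import numpy as np

def blocks_variance(x, block_len):
    """Average of sample variances over non-overlapping blocks: a shift in
    the mean only contaminates the (few) blocks containing a change point."""
    nb = len(x) // block_len
    blocks = x[: nb * block_len].reshape(nb, block_len)
    return blocks.var(axis=1, ddof=1).mean()

rng = np.random.default_rng(3)
n = 10_000
noise = rng.standard_normal(n)  # true variance 1
# One large level shift, deliberately placed inside a block (not on a boundary).
shift = np.where(np.arange(n) < n // 2 + 25, 0.0, 5.0)
x = noise + shift

print(np.var(x, ddof=1))        # ordinary estimator, badly inflated by the shift
print(blocks_variance(x, 50))   # close to the true variance 1
```

Only the single block straddling the change point is biased, and its contribution is diluted by averaging over all blocks, which is why the blocks estimator stays near 1.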
Authors: Alhorn, Kira; Dette, Holger; Schorning, Kirsten
Abstract: In this paper we construct optimal designs for frequentist model averaging estimation.
We derive the asymptotic distribution of the model averaging estimate with fixed weights
in the case where the competing models are non-nested and none of these models is correctly
specified. A Bayesian optimal design minimizes an expectation of the asymptotic
mean squared error of the model averaging estimate calculated with respect to a suitable
prior distribution. We demonstrate that Bayesian optimal designs can improve the
accuracy of model averaging substantially. Moreover, the derived designs also improve
the accuracy of estimation in a model chosen by model selection and of model averaging
estimates with random weights.2019-04-03T15:30:14ZWTA-WTP disparity: The role of perceived realism of the valuation settingFrondel, ManuelSommer, StephanTomberg, Lukashttp://hdl.handle.net/2003/379442019-03-19T02:40:48Z2019-03-18T11:07:26ZTitle: WTA-WTP disparity: The role of perceived realism of the valuation setting
Authors: Frondel, Manuel; Sommer, Stephan; Tomberg, Lukas
Abstract: Based on a survey among more than 5,000 German households and a single-binary
choice experiment in which we randomly split the respondents into two groups, this
paper elicits both households’ willingness to pay (WTP) for power supply security
and their willingness to accept (WTA) compensations for a reduced security level.
In accord with numerous empirical studies, we find that the mean WTA value substantially
exceeds the mean WTP bid, in our empirical example by a factor of 3.56.
Yet, the WTA-WTP ratio decreases to 2.35 among respondents who believe that the
hypothetical valuation setting is likely to become true. Conversely, the WTA-WTP
ratio increases to 3.81 among respondents who deem the setting unlikely. Given this
discrepancy, we conclude that to diminish the WTA-WTP disparity resulting from
stated-preference surveys at least to some extent, inquiring about respondents’ perception
of the realism of the valuation setting is an essential element of any survey
design.2019-03-18T11:07:26ZEmployee representation and innovation – disentangling the effect of legal and voluntary representation institutions in GermanyKraft, KorneliusLammers, Alexanderhttp://hdl.handle.net/2003/379162019-02-15T02:40:52Z2019-02-14T15:33:46ZTitle: Employee representation and innovation – disentangling the effect of legal and voluntary representation institutions in Germany
Authors: Kraft, Kornelius; Lammers, Alexander
Abstract: This paper studies the effect of employee representation bodies provided by management on product and process innovations. In contrast to statutory forms of co-determination such as works councils, participative practices initiated by management are not equipped with any legally granted rights at all. Such alternative forms of employee representation are far less frequently and thoroughly analyzed than works councils. We compare the effects of these co-determination institutions established voluntarily with those initiated on a legal basis on different kinds of innovation measures. We differentiate between process and product (incremental and radical) innovations. To tackle endogeneity, the estimations are based on recursive bivariate and multivariate probit models. Results show that employee representation provided voluntarily by management supports incremental as well as radical product and process innovations. The effect is much more pronounced when endogeneity is taken into account. Works councils, however, only exhibit a positive effect on incremental innovations. Moreover, the results point to a substitutive relationship between both types of employee representation.2019-02-14T15:33:46ZEquivalence of regression curves sharing common parametersMöllenhoff, KathrinBretz, FrankDette, Holgerhttp://hdl.handle.net/2003/379152019-02-15T02:40:52Z2019-02-14T15:31:27ZTitle: Equivalence of regression curves sharing common parameters
Authors: Möllenhoff, Kathrin; Bretz, Frank; Dette, Holger
Abstract: In clinical trials the comparison of two different populations is a frequently addressed
problem. Non-linear (parametric) regression models are commonly used to
describe the relationship between covariates, such as the dose, and a response variable in
the two groups. In some situations it is reasonable to assume some model parameters
to be the same, for instance the placebo effect or the maximum treatment effect. In
this paper we develop a (parametric) bootstrap test to establish the similarity of two
regression curves sharing some common parameters. We show by theoretical arguments
and by means of a simulation study that the new test controls its level and
achieves a reasonable power. Moreover, it is demonstrated that under the assumption
of common parameters a considerably more powerful test can be constructed compared
to the test which does not use this assumption. Finally, we illustrate potential
applications of the new methodology by a clinical trial example.2019-02-14T15:31:27ZThe empirical process of residuals from an inverse regressionKutta, TimBissantz, NicolaiChown, JustinDette, Holgerhttp://hdl.handle.net/2003/379042019-02-07T02:40:53Z2019-02-06T12:57:27ZTitle: The empirical process of residuals from an inverse regression
Authors: Kutta, Tim; Bissantz, Nicolai; Chown, Justin; Dette, Holger
Abstract: In this paper we investigate an indirect regression model characterized by the
Radon transformation. This model is useful for recovery of medical images obtained by computed tomography scans. The indirect regression function is estimated using a series estimator
motivated by a spectral cut-off technique. Further, we investigate the empirical process of
residuals from this regression, and show that it satisfies a functional central limit theorem.2019-02-06T12:57:27ZGeneralized sign tests based on sign depthLeckey, KevinMalcherczyk, DennisMüller, Christine H.http://hdl.handle.net/2003/378392018-12-18T02:40:53Z2018-12-17T16:56:45ZTitle: Generalized sign tests based on sign depth
Authors: Leckey, Kevin; Malcherczyk, Dennis; Müller, Christine H.
Abstract: We introduce generalized sign tests based on K-sign depth, shortly denoted
by K-depth. These so-called K-depth tests are motivated by simplicial regression
depth. Since they depend only on the signs of the residuals, these test statistics
are easy to comprehend and outlier robust. We show that the K-depth test with
K = 2 is equivalent to the classical sign test, so that K-depth tests with K > 2
are generalizations of it. Owing to this equivalence, the 2-depth test shares the
drawbacks of the classical sign test. However, the generalized sign tests with
K > 2 are much more powerful. We show this by deriving their behavior at
observations with few sign changes. Thereby we also prove an upper bound for the
K-depth which is attained by observations with alternating signs of residuals.
Furthermore, we prove the consistency of the K-depth. Finally, we demonstrate
the good power of the K-depth tests for relevance
testing, quadratic regression, and tests for explosive AR(2) and nonlinear AR(1)
regression.2018-12-17T16:56:45ZOptimal designs for series estimation in nonparametric regression with correlated dataDette, HolgerSchorning, KirstenKonstantinou, Mariahttp://hdl.handle.net/2003/378362018-12-15T02:40:55Z2018-12-14T14:05:07ZTitle: Optimal designs for series estimation in nonparametric regression with correlated data
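A brute-force version of the K-sign depth described in the abstract of "Generalized sign tests based on sign depth" can make the definition concrete: the fraction of increasing K-tuples of residuals whose signs alternate. The O(n^K) enumeration below is a didactic sketch for tiny samples, not the authors' implementation.

```python
import numpy as np
from itertools import combinations

def k_sign_depth(residuals, K=3):
    """K-sign depth: the fraction of K-tuples i1 < ... < iK of residuals
    whose signs alternate. Fits that leave long runs of equal signs get a
    low depth (naive O(n^K) enumeration, only for small samples)."""
    s = np.sign(residuals)
    tuples = list(combinations(s, K))
    alt = sum(all(t[j] != t[j + 1] for j in range(K - 1)) for t in tuples)
    return alt / len(tuples)

alternating = np.array([(-1.0) ** i for i in range(20)])  # alternating signs
one_sided = np.ones(20)                                   # all residuals positive
print(k_sign_depth(alternating))  # high depth, consistent with the upper bound
print(k_sign_depth(one_sided))    # zero depth: no alternating triple exists
```

For i.i.d. random signs each of the K-1 adjacent sign changes occurs with probability 1/2, so the expected 3-depth is 1/4; the alternating sequence exceeds that, while a one-sided residual pattern collapses to zero.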
Authors: Dette, Holger; Schorning, Kirsten; Konstantinou, Maria
Abstract: In this paper we investigate the problem of designing experiments for series estimators in nonparametric regression models with correlated observations. We use projection based estimators to derive an explicit solution of the best linear oracle estimator in the continuous time model for all Markovian-type error processes. These solutions are then used to construct estimators, which can be calculated from the available data along with their corresponding optimal design points. Our results are illustrated by means of a simulation study, which demonstrates that the new series estimator has a better performance than the commonly used techniques based on the optimal linear unbiased estimators. Moreover, we show that the performance of the estimators proposed in this paper can be further improved by choosing the design points appropriately.2018-12-14T14:05:07ZGoodness-of-fit testing the error distribution in multivariate indirect regressionChown, JustinBissantz, NicolaiDette, Holgerhttp://hdl.handle.net/2003/378352018-12-15T02:40:55Z2018-12-14T14:03:05ZTitle: Goodness-of-fit testing the error distribution in multivariate indirect regression
Authors: Chown, Justin; Bissantz, Nicolai; Dette, Holger
Abstract: We propose a goodness-of-fit test for the distribution of errors from a multivariate
indirect regression model. The test statistic is based on the Khmaladze transformation of the
empirical process of standardized residuals. This goodness-of-fit test is consistent at the root-n
rate of convergence, and the test can maintain power against local alternatives converging to
the null at a root-n rate.2018-12-14T14:03:05ZA similarity measure for second order properties of non-stationary functional time series with applications to clustering and testingvan Delft, AnneDette, Holgerhttp://hdl.handle.net/2003/378282018-12-05T02:40:54Z2018-12-04T08:35:05ZTitle: A similarity measure for second order properties of non-stationary functional time series with applications to clustering and testing
Authors: van Delft, Anne; Dette, Holger
Abstract: Due to the surge of data storage techniques, the need for the development of appropriate techniques to identify patterns and to extract knowledge from the resulting enormous data sets, which can be viewed as collections of dependent functional data, is of increasing interest in many scientific areas. We develop a similarity measure for spectral density operators of a collection of functional time series, which is based on the aggregation of Hilbert-Schmidt differences of the individual time-varying spectral density operators. Under fairly general conditions, the asymptotic properties of the corresponding estimator are derived and asymptotic normality is established. The introduced statistic lends itself naturally to quantifying (dis)similarity between functional time series, which we subsequently exploit in order to build a spectral clustering algorithm. Our algorithm is the first of its kind in the analysis of non-stationary (functional) time series and enables the discovery of particular patterns by grouping together ‘similar’ series into clusters, thereby reducing the complexity of the analysis considerably. The algorithm is simple to implement and computationally feasible. As a further application we provide a simple test for the hypothesis that the second order properties of two non-stationary functional time series coincide.2018-12-04T08:35:05ZAliasing effects for random fields over spheres of arbitrary dimensionDurastanti, ClaudioPatschkowski, Timhttp://hdl.handle.net/2003/378272018-12-05T02:40:55Z2018-12-04T08:32:55ZTitle: Aliasing effects for random fields over spheres of arbitrary dimension
Authors: Durastanti, Claudio; Patschkowski, Tim
Abstract: In this paper, aliasing effects are investigated for random fields defined on the d-dimensional
sphere S^d and reconstructed from discrete samples. First, we introduce the concept of an aliasing function
on S^d. The aliasing function allows one to identify explicitly the aliases of a given harmonic coefficient in
the Fourier decomposition. Then, we exploit this tool to establish the aliases of the harmonic coefficients approximated by means of the quadrature procedure named spherical uniform sampling. Subsequently, we
study the consequences of the aliasing errors in the approximation of the angular power spectrum of an isotropic random field, the harmonic decomposition of its covariance function. Finally, we show that band-limited random fields are alias-free under the assumption of a sufficiently large number of nodes in the quadrature rule.2018-12-04T08:32:55ZIncreased market transparency in Germany’s gasoline market: The death of rockets and feathers?Frondel, ManuelHorvath, MarcoVance, ColinKihm, Alexanderhttp://hdl.handle.net/2003/378262018-12-05T02:40:55Z2018-12-04T08:30:36ZTitle: Increased market transparency in Germany’s gasoline market: The death of rockets and feathers?
Authors: Frondel, Manuel; Horvath, Marco; Vance, Colin; Kihm, Alexander
Abstract: Drawing on a consumer search model and a unique panel data set of daily
fuel prices covering over 5,000 fuel stations in Germany, this paper documents a
change in the price setting behavior of retail gas stations following the introduction of
a legally mandated on-line price portal. Prior to the introduction of the portal in 2013,
positive asymmetry is found on the basis of error correction models, with prices following
the “rockets and feathers” pattern documented in many commodity markets,
particularly in retail markets for fuels. In the aftermath of the portal’s introduction, by
contrast, negative asymmetry is observed: fuel price decreases in response to refinery
price decreases are stronger than fuel price increases due to refinery price increases.
This reversal in price pass-through, which is found among both branded and unbranded
stations, suggests welfare gains for consumers from increased market transparency.2018-12-04T08:30:36ZStatistical analysis of the lifetime of diamond impregnated tools for core drilling of concreteMalevich, NadjaMüller, Christine H.Kansteiner, MichaelBiermann, DirkFerreira, ManuelTillmann, Wolfganghttp://hdl.handle.net/2003/378142018-11-28T02:40:58Z2018-11-27T11:46:56ZTitle: Statistical analysis of the lifetime of diamond impregnated tools for core drilling of concrete
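The "rockets and feathers" asymmetry described in the gasoline-market abstract above can be illustrated with a toy simulation. All constants and the crude ratio-based pass-through measure below are illustrative assumptions, not the error correction models used in the paper:

```python
# Toy illustration of asymmetric price pass-through ("rockets and feathers").
import random

random.seed(0)

cost, price = 50.0, 55.0
costs, prices = [], []
for _ in range(5000):
    shock = random.gauss(0, 1)          # refinery/cost innovation
    cost += shock
    # asymmetric within-period adjustment: prices "rocket" after cost
    # increases (90% pass-through) but "feather" after decreases (50%)
    price += 0.9 * shock if shock > 0 else 0.5 * shock
    costs.append(cost)
    prices.append(price)

dc = [costs[t + 1] - costs[t] for t in range(len(costs) - 1)]
dp = [prices[t + 1] - prices[t] for t in range(len(prices) - 1)]

# average pass-through ratio, separately for cost increases and decreases
pass_up = sum(p / c for p, c in zip(dp, dc) if c > 0) / sum(1 for c in dc if c > 0)
pass_down = sum(p / c for p, c in zip(dp, dc) if c < 0) / sum(1 for c in dc if c < 0)
print(pass_up, pass_down)  # positive asymmetry: pass_up > pass_down
```

Splitting the cost changes into positive and negative parts before measuring the retail response is the same decomposition idea that asymmetric error correction models formalize.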
Authors: Malevich, Nadja; Müller, Christine H.; Kansteiner, Michael; Biermann, Dirk; Ferreira, Manuel; Tillmann, Wolfgang
Abstract: The lifetime of diamond impregnated tools for core drilling of concrete
is studied via the lifetimes of the single diamonds on the tool. Thereby, the number
of visible and active diamonds on the tool surface is determined by microscopical
inspections of the tool at given points in time. This leads to interval-censored lifetime
data if only the diamonds visible at the beginning are considered. If also the
lifetimes of diamonds appearing during the drilling process are included then the
lifetimes are doubly interval-censored. A statistical method is presented to analyse
the interval-censored data as well as the doubly interval-censored data. The method
is applied to three series of experiments which differ in the size of the diamonds
and the type of concrete. It turns out that the lifetimes of small diamonds used for
drilling into conventional concrete are much shorter than the lifetimes when using
large diamonds or high strength concrete.2018-11-27T11:46:56ZDetection of anomalous sequences in crack data of a bridge monitoringAbbas, SermadFried, RolandHeinrich, JensHorn, MelanieJakubzik, MirkoKohlenbach, JohannaMaurer, ReinhardMichels, AnneMüller, Christine H.http://hdl.handle.net/2003/378132018-11-28T02:40:57Z2018-11-27T11:45:06ZTitle: Detection of anomalous sequences in crack data of a bridge monitoring
Authors: Abbas, Sermad; Fried, Roland; Heinrich, Jens; Horn, Melanie; Jakubzik, Mirko; Kohlenbach, Johanna; Maurer, Reinhard; Michels, Anne; Müller, Christine H.
Abstract: For estimating the remaining lifetime of old prestressed concrete bridges,
a monitoring of crack widths can be used. However, the time series of crack widths
show a strong variation mainly caused by temperature and traffic. Additionally, sequences
with extreme volatility appear where the cause is unknown. They are called
anomalous sequences in the following. We present and compare four methods which
aim to detect these anomalous sequences in the time series. Volatilities caused by
traffic should not be detected.2018-11-27T11:45:06ZMultiscale change point detection for dependent dataDette, HolgerSchüler, TheresaVetter, Mathiashttp://hdl.handle.net/2003/378062018-11-17T02:41:00Z2018-11-16T13:21:21ZTitle: Multiscale change point detection for dependent data
Authors: Dette, Holger; Schüler, Theresa; Vetter, Mathias
Abstract: In this paper we study the theoretical properties of the simultaneous multiscale change
point estimator (SMUCE) proposed by Frick et al. (2014) in regression models with dependent
error processes. Empirical studies show that in this case the change point estimate
is inconsistent, but it is not known if alternatives suggested in the literature for correlated
data are consistent. We propose a modification of SMUCE scaling the basic statistic by
the long run variance of the error process, which is estimated by a difference-type variance
estimator calculated from local means from different blocks. For this modification we prove
model consistency for physically dependent error processes and illustrate the finite sample
performance by means of a simulation study.2018-11-16T13:21:21ZPanel cointegrating polynomial regressions: Group-mean fully modified OLS estimation and inferenceWagner, MartinReichold, Karstenhttp://hdl.handle.net/2003/376692018-11-14T02:41:01Z2018-11-13T12:32:37ZTitle: Panel cointegrating polynomial regressions: Group-mean fully modified OLS estimation and inference
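A difference-type long-run variance estimator based on local block means, as mentioned in the SMUCE abstract above, can be sketched as follows. The AR(1) error model, block length, and all constants are illustrative assumptions, not the paper's exact estimator:

```python
# Sketch: estimating the long-run variance from differences of block means.
import random

random.seed(1)
phi, sig_u, n, m = 0.5, 1.0, 100000, 500   # AR(1) coeff., innovation sd, sample size, block length

e, errs = 0.0, []
for _ in range(n):
    e = phi * e + random.gauss(0, sig_u)   # dependent errors: e_t = phi*e_{t-1} + u_t
    errs.append(e)

# local means of non-overlapping blocks of length m
means = [sum(errs[i * m:(i + 1) * m]) / m for i in range(n // m)]

# for weakly dependent errors, E[(mean_{i+1} - mean_i)^2] is approximately
# 2 * LRV / m, so rescaling half the mean squared difference by m estimates LRV
lrv = m * sum((means[i + 1] - means[i]) ** 2
              for i in range(len(means) - 1)) / (2 * (len(means) - 1))

true_lrv = sig_u ** 2 / (1 - phi) ** 2     # long-run variance of this AR(1): 4.0
print(lrv, true_lrv)
```

Differencing adjacent block means removes a slowly varying signal, which is what makes this kind of estimator attractive in change point settings.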
Authors: Wagner, Martin; Reichold, Karsten
Abstract: This paper considers group-mean fully modified OLS estimation for a panel of cointegrating
polynomial regressions, i. e., regressions that include an integrated process and its powers as
explanatory variables. The stationary errors are allowed to be serially correlated, the regressor
to be endogenous and, as usual in the nonstationary panel literature, we include individual
specific fixed effects. We consider a fixed cross-section dimension, asymptotics in the time
dimension only and show that the estimator allows for standard asymptotic inference in this
setting. In both the simulations as well as an illustrative application estimating environmental
Kuznets curves for carbon dioxide emissions we compare our group-mean estimator with the
pooled fully modified OLS estimator of de Jong and Wagner (2018).2018-11-13T12:32:37ZConsistency for the negative binomial regression with fixed covariateWeißbach, RafaelRadloff, Lucashttp://hdl.handle.net/2003/373522018-11-01T02:40:56Z2018-10-31T13:29:33ZTitle: Consistency for the negative binomial regression with fixed covariate
Authors: Weißbach, Rafael; Radloff, Lucas
Abstract: We model an overdispersed count as a dependent measurement, by means of
the Negative Binomial distribution. We consider quantitative regressors that
are fixed by design. The expectation of the dependent variable is assumed to
be a known function of a linear combination involving regressors and their coefficients. In the NB1-parametrization of the negative binomial distribution,
the variance is a linear function of the expectation, inflated by the dispersion
parameter, so the model is not a generalized linear model. We apply a general result of
Bradley and Gart (1962) to derive weak consistency and asymptotic normality of the maximum likelihood estimator for all parameters. To this end, we
show (i) how to bound the logarithmic density by a function that is linear
in the outcome of the dependent variable, independently of the parameter.
Furthermore, (ii) the positive definiteness of the matrix related to the Fisher
information is shown with the Cauchy-Schwarz inequality.2018-10-31T13:29:33ZUsing the extremal index for value-at-risk backtestingBücher, AxelPosch, Peter N.Schmidtke, Philipphttp://hdl.handle.net/2003/372012018-10-20T01:40:54Z2018-10-19T14:45:07ZTitle: Using the extremal index for value-at-risk backtesting
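The NB1 variance relation Var(Y) = (1 + delta) * E(Y) stated in the negative binomial abstract above can be checked numerically via a Poisson-Gamma mixture with constant scale. The sampler and all constants are illustrative assumptions, not the paper's estimator:

```python
# Sketch: NB1 negative binomial as a Gamma-mixed Poisson with constant scale.
import math
import random
import statistics

random.seed(2)
mu, delta = 5.0, 2.0                 # mean and dispersion parameter
shape, n = mu / delta, 100000        # Gamma shape chosen so that E(Y) = mu

def poisson(lam):
    # Knuth's multiplicative method, adequate for moderate lam
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Y | lambda ~ Poisson(lambda), lambda ~ Gamma(shape, scale=delta)
ys = [poisson(random.gammavariate(shape, delta)) for _ in range(n)]
m_hat, v_hat = statistics.fmean(ys), statistics.pvariance(ys)
print(m_hat, v_hat)   # mean near mu = 5, variance near (1 + delta) * mu = 15
```

Holding the Gamma scale fixed at delta while the shape varies with mu is exactly what makes the variance linear (rather than quadratic) in the mean.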
Authors: Bücher, Axel; Posch, Peter N.; Schmidtke, Philipp
Abstract: We introduce a set of new Value-at-Risk independence backtests by establishing a
connection between the independence property of Value-at-Risk forecasts and the
extremal index, a general measure of extremal clustering of stationary sequences.
We introduce a sequence of relative excess returns whose extremal index has to
be estimated. We compare our backtest to both popular and recent competitors
using Monte-Carlo simulations and find considerable power in many scenarios.
In an applied section we perform realistic out-of-sample forecasts with common
forecasting models and discuss advantages and pitfalls of our approach.2018-10-19T14:45:07ZSwitching to green electricity: Spillover effects on household consumptionSommer, Stephanhttp://hdl.handle.net/2003/372002018-10-20T01:41:03Z2018-10-19T14:43:05ZTitle: Switching to green electricity: Spillover effects on household consumption
Authors: Sommer, Stephan
Abstract: One way to reduce emissions from the consumption of electricity is switching to
green electricity suppliers. This paper identifies the determinants of adopting green electricity
and the effect on electricity consumption, using panel data on more than 9,000
households. To control for potential self-selection into green electricity tariffs, an endogenous
dummy treatment effects model is estimated. The results suggest that wealthier
and better-educated households are more likely to adopt green electricity. Moreover, we
find that switching to green electricity decreases electricity consumption and households
supplied by green electricity are less price-responsive. Consequently, enforcing higher
prices for conventional electricity might prove effective in reducing both greenhouse gas
emissions and electricity consumption at the household level.2018-10-19T14:43:05ZRISE Germany Internship: Applying Deep Learning Methods to the Search for Astrophysical Tau NeutrinosMartin, Williamhttp://hdl.handle.net/2003/371902018-10-13T01:40:58Z2018-10-12T12:28:22ZTitle: RISE Germany Internship: Applying Deep Learning Methods to the Search for Astrophysical Tau Neutrinos
Authors: Martin, William2018-10-12T12:28:22ZFeature Selection for High-Dimensional Data with RapidMinerSangkyun, LeeSchowe, BenjaminSivakumar, ViswanathMorik, Katharinahttp://hdl.handle.net/2003/371892018-10-13T01:40:58Z2018-10-12T12:25:02ZTitle: Feature Selection for High-Dimensional Data with RapidMiner
Authors: Sangkyun, Lee; Schowe, Benjamin; Sivakumar, Viswanath; Morik, Katharina
Abstract: Feature selection is an important task in machine learning, reducing dimensionality of learning problems by selecting a few relevant features without losing too much information. Focusing on smaller sets of features, we can learn simpler models from data that are easier to understand and to apply. In fact, simpler models are more robust to input noise and outliers, often leading to better prediction performance than the models trained in higher dimensions with all features. We implement several feature selection algorithms in an extension of RapidMiner that scale well with the number of features compared to the existing feature selection operators.2018-10-12T12:25:02ZEnergy-Efficient GPS-Based Positioning in the Android Operating SystemStreicher, JochenSpincyk, Olafhttp://hdl.handle.net/2003/371882018-10-13T01:40:57Z2018-10-12T12:23:41ZTitle: Energy-Efficient GPS-Based Positioning in the Android Operating System
Authors: Streicher, Jochen; Spincyk, Olaf
Abstract: We present our ongoing collaborative work on EnDroid, an energy-efficient GPS-based positioning system for the Android Operating System. EnDroid is based on the EnTracked positioning system, developed at the University of Aarhus, Denmark. We describe the current prototypical state of our implementation and present our experiences and conclusions from preliminarily evaluating EnDroid on the Google Nexus One Smartphone. Although the preliminary results seem to support the approach, there are still several open questions, both at the application interface and at the hardware management level.2018-10-12T12:23:41ZProbabilistic Graphical Models in RapidMinerPiatkowski, Nicohttp://hdl.handle.net/2003/371872018-10-13T01:40:58Z2018-10-12T12:22:11ZTitle: Probabilistic Graphical Models in RapidMiner
Authors: Piatkowski, Nico
Abstract: This Report describes the technical background and usage of the GraphMod plug-in for RapidMiner. The plug-in enables RapidMiner to load factor graphs and interpret Label and Attributes which are contained in an Example as assignments to random variables. A set of examples which belong to the same Batch is treated as assignment to a whole factor graph. New operators allow the estimation of factor weights, the computation of the single-node marginal probability functions and the computation of the most probable assignment for each Labelnode with several methods. All algorithms are optimized for parallel execution on common multi-core processors and NVIDIA CUDA capable many-core processors (also known as Graphics Processing Unit).2018-10-12T12:22:11ZTechnical report for Collaborative Research Center SFB 876 - Graduate SchoolMorik, KatharinaRhode, Wolfganghttp://hdl.handle.net/2003/371862018-10-13T01:40:55Z2018-10-12T09:18:51ZTitle: Technical report for Collaborative Research Center SFB 876 - Graduate School
Authors: Morik, Katharina; Rhode, Wolfgang2018-10-12T09:18:51ZComputing on High Performance Clusters with R: Packages BatchJobs and BatchExperimentsBischl, BerndLang, MichelMersmann, OlafRahnenführer, JörgWeihs, Claushttp://hdl.handle.net/2003/371852018-10-13T01:40:53Z2018-10-12T09:16:55ZTitle: Computing on High Performance Clusters with R: Packages BatchJobs and BatchExperiments
Authors: Bischl, Bernd; Lang, Michel; Mersmann, Olaf; Rahnenführer, Jörg; Weihs, Claus
Abstract: Empirical analysis of statistical algorithms often demands time-consuming experiments which are best performed on high performance computing clusters. We present two R packages which greatly simplify working in batch computing environments. The package BatchJobs implements the basic objects and procedures to control a batch cluster within R. It is structured around cluster versions of the well-known higher order functions Map, Reduce and Filter from functional programming. An important feature is that the state of computation is persistently available in a database. The user can query the status of jobs and then continue working with a desired subset. The second package, BatchExperiments, is tailored for the still very general scenario of analyzing arbitrary algorithms on problem instances. It extends BatchJobs by letting the user define an array of jobs of the kind “apply algorithm A to problem instance P and store results”. It is possible to associate statistical designs with parameters of algorithms and problems and therefore to systematically study their influence on the results. In general our main contributions are: (a) Portability : Both packages use a clear and well-defined interface to the batch system which makes them applicable in most high-performance computing environments. (b) Reproducibility: Every computational part has an associated seed that the user can control to ensure reproducibility even when the underlying batch system changes. (c) Efficiency: Efficiently use batch computing clusters completely within R. 
(d) Abstraction and good software design: The code layers for algorithms, experiment definitions and execution are cleanly separated and enable the writing of readable and maintainable code.2018-10-12T09:16:55ZTechnical report for Collaborative Research Center SFB 876 - Graduate SchoolMorik, KatharinaRhode, Wolfganghttp://hdl.handle.net/2003/371842018-10-13T01:40:53Z2018-10-12T09:14:26ZTitle: Technical report for Collaborative Research Center SFB 876 - Graduate School
Authors: Morik, Katharina; Rhode, Wolfgang2018-10-12T09:14:26ZOptimization plugin for RapidMinerUmaashankar, VenkateshSangkyun, Leehttp://hdl.handle.net/2003/371832018-10-13T01:40:54Z2018-10-12T09:12:51ZTitle: Optimization plugin for RapidMiner
Authors: Umaashankar, Venkatesh; Sangkyun, Lee
Abstract: Optimization in general means selecting the best choice among various alternatives, reducing the cost or disadvantage of an objective. Optimization problems are very popular in fields such as economics, finance, and logistics. Optimization is a science of its own, and machine learning or data mining is a diverse, growing field which applies techniques from various other areas to find useful insights from data. Many machine learning problems can be modelled and solved as optimization problems, which means optimization already provides a set of well-established methods and algorithms to solve machine learning problems. Due to the importance of optimization in machine learning, machine learning researchers have in recent times contributed remarkable improvements in the field of optimization. We implement several popular optimization strategies and algorithms as a plugin for RapidMiner, which adds an optimization toolkit to the existing arsenal of operators in RapidMiner.2018-10-12T09:12:51ZThe Streams FrameworkBockermann, ChristianBlom, Hendrikhttp://hdl.handle.net/2003/371822018-10-13T01:40:55Z2018-10-12T09:11:13ZTitle: The Streams Framework
Authors: Bockermann, Christian; Blom, Hendrik
Abstract: In this report, we present the streams library, a generic Java-based library for designing data stream processes. The streams library defines a simple abstraction layer for data processing and provides a small set of online algorithms for counting and classification. Moreover, it integrates existing libraries such as MOA. Processes are defined in XML files following the semantics and ideas of well-established tools like Ant, Maven, or the Spring Framework. The streams library can be easily embedded into existing software, used as a standalone tool, or used to define compute graphs that are executed on other back-end systems such as the Storm stream engine. This report reflects the status of the streams framework in version 0.9.6. As the framework is continuously enhanced, the report is extended accordingly. The most recent version of this report is available online.2018-10-12T09:11:13ZMeasuring the Power Consumption of SmartphonesManning-Dahan, TylerPutzke, MarkusWietfeld, Christianhttp://hdl.handle.net/2003/371812018-10-13T01:40:55Z2018-10-12T09:08:46ZTitle: Measuring the Power Consumption of Smartphones
Authors: Manning-Dahan, Tyler; Putzke, Markus; Wietfeld, Christian
Abstract: Smartphones are becoming a part of everyday life and as such, a better understanding of hardware and software power consumption is crucial to develop more efficient smartphones. In order to extend battery life, application developers and phone designers must become aware of the limitations of a phone’s CPU power, as well as the LCD display consumption and connectivity via WiFi, 3G, and GPS systems. We present power consumption measurements of an HTC Incredible S and compare these results to known analytical models. The evaluation shows that power consumption varies considerably across different types of smartphones and that well-known models underestimate the actual consumption. The results illustrate that touching the screen nearly doubles the power consumption, which is not captured by any analytical model. Moreover, we present how the transmitted packet size of WiFi and cellular communications affects the power consumption.2018-10-12T09:08:46ZUnimodal regression using Bernstein-Schoenberg-splines and penaltiesKöllmann, ClaudiaBornkamp, BjörnIckstadt, Katjahttp://hdl.handle.net/2003/371802018-10-13T01:40:55Z2018-10-12T09:07:11ZTitle: Unimodal regression using Bernstein-Schoenberg-splines and penalties
Authors: Köllmann, Claudia; Bornkamp, Björn; Ickstadt, Katja
Abstract: Research in the field of nonparametric shape constrained regression has been intensive. However, only few publications explicitly deal with unimodality although there is need for such methods in applications, for example, in dose-response analysis. In this paper we propose unimodal spline regression methods that make use of Bernstein-Schoenberg-splines and their shape preservation property. To achieve unimodal and smooth solutions we use penalized splines, and extend the penalized spline approach towards penalizing against general parametric functions, instead of using just difference penalties. For tuning parameter selection under a unimodality constraint a restricted maximum likelihood and an alternative Bayesian approach for unimodal regression are developed. We compare the proposed methodologies to other common approaches in a simulation study and apply it to a dose-response data set. All results suggest that the unimodality constraint or the combination of unimodality and a penalty can substantially improve estimation of the functional relationship.2018-10-12T09:07:11ZPreserving Confidentiality in Multiagent Systems - An Internship Project within the DAAD RISE ProgramDilger, DanielKrümpelmann, PatrickTadros, Corneliahttp://hdl.handle.net/2003/371792018-10-13T01:40:53Z2018-10-12T09:05:30ZTitle: Preserving Confidentiality in Multiagent Systems - An Internship Project within the DAAD RISE Program
Authors: Dilger, Daniel; Krümpelmann, Patrick; Tadros, Cornelia
Abstract: RISE (Research Internships in Science and Engineering) is a summer internship program for undergraduate students from the United States, Canada and the UK organized by the DAAD (Deutscher Akademischer Austausch Dienst). Within the project A5 in the Collaborative Research Center SFB 876, we have planned and conducted an internship project in the RISE program that should support our research. Daniel Dilger was the intern and has been supervised by the PhD students Patrick Krümpelmann and Cornelia Tadros. The aim was to model an application scenario for our prototype implementation of a confidentiality preserving multiagent system and to run experiments with that prototype.2018-10-12T09:05:30ZTechnical report for Collaborative Research Center SFB 876 - Graduate SchoolMorik, KatharinaRhode, Wolfganghttp://hdl.handle.net/2003/371782018-10-13T01:40:52Z2018-10-12T08:47:15ZTitle: Technical report for Collaborative Research Center SFB 876 - Graduate School
Authors: Morik, Katharina; Rhode, Wolfgang2018-10-12T08:47:15ZRobPer: An R Package to Calculate Periodograms for Light Curves Based On Robust RegressionThieler, Anita MonikaFried, RolandRathjens, Jonathanhttp://hdl.handle.net/2003/371772018-10-13T01:40:50Z2018-10-12T08:44:24ZTitle: RobPer: An R Package to Calculate Periodograms for Light Curves Based On Robust Regression
Authors: Thieler, Anita Monika; Fried, Roland; Rathjens, Jonathan
Abstract: An important task in astroparticle physics is the detection of periodicities in irregularly sampled time series, called light curves. The classic Fourier periodogram cannot deal with irregular sampling and with the measurement accuracies that are typically given for each observation of a light curve. Hence, methods to fit periodic functions using weighted regression were developed in the past to calculate periodograms. We present the R Package RobPer which allows to combine different periodic functions and regression techniques to calculate periodograms. Possible regression techniques are least squares, least absolute deviation, least trimmed, M-, S- and {\tau} -regression. Measurement accuracies can be taken into account including weights. Our periodogram function covers most of the attempts that have been tried earlier and provides new model-regression-combinations that have not been used before. To detect valid periods, we apply an outlier search on the periodogram instead of using fixed critical values that are theoretically only justified in case of least squares regression, independent periodogram bars and a null hypothesis allowing only normal white noise. This outlier search can be performed using RobPer as well. Finally, the package also includes a generator to generate artificial light curves e.g., for simulation studies.2018-10-12T08:44:24ZPreprocessing of Affymetrix Exon Expression ArraysSangkyun, LeeSchramm, Alexanderhttp://hdl.handle.net/2003/371762018-10-13T01:40:58Z2018-10-12T08:39:44ZTitle: Preprocessing of Affymetrix Exon Expression Arrays
Authors: Sangkyun, Lee; Schramm, Alexander
Abstract: The activity of genes can be captured by measuring the amount of messenger RNAs transcribed from the genes, or from their subunits called exons. In our study, we use the Affymetrix Human Exon ST v1.0 microarrays to measure the activity of exons in neuroblastoma cancer patients. The purpose is to discover a small number of genes or exons that play important roles in differentiating high-risk patients from low-risk counterparts. Although the technology has improved over the past 15 years, array measurements can still be contaminated by various factors, including human error. Since the number of arrays is often only a few hundred, atypical errors can hardly be canceled by large numbers of normal arrays. In this article we describe how we filter out low-quality arrays in a principled way, so that we can obtain more reliable results in downstream analyses.2018-10-12T08:39:44ZA Survey of the Stream Processing LandscapeBockermann, Christianhttp://hdl.handle.net/2003/371752018-10-13T01:40:58Z2018-10-12T08:38:07ZTitle: A Survey of the Stream Processing Landscape
Authors: Bockermann, Christian
Abstract: The continuous processing of streaming data has become an important aspect in many applications. Over the last years a variety of different streaming platforms has been developed and a number of open source frameworks is available for the implementation of streaming applications. In this report, we will survey the landscape of existing streaming platforms. Starting with an overview of the evolving developments in the recent past, we will discuss the requirements of modern streaming architectures and present the ways these are approached by the different frameworks.2018-10-12T08:38:07ZRandom projections for Bayesian regressionGeppert, Leo N.Ickstadt, KatjaMunteanu, AlexanderSohler, Christianhttp://hdl.handle.net/2003/371742018-10-13T01:40:56Z2018-10-12T08:35:55ZTitle: Random projections for Bayesian regression
Authors: Geppert, Leo N.; Ickstadt, Katja; Munteanu, Alexander; Sohler, Christian
Abstract: This article introduces random projections applied as a data reduction technique for Bayesian regression analysis. We show sufficient conditions under which the entire d-dimensional distribution is preserved under random projections by reducing the number of data points from n to k in O(poly(d/epsilon)) in the case n >> d. Under mild assumptions, we prove that evaluating a Gaussian likelihood function based on the projected data instead of the original data yields a (1 + O(epsilon))-approximation in the l_2-Wasserstein distance. Our main result states that the posterior distribution of a Bayesian linear regression is approximated up to a small error depending on only an epsilon-fraction of its defining parameters when using either improper non-informative priors or arbitrary Gaussian priors. Our empirical evaluations involve different simulated settings of Bayesian linear regression. Our experiments underline that the proposed method is able to recover the regression model while considerably reducing the total run-time.2018-10-12T08:35:55ZRessourcenbeschränkte Analyse von Ionenmobilitätsspektren mit dem Raspberry PiEgorov, AlexeyKönig, AlexanderKöppen, MarcelKühn, HenningKullack, IsabellKuthe, EliasMitkovska, SuzanaNiehage, RobertPawelko, AndreasSträßer, ManuelStriewe, ChristianD'Addario, MariannaKopczynski, DominikRahmann, Svenhttp://hdl.handle.net/2003/371732018-10-13T01:40:59Z2018-10-12T08:34:17ZTitle: Ressourcenbeschränkte Analyse von Ionenmobilitätsspektren mit dem Raspberry Pi
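The data reduction idea of the random projections abstract above can be sketched in a few lines with a plain Gaussian sketch; the paper's specific embeddings and error bounds are not reproduced here, and all sizes are illustrative assumptions:

```python
# Sketch: compress the n x d data matrix X to a k x d sketch SX while
# approximately preserving the Gram matrix X'X that a Gaussian
# likelihood depends on.
import math
import random

random.seed(3)
n, d, k = 2000, 3, 400

X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
# sketching matrix with N(0, 1/k) entries so that E[S'S] = I
S = [[random.gauss(0, 1 / math.sqrt(k)) for _ in range(n)] for _ in range(k)]

SX = [[sum(S[i][t] * X[t][j] for t in range(n)) for j in range(d)]
      for i in range(k)]

def gram(M):
    # M'M for a row-major matrix M
    cols = len(M[0])
    return [[sum(row[a] * row[b] for row in M) for b in range(cols)]
            for a in range(cols)]

G, Gs = gram(X), gram(SX)
# maximum entrywise deviation, normalized by n (diagonal entries are ~n)
rel = max(abs(Gs[a][b] - G[a][b]) / n for a in range(d) for b in range(d))
print(rel)  # small deviation, shrinking at roughly 1/sqrt(k)
```

Because the likelihood of a linear model depends on the data only through such quadratic forms, downstream inference on the k compressed rows approximates inference on all n original rows.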
Authors: Egorov, Alexey; König, Alexander; Köppen, Marcel; Kühn, Henning; Kullack, Isabell; Kuthe, Elias; Mitkovska, Suzana; Niehage, Robert; Pawelko, Andreas; Sträßer, Manuel; Striewe, Christian; D'Addario, Marianna; Kopczynski, Dominik; Rahmann, Sven
Abstract: The composition of ambient or exhaled air can provide a great deal of information, which can, for example, help to identify a disease or its cause. The molecules of the substances contained in the air each have different sizes and shapes, so that it is possible to separate them from one another and to determine the frequency of their occurrence from deflections in an air measurement. These deflections are called peaks. Their detection is the subject of current research. The field of application of such measurements ranges from the medical monitoring of patients in hospital to checking the ambient air of certain areas.2018-10-12T08:34:17ZTechnical report for Collaborative Research Center SFB 876 - Graduate SchoolMorik, KatharinaRhode, Wolfganghttp://hdl.handle.net/2003/371722018-10-13T01:40:56Z2018-10-12T08:30:12ZTitle: Technical report for Collaborative Research Center SFB 876 - Graduate School
Authors: Morik, Katharina; Rhode, Wolfgang2018-10-12T08:30:12ZDemixing empirical distribution functionsMunteanu, AlexanderWornowizki, Maxhttp://hdl.handle.net/2003/371712018-10-13T01:41:00Z2018-10-12T08:28:28ZTitle: Demixing empirical distribution functions
Authors: Munteanu, Alexander; Wornowizki, Max
Abstract: We consider the two-sample homogeneity problem where the information contained in two samples is used to test the equality of the underlying distributions. For instance, in cases where one sample stems from a simulation procedure modelling the data generating process of the other sample consisting of observed data, a mere rejection of the null hypothesis is unsatisfactory. Instead, the data analyst would like to know how the simulation can be improved while changing it as little as possible. Based on the popular Kolmogorov-Smirnov test and a general nonparametric mixture model, we propose an algorithm which determines an appropriate correction distribution function describing how the simulation procedure can be corrected. It is constructed in such a way that complementing the simulation sample by a given proportion of observations sampled from the correction distribution does not lead to a rejection of the null hypothesis of equal distributions when the modified and the observed sample are compared. We prove our algorithm to run in linear time and evaluate it on simulated and real spectrometry data showing that it leads to intuitive results. We illustrate its practical performance considering runtime as well as accuracy in a real world scenario.2018-10-12T08:28:28ZData Modeling of Ubiquitous System SoftwareStreicher, Jochenhttp://hdl.handle.net/2003/371702018-10-13T01:41:00Z2018-10-12T08:26:55ZTitle: Data Modeling of Ubiquitous System Software
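The building block of the demixing algorithm in the abstract above is the two-sample Kolmogorov-Smirnov statistic. A minimal self-contained version follows (the sample values are illustrative; the report's algorithm additionally constructs the correction distribution, which is not shown here):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical distribution functions."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, t):
        # Fraction of observations <= t (right-continuous EDF).
        return bisect.bisect_right(sorted_sample, t) / len(sorted_sample)

    # The maximum is attained at an observed point, so scanning a + b suffices.
    return max(abs(ecdf(a, t) - ecdf(b, t)) for t in a + b)

simulated = [0.1, 0.4, 0.6, 0.9]   # e.g. output of a simulation procedure
observed = [0.5, 0.7, 0.8, 1.1]    # e.g. measured data
d = ks_statistic(simulated, observed)
```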
Authors: Streicher, Jochen
Abstract: The multitude of events and internal data structures in complex modern system software is an excellent target for data analysis. The tools to collect the data range from low-level tracing frameworks to more sophisticated ones with specialized data collection and processing languages. However, these tools lack information on the relationship between different data sources and between currently and already collected data. We describe a formal data model that captures the structure of data streams in the system software as well as the relationships between them.2018-10-12T08:26:55ZBeyond unimodal regression: modelling multimodality with piecewise unimodal, mixture or additive regressionKöllmann, ClaudiaIckstadt, KatjaFried, Rolandhttp://hdl.handle.net/2003/371692018-10-13T01:41:00Z2018-10-12T08:25:06ZTitle: Beyond unimodal regression: modelling multimodality with piecewise unimodal, mixture or additive regression
Authors: Köllmann, Claudia; Ickstadt, Katja; Fried, Roland
Abstract: Research in the field of nonparametric shape constrained regression has been extensive and there is need for such methods in various application areas, since shape constraints can reflect prior knowledge about the underlying relationship. It is, for example, often natural that some intensity first increases and then decreases over time, which can be described by a unimodal shape constraint. But the prior knowledge in different applications is also of increasing complexity and data shapes may vary from few to many modes and from piecewise unimodal to superpositions of unimodal function courses. Thus, we go beyond unimodal regression in this report and capture multimodality by employing piecewise unimodal regression, mixture regression or additive regression models. We give an overview of the statistical methods, namely the unimodal spline regression approach and its aforementioned extensions for use with multimodal data. The usefulness of the methods is demonstrated by applying them to data sets from three different application areas: breath gas analysis, marine biology and astroparticle physics. Though the three application areas are quite different, the proposed extensions of unimodal regression yield very helpful results in each of them. This encourages using the methodologies proposed here in many other areas of application as well.2018-10-12T08:25:06ZLogistic Regression in DatastreamsSchwiegelshohn, ChrisSohler, Christianhttp://hdl.handle.net/2003/371682018-10-13T01:41:00Z2018-10-12T08:23:06ZTitle: Logistic Regression in Datastreams
Authors: Schwiegelshohn, Chris; Sohler, Christian
Abstract: Learning from data streams is a well researched task both in theory and practice. As remarked by Clarkson, Hazan and Woodruff, many classification problems cannot be very well solved in a streaming setting. For previous model assumptions, there exist simple, yet highly artificial lower bounds prohibiting space efficient one-pass algorithms. At the same time, several classification algorithms are often successfully used in practice. To overcome this gap, we give a model relaxing the constraints that previously made classification impossible from a theoretical point of view and under these model assumptions provide the first (1 + epsilon)-approximate algorithms for sketching the objective values of logistic regression and perceptron classifiers in data streams.2018-10-12T08:23:06ZUnderstanding Where Your Classifier Does (Not) Work - the SCaPE Model Class for Exceptional Model MiningDuivesteijn, WouterThaele, Juliahttp://hdl.handle.net/2003/371672018-10-13T01:41:00Z2018-10-12T08:21:08ZTitle: Understanding Where Your Classifier Does (Not) Work - the SCaPE Model Class for Exceptional Model Mining
Authors: Duivesteijn, Wouter; Thaele, Julia
Abstract: FACT, the First G-APD Cherenkov Telescope, detects air showers induced by high-energy cosmic particles. It is desirable to classify a shower as being induced by a gamma ray or a background particle. Generally, it is nontrivial to get any feedback on the real-life training task, but we can attempt to understand how our classifier works by investigating its performance on Monte Carlo simulated data. To this end, in this paper we develop the SCaPE (Soft Classifier Performance Evaluation) model class for Exceptional Model Mining, which is a Local Pattern Mining framework devoted to highlighting unusual interplay between multiple targets. In our Monte Carlo simulated data, we take as targets the computed classifier probabilities and the binary column containing the ground truth: which kind of particle induced the corresponding shower. Using a newly developed quality measure based on ranking loss, the SCaPE model class highlights subspaces of the search space where the classifier performs particularly well or poorly. These subspaces are expressed as conditions on attributes of the data, hence they come in a language a domain expert understands, which should aid them in understanding where their classifier does (not) work. Additional experiments are carried out on nine UCI datasets. Found subgroups highlight subspaces whose difficulty for classification is corroborated by astrophysical interpretation, as well as subspaces that warrant further investigation.2018-10-12T08:21:08ZAngerona - A Multiagent Framework for Logic Based Agents with Application to Secrecy PreservationKrümpelmann, PatrickJanus, TimKern-Isberner, Gabrielehttp://hdl.handle.net/2003/371662018-10-12T01:41:03Z2018-10-11T13:54:58ZTitle: Angerona - A Multiagent Framework for Logic Based Agents with Application to Secrecy Preservation
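The ranking-loss idea underlying the SCaPE quality measure mentioned in the abstract above can be sketched as follows. This is a generic pairwise ranking loss, not necessarily the paper's exact measure, and the tie-handling convention (counting ties as errors) is an assumption; the probabilities and labels are made up.

```python
def ranking_loss(probs, labels):
    """Fraction of (positive, negative) pairs for which the classifier
    scores the negative example at least as high as the positive one.
    0.0 means perfect ranking, 1.0 means completely inverted ranking."""
    pos = [p for p, y in zip(probs, labels) if y == 1]
    neg = [p for p, y in zip(probs, labels) if y == 0]
    if not pos or not neg:
        return 0.0
    bad = sum(1 for p in pos for q in neg if q >= p)
    return bad / (len(pos) * len(neg))

# Toy soft classifier output vs. binary ground truth (gamma = 1, hadron = 0).
probs = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1, 0, 1, 0, 0]
loss = ranking_loss(probs, labels)
```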
Authors: Krümpelmann, Patrick; Janus, Tim; Kern-Isberner, Gabriele2018-10-11T13:54:58ZUntersuchungen zur Analyse von deutschsprachigen TextdatenMorik, KatharinaJung, AlexanderWeckwerth, JanRötner, StefanHess, SibylleBuschjäger, SebastianPfahler, Lukashttp://hdl.handle.net/2003/371652019-10-08T11:34:54Z2018-10-11T13:53:10ZTitle: Untersuchungen zur Analyse von deutschsprachigen Textdaten
Authors: Morik, Katharina; Jung, Alexander; Weckwerth, Jan; Rötner, Stefan; Hess, Sibylle; Buschjäger, Sebastian; Pfahler, Lukas2018-10-11T13:53:10ZTechnical report for Collaborative Research Center SFB 876 - Graduate SchoolMorik, KatharinaRhode, Wolfganghttp://hdl.handle.net/2003/371642018-10-12T01:41:04Z2018-10-11T13:50:51ZTitle: Technical report for Collaborative Research Center SFB 876 - Graduate School
Authors: Morik, Katharina; Rhode, Wolfgang2018-10-11T13:50:51ZPerformance Analysis for Parallel R Programs: Towards Efficient Resource UtilizationKotthaus, HelenaKorb, IngoMarwedel, Peterhttp://hdl.handle.net/2003/371632018-10-12T01:41:00Z2018-10-11T13:48:34ZTitle: Performance Analysis for Parallel R Programs: Towards Efficient Resource Utilization
Authors: Kotthaus, Helena; Korb, Ingo; Marwedel, Peter
Abstract: Parallel computing is becoming more and more popular, since R is increasingly used to process large data sets. We have therefore improved traceR so that it can also profile parallel applications. TraceR can be used for common cases like parallelization on multiple cores or parallelization on multiple machines. For the parallel performance analysis we added measurements such as the CPU utilization of parallel tasks and measurements for analyzing the memory usage of parallel programs during execution. With our parallel performance analysis we concentrate on applications that are embarrassingly parallel, consisting of independent tasks. One example application which is embarrassingly parallel and also has a high resource utilization is model selection. Here the goal is to find the best machine learning algorithm configuration for building a model for the given data, which requires searching through a huge model space. Since the gain from parallel execution can be negated if the memory requirements of all parallel processes exceed the capacity of the system, our profiling data can serve as a constraint to determine the degree of parallelism and also to guide the distribution of parallel R applications. Our goal is to provide a resource-aware parallelization strategy. To develop such a strategy we first need to analyze the performance of parallel applications. In the following we therefore describe different parallel example applications and show how traceR is applied to analyze parallel R applications.2018-10-11T13:48:34ZData Reduction for CORSIKABaack, Dominikhttp://hdl.handle.net/2003/371622018-10-12T01:41:01Z2018-10-11T13:44:45ZTitle: Data Reduction for CORSIKA
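The memory constraint on the degree of parallelism described in the traceR abstract above can be illustrated with a small helper. traceR itself is an R profiling tool; this Python fragment with made-up numbers only illustrates how per-task profiling data bounds the number of parallel workers.

```python
def max_parallel_workers(per_task_mem_mb, system_mem_mb, cpu_cores):
    """Degree of parallelism bounded by cores AND memory: the gain from
    parallel execution is negated once the combined memory of all
    workers would exceed the system's capacity."""
    by_memory = system_mem_mb // per_task_mem_mb
    return max(1, min(cpu_cores, by_memory))

# 8 cores, 8 GiB of RAM, each task profiled at roughly 1.5 GiB peak memory:
workers = max_parallel_workers(1500, 8192, 8)
```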
Authors: Baack, Dominik
Abstract: For the analysis of data measured by experiments, simulated Monte Carlo data is essential. It is used to test the understanding of the experiment, for separation of signal and background, and for reconstruction of real physical properties from observable parameters. With increasing size of the experiments, more and more simulated data is needed. To optimize the simulation and to reduce the huge amount of calculation time needed, two different methods exist. The first method is low-level optimization of the source code. The second is the reduction of the actually needed Monte Carlo data. This report focuses on the cosmic ray simulation CORSIKA, which simulates cosmic-ray-induced particle showers within the atmosphere. In the case of CORSIKA, large parts of the program are already optimized. Additionally, parts of the source code are only accessible in binary form, so the first method of optimization is nearly impossible. Therefore the preferred method here is the reduction of unnecessarily generated data. This report presents a modified and extended internal structure for CORSIKA, which is shown in Figure 2. The modifications can be divided into two modules: Dynamic Stack and Remote Control. Both have complementary approaches to reduce the number of needed simulation cycles and provide an easy API for customizations without making assumptions about the CORSIKA code or structure.2018-10-11T13:44:45ZRISE Germany Internship: Application of Data Mining Methods on IceCube Event ReconstructionsBhasin, SrishtiBörner, Mathishttp://hdl.handle.net/2003/371612018-10-12T01:41:03Z2018-10-11T13:42:44ZTitle: RISE Germany Internship: Application of Data Mining Methods on IceCube Event Reconstructions
Authors: Bhasin, Srishti; Börner, Mathis
Abstract: In this report the results from a 3-month internship are presented. The goal of the internship was to apply data mining methods to low-level IceCube data in order to reconstruct the particle energies. IceCube is a neutrino observatory located at the geographic South Pole, built with the aim of detecting high-energy astrophysical neutrinos. The detector consists of 5160 photomultipliers, located 1.5-2.5 kilometers beneath the icecap, which detect Cherenkov light radiated by charged particles propagating through the ice. The reconstruction of detected events directly at the pole is challenging due to heavy constraints on resources, so only rudimentary reconstructions are performed on-site. The final results are obtained months later, once the data has been transported from the detector. An effective and prompt reconstruction directly at the pole would open up many new possibilities for follow-up studies of detected events. The application of state-of-the-art data mining methods can help to obtain these reconstructions on-site.2018-10-11T13:42:44ZOnline Gauß-Prozesse zur Regression auf FPGAsBuschjäger, Sebastianhttp://hdl.handle.net/2003/371602018-10-12T01:41:01Z2018-10-11T13:40:05ZTitle: Online Gauß-Prozesse zur Regression auf FPGAs
Authors: Buschjäger, Sebastian
Abstract: FPGAs can be used as a fast and energy-efficient execution platform, but they provide no runtime environment for file abstractions or peripheral access. For this reason, the surrounding system must be designed in addition to the actual implementation. This system design has changed considerably with the third generation of available tool support for FPGAs, which leads to differences from the existing literature. The design approach for the current FPGA and tool generation is first presented, and building on this, a suitable runtime environment for machine learning algorithms on the FPGA is designed. The aim is a system architecture that is as modular and energy-efficient as possible, so that it can be readily applied in embedded systems and, thanks to its modularity, the machine learning algorithm can easily be exchanged. Subsequently, an exemplary implementation of a Gaussian process on the FPGA demonstrates the integration into the overall system, with emphasis on the highest possible speed of the hardware implementation. To the author's knowledge, the implementation of an energy-efficient system architecture for different machine learning algorithms is new, since in the existing literature a new system is designed for each algorithm. Likewise, to the author's knowledge, the implementation of Gaussian processes on FPGAs is new, so further differences from the existing literature arise here.2018-10-11T13:40:05ZEasyTCGA: An R package for easy batch downloading of TCGA data from FireBrowseKliewer, ViktoriaSangkyun, Leehttp://hdl.handle.net/2003/371592018-10-12T01:41:03Z2018-10-11T13:27:16ZTitle: EasyTCGA: An R package for easy batch downloading of TCGA data from FireBrowse
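As a software reference point for the FPGA Gaussian process work described in the abstract above, the GP regression posterior mean can be computed in a few lines of plain Python. This is a minimal sketch with an RBF kernel and illustrative data; the report's online, hardware-oriented variant is not reproduced here.

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior_mean(train_x, train_y, test_x, noise=1e-6):
    """Posterior mean of GP regression: k(x)^T (K + noise*I)^{-1} y."""
    K = [[rbf(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(train_x)] for i, xi in enumerate(train_x)]
    alpha = solve(K, train_y)
    return [sum(rbf(x, xi) * ai for xi, ai in zip(train_x, alpha))
            for x in test_x]

train_x, train_y = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]
mean = gp_posterior_mean(train_x, train_y, [1.0])
```

With a near-zero noise term the posterior mean interpolates the training targets, which the test input above (a training point) demonstrates.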
Authors: Kliewer, Viktoria; Sangkyun, Lee
Abstract: Many organizations deal with the investigation of cancer, including the National Institutes of Health (NIH), USA, and its Center for Cancer Genomics (CCG). The Cancer Genome Atlas (TCGA) is an establishment of the National Cancer Institute (NCI) and the National Human Genome Research Institute (NHGRI) that has created maps of the key genomic changes in more than 30 cancer types. The aim of TCGA is to improve the effectiveness of diagnosing, treating and guarding against cancer through genome analysis. TCGA provides a publicly available dataset. The Broad Institute TCGA GDAC Firehose arranges this data set, which can be loaded directly using FireBrowse. FireBrowse allows simple and convenient download and study of TCGA data and TCGA analyses. The data is downloaded as zip files. Mario Deng created an R client called FirebrowseR with the objective of getting the TCGA data from FireBrowse conveniently; however, the size of record sets to download is limited. EasyTCGA is an R package providing easy batch downloading of particular TCGA data from FireBrowse using FirebrowseR. The key advantage of EasyTCGA is that the whole available data set of interest is downloaded at once as a single data frame. The focus of this technical report is on the presentation of the R package EasyTCGA; therefore, specific expressions and variables, such as the biological data, will not be explained. All relevant background information can be found at the given URLs. EasyTCGA can download clinical data, sample-level log2 miRSeq and mRNASeq expression values, selected columns from the MAF (Mutation Annotation File) generated by MutSig, and significantly mutated genes, as scored by MutSig.2018-10-11T13:27:16ZTechnical report for Collaborative Research Center SFB 876 - Graduate SchoolMorik, KatharinaRhode, Wolfganghttp://hdl.handle.net/2003/371582018-10-12T01:41:02Z2018-10-11T11:46:05ZTitle: Technical report for Collaborative Research Center SFB 876 - Graduate School
Authors: Morik, Katharina; Rhode, Wolfgang2018-10-11T11:46:05ZPG594 -- Big DataAsmi, MohamedBainczyk, AlexanderBunse, MirkoGaidel, DennisMay, MichaelPfeiffer, ChristianSchieweck, AlexanderSchönberger, LeaStelzner, KarlSturm, DavidWiethoff, CarolinXu, Lilihttp://hdl.handle.net/2003/371572018-10-12T01:40:59Z2018-10-11T11:43:22ZTitle: PG594 -- Big Data
Authors: Asmi, Mohamed; Bainczyk, Alexander; Bunse, Mirko; Gaidel, Dennis; May, Michael; Pfeiffer, Christian; Schieweck, Alexander; Schönberger, Lea; Stelzner, Karl; Sturm, David; Wiethoff, Carolin; Xu, Lili
Abstract: In today's world, processing large amounts of data is becoming ever more important. A variety of technologies, frameworks and software solutions are used that were either designed explicitly for the big data domain or can be ported to big data systems. The goal of this project group (PG) is to acquire expert knowledge of current tools and systems in the big data domain through a real scientific problem. From the winter semester 2015/2016 to the end of the summer semester 2016, this project group worked on processing and analyzing the data of the First G-APD Cherenkov Telescope (FACT), operated by the Department of Physics on the island of La Palma. It delivers terabytes of data daily, which must first be indexed using the cluster of the Collaborative Research Center SFB 876 and then processed efficiently, so that, ideally, this project group can support the physicists' work with its results. How exactly this is to be done is examined in detail on the following pages, starting with the dedicated use case, taking into account the necessary domain and technical foundations, through to the final results.2018-10-11T11:43:22ZRISE Germany Internship: Unfolding FACT DataBieker, JacobBörner, MathisBrügge, KaiNöthe, Maximillianhttp://hdl.handle.net/2003/371562018-10-12T01:41:00Z2018-10-11T11:41:22ZTitle: RISE Germany Internship: Unfolding FACT Data
Authors: Bieker, Jacob; Börner, Mathis; Brügge, Kai; Nöthe, Maximillian
Abstract: In this report the results from a 10 week internship are presented. The goal of the internship was to apply different unfolding approaches to conduct measurements of energy spectra from data acquired by FACT, the First G-APD Cherenkov Telescope. FACT is the first operational telescope of its kind, employing a camera equipped with silicon photomultipliers (G-APD aka SiPM) to primarily detect gamma rays. Improving the unfolding method can help with better interpretation of the data and more accurate physics results without the need for new equipment or more observations. The approaches tested during this internship range from simplistic matrix inversion to an improvement over the previous standard (TRUEE).2018-10-11T11:41:22ZAutomated Data Collection for Modelling Texas Instruments Ultra Low-Power ChargersMasoudinejad, Mojtabahttp://hdl.handle.net/2003/371552018-10-12T01:40:57Z2018-10-11T11:39:45ZTitle: Automated Data Collection for Modelling Texas Instruments Ultra Low-Power Chargers
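The "simplistic matrix inversion" baseline mentioned in the unfolding abstract above can be demonstrated on a noiseless two-bin toy. The response matrix and spectrum below are purely illustrative; with real, noisy data the inversion amplifies fluctuations, which is exactly why regularized methods such as TRUEE exist.

```python
def invert_2x2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Response (migration) matrix: entry A[i][j] is the probability that an
# event from true energy bin j is observed in energy bin i.
A = [[0.8, 0.3],
     [0.2, 0.7]]

true_spectrum = [100.0, 50.0]

# Forward folding: observed = A * true.
observed = [sum(A[i][j] * true_spectrum[j] for j in range(2)) for i in range(2)]

# Naive unfolding: apply the inverse response matrix to the observation.
A_inv = invert_2x2(A)
unfolded = [sum(A_inv[i][j] * observed[j] for j in range(2)) for i in range(2)]
```

Because the toy is noiseless, the unfolded result recovers the true spectrum exactly up to rounding.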
Authors: Masoudinejad, Mojtaba
Abstract: Some IoT designers develop their own ad-hoc conversion solution specifically designed for their entity. However, having Maximum Power Point Tracking (MPPT), battery control, a converter and switching logic would require a series of components. These devices increase the initial cost and the overall energy loss overhead of this middleware between the EH and the storage. Nevertheless, these issues can be overcome by integrating all these elements and logics into one single chip. Currently, there are three Texas Instruments (TI) chips from the BQ255XX series and one ST chip (SPV1050) available off-the-shelf, specially designed for low energy environments. Among them, TI's BQ25505 and BQ25570 chips promise a better performance out of the box and are dominant in the market. Although multiple designers have used these chips in their IoT devices, no analytical study of them is available. Some basic information about these devices is available through their datasheets. However, for a reliable design and fast analysis of the overall energy performance of an IoT device, these chips have to be modelled.2018-10-11T11:39:45ZTechnical report for Collaborative Research Center SFB 876 - Graduate SchoolMorik, KatharinaRhode, Wolfganghttp://hdl.handle.net/2003/371542018-10-12T01:40:58Z2018-10-11T11:37:34ZTitle: Technical report for Collaborative Research Center SFB 876 - Graduate School
Authors: Morik, Katharina; Rhode, Wolfgang2018-10-11T11:37:34ZA Power Model for DC-DC Boost Converters Operating in PFM ModeMasoudinejad, Mojtabahttp://hdl.handle.net/2003/371532018-10-12T01:40:55Z2018-10-11T11:34:22ZTitle: A Power Model for DC-DC Boost Converters Operating in PFM Mode
Authors: Masoudinejad, Mojtaba
Abstract: The next generation of computing will take place outside the traditional stationary computing realm. In the future paradigm, many non-stationary objects around us sense and actuate on the environment while they are connected to each other via the Internet. During the last few years, the number of these devices has been growing rapidly, producing an explosion of small computing platforms for commercial, consumer, and industrial use cases. The overall concept of IoT is based on communication (mainly through the Internet) between multiple entities which are generalised as things. Given the diversity of the application fields, a large number of entities are considered things, from simple one-bit sensors to complex robots. Some concepts even consider human beings as entities within an IoT system. This leads to ambiguity in the definition of objects; consequently, no unified definition of things is accepted among different communities. However, Cyber Physical Systems (CPS), as embedded devices with communication capabilities, would fit into most (if not all) of them.2018-10-11T11:34:22ZMathematical modelling of the quality-based order assignment problemSchmitt, JacquelineHahn, FlorianDeuse, Jochenhttp://hdl.handle.net/2003/371522018-10-12T01:40:57ZTitle: Mathematical modelling of the quality-based order assignment problem
Authors: Schmitt, Jacqueline; Hahn, Florian; Deuse, Jochen
Abstract: The increasing global competition forces companies to reduce their production costs and increase the quality of their products at the same time. Due to individualized customer needs, there can be numerous customer requirements to the products that need to be fulfilled to ensure customer satisfaction. Therefore, many companies established a quality management (QM) system, which aims for continuous improvement of performance regarding system, process, and product quality. Basic concepts and requirements for QM systems can be found in the ISO 9000 standards series. A main principle here is customer orientation, so that individualized customer needs can be considered within the design of internal quality testing gates. Within this technical report we present two approaches to model the product to customer order assignment problem (PCO-AP) mathematically as a 0,1 assignment problem (0,1-AP) and generalized assignment problem (GAP).2018-10-11T11:32:41ZModel-Based Optimization of Subgroup Weights for Survival AnalysisRichter, JakobMadjar, KatrinRahnenführer, Jörghttp://hdl.handle.net/2003/371512018-10-12T01:40:57Z2018-10-11T11:30:33ZTitle: Model-Based Optimization of Subgroup Weights for Survival Analysis
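The 0,1 assignment problem mentioned in the abstract above can be illustrated with a brute-force toy. The cost matrix is hypothetical and the report's actual mathematical models are not reproduced here; the point is only the structure of the problem: each product assigned to exactly one order at minimum total cost.

```python
from itertools import permutations

def solve_assignment(cost):
    """Exhaustive solution of the 0,1 assignment problem: assign each
    product i to exactly one order p[i], minimizing total cost. Fine
    for tiny n; real instances need the Hungarian algorithm or an ILP
    solver."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

# Hypothetical cost matrix: cost[i][j] = quality-mismatch penalty of
# assigning product i to customer order j.
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]

assignment, total = solve_assignment(cost)
```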
Authors: Richter, Jakob; Madjar, Katrin; Rahnenführer, Jörg
Abstract: To obtain a reliable prediction model for a specific cancer subgroup or cohort is often difficult due to the limited number of samples and, in survival analysis, even more due to potentially high censoring rates. Sometimes similar datasets are available for other patient subgroups with the same or a similar disease and treatment, e.g., from other clinical centers. Simple pooling of all subgroups can decrease the variance of the predicted parameters of the prediction models, but also increase the bias due to potentially high heterogeneity between the cohorts.
A promising compromise is to identify which subgroups are similar enough to the specific subgroup of interest and then include only these for model building.
Similarity here refers to the relationship between input and output in the prediction model, and not necessarily to the distributions of the input and output variables themselves.
Here, we propose a subgroup-based weighted likelihood approach and evaluate it on a set of lung cancer cohorts. When interested in a prediction model for a specific subgroup, then for every other subgroup, an individual weight determines the strength with which its observations enter into the likelihood-based optimization of the model parameters. A weight close to 0 indicates that a subgroup should be discarded, and a weight close to 1 indicates that the subgroup fully enters into the model building process.
MBO (model-based optimization) can be used to quickly find a good prediction model in the presence of a large number of hyperparameters to be tuned. Here, we use MBO to identify the best model for survival prediction in lung cancer subgroups, where, besides the parameters of a Cox model, the individual values of the subgroup weights are also optimized. Interestingly, the resulting models with the highest prediction quality are often obtained for a mixed weight structure, i.e. a combination of weights close to 0, weights close to 1, and medium weights is optimal, reflecting the similarity of the corresponding cancer subgroups.2018-10-11T11:30:33Z
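The subgroup-weighted likelihood idea from the abstract above can be sketched with a toy Gaussian model instead of the report's weighted Cox partial likelihood; all data and weights below are illustrative. Maximizing the weighted log-likelihood for a common mean reduces to a weighted mean, so the example shows directly how the subgroup weights control each cohort's influence.

```python
def weighted_mle_mean(subgroups, weights):
    """Maximizer of the subgroup-weighted Gaussian log-likelihood
    sum_s w_s * sum_{i in s} log N(x_i | mu, 1): the weighted mean.
    A toy stand-in for the weighted Cox partial likelihood."""
    num = sum(w * sum(xs) for xs, w in zip(subgroups, weights))
    den = sum(w * len(xs) for xs, w in zip(subgroups, weights))
    return num / den

target = [1.0, 1.2, 0.8]         # subgroup of interest, weight 1
similar = [1.1, 0.9, 1.0, 1.0]   # similar cohort, high weight
distant = [5.0, 5.2]             # heterogeneous cohort, weight 0

# Weighting down the heterogeneous cohort keeps the estimate at the
# target subgroup's mean; naive pooling drags it away (bias).
mu_weighted = weighted_mle_mean([target, similar, distant], [1.0, 0.8, 0.0])
mu_pooled = weighted_mle_mean([target, similar, distant], [1.0, 1.0, 1.0])
```

In the report, such weights are hyperparameters tuned by MBO rather than fixed by hand as here.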