2024-03-29T10:09:15Z | https://www.repo.uni-hannover.de/oai/request
oai:www.repo.uni-hannover.de:123456789/56 | 2022-12-02T15:02:17Z | com_123456789_1 | col_123456789_3 | ddc:004 | ddc:550 | ddc:600 | doc-type:Article | doc-type:Text | open_access | status-type:publishedVersion
Nadarajah, Nandakumaran
Paffenholz, Jens-André
Teunissen, Peter J.G.
2015-08-19T09:48:07Z
2015-08-19T09:48:07Z
2014-07-17
Nadarajah, Nandakumaran; Paffenholz, Jens-André; Teunissen, Peter J. G.: Integrated GNSS Attitude Determination and Positioning for Direct Geo-Referencing. In: Sensors 14 (2014), Nr. 7, S. 12715-12734. DOI: http://dx.doi.org/10.3390/s140712715
http://www.repo.uni-hannover.de/handle/123456789/56
http://dx.doi.org/10.15488/38
Direct geo-referencing is an efficient methodology for the fast acquisition of 3D spatial data. It requires the fusion of spatial data acquisition sensors with navigation sensors, such as Global Navigation Satellite System (GNSS) receivers. In this contribution, we consider an integrated GNSS navigation system to provide estimates of the position and attitude (orientation) of a 3D laser scanner. The proposed multi-sensor system (MSS) consists of multiple GNSS antennas rigidly mounted on the frame of a rotating laser scanner and a reference GNSS station with known coordinates. Precise GNSS navigation requires the resolution of the carrier phase ambiguities. The proposed method uses the multivariate constrained integer least-squares (MC-LAMBDA) method for the estimation of rotating frame ambiguities and attitude angles. MC-LAMBDA makes use of the known antenna geometry to strengthen the underlying attitude model and, hence, to enhance the reliability of rotating frame ambiguity resolution and attitude determination. The reliable estimation of rotating frame ambiguities is consequently utilized to enhance the relative positioning of the rotating frame with respect to the reference station. This integrated (array-aided) method improves ambiguity resolution, as well as positioning accuracy between the rotating frame and the reference station. Numerical analyses of GNSS data from a real-data campaign confirm the improved performance of the proposed method over the existing method. In particular, the integrated method yields reliable ambiguity resolution and reduces the position standard deviation by a factor of about 0.87, matching the theoretical gain of √(3/4) for two antennas on the rotating frame and a single antenna at the reference station.
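The quoted array-aided gain can be illustrated with a short Monte Carlo sketch; the independent-Gaussian noise model and all parameters below are illustrative assumptions, not the authors' MC-LAMBDA implementation:

```python
import math
import random
import statistics

# Assumed noise model: each GNSS antenna contributes independent zero-mean
# noise of standard deviation sigma.  A single-antenna baseline is a1 - r;
# the array-aided baseline averages the two rotating-frame antennas.
random.seed(1)
sigma, n = 1.0, 200_000

single, aided = [], []
for _ in range(n):
    r = random.gauss(0.0, sigma)   # reference-station antenna
    a1 = random.gauss(0.0, sigma)  # rotating-frame antenna 1
    a2 = random.gauss(0.0, sigma)  # rotating-frame antenna 2
    single.append(a1 - r)
    aided.append(0.5 * (a1 + a2) - r)

ratio = statistics.stdev(aided) / statistics.stdev(single)
# Var(single) = 2*sigma^2, Var(aided) = 1.5*sigma^2, so ratio ≈ sqrt(3/4)
```

Under this model the standard-deviation ratio converges to √(3/4) ≈ 0.866, the gain quoted for two rover antennas and one reference antenna.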
Submitted by Melanie Koch (melanie.koch@tib.uni-hannover.de) on 2015-08-19T09:06:16Z
No. of bitstreams: 1
sensors-14-12715.pdf: 6305641 bytes, checksum: 238f55bf284add5b2f222a066da79b43 (MD5)
Previous issue date: 2014-07-17
DFG
Leibniz Universität Hannover/Graduiertenakademie
publishedVersion
eng
Basel : MDPI AG
CC BY 3.0 Unported
http://creativecommons.org/licenses/by/3.0/
global navigation satellite system
GNSS
attitude determination
multivariate constrained integer least-squares
MC-LAMBDA
carrier phase ambiguity resolution
direct geo-referencing
laser scanner
Globales Navigationssatellitensystem
GNSS
Lagebestimmung
Kleinste-Quadrate-Methode
MC-LAMBDA
Trägerphasen-Mehrdeutigkeitsauflösung
Mehrdeutigkeitsauflösung
Trägerphase
Direkte Georeferenzierung
Laserscanner
Positionsbestimmung
Methode der kleinsten Quadrate
GNSS
Lagemessung
Ambiguität
Georeferenzierung
Laserscanner
Dewey Decimal Classification::600 | Technik
Dewey Decimal Classification::500 | Naturwissenschaften::550 | Geowissenschaften
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Integrated GNSS Attitude Determination and Positioning for Direct Geo-Referencing
Article
Text
1424-8220
http://dx.doi.org/10.3390/s140712715
12715
12734
openAccess
The publication was supported by the publication fund.
Sensors 14 (2014), Nr. 7
Nadarajah, Nandakumaran; Paffenholz, Jens-André; Teunissen, Peter J. G.
LUH_Fonds
oai:www.repo.uni-hannover.de:123456789/484 | 2022-12-02T15:04:49Z | com_123456789_1 | col_123456789_4 | ddc:004 | ddc:600 | doc-type:Article | doc-type:Text | open_access | status-type:publishedVersion
Frost, Anja
Renners, Eike
Hötter, Michael
Ostermann, Jörn
2016-08-30T10:20:22Z
2016-08-30T10:20:22Z
2013-01
Frost, Anja; Renners, Eike; Hoetter, Michael; Ostermann, Joern: Probabilistic Evaluation of Three-Dimensional Reconstructions from X-Ray Images Spanning a Limited Angle. In: Sensors 13 (2013), Nr. 1, S. 137-151. DOI: http://dx.doi.org/10.3390/s130100137
http://www.repo.uni-hannover.de/handle/123456789/484
http://dx.doi.org/10.15488/461
An important part of computed tomography is the calculation of a three-dimensional reconstruction of an object from a series of X-ray images. Unfortunately, some applications do not provide sufficient X-ray images. The reconstructed objects then no longer truly represent the original; inside the volumes, the accuracy seems to vary unpredictably. In this paper, we introduce a novel method to evaluate any reconstruction, voxel by voxel. The evaluation is based on a sophisticated probabilistic handling of the measured X-rays, as well as the inclusion of a priori knowledge about the materials that the examined object consists of. For each voxel, the proposed method outputs a numerical value that represents the probability that a predefined material was present at the voxel's position during the X-ray acquisition. Such a probabilistic quality measure has been lacking so far. In our experiment, falsely reconstructed areas are detected by their low probability, while a high probability predominates in accurately reconstructed areas. Receiver operating characteristics not only confirm the reliability of our quality measure but also demonstrate that existing methods are less suitable for evaluating a reconstruction.
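The per-voxel probability idea can be sketched with a simplified Bayesian stand-in (the paper itself builds on a Dempster–Shafer treatment); the material attenuation values, noise level, and priors below are assumptions for illustration:

```python
import math

def voxel_material_probability(measured, materials, sigma=0.05):
    """Posterior probability of each candidate material at one voxel, given
    a measured attenuation value, Gaussian measurement noise, and a prior
    per material.  `materials` maps name -> (expected attenuation, prior)."""
    weights = {
        name: prior * math.exp(-0.5 * ((measured - mu) / sigma) ** 2)
        for name, (mu, prior) in materials.items()
    }
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Two predefined materials with illustrative attenuation values.
materials = {"air": (0.0, 0.5), "aluminium": (0.5, 0.5)}
post = voxel_material_probability(0.48, materials)
# a voxel measuring 0.48 is assigned to aluminium with high probability
```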
publishedVersion
eng
Basel : MDPI AG
Sensors 13 (2013), Nr. 1
1424-8220
http://dx.doi.org/10.3390/s130100137
CC BY 3.0 Unported
https://creativecommons.org/licenses/by/3.0/
x-ray
computed tomography
discrete tomography
three-dimensional image reconstruction
limited data
dempster-shafer theory
data fusion
probability calculus
discrete tomography
algorithms
Dewey Decimal Classification::600 | Technik
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Probabilistic Evaluation of Three-Dimensional Reconstructions from X-Ray Images Spanning a Limited Angle
Article
Text
1
13
137
151
openAccess
oai:www.repo.uni-hannover.de:123456789/493 | 2022-12-02T15:04:49Z | com_123456789_1 | col_123456789_4 | ddc:004 | ddc:600 | doc-type:Article | doc-type:Text | open_access | status-type:publishedVersion
Kern, Albert
Martignoli, Stefan
Mathis, Wolfgang
Steeb, Willi-Hans
Stoop, Ralph Lukas
Stoop, Ruedi
2016-08-30T10:20:26Z
2016-08-30T10:20:26Z
2011-06
Kern, Albert; Martignoli, Stefan; Mathis, Wolfgang; Steeb, Willi-Hans; Stoop, Ralph Lukas; Stoop, Ruedi: Analysis of the "Sonar Hopf" Cochlea. In: Sensors 11 (2011), Nr. 6, S. 5808-5818. DOI: http://dx.doi.org/10.3390/s110605808
http://www.repo.uni-hannover.de/handle/123456789/493
http://dx.doi.org/10.15488/470
The "Sonar Hopf" cochlea is a recently much advertised engineering design of an auditory sensor. We analyze this approach based on a recent description by its inventors Hamilton, Tapson, Rapson, Jin, and van Schaik, in which they exhibit the "Sonar Hopf" model, its analysis and the corresponding hardware in detail. We identify problems in the theoretical formulation of the model and critically examine the claimed coherence between the described model, the measurements from the implemented hardware, and biological data.
publishedVersion
eng
Basel : MDPI AG
Sensors 11 (2011), Nr. 6
1424-8220
http://dx.doi.org/10.3390/s110605808
CC BY 3.0 Unported
https://creativecommons.org/licenses/by/3.0/
artificial cochlea
biomorphic
mathematical analysis
delayed feedback-control
systems
hearing
Dewey Decimal Classification::600 | Technik
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Analysis of the "Sonar Hopf" Cochlea
Article
Text
6
11
5808
5818
openAccess
oai:www.repo.uni-hannover.de:123456789/1054 | 2022-12-02T19:35:26Z | com_123456789_11 | col_123456789_12 | ddc:004 | doc-type:Article | doc-type:Text | open_access | status-type:publishedVersion
Asheghi, Noushin Rezapour
Sharoff, Serge
Markert, Katja
2017-01-12T08:35:30Z
2017-01-12T08:35:30Z
2016
Asheghi, N.R.; Sharoff, S.; Markert, K.: Crowdsourcing for web genre annotation. In: Language Resources and Evaluation 50 (2016), Nr. 3, S. 603-641. DOI: http://dx.doi.org/10.1007/s10579-015-9331-6
http://www.repo.uni-hannover.de/handle/123456789/1054
http://dx.doi.org/10.15488/1030
Recently, genre collection and automatic genre identification for the web have attracted much attention. However, there is currently no genre-annotated corpus of web pages for which inter-annotator reliability has been established, i.e., the corpora are either not tested for inter-annotator reliability or exhibit low inter-coder agreement. Annotation has also mostly been carried out by a small number of experts, leading to concerns with regard to the scalability of these annotation efforts and the transferability of the schemes to annotators outside these small expert groups. In this paper, we tackle these problems by using crowdsourcing for genre annotation, leading to the Leeds Web Genre Corpus, the first web corpus which is demonstrably reliably annotated for genre and which can be easily and cost-effectively expanded using naive annotators. We also show that the corpus is source- and topic-diverse. © 2016, The Author(s).
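Inter-annotator reliability of the kind established here is commonly quantified with chance-corrected agreement; a minimal sketch of Fleiss' kappa over hypothetical genre labels (not the authors' data, and not necessarily their measure of choice):

```python
from collections import Counter

def fleiss_kappa(ratings):
    """ratings: one inner list of category labels per item,
    with the same number of raters for every item."""
    n_items = len(ratings)
    n_raters = len(ratings[0])
    counts = [Counter(item) for item in ratings]
    # per-item observed agreement
    p_i = [
        (sum(c * c for c in cnt.values()) - n_raters) / (n_raters * (n_raters - 1))
        for cnt in counts
    ]
    p_bar = sum(p_i) / n_items
    # chance agreement from marginal category proportions
    totals = Counter()
    for cnt in counts:
        totals.update(cnt)
    p_e = sum((v / (n_items * n_raters)) ** 2 for v in totals.values())
    return (p_bar - p_e) / (1 - p_e)

labels = [["news", "news", "blog"],   # three raters per page, hypothetical
          ["blog", "blog", "blog"],
          ["news", "news", "news"]]
kappa = fleiss_kappa(labels)
```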
Google Research Award
EPSRC Doctoral Training Grant
publishedVersion
eng
Dordrecht : Springer Netherlands
Language Resources and Evaluation 50 (2016), Nr. 3
1574-020X
https://doi.org/10.1007/s10579-015-9331-6
CC BY 4.0 Unported
https://creativecommons.org/licenses/by/4.0/
Annotation guidelines
Crowdsourcing
Genres on the web
Reliability testing
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Crowdsourcing for web genre annotation
Article
Text
3
50
603
641
openAccess
oai:www.repo.uni-hannover.de:123456789/1059 | 2022-12-02T16:17:36Z | com_123456789_1 | col_123456789_3 | ddc:004 | ddc:550 | ddc:910 | doc-type:Article | doc-type:Text | open_access | status-type:publishedVersion
Basiri, Anahid
Jackson, Mike
Amirian, Pouria
Pourabdollah, Amir
Sester, Monika
Winstanley, Adam
Moore, Terry
Zhang, Lijuan
2017-01-12T08:35:34Z
2017-01-12T08:35:34Z
2016
Basiri, A.; Jackson, M.; Amirian, P.; Pourabdollah, A.; Sester, M. et al.: Quality assessment of OpenStreetMap data using trajectory mining. In: Geo-Spatial Information Science 19 (2016), Nr. 1, S. 56-68. DOI: http://dx.doi.org/10.1080/10095020.2016.1151213
http://www.repo.uni-hannover.de/handle/123456789/1059
http://dx.doi.org/10.15488/1035
OpenStreetMap (OSM) data are widely used but their reliability is still variable. Many contributors to OSM have not been trained in geography or surveying and consequently their contributions, including geometry and attribute data inserts, deletions, and updates, can be inaccurate, incomplete, inconsistent, or vague. There are some mechanisms and applications dedicated to discovering bugs and errors in OSM data. Such systems can remove errors through user-checks and applying predefined rules but they need an extra control process to check the real-world validity of suspected errors and bugs. This paper focuses on finding bugs and errors based on patterns and rules extracted from the tracking data of users. The underlying idea is that certain characteristics of user trajectories are directly linked to the type of feature. Using such rules, some sets of potential bugs and errors can be identified and stored for further investigations. © 2016 Wuhan University. Published by Taylor & Francis Group.
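The trajectory-based rule idea can be sketched as follows; the feature types, speed thresholds, and helper names are hypothetical, not rules taken from the paper:

```python
import statistics

# Hypothetical plausibility limits: the fastest speed one would expect to
# observe on a way of a given tagged type (km/h, illustrative values).
SPEED_LIMITS_KMH = {"footway": 15, "cycleway": 40, "motorway": 200}

def flag_suspect_ways(ways):
    """ways: dicts with 'id', 'tag', and observed 'speeds_kmh'.
    Returns ids whose median observed speed exceeds the plausible maximum
    for their tagged type, i.e. candidate tagging errors for review."""
    suspects = []
    for way in ways:
        limit = SPEED_LIMITS_KMH.get(way["tag"])
        if limit is not None and statistics.median(way["speeds_kmh"]) > limit:
            suspects.append(way["id"])
    return suspects

ways = [
    {"id": 1, "tag": "footway", "speeds_kmh": [80, 85, 90]},  # likely a road
    {"id": 2, "tag": "footway", "speeds_kmh": [4, 5, 6]},     # plausible
]
suspects = flag_suspect_ways(ways)
```

Flagged ways would then be stored for the further (manual or rule-based) investigation the abstract describes, rather than corrected automatically.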
EU/FP7/Marie Curie Initial Training Network MULTI-POS
publishedVersion
eng
Singapore : Taylor and Francis Ltd.
Geo-Spatial Information Science 19 (2016), Nr. 1
1009-5020
https://doi.org/10.1080/10095020.2016.1151213
CC BY 4.0 Unported
https://creativecommons.org/licenses/by/4.0/
OpenStreetMap (OSM)
Spatial data quality
trajectory data mining
data quality
mapping
qualitative analysis
spatial data
trajectory
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::500 | Naturwissenschaften::550 | Geowissenschaften
Dewey Decimal Classification::900 | Geschichte und Geografie::910 | Geografie, Reisen
Quality assessment of OpenStreetMap data using trajectory mining
Article
Text
1
19
56
68
openAccess
oai:www.repo.uni-hannover.de:123456789/1062 | 2022-12-02T16:17:36Z | com_123456789_1 | col_123456789_4 | ddc:004 | doc-type:Article | doc-type:Text | open_access | status-type:publishedVersion
Moghaddamnia, Sanam
Waal, Albert
Fuhrwerk, Martin
Le, Chung
Peissig, Jürgen
2017-01-12T08:35:34Z
2017-01-12T08:35:34Z
2016
Moghaddamnia, S.; Waal, A.; Fuhrwerk, M.; Le, C.; Peissig, J.: On the efficiency of PAPR reduction schemes deployed for DRM systems. In: Eurasip Journal on Wireless Communications and Networking 2016 (2016), Nr. 1, 255. DOI: http://dx.doi.org/10.1186/s13638-016-0747-5
http://www.repo.uni-hannover.de/handle/123456789/1062
http://dx.doi.org/10.15488/1038
Digital Radio Mondiale (DRM) is the universal, openly standardized digital broadcasting system for all frequencies, including the LW, MW, and SW as well as VHF bands. Alongside providing high audio quality to listeners, DRM satisfies technological requirements posed by broadcasters, manufacturers, and regulatory authorities and thus bears great potential for the future of global radio. One of the key issues here concerns green broadcasting: facing the need for high-power transmitters to cover wide areas, there is room for improvement in the power efficiency of DRM transmitters. A major drawback of DRM is its high peak-to-average power ratio (PAPR), due to the applied transmission technology based on orthogonal frequency division multiplexing (OFDM), which results in non-linearities in the emitted signal, low power efficiency, and high transmitter costs. To overcome this, numerous schemes for reducing the PAPR in OFDM systems have been investigated. In this paper, we review and analyze various PAPR-reduction technologies, ensuring technical feasibility as well as compliance with the DRM-specific system architecture and edge conditions regarding system performance in terms of modulation error rate, compliance with the frequency mask, and synchronization efficiency. All evaluations are carried out with I/Q signals monitored in real operation to present the actual performance of the proposed PAPR techniques. Subsequently, the capability of the best approach is evaluated via measurements on a DRM test platform, where an achieved transmit power gain of 10 dB is shown. According to our evaluation results, PAPR reduction schemes based on active constellation extension followed by a filter prove to be promising towards the practical realization of power-efficient transmitters. © 2016, The Author(s).
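The PAPR that motivates these schemes is the peak instantaneous power over the mean power of the OFDM time-domain signal; a minimal sketch with illustrative parameters (64 random QPSK subcarriers and a naive IDFT, not the DRM configuration):

```python
import cmath
import math
import random

def papr_db(samples):
    """Peak-to-average power ratio of a sampled signal, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

# One OFDM symbol: N random QPSK subcarriers, transformed by a naive IDFT.
random.seed(7)
N = 64
subcarriers = [random.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
               for _ in range(N)]
time_domain = [
    sum(X * cmath.exp(2j * cmath.pi * k * n / N)
        for k, X in enumerate(subcarriers)) / N
    for n in range(N)
]
papr = papr_db(time_domain)  # several dB: subcarriers occasionally add in phase
```

Reduction schemes such as clipping or active constellation extension work by reshaping exactly this peak before the power amplifier.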
publishedVersion
eng
New York : Springer International Publishing
Eurasip Journal on Wireless Communications and Networking 2016 (2016), Nr. 1
1687-1472
https://doi.org/10.1186/s13638-016-0747-5
CC BY 4.0 Unported
https://creativecommons.org/licenses/by/4.0/
Digital broadcasting
DRM30
OFDM
PAPR reduction
Power amplifier efficiency
Broadcasting
Digital radio
Digital television
Efficiency
Frequency division multiplexing
Power amplifiers
Radio broadcasting
Transmitters
Active constellation extensions
Digital Broadcasting
Digital broadcasting systems
DRM30
PAPR reduction
Peak to average power ratio
Power amplifier efficiency
Transmission technologies
Orthogonal frequency division multiplexing
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
On the efficiency of PAPR reduction schemes deployed for DRM systems
Article
Text
1
2016
255
openAccess
oai:www.repo.uni-hannover.de:123456789/1250 | 2022-12-02T15:04:49Z | com_123456789_1 | col_123456789_4 | ddc:004 | doc-type:Article | doc-type:Text | open_access | status-type:publishedVersion
Durand, Arnaud
Ebbing, Johannes
Kontinen, Juha
Vollmer, Heribert
2017-03-31T06:25:34Z
2017-03-31T06:25:34Z
2011
Durand, A.; Ebbing, Johannes; Kontinen, J.; Vollmer, Heribert: Dependence logic with a majority quantifier. In: Leibniz International Proceedings in Informatics, LIPIcs 13 (2011), S. 252-263. DOI: https://doi.org/10.4230/LIPIcs.FSTTCS.2011.252
http://www.repo.uni-hannover.de/handle/123456789/1250
http://dx.doi.org/10.15488/1225
We study the extension of dependence logic D by a majority quantifier M over finite structures. We show that the resulting logic is equi-expressive with the extension of second-order logic by second-order majority quantifiers of all arities. Our results imply that, from the point of view of descriptive complexity theory, D(M) captures the complexity class counting hierarchy. © A. Durand, J. Ebbing, J. Kontinen, and H. Vollmer.
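For intuition, a majority quantifier over a finite domain can be sketched as a simple counting check; note that the paper's M is a second-order quantifier, so this first-order toy only illustrates the underlying "more than half" semantics:

```python
# Toy finite-model semantics of a first-order majority quantifier M:
# (M x) phi(x) is true in a finite structure iff phi holds for strictly
# more than half of the domain elements.
def majority(domain, phi):
    domain = list(domain)
    return 2 * sum(1 for x in domain if phi(x)) > len(domain)

domain = range(10)
holds_6_of_10 = majority(domain, lambda x: x < 6)  # 6 of 10: a majority
holds_5_of_10 = majority(domain, lambda x: x < 5)  # exactly half: not one
```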
publishedVersion
eng
Wadern : Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH
Leibniz International Proceedings in Informatics, LIPIcs 13 (2011)
1868-8969
https://doi.org/10.4230/LIPIcs.FSTTCS.2011.252
CC BY-NC-ND 3.0 Unported
https://creativecommons.org/licenses/by-nc-nd/3.0/
Counting hierarchy
Dependence logic
Descriptive complexity
Finite model theory
Majority quantifier
Second order logic
Counting hierarchy
Dependence logic
Descriptive complexity
Finite model theory
Majority quantifier
Second-order logic
Software engineering
Formal logic
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dependence logic with a majority quantifier
Article
Text
13
252
263
openAccess
31st International Conference on Foundations of Software Technology and Theoretical Computer Science, FSTTCS 2011, December 12–14, 2011, Mumbai, India
oai:www.repo.uni-hannover.de:123456789/1251 | 2022-12-02T15:06:08Z | com_123456789_1 | col_123456789_6 | ddc:004 | ddc:600 | doc-type:Article | doc-type:Text | open_access | status-type:publishedVersion
Denkena, Berend
Köhler, Jens
Breidenstein, Bernd
Mörke, Tobias
2017-03-31T06:25:34Z
2017-03-31T06:25:34Z
2011
Denkena, B.; Köhler, J.; Breidenstein, B.; Mörke, T.: Elementary studies on the inducement and relaxation of residual stress. In: Procedia Engineering 19 (2011), S. 88-93. DOI: https://doi.org/10.1016/j.proeng.2011.11.084
http://www.repo.uni-hannover.de/handle/123456789/1251
http://dx.doi.org/10.15488/1226
In order to qualify residual stress relaxation as an indicator of mechanical overloading of machined parts, an individually designed residual stress profile has to be allocated. Even though numerous investigations have been carried out in the past, residual stress profiles cannot yet be predicted to a satisfactory degree. For this reason, essential studies on the reproducibility of residual stress profiles for several external cylindrical turning parameters are conducted, and it is demonstrated that identical residual stress profiles can be induced successfully. Subsequently, specimens with defined residual stress profiles are loaded in bending tests with various numbers of test cycles. The amount of residual stress relaxation in the specimen's surface layer is measured to determine the influence of the applied load on the stress relaxation. By applying single tensile and compressive loads below and above the material's yield and ultimate strength, the stress relaxation can be evaluated in detail.
DFG/CRC/SFB/653
publishedVersion
eng
Amsterdam : Elsevier BV
Procedia Engineering 19 (2011)
1877-0509
https://doi.org/10.1016/j.proeng.2011.11.084
CC BY-NC-ND 3.0 Unported
https://creativecommons.org/licenses/by-nc-nd/3.0/
Fatigue
Residual stress
Surface integrity
Compressive loads
Reproducibilities
Residual stress profiles
Satisfactory degree
Surface integrity
Surface layers
Test cycles
Ultimate strength
Bending tests
Fatigue of materials
Stress analysis
Stress relaxation
Tensile strength
Residual stresses
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::600 | Technik
Elementary studies on the inducement and relaxation of residual stress
Article
Text
19
88
93
openAccess
1st CIRP Conference on Surface Integrity, CSI 2012, January 30 – February 1, 2012, Bremen, Germany
oai:www.repo.uni-hannover.de:123456789/1255 | 2023-01-03T12:48:13Z | com_123456789_1 | col_123456789_7 | ddc:004 | ddc:600 | doc-type:BookPart | doc-type:Text | open_access | status-type:publishedVersion
Arndt, Markus
Bassi, Angelo
Giulini, Domenico
Heidmann, Antoine
Raimond, Jean-Michel
Giacobino, Elisabeth
Pfeifer, Rolf
2017-03-31T06:25:36Z
2017-03-31T06:25:36Z
2011
Arndt, M.; Bassi, A.; Giulini, D.; Heidmann, A.; Raimond, J.-M.: Fundamental frontiers of quantum science and technology. In: Giacobino, E.; Pfeifer, R. (Eds.): 2nd European Future Technologies Conference and Exhibition 2011 (FET 11). Amsterdam [u.a.] : Elsevier, 2011 (Procedia computer science ; 7), S. 77-80. DOI: https://doi.org/10.1016/j.procs.2011.12.024
http://www.repo.uni-hannover.de/handle/123456789/1255
http://dx.doi.org/10.15488/1230
We discuss recent studies on the foundations of quantum physics with photonic, atomic, molecular and micromechanical systems, as well as theoretical treatments of the interface between quantum physics and classical observations. Investigations of the type presented here elucidate important boundary conditions for quantum mechanics and help assess their relevance for future quantum technologies. © Selection and peer-review under responsibility of FET11 conference organizers and published by Elsevier B.V.
publishedVersion
eng
Amsterdam [u.a.] : Elsevier
2nd European Future Technologies Conference and Exhibition 2011 (FET 11)
Procedia computer science ; 7
978-1-62748-814-3
1877-0509
https://doi.org/10.1016/j.procs.2011.12.024
CC BY-NC-ND 3.0 Unported
https://creativecommons.org/licenses/by-nc-nd/3.0/
Foundations of quantum physic
Matter waves
Molecular quantum optics
Quantum information
Quantum sensing
Matter waves
Micromechanical systems
Quantum Information
Quantum physics
Quantum sensing
Quantum technologies
Science and Technology
Theoretical treatments
MESFET devices
Quantum optics
Quantum theory
Technology
Atomic physics
Konferenzschrift
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::600 | Technik
Fundamental frontiers of quantum science and technology
BookPart
Text
7
77
80
openAccess
2nd European Future Technologies Conference and Exhibition 2011, May 4–6, 2011, Budapest, Hungary
oai:www.repo.uni-hannover.de:123456789/1259 | 2023-02-01T12:00:16Z | com_123456789_1 | col_123456789_7 | ddc:004 | ddc:600 | doc-type:BookPart | doc-type:Text | open_access | status-type:publishedVersion
De Angelis, M.
Angonin, M.C.
Beaufils, Q.
Becker, C.
Bertoldi, A.
Bongs, Kai
Bourdel, T.
Bouyer, P.
Boyer, V.
Dörscher, S.
Duncker, H.
Ertmer, Wolfgang
Fernholz, T.
Fromhold, T.M.
Herr, Waldemar
Krüger, P.
Kürbis, C.
Mellor, C.J.
Pereira Dos Santos, F.
Peters, A.
Poli, N.
Popp, M.
Prevedelli, M.
Rasel, Ernst Maria
Rudolph, J.
Schreck, F.
Sengstock, K.
Sorrentino, F.
Stellmer, S.
Tino, G.M.
Valenzuela, T.
Wendrich, T.J.
Wicht, A.
Windpassinger, P.
Wolf, P.
Giacobino, Elisabeth
Pfeifer, Rolf
2017-03-31T07:44:22Z
2017-03-31T07:44:22Z
2011
De Angelis, M.; Angonin, M.C.; Beaufils, Q.; Becker, Ch.; Bertoldi, A. et al.: iSense: A portable ultracold-atom-based gravimeter. In: Giacobino, E.; Pfeifer, R. (Eds.): 2nd European Future Technologies Conference and Exhibition 2011 (FET 11). Amsterdam [u.a.] : Elsevier, 2011 (Procedia computer science ; 7), S. 334-336. DOI: https://doi.org/10.1016/j.procs.2011.09.067
http://www.repo.uni-hannover.de/handle/123456789/1259
http://dx.doi.org/10.15488/1234
We present iSense, a recently initiated FET project aiming to use Information and Communication Technologies (ICT) to develop a platform for portable quantum sensors based on cold atoms. A prototype of a backpack-size, high-precision force sensor will be built to demonstrate the concept.
publishedVersion
eng
Amsterdam [u.a.] : Elsevier
2nd European Future Technologies Conference and Exhibition 2011 (FET 11)
Procedia computer science ; 7
978-1-62748-814-3
1877-0509
https://doi.org/10.1016/j.procs.2011.09.067
CC BY-NC-ND 3.0 Unported
https://creativecommons.org/licenses/by-nc-nd/3.0/
Cold atoms
Force sensor
High-precision
Information and Communication Technologies
Quantum sensors
Information technology
Sensors
MESFET devices
Konferenzschrift
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::600 | Technik
iSense: A portable ultracold-atom-based gravimeter
BookPart
Text
7
334
336
openAccess
2nd European Future Technologies Conference and Exhibition 2011 (FET 11), May 4–6, 2011, Budapest, Hungary
oai:www.repo.uni-hannover.de:123456789/1261 | 2022-12-02T15:06:08Z | com_123456789_1 | col_123456789_6 | ddc:004 | ddc:600 | doc-type:Article | doc-type:Text | open_access | status-type:publishedVersion
Denkena, Berend
Böß, Volker
Nespor, D.
Samp, A.
2017-03-31T07:44:22Z
2017-03-31T07:44:22Z
2011
Denkena, B.; Böß, V.; Nespor, D.; Samp, A.: Kinematic and stochastic surface topography of machined TiAl6V4 parts by means of ball nose end milling. In: Procedia Engineering 19 (2011), S. 81-87. DOI: https://doi.org/10.1016/j.proeng.2011.11.083
http://www.repo.uni-hannover.de/handle/123456789/1261
http://dx.doi.org/10.15488/1236
Ball nose end mills are usually applied during 5-axis machining of highly functional parts, especially in the aerospace industry. The systematic study of the relationship between process forces and kinematics, surface topography, and subsurface properties is fundamental to ensuring high surface integrity. This paper deals with the topography of surfaces of TiAl6V4 parts machined by means of ball nose end milling. The machined surface has been analyzed, and the kinematic topography, influenced by the process parameters and the geometry of the cutting tool, has been computed. By subtracting the surface measurements from the computed topography, the stochastic topography of the machined surface, e.g. roughness and cracks, can be determined. Furthermore, an approach is given for predicting the stochastic topography based on the process forces during machining of TiAl6V4. © 2011 Published by Elsevier Ltd.
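The separation of measured topography into a deterministic kinematic part and a stochastic remainder can be sketched in one dimension; the scallop model, noise level, and all parameters below are illustrative assumptions, not the paper's data:

```python
import math
import random

# Model assumption: measured profile = kinematic topography (deterministic,
# from tool geometry and feed) + stochastic part (roughness); subtracting
# the computed kinematic profile recovers the stochastic part.
random.seed(3)
n, feed = 200, 0.5                       # samples, feed per revolution (mm)
xs = [i * 0.01 for i in range(n)]        # positions along the feed direction
kinematic = [0.02 * abs(math.sin(math.pi * x / feed)) for x in xs]  # scallops
noise = [random.gauss(0.0, 0.002) for _ in range(n)]                # roughness
measured = [k + e for k, e in zip(kinematic, noise)]

stochastic = [m - k for m, k in zip(measured, kinematic)]
mean = sum(stochastic) / n
ra = sum(abs(s - mean) for s in stochastic) / n  # arithmetic mean roughness Ra
```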
DFG/CRC/871
publishedVersion
eng
Amsterdam : Elsevier BV
Procedia Engineering 19 (2011)
1877-0509
https://doi.org/10.1016/j.proeng.2011.11.083
CC BY-NC-ND 3.0 Unported
https://creativecommons.org/licenses/by-nc-nd/3.0/
Milling
Titanium
Topography
Computed topography
End mill
End milling
Functional parts
Machined surface
Process forces
Process parameters
Stochastic surfaces
Subsurface properties
Surface integrity
TiAl6V4
Aerospace industry
Ball milling
Comminution
Kinematics
Milling (machining)
Stochastic systems
Surface measurement
Titanium
Tools
Topography
Surface topography
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::600 | Technik
Kinematic and stochastic surface topography of machined TiAl6V4 parts by means of ball nose end milling
Article
Text
19
81
87
openAccess
1st CIRP Conference on Surface Integrity, CSI 2012, January 30 – February 1, 2012, Bremen, Germany
oai:www.repo.uni-hannover.de:123456789/1283 | 2022-12-02T19:35:27Z | com_123456789_11 | col_123456789_12 | ddc:004 | status-type:acceptedVersion | doc-type:BookPart | doc-type:Text | open_access
Dietze, Stefan
Calì, Andrea
Gorgan, Dorian
Ugarte, Martín
2017-04-05T12:01:23Z
2017-04-05T12:01:23Z
2017
Dietze, S.: Retrieval, Crawling and Fusion of Entity-centric Data on the Web. In: Cali, A.; Gorgan, D.; Ugarte, M. (Eds.): Semantic keyword-based search on structured data sources. Berlin ; Heidelberg : Springer, 2017 (Lecture notes in computer science ; 10151), S. 3-16. DOI: https://doi.org/10.1007/978-3-319-53640-8_1
http://www.repo.uni-hannover.de/handle/123456789/1283
http://dx.doi.org/10.15488/1258
While the Web of (entity-centric) data has seen tremendous growth over the past years, take-up and re-use are still limited. Data vary heavily with respect to their scale, quality, coverage, or dynamics, which poses challenges for tasks such as entity retrieval or search. This chapter provides an overview of approaches to deal with the increasing heterogeneity of Web data. On the one hand, recommendation, linking, profiling, and retrieval can provide efficient means to enable discovery and search of entity-centric data, specifically when dealing with traditional knowledge graphs and linked data. On the other hand, embedded markup such as Microdata and RDFa has emerged as a novel, Web-scale source of entity-centric knowledge. While markup has seen increasing adoption over the last few years, driven by initiatives such as schema.org, it constitutes an increasingly important source of entity-centric data on the Web, being in the same order of magnitude as the Web itself with regard to dynamics and scale. To this end, markup data lends itself as a data source for aiding tasks such as knowledge base augmentation, where data fusion techniques are required to address the inherent characteristics of markup data, such as its redundancy, heterogeneity, and lack of links. Future directions are concerned with the exploitation of the complementary nature of markup data and traditional knowledge graphs. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-53640-8_1.
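One of the simplest instances of the fusion step described above, majority-vote attribute resolution over redundant entity descriptions, can be sketched as follows; the grouping key, records, and values are hypothetical:

```python
from collections import Counter

# Duplicate markup descriptions of one entity (assumed already grouped by a
# shared key, e.g. a normalized name or URL) are merged by resolving each
# attribute with a majority vote across sources.
def fuse(records):
    fused = {}
    for key in {k for rec in records for k in rec}:
        values = [rec[key] for rec in records if key in rec]
        fused[key] = Counter(values).most_common(1)[0][0]
    return fused

duplicates = [
    {"name": "Berlin", "country": "Germany"},
    {"name": "Berlin", "country": "DE"},
    {"name": "Berlin", "country": "Germany", "population": "3644826"},
]
entity = fuse(duplicates)  # 'Germany' outvotes 'DE'; lone values carry over
```

Real markup fusion has to cope with far messier conflicts (units, granularity, staleness), which is why the chapter treats it as a research problem rather than a vote.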
acceptedVersion
eng
Heidelberg : Springer Verlag
Semantic keyword-based search on structured data sources
Lecture notes in computer science ; 10151
1611-3349
0302-9743
https://doi.org/10.1007/978-3-319-53640-8_1
German copyright law applies. The document may be used free of charge for personal purposes, but may not be made available on the Internet or passed on to third parties.
Dataset recommendation
Entity retrieval
Knowledge graphs
Markup
Schema.org
Arches
Data fusion
Knowledge based systems
Semantics
Web crawler
Entity retrieval
Knowledge graphs
Markup
Schema.org
Semantic Web
Konferenzschrift
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Retrieval, Crawling and Fusion of Entity-centric Data on the Web
BookPart
Text
10151
3
16
openAccess
2nd International KEYSTONE Conference : IKC 2016, September 8–9, 2016, Cluj-Napoca, Romania
PA-6
Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/1284
2022-12-02T15:03:40Z
com_123456789_1
col_123456789_3
ddc:004
status-type:acceptedVersion
doc-type:Article
doc-type:Text
open_access
ddc:620
Feng, D.
Neuweiler, Insa
Nackenhorst, Udo
2017-04-05T12:01:24Z
2018-02-21T23:05:13Z
2017
Feng, D.; Neuweiler, I.; Nackenhorst, U.: A spatially stabilized TDG based finite element framework for modeling biofilm growth with a multi-dimensional multi-species continuum biofilm model. In: Computational Mechanics 59 (2017), Nr. 6, S. 1049-1070. DOI: https://doi.org/10.1007/s00466-017-1388-1
http://www.repo.uni-hannover.de/handle/123456789/1284
http://dx.doi.org/10.15488/1259
We consider a model for biofilm growth in the continuum mechanics framework, where the growth of different components of biomass is governed by a time-dependent advection–reaction equation. The recently developed time-discontinuous Galerkin (TDG) method combined with two different stabilization techniques, namely the Streamline Upwind Petrov-Galerkin (SUPG) method and the finite increment calculus (FIC) method, is discussed as a solution strategy for a multi-dimensional multi-species biofilm growth model. The biofilm interface in the model is described by a convective movement following a potential flow coupled to the reaction inside the biofilm. Growth-limiting substrates diffuse through a boundary layer on top of the biofilm interface. A rolling ball method is applied to obtain a boundary layer of constant height. We compare different measures of the numerical dissipation and dispersion of the simulation results, in particular for those with non-trivial patterns. Using these measures, a comparative study of the TDG–SUPG and TDG–FIC schemes as well as sensitivity studies on the time step size, the spatial element size and temporal accuracy are presented. The final publication is available at Springer via http://dx.doi.org/10.1007/s00466-017-1388-1
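The abstract compares measures of the numerical dissipation and dispersion of advection schemes. As a self-contained illustration of what such a dissipation measure quantifies (a 1-D first-order upwind toy scheme, not the paper's stabilized TDG finite element method), one can advect a sine wave on a periodic grid and track the amplitude loss of its first Fourier mode:

```python
import math

def upwind_step(u, cfl):
    """One step of first-order upwind for u_t + a u_x = 0 on a periodic grid."""
    n = len(u)
    return [u[i] - cfl * (u[i] - u[(i - 1) % n]) for i in range(n)]

def mode1_amplitude(u):
    """Amplitude of the first Fourier mode of a periodic signal."""
    n = len(u)
    re = sum(u[j] * math.cos(2 * math.pi * j / n) for j in range(n)) * 2 / n
    im = sum(u[j] * math.sin(2 * math.pi * j / n) for j in range(n)) * 2 / n
    return math.hypot(re, im)

n, cfl, steps = 64, 0.5, 64  # toy grid size, CFL number and step count
u = [math.sin(2 * math.pi * j / n) for j in range(n)]
a0 = mode1_amplitude(u)
for _ in range(steps):
    u = upwind_step(u, cfl)
# amplitude lost to the scheme; the exact solution would keep a0 unchanged,
# while the phase drift of the mode would analogously measure dispersion
dissipation = 1.0 - mode1_amplitude(u) / a0
```

For the first-order upwind scheme the amplitude decays with every step, so `dissipation` is strictly positive; higher-order stabilized schemes such as TDG–SUPG are designed to keep this loss small.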
State of Lower Saxony
acceptedVersion
eng
Heidelberg : Springer Verlag
Computational Mechanics 59 (2017), Nr. 6
0178-7675
https://doi.org/10.1007/s00466-017-1388-1
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
Advection-reaction equations
Finite element
Numerical dissipation and dispersion
TDG-SUPG
TDG-FIC
Advection
Biofilms
Boundary layers
Calculations
Continuum mechanics
Dispersions
Galerkin methods
Interfaces (materials)
Discontinuous galerkin
Multi-species biofilms
Numerical dissipation
Reaction equations
Sensitivity studies
Stabilization techniques
Streamline upwind / Petrov-Galerkin methods (SUPG)
Time-dependent advection
Finite element method
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::600 | Technik::620 | Ingenieurwissenschaften und Maschinenbau
A spatially stabilized TDG based finite element framework for modeling biofilm growth with a multi-dimensional multi-species continuum biofilm model
Article
Text
6
59
1
1049
22
1070openAccess2018-02-21PA-73Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/1292
2022-12-02T15:03:41Z
com_123456789_1
col_123456789_4
ddc:004
doc-type:Article
doc-type:Text
open_access
status-type:publishedVersion
Meier, Arne
Ordyniak, Sebastian
Sridharan, Ramanujan
Schindler, Irena
2017-04-05T12:01:26Z
2017-04-05T12:01:26Z
2017
Meier, A.; Ordyniak, S.; Sridharan, R.; Schindler, I.: Backdoors for linear temporal logic. In: Leibniz International Proceedings in Informatics, LIPIcs 63 (2017), 23. DOI: https://doi.org/10.4230/LIPIcs.IPEC.2016.23
http://www.repo.uni-hannover.de/handle/123456789/1292
http://dx.doi.org/10.15488/1267
In the present paper, we introduce the backdoor set approach into the field of temporal logic for the global fragment of linear temporal logic. We study the parameterized complexity of the satisfiability problem parameterized by the size of the backdoor. We distinguish between backdoor detection and evaluation of backdoors into the fragments of Horn and Krom formulas. We classify the operator fragments built from globally-operators for past/future/always and their combinations. Detection is shown to be fixed-parameter tractable (FPT), whereas the complexity of evaluation behaves differently. We show that for Krom formulas the problem is paraNP-complete. For Horn formulas, the complexity is shown to be either fixed-parameter tractable or paraNP-complete, depending on the considered operator fragment.
DFG/ME 4279/1-1
publishedVersion
eng
Wadern : Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik GmbH
Leibniz International Proceedings in Informatics, LIPIcs 63 (2017)
1868-8969
https://doi.org/10.4230/LIPIcs.IPEC.2016.23
CC BY 3.0 Unported
https://creativecommons.org/licenses/by/3.0/
Backdoor sets
Linear temporal logic
Parameterized complexity
Computer circuits
Formal logic
Parameterization
Temporal logic
Backdoor detections
Backdoors
Horn formulas
Linear temporal logic
Parameterized
Parameterized complexity
Satisfiability problems
Parameter estimation
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Backdoors for linear temporal logic
Article
Text
63
23
openAccess
oai:www.repo.uni-hannover.de:123456789/1299
2022-12-02T16:17:36Z
com_123456789_1
col_123456789_4
ddc:004
doc-type:Article
doc-type:Text
open_access
status-type:publishedVersion
Hannula, Miika
Kontinen, Juha
Lück, Martin
Virtema, Jonni
2017-04-06T06:44:30Z
2017-04-06T06:44:30Z
2016
Hannula, M.; Kontinen, J.; Lück, M.; Virtema, J.: On quantified propositional logics and the exponential time hierarchy. In: Electronic Proceedings in Theoretical Computer Science, EPTCS 226 (2016), S. 198-212. DOI: https://doi.org/10.4204/EPTCS.226.14
http://www.repo.uni-hannover.de/handle/123456789/1299
http://dx.doi.org/10.15488/1274
We study quantified propositional logics from the complexity-theoretic point of view. First, we introduce alternating dependency quantified Boolean formulae (ADQBF), which generalize both quantified and dependency quantified Boolean formulae. We show that truth evaluation for ADQBF is AEXPTIME(poly)-complete. We also identify fragments for which the problem is complete for levels of the exponential hierarchy. Second, we study propositional team-based logics. We show that DQBF formulae correspond naturally to quantified propositional dependence logic and present a general NEXPTIME upper bound for quantified propositional logic with a large class of generalized dependence atoms. Moreover, we show AEXPTIME(poly)-completeness for extensions of propositional team logic with generalized dependence atoms.
University of Auckland
Academy of Finland
publishedVersion
eng
Waterloo, NSW : Open Publishing Association
Electronic Proceedings in Theoretical Computer Science, EPTCS 226 (2016)
https://doi.org/10.4204/EPTCS.226.14
CC BY 4.0 International
https://creativecommons.org/licenses/by/4.0/
Automata theory
Boolean functions
Computer circuits
Formal logic
Formal verification
Dependence logic
Exponential time
Propositional logic
Quantified Boolean formulas
Upper Bound
Boolean algebra
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
On quantified propositional logics and the exponential time hierarchy
Article
Text
226
198
212
openAccess
oai:www.repo.uni-hannover.de:123456789/1304
2022-12-02T16:17:36Z
com_123456789_1
col_123456789_4
ddc:004
doc-type:Article
doc-type:Text
open_access
status-type:publishedVersion
Lück, Martin
2017-04-06T06:44:31Z
2017-04-06T06:44:31Z
2016
Lück, M.: Axiomatizations for propositional and modal team logic. In: Leibniz International Proceedings in Informatics, LIPIcs 62 (2016). DOI: https://doi.org/10.4230/LIPIcs.CSL.2016.33
http://www.repo.uni-hannover.de/handle/123456789/1304
http://dx.doi.org/10.15488/1279
A framework is developed that extends Hilbert-style proof systems for propositional and modal logics to comprehend their team-based counterparts. The method is applied to classical propositional logic and the modal logic K. Complete axiomatizations for their team-based extensions, propositional team logic PTL and modal team logic MTL, are presented.
publishedVersion
eng
Wadern : Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik GmbH
Leibniz International Proceedings in Informatics, LIPIcs 62 (2016)
https://doi.org/10.4230/LIPIcs.CSL.2016.33
CC BY 3.0 Unported
https://creativecommons.org/licenses/by/3.0/
Axiomatization
Modal team logic
Proof system
Propositional team logic
Team logic
Formal logic
Programmable logic controllers
Computer circuits
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Axiomatizations for propositional and modal team logic
Article
Text
62
LIPIcs 62 (2016)
openAccess
oai:www.repo.uni-hannover.de:123456789/1354
2022-12-02T16:17:36Z
com_123456789_1
col_123456789_4
ddc:004
status-type:acceptedVersion
doc-type:BookPart
doc-type:Text
open_access
Drachsler, Hendrik
Stoyanov, Slavi
d'Aquin, Mathieu
Herder, Eelco
Guy, Marieke
Dietze, Stefan
Rensing, Christoph
Freitas, Sara de
Ley, Tobias
Muñoz-Merino, Pedro J.
2017-04-20T08:42:19Z
2017-04-20T08:42:19Z
2014
Drachsler, H.; Stoyanov, S.; D'Aquin, M.; Herder, E.; Guy, M.; Dietze, S.: An evaluation framework for data competitions in TEL. In: Rensing, C.; Freitas, S.; Ley, T.; Muñoz-Merino, P. (Eds.): Open Learning and Teaching in Educational Communities. Heidelberg : Springer Verlag, 2014 (Lecture Notes in Computer Science ; 8719), S. 70-83. DOI: https://doi.org/10.1007/978-3-319-11200-8_6
http://www.repo.uni-hannover.de/handle/123456789/1354
http://dx.doi.org/10.15488/1329
This paper presents a study describing the development of an Evaluation Framework (EF) for data competitions in TEL. The study applies the Group Concept Method (GCM) to empirically depict criteria and their indicators for evaluating software applications in TEL. A statistical analysis including multidimensional scaling and hierarchical clustering on the GCM data identified the following six evaluation criteria: 1. Educational Innovation, 2. Usability, 3. Data, 4. Performance, 5. Privacy, and 6. Audience. Each of them was operationalized through a set of indicators. The resulting Evaluation Framework (EF) incorporating these criteria was applied to the first data competition of the LinkedUp project. The EF was consequently improved using the results from reviewers' interviews, which were analysed qualitatively and quantitatively. The outcome of these efforts is a comprehensive EF that can be used for TEL data competitions and for the evaluation of TEL tools in general. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-11200-8_6.
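The clustering step mentioned above, grouping rated statements into criteria after multidimensional scaling, can be sketched with a toy agglomerative (single-linkage) clustering over 2-D concept-map coordinates. The coordinates and the cluster count below are invented for illustration, not the study's GCM data:

```python
def single_linkage(points, k):
    """Merge the two closest clusters (minimum pairwise distance) until k remain."""
    clusters = [[i] for i in range(len(points))]

    def dist(c1, c2):
        # single linkage: smallest Euclidean distance between members
        return min(((points[a][0] - points[b][0]) ** 2 +
                    (points[a][1] - points[b][1]) ** 2) ** 0.5
                   for a in c1 for b in c2)

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

# hypothetical 2-D statement coordinates, as produced by multidimensional scaling
points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0), (10.0, 0.0)]
print(single_linkage(points, 3))  # → [[0, 1], [2, 3], [4]]
```

Each resulting cluster would then be inspected and named (e.g. "Usability") by the analysts, as in the GCM workflow.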
EC/FP7/LinkedUp
EC/FP7/DURAARK
acceptedVersion
eng
Heidelberg : Springer Verlag
Open Learning and Teaching in Educational Communities
Lecture Notes in Computer Science ; 8719
0302-9743
978-3-319-11199-5
978-3-319-11200-8
10.1007/978-3-319-11200-8_6
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
Assessment of TEL tools
Data competition
Evaluation Framework
Group Concept Mapping
Artificial intelligence
Computer science
Computers
Concept mapping
Evaluating software
Evaluation criteria
Hierarchical clustering
Multi-dimensional scaling
Application programs
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
An evaluation framework for data competitions in TEL
BookPart
Text
8719
70
83
openAccess
9th European Conference on Technology Enhanced Learning, EC-TEL 2014, September 16-19, 2014, Graz, Austria
PA-16
Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/1355
2022-12-02T16:17:36Z
com_123456789_1
col_123456789_4
ddc:004
status-type:acceptedVersion
doc-type:BookPart
doc-type:Text
open_access
ddc:370
Pereira Nunes, Bernardo
Kawase, Ricardo
Dietze, Stefan
Bernardino De Campos, Gilda Helena
Nejdl, Wolfgang
Popescu, Elvira
Li, Quing
Klamma, Ralf
Leung, Howard
Specht, Marcus
2017-04-20T08:42:20Z
2017-04-20T08:42:20Z
2012
Pereira Nunes, B.; Kawase, R.; Dietze, S.; Bernardino De Campos, G.H.; Nejdl, W.: Annotation tool for enhancing e-learning courses. In: Popescu, E.; Li, Q.; Klamma, R.; Leung, H.; Specht, M. (Eds.): Advances in Web-Based Learning - ICWL 2012. Heidelberg : Springer Verlag, 2012 (Lecture Notes in Computer Science ; 7558), S. 51-60. DOI: https://doi.org/10.1007/978-3-642-33642-3_6
http://www.repo.uni-hannover.de/handle/123456789/1355
http://dx.doi.org/10.15488/1330
One of the most popular forms of learning is through reading, and for years hard-copy documents have been the main learning material. With the advent of the Internet and the fast development of new technologies, new tools have been developed to assist the learning process. However, reading remains the main learning method, and it is an individual activity. In this paper we propose a highlighting tool that turns reading and learning into a collaborative, shared activity. In other words, the highlighting tool supports so-called active reading, a well-known and efficient means of learning. The highlighting tool brings the metaphor of the traditional highlight marker to the digital environment and places it in a social context. It enables users to emphasize certain portions of digital learning objects. Furthermore, it provides students, tutors, course coordinators and educational institutions with new possibilities in the teaching and learning process. In this work we present the first quantitative and qualitative results regarding the use of the highlighting tool by over 750 students through 8 weeks of courses. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-33642-3_6.
EC/ECP 2008 EDU 428016
CAPES
acceptedVersion
eng
Heidelberg : Springer Verlag
Advances in Web-Based Learning - ICWL 2012
Lecture Notes in Computer Science ; 7558
0302-9743
978-3-642-33641-6
978-3-642-33642-3
10.1007/978-3-642-33642-3_6
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
Active-Reading
e-Learning
Evaluation
User Feedback
Annotation tool
Digital environment
Digital-learning
Educational institutions
Hard copies
Learning methods
Learning process
Online-Annotations
Social context
Teaching and learning
Tool support
User feedback
Computer aided instruction
Learning systems
Teaching
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::300 | Sozialwissenschaften, Soziologie, Anthropologie::370 | Erziehung, Schul- und Bildungswesen
Annotation tool for enhancing e-learning courses
BookPart
Text
7558
51
60
openAccess
11th International Conference, September 2-4, 2012, Sinaia, Romania
PA-16
Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/1356
2022-12-02T16:17:36Z
com_123456789_1
col_123456789_4
ddc:004
status-type:acceptedVersion
doc-type:BookPart
doc-type:Text
open_access
ddc:370
Pereira Nunes, Bernardo
Pedrosa, Stella
Kawase, Ricardo
Alrifai, Mohammad
Marenzi, Ivana
Dietze, Stefan
Casanova, Marco Antonio
Hernández-Leo, Davinia
Ley, Tobias
Klamma, Ralf
Harrer, Andreas
2017-04-20T08:42:20Z
2017-04-20T08:42:20Z
2013
Pereira Nunes, B.; Pedrosa, S.; Kawase, R.; Alrifai, M.; Marenzi, I. et al.: Answering confucius: The reason why we complicate. In: Hernández-Leo, D.; Ley, T.; Klamma, R.; Harrer, A. (Eds.): Scaling up Learning for Sustained Impact. Heidelberg : Springer Verlag, 2013 (Lecture Notes in Computer Science ; 8095), S. 496-501. DOI: https://doi.org/10.1007/978-3-642-40814-4_45
http://www.repo.uni-hannover.de/handle/123456789/1356
http://dx.doi.org/10.15488/1331
Learning is a level-progressing process. In any field of study, one must master basic concepts to understand more complex ones. Thus, it is important that during the learning process learners are presented and challenged with knowledge which they are able to comprehend (not a level below, not a level too high). In this work we focus on language learners. By gradually improving (complicating) texts, readers are challenged to learn new vocabulary. To achieve such goals, in this paper we propose and evaluate the 'complicator', which rewrites given sentences at a chosen, higher level of difficulty. The 'complicator' is based on natural language processing and information retrieval approaches that perform lexical replacements. 30 native English speakers participated in a user study evaluating our methods on an expert-tailored dataset of children books. Results show that our tool can be of great utility for language learners who are willing to improve their vocabulary. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-40814-4_45.
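The core lexical-replacement idea can be illustrated with a toy sketch. The difficulty lexicon and level numbers below are invented for demonstration; the actual 'complicator' derives its replacements with NLP and information-retrieval techniques:

```python
# word -> {difficulty level: harder synonym}, a hypothetical tiny lexicon
LEXICON = {
    "big":   {2: "large", 3: "substantial", 4: "voluminous"},
    "happy": {2: "glad", 3: "delighted", 4: "exuberant"},
    "use":   {2: "employ", 3: "utilize", 4: "leverage"},
}

def complicate(sentence, target_level):
    """Replace known words with the hardest synonym at or below target_level."""
    out = []
    for word in sentence.split():
        key = word.lower().strip(".,!?")
        repl = word
        if key in LEXICON:
            levels = [lvl for lvl in LEXICON[key] if lvl <= target_level]
            if levels:
                repl = word.replace(key, LEXICON[key][max(levels)])
        out.append(repl)
    return " ".join(out)

print(complicate("I am happy to use this big tool", 3))
# → "I am delighted to utilize this substantial tool"
```

Raising `target_level` makes the output progressively harder, mirroring how learners are challenged with gradually more difficult vocabulary.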
TERENCE
EC/FP7
acceptedVersion
eng
Heidelberg : Springer Verlag
Scaling up Learning for Sustained Impact
Lecture Notes in Computer Science ; 8095
0302-9743
978-3-642-40813-7
978-3-642-40814-4
10.1007/978-3-642-40814-4_45
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
language development
learning process
Technology enhanced learning
Basic concepts
Children books
Information retrieval approach
Language development
Natural language processing
Technology enhanced learning
User study
Natural language processing systems
Learning systems
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::300 | Sozialwissenschaften, Soziologie, Anthropologie::370 | Erziehung, Schul- und Bildungswesen
Answering confucius: The reason why we complicate
BookPart
Text
8095
496
501
openAccess
8th European Conference on Technology Enhanced Learning, EC-TEL 2013, September 17-21, 2013, Paphos, Cyprus
PA-16
Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/1357
2022-12-02T16:17:36Z
com_123456789_1
col_123456789_4
ddc:004
status-type:acceptedVersion
doc-type:BookPart
doc-type:Text
open_access
Pereira Nunes, Bernardo
Dietze, Stefan
Casanova, Marco Antonio
Kawase, Ricardo
Fetahu, Besnik
Nejdl, Wolfgang
Cimiano, Philipp
Corcho, Oscar
Presutti, Valentina
Hollink, Laura
Rudolph, Sebastian
2017-04-20T08:42:21Z
2017-04-20T08:42:21Z
2013
Pereira Nunes, B.; Dietze, S.; Casanova, M.A.; Kawase, R.; Fetahu, B.; Nejdl, W.: Combining a co-occurrence-based and a semantic measure for entity linking. In: Cimiano, P.; Corcho, O.; Presutti, V.; Hollink, L.; Rudolph, S. (Eds.): The Semantic Web: Semantics and Big Data. Heidelberg : Springer Verlag, 2013 (Lecture Notes in Computer Science ; 7882), S. 548-562. DOI: https://doi.org/10.1007/978-3-642-38288-8_37
http://www.repo.uni-hannover.de/handle/123456789/1357
http://dx.doi.org/10.15488/1332
One key feature of the Semantic Web lies in the ability to link related Web resources. However, while relations within particular datasets are often well-defined, links between disparate datasets and corpora of Web resources are rare. The increasingly widespread use of cross-domain reference datasets, such as Freebase and DBpedia for annotating and enriching datasets as well as documents, opens up opportunities to exploit their inherent semantic relationships to align disparate Web resources. In this paper, we present a combined approach to uncover relationships between disparate entities which exploits (a) graph analysis of reference datasets together with (b) entity co-occurrence on the Web with the help of search engines. In (a), we introduce a novel approach adopted and applied from social network theory to measure the connectivity between given entities in reference datasets. The connectivity measures are used to identify connected Web resources. Finally, we present a thorough evaluation of our approach using a publicly available dataset and introduce a comparison with established measures in the field. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-38288-8_37.
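The combined measure described above can be sketched as a weighted sum of (a) a graph-based connectivity score over a reference dataset and (b) a co-occurrence score from search-engine hit counts. The toy graph, hit counts, and the equal 0.5/0.5 weighting below are illustrative assumptions, not the paper's actual datasets or calibration:

```python
from collections import deque

# tiny undirected reference graph, a stand-in for DBpedia/Freebase links
GRAPH = {
    "Berlin": {"Germany", "Brandenburg"},
    "Germany": {"Berlin", "Europe"},
    "Brandenburg": {"Berlin"},
    "Europe": {"Germany"},
}

def hop_distance(a, b):
    """BFS shortest-path length between two entities; None if disconnected."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in GRAPH.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

def semantic_connectivity(a, b):
    d = hop_distance(a, b)
    return 0.0 if d is None else 1.0 / (1.0 + d)

def cooccurrence(hits_a, hits_b, hits_ab):
    """Dice coefficient over (hypothetical) search-engine hit counts."""
    return 2.0 * hits_ab / (hits_a + hits_b) if hits_a + hits_b else 0.0

def combined(a, b, hits_a, hits_b, hits_ab, alpha=0.5):
    return (alpha * semantic_connectivity(a, b)
            + (1 - alpha) * cooccurrence(hits_a, hits_b, hits_ab))

score = combined("Berlin", "Europe", 1000, 800, 300)
```

Entity pairs scoring above a threshold would then be proposed as candidate links between the disparate Web resources.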
acceptedVersion
eng
Heidelberg : Springer Verlag
The Semantic Web: Semantics and Big Data
Lecture Notes in Computer Science ; 7882
0302-9743
978-3-642-38287-1
978-3-642-38288-8
10.1007/978-3-642-38288-8_37
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
co-occurrence-based measure
link detection
linked data
semantic associations
Semantic connectivity
Co-occurrence
Graph analysis
Linked datum
Semantic measures
Semantic relationships
Web resources
Data integration
Search engines
World Wide Web
Semantic Web
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Combining a co-occurrence-based and a semantic measure for entity linking
BookPart
Text
7882
548
562
openAccess
10th International Conference, ESWC 2013, May 26-30, 2013, Montpellier, France
PA-16
Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/1358
2022-12-02T16:17:36Z
com_123456789_1
col_123456789_4
ddc:004
status-type:acceptedVersion
doc-type:BookPart
doc-type:Text
open_access
Pereira Nunes, Bernardo
Mera, Alexander
Casanova, Marco Antonio
Fetahu, Besnik
Paes Leme, Luiz André P.
Dietze, Stefan
Decker, Hendrik
Lhotská, Lenka
Link, Sebastian
Basl, Josef
Tjoa, A Min
2017-04-20T08:42:21Z
2017-04-20T08:42:21Z
2013
Pereira Nunes, B.; Mera, A.; Casanova, M.A.; Fetahu, B.; Paes Leme, L.A.P.; Dietze, S.: Complex matching of RDF datatype properties. In: Decker, H.; Lhotská, L.; Link, S.; Basl, J.; Tjoa, A M. (Eds.): Database and Expert Systems Applications : 24th International Conference, DEXA 2013, Prague, Czech Republic, August 26-29, 2013, Proceedings, Part I. Heidelberg : Springer, 2013 (Lecture Notes in Computer Science ; 8055), S. 195-208. DOI: https://doi.org/10.1007/978-3-642-40285-2_18
http://www.repo.uni-hannover.de/handle/123456789/1358
http://dx.doi.org/10.15488/1333
Property mapping is a fundamental component of ontology matching, and yet there is little support that goes beyond the identification of single property matches. Real data often requires some degree of composition, trivially exemplified by the mapping of "first name" and "last name" to "full name" on one end, to complex matchings, such as parsing and pairing symbol/digit strings to SSN numbers, at the other end of the spectrum. In this paper, we propose a two-phase instance-based technique for complex datatype property matching. Phase 1 computes the Estimate Mutual Information matrix of the property values to (1) find simple, 1:1 matches, and (2) compute a list of possible complex matches. Phase 2 applies Genetic Programming to the much reduced search space of candidate matches to find complex matches. We conclude with experimental results that illustrate how the technique works. Furthermore, we show that the proposed technique greatly improves results over those obtained if the Estimate Mutual Information matrix or the Genetic Programming techniques were to be used independently. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-40285-2_18.
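The Phase-1 idea, scoring candidate property matches by an estimated mutual information over instance values, can be sketched as follows. The token-level estimator and the toy instance columns are illustrative assumptions; the paper computes an Estimate Mutual Information matrix over real RDF datatype properties and feeds the candidates to a genetic-programming phase:

```python
import math
from collections import Counter

def tokens(value):
    return value.lower().split()

def mutual_information(col_a, col_b):
    """Estimated MI between token occurrences of two aligned value columns."""
    n = len(col_a)
    pa, pb, pab = Counter(), Counter(), Counter()
    for va, vb in zip(col_a, col_b):
        ta, tb = set(tokens(va)), set(tokens(vb))
        for x in ta:
            pa[x] += 1
        for y in tb:
            pb[y] += 1
        for x in ta:
            for y in tb:
                pab[(x, y)] += 1
    # MI = sum p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts over n rows
    return sum((c / n) * math.log(c * n / (pa[x] * pb[y]))
               for (x, y), c in pab.items())

# hypothetical instance data: "first name" is part of "full name"
first = ["ada", "alan"]
full = ["ada lovelace", "alan turing"]
unrelated = ["42", "17"]
assert mutual_information(first, full) > mutual_information(first, unrelated)
```

A high score between "first name" and "full name" flags them as a candidate for a complex (compositional) match, shrinking the search space that Phase 2 explores.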
acceptedVersion
eng
Heidelberg : Springer
Database and Expert Systems Applications : 24th International Conference, DEXA 2013, Prague, Czech Republic, August 26-29, 2013, Proceedings, Part I
Lecture Notes in Computer Science ; 8055
0302-9743
978-3-642-40284-5
10.1007/978-3-642-40285-2_18
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
Genetic Programming
Mutual Information
Ontology Matching
Schema Matching
Fundamental component
Genetic programming technique
Property value
Search spaces
Expert systems
Ontology
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Complex matching of RDF datatype properties
BookPart
Text
195
208
openAccess
24th International Conference, DEXA 2013, August 26-29, 2013, Prague, Czech Republic
PA-16
Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/1359
2022-12-02T16:17:36Z
com_123456789_1
col_123456789_4
ddc:004
status-type:acceptedVersion
doc-type:BookPart
doc-type:Text
open_access
ddc:370
Taibi, Davide
Fulantelli, Giovanni
Dietze, Stefan
Fetahu, Besnik
Hernández-Leo, Davinia
Ley, Tobias
Klamma, Ralf
Harrer, Andreas
2017-04-20T08:42:21Z
2017-04-20T08:42:21Z
2013
Taibi, D.; Fulantelli, G.; Dietze, S.; Fetahu, B.: Evaluating relevance of educational resources of social and Semantic Web. In: Hernández-Leo, D.; Ley, T.; Klamma, R.; Harrer, A. (Eds.): Scaling up Learning for Sustained Impact. Heidelberg : Springer Verlag, 2013 (Lecture Notes in Computer Science ; 8095), S. 637-638. DOI: https://doi.org/10.1007/978-3-642-40814-4_89
http://www.repo.uni-hannover.de/handle/123456789/1359
http://dx.doi.org/10.15488/1334
The social web paradigm has modified the way people behave on the Web. Amongst the many consequences of this change, the amount of online resources directly produced and shared by users has increased considerably. In this scenario, the importance of methods to evaluate the educational relevance of these resources increases. In this poster we propose an approach based on recent advancements of Linked Open Data. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-40814-4_89.
acceptedVersion
eng
Heidelberg : Springer Verlag
Scaling up Learning for Sustained Impact
Lecture Notes in Computer Science ; 8095
0302-9743
978-3-642-40813-7
978-3-642-40814-4
10.1007/978-3-642-40814-4_89
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
Educational relevance of resource
Linked Open Data
OER
Educational relevance of resource
Educational resource
Linked open datum
Online resources
Social webs
Artificial intelligence
Computer science
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::300 | Sozialwissenschaften, Soziologie, Anthropologie::370 | Erziehung, Schul- und Bildungswesen
Evaluating relevance of educational resources of social and Semantic Web
BookPart
Text
8095
637
638
openAccess
8th European Conference on Technology Enhanced Learning, EC-TEL 2013, September 17-21, 2013, Paphos, Cyprus
PA-16
Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/1360
2022-12-02T16:17:36Z
com_123456789_1
col_123456789_4
ddc:004
status-type:acceptedVersion
doc-type:BookPart
doc-type:Text
open_access
Risse, Thomas
Dietze, Stefan
Peters, Wim
Doka, Katerina
Stavrakas, Yannis
Senellart, Pierre
Zaphiris, Panayiotis
Buchanan, George
Rasmussen, Edie
Loizides, Fernando
2017-04-20T08:42:21Z
2017-04-20T08:42:21Z
2012
Risse, T.; Dietze, S.; Peters, W.; Doka, K.; Stavrakas, Y.; Senellart, P.: Exploiting the social and semantic web for guided web archiving. In: Zaphiris, P.; Buchanan, G.; Rasmussen, E.; Loizides, F. (Eds.): Theory and Practice of Digital Libraries. Heidelberg : Springer Verlag, 2012 (Lecture Notes in Computer Science ; 7489), S. 426-432. DOI: https://doi.org/10.1007/978-3-642-33290-6_47
http://www.repo.uni-hannover.de/handle/123456789/1360
http://dx.doi.org/10.15488/1335
The constantly growing amount of Web content and the success of the Social Web lead to increasing needs for Web archiving. These needs go beyond the pure preservation of Web pages. Web archives are turning into "community memories" that aim at building a better understanding of the public view on, e.g., celebrities, court decisions, and other events. In this paper we present the ARCOMEM architecture that uses semantic information such as entities, topics, and events complemented with information from the social Web to guide a novel Web crawler. The resulting archives are automatically enriched with semantic meta-information to ease the access and allow retrieval based on conditions that involve high-level concepts. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-33290-6_47.
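Semantically guided crawling as described above can be sketched as a best-first crawler whose frontier is prioritized by how many campaign entities/topics a page mentions. The term list, pages, and link structure below are toy stand-ins for ARCOMEM's extraction pipeline, not its actual components:

```python
import heapq

CAMPAIGN_TERMS = {"election", "parliament", "minister"}  # hypothetical topic terms

PAGES = {  # url -> (page text, outlinks); a toy web
    "seed": ("election news portal", ["a", "b"]),
    "a": ("minister addresses parliament after election", ["c"]),
    "b": ("cooking recipes", ["d"]),
    "c": ("parliament session", []),
    "d": ("sports scores", []),
}

def relevance(text):
    """Fraction of campaign terms mentioned in the page text."""
    return len(set(text.split()) & CAMPAIGN_TERMS) / len(CAMPAIGN_TERMS)

def crawl(seed, budget):
    """Best-first crawl: always fetch the most campaign-relevant URL next."""
    frontier = [(-relevance(PAGES[seed][0]), seed)]  # max-heap via negation
    visited, order = set(), []
    while frontier and len(order) < budget:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for nxt in PAGES[url][1]:
            if nxt not in visited and nxt in PAGES:
                heapq.heappush(frontier, (-relevance(PAGES[nxt][0]), nxt))
    return order

print(crawl("seed", 3))  # → ['seed', 'a', 'c']
```

With a budget of three fetches the crawler skips the irrelevant recipe page entirely, which is the intended effect of guiding the crawl by semantic relevance rather than link order.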
German Federal Ministry for the Environment, Nature Conservation and Nuclear Safety/0325296
Solland Solar Cells BV
SolarWorld Innovations GmbH
SCHOTT Solar AG
RENA GmbH
SINGULUS TECHNOLOGIES AG
acceptedVersion
eng
Heidelberg : Springer Verlag
Theory and Practice of Digital Libraries
Lecture Notes in Computer Science ; 7489
0302-9743
978-3-642-33289-0
978-3-642-33290-6
10.1007/978-3-642-33290-6_47
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
Social Web
Text Analysis
Web Archiving
Web Crawler
Court decisions
Meta information
Semantic information
Text analysis
Web archives
Web content
Artificial intelligence
Digital libraries
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Exploiting the social and semantic web for guided web archiving
BookPart
Text
7489
426
432
openAccess
Second International Conference, TPDL 2012, September 23-27, 2012, Paphos, Cyprus
PA-16
Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/1383
2022-12-02T15:02:18Z
com_123456789_1
col_123456789_3
ddc:004
doc-type:Article
doc-type:Text
open_access
status-type:publishedVersion
Schaumann, Peter
Steppeler, S.
2017-04-21T09:09:54Z
2017-04-21T09:09:54Z
2013
Schaumann, P.; Steppeler, S.: Fatigue tests of axially loaded butt welds up to very high cycles. In: Procedia Engineering 66 (2013), S. 88-97. DOI: https://doi.org/10.1016/j.proeng.2013.12.065
http://www.repo.uni-hannover.de/handle/123456789/1383
http://dx.doi.org/10.15488/1358
Fatigue strength curves that are established from fatigue tests provide a basis for the fatigue assessment applying the nominal stress approach. In the codes valid for steel structures, such as EC 3, the fatigue strength curves for constant amplitude loading have a knee point in the transition region. The fatigue strength curve beyond this knee point is commonly assumed to be a horizontal asymptote. However, the behaviour of the fatigue strength curve in the area of very high cycles and, more importantly, the existence of an endurance limit are much discussed. In the case of welded joints, the experimental data beyond 10^7 load cycles is limited due to the possibilities in testing. Testing techniques with high frequencies are necessary to obtain experimental data with very high cycles in a reasonable period of time. For this purpose, a testing device developed by a third party is used that operates at approximately 390 Hz by means of alternating-current magnets and resonance amplification. This testing device was investigated and advanced for the application of long-term tests reaching 5·10^8 load cycles. Fatigue tests on axially loaded butt welds with constant amplitude loading are conducted in three test series up to very high cycles. The fatigue tests cover the range of high and very high cycles. The influence of test frequency and stress ratio is investigated.
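The knee-point assumption being tested above can be made concrete with a small sketch of an EC 3-style S-N curve under the nominal stress approach: slope m = 3 down to the knee point, then a horizontal branch that assumes an endurance limit. The detail category of 80 MPa and the reference/knee cycle numbers are generic example values, not the paper's test results:

```python
def endurable_cycles(stress_range, detail_category=80.0, m=3.0,
                     n_ref=2e6, n_knee=5e6):
    """Cycles to failure for a constant-amplitude stress range (MPa)."""
    # knee-point stress range: the constant-amplitude fatigue limit
    s_knee = detail_category * (n_ref / n_knee) ** (1.0 / m)
    if stress_range <= s_knee:
        # horizontal asymptote: the (debated) assumption of infinite life
        return float("inf")
    return n_ref * (detail_category / stress_range) ** m

print(endurable_cycles(80.0))  # → 2000000.0 (at the detail category itself)
print(endurable_cycles(40.0))  # → inf (below the assumed endurance limit)
```

Very-high-cycle tests up to 5·10^8 cycles probe exactly the horizontal branch: if failures still occur below `s_knee`, the infinite-life assumption does not hold.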
Made available in DSpace on 2017-04-21T09:09:54Z (GMT). No. of bitstreams: 0
Previous issue date: 2013
publishedVersion
eng
Amsterdam : Elsevier
Procedia Engineering 66 (2013)
1877-7058
https://doi.org/10.1016/j.proeng.2013.12.065
CC BY-NC-ND 3.0 Unported
https://creativecommons.org/licenses/by-nc-nd/3.0/
SN-curves
Steel
Stress ratio
Test frequency
Very high cycle fatigue
Welded joints
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Fatigue tests of axially loaded butt welds up to very high cycles
Article
Text
66
88
97openAccess5th International Conference on Fatigue Design, Fatigue Design 2013, November 27-28, 2013, Senlis, France
oai:www.repo.uni-hannover.de:123456789/13892023-01-04T07:53:30Zcom_123456789_11col_123456789_12ddc:004doc-type:BookPartdoc-type:Textopen_accessstatus-type:publishedVersion
Nunes, Bernardo Pereira
Kawase, Ricardo
Fetahu, Besnik
Dietze, Stefan
Casanova, Marco Antonio
Maynard, Diana
Watada, Junzo
Jain, Lakhmi C.
Howlett, Robert J.
Mukai, Naoto
Asakura, Koichi
2017-04-21T09:09:56Z
2017-04-21T09:09:56Z
2013
Nunes, B.P.; Kawase, R.; Fetahu, B.; Dietze, S.; Casanova, M.A. et al.: Interlinking documents based on semantic graphs. In: Watada, J.; Jain, L.C.; Howlett, R.J.; Mukai, N.; Asakura, K. (Eds.): 17th International Conference on Knowledge Based and Intelligent Information and Engineering Systems : (KES 2013). Amsterdam [u.a.] : Elsevier, 2013 (Procedia computer science ; 22), S. 231-240. DOI: https://doi.org/10.1016/j.procs.2013.09.099
http://www.repo.uni-hannover.de/handle/123456789/1389
http://dx.doi.org/10.15488/1364
Connectivity and relatedness of Web resources are two concepts that define to what extent different parts are connected or related to one another. Measuring connectivity and relatedness between Web resources is a growing field of research, often the starting point of recommender systems. Although relatedness is liable to subjective interpretations, connectivity is not. Given the Semantic Web's ability to link Web resources, connectivity can be measured by exploiting the links between entities. Further, these connections can be exploited to uncover relationships between Web resources. In this paper, we apply and expand a relationship assessment methodology from social network theory to measure the connectivity between documents. The connectivity measures are used to identify connected and related Web resources. Our approach is able to expose relations that traditional text-based approaches fail to identify. We validate and assess our proposed approaches through an evaluation on a real-world dataset, where results show that the proposed techniques outperform state-of-the-art approaches.
CAPES
EU/FP7/2007-2013
CNPq
FAPERJ
publishedVersion
eng
Amsterdam [u.a.] : Elsevier
17th International Conference on Knowledge Based and Intelligent Information and Engineering Systems : (KES 2013)
Procedia computer science ; 22
978-1-62993-662-8
1877-0509
https://doi.org/10.1016/j.procs.2013.09.099
CC BY-NC-ND 3.0 Unported
https://creativecommons.org/licenses/by-nc-nd/3.0/
Document connectivity
Document recommendation
Semantic connections
Semantic graphs
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Interlinking documents based on semantic graphs
BookPart
Text
22
231
240openAccess17th International Conference in Knowledge Based and Intelligent Information and Engineering Systems, KES 2013, September 9-11, 2013, Kitakyushu, Japan
oai:www.repo.uni-hannover.de:123456789/14022022-12-02T16:17:36Zcom_123456789_1col_123456789_4ddc:004status-type:acceptedVersiondoc-type:BookPartdoc-type:Textopen_access
Leme, Luiz André P. Paes
Lopes, Giseli Rabello
Pereira Nunes, Bernardo
Casanova, Marco Antonio
Dietze, Stefan
Daniel, Florian
Dolog, Peter
Li, Quing
2017-04-21T11:19:48Z
2017-04-21T11:19:48Z
2013
Leme, L.A.P.P.; Lopes, G.R.; Pereira Nunes, B.; Casanova, M.A.; Dietze, S.: Identifying candidate datasets for data interlinking. In: Daniel, F.; Dolog, P.; Li, Q. (Eds.): Web Engineering. Heidelberg : Springer Verlag, 2013 (Lecture Notes in Computer Science ; 7977), S. 354-366. DOI: https://doi.org/10.1007/978-3-642-39200-9_29
http://www.repo.uni-hannover.de/handle/123456789/1402
http://dx.doi.org/10.15488/1377
One of the design principles that can stimulate the growth and increase the usefulness of the Web of data is URI linkage. However, the related URIs are typically in different datasets managed by different publishers. Hence, the designer of a new dataset must be aware of the existing datasets and inspect their content to define sameAs links. This paper proposes a technique based on probabilistic classifiers that, given a dataset S to be published and a set T of known published datasets, ranks each Ti ∈ T according to the probability that links between S and Ti can be found by inspecting the most relevant datasets. Results from our technique show that the search space can be reduced by up to 85%, thereby greatly decreasing the computational effort. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-39200-9_29.
acceptedVersion
eng
Heidelberg : Springer Verlag
Web Engineering
Lecture Notes in Computer Science ; 7977
0302-9743
978-3-642-39199-6
978-3-642-39200-9
10.1007/978-3-642-39200-9_29
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
Bayesian classifier
data interlinking
Linked Data
Computational effort
datasets recommendation
Design Principles
Linked datum
Probabilistic classifiers
Search spaces
Artificial intelligence
Computer science
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Identifying candidate datasets for data interlinking
BookPart
Text
7977
354
366openAccess13th International Conference, ICWE 2013, July 8-12, 2013, Aalborg, DenmarkPA-16Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/14032022-12-02T16:17:36Zcom_123456789_1col_123456789_4ddc:004status-type:acceptedVersiondoc-type:BookPartdoc-type:Textopen_access
Lopes, Giseli Rabello
Leme, Luiz André P. Paes
Pereira Nunes, Bernardo
Casanova, Marco Antonio
Dietze, Stefan
Lin, Xuemin
Manolopoulos, Yannis
Srivastava, Divesh
Huang, Guangyan
2017-04-21T11:19:48Z
2017-04-21T11:19:48Z
2013
Lopes, G.R.; Leme, L.A.P.P.; Pereira Nunes, B.; Casanova, M.A.; Dietze, S.: Recommending tripleset interlinking through a social network approach. In: Lin, X.; Manolopoulos, Y.; Srivastava, D.; Huang, G. (Eds.): Web Information Systems Engineering – WISE 2013. Heidelberg : Springer Verlag (Lecture Notes in Computer Science ; 8180), S. 149-161. DOI: https://doi.org/10.1007/978-3-642-41230-1_13
http://www.repo.uni-hannover.de/handle/123456789/1403
http://dx.doi.org/10.15488/1378
Tripleset interlinking is one of the main principles of Linked Data. However, the discovery of existing triplesets relevant to be linked with a new tripleset is a non-trivial task in the publishing process. Without prior knowledge about the entire Web of Data, a data publisher must perform an exploratory search, which demands substantial effort and may become impracticable with the growth and dissemination of Linked Data. Aiming to alleviate this problem, this paper proposes a recommendation approach for this scenario, using a Social Network perspective. The experimental results show that the proposed approach obtains high levels of recall and reduces by up to 90% the number of triplesets to be further inspected for establishing appropriate links. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-41230-1_13.
CNPq/160326/2012-5
CNPq/301497/2006-0
CNPq/475717/2011-2
CNPq/57128/2009-9
FAPERJ/E-26/170028/2008
FAPERJ/E-26/103.070/2011
CAPES/PROCAD/NF 1128/2010
acceptedVersion
eng
Heidelberg : Springer Verlag
Web Information Systems Engineering – WISE 2013
Lecture Notes in Computer Science ; 8180
0302-9743
978-3-642-41229-5
978-3-642-41230-1
10.1007/978-3-642-41230-1_13
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
Linked Data
Recommender Systems
Social Networks
Exploratory search
Linked datum
Non-trivial tasks
Prior knowledge
Publishing process
Web of datum
Data handling
Systems engineering
World Wide Web
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Recommending tripleset interlinking through a social network approach
BookPart
Text
8180
149
161openAccess14th International Conference, October 13-15, 2013, Nanjing, ChinaPA-16Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/14042022-12-02T16:17:36Zcom_123456789_1col_123456789_4ddc:004status-type:acceptedVersiondoc-type:BookPartdoc-type:Textopen_access
Fetahu, Besnik
Pereira Nunes, Bernardo
Dietze, Stefan
Daniel, Florian
Dolog, Peter
Li, Quing
2017-04-21T11:19:48Z
2017-04-21T11:19:48Z
2013
Fetahu, B.; Pereira Nunes, B.; Dietze, S.: Summaries on the fly: Query-based extraction of structured knowledge from web documents. In: Daniel, F.; Dolog, P.; Li, Q. (Eds.): Web Engineering. Heidelberg : Springer Verlag, 2013 (Lecture Notes in Computer Science ; 7977), S. 249-264. DOI: https://doi.org/10.1007/978-3-642-39200-9_22
http://www.repo.uni-hannover.de/handle/123456789/1404
http://dx.doi.org/10.15488/1379
A large part of Web resources consists of unstructured textual content. Processing and retrieving relevant content for a particular information need is challenging for both machines and humans. While information retrieval techniques provide methods for detecting suitable resources for a particular query, information extraction techniques enable the extraction of structured data, and text summarization allows the detection of important sentences. However, these techniques usually do not consider particular user interests and information needs. In this paper, we present a novel method that automatically generates structured summaries for user queries, using POS patterns to identify relevant statements and entities in a given context. Finally, we evaluate our work using the publicly available New York Times corpus, which shows the applicability of our method and its advantages over previous work. The final publication is available at Springer via https://doi.org/10.1007/978-3-642-39200-9_22
acceptedVersion
eng
Heidelberg : Springer Verlag
Web Engineering
Lecture Notes in Computer Science ; 7977
0302-9743
978-3-642-39199-6
978-3-642-39200-9
10.1007/978-3-642-39200-9_22
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
entity recognition
knowledge extraction
POS pattern analysis
query-based summaries
text summarization
Entity recognition
Knowledge extraction
Pattern analysis
query-based summaries
Text summarization
Information retrieval systems
Information science
Text processing
World Wide Web
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Summaries on the fly: Query-based extraction of structured knowledge from web documents
BookPart
Text
7977
249
264openAccess13th International Conference, ICWE 2013, July 8-12, 2013, Aalborg, DenmarkPA-16Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/14052022-12-02T16:17:36Zcom_123456789_1col_123456789_4ddc:004status-type:acceptedVersiondoc-type:BookPartdoc-type:Textopen_access
Lopes, Giseli Rabello
Leme, Luiz André P. Paes
Pereira Nunes, Bernardo
Casanova, Marco Antonio
Dietze, Stefan
Benatallah, Boualem
Bestavros, Azer
Manolopoulos, Yannis
Vakali, Athena
Zhang, Yanchun
2017-04-21T11:19:48Z
2017-04-21T11:19:48Z
2014
Lopes, G.R.; Leme, L.A.P.P.; Pereira Nunes, B.; Casanova, M.A.; Dietze, S.: Two approaches to the dataset interlinking recommendation problem. In: Benatallah, B.; Bestavros, A.; Manolopoulos, Y.; Vakali, A.; Zhang, Y. (Eds.): Web Information Systems Engineering – WISE 2014. Heidelberg : Springer Verlag, 2014 (Lecture Notes in Computer Science ; 8786), S. 324-339. DOI: https://doi.org/10.1007/978-3-319-11749-2_25
http://www.repo.uni-hannover.de/handle/123456789/1405
http://dx.doi.org/10.15488/1380
Whenever a dataset t is published on the Web of Data, an exploratory search over existing datasets must be performed to identify those datasets that are potential candidates to be interlinked with t. This paper introduces and compares two approaches to address the dataset interlinking recommendation problem, respectively based on Bayesian classifiers and on Social Network Analysis techniques. Both approaches define rank score functions that explore the vocabularies, classes and properties that the datasets use, in addition to the known dataset links. After extensive experiments using real-world datasets, the results show that the rank score functions achieve a mean average precision of around 60%. Intuitively, this means that the exploratory search for datasets to be interlinked with t might be limited to just the top-ranked datasets, reducing the cost of the dataset interlinking process. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-11749-2_25.
EC/FP7/LinkedUp
CNPq/160326/2012-5
CNPq/303332/2013-1
CNPq/557128/2009-9
FAPERJ/E-26/170028/2008
FAPERJ/E-26/103.070/2011
FAPERJ/E-26/101.382/2014
CAPES/1410827
acceptedVersion
eng
Heidelberg : Springer Verlag
Web Information Systems Engineering – WISE 2014
Lecture Notes in Computer Science ; 8786
0302-9743
978-3-319-11748-5
978-3-319-11749-2
10.1007/978-3-319-11749-2_25
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
Bayesian classifier
Data interlinking
Linked Data
Recommender systems
Social networks
Bayesian networks
Recommender systems
Social networking (online)
Data interlinking
Exploratory search
Linked datum
Rank scores
Real-world datasets
Web of datum
Classification (of information)
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Two approaches to the dataset interlinking recommendation problem
BookPart
Text
8786
324
339openAccess15th International Conference, October 12-14, 2014, Thessaloniki, GreecePA-16Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/14062022-12-02T15:03:41Zcom_123456789_1col_123456789_4ddc:100ddc:004status-type:acceptedVersiondoc-type:Articledoc-type:Textopen_access
Bedürftig, Thomas
Murawski, Roman
2017-04-21T11:19:49Z
2018-02-16T23:05:14Z
2017
Bedürftig, T.; Murawski, R.: Historische und philosophische Notizen über das Kontinuum. In: Mathematische Semesterberichte 74 (2017), Nr. 1, S. 63-88. DOI: https://doi.org/10.1007/s00591-017-0179-2
http://www.repo.uni-hannover.de/handle/123456789/1406
http://dx.doi.org/10.15488/1381
The final publication is available at Springer via http://dx.doi.org/10.1007/s00591-017-0179-2
acceptedVersion
ger
Heidelberg : Springer Verlag
Mathematische Semesterberichte 74 (2017), Nr. 1
0720-728X
10.1007/s00591-017-0179-2
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::100 | Philosophie
Historische und philosophische Notizen über das Kontinuum
Article
Text
1
26openAccess2018-02-16PA-81Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/14082022-12-02T15:04:49Zcom_123456789_1col_123456789_4ddc:004status-type:acceptedVersiondoc-type:Articledoc-type:Textopen_accessddc:370
Dietze, Stefan
Taibi, Davide
Yu, Hong Qing
Dovrolis, Nikolas
2017-04-21T11:19:49Z
2017-04-21T11:19:49Z
2015
Dietze, S.; Taibi, D.; Yu, H.Q.; Dovrolis, N.: A Linked Dataset of medical educational resources. In: British Journal of Educational Technology 46 (2015), Nr. 5, S. 1123-1129. DOI: https://doi.org/10.1111/bjet.12276
http://www.repo.uni-hannover.de/handle/123456789/1408
http://dx.doi.org/10.15488/1383
Reusable educational resources have become increasingly important for enhancing learning and teaching experiences, particularly in the medical domain, where resources are especially expensive to produce. While interoperability across educational resource metadata repositories is still limited due to the heterogeneity of metadata standards and interface mechanisms and a lack of shared or aligned controlled vocabularies, Linked Data (LD) principles, based on W3C standards and supported through a wide range of tools, open up opportunities to alleviate such problems. We introduce the "mEducator Linked Educational Resources" dataset, which offers a range of open educational resources for the medical domain, exposed through LD principles. Data have been generated through a combination of manual curation and semi-automated harvesting techniques, and state-of-the-art enrichment and clustering techniques were deployed in order to classify and categorize the data, toward improved reusability and access. The data are currently used by a range of educational applications and are accessible to third parties and developers, for instance through the LinkedUp Catalog and other registries, to facilitate further take-up and applications.
EC/eContentplus/mEducator project
acceptedVersion
eng
Hoboken, NJ : Blackwell Publishing Ltd
British Journal of Educational Technology 46 (2015), Nr. 5
0007-1013
10.1111/bjet.12276
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
Metadata
Reusability
Clustering techniques
Educational Applications
Educational resource
Interface mechanisms
Learning and teachings
Metadata repositories
Metadata Standards
Open educational resources
Education
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::300 | Sozialwissenschaften, Soziologie, Anthropologie::370 | Erziehung, Schul- und Bildungswesen
A Linked Dataset of medical educational resources
Article
Text
5
46
1123
1129openAccessPA-16Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/14142022-12-02T15:03:41Zcom_123456789_1col_123456789_4ddc:004status-type:acceptedVersiondoc-type:Articledoc-type:Textopen_access
Gadiraju, Ujwal
Demartini, Gianluca
Kawase, Ricardo
Dietze, Stefan
2017-04-21T11:19:50Z
2017-04-21T11:19:50Z
2015
Gadiraju, U.; Demartini, G.; Kawase, R.; Dietze, S.: Human Beyond the Machine: Challenges and Opportunities of Microtask Crowdsourcing. In: IEEE Intelligent Systems 30 (2015), Nr. 4, S. 81-85. DOI: https://doi.org/10.1109/MIS.2015.66
http://www.repo.uni-hannover.de/handle/123456789/1414
http://dx.doi.org/10.15488/1389
In the 21st century, where automated systems and artificial intelligence are replacing arduous manual labor by supporting data-intensive tasks, many problems still require human intelligence. Over the last decade, by tapping into human intelligence through microtasks, crowdsourcing has found remarkable applications in a wide range of domains. In this article, the authors discuss the growth of crowdsourcing systems since the term was coined by columnist Jeff Howe in 2006. They shed light on the evolution of crowdsourced microtasks in recent times. Next, they discuss a main challenge that hinders the quality of crowdsourced results: the prevalence of malicious behavior. They reflect on crowdsourcing's advantages and disadvantages. Finally, they leave the reader with interesting avenues for future research.
acceptedVersion
eng
Piscataway, NJ : Institute of Electrical and Electronics Engineers Inc.
IEEE Intelligent Systems 30 (2015), Nr. 4
1541-1672
10.1109/MIS.2015.66
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
collective intelligence
crowdsourcing
data centric task
data science
microtask
MTurk
worker behavior
Artificial intelligence
Collective intelligences
Data centric
worker behavior
Automation
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Human Beyond the Machine: Challenges and Opportunities of Microtask Crowdsourcing
Article
Text
4
30
81
85openAccessPA-16Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/14152022-12-02T15:04:50Zcom_123456789_1col_123456789_4ddc:004status-type:acceptedVersiondoc-type:Articledoc-type:Textopen_accessddc:370
Yu, Hong Qing
Pedrinaci, Carlos
Dietze, Stefan
Domingue, John
2017-04-21T11:19:51Z
2017-04-21T11:19:51Z
2012
Yu, H.Q.; Pedrinaci, C.; Dietze, S.; Domingue, J.: Using linked data to annotate and search educational video resources for supporting distance learning. In: IEEE Transactions on Learning Technologies 5 (2012), Nr. 2, S. 130-142. DOI: https://doi.org/10.1109/TLT.2012.1
http://www.repo.uni-hannover.de/handle/123456789/1415
http://dx.doi.org/10.15488/1390
Multimedia educational resources play an important role in education, particularly for distance learning environments. With the rapid growth of the multimedia web, large numbers of educational video resources are increasingly being created by several different organizations. It is crucial to explore, share, reuse, and link these educational resources for better e-learning experiences. Most of the video resources are currently annotated in an isolated way, which means that they lack semantic connections. Thus, facilities for annotating these video resources are in high demand. These facilities create the semantic connections among video resources and allow their metadata to be understood globally. Adopting Linked Data technology, this paper introduces a video annotation and browser platform with two online tools: Annomation and SugarTube. Annomation enables users to semantically annotate video resources using vocabularies defined in the Linked Data cloud. SugarTube allows users to browse semantically linked educational video resources with enhanced web information from different online resources. In the prototype development, the platform uses existing video resources for the history courses from the Open University (United Kingdom). The result of the initial development demonstrates the benefits of applying Linked Data technology in the aspects of reusability, scalability, and extensibility.
EU/FP7/SOA4ALL
EU/FP7/NoTube
acceptedVersion
eng
Piscataway, NJ : Institute of Electrical and Electronics Engineers Inc.
IEEE Transactions on Learning Technologies 5 (2012), Nr. 2
1939-1382
10.1109/TLT.2012.1
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
Distance learning
e-learning
educational video resources
linked data
semantic annotation
semantic search
Semantic Web
web services
Distance learning environment
Educational resource
Educational videos
Linked datum
On-line tools
Online resources
Open universities
Prototype development
Rapid growth
Semantic annotations
Semantic search
United kingdom
Video annotations
Web information
Curricula
Data handling
Distance education
Metadata
Reusability
Semantic Web
Multimedia systems
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::300 | Sozialwissenschaften, Soziologie, Anthropologie::370 | Erziehung, Schul- und Bildungswesen
Using linked data to annotate and search educational video resources for supporting distance learning
Article
Text
2
5
130
142openAccessPA-16Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/14192022-12-02T15:03:41Zcom_123456789_1col_123456789_4ddc:004status-type:acceptedVersiondoc-type:Articledoc-type:Textopen_accessddc:370
Dietze, Stefan
Sanchez-Alonso, Salvador
Ebner, Hannes
Yu, Hong Qing
Giordano, Daniela
Marenzi, Ivana
Pereira Nunes, Bernardo
2017-04-21T11:19:52Z
2017-04-21T11:19:52Z
2013
Dietze, S.; Sanchez-Alonso, S.; Ebner, H.; Yu, H.Q.; Giordano, D. et al.: Interlinking educational resources and the web of data: A survey of challenges and approaches. In: Program 47 (2013), Nr. 1, S. 60-91. DOI: https://doi.org/10.1108/00330331211296312
http://www.repo.uni-hannover.de/handle/123456789/1419
http://dx.doi.org/10.15488/1394
Purpose: Research in the area of technology-enhanced learning (TEL) throughout the last decade has largely focused on sharing and reusing educational resources and data. This effort has led to a fragmented landscape of competing metadata schemas and interface mechanisms. More recently, semantic technologies were taken into account to improve interoperability. The linked data approach has emerged as the de facto standard for sharing data on the web. To this end, it is obvious that the application of linked data principles offers a large potential to solve interoperability issues in the field of TEL. This paper aims to address this issue. Design/methodology/approach: In this paper, approaches are surveyed that are aimed towards a vision of linked education, i.e. education which exploits educational web data. It particularly considers the exploitation of the wealth of already existing TEL data on the web by allowing its exposure as linked data and by taking into account automated enrichment and interlinking techniques to provide rich and well-interlinked data for the educational domain. Findings: So far, web-scale integration of educational resources is not facilitated, mainly due to the lack of take-up of shared principles, datasets and schemas. However, linked data principles are increasingly recognized by the TEL community. The paper provides a structured assessment and classification of existing challenges and approaches, serving as a potential guideline for researchers and practitioners in the field. Originality/value: Being one of the first comprehensive surveys on the topic of linked data for education, the paper has the potential to become a widely recognized reference publication in the area. This article is © Emerald Group Publishing and permission has been granted for this version to appear here http://www.emeraldinsight.com/doi/full/10.1108/00330331211296312.
Emerald does not grant permission for this article to be further copied/distributed or hosted elsewhere without the express permission from Emerald Group Publishing Limited.
EC/eContentplus/mEducator project
acceptedVersion
eng
Bingley : Emerald Publishing
Program 47 (2013), Nr. 1
0033-0337
10.1108/00330331211296312
Es gilt deutsches Urheberrecht. Das Dokument darf zum eigenen Gebrauch kostenfrei genutzt, aber nicht im Internet bereitgestellt oder an Außenstehende weitergegeben werden.
Education
Information technology
Learning methods
Linked data
Open educational resources
Semantic web
Technology-enhanced learning
Web data
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::300 | Sozialwissenschaften, Soziologie, Anthropologie::370 | Erziehung, Schul- und Bildungswesen
Interlinking educational resources and the web of data: A survey of challenges and approaches
Article
Text
1
47
60
91openAccessPA-16Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/15422022-12-02T15:04:49Zcom_123456789_1col_123456789_4ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersion
Bauland, Michael
Schneider, Thomas
Schnoor, Henning
Schnoor, Ilka
Vollmer, Heribert
2017-05-11T07:53:37Z
2017-05-11T07:53:37Z
2009
Bauland, M.; Schneider, T.; Schnoor, H.; Schnoor, I.; Vollmer, H.: The complexity of generalized satisfiability for linear temporal logic. In: Logical Methods in Computer Science 5 (2009), Nr. 1, S. 1-21. DOI: https://doi.org/10.2168/LMCS-5(1:1)2009
http://www.repo.uni-hannover.de/handle/123456789/1542
http://dx.doi.org/10.15488/1517
In a seminal paper from 1985, Sistla and Clarke showed that satisfiability for Linear Temporal Logic (LTL) is either NP-complete or PSPACE-complete, depending on the set of temporal operators used. If, in contrast, the set of propositional operators is restricted, the complexity may decrease. This paper undertakes a systematic study of satisfiability for LTL formulae over restricted sets of propositional and temporal operators. Since every propositional operator corresponds to a Boolean function, there exist infinitely many propositional operators. In order to systematically cover all possible sets of them, we use Post's lattice. With its help, we determine the computational complexity of LTL satisfiability for all combinations of temporal operators and all but two classes of propositional functions. Each of these infinitely many problems is shown to be either PSPACE-complete, NP-complete, or in P.
publishedVersion
eng
Braunschweig : International Federation for Computational Logic
Logical Methods in Computer Science 5 (2009), Nr. 1
1860-5974
https://doi.org/10.2168/LMCS-5(1:1)2009
CC BY-ND 2.0 Unported
https://creativecommons.org/licenses/by-nd/2.0/
Computational complexity
Linear temporal logic
Satisfiability
Linear temporal logic
LTL formulae
Post's lattice
Propositional functions
PSPACE-complete
Satisfiability
Systematic study
Temporal operators
Boolean functions
Computational complexity
Temporal logic
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
The complexity of generalized satisfiability for linear temporal logic
Article
Text
1
5
1
21openAccess
oai:www.repo.uni-hannover.de:123456789/15432022-12-02T15:04:50Zcom_123456789_1col_123456789_4ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersion
Meier, Arne
Mundhenk, Martin
Thomas, Michael
Vollmer, Heribert
2017-05-11T07:53:37Z
2017-05-11T07:53:37Z
2008
Meier, Arne; Mundhenk, M.; Thomas, Michael; Vollmer, Heribert: The Complexity of Satisfiability for Fragments of CTL and CTL⋆. In: Electronic Notes in Theoretical Computer Science 223 (2008), Nr. C, S. 201-213. DOI: https://doi.org/10.1016/j.entcs.2008.12.040
http://www.repo.uni-hannover.de/handle/123456789/1543
http://dx.doi.org/10.15488/1518
The satisfiability problems for CTL and CTL⋆ are known to be EXPTIME-complete and 2EXPTIME-complete, respectively (Fischer and Ladner (1979), Vardi and Stockmeyer (1985)). For fragments that use fewer temporal or propositional operators, the complexity may decrease. This paper undertakes a systematic study of satisfiability for CTL- and CTL⋆-formulae over restricted sets of propositional and temporal operators. We show that restricting the temporal operators yields satisfiability problems complete for 2EXPTIME, EXPTIME, PSPACE, and NP. Restricting the propositional operators either does not change the complexity (as determined by the temporal operators), or yields very low complexity like NC¹, TC⁰, or NLOGTIME.
Made available in DSpace on 2017-05-11T07:53:37Z (GMT). No. of bitstreams: 0
Previous issue date: 2008
publishedVersion
eng
Amsterdam : Elsevier BV
Electronic Notes in Theoretical Computer Science 223 (2008), Nr. C
1571-0661
https://doi.org/10.1016/j.entcs.2008.12.040
CC BY-NC-ND 3.0 Unported
https://creativecommons.org/licenses/by-nc-nd/3.0/
Post's Lattice
Satisfiability
Temporal Logic
Real time systems
Satisfiability problems
Systematic studies
Temporal operators
Very low complexities
Temporal logic
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
The Complexity of Satisfiability for Fragments of CTL and CTL⋆
Article
Text
C
223
201
213openAccess
oai:www.repo.uni-hannover.de:123456789/15442022-12-02T15:04:49Zcom_123456789_1col_123456789_4ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersion
Bauland, Michael
Mundhenk, Martin
Schneider, Thomas
Schnoor, Henning
Schnoor, Ilka
Vollmer, Heribert
2017-05-11T07:53:37Z
2017-05-11T07:53:37Z
2009
Bauland, M.; Mundhenk, M.; Schneider, T.; Schnoor, H.; Schnoor, I. et al.: The Tractability of Model-checking for LTL: The Good, the Bad, and the Ugly Fragments. In: Electronic Notes in Theoretical Computer Science 231 (2009), Nr. C, S. 277-292. DOI: https://doi.org/10.1016/j.entcs.2009.02.041
http://www.repo.uni-hannover.de/handle/123456789/1544
http://dx.doi.org/10.15488/1519
In a seminal paper from 1985, Sistla and Clarke showed that the model-checking problem for Linear Temporal Logic (LTL) is either NP-complete or PSPACE-complete, depending on the set of temporal operators used. If, in contrast, the set of propositional operators is restricted, the complexity may decrease. This paper systematically studies the model-checking problem for LTL formulae over restricted sets of propositional and temporal operators. For almost all combinations of temporal and propositional operators, we determine whether the model-checking problem is tractable (in P) or intractable (NP-hard). We then focus on the tractable cases, showing that they all are NL-complete or even logspace solvable. This leads to a surprising gap in complexity between tractable and intractable cases. It is worth noting that our analysis covers an infinite set of problems, since there are infinitely many sets of propositional operators. © 2009 Elsevier B.V. All rights reserved.
Made available in DSpace on 2017-05-11T07:53:37Z (GMT). No. of bitstreams: 0
Previous issue date: 2009
publishedVersion
eng
Amsterdam : Elsevier
Electronic Notes in Theoretical Computer Science 231 (2009), Nr. C
1571-0661
https://doi.org/10.1016/j.entcs.2009.02.041
CC BY-NC-ND 3.0 Unported
https://creativecommons.org/licenses/by-nc-nd/3.0/
computational complexity
linear temporal logic
model checking
Computational complexity
Mathematical operators
Real time systems
Temporal logic
Logspace
LTL formulae
Model-checking problems
NP-complete
NP-hard
Temporal operators
Model checking
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
The Tractability of Model-checking for LTL: The Good, the Bad, and the Ugly Fragments
Article
Text
C
231
277
292openAccess
oai:www.repo.uni-hannover.de:123456789/15782022-12-02T19:35:26Zcom_123456789_11col_123456789_12ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersion
Demidova, Elena
Barbieri, Nicola
Dietze, Stefan
Funk, Adam
Holzmann, Helge
Maynard, Diana
Papailiou, Nikolaos
Peters, Wim
Risse, Thomas
Spiliotopoulos, Dimitris
2017-05-30T11:42:06Z
2017-05-30T11:42:06Z
2014
Demidova, Elena; Barbieri, Nicola; Dietze, Stefan; Funk, Adam; Holzmann, Helge et al.: Analysing and Enriching Focused Semantic Web Archives for Parliament Applications. In: Future Internet 6 (2014), Nr. 3, S. 433-456. DOI: https://doi.org/10.3390/fi6030433
http://www.repo.uni-hannover.de/handle/123456789/1578
http://dx.doi.org/10.15488/1553
The web and the social web play an increasingly important role as an information source for Members of Parliament and their assistants, journalists, political analysts and researchers. They provide important background information, such as reactions to political events and comments made by the general public. The case study presented in this paper is driven by two European parliaments (the Greek and the Austrian parliament) and targets an effective exploration of political web archives. In this paper, we describe semantic technologies deployed to ease the exploration of the archived web and social web content and present evaluation results.
Made available in DSpace on 2017-05-30T11:42:06Z (GMT). No. of bitstreams: 0
Previous issue date: 2014
EC/ARCOMEM
ERC/ALEXANDRIA
ERC/KEYSTONE
publishedVersion
eng
Basel : MDPI AG
Future Internet 6 (2014), Nr. 3
1999-5903
https://doi.org/10.3390/fi6030433
CC BY 3.0 Unported
https://creativecommons.org/licenses/by/3.0/
enrichment
entity and event extraction
parliament libraries
semantic content analysis
topic detection
web archiving
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Analysing and Enriching Focused Semantic Web Archives for Parliament Applications
Article
Text
3
6
433
456openAccess
oai:www.repo.uni-hannover.de:123456789/16192022-12-02T19:35:26Zcom_123456789_11col_123456789_12ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersion
Maynard, Diana
Gossen, Gerhard
Funk, Adam
Fisichella, Marco
2017-05-31T11:36:48Z
2017-05-31T11:36:48Z
2014
Maynard, Diana; Gossen, Gerhard; Funk, Adam; Fisichella, Marco: Should I Care about Your Opinion? : Detection of Opinion Interestingness and Dynamics in Social Media. In: Future Internet 6 (2014), Nr. 3, S. 457-481. DOI: https://doi.org/10.3390/fi6030457
http://www.repo.uni-hannover.de/handle/123456789/1619
http://dx.doi.org/10.15488/1594
In this paper, we describe a set of reusable text processing components for extracting opinionated information from social media, rating it for interestingness, and detecting opinion events. We have developed applications in GATE to extract named entities, terms and events and to detect opinions about them, which are then used as the starting point for opinion event detection. The opinions are then aggregated over larger sections of text, to give some overall sentiment about topics and documents, and also some degree of information about interestingness based on opinion diversity. We go beyond traditional opinion mining techniques in a number of ways: by focusing on specific opinion-target extraction related to key terms and events, by examining and dealing with a number of specific linguistic phenomena, by analysing and visualising opinion dynamics over time, and by aggregating the opinions in different ways for a more flexible view of the information contained in the documents.
Made available in DSpace on 2017-05-31T11:36:48Z (GMT). No. of bitstreams: 0
Previous issue date: 2014
EU/270239
publishedVersion
eng
Basel : MDPI AG
Future Internet 6 (2014), Nr. 3
1999-5903
https://doi.org/10.3390/fi6030457
CC BY-NC-SA 3.0 Unported
https://creativecommons.org/licenses/by-nc-sa/3.0/
opinion event detection
opinion mining
social media
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Should I Care about Your Opinion? : Detection of Opinion Interestingness and Dynamics in Social Media
Article
Text
3
6
457
481openAccess
oai:www.repo.uni-hannover.de:123456789/16252022-12-02T19:35:26Zcom_123456789_11col_123456789_12ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersion
Risse, Thomas
Demidova, Elena
Dietze, Stefan
Peters, Wim
Papailiou, Nikolaos
Doka, Katerina
Stavrakas, Yannis
Plachouras, Vassilis
Senellart, Pierre
Carpentier, Florent
Mantrach, Amin
Cautis, Bogdan
Siehndel, Patrick
Spiliotopoulos, Dimitris
2017-05-31T11:36:49Z
2017-05-31T11:36:49Z
2014
Risse, Thomas; Demidova, Elena; Dietze, Stefan; Peters, Wim; Papailiou, Nikolaos et al.: The ARCOMEM Architecture for Social- and Semantic-Driven Web Archiving. In: Future Internet 6 (2014), Nr. 4, S. 688-716. DOI: https://doi.org/10.3390/fi6040688
http://www.repo.uni-hannover.de/handle/123456789/1625
http://dx.doi.org/10.15488/1600
The constantly growing amount of Web content and the success of the Social Web lead to increasing needs for Web archiving. These needs go beyond the pure preservation of Web pages. Web archives are turning into “community memories” that aim at building a better understanding of the public view on, e.g., celebrities, court decisions and other events. Due to the size of the Web, the traditional “collect-all” strategy is in many cases not the best method to build Web archives. In this paper, we present the ARCOMEM (From Collect-All Archives to Community Memories) architecture and implementation that uses semantic information, such as entities, topics and events, complemented with information from the Social Web to guide a novel Web crawler. The resulting archives are automatically enriched with semantic meta-information to ease the access and allow retrieval based on conditions that involve high-level concepts.
Made available in DSpace on 2017-05-31T11:36:49Z (GMT). No. of bitstreams: 0
Previous issue date: 2014
publishedVersion
eng
Basel : MDPI AG
Future Internet 6 (2014), Nr. 4
1999-5903
https://doi.org/10.3390/fi6040688
CC BY 4.0 International
https://creativecommons.org/licenses/by/4.0/
architecture
social Web
text analysis
web archiving
web crawler
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
The ARCOMEM Architecture for Social- and Semantic-Driven Web Archiving
Article
Text
4
6
688
716openAccess
oai:www.repo.uni-hannover.de:123456789/16922022-12-02T15:04:50Zcom_123456789_1col_123456789_4ddc:004status-type:acceptedVersiondoc-type:Articledoc-type:Textopen_accessddc:530
Fuhrwerk, Martin
Moghaddamnia, Sanam
Peissig, Jürgen
2017-07-04T10:06:01Z
2017-07-04T10:06:01Z
2017
Fuhrwerk, M.; Moghaddamnia, S.; Peissig, J.: Scattered Pilot-Based Channel Estimation for Channel Adaptive FBMC-OQAM Systems. In: IEEE Transactions on Wireless Communications 16 (2017), Nr. 3, S. 1687-1702. DOI: https://doi.org/10.1109/TWC.2017.2651806
http://www.repo.uni-hannover.de/handle/123456789/1692
http://dx.doi.org/10.15488/1667
Shaping the pulse of FilterBank MultiCarrier with Offset Quadrature Amplitude Modulation subcarrier modulation (FBMC-OQAM) systems offers a new degree of freedom for the design of mobile communication systems. In previous studies, we evaluated the gains arising from the application of Prototype Filter Functions (PFFs) and subcarrier spacing matched to the delay and Doppler spreads of doubly dispersive channels. In this paper, we investigate the impact of imperfect channel knowledge at the receiver on the performance of Channel Adaptive Modulation (CAM) in terms of channel estimation errors and Bit Error Rate (BER). To this end, the channel estimation error for two different interference mitigation schemes proposed in the literature is derived analytically and its influence on the BER performance is analyzed for practical channel scenarios. The results show that FBMC-OQAM systems utilizing CAM and scattered pilot-based channel estimation provide a significant performance gain over the current “one-fits-all” approach, in which a single system design is used for a variety of channel scenarios. Additionally, we verified that the often used assumption of a flat channel in the direct neighborhood of a pilot symbol is not valid for practical scenarios. © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Made available in DSpace on 2017-07-04T10:06:01Z (GMT). No. of bitstreams: 0
Previous issue date: 2017
acceptedVersion
eng
Piscataway, NJ : Institute of Electrical and Electronics Engineers Inc.
IEEE Transactions on Wireless Communications 16 (2017), Nr. 3
1536-1276
https://doi.org/10.1109/TWC.2017.2651806
German copyright law applies. The document may be used free of charge for personal use, but may not be made available on the Internet or passed on to third parties.
channel adaptive systems
channel estimation
FBMC
interference mitigation
intrinsic interference
offset-QAM
OFDM/FBMC-OQAM
scattered pilots
Adaptive modulation
Bit error rate
Cams
Degrees of freedom (mechanics)
Error statistics
Errors
Fading channels
Interference suppression
Matched filters
Mobile telecommunication systems
Modulation
Quadrature amplitude modulation
Radio broadcasting
Channel adaptive
Interference mitigation
Scattered-pilot
Channel estimation
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::500 | Naturwissenschaften::530 | Physik
Scattered Pilot-Based Channel Estimation for Channel Adaptive FBMC-OQAM Systems
Article
Text
3
16
1687
1702openAccessPA-19Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/16932022-12-02T15:03:41Zcom_123456789_1col_123456789_4ddc:004status-type:acceptedVersiondoc-type:Articledoc-type:Textopen_accessddc:530
Fuhrwerk, Martin
Thein, Christoph
Peissig, Jürgen
2017-07-04T10:06:03Z
2017-07-04T10:06:03Z
2013
Fuhrwerk, M.; Thein, C.; Peissig, J.: Audio quality measurements for wireless microphones in spectrum pooling scenarios. In: IEEE International Conference on Communications (2013), S. 2823-2828. DOI: https://doi.org/10.1109/ICC.2013.6654968
http://www.repo.uni-hannover.de/handle/123456789/1693
http://dx.doi.org/10.15488/1668
In this contribution, the influence of different broadband OFDM schemes on the perceptual audio quality of narrowband wireless microphone links is evaluated, since coexistence scenarios of wireless microphones and Orthogonal Frequency-Division Multiplexing (OFDM)-based services arise in the TV bands. To this end, we present different non-contiguous cyclic-prefix (CP-)OFDM and offset quadrature amplitude modulation (OQAM-)OFDM system designs based on the spectrum pooling concept. We measure their power suppression in the subchannel allocated for the wireless microphone. As an indicator of the perceptual audio quality, we measure the objective difference grade of a colored-noise audio signal emitted over consumer-like hardware. The measurements show that the non-contiguous OQAM-OFDM scheme not only introduces lower interference to the FM link, but also requires fewer notched carriers than CP-OFDM. By applying non-contiguous OQAM-OFDM with an appropriate number of notched carriers instead of the classical CP-OFDM scheme, wireless microphone systems can still sustain operation at a significantly low SIR when non-professional hardware is applied. © 2013 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Made available in DSpace on 2017-07-04T10:06:03Z (GMT). No. of bitstreams: 0
Previous issue date: 2013
Sennheiser Electronic GmbH & Co. KG
acceptedVersion
eng
Piscataway, NJ : Institute of Electrical and Electronics Engineers Inc.
IEEE International Conference on Communications (2013)
1550-3607
https://doi.org/10.1109/ICC.2013.6654968
German copyright law applies. The document may be used free of charge for personal use, but may not be made available on the Internet or passed on to third parties.
Audio signal processing
Hardware
Microphones
Orthogonal frequency division multiplexing
Sound reproduction
Telecommunication systems
Audio quality
Colored noise
Cyclic Prefix
Narrow bands
OFDM schemes
Offset quadrature amplitude modulations
Spectrum pooling
Wireless Microphone
Audition
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::500 | Naturwissenschaften::530 | Physik
Audio quality measurements for wireless microphones in spectrum pooling scenarios
Article
Text
2823
2828openAccessPA-19Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/16952022-12-02T15:03:41Zcom_123456789_1col_123456789_4ddc:004status-type:acceptedVersiondoc-type:Articledoc-type:Textopen_accessddc:530
Dietrich, David
Abujoda, Ahmed
Rizk, Amr
Papadimitriou, Panagiotis
2017-07-04T10:06:03Z
2017-07-04T10:06:03Z
2017
Dietrich, D.; Abujoda, A.; Rizk, A.; Papadimitriou, P.: Multi-Provider Service Chain Embedding With Nestor. In: IEEE Transactions on Network and Service Management 14 (2017), Nr. 1, S. 91-105. DOI: https://doi.org/10.1109/TNSM.2017.2654681
http://www.repo.uni-hannover.de/handle/123456789/1695
http://dx.doi.org/10.15488/1670
Network function (NF) virtualization decouples NFs from the underlying middlebox hardware and promotes their deployment on virtualized network infrastructures. This essentially paves the way for the migration of NFs into clouds (i.e., NF-as-a-Service), achieving a drastic reduction of middlebox investment and operational costs for enterprises. In this context, service chains (expressing middlebox policies in the enterprise network) should be mapped onto datacenter networks, ensuring correctness, resource efficiency, as well as compliance with the provider's policy. The network service embedding (NSE) problem is further exacerbated by two challenging aspects: 1) traffic scaling caused by certain NFs (e.g., caches and WAN optimizers) and 2) NF location dependencies. Traffic scaling requires resource reservations different from the ones specified in the service chain, whereas NF location dependencies, in conjunction with the limited geographic footprint of NF providers (NFPs), raise the need for NSE across multiple NFPs. In this paper, we present a holistic solution to the multi-provider NSE problem. We decompose NSE into: 1) NF-graph partitioning performed by a centralized coordinator and 2) NF-subgraph mapping onto datacenter networks. We present linear programming formulations to derive near-optimal solutions for both problems. We address the challenging aspect of traffic scaling by introducing a new service model that supports demand transformations. We also define topology abstractions for NF-graph partitioning. Furthermore, we discuss the steps required to embed service chains across multiple NFPs, using our NSE orchestrator (Nestor). We perform an evaluation study of multi-provider NSE with emphasis on NF-graph partitioning optimizations tailored to the client and NFPs. Our evaluation results further uncover significant savings in terms of service cost and resource consumption due to the demand transformations. © 2017 IEEE. 
Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Made available in DSpace on 2017-07-04T10:06:03Z (GMT). No. of bitstreams: 0
Previous issue date: 2017
EU/FP7/T-NOVA/619520
DFG/Collaborative Research Center/1053 (MAKI)
EU/FP7/T-NOVA
DFG/CRC/1053
acceptedVersion
eng
Piscataway, NJ : Institute of Electrical and Electronics Engineers Inc.
IEEE Transactions on Network and Service Management 14 (2017), Nr. 1
1932-4537
https://doi.org/10.1109/TNSM.2017.2654681
German copyright law applies. The document may be used free of charge for personal use, but may not be made available on the Internet or passed on to third parties.
Network function virtualization
network service embedding
orchestration
service chaining
Chains
Function evaluation
Graph theory
Linear programming
Topology
Transfer functions
Virtual reality
Virtualization
Geographic footprints
Linear programming formulation
Near-optimal solutions
Network infrastructure
Network services
Resource reservations
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::500 | Naturwissenschaften::530 | Physik
Multi-Provider Service Chain Embedding With Nestor
Article
Text
1
14
91
105openAccessPA-21Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/17022022-12-02T15:03:41Zcom_123456789_1col_123456789_4ddc:004status-type:acceptedVersiondoc-type:Articledoc-type:Textopen_accessddc:530
Dietrich, David
Rizk, Amr
Papadimitriou, Panagiotis
2017-07-04T10:06:08Z
2017-07-04T10:06:08Z
2015
Dietrich, D.; Rizk, A.; Papadimitriou, P.: Multi-Provider Virtual Network Embedding With Limited Information Disclosure. In: IEEE Transactions on Network and Service Management 12 (2015), Nr. 2, S. 188-201. DOI: https://doi.org/10.1109/TNSM.2015.2417652
http://www.repo.uni-hannover.de/handle/123456789/1702
http://dx.doi.org/10.15488/1677
The ever-increasing need to diversify the Internet has recently revived the interest in network virtualization. Wide-area virtual network (VN) deployment raises the need for VN embedding (VNE) across multiple Infrastructure Providers (InPs), due to the InP's limited geographic footprint. Multi-provider VNE, in turn, requires a layer of indirection, interposed between the Service Providers and the InPs. Such brokers, usually known as VN Providers, are expected to have very limited knowledge of the physical infrastructure, since InPs will not be willing to disclose detailed information about their network topology and resource availability to third parties. Such information disclosure policies entail significant implications on resource discovery and allocation. In this paper, we study the challenging problem of multi-provider VNE with limited information disclosure (LID). In this context, we initially investigate the visibility of VN Providers on substrate network resources and question the suitability of topology-based requests for VNE. Subsequently, we present linear programming formulations for: (i) the partitioning of traffic matrix based VN requests into segments mappable to InPs, and (ii) the mapping of VN segments into substrate network topologies. VN request partitioning is carried out under LID, i.e., VN Providers access only information which is not deemed confidential by InPs. We further investigate the suboptimality of LID on VNE against a best-case scenario where the complete network topology and resource availability information is available to VN Providers. © 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Made available in DSpace on 2017-07-04T10:06:08Z (GMT). No. of bitstreams: 0
Previous issue date: 2015
acceptedVersion
eng
Piscataway, NJ : Institute of Electrical and Electronics Engineers Inc.
IEEE Transactions on Network and Service Management 12 (2015), Nr. 2
1932-4537
https://doi.org/10.1109/TNSM.2015.2417652
German copyright law applies. The document may be used free of charge for personal use, but may not be made available on the Internet or passed on to third parties.
Network virtualization
Virtual network embedding
Topology abstraction
Virtualized infrastructures
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::500 | Naturwissenschaften::530 | Physik
Multi-Provider Virtual Network Embedding With Limited Information Disclosure
Article
Text
2
12
188
201openAccessPA-21Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/17862022-12-02T18:18:53Zcom_123456789_1col_123456789_4ddc:004status-type:acceptedVersiondoc-type:Textdoc-type:ConferenceObjectopen_access
Deifel, Hans-Peter
Dietrich, Christian
Göttlinger, Merlin
Milius, Stefan
Lohmann, Daniel
Schröder, Lutz
2017-08-08T10:55:51Z
2017-08-08T10:55:51Z
2017-08-08
Deifel, H.-P.; Dietrich, C.; Göttlinger, M.; Milius, S.; Lohmann, D.; Schröder, L.: Automatic Verification of Application-Tailored OSEK Kernels. In: Formal Methods in Computer Aided Design (FMCAD), 2017, S. 196-203. DOI: http://dx.doi.org/10.23919/FMCAD.2017.8102260
http://www.repo.uni-hannover.de/handle/123456789/1786
http://dx.doi.org/10.15488/1761
The OSEK industrial standard governs the design of embedded real-time operating systems in the automotive domain. We report on efforts to develop verification methods for OSEK-conformant compilers, specifically of a code generator that weaves system calls and application code using a static configuration file, producing a stand-alone application that incorporates the relevant parts of the kernel. Our methodology involves two verification steps: On the one hand, we extract an OS-application interaction graph during the compilation phase and verify that it conforms to the standard, in particular regarding prioritized scheduling and interrupt handling. To this end, we generate from the configuration file a temporal specification of standard-conformant behaviour and model check the arising formulas on a labelled transition system extracted from the interaction graph. On the other hand, we verify that the actual generated code conforms to the interaction graph; this is done by graph isomorphism checking of the interaction graph against a dynamically-explored state-transition graph of the generated system.
Submitted by Christian Dietrich (dietrich@sra.uni-hannover.de) on 2017-08-08T10:53:45Z
No. of bitstreams: 1
kernels.pdf: 352074 bytes, checksum: 01e0f3bfcb82ed7821a6a6eca4451b79 (MD5)
Approved for entry into archive by Corinna Schneider (corinna.schneider@tib.eu) on 2017-08-08T10:55:51Z (GMT)
Made available in DSpace on 2017-08-08T10:55:51Z (GMT)
acceptedVersion
eng
Piscataway : IEEE
German copyright law applies. The document may be used free of charge for personal use, but may not be made available on the Internet or passed on to third parties.
OSEK
Real-time operating system
Formal verification
Application-specific tailoring
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Automatic Verification of Application-Tailored OSEK Kernels
ConferenceObject
Text
196
203openAccess
Full Version of the FMCAD 2017 Paper
Deifel, Hans-Peter; Dietrich, Christian; Göttlinger, Merlin; Milius, Stefan; Lohmann, Daniel; Schröder, Lutz
Formal Methods in Computer-Aided Design (FMCAD 2017), 2-6 October 2017, Vienna
Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/19502022-12-02T15:04:49Zcom_123456789_1col_123456789_4ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersionddc:510
Grabowski, Darius
Platte, Daniel
Hedrich, Lars
Barke, Erich
2017-09-14T14:08:59Z
2017-09-14T14:08:59Z
2006
Grabowski, D.; Platte, D.; Hedrich, L.; Barke, E.: Time Constrained Verification of Analog Circuits using Model-Checking Algorithms. In: Electronic Notes in Theoretical Computer Science 153 (2006), Nr. 3, S. 37-52. DOI: https://doi.org/10.1016/j.entcs.2006.01.026
http://www.repo.uni-hannover.de/handle/123456789/1950
http://dx.doi.org/10.15488/1925
In this contribution, we present algorithms for model checking of analog circuits that enable the specification of time constraints. Furthermore, a methodology for defining time-based specifications is introduced. An already known method for model checking of integrated analog circuits has been extended to take time constraints into account. The method is demonstrated on three industrial circuits, and the model-checking results are compared to verification by simulation.
Made available in DSpace on 2017-09-14T14:08:59Z (GMT). No. of bitstreams: 0
Previous issue date: 2006
publishedVersion
eng
Amsterdam : Elsevier BV
Electronic Notes in Theoretical Computer Science 153 (2006), Nr. 3
1571-0661
https://doi.org/10.1016/j.entcs.2006.01.026
CC BY-NC-ND 3.0 Unported
https://creativecommons.org/licenses/by-nc-nd/3.0
Constraint theory
Integrated circuits
Mathematical models
Specifications
Analog Circuits
CTL
Model Checking
Time Constraints
Analog computers
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::500 | Naturwissenschaften::510 | Mathematik
Time Constrained Verification of Analog Circuits using Model-Checking Algorithms
Article
Text
3 SPEC. ISS.
153
37
52openAccess
oai:www.repo.uni-hannover.de:123456789/19512022-12-02T15:04:49Zcom_123456789_1col_123456789_4ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersionddc:620
Rupp, Markus
Kaiser, Thomas
Nezan, Jean-Francois
Schmidt, Gerhard
2017-09-14T14:09:00Z
2017-09-14T14:09:00Z
2006
Rupp, M.; Kaiser, T.; Nezan, J.-F.; Schmidt, G.: Signal processing with high complexity: Prototyping and industrial design. In: Eurasip Journal on Embedded Systems 2006 (2006), Nr. 90363. DOI: https://doi.org/10.1155/ES/2006/90363
http://www.repo.uni-hannover.de/handle/123456789/1951
http://dx.doi.org/10.15488/1926
[No abstract available]
Made available in DSpace on 2017-09-14T14:09:00Z (GMT). No. of bitstreams: 0
Previous issue date: 2006
publishedVersion
eng
Heidelberg : SpringerOpen
Eurasip Journal on Embedded Systems 2006 (2006)
1687-3955
https://doi.org/10.1155/ES/2006/90363
CC BY 4.0 International
https://creativecommons.org/licenses/by/4.0
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::600 | Technik::620 | Ingenieurwissenschaften und Maschinenbau
Signal processing with high complexity: Prototyping and industrial design
Article
Text
2006
90363openAccess
oai:www.repo.uni-hannover.de:123456789/19722022-12-02T15:03:41Zcom_123456789_1col_123456789_4ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersionddc:510
Chandoo, Maurice
2017-09-22T12:52:56Z
2017-09-22T12:52:56Z
2016
Chandoo, M.: On the implicit graph conjecture. In: Leibniz International Proceedings in Informatics, LIPIcs 58 (2016), Nr. 23. DOI: https://doi.org/10.4230/LIPIcs.MFCS.2016.23
http://www.repo.uni-hannover.de/handle/123456789/1972
http://dx.doi.org/10.15488/1947
The implicit graph conjecture states that every sufficiently small, hereditary graph class has a labeling scheme with a polynomial-time computable label decoder. We approach this conjecture by investigating classes of label decoders defined in terms of complexity classes such as P and EXP. For instance, GP denotes the class of graph classes that have a labeling scheme with a polynomial-time computable label decoder. Until now it was not even known whether GP is a strict subset of GR where R is the class of recursive languages. We show that this is indeed the case and reveal a strict hierarchy akin to classical complexity. We also show that classes such as GP can be characterized in terms of graph parameters. This could mean that certain algorithmic problems are feasible on every graph class in GP. Lastly, we define a more restrictive class of label decoders using first-order logic that already contains many natural graph classes such as forests and interval graphs. We give an alternative characterization of this class in terms of directed acyclic graphs. By showing that some small, hereditary graph class cannot be expressed with such label decoders a weaker form of the implicit graph conjecture could be disproven.
Made available in DSpace on 2017-09-22T12:52:56Z (GMT). No. of bitstreams: 0
Previous issue date: 2016
publishedVersion
eng
Saarbrücken : Dagstuhl Publishing
Leibniz International Proceedings in Informatics, LIPIcs 58 (2016)
18688969
https://doi.org/10.4230/LIPIcs.MFCS.2016.23
CC BY 3.0 Unported
https://creativecommons.org/licenses/by/3.0
Adjacency labeling scheme
Complexity classes
Diagonalization
Logic
Computational complexity
Computer circuits
Decoding
Formal logic
Polynomial approximation
Adjacency labeling
Algorithmic problems
Complexity class
Diagonalizations
Directed acyclic graph (DAG)
First order logic
Logic
Recursive languages
Directed graphs
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::500 | Naturwissenschaften::510 | Mathematik
On the implicit graph conjecture
Article
Text
58
23
openAccess
oai:www.repo.uni-hannover.de:123456789/1973
2022-12-02T15:04:50Z
com_123456789_1
col_123456789_4
ddc:004
doc-type:Article
doc-type:Text
open_access
status-type:publishedVersion
ddc:510
Durand, Arnaud
Haak, Anselm
Kontinen, Juha
Vollmer, Heribert
2017-09-22T12:52:57Z
2017-09-22T12:52:57Z
2016
Durand, A.; Haak, A.; Kontinen, J.; Vollmer, H.: Descriptive complexity of #AC0 functions. In: Leibniz International Proceedings in Informatics, LIPIcs 62 (2016). DOI: https://doi.org/10.4230/LIPIcs.CSL.2016.20
http://www.repo.uni-hannover.de/handle/123456789/1973
http://dx.doi.org/10.15488/1948
We introduce a new framework for a descriptive complexity approach to arithmetic computations. We define a hierarchy of classes based on the idea of counting assignments to free function variables in first-order formulae. We completely determine the inclusion structure and show that #P and #AC0 appear as classes of this hierarchy. In this way, we unconditionally place #AC0 properly in a strict hierarchy of arithmetic classes within #P. We compare our classes with a hierarchy within #P defined in a model-theoretic way by Saluja et al. We argue that our approach is better suited to study arithmetic circuit classes such as #AC0 which can be descriptively characterized as a class in our framework.
publishedVersion
eng
Saarbrücken : Dagstuhl Publishing
Leibniz International Proceedings in Informatics, LIPIcs 62 (2016)
18688969
https://doi.org/10.4230/LIPIcs.CSL.2016.20
CC BY 3.0 Unported
https://creativecommons.org/licenses/by/3.0
Arithmetic circuits
Counting classes
Fagin's theorem
Finite model theory
Skolem function
Computational complexity
Logic circuits
Arithmetic circuit
Arithmetic computations
Counting class
Descriptive complexity
Fagin's theorem
Finite model theory
Function variables
Inclusion structure
Computer circuits
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::500 | Naturwissenschaften::510 | Mathematik
Descriptive complexity of #AC0 functions
Article
Text
62
openAccess
oai:www.repo.uni-hannover.de:123456789/1974
2022-12-02T15:04:49Z
com_123456789_1
col_123456789_4
ddc:004
doc-type:Article
doc-type:Text
open_access
status-type:publishedVersion
ddc:510
Kontinen, Juha
Kuusisto, Antti
Virtema, Jonni
2017-09-22T12:52:57Z
2017-09-22T12:52:57Z
2016
Kontinen, J.; Kuusisto, A.; Virtema, J.: Decidability of predicate logics with team semantics. In: Leibniz International Proceedings in Informatics, LIPIcs 58 (2016), No. 60. DOI: https://doi.org/10.4230/LIPIcs.MFCS.2016.60
http://www.repo.uni-hannover.de/handle/123456789/1974
http://dx.doi.org/10.15488/1949
We study the complexity of predicate logics based on team semantics. We show that the satisfiability problems of two-variable independence logic and inclusion logic are both NEXPTIME-complete. Furthermore, we show that the validity problem of two-variable dependence logic is undecidable, thereby solving an open problem from the team semantics literature. We also briefly analyse the complexity of the Bernays-Schönfinkel-Ramsey prefix classes of dependence logic.
Academy of Finland
ERC/647289
Jenny and Antti Wihuri Foundation
Vilho, Yrjö and Kalle Väisälä Foundation
publishedVersion
eng
Saarbrücken : Dagstuhl Publishing
Leibniz International Proceedings in Informatics, LIPIcs 58 (2016)
18688969
https://doi.org/10.4230/LIPIcs.MFCS.2016.60
CC BY 3.0 Unported
https://creativecommons.org/licenses/by/3.0
Complexity
Dependence logic
Team semantics
Two-variable logic
Formal logic
Problem solving
Semantics
Complexity
Dependence logic
Predicate logic
Satisfiability problems
Two-variable logic
Computer circuits
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::500 | Naturwissenschaften::510 | Mathematik
Decidability of predicate logics with team semantics
Article
Text
58
60
openAccess
oai:www.repo.uni-hannover.de:123456789/2005
2022-12-02T15:04:49Z
com_123456789_1
col_123456789_4
ddc:004
doc-type:Article
doc-type:Text
open_access
status-type:publishedVersion
Ebbing, Johannes
Kontinen, Juha
Mueller, Julian-Steffen
Vollmer, Heribert
2017-10-10T07:24:36Z
2017-10-10T07:24:36Z
2014
Ebbing, Johannes; Kontinen, Juha; Mueller, Julian-Steffen; Vollmer, Heribert: A fragment of dependence logic capturing polynomial time. In: Logical Methods in Computer Science 10 (2014), Nr. 3, 3. DOI: https://doi.org/10.2168/LMCS-10(3:3)2014
http://www.repo.uni-hannover.de/handle/123456789/2005
http://dx.doi.org/10.15488/1980
In this paper we study the expressive power of Horn formulae in dependence logic and show that they can express NP-complete problems. We therefore define an even smaller fragment, D*-Horn, and show that over finite successor structures it captures the complexity class P of all sets decidable in polynomial time. Furthermore, we show that the open D*-Horn formulae correspond to the negative fragment of SO∃-Horn.
publishedVersion
eng
Braunschweig : Tech. Univ. Braunschweig
Logical Methods in Computer Science 10 (2014), Nr. 3
1860-5974
https://doi.org/10.2168/LMCS-10(3:3)2014
CC BY-ND 2.0 Unported
https://creativecommons.org/licenses/by-nd/2.0/
dependence logic
horn-formulae
computational complexity
descriptive complexity
complexity
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
A fragment of dependence logic capturing polynomial time
Article
Text
3
10
3
openAccess
oai:www.repo.uni-hannover.de:123456789/2013
2022-12-02T15:04:49Z
com_123456789_1
col_123456789_4
ddc:004
doc-type:Article
doc-type:Text
open_access
status-type:publishedVersion
Boehler, Elmar
Creignou, Nadia
Galota, Matthias
Reith, Steffen
Schnoor, Henning
Vollmer, Heribert
2017-10-10T07:24:39Z
2017-10-10T07:24:39Z
2012
Boehler, Elmar; Creignou, Nadia; Galota, Matthias; Reith, Steffen; Schnoor, Henning et al.: Complexity classifications for different equivalence and audit problems for boolean circuits. In: Logical Methods in Computer Science 8 (2012), Nr. 3, 31. DOI: https://doi.org/10.2168/LMCS-8(3:31)2012
http://www.repo.uni-hannover.de/handle/123456789/2013
http://dx.doi.org/10.15488/1988
We study Boolean circuits as a representation of Boolean functions and consider different equivalence, audit, and enumeration problems. For a number of restricted sets of gate types (bases) we obtain efficient algorithms, while for all other gate types we show that these problems are at least NP-hard.
publishedVersion
eng
Braunschweig : Tech. Univ. Braunschweig
Logical Methods in Computer Science 8 (2012), Nr. 3
1860-5974
https://doi.org/10.2168/LMCS-8(3:31)2012
CC BY-ND 2.0 Unported
https://creativecommons.org/licenses/by-nd/2.0/
boolean circuits
complexity classification
isomorphism
satisfiability problems
hierarchy
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Complexity classifications for different equivalence and audit problems for boolean circuits
Article
Text
3
8
31
openAccess
oai:www.repo.uni-hannover.de:123456789/2033
2022-12-02T15:04:49Z
com_123456789_1
col_123456789_4
ddc:004
doc-type:Article
doc-type:Text
open_access
status-type:publishedVersion
Kontinen, Juha
Vollmer, Heribert
2017-10-10T08:16:57Z
2017-10-10T08:16:57Z
2010
Kontinen, Juha; Vollmer, Heribert: On second-order monadic monoidal and groupoidal quantifiers. In: Logical Methods in Computer Science 6 (2010), Nr. 3, 25. DOI: https://doi.org/10.2168/LMCS-6(3:25)2010
http://www.repo.uni-hannover.de/handle/123456789/2033
http://dx.doi.org/10.15488/2008
We study logics defined in terms of second-order monadic monoidal and groupoidal quantifiers. These are generalized quantifiers defined by monoid and groupoid word-problems, equivalently, by regular and context-free languages. We give a computational classification of the expressive power of these logics over strings with varying built-in predicates. In particular, we show that ATIME(n) can be logically characterized in terms of second-order monadic monoidal quantifiers.
publishedVersion
eng
Braunschweig : Tech. Univ. Braunschweig
Logical Methods in Computer Science 6 (2010), Nr. 3
1860-5974
https://doi.org/10.2168/LMCS-6(3:25)2010
CC BY-ND 2.0 Unported
https://creativecommons.org/licenses/by-nd/2.0/
monoid
groupoid
word-problem
leaf language
second-order generalized quantifier
computational complexity
descriptive complexity
generalized quantifiers
regular languages
permanent
logcfl
logic
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
On second-order monadic monoidal and groupoidal quantifiers
Article
Text
3
6
25
openAccess
oai:www.repo.uni-hannover.de:123456789/2067
2023-04-14T04:47:08Z
com_123456789_1
col_123456789_4
ddc:004
doc-type:BookPart
doc-type:Text
open_access
status-type:publishedVersion
Lück, Martin
Goranko, Valentin
Dam, Mads
2017-10-12T10:54:36Z
2017-10-12T10:54:36Z
2017
Lück, M.: The power of the filtration technique for modal logics with team semantics. In: Leibniz International Proceedings in Informatics, LIPIcs 82 (2017), 31. DOI: https://doi.org/10.4230/LIPIcs.CSL.2017.31
978-3-95977-045-3
http://www.repo.uni-hannover.de/handle/123456789/2067
https://doi.org/10.15488/2042
Modal Team Logic (MTL) extends Väänänen's Modal Dependence Logic (MDL) by Boolean negation. Its satisfiability problem is decidable, but its exact complexity is not yet well understood. We investigate a model-theoretical approach and generalize the successful filtration technique to work in team semantics. We identify an "existential" fragment of MTL that enjoys the exponential model property and is therefore, like Propositional Team Logic (PTL), complete for the class AEXP(poly). Moreover, superexponential filtration lower bounds for different fragments of MTL are proven, up to the full logic having no filtration for any elementary size bound. As a corollary, superexponential gaps of succinctness between MTL fragments of equal expressive power are shown.
publishedVersion
eng
Wadern : Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik GmbH
26th EACSL Annual Conference on Computer Science Logic (CSL 2017)
Leibniz international proceedings in informatics : LIPIcs ; 82
1868-8969
https://doi.org/10.4230/LIPIcs.CSL.2017.31
CC BY 3.0 Unported
https://creativecommons.org/licenses/by/3.0/
dependence logic
team logic
modal logic
finite model theory
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
The power of the filtration technique for modal logics with team semantics
BookPart
Text
82
31
openAccess
26th Annual EACSL Conference on Computer Science Logic, CSL 2017, August 20-24, 2017, Stockholm, Sweden
oai:www.repo.uni-hannover.de:123456789/2137
2023-04-14T04:56:06Z
com_123456789_1
col_123456789_3
ddc:004
doc-type:BookPart
doc-type:Text
open_access
status-type:publishedVersion
Kiermeier, Marie
Werner, Martin
Schewe, Sven
Schneider, Thomas
Wijsen, Jef
2017-10-24T08:25:16Z
2017-10-24T08:25:16Z
2017
Kiermeier, M.; Werner, M.: Similarity search for spatial trajectories using online lower bounding DTW and presorting strategies. In: Leibniz International Proceedings in Informatics, LIPIcs 90 (2017), 18. DOI: https://doi.org/10.4230/LIPIcs.TIME.2017.18
http://www.repo.uni-hannover.de/handle/123456789/2137
http://dx.doi.org/10.15488/2112
Similarity search on time series has received much attention from research and industry in the last decade. Dynamic time warping is one of the most widely used distance measures in this context, owing to the simplicity of its definition and the surprising quality of dynamic time warping for time series classification. However, dynamic time warping is not well-behaved with respect to many dimensionality reduction techniques, as it does not fulfill the triangle inequality. Additionally, most research on dynamic time warping has been performed on one-dimensional time series or in multivariate cases of varying dimensions. With this paper, we propose three extensions to LBRotation for two-dimensional time series (trajectories). We simplify LBRotation, adapt it to the online and data-streaming case, and show how to tune the pruning ratio in similarity search by using presorting strategies based on simple summaries of trajectories. Finally, we provide a thorough evaluation of these aspects on a large variety of datasets of spatial trajectories.
publishedVersion
eng
Wadern : Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik GmbH
24th International Symposium on Temporal Representation and Reasoning (TIME 2017)
Leibniz international proceedings in informatics : LIPIcs ; 90
1868-8969
978-3-95977-052-1
https://doi.org/10.4230/LIPIcs.TIME.2017.18
CC BY 3.0 Unported
https://creativecommons.org/licenses/by/3.0/
Trajectory Computing
Similarity Search
Dynamic Time Warping
Lower Bounds
k Nearest Neighbor Search
Spatial Presorting
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Similarity search for spatial trajectories using online lower bounding DTW and presorting strategies
BookPart
Text
90
18
openAccess
24th International Symposium on Temporal Representation and Reasoning, TIME 2017, October 16-18, 2017, Mons, Belgium
oai:www.repo.uni-hannover.de:123456789/2313
2022-12-13T15:12:27Z
com_123456789_1
col_123456789_4
ddc:004
doc-type:Article
doc-type:Text
open_access
status-type:publishedVersion
ddc:620
Povse, B.
Haddadin, S.
Belder, R.
Koritnik, D.
Bajd, T.
2017-11-17T08:10:19Z
2017-11-17T08:10:19Z
2015
Povse, B.; Haddadin, S.; Belder, R.; Koritnik, D.; Bajd, T.: A tool for the evaluation of human lower arm injury: approach, experimental validation and application to safe robotics. In: Robotica 34 (2015), Nr. 11, S. 2499-2515. DOI: https://doi.org/10.1017/S0263574715000156
http://www.repo.uni-hannover.de/handle/123456789/2313
http://dx.doi.org/10.15488/2287
This paper treats the systematic injury analysis of lower-arm robot–human impacts. For this purpose, a passive mechanical lower arm (PMLA) was developed that mimics the human impact response and is suitable for systematic impact testing and the prediction of mild contusions and lacerations. A mathematical model of the passive human lower arm is adapted for the control of the PMLA. Its biofidelity is verified by a number of comparative impact experiments with the PMLA and a human volunteer. The respective dynamic impact responses show very good consistency and support the fact that the developed device may serve as a human substitute in safety analysis for the described conditions. The collision tests were performed with two different robots: the DLR Lightweight Robot III (LWR-III) and the EPSON PS3L industrial robot. The data acquired in the PMLA impact experiments were used to encapsulate the results in a robot-independent safety curve, taking into account the robot's reflected inertia, velocity and impact geometry. Safety curves define the velocity boundaries on robot motions based on the instantaneous manipulator dynamics and possible human injury due to unforeseen impacts. Copyright © Cambridge University Press 2015
publishedVersion
eng
Cambridge : Cambridge University Press
Robotica 34 (2015), Nr. 11
0263-5747
https://doi.org/10.1017/S0263574715000156
German copyright law applies. The document may be used free of charge for personal use, but may not be made available on the internet or passed on to third parties. This contribution is freely accessible thanks to a (DFG-funded) Alliance or National Licence.
Control of robotic systems
Human biomechanics
Man-machine systems
Mechatronic systems
Robot dynamics
Impact testing
Interactive computer systems
Man machine systems
Robotics
Robots
Experimental validations
Impact experiment
Manipulator dynamics
Mechanical lower arms
Mechatronic systems
Robot dynamics
Robotic systems
Velocity boundary
Manipulators
Dewey Decimal Classification::600 | Technik::620 | Ingenieurwissenschaften und Maschinenbau
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
A tool for the evaluation of human lower arm injury: approach, experimental validation and application to safe robotics
Article
Text
11
34
2499
2515
openAccess
Nationallizenz
oai:www.repo.uni-hannover.de:123456789/2387
2022-12-13T15:12:27Z
com_123456789_1
col_123456789_4
ddc:004
doc-type:Article
doc-type:Text
open_access
status-type:publishedVersion
ddc:620
Nieto, J.
Slawiñski, E.
Mut, V.
Wagner, B.
2017-11-17T12:26:20Z
2017-11-17T12:26:20Z
2012
Nieto, J.; Slawiñski, E.; Mut, V.; Wagner, B.: Toward safe and stable time-delayed mobile robot teleoperation through sampling-based path planning. In: Robotica 30 (2012), Nr. 3, S. 351-361. DOI: https://doi.org/10.1017/S0263574711000695
http://www.repo.uni-hannover.de/handle/123456789/2387
http://dx.doi.org/10.15488/2361
This work proposes a teleoperation architecture for mobile robots in partially unknown environments under the presence of variable time delay. The system is provided with artificial intelligence, represented by a probabilistic path planner that, in combination with a prediction module, assists the operator while guaranteeing collision-free motion. For this purpose, a certain level of autonomy is given to the system. The structure was tested in indoor environments with different kinds of operators. A maximum time delay of 2 s was successfully coped with. © 2011 Cambridge University Press.
publishedVersion
eng
Cambridge : Cambridge University Press
Robotica 30 (2012), Nr. 3
0263-5747
https://doi.org/10.1017/S0263574711000695
German copyright law applies. The document may be used free of charge for personal use, but may not be made available on the internet or passed on to third parties. This contribution is freely accessible thanks to a (DFG-funded) Alliance or National Licence.
Human operator
Mobile robots
Path planning
Supervisory control
Time-delayed teleoperation
Human operator
Indoor environment
Path planners
Robot teleoperation
Sampling-based
Supervisory control
Time-delayed teleoperation
Variable time delay
Artificial intelligence
Degrees of freedom (mechanics)
Mobile robots
Remote control
Time delay
Motion planning
Dewey Decimal Classification::600 | Technik::620 | Ingenieurwissenschaften und Maschinenbau
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Toward safe and stable time-delayed mobile robot teleoperation through sampling-based path planning
Article
Text
3
30
351
361
openAccess
Nationallizenz
oai:www.repo.uni-hannover.de:123456789/2584
2022-12-02T19:24:35Z
com_123456789_15
col_123456789_16
ddc:004
doc-type:Text
doc-type:ConferenceObject
open_access
status-type:publishedVersion
Heller, Lambert
2018-01-05T07:57:00Z
2018-01-05T07:57:00Z
2018-01-05
Heller, Lambert: Forschung und Lehre in offenen P2P-Netzwerken – Konsequenzen von Blockchain für Informations-Infrastrukturen an Hochschulen. - Hannover : Institutionelles Repositorium der Leibniz Universität Hannover, 2018. DOI: https://doi.org/10.15488/2558
http://www.repo.uni-hannover.de/handle/123456789/2584
http://dx.doi.org/10.15488/2558
More information: tib.eu/blockchain
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-01-05T07:53:49Z
No. of bitstreams: 1
Heller_Forschung und Lehre in offenen P2P-Netzwerken.pdf: 2504564 bytes, checksum: 73d9bc9be19fe2ec9f3cb5573faa10dc (MD5)
publishedVersion
ger
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
CC0 1.0 Universal
https://creativecommons.org/publicdomain/zero/1.0/
Textmining
Blockchain
Peer-to-Peer
Protokoll <Datenverarbeitungssystem>
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Forschung und Lehre in offenen P2P-Netzwerken – Konsequenzen von Blockchain für Informations-Infrastrukturen an Hochschulen
ConferenceObject
Text
openAccess
Lambert Heller
10-Jahres-Feier des ZBIW, Köln, 21.11.2017
oai:www.repo.uni-hannover.de:123456789/2602
2022-12-02T19:24:35Z
com_123456789_15
col_123456789_16
ddc:004
doc-type:Text
doc-type:ConferenceObject
open_access
status-type:publishedVersion
Heller, Lambert
2018-01-15T15:25:39Z
2018-01-15T15:25:39Z
2018-01-15
Heller, Lambert: Advanced P2P architectures will set new standards for how we take care for scholarly works & interactions. - Hannover : Institutionelles Repositorium der Leibniz Universität Hannover, 2018. DOI: https://doi.org/10.15488/2576
http://www.repo.uni-hannover.de/handle/123456789/2602
http://dx.doi.org/10.15488/2576
Further information: https://tib.eu/Lambo. German version: https://doi.org/ch5d
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-01-15T15:25:12Z
No. of bitstreams: 1
Advanced P2P architectures will set new standards for how we take care for scholarly works & interaction.pdf: 175457 bytes, checksum: 5d38ef78e7f81d6cf1dfd09d35211e5f (MD5)
publishedVersion
ger
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
CC BY 3.0 DE
http://creativecommons.org/licenses/by/3.0/de/
Peer-to-Peer
BitTorrent
Blockchain
Protokoll <Datenverarbeitungssystem>
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Advanced P2P architectures will set new standards for how we take care for scholarly works & interaction
ConferenceObject
Text
openAccess
Lambert Heller
Academic Publishing Europe (APE), Berlin, 17 January 2018
oai:www.repo.uni-hannover.de:123456789/2743
2022-12-02T19:35:26Z
com_123456789_11
col_123456789_12
ddc:004
doc-type:Article
doc-type:Text
open_access
status-type:publishedVersion
Stecher, Rodolfo
Niederée, Claudia
Nejdl, Wolfgang
Bouquet, Paolo
2018-02-09T08:47:10Z
2018-02-09T08:47:10Z
2008
Stecher, R.; Niederée, C.; Nejdl, W.; Bouquet, P.: Adaptive ontology re-use: Finding and re-using sub-ontologies. In: International Journal of Web Information Systems 4 (2008), Nr. 2, S. 198-214. DOI: https://doi.org/10.1108/17440080810882379
http://www.repo.uni-hannover.de/handle/123456789/2743
http://dx.doi.org/10.15488/2717
Purpose - The discovery of the "right" ontology or ontology part is a central ingredient for effective ontology re-use. The purpose of this paper is to present an approach for supporting a form of adaptive re-use of sub-ontologies, where the ontologies are deeply integrated beyond pure referencing. Design/methodology/approach - Starting from an ontology draft which reflects the intended modeling perspective, the ontology engineer can be supported by suggesting similar already existing sub-ontologies and ways for integrating them with the existing draft ontology. This paper's approach combines syntactic, linguistic, structural and logical methods into an innovative modeling-perspective aware solution for detecting matchings between concepts from different ontologies. This paper focuses on the discovery and matching phase of this re-use process. Findings - Owing to the combination of techniques presented in this general approach, the work described performs in the general case as well as approaches tailored for a specific usage scenario. Research limitations/implications - The methods used rely on lexical information obtained from the labels of the concepts and properties in the ontologies, which makes this approach appropriate in cases where this information is available. Also, this approach can handle some missing label information. Practical implications - Ontology engineering tasks can take advantage from the proposed adaptive re-use approach in order to re-use existing ontologies or parts of them without introducing inconsistencies in the resulting ontology. Originality/value - The adaptive re-use of ontologies by finding and partially re-using parts of existing ontological resources for building new ontologies is a new idea in the field, and the inclusion of the modeling perspective in the computation of the matches adds a new perspective that could also be exploited by other matching approaches. © Emerald Group Publishing Limited.
publishedVersion
eng
Bingley : Emerald Group Publishing Ltd.
International Journal of Web Information Systems 4 (2008), Nr. 2
1744-0084
https://doi.org/10.1108/17440080810882379
German copyright law applies. The document may be used free of charge for personal use, but may not be made available on the internet or passed on to third parties. This contribution is freely accessible thanks to a (DFG-funded) Alliance or National Licence.
Computer software
Computer theory
Knowledge management systems
Specifications
Task specialization
Design/methodology/approach
Knowledge management system
Label information
Lexical information
Logical methods
Ontology engineering
Task specialization
Usage scenarios
Computer software
Information systems
Specifications
Hardware
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Adaptive ontology re-use: Finding and re-using sub-ontologies
Article
Text
2
4
198
214
openAccess
oai:www.repo.uni-hannover.de:123456789/2752
2022-12-13T15:12:26Z
com_123456789_1
col_123456789_6
ddc:004
doc-type:Article
doc-type:Text
open_access
status-type:publishedVersion
Overmeyer, Ludger
Stock, Andreas
2018-02-09T08:47:14Z
2018-02-09T08:47:14Z
2007
Overmeyer, L.; Stock, A.: Decision - Information - time/space. In: Kybernetes 36 (2007), Nr. 1, S. 32-41. DOI: https://doi.org/10.1108/03684920710741125
http://www.repo.uni-hannover.de/handle/123456789/2752
http://dx.doi.org/10.15488/2726
Purpose - The standard terms of time and space lead to logical contradictions near the borderline of physics. Therefore, we suggest a new term for time and space which broadens the understanding while taking the current knowledge of quantum mechanics into account. Design/methodology/approach - The procedure to define a broader and more basic term of time and space is based on the relation between the information stored inside a system, the decision that is to be connected with it and the interaction with other systems. Findings - Fundamental to this new understanding is a definition and an explanation of the terms system, decision and information. Based on these three terms, we developed a new understanding of time and space, which involves a specification and extension of the present understanding. A meaningful point is that these three terms determine each other, as they are open terms. Thus, they must be used in the interactive description before and during their own definition. Originality/value - Produces a new understanding and viewpoint, enlarged by taking into account the knowledge of quantum mechanics. © Emerald Group Publishing Limited.
publishedVersion
eng
Bingley : Emerald Group Publishing Ltd.
Kybernetes 36 (2007), Nr. 1
0368-492X
https://doi.org/10.1108/03684920710741125
German copyright law applies. The document may be used free of charge for personal use, but may not be made available on the internet or passed on to third parties. This contribution is freely accessible thanks to a (DFG-funded) Alliance or National Licence.
Cybernetics
Decision making
Information control
Time study
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Decision - Information - time/space
Article
Text
1
36
32
41
openAccess
Nationallizenz
oai:www.repo.uni-hannover.de:123456789/2756
2022-12-13T15:14:00Z
com_123456789_1
col_123456789_10
ddc:004
doc-type:Article
doc-type:Text
open_access
status-type:publishedVersion
Le Dinh, Thang
Rickenberg, Tim A.
Fill, Hans-Georg
Breitner, Michael H.
2018-02-09T09:27:50Z
2018-02-09T09:27:50Z
2015
Le Dinh, T.; Rickenberg, T.A.; Fill, H.-G.; Breitner, M.H.: Enterprise content management systems as a knowledge infrastructure: The knowledge-based content management framework. In: International Journal of e-Collaboration 11 (2015), Nr. 3, S. 49-70. DOI: https://doi.org/10.4018/ijec.2015070104
http://www.repo.uni-hannover.de/handle/123456789/2756
http://dx.doi.org/10.15488/2730
The rise of the knowledge-based economy has significantly transformed the economies of developed countries from managed economies into entrepreneurial economies, which deal with knowledge as both input and output. Consequently, knowledge has become a key asset for organizations and knowledge management is one of the driving forces of business success. One of the most important challenges faced by enterprises today is to manage both knowledge assets and the e-collaboration process between knowledge workers. Critical business knowledge and information is often contained in mostly unstructured documents in content management systems. Therefore, content management based on knowledge perspectives is crucial for organizations, especially knowledge-intensive organizations. Enterprise Content Management has evolved as an integrated approach to managing documents and content on an enterprise-wide scale. This approach must be enhanced in order to build a robust foundation to support knowledge development and the collaboration process. This paper presents the KBCM (Knowledge-Based Content Management) framework for constructing a knowledge infrastructure based on the perspective of knowledge components that could help enterprises create more business value by classifying content formally and enabling its transformation into valuable knowledge assets. Copyright © 2015 IGI Global.
Made available in DSpace on 2018-02-09T09:27:50Z (GMT). No. of bitstreams: 0
Previous issue date: 2015
publishedVersion
eng
Hershey, PA : I G I Global
International Journal of e-Collaboration 11 (2015), Nr. 3
1548-3673
https://doi.org/10.4018/ijec.2015070104
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the Internet or passed on to third parties. This article is freely accessible thanks to a DFG-funded Alliance or National Licence.
Design science research
Enterprise content management
Information management
Knowledge management
Knowledge-based content management framework
Design
Information management
Information services
Knowledge based systems
Knowledge engineering
Portals
Queueing networks
Societies and institutions
Content management
Content management system
Design-science researches
Enterprise content management systems
Enterprise content managements
Knowledge based economy
Knowledge-intensive organizations
Unstructured documents
Knowledge management
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Enterprise content management systems as a knowledge infrastructure: The knowledge-based content management framework
Article
Text
3
11
49
70openAccessNationallizenz
oai:www.repo.uni-hannover.de:123456789/27812022-12-13T15:14:00Zcom_123456789_1col_123456789_4ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersion
Syrjakow, Michael
Syrjakow, Elisabeth
Szczerbicki, Helena
2018-02-09T10:22:03Z
2018-02-09T10:22:03Z
2006
Syrjakow, M.; Syrjakow, E.; Szczerbicki, H.: Tool Support for Performance Modeling and Optimization. In: International Journal of Enterprise Information Systems (IJEIS) 2 (2006), Nr. 1, S. 30-53. DOI: https://doi.org/10.4018/jeis.2006010103
http://www.repo.uni-hannover.de/handle/123456789/2781
http://dx.doi.org/10.15488/2755
Most of the available modeling and simulation tools for performance analysis do not sufficiently support model optimization. One reason for this unsatisfactory situation is the lack of universally applicable and adaptive optimization strategies. Another reason is that modeling and simulation tools usually have a monolithic software design, which is difficult to extend with experimentation functionality. Such functionality has gained in importance in recent years due to the capability of automatically extracting valuable information and knowledge from complex models. One of the most important experimentation goals is to find model parameter settings that produce optimal model behavior. In this paper, we elaborate on the design of a powerful optimization component and its integration into existing modeling and simulation tools. For that purpose, we propose a hybrid integration approach that combines loose document-based and tight invocation-based integration concepts. Besides the integration concept for the optimization component, we also give a detailed insight into the applied optimization strategies. © 2006, IGI Global. All rights reserved.
Made available in DSpace on 2018-02-09T10:22:03Z (GMT). No. of bitstreams: 0
Previous issue date: 2006
publishedVersion
eng
Hershey, PA : I G I Global
International Journal of Enterprise Information Systems (IJEIS) 2 (2006), Nr. 1
1548-1115
https://doi.org/10.4018/jeis.2006010103
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the Internet or passed on to third parties. This article is freely accessible thanks to a DFG-funded Alliance or National Licence.
knowledge and information management
model optimization
performance modeling
Petri Nets
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Tool Support for Performance Modeling and Optimization
Article
Text
1
2
30
53openAccessNationallizenz
oai:www.repo.uni-hannover.de:123456789/30962022-12-02T18:24:34Zcom_123456789_1col_123456789_4ddc:004doc-type:Textopen_accessdoc-type:MasterThesisstatus-type:publishedVersion
Nejdl, Wolfgang
Akter, Morsheda
2018-03-13T13:55:35Z
2018-03-13T13:55:35Z
2015
Akter, M.: Mining Entities from Events. Hannover : Leibniz Universität Hannover, Department of Electrical Engineering and Computer Science, Master Thesis, 2015, 65 S. DOI: https://doi.org/10.15488/3066
http://www.repo.uni-hannover.de/handle/123456789/3096
http://dx.doi.org/10.15488/3066
Nowadays, Wikipedia has become a major source of data for analysis, research, and insight discovery, and many researchers are therefore interested in Wikipedia data. Given a set of entities, which we call seed entities, we want to understand the relationships between them; to gain insights, it is also important to obtain further related entities and analyze them. Entity resolution is a problem that arises in many information integration scenarios, and visualizing entities as a graph has attracted increasing demand and interest among researchers. Our main goal is to mine entities from events and to study how crowdsourcing techniques can be used effectively to generate an automated, trustworthy entity graph. Based on this foundation, we develop a model that generates entities and, using the link entities inside a page, extends the set of input seed entities. We develop models and methods that find co-occurrences between entities based on their events and automatically generate the entity graph. The approach also produces a word-cloud representation for a given user input.
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-03-13T13:55:20Z
No. of bitstreams: 1
Akter_MasterThesis.pdf: 4582621 bytes, checksum: 553c48946c7a926ede6cabda7a8deb86 (MD5)
Previous issue date: 2015
publishedVersion
eng
Hannover : Leibniz Universität Hannover. Department of Electrical Engineering and Computer Science
CC BY-NC-ND 3.0 DE
http://creativecommons.org/licenses/by-nc-nd/3.0/de/
Entity relationship model
Semantic data model
crowdsourcing
Wikipedia
Entity-Relationship-Datenmodell
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Mining Entities from Events
MasterThesis
Text65 S.openAccessMorshedaGottfried Wilhelm Leibniz UniversitätHannoverGottfried Wilhelm Leibniz Universität HannoverHannoverAkterDEmasterPA-47
oai:www.repo.uni-hannover.de:123456789/32082022-12-02T16:19:29Zcom_123456789_1col_123456789_4ddc:004status-type:acceptedVersiondoc-type:Articledoc-type:Textopen_access
Johannsmeier, Lars
Haddadin, Sami
2018-04-25T07:52:03Z
2018-04-25T07:52:03Z
2016-02-29
Johannsmeier, L. & Haddadin, S.: A hierarchical human-robot interaction-planning framework for task allocation in collaborative industrial assembly processes. In: IEEE Robotics and Automation Letters 2 (2017), Nr. 1, S. 41-48. DOI: https://doi.org/10.1109/LRA.2016.2535907
http://www.repo.uni-hannover.de/handle/123456789/3208
http://dx.doi.org/10.15488/3178
In this paper we propose a framework for task allocation in human-robot collaborative assembly planning. Our framework distinguishes between two main layers of abstraction and allocation. In the higher layer we use an abstract world model, incorporating a multi-agent human-robot team approach in order to describe the collaborative assembly planning problem. From this, nominal coordinated skill sequences for every agent are produced. In order to be able to treat humans and robots as agents of the same form, we move the relevant differences and peculiarities into distinct cost functions. The layer beneath handles the concrete skill execution. At the atomic level, skills are composed of complex hierarchical and concurrent hybrid state machines, which in turn coordinate the real-time behavior of the robot. Their careful design makes it possible to cope with unpredictable events, also on the decisional level, without having to explicitly plan for them; instead, one may also rely on manually designed skills. Such events are likely to happen in dynamic and potentially only partially known environments, which is especially true in the case of human presence. © 2017 IEEE
Submitted by Torsten Lilge (lilge@irt.uni-hannover.de) on 2018-04-24T19:13:37Z
No. of bitstreams: 1
JohannsmeierHad2017_accepted.pdf: 830969 bytes, checksum: 8af9739781b148f50b9a6be67a71beee (MD5)
Previous issue date: 2016-02-29
EU/H2020/688857/EU
acceptedVersion
eng
Piscataway, NJ : Institute of Electrical and Electronics Engineers Inc.
info:eu-repo/grantAgreement/EU/H2020/688857/EU
IEEE Robotics and Automation Letters 2 (2017), Nr. 1
2377-3766
10.1109/LRA.2016.2535907
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the Internet or passed on to third parties.
Physical Human-Robot Interaction
Assembly
Co-Worker
Optimal Planning
Mensch-Roboter Kollaboration
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::600 | Technik::620 | Ingenieurwissenschaften und Maschinenbau::621 | Angewandte Physik::621,3 | Elektrotechnik, Elektronik
A Hierarchical Human-Robot Interaction-Planning Framework for Task Allocation in Collaborative Industrial Assembly Processes
Article
Text
41
48openAccessIEEE Robotics and Automation Letters 2 (2017), Nr. 1LarsSamiJohannsmeierHaddadinprohibitedVerlagspolicy
oai:www.repo.uni-hannover.de:123456789/32612022-12-02T15:04:49Zcom_123456789_1col_123456789_4col_123456789_6ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersionddc:600
Dao, Quang Huy
Skubacz-Feucht, Alexandra
Lüers, Bernard
Witzendorff, Philipp von
Ahe, Christopher von der
Overmeyer, Ludger
Geck, Bernd
2018-05-04T13:15:08Z
2018-05-04T13:15:08Z
2016
Dao, Q.H.; Skubacz-Feucht, A.; Lüers, B.; von Witzendorff, P.; von der Ahe, C. et al.: Novel Design Concept of an Optoelectronic Integrated RF Communication Module. In: Procedia Technology 26 (2016), S. 245-251. DOI: https://doi.org/10.1016/j.protcy.2016.08.033
http://www.repo.uni-hannover.de/handle/123456789/3261
http://dx.doi.org/10.15488/3231
This contribution presents a novel design concept of a 24 GHz radio frequency communication module. The integration of optical and electrical components is a particular challenge, since the module is miniaturized in order to be integrated into arbitrary metallic workpieces. The design concept and the scope of functions of the communication unit acting as a wireless sensor node are discussed. The development of a highly integrated radio frequency circuit and the realization of through-glass vias are some of the main aspects. The central control unit is an ultra-low-power microcontroller capable of a flexible connection of sensors. By using an energy-harvesting concept consisting of a solar cell with an efficiency of 40% and a supercapacitor, the availability of energy is unlimited. Different lighting conditions are investigated in order to evaluate the available power of the solar cell. Furthermore, the power supply is investigated with respect to its voltage-current characteristics and the resulting operating time of the whole unit in a low-ambient-light scenario.
Made available in DSpace on 2018-05-04T13:15:08Z (GMT). No. of bitstreams: 0
Previous issue date: 2016
publishedVersion
eng
Amsterdam : Elsevier
Procedia Technology 26
2212-0173
https://doi.org/10.1016/j.protcy.2016.08.033
CC BY-NC-ND 4.0 Unported
https://creativecommons.org/licenses/by-nc-nd/4.0/
24 GHz RFID
wireless sensor node
RF communication
3D-MID
optoelectronic packaging
optical power transfer
Konferenzschrift
Dewey Decimal Classification::600 | Technik
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Novel Design Concept of an Optoelectronic Integrated RF Communication Module
Article
Text
26
245
251openAccess3rd International Conference on System-integrated Intelligence: New Challenges for Product and Production Engineering, SysInt 2016, 13.06.-15.06.2016, Paderborn, Germany
oai:www.repo.uni-hannover.de:123456789/32852022-12-02T19:24:35Zcom_123456789_15col_123456789_16ddc:004doc-type:Textdoc-type:ConferenceObjectopen_accessstatus-type:publishedVersionddc:020
Heller, Lambert
2018-05-07T11:09:38Z
2018-05-07T11:09:38Z
2018
Heller, Lambert: Blockchain based educational certificates as a model for a P2P commons of scholarly metadata interaction. - Hannover : Institutionelles Repositorium der Leibniz Universität, 2018. DOI: https://doi.org/10.15488/3255
http://www.repo.uni-hannover.de/handle/123456789/3285
http://dx.doi.org/10.15488/3255
Talk given at the SPONBC2018, 7-8 May 2018, Vienna, Austria
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-05-07T11:09:08Z
No. of bitstreams: 1
Blockchain based educational certificates as a model for a P2P commons of scholarly metadata interaction.pdf: 614703 bytes, checksum: 0788a8db339c8a69884d23a91f178600 (MD5)
Previous issue date: 2018
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität
CC BY 3.0 DE
http://creativecommons.org/licenses/by/3.0/de/
Blockchain
Peer-to-Peer
Decentralized Identifiers
Blockcerts
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::020 | Bibliotheks- und Informationswissenschaft
Blockchain based educational certificates as a model for a P2P commons of scholarly metadata interaction
ConferenceObject
Text
https://www.blockchainforscience.com/2018/02/09/sponbc2018/openAccessLambertHellerScientific Publishing on the Blockchain SPONBC2018, 7-8 May 2018, Vienna, Austriaallowed
oai:www.repo.uni-hannover.de:123456789/33652023-01-03T10:16:04Zcom_123456789_1col_123456789_4ddc:004doc-type:BookPartdoc-type:Textopen_accessstatus-type:publishedVersion
Lübke, Daniel
Cruz-Cunha, Maria Manuela
Quintela Varajão, João Eduardo
Rijo, Rui
Martinho, Ricardo
Peppard, Joe
San Cristóbal, José Ramón
Monguet, Josep
2018-05-18T12:03:54Z
2018-05-18T12:03:54Z
2017
Lübke, D.: Extracting and Conserving Production Data as Test Cases in Executable Business Process Architectures. In: Cruz-Cunha, M.M.; Quintela Varajão, J.E.; Rijo, R.; Martinho, R.; Peppard, J.; San Cristóbal, J.R.; Monguet, J. (Eds.): CENTERIS 2017 - International Conference on ENTERprise Information Systems : ProjMAN 2017 - International Conference on Project MANagement : HCist 2017 - International Conference on Health and Social Care Information Systems and Technologies, CENTERIS/ProjMAN/HCist 2017. Amsterdam [u.a.] : Elsevier, 2017 (Procedia computer science ; 121), S. 1006-1013. DOI: https://doi.org/10.1016/j.procs.2017.11.130
http://www.repo.uni-hannover.de/handle/123456789/3365
http://dx.doi.org/10.15488/3335
Executable business processes are an important and critical software asset of organizations because they control and integrate critical information systems. Thus, testing them thoroughly is a very important task within the software development process. However, failures due to implementation defects still occur in production, which in turn means that the development team needs to analyze, fix, and repair the failing processes. In order to support reproducing the problem outside of the production system and to create better test cases for verifying the fixed implementation, we propose to use process mining techniques on the production process event logs to aid the support and development teams. With our approach, it is possible to automatically extract a working unit test case, with all partner services mocked, that can run in a development environment. In this paper we present the extraction algorithm, our implementation, and possible ways to integrate the tool into the support and development process. © 2017 The Authors. Published by Elsevier B.V.
Made available in DSpace on 2018-05-18T12:03:54Z (GMT). No. of bitstreams: 0
Previous issue date: 2017
publishedVersion
eng
Amsterdam [u.a.] : Elsevier
CENTERIS 2017 - International Conference on ENTERprise Information Systems : ProjMAN 2017 - International Conference on Project MANagement : HCist 2017 - International Conference on Health and Social Care Information Systems and Technologies, CENTERIS/ProjMAN/HCist 2017
Procedia computer science ; 121
1877-0509
https://doi.org/10.1016/j.procs.2017.11.130
CC BY-NC-ND 4.0 Unported
https://creativecommons.org/licenses/by-nc-nd/4.0/
Process Mining
Regression Test
Test Case Extraction
Unit Test
Data mining
Extraction
Information systems
Management science
Project management
Software design
Software engineering
Software testing
Telecommunication services
Testing
Business process architectures
Development environment
Extraction algorithms
Process mining
Regression tests
Software development process
Test case
Unit tests
Information management
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Extracting and Conserving Production Data as Test Cases in Executable Business Process Architectures
BookPart
Text
121
1006
1013openAccessCENTERIS 2017 - International Conference on ENTERprise Information Systems : ProjMAN 2017 - International Conference on Project MANagement : HCist 2017 - International Conference on Health and Social Care Information Systems and Technologies, CENTERIS/ProjMAN/HCist 2017, 8-10 November 2017, Barcelona, Spain
oai:www.repo.uni-hannover.de:123456789/33942022-12-02T15:03:41Zcom_123456789_1col_123456789_4ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersion
Hella, Lauri
Kuusisto, Antti
Meier, Arne
Virtema, Jonni
2018-05-23T07:46:37Z
2018-05-23T07:46:37Z
2017
Hella, L.; Kuusisto, A.; Meier, A.; Virtema, J.: Model checking and validity in propositional and modal inclusion logics. In: Leibniz International Proceedings in Informatics, LIPIcs 83 (2017), 32. DOI: https://doi.org/10.4230/LIPIcs.MFCS.2017.32
http://www.repo.uni-hannover.de/handle/123456789/3394
http://dx.doi.org/10.15488/3364
Propositional and modal inclusion logic are formalisms that belong to the family of logics based on team semantics. This article investigates the model checking and validity problems of these logics. We identify complexity bounds for both problems, covering both lax and strict team semantics. By doing so, we come close to finalising the programme that ultimately aims to classify the complexities of the basic reasoning problems for modal and propositional dependence, independence, and inclusion logics.
Made available in DSpace on 2018-05-23T07:46:37Z (GMT). No. of bitstreams: 0
Previous issue date: 2017
publishedVersion
eng
Wadern : Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik GmbH
Leibniz International Proceedings in Informatics, LIPIcs 83 (2017)
1868-8969
https://doi.org/10.4230/LIPIcs.MFCS.2017.32
CC BY 4.0 Unported
https://creativecommons.org/licenses/by/4.0/
Complexity
Inclusion Logic
Model Checking
Computer circuits
Semantics
Complexity
Complexity bounds
Reasoning problems
Model checking
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Model checking and validity in propositional and modal inclusion logics
Article
Text
32
83openAccess42nd International Symposium on Mathematical Foundations of Computer Science, MFCS 2017, August 21-25, 2017, Aalborg, Denmark
oai:www.repo.uni-hannover.de:123456789/34312022-12-02T19:24:35Zcom_123456789_11com_123456789_15col_123456789_16col_123456789_12ddc:004status-type:acceptedVersiondoc-type:Textdoc-type:ConferenceObjectopen_accessddc:020
Auer, Sören
Kovtun, Viktor
Prinz, Manuel
Kasprzik, Anna
Stocker, Markus
2018-05-24T10:31:34Z
2018-05-24T10:31:34Z
2018
Auer, S.; Kovtun, V.; Prinz, M.; Kasprzik, A.; Stocker, M.: Towards a Knowledge Graph for Science. In: Proceedings of the 8th International Conference on Web Intelligence, Mining and Semantics (WIMS 2018), 6 S. https://wims2018.pmf.uns.ac.rs/
http://www.repo.uni-hannover.de/handle/123456789/3431
http://dx.doi.org/10.15488/3401
The document-centric workflows in science have reached (or already exceeded) the limits of adequacy. This is emphasized by recent discussions on the increasing proliferation of scientific literature and the reproducibility crisis. This presents an opportunity to rethink the dominant paradigm of document-centric scholarly information communication and transform it into knowledge-based information flows by representing and expressing information through semantically rich, interlinked knowledge graphs. At the core of knowledge-based information flows is the creation and evolution of information models that establish a common understanding of information communicated between stakeholders as well as the integration of these technologies into the infrastructure and processes of search and information exchange in the research library of the future. By integrating these models into existing and new research infrastructure services, the information structures that are currently still implicit and deeply hidden in documents can be made explicit and directly usable. This has the potential to revolutionize scientific work as information and research results can be seamlessly interlinked with each other and better matched to complex information needs. Furthermore, research results become directly comparable and easier to reuse. As our main contribution, we propose the vision of a knowledge graph for science, present a possible infrastructure for such a knowledge graph as well as our early attempts towards an implementation of the infrastructure.
Submitted by Sören Auer (auer@et-inf.uni-hannover.de) on 2018-05-24T10:14:13Z
No. of bitstreams: 1
knowledge-graph-science.pdf: 475098 bytes, checksum: 10f7871178d7ad053626867e7ad15fd6 (MD5)
Previous issue date: 2018-06
acceptedVersion
eng
New York : ACM
WIMS 2018: 8th International Conference on Web Intelligence, Mining and Semantics
10.1145/3227609.3227689
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the Internet or passed on to third parties.
Knowledge Graph
Science and Technology
Research Infrastructure
Libraries
Information Science
Wissensgraphen
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::020 | Bibliotheks- und Informationswissenschaft
Towards a Knowledge Graph for Science
ConferenceObject
Text6 S.openAccessSörenViktorManuelAnnaMarkusAuerKovtunPrinzKasprzikStockerWIMS 2018: 8th International Conference on Web Intelligence, Mining and Semantics, 25-27 June, Novi Sad, SerbiaallowedVerlagspolicy
oai:www.repo.uni-hannover.de:123456789/34392022-12-02T19:24:35Zcom_123456789_15col_123456789_16ddc:004doc-type:Textdoc-type:ConferenceObjectopen_accessstatus-type:publishedVersion
Rückemann, Claus-Peter
Hülsmann, Friedrich
Gersbeck-Schierholz, Birgit
Skurowski, Przemyslaw
Staniszewski, Michal
Rückemann, Claus-Peter
2018-05-25T10:46:58Z
2018-05-25T10:46:58Z
2015
Rückemann, C.-P.; Hülsmann, F.; Gersbeck-Schierholz, B.; Skurowski, P.; Staniszewski, M.: Best Practice and Definitions of Knowledge and Computing. ICNAAM 2015, 23-29 September 2015, Rhodes, Greece. Hannover : Institutionelles Repositorium der Leibniz Universität Hannover, 2015. DOI: https://doi.org/10.15488/3409
http://www.repo.uni-hannover.de/handle/123456789/3439
http://dx.doi.org/10.15488/3409
Post-Summit Results of the Delegates' Summit, September 23, 2015, The Fifth Symposium on Advanced Computation and Information in Natural and Applied Sciences (SACINAS) at The 13th International Conference of Numerical Analysis and Applied Mathematics (ICNAAM), September 23-29, 2015, Rhodes, Greece
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-05-25T10:46:44Z
No. of bitstreams: 1
Rueckemann 2015, Delegates Summit Best Practices and Definitions of Knowledge and Computing.pdf: 1347880 bytes, checksum: 074c7f8bf8642496dd353afdcd767f97 (MD5)
Previous issue date: 2015
publishedVersion
ger
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the Internet or passed on to third parties.
Knowledge
Computing
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Best Practice and Definitions of Knowledge and Computing
ConferenceObject
TextopenAccessClaus-PeterFriedrichBirgitPrzemyslawMichalRückemannHülsmannGersbeck-SchierholzSkurowskiStaniszewskiThe 13th International Conference of Numerical Analysis and Applied Mathematics (ICNAAM), 23-29 September, 2015, Rhodes, Greeceallowed
oai:www.repo.uni-hannover.de:123456789/34402022-12-02T19:24:35Zcom_123456789_15col_123456789_16ddc:004doc-type:Textdoc-type:ConferenceObjectopen_accessstatus-type:publishedVersion
Rückemann, Claus-Peter
Kovacheva, Zlatinka
Schubert, Lutz
Lishchuk, Iryna
Gersbeck-Schierholz, Birgit
Hülsmann, Friedrich
Rückemann, Claus-Peter
2018-05-25T10:59:26Z
2018-05-25T10:59:26Z
2016
Rückemann, C.-P.; Kovacheva, Z.; Schubert, L.; Lishchuk, I.; Gersbeck-Schierholz, B.; Hülsmann, F.: Best Practice and Definitions of Data-centric and Big Data : Science, Society, Law, Industry, and Engineering. ICNAAM 2016, 19-25 September 2016, Rhodes, Greece. Hannover : Institutionelles Repositorium der Leibniz Universität Hannover, 2016. DOI: https://doi.org/10.15488/3410
http://www.repo.uni-hannover.de/handle/123456789/3440
http://dx.doi.org/10.15488/3410
Post-Summit Results of the Delegates' Summit, September 19, 2016, The Sixth Symposium on Advanced Computation and Information in Natural and Applied Sciences (SACINAS) at The 14th International Conference of Numerical Analysis and Applied Mathematics (ICNAAM), September 19-25, 2016, Rhodes, Greece
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-05-25T10:58:53Z
No. of bitstreams: 1
Rueckemann 2016, Delegates Summit Best Practices and Definitions of Data-centric and Big Data.pdf: 1412157 bytes, checksum: 26a36ba95015df4b33971f57402dee91 (MD5)
Previous issue date: 2016
publishedVersion
ger
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the Internet or passed on to third parties.
Data-centric
Big Data
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Best Practice and Definitions of Data-centric and Big Data : Science, Society, Law, Industry, and Engineering
ConferenceObject
TextopenAccessClaus-PeterZlatinkaLutzIrynaBirgitFriedrichRückemannKovachevaSchubertLishchukGersbeck-SchierholzHülsmannThe 14th International Conference of Numerical Analysis and Applied Mathematics (ICNAAM), 19-25 September 2016, Rhodes, Greeceallowed
oai:www.repo.uni-hannover.de:123456789/34412022-12-02T19:24:35Zcom_123456789_15col_123456789_16ddc:004doc-type:Textdoc-type:ConferenceObjectopen_accessstatus-type:publishedVersion
Rückemann, Claus-Peter
Iakushkin, Oleg O.
Gersbeck-Schierholz, Birgit
Hülsmann, Friedrich
Schubert, Lutz
Lau, Olaf
Rückemann, Claus-Peter
2018-05-25T11:06:57Z
2018-05-25T11:06:57Z
2017
Rückemann, C.-P.; Iakushkin, O.O.; Gersbeck-Schierholz, B.; Hülsmann, F.; Schubert, L.; Lau, O.: Best Practice and Definitions of Data Sciences : Beyond Statistics. ICNAAM 2017, 25-30 September 2017, Thessaloniki, Greece. Hannover : Institutionelles Repositorium der Leibniz Universität Hannover, 2017. DOI: https://doi.org/10.15488/3411
http://www.repo.uni-hannover.de/handle/123456789/3441
http://dx.doi.org/10.15488/3411
Post-Summit Results of the Delegates' Summit, September 25, 2017, The Seventh Symposium on Advanced Computation and Information in Natural and Applied Sciences (SACINAS) at The 15th International Conference of Numerical Analysis and Applied Mathematics (ICNAAM), September 25-30, 2017, Thessaloniki, Greece
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-05-25T11:06:31Z
No. of bitstreams: 1
Rueckemann 2017, Delegates Summit Best Practices and Definitions of Data Sciences.pdf: 1430607 bytes, checksum: eab34e287ec464c95ebafcb485c24576 (MD5)
Previous issue date: 2017
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the internet or passed on to third parties.
Data Science
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Best Practice and Definitions of Data Sciences : Beyond Statistics
ConferenceObject
Text | openAccess | Claus-Peter Rückemann; Oleg O. Iakushkin; Birgit Gersbeck-Schierholz; Friedrich Hülsmann; Lutz Schubert; Olaf Lau | The 15th International Conference of Numerical Analysis and Applied Mathematics (ICNAAM), 25-30 September 2017, Thessaloniki, Greece | allowed
oai:www.repo.uni-hannover.de:123456789/3467 | 2022-12-02T15:04:49Z | com_123456789_1, col_123456789_4 | ddc:004 | doc-type:Article, doc-type:Text | open_access | status-type:publishedVersion
Stefanidis, Kostas
Ntoutsi, Eirini
2018-06-08T11:57:14Z
2018-06-08T11:57:14Z
2016
Stefanidis, K.; Ntoutsi, E.: Cluster-based contextual recommendations. In: Advances in Database Technology - EDBT 2016-March (2016), Nr. März, S. 712-713. DOI: https://doi.org/10.5441/002/edbt.2016.100
http://www.repo.uni-hannover.de/handle/123456789/3467
http://dx.doi.org/10.15488/3437
In this work, we address the problem of contextual recommendations by exploiting the concept of subspace clustering. Specifically, we pre-partition users that have rated subsets of data items similarly into clusters and we associate a context situation with each cluster. The cluster context is defined as any internally stored information that can be used to characterize the cluster members per se. Then, given a query context, we identify the clusters with the most similar context, and we use their members for making suggestions in a collaborative filtering manner. © 2016, Copyright is with the authors.
Made available in DSpace on 2018-06-08T11:57:14Z (GMT). No. of bitstreams: 0
Previous issue date: 2016
publishedVersion
eng
Konstanz : OpenProceedings.org
Advances in Database Technology - EDBT 2016-March (2016)
2367-2005
https://doi.org/10.5441/002/edbt.2016.100
CC BY-NC-ND 4.0 Unported
https://creativecommons.org/licenses/by-nc-nd/4.0/
Clustering algorithms
Collaborative filtering
Database systems
Cluster-based
Context situations
Contextual recommendations
Data items
Query context
Sub-Space Clustering
Query processing
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Cluster-based contextual recommendations
Article
Text
March
2016
712
713
openAccess | 19th International Conference on Extending Database Technology, EDBT 2016, March 15-18, 2016, Bordeaux, France
oai:www.repo.uni-hannover.de:123456789/3492 | 2022-12-02T08:05:50Z | com_123456789_1, com_123456789_2961, col_123456789_4, col_123456789_2962 | ddc:004 | doc-type:Text, doc-type:DoctoralThesis | open_access | status-type:publishedVersion
Kwoczek, Simon
2018-06-11T06:11:11Z
2018-06-11T06:11:11Z
2018
Kwoczek, Simon: Enhanced mobility awareness : a data-driven approach to analyze traffic under planned special event scenarios. Hannover : Gottfried Wilhelm Leibniz Universität, Diss., 2018, xv, 155 S. DOI: https://doi.org/10.15488/3462
http://www.repo.uni-hannover.de/handle/123456789/3492
http://dx.doi.org/10.15488/3462
Traffic disruptions impose societal costs of billions of dollars every year. A constant increase in mobility demand, combined with ongoing urbanization, exacerbates the problem. Since extensions of the infrastructure are for the most part no longer feasible, researchers are trying to find solutions to increase the efficiency of road network usage. One key element to meeting that goal is to use smart prediction techniques on as many traffic-influencing factors as possible. With the availability of traffic datasets with high spatial and temporal resolutions, more and more data-driven solutions to predict the impact of these factors have been presented by the community. However, while the impacts of hazards, road accidents, and daily rush hour have been the subjects of intense study and analysis, the specific impact of so-called planned special events on traffic remains mostly unexplored. Are the effects of upcoming concerts, sporting events, etc. predictable at all? This is the main question that we address in this thesis. We focus our analysis on three different aspects. First, we analyze the general characteristics of event-caused traffic disruptions around different venues in Germany. The results show that the impact of events varies strongly, being highly affected by venue location, time of day, and event category. In the second step, we analyze the spatial impact of events around different venues. This spatial impact describes a set of road segments that people tend to use to get to and from the venue. To identify those preferred routes, we propose a classification-based technique that measures event influence for each road segment separately. The approach is based on a large-scale analysis across many different venues in Germany. Results show impact zones around several soccer venues in Germany, which we discuss in detail. In the third part of this thesis, we analyze features from online sources (Twitter, Facebook, etc.) in terms of their explanatory power towards the expected event impact. We collect a large list of different information sources for major events in different venues. Based on that collection, we present prediction models for various measures of event impact. Our results show that these approaches are capable of predicting the severity of event impact under certain conditions, which allows decision makers to create traffic management strategies tailored to event-caused traffic disruptions.
Submitted by Simon Kwoczek (s.kwoczek@stud.uni-hannover.de) on 2018-06-08T12:42:10Z; approved for entry into archive by Ursula Krys (ursula.krys@tib.eu) and made available in DSpace on 2018-06-11T06:11:11Z (GMT).
No. of bitstreams: 1
Dissertation_Simon_Kwoczek_Final.pdf: 21203848 bytes, checksum: d58aa52754880bda1bb472495f4eaa60 (MD5)
Previous issue date: 2018
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the internet or passed on to third parties.
Traffic prediction
Planned special event
Social media
Verkehrsprognosen
Veranstaltungen
Soziale Medien
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Enhanced mobility awareness : a data-driven approach to analyze traffic under planned special event scenarios
DoctoralThesis
Text | xv, 155 S. | openAccess | 2018-06-08 | Simon Kwoczek | Gottfried Wilhelm Leibniz Universität Hannover, Hannover | Gottfried Wilhelm Leibniz Universität Hannover, Hannover, Welfengarten 1 B, 30167 Hannover | DE | thesis.doctoral | ja | allowed
oai:www.repo.uni-hannover.de:123456789/3494 | 2022-12-02T18:18:53Z | com_123456789_1, col_123456789_4 | ddc:004 | doc-type:Text, doc-type:ConferenceObject | open_access | status-type:acceptedVersion
Endris, Kemele M
Almhithawi, Zuhair
Lytra, Ioanna
Vidal, Maria-Esther
Auer, Sören
2018-06-11T09:00:38Z
2018-06-11T09:00:38Z
2018
Endris, K.M.; Almhithawi, Z.; Lytra, I.; Vidal, M.-E.; Auer, S.: BOUNCER: Privacy-aware Query Processing Over Federations of RDF Datasets. DEXA 2018, 3-6 September 2018, Regensburg, Germany. http://www.dexa.org/dexa2018
http://www.repo.uni-hannover.de/handle/123456789/3494
http://dx.doi.org/10.15488/3464
Data provides the basis for emerging scientific and interdisciplinary data-centric applications with the potential of improving the quality of life for the citizens. However, effective data-centric applications demand data management techniques able to process a large volume of data which may include sensitive data, e.g., financial transactions, medical procedures, or personal data. Managing sensitive data requires the enforcement of privacy and access control regulations, particularly, during the execution of queries against datasets that include sensitive and nonsensitive data. In this paper, we tackle the problem of enforcing privacy regulations during query processing, and propose BOUNCER, a privacy-aware query engine over federations of RDF datasets. BOUNCER allows for the description of RDF datasets in terms of RDF molecule templates, i.e., abstract descriptions of the properties of the entities in an RDF dataset and their privacy regulations. Furthermore, BOUNCER implements query decomposition and optimization techniques able to identify query plans over RDF datasets that not only contain the relevant entities to answer a query, but that are also regulated by policies that allow for accessing these relevant entities. We empirically evaluate the effectiveness of the BOUNCER privacy-aware techniques over state-of-the-art benchmarks of RDF datasets. The observed results suggest that BOUNCER can effectively enforce access control regulations at different granularity without impacting the performance of query processing.
Submitted by Kemele Endris (endris@l3s.de) on 2018-06-09T08:56:24Z; approved for entry into archive by Corinna Schneider (corinna.schneider@tib.eu) and made available in DSpace on 2018-06-11T09:00:38Z (GMT).
No. of bitstreams: 1
camera-ready.pdf: 1442907 bytes, checksum: 34066d6ed2ceb7adbd0201d2f474c2f1 (MD5)
Previous issue date: 2018
acceptedVersion
eng
Heidelberg : Springer
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the internet or passed on to third parties.
Federated Engine
Access-control
Semantic Web
Linked Data
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
BOUNCER: Privacy-aware Query Processing Over Federations of RDF Datasets
ConferenceObject
Text | openAccess | Kemele M. Endris; Zuhair Almhithawi; Ioanna Lytra; Maria-Esther Vidal; Sören Auer | 29th International Conference on Database and Expert Systems Applications - DEXA 2018, 3-6 September 2018, Regensburg, Germany | allowed | Verlagspolicy
oai:www.repo.uni-hannover.de:123456789/3536 | 2022-12-02T18:18:53Z | com_123456789_1, com_123456789_3565, col_123456789_4, col_123456789_3566 | ddc:004 | doc-type:Text, doc-type:ConferenceObject | open_access | status-type:publishedVersion
Bakhshi Golestani, Hossein
Bauer, Johannes
Erfurt, Johannes
Fischer, Kristian
Gehlert, Alexander
Genser, Nils
Grosche, Simon
Kuhnke, Felix
Laude, Thorsten
Meuel, Holger
Munderloh, Marco
Spruck, Andreas
Voges, Jan
Voges, Jan
Leibniz Universität Hannover, Institut für Informationsverarbeitung
2018-07-03T13:50:54Z
2018-07-03T13:50:54Z
2018-07-03
Voges, Jan (Ed.): Proceedings of the 4th Summer School on Video Compression and Processing (SVCP) 2018. Hannover: Institutionelles Repositorium der Leibniz Universität Hannover, 2018, 8 S. http://svcp2018.tnt.uni-hannover.de/
http://www.repo.uni-hannover.de:8080/handle/123456789/3536
http://dx.doi.org/10.15488/3506
Proceedings of the 4th Summer School on Video Compression and Processing (SVCP) 2018
Submitted by Jan Voges (voges@tnt.uni-hannover.de) on 2018-07-03T13:43:02Z; approved for entry into archive by Corinna Schneider (corinna.schneider@tib.eu) and made available in DSpace on 2018-07-03T13:50:54Z (GMT).
No. of bitstreams: 1
SVCP_2018_Proceedings.pdf: 221073 bytes, checksum: 94218698c03fb54d94b8e1a88ddf0b3f (MD5)
Previous issue date: 2018-07-03
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the internet or passed on to third parties.
SVCP
Machine learning
Augmented reality
Virtual reality
Signal processing
Information Theory
Video compression
Image compression
Summer School
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Proceedings of the 4th Summer School on Video Compression and Processing (SVCP) 2018
ConferenceObject
Text | 8 S. | openAccess | Hossein Bakhshi Golestani; Johannes Bauer; Johannes Erfurt; Kristian Fischer; Alexander Gehlert; Nils Genser; Simon Grosche; Felix Kuhnke; Thorsten Laude; Holger Meuel; Marco Munderloh; Andreas Spruck; Jan Voges | 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany | prohibited
oai:www.repo.uni-hannover.de:123456789/3555 | 2022-12-02T07:47:02Z | com_123456789_1, com_123456789_2961, col_123456789_8, col_123456789_2962 | ddc:004, ddc:600 | doc-type:Book, doc-type:Text, doc-type:DoctoralThesis | open_access | status-type:publishedVersion
Rath, Thomas
2018-07-13T11:20:49Z
2018-07-13T11:20:49Z
1992
Rath, Thomas: Einsatz wissensbasierter Systeme zur Modellierung und Darstellung von gartenbautechnischem Fachwissen am Beispiel des hybriden Expertensystems HORTEX. Hannover : Institut für Technik in Gartenbau und Landwirtschaft, 1992 (Gartenbautechnische Informationen ; 34), 237 S.
978-3-926203-08-3
https://www.repo.uni-hannover.de:443/handle/123456789/3555
http://dx.doi.org/10.15488/3525
This thesis describes the use of knowledge-based systems (expert systems) for modeling and representing domain knowledge in the field of horticultural engineering. In particular, it was investigated to what extent methods for developing expert systems can be applied in horticultural engineering, to what extent they have to be adapted, and to what extent they enable an extension of conventional modeling. Knowledge-based systems are understood here as computer programs that maintain a strict separation between a knowledge base and an inference mechanism.
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-07-13T11:19:35Z; approved for entry into archive and made available in DSpace on 2018-07-13T11:20:49Z (GMT).
No. of bitstreams: 1
Diss_Rath_Hortex.pdf: 5749246 bytes, checksum: 41a5a977197fa7c39080b4bd9f127fa1 (MD5)
Previous issue date: 1992
publishedVersion
ger
Hannover : Institut für Technik in Gartenbau und Landwirtschaft
Gartenbautechnische Information; 34
CC BY 3.0 DE
http://creativecommons.org/licenses/by/3.0/de/
Expertensystem
Wissensbasierte Systeme
Wissensmodellierung
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Dewey Decimal Classification::600 | Technik
Einsatz wissensbasierter Systeme zur Modellierung und Darstellung von gartenbautechnischem Fachwissen am Beispiel des hybriden Expertensystems HORTEX
DoctoralThesis
Book
Text | 237 S. | openAccess | 1992 | Thomas Rath | Gottfried Wilhelm Leibniz Universität Hannover, Hannover | Gottfried Wilhelm Leibniz Universität Hannover, Welfengarten 1 B, 30167 Hannover | DE | thesis.doctoral | allowed
oai:www.repo.uni-hannover.de:123456789/3560 | 2022-12-02T18:18:53Z | com_123456789_1, com_123456789_3565, col_123456789_4, col_123456789_3566 | ddc:004 | doc-type:Text, doc-type:ConferenceObject | open_access | status-type:publishedVersion
Golestani, Hossein
Meuel, Holger
Voges, Jan
Laude, Thorsten
Erfurt, Johannes
Lim, Wang
Schwarz, Heiko
Marpe, Detlev
Wiegand, Thomas
Genser, Nils
Seiler, Jürgen
Kaup, André
Munderloh, Marco
Dedjouong, Armel
Bahlau, Sascha
Klemt-Albert, Katharina
Ostermann, Jörn
Samayoa, Yasser
Purushothaman, Suraja Koottachiara Nikarth
Voges, Jan
2018-07-19T10:41:01Z
2018-07-19T10:41:01Z
2018-07-18
Voges, Jan (Ed.): Extended Proceedings 4th Summer School on Video Compression and Processing (SVCP) 2018. Hannover: Institutionelles Repositorium der Leibniz Universität Hannover, 2018, 137 S. http://svcp2018.tnt.uni-hannover.de/
https://www.repo.uni-hannover.de:443/handle/123456789/3560
http://dx.doi.org/10.15488/3530
Extended Proceedings 4th Summer School on Video Compression and Processing (SVCP) 2018
Submitted by Jan Voges (voges@tnt.uni-hannover.de) on 2018-07-18T13:28:51Z; approved for entry into archive by Corinna Schneider (corinna.schneider@tib.eu) and made available in DSpace on 2018-07-19T10:41:01Z (GMT).
No. of bitstreams: 1
Extended_Proceedings.pdf: 35325767 bytes, checksum: de7a590007095cdbcd2fffbf98410e52 (MD5)
Previous issue date: 2018-07-18
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the internet or passed on to third parties.
Virtual reality
SVCP
Augmented reality
Machine learning
Signal processing
Information theory
Video compression
Image compression
Motion compensation
Video coding
MPEG-G
Genomic information compression
JEM
AV1
HEVC
Adaptive loop filter
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Extended Proceedings 4th Summer School on Video Compression and Processing (SVCP) 2018
ConferenceObject
Text | 137 S. | openAccess | Hossein Golestani; Holger Meuel; Jan Voges; Thorsten Laude; Johannes Erfurt; Wang Lim; Heiko Schwarz; Detlev Marpe; Thomas Wiegand; Nils Genser; Jürgen Seiler; André Kaup; Marco Munderloh; Armel Dedjouong; Sascha Bahlau; Katharina Klemt-Albert; Jörn Ostermann; Yasser Samayoa; Suraja Koottachiara Nikarth Purushothaman | 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany | prohibited
oai:www.repo.uni-hannover.de:123456789/3567 | 2022-12-02T18:18:53Z | com_123456789_1, com_123456789_3565, col_123456789_4, col_123456789_3566 | ddc:004 | doc-type:Text, doc-type:ConferenceObject | open_access | status-type:publishedVersion
Erfurt, Johannes
Lim, W
Schwarz, H.
Marpe, D.
Wiegand, T.
Leibniz Universität Hannover, Institut für Informationsverarbeitung
2018-07-20T13:00:14Z
2018-07-20T13:00:14Z
2018
Erfurt, J.; Lim, W.; Schwarz, H.; Marpe, D.; Wiegand, T.: Multiple Feature-Based Classifications Adaptive Loop Filter (MCALF). Presentation at the 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany
https://www.repo.uni-hannover.de:443/handle/123456789/3567
http://dx.doi.org/10.15488/3535
Presentation at the SVCP 2018
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-07-20T12:59:51Z; approved for entry into archive and made available in DSpace on 2018-07-20T13:00:14Z (GMT).
No. of bitstreams: 1
Erfurt et al, MULTIPLE FEATURE-BASED CLASSIFICATIONS.pdf: 2952200 bytes, checksum: bbbd68d8f6c0ce45fa1887680d7dc358 (MD5)
Previous issue date: 2018
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the internet or passed on to third parties.
In-Loop Filter
Loop Filter
HEVC
Adaptive Loop Filtering
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Multiple Feature-Based Classifications Adaptive Loop Filter (MCALF)
ConferenceObject
Text | openAccess | Johannes Erfurt; W. Lim; H. Schwarz; D. Marpe; T. Wiegand | 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany | prohibited
oai:www.repo.uni-hannover.de:123456789/3568 | 2022-12-02T18:18:53Z | com_123456789_1, com_123456789_3565, col_123456789_4, col_123456789_3566 | ddc:004 | doc-type:Text, doc-type:ConferenceObject | open_access | status-type:publishedVersion
Genser, Nils
Seiler, Jürgen
Kaup, André
Leibniz Universität Hannover, Institut für Informationsverarbeitung
2018-07-20T13:09:07Z
2018-07-20T13:09:07Z
2018
Genser, N.; Seiler, J.; Kaup, A.: Demonstration of Rapid Frequency Selective Reconstruction for Image Resolution Enhancement. Poster Presentation at the 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany. Hannover : Institutionelles Repositorium der Leibniz Universität Hannover. DOI: https://doi.org/10.15488/3536
https://www.repo.uni-hannover.de:443/handle/123456789/3568
http://dx.doi.org/10.15488/3536
Poster Presentation at the SVCP 2018
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-07-20T13:08:52Z; approved for entry into archive on 2018-07-20T13:09:06Z and made available in DSpace on 2018-07-20T13:09:07Z (GMT).
No. of bitstreams: 1
Genser et al, Demonstration of Rapid Requency Selective Reconstruction.pdf: 577194 bytes, checksum: 76ad907c9757cc03863ec65e5005af5f (MD5)
Previous issue date: 2018
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the internet or passed on to third parties.
FSR
Optical Cluster Eye
Micro-Optical Artificial Compound Eyes
Super-Resolution techniques
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Demonstration of Rapid Frequency Selective Reconstruction for Image Resolution Enhancement
ConferenceObject
Text | openAccess | Nils Genser; Jürgen Seiler; André Kaup | 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany | prohibited
oai:www.repo.uni-hannover.de:123456789/3569 | 2022-12-02T18:18:53Z | com_123456789_1, com_123456789_3565, col_123456789_4, col_123456789_3566 | ddc:004 | doc-type:Text, doc-type:ConferenceObject | open_access | status-type:publishedVersion
Golestani, Hossein
Leibniz Universität Hannover, Institut für Informationsverarbeitung
2018-07-20T13:15:38Z
2018-07-20T13:15:38Z
2018
Golestani, H.: 3D Models in Motion Compensation. Presentation at the 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany
https://www.repo.uni-hannover.de:443/handle/123456789/3569
http://dx.doi.org/10.15488/3537
Presentation at the SVCP 2018
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-07-20T13:15:22Z; approved for entry into archive and made available in DSpace on 2018-07-20T13:15:38Z (GMT).
No. of bitstreams: 1
Golestani, 3D Models in Motion Compensation.pdf: 3032527 bytes, checksum: 062feacf4d201770470af5d49dff9cac (MD5)
Previous issue date: 2018
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the internet or passed on to third parties.
Structure from Motion
Multi-View Reconstruction
Virtual View Synthesis
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
3D Models in Motion Compensation
ConferenceObject
Text | openAccess | Hossein Golestani | 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany | prohibited
oai:www.repo.uni-hannover.de:123456789/3570 | 2022-12-02T18:18:53Z | com_123456789_1, com_123456789_3565, col_123456789_4, col_123456789_3566 | ddc:004 | doc-type:Text, doc-type:ConferenceObject | open_access | status-type:publishedVersion
Laude, Thorsten
Leibniz Universität Hannover, Institut für Informationsverarbeitung
2018-07-20T13:21:09Z
2018-07-20T13:21:09Z
2018
Laude, T.: A Comparison of JEM and AV1 with HEVC. Presentation at the 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany
https://www.repo.uni-hannover.de:443/handle/123456789/3570
http://dx.doi.org/10.15488/3538
Presentation at the SVCP 2018
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-07-20T13:20:59Z; approved for entry into archive and made available in DSpace on 2018-07-20T13:21:09Z (GMT).
No. of bitstreams: 1
Laude, A Comparison of JEM and AV1 with HEVC.pdf: 614600 bytes, checksum: 49d4576019a78ba525f6ee792c0a5717 (MD5)
Previous issue date: 2018
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the internet or passed on to third parties.
Video Codec
Coding Efficiency
Runtimes
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
A Comparison of JEM and AV1 with HEVC
ConferenceObject
Text | openAccess | Thorsten Laude | 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany | prohibited
oai:www.repo.uni-hannover.de:123456789/3571 | 2022-12-02T18:18:53Z | com_123456789_1, com_123456789_3565, col_123456789_4, col_123456789_3566 | ddc:004 | doc-type:Text, doc-type:ConferenceObject | open_access | status-type:publishedVersion
Meuel, Holger
Leibniz Universität Hannover, Institut für Informationsverarbeitung
2018-07-20T13:25:31Z
2018-07-20T13:25:31Z
2018
Meuel, H.: Rate-Distortion Theory for Affine (Global) Motion Compensation in Video Coding. Presentation at the 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany
https://www.repo.uni-hannover.de:443/handle/123456789/3571
http://dx.doi.org/10.15488/3539
Presentation at the SVCP 2018
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-07-20T13:25:21Z; approved for entry into archive and made available in DSpace on 2018-07-20T13:25:31Z (GMT).
No. of bitstreams: 1
Meuel, Rate-Distortion Theory for Affine.pdf: 9993693 bytes, checksum: 52adc4553e12bef923bf747fc87bd44e (MD5)
Previous issue date: 2018
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the internet or passed on to third parties.
Motion Compensated Prediction
Motion Estimation
Prediction Error
Probability Density Function
Power Spectral Density
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Rate-Distortion Theory for Affine (Global) Motion Compensation in Video Coding
ConferenceObject
Text | openAccess | Holger Meuel | 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany | prohibited
oai:www.repo.uni-hannover.de:123456789/3572 | 2022-12-02T18:18:53Z | com_123456789_1, com_123456789_3565, col_123456789_4, col_123456789_3566 | ddc:004 | doc-type:Text, doc-type:ConferenceObject | open_access | status-type:publishedVersion
Munderloh, Marco
Dedjouong, Armel
Suraja, K.P.
Bahlau, Sascha
Klemt-Albert, Katharina
Ostermann, Jörn
Leibniz Universität Hannover, Institut für Informationsverarbeitung
2018-07-20T13:32:38Z
2018-07-20T13:32:38Z
2018
Munderloh, M. et al.: Optical Validation of Precast and Reinforced Concrete. Poster Presentation at the 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany
https://www.repo.uni-hannover.de:443/handle/123456789/3572
http://dx.doi.org/10.15488/3540
Poster Presentation at the SVCP 2018
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-07-20T13:32:28Z; approved for entry into archive and made available in DSpace on 2018-07-20T13:32:38Z (GMT).
No. of bitstreams: 1
Munderloh et al, Optical Validation of Precast and Reinforced Concrete.pdf: 13230867 bytes, checksum: f779efeed8c3ae0ba72a19d30debf2c5 (MD5)
Previous issue date: 2018
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
German copyright law applies. The document may be used free of charge for personal purposes, but it may not be made available on the internet or passed on to third parties.
Building Information Modeling
Automatic Optical Validation
Reinforcement Bar
Precast Concrete
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Optical Validation of Precast and Reinforced Concrete
ConferenceObject
Text | openAccess | Marco Munderloh; Armel Dedjouong; K.P. Suraja; Sascha Bahlau; Katharina Klemt-Albert; Jörn Ostermann | 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany | prohibited
oai:www.repo.uni-hannover.de:123456789/3573 | 2022-12-02T18:24:33Z | com_123456789_1, com_123456789_3565, col_123456789_4, col_123456789_3566 | ddc:004 | doc-type:Text, doc-type:ConferenceObject | open_access | status-type:publishedVersion
Samayoa, Yasser
Ostermann, Jörn
Leibniz Universität Hannover, Institut für Informationsverarbeitung
2018-07-20T13:41:56Z
2018-07-20T13:41:56Z
2018
Samayoa, Y.; Ostermann, J.: Video Transmission : An Overview of Video Compression and Communication Systems. Poster Presentation at the 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany. Hannover : Institutionelles Repositorium der Leibniz Universität Hannover. DOI: https://doi.org/10.15488/3541
https://www.repo.uni-hannover.de:443/handle/123456789/3573
http://dx.doi.org/10.15488/3541
Poster Presentation at the SVCP 2018
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-07-20T13:41:47Z
No. of bitstreams: 1
Samayoa et al, Video Transmission.pdf: 487192 bytes, checksum: a3bd219f019d24c04fae44603d9a5d3c (MD5)
Previous issue date: 2018
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
German copyright law applies. The document may be used free of charge for personal use, but it may not be made available on the internet or passed on to third parties.
HEVC
Orthogonal Frequency Division Modulation
OFDM
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Video Transmission : An Overview of Video Compression and Communication Systems
ConferenceObject
Text | openAccess | Yasser Samayoa; Jörn Ostermann | 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany | prohibited
oai:www.repo.uni-hannover.de:123456789/3574 | 2022-12-02T18:18:53Z | com_123456789_1, com_123456789_3565, col_123456789_4, col_123456789_3566 | ddc:004 | doc-type:Text, doc-type:ConferenceObject | open_access | status-type:publishedVersion
Voges, Jan
Leibniz Universität Hannover, Institut für Informationsverarbeitung
2018-07-20T13:46:49Z
2018-07-20T13:46:49Z
2018
Voges, J.: MPEG-G: The Standard for Genomic Information Representation. Presentation at the 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany
https://www.repo.uni-hannover.de:443/handle/123456789/3574
http://dx.doi.org/10.15488/3542
Presentation at the SVCP 2018
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-07-20T13:46:40Z
No. of bitstreams: 1
Voges, MPEG-G.pdf: 4056382 bytes, checksum: a9b60256e5429aa5e23b4f5e365507a5 (MD5)
Previous issue date: 2018
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
German copyright law applies. The document may be used free of charge for personal use, but it may not be made available on the internet or passed on to third parties.
Genome Sequencing
ISO/IEC 23092
File Format
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
MPEG-G: The Standard for Genomic Information Representation
ConferenceObject
Text | openAccess | Jan Voges | 4th Summer School on Video Compression and Processing (SVCP) 2018, 4-6 July 2018, Hannover, Germany | prohibited
oai:www.repo.uni-hannover.de:123456789/3600 | 2022-12-02T08:09:00Z | com_123456789_1, com_123456789_2961, col_123456789_4, col_123456789_2962 | ddc:004 | doc-type:Text | open_access | status-type:publishedVersion | doc-type:DoctoralThesis
Nejdl, Wolfgang
De Natale, Francesco G.B.
Ceroni, Andrea
2018-08-06T08:07:07Z
2018-08-06T08:07:07Z
2018
Ceroni, Andrea: Methods for managing, validating and retrieving event-related information in evolving contexts. Hannover : Gottfried Wilhelm Leibniz Universität, Diss., 2018, XVI, 138 S. DOI: https://doi.org/10.15488/3568
https://www.repo.uni-hannover.de:443/handle/123456789/3600
http://dx.doi.org/10.15488/3568
Events have always been fundamental building blocks of individual lives as well as of the whole world. Nowadays, thanks to the several technological advances achieved within the digital age, the processes of capturing, describing and spreading events have never been so simple and intuitive. This results in a ubiquitous presence of event-related information, which is digitally embedded in any form of media. Both the pervasiveness of such information and the benefits of its exploitation for many purposes have fostered decades of research effort to detect and summarize it. However, several issues emerge at subsequent stages and must be addressed to support the proper exploitation and consumption of event-related information. The work presented in this thesis is committed to this goal. The aforementioned ubiquity of events makes them exhibit different characteristics and appear in a diverse range of scenarios. Therefore, we categorize events according to three main aspects that come into play when considering the management and usage of event-related information over time, once it has been created. These are the degree of privacy, as events can be of public domain or rather pertain to a more personal sphere; the type of description, which is the form (e.g. textual or visual) in which events are described; and the time of usage, namely the temporal horizon over which event-related information is expected to be accessed and used. The problems addressed in this thesis concern different combinations of these aspects, each one subject to specific issues. Concerning the private sphere, we aim at properly managing large amounts of photographs taken during personal events, so that they can be easily revisited and enjoyed in the future.
The common habit of dumping every single picture, encouraged by the availability of cheap storage devices, poses serious threats to their future revisiting and calls for more selective strategies to identify the most important pictures in an entire collection, thus making the future reminiscence of the related events more enjoyable and less tedious. In fact, going through whole stored photo collections can be so cumbersome a procedure that it discourages doing it at all. We present a selection method that learns to identify, from a whole collection, the photos that the collection owner would like to keep for future reminiscence, outperforming approaches based on clustering and on the concept of coverage. Then, moving towards more public settings, we consider the problem of validating the occurrence of events of public domain in the real world based on the information contained in textual document collections. In scenarios where events are detected from large amounts of natural language text by automatic procedures, which might introduce false positive detections, being able to retain true events while discarding false ones becomes fundamental for a proper exploitation of the detected event-related information for any subsequent purpose. We therefore validate the verity of events by checking whether they are reported within a set of documents, which serve as ground truth, reaching substantial agreement with human evaluators. Moreover, when performing event validation as a post-processing step of event detection, we observed an increase in precision within the set of detected events. Finally, we make a temporal jump and consider a scenario where descriptive information about public events (e.g. news articles) is read after a few decades.
Since the original context of an event, needed for its proper comprehension, might have been forgotten or never known at all after such a relatively long time, we aim at retrieving contextualizing information to support the understanding of old events in the presence of wide temporal and contextual gaps. We investigate methods to formulate queries from event descriptions as seeds for retrieving topically and temporally relevant information from a context source, particularly aiming at high recall. Targeting recall as the query performance criterion makes the set of retrieved results a favorable starting point for pursuing additional objectives at subsequent stages.
Submitted by Andrea Ceroni (andrea.ceroni@siemens.com) on 2018-07-31T20:04:09Z
No. of bitstreams: 1
phd_thesis_final.pdf: 2685440 bytes, checksum: cd2f821d1327f635d43f6e869a433682 (MD5)
Previous issue date: 2018
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
CC BY 3.0 DE
http://creativecommons.org/licenses/by/3.0/de/
personal photo selection
event validation
recall-based query formulation
Persönliche Fotoauswahl
Ereignisvalidierung
Recall-basierte Anfragenformulierung
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Methods for managing, validating and retrieving event-related information in evolving contexts
DoctoralThesis
Text | XVI, 138 S. | openAccess | 2018-06-05 | Andrea Ceroni | Gottfried Wilhelm Leibniz Universität Hannover, Hannover | Gottfried Wilhelm Leibniz Universität Hannover, Hannover, Welfengarten 1 B, 30167 Hannover | DE | thesis.doctoral | ja | allowed
oai:www.repo.uni-hannover.de:123456789/3602 | 2022-12-02T08:09:01Z | com_123456789_1, com_123456789_2961, col_123456789_4, col_123456789_2962 | ddc:004 | doc-type:Text | open_access | status-type:publishedVersion | doc-type:DoctoralThesis
Schindler, Irena
2018-08-09T13:13:14Z
2018-08-09T13:13:14Z
2018
Schindler, Irena: Parameterized complexity of decision problems in non-classical logics. Hannover : Gottfried Wilhelm Leibniz Universität, Diss., 2018, xv, 117 S. DOI: https://doi.org/10.15488/3570
https://www.repo.uni-hannover.de:443/handle/123456789/3602
http://dx.doi.org/10.15488/3570
Parameterized complexity is a branch of computational complexity theory. The pioneers of this new and promising research field are Downey and Fellows. They suggest examining the structural properties of a given problem and restricting the instance by a parameter. In this thesis we investigate the parameterized complexity of various problems in default logic and in temporal logics. In the first section of Chapter 3 we introduce a dynamic programming algorithm which decides whether a given default theory has a consistent stable extension in fpt-time and enumerates all generating defaults that lead to a stable extension, with a pre-computation step that is linear in the input theory and triple exponential in the tree-width, followed by a linear delay to output solutions. In the second part of this chapter we lift the notion of backdoors to the field of default logics. We consider two problems: first detecting a backdoor, and then evaluating it for the target formula classes HORN, KROM, POSITIVE-UNIT and MONOTONE. In Chapter 4, we investigate the parameterized complexity of problems in various temporal logics. In the first section we introduce several graph-like structures for formula representation and the corresponding notions of tree-width and path-width. To obtain the fixed-parameter tractability of different fragments, we generalize the prominent Courcelle's Theorem to work for infinite signatures. In this section, we also consider Boolean operator fragments in the sense of Post's lattice. In the second part of Chapter 4 we introduce the notion of backdoors for the globally fragment of linear temporal logic. Again, our problems of interest are to detect a backdoor and to evaluate it, this time for the target formula classes HORN and KROM.
Submitted by Irena Schindler (schindler@thi.uni-hannover.de) on 2018-08-04T13:19:27Z
No. of bitstreams: 1
Dissertation_Schindler.pdf: 1088639 bytes, checksum: 0cd6614644fbe17310bdc3cd69c0b438 (MD5)
Previous issue date: 2018-07-04
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
CC BY 3.0 DE
http://creativecommons.org/licenses/by/3.0/de/
default logic
temporal logic
backdoor
Post’s Lattice
tree- and pathwidth
temporal depth
Parameterized complexity
Default Logik
Temporale Logik
Post’s Lattice
Baum- und Pfadweite
temporale Tiefe
Parametrisierte Komplexität
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Parameterized complexity of decision problems in non-classical logics
DoctoralThesis
Text | xv, 117 S. | openAccess | 2018-06-07 | Irena Schindler | Gottfried Wilhelm Leibniz Universität Hannover, Hannover | Gottfried Wilhelm Leibniz Universität Hannover, Hannover, Welfengarten 1 B, 30167 Hannover | DE | thesis.doctoral | ja | prohibited
oai:www.repo.uni-hannover.de:123456789/3601 | 2022-12-02T08:02:56Z | com_123456789_1, com_123456789_2961, col_123456789_4, col_123456789_2962 | ddc:004 | doc-type:Text | open_access | status-type:publishedVersion | doc-type:DoctoralThesis
Chandoo, Maurice
2018-08-08T15:21:02Z
2018-08-08T15:21:02Z
2018
Chandoo, Maurice: Computational complexity aspects of implicit graph representations. Hannover : Gottfried Wilhelm Leibniz Universität, Diss., 2018, xiii, 89 S. DOI: https://doi.org/10.15488/3569
https://www.repo.uni-hannover.de:443/handle/123456789/3601
http://dx.doi.org/10.15488/3569
Implicit graph representations are immutable data structures for restricted classes of graphs such as planar graphs. A graph class has an implicit representation if the vertices of every graph in this class can be assigned short labels such that the adjacency of two vertices can be decided by an algorithm which gets the two labels of these vertices as input. A representation of a graph in that class is then given by the set of labels of its vertices. The algorithm which determines adjacency is only allowed to depend on the graph class. Such representations are attractive because they are space-efficient and in many cases also allow for constant-time edge queries. Therefore they outperform less specialized representations such as adjacency matrices or lists and are even optimal in an asymptotic sense. In the first part of this thesis we investigate the limitations of such representations when constraining the complexity of an algorithm which decodes adjacency. First, we prove that imposing such computational constraints does indeed affect which graph classes have an implicit representation. Then we observe that the adjacency structure of almost all graph classes that are known to have an implicit representation can be described by formulas of first-order logic. The quantifier-free fragment of this logic can be characterized in terms of RAMs: a graph class can be expressed by a quantifier-free formula if and only if it has an implicit representation where edges can be queried in constant time on a RAM without division. We provide two reduction notions for graph classes which reveal that trees and interval graphs are representative for certain fragments of this logic. We conclude this part by providing a big picture of the newly introduced classes and pointing out viable research directions. In the second part we consider the tractability of algorithmic problems on graph classes with implicit representations.
Intuitively, if a graph class has an implicit representation with very low complexity, then it should have a simple adjacency structure. Therefore it seems plausible to expect certain algorithmic problems to be tractable on such graph classes. We consider how realistic it is to expect an algorithmic meta-theorem of the form "if a graph class X has an implicit representation with complexity Y, then problem Z is tractable on X". Our considerations quickly reveal that even for the most humble choices of Y and various Z this is either impossible or leads to the frontiers of algorithmic research. We show that the complexity classes of graph classes introduced in the previous chapter can be interpreted as graph parameters and therefore can be considered within the framework of parameterized complexity. We embark on a case study where Z is the graph isomorphism problem and Y is the quantifier-free, four-variable fragment of first-order logic with only the order predicate on the universe. This leads to a problem that has been studied independently and has resisted classification for over two decades: the isomorphism problem for circular-arc (CA) graphs. We examine how a certain method, which we call the flip trick, can be applied to this problem. We show that for a broad class of CA graphs the isomorphism problem reduces to the representation problem and, as a consequence, can be solved in polynomial time.
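To make the labeling idea in the abstract concrete, here is a minimal sketch (illustrative code, not taken from the thesis) of an implicit representation for interval graphs, one of the classes the abstract singles out: each vertex is labelled by the endpoints of its interval, and adjacency is decided from the two labels alone, without consulting the rest of the graph.

```python
# Sketch of an adjacency labeling scheme for interval graphs:
# the label of a vertex is just its interval (l, r).

def label_interval_graph(intervals):
    """Assign each vertex its interval (l, r) as a short label."""
    return {v: (l, r) for v, (l, r) in intervals.items()}

def adjacent(label_u, label_v):
    """Two vertices are adjacent iff their intervals intersect.

    This decoder depends only on the graph class (interval graphs),
    not on any particular graph, as the abstract requires.
    """
    (lu, ru), (lv, rv) = label_u, label_v
    return lu <= rv and lv <= ru

labels = label_interval_graph({"a": (0, 2), "b": (1, 3), "c": (4, 5)})
assert adjacent(labels["a"], labels["b"])      # [0,2] and [1,3] overlap
assert not adjacent(labels["a"], labels["c"])  # [0,2] and [4,5] are disjoint
```

Note the connection to the constant-time RAM characterization: this decoder is a single quantifier-free comparison of label fields, with no division.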
Submitted by Maurice Chandoo (chandoo@thi.uni-hannover.de) on 2018-08-08T11:44:36Z
No. of bitstreams: 1
main.pdf: 917363 bytes, checksum: f7ee6ef62b040f69a17695279b3e94c8 (MD5)
Previous issue date: 2018
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
CC BY 3.0 DE
http://creativecommons.org/licenses/by/3.0/de/
adjacency labeling schemes
implicit graph conjecture
circular-arc graph isomorphism
Implizite Repräsentationen
Kreisbogengraphen
Graphenisomorphie
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Computational complexity aspects of implicit graph representations
DoctoralThesis
Text | xiii, 89 S. | openAccess | 2018-07-20 | Maurice Chandoo | Gottfried Wilhelm Leibniz Universität Hannover, Hannover | Gottfried Wilhelm Leibniz Universität Hannover, Hannover, Welfengarten 1 B, 30167 Hannover | DE | thesis.doctoral | ja | prohibited
oai:www.repo.uni-hannover.de:123456789/3606 | 2022-12-02T08:12:00Z | com_123456789_1, com_123456789_2961, col_123456789_4, col_123456789_2962 | ddc:004 | doc-type:Text | open_access | status-type:publishedVersion | doc-type:DoctoralThesis
Tran, Anh Tuan
2018-08-14T12:14:19Z
2018-08-14T12:14:19Z
2018
Tran, Anh Tuan: Temporal models in data mining : enrichment, summarization and recommendation. Hannover : Gottfried Wilhelm Leibniz Universität, Diss., 2018, xviii, 126 S. DOI: https://doi.org/10.15488/3574
https://www.repo.uni-hannover.de:443/handle/123456789/3606
http://dx.doi.org/10.15488/3574
Time plays an important and multifaceted role in the study of digital collections and of their relationship to their users. This is especially true when digital content is processed long after its creation, opening a time span in which numerous events can be observed. For example, documents are modified or revised; users are exposed to other related information in the collection and thereby update their knowledge, interpretation and interests. Likewise, changes of context and the emergence of new events can influence how users perceive the relevance or value of content. High-quality information processing and retrieval systems should therefore account for these effects of time, not only within a single object but also within and across collections. At this collective level, it is important to consider the cognitive behaviour of users when they process information. The reason is that humans have the ability to connect information from different sources, and they often exercise this ability, consciously or unconsciously, when creating or capturing digital content. Despite decades of research in temporal data mining and information retrieval, little attention has so far been paid to the impact of these cognitive aspects on time-dependent information processing at the collection level. This includes aspects such as how users follow or memorize long-running events, e.g. in online news, or how human forgetfulness affects their behaviour when searching their own digital material.
In this dissertation we frame several research questions in temporal data mining from a new perspective, inspired by the human cognitive processes involved in creating, organizing, sharing and searching temporal information. In particular, we address the following questions: (1) how the temporal topics of textual content can be identified, and thus how the content can be enriched with semantic information; (2) how textual data can be appropriately summarized along a timeline using cognitively grounded models; (3) how a system can support users in searching their own documents by taking into account the effect of time on their memory. For the first question, we introduce a new method for enriching the temporal topics of textual data that uses social and temporal features, and we show its effectiveness for enriching social media trends, e.g. on Twitter. We also address scalability, both with respect to the algorithmic models and to the infrastructure. For the second question, we introduce the novel concept of entity-based timelines as summaries of long-running events, and we develop new methods for effectively summarizing online news articles. Our method combines social and temporal features derived from encyclopaedic sources such as Wikipedia with conventional content-based features, and it balances the relevance and novelty of news articles by means of a novel adaptive learning algorithm. Beyond online news, we also study the summarization of spoken dialogues, taking human decision-making processes into account.
For the third question, we contribute to the study of human memory in enterprise settings and design a new graph-based learning method to recommend professional content for a temporal task.
Submitted by Anh Tuan Tran (ttran@l3s.uni-hannover.de) on 2018-08-12T21:35:25Z
No. of bitstreams: 1
dissertation.pdf: 4295552 bytes, checksum: 4b09cf62c61a226f35405c8caa45bbfd (MD5)
Previous issue date: 2018
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
CC BY 3.0 DE
http://creativecommons.org/licenses/by/3.0/de/
information extraction
temporal data analysis
semantic data
cognitive models
summarization
recommender system
learning to rank
structured learning
Informationsextraktion
Zeitliche Datenanalyse
semantische Daten
Kognitives Modell
Textzusammenfassung
Empfehlungsdienst
strukturiertes Lernen
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Temporal models in data mining : enrichment, summarization and recommendation
DoctoralThesis
Text | xviii, 126 S. | openAccess | 2018-08-07 | Anh Tuan Tran | Gottfried Wilhelm Leibniz Universität Hannover, Hannover | Gottfried Wilhelm Leibniz Universität Hannover, Hannover, Welfengarten 1 B, 30167 Hannover | DE | thesis.doctoral | ja | allowed
oai:www.repo.uni-hannover.de:123456789/3671 | 2022-12-02T19:24:35Z | com_123456789_15, col_123456789_16 | ddc:004 | doc-type:Text, doc-type:ConferenceObject | open_access | status-type:publishedVersion
Rückemann, Claus-Peter
Pavani, Raffaella
Schubert, Lutz
Gersbeck-Schierholz, Birgit
Hülsmann, Friedrich
Lau, Olaf
Hofmeister, Martin
2018-08-27T10:25:47Z
2019-01-01T23:05:03Z
2018
Rückemann, C.-P.; Pavani, R.; Schubert, L.; Gersbeck-Schierholz, B.; Hülsmann, F.; Lau, O.; Hofmeister, M.: Best Practice and Definitions of Data Value. ICNAAM 2018, 12-17 September 2018, Rhodes, Greece. Hannover : Institutionelles Repositorium der Leibniz Universität Hannover, 2018. DOI: https://doi.org/10.15488/3639
https://www.repo.uni-hannover.de/handle/123456789/3671
http://dx.doi.org/10.15488/3639
Post-Summit Results of the Delegates' Summit, September 13, 2018, The Eighth Symposium on Advanced Computation and Information in Natural and Applied Sciences (SACINAS) at The 16th International Conference of Numerical Analysis and Applied Mathematics (ICNAAM), September 13-17, 2018, Rhodes, Greece
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-08-27T10:25:34Z
No. of bitstreams: 1
rueckemann_tib_20180600_wrap.pdf: 99642 bytes, checksum: 38be6830366bb7f70bb322b2ed4de310 (MD5)
Previous issue date: 2018
publishedVersion
eng
Hannover : Institutionelles Repositorium der Leibniz Universität Hannover
German copyright law applies. The document may be used free of charge for personal use, but it may not be made available on the internet or passed on to third parties.
Data Science
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Best Practice and Definitions of Data Value
ConferenceObject
Text | openAccess | Claus-Peter Rückemann; Raffaella Pavani; Lutz Schubert; Birgit Gersbeck-Schierholz; Friedrich Hülsmann; Olaf Lau; Martin Hofmeister | allowed
oai:www.repo.uni-hannover.de:123456789/3818 | 2022-12-02T16:17:36Z | com_123456789_1, col_123456789_4 | ddc:004 | doc-type:BookPart, doc-type:Text | open_access | status-type:publishedVersion
Zhou, Yiwei
Demidova, Elena
Cristea, Alexandra I.
Cardoso, Jorge
Guerra, Francesco
Houben, Geert-Jan
Pinto, Alexandre Miguel
Velegrakis, Yannis
2018-10-10T08:42:36Z
2018-10-10T08:42:36Z
2015
Zhou, Y.; Demidova, E.; Cristea, A.I.: Analysing entity context in multilingual wikipedia to support entity-centric retrieval applications. In: Cardoso, J.; Guerra, F.; Houben, G.; Pinto, A.; Velegrakis, Y. (Eds.): Semantic Keyword-Based Search on Structured Data Sources. Heidelberg : Springer Verlag, 2015 (Lecture Notes in Computer Science ; 9398), S. 197-208. DOI: https://doi.org/10.1007/978-3-319-27932-9_17
https://www.repo.uni-hannover.de/handle/123456789/3818
http://dx.doi.org/10.15488/3784
Representation of influential entities, such as famous people and multinational corporations, on the Web can vary across languages, reflecting language-specific entity aspects as well as divergent views on these entities in different communities. A systematic analysis of language-specific entity contexts can provide a better overview of the existing aspects and support entity-centric retrieval applications over multilingual Web data. An important source of cross-lingual information about influential entities is Wikipedia — an online community-created encyclopaedia — containing more than 280 language editions. In this paper we focus on the extraction and analysis of language-specific entity contexts from different Wikipedia language editions over multilingual data. We discuss alternative ways such contexts can be built, including graph-based and article-based contexts. Furthermore, we analyse the similarities and the differences in these contexts in a case study including 80 entities and five Wikipedia language editions.
Made available in DSpace on 2018-10-10T08:42:36Z (GMT). No. of bitstreams: 0
Previous issue date: 2015
publishedVersion
eng
Heidelberg : Springer Verlag
Semantic Keyword-Based Search on Structured Data Sources
Lecture Notes in Computer Science ; 9398
978-3-319-27931-2
978-3-319-27932-9
0302-9743
https://doi.org/10.1007/978-3-319-27932-9_17
CC BY-NC 3.0 Unported
https://creativecommons.org/licenses/by-nc/3.0/
Arches
Computational linguistics
Graphic methods
Semantics
Websites
Cross-lingual information
Entity contexts
Graph-based
Multi-national corporations
On-line communities
Retrieval applications
Support entities
Systematic analysis
Search engines
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Analysing entity context in multilingual wikipedia to support entity-centric retrieval applications
BookPart
Text
9398
197
208 | openAccess | First COST Action IC1302 International KEYSTONE Conference, IKC 2015, September 8–9, 2015, Coimbra, Portugal
oai:www.repo.uni-hannover.de:123456789/3819 | 2022-12-02T16:17:36Z | com_123456789_1, col_123456789_3 | ddc:004 | doc-type:BookPart, doc-type:Text | open_access | status-type:publishedVersion
Menze, Moritz
Heipke, Christian
Geiger, Andreas
Gall, Juergen
Gehler, Peter
Leibe, Bastian
2018-10-10T08:42:36Z
2018-10-10T08:42:36Z
2015
Menze, M.; Heipke, C.; Geiger, A.: Discrete optimization for optical flow. In: Gall, J.; Gehler, P.; Leibe, B. (Eds.): Pattern Recognition. Heidelberg : Springer Verlag, 2015 (Lecture Notes in Computer Science ; 9358), S. 16-28. DOI: https://doi.org/10.1007/978-3-319-24947-6_2
https://www.repo.uni-hannover.de/handle/123456789/3819
http://dx.doi.org/10.15488/3785
We propose to look at large-displacement optical flow from a discrete point of view. Motivated by the observation that sub-pixel accuracy is easily obtained given pixel-accurate optical flow, we conjecture that computing the integral part is the hardest piece of the problem. Consequently, we formulate optical flow estimation as a discrete inference problem in a conditional random field, followed by sub-pixel refinement. Naive discretization of the 2D flow space, however, is intractable due to the resulting size of the label set. In this paper, we therefore investigate three different strategies, each able to reduce computation and memory demands by several orders of magnitude. Their combination allows us to estimate large-displacement optical flow both accurately and efficiently and demonstrates the potential of discrete optimization for optical flow. We obtain state-of-the-art performance on MPI Sintel and KITTI.
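As a back-of-the-envelope illustration of the label-set blow-up the abstract refers to (the displacement range R below is a hypothetical figure, not taken from the paper): naively discretizing integer 2D displacements in [-R, R] × [-R, R] yields (2R+1)² labels per pixel, which is why the authors need strategies that cut computation and memory by orders of magnitude.

```python
# Hypothetical illustration of the naive 2D flow label set: every integer
# offset in [-R, R] x [-R, R] becomes one CRF label per pixel.
def naive_label_count(max_displacement):
    return (2 * max_displacement + 1) ** 2

assert naive_label_count(1) == 9        # offsets in {-1, 0, 1}^2
assert naive_label_count(100) == 40401  # already ~4 * 10^4 labels per pixel
```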
Made available in DSpace on 2018-10-10T08:42:36Z (GMT). No. of bitstreams: 0
Previous issue date: 2015
publishedVersion
eng
Heidelberg : Springer Verlag
Pattern Recognition
Lecture Notes in Computer Science ; 9358
978-3-319-24946-9
978-3-319-24947-6
0302-9743
https://doi.org/10.1007/978-3-319-24947-6_2
CC BY-NC 2.5 Unported
https://creativecommons.org/licenses/by-nc/2.5/
Image registration
Optimization
Pattern recognition
Pixels
Conditional random field
Discrete optimization
Inference problem
Large displacements
Optical flow estimation
Orders of magnitude
State-of-the-art performance
Subpixel accuracy
Optical flows
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Discrete optimization for optical flow
BookPart
Text
9358
16
28 | openAccess | 37th German Conference, GCPR 2015, October 7-10, 2015, Aachen, Germany
oai:www.repo.uni-hannover.de:123456789/3820 | 2023-04-14T04:37:01Z | com_123456789_1, col_123456789_4 | ddc:004 | doc-type:BookPart, doc-type:Text | open_access | status-type:publishedVersion
Wägemann, Peter
Dietrich, Christian
Distler, Tobias
Ulbrich, Peter
Schröder-Preikschat, Wolfgang
Altmeyer, Sebastian
2018-10-10T08:42:36Z
2018-10-10T08:42:36Z
2018
Wägemann, P.; Dietrich, C.; Distler, T.; Ulbrich, P.; Schröder-Preikschat, W.: Whole-system worst-case energy-consumption analysis for energy-constrained real-time systems. In: Leibniz International Proceedings in Informatics, LIPIcs 106 (2018), 24. DOI: https://doi.org/10.4230/LIPIcs.ECRTS.2018.24
978-3-95977-075-0
https://www.repo.uni-hannover.de/handle/123456789/3820
http://dx.doi.org/10.15488/3786
Although internal devices (e.g., memory, timers) and external devices (e.g., transceivers, sensors) significantly contribute to the energy consumption of an embedded real-time system, their impact on the worst-case response energy consumption (WCRE) of tasks is usually not adequately taken into account. Most WCRE analysis techniques, for example, only focus on the processor and therefore do not consider the energy consumption of other hardware units. Apart from that, the typical approach for dealing with devices is to assume that all of them are always activated, which leads to high WCRE overestimations in the general case where a system switches off the devices that are currently not needed in order to minimize energy consumption. In this paper, we present SysWCEC, an approach that addresses these problems by enabling static WCRE analysis for entire real-time systems, including internal as well as external devices. For this purpose, SysWCEC introduces a novel abstraction, the power-state-transition graph, which contains information about the worst-case energy consumption of all possible execution paths. To construct the graph, SysWCEC decomposes the analyzed real-time system into blocks during which the set of active devices in the system does not change and is consequently able to precisely handle devices being dynamically activated or deactivated.
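The power-state-transition graph can be pictured as a small DAG whose nodes are execution blocks with a fixed set of active devices and whose edge weights are per-block energy costs; the worst-case bound is then the most expensive path through the graph. The blocks, device configurations and numbers below are made up for illustration, not SysWCEC's actual analysis:

```python
# Hypothetical sketch of the power-state-transition-graph idea: within each
# block the set of active devices does not change, so each edge can carry a
# fixed energy cost; the worst-case energy bound is the costliest path.
from functools import lru_cache

# edges[node] -> list of (successor, energy cost in millijoules)
edges = {
    "init":          [("sense_on", 2.0)],
    "sense_on":      [("compute", 5.0), ("compute_radio", 8.0)],  # radio active
    "compute":       [("done", 3.0)],
    "compute_radio": [("done", 4.0)],
    "done":          [],
}

@lru_cache(maxsize=None)
def wcec(node):
    """Worst-case energy from `node` to the end of the task (DP on a DAG)."""
    if not edges[node]:
        return 0.0
    return max(cost + wcec(succ) for succ, cost in edges[node])

# worst path: init -> sense_on -> compute_radio -> done = 2 + 8 + 4 = 14 mJ
```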
Made available in DSpace on 2018-10-10T08:42:36Z (GMT). No. of bitstreams: 0
Previous issue date: 2018
publishedVersion
eng
Wadern : Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik GmbH
30th Euromicro Conference on Real-Time Systems (ECRTS 2018)
Leibniz international proceedings in informatics : LIPIcs ; 106
1868-8969
https://doi.org/10.4230/LIPIcs.ECRTS.2018.24
CC BY 3.0 Unported
https://creativecommons.org/licenses/by/3.0/
energy-constrained real-time systems
worst-case energy consumption (WCEC)
worst-case response energy consumption (WCRE)
static whole-system analysis
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Whole-system worst-case energy-consumption analysis for energy-constrained real-time systems
BookPart
Text
106
24
openAccess
30th Euromicro Conference on Real-Time Systems : ECRTS 2018, July 3rd-6th, 2018, Barcelona, Spain
oai:www.repo.uni-hannover.de:123456789/38502022-12-02T15:04:49Zcom_123456789_1col_123456789_4ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersion
Gritzner, Daniel
Greenyer, Joel
2018-10-10T09:25:41Z
2018-10-10T09:25:41Z
2018
Gritzner, D.; Greenyer, J.: Generating Correct, Compact, and Efficient PLC Code from Scenario-based Assume-Guarantee Specifications. In: Procedia Manufacturing 24 (2018), S. 153-158. DOI: https://doi.org/10.1016/j.promfg.2018.06.046
https://www.repo.uni-hannover.de/handle/123456789/3850
http://dx.doi.org/10.15488/3816
Cyber-physical systems can be found in many areas, e.g., manufacturing, health care or smart cities. They consist of many distributed components cooperating to provide increasingly complex functionality. The design and development of such a system is difficult and error-prone. To help engineers overcome these challenges we created a formal, scenario-based specification language. Short scenarios, i.e., event sequences, specify requirements and the desired behaviors by describing how components may, must, or must not behave. Scenarios provide an intuitive way for creating formal assume-guarantee (GR(1)) specifications, giving engineers easy access to simulation, for validating the specified behavior, and controller synthesis, for creating controller software which is correct by construction. In this paper we present an approach for generating Programmable Logic Controller (PLC) code from a scenario-based specification. Previous code generation efforts, including our own, created large, verbose source files causing some tools, e.g., compilers or editors, to perform slowly or even become unresponsive. Our new approach creates compact files, shifting significant amounts of code from executable instructions to data, to reduce the burden on the compiler and other tools. The generated code is efficient and introduces minimal to no latency between the occurrence of an event and the system's reaction to it.
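The "shift code into data" idea resembles a table-driven state machine: instead of one large generated branch per transition, the generated file carries a compact transition table that a tiny fixed interpreter executes each scan cycle. The states and events below are hypothetical, not the authors' generated PLC code:

```python
# Sketch of table-driven controller logic: transitions live in data, and a
# small fixed step function interprets them, keeping generated files compact.

TRANSITIONS = {
    # (state, event): (next state, output command)
    ("idle", "start"):   ("moving", "motor_on"),
    ("moving", "limit"): ("idle", "motor_off"),
    ("moving", "stop"):  ("idle", "motor_off"),
}

def step(state, event):
    """One scan-cycle reaction; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), (state, None))

state, out = step("idle", "start")   # the start event switches the motor on
state, out = step(state, "limit")    # reaching the limit switches it off
```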
Made available in DSpace on 2018-10-10T09:25:41Z (GMT). No. of bitstreams: 0
Previous issue date: 2018
publishedVersion
eng
Amsterdam : Elsevier B.V.
Procedia Manufacturing 24 (2018)
2351-9789
https://doi.org/10.1016/j.promfg.2018.06.046
CC BY-NC-ND 4.0 International
https://creativecommons.org/licenses/by-nc-nd/4.0/
assume-guarantee specification
code generation
controller synthesis
programmable logic controller
scenarios
Konferenzschrift
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Generating Correct, Compact, and Efficient PLC Code from Scenario-based Assume-Guarantee Specifications
Article
Text
24
153
158
openAccess
oai:www.repo.uni-hannover.de:123456789/38752022-12-02T15:04:49Zcom_123456789_1col_123456789_4ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersion
Barrett, Chris
Drozda, Martin
Marathe, Madhav V.
Ravi, S.S.
Smith, James P.
2018-10-11T09:16:14Z
2018-10-11T09:16:14Z
2004
Barrett, C.; Drozda, M.; Marathe, M.V.; Ravi, S.S.; Smith, J.P.: A mobility and traffic generation framework for modeling and simulating ad hoc communication networks. In: Scientific Programming 12 (2004), Nr. 1, S. 1-23. DOI: https://doi.org/10.1155/2004/921065
https://www.repo.uni-hannover.de/handle/123456789/3875
http://dx.doi.org/10.15488/3841
We present a generic mobility and traffic generation framework that can be incorporated into a tool for modeling and simulating large scale ad hoc networks. Three components of this framework, namely a mobility data generator (MDG), a graph structure generator (GSG) and an occlusion modification tool (OMT) allow a variety of mobility models to be incorporated into the tool. The MDG module generates positions of transceivers at specified time instants. The GSG module constructs the graph corresponding to the ad hoc network from the mobility data provided by MDG. The OMT module modifies the connectivity of the graph produced by GSG to allow for occlusion effects. With two other modules, namely an activity data generator (ADG) which generates packet transmission activities for transceivers and a packet activity simulator (PAS) which simulates the movement and interaction of packets among the transceivers, the framework allows the modeling and simulation of ad hoc communication networks. The design of the framework allows a user to incorporate various realistic parameters crucial in the simulation. We illustrate the utility of our framework through a comparative study of three mobility models. Two of these are synthetic models (random waypoint and exponentially correlated mobility) proposed in the literature. The third model is based on an urban population mobility modeling tool (TRANSIMS) developed at the Los Alamos National Laboratory. This tool is capable of providing comprehensive information about the demographics, mobility and interactions of members of a large urban population. A comparison of these models is carried out by computing a variety of parameters associated with the graph structures generated by the models. There has recently been interest in the structural properties of graphs that arise in real world systems. We examine two aspects of this for the graphs created by the mobility models: change associated with power control (range of transceivers) and variation in time as transceivers move in space.
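A toy version of the MDG and GSG stages might look as follows; the random-waypoint-style position generator and the fixed range threshold are illustrative stand-ins for the framework's far richer mobility and connectivity models:

```python
# Toy MDG -> GSG pipeline: positions at one time instant, then a
# connectivity graph linking transceivers within radio range.
import math
import random

def mobility_positions(n, size=100.0, seed=42):
    """MDG stand-in: n transceiver positions in a size x size area."""
    rng = random.Random(seed)
    return [(rng.uniform(0, size), rng.uniform(0, size)) for _ in range(n)]

def connectivity_graph(positions, radio_range):
    """GSG stand-in: undirected edges between nodes within radio_range."""
    edges = set()
    for i, (xi, yi) in enumerate(positions):
        for j in range(i + 1, len(positions)):
            xj, yj = positions[j]
            if math.hypot(xi - xj, yi - yj) <= radio_range:
                edges.add((i, j))
    return edges

pos = mobility_positions(20)
g_short = connectivity_graph(pos, radio_range=15.0)
g_long = connectivity_graph(pos, radio_range=40.0)
# increasing the range (power control) can only add edges, never remove them
```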
Made available in DSpace on 2018-10-11T09:16:14Z (GMT). No. of bitstreams: 0
Previous issue date: 2004
publishedVersion
eng
New York, NY : Hindawi Publishing Corporation
Scientific Programming 12 (2004), Nr. 1
1058-9244
https://doi.org/10.1155/2004/921065
CC BY 3.0 Unported
https://creativecommons.org/licenses/by/3.0/
Graph theory
Mobile and ad hoc networks
Mobility models
Simulation and modeling
Algorithms
Computer simulation
Graph theory
Packet switching
Power control
Simulators
Software prototyping
Telecommunication traffic
Transceivers
Ad hoc communication networks
Graph structure generator
Mobility data generator
Occlusion modification tool
Urban population mobility modeling tool
Mobile telecommunication systems
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
A mobility and traffic generation framework for modeling and simulating ad hoc communication networks
Article
Text
1
12
1
23
openAccess
oai:www.repo.uni-hannover.de:123456789/39282022-12-02T18:18:51Zcom_123456789_1col_123456789_8ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersion
Bilder, Christopher R.
Zhang, Boan
Schaarschmidt, Frank
Tebbs, Joshua M.
2018-10-26T13:54:54Z
2018-10-26T13:54:54Z
2010
Bilder, C.R. et al.: binGroup: A Package for Group Testing. In: The R Journal 2 (2010), S. 56-60
https://www.repo.uni-hannover.de/handle/123456789/3928
http://dx.doi.org/10.15488/3894
When the prevalence of a disease or of some other binary characteristic is small, group testing (also known as pooled testing) is frequently used to estimate the prevalence and/or to identify individuals as positive or negative. We have developed the binGroup package as the first package designed to address the estimation problem in group testing. We present functions to estimate an overall prevalence for a homogeneous population. Also, for this setting, we have functions to aid in the very important choice of the group size. When individuals come from a heterogeneous population, our group testing regression functions can be used to estimate an individual probability of disease positivity by using the group observations only. We illustrate our functions with data from a multiple vector transfer design experiment and a human infectious disease prevalence study.
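For a homogeneous population, the basic estimator that binGroup generalizes can be stated in one line: with n pools of size s and T positive pools, the maximum-likelihood estimate of individual prevalence p solves T/n = 1 - (1 - p)^s. A minimal sketch of that closed form (in Python, not the package's R code; the numbers are illustrative):

```python
# Closed-form MLE of individual prevalence from pooled (group) test results,
# assuming a homogeneous population and a perfect test:
#   share of positive pools  theta = T/n = 1 - (1 - p)^s
#   =>  p_hat = 1 - (1 - theta)^(1/s)

def prevalence_mle(n_pools, pool_size, n_positive_pools):
    theta = n_positive_pools / n_pools      # observed share of positive pools
    return 1.0 - (1.0 - theta) ** (1.0 / pool_size)

# 100 pools of 10 individuals each; 26 pools test positive
p_hat = prevalence_mle(100, 10, 26)         # roughly a 3% prevalence
```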
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-10-26T13:54:45Z
No. of bitstreams: 1
Bilder et al 2010, binGroup A Package for Group Testing.pdf: 160509 bytes, checksum: 4dd9c8363479ba92eb978bc19c620f39 (MD5)
Previous issue date: 2010
publishedVersion
eng
Wien : The R Foundation
The R Journal 2 (2010)
2073-4859
CC BY 3.0 Unported
https://creativecommons.org/licenses/by/3.0/
Pooled Testing
R (programming language)
Estimation
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
binGroup: A Package for Group Testing
Article
Text
56
60
openAccess
The R Journal 2 (2010)
Christopher R. Bilder
Boan Zhang
Frank Schaarschmidt
Joshua M. Tebbs
allowed
oai:www.repo.uni-hannover.de:123456789/39522022-12-02T19:24:35Zcom_123456789_15col_123456789_16ddc:004doc-type:Articledoc-type:Textopen_accessstatus-type:publishedVersionddc:020
Bähr, Thomas
Friedrichsen, Merle
2018-11-06T14:12:16Z
2018-11-06T14:12:16Z
2017
Bähr, T.; Friedrichsen, M.: Konvertierung von PDF in XML für die Langzeitarchivierung und Weiterverarbeitung. In: ABI Technik 37 (2017), S. 21-29. DOI: https://doi.org/10.1515/abitech-2017-0004
https://www.repo.uni-hannover.de/handle/123456789/3952
http://dx.doi.org/10.15488/3918
In der Darstellung, Weitergabe und Aufbewahrung elektronischer Publikationen steht das Format PDF unangefochten an erster Stelle. Die Stärken des ISO-standardisierten Formats liegen in der Plattform- und Hardwareunabhängigkeit, in der seitengenauen Darstellung von Publikationen sowie in der einfachen Navigierbarkeit von komplexen Dokumenten. Dank der stetigen Weiterentwicklung des Formats existiert mittlerweile eine große Anzahl an PDF Profilen wie PDF/A, PDF/X, PDF/UA oder PDF/E. Eine flexiblere Dokumentendarstellung ermöglicht hingegen die eXtensible Markup Language XML, welche nicht nur im Web, sondern auch vermehrt in der Druckvorstufe eingesetzt wird. Wie PDF ist auch XML medienneutral und plattformunabhängig. Im Gegensatz zu PDF-Dokumenten erlaubt XML hingegen mittels Erfassung der Inhalte in einer dokumentierten und transparenten Struktur eine Validierung der Inhalte wie auch eine gezielte Weiternutzung einzelner Teilinhalte. Die Technische Informationsbibliothek (TIB) führte eine Analyse zur Machbarkeit einer PDF-nach-XML-Konvertierung durch. Ziel ist die Vorhaltung von XML-Dokumenten für zwei Prozesse: Erstens zur automatischen Katalogisierung von Kongressbänden auf Aufsatzebene, zweitens zur Aufbewahrung einer parallelen Repräsentation neben PDF-Dokumenten im Langzeitarchiv. Dieser Artikel stellt die Ergebnisse der Machbarkeitsstudie dar.
PDF is without a doubt the most common file format choice when it comes to presenting, sharing and preserving electronic publications. The strengths of the ISO-standardized format lie in its platform and hardware independence, its page-exact rendering of publications, and its smooth navigation of complex documents. Due to the ever-growing requirements of the community, a number of profiles for the file format exist today, such as PDF/A, PDF/X, PDF/UA and PDF/E. The eXtensible Markup Language (XML), on the other hand, allows for more flexible handling of document display, leading to high adoption of the format not only on the web but also in printing and publishing processes. Like PDF, XML is media-neutral and platform-independent. In contrast to PDF, XML makes use of a transparent and well-documented content structure, allowing for validation processes as well as for extraction processes targeting specific content parts. TIB (the Technische Informationsbibliothek) conducted a proof-of-concept study on PDF-to-XML conversion. The study's background is the usage of XML as a second representation of the original PDF content in the digital archive. This article presents the outcome of the proof-of-concept study.
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-11-06T14:12:05Z
No. of bitstreams: 1
Bähr & Fridrichsen 2017, Konvertierung von PDF in XML für die Langzeitarchivierung und Weiterverarbeitung.pdf: 2297682 bytes, checksum: 90777c317942c9a1d29dde974f9754e9 (MD5)
Previous issue date: 2017
publishedVersion
ger
Berlin, Boston : De Gruyter
ABI Technik 37 (2017)
10.1515/abitech-2017-0004
German copyright law applies. The document may be used free of charge for personal purposes, but may not be made available on the internet or passed on to third parties. This article is freely accessible thanks to a (DFG-funded) Alliance or National Licence.
Structural Analysis
File Format Conversion
Automatic Layout recognition
Strukturanalyse
Dateiformatkonvertierung
automatische Layouterkennung
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::020 | Bibliotheks- und Informationswissenschaft
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Konvertierung von PDF in XML für die Langzeitarchivierung und Weiterverarbeitung
Conversion of PDF to XML for preservation and usage
Article
Text
21
29
openAccess
ABI Technik 37 (2017)
Thomas Bähr
Merle Friedrichsen
prohibited
oai:www.repo.uni-hannover.de:123456789/40002022-12-02T19:24:35Zcom_123456789_15col_123456789_16ddc:004doc-type:Textdoc-type:ConferenceObjectopen_accessstatus-type:publishedVersionddc:020
Saurbier, Felix
2018-11-16T09:09:28Z
2018-11-16T09:09:28Z
2018
Saurbier, Felix: Semantic modelling of video annotations – the TIB AV-Portal's metadata structure. Zenodo (2018). DOI: https://doi.org/10.5281/zenodo.1305890
https://www.repo.uni-hannover.de/handle/123456789/4000
http://dx.doi.org/10.15488/3966
The TIB AV-Portal (https://av.tib.eu) is an online platform for sharing scientific videos operated by the German National Library of Science and Technology (TIB). Besides the allocation of Digital Object Identifiers (DOI) and Media Fragment Identifiers (MFID) for video citation, long-term preservation of all material and open licenses like Creative Commons, the core feature of the TIB AV-Portal is its set of automated metadata-extraction methods, which fundamentally improve search functionality (e.g., fine-grained search and faceting). These comprise automated chaptering, extraction of superimposed text, speech-to-text recognition, and the detection of predefined visual concepts. In addition, extracted metadata are mapped against authority files like the German "Gemeinsame Normdatei" and knowledge bases like DBpedia and the Library of Congress Subject Headings via automated named entity linking (NEL) to enable semantic and cross-lingual search. The results of this process are expressed as temporal and/or spatial video annotations, linking extracted metadata to certain key frames and video segments. In order to structure the data, express relations between single entities, and link to external information resources, several common vocabularies, ontologies and knowledge bases are used. These include, amongst others, the Open Annotation Data Model, the NLP Interchange Format (NIF), BIBFRAME, the Friend of a Friend (FOAF) vocabulary, and Schema.org. Furthermore, all data is stored adhering to the Resource Description Framework (RDF) data model and published as linked open data. This provides third parties with an interoperable and easy-to-reuse RDF graph representation of the AV-Portal's metadata. On our poster we illustrate the general structure of the TIB AV-Portal's comprehensive metadata, both authoritative and automatically extracted.
Here, the main focus is on the underlying video annotation graph model and on the semantic interoperability and reusability of the data. In particular, we visualize how the use of vocabularies, ontologies and knowledge bases allows for rich semantic descriptions of video materials as well as for easy metadata publication, interlinking, and reuse by third parties (e.g., for information retrieval and enrichment). In doing so, we present the AV-Portal's metadata structure as an illustrative example of the complexity of modelling temporal and spatial video metadata and as a set of best practices in the field of audio-visual resources.
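A temporal annotation of the kind described can be pictured as a handful of subject-predicate-object triples, with a media fragment identifier selecting the annotated segment. The IRIs, the GND entity, and the exact predicate names below are illustrative assumptions, not the AV-Portal's actual schema:

```python
# Illustrative annotation graph: an Open-Annotation-style annotation whose
# target is a temporal media fragment of a video and whose body is a linked
# knowledge-base entity. All identifiers here are hypothetical examples.

video = "https://av.tib.eu/media/12345"       # hypothetical video IRI
segment = video + "#t=120,150"                # media fragment: seconds 120-150
entity = "http://d-nb.info/gnd/4030958-7"     # hypothetical GND entity IRI

triples = [
    ("annotation:1", "rdf:type", "oa:Annotation"),
    ("annotation:1", "oa:hasTarget", segment),   # where in the video
    ("annotation:1", "oa:hasBody", entity),      # what was recognized
]

# simple graph query: collect all annotated segments
targets = [o for s, p, o in triples if p == "oa:hasTarget"]
```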
Submitted by Corinna Schneider (corinna.schneider@tib.eu) on 2018-11-16T09:09:10Z
No. of bitstreams: 1
Semantic modelling of video annotations.pdf: 575438 bytes, checksum: 73212b2f354fe6765727587c972d2d8c (MD5)
Previous issue date: 2018
publishedVersion
eng
CC BY 4.0 International
https://creativecommons.org/licenses/by/4.0/
Linked Open Data
Data Model
Video Annotation
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::020 | Bibliotheks- und Informationswissenschaft
Dewey Decimal Classification::000 | Allgemeines, Wissenschaft::000 | Informatik, Wissen, Systeme::004 | Informatik
Semantic modelling of video annotations – the TIB AV-Portal's metadata structure
ConferenceObject
Text
10.5281/zenodo.1305890
openAccess
Felix Saurbier
LIBER Annual Conference 2018, Lille, France, 4-6 July 2018
allowed