Assessing temporal behavior in lidar point clouds of urban environments

Download statistics for this document (compiled according to COUNTER):

Schachtschneider, J.; Schlichting, A.; Brenner, C.: Assessing temporal behavior in lidar point clouds of urban environments. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives 42 (2017), No. 1W1, pp. 543-550. DOI: https://doi.org/10.5194/isprs-archives-XLII-1-W1-543-2017

Version in the repository

To cite the version in the repository, please use this DOI: https://doi.org/10.15488/1694


Total downloads: 435




Abstract:
Self-driving cars and robots that run autonomously over long periods of time need high-precision and up-to-date models of the changing environment. The main challenge for creating long-term maps of dynamic environments is to identify changes and adapt the map continuously. Changes can occur abruptly, gradually, or even periodically. In this work, we investigate how dense mapping data of several epochs can be used to identify the temporal behavior of the environment. This approach anticipates possible future scenarios where a large fleet of vehicles is equipped with sensors which continuously capture the environment. This data is then sent to a cloud-based infrastructure, which aligns all datasets geometrically and subsequently runs scene analysis on it, among them the analysis of temporal changes of the environment. Our experiments are based on a LiDAR mobile mapping dataset which consists of 150 scan strips (a total of about 1 billion points), which were obtained in multiple epochs. Parts of the scene are covered by up to 28 scan strips. The time difference between the first and last epoch is about one year. In order to process the data, the scan strips are aligned using an overall bundle adjustment, which estimates the surface (about one billion surface element unknowns) as well as 270,000 unknowns for the adjustment of the exterior orientation parameters. After this, the surface misalignment is usually below one centimeter. In the next step, we perform a segmentation of the point clouds using a region growing algorithm. The segmented objects and the aligned data are then used to compute an occupancy grid which is filled by tracing each individual LiDAR ray from the scan head to every point of a segment. As a result, we can assess the behavior of each segment in the scene and remove voxels from temporal objects from the global occupancy grid.
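The occupancy-grid step described in the abstract (tracing each LiDAR ray from the scan head to the measured point) can be sketched with a standard voxel-traversal scheme. The sketch below is illustrative only and not the authors' implementation: the voxel size, the dictionary-based grid, and the simple free/occupied counting rule are assumptions made for the example; a real system would use a probabilistic (e.g. log-odds) update.

```python
import numpy as np

def trace_ray(origin, endpoint, voxel_size):
    """Voxel traversal (Amanatides-Woo style) from the sensor origin to a
    LiDAR return. Yields the index of every voxel the ray passes through;
    the last yielded voxel is the one containing the endpoint."""
    start = np.asarray(origin, dtype=float) / voxel_size
    end = np.asarray(endpoint, dtype=float) / voxel_size
    ijk = np.floor(start).astype(int)        # current voxel index
    ijk_end = np.floor(end).astype(int)      # voxel containing the return
    direction = end - start
    step = np.sign(direction).astype(int)    # +1 / -1 per axis
    # Parametric distance to the first boundary and between boundaries,
    # per axis; axes with zero direction never advance (distance = inf).
    t_delta = np.full(3, np.inf)
    t_max = np.full(3, np.inf)
    nz = direction != 0
    t_delta[nz] = np.abs(1.0 / direction[nz])
    next_boundary = ijk + (step > 0)
    t_max[nz] = (next_boundary[nz] - start[nz]) / direction[nz]
    # The ray crosses exactly this many voxel boundaries in total.
    n_steps = int(np.abs(ijk_end - ijk).sum())
    yield tuple(ijk)
    for _ in range(n_steps):
        axis = int(np.argmin(t_max))         # cross the nearest boundary
        ijk[axis] += step[axis]
        t_max[axis] += t_delta[axis]
        yield tuple(ijk)

def update_grid(grid, origin, points, voxel_size=0.2):
    """Hypothetical update rule: voxels the ray traverses accumulate
    'free' evidence (-1), the voxel holding the return accumulates
    'occupied' evidence (+1)."""
    for p in points:
        voxels = list(trace_ray(origin, p, voxel_size))
        for v in voxels[:-1]:
            grid[v] = grid.get(v, 0) - 1
        grid[voxels[-1]] = grid.get(voxels[-1], 0) + 1
    return grid
```

Running this over all aligned scan strips would let voxels that are sometimes observed occupied and sometimes traversed by rays be flagged as belonging to temporal objects, which is the effect the abstract describes.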
License terms: CC BY 3.0 Unported
Publication type: Article
Publication status: publishedVersion
First published: 2017
This publication appears in collection(s): Fakultät für Bauingenieurwesen und Geodäsie


Origin of downloads by country:

Pos.  Country             Downloads  Share
1     Germany                   202  46.44%
2     United States              53  12.18%
3     China                      34   7.82%
4     Japan                      18   4.14%
5     Korea, Republic of         14   3.22%
6     Taiwan                      8   1.84%
7     Hong Kong                   8   1.84%
8     Canada                      8   1.84%
9     India                       7   1.61%
10    United Kingdom              7   1.61%
      other                      76  17.47%



Note

Download statistics are compiled using internationally recognized rules and standards in accordance with the "COUNTER Code of Practice for e-Resources". COUNTER is an international non-profit organization in which library associations, database providers, and publishers jointly develop standards for collecting, storing, and processing usage data for electronic resources, intended to ensure objectivity and comparability. Only accesses to the full texts are counted, not visits to the website itself.