
Monday, December 7, 2020

LIDAR sheds new light on plant phenomics for plant breeding and management: Recent advances and future prospects



Plant phenomics is a new avenue for linking plant genomics and environmental studies, thereby improving plant breeding and management. Remote sensing techniques have improved high-throughput plant phenotyping. However, the accuracy, efficiency, and applicability of three-dimensional (3D) phenotyping are still challenging, especially in field environments.

LIDAR (Light Detection and Ranging) provides a powerful new tool for 3D phenotyping, driven by the rapid development of instruments and algorithms. Numerous efforts have been devoted to studying static and dynamic changes in structural and functional phenotypes using LIDAR in agriculture. This progress also improves 3D plant modeling across different spatial–temporal scales and disciplines, enabling easier and less expensive association with genes, supporting the analysis of environmental practices, and affording new insights into breeding and management.
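As a rough illustration of the kind of structural trait extraction described above, the following sketch computes simple canopy height metrics from a ground-normalized LIDAR point cloud with NumPy; the array layout, function name, and percentile choices are illustrative assumptions, not anything prescribed by the paper.

```python
import numpy as np

def canopy_height_metrics(points, height_percentiles=(50, 90, 99)):
    """Derive simple structural phenotypes from a ground-normalized
    point cloud given as an (N, 3) array of x, y, z coordinates,
    where z is height above ground in metres."""
    z = points[:, 2]
    metrics = {"max_height": float(z.max())}
    for p in height_percentiles:
        metrics[f"h{p}"] = float(np.percentile(z, p))
    return metrics

# Synthetic plot-level cloud for demonstration only; real data would
# come from a LIDAR scan clipped to a single plot or plant.
rng = np.random.default_rng(0)
cloud = np.column_stack([
    rng.uniform(0, 1, 10_000),    # x (m)
    rng.uniform(0, 1, 10_000),    # y (m)
    rng.gamma(2.0, 0.4, 10_000),  # z: heights above ground (m)
])
print(canopy_height_metrics(cloud))
```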

Beyond agricultural phenotyping, LIDAR shows great potential in forestry, horticulture, and grassland phenotyping. Although LIDAR has brought remarkable improvements to plant phenotyping and modeling, the synthesis of LIDAR-based phenotyping for breeding and management has not been fully explored. In this study, the authors identify three main challenges in the development of LIDAR-based phenotyping: 1) developing low-cost, high spatial–temporal resolution, and hyperspectral LIDAR instruments; 2) moving into multi-dimensional phenotyping with an endeavor to generate new algorithms and models; and 3) embracing open source and big data.

Read more:

https://www.sciencedirect.com/science/article/pii/S0924271620303130?dgcid=rss_sd_all

Sunday, November 15, 2020

Aspen detection in boreal forests: Capturing a key component of biodiversity using airborne hyperspectral, lidar, and UAV data

The importance of biodiversity is increasingly highlighted as an essential part of sustainable forest management.

As direct monitoring of biodiversity is not possible, proxy variables have been used to indicate a site's species richness and quality. In boreal forests, European aspen (Populus tremula L.) is one of the most significant proxies for biodiversity.

Aspen is a keystone species hosting a range of endangered species, and is therefore highly important for maintaining forest biodiversity. Still, reliable, fine-scale spatial data on aspen occurrence remain scarce and incomplete. Although remote sensing-based species classification has been used for decades for the needs of forestry, commercially less significant species (e.g., aspen) have typically been excluded from such studies.

This creates a need for general tree species classification methods that also cover ecologically significant species. Our study area, located in Evo, Southern Finland, covers approximately 83 km² and contains both managed and protected southern boreal forests. The main tree species in the area are Scots pine (Pinus sylvestris L.), Norway spruce (Picea abies (L.) Karst.), and birch (Betula pendula and B. pubescens), with a relatively sparse and scattered occurrence of aspen.

Along with thorough field data, airborne hyperspectral and LiDAR data have been acquired over the study area. We also collected ultra-high-resolution UAV data with RGB and multispectral sensors. The aim is to gather fundamental data on hyperspectral and multispectral species classification that can be utilized to produce detailed aspen data at large scale. For this, we first analyze species detection at tree level. We test and compare different machine learning methods (Support Vector Machines, Random Forest, Gradient Boosting Machine) and deep learning methods (3D Convolutional Neural Networks), with specific emphasis on accurate and feasible aspen detection.
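The abstract does not include code, but a minimal sketch of the shallow-learner comparison it describes might look like the following, using scikit-learn with hypothetical stand-in features and labels (real inputs would be per-crown hyperspectral reflectances and field-verified species).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-in data: rows are tree crowns, columns are
# hyperspectral band reflectances; labels 1 = aspen, 0 = other.
rng = np.random.default_rng(42)
X = rng.random((300, 50))
y = rng.integers(0, 2, 300)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Random Forest": RandomForestClassifier(n_estimators=500),
    "Gradient Boosting": GradientBoostingClassifier(),
}
for name, model in models.items():
    # 5-fold cross-validated accuracy for each classifier
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```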

The results will show how accurately aspen can be detected from the forest canopy and which spectral bands are most important for aspen detection. This information can then be utilized for large-scale aspen detection from satellite images.

Read more at https://ui.adsabs.harvard.edu/abs/2020EGUGA..2221268K/abstract

Saturday, September 19, 2020

Utilizing Airborne LiDAR and UAV Photogrammetry Techniques in Local Geoid Model Determination and Validation


This investigation evaluates the performance of Digital Terrain Models (DTMs) generated in different vertical datums by airborne LiDAR and UAV (Unmanned Aerial Vehicle) photogrammetry techniques for the determination and validation of local geoid models.

Many engineering projects require point heights referenced to a physical surface, i.e., the geoid, rather than an ellipsoid. When a high-accuracy local geoid model is available in the study area, physical heights are obtained in practice by transforming the Global Navigation Satellite System (GNSS) ellipsoidal heights of the points.

Besides the commonly used geodetic methods, this study introduces a novel approach for determining and validating local geoid surface models using photogrammetry. The numeric tests were carried out in the Bergama region in the west of Turkey. Using directly georeferenced airborne LiDAR and indirectly georeferenced UAV photogrammetry-derived point clouds, DTMs were generated in ellipsoidal and geoidal vertical datums, respectively.

After this, the local geoid models were calculated as the differences between the generated DTMs. The resulting local geoid models, in grid and pointwise formats, were tested against the regional gravimetric geoid model (TG03) and a high-resolution global geoid model (EIGEN6C4). In conclusion, the applied approach provided sufficient performance for modeling and validating geoid heights with centimeter-level accuracy.
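At its core, the approach rests on the standard relation N = h - H between the geoid undulation N, the ellipsoidal height h, and the orthometric height H. A minimal sketch of that differencing step, with hypothetical grids standing in for the real co-registered DTMs, might be:

```python
import numpy as np

def local_geoid_model(dtm_ellipsoidal, dtm_orthometric):
    """Pointwise geoid undulation N = h - H from two co-registered
    DTM grids: h from the LiDAR DTM (ellipsoidal datum) and H from
    the UAV-photogrammetric DTM (geoidal datum). Both grids must
    share the same extent, resolution, and alignment."""
    return dtm_ellipsoidal - dtm_orthometric

# Hypothetical 3x3 grids (metres) standing in for real DTM rasters.
h = np.array([[38.12, 38.10, 38.09],
              [38.11, 38.08, 38.07],
              [38.10, 38.06, 38.05]])
H = np.array([[1.95, 1.93, 1.93],
              [1.94, 1.92, 1.91],
              [1.93, 1.90, 1.90]])
N = local_geoid_model(h, H)
print(N)  # grid of local geoid heights, roughly 36.1-36.2 m here
```

Validation against TG03 or EIGEN6C4 then amounts to differencing N with the reference model interpolated to the same grid or points.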

Read more at https://www.researchgate.net/publication/344146054_Utilizing_Airborne_LiDAR_and_UAV_Photogrammetry_Techniques_in_Local_Geoid_Model_Determination_and_Validation

Saturday, May 2, 2020

UAV Photogrammetry for topographic monitoring of coastal areas


Coastal areas suffer degradation due to the action of the sea and other natural and human-induced causes.

Topographical changes in beaches and sand dunes need to be assessed, both after severe events and on a regular basis, to build models that can predict the evolution of these natural environments.

This is an important application for airborne Light Detection and Ranging (LIDAR), and conventional photogrammetry is also used in regular monitoring programs for sensitive coastal areas.

This paper analyses the use of UAVs (Unmanned Aerial Vehicles) to map and monitor sand dunes and beaches. A very light aircraft equipped with an inexpensive, non-metric camera was used to acquire images with ground resolutions better than 5 cm.

The Agisoft PhotoScan software was used to orient the images, extract point clouds, build a digital surface model (DSM), and produce orthoimage mosaics. The processing, which includes automatic aerial triangulation with camera calibration and subsequent model generation, was mostly automated.

To achieve the best positional accuracy for the whole process, signalised ground control points were surveyed with a differential GPS (Global Positioning System) receiver. Two very sensitive test areas on the Portuguese northwest coast were analysed.

Detailed DSMs were obtained with 10 cm grid spacing and vertical accuracy (RMS) ranging from 3.5 to 5.0 cm, which is very similar to the image ground resolution (3.2–4.5 cm). Where it was possible to assess, the planimetric accuracy of the orthoimage mosaics was found to be subpixel.
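Accuracy figures like those quoted above are typically computed from independently surveyed check points; a minimal sketch of such an RMS computation, with hypothetical values, might be:

```python
import numpy as np

def vertical_rms(dsm_heights, checkpoint_heights):
    """Root-mean-square vertical error of DSM heights sampled at
    independently surveyed check points (all values in metres)."""
    diff = np.asarray(dsm_heights) - np.asarray(checkpoint_heights)
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical GNSS check-point heights vs. DSM heights sampled
# at the same planimetric locations.
surveyed = [4.212, 3.587, 5.904, 2.331, 6.458]
from_dsm = [4.248, 3.551, 5.862, 2.367, 6.497]
print(f"RMS = {vertical_rms(from_dsm, surveyed) * 100:.1f} cm")
```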

Within the regular coastal monitoring programme carried out in the region, UAVs can replace many of the conventional flights, with considerable savings in data acquisition costs and no loss in the quality of the topographic and aerial imagery data.

Read more:


Saturday, June 25, 2016

INAER Spain to use UAVs for Fire Spotting


During a presentation at the UNVEX16 event at Madrid-Cuatro Vientos Airport, Spain, José Luis Saiz, INAER Spain's director of research and development, explained that INAER Spain's extensive experience using EO/IR, SAR, and LIDAR sensors for surveillance services will allow it to become the first European operator to perform surveillance and observation activities in firefighting missions, not only in daytime but also at night. (Read more)

Friday, October 2, 2015

PlexiDrone: The Pocket-UAV


As consumer UAVs become more and more popular, we are starting to see more options. In this case, we are talking about DreamQii's PlexiDrone.


Designed for aerial photography and videography, the PlexiDrone was born out of feedback from filmmakers and photographers who wanted a portable drone for aerial footage capture.


The four propellers and landing gear can be attached to the main body in about a minute, or less if you work quickly. The components are designed to snap in and lock on without any tools, and can be disassembled just as quickly for portability. DreamQii also cleverly designed them so that you can't accidentally attach a propeller in the wrong section, making assembly foolproof (DreamQii says it's impossible to put together wrongly).


Attach the proprietary Bluetooth wireless router that communicates with your smartphone or tablet at ranges of up to 1 mile, and you're ready to go. The battery only lasts between 15 and 35 minutes, so you may need to keep a charger or extra battery handy if you plan to fly longer than that.

The PlexiDrone doesn't come with a built-in camera; instead, the user supplies one. It is compatible with most cameras weighing less than 1 kilogram (2.2 pounds). It'll handle small action cams like those from GoPro and Sony, as well as compact mirrorless cameras, 360-degree panorama cameras, thermal cameras, and LIDAR scanners; you can even attach a claw to hold something light. And unlike other drones, DreamQii says the retractable landing gear and the camera's positioning allow for an unobstructed 360-degree field of view, so you won't have to crop anything out of a scene later.

You also won't have to worry about trespassing into drone-prohibited territory, as the PlexiDrone has geofencing built in: without user input, the PlexiDrone's software uses known data on where it can and cannot fly, and will avoid (or rather, prevent you from) flying in those areas.
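DreamQii has not published how its geofencing works, but conceptually it reduces to testing the drone's position against known no-fly polygons. A toy ray-casting check (all names and coordinates hypothetical) might look like this:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: returns True if (x, y) lies inside the
    polygon given as a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal ray from (x, y)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical no-fly zone (e.g., around an airport), in local
# planar coordinates; a real system would use geodetic data.
no_fly = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(5, 5, no_fly))   # True  -> flight blocked
print(point_in_polygon(15, 5, no_fly))  # False -> flight allowed
```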


Klever Freire, DreamQii's CEO and cofounder, says the PlexiDrone is designed to be flexible. Want a larger payload? In the near future, you could swap in more powerful propellers and attach a camera gimbal for a DSLR or cinema camcorder. Accidentally crash and break one of the propellers? Instead of replacing the whole unit, you can just replace the part you need.

The PlexiDrone is easily controlled through the PlexiGCS software for iOS and Android; no expertise is needed. Through GPS and the 3D map in the app, you simply draw a flight path for the drone and tell it what to do. There's also a "GPS follow me" feature, where the PlexiDrone automatically follows and films you without manual control. The wireless router, called the PlexiHub, also lets you create and control a swarm of PlexiDrones, so a single pilot can capture footage from multiple drones at once. DreamQii says swarm technology also lets you "accomplish goals like following search grid patterns or surveying larger surface areas."


You can also control the attached camera via the app, so you won't need to switch between apps or have a second pilot. Ultrasonic sensors on the PlexiDrone will alert the unit if there's an obstacle in its path during flight. Users can also pilot the PlexiDrone with a remote-control unit if they wish. Instead of LED lights, the PlexiDrone uses customizable voice prompts to give you status reports (you can even add theme music, letting you personalize your drone).

Tuesday, January 28, 2014

GeoDragon: 3D ISR


A sensor system designed to create 3-D reconstructions in near-real time and output high-resolution digital elevation models, LIDAR-like datasets, and wide area maps has been flown on an Arcturus T-20 Tier II UAV.

The system, called GeoDragon, is enclosed in a wing-mounted pod and is capable of high-resolution 2-D and 3-D image capture. Urban Robotics in Portland, OR, which designed the sensor, says its "unique aspects" include low weight (equating to longer endurance and loiter time), a small operational footprint on the ground (one or two full-size pickup trucks), a quick mount/dismount pod, quiet operation (the T-20 uses a modified 4-stroke engine), and the ability to fly simultaneously with other payloads on the T-20, such as EO/IR.

GeoDragon imagery is post-processed using automated 3-D algorithms to rapidly generate large mapping and modeling datasets. Urban Robotics develops software and hardware solutions for 3-D ISR, remote sensing and geospatial applications, including collection, post-processing, and data management. The aircraft was built by Arcturus UAV in Rohnert Park, CA. Urban Robotics says the GeoDragon adds significant 3-D imaging and mapping capabilities to the T-20 UAV. The system is scheduled to be released in mid-2014.

Wednesday, March 20, 2013

EMT offers LIDAR fit for UAV sense and avoid


EMT is offering to equip its unmanned air vehicles with a combination of an ADS-B transponder and a light detection and ranging (LIDAR) sensor to provide operators with an enhanced sense and avoid capability. (Read more)