Thursday, December 31, 2020

UAVs for 3D Animation: CamFly Films


CamFly Films was founded in 2014 to provide UAV-based photography and video services.

Its founder, Serge Kouperschmidt, has more than 30 years of experience in video production and has worked as a cameraman and director of photography for the film industry around the world.

CamFly Films Ltd. is based in London and is a UAV operator certified by the CAA (Civil Aviation Authority). It offers professional photography and filming services both in London and anywhere in the UK.

https://www.youtube.com/watch?v=LHSDlY_IkLE

From magnificent cinematic aerial footage such as that shown in the link above, to industrial inspections, aerial mapping services, roof surveys, and topographic reports, the company provides tailored services backed by its extensive flight experience.

Its most requested activities include construction surveying and monitoring, filming the daily progress of a build. Also noteworthy are thermal imaging, photogrammetry, orthophotography, UAV mapping, 360° aerial photography, and 3D modeling from photographs taken with UAVs.

In addition to its standard PFCO (Permission for Commercial Operations), CamFly Films holds an OSC (Operating Safety Case). This permission, which is especially difficult to obtain, allows them to legally fly at a shorter distance (10 meters from the target), at a higher altitude (188 meters), and beyond the visual line of sight. Unlike the vast majority of other UAV operators, they can therefore operate with full efficiency in the heart of London.

CamFly Films is also a video production company offering photography, videography, and all associated filming services: shooting stunning 4K video with state-of-the-art cameras, editing, color grading, adding music, titles, voice-overs, visual effects, etc.

Sunday, December 27, 2020

Clifford Geometric Algebra-Based Approach for 3D Modeling of Agricultural Images Acquired by UAVs



Three-dimensional image modeling is essential in many scientific disciplines, including computer vision and precision agriculture.

So far, various methods of creating three-dimensional models have been considered. However, the processing of the transformation matrices of the input image data is not controlled.

Site-specific crop mapping is essential because it helps farmers determine yield, biodiversity, energy, crop coverage, etc. Clifford geometric algebra-based approaches to signal and image processing have become increasingly important in recent years.

Geometric algebra treats multi-dimensional signals in a holistic way, maintaining the relationships between dimensions and preventing loss of information. This article uses agricultural images acquired by UAVs to construct three-dimensional models using Clifford geometric algebra. The qualitative and quantitative performance evaluation results show that Clifford geometric algebra can generate a three-dimensional geometric statistical model directly from UAVs' RGB (Red Green Blue) images.

Through Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and visual comparison, the proposed algorithm's performance is compared with the latest algorithms. Experimental results show that the proposed algorithm outperforms other leading 3D modeling algorithms.
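For reference, PSNR, the first of the metrics cited above, is simple to compute from the mean squared error between two images. A minimal Python/NumPy sketch of the generic formula (not the paper's evaluation code):

```python
import numpy as np

def psnr(reference, reconstructed, max_value=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

# Toy example: an 8-bit "reference" image and a slightly noisy reconstruction.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(reference.astype(int) + rng.integers(-5, 6, size=reference.shape), 0, 255)
print(f"{psnr(reference, noisy):.1f} dB")
```

SSIM is more involved (it compares local luminance, contrast, and structure); ready-made implementations exist, for example in scikit-image.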

Read more:

https://www.researchgate.net/publication/347679848_Clifford_Geometric_Algebra-Based_Approach_for_3D_Modeling_of_Agricultural_Images_Acquired_by_UAVs


Thursday, December 24, 2020

DJI Phantom 3 for 3D Animation with PBR textures


PBR (Physically Based Rendering) textures refer to a rendering technique that computes the lighting of a 3D scene based on how light behaves in real life.

100% realism has not yet been achieved, but this technique computes how light is reflected, and the shadows that objects cast, far more realistically than was possible in the past.

These textures simplify the work of applying materials and can be used on most platforms. PBR textures carry information about the level of detail, the color of the material, polygon displacement, the amount of reflection, surface detail, and other properties such as transparency, refraction, curvature, polygon position, etc.
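As an illustration of how those texture channels feed a lighting calculation, here is a deliberately simplified per-pixel shading sketch in Python/NumPy. It uses a Lambertian diffuse term plus a Blinn-Phong-style highlight whose width is driven by the roughness value; real PBR engines use more sophisticated BRDFs, and all the input values below are made up for the example:

```python
import numpy as np

# Minimal sketch (not any engine's actual shader): how PBR-style texture
# channels (base color, roughness) feed a per-pixel lighting calculation.
def shade(base_color, roughness, normal, light_dir, view_dir):
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)          # half vector
    diffuse = max(np.dot(n, l), 0.0)             # Lambertian term
    shininess = 2.0 / max(roughness ** 2, 1e-4)  # rougher surface -> wider highlight
    specular = max(np.dot(n, h), 0.0) ** shininess
    return np.clip(base_color * diffuse + specular, 0.0, 1.0)

pixel = shade(base_color=np.array([0.8, 0.2, 0.2]),  # red-ish albedo from the color map
              roughness=0.5,                          # from the roughness map
              normal=np.array([0.0, 0.0, 1.0]),       # from the normal map
              light_dir=np.array([0.0, 0.5, 1.0]),
              view_dir=np.array([0.0, 0.0, 1.0]))
print(pixel)
```

In a full renderer this evaluation runs per pixel, with each input sampled from the corresponding PBR texture map.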

This 3D animation video shows the possibilities of this technique combined with images captured by UAVs, more specifically by a DJI Phantom 3.

Video link:

https://www.youtube.com/watch?v=64fYOyrNN0c&list=PL2UsAzNdeUau_YvGOi-JBwXIGvKwhEAMn


Wednesday, December 23, 2020

TVP: UAVs for 3D animation


TVP is one of the UK's leading video production companies and has earned a well-deserved reputation for creativity and excellence from its beginnings in 1983 to the present day.

Based in Aberdeen, TVP delivers video productions of the highest quality and 3D animations using the latest available production technology. TVP's crews shoot in every format, from HD video to RED Digital Cinema 5K RAW.

Its staff are trained to film on land, offshore, and in the air using UAVs. TVP's creative team handles the entire production process, from the initial concept through scripting, production management and shooting, to post-production and final delivery in any format.

More information: http://tvpstudios.tv/


Friday, December 18, 2020

3D Mapping and Modeling Market Global Forecast to 2025



The global 3D mapping and modeling market size is expected to grow from USD 3.8 billion in 2020 to USD 7.6 billion by 2025, at a Compound Annual Growth Rate (CAGR) of 15.0% during the forecast period.

High demand for 3D animation in mobile applications, games, and movies for an enhanced viewing experience, technological advancements in 3D scanners and 3D sensors, and the increasing availability of 3D content are expected to drive the growth of the market.

Stringent government regulations, lack of investment, and the impact of COVID-19 on the global economy are among the major challenges in the market. Increasing corruption and piracy concerns and high technology and installation costs are among its key restraining factors.
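As a sanity check, the quoted 15.0% CAGR follows directly from the two market-size figures above (the market doubles over five years):

```python
# Reproducing the forecast arithmetic quoted above (figures from the post):
start, end, years = 3.8, 7.6, 5  # USD billions, 2020 -> 2025

cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # prints CAGR: 14.9%, which rounds to the quoted 15.0%
```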

Read more:

https://www.marketsandmarkets.com/Market-Reports/3d-mapping-market-819.html

Sunday, December 13, 2020

Mobile 3D mapping for surveying earthwork projects using an Unmanned Aerial Vehicle (UAV) system



Unmanned Aerial Vehicles (UAVs) as a data acquisition platform and as a measurement instrument are becoming attractive for many surveying applications in civil engineering.

Their performance, however, is not well understood for these particular tasks. The scope of the presented work is the performance evaluation of a UAV system that was built to rapidly and autonomously acquire mobile 3D mapping data.

Details of the components of the UAV system (hardware and control software) are explained. A novel program for photogrammetric flight planning, and its execution for generating 3D point clouds from mobile digital images, is also described.

A performance model for estimating the position error was developed and tested in several realistic construction environments. Test results are presented as they relate to large excavation and earth moving construction sites.

The experiences with the developed UAV system are useful to researchers and practitioners seeking to successfully adapt UAV technology to their applications.

Read more:

https://www.researchgate.net/publication/260270622_Mobile_3D_mapping_for_surveying_earthwork_projects_using_an_Unmanned_Aerial_Vehicle_UAV_system

Tuesday, December 8, 2020

Advantages of embedding electronic components inside PCBs when designing and manufacturing electronics for UAVs



Today, one of the greatest challenges facing the military aerospace industry lies in the design and manufacture of small aerial platforms governed by artificial intelligence, such as smart micro-missiles, micro-UAVs, and nano-UAVs.

To deliver the required functionality, the designers of the corresponding electronic circuits use more and more components, which calls for PCBs with a larger surface area and creates a performance ceiling dictated by the available space. Electronic circuit manufacturing therefore needs to be reinvented.

One way to save space is to embed components in the inner layers of the PCB, and this is already possible with Nano Dimension's AME technology, which opens the door to a world of new capabilities thanks to the integration of active and passive components inside PCBs, as shown in this video:

https://www.youtube.com/watch?v=E8GeucfOCJU&feature=emb_logo


More information:

https://integral3dprinting.com/nano-dimension-dragonfly/

Monday, December 7, 2020

LIDAR sheds new light on plant phenomics for plant breeding and management: Recent advances and future prospects



Plant phenomics is a new avenue for linking plant genomics and environmental studies, thereby improving plant breeding and management. Remote sensing techniques have improved high-throughput plant phenotyping. However, the accuracy, efficiency, and applicability of three-dimensional (3D) phenotyping are still challenging, especially in field environments.

LIDAR (Light Detection And Ranging) provides a powerful new tool for 3D phenotyping, with facilities and algorithms developing rapidly. Numerous efforts have been devoted to studying static and dynamic changes of structural and functional phenotypes using LIDAR in agriculture. This progress also improves 3D plant modeling across spatial–temporal scales and disciplines, providing easier and less expensive association with genes and analysis of environmental practices, and affords new insights into breeding and management.

Beyond agriculture phenotyping, LIDAR shows great potential in forestry, horticultural, and grass phenotyping. Although LIDAR has led to remarkable improvements in plant phenotyping and modeling, the synthesis of LIDAR-based phenotyping for breeding and management has not been fully explored. In this study, the authors identify three main challenges in LIDAR-based phenotyping development: 1) developing low-cost, high spatial–temporal, and hyperspectral LIDAR facilities, 2) moving into multi-dimensional phenotyping with an endeavor to generate new algorithms and models, and 3) embracing open source and big data.

Read more:

https://www.sciencedirect.com/science/article/pii/S0924271620303130?dgcid=rss_sd_all

Sunday, December 6, 2020

Developing a strategy for precise 3D modelling of large-scale scenes for VR



In this work, a methodology is presented for precise 3D modelling and multi-source geospatial data blending for the purposes of Virtual Reality immersive and interactive experiences. It has been evaluated on the volcanic island of Santorini due to its formidable geological terrain and the interest it poses for scientific and touristic purposes.

The methodology developed here consists of three main steps. Initially, bathymetric and SRTM (Shuttle Radar Topography Mission) data are scaled down to match the smallest resolution of the dataset. Afterwards, the resulting elevations are combined based on the slope of the relief, while considering a buffer area to enforce a smoother terrain. As a final step, the orthophotos are combined with the estimated DTM (Digital Terrain Model) by applying a nearest-neighbour matching scheme, leading to the final terrain background.

In addition to this, both onshore and offshore points-of-interest were modelled via image-based 3D reconstruction and added to the virtual scene. The overall geospatial data that need to be visualized in applications demanding phototextured hyper-realistic models pose a significant challenge. The 3D models are treated via a mesh optimization workflow, suitable for efficient and fast visualization in virtual reality engines, through mesh simplification, physically based rendering texture maps baking, and level-of-details. 

Read more at https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIII-B4-2020/567/2020/isprs-archives-XLIII-B4-2020-567-2020.pdf

Saturday, December 5, 2020

Advantages of using UAVs in criminal investigation



In this post we will look at the use of UAVs in criminal investigation.

Photogrammetry software and on-site 3D scanning are undoubtedly already being used successfully to document crime scenes, but UAVs offer important advantages, discussed below.

1. Time savings

When a crime occurs, it is in everyone's interest to clear the area as soon as possible, but the scene must be documented first.

The instruments most frequently used by criminal investigation teams are 3D scanners, total stations, and digital photography, or a combination of the three, in order to collect data and create the 3D point cloud of the crime scene.

However, these methods can require a great deal of time and trained personnel, which may not always be available. On top of that, the surroundings of the crime scene can offer a great deal of very useful information that can only be perceived from above.

For documenting a crime scene from a certain height and over a wide area, UAVs are very useful because they can easily cover larger distances at a convenient altitude, achieving faster and more accurate coverage and cutting the time required to accurately document the crime scene by 60 to 80%.

2. Cost savings

Closing off an area and investigating a crime scene requires human labour, with a cost directly proportional to the time spent.

By contrast, using UAVs as an imaging tool costs far less, and the job can be completed in much less time using automated flight planners.

With these instruments, it is relatively inexpensive to accurately document a crime scene and speed up the process, especially in situations with extreme constraints on time, personnel, or other alternative equipment.

Furthermore, 3D data acquisition with UAVs makes it possible to capture data where a laser scanner or total station cannot reach, for example where there are insurmountable obstacles.

3. The results constitute documentary evidence

Ultimately, the purpose of collecting data is none other than to provide evidence that can be presented in court.

Often, the lack of data caused by the human impossibility of accessing certain areas makes it impossible to present evidence supporting a suspicion. This is why the data collected by a UAV in a single flight, combined with suitable photogrammetry software, can constitute the definitive evidence to rule out or confirm a murder or a suicide, and to confirm or rule out the guilt of a suspect.



Sunday, November 29, 2020

Development and Evaluation of a UAV-Photogrammetry System for Precise 3D Environmental Modeling



The specific requirements of UAV photogrammetry call for particular system-development solutions, which have mostly been ignored or inadequately assessed in recent studies.

Accordingly, this paper presents the methodological and experimental aspects of correctly implementing a UAV-photogrammetry system. The hardware of the system consists of an electric-powered helicopter, a high-resolution digital camera and an inertial navigation system.

The software of the system includes the in-house programs specifically designed for camera calibration, platform calibration, system integration, on-board data acquisition, flight planning and on-the-job self-calibration. The detailed features of the system are discussed, and solutions are proposed in order to enhance the system and its photogrammetric outputs.

The developed system is extensively tested for precise modeling of the challenging environment of an open-pit gravel mine. The accuracy of the results is evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points.

Additionally, the effects of imaging configuration and network stability on modeling accuracy are assessed. The experiments demonstrated that 1.55 m horizontal and 3.16 m vertical absolute modeling accuracy could be achieved via direct georeferencing, which improved to 0.4 cm and 1.7 cm after indirect georeferencing.
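For context, "absolute modeling accuracy" figures like these are typically reported as RMSE over check-point residuals. A short Python/NumPy illustration of that computation, using invented residuals rather than the paper's data:

```python
import numpy as np

# Illustrative only: horizontal and vertical accuracy summarized as RMSE over
# check-point residuals (model coordinates minus ground truth). The residuals
# below are made up, not the paper's measurements.
residuals = np.array([            # [dE, dN, dU] per check point, metres
    [ 0.9, -1.1,  2.8],
    [-1.3,  0.7, -3.4],
    [ 1.0,  1.2,  3.1],
    [-0.8, -0.9, -2.9],
])

horizontal_rmse = np.sqrt(np.mean(residuals[:, 0] ** 2 + residuals[:, 1] ** 2))
vertical_rmse = np.sqrt(np.mean(residuals[:, 2] ** 2))
print(f"horizontal RMSE = {horizontal_rmse:.2f} m, vertical RMSE = {vertical_rmse:.2f} m")
```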

Read more:

https://www.researchgate.net/publication/283328189_Development_and_Evaluation_of_a_UAV-Photogrammetry_System_for_Precise_3D_Environmental_Modeling

Saturday, November 28, 2020

Nano Dimension redefines the design and manufacture of electronics for military micro-UAVs



When we set out to design and manufacture electronic circuits for military micro-UAVs intended for covert operations, we run into challenges that are difficult or impossible to overcome with conventional manufacturing technologies.

What can we do when the space left for the electronics is insufficient to house the printed circuits? Do we sacrifice capabilities? Do we increase the size of the micro-UAV? It is a real dilemma.

Fortunately for designers and manufacturers of military electronics, it is now possible to apply additive manufacturing to the design and fabrication of electronic circuits.

The technology is very simple, but many years of research were needed to achieve the precision and repeatability required for this type of application.

As you might be imagining, the technology was developed in Israel, in this case by the firm Nano Dimension (Nasdaq, TASE: NNDM), whose 3D printers for electronics work by jetting dielectric and conductive materials simultaneously, layer by layer, without any geometric limitations.

You will see it more clearly in this video:

https://www.youtube.com/watch?v=P4NFf42b04E&feature=emb_logo

Coastal Mapping using DJI Phantom 4 RTK in Post-Processing Kinematic Mode



Topographic and geomorphological surveys of coastal areas usually require the aerial mapping of long and narrow sections of littoral.

The georeferencing of photogrammetric models is generally based on the signalization and survey of GCPs (Ground Control Points) which are very time-consuming tasks.

Direct georeferencing with high camera location accuracy due to on-board multi-frequency Global Navigation Satellite System (GNSS) receivers can limit the need for GCPs.

Recently, DJI has made available the Phantom 4 Real-Time Kinematic (RTK) (DJI-P4RTK) which combines the versatility and the ease of use of previous DJI Phantom models with the advantages of a multi-frequency on-board GNSS receiver.

In this paper, the authors have investigated the accuracy of both photogrammetric models and Digital Terrain Models (DTMs) generated in Agisoft Metashape from two different image datasets (nadiral and oblique) acquired by a DJI-P4RTK.

Camera locations were computed with the Post-Processing Kinematic (PPK) of the Receiver Independent Exchange Format (RINEX) file recorded by the aircraft during flight missions. A Continuously Operating Reference Station (CORS) located at a 15 km distance from the site was used for this task.

The results highlighted that the oblique dataset produced very similar results, with GCPs (3D RMSE = 0.025 m) and without (3D RMSE = 0.028 m), while the nadiral dataset was affected more by the position and number of the GCPs (3D RMSE from 0.034 to 0.075 m).

The introduction of a few oblique images into the nadiral dataset without any GCP improved the vertical accuracy of the model (Up RMSE from 0.052 to 0.025 m) and can represent a solution to speed up the image acquisition of nadiral datasets for PPK with the DJI-P4RTK and no GCPs.
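A note on the "3D RMSE" metric used above: it combines the per-axis (East, North, Up) RMSE components in quadrature. A one-line illustration with hypothetical component values, not the paper's:

```python
import math

# Sketch of the relationship between per-axis RMSEs and the combined "3D RMSE"
# reported in accuracy assessments like the one above (values are hypothetical).
east_rmse, north_rmse, up_rmse = 0.012, 0.015, 0.025   # metres

rmse_3d = math.sqrt(east_rmse**2 + north_rmse**2 + up_rmse**2)
print(f"3D RMSE = {rmse_3d:.3f} m")
```

This is why improving the Up component alone (as the added oblique images do) directly lowers the combined 3D figure.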

Moreover, the results of this research are compared to those obtained in RTK mode for the same datasets. The novelty of this research is the combination of a multitude of aspects regarding the DJI Phantom 4 RTK aircraft and the subsequent data processing strategies for assessing the quality of photogrammetric models, DTMs, and cross-section profiles.

Read more:

https://www.researchgate.net/publication/340328284_Coastal_Mapping_using_DJI_Phantom_4_RTK_in_Post-Processing_Kinematic_Mode

Friday, November 27, 2020

UAVs in Industry 4.0: 3D scanning, topology optimization, and digital twins for redesigning UAVs and manufacturing them through additive manufacturing


The Australian company Silvertone develops, designs, and manufactures unmanned aerial vehicles with flexible payload capabilities.

One of its unmanned aircraft systems, the Flamingo Mk3, carries a heavy package of telemetry equipment and sensors.

To improve flight efficiency, the mount that attaches the equipment package to the fuselage and supports the landing gear had to be redesigned to reduce weight while preserving mechanical performance.

The existing design went through topology optimization until a free-form organic design was obtained that satisfied the required load and boundary conditions.

The geometry resulting from topology optimization is generally not well suited to traditional manufacturing methods. Additive manufacturing, however, builds components layer by layer and imposes no limits on how complex a geometry can be.

Amiga Engineering was contracted to manufacture the topology-optimized component. The part was printed in Gr23 titanium on a 3D Systems ProX DMP machine. The manufactured component achieved a significant weight reduction of 800 grams from the original 4 kilograms, along with better balance and stiffness, greater safety, longer flight times, higher payload capacity, and better battery efficiency.

The metrology service provider Scan-Xpress was contracted to measure the manufactured component using the high-resolution GOM ATOS Q scanner and to perform critical quality-control checks before installation. GOM's ATOS Q optical measuring system is well suited to measuring organic, free-form surfaces generated by topology optimization, including the package mount.

The ATOS Q sensor projects a pattern of fringes that shift in phase as they sweep across the measurement surface, collecting millions of points to generate an accurate three-dimensional model. The sensor was placed at different positions around the component until the entire surface was precisely defined and captured. From the resulting point cloud, a 3D mesh in STL format, known as a digital twin, was created. The digital twin was compared with the CAD model and the differences were recorded.

The captured information gave Amiga Engineering the ability to validate its production methods and simulations. The captured data also provided input for adjusting the additive manufacturing process parameters for future production runs. This feedback, made possible by the quality of the generated data, was a deciding factor in Amiga Engineering's purchase of the first ATOS Q in Australia.
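The scan-versus-CAD comparison step can be pictured as a nearest-neighbour deviation analysis between two point clouds. A toy Python/NumPy sketch with synthetic data (metrology software such as GOM's uses far more sophisticated surface-based methods):

```python
import numpy as np

# Toy sketch of the digital-twin comparison described above: for each scanned
# point, find the distance to the nearest point of the reference (CAD-derived)
# cloud and report deviation statistics. All data here is synthetic.
rng = np.random.default_rng(1)
reference = rng.uniform(0, 100, size=(500, 3))                 # mm, stand-in for the CAD surface
scan = reference + rng.normal(0, 0.05, size=reference.shape)   # scanned copy with ~50 µm noise

# Brute-force nearest neighbour (fine for small clouds; real tools use KD-trees).
d2 = ((scan[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
deviation = np.sqrt(d2.min(axis=1))

print(f"mean deviation {deviation.mean():.3f} mm, max {deviation.max():.3f} mm")
```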

Sunday, November 22, 2020

3D printing to conceal IoT/WiFi access points in nano-UAVs



Users and manufacturers of nano-UAVs are constantly being pushed to demand and add new capabilities to the final product, and among those capabilities the ones related to the IoT (Internet of Things) deserve special mention.

The Israeli firm Nano Dimension Ltd. has demonstrated that it is possible to manufacture 3D-printed IoT/WiFi communication devices that nano-UAV OEMs can add to their final product.

Manufacturing speed sets these new devices apart: Nano Dimension claims they can be ready to operate in just 18 hours, a production speed 90 percent faster than with conventional methods.

More information:

https://www.nano-di.com/capabilities-and-use-cases

Digital Innovations in European Archaeology


 

European archaeologists in the last two decades have worked to integrate a wide range of emerging digital tools to enhance the recording, analysis, and dissemination of archaeological data.

These techniques have expanded and altered the data collected by archaeologists as well as their interpretations. At the same time archaeologists have expanded the capabilities of using these data on a large scale, across platforms, regions, and time periods, utilising new and existing digital research infrastructures to enhance the scale of data used for archaeological interpretations.

This Element discusses some of the most recent, innovative uses of these techniques in European archaeology at different stages of archaeological work. In addition to providing an overview of some of these techniques, it critically assesses these approaches and outlines the recent challenges to the discipline posed by self-reflexive use of these tools and advocacy for their open use in cultural heritage preservation and public engagement.

Among these techniques, used frequently in various archaeological contexts across Europe, aerial photogrammetry utilising photographs taken by UAVs (Unmanned Aerial Vehicles) has been used to document larger landscapes, while close-range photogrammetry is becoming a ubiquitous recording tool on excavations and for historic architectural recording. The low financial entry point of photogrammetry has made it an ideal technique for archaeologists, who are often working on a shoestring budget.

Most archaeological projects are already equipped with a digital SLR (Single Lens Reflex) camera and most of the necessary software licenses for image processing are open access or available at steeply reduced educational discounts.

Read more: https://www.cambridge.org/core/elements/digital-innovations-in-european-archaeology/BDEA933427350E7D500F773A31EC9F4B/core-reader

Saturday, November 21, 2020

3D printing for electronic circuits housed in nano-UAVs



In recent years, nano-UAVs have been used as a key instrument in covert operations carried out by the CIA, the FBI, MI6, the Mossad, Sayeret Matkal, and other intelligence groups from various countries.

The ideal for ISR missions would be an instrument fitted with a suite of sensors capable of carrying out the mission, allowing the operator to see through multispectral cameras, hear all kinds of sounds both inside and outside the 20 Hz-20 kHz range, and even detect the presence of explosives, radioactive isotopes, toxic gases, etc.

Of course, such an instrument should be designed so as not to be detected at a glance by a human, passing unnoticed like an insect. OK, and what else? Because all of this requires designing very complex electronic circuits that must be housed in very small volumes of very complex geometry, and in situations like these, conventional electronic circuit design and manufacturing simply do not work.

Another way of manufacturing, and another way of designing, had to be devised. Fortunately this new way of manufacturing is now available not only for military but also for civilian use, and its acronym is AME, which stands for Additive Manufacturing for Electronics. An extraordinary technology developed in Israel by engineers at the firm Nano Dimension. Can you imagine designing electronic circuits not only in XY, but also in Z? Can you imagine being able to hide electronic components inside a PCB? And what if the PCB could have any geometry along all three axes?

It is amazing how far this technology can go. I invite you to discover it through this video:



Accuracy assessment of RTK-GNSS equipped UAV conducted as-built surveys for construction site modelling


 

Regular as-built surveys have become a necessary input for building information modelling.

Such large-scale 3D data capturing can be conducted effectively by combining structure-from-motion and UAVs (Unmanned Aerial Vehicles).

Using an RTK-GNSS (Real-Time Kinematic Global Navigation Satellite System) equipped UAV, 22 repeated weekly campaigns were conducted at two altitudes in various conditions.

The photogrammetric approach yielded 3D models, which were compared to the terrestrial laser scanning based ground truth. Better than 2.8 cm geometry RMSE (Root Mean Square Error) was consistently achieved using integrated georeferencing.

It is concluded that the RTK-GNSS based georeferencing enables reaching better than 5 cm geometry accuracy by utilising at least one ground control point.

Read more at:

https://www.tandfonline.com/doi/abs/10.1080/00396265.2020.1830544


Sunday, November 15, 2020

Nano Dimension: Assure Your Electronics Projects Confidentiality



Nano Dimension's DragonFly™ Pro Additive Manufacturing Platform for Electronics is a one-stop solution for confidentially creating high-quality 3D-printed electronics.

The system can 3D print using metals and dielectric polymers simultaneously, allowing for the manufacture of non-planar electronics, antennas, RFIDs, multilayer PCBs, Complex Geometry PCBs, and many other components.

Discover it at:

https://www.youtube.com/watch?v=MDfSrb7FQ7w



Aspen detection in boreal forests: Capturing a key component of biodiversity using airborne hyperspectral, lidar, and UAV data

The importance of biodiversity is increasingly highlighted as an essential part of sustainable forest management.

As direct monitoring of biodiversity is not possible, proxy variables have been used to indicate a site's species richness and quality. In boreal forests, European aspen (Populus tremula L.) is one of the most significant proxies for biodiversity.

Aspen is a keystone species, hosting a range of endangered species, and hence has a high importance in maintaining forest biodiversity. Still, reliable and fine-scale spatial data on aspen occurrence remains scarce and far from comprehensive. Although remote sensing-based species classification has been used for decades for the needs of forestry, commercially less significant species (e.g., aspen) have typically been excluded from the studies.

This creates a need for developing general methods for tree species classification covering also ecologically significant species. Our study area, located in Evo, Southern Finland, covers approximately 83 km2, and contains both managed and protected southern boreal forests. The main tree species in the area are Scots pine (Pinus sylvestris L.), Norway spruce (Picea abies (L.) Karst), and birch (Betula pendula and pubescens L.), with relatively sparse and scattered occurrence of aspen.

Along with a thorough field data, airborne hyperspectral and LiDAR data have been acquired from the study area. We also collected ultra high resolution UAV data with RGB and multispectral sensors. The aim is to gather fundamental data on hyperspectral and multispectral species classification, that can be utilized to produce detailed aspen data at large scale. For this, we first analyze species detection at tree-level. We test and compare different machine learning methods (Support Vector Machines, Random Forest, Gradient Boosting Machine) and deep learning methods (3D Convolutional Neural Networks), with specific emphasis on accurate and feasible aspen detection.

The results will show how accurately aspen can be detected from the forest canopy and which spectral bands are most important for detecting aspen. This information can be utilized for aspen detection from satellite images at a large scale.
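As an illustrative sketch of the tree-level comparison described above (not the authors' code or data), the three named classifier families can be benchmarked on synthetic per-tree "spectral" features, where one deliberately rare class plays the role of aspen:

```python
# Hypothetical sketch: comparing SVM, Random Forest, and Gradient Boosting
# for tree-level species classification with one rare class (the "aspen"
# stand-in). Data is synthetic; band counts and class weights are invented.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Synthetic stand-in for per-tree spectral features (e.g., band reflectances);
# class 3 is the rare "aspen" class (~5% of trees)
X, y = make_classification(n_samples=1200, n_features=30, n_informative=10,
                           n_classes=4, weights=[0.4, 0.3, 0.25, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "SVM": SVC(kernel="rbf", class_weight="balanced"),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "GBM": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # Recall on the rare class approximates the "aspen" detection rate
    rare_recall = recall_score(y_te, pred, labels=[3], average=None)[0]
    print(f"{name}: rare-class recall = {rare_recall:.2f}")
```

With a strongly imbalanced rare class, overall accuracy is a misleading target, which is why the sketch reports per-class recall instead.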

Read more at https://ui.adsabs.harvard.edu/abs/2020EGUGA..2221268K/abstract

Saturday, November 14, 2020

Deep Learning Classification of 2D Orthomosaic Images and 3D Point Clouds for Post-Event Structural Damage Assessment


 
Efficient and rapid data collection techniques are necessary to obtain transitory information in the aftermath of natural hazards, which is useful not only for post-event management and planning, but also for post-event structural damage assessment.

Aerial imaging from UAVs permits highly detailed site characterization, in particular in the aftermath of extreme events with minimal ground support, to document current conditions of the region of interest. However, aerial imaging results in a massive amount of data in the form of two-dimensional (2D) orthomosaic images and three-dimensional (3D) point clouds.

Both types of datasets require effective and efficient data processing workflows to identify the various damage states of structures. This manuscript aims to introduce two deep learning models, based on 2D and 3D convolutional neural networks respectively, to process the orthomosaic images and point clouds for post-windstorm classification.

In detail, 2D CNNs (2D Convolutional Neural Networks) are developed based on transfer learning from two well-known networks, AlexNet and VGGNet. In contrast, a 3DFCN (3D Fully Convolutional Network) with skip connections was developed and trained based on the available point cloud data. Within this study, the datasets were created based on data from the aftermath of Hurricanes Harvey (Texas) and Maria (Puerto Rico). The developed 2DCNN and 3DFCN models were compared quantitatively based on the performance measures, and it was observed that the 3DFCN was more robust in detecting the various classes.

This demonstrates the value and importance of 3D datasets, particularly the depth information, in distinguishing between instances that represent different damage states in structures.

Sunday, November 8, 2020

Empower innovation in your micro-UAVs with the technology of Nano Dimension




Meet Nano Dimension, a technology company disrupting, shaping, and defining the future of how electronics are made:

https://www.youtube.com/watch?v=P4NFf42b04E&feature=emb_logo

The products and solutions they offer are bridging today's world with the electronics of tomorrow.

Moving the industry from 2D to 3D, from initial design right through to manufacturing, with the DragonFly Pro Additive Manufacturing System, the world's first professional 3D printer for electronics, highly conductive silver nanoparticle ink, dielectric ink, and advanced 3D software.

Learn more about Nano Dimension DragonFly Pro System technology here: https://bit.ly/2LZtbkr

Learn more about additive manufacturing for electronics here: https://bit.ly/2StPkgB

Contact Nano Dimension here: https://bit.ly/2TavMLb

For more information about Nano Dimension, please click here: https://www.nano-di.com

Get the latest news from Nano Dimension: https://bit.ly/2E22Nnv



3D Fire Front Reconstruction in UAV-Based Forest-Fire Monitoring System



This work presents a new method for 3D reconstruction of the forest-fire front based on uncertain observations captured by remote sensing from UAVs within a forest-fire monitoring system.

Using multiple cameras simultaneously to capture the scene and recover its geometry, including depth, is proposed. Multi-directional observation allows perceiving and representing the volumetric nature of the fire front as well as the dynamics of the fire process.

The novelty of the proposed approach lies in the use of soft rough sets to represent the forest-fire model within the discretized hierarchical model of the terrain, and in the use of a 3D CNN (3D Convolutional Neural Network) to classify voxels within the reconstructed scene.

The developed method provides sufficient performance and good visual representation to fulfill the requirements of fire-response decision makers.
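The core operation such a voxel classifier stacks many times is a 3D convolution over the reconstructed grid. A minimal numpy sketch of that single operation (my own illustration, not the paper's network) looks like this:

```python
# Minimal sketch of one 3D convolution over a voxel grid, the building
# block of 3D-CNN voxel classification. Shapes and values are invented.
import numpy as np

def conv3d(volume, kernel):
    """Valid 3D convolution (single channel, stride 1)."""
    d, h, w = volume.shape
    kd, kh, kw = kernel.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                # Each output voxel summarizes a local 3D neighborhood
                out[z, y, x] = np.sum(volume[z:z+kd, y:y+kh, x:x+kw] * kernel)
    return out

rng = np.random.default_rng(0)
voxels = rng.random((16, 16, 16))        # e.g., occupancy/intensity per voxel
kernel = rng.standard_normal((3, 3, 3))  # one learned 3x3x3 filter
features = conv3d(voxels, kernel)
print(features.shape)  # (14, 14, 14)
```

A real 3D-CNN applies many such filters per layer and learns the kernel weights; the loops above only make the neighborhood arithmetic explicit.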

Read more at: https://ieeexplore.ieee.org/abstract/document/9204196

Monday, November 2, 2020

Method for establishing the UAV-rice vortex 3D model and extracting spatial parameters

With deepening research on the rotor wind field of UAV operations, it has become mainstream to quantify the UAV operation effect and study the distribution law of the rotor wind field via the spatial parameters of the UAV-rice interaction wind field vortex.

At present, the point cloud segmentation algorithms involved in most wind field vortex spatial parameter extraction methods cannot adapt to the instantaneous changes and indistinct boundary of the vortex. As a result, there are problems such as an inaccurate three-dimensional (3D) shape and boundary contour of the wind field vortex, as well as large errors in the vortex's spatial parameters.

To this end, this paper proposes an accurate method for establishing the UAV-rice interaction vortex 3D model and extracting vortex spatial parameters. Firstly, the original point cloud data of the wind field vortex were collected in the image acquisition area. Secondly, DDC-UL processed the original point cloud data to develop the 3D point cloud image of the wind field vortex.

Thirdly, the 3D curved surface was reconstructed and the spatial parameters were then extracted. Finally, the volume parameters and top surface area parameters of the UAV-rice interaction vortex were calculated and analyzed. The results show that the error rate of the 3D model of the UAV-rice interaction wind field vortex developed by the proposed method is kept within 2%, which is at least 13 percentage points lower than that of algorithms like PointNet.

The average error rates of the volume parameters and the top surface area parameters extracted by the proposed method are 1.4% and 4.12%, respectively. This method provides 3D data for studying the mechanism of the rotor wind field in the crop canopy through the 3D vortex model and its spatial parameters.
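The quoted error rates are relative errors between extracted and reference parameter values. A small arithmetic sketch (the reference and estimated values below are invented purely to reproduce numbers of the reported magnitude):

```python
# Hedged arithmetic sketch of how such error rates are computed from a
# reference (e.g., measured) value and the method's estimate.
# All numeric values are hypothetical.
def error_rate(estimated, reference):
    """Relative error as a percentage."""
    return abs(estimated - reference) / reference * 100.0

volume_ref, volume_est = 1.000, 0.986   # m^3, hypothetical vortex volume
area_ref, area_est = 2.50, 2.397        # m^2, hypothetical top surface area

print(f"volume error: {error_rate(volume_est, volume_ref):.2f}%")   # 1.40%
print(f"top-surface error: {error_rate(area_est, area_ref):.2f}%")  # 4.12%
```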

Read more at: http://www.ijpaa.org/index.php/ijpaa/article/view/84

Sunday, November 1, 2020

Federated Learning in the Sky: Aerial-Ground Air Quality Sensing Framework with UAV Swarms



Because air quality significantly affects human health, it is becoming increasingly important to predict the Air Quality Index (AQI) accurately and in a timely manner.

To this end, this paper proposes a new federated learning-based aerial-ground air quality sensing framework for fine-grained 3D air quality monitoring and forecasting.

Specifically, in the air, this framework leverages a lightweight Dense-MobileNet model to achieve energy-efficient, end-to-end learning from features of haze images taken by UAVs (Unmanned Aerial Vehicles) to predict the AQI scale distribution.

Furthermore, the federated learning framework not only allows various organizations or institutions to collaboratively learn a well-trained global model to monitor AQI without compromising privacy, but also expands the monitoring scope of the UAV swarms.

For the ground sensing system, a GC-LSTM (Graph Convolutional neural network-based Long Short-Term Memory) model is proposed to achieve accurate, real-time, and future AQI inference. The GC-LSTM model utilizes the topological structure of the ground monitoring stations to capture the spatio-temporal correlation of historical observation data, which helps the aerial-ground sensing system achieve accurate AQI inference.

Through extensive case studies on a real-world dataset, numerical results show that the proposed framework can achieve accurate and energy-efficient AQI sensing without compromising the privacy of raw data.
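The privacy-preserving collaboration described above can be sketched with the standard federated-averaging scheme: each organization trains locally and only model weights, never raw images, are shared and averaged. Everything below (linear local models, synthetic data) is my simplification, not the paper's Dense-MobileNet pipeline:

```python
# Minimal federated-averaging sketch. Local training uses a linear model as
# a stand-in for each organization's on-device network; data is synthetic.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=50):
    """Local gradient-descent training on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Weighted average of client models, proportional to data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])          # ground-truth relationship to recover
global_w = np.zeros(2)
clients = []
for n in (40, 60, 100):                 # three organizations, different data sizes
    X = rng.standard_normal((n, 2))
    clients.append((X, X @ true_w, n))

for _ in range(5):                      # communication rounds
    updates = [local_update(global_w, X, y) for X, y, _ in clients]
    global_w = fed_avg(updates, [n for _, _, n in clients])

print(global_w)  # converges close to [2, -1] without pooling raw data
```

Only the weight vectors cross organizational boundaries, which is the privacy property the abstract emphasizes.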

Read more: https://ieeexplore.ieee.org/abstract/document/9184079

Tuesday, October 20, 2020

DroneCaps: Recognition Of Human Actions In UAV Videos Using Capsule Networks With Binary Volume Comparisons

Understanding human actions from videos captured by UAVs is a challenging task in computer vision because of the unfamiliar viewpoints of individuals and the changes in their apparent size caused by the camera's location and motion.

This work proposes DroneCaps, a capsule network architecture for multi-label HAR (Human Action Recognition) in videos captured by UAVs. DroneCaps uses features computed by 3D convolutional neural networks plus a new set of features computed by a novel Binary Volume Comparison layer.

All these features, in conjunction with the learning power of CapsNets, allow understanding and abstracting the different viewpoints and poses of the depicted individuals very efficiently, thus improving multi-label HAR.

The evaluation of the DroneCaps architecture’s performance for multi-label classification shows that it outperforms state-of-the-art methods on the Okutama-Action dataset.

Read more at: https://ieeexplore.ieee.org/document/9190864

Monday, October 19, 2020

Desertification Grassland Classification and Three-Dimensional Convolution Neural Network Model for Identifying Desert Grassland Landforms with UAV Hyperspectral Remote Sensing Images



Based on deep learning, a Desertification Grassland Classification (DGC) and three-dimensional Convolutional Neural Network (3D-CNN) model is established.

The F-norm paradigm is used to reduce the data volume effectively while preserving the integrity of the spatial information. Through structure and parameter optimization, the accuracy of the model is further improved by 9.8%, with an overall recognition accuracy of the optimized model greater than 96.16%.

Accordingly, high-precision classification of desert grassland features is achieved, informing continued grassland remote sensing research.

Read more at: https://link.springer.com/article/10.1007/s10812-020-01001-6

Sunday, October 18, 2020

Vision-Based Obstacle Avoidance for UAVs via Imitation Learning with Sequential Neural Networks

This paper explores the feasibility of a framework for vision-based obstacle avoidance techniques applicable to UAVs (Unmanned Aerial Vehicles), where the decision-making policies are trained under the supervision of actual human flight data.

The neural networks are trained on aggregated flight data from human experts, learning the implicit policy for visual obstacle avoidance by extracting the necessary features from the images. The images and flight data are collected in a simulated environment provided by Gazebo, and the Robot Operating System (ROS) is used to provide the communication nodes for the framework.

The framework is tested and validated in various environments with four types of neural networks: fully connected neural networks, two- and three-dimensional CNNs (Convolutional Neural Networks), and RNNs (Recurrent Neural Networks). Among these, the sequential neural networks (i.e., 3D-CNNs and RNNs) provide better performance due to their ability to explicitly consider the dynamic nature of the obstacle avoidance problem.
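The reason sequential networks help is that they see short clips rather than single frames. A minimal numpy illustration of that preprocessing step (my own assumption about the pipeline, not the paper's code) is the sliding-window stacking of consecutive frames into 3D-CNN inputs:

```python
# Sketch: turning a frame sequence into overlapping short clips so a
# 3D-CNN can observe motion toward obstacles. Shapes are invented.
import numpy as np

def make_clips(frames, clip_len=4):
    """Slide a window over a frame sequence: (T, H, W) -> (T-L+1, L, H, W)."""
    T = frames.shape[0]
    return np.stack([frames[t:t + clip_len] for t in range(T - clip_len + 1)])

frames = np.arange(10)[:, None, None] * np.ones((10, 32, 32))  # 10-frame sequence
clips = make_clips(frames)
print(clips.shape)  # (7, 4, 32, 32): 7 clips of 4 frames each
```

A single-frame 2D network gets one (H, W) slice and cannot tell whether an obstacle is approaching or receding; the stacked clip makes that dynamics explicit in the input tensor.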

Read more at: https://link.springer.com/article/10.1007/s42405-020-00254-x

Monday, October 12, 2020

Tree Species Classification of UAV Hyperspectral and RGB Imagery with Deep Learning Convolutional Neural Networks

Interest in UAV solutions in forestry applications is growing.

Using UAVs, datasets can be captured flexibly and at high spatial and temporal resolutions when needed.

In forestry applications, fundamental tasks include the detection of individual trees, tree species classification, biomass estimation, etc. Deep Neural Networks (DNN) have shown superior results when compared with conventional machine learning methods such as the MLP (Multi-Layer Perceptron) on large input datasets.

The objective of this research is to investigate 3D Convolutional Neural Networks (3D-CNN) for classifying three major tree species in a boreal forest: pine, spruce, and birch. The proposed 3D-CNN models were employed to classify tree species at a test site in Finland. The classifiers were trained with a dataset of 3039 manually labelled trees, and their accuracies were then assessed using independent datasets of 803 records.

To find the most efficient feature combination, the performances of 3D-CNN models trained with HS (HyperSpectral) channels, Red-Green-Blue (RGB) channels, and the Canopy Height Model (CHM), separately and combined, were compared. It is demonstrated that the proposed 3D-CNN model with RGB and HS layers produces the highest classification accuracy. The producer accuracies of the best 3D-CNN classifier on the test dataset were 99.6%, 94.8%, and 97.4% for pines, spruces, and birches, respectively.

The best 3D-CNN classifier produced ~5% better classification accuracy than the MLP with all layers. The results suggest that the proposed method provides excellent classification results with acceptable performance for HS datasets. The pine class was detectable in most layers; spruce was most detectable in the RGB data, while birch was most detectable in the HS layers. Furthermore, the RGB datasets provide acceptable results for many low-accuracy applications.
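Producer's accuracy, the metric quoted above, is the fraction of reference samples of a class that the classifier labels correctly, i.e. the per-class diagonal of a confusion matrix divided by its row sum. A short numpy sketch (the matrix below is invented purely to reproduce rates of the quoted magnitude, it is not the paper's data):

```python
# Hedged sketch: deriving per-species producer's accuracy from a
# confusion matrix. The counts are hypothetical.
import numpy as np

# rows = reference (true) class, cols = predicted class
# class order: pine, spruce, birch
confusion = np.array([
    [498,   1,   1],
    [  8, 238,   5],
    [  3,   4, 261],
])

producer_accuracy = np.diag(confusion) / confusion.sum(axis=1)
for name, pa in zip(("pine", "spruce", "birch"), producer_accuracy):
    print(f"{name}: {pa:.1%}")
```

The column-wise counterpart (diagonal over column sums) would give the user's accuracy, i.e. precision per predicted class.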

Read more at: https://www.mdpi.com/2072-4292/12/7/1070


Sunday, October 11, 2020

Classification of Grassland Desertification in China Based on Vis-NIR UAV Hyperspectral Remote Sensing

In this study, a vis-NIR (visible-near infrared) hyperspectral remote sensing system for UAVs (Unmanned Aerial Vehicles) was used to analyze the type and presence of vegetation and soil of typical desertified grassland in Inner Mongolia using a DBN (Deep Belief Network), a 2D CNN (2D Convolutional Neural Network), and a 3D CNN (3D Convolutional Neural Network).

The results show that these typical deep learning models can effectively classify hyperspectral data on desertified grassland features. The highest classification accuracy was achieved by 3D CNN, with an overall accuracy of 86.36%. This study enriches the spatial scale of remote sensing research on grassland desertification, and provides a basis for further high-precision statistics and inversion of remote sensing of grassland desertification.

Read more: https://www.spectroscopyonline.com/view/classification-grassland-desertification-china-based-vis-nir-uav-hyperspectral-remote-sensing

Saturday, October 10, 2020

Deep Learning Classification of 2D Orthomosaic Images and 3D Point Clouds for Post-Event Structural Damage Assessment


Aerial imaging from UAVs (Unmanned Aerial Vehicles) permits highly detailed site characterization, in particular in the aftermath of extreme events with minimal ground support, to document current conditions of the region of interest.

However, aerial imaging results in a massive amount of data in the form of two-dimensional (2D) orthomosaic images and three-dimensional (3D) point clouds. Both types of datasets require effective and efficient data processing workflows to identify various damage states of structures.

This study aims to introduce two deep learning models, based on 2D and 3D convolutional neural networks respectively, to process the orthomosaic images and point clouds for post-windstorm classification. In detail, 2D CNNs (2D Convolutional Neural Networks) are developed based on transfer learning from two well-known networks: AlexNet and VGGNet.

In contrast, a 3DFCN (3D Fully Convolutional Network) with skip connections was developed and trained based on the available point cloud data. Within this study, the datasets were created based on data from the aftermath of Hurricanes Harvey (Texas) and Maria (Puerto Rico). The developed 2DCNN and 3DFCN models were compared quantitatively based on the performance measures, and it was observed that the 3DFCN was more robust in detecting the various classes. 

This demonstrates the value and importance of 3D datasets, particularly the depth information, in distinguishing between instances that represent different damage states in structures.

Read more: https://www.mdpi.com/2504-446X/4/2/24/htm

Sunday, October 4, 2020

Accurate 3D Facade Reconstruction using UAVs



Automatic reconstruction of a 3D model from images using multi-view Structure-from-Motion methods has been one of the most fruitful outcomes of computer vision.

These advances, combined with the growing popularity of Micro Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools ubiquitous for a large number of Architecture, Engineering, and Construction applications among audiences mostly unskilled in computer vision.

However, to obtain high-resolution and accurate reconstructions of a large-scale object using SfM, there are many critical constraints on the quality of the image data, which often become sources of inaccuracy because current 3D reconstruction pipelines do not help users determine the fidelity of the input data during image acquisition.

This paper presents and advocates a closed-loop interactive approach that performs incremental reconstruction in real time and gives users online feedback about quality parameters such as Ground Sampling Distance (GSD) and image redundancy on a surface mesh. It also proposes a novel multi-scale camera network design to prevent the scene drift caused by incremental map building, and releases the first multi-scale image sequence dataset as a benchmark.

Further, the system is evaluated on real outdoor scenes, showing that the interactive pipeline combined with a multi-scale camera network provides compelling accuracy in multi-view reconstruction tasks compared with state-of-the-art methods.
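GSD, one of the online feedback parameters named above, follows from standard camera geometry: the sensor width projected through the lens onto the scene, divided by the image width in pixels. A small numeric sketch (the camera values are hypothetical, chosen to resemble a common UAV camera):

```python
# Hedged sketch: Ground Sampling Distance from camera geometry.
# Sensor, lens, and distance values below are hypothetical.
def gsd_cm_per_px(sensor_width_mm, focal_mm, distance_m, image_width_px):
    """GSD = (sensor width * distance to surface) / (focal length * image width)."""
    return (sensor_width_mm * distance_m * 100.0) / (focal_mm * image_width_px)

# e.g., a 13.2 mm-wide sensor, 8.8 mm lens, 5472 px images, 30 m from the facade
print(f"{gsd_cm_per_px(13.2, 8.8, 30.0, 5472):.2f} cm/px")  # 0.82 cm/px
```

Halving the distance to the facade halves the GSD, which is why an interactive pipeline that reports it during acquisition lets the operator trade flight distance against reconstruction detail on the spot.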

More info: