tag:blogger.com,1999:blog-24878998238883828922024-03-13T11:40:44.156+01:00UAV ACTUALA Blog by David del Fresno, specialist in Additive Manufacturing since 2010David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comBlogger756125tag:blogger.com,1999:blog-2487899823888382892.post-71403858107235539742020-12-31T16:08:00.006+01:002020-12-31T16:08:56.482+01:00UAVs for 3D Animation: CamFly Films<p style="text-align: justify;"><span style="font-family: verdana;"></span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-family: verdana;"><a href="https://1.bp.blogspot.com/-e-6vV6qkVJs/X-3pWSOZESI/AAAAAAAASIY/Ruelubb2yc0np9orqeeYUv_QpGA44Wm_wCLcBGAsYHQ/s1280/CAMFLY.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1280" height="323" src="https://1.bp.blogspot.com/-e-6vV6qkVJs/X-3pWSOZESI/AAAAAAAASIY/Ruelubb2yc0np9orqeeYUv_QpGA44Wm_wCLcBGAsYHQ/w574-h323/CAMFLY.jpg" width="574" /></a></span></div><span style="font-family: verdana;"><b><br /></b></span><p></p><p style="text-align: justify;"><span style="font-family: verdana;"><b>CamFly Films</b> was founded in 2014 to provide photography and video services using <b>UAVs</b>.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Its founder, </span><b style="font-family: verdana;">Serge Kouperschmidt</b><span style="font-family: verdana;">, has more than 30 years of experience in video production and has worked as a cameraman and director of photography for the film industry around the world.</span></p><p style="text-align: justify;"><span style="font-family: verdana;"><b>CamFly Films Ltd.</b> is headquartered in <b>London</b> and is a <b>UAV</b> operator certified by the <b>CAA</b> (<b>Civil Aviation Authority</b>). 
It offers professional photography and filming services both in <b>London</b> and anywhere else in the <b>United Kingdom</b>.</span></p><p style="text-align: center;"><span style="font-family: verdana;"><a href="https://www.youtube.com/watch?v=LHSDlY_IkLE">https://www.youtube.com/watch?v=LHSDlY_IkLE</a></span></p><p style="text-align: justify;"><span style="font-family: verdana;">From magnificent cinematic aerial footage such as the example in the link above, to industrial inspections, aerial mapping services, roof surveys and topographic reports, it provides tailor-made services backed by extensive flight experience.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Among its most sought-after activities are building surveys and construction monitoring, filming the daily progress of the works. </span><span style="text-align: left;"><span style="font-family: verdana;">Also noteworthy are its services in thermal imaging, photogrammetry, orthophotography, <b>UAV</b> mapping, 360° aerial photography, and <b>3D modelling</b> from photographs taken with <b>UAVs</b>.</span></span></p><p style="text-align: justify;"><span style="text-align: left;"><span style="font-family: verdana;">In addition to its standard <b>PFCO</b> (<b>Permission for Commercial Operations</b>), </span></span><span style="font-family: verdana; text-align: left;"><b>CamFly Films</b> holds an <b>OSC</b> (<b>Operating Safety Case</b>). This permission, which is particularly difficult to obtain, allows them to fly legally at a shorter distance (10 metres from the target), at a higher altitude (188 metres) and beyond the visual line of sight. 
This means that, unlike the vast majority of other <b>UAV</b> operators, they can operate with full efficiency in the heart of <b>London</b>.</span></p><p style="text-align: justify;"><span style="text-align: left;"><span style="font-family: verdana;"><b>CamFly Films</b> is also a video production company offering photography, videography and every associated filming service: shooting stunning <b>4K</b> video with state-of-the-art cameras, editing, colour grading, and adding music, titles, voice-overs, visual effects, and more.</span></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-32714102683258993552020-12-27T18:04:00.001+01:002020-12-27T18:04:13.878+01:00Clifford Geometric Algebra-Based Approach for 3D Modeling of Agricultural Images Acquired by UAVs<p style="text-align: justify;"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-u18zm0FbMfo/X-i-glDlemI/AAAAAAAASH8/2sByYfCE_eEDjigmmyI7e6RiKCZQQvSjwCLcBGAsYHQ/s550/agriculture-08-00116-g002-550.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="312" data-original-width="550" src="https://1.bp.blogspot.com/-u18zm0FbMfo/X-i-glDlemI/AAAAAAAASH8/2sByYfCE_eEDjigmmyI7e6RiKCZQQvSjwCLcBGAsYHQ/s320/agriculture-08-00116-g002-550.jpg" width="320" /></a></div><br /><span style="font-family: verdana;"><br /></span><p></p><p style="text-align: justify;"><span style="font-family: verdana;">Three-dimensional image modeling is essential in many scientific disciplines, including computer vision and precision agriculture.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">So far, various methods of creating three-dimensional models have been considered. 
</span><span style="font-family: verdana;">However, these methods do not control the processing of the transformation matrices of the input image data.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Site-specific crop mapping is essential because it helps farmers determine yield, biodiversity, energy, crop coverage, etc. The <b>Clifford Geometric Algebra</b> view of signal and image processing has become increasingly important in recent years.</span></p><p style="text-align: justify;"><span style="font-family: verdana;"><b>Geometric Algebra</b> treats multi-dimensional signals holistically, maintaining the relationships between dimensions and preventing loss of information. This article uses agricultural images acquired by <b>UAVs</b> to construct three-dimensional models with <b>Clifford</b> geometric algebra. The qualitative and quantitative performance evaluation results show that <b>Clifford</b> geometric algebra can generate a three-dimensional geometric statistical model directly from <b>UAVs’ RGB</b> (Red Green Blue) images.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Through <b>Peak Signal-to-Noise Ratio</b> (<b>PSNR</b>), <b>Structural Similarity Index Measure</b> (<b>SSIM</b>), and visual comparison, the proposed algorithm’s performance is compared with that of the latest algorithms. 
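As a hedged illustration of the evaluation metrics named above (this is not the paper's code), PSNR and a simplified single-window variant of SSIM for two 8-bit images could be computed as follows:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(x, y, max_val=255.0):
    """Single-window SSIM over the whole image -- a simplification of the
    usual sliding-window SSIM, kept minimal here for brevity."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # standard stabilizers
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))
```

For example, a uniform error of 10 grey levels gives an MSE of 100 and hence a PSNR of 10·log10(255²/100) ≈ 28.1 dB.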
Experimental results show that the proposed algorithm outperforms other leading <b>3D modeling</b> <b>algorithms</b>.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Read more:</span></p><p style="text-align: justify;"><span style="text-align: left;"><span style="font-family: verdana;"><a href="https://www.researchgate.net/publication/347679848_Clifford_Geometric_Algebra-Based_Approach_for_3D_Modeling_of_Agricultural_Images_Acquired_by_UAVs">https://www.researchgate.net/publication/347679848_Clifford_Geometric_Algebra-Based_Approach_for_3D_Modeling_of_Agricultural_Images_Acquired_by_UAVs</a></span></span></p><p style="text-align: justify;"><span style="text-align: left;"><span style="font-family: verdana;"><br /></span></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-67692084778896894882020-12-24T17:12:00.002+01:002020-12-24T17:13:08.012+01:00Dji Phantom 3 for 3D Animation with PBR textures<p style="text-align: justify;"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-KO0J97WXiHY/X-S9QeQ6Q0I/AAAAAAAASHs/7mIalokmY3sR4kVrtJtHVsnRDohZH1wqQCLcBGAsYHQ/s1280/DJi%2BPhantom%2B3.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1280" height="225" src="https://1.bp.blogspot.com/-KO0J97WXiHY/X-S9QeQ6Q0I/AAAAAAAASHs/7mIalokmY3sR4kVrtJtHVsnRDohZH1wqQCLcBGAsYHQ/w400-h225/DJi%2BPhantom%2B3.jpg" width="400" /></a></div><span style="font-family: verdana;"><br /></span><p></p><p style="text-align: justify;"><span style="font-family: verdana;"><b>PBR</b> textures refer to a rendering technique that computes the lighting of a 3D scene based on real-world behaviour.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">100% realism has not yet been achieved, but this technique makes it possible to compute how light is reflected, and the shadows that objects cast, far more realistically than in the past.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">These textures simplify the work of applying materials and can be used on most platforms. <b>PBR</b> textures carry information about the level of detail, the colour of the material, polygon displacement, the amount of reflection, surface detail, and other properties such as transparency, refraction, curvature, polygon position, etc.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">This <b>3D Animation</b> video shows the possibilities of this technique combined with images captured by <b>UAVs</b>. More specifically, captured by a </span><span style="font-family: verdana;"><b>Dji Phantom 3</b>.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Link to the video:</span></p><p style="text-align: left;"><span style="font-family: verdana;"><a href="https://www.youtube.com/watch?v=64fYOyrNN0c&list=PL2UsAzNdeUau_YvGOi-JBwXIGvKwhEAMn">https://www.youtube.com/watch?v=64fYOyrNN0c&list=PL2UsAzNdeUau_YvGOi-JBwXIGvKwhEAMn</a></span></p><p style="text-align: left;"><span style="font-family: verdana;"><br /></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-5238553149407380202020-12-23T21:33:00.005+01:002020-12-23T21:33:55.307+01:00TVP: UAVs for 3D animation<p style="text-align: justify;"><span style="font-family: verdana;"></span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-family: verdana;"><a href="https://1.bp.blogspot.com/-JG6OiyaunZ8/X-OpkyHTNFI/AAAAAAAASHg/p5JNg5c6Pswmu05TxO-HpeaSfjXKzQ_ogCLcBGAsYHQ/s655/3D%2BANIMATION%2BUAV.JPG" imageanchor="1" style="margin-left: 1em; 
margin-right: 1em;"><img border="0" data-original-height="367" data-original-width="655" height="322" src="https://1.bp.blogspot.com/-JG6OiyaunZ8/X-OpkyHTNFI/AAAAAAAASHg/p5JNg5c6Pswmu05TxO-HpeaSfjXKzQ_ogCLcBGAsYHQ/w576-h322/3D%2BANIMATION%2BUAV.JPG" width="576" /></a></span></div><b style="font-family: verdana;"><p style="text-align: justify;"><b style="font-family: verdana;"><br /></b></p>TVP</b><span style="font-family: verdana;"> is one of the leading video production companies in the </span><b style="font-family: verdana;">United Kingdom</b><span style="font-family: verdana;"> and has earned a well-deserved reputation for creativity and excellence from its beginnings in 1983 to the present day.</span><p></p><p style="text-align: justify;"><span style="font-family: verdana;">Based in <b>Aberdeen</b>, <b>TVP</b> delivers video productions of the highest quality and <b>3D</b> <b>animations</b> using the latest available production technology. <b>TVP</b> crews shoot in every format, from <b>HD</b> video to <b>RED Digital Cinema 5K RAW</b>.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Its staff are trained to film on land, offshore and in the air using <b>UAVs</b>. 
</span><span style="font-family: verdana;"><b>TVP</b>'s creative team handles the entire production process, from the initial concept through scripting, production management and shooting, to post-production and final delivery in any format.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">More information: </span><span style="text-align: left;"><span style="font-family: verdana;"><a href="http://tvpstudios.tv/">http://tvpstudios.tv/</a></span></span></p><p style="text-align: justify;"><span style="text-align: left;"><span style="font-family: verdana;"><br /></span></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-66532558825584198372020-12-18T21:57:00.002+01:002020-12-18T21:57:27.760+01:003D Mapping and Modeling Market Global Forecast to 2025<p style="text-align: justify;"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-3uOneWjosLw/X90XoMmss1I/AAAAAAAASHE/ok6kPz4EvhIPO_fDe6npT0lXZeN3WjCpgCLcBGAsYHQ/s1348/3d-mapping-and-modeling-market-report.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="760" data-original-width="1348" height="277" src="https://1.bp.blogspot.com/-3uOneWjosLw/X90XoMmss1I/AAAAAAAASHE/ok6kPz4EvhIPO_fDe6npT0lXZeN3WjCpgCLcBGAsYHQ/w492-h277/3d-mapping-and-modeling-market-report.png" width="492" /></a></div><br /><span style="font-family: verdana;"><br /></span><p></p><p style="text-align: justify;"><span style="font-family: verdana;">The global <b>3D mapping</b> and modeling market size is expected to grow from USD 3.8 billion in 2020 to USD 7.6 billion by 2025, at a <b>Compounded Annual Growth Rate</b> (<b>CAGR</b>) of 15.0% during the forecast period.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">High demand for <b>3D animation</b> in mobile applications, games, and movies for an enhanced viewing experience, technological advancements in <b>3D scanners</b> and <b>3D sensors</b>, and the increasing availability of <b>3D content</b> are driving the growth of the market.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Stringent government regulations, a lack of investment, and the impact of <b>COVID-19</b> on the global economy are among the major challenges in the market. Moreover, increasing corruption and piracy concerns, together with high technology and installation costs, are among the key restraining factors.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Read more:</span></p><p style="text-align: justify;"><span style="text-align: left;"><span style="font-family: verdana;"><a href="https://www.marketsandmarkets.com/Market-Reports/3d-mapping-market-819.html">https://www.marketsandmarkets.com/Market-Reports/3d-mapping-market-819.html</a></span></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-87464737349364401242020-12-13T13:47:00.000+01:002020-12-13T13:47:01.759+01:00Mobile 3D mapping for surveying earthwork projects using an Unmanned Aerial Vehicle (UAV) system<p style="text-align: justify;"><span style="font-family: verdana;"></span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-family: verdana;"><a href="https://1.bp.blogspot.com/-vjk-Wuc-gxs/X9YNMkRxDjI/AAAAAAAASG0/tf93JnWYiZYWEORZK_z8jQKNd3dxI_LPgCLcBGAsYHQ/s557/MOBILE%2B3D%2BMAPPING.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="418" data-original-width="557" height="368" src="https://1.bp.blogspot.com/-vjk-Wuc-gxs/X9YNMkRxDjI/AAAAAAAASG0/tf93JnWYiZYWEORZK_z8jQKNd3dxI_LPgCLcBGAsYHQ/w490-h368/MOBILE%2B3D%2BMAPPING.jpg" width="490" /></a></span></div><span 
style="font-family: verdana;"><br /><b><br /></b></span><p></p><p style="text-align: justify;"><span style="font-family: verdana;"><b>Unmanned Aerial Vehicles</b> (<b>UAVs</b>), as a data acquisition platform and as a measurement instrument, are becoming attractive for many surveying applications in civil engineering.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Their performance, however, is not well understood for these particular tasks. The scope of the presented work is the performance evaluation of a <b>UAV</b> system that was built to rapidly and autonomously acquire mobile <b>3D</b> <b>mapping</b> data.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Details of the components of the <b>UAV</b> system (hardware and control software) are explained. A novel program for photogrammetric flight planning, and its execution for the generation of <b>3D</b> <b>point</b> clouds from digital mobile images, is described.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">A performance model for estimating the position error was developed and tested in several realistic construction environments. 
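The paper's actual position-error model is not reproduced in this summary; purely as an illustrative sketch, a first-order model might combine the GNSS position uncertainty with the ground displacement caused by an attitude error at the flying height (all symbols here are assumptions, not the authors' notation):

```python
import math

def ground_point_error(height_m, sigma_pos_m, sigma_att_deg):
    """First-order horizontal error (m) of a ground point in a nadir image:
    GNSS position error combined in quadrature with height * tan(attitude error)."""
    att = math.radians(sigma_att_deg)
    return math.sqrt(sigma_pos_m ** 2 + (height_m * math.tan(att)) ** 2)
```

At 50 m flying height, a 5 cm GNSS error and a 0.5° attitude error already imply roughly 0.44 m of horizontal ground error, which is why such models matter for earthwork volume estimates.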
Test results are presented as they relate to large excavation and earth-moving construction sites.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">The experience gained with the developed <b>UAV</b> system is useful to researchers and practitioners who need to successfully adapt <b>UAV technology</b> to their applications.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Read more:</span></p><p style="text-align: justify;"><span style="text-align: left;"><span style="font-family: verdana;"><a href="https://www.researchgate.net/publication/260270622_Mobile_3D_mapping_for_surveying_earthwork_projects_using_an_Unmanned_Aerial_Vehicle_UAV_system">https://www.researchgate.net/publication/260270622_Mobile_3D_mapping_for_surveying_earthwork_projects_using_an_Unmanned_Aerial_Vehicle_UAV_system</a></span></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-7109129409978388312020-12-08T16:59:00.002+01:002020-12-08T16:59:13.371+01:00Advantages of embedding electronic components inside PCBs when designing and manufacturing electronics for UAVs<p style="text-align: justify;"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-gGjlUgdqY4c/X8-iSGo-JNI/AAAAAAAASGI/RbJUc0LoiMkszaynoztBw_SINg0UkEMDwCLcBGAsYHQ/s2048/NANO%2B12%2BCAPAS.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1152" data-original-width="2048" height="307" src="https://1.bp.blogspot.com/-gGjlUgdqY4c/X8-iSGo-JNI/AAAAAAAASGI/RbJUc0LoiMkszaynoztBw_SINg0UkEMDwCLcBGAsYHQ/w546-h307/NANO%2B12%2BCAPAS.jpg" width="546" /></a></div><br /><span style="font-family: verdana;"><br /></span><p></p><p style="text-align: justify;"><span style="font-family: verdana;">Today, one of the greatest challenges facing the military aviation industry lies in the design and manufacture of small aerial platforms governed by artificial intelligence, such as smart micro-missiles, <b>micro-UAVs</b> and <b>nano-UAVs</b></span><span style="font-family: verdana;">.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">To deliver the required functionality, the designers of the corresponding electronic circuits use ever more components, which demands <b>PCBs</b> of greater surface area and creates a performance ceiling dictated by the available space, </span><span style="font-family: verdana;">so the manufacture of </span><span style="font-family: verdana;">electronic circuits</span><span style="font-family: verdana;"> needs to be reinvented.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">One way to save space is to embed components in the inner layers of the <b>PCB</b>, and this is already possible with <b>Nano Dimension</b>'s <b>AME</b> technology, which </span><span style="text-align: left;"><span style="font-family: verdana;">opens the door to a world of new capabilities thanks to the integration of active and passive components inside PCBs, as shown in this video:</span></span></p><p><span style="font-family: verdana;"><a href="https://www.youtube.com/watch?v=E8GeucfOCJU&feature=emb_logo">https://www.youtube.com/watch?v=E8GeucfOCJU&feature=emb_logo</a></span></p><p><span style="font-family: verdana;"><br /></span></p><p><span style="font-family: verdana;">For more information:</span></p><p><span style="font-family: verdana;"><a href="https://integral3dprinting.com/nano-dimension-dragonfly/">https://integral3dprinting.com/nano-dimension-dragonfly/</a></span></p>David del Fresno 
Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-5342154488325399842020-12-07T16:10:00.004+01:002020-12-07T16:10:53.028+01:00LIDAR sheds new light on plant phenomics for plant breeding and management: Recent advances and future prospects<p style="text-align: justify;"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-uJzeJK3j3eo/X85FoVSfXxI/AAAAAAAASF8/bv0lRrAl3lEufu6gfOCyWINLnzYny4-jwCLcBGAsYHQ/s638/LIDAR.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="359" data-original-width="638" height="305" src="https://1.bp.blogspot.com/-uJzeJK3j3eo/X85FoVSfXxI/AAAAAAAASF8/bv0lRrAl3lEufu6gfOCyWINLnzYny4-jwCLcBGAsYHQ/w541-h305/LIDAR.jpg" width="541" /></a></div><br /><span style="font-family: verdana;"><br /></span><p></p><p style="text-align: justify;"><span style="font-family: verdana;">Plant phenomics is a new avenue for linking plant genomics and environmental studies, thereby improving plant breeding and management. </span><span style="font-family: verdana;">Remote sensing techniques have improved high-throughput plant phenotyping. However, the accuracy, efficiency, and applicability of three-dimensional (3D) phenotyping are still challenging, especially in field environments.</span></p><p style="text-align: justify;"><span style="font-family: verdana;"><b>LIDAR</b> (<b>Light Detection And Ranging</b>) provides a powerful new tool for <b>3D Phenotyping</b> with the rapid development of facilities and algorithms. Numerous efforts have been devoted to studying static and dynamic changes of structural and functional phenotypes using <b>LIDAR</b> in agriculture. 
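As a toy example of the kind of structural trait such studies extract (not code from the review), canopy height can be estimated from a LIDAR point cloud as a high percentile of point heights above the ground plane:

```python
import numpy as np

def canopy_height(points, ground_z=0.0, percentile=99):
    """Estimate canopy height (same units as the cloud) from an N x 3 LIDAR
    point cloud (x, y, z). A high percentile rejects stray above-canopy returns."""
    heights = points[:, 2] - ground_z
    return float(np.percentile(heights, percentile))
```

In practice the ground surface would itself be estimated (e.g. from ground-classified returns) rather than assumed flat as here.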
This progress also improves <b>3D plant modeling</b> across different spatial–temporal scales and disciplines, making association with genes and the analysis of environmental practices easier and less expensive, and affords new insights into breeding and management.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Beyond agricultural phenotyping, <b>LIDAR</b> shows great potential in forestry, horticultural, and grass phenotyping. Although <b>LIDAR</b> has resulted in remarkable improvements in plant phenotyping and modeling, the synthesis of <b>LIDAR</b>-based phenotyping for breeding and management has not been fully explored. In this study, the authors identify three main challenges in <b>LIDAR</b>-based phenotyping development: 1) developing low-cost, high spatial–temporal, and hyperspectral <b>LIDAR</b> facilities, 2) moving into multi-dimensional phenotyping with an endeavor to generate new algorithms and models, and 3) embracing open source and big data.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Read more:</span></p><p style="text-align: justify;"><span style="text-align: left;"><span style="font-family: verdana;"><a href="https://www.sciencedirect.com/science/article/pii/S0924271620303130?dgcid=rss_sd_all">https://www.sciencedirect.com/science/article/pii/S0924271620303130?dgcid=rss_sd_all</a></span></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-1586195532632531152020-12-06T14:05:00.001+01:002020-12-06T14:05:37.701+01:00Developing a strategy for precise 3D modelling of large-scale scenes for VR<p><br /><span style="font-family: verdana;"><br /></span></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-1mhUUuiFfwg/X8zWyUSCRGI/AAAAAAAASFo/R08gQ0DMjVosMIXQ2BwQSvGwhlWD0GpnACLcBGAsYHQ/s1300/ManusVR_Glove_2016.png" imageanchor="1" 
style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="867" data-original-width="1300" height="350" src="https://1.bp.blogspot.com/-1mhUUuiFfwg/X8zWyUSCRGI/AAAAAAAASFo/R08gQ0DMjVosMIXQ2BwQSvGwhlWD0GpnACLcBGAsYHQ/w526-h350/ManusVR_Glove_2016.png" width="526" /></a></div><p></p><p><span style="font-family: verdana;">In this work, a methodology is presented for precise <b>3D modelling</b> and multi-source geospatial data blending for the purposes of <b>Virtual Reality</b> immersive and interactive experiences. It has been evaluated on the volcanic island of <b>Santorini</b> due to its formidable geological terrain and the interest it poses for scientific and touristic purposes.</span></p><p><span style="font-family: verdana;">The methodology developed here consists of three main steps. Initially, bathymetric and <b>SRTM</b> (<b>Shuttle Radar Topography Mission</b>) data are scaled down to match the smallest resolution of the dataset. Afterwards, the resulting elevations are combined based on the slope of the relief, while considering a buffer area to enforce a smoother terrain. 
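The paper's exact blending rule is in the linked text; purely as a sketch of the idea of slope-driven combination of two co-registered elevation grids (the linear ramp and the 15° threshold are illustrative assumptions, not the authors' values), one could write:

```python
import numpy as np

def blend_dems(dem_steep, dem_flat, slope_deg, slope_threshold=15.0):
    """Blend two co-registered elevation grids cell by cell: as the local slope
    (degrees) approaches the threshold, the weight ramps linearly toward the
    source preferred on steep relief, giving a smooth transition between them."""
    w = np.clip(slope_deg / slope_threshold, 0.0, 1.0)
    return w * dem_steep + (1.0 - w) * dem_flat
```

A buffer zone, as in the paper, would widen this transition band so that no seam is visible in the final terrain.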
</span><span style="font-family: verdana;">As a final step, the orthophotos are combined with the estimated <b>DTM</b> (<b>Digital Terrain Model</b>) by applying a nearest-neighbour matching scheme, leading to the final terrain background.</span></p><p><span style="font-family: verdana;">In addition to this, both onshore and offshore points-of-interest were modelled via image-based <b>3D reconstruction</b> and added to the virtual scene. The overall geospatial data that need to be visualized in applications demanding phototextured hyper-realistic models pose a significant challenge. The <b>3D models</b> are therefore treated via a mesh optimization workflow, suitable for efficient and fast visualization in virtual reality engines, through mesh simplification, baking of physically based rendering texture maps, and levels of detail. 
</span></p><p><span style="font-family: verdana;">Read more at <a href="https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIII-B4-2020/567/2020/isprs-archives-XLIII-B4-2020-567-2020.pdf">https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIII-B4-2020/567/2020/isprs-archives-XLIII-B4-2020-567-2020.pdf</a></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-6899274933821615972020-12-05T14:29:00.004+01:002020-12-05T14:29:35.303+01:00Advantages of using UAVs in criminal investigation<p style="text-align: justify;"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-mCVWzTwVYy8/X8uJ9-FB5SI/AAAAAAAASFY/c0zBNyg_jQwYvAO44gnu00imMlNwsykIQCLcBGAsYHQ/s1623/CSI.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="830" data-original-width="1623" height="280" src="https://1.bp.blogspot.com/-mCVWzTwVYy8/X8uJ9-FB5SI/AAAAAAAASFY/c0zBNyg_jQwYvAO44gnu00imMlNwsykIQCLcBGAsYHQ/w547-h280/CSI.jpg" width="547" /></a></div><br /><span style="font-family: verdana;"><br /></span><p></p><p style="text-align: justify;"><span style="font-family: verdana;">In this post we look at the use of <b>UAVs</b> in criminal investigation.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Photogrammetry software and <b>3D scanning</b> </span><i><span style="color: #2b00fe; font-family: georgia;">"in situ"</span></i><span style="font-family: verdana;"> are undoubtedly already being used successfully to document these scenes, but UAVs offer important advantages, which we review below.</span></p><p style="text-align: justify;"><span style="font-family: verdana;"><b>1. 
Saving time</b></span></p><p style="text-align: justify;"><span style="font-family: verdana;">When a crime occurs, it is best for everyone to clear the area as soon as possible, but the scene must be documented first.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">The instruments most frequently used by criminal investigation teams are the <b>3D scanner</b> and/or total stations and/or digital photography, or a combination of the three, in order to gather data and create the <b>3D</b> point cloud of the crime scene.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">However, these methods can require a great deal of time and trained personnel, which may not always be available. On top of that, the surroundings of the crime scene can offer a wealth of very useful information that can only be perceived from above.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">For documenting a crime from a certain height and over a wide area, <b>UAVs</b> are very useful because they can easily cover greater distances at a convenient altitude to achieve faster and more accurate coverage, cutting the time required to accurately document the crime scene by 60 to 80%</span><span style="font-family: verdana;">.</span></p><p style="text-align: justify;"><span style="font-family: verdana;"><b>2. 
Saving costs</b></span></p><p style="text-align: justify;"><span style="font-family: verdana;">Cordoning off an area and investigating a crime scene requires human work whose labour cost is directly proportional to the time spent.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Using <b>UAVs</b> as an image-capture tool, by contrast, keeps that cost down, and the job can be completed in far less time using automated flight planners.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">With these instruments it is relatively inexpensive to accurately document a crime scene and speed up the process, especially in situations with extreme constraints on time, personnel or other alternative equipment.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Furthermore, <b>3D</b> data acquisition with <b>UAVs</b> makes it possible to capture data where a laser scanner or a total station cannot obtain it, for example when there are insurmountable obstacles.</span></p><p style="text-align: justify;"><span style="text-align: left;"><span style="font-family: verdana;"><b>3. 
The results constitute documentary evidence</b></span></span></p><p style="text-align: justify;"><span style="text-align: left;"><span style="font-family: verdana;">Ultimately, the purpose of collecting data is none other than to provide evidence that can be presented in court.</span></span></p><p style="text-align: justify;"><span style="font-family: verdana; text-align: left;">Often, the lack of data caused by the human impossibility of accessing certain areas makes it impossible to present evidence supporting a suspicion. That is why the data collected by a <b>UAV</b> in a single flight, combined with suitable photogrammetry software, can constitute the decisive evidence to confirm or rule out a murder or a suicide, as well as to confirm or rule out the guilt of a suspect.</span></p><p style="text-align: justify;"><br /></p><p style="text-align: justify;"><br /></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-64592791237422561782020-11-29T15:51:00.003+01:002020-11-29T15:51:50.932+01:00Development and Evaluation of a UAV-Photogrammetry System for Precise 3D Environmental Modeling<p style="text-align: justify;"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-u9Q9NjBf1bo/X8O1dW6vARI/AAAAAAAASFA/arWMEptuWoE55hNcn9IqoBuZxL31XbVEACLcBGAsYHQ/s2048/3D%2BDATA%2BCAPTURING%2B2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1132" data-original-width="2048" height="221" src="https://1.bp.blogspot.com/-u9Q9NjBf1bo/X8O1dW6vARI/AAAAAAAASFA/arWMEptuWoE55hNcn9IqoBuZxL31XbVEACLcBGAsYHQ/w400-h221/3D%2BDATA%2BCAPTURING%2B2.png" width="400" /></a></div><br /><span style="font-family: verdana;"><br /></span><p></p><p style="text-align: justify;"><span style="font-family: verdana;">The specific requirements of 
<b>UAV-photogrammetry</b> need particular solutions for system development, which have mostly been ignored or not assessed adequately in recent studies.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Accordingly, this paper presents the methodological and experimental aspects of correctly implementing a <b>UAV-photogrammetry</b> system. The hardware of the system consists of an electric-powered helicopter, a high-resolution digital camera and an inertial navigation system.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">The software of the system includes in-house programs specifically designed for camera calibration, platform calibration, system integration, on-board data acquisition, flight planning and on-the-job self-calibration. The detailed features of the system are discussed, and solutions are proposed in order to enhance the system and its photogrammetric outputs.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">The developed system is extensively tested for precise modeling of the challenging environment of an open-pit gravel mine. The accuracy of the results is evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Additionally, the effects of imaging configuration and network stability on modeling accuracy are assessed. 
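Flight planning of the kind mentioned above is usually driven by two textbook quantities: the ground sampling distance (GSD) and the ground footprint of a single image. A minimal sketch of those relations (the sensor and flight values below are purely illustrative, not the actual hardware of this system):

```python
def gsd_cm(flight_height_m, focal_mm, pixel_size_um):
    """Ground sampling distance (cm/pixel) for a nadir-looking camera."""
    # GSD = H * pixel_pitch / focal_length, converted to centimetres
    return flight_height_m * (pixel_size_um * 1e-6) / (focal_mm * 1e-3) * 100

def footprint_m(flight_height_m, focal_mm, sensor_w_mm, sensor_h_mm):
    """Ground footprint (width, height) in metres covered by one image."""
    scale = flight_height_m / (focal_mm * 1e-3)  # image scale factor H/f
    return (sensor_w_mm * 1e-3 * scale, sensor_h_mm * 1e-3 * scale)

# Example: 100 m flight height, 35 mm lens, 4.4 um pixels, 36 x 24 mm sensor
print(round(gsd_cm(100, 35, 4.4), 2))  # ~1.26 cm/pixel
print(tuple(round(v, 1) for v in footprint_m(100, 35, 36, 24)))
```

Halving the flight height halves the GSD (finer detail) but also halves the footprint, so more strips and more images are needed to cover the same area.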
The experiments demonstrated that 1.55 m horizontal and 3.16 m vertical absolute modeling accuracy could be achieved via direct geo-referencing, which was improved to 0.4 cm and 1.7 cm, respectively, after indirect geo-referencing.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Read more:</span></p><p style="text-align: justify;"><span style="text-align: left;"><span style="font-family: verdana;"><a href="https://www.researchgate.net/publication/283328189_Development_and_Evaluation_of_a_UAV-Photogrammetry_System_for_Precise_3D_Environmental_Modeling">https://www.researchgate.net/publication/283328189_Development_and_Evaluation_of_a_UAV-Photogrammetry_System_for_Precise_3D_Environmental_Modeling</a></span></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-88653253008699336072020-11-28T22:34:00.001+01:002020-11-28T22:34:06.607+01:00Nano Dimension redefine el diseño y fabricación de electrónica para micro-UAVs de uso militar<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-NaTorxxRKUY/X8LB3eByLaI/AAAAAAAASE0/O1nURTi-IQ0U3v5S8q-ZEbSW6ZpFNGybwCLcBGAsYHQ/s600/beetle-drone.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="251" data-original-width="600" height="168" src="https://1.bp.blogspot.com/-NaTorxxRKUY/X8LB3eByLaI/AAAAAAAASE0/O1nURTi-IQ0U3v5S8q-ZEbSW6ZpFNGybwCLcBGAsYHQ/w400-h168/beetle-drone.jpg" width="400" /></a></div><br /><span style="font-family: verdana;"><br /></span><p></p><p><span style="font-family: verdana;">When we set out to design and manufacture electronic circuits for military <b>micro-UAVs</b> intended for covert operations, we run into challenges that are difficult or impossible to overcome with conventional manufacturing technologies.</span></p><p><span style="font-family: verdana;">What can we do when the space left for the electronics is insufficient to house the printed circuit boards? Do we sacrifice performance? Do we enlarge the <b>micro-UAV</b>? It is a real dilemma.</span></p><p><span style="font-family: verdana;">Fortunately for designers and manufacturers of military electronics, it is now possible to apply additive manufacturing to the design and fabrication of electronic circuits.</span></p><p><span style="font-family: verdana;">The technology is conceptually simple, but many years of research were needed to achieve the precision and repeatability required for this type of application.</span></p><p><span style="font-family: verdana;">As you may have guessed, the technology was developed in Israel, in this case by the firm <b>Nano Dimension</b> (Nasdaq, TASE: NNDM), whose <b>3D Printing</b> machines for electronics work by jetting dielectric and conductive materials simultaneously, layer by layer, with no geometric limitations.</span></p><p><span style="font-family: verdana;">This video makes it clearer:</span></p><p><span style="font-family: verdana;"><a href="https://www.youtube.com/watch?v=P4NFf42b04E&feature=emb_logo">https://www.youtube.com/watch?v=P4NFf42b04E&feature=emb_logo</a></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-90095376813100142032020-11-28T21:35:00.002+01:002020-11-28T21:35:24.230+01:00Coastal Mapping using DJI Phantom 4 RTK in Post-Processing Kinematic Mode<p style="text-align: justify;"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-BlRQMlK2Tyw/X8K0euM8XeI/AAAAAAAASEc/577iNULLbTIp1r8hhI5C9og9MllVo_I_wCLcBGAsYHQ/s1373/PHANTOM.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" 
data-original-height="761" data-original-width="1373" height="221" src="https://1.bp.blogspot.com/-BlRQMlK2Tyw/X8K0euM8XeI/AAAAAAAASEc/577iNULLbTIp1r8hhI5C9og9MllVo_I_wCLcBGAsYHQ/w400-h221/PHANTOM.JPG" width="400" /></a></div><br /><span style="font-family: verdana;"><br /></span><p></p><p style="text-align: justify;"><span style="font-family: verdana;">Topographic and geomorphological surveys of coastal areas usually require the aerial mapping of long and narrow sections of littoral.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">The georeferencing of photogrammetric models is generally based on the signalization and survey of <b>GCPs (Ground Control Points)</b> which are very time-consuming tasks.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Direct georeferencing with high camera location accuracy due to on-board multi-frequency </span><span style="text-align: left;"><span style="font-family: verdana;"><b>Global Navigation Satellite System</b> <b>(</b></span></span><b style="font-family: verdana;">GNSS) </b><span style="font-family: verdana;">receivers can limit the need for <b>GCPs</b>.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Recently, <b>DJI</b> has made available the <b>Phantom 4 Real-Time Kinematic</b> <b>(RTK)</b> <b>(DJI-P4RTK)</b> which combines the versatility and the ease of use of previous <b>DJI</b> <b>Phantom</b> models with the advantages of a multi-frequency on-board <b>GNSS</b> receiver.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">In this paper, the authors have investigated the accuracy of both photogrammetric models and <b>Digital Terrain Models (DTMs)</b> generated in <b>Agisoft Metashape</b> from two different image datasets (nadiral and oblique) acquired by a <b>DJI-P4RTK</b>.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Camera locations were computed with the 
<b>Post-Processing Kinematic (PPK)</b> of the <b>Receiver Independent Exchange Format (RINEX)</b> file recorded by the aircraft during flight missions. A <b>Continuously Operating Reference Station (CORS)</b> located at a 15 km distance from the site was used for this task.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">The results highlighted that the oblique dataset produced very similar results, with <b>GCPs</b> (<b>3D RMSE</b> = 0.025 m) and without (<b>3D RMSE</b> = 0.028 m), while the nadiral dataset was affected more by the position and number of the <b>GCPs</b> (<b>3D RMSE</b> from 0.034 to 0.075 m).</span></p><p style="text-align: justify;"><span style="font-family: verdana;">The introduction of a few oblique images into the nadiral dataset without any <b>GCP</b> improved the vertical accuracy of the model (<b>Up RMSE</b> from 0.052 to 0.025 m) and can represent a solution to speed up the image acquisition of nadiral datasets for <b>PPK</b> with the <b>DJI-P4RTK</b> and no <b>GCPs</b>.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Moreover, the results of this research are compared to those obtained in <b>RTK</b> mode for the same datasets. 
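The 3D RMSE values reported here combine the per-axis (East, North, Up) errors measured at independent check points. A minimal sketch of that computation, with made-up residuals in metres purely for illustration:

```python
import math

def rmse(errors):
    """Root mean square of a list of residuals (same units in, same units out)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def rmse_3d(east, north, up):
    """Combined 3D RMSE from per-axis residual lists at check points."""
    # RMSE_3D = sqrt(RMSE_E^2 + RMSE_N^2 + RMSE_U^2)
    return math.sqrt(rmse(east) ** 2 + rmse(north) ** 2 + rmse(up) ** 2)

# Illustrative check-point residuals (metres), not the paper's data
east  = [0.012, -0.008, 0.015]
north = [-0.010, 0.006, -0.012]
up    = [0.020, -0.018, 0.025]
print(round(rmse_3d(east, north, up), 3))  # combined 3D RMSE, ~0.026 m
```

The Up component typically dominates, which is why adding oblique images (strengthening the vertical geometry) improves the Up RMSE most.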
The novelty of this research is the combination of a multitude of aspects regarding the <b>DJI Phantom 4 RTK</b> aircraft and the subsequent data processing strategies for assessing the quality of photogrammetric models, <b>DTMs</b>, and cross-section profiles.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Read more:</span></p><p style="text-align: justify;"><span style="text-align: left;"><span style="font-family: verdana;"><a href="https://www.researchgate.net/publication/340328284_Coastal_Mapping_using_DJI_Phantom_4_RTK_in_Post-Processing_Kinematic_Mode">https://www.researchgate.net/publication/340328284_Coastal_Mapping_using_DJI_Phantom_4_RTK_in_Post-Processing_Kinematic_Mode</a></span></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-85660969079476280162020-11-27T17:11:00.002+01:002020-11-27T17:11:51.070+01:00UAVs en la Industria 4.0: Escaneo 3D, Optimización Topológica y Gemelos Digitales para el rediseño de UAVs y su fabricación mediante Manufactura Aditiva<p style="text-align: justify;"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-3ih4jqiFk0E/X8Ek5hkA-gI/AAAAAAAASEE/AH7Q_o6Si_kfkJjC4u5qY1Pf0x_XcvODQCLcBGAsYHQ/s800/Scan-Xpress-GOM-Scan.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="445" data-original-width="800" height="299" src="https://1.bp.blogspot.com/-3ih4jqiFk0E/X8Ek5hkA-gI/AAAAAAAASEE/AH7Q_o6Si_kfkJjC4u5qY1Pf0x_XcvODQCLcBGAsYHQ/w535-h299/Scan-Xpress-GOM-Scan.png" width="535" /></a></div><span style="font-family: verdana;"><p style="text-align: justify;"><span style="font-family: verdana;"><br /></span></p>The Australian company </span><b style="font-family: verdana;">Silvertone</b><span style="font-family: verdana;"> develops, designs, and manufactures unmanned aerial vehicles with flexible payload capabilities.</span><p></p><p style="text-align: justify;"><span style="font-family: verdana;">One of its unmanned aircraft systems, the <b>Flamingo Mk3</b>, carries a heavy package of telemetry equipment and sensors.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">To improve flight efficiency, the bracket that attaches the equipment package to the fuselage and supports the landing gear had to be redesigned to reduce weight while preserving its mechanical performance.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">The existing design was put through topology optimisation until an organic, free-form design was obtained that satisfied the required load and boundary conditions.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">The geometry resulting from topology optimisation is generally ill-suited to traditional manufacturing methods. Additive manufacturing, however, builds components layer by layer and imposes no limit on how complex a geometry can be.</span></p><p style="text-align: justify;"><span style="font-family: verdana;"><b>Amiga Engineering</b> was contracted to manufacture the topology-optimised component. The part was printed in <b>Gr23</b> titanium on a <b>3D Systems</b> <b>ProX DMP</b> machine. 
The manufactured component achieved a significant weight reduction: 800 grams less than the original 4 kilograms (a 20% saving), plus better balance and stiffness, greater safety, longer flight times, higher payload capacity, and better battery efficiency.</span></p><p style="text-align: justify;"><span style="font-family: verdana;">Metrology service provider <b>Scan-Xpress</b> was contracted to measure the manufactured component with the high-resolution <b>GOM ATOS Q</b> scanner and to perform critical quality-control checks before installation. The <b>GOM</b> <b>ATOS Q</b> optical measuring system is well suited to measuring the organic, free-form surfaces generated by topology optimisation, including the package bracket.</span></p><p style="text-align: left;"><span style="font-family: verdana;">The <b>ATOS Q</b> sensor projects a fringe pattern that shifts in phase as it sweeps the measured surface, collecting millions of points to generate a precise three-dimensional model. The sensor was placed in different positions around the component until the entire surface was accurately defined and captured. From the resulting point cloud a <b>3D</b> mesh in <b>STL</b> format, known as a digital twin, was created. The digital twin was then compared against the <b>CAD</b> model and the differences were recorded.</span></p><p style="text-align: left;"><span style="font-family: verdana;">The captured information gave <b>Amiga Engineering</b> the ability to validate its production methods and simulations. The captured data also provided input for adjusting the <b>Additive Manufacturing</b> process parameters for future production runs. 
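At its core, a twin-vs-CAD comparison measures how far each scanned point lies from the reference geometry. A deliberately simplified sketch (pure Python, tiny invented point sets; GOM's inspection software works on millions of points against the CAD surface itself, not a handful of reference points):

```python
import math

def nearest_deviation(scan_points, cad_points):
    """For each scanned point, distance to the closest CAD reference point."""
    # math.dist computes the Euclidean distance between two points (Python 3.8+)
    return [min(math.dist(p, q) for q in cad_points) for p in scan_points]

# Illustrative data: one scanned point sits 0.1 units off the reference
cad = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
scan = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.0)]
devs = nearest_deviation(scan, cad)
print(max(devs))  # worst-case deviation drives the accept/reject decision
```

In practice the deviations are rendered as a colour map over the part, which is what makes out-of-tolerance regions immediately visible.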
This feedback, driven by the quality of the data generated, was a decisive factor in <b>Amiga Engineering</b> buying the first <b>ATOS Q</b> in <b>Australia</b>.</span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-14707205796655439342020-11-22T12:09:00.001+01:002020-11-22T12:09:46.916+01:00Impresión 3D para camuflar puntos de acceso IoT/WiFi en nano-UAVs<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-1Vueb4bnrs8/X7pGnWrpzNI/AAAAAAAASDc/RsLetluMuMorbFvVd05Ag1scywilcV9swCLcBGAsYHQ/s1288/NANO%2BDIMENSION%2BIOT.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1136" data-original-width="1288" height="448" src="https://1.bp.blogspot.com/-1Vueb4bnrs8/X7pGnWrpzNI/AAAAAAAASDc/RsLetluMuMorbFvVd05Ag1scywilcV9swCLcBGAsYHQ/w508-h448/NANO%2BDIMENSION%2BIOT.png" width="508" /></a></div><br /><span style="font-family: verdana;"><br /></span><p></p><p><span style="font-family: verdana;">Users and manufacturers of <b>nano-UAVs</b> are constantly being pushed to demand and add new capabilities to the final product, and among those capabilities the ones related to the <b>IoT</b> (<b>Internet of Things</b>) deserve special mention.</span></p><p><span style="font-family: verdana;">The Israeli firm <b>Nano Dimension Ltd.</b> has shown that it is possible to manufacture <b>3D</b>-printed <b>IoT/WiFi</b> communication devices that <b>nano-UAV</b> <b>OEMs</b> can add to their final product.</span></p><p><span style="font-family: verdana;">Manufacturing speed sets these new devices apart: <b>Nano Dimension</b> claims they can be ready to operate in just 18 hours, a production speed 90 percent faster than with conventional methods.</span></p><p><span style="font-family: verdana;">More information:</span></p><p><span style="font-family: verdana;"><a href="https://www.nano-di.com/capabilities-and-use-cases">https://www.nano-di.com/capabilities-and-use-cases</a></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-48536720769831501002020-11-22T11:40:00.002+01:002020-11-22T11:41:25.395+01:00Digital Innovations in European Archaeology<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-7-RUv8GZHpk/X7pAAW79ADI/AAAAAAAASDQ/j6zb1b2PJ88a6kEj3vCU6GLMzVa8ENKowCLcBGAsYHQ/s1287/3D%2BDATA%2BCAPTURING.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="690" data-original-width="1287" src="https://1.bp.blogspot.com/-7-RUv8GZHpk/X7pAAW79ADI/AAAAAAAASDQ/j6zb1b2PJ88a6kEj3vCU6GLMzVa8ENKowCLcBGAsYHQ/s320/3D%2BDATA%2BCAPTURING.png" width="320" /></a></div><br /> <p></p><p><span style="background-color: white; color: #333333; font-size: 16px; text-align: justify;"><span style="font-family: verdana;">European archaeologists in the last two decades have worked to integrate a wide range of emerging digital tools to enhance the recording, analysis, and dissemination of archaeological data.</span></span></p><p><span style="background-color: white; color: #333333; font-size: 16px; text-align: justify;"><span style="font-family: verdana;">These techniques have expanded and altered the data collected by archaeologists as well as their interpretations. 
At the same time, archaeologists have expanded the capabilities of using these data on a large scale, across platforms, regions, and time periods, utilising new and existing digital research infrastructures to enhance the scale of data used for archaeological interpretations.</span></span></p><p><span style="background-color: white; color: #333333; font-size: 16px; text-align: justify;"><span style="font-family: verdana;">This Element discusses some of the most recent, innovative uses of these techniques in European archaeology at different stages of archaeological work. In addition to providing an overview of some of these techniques, it critically assesses these approaches and outlines the recent challenges to the discipline posed by self-reflexive use of these tools and advocacy for their open use in cultural heritage preservation and public engagement.</span></span></p><p><span style="font-family: verdana;"><span style="background-color: white; color: #333333; font-size: 16px; text-align: justify; text-indent: 32px;">Among the techniques used frequently in various archaeological contexts across Europe, aerial photogrammetry, utilising photographs taken by <b>UAVs</b> (<b>Unmanned Aerial Vehicles</b>), has been used to document larger landscapes, </span><span style="background-color: white; color: #333333; font-size: 16px; text-align: justify; text-indent: 32px;">and close-range photogrammetry is becoming a ubiquitous recording tool on excavations and for historic architectural recording</span><span style="background-color: white; color: #333333; font-size: 16px; text-align: justify; text-indent: 32px;">. 
The low financial entry point to photogrammetry has made it an ideal technique for archaeologists, who are often working on a shoe-string budget</span><span style="background-color: white; color: #333333; font-size: 16px; text-align: justify; text-indent: 32px;">.</span></span></p><p><span style="font-family: verdana;"><span style="background-color: white; color: #333333; font-size: 16px; text-align: justify; text-indent: 32px;">Most archaeological projects are already equipped with a digital <b>SLR</b> (<b>Single Lens</b> <b>Reflex</b>) camera and </span><span class="page-marker" style="-webkit-font-smoothing: antialiased; background-color: white; border: 0px; box-sizing: border-box; color: #333333; font-size: 16px; font-stretch: inherit; font-variant-east-asian: inherit; font-variant-numeric: inherit; line-height: inherit; margin: 0px; padding: 0px; text-align: justify; text-indent: 32px; text-rendering: optimizelegibility; vertical-align: baseline;"></span><span style="background-color: white; color: #333333; font-size: 16px; text-align: justify; text-indent: 32px;">most of the necessary software licenses for image processing are open access or available at steeply reduced educational discounts.</span></span></p><p><span style="font-family: verdana;"><span style="background-color: white; color: #333333; font-size: 16px; text-align: justify; text-indent: 32px;">Read more: <a href="https://www.cambridge.org/core/elements/digital-innovations-in-european-archaeology/BDEA933427350E7D500F773A31EC9F4B/core-reader">https://www.cambridge.org/core/elements/digital-innovations-in-european-archaeology/BDEA933427350E7D500F773A31EC9F4B/core-reader</a></span></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-76441596049595265602020-11-21T22:47:00.007+01:002020-11-21T22:50:57.322+01:00Impresión 3D para circuitos electrónicos alojados en nano-UAVs<div><span 
style="font-family: verdana;"><div class="separator" style="clear: both; text-align: center;"><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-pLGRB-MUKRQ/X7mJpqSZhwI/AAAAAAAASDE/xKlDIHU5Gs0uX9GO4rbNSI-uuy7-5JGiwCLcBGAsYHQ/s343/NANODIMENSION%2B3.JPG" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="178" data-original-width="343" src="https://1.bp.blogspot.com/-pLGRB-MUKRQ/X7mJpqSZhwI/AAAAAAAASDE/xKlDIHU5Gs0uX9GO4rbNSI-uuy7-5JGiwCLcBGAsYHQ/s320/NANODIMENSION%2B3.JPG" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div></div><div class="separator" style="clear: both; text-align: justify;"><span style="text-align: left;">In recent years, <b>nano-UAVs</b> have been used as a key instrument in covert operations carried out by the <b>CIA</b>, the <b>FBI</b>, <b>MI6</b>, the <b>Mossad</b>, and <b>Sayeret Matkal</b>, as well as other intelligence groups from various countries.</span></div></span></div><div style="text-align: justify;"><span style="font-family: verdana;"><br /></span></div><div style="text-align: justify;"><span style="font-family: verdana;">The ideal for <b>ISR</b> missions would be an instrument fitted with a set of sensors capable of carrying out the mission, allowing the operator to see through multispectral cameras, to hear all kinds of sounds inside and outside the 20 Hz-20 kHz range, and even to detect the presence of explosives, radioactive isotopes, toxic gases, etc.</span></div><div style="text-align: justify;"><span style="font-family: verdana;"><br /></span></div><div style="text-align: justify;"><span style="font-family: verdana;">Of course, such an instrument should be designed to escape detection by the naked eye, passing unnoticed like an insect. OK, and what else? Because all of that requires designing very complex electronic circuits that must be housed in very small volumes of very complex geometry, and in that kind of situation the conventional design and manufacture of electronic circuits is of no use.</span></div><div style="text-align: justify;"><span style="font-family: verdana;"><br /></span></div><div style="text-align: justify;"><span style="font-family: verdana;">A different way of manufacturing, and a different way of designing, had to be devised. Fortunately, this new way of manufacturing is now available not only for military use but also for civilian use, and its acronym is <b>AME</b>, which stands for <b>Additive Manufacturing for Electronics</b>. It is an extraordinary technology developed in <b>Israel</b> by engineers at the firm <b>Nano Dimension</b>. Can you imagine designing electronic circuits not only in XY but also in Z? Can you imagine being able to hide electronic components inside a <b>PCB</b>? 
And what if the <b>PCB</b> could have any geometry along all three axes?</span></div><div style="text-align: justify;"><span style="font-family: verdana;"><br /></span></div><div style="text-align: justify;"><span style="font-family: verdana;">It is incredible how far this technology can go. I invite you to discover it in this video:</span></div><div style="text-align: justify;"><span style="font-family: verdana;"><br /></span></div><div style="text-align: justify;"><span style="font-family: verdana;"><a href="https://www.youtube.com/watch?v=E8GeucfOCJU&feature=emb_logo">https://www.youtube.com/watch?v=E8GeucfOCJU&feature=emb_logo</a></span></div><div style="text-align: justify;"><span style="font-family: verdana;"><br /></span></div><div style="text-align: justify;"><span style="font-family: verdana;"><br /></span></div>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-33376450267614494452020-11-21T21:50:00.002+01:002020-11-21T21:50:54.100+01:00Accuracy assessment of RTK-GNSS equipped UAV conducted as-built surveys for construction site modelling<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-YX1vMUEq3Lw/X7l9nuNgyEI/AAAAAAAASCc/bWB17HxhKZEtC70YlZFu0YRHHNwA63_8wCLcBGAsYHQ/s500/3D%2BData%2BUAV.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="485" data-original-width="500" src="https://1.bp.blogspot.com/-YX1vMUEq3Lw/X7l9nuNgyEI/AAAAAAAASCc/bWB17HxhKZEtC70YlZFu0YRHHNwA63_8wCLcBGAsYHQ/s320/3D%2BData%2BUAV.png" width="320" /></a></div><br /> <p></p><p><span style="font-family: verdana;">Regular as-built surveys have become a necessary input for building information modelling.</span></p><p><span style="font-family: verdana;">Such large-scale <b>3D data</b> <b>capturing</b> can be conducted effectively by combining structure-from-motion and <b>UAVs</b> 
(<b>Unmanned Aerial Vehicles</b>).</span></p><p><span style="font-family: verdana;">Using an <b>RTK-GNSS</b> (<b>Real Time Kinematic-Global Navigation Satellite System</b>) equipped <b>UAV</b>, 22 repeated weekly campaigns were conducted at two altitudes in various conditions.</span></p><p><span style="font-family: verdana;">The photogrammetric approach yielded <b>3D models</b>, which were compared to the terrestrial laser scanning based ground truth. Better than 2.8 cm geometry <b>RMSE</b> (<b>Root Mean Square Error</b>) was consistently achieved using integrated georeferencing.</span></p><p><span style="font-family: verdana;">It is concluded that <b>RTK-GNSS</b>-based georeferencing enables reaching better than 5 cm geometry accuracy by utilising at least one ground control point.</span></p><p><span style="font-family: verdana;">Read more at:</span></p><p><span style="font-family: verdana;"><a href="https://www.tandfonline.com/doi/abs/10.1080/00396265.2020.1830544">https://www.tandfonline.com/doi/abs/10.1080/00396265.2020.1830544</a></span></p><p><span style="font-family: verdana;"><br /></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-76972393717771210652020-11-15T20:51:00.005+01:002020-11-15T20:51:52.497+01:00Nano Dimension: Assure Your Electronics Projects Confidentiality<p><span style="font-family: verdana;"><span style="white-space: pre-wrap;"><br /></span></span></p><p><span style="font-family: verdana;"><span style="white-space: pre-wrap;"></span></span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-family: verdana;"><a href="https://1.bp.blogspot.com/-yZDcYkDN0CA/X7GF4AiSM3I/AAAAAAAASB0/eeYb9tl_I4sO647cBq692BOqPfRFHIzAACLcBGAsYHQ/s1024/NANO%2BDIMENSION%2B-%2BMIDS.jpg" 
imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="768" data-original-width="1024" height="376" src="https://1.bp.blogspot.com/-yZDcYkDN0CA/X7GF4AiSM3I/AAAAAAAASB0/eeYb9tl_I4sO647cBq692BOqPfRFHIzAACLcBGAsYHQ/w501-h376/NANO%2BDIMENSION%2B-%2BMIDS.jpg" width="501" /></a></span></div><span style="font-family: verdana;"><br />The <b>Nano Dimension</b> <b>DragonFly™ Pro Additive Manufacturing Platform</b> for <b>Electronics</b> is the one-stop solution for creating high-quality <b>3D Printed</b> <b>electronics</b> confidentially.</span><p></p><p><span style="font-family: verdana;">The system can <b>3D print</b> using metals and dielectric polymers simultaneously, allowing for the manufacture of non-planar electronics, antennas, <b>RFIDs</b>, multilayer <b>PCBs</b>, <b>Complex Geometry PCBs</b>, and many other components. 
</span></p><p><span style="font-family: verdana;">Discover it at:</span></p><p><span style="font-family: verdana;"><span style="white-space: pre-wrap;"><a href="https://www.youtube.com/watch?v=MDfSrb7FQ7w">https://www.youtube.com/watch?v=MDfSrb7FQ7w</a></span></span></p><p><span style="font-family: verdana;"><span style="white-space: pre-wrap;"><br /></span></span></p><p><br /></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-19466133697049819862020-11-15T20:17:00.008+01:002020-11-15T20:18:47.617+01:00Aspen detection in boreal forests: Capturing a key component of biodiversity using airborne hyperspectral, lidar, and UAV data<p><span style="font-family: verdana;"><span style="background-color: white; font-size: 16px;"></span></span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-family: verdana;"><span style="background-color: white; font-size: 16px;"><a href="https://1.bp.blogspot.com/-BTUkm8eraGs/X7F-sQ7lufI/AAAAAAAASBk/U9BaaQ-DX2AJR6TDab-KN_1wxUDzcQxrgCLcBGAsYHQ/s550/ASPEN.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="389" data-original-width="550" height="342" src="https://1.bp.blogspot.com/-BTUkm8eraGs/X7F-sQ7lufI/AAAAAAAASBk/U9BaaQ-DX2AJR6TDab-KN_1wxUDzcQxrgCLcBGAsYHQ/w484-h342/ASPEN.jpg" width="484" /></a></span></span></div><span style="font-family: verdana;"><span style="background-color: white; font-size: 16px;">The importance of biodiversity is increasingly highlighted as an essential part of sustainable forest management.</span></span><p></p><p><span style="font-family: verdana;"><span style="background-color: white; font-size: 16px;">As direct monitoring of 
biodiversity is not possible, proxy variables have been used to indicate a site's species richness and quality. </span></span><span style="background-color: white; font-family: verdana; font-size: 16px;">In boreal forests, European aspen (Populus tremula L.) is one of the most significant proxies for biodiversity.</span></p><span style="background-color: white; font-family: verdana; font-size: 16px;">Aspen is a keystone species, hosting a range of endangered species, and is hence highly important in maintaining forest biodiversity. Still, reliable and fine-scale spatial data on aspen occurrence remains scarce and incomplete. Although remote sensing-based species classification has been used for decades for the needs of forestry, commercially less significant species (e.g., aspen) have typically been excluded from such studies.</span><p></p><p><span style="background-color: white; font-family: verdana; font-size: 16px;">This creates a need for general tree species classification methods that also cover ecologically significant species. Our study area, located in <b>Evo</b>, <b>Southern Finland</b>, covers approximately 83 km² and contains both managed and protected southern boreal forests. The main tree species in the area are Scots pine (Pinus sylvestris L.), Norway spruce (Picea abies (L.) Karst), and birch (Betula pendula and pubescens L.), with a relatively sparse and scattered occurrence of aspen.</span></p><p><span style="background-color: white; font-family: verdana; font-size: 16px;">Along with thorough field data, airborne hyperspectral and <b>LiDAR</b> data have been acquired from the study area. We also collected ultra-high-resolution <b>UAV</b> data with <b>RGB</b> and multispectral sensors. The aim is to gather fundamental data on hyperspectral and multispectral species classification that can be utilized to produce detailed aspen data at large scale. For this, we first analyze species detection at tree level. 
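As a toy illustration of what tree-level spectral classification involves, the sketch below assigns each per-tree reflectance spectrum to the nearest class signature. All data, the band count, and the nearest-centroid rule are fabricated stand-ins for illustration only; they are not the study's actual data or classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-tree mean reflectance spectra (4 species x 50 bands).
# Real data would come from airborne hyperspectral imagery segmented to tree crowns.
species = ["pine", "spruce", "birch", "aspen"]
centroids = rng.uniform(0.05, 0.6, size=(4, 50))           # per-class "signatures"
X = np.vstack([c + rng.normal(0, 0.02, size=(30, 50)) for c in centroids])
y = np.repeat(np.arange(4), 30)                            # true labels, 30 trees/class

def classify(spectra, refs):
    """Assign each spectrum to the nearest class signature (Euclidean distance)."""
    d = np.linalg.norm(spectra[:, None, :] - refs[None, :, :], axis=2)
    return d.argmin(axis=1)

pred = classify(X, centroids)
accuracy = (pred == y).mean()
print(f"overall accuracy: {accuracy:.2f}")
print("predicted species of first tree:", species[pred[0]])
```

In practice the classifiers compared in the study learn far richer decision boundaries than a single centroid per class, but the input/output shape of the problem is the same: one spectrum in, one species label out.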
We test and compare different machine learning methods (<b>Support Vector Machines</b>, <b>Random Forest</b>, <b>Gradient Boosting Machine</b>) and deep learning methods (<b>3D Convolutional Neural Networks</b>), with specific emphasis on accurate and feasible aspen detection.</span></p><p><span style="background-color: white; font-family: verdana; font-size: 16px;">The results will show how accurately aspen can be detected from the forest canopy and which spectral bands are most important for detecting aspen. This information can be utilized for large-scale aspen detection from satellite images.</span></p><p><span style="background-color: white; font-family: verdana; font-size: 16px;">Read more at <a href="https://ui.adsabs.harvard.edu/abs/2020EGUGA..2221268K/abstract">https://ui.adsabs.harvard.edu/abs/2020EGUGA..2221268K/abstract</a></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-11121500551675031752020-11-14T14:49:00.001+01:002020-11-14T14:49:27.339+01:00Deep Learning Classification of 2D Orthomosaic Images and 3D Point Clouds for Post-Event Structural Damage Assessment<div style="text-align: left;"><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-JJLqCD74Omc/X6_gWUz0ljI/AAAAAAAASBU/EcQt37B9hvcC2IH1F6oTCZsgknRkzmJzQCLcBGAsYHQ/s320/ORTOMOSAIC.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="320" data-original-width="320" src="https://1.bp.blogspot.com/-JJLqCD74Omc/X6_gWUz0ljI/AAAAAAAASBU/EcQt37B9hvcC2IH1F6oTCZsgknRkzmJzQCLcBGAsYHQ/s0/ORTOMOSAIC.jpg" /></a></div><br /> </div><div style="text-align: left;"><span style="font-family: verdana;">Efficient and rapid data collection techniques are necessary to obtain transitory information in the aftermath of natural hazards, which is not only useful for 
post-event structural damage assessment.</span></div><div style="text-align: left;"><span style="font-family: verdana;"><br /></span></div><div style="text-align: left;"><span style="font-family: verdana;">Aerial imaging from <b>UAVs</b> permits highly detailed site characterization, in particular in the aftermath of extreme events with minimal ground support, to document current conditions of the region of interest. However, aerial imaging results in a massive amount of data in the form of two-dimensional (<b>2D</b>) orthomosaic images and three-dimensional (<b>3D</b>) point clouds.</span></div><div style="text-align: left;"><span style="font-family: verdana;"><br /></span></div><div style="text-align: left;"><span style="font-family: verdana;">Both types of datasets require effective and efficient data processing workflows to identify the various damage states of structures. This manuscript introduces two deep learning models, based on <b>2D</b> and <b>3D</b> convolutional neural networks, to process the orthomosaic images and point clouds for post-windstorm classification.</span></div><div style="text-align: left;"><span style="font-family: verdana;"><br /></span></div><div style="text-align: left;"><span style="font-family: verdana;">In detail, <b>2D CNNs</b> (<b>2D Convolutional Neural Networks</b>) are developed based on transfer learning from two well-known networks, <b>AlexNet</b> and <b>VGGNet</b>. </span><span style="font-family: verdana;">In contrast, a <b>3DFCN (3D Fully Convolutional Network)</b> with skip connections was developed and trained on the available point cloud data. Within this study, the datasets were created from data collected in the aftermath of <b>Hurricanes Harvey</b> (<b>Texas</b>) and <b>Maria</b> (<b>Puerto Rico</b>). 
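Point clouds are unordered sets, so a volumetric network like the 3DFCN described here typically needs them rasterized into a fixed-size grid first. The sketch below shows one common preprocessing step, binary occupancy voxelization, with a fabricated point cloud and an assumed 32³ grid; the paper's actual preprocessing may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated stand-in for a UAV photogrammetry point cloud: N points, (x, y, z) in metres.
points = rng.uniform(0.0, 10.0, size=(5000, 3))

def voxelize(pts, grid=(32, 32, 32)):
    """Convert an unordered point cloud into a dense occupancy volume,
    the fixed-size input a 3D (fully) convolutional network expects."""
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    # Normalize coordinates to [0, 1), then scale to integer voxel indices.
    idx = ((pts - mins) / (maxs - mins + 1e-9) * np.array(grid)).astype(int)
    idx = np.clip(idx, 0, np.array(grid) - 1)
    vol = np.zeros(grid, dtype=np.float32)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0   # binary occupancy
    return vol

volume = voxelize(points)
print(volume.shape, int(volume.sum()))  # occupied-voxel count is at most the point count
```

Richer encodings (point density per voxel, colour, or height statistics) follow the same pattern; only the value written into each cell changes.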
The developed <b>2DCNN</b> and <b>3DFCN</b> models were compared quantitatively based on the performance measures, and it was observed that the <b>3DFCN</b> was more robust in detecting the various classes.</span></div><div style="text-align: left;"><span style="font-family: verdana;"><br /></span></div><div style="text-align: left;"><span style="font-family: verdana;">This demonstrates the value and importance of <b>3D Datasets</b>, particularly the depth information, to distinguish between instances that represent different damage states in structures.</span></div><div style="text-align: left;"><span style="font-family: verdana;"><br /></span></div><div style="text-align: left;"><span style="font-family: verdana;">Read more: </span><span style="font-family: verdana;"><a href="https://www.mdpi.com/2504-446X/4/2/24">https://www.mdpi.com/2504-446X/4/2/24</a></span></div>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-65319400264962178782020-11-08T15:04:00.002+01:002020-11-08T15:04:58.187+01:00Empower innovation in your micro-UAVs with the technology of Nano Dimension<p><span style="font-family: verdana;"></span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-family: verdana;"><span style="text-align: left;"><br /></span></span></div><div class="separator" style="clear: both; text-align: center;"><div class="separator" style="clear: both; text-align: center;"><span style="font-family: verdana;"><a href="https://1.bp.blogspot.com/-Gl7Z9f2vTgg/X6f6tjmhQMI/AAAAAAAASBA/8F1e_yebTn8KcZZn0J_qzzi3FSOetqYdACLcBGAsYHQ/s907/DRAGONFLY.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="907" data-original-width="869" height="320" src="https://1.bp.blogspot.com/-Gl7Z9f2vTgg/X6f6tjmhQMI/AAAAAAAASBA/8F1e_yebTn8KcZZn0J_qzzi3FSOetqYdACLcBGAsYHQ/s320/DRAGONFLY.jpg" /></a></span></div><span 
style="font-family: verdana;"><br /><span style="text-align: left;"><br /></span></span></div><div class="separator" style="clear: both; text-align: center;"><span style="font-family: verdana;"><span style="text-align: left;">Meet </span><b style="text-align: left;">Nano Dimension</b><span style="text-align: left;">, a technology company disrupting, shaping, and defining the future of how electronics are made:</span></span></div><p></p><p><span style="font-family: verdana;"><a href="https://www.youtube.com/watch?v=P4NFf42b04E&feature=emb_logo">https://www.youtube.com/watch?v=P4NFf42b04E&feature=emb_logo</a></span></p><p><span style="font-family: verdana;">The products and solutions they offer bridge today's world with the electronics of tomorrow.</span></p><p><span style="font-family: verdana;">They are moving the industry from <b>2D</b> to <b>3D</b>, from initial design right through to manufacturing, with the <b>DragonFly Pro</b> <b>Additive Manufacturing</b> <b>System</b> (the world's first professional <b>3D printer</b> for electronics), highly conductive silver nanoparticle ink, dielectric ink, and advanced <b>3D software</b>. 
</span></p><p><span style="font-family: verdana;">Learn more about <b>Nano Dimension DragonFly Pro System</b> technology here: <a href="https://bit.ly/2LZtbkr">https://bit.ly/2LZtbkr</a></span></p><p><span style="font-family: verdana;">Learn more about additive manufacturing for electronics here: <a href="https://bit.ly/2StPkgB">https://bit.ly/2StPkgB</a></span></p><p><span style="font-family: verdana;">Contact Nano Dimension here: <a href="https://bit.ly/2TavMLb">https://bit.ly/2TavMLb</a></span></p><p><span style="font-family: verdana;">For more information about Nano Dimension, please click here: <a href="https://www.nano-di.com">https://www.nano-di.com</a></span></p><p><span style="font-family: verdana;">Get the latest news from Nano Dimension: <a href="https://bit.ly/2E22Nnv">https://bit.ly/2E22Nnv</a></span></p><p><span style="font-family: verdana;"><br /></span></p><p><br /></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-76310685840523525192020-11-08T14:39:00.003+01:002020-11-08T14:39:22.681+01:003D Fire Front Reconstruction in UAV-Based Forest-Fire Monitoring System<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-kgLrYaUSgQk/X6f07yLAgxI/AAAAAAAASAs/J-lSfuwrRbkcme3DrrDm4DNsm72yBx5VACLcBGAsYHQ/s238/3D%2BFire.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="212" data-original-width="238" height="356" src="https://1.bp.blogspot.com/-kgLrYaUSgQk/X6f07yLAgxI/AAAAAAAASAs/J-lSfuwrRbkcme3DrrDm4DNsm72yBx5VACLcBGAsYHQ/w400-h356/3D%2BFire.jpg" width="400" /></a></div><br /><span style="font-family: verdana;"><br /></span><p></p><p><span style="font-family: verdana;">This work presents a new method of <b>3D reconstruction</b> of the forest-fire front based on uncertain observations captured by remote sensing from <b>UAVs</b> within the forest-fire 
monitoring system.</span></p><p><span style="font-family: verdana;">The simultaneous use of multiple cameras to capture the scene and recover its geometry, including depth, is proposed. Multi-directional observation allows perceiving and representing the volumetric nature of the fire front as well as the dynamics of the fire process.</span></p><p><span style="font-family: verdana;">The novelty of the proposed approach lies in the use of a soft rough set to represent the forest-fire model within the discretized hierarchical model of the terrain, and the use of a <b>3D CNN</b> (<b>3D Convolutional Neural Network</b>) to classify voxels within the reconstructed scene.</span></p><p><span style="font-family: verdana;">The developed method provides sufficient performance and a good visual representation to fulfill the requirements of fire-response decision makers. </span></p><p><span style="font-family: verdana;">Read more at: <a href="https://ieeexplore.ieee.org/abstract/document/9204196">https://ieeexplore.ieee.org/abstract/document/9204196</a></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-83470033195443456122020-11-02T14:28:00.004+01:002020-11-02T14:28:38.815+01:00Method for establishing the UAV-rice vortex 3D model and extracting spatial parameters<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-WtLa_U_cgt8/X6AJaCZIoVI/AAAAAAAAR_w/KMzOyPOP5tsJQ5MQtSuAFTInuOAGlYzYgCLcBGAsYHQ/s286/METHOD%2BFOR.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="176" data-original-width="286" height="246" src="https://1.bp.blogspot.com/-WtLa_U_cgt8/X6AJaCZIoVI/AAAAAAAAR_w/KMzOyPOP5tsJQ5MQtSuAFTInuOAGlYzYgCLcBGAsYHQ/w400-h246/METHOD%2BFOR.jpg" width="400" /></a></div><span style="font-family: verdana;"><p>With deepening research on the rotor wind field of <b>UAV</b> 
operation, it has become mainstream to quantify the <b>UAV</b> operation effect and study the distribution law of the rotor wind field via the spatial parameters of the <b>UAV-rice</b> <b>interaction</b> wind field vortex. </p></span><p></p><p><span style="font-family: verdana;">At present, the point cloud segmentation algorithms involved in most wind field vortex spatial parameter extraction methods cannot adapt to the instantaneous changes and indistinct boundary of the vortex. As a result, there are problems such as an inaccurate three-dimensional (<b>3D</b>) shape and boundary contour of the wind field vortex, as well as large errors in the vortex’s spatial parameters. </span></p><p><span style="font-family: verdana;">To this end, this paper proposes an accurate method for establishing the <b>UAV-rice</b> <b>interaction</b> vortex <b>3D model</b> and extracting vortex spatial parameters. Firstly, the original point cloud data of the wind field vortex were collected in the image acquisition area. Secondly, <b>DDC-UL</b> processed the original point cloud data to develop the <b>3D</b> point cloud image of the wind field vortex. </span></p><p><span style="font-family: verdana;">Thirdly, the <b>3D</b> curved surface was reconstructed and spatial parameters were then extracted. Finally, the volume parameters and top surface area parameters of the <b>UAV-rice</b> <b>interaction</b> vortex were calculated and analyzed. The results show that the error rate of the <b>3D model</b> of the <b>UAV-rice</b> <b>interaction</b> wind field vortex developed by the proposed method is kept within 2%, which is at least 13 percentage points lower than that of algorithms like <b>PointNet</b>. </span></p><p><span style="font-family: verdana;">The average error rates of the volume parameters and the top surface area parameters extracted by the proposed method are 1.4% and 4.12%, respectively. 
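The paper's DDC-UL pipeline and surface reconstruction are not reproduced here; as a rough illustration of what "volume" and "top surface area" parameters mean once the vortex has been voxelized, consider this sketch. The grid shape, voxel size, and box-shaped "vortex" are all fabricated for illustration.

```python
import numpy as np

# Hypothetical voxel grid: True where the reconstructed vortex occupies space.
# Voxel edge length in metres (fabricated value for illustration).
VOXEL = 0.05
occ = np.zeros((40, 40, 20), dtype=bool)
occ[10:30, 10:30, 0:12] = True          # a crude box-shaped "vortex"

# Volume parameter: number of occupied voxels times the volume of one voxel.
volume_m3 = occ.sum() * VOXEL**3

# Top-surface-area parameter: for each (x, y) column, check whether any voxel
# is occupied, then multiply the column count by the area of one voxel face.
top_area_m2 = occ.any(axis=2).sum() * VOXEL**2

print(f"volume = {volume_m3:.3f} m3, top surface area = {top_area_m2:.3f} m2")
```

A real vortex boundary is irregular and time-varying, which is exactly why the segmentation quality discussed above dominates the error in these derived parameters.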
This method provides <b>3D</b> <b>data</b> for studying the mechanism of the rotor wind field in the crop canopy through the <b>3D vortex</b> model and its spatial parameters.</span></p><p><span style="font-family: verdana;">Read more at: <a href="http://www.ijpaa.org/index.php/ijpaa/article/view/84">http://www.ijpaa.org/index.php/ijpaa/article/view/84</a></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.comtag:blogger.com,1999:blog-2487899823888382892.post-57625352121138761962020-11-01T17:23:00.003+01:002020-11-01T17:23:44.185+01:00Federated Learning in the Sky: Aerial-Ground Air Quality Sensing Framework with UAV Swarms<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-02rs2qifRHQ/X57g_2etO6I/AAAAAAAAR_g/y07J9s5GbeQjCR-aN34jV1YNJH17Cc_-gCLcBGAsYHQ/s550/FEDERATED.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="370" data-original-width="550" height="269" src="https://1.bp.blogspot.com/-02rs2qifRHQ/X57g_2etO6I/AAAAAAAAR_g/y07J9s5GbeQjCR-aN34jV1YNJH17Cc_-gCLcBGAsYHQ/w400-h269/FEDERATED.png" width="400" /></a></div><br /><span style="font-family: verdana;"><br /></span><p></p><p><span style="font-family: verdana;">Because air quality significantly affects human health, it is becoming increasingly important to predict the <b>Air Quality Index</b> (<b>AQI</b>) accurately and in a timely manner.</span></p><p><span style="font-family: verdana;">To this end, this paper proposes a new federated learning-based aerial-ground air quality sensing framework for fine-grained <b>3D</b> air quality monitoring and forecasting.</span></p><p><span style="font-family: verdana;">Specifically, in the air, this framework leverages a lightweight <b>Dense-MobileNet</b> model to achieve energy-efficient end-to-end learning from haze features of haze images taken by <b>UAVs</b> (<b>Unmanned Aerial Vehicles</b>) for predicting <b>AQI</b> scale 
distribution.</span></p><p><span style="font-family: verdana;">Furthermore, the <b>Federated Learning Framework</b> not only allows various organizations or institutions to collaboratively learn a well-trained global model to monitor <b>AQI</b> without compromising privacy, but also expands the scope of <b>UAV</b> swarm monitoring.</span></p><p><span style="font-family: verdana;">For ground sensing systems, a <b>GC-LSTM</b> (<b>Graph Convolutional</b> <b>neural network-based Long Short-Term Memory</b>) model is proposed to achieve accurate, real-time, and future <b>AQI</b> inference. The <b>GC-LSTM</b> model utilizes the topological structure of the ground monitoring stations to capture the spatio-temporal correlation of historical observation data, which helps the aerial-ground sensing system achieve accurate <b>AQI</b> inference.</span></p><p><span style="font-family: verdana;">Through extensive case studies on a real-world dataset, numerical results show that the proposed framework can achieve accurate and energy-efficient <b>AQI</b> sensing without compromising the privacy of raw data.</span></p><p><span style="font-family: verdana;">Read more: <a href="https://ieeexplore.ieee.org/abstract/document/9184079">https://ieeexplore.ieee.org/abstract/document/9184079</a></span></p>David del Fresno Torrecillashttp://www.blogger.com/profile/13640941173288873226noreply@blogger.com
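The paper's own models (Dense-MobileNet, GC-LSTM) are too large for a short sketch, but the core federated step that lets UAV swarms and institutions train collaboratively without sharing raw data can be illustrated with plain federated averaging (FedAvg). All client weights and sample counts below are fabricated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical local model parameters from three UAV/ground clients, each trained
# on its own private data (random arrays stand in for trained weights).
clients = [
    {"w": rng.normal(size=(4, 2)), "b": rng.normal(size=2), "n_samples": 120},
    {"w": rng.normal(size=(4, 2)), "b": rng.normal(size=2), "n_samples": 300},
    {"w": rng.normal(size=(4, 2)), "b": rng.normal(size=2), "n_samples": 80},
]

def federated_average(updates):
    """FedAvg: average each parameter, weighting clients by local sample count.
    Only parameters travel to the server; raw sensor data never leaves the client."""
    total = sum(u["n_samples"] for u in updates)
    return {
        key: sum(u[key] * (u["n_samples"] / total) for u in updates)
        for key in ("w", "b")
    }

global_model = federated_average(clients)
print(global_model["w"].shape, global_model["b"].shape)
```

In a full system this aggregation step alternates with local training rounds on each client, which is what the privacy claim in the abstract rests on: the server only ever sees model parameters, never images or sensor readings.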