
Thursday, December 31, 2020

UAVs for 3D Animation: CamFly Films


CamFly Films was founded in 2014 to provide UAV-based photography and video services.

Its founder, Serge Kouperschmidt, has more than 30 years of experience in video production and has worked as a cameraman and director of photography for the film industry around the world.

CamFly Films Ltd. is based in London and is a UAV operator certified by the CAA (Civil Aviation Authority). It offers professional photography and filming services both in London and anywhere in the United Kingdom.

https://www.youtube.com/watch?v=LHSDlY_IkLE

From magnificent cinematic aerial footage like the clip linked above, to industrial inspections, aerial mapping services, roof surveys, and topographic reports, the company provides tailor-made services backed by extensive flight experience.

Among its most requested activities are building-construction surveys and monitoring, filming the daily progress of the works. Also worth highlighting are thermal imaging, photogrammetry, orthophotography, UAV mapping, 360° aerial photography, and 3D modeling from photographs taken with UAVs.

In addition to its standard PFCO (Permission for Commercial Operations), CamFly Films holds an OSC (Operating Safety Case). This permit, particularly difficult to obtain, allows them to fly legally at a shorter distance (10 meters from the target), at a higher altitude (188 meters), and beyond the line of sight. As a result, unlike the vast majority of other UAV operators, they can operate with full efficiency in the heart of London.

CamFly Films is also a video production company offering photography, videography, and all associated filming services: shooting impressive 4K video with state-of-the-art cameras, editing, color grading, adding music, titles, voice-overs, visual effects, and so on.

Sunday, December 27, 2020

Clifford Geometric Algebra-Based Approach for 3D Modeling of Agricultural Images Acquired by UAVs



Three-dimensional image modeling is essential in many scientific disciplines, including computer vision and precision agriculture.

So far, various methods of creating three-dimensional models have been considered. However, in these methods the processing of the transformation matrices of each input image is not controlled.

Site-specific crop mapping is essential because it helps farmers determine yield, biodiversity, energy, crop coverage, etc. In recent years, understanding signal and image processing through Clifford geometric algebra has become increasingly important.

Geometric algebra treats multi-dimensional signals holistically, maintaining the relationships between their dimensions and preventing loss of information. This article uses agricultural images acquired by UAVs to construct three-dimensional models with Clifford geometric algebra. The qualitative and quantitative performance evaluation results show that Clifford geometric algebra can generate a three-dimensional geometric statistical model directly from UAVs’ RGB (Red Green Blue) images.

Through Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and visual comparison, the proposed algorithm’s performance is compared with the latest algorithms. Experimental results show that the proposed algorithm outperforms other leading 3D modeling algorithms.
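PSNR and SSIM are standard full-reference image quality metrics. As an illustration (not the paper's code), here is a minimal NumPy sketch of PSNR together with a simplified, single-window SSIM; the real SSIM averages this statistic over local sliding windows.

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    # Peak Signal-to-Noise Ratio between a reference and a test image.
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, img, max_val=255.0):
    # Simplified global SSIM over the whole image (one window).
    x = ref.astype(np.float64)
    y = img.astype(np.float64)
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(clean + rng.normal(0, 5, size=(64, 64)), 0, 255)
score_psnr = psnr(clean, noisy)       # higher is better
score_ssim = ssim_global(clean, clean)  # identical images give 1.0
```

An identical image pair yields SSIM = 1.0, and Gaussian noise with σ = 5 on 8-bit data gives a PSNR in the mid-30 dB range.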

Read more:

https://www.researchgate.net/publication/347679848_Clifford_Geometric_Algebra-Based_Approach_for_3D_Modeling_of_Agricultural_Images_Acquired_by_UAVs


Thursday, December 24, 2020

DJI Phantom 3 for 3D Animation with PBR Textures


PBR (Physically Based Rendering) textures refer to a rendering technique that computes the lighting of a 3D scene based on real-world behavior.

100% realism has not yet been achieved, but this technique makes it possible to calculate how light is reflected, and the shadows that objects cast, far more realistically than in the past.

These textures simplify the work of applying materials and can be used on most platforms. PBR textures carry information about the level of detail, the color of the material, polygon displacement, the amount of reflection, surface detail, and other properties such as transparency, refraction, curvature, polygon position, and so on.
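To illustrate how such texture values feed a shading model, here is a deliberately simplified metallic-roughness evaluation in Python. It is a toy stand-in, not a full physically based BRDF, and all material and vector values are made up.

```python
import numpy as np

def schlick_fresnel(cos_theta, f0):
    # Schlick's approximation of the Fresnel reflectance term.
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def shade(base_color, metallic, roughness, n, l, v):
    # Minimal metallic-roughness shading: Lambert diffuse plus a
    # crude roughness-attenuated specular highlight.
    n, l, v = (x / np.linalg.norm(x) for x in (n, l, v))
    h = (l + v) / np.linalg.norm(l + v)        # half vector
    n_dot_l = max(np.dot(n, l), 0.0)
    f0 = 0.04 * (1 - metallic) + metallic      # dielectrics reflect ~4%
    fresnel = schlick_fresnel(max(np.dot(h, v), 0.0), f0)
    spec = fresnel * max(np.dot(n, h), 0.0) ** (2.0 / max(roughness, 1e-3))
    diffuse = (1 - metallic) * base_color * n_dot_l / np.pi
    return diffuse + spec

# Red dielectric lit and viewed head-on (hypothetical values)
color = shade(np.array([0.8, 0.1, 0.1]), metallic=0.0, roughness=0.5,
              n=np.array([0.0, 0.0, 1.0]),
              l=np.array([0.0, 0.0, 1.0]),
              v=np.array([0.0, 0.0, 1.0]))
```

The base-color, metallic, and roughness inputs here are exactly the per-pixel values a PBR texture set would supply.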

This 3D animation video shows the possibilities of this technique combined with images captured by UAVs, specifically by a DJI Phantom 3.

Video link:

https://www.youtube.com/watch?v=64fYOyrNN0c&list=PL2UsAzNdeUau_YvGOi-JBwXIGvKwhEAMn


Sunday, December 13, 2020

Mobile 3D mapping for surveying earthwork projects using an Unmanned Aerial Vehicle (UAV) system



Unmanned Aerial Vehicles (UAVs) as a data acquisition platform and as a measurement instrument are becoming attractive for many surveying applications in civil engineering.

Their performance, however, is not well understood for these particular tasks. The scope of the presented work is the performance evaluation of a UAV system that was built to rapidly and autonomously acquire mobile 3D mapping data.

Details of the components of the UAV system (hardware and control software) are explained, together with a novel program for photogrammetric flight planning and its execution for generating 3D point clouds from mobile digital images.

A performance model for estimating the position error was developed and tested in several realistic construction environments. Test results are presented as they relate to large excavation and earth moving construction sites.
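Photogrammetric flight planning of the kind described rests on simple camera geometry: image footprint, ground sample distance, and exposure spacing for a target overlap. A hedged sketch with a hypothetical small-sensor camera (not the paper's system):

```python
def ground_footprint(sensor_w_mm, sensor_h_mm, focal_mm, altitude_m):
    # Ground footprint (m) of one nadir image, from similar triangles.
    gw = sensor_w_mm / focal_mm * altitude_m
    gh = sensor_h_mm / focal_mm * altitude_m
    return gw, gh

def gsd_cm(sensor_w_mm, image_w_px, focal_mm, altitude_m):
    # Ground Sample Distance: ground size of one pixel, in cm.
    return sensor_w_mm * altitude_m * 100.0 / (focal_mm * image_w_px)

def photo_spacing(footprint_m, overlap):
    # Distance between exposures for a given forward/side overlap.
    return footprint_m * (1.0 - overlap)

# Hypothetical 1/2.3" sensor (6.17 x 4.55 mm, 3.61 mm lens) at 50 m AGL
gw, gh = ground_footprint(6.17, 4.55, 3.61, 50.0)
gsd = gsd_cm(6.17, 4000, 3.61, 50.0)
base = photo_spacing(gh, 0.8)   # 80% forward overlap
```

With these assumed values the footprint is roughly 85 x 63 m, the GSD about 2.1 cm/px, and exposures would be triggered about every 12.6 m along track.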

The experiences with the developed UAV system are useful to researchers and practitioners in need of successfully adapting UAV technology to their applications.

Read more:

https://www.researchgate.net/publication/260270622_Mobile_3D_mapping_for_surveying_earthwork_projects_using_an_Unmanned_Aerial_Vehicle_UAV_system

Saturday, December 5, 2020

Advantages of Using UAVs in Criminal Investigation



In this post we will look at the use of UAVs in criminal investigation.

Undoubtedly, photogrammetry software and on-site 3D scanning are already being used successfully to document these scenes, but UAVs offer important advantages, which we review below.

1. Time savings

When a crime occurs, it is in everyone's interest to clear the area as soon as possible, but the scene must be documented first.

The instruments most frequently used by criminal investigation teams are 3D scanners, total stations, and digital photography, or a combination of the three, in order to collect data and build the 3D point cloud of the crime scene.

However, these methods can require a great deal of time and trained personnel, which may not always be available. On top of that, the surroundings of the crime scene can offer a wealth of useful information that can only be perceived from above.

To document a crime scene from a certain height and over a wide area, UAVs are very useful because they can easily cover larger distances at a convenient altitude, achieving faster and more accurate coverage and reducing the time required to document the scene accurately by 60 to 80%.
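The coverage-time advantage can be roughed out with a lawnmower-pattern flight estimate; the figures below are purely illustrative, not measured values.

```python
def survey_time_min(area_m2, swath_m, overlap, speed_ms, turn_overhead=1.2):
    # Rough flight-time estimate for a lawnmower survey pattern:
    # the effective swath shrinks with side overlap, and turns are
    # approximated as a flat ~20% overhead on the flight-line time.
    effective_swath = swath_m * (1.0 - overlap)
    track_length = area_m2 / effective_swath   # total flight-line length (m)
    return track_length / speed_ms * turn_overhead / 60.0

# Hypothetical 2-hectare scene, 60 m image swath, 70% side overlap, 8 m/s
t = survey_time_min(20_000, 60.0, 0.7, 8.0)
```

Under these assumptions the whole scene is imaged in under three minutes of flight, which is the kind of margin that makes the 60–80% time reduction plausible compared with ground-based scanning.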

2. Cost savings

Sealing off an area and investigating a crime scene requires human labor, with a cost directly proportional to the time spent.

Using UAVs as an image-capture tool, by contrast, is far cheaper, and the job can be completed in much less time using automated flight planners.

With these instruments, it is relatively inexpensive to document a crime scene accurately and to speed up the process, especially in situations with severe constraints on time, personnel, or alternative equipment.

Furthermore, 3D data acquisition with UAVs makes it possible to capture data where laser scanners or total stations cannot reach, for example when there are insurmountable obstacles.

3. The results constitute documentary evidence

Ultimately, the purpose of collecting data is to provide evidence that can be presented in court.

Often, the lack of data caused by the human impossibility of accessing certain areas makes it impossible to present evidence supporting a suspicion. This is why the data collected by a UAV in a single flight, combined with suitable photogrammetry software, can constitute the definitive proof to rule out or confirm a murder or a suicide, as well as to confirm or rule out the guilt of a defendant.



Sunday, November 22, 2020

Digital Innovations in European Archaeology


 

European archaeologists in the last two decades have worked to integrate a wide range of emerging digital tools to enhance the recording, analysis, and dissemination of archaeological data.

These techniques have expanded and altered the data collected by archaeologists as well as their interpretations. At the same time archaeologists have expanded the capabilities of using these data on a large scale, across platforms, regions, and time periods, utilising new and existing digital research infrastructures to enhance the scale of data used for archaeological interpretations.

This Element discusses some of the most recent, innovative uses of these techniques in European archaeology at different stages of archaeological work. In addition to providing an overview of some of these techniques, it critically assesses these approaches and outlines the recent challenges to the discipline posed by self-reflexive use of these tools and advocacy for their open use in cultural heritage preservation and public engagement.

Among the techniques used frequently in various archaeological contexts across Europe, aerial photogrammetry, utilising photographs taken by UAVs (Unmanned Aerial Vehicles), has been used to document larger landscapes, while close-range photogrammetry is becoming a ubiquitous recording tool on excavations and for historic architectural recording. The low financial entry point of photogrammetry has made it an ideal technique for archaeologists, who are often working on a shoestring budget.

Most archaeological projects are already equipped with a digital SLR (Single Lens Reflex) camera and most of the necessary software licenses for image processing are open access or available at steeply reduced educational discounts.

Read more: https://www.cambridge.org/core/elements/digital-innovations-in-european-archaeology/BDEA933427350E7D500F773A31EC9F4B/core-reader

Saturday, November 14, 2020

Deep Learning Classification of 2D Orthomosaic Images and 3D Point Clouds for Post-Event Structural Damage Assessment


 
Efficient and rapid data collection techniques are necessary to obtain transitory information in the aftermath of natural hazards, which is not only useful for post-event management and planning, but also for post-event structural damage assessment.

Aerial imaging from UAVs permits highly detailed site characterization, in particular in the aftermath of extreme events with minimal ground support, to document current conditions of the region of interest. However, aerial imaging results in a massive amount of data in the form of two-dimensional (2D) orthomosaic images and three-dimensional (3D) point clouds.

Both types of datasets require effective and efficient data processing workflows to identify various damage states of structures. This manuscript aims to introduce two deep learning models based on both 2D and 3D convolutional neural networks to process the orthomosaic images and point clouds, for post windstorm classification.

In detail, 2D CNNs (2D Convolutional Neural Networks) are developed based on transfer learning from two well-known networks, AlexNet and VGGNet. In contrast, a 3DFCN (3D Fully Convolutional Network) with skip connections was developed and trained based on the available point cloud data. Within this study, the datasets were created based on data from the aftermath of Hurricanes Harvey (Texas) and Maria (Puerto Rico). The developed 2DCNN and 3DFCN models were compared quantitatively based on the performance measures, and it was observed that the 3DFCN was more robust in detecting the various classes.

This demonstrates the value and importance of 3D Datasets, particularly the depth information, to distinguish between instances that represent different damage states in structures.
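The 3D convolutions at the heart of such networks are conceptually simple. A naive NumPy version is shown below; deep learning frameworks perform the same arithmetic (strictly a cross-correlation, as is conventional in CNNs) in optimized kernels.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    # Naive 'valid' 3D convolution over a dense voxel grid.
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+d, j:j+h, k:k+w] * kernel)
    return out

vol = np.zeros((6, 6, 6))
vol[2:4, 2:4, 2:4] = 1.0           # a small occupied block of voxels
kern = np.ones((3, 3, 3)) / 27.0   # 3x3x3 mean filter
feat = conv3d_valid(vol, kern)
```

A 3x3x3 kernel over a 6x6x6 volume yields a 4x4x4 feature map, and the mean filter peaks where the window overlaps the occupied block the most (8 of 27 voxels).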

Sunday, November 8, 2020

3D Fire Front Reconstruction in UAV-Based Forest-Fire Monitoring System



This work presents a new method of 3D reconstruction of the forest-fire front based on uncertain observations captured by remote sensing from UAVs within the forest-fire monitoring system.

The use of multiple cameras simultaneously to capture the scene and recognize its geometry including depth is proposed. Multi-directional observation allows perceiving and representing a volumetric nature of the fire front as well as the dynamics of the fire process.

The novelty of the proposed approach lies in the use of soft rough set to represent forest fire model within the discretized hierarchical model of the terrain and the use of 3D CNN (3D Convolutional Neural Network) to classify voxels within the reconstructed scene.
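Before a 3D CNN can classify voxels, the sensed geometry must be discretized into a voxel grid. A minimal point-cloud voxelization sketch on synthetic data (not the paper's pipeline):

```python
import numpy as np

def voxelize(points, voxel_size, origin=None):
    # Convert an N x 3 point cloud into a dense boolean occupancy grid,
    # the kind of volumetric input a 3D CNN classifies per voxel.
    if origin is None:
        origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    dims = idx.max(axis=0) + 1
    grid = np.zeros(dims, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 10.0, size=(500, 3))   # synthetic 10 m cube of points
occ = voxelize(pts, voxel_size=1.0)
fill_ratio = occ.mean()   # fraction of occupied voxels
```

Each occupied cell would then receive a class label (flame, smoke, vegetation, etc.) from the network; the grid resolution trades spatial detail against memory and compute.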

The developed method provides sufficient performance and good visual representation to fulfill the requirements of fire response decision makers. 

Read more at: https://ieeexplore.ieee.org/abstract/document/9204196

Sunday, November 1, 2020

Federated Learning in the Sky: Aerial-Ground Air Quality Sensing Framework with UAV Swarms



Because air quality significantly affects human health, it is becoming increasingly important to predict the Air Quality Index (AQI) accurately and in a timely manner.

To this end, this paper proposes a new federated learning-based aerial-ground air quality sensing framework for fine-grained 3D air quality monitoring and forecasting.

Specifically, in the air, this framework leverages a light-weight Dense-MobileNet model to achieve energy-efficient end-to-end learning from haze features of haze images taken by UAVs (Unmanned Aerial Vehicles) for predicting AQI scale distribution.

Furthermore, the Federated Learning Framework not only allows various organizations or institutions to collaboratively learn a well-trained global model to monitor AQI without compromising privacy, but also expands the scope of UAV swarms monitoring.

For ground sensing systems, a GC-LSTM (Graph Convolutional neural network-based Long Short-Term Memory) model is proposed to achieve accurate, real-time, and predictive AQI inference. The GC-LSTM model utilizes the topological structure of the ground monitoring stations to capture the spatio-temporal correlation of historical observation data, which helps the aerial-ground sensing system to achieve accurate AQI inference.

Through extensive case studies on a real-world dataset, numerical results show that the proposed framework can achieve accurate and energy-efficient AQI sensing without compromising the privacy of raw data.
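The collaborative training step in such systems commonly follows the Federated Averaging scheme: clients train locally and only model weights, never raw data, reach the server. A minimal sketch (the client weights and dataset sizes are hypothetical):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    # Federated Averaging: combine locally trained models weighted by
    # each client's sample count; raw data never leaves the clients.
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

# Three hypothetical stations/UAVs with differently sized local datasets
w1 = np.array([1.0, 1.0])
w2 = np.array([3.0, 3.0])
w3 = np.array([5.0, 5.0])
global_w = fed_avg([w1, w2, w3], client_sizes=[100, 100, 200])
```

The client holding more samples (here the third, with 200) pulls the global model proportionally toward its local weights.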

Read more: https://ieeexplore.ieee.org/abstract/document/9184079

Tuesday, October 20, 2020

DroneCaps: Recognition Of Human Actions In UAV Videos Using Capsule Networks With Binary Volume Comparisons

Understanding human actions from videos captured by UAVs is a challenging task in computer vision due to the unfamiliar viewpoints of individuals and changes in their size due to the camera’s location and motion.

This work proposes DroneCaps, a capsule network architecture for multi-label HAR (Human Action Recognition) in videos captured by UAVs. DroneCaps uses features computed by 3D convolution neural networks plus a new set of features computed by a novel Binary Volume Comparison layer.

All these features, in conjunction with the learning power of CapsNets, allow understanding and abstracting the different viewpoints and poses of the depicted individuals very efficiently, thus improving multi-label HAR.

The evaluation of the DroneCaps architecture’s performance for multi-label classification shows that it outperforms state-of-the-art methods on the Okutama-Action dataset.

Read more at: https://ieeexplore.ieee.org/document/9190864

Sunday, October 18, 2020

Vision-Based Obstacle Avoidance for UAVs via Imitation Learning with Sequential Neural Networks

This paper explores the feasibility of a framework for vision-based obstacle avoidance techniques that can be applied to UAVs (Unmanned Aerial Vehicles) where such decision-making policies are trained upon supervision of actual human flight data.

The neural networks are trained based on aggregated flight data from human experts, learning the implicit policy for visual obstacle avoidance by extracting the necessary features within the image. The images and flight data are collected from a simulated environment provided by Gazebo, and Robot Operating System is used to provide the communication nodes for the framework.

The framework is tested and validated in various environments with respect to four types of neural network, including fully connected neural networks, two- and three-dimensional CNNs (Convolutional Neural Networks), and Recurrent Neural Networks (RNNs). Among these, the sequential neural networks (i.e., 3D-CNNs and RNNs) provide better performance due to their ability to explicitly consider the dynamic nature of the obstacle avoidance problem.

Read more at: https://link.springer.com/article/10.1007/s42405-020-00254-x

Monday, October 12, 2020

Tree Species Classification of UAV Hyperspectral and RGB Imagery with Deep Learning Convolutional Neural Networks

Interest in UAV solutions in forestry applications is growing.

Using UAVs, datasets can be captured flexibly and at high spatial and temporal resolutions when needed.

In forestry applications, fundamental tasks include the detection of individual trees, tree species classification, biomass estimation, etc. Deep Neural Networks (DNNs) have shown superior results compared with conventional machine learning methods such as the MLP (Multi-Layer Perceptron) when the input data are very large.

The objective of this research is to investigate 3D Convolutional Neural Networks (3D-CNN) to classify three major tree species in a boreal forest: pine, spruce, and birch. The proposed 3D-CNN models were employed to classify tree species in a test site in Finland. The classifiers were trained with a dataset of 3039 manually labelled trees. Then the accuracies were assessed by employing independent datasets of 803 records.

To find the most efficient feature combination, the performances of 3D-CNN models trained with HS (HyperSpectral) channels, Red-Green-Blue (RGB) channels, and the Canopy Height Model (CHM), separately and combined, were compared. The proposed 3D-CNN model with RGB and HS layers is shown to produce the highest classification accuracy. The producer accuracies of the best 3D-CNN classifier on the test dataset were 99.6%, 94.8%, and 97.4% for pines, spruces, and birches, respectively.

The best 3D-CNN classifier produced ~5% better classification accuracy than the MLP with all layers. The results suggest that the proposed method provides excellent classification results with acceptable performance metrics for HS datasets. The results show that pine class was detectable in most layers. Spruce was most detectable in RGB data, while birch was most detectable in the HS layers. Furthermore, the RGB datasets provide acceptable results for many low-accuracy applications.
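Producer accuracy is simply per-class recall computed from the confusion matrix. A sketch with an illustrative matrix (the counts below are made up, not the study's data):

```python
import numpy as np

# Rows: reference (true) class, columns: predicted class.
classes = ["pine", "spruce", "birch"]
cm = np.array([[249,   1,   0],
               [ 10, 237,   3],
               [  2,   5, 296]])

producer_acc = np.diag(cm) / cm.sum(axis=1)   # recall per reference class
overall_acc = np.diag(cm).sum() / cm.sum()
```

Producer accuracy answers "of the trees that truly are pines, how many did we call pines?", which is why it is the natural per-species metric for this kind of survey; its column-wise counterpart (user accuracy) measures the reliability of each predicted label instead.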

Read more at: https://www.mdpi.com/2072-4292/12/7/1070


Sunday, October 11, 2020

Classification of Grassland Desertification in China Based on Vis-NIR UAV Hyperspectral Remote Sensing

In this study, a vis-NIR (visible-Near InfraRed) hyperspectral remote sensing system for UAVs (Unmanned Aerial Vehicles) was used to analyze the type and presence of vegetation and soil of typical desertified grassland in Inner Mongolia using a DBN (Deep Belief Network), 2D CNN (2D Convolutional Neural Network) and 3D CNN (3D Convolutional Neural Network).

The results show that these typical deep learning models can effectively classify hyperspectral data on desertified grassland features. The highest classification accuracy was achieved by 3D CNN, with an overall accuracy of 86.36%. This study enriches the spatial scale of remote sensing research on grassland desertification, and provides a basis for further high-precision statistics and inversion of remote sensing of grassland desertification.

Read more: https://www.spectroscopyonline.com/view/classification-grassland-desertification-china-based-vis-nir-uav-hyperspectral-remote-sensing

Saturday, October 10, 2020

Deep Learning Classification of 2D Orthomosaic Images and 3D Point Clouds for Post-Event Structural Damage Assessment


Aerial imaging from UAVs (Unmanned Aerial Vehicles) permits highly detailed site characterization, in particular in the aftermath of extreme events with minimal ground support, to document current conditions of the region of interest.

However, aerial imaging results in a massive amount of data in the form of two-dimensional (2D) orthomosaic images and three-dimensional (3D) point clouds. Both types of datasets require effective and efficient data processing workflows to identify various damage states of structures.

This study aims to introduce two deep learning models based on both 2D and 3D convolutional neural networks to process the orthomosaic images and point clouds, for post windstorm classification. In detail, 2D CNN (2D Convolutional Neural Networks) are developed based on transfer learning from two well-known networks: AlexNet and VGGNet.

In contrast, a 3DFCN (3D Fully Convolutional Network) with skip connections was developed and trained based on the available point cloud data. Within this study, the datasets were created based on data from the aftermath of Hurricanes Harvey (Texas) and Maria (Puerto Rico). The developed 2DCNN and 3DFCN models were compared quantitatively based on the performance measures, and it was observed that the 3DFCN was more robust in detecting the various classes. 

This demonstrates the value and importance of 3D Datasets, particularly the depth information, to distinguish between instances that represent different damage states in structures.

Read more: https://www.mdpi.com/2504-446X/4/2/24/htm

Saturday, May 2, 2020

UAV Photogrammetry for topographic monitoring of coastal areas


Coastal areas suffer degradation due to the action of the sea and other natural and human-induced causes.

Topographical changes in beaches and sand dunes need to be assessed, both after severe events and on a regular basis, to build models that can predict the evolution of these natural environments.

This is an important application for airborne Laser Imaging Detection and Ranging (LIDAR) and conventional photogrammetry is also being used for regular monitoring programs of sensitive coastal areas.

This paper analyses the use of UAVs (Unmanned Aerial Vehicles) to map and monitor sand dunes and beaches. A very light plane equipped with a very cheap, non-metric camera was used to acquire images with ground resolutions better than 5 cm.

The Agisoft Photoscan software was used to orientate the images, extract point clouds, build a digital surface model and produce orthoimage mosaics. The processing, which includes automatic aerial triangulation with camera calibration and subsequent model generation, was mostly automated.

To achieve the best positional accuracy for the whole process, signalised ground control points were surveyed with a differential GPS (Global Positioning System) receiver. Two very sensitive test areas on the Portuguese northwest coast were analysed.

Detailed DSMs were obtained with 10 cm grid spacing and vertical accuracy (RMS) ranging from 3.5 to 5.0 cm, which is very similar to the image ground resolution (3.2–4.5 cm). Where possible to assess, the planimetric accuracy of the orthoimage mosaics was found to be subpixel.
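The vertical accuracy (RMS) quoted above is computed by comparing DSM heights against independently surveyed check points. A minimal sketch with made-up heights:

```python
import numpy as np

def vertical_rms(dsm_heights, checkpoint_heights):
    # Vertical accuracy of a DSM: RMS of the height differences at
    # independently surveyed check points.
    diff = np.asarray(dsm_heights) - np.asarray(checkpoint_heights)
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical check-point comparison (metres)
dsm = [12.03, 11.97, 12.06, 11.95, 12.01]
gps = [12.00, 12.00, 12.00, 12.00, 12.00]
rms = vertical_rms(dsm, gps)
```

Errors of a few centimetres at the check points, as in this toy example, are on the same order as the image ground resolution, which matches the paper's finding that the DSM accuracy is essentially resolution-limited.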

Within the regular coastal monitoring programme being carried out in the region, UAVs can replace many of the conventional flights, with considerable gains in the cost of the data acquisition and without any loss in the quality of topographic and aerial imagery data.

Read more:


Friday, May 1, 2020

UAVs for 3D mapping applications


Unmanned Aerial Vehicle (UAV) platforms are nowadays a valuable source of data for inspection, surveillance, mapping and 3D modeling issues.

As UAVs can be considered as a low cost alternative to the classical manned aerial photogrammetry, new applications in the short- and close-range domain are introduced.

Rotary or fixed wing UAVs, capable of performing the photogrammetric data acquisition with amateur or SLR digital cameras, can fly in manual, semi automated and autonomous modes.

Following a typical photogrammetric workflow, 3D results like Digital Surface or Terrain Models (DTM/DSM), contours, textured 3D models, vector information, etc. can be produced, even on large areas.

This paper explores the use of UAVs for Geomatics applications, giving an interesting overview of different UAV platforms and case studies.

https://www.researchgate.net/profile/Fabio_Remondino/publication/260529522_UAV_for_3D_mapping_applications_A_review/links/00b7d532f0e4da131e000000/UAV-for-3D-mapping-applications-A-review.pdf

Wednesday, April 29, 2020

Change Detection in Aerial Images Using Three-Dimensional Feature Maps



Interest in aerial image analysis has increased owing to recent developments in and availability of aerial imaging technologies, like UAVs (Unmanned aerial vehicles), as well as a growing need for autonomous surveillance systems.

Variant illumination, intensity noise, and different viewpoints are among the main challenges to overcome in order to determine changes in aerial images. This paper presents a robust method for change detection in aerial images.

To accomplish this, the method extracts three-dimensional (3D) features for segmentation of objects above a defined reference surface at each instant. The acquired 3D feature maps, with two measurements, are then used to determine changes in a scene over time.

In addition, the important parameters that affect measurement, such as the camera’s sampling rate, image resolution, the height of the drone, and the pixel’s height information, are investigated through a mathematical model. To exhibit its applicability, the proposed method has been evaluated on aerial images of various real-world locations and the results are promising.

The performance indicates the robustness of the method in addressing the problems of conventional change detection methods, such as intensity differences and shadows.
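The core of the described approach, segmenting objects above a reference surface at each instant and then differencing the two epochs, can be sketched as follows; the thresholds are illustrative, not the paper's values.

```python
import numpy as np

def change_mask(height_t0, height_t1, reference=0.0, min_height=0.5, tol=0.25):
    # Segment cells above a reference surface at each epoch, then flag
    # cells whose above-reference height changed by more than `tol` m.
    above0 = np.maximum(height_t0 - reference, 0.0)
    above1 = np.maximum(height_t1 - reference, 0.0)
    objects = (above0 >= min_height) | (above1 >= min_height)
    return objects & (np.abs(above1 - above0) > tol)

t0 = np.zeros((4, 4))          # empty ground at the first epoch
t1 = np.zeros((4, 4))
t1[1, 1] = 2.0                 # a 2 m structure appears at the second epoch
mask = change_mask(t0, t1)
```

Because the comparison is done on heights rather than pixel intensities, the method sidesteps the illumination and shadow problems of intensity-based change detection noted above.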



Tuesday, April 28, 2020

The 20th Attack Squadron locates and kills a terrorist commando


The demand for UAVs to conduct armed overwatch missions to help protect United States forces, as well as their allies and partners, isn't going away; quite the opposite. For the foreseeable future, the Reapers will continue to provide this invaluable service for American troops around the world, as you can see in a recently released video that includes a unique clip an MQ-9 Reaper captured of militants firing a rocket-propelled grenade at a C-130 Hercules airlifter that was performing an air drop of cargo at relatively low altitude.

In the full video, the UAV's pilot and sensor operator, who later struck those hostile forces, also offer an interesting behind-the-scenes look at how the unmanned aircraft perform these kinds of armed overwatch missions. The Air Force's 432nd Wing at Creech Air Force Base in Nevada, one of the service's premier UAV units, posted the video on YouTube on Apr. 6, 2020. The pilot, 1st Lieutenant Russel, and the sensor operator, Airman First Class Ashley, both assigned to the 20th Attack Squadron, which itself is assigned to the 432nd, but is based at Whiteman Air Force Base in Missouri, describe the event.

Tuesday, December 31, 2019

Want to shave weight off your UAVs? Find out how



HP 3D Printing has organized a webinar to explain how additive manufacturing can improve manufacturing processes throughout the product life cycle. Naturally, HP will focus on its Multi Jet Fusion technology, and, from what I know of that technology and of the other additive manufacturing technologies, I believe it can be very interesting for building lighter UAVs by replacing metal parts with plastic ones. That weight reduction translates into lower fuel consumption, as well as a higher top speed and longer range.


Another advantage I find extremely interesting for UAV manufacturers is the high productivity of this technology, which far exceeds that of its competitors. More advantages? Completely free-form design: if we want to produce UAVs that are smaller, lighter, faster, and carry a greater payload, we will inevitably have to redesign their components, if not the whole airframe. But manufacturing them may then be impossible because of the limitations of traditional CNC machining. With additive manufacturing that problem disappears, all the more so with HP's technology, thanks to the high level of isotropy of parts printed with Multi Jet Fusion.


In short, I recommend signing up for this event: you will discover the benefits of HP Multi Jet Fusion technology and learn about success stories from OEMs that are reinventing the way their products are made. So if you want to reduce the weight of your UAVs, don't miss this unique opportunity to explore in depth the possibilities of this unprecedented technology.

More information and registration:





Friday, October 4, 2019

New support radar for armed UAVs that must overfly civilian airports


A new radar system has allowed MQ-9 Reaper attack UAVs to fly unescorted into and out of Syracuse Hancock International Airport for the first time. It is the first time military UAVs have taken off from and landed at a civilian airport in the United States without an escort.


In the words of Michael Smith, Air National Guard colonel and commander of the 174th Attack Wing: "The newly installed ground-based radar allows MQ-9 UAVs to execute training missions safely and more effectively. This radar system improves the safety of the MQ-9 and helps avoid collisions with commercial air traffic."


MQ-9 Reaper UAVs have been flying daily out of Syracuse for four years. For safety reasons, the FAA required them to be escorted by at least one manned aircraft while flying at an altitude of 18,000 feet. However, only two Civil Air Patrol aircraft were available to follow the Reapers, which was insufficient since the air guard must work with three Reapers every day.


Developed by SRC Inc., based in Cicero, the new radar scans the skies around the airport and detects all aircraft with great precision, even hard-to-see drones and ultralights. It can also determine an aircraft's altitude even when its transponder is not working, something FAA radars cannot do. SRC originally developed the radar, known as LSTAR, to detect incoming mortar rounds, and it had never before been used at a commercial airport.


In addition to operating Reapers on training missions for 4,000 hours each year, the 174th Attack Wing trains all Reaper maintenance technicians for the USAF, the Air National Guard, and the Air Force Reserve, and also deploys members overseas to support Reaper operations and other USAF missions.