Search results for: wearable sensors
96 Optical and Structural Characterization of Rare Earth Doped Phosphate Glasses
Authors: Zélia Maria Da Costa Ludwig, Maria José Valenzuela Bell, Geraldo Henriques Da Silva, Thales Alves Faraco, Victor Rocha Da Silva, Daniel Rotmeister Teixeira, Vírgilio De Carvalho Dos Anjos, Valdemir Ludwig
Abstract:
Advances in telecommunications grow with the development of optical amplifiers based on rare earth ions. Research has concentrated on silicate glasses, although their amplified spontaneous emission is limited to a few tens of nanometers (~40 nm). Recently, phosphate glasses have received great attention due to their potential application in optical data transmission, detection, sensors, lasers, waveguides and optical fibers, besides their excellent physical properties such as high thermal expansion coefficients and low melting temperatures. Compared with silica glasses, phosphate glasses provide different optical properties, such as a large infrared transmission window and high density. Research on improving the physical and chemical durability of phosphate glass by adding heavy metal oxides to P2O5 has been performed. The addition of Na2O further improves the solubility of rare earths, while increasing the Al2O3 links in the P2O5 tetrahedra results in improved durability, a higher glass transition temperature and a decrease in the coefficient of thermal expansion. This work describes the structural and spectroscopic characterization of a phosphate glass matrix doped with different Er (erbium) concentrations. The phosphate glasses containing Er3+ ions were prepared by the melt technique. A study of the optical absorption, luminescence and lifetime was conducted in order to characterize the infrared emission of Er3+ ions at 1540 nm, due to the radiative transition 4I13/2 → 4I15/2. Our results indicate that the present glass is a very good matrix for Er3+ ions, and the quantum efficiency of the 1540 nm emission is high. No quenching mechanism for this luminescence was observed up to 2.0 mol% Er concentration. The Judd-Ofelt parameters, radiative lifetime and quantum efficiency have been determined in order to evaluate the potential of Er3+ ions in the new phosphate glass. The parameters follow the trend Ω2 > Ω4 > Ω6.
It is well known that the parameter Ω2 is an indication of the dominant covalent nature and/or structural changes in the vicinity of the ion (short-range effects), while the Ω4 and Ω6 intensity parameters are long-range parameters that can be related to bulk properties such as the viscosity and rigidity of the glass. From the PL measurements, no red or green upconversion was observed when pumping the samples with laser excitation at 980 nm. As a future prospect, this glass system will be synthesized with silver in order to determine the influence of silver nanoparticles on the Er3+ ions.
Keywords: phosphate glass, erbium, luminescence, glass system
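The quantum efficiency discussed above follows directly from the measured and Judd-Ofelt radiative lifetimes. A minimal sketch of this relation, with hypothetical lifetime values (the abstract does not report numbers):

```python
# Sketch: quantum efficiency of the Er3+ 1540 nm emission from measured and
# radiative (Judd-Ofelt) lifetimes. The lifetime values below are illustrative
# placeholders, not results from this work.

def quantum_efficiency(tau_measured_ms, tau_radiative_ms):
    """eta = tau_measured / tau_radiative, with tau_radiative from Judd-Ofelt theory."""
    return tau_measured_ms / tau_radiative_ms

tau_meas = 7.2  # measured 4I13/2 lifetime, ms (hypothetical)
tau_rad = 8.0   # Judd-Ofelt radiative lifetime, ms (hypothetical)
eta = quantum_efficiency(tau_meas, tau_rad)
print(f"quantum efficiency: {eta:.0%}")
```

An efficiency close to unity, as here, indicates that non-radiative channels (and hence concentration quenching) are weak, consistent with the abstract's observation up to 2.0 mol% Er.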
Procedia PDF Downloads 510
95 Potential of Aerodynamic Feature on Monitoring Multilayer Rough Surfaces
Authors: Ibtissem Hosni, Lilia Bennaceur Farah, Saber Mohamed Naceur
Abstract:
In order to assess water availability in the soil, it is crucial to have information about the distributed soil moisture content; this parameter helps in understanding the effect of humidity on the exchange between soil, plant cover and atmosphere, in addition to the surface processes and the hydrological cycle. On the other hand, the aerodynamic roughness length is a surface parameter that scales the vertical profile of the horizontal component of the wind speed and characterizes the ability of the surface to absorb the momentum of the airflow. In numerous applications of surface hydrology and meteorology, the aerodynamic roughness length is an important parameter for estimating momentum, heat and mass exchange between the soil surface and the atmosphere. It is therefore important to consider the impact of atmospheric factors in general, and natural erosion in particular, on soil evolution and on the characterization and prediction of its physical parameters. The study of wind-induced movements over vegetated soil surfaces, whether spaced plants or continuous plant cover, is motivated by significant research efforts in agronomy and biology. The major known problem in this area is crop damage by wind, a rapidly growing field of research. Obviously, most soil surface models require information about the aerodynamic roughness length and its temporal and spatial variability. We have used a bi-dimensional multi-scale (2D MLS) roughness description in which the surface is considered as a superposition of a finite number of one-dimensional Gaussian processes, each with its own spatial scale, using the wavelet transform and the Mallat algorithm to describe natural surface roughness. We have introduced the multi-layer aspect of the soil surface humidity in order to take into account a volume component in the problem of the backscattered radar signal.
As humidity increases, the dielectric constant of the soil-water mixture increases, and this change is detected by microwave sensors. Nevertheless, many existing models in the field of radar imagery cannot be applied directly to areas covered with vegetation, due to the vegetation backscattering. Thus, the radar response corresponds to the combined signature of the vegetation layer and the underlying soil surface layer. Therefore, the key issue in the numerical estimation of soil moisture is to separate the two contributions and calculate the scattering behaviors of both layers by defining the scattering of the vegetation and of the soil below. This paper presents a synergistic methodology for estimating roughness and soil moisture from C-band radar measurements. The methodology uses a microwave/optical model to calculate the scattering behavior of the vegetation-covered area by defining the scattering of the vegetation and of the soil below.
Keywords: aerodynamic, bi-dimensional, vegetation, synergistic
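The multiscale roughness description can be sketched as a Mallat-style pyramid decomposition, splitting a profile into one detail band per spatial scale. The sketch below uses the simple Haar wavelet on a synthetic 1D profile; the paper's actual wavelet choice and full 2D formulation are not specified here:

```python
import numpy as np

def haar_multiscale(profile, levels):
    """Mallat-style pyramid with the Haar wavelet: repeatedly split the signal
    into a coarse approximation and a detail band (one per spatial scale)."""
    approx = np.asarray(profile, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2.0))  # roughness at this scale
        approx = (even + odd) / np.sqrt(2.0)         # coarse approximation
    return approx, details

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 256)
# synthetic "surface": smooth large-scale trend + fine-scale Gaussian roughness
profile = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
approx, details = haar_multiscale(profile, levels=3)
print(len(details), approx.size)  # three detail bands, 32-sample coarse residue
```

Because the Haar transform is orthonormal, the total signal energy is preserved across the approximation and detail bands, so each band's energy quantifies the roughness contribution of its scale.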
Procedia PDF Downloads 269
94 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang
Abstract:
Knowledge of land surface temperature (LST) is of crucial importance in energy balance studies and environmental modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at regional and global scales. Because the radiance received by TIR sensors combines contributions from the atmosphere and the land surface, atmospheric correction has to be performed to remove the atmospheric transmittance and upwelling radiance. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering the visible to infrared spectrum at fixed view angles with a 3 km pixel size at nadir, offering new and unique capabilities for LST and land surface emissivity (LSE) measurements. However, due to its high temporal resolution, the atmospheric correction cannot be performed with radiosonde profiles or reanalysis data, since these profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Because the DTC model is underdetermined with ECMWF data available at only four synoptic times per day (UTC 00:00, 06:00, 12:00 and 18:00) for each location, several approaches are adopted in this study. It is well known that the atmospheric transmittance and upwelling radiance are related to the water vapour content (WVC). With the aid of simulated data, this relationship can be determined for each viewing zenith angle and each SEVIRI TIR channel. Thus, the atmospheric transmittance and upwelling radiance are first removed with the aid of the instantaneous WVC, which is retrieved from the brightness temperatures in SEVIRI channels 5, 9 and 10, and a group of brightness temperatures for the surface-leaving radiance (Tg) is acquired.
Subsequently, a group of the six parameters of the DTC model is fitted to these Tg by a Levenberg-Marquardt least-squares algorithm (denoted DTC model 1). Although the retrieval error of the WVC and the approximate relationships between WVC and the atmospheric parameters induce some uncertainties, these do not significantly affect the determination of the three parameters td, ts and β in the DTC model (β is the angular frequency, td is the time at which Tg reaches its maximum, and ts is the starting time of attenuation). Furthermore, due to the large fluctuation in temperature and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before to two hours after sunrise are excluded. With td, ts and β known, a new DTC model (denoted DTC model 2) is fitted again to the Tg at UTC times 05:57, 11:57, 17:57 and 23:57, which are atmospherically corrected with ECMWF data. A new group of the six DTC parameters is thereby generated, and the Tg at any given time can be acquired. Finally, this method is applied successfully to SEVIRI data in channel 9. The results show that the proposed method performs reasonably without additional assumptions, and the Tg derived with the improved method is much more consistent with that from radiosonde measurements.
Keywords: atmosphere correction, diurnal temperature cycle model, land surface temperature, SEVIRI
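The DTC fitting step can be sketched as a Levenberg-Marquardt least-squares fit of a six-parameter, two-part model. The model form below is a simplified Göttsche-Olesen-type shape (cosine by day, exponential decay after the attenuation time ts), and all numbers are synthetic illustrations, not the authors' exact formulation or data:

```python
import numpy as np
from scipy.optimize import least_squares

def dtc(t, T0, Ta, omega, tm, ts, k):
    """Simplified two-part diurnal temperature cycle: cosine before the
    attenuation start time ts, exponential decay toward T0 after it."""
    day = T0 + Ta * np.cos(np.pi / omega * (t - tm))
    T_ts = T0 + Ta * np.cos(np.pi / omega * (ts - tm))  # value at ts (continuity)
    night = T0 + (T_ts - T0) * np.exp(np.clip(-(t - ts) / k, -50.0, 50.0))
    return np.where(t < ts, day, night)

# synthetic "Tg" observations every 15 min (SEVIRI-like), with noise
t = np.arange(6.0, 30.0, 0.25)  # hours
true = (290.0, 12.0, 11.0, 13.0, 17.5, 3.5)  # T0, Ta, omega, tm, ts, k
rng = np.random.default_rng(1)
obs = dtc(t, *true) + 0.3 * rng.standard_normal(t.size)

# Levenberg-Marquardt fit of the six parameters from a rough first guess
fit = least_squares(lambda p: dtc(t, *p) - obs,
                    x0=(285.0, 10.0, 12.0, 12.0, 18.0, 4.0), method="lm")
print(np.round(fit.x, 2))
```

In the paper's scheme this fit is run twice: first on the WVC-corrected Tg (model 1) to pin down td, ts and β, then again on the ECMWF-corrected Tg at the four synoptic times (model 2).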
Procedia PDF Downloads 268
93 Constitutive Androstane Receptor (CAR) Inhibitor CINPA1 as a Tool to Understand CAR Structure and Function
Authors: Milu T. Cherian, Sergio C. Chai, Morgan A. Casal, Taosheng Chen
Abstract:
This study aims to use CINPA1, a recently discovered small-molecule inhibitor of the xenobiotic receptor CAR (constitutive androstane receptor), to understand the binding modes of CAR and to guide CAR-mediated gene expression profiling studies in human primary hepatocytes. CAR and PXR are xenobiotic sensors that respond to drugs and endobiotics by modulating the expression of metabolic genes that enhance detoxification and elimination. Elevated levels of drug-metabolizing enzymes and efflux transporters resulting from CAR activation promote the elimination of chemotherapeutic agents, leading to reduced therapeutic effectiveness. Multidrug resistance in tumors after chemotherapy could be associated with errant CAR activity, as shown in the case of neuroblastoma. CAR inhibitors used in combination with existing chemotherapeutics could therefore attenuate multidrug resistance and resensitize chemoresistant cancer cells. CAR and PXR have many overlapping modulating ligands as well as many overlapping target genes, which has confounded attempts to understand and regulate receptor-specific activity. Through a directed screening approach, we previously identified a new CAR inhibitor, CINPA1, which is novel in its ability to inhibit CAR function without activating PXR. The cellular mechanisms by which CINPA1 inhibits CAR function were extensively examined, along with its pharmacokinetic properties. CINPA1 binding was shown to change CAR-coregulator interactions as well as to modify CAR recruitment at DNA response elements of regulated genes. CINPA1 was shown to be broken down in the liver into two mostly inactive metabolites. The structure-activity differences between CINPA1 and its metabolites were used to guide computational modeling based on the CAR-LBD structure. To rationalize how ligand binding may lead to different CAR pharmacology, an analysis of the docked poses of human CAR bound to CITCO (a CAR activator) vs. CINPA1 or its metabolites was conducted.
From our modeling, strong hydrogen bonding of CINPA1 with N165 and H203 in the CAR-LBD was predicted. These residues were validated as important for CINPA1 binding using single amino-acid CAR mutants in a CAR-mediated functional reporter assay. Also predicted were residues making key hydrophobic interactions with CINPA1 but not with the inactive metabolites. Some of these hydrophobic amino acids were also identified, and, additionally, the differential coregulator interactions of these mutants were determined in mammalian two-hybrid systems. CINPA1 represents an excellent starting point for future optimization into highly relevant probe molecules to study the function of the CAR receptor in normal physiology and pathophysiology, and for the possible development of therapeutics (e.g., for resensitizing chemoresistant neuroblastoma cells).
Keywords: antagonist, chemoresistance, constitutive androstane receptor (CAR), multi-drug resistance, structure activity relationship (SAR), xenobiotic resistance
Procedia PDF Downloads 288
92 Analysis of Stress and Strain in Head Based Control of Cooperative Robots through Tetraplegics
Authors: Jochen Nelles, Susanne Kohns, Julia Spies, Friederike Schmitz-Buhl, Roland Thietje, Christopher Brandl, Alexander Mertens, Christopher M. Schlick
Abstract:
Industrial robots, as part of highly automated manufacturing, have recently been developed into cooperative (lightweight) robots. This offers the opportunity of using them as assistance robots and of improving the participation in professional life of disabled or handicapped people such as tetraplegics. The robots under development are located within a cooperation area, together with the working person, at the same workplace. This cooperation area is one in which the robot and the working person can perform tasks at the same time; thus, working people and robots operate in immediate proximity. Considering the physical restrictions and limited mobility of tetraplegics, hands-free robot control could be an appropriate approach for a cooperative assistance robot. To meet these requirements, the research project MeRoSy (human-robot synergy) develops methods for cooperative assistance robots based on the measurement of head movements of the working person. One research objective is to improve the participation in professional life of people with disabilities, in particular mobility-impaired persons (e.g. wheelchair users or tetraplegics), who are denied participation in a self-determined working life. This raises the research question of how a human-robot cooperation workplace can be designed for hands-free robot control. Here, the example of a library scenario is demonstrated. In this paper, an empirical study that focuses on the impact of head-movement-related stress is presented. Twelve test subjects with tetraplegia participated in the study. Tetraplegia, also known as quadriplegia, is the most severe type of spinal cord injury. In the experiment, three different basic head movements were examined. Data on the head posture were collected by a motion capture system; muscle activity was measured via surface electromyography, and the subjective mental stress was assessed via a mental effort questionnaire.
The muscle activity was measured for the sternocleidomastoid (SCM), the upper trapezius (UT, or trapezius pars descendens) and the splenius capitis (SPL) muscles. For this purpose, six non-invasive surface electromyography sensors were mounted on the head and neck area. An analysis of variance shows differentiated muscular strains depending on the type of head movement. Systematically investigating the influence of different basic head movements on the resulting strain is an important step in relating the research results to other scenarios. At the end of this paper, a conclusion is drawn and an outlook on future work is presented.
Keywords: assistance robot, human-robot interaction, motion capture, stress-strain-concept, surface electromyography, tetraplegia
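The analysis-of-variance step above can be sketched as a one-way ANOVA testing whether muscle activity differs across the three basic head movements. The values below are synthetic placeholders (normalized EMG amplitudes for twelve subjects per condition), not the study's data, and the three movement labels are assumed for illustration:

```python
import numpy as np
from scipy.stats import f_oneway

# Synthetic normalized SCM activity (% of maximum voluntary contraction)
# for three hypothetical head movements, 12 subjects each.
rng = np.random.default_rng(42)
flexion  = 10.0 + 2.0 * rng.standard_normal(12)
rotation = 14.0 + 2.0 * rng.standard_normal(12)
tilt     = 11.0 + 2.0 * rng.standard_normal(12)

# One-way ANOVA: does mean activity differ between movement types?
f_stat, p_value = f_oneway(flexion, rotation, tilt)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A small p-value, as this synthetic effect size produces, would support the paper's finding of differentiated muscular strain across movement types; a post-hoc test would then locate which pairs differ.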
Procedia PDF Downloads 315
91 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations
Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso
Abstract:
Loss of containment is the primary hazard with which process safety management is concerned in the oil and gas industry. Escalation to more serious consequences begins with the loss of containment: oil and gas released through leakage or spillage from primary containment can result in pool fires, jet fires and even explosions when it meets the various ignition sources present in operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action before a potential loss of containment occurs. The value of such a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research and studies, accurately detecting loss of containment in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow and temperature (PVT) points along the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, and hence generally not a viable alternative from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in pipelines. Mathematical modeling is used to generate simulation data, which are then used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable to experimental data with very high levels of accuracy.
While a supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics to the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for using data analytics tools and mathematical modeling to develop a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
Keywords: pipeline, leakage, detection, AI
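The simulation-to-classifier pipeline described above can be sketched in miniature: generate labeled "simulated" operating states, then train a supervised model to flag leaks. The two features (pressure-drop anomaly and flow imbalance) and the plain logistic-regression learner below are illustrative stand-ins for the paper's CFD-generated training data and unspecified model:

```python
import numpy as np

# Synthetic stand-in for CFD simulation output: hydraulic features under
# leak / no-leak conditions (hypothetical feature choices).
rng = np.random.default_rng(7)
n = 400
leak = rng.integers(0, 2, n)                      # label: 1 = leak present
dp = 0.2 * rng.standard_normal(n) + 1.5 * leak    # pressure-drop anomaly
dq = 0.2 * rng.standard_normal(n) + 1.0 * leak    # inlet-outlet flow imbalance
X = np.column_stack([np.ones(n), dp, dq])         # bias + features

# Minimal supervised learner: logistic regression by gradient descent
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))              # predicted leak probability
    w -= 0.1 * X.T @ (p - leak) / n               # log-loss gradient step

pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
print("training accuracy:", (pred == leak).mean())
```

In practice, the training set would come from transient CFD runs spanning leak sizes and locations, and the model would be validated on held-out scenarios before field deployment.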
Procedia PDF Downloads 191
90 Temperature Dependence of the Optoelectronic Properties of InAs(Sb)-Based LED Heterostructures
Authors: Antonina Semakova, Karim Mynbaev, Nikolai Bazhenov, Anton Chernyaev, Sergei Kizhaev, Nikolai Stoyanov
Abstract:
At present, heterostructures are used in the fabrication of almost all types of optoelectronic devices. Our research focuses on the optoelectronic properties of InAs(Sb) solid solutions, which are widely used in the fabrication of light-emitting diodes (LEDs) operating in the mid-wavelength infrared range (MWIR). This spectral range (2-6 μm) is relevant for laser diode spectroscopy of gases and molecules, for systems for the detection of explosive substances, for medical applications, and for environmental monitoring. The fabrication of MWIR LEDs that operate efficiently at room temperature is mainly hindered by the predominance of non-radiative Auger recombination of charge carriers over radiative recombination, which makes the practical application of LEDs difficult. However, non-radiative recombination can be partly suppressed in quantum-well structures, which makes studies of such structures quite topical. In this work, the electroluminescence (EL) of LED heterostructures based on InAs(Sb) epitaxial films, with the molar fraction of InSb ranging from 0 to 0.09, and on multiple quantum well (MQW) structures was studied in the temperature range 4.2-300 K. The heterostructures were grown by metal-organic chemical vapour deposition on InAs substrates. On top of the active layer, a wide-bandgap InAsSb(Ga,P) barrier was formed. At low temperatures (4.2-100 K), stimulated emission was observed. As the temperature increased, the emission became spontaneous. The transition from stimulated to spontaneous emission occurred at different temperatures for structures with different InSb contents in the active region. The temperature-dependent carrier lifetime, limited by radiative recombination and the most probable Auger processes (for the materials under consideration, CHHS and CHCC), was calculated within the framework of the Kane model.
The effect of the various recombination processes on the carrier lifetime was studied, and the dominant role of Auger processes was established. For the MQW structures, quantization energies for electrons, light holes and heavy holes were calculated. A characteristic feature of the experimental EL spectra of these structures was the presence of peaks with energies different from those of the calculated optical transitions between the first quantization levels for electrons and heavy holes. The results showed a strong effect of the specific electronic structure of InAsSb on the energy and intensity of optical transitions in nanostructures based on this material. For the structure with MQWs in the active layer, a very weak temperature dependence of the EL peak was observed at high temperatures (>150 K), which makes it attractive for fabricating temperature-resistant gas sensors operating in the mid-infrared range.
Keywords: electroluminescence, InAsSb, light emitting diode, quantum wells
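The competition between radiative and Auger channels described above can be sketched by combining the two lifetimes in parallel, 1/τ_eff = 1/τ_rad + 1/τ_Auger with τ_rad = 1/(Bn) and τ_Auger = 1/(Cn²). The coefficients B and C below are illustrative placeholders, not the Kane-model values computed in this work:

```python
import numpy as np

# Hypothetical recombination coefficients (orders of magnitude only):
B = 1e-10                    # radiative coefficient, cm^3/s
C = 1e-26                    # Auger (CHCC/CHHS-type) coefficient, cm^6/s
n = np.logspace(15, 18, 4)   # carrier density, cm^-3

tau_rad = 1.0 / (B * n)             # radiative lifetime
tau_auger = 1.0 / (C * n ** 2)      # Auger lifetime
tau_eff = 1.0 / (1.0 / tau_rad + 1.0 / tau_auger)  # channels act in parallel
eta = tau_eff / tau_rad             # internal quantum efficiency

for ni, e in zip(n, eta):
    print(f"n = {ni:.0e} cm^-3: efficiency = {e:.3f}")
```

Because τ_Auger falls as 1/n² while τ_rad falls only as 1/n, the efficiency drops steadily with carrier density, which is the mechanism behind the Auger-limited room-temperature performance of MWIR LEDs.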
Procedia PDF Downloads 212
89 Graphene-Graphene Oxide Doping Effect on the Mechanical Properties of Polyamide Composites
Authors: Daniel Sava, Dragos Gudovan, Iulia Alexandra Gudovan, Ioana Ardelean, Maria Sonmez, Denisa Ficai, Laurentia Alexandrescu, Ecaterina Andronescu
Abstract:
Graphene and graphene oxide have been intensively studied due to their very good properties, which are either intrinsic to the material or arise from its easy doping with other functional groups. Graphene and graphene oxide have found a broad range of useful applications: electronic devices, drug delivery systems, medical devices, sensors and optoelectronics, coating materials, sorbents of different agents for environmental applications, etc. This broad range of applications comes not only from the use of graphene or graphene oxide alone, or after prior functionalization with different moieties, but also from its role as a building block and important component of many composite devices, where its addition brings new functionalities to the final composite or strengthens those already present in the parent product. We attempted to improve the mechanical properties of polyamide elastomers by compounding graphene oxide into the parent polymer composition. The addition of graphene oxide contributes to the properties of the final product, improving its hardness and aging resistance. Graphene oxide itself has lower hardness and tensile strength, and if the amount of graphene oxide in the final product is not correctly estimated, it can lead to mechanical properties comparable to the starting material or even worse: if the amount added is too high (above 3% by mass relative to the parent material), the graphene oxide agglomerates become tearing points in the final material. Two different standard tests were carried out on the obtained materials, the hardness test and the tensile strength test, both before and after the aging process. Accelerated aging in extreme heat was used to simulate the effect of natural aging over a long period of time.
FT-IR spectra were recorded for all materials. In the FT-IR spectra, only the bands corresponding to the polyamide were intense, while the characteristic bands of graphene oxide were very small in comparison, due to the very small amounts introduced into the final composite, the low absorptivity of the graphene backbone and the limited number of functional groups. In conclusion, some compositions showed very promising results, both in the tensile strength and in the hardness tests. The best ratio of graphene to elastomer was between 0.6 and 0.8%, this addition extending the life of the product. Acknowledgements: The present work was possible due to the EU-funding grant POSCCE-A2O2.2.1-2013-1, Project No. 638/12.03.2014, code SMIS-CSNR 48652. The financial contribution received from the national project 'New nanostructured polymeric composites for centre pivot liners, centre plate and other components for the railway industry (RONERANANOSTRUCT)', No: 18 PTE (PN-III-P2-2.1-PTE-2016-0146), is also acknowledged.
Keywords: graphene, graphene oxide, mechanical properties, doping effect
Procedia PDF Downloads 315
88 Impact Location from Instrumented Mouthguard Kinematic Data in Rugby
Authors: Jazim Sohail, Filipe Teixeira-Dias
Abstract:
Mild traumatic brain injury (mTBI) in non-helmeted contact sports is a growing concern due to the serious risk of potential injury. Extensive research is being conducted into head kinematics in non-helmeted contact sports, utilizing instrumented mouthguards that allow researchers to record accelerations and velocities of the head during and after an impact. These do not, however, allow the location of the impact on the head, or its magnitude and orientation, to be determined. This research proposes and validates two methods to quantify impact locations from instrumented mouthguard kinematic data, one using rigid body dynamics, the other utilizing machine learning. The rigid body dynamics technique focuses on establishing and matching moments from Euler's equations and torque equations in order to find the impact location on the head. The methodology is validated with impact data collected in a lab test on a dummy head fitted with an instrumented mouthguard. Additionally, a Hybrid III dummy head finite element model was utilized to create synthetic kinematic datasets for impacts at varying locations, to validate the impact location algorithm. The algorithm calculates accurate impact locations; however, it requires preprocessing of live data, which is currently done by cross-referencing data timestamps with video footage. The machine learning technique focuses on eliminating this preprocessing step by establishing trends within the time-series signals from instrumented mouthguards to determine the impact location on the head. An unsupervised learning technique is used to cluster together impacts within similar regions across an entire time-series signal. The kinematic signals from the mouthguards are converted to the frequency domain before a clustering algorithm groups similar signals within a time series that may span the length of a game. Impacts are clustered within predetermined location bins.
The same Hybrid III dummy finite element model is used to create impacts that closely replicate on-field impacts, in order to build synthetic time-series datasets consisting of impacts at varying locations. These time-series datasets are used to validate the machine learning technique. The rigid body dynamics technique provides a good method for establishing the accurate impact location of signals that have already been labeled as true impacts and filtered out of the entire time series. The machine learning technique, in contrast, can be applied to long time-series signal data, but provides impact locations only within predetermined regions of the head. Additionally, the machine learning technique can be used to eliminate false impacts captured by the sensors, saving time for data scientists working with instrumented mouthguard kinematic data, as validating true impacts against video footage would no longer be required.
Keywords: head impacts, impact location, instrumented mouthguard, machine learning, mTBI
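The unsupervised step above (frequency-domain features, then clustering into location bins) can be sketched in miniature. The signals below are synthetic damped oscillations standing in for mouthguard acceleration traces, the two "locations" are distinguished only by a hypothetical dominant frequency, and the clustering is a minimal 2-means with a deliberately simple initialization:

```python
import numpy as np

rng = np.random.default_rng(3)

def impact(freq_hz, n=256, fs=1000.0):
    """Damped oscillation: a crude stand-in for a head-impact transient."""
    t = np.arange(n) / fs
    return (np.exp(-20.0 * t) * np.sin(2 * np.pi * freq_hz * t)
            + 0.02 * rng.standard_normal(n))

# Two synthetic "impact locations" with different spectral signatures
signals = [impact(60.0) for _ in range(10)] + [impact(180.0) for _ in range(10)]
feats = np.array([np.abs(np.fft.rfft(s))[:40] for s in signals])  # spectral features

# Minimal 2-means, initialized with one signal from each group for clarity
centroids = feats[[0, 10]].copy()
for _ in range(20):
    dists = ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    labels = np.argmin(dists, axis=1)                 # assign to nearest centroid
    centroids = np.array([feats[labels == k].mean(axis=0) for k in range(2)])
print(labels)
```

In the paper's setting the clusters would be mapped to predetermined head-region bins, and cluster outliers could be flagged as false impacts rather than sent for video validation.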
Procedia PDF Downloads 217
87 Graphene Metamaterials Supported Tunable Terahertz Fano Resonance
Authors: Xiaoyong He
Abstract:
The manipulation of THz waves is still a challenging task due to the lack of natural materials that interact with them strongly. Metamaterials (MMs), designed by tailoring the characteristics of their unit cells (meta-molecules), may solve this problem. However, because of Ohmic and radiation losses, the performance of MM devices suffers from dissipation and a low quality factor (Q-factor). This dilemma may be circumvented by Fano resonance, which arises from the destructive interference between a bright continuum mode and a dark discrete mode (or a narrow resonance). In contrast to the symmetric Lorentzian spectral curve, a Fano resonance shows a distinctly asymmetric line shape, an ultrahigh quality factor, and steep variations in the spectral curves. Fano resonance is usually realized through symmetry breaking. However, if concentric double rings (DR) are placed close to each other, the near-field coupling between them gives rise to two hybridized modes (bright and narrowband dark modes) because of the local asymmetry, resulting in the characteristic Fano line shape. Furthermore, from the practical viewpoint, it is highly desirable to modulate the Fano spectral curves conveniently, which is an important and interesting research topic. In current Fano systems, tunable spectral curves can be realized by adjusting the geometrical structural parameters or by magnetic fields biasing a ferrite-based structure. However, due to the limited dispersion properties of active materials, it remains difficult to tailor the Fano resonance conveniently with fixed structural parameters. With its favorable properties of extreme confinement and high tunability, graphene is a strong candidate for achieving this goal. The DR structure supports the excitation of so-called 'trapped modes', with the merits of a simple structure and high-quality resonances in thin structures.
By depositing graphene circular DRs on a SiO2/Si/polymer substrate, the tunable Fano resonance has been theoretically investigated in the terahertz regime, including the effects of the graphene Fermi level, the structural parameters and the operation frequency. The results show that the Fano peak can be efficiently modulated because of the strong coupling between the incident waves and the graphene ribbons. As the Fermi level increases, the peak amplitude of the Fano curve increases, and the resonant peak shifts to higher frequency. The amplitude modulation depth of the Fano curves is about 30% when the Fermi level changes within the range 0.1-1.0 eV. The optimum gap distance between the DRs is about 8-12 μm, where the figure of merit shows a peak. As the graphene ribbon width increases, the Fano spectral curves broaden and the resonant peak blue-shifts. These results are very helpful for developing novel graphene plasmonic devices, e.g. sensors and modulators.
Keywords: graphene, metamaterials, terahertz, tunable
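The asymmetric line shape attributed above to bright/dark mode interference is described by the standard Fano formula F(ε) = (q + ε)² / (1 + ε²), with reduced detuning ε = 2(ω − ω₀)/Γ. The sketch below evaluates it with illustrative values of the asymmetry parameter q, resonance ω₀ and width Γ (not taken from this work):

```python
import numpy as np

def fano(w, w0=1.0, gamma=0.05, q=2.0):
    """Standard Fano line shape: asymmetric profile from the interference of a
    broad (bright) continuum and a narrow (dark) resonance."""
    eps = 2.0 * (w - w0) / gamma   # reduced detuning
    return (q + eps) ** 2 / (1.0 + eps ** 2)

w = np.linspace(0.8, 1.2, 2001)   # normalized frequency axis (illustrative)
F = fano(w)
print("peak near w =", w[np.argmax(F)])  # analytic maximum at eps = 1/q
print("dip  near w =", w[np.argmin(F)])  # analytic zero at eps = -q
```

The steep swing between the near-zero dip (ε = −q) and the maximum (ε = 1/q) over a fraction of the linewidth is what gives Fano-based devices their high effective Q and modulation contrast.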
Procedia PDF Downloads 344
86 Monitoring of Rice Phenology and Agricultural Practices from Sentinel 2 Images
Authors: D. Courault, L. Hossard, V. Demarez, E. Ndikumana, D. Ho Tong Minh, N. Baghdadi, F. Ruget
Abstract:
In the context of global change, efficient management of the available resources has become one of the most important topics, particularly for sustainable crop development. Timely assessment with high precision is crucial for water resource and pest management. Rice cultivated in the Camargue region of Southern France faces a dual challenge: reducing soil salinity by flooding while at the same time reducing the use of herbicides, which negatively impact the environment. This context has led farmers to diversify their crop rotations and agricultural practices. The objective of this study was to evaluate this crop diversity, both in cropping systems and in the agricultural practices applied to rice paddies, in order to quantify the impact on the environment and on crop production. The proposed method is based on the combined use of crop models and multispectral data acquired from the recent Sentinel 2 satellite sensors launched by the European Space Agency (ESA) within the framework of the Copernicus program. More than 40 images at fine spatial resolution (10 m in the optical range) were processed for 2016 and 2017 (with a revisit time of 5 days) to map crop types using the random forest method and to estimate biophysical variables (LAI) retrieved by inversion of the PROSAIL canopy radiative transfer model. Thanks to the short revisit time of Sentinel 2 data, it was possible to monitor the soil labor before flooding and the second sowing made by some farmers to better control weeds. The temporal trajectories of the remote sensing data were analyzed for various rice cultivars in order to define the main parameters describing the phenological stages, which are useful for calibrating two crop models (STICS and SAFY). Results were compared to surveys conducted on 10 farms. A large variability of LAI was observed at farm scale (up to 2-3 m²/m²), which induced a significant variability in the simulated yields (up to 2 t/ha). Land-use observations were also collected on more than 300 fields.
Various maps were elaborated: land use, LAI, flooding and sowing dates, and harvest dates. Together, these maps allow a new typology to be proposed for classifying these paddy crop systems. Key phenological dates can be estimated from inverse procedures and were validated against ground surveys. The proposed approach made it possible to compare the two years and to detect anomalies. The methods proposed here can be applied to different crops in various contexts and confirm the potential of remote sensing acquired at fine resolution, such as the Sentinel-2 system, for agricultural applications and environmental monitoring. This study was supported by the French national centre of space studies (CNES, TOSCA program).
Keywords: agricultural practices, remote sensing, rice, yield
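The abstract does not spell out how the phenological parameters are extracted from the Sentinel-2 trajectories; as a rough illustration of the idea, the sketch below estimates an emergence date and the LAI peak from a dated LAI series (the 0.5 m²/m² threshold and the linear interpolation between acquisitions are illustrative assumptions, not the authors' calibration procedure):

```python
def extract_phenology(days, lai, emergence_thresh=0.5):
    """Estimate key phenological dates from a (sparse) LAI time series.

    days: acquisition days of year (sorted); lai: retrieved LAI values.
    Returns (emergence_day, peak_day, peak_lai), with the emergence day
    linearly interpolated between the two acquisitions bracketing the
    threshold crossing.
    """
    if len(days) != len(lai) or len(days) < 2:
        raise ValueError("need at least two dated LAI samples")

    # Peak of the trajectory: maximum retrieved LAI and its date.
    peak_idx = max(range(len(lai)), key=lambda i: lai[i])
    peak_day, peak_lai = days[peak_idx], lai[peak_idx]

    # Emergence: first upward crossing of the threshold before the peak.
    emergence_day = None
    for i in range(1, peak_idx + 1):
        if lai[i - 1] < emergence_thresh <= lai[i]:
            # Linear interpolation between the two bracketing acquisitions.
            frac = (emergence_thresh - lai[i - 1]) / (lai[i] - lai[i - 1])
            emergence_day = days[i - 1] + frac * (days[i] - days[i - 1])
            break
    return emergence_day, peak_day, peak_lai
```

Dates recovered this way per field can then serve as calibration targets for the STICS and SAFY crop models.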
Procedia PDF Downloads 275
85 Synthesis of Carbon Nanotubes from Coconut Oil and Fabrication of a Non Enzymatic Cholesterol Biosensor
Authors: Mitali Saha, Soma Das
Abstract:
The fabrication of nanoscale materials for use in chemical sensing, biosensing and biological analyses has proven a promising avenue in the last few years. Cholesterol has aroused considerable interest in recent years on account of its being an important parameter in clinical diagnosis. There is a strong positive correlation between high serum cholesterol level and arteriosclerosis, hypertension, and myocardial infarction. Enzyme-based electrochemical biosensors have shown high selectivity and excellent sensitivity, but the enzyme is easily denatured during the immobilization procedure, and its activity is also affected by temperature, pH, and toxic chemicals. Besides, the reproducibility of enzyme-based sensors is not very good, which further restricts the application of cholesterol biosensors. It has been demonstrated that carbon nanotubes can promote electron transfer with various redox-active proteins, ranging from cytochrome c to glucose oxidase with a deeply embedded redox center. In continuation of our earlier work on the synthesis and applications of carbon- and metal-based nanoparticles, we report here the synthesis of carbon nanotubes (CCNT) by burning coconut oil under an insufficient flow of air using an oil lamp. The soot was collected from the top portion of the flame, where the temperature was around 650 °C, and was purified, functionalized and then characterized by SEM, p-XRD and Raman spectroscopy. The SEM micrographs showed the formation of the tubular structure of CCNT with diameters below 100 nm. The XRD pattern showed two predominant peaks at 25.2° and 43.8°, which correspond to the (002) and (100) planes of CCNT, respectively. The Raman spectrum (514 nm excitation) showed a band at 1600 cm⁻¹ (G-band), related to the vibration of sp2-bonded carbon, and a band at 1350 cm⁻¹ (D-band), associated with the vibrations of sp3-bonded carbon.
A nonenzymatic cholesterol biosensor was then fabricated on an insulating Teflon substrate containing three silver wires at the surface, covered by the CCNT obtained from coconut oil. Here, the CCNTs served as both the working and counter electrodes, whereas the reference electrode and electric contacts were made of silver. The dimensions of the electrode were 3.5 cm × 1.0 cm × 0.5 cm (length × width × height), and it is suitable for working with a 50 µL volume, like standard screen-printed electrodes. The voltammetric behavior of cholesterol at the CCNT electrode was investigated by cyclic voltammetry and differential pulse voltammetry using 0.001 M H2SO4 as the electrolyte. The influence of experimental parameters on the peak currents of cholesterol, such as pH, accumulation time, and scan rate, was optimized. Under optimum conditions, the peak current was found to be linear in the cholesterol concentration range from 1 µM to 50 µM, with a sensitivity of ~15.31 μA μM⁻¹ cm⁻², a lower detection limit of 0.017 µM and a response time of about 6 s. The long-term storage stability of the sensor was tested for 30 days, and the current response was found to be ~85% of its initial response after 30 days.
Keywords: coconut oil, CCNT, cholesterol, biosensor
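The figures of merit quoted above follow from a linear calibration of peak current against concentration. A minimal sketch of that arithmetic, assuming the common 3σ/slope convention for the detection limit (the example numbers in the test are illustrative, not the authors' measurements):

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y ≈ a*x + b; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

def calibration_figures(conc_uM, current_uA, area_cm2, sigma_blank_uA):
    """Sensitivity (µA µM⁻¹ cm⁻²) and 3σ detection limit (µM) from a
    linear calibration of peak current vs. cholesterol concentration."""
    slope, _ = linear_fit(conc_uM, current_uA)   # µA per µM
    sensitivity = slope / area_cm2               # normalize by electrode area
    lod = 3.0 * sigma_blank_uA / slope           # 3σ/slope convention
    return sensitivity, lod
```

The same fit also yields the linear range endpoints by checking where residuals start to grow.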
Procedia PDF Downloads 282
84 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach
Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista
Abstract:
Military commanders are increasingly dependent on spatial awareness: knowing where the enemy is, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a global scale. Thus, geovisualisation has become an essential asset in the defense sector. It has become indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management on the battlefield, situational awareness, effective planning, monitoring, and more. For example, a 3D visualization of battlefield data contributes to intelligence analysis, evaluation of post-mission outcomes, and the creation of predictive models to enhance decision-making and strategic planning capabilities. However, traditional visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, monocular depth estimation based on deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information for valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for 3D geovisualisation of satellite images. It introduces scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEM) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR, together with the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks.
One network is trained with radar and optical bands, while the other is trained with DEM features to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on non-annotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method supports fast and accurate GIS-based decision-making for the localization of troops, the position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune strategies and allocate resources efficiently.
Keywords: depth, deep learning, geovisualisation, satellite images
Procedia PDF Downloads 10
83 Improving Fingerprinting-Based Localization System Using Generative AI
Authors: Getaneh Berie Tarekegn, Li-Chia Tai
Abstract:
With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization.
We also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
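The GAN-based radio map construction itself is beyond a short sketch, but the fingerprint-matching step that such a radio map feeds can be illustrated with a classic weighted k-nearest-neighbor baseline (a simplified stand-in, not the authors' S-DCGAN pipeline; the reference grid and RSS values in the test are invented):

```python
def wknn_locate(radio_map, observed_rss, k=3, eps=1e-9):
    """Weighted k-nearest-neighbor fingerprint positioning.

    radio_map: list of ((x, y), [rss per access point]) reference points.
    observed_rss: RSS vector measured by the device.
    The position estimate is the inverse-distance-weighted average of the
    k reference points whose fingerprints are closest in signal space.
    """
    def signal_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    ranked = sorted(radio_map, key=lambda rp: signal_dist(rp[1], observed_rss))
    nearest = ranked[:k]
    weights = [1.0 / (signal_dist(rss, observed_rss) + eps) for _, rss in nearest]
    total = sum(weights)
    x = sum(w * pos[0] for w, (pos, _) in zip(weights, nearest)) / total
    y = sum(w * pos[1] for w, (pos, _) in zip(weights, nearest)) / total
    return x, y
```

A generatively augmented radio map simply enlarges `radio_map` without additional site surveying, which is where the reported 78.5% workload reduction comes from.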
Procedia PDF Downloads 42
82 A Nonlinear Feature Selection Method for Hyperspectral Image Classification
Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo
Abstract:
For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, given the difficulty of collecting training samples. Hence, many feature selection methods, such as the F-score and HSIC (Hilbert-Schmidt Independence Criterion), have been developed to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with a different bandwidth for each feature. Moreover, it considers both the within-class separability and the between-class separability. A genetic algorithm is applied to tune these bandwidths such that the within-class separability is minimized and the between-class separability is maximized simultaneously. This indicates that the corresponding feature space is more suitable for classification. In addition, the corresponding nonlinear classification boundary can separate classes very well. These optimal bandwidths also show the importance of bands for hyperspectral image classification: the reciprocals of the bandwidths can be viewed as weights of the bands. The smaller the bandwidth, the larger the weight of the band, and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset.
All non-background samples were used to form the testing dataset. The support vector machine was applied to classify these testing samples based on the selected feature subsets. According to the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by applying the proposed method, F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas F-score and HSIC select 168 features and 217 features, respectively. Moreover, the classification accuracies increase dramatically using only the first few features. The classification accuracies with respect to feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84168) is close to the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set, similar results were obtained. These results illustrate that the proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can first apply the proposed method to determine a suitable feature subset for a specific purpose; researchers then need only use the corresponding sensors to obtain the hyperspectral image and classify the samples. This can not only improve the classification performance but also reduce the cost of obtaining hyperspectral images.
Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine
Procedia PDF Downloads 265
81 Remote Radiation Mapping Based on UAV Formation
Authors: Martin Arguelles Perez, Woosoon Yim, Alexander Barzilov
Abstract:
High-fidelity radiation monitoring is an essential component in the enhancement of the situational awareness capabilities of the Department of Energy’s Office of Environmental Management (DOE-EM) personnel. In this paper, multiple unmanned aerial vehicles (UAVs), each equipped with a cadmium zinc telluride (CZT) gamma-ray sensor, are used for radiation source localization, which can provide vital real-time data for EM tasks. To achieve this goal, a fully autonomous swarm of multicopter-based UAVs in a 3D tetrahedron formation is used for surveying the area of interest and performing radiation source localization. The CZT sensor used in this study is well suited to small multicopter UAVs due to its compact size and ease of interfacing with the UAV’s onboard electronics for high-resolution gamma spectroscopy, enabling the characterization of radiation hazards. The multicopter platform, with its fully autonomous flight feature, is suitable for low-altitude applications such as radiation contamination sites. The conventional approach uses a single UAV mapping a predefined waypoint path to predict the relative location and strength of the source, which can be time-consuming for radiation localization tasks. The proposed UAV swarm-based approach can significantly improve the ability to search for and track radiation sources. In this paper, two approaches are developed using (a) a 2D planar circular formation (3 UAVs) and (b) a 3D tetrahedron formation (4 UAVs). In both approaches, accurate estimation of the gradient vector is crucial for the heading angle calculation. Each UAV carries a CZT sensor; the real-time radiation data are used to calculate a bulk heading vector so that the swarm achieves source-seeking behavior. A spinning formation is also studied for both cases to improve gradient estimation near a radiation source.
In the 3D tetrahedron formation, the UAV located closest to the source is designated as the lead unit to maintain the tetrahedron formation in space. Such a formation demonstrated collective and coordinated movement for estimating the gradient vector of the radiation source and determining an optimal heading direction for the swarm. The proposed radiation localization technique is studied by computer simulation and validated experimentally in an indoor flight testbed using gamma sources. The technology presented in this paper provides the capability to readily add or replace radiation sensors on the UAV platforms in field conditions, enabling extensive condition measurement and greatly improving situational awareness and event management. Furthermore, the proposed radiation localization approach allows long-term measurements to be performed efficiently over wide areas of interest to prevent disasters and reduce dose risks to people and infrastructure.
Keywords: radiation, unmanned aerial vehicle (UAV), source localization, UAV swarm, tetrahedron formation
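The gradient-vector estimation at the heart of the source-seeking behavior can be sketched as follows: with four sensor readings at the tetrahedron vertices, fitting the local model c(x) ≈ c0 + g·x yields the gradient g, whose normalized direction gives the swarm heading (a noise-free sketch; the actual system works with noisy count rates and the spinning formation mentioned above):

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def swarm_gradient(positions, readings):
    """Estimate the local intensity gradient from 4 UAVs in tetrahedron
    formation by fitting c(x) ≈ c0 + g·x through the four sensor readings
    (4 equations, 4 unknowns: the offset c0 and the gradient components)."""
    A = [[1.0, x, y, z] for (x, y, z) in positions]
    c0, gx, gy, gz = solve_linear(A, list(readings))
    return (gx, gy, gz)
```

With noisy counts and more than four samples, the same fit would be done in a least-squares sense instead of an exact solve.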
Procedia PDF Downloads 100
80 Iron Oxide Reduction Using Solar Concentration and Carbon-Free Reducers
Authors: Bastien Sanglard, Simon Cayez, Guillaume Viau, Thomas Blon, Julian Carrey, Sébastien Lachaize
Abstract:
The need to develop clean production processes is a key challenge for any industry. The steel and iron industries are particularly concerned, since they emit 6.8% of global anthropogenic greenhouse gas emissions. One key step of the process is the high-temperature reduction of iron ore using coke, leading to large amounts of CO2 emissions. One route to decrease these impacts is to eliminate fossil fuels by changing both the heat source and the reducer. The present work aims at investigating experimentally the possibility of using concentrated solar energy and carbon-free reducing agents. Two sets of experiments were performed. First, in situ X-ray diffraction on pure and industrial hematite powders was carried out to study the phase evolution as a function of temperature during reduction under hydrogen and ammonia. Second, experiments were performed on industrial iron ore pellets, which were reduced by NH3 or H2 in a "solar furnace" composed of a controllable 1600 W xenon lamp, simulating and controlling the concentrated solar irradiation, a glass reactor, and a diaphragm to control the light flux. Temperature and pressure were recorded during each experiment via thermocouples and pressure sensors. The percentage of iron oxide converted to iron (called hereafter the "reduction ratio") was determined through Rietveld refinement. The power of the light source and the reduction time were varied. Results obtained in the diffractometer reaction chamber show that iron begins to form at 300 °C with pure Fe2O3 powder and at 400 °C with industrial iron ore when maintained at these temperatures for 60 minutes and 80 minutes, respectively. Magnetite and wuestite are detected in both powders during reduction under hydrogen; under ammonia, iron nitride is also detected at temperatures between 400 °C and 600 °C.
All of the iron oxide was converted to iron after a 60 min reaction at 500 °C, whereas a conversion ratio of 96% was reached with the industrial powder after a 240 min reaction at 600 °C under hydrogen. Under ammonia, full conversion was also reached after 240 min of reduction at 600 °C. For the experiments in the solar furnace with iron ore pellets, the lamp power and the shutter opening were varied. An 83.2% conversion ratio was obtained with a light power of 67 W/cm² without turning over the pellets. Under the same conditions, turning over the pellets in the middle of the experiment made it possible to reach a conversion ratio of 86.4%. A reduction ratio of 95% was reached with an exposure of 16 min by turning over the pellets at half time with a flux of 169 W/cm². Similar or slightly better results were obtained under an ammonia reducing atmosphere: under the same flux, the highest reduction yield of 97.3% was obtained under ammonia after 28 minutes of exposure. The chemical reaction itself, including the solar heat source, does not produce any greenhouse gases, so solar metallurgy represents a serious route to reducing the greenhouse gas emissions of the metallurgical industry. Nevertheless, the ecological impact of the reducers must be investigated, which will be done in future work.
Keywords: solar concentration, metallurgy, ammonia, hydrogen, sustainability
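The reduction ratio here is determined by Rietveld refinement; as a rough cross-check one can also estimate it from the sample's mass loss, assuming the loss comes only from oxygen removal (a simplified mass-balance sketch, not the authors' method):

```python
M_FE2O3 = 159.69   # molar mass of Fe2O3, g/mol
M_O3 = 3 * 16.00   # mass of removable oxygen per formula unit, g/mol

def reduction_ratio_from_mass(m_initial, m_final):
    """Fraction of Fe2O3 converted to Fe, estimated from mass loss.

    Full reduction (Fe2O3 + 3 H2 -> 2 Fe + 3 H2O) removes all three
    oxygens, so the maximum possible loss is m_initial * 3*M_O / M_Fe2O3;
    the observed loss divided by that maximum gives the conversion ratio.
    Assumes the pellet is pure Fe2O3 and loses mass only as oxygen.
    """
    dm_max = m_initial * M_O3 / M_FE2O3
    ratio = (m_initial - m_final) / dm_max
    return min(max(ratio, 0.0), 1.0)
```

In practice the mixed magnetite/wuestite intermediates make the diffraction-based ratio more informative, since it resolves which phases remain.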
Procedia PDF Downloads 138
79 Improving Fingerprinting-Based Localization (FPL) System Using Generative Artificial Intelligence (GAI)
Authors: Getaneh Berie Tarekegn, Li-Chia Tai
Abstract:
With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization.
We also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
Procedia PDF Downloads 48
78 An Adaptable Semi-Numerical Anisotropic Hyperelastic Model for the Simulation of High Pressure Forming
Authors: Daniel Tscharnuter, Eliza Truszkiewicz, Gerald Pinter
Abstract:
High-quality surfaces of plastic parts can be achieved in a very cost-effective manner using in-mold processes, where, e.g., scratch-resistant or high-gloss polymer films are pre-formed and subsequently receive their support structure by injection molding. The pre-forming may be done by high-pressure forming. In this process, a polymer sheet is heated and subsequently formed into the mold by pressurized air. Due to the heat transfer to the cooled mold, the polymer temperature drops below its glass transition temperature. This ensures that the deformed microstructure is retained after depressurizing, giving the sheet its final formed shape. The development of a forming process relies heavily on the experience of engineers and on trial-and-error procedures. Repeated mold design and testing cycles are, however, both time- and cost-intensive. It is, therefore, desirable to study the process using reliable computer simulations. Through simulations, the construction of the mold and the effect of various process parameters, e.g. temperature levels, non-uniform heating, or the timing and magnitude of pressure, on the deformation of the polymer sheet can be analyzed. Detailed knowledge of the deformation is particularly important in the forming of polymer films with integrated electro-optical functions. Care must be taken in the placement of devices, sensors, and electrical and optical paths, which are far more sensitive to deformation than the polymers. Reliable numerical prediction of the deformation of the polymer sheets requires sophisticated material models. Polymer films are often either transversely isotropic or orthotropic due to molecular orientations induced during manufacturing. The anisotropic behavior affects the resulting strain field in the deformed film. For example, parts of the same shape but with different strain fields may be created by varying the orientation of the film with respect to the mold.
The numerical simulation of the high-pressure forming of such films thus requires material models that can capture the nonlinear anisotropic mechanical behavior. There are numerous commercial polymer grades for engineers to choose from when developing a new part. The effort required for comprehensive material characterization may be prohibitive, especially when several materials are candidates for a specific application. We, therefore, propose a class of models for compressible hyperelasticity, which may be determined from basic experimental data and which can capture key features of the mechanical response. Invariant-based hyperelastic models with a reduced number of invariants are formulated in a semi-numerical way, such that the models are determined from a single uniaxial tensile test for isotropic materials, or from two tensile tests in the principal directions for transversely isotropic or orthotropic materials. The simulation of the high-pressure forming of an orthotropic polymer film is finally carried out using an orthotropic formulation of the hyperelastic model.
Keywords: hyperelastic, anisotropic, polymer film, thermoforming
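The paper's semi-numerical models are more general (compressible and anisotropic), but the idea of determining an invariant-based hyperelastic model from a single uniaxial test can be illustrated with the simplest case, an incompressible neo-Hookean solid, whose nominal uniaxial stress is P = μ(λ − λ⁻²) (a textbook special case, not the authors' formulation):

```python
def neo_hookean_uniaxial(mu, stretch):
    """Nominal (engineering) uniaxial stress of an incompressible
    neo-Hookean solid: P = mu * (lambda - lambda**-2)."""
    return mu * (stretch - stretch ** -2)

def fit_shear_modulus(stretches, stresses):
    """Determine mu from a single uniaxial tensile test by least squares:
    with f(lambda) = lambda - lambda**-2, mu = sum(P*f) / sum(f*f)."""
    f = [lam - lam ** -2 for lam in stretches]
    return sum(p * fi for p, fi in zip(stresses, f)) / sum(fi * fi for fi in f)
```

The semi-numerical models in the paper replace the single parameter μ with response functions sampled directly from the measured stress-stretch curves, one test per principal direction for orthotropic films.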
Procedia PDF Downloads 618
77 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables
Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez
Abstract:
Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot’s primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing crew members with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surfaces rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system’s predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purposes of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft.
According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do so, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the current APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the maximum error deviation of the FCOM prediction of the engine fan speed was reduced from 5.0% to 0.2% after only ten flights.
Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X
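The adaptive lookup table idea can be sketched in one dimension: each in-flight measurement nudges the table nodes bracketing the current flight condition toward the measured value, so the table converges to the actual aircraft's behavior (the breakpoints, values, and learning rate below are illustrative; the real APM tables span several flight-condition dimensions):

```python
class AdaptiveTable:
    """1-D lookup table with linear interpolation and in-flight updates.

    Each measured (condition, value) sample moves the two bracketing
    table nodes toward the measurement, weighted by their interpolation
    weights, so repeated samples shrink the prediction error.
    """
    def __init__(self, breakpoints, values):
        self.bp = list(breakpoints)   # e.g. altitude breakpoints, ft
        self.val = list(values)       # e.g. fuel flow at each breakpoint

    def _bracket(self, x):
        # Index of the segment containing x (clamped to the table range).
        i = max(0, min(len(self.bp) - 2,
                       sum(1 for b in self.bp[1:-1] if b <= x)))
        w = (x - self.bp[i]) / (self.bp[i + 1] - self.bp[i])
        return i, min(max(w, 0.0), 1.0)

    def predict(self, x):
        i, w = self._bracket(x)
        return (1 - w) * self.val[i] + w * self.val[i + 1]

    def update(self, x, measured, rate=0.3):
        # Distribute a fraction of the prediction error to both nodes.
        i, w = self._bracket(x)
        err = measured - self.predict(x)
        self.val[i] += rate * (1 - w) * err
        self.val[i + 1] += rate * w * err
```

Starting the table from FCOM data and letting cruise measurements drive `update` mirrors the reported convergence from 12% to 0.3% average fuel-flow error.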
Procedia PDF Downloads 265
76 Vision and Challenges of Developing VR-Based Digital Anatomy Learning Platforms and a Solution Set for 3D Model Marking
Authors: Gizem Kayar, Ramazan Bakir, M. Ilkay Koşar, Ceren U. Gencer, Alperen Ayyildiz
Abstract:
Anatomy classes are crucial to the general education of medical students, yet learning anatomy is quite challenging and requires the memorization of thousands of structures. In traditional teaching methods, learning materials are still based on books, anatomy mannequins, or videos. This results in forgetting many important structures after several years. However, more interactive teaching methods such as virtual reality, augmented reality, gamification, and motion sensors are becoming more popular, since such methods ease the way we learn and help retain the information for longer. During our study, we designed a virtual reality based digital head anatomy platform to investigate whether a fully interactive anatomy platform is effective for learning anatomy and to understand the level of teaching and learning optimization. The head is one of the most complicated human anatomical structures, with thousands of tiny, unique structures. This makes head anatomy one of the most difficult parts to understand during class sessions. Therefore, we developed a fully interactive digital tool with 3D model marking, quiz structures, 2D/3D puzzle structures, and VR support, so as to integrate the power of VR and gamification. The project has been developed in the Unity game engine with the HTC Vive Cosmos VR headset. The head anatomy 3D model was selected with full skeletal, muscular, integumentary, head, teeth, lymph, and vein systems. The biggest issue during development was the complexity of our model and its marking in the 3D world system. 3D model marking requires access to each unique structure in the aforementioned subsystems, which means hundreds of markings need to be made. Some parts of our 3D head model were monolithic, which is why we worked on dividing such parts into subparts, a very time-consuming task. In order to subdivide monolithic parts, one must use an external modeling tool.
However, such tools generally come with steep learning curves, and a seamless division is not guaranteed. The second option was to attach tiny colliders to all unique items for mouse interaction; however, outer colliders that cover inner trigger colliders overlap and repel each other. The third option is raycasting; however, raycasting has inherent problems due to its view-based nature: as the model rotates, the view direction changes very frequently and directional computations become even harder. Finally, we therefore worked with the local coordinate system. Taking the pivot point of the model (the back of the nose) as reference, each sub-structure is marked with its own local coordinate with respect to the pivot. By converting the mouse position to a world position and checking its relation to the corresponding structure's local coordinate, we were able to mark all points correctly. The advantage of this method is its applicability and accuracy for all types of monolithic anatomical structures.
Keywords: anatomy, e-learning, virtual reality, 3D model marking
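The local-coordinate marking scheme described in this abstract can be sketched as follows. This is a minimal Python illustration, not the Unity implementation: the landmark table, pivot position, tolerance, and the restriction of the model's pose to a single yaw rotation are all simplifying assumptions.

```python
import math

# Hypothetical landmark table: each anatomical structure is stored with a
# fixed local coordinate relative to the model pivot (back of the nose).
LANDMARKS = {
    "left_orbit":  (-0.03, 0.02, 0.04),
    "right_orbit": ( 0.03, 0.02, 0.04),
    "mandible":    ( 0.00, -0.08, 0.02),
}

def world_to_local(point, pivot, yaw_deg):
    """Undo the model's translation and yaw to recover local coordinates."""
    px, py, pz = (a - b for a, b in zip(point, pivot))
    t = math.radians(-yaw_deg)          # inverse rotation about the y axis
    c, s = math.cos(t), math.sin(t)
    return (c * px + s * pz, py, -s * px + c * pz)

def pick_structure(click_world, pivot, yaw_deg, tol=0.02):
    """Return the landmark closest to the clicked point, within tolerance."""
    local = world_to_local(click_world, pivot, yaw_deg)
    name, d = min(
        ((n, math.dist(local, p)) for n, p in LANDMARKS.items()),
        key=lambda nd: nd[1],
    )
    return name if d <= tol else None
```

Because the comparison happens in the model's local frame, the same lookup works regardless of how the model has been rotated on screen, which is the property the abstract relies on.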
Procedia PDF Downloads 100
75 3D Design of Orthotic Braces and Casts in Medical Applications Using Microsoft Kinect Sensor
Authors: Sanjana S. Mallya, Roshan Arvind Sivakumar
Abstract:
Orthotics is the branch of medicine that deals with the provision and use of artificial casts or braces to alter the biomechanical structure of a limb and provide it with support. Custom-made orthoses are more comfortable and correct issues better than those available over-the-counter; however, they are expensive and require intricate modelling of the limb. Traditional modelling methods involve creating a plaster of Paris mould of the limb. Lately, CAD/CAM and 3D printing processes have improved the accuracy and reduced the production time. Ordinarily, digital cameras capture the features of the limb from different views to create a 3D model. We propose a system to model the limb using the Microsoft Kinect2 sensor, which can capture RGB and depth frames simultaneously at up to 30 fps with sufficient accuracy. The region of interest is captured from three views, each shifted by 90 degrees, and the RGB and depth data are fused into a single RGB-D frame. The resolution of the RGB frame is 1920 px x 1080 px, while that of the depth frame is 512 px x 424 px; as the resolutions differ, RGB pixels are mapped onto the depth pixels so that no data is lost despite the lower depth resolution. The resulting RGB-D frames are collected and, using the depth coordinates, a three-dimensional point cloud is generated for each view of the Kinect sensor. A common reference system was developed to merge the individual point clouds from the Kinect sensors. It consisted of 8 coloured cubes, connected by rods to form a skeleton-cube with the coloured cubes at the corners. For each Kinect, the region of interest is the square formed by the centres of the four cubes facing it. The point clouds are merged by taking one of the cubes as the origin of the reference system.
Based on the relative distance from each cube, the three-dimensional coordinate points of each point cloud are aligned to the reference frame to give a complete point cloud. The RGB data is used to correct any errors in the depth data of the point cloud. A triangular mesh is generated from the point cloud by applying Delaunay triangulation, which yields a rough approximation of the surface of the limb. The mesh is then smoothened to obtain a smooth outer layer and an accurate model of the limb. This model serves as the base for designing the custom orthotic brace or cast: it is transferred to a CAD/CAM design file so that the brace can be designed over the surface of the limb. The proposed system would be more cost-effective than current systems that use MRI or CT scans to generate 3D models, quicker than traditional plaster of Paris cast modelling, and low in overall setup time. Preliminary results indicate that the accuracy of the Kinect2 is satisfactory for modelling.
Keywords: 3D scanning, mesh generation, Microsoft Kinect, orthotics, registration
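The merging of the three 90-degree-shifted point clouds into a common reference frame can be sketched as below. This is a simplified Python illustration in which each view's pose is reduced to a known yaw about the origin cube; the actual system aligns views using the relative distances to the eight coloured cubes.

```python
import math

def view_to_reference(points, yaw_deg, origin):
    """Map one Kinect view's points into the shared cube-skeleton frame.

    Assumes each view's pose is a pure yaw rotation about the reference
    cube chosen as the origin (a simplification of the cube-based
    registration described in the abstract)."""
    t = math.radians(yaw_deg)
    c, s = math.cos(t), math.sin(t)
    out = []
    for x, y, z in points:
        rx, rz = c * x + s * z, -s * x + c * z   # rotate about y axis
        out.append((rx + origin[0], y + origin[1], rz + origin[2]))
    return out

def merge_views(views, origin=(0.0, 0.0, 0.0)):
    """Concatenate views shifted by successive 90-degree yaws into one cloud."""
    cloud = []
    for i, pts in enumerate(views):
        cloud.extend(view_to_reference(pts, 90.0 * i, origin))
    return cloud
```

The merged cloud would then be passed to Delaunay triangulation and smoothing, as the abstract describes, to produce the limb surface.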
Procedia PDF Downloads 191
74 Enhancing Engineering Students' Educational Experience: Studying Hydrostatic Pumps Association System in Fluid Mechanics Laboratories
Authors: Alexandre Daliberto Frugoli, Pedro Jose Gabriel Ferreira, Pedro Americo Frugoli, Lucio Leonardo, Thais Cavalheri Santos
Abstract:
Laboratory classes in engineering courses are essential for students to integrate theory with practical reality by handling equipment and observing experiments. By investigating physical phenomena, students can learn about the complexities of science. Over the past years, universities in developing countries have been reducing the course load of engineering courses as part of cost-cutting agendas. Quality education is an object of study for researchers and requires educators and educational administrators able to demonstrate that their institutions provide great learning opportunities at reasonable cost. Didactic test benches are indispensable in educational activities related to the study of turbo hydraulic pumps and pumping facilities, which have a high cost and require long class time for measurements and equipment adjustment. To overcome these obstacles, and in line with the professional objectives of an engineer, GruPEFE - UNIP (Research Group in Physics Education for Engineering - Universidade Paulista) has developed a multi-purpose stand for the fluid mechanics discipline that allows the study of velocity and flow meters, head losses, and pump association. In this work, results obtained by associating hydraulic pumps in series and in parallel are presented and discussed, mainly analyzing the repeatability of the experimental procedures and their agreement with theory. For the series association, two identical pumps were used, connecting the discharge of one pump to the suction of the next, so that the fluid receives the power of all machines in the association. The characteristic curve of the set is obtained from the curves of the individual pumps by adding the heads corresponding to the same flow rates. The same pumps were then associated in parallel, where the discharge piping is common to the two machines.
The characteristic curve of the parallel set was obtained by adding, for each value of H (head), the flow rates of the individual pumps. For the tests, the input and output pressure of each pump were measured. For each association, three sets of measurements were taken, varying the flow rate in the range from 6.0 to 8.5 m³/h. For both associations, the results showed excellent repeatability, with variations of less than 10% between sets of measurements, and good agreement with theory; this variation is consistent with the instrumental uncertainty. The results thus validate the use of the fluids bench designed for didactic purposes. As future work, a digital acquisition system is being developed, using differential pressure sensors for very low pressures (approximately 2 to 2000 Pa) and an Arduino microcontroller.
Keywords: engineering education, fluid mechanics, hydrostatic pumps association, multi-purpose stand
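The construction of the association curves follows directly from the two rules stated in the abstract: heads add at equal flow in series, and flows add at equal head in parallel. A minimal Python sketch, using an assumed quadratic single-pump characteristic rather than the bench's measured curve:

```python
import math

# Illustrative single-pump characteristic H(Q) = H0 - K*Q^2.
# H0 (shutoff head, m) and K (resistance coefficient) are assumed values,
# not data measured on the didactic bench.
H0, K = 30.0, 0.25

def head(q):
    """Head of one pump at flow rate q (m3/h)."""
    return H0 - K * q * q

def series_head(q, n=2):
    """n identical pumps in series: heads add at the same flow rate."""
    return n * head(q)

def parallel_flow(h, n=2):
    """n identical pumps in parallel: flows add at the same head."""
    return n * math.sqrt((H0 - h) / K)
```

Plotting `series_head` against `q`, or `parallel_flow` against `h`, reproduces the combined characteristic curves that the students compare with their pressure measurements.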
Procedia PDF Downloads 220
73 Assessing Moisture Adequacy over Semi-arid and Arid Indian Agricultural Farms Using High-Resolution Thermography
Authors: Devansh Desai, Rahul Nigam
Abstract:
Crop water stress (W) at a given growth stage starts to set in as moisture availability (M) to the roots falls below 75% of its maximum. The ratio of crop evapotranspiration (ET) to reference evapotranspiration (ET0) has been found to be an indicator of moisture adequacy and is strongly correlated with M and W. The spatial variability of ET0 over an agricultural farm of 1-5 ha is generally less than that of ET, since ET depends on both surface and atmospheric conditions while ET0 depends only on atmospheric conditions. Solutions from surface energy balance (SEB) and thermal infrared (TIR) remote sensing are now known to estimate the latent heat flux of ET. In the present study, ET and the moisture adequacy index (MAI = ET/ET0) have been estimated over two contrasting western Indian agricultural farms: a rice-wheat system in a semi-arid climate and an arid grassland system limited by moisture availability. High-resolution multi-band TIR observations at 65 m from the ECOSTRESS (ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station) instrument on board the International Space Station (ISS) were used in an analytical SEB model, STIC (Surface Temperature Initiated Closure), to estimate ET and MAI. The ancillary variables used in the ET modeling and MAI estimation were land surface albedo and NDVI from close-by LANDSAT data at 30 m spatial resolution, the ET0 product at 4 km spatial resolution from INSAT 3D, and meteorological forcing variables (air temperature and relative humidity) from a short-range NWP weather forecast. Farm-scale ET estimates at 65 m spatial resolution showed a low RMSE of 16.6% to 17.5% with R2 > 0.8 over 18 datasets, compared to reported errors of 25-30% for coarser-scale ET at 1 to 8 km spatial resolution evaluated against in situ measurements from eddy covariance systems. The MAI showed lower (<0.25) and higher (>0.5) magnitudes in the two contrasting agricultural farms.
The study showed the need for high-resolution, high-repeat spaceborne multi-band TIR payloads, along with optical payloads, to estimate farm-scale ET and MAI for assessing consumptive water use and water stress. A set of future high-resolution multi-band TIR sensors is planned on board the Indo-French TRISHNA, ESA's LSTM, and NASA's SBG spaceborne missions to address sustainable irrigation water management at farm scale and improve crop water productivity. These will provide precise, fundamental surface energy balance variables such as LST (land surface temperature), surface emissivity, albedo, and NDVI. Synchronization among these missions is needed in terms of observations, algorithms, product definitions, calibration-validation experiments, and downstream applications to maximize the potential benefits.
Keywords: thermal remote sensing, land surface temperature, crop water stress, evapotranspiration
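The moisture adequacy index used in this abstract is simply the ratio of actual to reference evapotranspiration. In the sketch below, the thresholds mirror the low (<0.25) and high (>0.5) magnitudes reported for the two farms; the label for the intermediate range is an assumption.

```python
def moisture_adequacy_index(et, et0):
    """MAI = ET / ET0, with ET the actual crop evapotranspiration and
    ET0 the reference evapotranspiration (same units, e.g. mm/day)."""
    return et / et0

def adequacy_class(mai):
    """Illustrative labels for the MAI ranges reported in the study
    (<0.25 low, >0.5 high); 'intermediate' is an assumed label."""
    if mai < 0.25:
        return "low"
    if mai > 0.5:
        return "high"
    return "intermediate"
```

Applied pixel-by-pixel to the 65 m ET maps and the 4 km ET0 product, this ratio is what distinguishes the moisture-limited arid grassland from the irrigated rice-wheat farm.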
Procedia PDF Downloads 70
72 The Effects of Adding Vibrotactile Feedback to Upper Limb Performance during Dual-Tasking and Response to Misleading Visual Feedback
Authors: Sigal Portnoy, Jason Friedman, Eitan Raveh
Abstract:
Introduction: Sensory substitution is possible due to the capacity of our brain to adapt to information transmitted by a synthetic receptor via an alternative sensory system. Practical sensory substitution systems are being developed to increase the functionality of individuals with sensory loss, e.g. amputees. For upper limb prosthesis users, the loss of tactile feedback compels them to allocate visual attention to their prosthesis. The effect of adding vibrotactile feedback (VTF) to the applied force has been studied; however, its effect on the allocation of visual attention during dual-tasking and on the response to misleading visual feedback have not. We hypothesized that VTF would improve performance and reduce visual attention during dual-task assignments in healthy individuals using a robotic hand, and improve performance in a standardized functional test despite the presence of misleading visual feedback. Methods: For the dual-task paradigm, twenty healthy subjects were instructed to toggle two keyboard arrow keys with the left hand to keep a moving virtual car on a road on a screen. During the game, instructions for various activities, e.g. mix the sugar in the glass with a spoon, appeared on the screen. The subject performed these tasks with a robotic hand attached to the right hand, controlled by the activity of the flexors and extensors of the right wrist, recorded using surface EMG electrodes. Pressure sensors attached at the tips of the robotic hand induced VTF via vibrotactile actuators attached to the subject's right arm. An eye-tracking system tracked the subject's visual attention during the trials. The trials were repeated twice, with and without the VTF. Additionally, the subjects performed the modified Box and Blocks Test, hidden from eyesight, in a motion laboratory.
A misleading visual feedback was presented virtually on a screen so that twice during the trial, the virtual block fell while the physical block was still held by the subject. Results: This is an ongoing study, whose current results are detailed below; we are continuing these trials with transradial myoelectric prosthesis users. In the healthy group, the VTF did not reduce visual attention or improve performance during dual-tasking for transfer-to-target tasks, e.g. place the eraser on the shelf. An improvement was observed for other tasks. For example, the mean±standard deviation of the time to complete the sugar-mixing task was 13.7±17.2 s and 19.3±9.1 s with and without the VTF, respectively. Also, the number of gaze shifts from the screen to the hand during this task was 15.5±23.7 and 20.0±11.6, with and without the VTF, respectively. The response of the subjects to the misleading visual feedback did not differ between the two conditions, i.e. with and without VTF. Conclusions: Our interim results suggest that the performance of certain activities of daily living may be improved by VTF. The substitution of visual sensory input by tactile feedback might require a long training period so that brain plasticity can occur and allow adaptation to the new condition.
Keywords: prosthetics, rehabilitation, sensory substitution, upper limb amputation
Procedia PDF Downloads 341
71 Superoleophobic Nanocellulose Aerogel Membrane as Bioinspired Cargo Carrier on Oil by Sol-Gel Method
Authors: Zulkifli, I. W. Eltara, Anawati
Abstract:
Understanding the complementary roles of surface energy and roughness on natural nonwetting surfaces has led to the development of a number of biomimetic superhydrophobic surfaces, which exhibit apparent contact angles with water greater than 150 degrees and low contact angle hysteresis. However, superoleophobic surfaces, those that display contact angles greater than 150 degrees with organic liquids having appreciably lower surface tensions than that of water, are extremely rare. In addition to chemical composition and roughened texture, a third parameter is essential to achieve superoleophobicity, namely re-entrant surface curvature in the form of overhang structures; the overhangs can be realized as fibers. Superoleophobic surfaces are appealing, for example, for antifouling, since purely superhydrophobic surfaces are easily contaminated by oily substances in practical applications, which in turn impairs the liquid repellency. Other studies have demonstrated that such aqueous nanofibrillar gels are unexpectedly robust, allowing highly porous aerogels to be formed by direct water removal through freeze-drying; they are flexible, unlike most aerogels, which suffer from brittleness, and they provide flexible, hierarchically porous templates for functionalities, e.g. electrical conductivity. No crosslinking, solvent exchange, or supercritical drying is required to suppress collapse during aerogel preparation, unlike in typical aerogel preparations. The aerogel used in the current work is an ultra-lightweight solid material composed of native cellulose nanofibers, which are cleaved from the self-assembled hierarchy of macroscopic cellulose fibers. They have become highly topical, as they are proposed to show extraordinary mechanical properties due to their parallel and strongly hydrogen-bonded polysaccharide chains.
We demonstrate that a superoleophobic nanocellulose aerogel coated by the sol-gel method is capable of supporting a weight nearly three orders of magnitude larger than its own. The load support is achieved by surface tension acting at different length scales: at the macroscopic scale along the perimeter of the carrier, and at the microscopic scale along the cellulose nanofibers by preventing soaking of the aerogel, thus ensuring buoyancy. Superoleophobic nanocellulose aerogels have recently been achieved using unmodified cellulose nanofibers and using carboxymethylated, negatively charged cellulose nanofibers as starting materials. In this work, aerogels made from unmodified cellulose nanofibers were subsequently treated with fluorosilanes. To complement previous work on superoleophobic aerogels, we demonstrate their application as cargo carriers on oil, their gas permeability, plastrons, and drag reduction, and we show that fluorinated nanocellulose aerogels are high-adhesion superoleophobic surfaces. We foresee applications including buoyant, gas-permeable, dirt-repellent coatings for miniature sensors and other devices floating on generic liquid surfaces.
Keywords: superoleophobic, nanocellulose, aerogel, sol-gel
Procedia PDF Downloads 351
70 Attention Treatment for People with Aphasia: Language-Specific vs. Domain-General Neurofeedback
Authors: Yael Neumann
Abstract:
Attention deficits are common in people with aphasia (PWA). Two treatment approaches address these deficits: domain-general methods like Play Attention, which focus on cognitive functioning, and domain-specific methods like Language-Specific Attention Treatment (L-SAT), which use linguistically based tasks. Research indicates that L-SAT can improve both attentional deficits and functional language skills, while Play Attention has shown success in enhancing attentional capabilities among school-aged children with attention issues compared to standard cognitive training. This study employed a randomized controlled cross-over single-subject design to evaluate the effectiveness of these two attention treatments over 25 weeks. Four PWA participated, undergoing a battery of eight standardized tests measuring language and cognitive skills. The treatments were counterbalanced. Play Attention used EEG sensors to detect brainwaves, enabling participants to manipulate items in a computer game while learning to suppress theta activity and increase beta activity. An algorithm tracked changes in the theta-to-beta ratio, allowing points to be earned during the games. L-SAT, on the other hand, involved hierarchical language tasks that increased in complexity, requiring greater attention from participants. Results showed that for language tests, Participant 1 (moderate aphasia) aligned with existing literature, showing L-SAT was more effective than Play Attention. However, Participants 2 (very severe) and 3 and 4 (mild) did not conform to this pattern; both treatments yielded similar outcomes. This may be due to the extremes of aphasia severity: the very severe participant faced significant overall deficits, making both approaches equally challenging, while the mild participant performed well initially, leaving limited room for improvement. In attention tests, Participants 1 and 4 exhibited results consistent with prior research, indicating Play Attention was superior to L-SAT. 
Participant 2, however, showed no significant improvement with either program, although L-SAT had a slight edge on the Visual Elevator task, which measures switching and mental flexibility. This advantage was not sustained at the one-month follow-up, likely due to the participant's struggles with complex attention tasks. Participant 3's results similarly did not align with prior studies, revealing no difference between the two treatments, possibly due to the challenging nature of the attention measures used. Regarding participation and ecological tests, all participants showed similar mild improvements with both treatments. This limited progress could stem from the short study duration: only five weeks were allocated to each treatment, which may not have been enough time to achieve meaningful changes affecting life participation. In conclusion, the participants' performance appeared to be influenced by their level of aphasia severity. The moderate PWA's results were most aligned with the existing literature, indicating better attention improvement from the domain-general approach (Play Attention) and better language improvement from the domain-specific approach (L-SAT).
Keywords: attention, language, cognitive rehabilitation, neurofeedback
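The neurofeedback quantity tracked by Play Attention, a theta-to-beta power ratio, can be illustrated with a short Python sketch. The plain-DFT band-power estimate, the sampling rate, and the exact band edges below are assumptions for illustration; the commercial system's actual algorithm is not described in the abstract.

```python
import cmath

def band_power(x, fs, lo, hi):
    """Mean DFT power over integer-frequency bins in [lo, hi) Hz.
    A plain DFT is used for clarity; a real system would use an FFT."""
    n = len(x)
    total, count = 0.0, 0
    for k in range(int(lo * n / fs), int(hi * n / fs)):
        coeff = sum(x[i] * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i in range(n))
        total += abs(coeff) ** 2 / n
        count += 1
    return total / count

def theta_beta_ratio(x, fs):
    """Theta (4-8 Hz) to beta (13-30 Hz) power ratio. The protocol
    rewards suppressing theta and increasing beta, i.e. a lower ratio."""
    return band_power(x, fs, 4, 8) / band_power(x, fs, 13, 30)
```

In the game described in the abstract, a falling ratio would earn points, giving the participant continuous feedback on their attentional state.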
Procedia PDF Downloads 19
69 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays
Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín
Abstract:
Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Efficient hardware alternatives are now used increasingly in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile sensing systems except the force reconstruction process, the stage to which they have been least applied. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat, rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of this model, the proposed implementation parallelizes the tasks that facilitate the execution of the matrix operations and of a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to exploit the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and to apply appropriate algorithm parallelization techniques, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process, with low latency, low power consumption, and real-time execution as the main design parameters.
The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to Finite Element Modeling (FEM) simulations of Hertzian and non-Hertzian contact events, over sensor arrays of 10x10 taxels of different sizes. The hardware implementation was carried out on a Xilinx MPSoC XCZU9EG-2FFVB1156 platform, which allows the reconstruction of force vectors following a scalable approach from the information captured by tactile sensor arrays of up to 48x48 taxels using various transduction technologies. The proposed implementation demonstrates a reduction in estimation time to 1/180 of that of software implementations. Despite the relatively high estimation errors, the information this implementation provides on the tangential and normal tractions and the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed, similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation
Procedia PDF Downloads 195
68 Superparamagnetic Sensor with Lateral Flow Immunoassays as Platforms for Biomarker Quantification
Authors: M. Salvador, J. C. Martinez-Garcia, A. Moyano, M. C. Blanco-Lopez, M. Rivas
Abstract:
Biosensors play a crucial role in the detection of molecules nowadays due to their user-friendliness, high selectivity, real-time analysis, and in-situ applicability. Among them, lateral flow immunoassays (LFIAs) stand out among point-of-care bioassay technologies for their affordability, portability, and low cost. They have been widely used for the detection of a vast range of biomarkers, including not only proteins but also nucleic acids and even whole cells. Although the LFIA has traditionally been a positive/negative test, tremendous efforts are being made to add quantification capability based on the combination of suitable labels and a proper sensor. One of the most successful approaches involves the use of magnetic sensors to detect magnetic labels. Bringing together the required characteristics mentioned before, our research group has developed a biosensor to detect biomolecules in which superparamagnetic nanoparticles (SPNPs) together with LFIAs play the fundamental roles. The SPNPs are detected through their interaction with a high-frequency current flowing in a printed micro track: their presence causes an instantaneous variation of the track impedance proportional to the number of particles, providing a quantitative and rapid measurement. This mode of detection requires no external magnetic field, which reduces the device complexity. On the other hand, the major limitations of LFIAs are that they are only qualitative or semi-quantitative when traditional gold or latex nanoparticles are used as colour labels. Moreover, the need for constant ambient conditions to obtain reproducible results, the detection of the nanoparticles only on the surface of the membrane, and the short durability of the signal are drawbacks that can be advantageously overcome with the design of magnetically labeled LFIAs.
The approach followed was to coat the SPNPs with a specific monoclonal antibody that targets the protein under consideration via chemical bonds. A sandwich-type immunoassay was then prepared by printing onto the nitrocellulose membrane strip a second antibody against a different epitope of the protein (test line) and an IgG antibody (control line). When the sample flows along the strip, the SPNP-labeled proteins are immobilized at the test line, which provides the magnetic signal described above. Preliminary results using this practical combination for the detection and quantification of prostate-specific antigen (PSA) show the validity and consistency of the technique in the clinical range, where a PSA level of 4.0 ng/mL is the established upper normal limit. Moreover, an LOD of 0.25 ng/mL was calculated with a confidence factor of 3, according to the IUPAC Gold Book definition. The versatility of the technique has also been proved with the detection of other biomolecules such as troponin I (a cardiac injury biomarker) and histamine.
Keywords: biosensor, lateral flow immunoassays, point-of-care devices, superparamagnetic nanoparticles
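The detection limit cited in this abstract follows the standard IUPAC construction: the confidence factor k (here 3) times the standard deviation of replicate blank measurements, divided by the calibration slope. A short Python sketch with invented numbers, not the paper's data:

```python
import statistics

def limit_of_detection(blank_signals, slope, k=3.0):
    """LOD = k * s_blank / m: k times the standard deviation of
    replicate blank measurements divided by the calibration slope
    (signal units per ng/mL), per the IUPAC definition with k = 3."""
    return k * statistics.stdev(blank_signals) / slope

# Hypothetical example: blank impedance readings and an assumed slope
# chosen so the sketch reproduces an LOD of 0.25 ng/mL.
lod = limit_of_detection([1.0, 2.0, 3.0], slope=12.0)
```

With a measured blank spread and the slope of the impedance-vs-concentration calibration line, the same one-liner yields the 0.25 ng/mL figure reported for PSA.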
Procedia PDF Downloads 232
67 Cognitive Radio in Aeronautics: Comparison of Some Spectrum Sensing Techniques
Authors: Abdelkhalek Bouchikhi, Elyes Benmokhtar, Sebastien Saletzki
Abstract:
The aeronautical field is experiencing RF spectrum congestion due to the constant increase in the number of flights, aircraft, and on-board telecom systems. In addition, these systems are bulky in size, weight, and energy consumption. Cognitive radio helps solve the spectrum congestion issue in particular through its capacity to detect idle frequency channels, allowing opportunistic exploitation of the RF spectrum. The present work aims to propose a new use case for aeronautical spectrum sharing and to study, within this use case, the performance of three different detection techniques: the energy detector, the matched filter, and the cyclostationary detector. In the proposed cognitive radio, the spectrum is allocated dynamically and each cognitive radio follows a cognitive cycle, in which spectrum sensing is a crucial step whose goal is to gather data about the surrounding environment. A cognitive radio can use different sensors: antennas, cameras, accelerometers, thermometers, etc. In the IEEE 802.22 standard, for example, a primary user (PU) always has priority to communicate; when a frequency channel used by the primary user is idle, the secondary user (SU) is allowed to transmit in this channel. The Distance Measuring Equipment (DME) is composed of a UHF transmitter/receiver (interrogator) in the aircraft and a UHF receiver/transmitter on the ground, while future cognitive radio will be used jointly to alleviate the spectrum congestion issue in the aeronautical field. LDACS, for example, is a good candidate: it provides two isolated data links, ground-to-air and air-to-ground. The first contribution of the present work is a strategy for sharing the L-band, in which the DME plays the role of the PU (the licensed user) and the LDACS1 systems are the SUs.
The SUs may use the L-band channels opportunistically as long as they do not cause harmful interference affecting the QoS of the DME system. Spectrum sensing is a key step: it helps detect spectrum holes by determining whether the primary signal is present or not in a given frequency channel. A missed detection of the primary user's presence creates interference between PU and SU and seriously affects the QoS of the legacy radio. In this study, brief definitions, concepts, and the state of the art of cognitive radio are first presented. Then, a study of three communication channel detection algorithms in a cognitive radio context is carried out, from the point of view of functions, material requirements, and signal detection capability in the aeronautical field. The detection problem is modeled by the three methods (energy, matched filter, and cyclostationary), and an algorithmic description of these detectors is given. Finally, the performance of the algorithms is studied and compared. Simulations were carried out using MATLAB software, and the results were analyzed based on ROC curves for SNRs between -10 dB and 20 dB. The three detectors were tested with synthetic and real-world signals.
Keywords: aeronautic, communication, navigation, surveillance systems, cognitive radio, spectrum sensing, software defined radio
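Of the three detectors compared, the energy detector is the simplest: average the squared samples of a block and compare the result with a noise-calibrated threshold. A minimal Python sketch (the study's simulations were carried out in MATLAB; the threshold value below is an assumption, since in practice it is set from the noise floor to fix the false-alarm probability on the ROC curve):

```python
def signal_energy(samples):
    """Average energy of a block of real baseband samples."""
    return sum(s * s for s in samples) / len(samples)

def energy_detect(samples, threshold):
    """Declare the channel occupied (PU present) when the block energy
    exceeds the threshold; below it, the SU may transmit in the hole."""
    return signal_energy(samples) > threshold
```

Sweeping the threshold while counting detections on signal-plus-noise blocks and false alarms on noise-only blocks is exactly how the ROC curves reported for -10 dB to 20 dB SNR are generated.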
Procedia PDF Downloads 175