Search results for: deep cold rolling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3075

1965 Devulcanization of Waste Rubber Tyre Utilizing Deep Eutectic Solvents and Ultrasonic Energy

Authors: Ricky Saputra, Rashmi Walvekar, Mohammad Khalid, Kaveh Shahbaz, Suganti Ramarad

Abstract:

This study examines the effect of coupling ultrasonic treatment with deep eutectic solvents (DESs) in the devulcanization of waste rubber tyre. Three DESs were used, namely ChCl:urea (1:2), ChCl:ZnCl₂ (1:2) and ZnCl₂:urea (2:7); their physicochemical properties were analysed and shown to meet the requirements of a water content below 3.0 wt%, a degradation temperature below 200 °C and a freezing point below 60 °C. The mass ratio of rubber to DES was varied from 1:20 to 1:40, sonicated for 1 hour at 37 kHz and heated for 5-30 min at 180 °C. Energy-dispersive X-ray (EDX) results revealed that the first two DESs gave the highest degrees of sulphur removal, at 74.44% and 76.69% respectively, with an optimum heating time of 15 minutes; prolonging the heating led to reformation of the crosslink network. This is supported by both FTIR and FESEM results, where the di-sulphide peak reappears at 30 minutes and the morphological structure changes from smooth with high voidage at 15 minutes to rigid with low voidage at 30 minutes. Furthermore, the TGA curve reveals a similar trend: at 15 minutes the thermal decomposition temperature is at its lowest, owing to the decrease in molecular weight resulting from sulphur removal, but it increases again at 30 minutes. Analysis of the type of bond cleaved showed that only di-sulphide bonds were broken, indicating partial devulcanization. Overall, the results show that DESs have great potential as devulcanizing solvents.

Keywords: crosslink network, devulcanization, eutectic solvents, reformation, ultrasonic

Procedia PDF Downloads 171
1964 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized, largely because agencies lack the resources to create and implement timing plans. In this work, we discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and to collect accurate traffic data 24/7/365 using a vehicle detection system. We review recent advances in AI technology, how AI is used for vehicle, pedestrian, and bicycle data collection and for creating timing plans, and what the best workflow for this is. This paper also shows how AI makes signal timing affordable. We introduce a technology that uses Convolutional Neural Networks (CNNs) and deep learning algorithms to detect vehicles, collect data, develop timing plans and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes of the visual cortex. A neural network is modeled after the human brain and consists of millions of densely connected processing nodes; it is a form of machine learning in which the network learns to recognize vehicles through training, which is called deep learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans and thus improve vehicle flow. CNNs not only outperform other detection algorithms but, in cases such as classifying objects into fine-grained categories, also outperform humans. Safety is of primary importance to traffic professionals, yet they often lack the studies or data to support their decisions; currently, one-third of transportation agencies do not collect pedestrian and bicycle data. We discuss how the use of AI for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it gives traffic engineers tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited handpicked data, and multiple systems that require additional adaptation work. The methodology used and proposed in this research includes a camera-based identification method built on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired under a variety of daily real-world road conditions, and its performance was compared with that of commonly used methods, which require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying the result to the field. This work explores how technologies powered by AI can benefit a community and how to translate the complex and often overwhelming benefits into language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that AI brings to traffic signal control and data collection are unsurpassed.
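
The abstract does not disclose the vendor's detection pipeline; purely as an illustration of CNN-based vehicle detection and counting, the sketch below uses an off-the-shelf torchvision detector (a hypothetical stand-in, assuming a recent torchvision release, not the system described above).

```python
# Illustrative only: counts vehicles in one camera frame with a generic
# pretrained detector; the production system in the abstract is not reproduced.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn

VEHICLE_LABELS = {3, 4, 6, 8}  # COCO ids: car, motorcycle, bus, truck

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def count_vehicles(image_path: str, score_threshold: float = 0.5) -> int:
    """Return the number of detected vehicles in a single frame."""
    frame = read_image(image_path).float() / 255.0        # [C, H, W] in [0, 1]
    with torch.no_grad():
        detections = model([frame])[0]                     # boxes, labels, scores
    keep = detections["scores"] > score_threshold
    labels = detections["labels"][keep]
    return int(sum(int(lbl) in VEHICLE_LABELS for lbl in labels))

# Example: vehicle_count = count_vehicles("intersection_frame.jpg")
```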

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 168
1963 Effect of Boundary Condition on Granular Pressure of Gas-Solid Flow in a Rotating Drum

Authors: Rezwana Rahman

Abstract:

Various simulations have been conducted in the past to understand the macroscopic behavior of particles in solid-gas multiphase flow in rotating drums. In these studies, a particle-wall no-slip boundary condition was usually adopted. However, the no-slip boundary condition is rarely encountered in real systems, and little effort has been made to investigate particle behavior under slip boundary conditions. This paper presents a study of gas-solid flow in a horizontal rotating drum under a slip boundary wall condition. Two different sizes of particles with the same density have been considered. The Eulerian–Eulerian multiphase model with the kinetic theory of granular flow was used in the simulations. The granular pressure in the rolling flow regime with a specularity coefficient of 1 was examined and compared with that obtained with the no-slip boundary condition. The results reveal that the profiles of granular pressure distribution on the transverse plane of the drum are similar for both boundary conditions. Overall, however, compared with the no-slip boundary condition, the granular pressure values for a specularity coefficient of 1 are larger for the larger particle and smaller for the smaller particle.
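
For readers unfamiliar with the specularity coefficient, a commonly used partial-slip wall condition in Eulerian–Eulerian kinetic-theory models is the Johnson–Jackson formulation, a reference form of which is sketched below (the exact closure used in this study is not stated in the abstract). A value of φ = 0 recovers free slip, while φ = 1 gives the strongest particle–wall momentum transfer.

```latex
% Johnson--Jackson partial-slip wall condition for the solids phase (reference form)
\vec{\tau}_{s,w} \;=\; -\,\frac{\pi\sqrt{3}}{6}\,\phi\,
\frac{\alpha_s}{\alpha_{s,\max}}\,\rho_s\, g_{0,ss}\,
\sqrt{\Theta_s}\;\vec{u}_{s,\parallel}
```

Here τ is the wall shear stress on the solids phase, φ the specularity coefficient, α_s the solids volume fraction, g_0 the radial distribution function, Θ_s the granular temperature, and u_{s,∥} the particle slip velocity at the wall.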

Keywords: boundary condition, eulerian–eulerian, multiphase, specularity coefficient, transverse plane

Procedia PDF Downloads 218
1962 Analyzing Soviet and Post-Soviet Contemporary Russian Foreign Policy by Applying the Theory of Political Realism

Authors: Simon Tsipis

Abstract:

In this study, we propose to analyze Russian foreign policy conduct by applying the theory of Political Realism and a qualitative comparative method of analysis. We find that the paradigm of Political Realism supplies significant insights into the sources of contemporary Russian foreign policy conduct, since the power factor was and remains an integral element in Russian foreign policy, especially when we apply comparative analysis and compare it with the behavior of its Soviet predecessor. Through the lens of Realist theory, much of Russian foreign policy-making becomes clearer and far more comprehensible.

Keywords: realism, Russia, cold war, Soviet Union, European security

Procedia PDF Downloads 114
1961 Current Design Approach for Seismic Resistant Automated Rack Supported Warehouses: Strong Points and Critical Aspects

Authors: Agnese Natali, Francesco Morelli, Walter Salvatore

Abstract:

Automated Rack Supported Warehouses (ARSWs) are structures currently designed as steel racks. Even though there are common characteristics, there are differences that do not allow the same design approach to be adopted. Aiming to highlight the factors influencing the design and the behavior of ARSWs, a set of 5 structures designed by 5 European companies specialized in this field is used to perform both a critical analysis of the design approaches and an assessment of the seismic performance, which is used to point out the critical aspects and the need for a new design philosophy.

Keywords: steel racks, automated rack supported warehouse, thin-walled cold-formed elements, seismic assessment

Procedia PDF Downloads 163
1960 A Design Research Methodology for Light and Stretchable Electrical Thermal Warm-Up Sportswear to Enhance the Performance of Athletes against Harsh Environment

Authors: Chenxiao Yang, Li Li

Abstract:

Over the past decade, the sportswear market has expanded rapidly, with numerous sports brands competing fiercely to hold their market shares and to lead in professional competitive sports and set trends. Advanced sports equipment is therefore being explored in depth to improve athletes' performance in these competitions. Although plenty of protective equipment, such as cuffs and running leggings, is on the market, there is still a gap in sportswear covering the important pre-race warm-up period, especially for competitions held in cold environments. There is often a time gap between warm-up and race due to event logistics or unexpected weather, during which athletes can be exposed to chilly conditions for an unpredictably long period of time. As a consequence, the effects of the warm-up are negated and competition performance is degraded. A review of the current market shows that no effective sports equipment is available to help athletes against this harsh environment, and the few existing products are so bulky or heavy that they restrict movement. Ideal thermal-protective sportswear should be light, flexible, comfortable and aesthetic at the same time. This design research therefore adopted a textile circular-knitting methodology to integrate soft silver-coated conductive yarns (SCCYs), elastic nylon yarn and polyester yarn into the proposed electrical thermal sportswear with the strengths mentioned above. Meanwhile, the relationships between heating performance, stretch load and energy consumption were investigated, and a simulation model was established to ensure sufficient warmth and flexibility at a lower energy cost, with optimized production parameters determined. The proposed circular-knitting technology and simulation model can be applied directly to guide prototype development, catering to different target consumers' needs and ensuring prototype safety, while saving high R&D investment and time. Further, two prototypes, a kneecap and an elbow guard, were developed to facilitate the transfer of the research technology into industrial application and to hint at the future blueprint.

Keywords: cold environment, silver-coated conductive yarn, electrical thermal textile, stretchable

Procedia PDF Downloads 268
1959 A Systematic Review and Meta-Analysis in Slow Gait Speed and Its Association with Worse Postoperative Outcomes in Cardiac Surgery

Authors: Vignesh Ratnaraj, Jaewon Chang

Abstract:

Background: Frailty is associated with poorer outcomes in cardiac surgery, but the heterogeneity of frailty assessment tools makes it difficult to ascertain its true impact in cardiac surgery. Slow gait speed is a simple, validated, and reliable marker of frailty. We performed a systematic review and meta-analysis to examine the effect of slow gait speed on postoperative cardiac surgical patients. Methods: The PubMed, MEDLINE, and EMBASE databases were searched from January 2000 to August 2021 for studies comparing slow gait speed and "normal" gait speed. The primary outcome was in-hospital mortality. Secondary outcomes were composite mortality and major morbidity, AKI, stroke, deep sternal wound infection, prolonged ventilation, discharge to a healthcare facility, and ICU length of stay. Results: There were seven eligible studies with 36,697 patients. Slow gait speed was associated with an increased likelihood of in-hospital mortality (risk ratio [RR]: 2.32; 95% confidence interval [CI]: 1.87–2.87). Additionally, slow walkers were more likely to suffer composite mortality and major morbidity (RR: 1.52; 95% CI: 1.38–1.66), AKI (RR: 2.81; 95% CI: 1.44–5.49), deep sternal wound infection (RR: 1.77; 95% CI: 1.59–1.98), prolonged ventilation >24 h (RR: 1.97; 95% CI: 1.48–2.63), reoperation (RR: 1.38; 95% CI: 1.05–1.82), institutional discharge (RR: 2.08; 95% CI: 1.61–2.69), and longer ICU length of stay (mean difference: 21.69; 95% CI: 17.32–26.05). Conclusion: Slow gait speed is associated with poorer outcomes in cardiac surgery. Frail patients are twofold more likely to die during hospital admission than their non-frail counterparts and are at increased risk of developing various perioperative complications.
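
For readers who want to reproduce the effect-size arithmetic, the short sketch below shows how a single study's risk ratio and 95% confidence interval are computed from 2×2 counts (the counts shown are hypothetical; the pooled estimates in the abstract come from the full meta-analytic model, which is not reproduced here).

```python
import math

def risk_ratio(events_exposed, n_exposed, events_control, n_control):
    """Risk ratio with a 95% CI from a 2x2 table (log-normal approximation)."""
    risk_e = events_exposed / n_exposed
    risk_c = events_control / n_control
    rr = risk_e / risk_c
    # Standard error of log(RR)
    se = math.sqrt(1 / events_exposed - 1 / n_exposed
                   + 1 / events_control - 1 / n_control)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, lower, upper

# Hypothetical counts for one study (not data from the review):
print(risk_ratio(events_exposed=40, n_exposed=500,
                 events_control=35, n_control=1000))
```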

Keywords: cardiac surgery, gait speed, recovery, frailty

Procedia PDF Downloads 71
1958 Seismic Reflection Highlights of New Miocene Deep Aquifers in Eastern Tunisia Basin (North Africa)

Authors: Mourad Bédir, Sami Khomsi, Hakim Gabtni, Hajer Azaiez, Ramzi Gharsalli, Riadh Chebbi

Abstract:

Eastern Tunisia is a semi-arid area located on the northern African plate, on the southern Mediterranean side. It faces water scarcity, overexploitation, and decreasing water quality of the phreatic water table. Water supply and storage will not keep up with demographic and economic growth and demand. In addition, only 5×10⁹ m³ of the 35×10⁹ m³ of renewable rainwater supplied per year can be retained and remobilized. To remedy this water deficiency, research has focused on new subsurface deep aquifer resources, among them the Upper Miocene sandstone deposits of the Béglia, Saouaf, and Somaa Formations. These sandstones are known for their proven hydrogeologic and hydrocarbon reservoir characteristics in the Tunisian margin and represent semi-confined to confined aquifers. This work is based on a new integrated approach combining seismic stratigraphy, seismic tectonics, and hydrogeology to highlight and characterize these reservoir levels for aquifer exploitation in a semi-arid area. As a result, five to six third-order depositional sequences have been highlighted. They are composed of multi-layered, laterally extended sandstone reservoirs separated by shale packages. These reservoir deposits represent the lowstand and highstand system tracts of these sequences and constitute important strategic water resource volumes for the region.

Keywords: Tunisia, Hydrogeology, sandstones, basin, seismic, aquifers, modeling

Procedia PDF Downloads 175
1957 A Review of Brain Implant Device: Current Developments and Applications

Authors: Ardiansyah I. Ryan, Ashsholih K. R., Fathurrohman G. R., Kurniadi M. R., Huda P. A

Abstract:

The burden of brain-related disease is very high. Many brain-related diseases have limited treatment results, which raises the burden further. Treatments for Parkinson's disease (PD), mental health problems, and paralysis of the extremities are of particular concern, as patients with these conditions usually have a low quality of life and a low chance of full recovery. Many other brain or related neural diseases are in a similar situation, mainly because treatments remain limited while our understanding of brain function is insufficient. Brain implant technology has given hope for treating these conditions. In this paper, we examine the current state of brain implant technology. Neurotechnology is growing very rapidly worldwide. The United States Food and Drug Administration (FDA) has approved the use of Deep Brain Stimulation (DBS) as a brain implant in humans; among neural implants, the cochlear implant and the retinal implant are also FDA approved, and all of them have shown promising results. DBS works by stimulating a specific region of the brain with electricity. The device is implanted surgically into a very specific region of the brain and consists of three main parts: the lead (a thin wire inserted into the brain), the neurostimulator (a pacemaker-like device implanted surgically in the chest), and an external controller (used by the patient or programmer to turn the device on and off). The FDA has approved DBS for the treatment of PD, pain management, epilepsy, and obsessive-compulsive disorder (OCD). In PD, the target of DBS treatment is to reduce tremor and dystonia symptoms. DBS has also shown promising results in animal studies and limited human trials for other conditions such as Alzheimer's disease and mental health problems (major depression, Tourette syndrome). Every surgery carries a risk of complications, although in DBS the risk is very low, and DBS itself gives very satisfying results as long as subjects are strictly selected for implantation based on the indications. Beyond DBS, several other brain implant devices are under development, including (but not limited to) implants to treat paralysis (in spinal cord injury or amyotrophic lateral sclerosis), enhance memory, reduce obesity, treat mental health problems, and treat epilepsy. The potential of neurotechnology is unlimited. When brain function is fully understood and brain implants are fully developed, they may represent one of the major breakthroughs in human history, comparable to the first use of fire. Support from every sector for further research is needed to develop and unveil the true potential of this technology.

Keywords: brain implant, deep brain stimulation (DBS), Parkinson

Procedia PDF Downloads 154
1956 Omni-Modeler: Dynamic Learning for Pedestrian Redetection

Authors: Michael Karnes, Alper Yilmaz

Abstract:

This paper presents the application of the Omni-Modeler to pedestrian redetection. The pedestrian redetection task creates several challenges for deep neural networks (DNNs) due to the variation of pedestrian appearance with camera position, the variety of environmental conditions, and the specificity required to recognize one pedestrian from another. DNNs require significant training sets and are not easily adapted to changes in class appearance or changes in the set of classes held in their knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene, and that can learn individual appearances from a few frames of a video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. It adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The Omni-Modeler knowledge domain is generated by creating a dynamic dictionary of concept definitions, which is directly updatable as new information becomes available. Query images are identified through nearest neighbor comparison to the learned object definitions. The study presented in this paper evaluates its performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for across-camera-view pedestrian redetection and is highly effective for single-camera redetection, with 93% accuracy across 30 individuals using 64 example images per individual.
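
The core mechanism described above, a dynamically updatable dictionary of identity definitions queried by nearest neighbor comparison, can be sketched roughly as follows. The embedding source, cosine similarity, and threshold are illustrative assumptions; the Omni-Modeler's actual encoder is not reproduced here.

```python
import numpy as np

class DynamicGallery:
    """Minimal sketch of a dynamically updatable identity dictionary.

    Embeddings are assumed to come from any pretrained feature extractor;
    cosine similarity and the threshold policy are illustrative choices.
    """

    def __init__(self):
        self.definitions = {}            # identity -> list of embedding vectors

    def add_example(self, identity, embedding):
        vec = embedding / np.linalg.norm(embedding)
        self.definitions.setdefault(identity, []).append(vec)

    def remove(self, identity):
        self.definitions.pop(identity, None)    # identity left the scene

    def query(self, embedding, threshold=0.7):
        vec = embedding / np.linalg.norm(embedding)
        best_id, best_sim = None, -1.0
        for identity, examples in self.definitions.items():
            sim = max(float(vec @ e) for e in examples)   # nearest neighbor
            if sim > best_sim:
                best_id, best_sim = identity, sim
        return best_id if best_sim >= threshold else None  # None = unknown person
```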

Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition

Procedia PDF Downloads 75
1955 Framework to Quantify Customer Experience

Authors: Anant Sharma, Ashwin Rajan

Abstract:

Customer experience is measured today by defining a set of metrics and key performance indicators (KPIs), setting up thresholds, and defining triggers across those thresholds. While this is an effective way of measuring against a KPI, the approach cannot capture the various nuances that make up the overall customer experience. Customers consume a product or service at various levels, which is reflected not only in metrics like customer satisfaction or Net Promoter Score but also in other measurements such as recurring revenue, frequency of service usage, e-learning, and depth of usage. Here we explore an alternative method of measuring customer experience by flipping the traditional view: rather than rolling customers up to a metric, we roll metrics up to hierarchies and then measure customer experience. This method allows any team to quantify customer experience across multiple touchpoints in a customer's journey. We make use of various data sources containing information for metrics like CXSAT, NPS, renewals, and depth of service usage, collected across the customer lifecycle. This data can be mined systematically to find linkages between different data points such as geographies, business groups, products, and time. Additional views can be generated by blending synthetic contexts into the data to show trends and top/bottom reports. We have created a framework that allows us to measure customer experience using the above logic.
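
As a loose illustration of rolling metrics up to hierarchies rather than rolling customers up to a single KPI, the sketch below aggregates several metrics along a hypothetical geography → business group → product hierarchy. The column names, weights, and normalization are assumptions, not the authors' framework.

```python
import pandas as pd

# Hypothetical per-customer metric extract (columns are illustrative)
df = pd.DataFrame({
    "geography":      ["EMEA", "EMEA", "APAC", "APAC"],
    "business_group": ["SMB", "ENT", "SMB", "ENT"],
    "product":        ["A", "A", "B", "B"],
    "csat":           [4.2, 3.8, 4.5, 4.0],
    "nps":            [30, 10, 45, 25],
    "renewal_rate":   [0.81, 0.75, 0.90, 0.83],
    "usage_depth":    [0.55, 0.40, 0.70, 0.60],
})

# Roll metrics up the hierarchy instead of rolling customers down to one KPI
hierarchy = ["geography", "business_group", "product"]
rollup = df.groupby(hierarchy).mean(numeric_only=True)

# Blend the rolled-up metrics into a single illustrative experience score
weights = {"csat": 0.3, "nps": 0.3, "renewal_rate": 0.2, "usage_depth": 0.2}
normalized = (rollup - rollup.min()) / (rollup.max() - rollup.min() + 1e-9)
rollup["experience_score"] = sum(w * normalized[c] for c, w in weights.items())
print(rollup)
```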

Keywords: analytics, customers experience, BI, business operations, KPIs, metrics

Procedia PDF Downloads 70
1954 Effectiveness of Interactive Integrated Tutorial in Teaching Medical Subjects to Dental Students: A Pilot Study

Authors: Mohammad Saleem, Neeta Kumar, Anita Sharma, Sazina Muzammil

Abstract:

It is observed that some dental students in our setting take less interest in medical subjects. Various teaching methods are currently the focus of research interest and are being tried to generate interest among students. An interactive integrated tutorial approach was used to assess its feasibility in teaching medical subjects to dental undergraduates. The aim was to generate interest and promote active self-learning among students. The objectives were to (1) introduce the integrated interactive learning method through two departments, and (2) obtain feedback from the students and faculty on the feasibility and effectiveness of this method. Second-year students in the Bachelor of Dental Surgery course were divided into two groups. Each group was asked to study the physiology and pathology of a common and important condition (anemia and hypertension) within a week. During the tutorial, students asked each other questions on the physiology and pathology of that condition in the presence of teachers from both the physiology and pathology departments, with the teachers acting only as facilitators. After the session, feedback from students and faculty on this alternative learning method was obtained. Results: The majority of the students felt that this method of learning is enjoyable and helped them develop reasoning skills and the ability to correlate and integrate knowledge from two related fields. The majority also felt that this kind of learning led to a better understanding of the topic and motivated them towards deep learning. Teachers observed that the exercise promoted interdepartmental cross-discipline collaboration and better linkages among students. Conclusion: The interactive integrated tutorial is effective in motivating dental students towards better and deeper learning of medical subjects.

Keywords: active learning, education, integrated, interactive, self-learning, tutorials

Procedia PDF Downloads 312
1953 Discrete Element Simulations of Composite Ceramic Powders

Authors: Julia Cristina Bonaldo, Christophe L. Martin, Severine Romero Baivier, Stephane Mazerat

Abstract:

Alumina refractories are commonly used in the steel and foundry industries. These refractories are prepared through a powder metallurgy route: they are a mixture of hard alumina particles and graphite platelets embedded in a soft carbonaceous matrix (binder). The powder can be cold pressed isostatically or uniaxially, depending on the application, and the compact is then fired to obtain the final product. The quality of the product is governed by the microstructure of the composite and by the process parameters. The compaction behavior and the mechanical properties of the fired product depend greatly on the amount of each phase, on their morphology and on the initial microstructure. In order to better understand the link between these parameters and the macroscopic behavior, we use the Discrete Element Method (DEM) to simulate the compaction process and the fracture behavior of the fired composite. These simulations are coupled with well-designed experiments. Four mixes with various amounts of Al₂O₃ and binder were tested both experimentally and numerically. In DEM, each particle is modelled and the interactions between particles are taken into account through appropriate contact or bonding laws. Here, we model a bimodal mixture of large Al₂O₃ and small Al₂O₃ particles covered with a soft binder; this composite is itself mixed with graphite platelets. X-ray tomography images are used to analyze the morphologies of the different components. Large Al₂O₃ particles and graphite platelets are modelled in DEM as sets of particles bonded together, while the binder is modelled as a soft shell that covers both large and small Al₂O₃ particles. When two particles with binder indent each other, they first interact through this soft shell; once a critical indentation is reached (towards the end of compaction), hard Al₂O₃–Al₂O₃ contacts appear. In accordance with experimental data, DEM simulations show that the amounts of Al₂O₃ and binder play a major role in the compaction behavior. The graphite platelets bend and break during compaction, also contributing to the macroscopic stress. The firing step is modelled in DEM by ascribing bonds to particles that are in contact with each other after compaction. The fracture behavior of the compacted mixture is also simulated and compared with experimental data; both diametrical (Brazilian) tests and triaxial tests are carried out. Again, the link between the amount of Al₂O₃ particles and the fracture behavior is investigated. The methodology described here can be generalized to other particulate materials used in the ceramic industry.
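
The two-stage contact described above, soft binder shell first and hard alumina–alumina contact beyond a critical indentation, can be illustrated with a simple piecewise normal-force law. The stiffnesses and critical overlap below are placeholders, not the calibrated DEM parameters of the study.

```python
def normal_contact_force(overlap, delta_crit=1e-6, k_binder=1e3, k_hard=1e6):
    """Illustrative piecewise contact law for binder-coated particles.

    overlap    : normal overlap between two particles [m]
    delta_crit : indentation at which the soft binder shell is exhausted [m] (placeholder)
    k_binder   : stiffness of the soft binder shell [N/m] (placeholder)
    k_hard     : stiffness of direct alumina-alumina contact [N/m] (placeholder)
    """
    if overlap <= 0.0:
        return 0.0                                  # no contact
    if overlap <= delta_crit:
        return k_binder * overlap                   # soft binder regime
    # Hard contact takes over once the binder shell is fully indented
    return k_binder * delta_crit + k_hard * (overlap - delta_crit)
```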

Keywords: cold compaction, composites, discrete element method, refractory materials, x-ray tomography

Procedia PDF Downloads 137
1952 Harnessing Deep-Level Metagenomics to Explore the Three Dynamic One Health Areas: Healthcare, Domiciliary and Veterinary

Authors: Christina Killian, Katie Wall, Séamus Fanning, Guerrino Macori

Abstract:

Deep-level metagenomics offers a useful technical approach to explore the three dynamic One Health axes: healthcare, domiciliary and veterinary. There is currently limited understanding of the composition of complex biofilms, natural abundance of AMR genes and gene transfer occurrence in these ecological niches. By using a newly established small-scale complex biofilm model, COMBAT has the potential to provide new information on microbial diversity, antimicrobial resistance (AMR)-encoding gene abundance, and their transfer in complex biofilms of importance to these three One Health axes. Shotgun metagenomics has been used to sample the genomes of all microbes comprising the complex communities found in each biofilm source. A comparative analysis between untreated and biocide-treated biofilms is described. The basic steps include the purification of genomic DNA, followed by library preparation, sequencing, and finally, data analysis. The use of long-read sequencing facilitates the completion of metagenome-assembled genomes (MAG). Samples were sequenced using a PromethION platform, and following quality checks, binning methods, and bespoke bioinformatics pipelines, we describe the recovery of individual MAGs to identify mobile gene elements (MGE) and the corresponding AMR genotypes that map to these structures. High-throughput sequencing strategies have been deployed to characterize these communities. Accurately defining the profiles of these niches is an essential step towards elucidating the impact of the microbiota on each niche biofilm environment and their evolution.

Keywords: COMBAT, biofilm, metagenomics, high-throughput sequencing

Procedia PDF Downloads 54
1951 Decoupled Dynamic Control of Unicycle Robot Using Integral Linear Quadratic Regulator and Sliding Mode Controller

Authors: Shweda Mohan, J. L. Nandagopal, S. Amritha

Abstract:

This paper focuses on the dynamic modelling of a unicycle robot. Two main concepts are used for balancing the unicycle robot: the reaction wheel pendulum and the inverted pendulum. The pitch axis is modelled as an inverted pendulum and the roll axis as a reaction wheel pendulum; the unicycle yaw dynamics are not considered, which keeps the derivation of the dynamics relatively simple. For the roll controller, a sliding-mode controller has been adopted, and optimal methods are used to minimize switching-function chattering. For the pitch controller, an LQR controller has been implemented to drive the unicycle robot to follow the desired velocity trajectory. Pitch and roll balance are achieved by two DC motors. The unicycle robot is a non-holonomic, non-linear, statically unbalanced system with the minimal number of contact points with the ground, and is therefore a perfect platform for researchers to study motion and balance control. These real-time solutions will be viable for advanced robotic systems and controls.
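
As a rough sketch of the LQR design step for the pitch axis, the gain can be obtained from the continuous-time algebraic Riccati equation as shown below. The A and B matrices here are placeholders for a linearized inverted-pendulum-type model (the actual unicycle model and weights are not given in the abstract).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder linearized pitch dynamics x_dot = A x + B u
# (states: pitch angle, pitch rate, wheel angle, wheel rate -- illustrative values only)
A = np.array([[0.0,  1.0, 0.0, 0.0],
              [25.0, 0.0, 0.0, 0.0],
              [0.0,  0.0, 0.0, 1.0],
              [-5.0, 0.0, 0.0, 0.0]])
B = np.array([[0.0], [-2.0], [0.0], [4.0]])

Q = np.diag([100.0, 1.0, 10.0, 1.0])   # state weights (tuning choice)
R = np.array([[1.0]])                  # input weight (tuning choice)

P = solve_continuous_are(A, B, Q, R)   # solve the Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain, u = -K x
print("LQR gain K:", K)
```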

Keywords: decoupled dynamics, linear quadratic regulator (LQR) control, Lyapunov function sliding mode control, unicycle robot, velocity and trajectory control

Procedia PDF Downloads 362
1950 Intelligent Fault Diagnosis for the Connection Elements of Modular Offshore Platforms

Authors: Jixiang Lei, Alexander Fuchs, Franz Pernkopf, Katrin Ellermann

Abstract:

Within the Space@Sea project, funded by the Horizon 2020 program, an island consisting of multiple platforms was designed. The platforms are connected by ropes and fenders, and the connection is critical with respect to the safety of the whole system. Therefore, fault detection systems are investigated that could detect early warning signs of a possible failure in the connection elements. Previously, a model-based method using an Extended Kalman Filter was developed to detect the reduction of rope stiffness. This method detected several types of faults reliably, but some fault types were much more difficult to detect. Furthermore, the model-based method is sensitive to environmental noise: when the wave height is low, a long time is needed to detect a fault and the accuracy is not always satisfactory. It is therefore necessary to develop a more accurate and robust technique that can detect all rope faults under a wide range of operational conditions. Building on this work on the Space@Sea design, we introduce a fault diagnosis method based on deep neural networks. Our method can not only detect rope degradation using the acceleration data from each platform but also estimate the contributions of the specific acceleration sensors using methods from explainable AI. In order to adapt to different operational conditions, the domain adaptation technique DANN is applied. The proposed model can accurately estimate rope degradation under a wide range of environmental conditions and help users understand the relationship between the output and the contributions of each acceleration sensor.
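
The DANN-style adaptation mentioned above hinges on a gradient reversal layer between the feature extractor and the domain classifier. A minimal PyTorch sketch of that layer is shown below; the full network and training loop used in the project are not reproduced.

```python
import torch

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; scales and flips gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient pushes the feature extractor to confuse the domain classifier
        return -ctx.lambda_ * grad_output, None

def grad_reverse(x, lambda_=1.0):
    return GradientReversal.apply(x, lambda_)

# Usage sketch: features -> grad_reverse -> domain classifier
# domain_logits = domain_classifier(grad_reverse(features, lambda_=0.5))
```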

Keywords: fault diagnosis, deep learning, domain adaptation, explainable AI

Procedia PDF Downloads 179
1949 Evaluation of the Effect of Turbulence Caused by the Oscillation Grid on Oil Spill in Water Column

Authors: Mohammad Ghiasvand, Babak Khorsandi, Morteza Kolahdoozan

Abstract:

Under the influence of waves, oil in the sea is subject to vertical scattering into the water column. Scientists' understanding of how oil is dispersed in the water column is among the poorest of all the processes affecting oil in the marine environment, which highlights the need for research in this field. This study therefore investigates the distribution of oil in the water column in a turbulent environment with zero mean velocity. The lack of laboratory results for analyzing the distribution of petroleum pollutants in deep water, both for understanding the physics of the phenomenon and for calibrating numerical models, led to the development of the laboratory model used in this research. In line with the aim of the present study, which is to investigate the distribution of oil in the homogeneous and isotropic turbulence generated by an oscillating grid, crude oil was poured onto the water surface once the ideal conditions were reached, and its distribution into deep water due to turbulence was investigated. All the experimental processes in this study were implemented and used for the first time in Iran, and the diffusion of oil in the water column was considered one of the key aspects of pollutant diffusion in an oscillating-grid environment. Finally, the required oscillation velocities were measured at depths of 10, 15, 20, and 25 cm below the water surface and used in the analysis of oil diffusion in terms of turbulence parameters. The results showed that, with the characteristics of the present system, oil diffusion at the four depths for grid motion at a frequency of 0.8 Hz was greater than in the static case by 26.18, 31.57, 37.5, and 50%, respectively, from top to bottom. Also, 2.5 minutes after the oil spill at a frequency of 0.8 Hz, oil distribution at these depths increased by 49, 61.5, 85, and 146.1%, respectively, compared to the base (static) state.

Keywords: homogeneous and isotropic turbulence, oil distribution, oscillating grid, oil spill

Procedia PDF Downloads 73
1948 Innovative Dissipative Bracings for Seismic-Resistant Automated Rack Supported Warehouses

Authors: Agnese Natali, Francesco Morelli, Walter Salvatore

Abstract:

Automated Rack Supported Warehouses (ARSWs) are storage buildings whose structure is made of the same racks on which goods are placed. The possibility of designing dissipative seismic-resistant ARSWs is investigated. The diagonals are the dissipative elements, arranged as tension-only X bracings. Local optimization is performed numerically to satisfy the over-resistance requirement for the connections of the dissipative elements, which is hard to achieve due to the geometrical limits of the thin-walled diagonals and must be balanced against resistance, slenderness limits, and ductility requirements.

Keywords: steel racks, thin-walled cold-formed elements, structural optimization, hierarchy rules, dog-bone philosophy

Procedia PDF Downloads 159
1947 Explainable Deep Learning for Neuroimaging: A Generalizable Approach for Differential Diagnosis of Brain Diseases

Authors: Nighat Bibi

Abstract:

The differential diagnosis of brain diseases by magnetic resonance imaging (MRI) is a crucial step in the diagnostic process, and deep learning (DL) has the potential to significantly improve the accuracy and efficiency of these diagnoses. This study focuses on creating an ensemble learning (EL) model that utilizes the ResNet50, DenseNet121, and EfficientNetB1 architectures to concurrently and accurately classify various brain conditions from MRI images. The proposed ensemble learning model identifies a range of brain disorders that encompass different types of brain tumors, as well as multiple sclerosis. The proposed model was trained on two open-source datasets, consisting of MRI images of glioma, meningioma, pituitary tumors, and multiple sclerosis. Central to this research is the integration of gradient-weighted class activation mapping (Grad-CAM) for model interpretability, aligning with the growing emphasis on explainable AI (XAI) in medical imaging. The application of Grad-CAM improves the transparency of the model's decision-making process, which is vital for clinical acceptance and trust in AI-assisted diagnostic tools. The EL model achieved an impressive 99.84% accuracy in classifying these various brain conditions, demonstrating its potential as a versatile and effective tool for differential diagnosis in neuroimaging. The model’s ability to distinguish between multiple brain diseases underscores its significant potential in the field of medical imaging. Additionally, Grad-CAM visualizations provide deeper insights into the neural network’s reasoning, contributing to a more transparent and interpretable AI-driven diagnostic process in neuroimaging.
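
A minimal sketch of the three-backbone soft-voting idea is shown below, with the classifier heads resized to the number of brain-condition classes. Averaging softmax outputs is one common fusion choice; the paper's exact fusion rule, training setup, and Grad-CAM wiring are not reproduced here, and a recent torchvision release is assumed.

```python
import torch
import torch.nn as nn
from torchvision import models

class SoftVotingEnsemble(nn.Module):
    """Averages class probabilities from ResNet50, DenseNet121 and EfficientNetB1."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.resnet = models.resnet50(weights=None)
        self.resnet.fc = nn.Linear(self.resnet.fc.in_features, num_classes)

        self.densenet = models.densenet121(weights=None)
        self.densenet.classifier = nn.Linear(self.densenet.classifier.in_features,
                                             num_classes)

        self.effnet = models.efficientnet_b1(weights=None)
        self.effnet.classifier[1] = nn.Linear(self.effnet.classifier[1].in_features,
                                              num_classes)

    def forward(self, x):
        probs = [torch.softmax(m(x), dim=1)
                 for m in (self.resnet, self.densenet, self.effnet)]
        return torch.stack(probs).mean(dim=0)   # soft-voting fusion

# model = SoftVotingEnsemble(num_classes=4)  # e.g. glioma, meningioma, pituitary, MS
```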

Keywords: brain tumour, differential diagnosis, ensemble learning, explainability, grad-cam, multiple sclerosis

Procedia PDF Downloads 6
1946 Development of a Novel Clinical Screening Tool, Using the BSGE Pain Questionnaire, Clinical Examination and Ultrasound to Predict the Severity of Endometriosis Prior to Laparoscopic Surgery

Authors: Marlin Mubarak

Abstract:

Background: Endometriosis is a complex, disabling disease mainly affecting young women of reproductive age. The aim of this project is to generate a diagnostic model to predict the severity and stage of endometriosis prior to laparoscopic surgery. This will help to improve the pre-operative diagnostic accuracy of stage 3 and 4 endometriosis and, as a result, refer the relevant women to a specialist centre for complex laparoscopic surgery. The model is based on the British Society for Gynaecological Endoscopy (BSGE) pain questionnaire, clinical examination, and ultrasound scan. Design: This is a prospective, observational study in which women completed the BSGE pain questionnaire, a BSGE requirement. As part of the routine preoperative assessment, patients also had a routine ultrasound scan, and when recto-vaginal or deep infiltrating endometriosis was suspected, an MRI was performed. Setting: Luton & Dunstable University Hospital. Patients: Symptomatic women (n = 56) scheduled for laparoscopy due to pelvic pain, aged between 17 and 52 years (mean 33.8 years, SD 8.7 years). Interventions: None outside the recognised and established endometriosis centre protocol set up by the BSGE. Main Outcome Measure(s): Sensitivity and specificity of endometriosis diagnosis predicted by symptoms based on the BSGE pain questionnaire, clinical examination, and imaging. Findings: The prevalence of diagnosed endometriosis was calculated to be 76.8% and the prevalence of advanced stage 55.4%. Deep infiltrating endometriosis (DIE) in various locations was diagnosed in 32/56 women (57.1%), some with DIE involving several locations. Logistic regression analysis was performed on 36 clinical variables to create a simple clinical prediction model. After creating the scoring system using variables with P < 0.05, the model was applied to the whole dataset. The sensitivity was 83.87% and the specificity 96%. The positive likelihood ratio was 20.97 and the negative likelihood ratio 0.17, indicating that the model has good predictive value and could be useful in predicting advanced-stage endometriosis. Conclusions: This is a hypothesis-generating project with one operator, and future research would provide validation of the model and establish its usefulness in the general setting. Predictive tools based on such a model could help organise the appropriate investigations in clinical practice, reduce the risks associated with surgery, and improve outcomes. They could also be of value for future research to standardise the assessment of women presenting with pelvic pain. The model needs further testing in a general setting to assess whether the initial results are reproducible.
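
The reported likelihood ratios follow directly from the sensitivity and specificity (LR+ = sensitivity / (1 − specificity), LR− = (1 − sensitivity) / specificity); the short sketch below reproduces that arithmetic with the values quoted in the abstract.

```python
def likelihood_ratios(sensitivity: float, specificity: float):
    """Positive and negative likelihood ratios of a diagnostic model."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Values reported in the abstract: sensitivity 83.87%, specificity 96%
print(likelihood_ratios(0.8387, 0.96))   # -> approximately (20.97, 0.17)
```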

Keywords: deep endometriosis, endometriosis, minimally invasive, MRI, ultrasound

Procedia PDF Downloads 352
1945 Nonlinear Finite Element Modeling of Deep Beam Resting on Linear and Nonlinear Random Soil

Authors: M. Seguini, D. Nedjar

Abstract:

An accurate nonlinear analysis of a deep beam resting on elastic, perfectly plastic soil is carried out in this study. A nonlinear finite element model for the large deflection and moderate rotation of an Euler-Bernoulli beam resting on linear and nonlinear random soil is investigated. The geometric nonlinear analysis of the beam is based on von Kármán theory, and the Newton-Raphson incremental-iterative method is implemented in a Matlab code to solve the nonlinear equations of the soil-beam interaction system. Two analyses (deterministic and probabilistic) are proposed to verify the accuracy and efficiency of the proposed model, where the theory of the local average based on the Monte Carlo approach is used to analyze the effect of the spatial variability of the soil properties on the nonlinear beam response. The effects of six main parameters are investigated: the external load, the length of the beam, the coefficient of subgrade reaction of the soil, the Young's modulus of the beam, and the coefficient of variation and the correlation length of the soil's coefficient of subgrade reaction. A comparison between the beam resting on linear and on nonlinear soil models is presented for different beam lengths and external loads. Numerical results have been obtained for the combination of the geometric nonlinearity of the beam and the material nonlinearity of the random soil. This comparison highlights the need to include the material nonlinearity and spatial variability of the soil in the geometric nonlinear analysis when the beam undergoes large deflections.
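
For readers unfamiliar with the incremental-iterative scheme mentioned above, the sketch below shows the Newton-Raphson idea on a toy single-degree-of-freedom nonlinear spring under stepwise loading. The actual Matlab code solves the full von Kármán beam on random soil and is not reproduced here; the stiffness values are placeholders.

```python
import numpy as np

def newton_raphson(f_ext, k_lin=1.0e3, k_cub=5.0e5, tol=1e-8, max_iter=50):
    """Solve R(u) = k_lin*u + k_cub*u**3 - f_ext = 0 for a toy nonlinear spring."""
    u = 0.0
    for _ in range(max_iter):
        residual = k_lin * u + k_cub * u**3 - f_ext
        if abs(residual) < tol:
            break
        tangent = k_lin + 3.0 * k_cub * u**2     # consistent tangent stiffness
        u -= residual / tangent                  # Newton update
    return u

# Incremental loading: apply the external load in steps, as in the beam model
for load in np.linspace(0.0, 50.0, 6)[1:]:
    print(f"load = {load:5.1f}  ->  displacement = {newton_raphson(load):.6f}")
```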

Keywords: finite element method, geometric nonlinearity, material nonlinearity, soil-structure interaction, spatial variability

Procedia PDF Downloads 412
1944 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments

Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz

Abstract:

Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advances in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological prediction at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, northern Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, in which optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) metric values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that the hyperparameter related to the length of the input sequence contributes most significantly to the prediction performance. The findings suggest that input sequence lengths have a crucial impact on model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter based on each catchment's characteristics, in line with the well-known "uniqueness of the place" paradigm. In prior research, tuning the length of the LSTM input sequence has received limited attention in streamflow prediction: it was initially set to 365 days to capture a full annual water cycle, and limited systematic tuning using grid search later suggested 270 days, yet despite the significance of this hyperparameter, most studies have overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of the input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.
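
To make the role of the input-sequence-length hyperparameter concrete, the sketch below builds sliding windows of a configurable length and feeds them to a plain single-timescale LSTM. The MTS-LSTM used in the study, its forcing data, and its tuned settings are not reproduced; the candidate lengths and layer sizes here are illustrative.

```python
import torch
import torch.nn as nn

def make_windows(features, targets, seq_len):
    """Stack sliding windows of length `seq_len` ending at each prediction time."""
    xs, ys = [], []
    for t in range(seq_len, len(features)):
        xs.append(features[t - seq_len:t])    # shape [seq_len, n_features]
        ys.append(targets[t])
    return torch.stack(xs), torch.stack(ys)

class StreamflowLSTM(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: [batch, seq_len, n_features]
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])       # predict flow at the last time step

# Candidate sequence lengths (hours) to include in the hyperparameter search
for seq_len in (168, 336, 720):               # illustrative values, not the tuned ones
    x, y = make_windows(torch.randn(2000, 5), torch.randn(2000), seq_len)
    pred = StreamflowLSTM(n_features=5)(x[:32])
    print(seq_len, x.shape, pred.shape)
```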

Keywords: LSTMs, streamflow, hyperparameters, hydrology

Procedia PDF Downloads 69
1943 Response of Diaphragmatic Excursion to Inspiratory Muscle Trainer Post Thoracotomy

Authors: H. M. Haytham, E. A. Azza, E.S. Mohamed, E. G. Nesreen

Abstract:

Thoracotomy is a major surgery with serious pulmonary complications, so the purpose of this study was to determine the response of diaphragmatic excursion to an inspiratory muscle trainer after thoracotomy. Thirty patients of both sexes (16 men and 14 women), aged 20 to 40 years, who had undergone thoracotomy participated in this study. The practical work was done in the cardiothoracic department of Kasr-El-Aini Hospital, Faculty of Medicine, starting 3 days postoperatively. Patients were assigned to two groups: group A (study group) included 15 patients (8 men and 7 women) who received inspiratory muscle training using an inspiratory muscle trainer for 20 minutes plus routine chest physiotherapy (deep breathing, cough, and early ambulation) twice daily, 3 days per week, for one month. Group B (control group) included 15 patients (8 men and 7 women) who received the routine chest physiotherapy only (deep breathing, cough, and early ambulation) twice daily, 3 days per week, for one month. Ultrasonography was used to evaluate the changes in diaphragmatic excursion before and after the training program. Statistical analysis revealed a significantly greater increase in diaphragmatic excursion in the study group (59.52%) than in the control group (18.66%) after use of the inspiratory muscle trainer postoperatively in patients after thoracotomy. It was concluded that the inspiratory muscle training device increases diaphragmatic excursion in patients after thoracotomy by improving inspiratory muscle strength and the mechanics of breathing, and that the inspiratory muscle trainer can be used as a physical therapy rehabilitation method to reduce post-operative pulmonary complications after thoracotomy.

Keywords: diaphragmatic excursion, inspiratory muscle trainer, ultrasonography, thoracotomy

Procedia PDF Downloads 318
1942 Deep Learning for SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR Systems are often considered to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
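
A rough sketch of the kind of CNN mapping involved, from dual/hybrid-polarimetric input channels to pseudo fully polarimetric channels, is given below. The paper's architecture, channel conventions, and composite loss terms are not reproduced, so the layer sizes, channel counts, and plain MSE loss here are placeholders.

```python
import torch
import torch.nn as nn

class PolReconstructionCNN(nn.Module):
    """Toy fully convolutional mapping: 2 hybrid-pol channels -> 6 pseudo full-pol channels."""

    def __init__(self, in_ch=2, out_ch=6, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, out_ch, kernel_size=3, padding=1),
        )

    def forward(self, x):                      # x: [batch, 2, H, W]
        return self.net(x)                     # pseudo full-pol channels

model = PolReconstructionCNN()
dual_pol = torch.randn(4, 2, 128, 128)         # synthetic stand-in for hybrid-pol patches
target_full = torch.randn(4, 6, 128, 128)      # synthetic stand-in for full-pol reference
loss = nn.functional.mse_loss(model(dual_pol), target_full)  # placeholder for the composite loss
loss.backward()
```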

Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network

Procedia PDF Downloads 66
1941 A Model of a Non-expanding Universe

Authors: Yongbai Yin

Abstract:

We propose a non-expanding model of the universe based on the non-changing fine-structure constant and Einstein's space-time relativity theory. This model consistently explains the redshift, the apparent 'expansion', and the age of the universe without introducing the singularity and inflation issues that arise in the 'Big Bang' model. It also offers an interpretation of the unexpectedly 'accelerated expanding' universe and of the origin of the 'dark matter' mystery. It predicts that the universe began in a 'cold and peaceful' rather than an 'extremely hot' stage, which is used to explain the microwave background radiation consistently. It also predicts mathematically that galaxies could end in black holes, because in this model black holes have the same environmental conditions as those at the beginning of the universe, paving the way for a model of cyclic universes that does not violate the first law of thermodynamics.

Keywords: big bang, accelerated expanding universe, dark matter, black holes, microwave background radiation, universe modelling

Procedia PDF Downloads 8
1940 A World Map of Seabed Sediment Based on 50 Years of Knowledge

Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès

Abstract:

Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches of aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This approach had already been initiated a century earlier, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts and then sediment maps of the continental shelves of Europe and North America. The current map of ocean sediments presented here was initiated from UNESCO's general map of the deep ocean floor. This map was adapted using a unique sediment classification to represent all types of sediments: from beaches to the deep seabed and from glacial deposits to tropical sediments. To allow good visualization and to suit the different applications, only the granularity of sediments is represented. Published seabed maps are reviewed; if they are of interest, the nature of the seabed is extracted from them, transcribed into the sediment classification, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large deep-ocean hydrographic surveys, which allow very high-quality mapping of areas that until then were represented as homogeneous. The third and principal source of data comes from the integration of regional maps produced specifically for this project. These regional maps are produced using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with generalizations applied where the data are overly precise. Eighty-six regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is ongoing and yields a digital version every two years, with the integration of new maps. This article describes the choices made in terms of sediment classification, the scale of source data, and the zonation of the variability of data quality. The map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress made in seabed characterization during recent decades. Thus, the arrival of new seafloor classification systems has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. Much work remains, however, to enhance some regions that are still based on data acquired more than half a century ago.

Keywords: marine sedimentology, seabed map, sediment classification, world ocean

Procedia PDF Downloads 231
1939 Personal Exposure to Respirable Particles and Other Selected Gases among Cyclists near and Away from Busy Roads of Perth Metropolitan Area

Authors: Anu Shrestha, Krassi Rumchev, Ben Mullins, Yun Zhao, Linda Selvey

Abstract:

Cycling is often promoted as a means of reducing vehicular congestion, noise, and greenhouse gas and air pollutant emissions in urban areas. It is also endorsed as a healthy means of transportation that reduces the risk of developing a range of physical and psychological conditions. However, people who cycle regularly may not be aware that they can be exposed to high levels of vehicular air pollutants (VAP) emitted by nearby traffic and may therefore experience adverse health effects. This study highlights the current levels of ambient air pollution on different cycling routes in Perth and contributes significantly to the understanding of the health risks that cyclists may face from exposure to particulate air pollution. Methodology: This research was conducted in Perth, Western Australia, and comprised two groups of cyclists cycling near high (two routes) and low (two routes) vehicular-traffic roads, at high and low levels of exertion, during the cold and warm seasons. A sample of 123 regular non-smoking cyclists aged 20-55 who cycled at least 80 km/week was selected, comprising 100 men and 23 women; each participant chose one or more of the four routes and cycled them during the warm season, the cold season, or both. Cyclists who reported cardiovascular or other chronic health conditions (excluding asthma) were not invited into the study. Exposure to selected air pollutants was assessed through background and personal measurements, along with measurement of each participant's heart and breathing rates. Findings: According to the preliminary findings, cyclists who used routes close to high-traffic roads were exposed to higher levels of the measured air pollutants (nitrogen dioxide (NO₂) = 0.12 ppm, sulfur dioxide (SO₂) = 0.06 ppm and carbon monoxide (CO) = 0.25 ppm) than those who cycled away from busy roads. However, high concentrations of particulate air pollution were measured near one of the low-traffic routes, which we associate with its close proximity to a ferry station. Concluding Statement: In conclusion, we recommend that cycling routes be located away from high-traffic roads. It should also be considered that when a cycling route is surrounded by densely built infrastructure, pollutants can be trapped, which increases the particle counts inhaled by cyclists.

Keywords: air pollution, carbon monoxide, cyclists' health, nitrogen dioxide, nitrogen oxide, respirable particulate matter

Procedia PDF Downloads 262
1938 Bioactivity Evaluation of Cucurbitin Derived Enzymatic Hydrolysates

Authors: Ž. Vaštag, Lj. Popović, S. Popović

Abstract:

After cold pressing of pumpkin oil, the defatted oil cake (PUOC) was utilized as a raw material for the processing of bio-functional hydrolysates. In this study, the in vitro bioactivities of an alcalase hydrolysate (AH) and a pepsin hydrolysate (PH) prepared from the major pumpkin 12S globulin (cucurbitin) are compared. The hydrolysates were produced at the optimum reaction conditions (temperature, pH) for the enzymes over 60 min. The bioactivity testing included antioxidant and angiotensin I-converting enzyme inhibitory activity assays. The hydrolysates showed high potential as natural antioxidants and possibly antihypertensive agents in functional food or nutraceuticals. Additionally, preliminary studies have shown that both hydrolysates could exhibit modest α-amylase inhibitory activity, which indicates their hypoglycemic potential.

Keywords: cucurbitin, alcalase, pepsin, protein hydrolysates, in vitro bioactivity

Procedia PDF Downloads 309
1937 Denial among Women Living with Cancer: An Exploratory Study to Understand the Consequences of Cancer and the Denial Mechanism

Authors: Judith Partouche-Sebban, Saeedeh Rezaee Vessal

Abstract:

Because of the rising number of new cancer cases, especially among women, it is essential to better understand how women experience cancer in order to provide them with adapted support and care and to enhance their well-being and patient experience. Cancer is a traumatic experience in which the diagnosis, the medical treatments, and the related side effects lead to deep physical and psychological changes that may arouse considerable stress and anxiety. In order to reduce these negative emotions, women tend to use various defense mechanisms, among which denial has been identified as the mechanism most frequently used by breast cancer patients. This study aims to better understand the consequences of the experience of cancer and their link with the adoption of a denial strategy. The empirical research was conducted among female cancer survivors in France. Since the topic of this study is relatively unexplored, a qualitative methodology with open-ended interviews was employed. In total, 25 semi-directive interviews were conducted with women with different cancers, at different stages of treatment, and of different ages. A systematic inductive method was used to analyze the data. The content analysis highlighted three different denial-related behaviors among women with cancer, which serve a self-protective function. First, women who expressed high levels of anxiety confessed that they tended to completely deny the existence of their cancer immediately after the diagnosis of their illness. These women mainly exhibit many fears and a deep distrust toward the medical context and professionals. This coping mechanism is described by the patients as being unconscious. Second, other women deliberately decided to deny partial information about their cancer, whether this information was related to the stages of the illness, its emotional consequences, or its behavioral consequences. These women use this strategy as a way to avoid the reality of the illness and its impact on the different aspects of their life, as if the cancer did not exist. Third, some women tend to reinterpret and give meaning to their cancer as a way to reduce its impact on their life. To this end, they may use magical thinking, positive reframing, or reinterpretation. Because denial may lead to delays in medical treatment, this topic deserves deep investigation, especially in the context of oncology. As denial is defined as a specific defense mechanism, this study contributes to the existing literature in service marketing that focuses on emotions and emotional regulation in healthcare services, which is a crucial issue. Moreover, this study has several managerial implications for healthcare professionals who interact with patients, with a view to implementing better care and support for patients.

Keywords: cancer, coping mechanisms, denial, healthcare services

Procedia PDF Downloads 84
1936 Deep Learning Based Polarimetric SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often constrained to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both the transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings rotational invariance with respect to the geometrical orientation of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data therefore aims to take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images from hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and tested reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
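As a rough illustration of the kind of CNN-based augmentation described above, the sketch below maps hybrid polarimetric input channels to pseudo fully polarimetric channels and trains with a loss that combines a pixel-wise term with a term on a derived scattering feature. The channel counts, layer sizes, loss terms, and weights are assumptions made for illustration; they are not the authors' architecture or cost function.

```python
import torch
import torch.nn as nn

class PolAugmentCNN(nn.Module):
    """Illustrative CNN mapping hybrid-pol channels to pseudo full-pol channels."""
    def __init__(self, in_channels: int = 4, out_channels: int = 9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def composite_loss(pred, target, alpha=1.0, beta=0.1):
    """Pixel-wise term plus a term on total backscattered power, a stand-in for
    the scattering-based loss terms mentioned in the abstract."""
    pixel_term = nn.functional.l1_loss(pred, target)
    power_pred = pred.pow(2).sum(dim=1)
    power_true = target.pow(2).sum(dim=1)
    feature_term = nn.functional.l1_loss(power_pred, power_true)
    return alpha * pixel_term + beta * feature_term

# Toy training step on random tensors standing in for co-registered patches.
model = PolAugmentCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
hybrid = torch.randn(8, 4, 64, 64)  # assumed 4 real-valued hybrid-pol channels
full = torch.randn(8, 9, 64, 64)    # assumed 9 real-valued full-pol channels
loss = composite_loss(model(hybrid), full)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice, the complex-valued covariance elements would be split into real channels, and the weighting of the loss terms would be tuned to the polarimetric features of interest.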

Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry

Procedia PDF Downloads 89