Search results for: Deep Shikha
1436 Comparison Between Bispectral Index Guided Anesthesia and Standard Anesthesia Care in Middle Age Adult Patients Undergoing Modified Radical Mastectomy
Authors: Itee Chowdhury, Shikha Modi
Abstract:
Introduction: Cancer is beginning to outpace cardiovascular disease as a cause of death, affecting every major organ system with profound implications for perioperative management. Breast cancer is the most common cancer in women in India, accounting for 27% of all cancers. Small changes in the analgesic management of cancer patients can greatly improve prognosis and reduce the risk of postsurgical cancer recurrence, as opioid-based analgesia has a deleterious effect on cancer outcomes. A shortened postsurgical recovery time facilitates an earlier return to intended oncological therapy, maximising the chance of successful treatment. The literature reveals that the role of BIS since FDA approval has been assessed in various types of surgeries, but clinical data on its use in oncosurgical patients are scanty. Our study focuses on the role of BIS-guided anaesthesia for breast cancer surgery patients. Methods: A prospective randomized controlled study in patients aged 36-55 years scheduled for modified radical mastectomy was conducted, with 51 patients in each group who met the inclusion and exclusion criteria; randomization was done by the sealed envelope technique. In the BIS-guided anaesthesia group (group B), sevoflurane was titrated to keep the BIS value at 45-60, and thereafter, if the patient showed hypertension/tachycardia, an opioid was given. In the standard anaesthesia care group (group C), sevoflurane was titrated to keep MAC in the range of 0.8-1, and fentanyl was given if the patient showed hypertension/tachycardia. Intraoperative opioid consumption was calculated. Postsurgery recovery characteristics, including the Aldrete score, were assessed. Patients were questioned about pain, PONV, and recall of intraoperative events. A comparison of age, BMI, ASA, recovery characteristics, opioid consumption, and VAS score was made using the non-parametric Mann-Whitney U test. Categorical data such as intraoperative awareness of surgery and PONV were studied using the Chi-square test. A comparison of heart rate and MAP was made by an independent-samples t-test. The ggplot2 package was used to show the trend of the BIS index over all intraoperative time points for each patient. For the statistical test of significance, the cut-off p-value was set at <0.05. Conclusions: BIS monitoring led to reduced opioid consumption and early recovery from anaesthesia in breast cancer patients undergoing MRM, resulting in less postoperative nausea and vomiting and less pain intensity in the immediate postoperative period, without any recall of intraoperative events. Thus, the use of a Bispectral index monitor allows the tailoring of anaesthesia administration with a good outcome.
Keywords: bispectral index, depth of anaesthesia, recovery, opioid consumption
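For readers wanting to reproduce this style of analysis, the sketch below runs the three described tests in Python with scipy; the arrays are illustrative placeholders, not the study's data.

```python
# Sketch of the statistical comparisons described above (illustrative data,
# not the study's). Requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical intraoperative opioid consumption in group B (BIS) and group C
opioid_b = rng.normal(80, 15, 51)
opioid_c = rng.normal(110, 20, 51)

# Non-parametric comparison of opioid consumption (Mann-Whitney U)
u_stat, p_mwu = stats.mannwhitneyu(opioid_b, opioid_c, alternative="two-sided")

# Categorical outcome such as PONV (Chi-square on a 2x2 contingency table)
ponv_table = np.array([[6, 45], [15, 36]])  # rows: groups, cols: PONV yes/no
chi2, p_chi2, dof, _ = stats.chi2_contingency(ponv_table)

# Haemodynamic variables such as MAP (independent-samples t-test)
map_b = rng.normal(85, 8, 51)
map_c = rng.normal(88, 9, 51)
t_stat, p_t = stats.ttest_ind(map_b, map_c)

alpha = 0.05  # cut-off p-value used in the study
for name, p in [("Mann-Whitney U", p_mwu), ("Chi-square", p_chi2), ("t-test", p_t)]:
    print(f"{name}: p = {p:.4f} ({'significant' if p < alpha else 'not significant'})")
```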
Procedia PDF Downloads 127
1435 Evaluation of the Effect of Turbulence Caused by the Oscillation Grid on Oil Spill in Water Column
Authors: Mohammad Ghiasvand, Babak Khorsandi, Morteza Kolahdoozan
Abstract:
Under the influence of waves, oil in the sea is subject to vertical scattering in the water column. How oil disperses in the water column is among the least understood of the processes affecting oil in the marine environment, which highlights the need for research and study in this field. Therefore, this study investigates the distribution of oil in the water column in a turbulent environment with zero mean velocity. The lack of laboratory results for analysing the distribution of petroleum pollutants in deep water, both for understanding the physics of the phenomenon and for calibrating numerical models, led to the development of laboratory models in this research. In line with the aim of the present study, which is to investigate the distribution of oil in homogeneous and isotropic turbulence generated by an oscillating grid, crude oil was poured onto the water surface after ideal conditions were reached, and its turbulence-driven distribution through the water column was investigated. In this study, all experimental processes were implemented and used for the first time in Iran, and the study of oil diffusion in the water column was considered one of the key aspects of pollutant diffusion in the oscillating grid environment. Finally, the required oscillation velocities were measured at depths of 10, 15, 20, and 25 cm from the water surface and used in the analysis of oil diffusion in terms of turbulence parameters. The results showed that, with the characteristics of the present system in its two modes (static, and grid motion at a frequency of 0.8 Hz), oil diffusion at the four mentioned depths at 0.8 Hz was 26.18, 31.57, 37.5, and 50% greater, from top to bottom, than in the static mode. Also, 2.5 minutes after the oil spill at a frequency of 0.8 Hz, oil distribution at the mentioned depths increased by 49, 61.5, 85, and 146.1%, respectively, compared to the base (static) state.
Keywords: homogeneous and isotropic turbulence, oil distribution, oscillating grid, oil spill
Procedia PDF Downloads 75
1434 Development of a Novel Clinical Screening Tool, Using the BSGE Pain Questionnaire, Clinical Examination and Ultrasound to Predict the Severity of Endometriosis Prior to Laparoscopic Surgery
Authors: Marlin Mubarak
Abstract:
Background: Endometriosis is a complex disabling disease affecting mainly young females in the reproductive period. The aim of this project is to generate a diagnostic model to predict the severity and stage of endometriosis prior to laparoscopic surgery. This will help to improve the pre-operative diagnostic accuracy of stage 3 & 4 endometriosis and, as a result, refer relevant women to a specialist centre for complex laparoscopic surgery. The model is based on the British Society of Gynaecological Endoscopy (BSGE) pain questionnaire, clinical examination and ultrasound scan. Design: This is a prospective, observational study in which women completed the BSGE pain questionnaire, a BSGE requirement. Also, as part of the routine preoperative assessment, patients had a routine ultrasound scan, and when recto-vaginal or deep infiltrating endometriosis was suspected, an MRI was performed. Setting: Luton & Dunstable University Hospital. Patients: Symptomatic women (n = 56) scheduled for laparoscopy due to pelvic pain. The age ranged between 17 and 52 years (mean 33.8 years, SD 8.7 years). Interventions: None outside the recognised and established endometriosis centre protocol set up by the BSGE. Main Outcome Measure(s): Sensitivity and specificity of endometriosis diagnosis predicted by symptoms based on the BSGE pain questionnaire, clinical examination and imaging. Findings: The prevalence of diagnosed endometriosis was calculated to be 76.8%, and the prevalence of advanced stage was 55.4%. Deep infiltrating endometriosis in various locations was diagnosed in 32/56 women (57.1%), and some had DIE involving several locations. Logistic regression analysis was performed on 36 clinical variables to create a simple clinical prediction model. After creating the scoring system using variables with p < 0.05, the model was applied to the whole dataset. The sensitivity was 83.87% and the specificity 96%. The positive likelihood ratio was 20.97 and the negative likelihood ratio 0.17, indicating that the model has good predictive value and could be useful in predicting advanced-stage endometriosis. Conclusions: This is a hypothesis-generating project with one operator, but future proposed research would provide validation of the model and establish its usefulness in the general setting. Predictive tools based on such a model could help organise the appropriate investigations in clinical practice, reduce the risks associated with surgery and improve outcomes. It could be of value for future research to standardise the assessment of women presenting with pelvic pain. The model needs further testing in a general setting to assess whether the initial results are reproducible.
Keywords: deep endometriosis, endometriosis, minimally invasive, MRI, ultrasound
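As a quick check of the reported diagnostic statistics, the snippet below recomputes the likelihood ratios from the stated sensitivity and specificity using their standard definitions.

```python
# Recompute likelihood ratios from the reported sensitivity and specificity.
sensitivity = 0.8387
specificity = 0.96

lr_positive = sensitivity / (1 - specificity)   # sens / (1 - spec)
lr_negative = (1 - sensitivity) / specificity   # (1 - sens) / spec

print(f"LR+ = {lr_positive:.2f}")  # 20.97, matching the reported value
print(f"LR- = {lr_negative:.2f}")  # 0.17, matching the reported value
```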
Procedia PDF Downloads 353
1433 Nonlinear Finite Element Modeling of Deep Beam Resting on Linear and Nonlinear Random Soil
Authors: M. Seguini, D. Nedjar
Abstract:
An accurate nonlinear analysis of a deep beam resting on elastic perfectly plastic soil is carried out in this study. In fact, nonlinear finite element modeling for large deflection and moderate rotation of an Euler-Bernoulli beam resting on linear and nonlinear random soil is investigated. The geometric nonlinear analysis of the beam is based on the theory of von Kármán, where the Newton-Raphson incremental iteration method is implemented in a Matlab code to solve the nonlinear equation of the soil-beam interaction system. Two analyses (deterministic and probabilistic) are proposed to verify the accuracy and efficiency of the proposed model, where the theory of the local average based on the Monte Carlo approach is used to analyze the effect of the spatial variability of the soil properties on the nonlinear beam response. The effects of six main parameters are investigated: the external load, the length of the beam, the coefficient of subgrade reaction of the soil, the Young's modulus of the beam, and the coefficient of variation and correlation length of the soil's coefficient of subgrade reaction. A comparison between the beam resting on linear and nonlinear soil models is presented for different beam lengths and external loads. Numerical results have been obtained for the combination of the geometric nonlinearity of the beam and the material nonlinearity of the random soil. This comparison highlights the need to include the material nonlinearity and spatial variability of the soil in the geometric nonlinear analysis when the beam undergoes large deflections.
Keywords: finite element method, geometric nonlinearity, material nonlinearity, soil-structure interaction, spatial variability
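The Newton-Raphson incremental iteration the authors implement in Matlab can be sketched generically. In the Python sketch below, a toy cubic foundation term stands in for the assembled nonlinear soil-beam equations; all stiffness and load values are illustrative assumptions.

```python
# Minimal Newton-Raphson sketch for a nonlinear system R(u) = K_lin u + f_nl(u) - F = 0,
# standing in for the assembled soil-beam interaction equations (values illustrative).
import numpy as np

def residual(u, K_lin, F, k_nl):
    # Cubic foundation term mimics a nonlinear soil reaction.
    return K_lin @ u + k_nl * u**3 - F

def tangent(u, K_lin, k_nl):
    # Consistent tangent stiffness: d(residual)/du
    return K_lin + np.diag(3.0 * k_nl * u**2)

def newton_raphson(K_lin, F, k_nl, tol=1e-10, max_iter=50):
    u = np.zeros_like(F)
    for it in range(max_iter):
        R = residual(u, K_lin, F, k_nl)
        if np.linalg.norm(R) < tol:
            return u, it
        du = np.linalg.solve(tangent(u, K_lin, k_nl), -R)
        u += du
    raise RuntimeError("Newton-Raphson did not converge")

K_lin = np.array([[2.0, -1.0], [-1.0, 2.0]])  # toy linear stiffness matrix
F = np.array([1.0, 0.5])                      # toy external load vector
u, iters = newton_raphson(K_lin, F, k_nl=0.8)
print(f"u = {u}, converged in {iters} iterations")
```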
Procedia PDF Downloads 414
1432 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments
Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz
Abstract:
Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological predictions at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, north of Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, where optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) metric values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that a hyperparameter related to the length of the input sequence contributes most significantly to the prediction performance. The findings suggest that input sequence lengths have a crucial impact on model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter based on each catchment's characteristics. This aligns with the well-known "uniqueness of the place" paradigm. In prior research, tuning the length of the input sequence of LSTMs has received limited attention in the field of streamflow prediction. Initially, it was set to 365 days to capture a full annual water cycle; later, limited systematic tuning using grid search suggested a modification to 270 days. However, despite the significance of this hyperparameter in hydrological predictions, studies have usually overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.
Keywords: LSTMs, streamflow, hyperparameters, hydrology
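The two goodness-of-fit metrics quoted here have standard definitions, sketched below; the arrays are placeholders for observed and simulated hourly streamflow, not data from the 40 catchments.

```python
# Standard Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) efficiency metrics.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    r = np.corrcoef(obs, sim)[0, 1]   # linear correlation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Placeholder hourly streamflow series (m³/s), for illustration only.
obs = np.array([1.2, 3.4, 8.9, 5.1, 2.2, 1.5])
sim = np.array([1.0, 3.9, 8.2, 5.5, 2.6, 1.4])
print(f"NSE = {nse(obs, sim):.3f}, KGE = {kge(obs, sim):.3f}")
```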
Procedia PDF Downloads 69
1431 Response of Diaphragmatic Excursion to Inspiratory Muscle Trainer Post Thoracotomy
Authors: H. M. Haytham, E. A. Azza, E. S. Mohamed, E. G. Nesreen
Abstract:
Thoracotomy is a major surgery with serious pulmonary complications, so the purpose of this study was to determine the response of diaphragmatic excursion to an inspiratory muscle trainer post thoracotomy. Thirty patients of both sexes (16 men and 14 women), aged from 20 to 40 years old, who had undergone thoracotomy participated in this study. The practical work was done in the cardiothoracic department of Kasr-El-Aini Hospital, Faculty of Medicine, starting 3 days postoperatively. Patients were assigned into two groups: group A (study group) included 15 patients (8 men and 7 women) who received inspiratory muscle training using an inspiratory muscle trainer for 20 minutes plus routine chest physiotherapy (deep breathing, cough and early ambulation) twice daily, 3 days per week for one month. Group B (control group) included 15 patients (8 men and 7 women) who received the routine chest physiotherapy only (deep breathing, cough and early ambulation) twice daily, 3 days per week for one month. Ultrasonography was used to evaluate the changes in diaphragmatic excursion before and after the training program. Statistical analysis revealed a significantly greater increase in diaphragmatic excursion in the study group (59.52%) than in the control group (18.66%) after use of the inspiratory muscle trainer postoperatively. It was concluded that the inspiratory muscle training device increases diaphragmatic excursion in patients post thoracotomy by improving inspiratory muscle strength and the mechanics of breathing, and that the inspiratory muscle trainer can be used as a method of physical therapy rehabilitation to reduce postoperative pulmonary complications post thoracotomy.
Keywords: diaphragmatic excursion, inspiratory muscle trainer, ultrasonography, thoracotomy
Procedia PDF Downloads 319
1430 Deep Learning for SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and tested reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods.
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network
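The abstract does not publish the network's architecture, so the PyTorch sketch below only illustrates the general setup it describes: a small CNN mapping two hybrid-polarimetric channels to the missing full-polarimetric channels, trained with a loss composed of several weighted terms. The layer sizes, the specific loss terms, and their weights are assumptions, not the paper's design.

```python
# Illustrative PyTorch sketch of CNN-based polarimetric augmentation:
# maps 2 hybrid-pol input channels to 4 full-pol output channels.
# Architecture and composite-loss weights are assumptions, not the paper's.
import torch
import torch.nn as nn

class PolAugmentCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 4, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def composite_loss(pred, target, w_pix=1.0, w_span=0.1):
    # Pixel-wise fidelity plus a term on total backscattered power (span),
    # standing in for the paper's combination of scattering-property terms.
    pixel = nn.functional.mse_loss(pred, target)
    span = nn.functional.mse_loss(pred.pow(2).sum(dim=1), target.pow(2).sum(dim=1))
    return w_pix * pixel + w_span * span

model = PolAugmentCNN()
x = torch.randn(8, 2, 64, 64)   # batch of hybrid-pol patches (toy data)
y = torch.randn(8, 4, 64, 64)   # corresponding full-pol patches (toy data)
loss = composite_loss(model(x), y)
loss.backward()
print(float(loss))
```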
Procedia PDF Downloads 67
1429 A World Map of Seabed Sediment Based on 50 Years of Knowledge
Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès
Abstract:
Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches of aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This original approach had already been initiated one century ago, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments of the French coasts and then sediment maps of the continental shelves of Europe and North America. The current map of ocean sediments presented here was initiated from UNESCO's general map of the deep ocean floor. This map was adapted using a unique sediment classification to present all types of sediments: from beaches to the deep seabed and from glacial deposits to tropical sediments. In order to allow good visualization and to be adapted to the different applications, only the granularity of sediments is represented. Published seabed maps are studied; if they are of interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large hydrographic surveys of the deep ocean. These allow very high-quality mapping of areas that until then were represented as homogeneous. The third and principal source of data comes from the integration of regional maps produced specifically for this project. These regional maps are produced using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with generalizations performed in the case of over-precise data. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is ongoing and yields a digital version every two years, with the integration of new maps. This article describes the choices made in terms of sediment classification, the scale of source data, and the zonation of the variability of the quality. This map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million punctual and surface items of data, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress in knowledge made in the field of seabed characterization during the last decades. Thus, the arrival of new classification systems for the seafloor has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. But there is still a lot of work to do to enhance some regions, which are still based on data acquired more than half a century ago.
Keywords: marine sedimentology, seabed map, sediment classification, world ocean
Procedia PDF Downloads 232
1428 Denial among Women Living with Cancer: An Exploratory Study to Understand the Consequences of Cancer and the Denial Mechanism
Authors: Judith Partouche-Sebban, Saeedeh Rezaee Vessal
Abstract:
Because of the rising number of new cases of cancer, especially among women, it is essential to better understand how women experience cancer in order to provide them with adapted support and care and enhance their well-being and patient experience. Cancer stands for a traumatic experience in which the diagnosis, the medical treatments, and the related side effects lead to deep physical and psychological changes that may arouse considerable stress and anxiety. In order to reduce these negative emotions, women tend to use various defense mechanisms, among which denial has been identified as the mechanism most frequently used by breast cancer patients. This study aims to better understand the consequences of the experience of cancer and their link with the adoption of a denial strategy. The empirical research was done among female cancer survivors in France. Since the topic of this study is relatively unexplored, a qualitative methodology and open-ended interviews were employed. In total, 25 semi-directive interviews were conducted with women with different cancers, at different stages of treatment, and of different ages. A systematic inductive method was used to analyze the data. The content analysis highlighted three different denial-related behaviors among women with cancer, each serving a self-protective function. First, women who expressed high levels of anxiety confessed that they tended to completely deny the existence of their cancer immediately after the diagnosis of their illness. These women mainly exhibited many fears and a deep distrust toward the medical context and professionals. This coping mechanism is described by the patients as being unconscious. Second, other women deliberately decided to deny partial information about their cancer, whether this information was related to the stage of the illness or to its emotional or behavioral consequences. These women used this strategy as a way to avoid the reality of the illness and its impact on the different aspects of their life, as if the cancer did not exist. Third, some women tended to reinterpret and give meaning to their cancer as a way to reduce its impact on their life. To this end, they may use magical thinking, positive reframing, or reinterpretation. Because denial may lead to delays in medical treatment, this topic deserves deep investigation, especially in the context of oncology. As denial is a specific defense mechanism, this study contributes to the existing literature in service marketing, which focuses on emotions and emotional regulation in healthcare services, a crucial issue. Moreover, this study has several managerial implications for healthcare professionals who interact with patients, in order to implement better care and support for patients.
Keywords: cancer, coping mechanisms, denial, healthcare services
Procedia PDF Downloads 85
1427 Deep Learning Based Polarimetric SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and tested reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods.
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry
Procedia PDF Downloads 90
1426 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution, which can achieve 1 Mframe/s, and a high dynamic range (120 dB). However, the point that can contribute most to low energy consumption is its sparsity; to be more specific, this sensor only captures the pixels that have an intensity change. In other words, there is no signal in areas without any intensity change. That is to say, this sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data can be removed. On the other side of these advantages, the data are difficult to handle because the data format is completely different from RGB images: acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a time stamp; it does not include intensity such as RGB values. Therefore, as existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. In order to solve the difficulties caused by the data format differences, most of the prior art builds frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition purposes. However, even though the data can be fed in this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of an RGB pixel value, it is apparent that polarity information is not rich enough. Considering this context, we proposed to use the timestamp information as the data representation that is fed to deep learning. Concretely, we first build frame data divided by a certain time period, then assign an intensity value according to the timestamp in each frame; for example, a high value is given to a recent signal. We expected that this data representation could capture the features, especially of moving objects, because the timestamp represents the movement direction and speed. Using this proposed method, we built our own dataset with a DVS fixed on a parked car in order to develop an application for a surveillance system that can detect persons around the car. We think the DVS is one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a non-dynamic situation. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours and feeds polarity information to a CNN. Then, we measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points greater than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
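A minimal numpy sketch of the proposed representation: events are binned into frames, and each pixel's intensity is set by how recent its latest event is within the frame window. The linear decay scheme is one plausible reading of "a high value is given to a recent signal", not the paper's exact formula.

```python
# Timestamp-based frame representation for DVS events (sketch).
# Each event is (x, y, polarity, timestamp); intensity encodes recency.
import numpy as np

def events_to_timestamp_frame(events, width, height, t_start, t_end):
    """Linearly map each pixel's most recent event time in [t_start, t_end) to (0, 1]."""
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, pol, ts in events:
        if t_start <= ts < t_end:
            recency = (ts - t_start) / (t_end - t_start)  # newer -> closer to 1
            frame[y, x] = max(frame[y, x], recency)       # latest event wins
    return frame

# Toy event stream: (x, y, polarity, timestamp in microseconds)
events = [(3, 2, +1, 100), (3, 2, -1, 900), (7, 5, +1, 500)]
frame = events_to_timestamp_frame(events, width=10, height=8, t_start=0, t_end=1000)
print(frame[2, 3], frame[5, 7])  # 0.9 (most recent event at that pixel) and 0.5
```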
Procedia PDF Downloads 97
1425 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments
Authors: Tahani Aljohani, Jialin Yu, Alexandra. I. Cristea
Abstract:
The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking a learner directly is potentially disruptive and often ignored by learners. Especially in the booming realm of Massive Open Online Course (MOOC) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners' demographic characteristics by proposing an approach using linguistically motivated deep learning architectures for learner profiling, particularly targeting gender prediction on a FutureLearn MOOC platform. Additionally, we tackle here the difficult problem of predicting the gender of learners based on their comments only, which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, considering sentences as sequences. However, human language also has structure. In this research, rather than considering sentences as plain sequences, we hypothesise that higher semantic- and syntactic-level sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against other bleeding-edge models which take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN) and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore using different word-level encoding functions. We have implemented these methods on our MOOC dataset, as well as on a public sentiment analysis dataset that is further used to cross-examine the models' results.
Keywords: deep learning, data mining, gender prediction, MOOCs
Procedia PDF Downloads 147
1424 Oscillating Water Column Wave Energy Converter with Deep Water Reactance
Authors: William C. Alexander
Abstract:
The oscillating water column (OWC) wave energy converter (WEC) with deep water reactance (DWR) consists of a large hollow sphere filled with seawater at the base, referred to as the 'stabilizer', a hollow cylinder at the top of the device, said cylinder having a bottom open to the sea and a sealed top save for an orifice which leads to an air turbine, and a long, narrow rod connecting said stabilizer with said cylinder. A small amount of ballast at the bottom of the stabilizer and a small amount of floatation in the cylinder keep the device upright in the sea. The floatation is set such that the mean water level is nominally halfway up the cylinder. The entire device is loosely moored to the seabed to keep it from drifting away. In the presence of ocean waves, seawater will move up and down within the cylinder, producing the 'oscillating water column'. This gives rise to air pressure within the cylinder alternating between positive and negative gauge pressure, which in turn causes air to alternately leave and enter the cylinder through said top-cover-situated orifice. An air turbine situated within or immediately adjacent to said orifice converts the oscillating airflow into electric power for transport to shore or elsewhere by electric power cable. Said oscillating air pressure produces large up and down forces on the cylinder. Said large forces are opposed through the rod by the large mass of water retained within the stabilizer, which is located deep enough to be mostly free of any wave influence and which provides the deep water reactance. The cylinder and stabilizer form a spring-mass system which has a vertical (heave) resonant frequency. The diameter of the cylinder largely determines the power rating of the device, while the size of the stabilizer (and the water mass within it) determines said resonant frequency. Said frequency is chosen to be on the lower end of the wave frequency spectrum to maximize the average power output of the device over a long span of time (such as a year). The upper portion of the device (the cylinder) moves laterally (surge) with the waves. This motion is accommodated with minimal loading on said rod by having the stabilizer shaped like a sphere, allowing the entire device to rotate about the center of the stabilizer without rotating the seawater within the stabilizer. A full-scale device of this type may have the following dimensions: the cylinder may be 16 meters in diameter and 30 meters high, the stabilizer 25 meters in diameter, and the rod 55 meters long. Simulations predict that this will produce 1,400 kW in waves of 3.5-meter height and 12-second period, with a relatively flat power curve between 5- and 16-second wave periods, as is suitable for an open-ocean location. This is nominally 10 times higher power than similar-sized WEC spar buoys as reported in the literature, and the device is projected to have only 5% of the mass per unit power of other OWC converters.
Keywords: oscillating water column, wave energy converter, spar buoy, stabilizer
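The heave resonance the design relies on can be sanity-checked against the quoted dimensions. The sketch below assumes a seawater density of 1025 kg/m³, takes the water retained in the 25 m stabilizer as the dominant oscillating mass, and derives stiffness from the 16 m cylinder's waterplane; it deliberately ignores added mass and the device's structural mass, so it is a rough estimate only.

```python
# Back-of-the-envelope heave resonance check for the quoted dimensions.
# Assumptions: seawater density 1025 kg/m^3; oscillating mass dominated by
# the water in the stabilizer sphere; stiffness from the cylinder waterplane.
import math

rho, g = 1025.0, 9.81
d_cyl, d_stab = 16.0, 25.0                          # metres, from the abstract

area_waterplane = math.pi * (d_cyl / 2) ** 2        # cylinder waterplane area
mass = rho * (4 / 3) * math.pi * (d_stab / 2) ** 3  # water mass in stabilizer

k = rho * g * area_waterplane                       # hydrostatic heave stiffness
omega = math.sqrt(k / mass)                         # natural frequency (rad/s)
period = 2 * math.pi / omega

print(f"heave natural period ≈ {period:.1f} s")     # ≈ 13 s, near the 12 s design wave
```

Under these simplifications the natural period lands close to the 12-second design wave period quoted above, which is consistent with the abstract's claim that the stabilizer size sets the resonance.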
Procedia PDF Downloads 106
1423 Recent Developments in the Application of Deep Learning to Stock Market Prediction
Authors: Shraddha Jain Sharma, Ratnalata Gupta
Abstract:
Predicting stock movements in the financial market is both difficult and rewarding. Analysts and academics are increasingly using advanced approaches such as machine learning techniques to anticipate stock price patterns, thanks to the expanding capacity of computing and the recent advent of graphics processing units and tensor processing units. Stock market prediction is a type of time series prediction that is incredibly difficult, since stock prices are influenced by a variety of financial, socioeconomic, and political factors. Furthermore, even minor mistakes in stock market price forecasts can result in significant losses for companies that employ the findings of stock market price prediction for financial analysis and investment. Soft computing techniques are increasingly being employed for stock market prediction due to their better accuracy compared with traditional statistical methodologies. The proposed research looks at the need for soft computing techniques in stock market prediction, the numerous soft computing approaches that are important to the field, past work in the area with its prominent features, and the significant problems the area involves. For constructing a predictive model, the major focus is on neural networks and fuzzy logic. The stock market is extremely unpredictable, and it is unquestionably tough to predict it correctly based on a given set of characteristics. This study provides a complete overview of the numerous strategies investigated for high-accuracy prediction, with a focus on the most important characteristics.
Keywords: stock market prediction, artificial intelligence, artificial neural networks, fuzzy logic, accuracy, deep learning, machine learning, stock price, trading volume
Procedia PDF Downloads 90
1422 Efficient L-Xylulose Production Using Whole-Cell Biocatalyst With NAD+ Regeneration System Through Co-Expression of Xylitol Dehydrogenase and NADH Oxidase in Escherichia Coli
Authors: Mesfin Angaw Tesfay
Abstract:
L-Xylulose is a potentially valuable rare sugar used as a starting material for antiviral and anticancer drug development in the pharmaceutical industry. L-Xylulose exists at very low concentrations in nature and has to be synthesized from cheap starting materials such as xylitol through biotechnological approaches. In this study, cofactor engineering and deep eutectic solvents were applied to improve the efficiency of L-xylulose production from xylitol. A water-forming NAD+-regenerating enzyme (NADH oxidase) from Streptococcus mutans ATCC 25175 was introduced into E. coli together with the xylitol-4-dehydrogenase (XDH) of Pantoea ananatis, resulting in recombinant cells harboring the vector pETDuet-xdh-SmNox. Further, three deep eutectic solvents (DESs), choline chloride/glycerol (ChCl/G), choline chloride/urea (ChCl/U), and choline chloride/ethylene glycol (ChCl/EG), were employed to facilitate the conversion of xylitol to L-xylulose. The co-expression system exhibited optimal activity at a temperature of 37 ℃ and pH 8.5, and the addition of Mg2+ enhanced the catalytic activity 1.19-fold. Co-expression of NADH oxidase with the XDH enzyme increased the L-xylulose concentration and productivity from xylitol, as well as the intracellular NAD+ concentration. Two of the DESs used (ChCl/U and ChCl/EG) showed positive effects on product yield, while ChCl/G had an inhibiting effect. The optimum concentration of ChCl/U was 2.5%, which increased the L-xylulose yield compared to the control without DES. In a 1 L fermenter, the final concentration and productivity of L-xylulose from 50 g/L of xylitol reached 48.45 g/L and 2.42 g/L·h, respectively, the highest reported to date. Overall, this study presents a suitable approach for large-scale production of L-xylulose from xylitol using the engineered E. coli cells.
Keywords: xylitol-4-dehydrogenase, NADH oxidase, L-xylulose, xylitol, co-expression, DESs
Procedia PDF Downloads 23
1421 Reducing the Imbalance Penalty Through Artificial Intelligence Methods Geothermal Production Forecasting: A Case Study for Turkey
Authors: Hayriye Anıl, Görkem Kar
Abstract:
In addition to being rich in renewable energy resources, Turkey is one of the countries that promise potential in geothermal energy production, with its high installed power, cheapness, and sustainability. Increasing imbalance penalties become an economic burden for organizations, since geothermal generation plants cannot maintain the balance of supply and demand due to the inadequacy of the production forecasts given in the day-ahead market. A better production forecast reduces the imbalance penalties of market participants and provides a better balance in the day-ahead market. In this study, using machine learning, deep learning, and time series methods, the total generation of the power plants belonging to Zorlu Natural Electricity Generation, which has a high installed capacity in terms of geothermal, was estimated for the first one and two weeks of March; the imbalance penalties were then calculated from these estimates and compared with the real values. These modeling operations were carried out on two datasets: the basic dataset, and a dataset created by extracting new features from it with the feature engineering method. According to the results, Support Vector Regression, among the traditional machine learning models, outperformed the other models and exhibited the best performance. In addition, the estimation results on the feature-engineered dataset showed lower error rates than those on the basic dataset. It was concluded that the estimated imbalance penalty calculated for the selected organization is lower than the actual imbalance penalty, an optimal and profitable outcome.
Keywords: machine learning, deep learning, time series models, feature engineering, geothermal energy production forecasting
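As a hedged illustration of the winning approach, the sketch below fits scikit-learn's Support Vector Regression on lag features engineered from a synthetic generation series; the feature set, hyperparameters, and data are assumptions, not those of the study.

```python
# SVR on lag-based features for hour-ahead generation forecasting (sketch).
# Feature choices, hyperparameters, and data are illustrative, not the study's.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Synthetic hourly generation with a daily cycle plus noise (arbitrary units).
gen = 80 + 5 * np.sin(np.arange(2000) * 2 * np.pi / 24) + rng.normal(0, 1, 2000)

# Feature engineering: the previous 24 hourly values predict the next hour.
lags = 24
X = np.stack([gen[i : i + lags] for i in range(len(gen) - lags)])
y = gen[lags:]

split = int(0.9 * len(X))
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"test MAE = {mae:.2f}")  # forecast errors drive the imbalance penalty
```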
Procedia PDF Downloads 110
1420 Multi-Labeled Aromatic Medicinal Plant Image Classification Using Deep Learning
Authors: Tsega Asresa, Getahun Tigistu, Melaku Bayih
Abstract:
Computer vision is a subfield of artificial intelligence that allows computers and systems to extract meaning from digital images and video. It is used in a wide range of fields, including self-driving cars, video surveillance, medical diagnosis, manufacturing, law, agriculture, quality control, health care, facial recognition, and military applications. Aromatic medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, essential oils, decoration, cleaning, and other natural health products for therapeutic and aromatic culinary purposes. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but are also exported, earning valuable foreign currency. In Ethiopia, there is a lack of technologies for the classification and identification of aromatic medicinal plant parts and the disease types cured by them. Farmers, industry personnel, academics, and pharmacists find it difficult to identify plant parts and the disease types cured by plants before ingredient extraction in the laboratory. Manual plant identification is a time-consuming, labor-intensive, and lengthy process. To alleviate these challenges, only a few studies have been conducted in the area. One way to overcome these problems is to develop a deep learning model for efficient identification of aromatic medicinal plant parts with their corresponding disease types. The objective of the proposed study is to identify aromatic medicinal plant parts and classify the disease types they cure using computer vision technology. Therefore, this research proposes a model for the classification of aromatic medicinal plant parts and their disease types by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants; leaves are the most widely used parts of plants, besides roots, flowers, fruits, and latex. For this study, the researchers used RGB leaf images with a size of 128×128×3. The researchers trained five cutting-edge models: a convolutional neural network, Inception V3, Residual Neural Network, Mobile Network, and Visual Geometry Group. Those models were chosen after a comprehensive review of the best-performing models. An 80/20 percentage split is used to evaluate the models, and classification metrics are used to compare them. The pre-trained Inception V3 model performs best, with training and validation accuracies of 99.8% and 98.7%, respectively.
Keywords: aromatic medicinal plant, computer vision, convolutional neural network, deep learning, plant classification, residual neural network
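A minimal Keras sketch of the transfer-learning setup that performed best: a pre-trained Inception V3 backbone on 128×128×3 leaf images. The class count, head layers, and training settings are assumptions, since the abstract does not specify them.

```python
# Transfer learning with a pre-trained Inception V3 backbone (sketch).
# Class count, head layers, and training settings are illustrative.
import tensorflow as tf

NUM_CLASSES = 10  # assumed number of plant-part / disease-type labels

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(128, 128, 3))
base.trainable = False  # freeze the pre-trained convolutional features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # 80/20 split as in the study
```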
Procedia PDF Downloads 186
1419 An Integrated Label Propagation Network for Structural Condition Assessment
Authors: Qingsong Xiong, Cheng Yuan, Qingzhao Kong, Haibei Xiong
Abstract:
Deep-learning-driven approaches based on vibration responses have attracted greater attention in rapid structural condition assessment, while obtaining sufficient measured training data with corresponding labels is relatively costly and often inaccessible in practical engineering. This study proposes an integrated label propagation network for structural condition assessment, which is able to diffuse labels from continuously generated measurements of the intact structure to unlabeled measurements from damage scenarios. The integrated network combines damage-sensitive feature extraction by a deep autoencoder with pseudo-label propagation by optimized fuzzy clustering, whose architecture and mechanism are elaborated. With a sophisticated network design and specific strategies for improving performance, the present network extends the strengths of self-supervised representation learning, unsupervised fuzzy clustering, and supervised classification into an integrated approach for assessing damage conditions. Both numerical simulations and full-scale laboratory shaking table tests of a two-story building structure were conducted to validate its capability of detecting post-earthquake damage. The identification accuracy of the present network was 0.95 in the numerical validations and, on average, 0.86 in the laboratory case studies. It should be noted that the whole training procedure of all models involved in the network does not rely upon any labeled data from damage scenarios, but only on several samples of the intact structure, which indicates a significant advantage in model adaptability and feasible applicability in practice.
Keywords: autoencoder, condition assessment, fuzzy clustering, label propagation
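The sketch below mirrors the shape of this pipeline with off-the-shelf parts: learned features feed a semi-supervised label diffusion step. scikit-learn's LabelPropagation stands in for the paper's optimized fuzzy clustering, and the seed damage labels are an added assumption (the paper trains on intact-structure labels only), so treat this as an analogy rather than a reproduction.

```python
# Pipeline-shaped sketch: learned features + semi-supervised label diffusion.
# sklearn's LabelPropagation is a stand-in for the paper's optimized fuzzy
# clustering; data, feature dimension, and seed labels are illustrative.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

rng = np.random.default_rng(0)
# Pretend these are autoencoder bottleneck features of vibration records.
features_intact = rng.normal(0.0, 1.0, (100, 8))
features_damaged = rng.normal(1.5, 1.0, (100, 8))
X = np.vstack([features_intact, features_damaged])

# Intact measurements carry labels (0); damage scenarios are unlabeled (-1),
# except a handful of assumed seed labels (1) to anchor the propagation.
y = np.full(len(X), -1)
y[:100] = 0
y[100:105] = 1

prop = LabelPropagation(kernel="rbf", gamma=0.5)
prop.fit(X, y)
print("propagated damage labels:", (prop.transduction_ == 1).sum())
```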
Procedia PDF Downloads 97
1418 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of more agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To engage in this search, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions by area size and perimeter of the crystals. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set. Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since its performance holds under significant changes in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, thus suggesting that the model provides high-performance measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap during SEM image acquisition and guarantee a lower error in the measurements without greater data-handling effort. All in all, the method developed is a significant time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
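The crystal measurement step after segmentation can be illustrated with scikit-image: connected components are labeled, and the position, area, and perimeter are reported per object. The binary mask here is synthetic; in the methodology above it would come from the U-net output.

```python
# Measuring segmented crystals with scikit-image (sketch; a synthetic mask
# stands in for the U-net segmentation output).
import numpy as np
from skimage.measure import label, regionprops

mask = np.zeros((64, 64), dtype=bool)
mask[5:15, 5:20] = True    # toy "crystal" 1
mask[30:50, 30:45] = True  # toy "crystal" 2

labeled = label(mask)      # object delimitation via connected components
for region in regionprops(labeled):
    cy, cx = region.centroid
    print(f"crystal {region.label}: area={region.area} px, "
          f"perimeter={region.perimeter:.1f} px, centre=({cx:.0f}, {cy:.0f})")
# From these per-crystal records one can build the frequency distributions
# by area and perimeter described above.
```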
Procedia PDF Downloads 160
1417 Neural Network based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The educational system faces a significant concern with regard to Dyslexia and Dysgraphia, learning disabilities impacting reading and writing abilities, respectively. This is particularly challenging for children who speak the Sinhala language, due to its complexity and uniqueness. Commonly used methods to detect the risk of Dyslexia and Dysgraphia rely on subjective assessments, leading to limited coverage and time-consuming processes. Consequently, diagnoses may be delayed and opportunities for early intervention missed. To address this issue, the project developed a hybrid model that incorporates various deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 models were integrated to identify handwriting issues. The outputs of these models were then combined with other input data and fed into an MLP model. Hyperparameters of the MLP model were fine-tuned using Grid Search CV, enabling the identification of optimal values for the model. This approach proved to be highly effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model exhibited a training accuracy of 0.9804 and a validation accuracy of 0.9653. The VGG16 model achieved a training accuracy of 0.9991 and a validation accuracy of 0.9891. The MLP model demonstrated impressive results, with a training accuracy of 0.99918, a testing accuracy of 0.99223, and a loss of 0.01371. These outcomes showcase the high accuracy achieved by the proposed hybrid model in predicting the risk of Dyslexia and Dysgraphia.
Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science
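The MLP tuning step can be sketched with scikit-learn's GridSearchCV; the parameter grid and data below are illustrative assumptions, since the abstract does not list the search space.

```python
# Grid Search CV over MLP hyperparameters (sketch; the grid is assumed,
# the abstract does not specify the search space).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

param_grid = {
    "hidden_layer_sizes": [(64,), (128,), (64, 32)],
    "alpha": [1e-4, 1e-3],
    "learning_rate_init": [1e-3, 1e-2],
}
search = GridSearchCV(MLPClassifier(max_iter=1000, random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
```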
Procedia PDF Downloads 64
1416 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult for children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, making broad coverage difficult and the process time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project was approached by developing a hybrid model that utilizes various deep learning techniques for detecting the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16 and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed the optimal values for the model to be identified. This approach proved to be effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for the early detection and intervention of these conditions. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieved a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.
Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science
Procedia PDF Downloads 114
1415 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment
Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee
Abstract:
Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches. RNNs are known to be well suited for sequence modeling, whilst CNNs are suited for the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs, as stated above, to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. First, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations for each sentence, from which the first relation vector is computed. Second, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short-Term Memory (Bi-LSTM) network to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used in the same fashion as an attention mechanism over the Bi-LSTM outputs to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
Keywords: deep neural models, natural language inference, recognizing textual entailment (RTE), sentence-to-sentence relation
Procedia PDF Downloads 3481414 Phase Synchronization of Skin Blood Flow Oscillations under Deep Controlled Breathing in Human
Authors: Arina V. Tankanag, Gennady V. Krasnikov, Nikolai K. Chemeris
Abstract:
The development of respiration-dependent oscillations in the peripheral blood flow may occur by at least two mechanisms. The first mechanism is related to the change of venous pressure due to the mechanical activity of the lungs. This phenomenon is known as the 'respiratory pump' and is one of the mechanisms of venous return of blood from the peripheral vessels to the heart. The second mechanism is related to vasomotor reflexes controlled by the respiratory modulation of the activity of the centers of the vegetative nervous system. High phase synchronization of respiration-dependent blood flow oscillations in the skin of the left and right forearms of healthy volunteers at rest has been shown previously. The aim of this work was to study the effect of deep controlled breathing on the phase synchronization of skin blood flow oscillations. Twenty-nine normotensive, non-smoking young women (18-25 years old) of normal constitution, without diagnosed pathologies of the skin, cardiovascular, or respiratory systems, participated in the study. Six recording sessions were carried out for each participant: the first at the spontaneous breathing rate, and the next five in regimes of controlled breathing with fixed breathing depth and different enforced breathing rates. The following rates of the controlled breathing regime were used: 0.25, 0.16, 0.10, 0.07, and 0.05 Hz. The breathing depth amounted to 40% of the maximal chest excursion. Blood perfusion was registered by a laser flowmeter LAKK-02 (LAZMA, Russia) with two identical channels (wavelength 0.63 µm; emission power 0.5 mW). The first probe was fastened to the palmar surface of the distal phalanx of the left forefinger; the second probe was attached to the external surface of the left forearm near the wrist joint. These skin zones were chosen as zones with different dominant mechanisms of vascular tonus regulation. The degree of phase synchronization of the registered signals was estimated from the value of the wavelet phase coherence. The duration of each recording was 5 min, and the sampling frequency of the signals was 16 Hz. Synchronization of the respiration-dependent skin blood flow oscillations increased for all controlled breathing regimes. Since the formation of respiration-dependent oscillations in the peripheral blood flow is mainly caused by the respiratory modulation of systemic blood pressure, the observed effects most likely depend on the breathing depth. It should be noted that with spontaneous breathing the depth does not exceed 15% of the maximal chest excursion, while in the present study the breathing depth was 40%. We therefore suggest that the observed significant increase in the phase synchronization of blood flow oscillations under our conditions is primarily due to the increase in breathing depth, which enhances both potential mechanisms of respiratory oscillation generation: venous pressure and sympathetic modulation of vascular tone.Keywords: deep controlled breathing, peripheral blood flow oscillations, phase synchronization, wavelet phase coherence
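As an illustration of the analysis step, here is a minimal numpy sketch of wavelet phase coherence between two perfusion signals (a plain complex Morlet implementation and an unweighted phase-coherence estimate; the wavelet parameters and any amplitude weighting used in the actual study are assumptions here):

import numpy as np

def morlet(f, fs, omega0=6.0):
    # Complex Morlet wavelet at centre frequency f (Hz), sampled at fs (Hz).
    dur = 2 * omega0 / (np.pi * f)              # truncate to a few cycles
    t = np.arange(-dur, dur, 1 / fs)
    s = omega0 / (2 * np.pi * f)                # Gaussian time scale
    return np.exp(1j * 2 * np.pi * f * t) * np.exp(-t**2 / (2 * s**2))

def phase_coherence(x, y, f, fs=16.0):
    # Phase coherence of x and y at frequency f: |<exp(i*(phi_x - phi_y))>|.
    w = morlet(f, fs)
    wx = np.convolve(x - x.mean(), w, mode="same")
    wy = np.convolve(y - y.mean(), w, mode="same")
    dphi = np.angle(wx) - np.angle(wy)
    return np.abs(np.mean(np.exp(1j * dphi)))

# Example: two 5-min signals sampled at 16 Hz sharing a 0.1 Hz component.
fs = 16.0
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 0.1 * t + 1.0) + 0.5 * rng.standard_normal(t.size)
print(phase_coherence(x, y, f=0.1))             # close to 1 at the common frequency

A constant phase lag (here 1 rad) still yields coherence near 1; it is the stability of the phase difference, not its value, that the measure captures.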
Procedia PDF Downloads 2131413 Recurrent Neural Networks for Complex Survival Models
Authors: Pius Marthin, Nihal Ata Tutkun
Abstract:
Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome because of the strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling; however, their application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that removes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF) and an external auto-encoder (ExternalAE) as a feature selector to extract the complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ metrics appropriate for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)
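A highly simplified PyTorch sketch of the core idea: an LSTM over a subject's covariate history, an attention-style weighting over causes (loosely analogous to the paper's RIW), and discrete-time cause-specific probability heads whose cumulative sums give monotone cumulative incidence curves. The layer sizes, the softmax heads, and the "riw" layer are illustrative assumptions, not the published CmpXRnnSurv_AE architecture, and the auto-encoder feature selector is omitted:

import torch
import torch.nn as nn

class CompetingRiskRNN(nn.Module):
    def __init__(self, n_features=12, hid=64, n_causes=2, n_times=30):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hid, batch_first=True)
        # Attention-style weights over causes (rough analogue of RIW).
        self.riw = nn.Linear(hid, n_causes)
        # One discrete-time probability head per cause.
        self.heads = nn.ModuleList([nn.Linear(hid, n_times) for _ in range(n_causes)])

    def forward(self, x):
        _, (h, _) = self.lstm(x)                     # h: (1, B, hid)
        h = h.squeeze(0)
        w = torch.softmax(self.riw(h), dim=-1)       # (B, n_causes)
        # Cause-specific probability mass over time bins, weighted per cause;
        # the cumulative sum is monotone, like a cumulative incidence function.
        cif = [w[:, k:k + 1] * torch.softmax(head(h), dim=-1).cumsum(-1)
               for k, head in enumerate(self.heads)]
        return torch.stack(cif, dim=1)               # (B, n_causes, n_times)

model = CompetingRiskRNN()
x = torch.randn(8, 20, 12)                           # 8 subjects, 20 visits, 12 covariates
print(model(x).shape)                                # torch.Size([8, 2, 30])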
Procedia PDF Downloads 891412 Metagenomics-Based Molecular Epidemiology of Viral Diseases
Authors: Vyacheslav Furtak, Merja Roivainen, Olga Mirochnichenko, Majid Laassri, Bella Bidzhieva, Tatiana Zagorodnyaya, Vladimir Chizhikov, Konstantin Chumakov
Abstract:
Molecular epidemiology and environmental surveillance are parts of a rational strategy to control infectious diseases. They have been widely used in the worldwide campaign to eradicate poliomyelitis, which otherwise would be complicated by the inability to rapidly respond to outbreaks and determine sources of the infection. The conventional scheme involves isolation of viruses from patients and the environment, followed by their identification by nucleotide sequence analysis to determine phylogenetic relationships. This is a tedious and time-consuming process that often yields definitive results only when it is too late to implement countermeasures. Because of the difficulty of high-throughput full-genome sequencing, most such studies are conducted by sequencing only capsid genes or parts of them. Therefore, important information about the contribution of other parts of the genome, and of inter- and intra-species recombination, to viral evolution is not captured. Here we propose a new approach based on rapid concentration of sewage samples with tangential flow filtration, followed by deep sequencing and reconstruction of the nucleotide sequences of viruses present in the samples. The entire nucleic acid content of each sample is sequenced, thus preserving the complete spectrum of viruses in digital format. A set of rapid algorithms was developed to separate deep-sequencing reads into discrete populations corresponding to each virus, assemble them into full-length consensus contigs, and generate a complete profile of the sequence heterogeneities in each of them. This provides an effective approach to studying the molecular epidemiology and evolution of natural viral populations.Keywords: poliovirus, eradication, environmental surveillance, laboratory diagnosis
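The read-separation step can be pictured with a toy k-mer binning sketch: each read is assigned to the reference virus with which it shares the most k-mers. Real pipelines use far more sophisticated assembly and do not rely on a fixed reference panel; the "virusA"/"virusB" sequences, the reads, and k = 8 below are invented purely for illustration:

def kmers(seq, k=8):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def bin_reads(reads, references, k=8):
    # Assign each read to the reference whose k-mer set it overlaps most.
    ref_kmers = {name: kmers(seq, k) for name, seq in references.items()}
    bins = {name: [] for name in references}
    for read in reads:
        rk = kmers(read, k)
        best = max(ref_kmers, key=lambda name: len(rk & ref_kmers[name]))
        bins[best].append(read)
    return bins

# Toy example with two invented 'virus' references and three short reads.
references = {"virusA": "ATGGCGTACGTTAGCGGATCCGTAAGCTT",
              "virusB": "TTGACCGGTAAGGCCTTAACGGTTACGAA"}
reads = ["GCGTACGTTAGCGG", "CCGGTAAGGCCTTA", "GGATCCGTAAGCTT"]
print({name: len(rs) for name, rs in bin_reads(reads, references).items()})
# {'virusA': 2, 'virusB': 1}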
Procedia PDF Downloads 2811411 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer
Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom
Abstract:
Breast cancer is a serious health concern that affects many people around the world. According to a study published in the Breast journal, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, although the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer. These techniques, which have been shown to be effective in predicting and diagnosing the disease, have become a research hotspot. In this study, we consider two deep learning approaches, Multi-Layer Perceptron (MLP) and Convolutional Neural Network (CNN), and five machine learning algorithms, Decision Tree (C4.5), Naïve Bayes (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and XGBoost (eXtreme Gradient Boosting), on the Breast Cancer Wisconsin Diagnostic dataset. We carried out the process of evaluating and comparing the classifiers, selecting appropriate metrics to evaluate classifier performance and an appropriate tool to quantify this performance. The main purpose of the study is to predict and diagnose breast cancer by applying the mentioned algorithms and to discover the most effective of them with respect to the confusion matrix, accuracy, and precision. CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment using the Python programming language.Keywords: breast cancer, multi-layer perceptron, Naïve Bayesian, SVM, decision tree, convolutional neural network, XGBoost, KNN
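A compact scikit-learn sketch of this kind of comparison on the same data (scikit-learn ships the Wisconsin Diagnostic set as load_breast_cancer; the CNN and XGBoost branches are omitted to keep the sketch dependency-free, and scikit-learn's CART tree stands in for C4.5):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, confusion_matrix

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42, stratify=y)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Naive Bayes": GaussianNB(),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=42)),
}

# Compare each classifier on accuracy, precision, and the confusion matrix.
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name, "acc=%.4f" % accuracy_score(y_te, pred),
          "prec=%.4f" % precision_score(y_te, pred))
    print(confusion_matrix(y_te, pred))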
Procedia PDF Downloads 751410 Electrical Machine Winding Temperature Estimation Using Stateful Long Short-Term Memory Networks (LSTM) and Truncated Backpropagation Through Time (TBPTT)
Authors: Yujiang Wu
Abstract:
As electrical machine (e-machine) power density requirements become more stringent in vehicle electrification, mounting a temperature sensor on e-machine stator windings becomes increasingly difficult. This can lead to higher manufacturing costs, complicated harnesses, and reduced reliability. In this paper, we propose a deep-learning method for predicting electric machine winding temperature, which can either replace the sensor entirely or serve as a backup to an existing sensor. We compare the performance of our method, stateful long short-term memory networks (LSTM) with truncated backpropagation through time (TBPTT), with that of linear regression, as well as stateless LSTM with and without residual connections. Our results demonstrate the strength of combining stateful LSTM and TBPTT in tackling nonlinear time-series prediction problems with long sequence lengths. Additionally, in industrial applications, prediction accuracy in the high-temperature region is more important because winding temperature sensing is typically used to derate machine power when the temperature is high. To evaluate the performance of our algorithm, we developed a temperature-stratified MSE. We also propose a simple but effective data-preprocessing trick to improve prediction accuracy in the high-temperature region. Our experimental results demonstrate the effectiveness of the proposed method in accurately predicting winding temperature, particularly in high-temperature regions, while also reducing manufacturing costs and improving reliability.Keywords: deep learning, electrical machine, functional safety, long short-term memory networks (LSTM), thermal management, time series prediction
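The training trick at the heart of the method can be sketched in a few lines of PyTorch: the LSTM's hidden state is carried across chunks of a long drive-cycle sequence (the "stateful" part), but gradients are cut at chunk boundaries by detaching the state (the "truncated" part of BPTT). The model size, chunk length, input signals, and synthetic data are assumptions for illustration:

import torch
import torch.nn as nn

model = nn.LSTM(input_size=4, hidden_size=32, batch_first=True)  # e.g. currents, speed, coolant temp
head = nn.Linear(32, 1)                                          # predicted winding temperature
opt = torch.optim.Adam(list(model.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(8, 10_000, 4)          # 8 drive cycles, 10k time steps, 4 signals (synthetic)
y = torch.randn(8, 10_000, 1)          # target winding temperature (synthetic)

chunk = 200                            # TBPTT window
state = None
for t0 in range(0, x.size(1), chunk):
    xb, yb = x[:, t0:t0 + chunk], y[:, t0:t0 + chunk]
    out, state = model(xb, state)
    loss = loss_fn(head(out), yb)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Keep the state (stateful LSTM) but stop gradients at the chunk boundary.
    state = tuple(s.detach() for s in state)

A temperature-stratified MSE, as the abstract describes, would simply compute this loss separately over samples binned by target temperature so that errors in the high-temperature region can be weighted or reported on their own.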
Procedia PDF Downloads 991409 Monitoring of Wound Healing Through Structural and Functional Mechanisms Using Photoacoustic Imaging Modality
Authors: Souradip Paul, Arijit Paramanick, M. Suheshkumar Singh
Abstract:
Traumatic injury is a leading worldwide health problem. Annually, millions of surgical wounds are created in the course of routine medical care. The healing of these wounds is usually monitored by visual inspection, yet maximal restoration of tissue functionality remains a significant concern of clinical care. Although minor injuries heal well with proper care and medical treatment, large injuries are adversely affected by various factors (vascular insufficiency, tissue coagulation) that cause poor healing. Demographically, the number of people suffering from severe wounds and impaired healing is a burden on both human health and the economy. An incomplete understanding of the functional and molecular mechanisms of tissue healing often leads to a lack of proper therapies and treatment. Hence, strong and reliable medical guidance is necessary for monitoring tissue regeneration. Photoacoustic imaging (PAI), a non-invasive, hybrid imaging modality, can provide a suitable solution in this regard. Light combined with sound offers structural, functional, and molecular information from greater penetration depths. Therefore, the molecular and structural mechanisms of tissue repair are readily observable with PAI, both in the superficial layer and in deep tissue regions. Blood vessel formation and growth are essential components of tissue repair. These vessels supply nutrition and oxygen to the cells in the wound region. Angiogenesis (the formation of new capillaries from existing blood vessels) contributes to new blood vessel formation during tissue repair, and better tissue healing depends directly on angiogenesis. Other optical microscopy techniques can visualize angiogenesis at micron-scale penetration depths but are unable to provide deep tissue information. PAI overcomes this barrier due to its unique capabilities: it is ideally suited for deep tissue imaging and provides the rich optical contrast generated by hemoglobin in blood vessels. Hence, early detection of angiogenesis by PAI supports monitoring of wound treatment. Along with functional properties, mechanical properties also play a key role in tissue regeneration. A wound heals through a dynamic series of physiological events such as coagulation, granulation tissue formation, and extracellular matrix (ECM) remodeling. The resulting changes in tissue elasticity can be identified using non-contact photoacoustic elastography (PAE). In a nutshell, angiogenesis and biomechanical properties are both critical parameters of tissue healing, and both can be characterized in a single imaging modality (PAI).Keywords: PAT, wound healing, tissue coagulation, angiogenesis
Procedia PDF Downloads 1061408 Action Potential of Lateral Geniculate Neurons at Low Threshold Currents: Simulation Study
Authors: Faris Tarlochan, Siva Mahesh Tangutooru
Abstract:
The Lateral Geniculate Nucleus (LGN) is the relay center in the visual pathway: it receives most of its input from retinal ganglion cells (RGCs) and projects to the visual cortex. Low-threshold calcium currents (IT) at the membrane are a unique indicator for characterizing the firing behavior of LGN neurons driven by RGC input. According to LGN functional requirements, such as the functional mapping of RGCs to the LGN, the morphologies of the LGN neurons were developed. In neurological disorders like glaucoma, the mapping between RGCs and the LGN is disconnected, and hence stimulating the LGN electrically using deep brain electrodes can restore LGN function. A computational model was developed to simulate LGN neurons with three predominant morphologies, each representing a different functional mapping of RGCs to the LGN. The firing of action potentials in LGN neurons due to IT was characterized by varying the stimulation parameters, morphological parameters, and orientation. A wide range of stimulation parameters (stimulus amplitude, duration, and frequency) represents the various strengths of electrical stimulation, combined with different morphological parameters (soma size, dendrite size and structure). The orientation (0-180°) of the LGN neuron with respect to the stimulating electrode represents the angle at which extracellular deep brain stimulation of the LGN neuron is performed. A reduced dendrite structure, obtained with the Bush-Sejnowski algorithm, was used in the model to decrease computation time while conserving the input resistance and total surface area. The major finding is that an input potential of 0.4 V is required to produce an action potential in an LGN neuron placed 100 µm from the electrode. From this study, it can be concluded that the neuroprostheses under design would need to be capable of inducing at least 0.4 V to produce action potentials in the LGN.Keywords: Lateral Geniculate Nucleus, visual cortex, finite element, glaucoma, neuroprostheses
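To make the role of IT concrete, below is a toy single-compartment Euler simulation of a T-type calcium current in the Hodgkin-Huxley style. All rate functions and parameter values are rough textbook-style assumptions, not those of the paper's multi-compartment LGN model, and fast Na+/K+ spiking currents are omitted, so only the slow low-threshold calcium response appears:

import numpy as np

# Illustrative parameters (assumed, not from the paper)
C, g_L, E_L = 1.0, 0.05, -80.0        # µF/cm², mS/cm², mV (hyperpolarized rest de-inactivates IT)
g_T, E_Ca = 2.0, 120.0                # T-type conductance and Ca reversal
dt, T = 0.05, 600.0                   # time step and duration (ms)

def m_inf(v):                          # fast activation, treated as instantaneous
    return 1.0 / (1.0 + np.exp(-(v + 57.0) / 6.2))

def h_inf(v):                          # slow inactivation steady state
    return 1.0 / (1.0 + np.exp((v + 81.0) / 4.0))

def tau_h(v):                          # crude inactivation time constant (ms)
    return 30.0 + 200.0 / (1.0 + np.exp((v + 80.0) / 5.0))

t = np.arange(0.0, T, dt)
v = np.full(t.size, -80.0)
h = h_inf(-80.0)
I_stim = np.where((t > 100.0) & (t < 400.0), 0.3, 0.0)  # small depolarizing step (µA/cm²)

for i in range(1, t.size):
    I_T = g_T * m_inf(v[i - 1])**2 * h * (v[i - 1] - E_Ca)
    v[i] = v[i - 1] + dt * (-g_L * (v[i - 1] - E_L) - I_T + I_stim[i - 1]) / C
    h += dt * (h_inf(v[i]) - h) / tau_h(v[i])

print("peak depolarization: %.1f mV" % v.max())   # regenerative low-threshold Ca response

The qualitative point matches the abstract: because IT is de-inactivated at hyperpolarized potentials, even a weak depolarizing input triggers a regenerative calcium response, which is why relatively small extracellular potentials can evoke firing.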
Procedia PDF Downloads 2771407 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores
Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan
Abstract:
Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract feature vectors of various dimensions from a facial image according to their pre-trained neural networks. However, to improve the efficiency of parameter calculation, an algorithm generally reduces image detail by pooling. This operation overlooks the details of most concern to forensic experts. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with known frontal ID photos of the same persons. Downscaling and manual handling were performed on the testing images. The results support that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were examined to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled at distinguishing category features, while forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion was performed at the score level. At a specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of objective methods of facial comparison and provides a novel method for human-machine collaboration in this field.Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics
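Score-level fusion with likelihood ratios can be sketched as follows: densities of genuine and impostor scores are estimated for each source (machine and examiner), each observed score is converted to a log-likelihood ratio, and the per-source log-LRs are summed. The Gaussian kernel density estimates, the synthetic calibration scores, and the naive-independence fusion rule below are assumptions for illustration, not the paper's exact procedure:

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic calibration scores for two sources (machine, human examiner).
machine = {"gen": rng.normal(0.8, 0.10, 500), "imp": rng.normal(0.4, 0.10, 500)}
human = {"gen": rng.normal(0.7, 0.15, 500), "imp": rng.normal(0.3, 0.15, 500)}

def log_lr(score, gen, imp):
    # log10 likelihood ratio of a score under genuine vs impostor densities.
    f_gen = gaussian_kde(gen)(score)
    f_imp = gaussian_kde(imp)(score)
    return float(np.log10(f_gen / f_imp))

def fused_log_lr(machine_score, human_score):
    # Naive-independence fusion: sum the per-source log-LRs.
    return (log_lr(machine_score, machine["gen"], machine["imp"]) +
            log_lr(human_score, human["gen"], human["imp"]))

print(fused_log_lr(0.75, 0.65))   # > 0 supports same-source, < 0 different-source

Thresholding the fused log-LR instead of either raw score is what lets the combined system trade a lower false reject rate at the same false accept rate, as the abstract reports.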
Procedia PDF Downloads 130