Search results for: Fourier neural operator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3065

215 Interfacial Reactions between Aromatic Polyamide Fibers and Epoxy Matrix

Authors: Khodzhaberdi Allaberdiev

Abstract:

To understand the interactions at the interface between polyamide fibers and the epoxy matrix in fiber-reinforced composites, the industrial aramid fibers Armos, SVM, and Terlon were investigated using individual epoxy matrix components: diglycidyl ether of bisphenol A (DGEBA); tri- and diglycidyl derivatives of m- and p-amino-, m- and p-oxy-, and o-, m-, and p-carboxybenzoic acids; the model curing agent aniline; and N-di(oxyethylphenoxy)aniline, a compound that depicts the structure of the primary addition product of the amine to the epoxy resin. The chemical structure of the surfaces of untreated and treated polyamide fibers was analyzed using Fourier transform infrared spectroscopy (FTIR). The fibers were impregnated with the epoxy matrix components and N-di(oxyethylphenoxy)aniline by heating at 150 °C for 6 h; the optimum fiber loading is 65%. The thermal treatment results in the formation of covalent bonds, derived from combined homopolymerization and crosslinking mechanisms, in the interfacial region between the epoxy resin and the fiber surface. The reactivity of the epoxy resins at the interface in microcomposites (MC) also depends on the processing aids applied to the fiber surface and on absorbed moisture. The influence of these factors is evidenced by the epoxy-group conversion values of Terlon samples impregnated with DGEBA: 5.20% for the industrial, 4.65% for the vacuum-dried, and 14.10% for the purified samples. The same tendency is observed for SVM and Armos fibers. Changes in the surface composition of these MC were monitored by X-ray photoelectron spectroscopy (XPS). In the case of the purified fibers, the functional groups of the fibers act both as a catalyst and as a curing agent for the epoxy resin. The epoxy-group conversion in the reinforced formulations was found to depend on the nature of the aromatic polyamide, decreasing in the order Armos > SVM > Terlon; this difference is due to the structural characteristics of the fibers.
The interfacial interactions between polyglycidyl esters of substituted benzoic acids and polyamide fibers in the MC were also examined; in these systems, the structure and isomerism of the epoxides were found to influence the interfacial interactions as well. The IR spectrum of fibers impregnated with aniline showed that the polyamide fibers do not react appreciably with aniline. FTIR results for fibers treated with N-di(oxyethylphenoxy)aniline revealed dramatic changes in the IR characteristics of the OH groups of the amino alcohol. These observations indicate hydrogen bonding and covalent interactions between the amino alcohol and the functional groups of the fibers, a result also confirmed by the appearance of an exothermic peak in the differential scanning calorimetry (DSC) curve of the MC. Finally, a theoretical evaluation of the non-covalent interactions between the individual epoxy matrix components and the fibers was performed using benzanilide and its derivative containing a benzimidazole moiety as models of Terlon and of SVM and Armos, respectively. Quantum-topological analysis also demonstrated the existence of hydrogen bonds between the amide groups of the models and the epoxy matrix components. All the results indicate that not only covalent but also non-covalent interactions exist at the interface between the polyamide fibers and the epoxy matrix during the preparation of the MC.

Keywords: epoxies, interface, modeling, polyamide fibers

Procedia PDF Downloads 266
214 Electrophoretic Light Scattering Based on Total Internal Reflection as a Promising Diagnostic Method

Authors: Ekaterina A. Savchenko, Elena N. Velichko, Evgenii T. Aksenov

Abstract:

The development of pathological processes, such as cardiovascular and oncological diseases, is accompanied by changes in molecular parameters in cells, tissues, and serum. The study of the behavior of protein molecules in solution is of primary importance for the diagnosis of such diseases. Various physical and chemical methods are used to study molecular systems. With the advent of the laser and advances in electronics, optical methods, such as scanning electron microscopy, sedimentation analysis, nephelometry, and static and dynamic light scattering, have become the most universal, informative, and accurate tools for estimating the parameters of nanoscale objects. Electrophoretic light scattering is a particularly effective technique with high potential for the study of biological solutions and their properties. It allows one to investigate the processes of aggregation and dissociation of different macromolecules and to obtain information on their shapes, sizes, and molecular weights. Electrophoretic light scattering is an analytical method that registers the motion of microscopic particles under the influence of an electric field by means of quasi-elastic light scattering in a homogeneous solution, with subsequent registration of the spectral or correlation characteristics of the light scattered from the moving object. We modified the technique by using the regime of total internal reflection with the aim of increasing its sensitivity and reducing the volume of the sample to be investigated, which opens the prospect of automating simultaneous multiparameter measurements. In addition, total internal reflection allows one to study biological fluids at the level of single molecules, which further increases the sensitivity and informativeness of the results, because data obtained from an individual molecule are not averaged over an ensemble; this is important in the study of biomolecular fluids.
To the best of our knowledge, this is the first study of electrophoretic light scattering in the regime of total internal reflection. Latex microspheres 1 μm in size were used as test objects. The total internal reflection regime was realized on a quartz prism on which the free electrophoresis regime was set. A semiconductor laser with a wavelength of 655 nm was used as the radiation source, and the light-scattering signal was registered by a PIN photodiode. The signal from the photodetector was then transmitted to a digital oscilloscope and to a computer. The autocorrelation functions and the fast Fourier transform were calculated, both in the regime of Brownian motion and under the action of the field, to obtain the parameters of the investigated object. The main result of the study was the dependence of the autocorrelation function on the concentration of microspheres and on the magnitude of the applied field. The effect of heating became more pronounced with increasing sample concentration and electric field. The results obtained in our study demonstrate the applicability of the method to the examination of liquid solutions, including biological fluids.
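The signal-processing step described above (the autocorrelation function and fast Fourier transform of the photodetector trace) can be sketched as follows; the sampling rate, Doppler beat frequency, and noise level below are hypothetical stand-ins, not values from the experiment.

```python
import numpy as np

def autocorrelation(signal):
    """Normalized autocorrelation of a signal via the Wiener-Khinchin theorem."""
    x = signal - signal.mean()
    n = len(x)
    # Zero-pad to 2n so the circular correlation equals the linear one.
    spectrum = np.abs(np.fft.rfft(x, n=2 * n)) ** 2
    acf = np.fft.irfft(spectrum)[:n]
    return acf / acf[0]

# Hypothetical detector trace: a Doppler beat at f_d plus additive noise.
fs, f_d = 10_000.0, 250.0                      # sampling rate and beat frequency, Hz (assumed)
t = np.arange(0, 0.2, 1 / fs)
rng = np.random.default_rng(0)
trace = np.cos(2 * np.pi * f_d * t) + 0.3 * rng.normal(size=t.size)

acf = autocorrelation(trace)
freqs = np.fft.rfftfreq(trace.size, 1 / fs)
power = np.abs(np.fft.rfft(trace - trace.mean())) ** 2
peak_freq = freqs[np.argmax(power)]            # spectral peak, near the beat frequency
```

In the free-electrophoresis measurement, the position of this spectral peak tracks the particle drift velocity, while the decay of the autocorrelation function reflects the diffusive (Brownian) contribution.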

Keywords: light scattering, electrophoretic light scattering, electrophoresis, total internal reflection

Procedia PDF Downloads 214
213 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to reduce the inaccuracies, weaknesses, and biases of any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall model accuracy of 92%. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. The top air quality predictor variables were identified through measurement of the mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
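The model-averaging idea in this framework (averaging the outputs of a logistic regression, a random forest, and a neural network) can be sketched with scikit-learn's soft-voting ensemble; the synthetic feature table below is a placeholder for the real timing, weather, and pollutant inputs, and the hyperparameters are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the timing/weather/pollutant feature table (assumed shape).
X, y = make_classification(n_samples=600, n_features=8, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Soft voting averages the predicted class probabilities of the three model families,
# mirroring the paper's averaging of its three top-performing models.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("nn", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
accuracy = ensemble.score(X_te, y_te)   # held-out accuracy of the combined model
```

The "self-adjustment over time" described in the abstract would amount to refitting this ensemble as new observations arrive; the random forest additionally exposes the mean-decrease-in-accuracy style importances used to rank predictor variables.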

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 127
212 Census and Mapping of Oil Palms Over Satellite Dataset Using Deep Learning Model

Authors: Gholba Niranjan Dilip, Anil Kumar

Abstract:

Accurate and reliable mapping of oil palm plantations, together with a census of individual palm trees, is a huge challenge. This study addresses that challenge with an optimized solution that applies deep learning techniques to remote sensing data. The oil palm is a very important tropical crop; to improve its productivity and land management, it is imperative to have an accurate census over large areas. Since manual census is costly and prone to approximations, a methodology for automated census using panchromatic images from the Cartosat-2, SkySat, and WorldView-3 satellites is demonstrated. Two different study sites in Indonesia were selected, and a customized set of training data and ground-truth data was created for this study from Cartosat-2 images. A pre-trained Single Shot MultiBox Detector (SSD) Lite MobileNet V2 convolutional neural network (CNN) from the TensorFlow Object Detection API was subjected to transfer learning on this customized dataset. The SSD model generates a bounding box for each oil palm and counts the palms with good accuracy on the panchromatic images; detection yielded an F-score of 83.16% on seven different images. The detections are buffered and dissolved to generate polygons demarcating the boundaries of the oil palm plantations. This provides the area under the plantations and maps of their locations, thereby completing the automated census with fairly high accuracy (≈100%). The trained CNN proved competent enough to detect oil palm crowns in images obtained from multiple satellite sensors and of varying temporal vintage, and it helped to estimate the increase in oil palm plantations from 2014 to 2021 in the study area. The study proved that high-resolution panchromatic satellite images can successfully be used to undertake a census of oil palm plantations using CNNs.
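The buffer-and-dissolve step that turns individual crown detections into plantation clusters can be sketched without GIS libraries by grouping overlapping buffered boxes with union-find; the box coordinates and buffer distance below are hypothetical, and a real pipeline would use polygon geometry (e.g., a GIS dissolve) rather than axis-aligned boxes.

```python
def boxes_touch(a, b, buffer=5.0):
    """True if two axis-aligned boxes overlap after expanding each by `buffer`."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return (ax0 - buffer <= bx1 + buffer and bx0 - buffer <= ax1 + buffer and
            ay0 - buffer <= by1 + buffer and by0 - buffer <= ay1 + buffer)

def dissolve(detections, buffer=5.0):
    """Group detections whose buffered boxes overlap (union-find), as a stand-in
    for the buffer-and-dissolve step that delineates plantation boundaries."""
    parent = list(range(len(detections)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i in range(len(detections)):
        for j in range(i + 1, len(detections)):
            if boxes_touch(detections[i], detections[j], buffer):
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(detections)):
        groups.setdefault(find(i), []).append(detections[i])
    return list(groups.values())

# Hypothetical crown boxes (x_min, y_min, x_max, y_max) in metres, as an SSD
# detector might return them; the first two overlap, the third stands apart.
detections = [(0, 0, 8, 8), (6, 0, 14, 8), (40, 40, 48, 48)]
palm_count = len(detections)            # the census: one box per detected crown
plantations = dissolve(detections)      # clusters approximating plantation extents
```

Each cluster's bounding geometry would then be measured to obtain the area under plantations reported in the study.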

Keywords: object detection, oil palm tree census, panchromatic images, single shot multibox detector

Procedia PDF Downloads 160
211 Quantitative Texture Analysis of Shoulder Sonography for Rotator Cuff Lesion Classification

Authors: Chung-Ming Lo, Chung-Chien Lee

Abstract:

In many countries, the lifetime prevalence of shoulder pain is up to 70%. In America, the health care system spends $7 billion per year on health issues related to shoulder pain. With respect to its origin, up to 70% of shoulder pain is attributed to rotator cuff lesions. This study proposed a computer-aided diagnosis (CAD) system to assist radiologists in classifying rotator cuff lesions with less operator dependence. Quantitative features were extracted from shoulder ultrasound images acquired using an ALOKA alpha-6 US scanner (Hitachi-Aloka Medical, Tokyo, Japan) with a linear array probe (scan width: 36 mm) ranging from 5 to 13 MHz. During examination, the examined patients adopted a standard sitting position and followed the regular routine. After acquisition, the shoulder US images were exported from the scanner and stored as 8-bit images with pixel values ranging from 0 to 255. Based on the sonographic appearance, the boundary of each lesion was delineated by a physician to indicate the specific pattern for analysis. The three lesion categories for classification comprised 20 cases of tendon inflammation, 18 cases of calcific tendonitis, and 18 cases of supraspinatus tear. For each lesion, second-order statistics were quantified in the feature extraction step. These second-order statistics are texture features describing the correlations between adjacent pixels in a lesion. Because echogenicity patterns are expressed in grey scale, grey-level co-occurrence matrices with four angles of adjacent pixels were used. The texture metrics included the mean and standard deviation of energy, entropy, correlation, inverse difference moment, inertia, cluster shade, cluster prominence, and Haralick correlation. The quantitative features were then combined in a multinomial logistic regression classifier to generate a prediction model of rotator cuff lesions.
A multinomial logistic regression classifier is widely used for classification into more than two categories, such as the three lesion types in this study. In the classifier, backward elimination was used to select the most relevant feature subset, chosen from the trained classifier with the lowest error rate. Leave-one-out cross-validation was used to evaluate the performance of the classifier: each case was left out in turn and used to test the classifier trained on the remaining cases. With the physician’s assessment as the reference standard, the performance of the proposed CAD system was expressed as accuracy. The proposed system achieved an accuracy of 86%. A CAD system based on statistical texture features interpreting echogenicity values in shoulder musculoskeletal ultrasound was thus established to generate a prediction model for rotator cuff lesions. Clinically, it is difficult to distinguish some kinds of rotator cuff lesions, especially partial-thickness tears of the rotator cuff. Based on the available literature, shoulder orthopaedic surgeons and musculoskeletal radiologists report greater diagnostic test accuracy than general radiologists or ultrasonographers. Consequently, the proposed CAD system, developed according to the expertise of the shoulder orthopaedic surgeon, can provide reliable suggestions to general radiologists or ultrasonographers. More quantitative features related to the specific patterns of different lesion types will be investigated in further study to improve the prediction.
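A minimal sketch of the grey-level co-occurrence texture features described above, in plain NumPy; only two of the listed metrics (energy and entropy) are shown, averaged over the four standard angles, and the image patches are synthetic rather than clinical.

```python
import numpy as np

def glcm(image, dx, dy, levels=8):
    """Normalized grey-level co-occurrence matrix for one pixel offset (dx, dy)."""
    h, w = image.shape
    m = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(image, levels=8):
    """Energy and entropy averaged over the four standard GLCM angles."""
    offsets = [(1, 0), (1, 1), (0, 1), (-1, 1)]   # 0, 45, 90, 135 degrees
    feats = []
    for dx, dy in offsets:
        p = glcm(image, dx, dy, levels)
        energy = (p ** 2).sum()
        entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
        feats.append((energy, entropy))
    return np.mean(feats, axis=0)                 # [mean energy, mean entropy]

# A uniform patch should score maximum energy (perfect order); a noisy one, less.
rng = np.random.default_rng(0)
uniform = np.full((32, 32), 3, dtype=int)
noisy = rng.integers(0, 8, size=(32, 32))
e_uniform, _ = texture_features(uniform)
e_noisy, _ = texture_features(noisy)
```

In the study's pipeline, such feature vectors (with the remaining Haralick-style metrics added) would feed the multinomial logistic regression classifier under leave-one-out cross-validation.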

Keywords: shoulder ultrasound, rotator cuff lesions, texture, computer-aided diagnosis

Procedia PDF Downloads 284
210 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis

Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel

Abstract:

Antimicrobial drugs have played an indispensable role in controlling the illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have adapted defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require the isolation of the pathogen from a clinical specimen by culturing on appropriate media (this first culturing stage lasts 24 h). Chosen colonies are then grown on media containing the antibiotic(s), using micro-diffusion discs, in order to determine bacterial susceptibility (this second culturing also takes 24 h). Other approaches, including genotyping methods, the E-test, and automated systems, have also been developed for testing antimicrobial susceptibility; most of these methods are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective, and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. Modern infrared (IR) spectrometers with high spectral resolution enable the measurement of unprecedented biochemical information from cells at the molecular level. Moreover, new bioinformatics analyses combined with IR spectroscopy form a powerful technique that enables the detection of structural changes associated with resistance.
The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a time span of a few minutes. The UTI E. coli bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories of Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E. coli samples, were promising and showed that by using the infrared spectroscopic technique together with multivariate analysis, it is possible to classify the tested bacteria as sensitive or resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing FTIR microscopy as a rapid and reliable method for identifying antibiotic susceptibility.
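The multivariate classification step (spectra in, sensitive/resistant out) can be sketched with a linear discriminant classifier under cross-validation; the spectra below are synthetic, with an invented band shift standing in for a resistance-linked spectral difference, so this illustrates only the shape of the pipeline, not the study's actual method or data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for FTIR absorbance spectra (rows: isolates, cols: wavenumbers).
rng = np.random.default_rng(1)
n_samples, n_bands = 200, 60
spectra = rng.normal(size=(n_samples, n_bands))
labels = rng.integers(0, 2, size=n_samples)      # 0 = sensitive, 1 = resistant
spectra[labels == 1, 20:30] += 1.0               # hypothetical resistance-linked band

# Cross-validated success rate, analogous to the per-antibiotic rates reported above.
clf = LinearDiscriminantAnalysis()
mean_accuracy = cross_val_score(clf, spectra, labels, cv=5).mean()
```

With real spectra, preprocessing (baseline correction, normalization) and a per-antibiotic label set would precede this step.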

Keywords: antibiotics, E. coli, FTIR, multivariate analysis, susceptibility, UTI

Procedia PDF Downloads 171
209 An Approach to Autonomous Drones Using Deep Reinforcement Learning and Object Detection

Authors: K. R. Roopesh Bharatwaj, Avinash Maharana, Favour Tobi Aborisade, Roger Young

Abstract:

At present, there are few cases of complete automation of drones and their allied intelligence capabilities; in essence, the potential of the drone has not yet been fully utilized. This paper presents feasible methods to build an intelligent drone with smart capabilities such as self-driving and obstacle avoidance. It does this through advanced reinforcement learning techniques and performs object detection using state-of-the-art algorithms capable of running lightweight models with fast training in real time. For the scope of this paper, after researching the various algorithms and comparing them, we implemented the Deep Q-Network (DQN) algorithm in the AirSim simulator. In future work, we plan to implement further advanced self-driving and object detection algorithms, as well as voice-based speech recognition for the entire drone operation, which would provide an option of speech communication between users and the drone during unavoidable circumstances, thus making drones interactive, intelligent, voice-enabled robotic service assistants. The proposed drone has a wide scope of usability and is applicable in scenarios such as disaster management, air transport of essentials, agriculture, manufacturing, monitoring of people’s movements in public areas, and defense. Also discussed is drone communication based on satellite broadband Internet technology, for faster computation and for seamless, uninterrupted communication service during disasters and remote-location operations. This paper explains the feasible algorithms required to achieve this goal and serves as a reference for future researchers going down this path.
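The core of the DQN approach is the temporal-difference target r + γ·max Q(s′, a′). It can be illustrated with a tabular stand-in on a toy corridor world, where the drone moves left or right, is rewarded for reaching a goal cell, and is penalized for an obstacle cell; a full DQN replaces the table with a neural network and adds experience replay and a target network. The grid size, rewards, and hyperparameters below are invented for illustration.

```python
import numpy as np

n_states, n_actions = 6, 2              # actions: 0 = left, 1 = right
GOAL, OBSTACLE = 5, 3
q = np.zeros((n_states, n_actions))     # tabular stand-in for the Q-network
alpha, gamma, eps = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

def step(s, a):
    """Toy environment: move along the corridor, reward goal, penalize obstacle."""
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    if s2 == GOAL:
        return s2, 10.0, True
    if s2 == OBSTACLE:
        return s2, -5.0, False
    return s2, -0.1, False

for _ in range(500):                    # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(q[s]))
        s2, r, done = step(s, a)
        # Bellman/TD target: the same target a DQN regresses its network onto
        q[s, a] += alpha * (r + gamma * (0.0 if done else q[s2].max()) - q[s, a])
        s = s2

greedy_path_action = int(np.argmax(q[0]))   # learned policy from the start cell
```

After training, the greedy policy from the start cell heads toward the goal, which is the tabular analogue of the obstacle-avoiding flight policy learned in the AirSim simulator.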

Keywords: convolution neural network, natural language processing, obstacle avoidance, satellite broadband technology, self-driving

Procedia PDF Downloads 251
208 Personalizing Human Physical Life Routines Recognition over Cloud-based Sensor Data via AI and Machine Learning

Authors: Kaushik Sathupadi, Sandesh Achar

Abstract:

Pervasive computing is a growing research field that aims to recognize human physical life routines (HPLR) based on body-worn sensors such as MEMS-based technologies. The use of these technologies for human activity recognition is progressively increasing. Personalizing human life routines using machine-learning techniques has likewise long been an intriguing topic: various methods have demonstrated the ability to recognize basic movement patterns, but they still need improvement in anticipating the dynamics of human living patterns. This study introduces state-of-the-art techniques for recognizing static and dynamic patterns and forecasting those challenging activities from multi-fused sensors. Numerous MEMS signals are extracted from one self-annotated IM-WSHA dataset and two benchmark datasets. First, the acquired raw data are filtered with z-normalization and denoising methods. Then, statistical, local binary pattern, autoregressive-model, and intrinsic time-scale decomposition features are extracted from different domains. Next, the acquired features are optimized using maximum relevance and minimum redundancy (mRMR). Finally, an artificial neural network is applied to analyze the whole system's performance. As a result, we attained a 90.27% recognition rate on the self-annotated dataset, while HARTH achieved 83% on nine living activities and KU-HAR achieved 90.94% on 18 static and dynamic routines. Thus, the proposed HPLR system outperformed other state-of-the-art systems when evaluated against other methods in the literature.
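The mRMR feature-selection step named above can be sketched greedily, using absolute Pearson correlation as a cheap stand-in for the mutual-information terms of the usual mRMR criterion; the three toy features below are invented (one informative, one redundant duplicate, one independent), not sensor data from the study.

```python
import numpy as np

def mrmr(X, y, k):
    """Greedy mRMR: at each step, pick the feature with maximum relevance to the
    target minus mean redundancy with the already-selected set. Absolute Pearson
    correlation stands in for mutual information here."""
    def corr(a, b):
        return abs(np.corrcoef(a, b)[0, 1])
    relevance = np.array([corr(X[:, j], y) for j in range(X.shape[1])])
    selected = [int(np.argmax(relevance))]           # start with the most relevant
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([corr(X[:, j], X[:, s]) for s in selected])
            score = relevance[j] - redundancy        # relevance minus redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Toy features: f0 and f2 both drive the target; f1 merely duplicates f0.
rng = np.random.default_rng(0)
f0 = rng.normal(size=300)
f2 = rng.normal(size=300)
X = np.column_stack([f0, f0 + 0.01 * rng.normal(size=300), f2])
y = f0 + f2
picked = mrmr(X, y, 2)   # skips the redundant duplicate in favor of f2
```

The selected feature indices would then index the statistical/LBP/autoregressive feature matrix before it is passed to the neural network classifier.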

Keywords: artificial intelligence, machine learning, gait analysis, local binary pattern (LBP), statistical features, micro-electro-mechanical systems (MEMS), maximum relevance and minimum redundancy (mRMR)

Procedia PDF Downloads 20
207 Autophagy Acceleration and Self-Healing by the Revolution against Frequent Eating, High Glycemic and Unabsorbable Substances as One Meal a Day Plan

Authors: Reihane Mehrparvar

Abstract:

Human lifespan could be further extended by altering gene expression through food intake; however, as a consequence of the eating patterns of the recent century, the human lifespan is getting shorter owing to emerging dysregulation of the autophagy mechanism, insulin, leptin, and the gut microbiota, which are important etiological factors in type-2 diabetes, obesity, infertility, cancer, and metabolic and autoimmune diseases. Restricted calorie intake and vigorous exercise may be beneficial for losing weight and for metabolic regulation over a short period, but they are not implementable in the long term as a way of life. A dietary program compatible with the genes of the body is therefore essential, and such a program is currently lacking. Sweet and high-glycemic-index (HGI) foods are associated with type-2 diabetes and cancer morbidity. The neuropsychological perspective characterizes the inclination toward sweet and HGI-food consumption as addictive behavior; this process engages the preferences of the gut microbiota, neural nodes, and dopaminergic functions. Moreover, meal composition is not the only factor that affects body homeostasis. In this narrative review, we attempt to investigate how the body responds to different food intakes and to present an accurate model based on current evidence. Eating frequently and ingesting unassimilable protein and carbohydrates may not be compatible with human genes and could impair the self-renovation mechanism. This trajectory indicates that our bodies are better adapted to starvation and to eating animal meat and marrow. A model is recommended here that takes into account three important factors: eating frequency, meal composition, and circadian rhythm. By intensifying the autophagy mechanism, it may offer a promising intervention for obesity, inflammation, cardiovascular disease, autoimmune disorders, type-2 diabetes, insulin resistance, infertility, and cancer, while reducing medical costs.

Keywords: metabolic disease, anti-aging, type-2 diabetes, autophagy

Procedia PDF Downloads 81
206 AI for Efficient Geothermal Exploration and Utilization

Authors: Velimir Monty Vesselinov, Trais Kliplhuis, Hope Jasperson

Abstract:

Artificial intelligence (AI) is a powerful tool in the geothermal energy sector, aiding in both exploration and utilization. Identifying promising geothermal sites can be challenging due to limited surface indicators and the need for expensive drilling to confirm subsurface resources. Geothermal reservoirs can be located deep underground and exhibit complex geological structures, making traditional exploration methods time-consuming and imprecise. AI algorithms can analyze vast datasets of geological, geophysical, and remote sensing data, including satellite imagery, seismic surveys, geochemistry, and geology. Machine learning algorithms can identify subtle patterns and relationships within these data, potentially revealing hidden geothermal potential in areas previously overlooked. To address these challenges, a science-informed machine learning (SIML) technology has been developed. SIML methods differ from traditional ML techniques. In both cases, the ML models are trained to predict the spatial distribution of an output (e.g., pressure, temperature, heat flux) based on a series of inputs (e.g., permeability, porosity). Traditional ML relies on deep and wide neural networks (NNs) based on simple algebraic mappings to represent complex processes. In contrast, SIML neurons incorporate complex mappings (including constitutive relationships and physics/chemistry models). This results in ML models that have physical meaning and satisfy physical laws and constraints. The prototype of the developed software, called GeoTGO, is accessible through the cloud. Our software prototype demonstrates how different data sources can be made available for processing, executes demonstrative SIML analyses, and presents the results in tabular and graphic form.

Keywords: science-informed machine learning, artificial intelligence, exploration, utilization, hidden geothermal

Procedia PDF Downloads 53
205 Spatial and Temporal Variability of Meteorological Drought Including Atmospheric Circulation in Central Europe

Authors: Andrzej Wałęga, Marta Cebulska, Agnieszka Ziernicka-Wojtaszek, Wojciech Młocek, Agnieszka Wałęga, Tommaso Caloiero

Abstract:

Drought is one of the natural phenomena influencing many aspects of human activity, such as food production, agriculture, industry, and the ecological conditions of the environment. In the area of the Polish Carpathians, there are periods with a deficit of rainwater and an increasing frequency of dry months, especially in the cold half of the year. The aim of this work is a spatial and temporal analysis of drought, expressed as the SPI, in a heterogeneous area of the Polish Carpathians and the highland region in the central part of Europe, based on long-term precipitation data. Also, to the best of our knowledge, for the first time in this work, drought characteristics analyzed via the SPI are discussed on the basis of an atmospheric circulation calendar. The study region is the Upper Vistula Basin, located in the southern and south-eastern part of Poland. In this work, monthly precipitation from 56 rainfall stations was analysed from 1961 to 2022. The 3-, 6-, 9-, and 12-month Standardized Precipitation Index (SPI) values were used as indicators of meteorological drought. For the 3-month SPI, the main climatic mechanisms determining extreme droughts were defined based on the calendar of synoptic circulations. The Mann-Kendall test was used to detect trends in extreme droughts. Statistically significant trends of the SPI were observed at 52.7% of all analyzed stations, and in most cases, a positive trend was observed. Statistically significant trends were more frequently observed at stations located in the western part of the analyzed region. Long-term droughts, represented by the 12-month SPI, occurred at all stations but not in all years. Short-term droughts (3-month SPI) were most frequent in the winter season, 6- and 9-month SPI droughts in winter and spring, and 12-month SPI droughts in winter and autumn, respectively. The spatial distribution of drought was highly diverse.
The most intensive drought occurred in 1984, with the 6-month SPI covering 98% of the analyzed region and the 9- and 12-month SPI covering 90% of the entire region. Droughts exhibit a seasonal pattern, with a dominant 10-year periodicity for all analyzed variants of the SPI. Additionally, Fourier analysis revealed a 2-year periodicity for the 3-, 6-, and 9-month SPI and a 31-year periodicity for the 12-month SPI. The results provide insights into the typical climatic conditions in Poland, with strong seasonality in precipitation. The study highlighted that short-term extreme droughts, represented by the 3-month SPI, are often caused by anticyclonic situations with high-pressure wedges (Ka and Wa) and anticyclonic western circulation, as observed in 52.3% of cases. These findings are crucial for understanding the spatial and temporal variability of short- and long-term extreme droughts in Central Europe, particularly for the agriculture sector, dominant in the northern part of the analyzed region, where drought frequency is highest.
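The SPI computation underlying such results can be sketched as follows. This is a minimal illustration on synthetic precipitation, not the authors' processing chain: aggregate monthly totals over a moving window, fit a gamma distribution, and map the cumulative probabilities to standard-normal quantiles.

```python
import numpy as np
from scipy import stats

def spi(monthly_precip, window=3):
    """Standardized Precipitation Index over a rolling window (sketch)."""
    p = np.convolve(monthly_precip, np.ones(window), mode="valid")  # rolling sums
    shape, loc, scale = stats.gamma.fit(p, floc=0)                  # gamma fit
    cdf = stats.gamma.cdf(p, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)  # equiprobable standard-normal transform

rng = np.random.default_rng(0)
precip = rng.gamma(shape=2.0, scale=30.0, size=120)  # 10 synthetic years, mm/month
index = spi(precip, window=3)
# By convention, SPI <= -2 is classified as extreme drought
print(int(np.sum(index <= -2.0)), "extreme-drought months")
```

A full analysis would fit the gamma distribution separately per calendar month and station; the single global fit here keeps the sketch short.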

Keywords: atmospheric circulation, drought, precipitation, SPI, the Upper Vistula Basin

Procedia PDF Downloads 74
204 Hybridization of Mathematical Transforms for Robust Video Watermarking Technique

Authors: Harpal Singh, Sakshi Batra

Abstract:

Widespread and easy access to multimedia content, and the possibility of making numerous copies without significant loss of fidelity, has created the need for digital rights management. This problem can be effectively addressed by digital watermarking technology. Digital watermarking embeds data or a special pattern (a watermark) in the multimedia content; this information can later prove ownership in case of a dispute, trace the marked document's dissemination, identify a misappropriating person, or simply inform the user about the rights-holder. The primary motive of digital watermarking is to embed the data imperceptibly and robustly in the host information. Numerous watermarking techniques have been developed to embed copyright marks or data in digital images, video, audio, and other multimedia objects. With the development of digital video-based innovations, the copyright dilemma for the multimedia industry grows. Video watermarking has been proposed in recent years to address the issue of illicit copying and distribution of videos. It is the process of embedding copyright information in video bit streams. In practice, video watermarking schemes face serious challenges compared to image watermarking schemes, such as real-time requirements in video broadcasting, the large volume of inherently redundant data between frames, and the imbalance between motion and motionless regions; they are also particularly vulnerable to attacks, for example, frame swapping, statistical analysis, rotation, noise, median filtering, and cropping. In this paper, an effective, robust, and imperceptible video watermarking algorithm is proposed based on the hybridization of powerful mathematical transforms: the Fractional Fourier Transform (FrFT), the Discrete Wavelet Transform (DWT), and Singular Value Decomposition (SVD) using a redundant wavelet. This scheme utilizes the various transforms for embedding watermarks on different layers by using hybrid systems.
For this purpose, the video frames are partitioned into color layers (RGB), and the watermark is embedded in two forms in the video frames using SVD partitioning of the watermark and DWT sub-band decomposition of the host video, to facilitate copyright safeguarding as well as reliability. The FrFT orders are used as the encryption key, which makes the watermarking method more robust against various attacks. The fidelity of the scheme is enhanced by introducing key generation and a wavelet-based key-embedding watermarking scheme. Thus, the same key is required for watermark embedding and extraction, and it must therefore be shared between the owner and the verifier via a secure channel. This paper demonstrates the performance by considering different qualitative metrics, namely peak signal-to-noise ratio, structural similarity index, and correlation values, and also applies several attacks to prove the robustness. Experimental results are presented to demonstrate that the proposed scheme can withstand a variety of video processing attacks while remaining imperceptible.
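One ingredient of such hybrid schemes, embedding a watermark's singular values into a DWT sub-band of the host frame, can be sketched as follows. This is a simplified stand-in for the paper's method (single-level Haar DWT instead of a redundant wavelet, and no FrFT encryption step), with synthetic data.

```python
import numpy as np

def haar_dwt2_ll(img):
    """Single-level 2-D Haar DWT; returns the LL (approximation) band."""
    rows = (img[0::2, :] + img[1::2, :]) / 2.0
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0

def embed(ll_band, watermark, alpha=0.05):
    """Additively embed the watermark's singular values into the host's."""
    U, s, Vt = np.linalg.svd(ll_band)
    sw = np.linalg.svd(watermark, compute_uv=False)
    return U @ np.diag(s + alpha * sw) @ Vt

rng = np.random.default_rng(1)
frame = rng.uniform(0, 255, size=(64, 64))   # one color layer of a video frame
mark = rng.uniform(0, 255, size=(32, 32))    # watermark, sized to the LL band
ll = haar_dwt2_ll(frame)
ll_marked = embed(ll, mark)

# Extraction with knowledge of the host: the singular-value shift recovers
# the watermark's singular values exactly (up to numerical precision).
recovered = (np.linalg.svd(ll_marked, compute_uv=False)
             - np.linalg.svd(ll, compute_uv=False)) / 0.05
```

Embedding in singular values rather than pixel values is what gives SVD-based schemes their robustness: singular values change little under common signal-processing attacks.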

Keywords: discrete wavelet transform, robustness, video watermarking, watermark

Procedia PDF Downloads 224
203 Changing Emphases in Mental Health Research Methodology: Opportunities for Occupational Therapy

Authors: Jeffrey Chase

Abstract:

Historically, the profession of Occupational Therapy was closely tied to the treatment of those suffering from mental illness; more recently, and especially in the U.S., the percentage of OTs identifying as working in the mental health area has declined significantly, despite the estimate that by 2020 behavioral health disorders will surpass physical illnesses as the major cause of disability worldwide. In the U.S., less than 10% of OTs identify themselves as working with the mentally ill and/or practicing in mental health settings. Such a decline has implications both for those suffering from mental illness and for the profession of Occupational Therapy. One reason cited for the decline of OT in mental health has been the limited research in the discipline addressing mental health practice. Despite significant advances in technology and growth in the field of neuroscience, major institutions and funding sources such as the National Institute of Mental Health (NIMH) have noted that research into the etiology and treatment of mental illness has met with limited success over the past 25 years. One major reason posited by NIMH is that research has been limited by how we classify individuals, that is, mostly on what is observable. A new classification system being developed by NIMH, the Research Domain Criteria (RDoC), aims to look beyond mere descriptors of disorders for common neural, genetic, and physiological characteristics that cut across multiple supposedly separate disorders. The hope is that classifying individuals along RDoC measures will improve both reliability and validity, resulting in greater advances in the field. As a result of this change, NIH and NIMH will prioritize research funding to projects using the RDoC model. Multiple disciplines across many different settings will be required for RDoC or similar classification systems to be developed.
During this shift in research methodology, OT has an opportunity to reassert itself in the research and treatment of mental illness, both by developing new ways to classify individuals more validly and by documenting the legitimacy of previously ill-defined and poorly validated disorders such as sensory integration disorder.

Keywords: global mental health and neuroscience, research opportunities for ot, greater integration of ot in mental health research, research and funding opportunities, research domain criteria (rdoc)

Procedia PDF Downloads 275
202 The Effect of Framework Structure on N2O Formation over Cu-Based Zeolites during NH3-SCR Reactions

Authors: Ghodsieh Isapour Toutizad, Aiyong Wang, Joonsoo Han, Derek Creaser, Louise Olsson, Magnus Skoglundh, Hanna HaRelind

Abstract:

Nitrous oxide (N2O), which is generally formed as a byproduct of industrial chemical processes and fossil fuel combustion, has attracted considerable attention due to its destructive role in global warming and ozone layer depletion. Among the various technologies developed for lean NOx reduction, the selective catalytic reduction (SCR) of NOx with ammonia is presently the most applied method. Therefore, the development of catalysts for efficient lean NOx reduction without forming N2O, or forming it only to a very small extent, is of crucial significance. Zeolite-based catalysts are nowadays used for this purpose owing to their remarkable catalytic performance under practical reaction conditions, including high thermal stability and high N2 selectivity. Among all zeolites, copper ion-exchanged zeolites with CHA, MFI, and BEA framework structures (SSZ-13, ZSM-5, and Beta, respectively) exhibit high hydrothermal stability, high activity, and high N2 selectivity. This work aims at investigating the effect of the zeolite framework structure on the formation of N2O under NH3-SCR reaction conditions over three Cu-based zeolites ranging from small-pore to large-pore framework structures. In the zeolite framework, Cu exists in two cationic forms that can catalyze the SCR reaction by activating NO to form NO+ and/or surface nitrate species. The nitrate species can thereafter react with NH3 to form another intermediate, ammonium nitrate, which seems to be one source of N2O formation at low temperatures. The results from in situ diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) indicate that during the NO oxidation step, mainly NO+ and nitrate species are formed on the surface of the catalysts. The intensity of the absorption peak attributed to NO+ species is higher for the Cu-CHA sample compared to the other two samples, indicating a higher stability of this species in small cages.
Furthermore, upon the addition of NH3 under standard SCR reaction conditions, absorption peaks assigned to N-H stretching and bending vibrations build up. At the same time, negative peaks evolve in the O-H stretching region, indicating blocking/replacement of surface OH-groups by NH3 and NH4+. Upon removing NH3 and adding NO2 to the inlet gas composition, the peaks in the N-H stretching and bending vibration regions show a decreasing trend in intensity, with the decrease being more pronounced for increasing pore size. This is probably due to the higher accumulation of ammonia species in the small-pore zeolite compared to the other two samples. Furthermore, it is worth noting that the ammonia surface species are strongly bound to the CHA zeolite structure, which makes them more difficult to react with NO2. To conclude, the framework structure of the zeolite seems to play an important role in the formation and reactivity of surface species relevant for the SCR process. Here we intend to discuss the connection between the zeolite structure, the surface species, and the formation of N2O during ammonia-SCR.

Keywords: fast SCR, nitrous oxide, NOx, standard SCR, zeolites

Procedia PDF Downloads 236
201 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction

Authors: Talal Alsulaiman, Khaldoun Khashanah

Abstract:

In this paper, we provide a literature survey on the artificial stock market (ASM) problem. The paper begins by exploring the complexity of the stock market and the need for ASMs. ASMs aim to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the ASM complexity. The financial market system is a complex system in which the relationship between the micro and macro levels cannot be captured analytically. Computational approaches, such as simulation, are expected to capture this connection. Agent-based simulation is the simulation technique most commonly used to build ASMs. The paper proceeds by discussing the components of the ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-aversion assumption in the construction of agents' attributes. Also, the influence of social networks on the development of agents' interactions is addressed. Network topologies such as small-world, distance-based, and scale-free networks may be utilized to outline economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized. These include approaches such as genetic algorithms, genetic programming, artificial neural networks, and reinforcement learning. Furthermore, the most common statistical properties (the stylized facts) of stocks that are used for calibration and validation of ASMs are discussed. We also review the major related previous studies and categorize the approaches they utilized. Finally, research directions and potential research questions are discussed. The research directions of ASMs may focus on the macro level by analyzing market dynamics or on the micro level by investigating the wealth distributions of the agents.
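The agent-based approach surveyed here can be illustrated with a minimal heterogeneous-agent market. This sketch is ours: the fundamentalist/chartist agent types and all parameter values are hypothetical and not drawn from any particular study.

```python
import numpy as np

def simulate(steps=500, n_fund=60, n_chart=40, fundamental=100.0, seed=0):
    """Toy market: fundamentalists pull the price toward a fundamental value,
    chartists extrapolate the last price change, and aggregate excess demand
    moves the price (micro behavior -> macro dynamics)."""
    rng = np.random.default_rng(seed)
    prices = [fundamental, fundamental]
    for _ in range(steps):
        p, p_prev = prices[-1], prices[-2]
        d_fund = n_fund * 0.05 * (fundamental - p)   # mean-reverting demand
        d_chart = n_chart * 0.10 * (p - p_prev)      # trend-following demand
        noise = rng.normal(0.0, 0.5)                 # liquidity/noise traders
        prices.append(p + 0.01 * (d_fund + d_chart) + noise)
    return np.array(prices)

prices = simulate()
returns = np.diff(np.log(prices))   # stylized facts are checked on returns
print(round(float(np.mean(prices[-100:])), 1), round(float(np.std(returns)), 4))
```

Even this two-type model produces price fluctuations around the fundamental value; calibration against stylized facts (fat tails, volatility clustering) is what the surveyed studies add on top.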

Keywords: artificial stock markets, market dynamics, bounded rationality, agent based simulation, learning, interaction, social networks

Procedia PDF Downloads 354
200 Structural Correlates of Reduced Malicious Pleasure in Huntington's Disease

Authors: Sandra Baez, Mariana Pino, Mildred Berrio, Hernando Santamaria-Garcia, Lucas Sedeno, Adolfo Garcia, Sol Fittipaldi, Agustin Ibanez

Abstract:

Schadenfreude refers to the perceiver’s experience of pleasure at another’s misfortune. This is a multidetermined emotion which can be evoked by hostile feelings and envy. The experience of Schadenfreude engages mechanisms implicated in diverse social cognitive processes. For instance, Schadenfreude involves heightened reward processing, accompanied by increased striatal engagement and it interacts with mentalizing and perspective-taking abilities. Patients with Huntington's disease (HD) exhibit reductions of Schadenfreude experience, suggesting a role of striatal degeneration in such an impairment. However, no study has directly assessed the relationship between regional brain atrophy in HD and reduced Schadenfreude. This study investigated whether gray matter (GM) atrophy in HD patients correlates with ratings of Schadenfreude. First, we compared the performance of 20 HD patients and 23 controls on an experimental task designed to trigger Schadenfreude and envy (another social emotion acting as a control condition). Second, we compared GM volume between groups. Third, we examined brain regions where atrophy might be associated with specific impairments in the patients. Results showed that while both groups showed similar ratings of envy, HD patients reported lower Schadenfreude. The latter pattern was related to atrophy in regions of the reward system (ventral striatum) and the mentalizing network (precuneus and superior parietal lobule). Our results shed light on the intertwining of reward and socioemotional processes in Schadenfreude, while offering novel evidence about their neural correlates. In addition, our results open the door to future studies investigating social emotion processing in other clinical populations characterized by striatal or mentalizing network impairments (e.g., Parkinson’s disease, schizophrenia, autism spectrum disorders).

Keywords: envy, gray matter atrophy, Huntington's disease, Schadenfreude, social emotions

Procedia PDF Downloads 335
199 Fabrication of High Energy Hybrid Capacitors from Biomass Waste-Derived Activated Carbon

Authors: Makhan Maharjan, Mani Ulaganathan, Vanchiappan Aravindan, Srinivasan Madhavi, Jing-Yuan Wang, Tuti Mariana Lim

Abstract:

There is great interest in exploiting sustainable, low-cost, renewable resources as carbon precursors for energy storage applications. Research on the development of energy storage devices has been growing rapidly due to the mismatch between power supply and demand from renewable energy sources. This paper reports the synthesis of porous activated carbon from biomass waste and evaluates its performance in supercapacitors. In this work, we employed orange peel (a waste material) as the starting material and synthesized activated carbon by pyrolysis of KOH-impregnated orange peel char at 800 °C in an argon atmosphere. The resultant orange peel-derived activated carbon (OP-AC) exhibited a high BET surface area of 1,901 m2 g-1, which is the highest surface area so far reported for orange peel-derived carbon. The pore size distribution (PSD) curve exhibits pores centered at 11.26 Å pore width, suggesting dominant microporosity. The OP-AC was studied as a positive electrode in combination with different negative electrode materials, such as pre-lithiated graphite (LiC6) and Li4Ti5O12, for making different hybrid capacitors. The lithium-ion capacitor (LIC) fabricated using OP-AC with pre-lithiated graphite delivered a high energy density of ~106 Wh kg–1. The energy density for the OP-AC||Li4Ti5O12 capacitor was ~35 Wh kg–1. For comparison purposes, OP-AC||OP-AC capacitors were studied in both aqueous (1 M H2SO4) and organic (1 M LiPF6 in EC-DMC) electrolytes, which delivered energy densities of 6.6 Wh kg-1 and 16.3 Wh kg-1, respectively. The cycling retentions obtained at a current density of 1 A g–1 were ~85.8, ~87.0, ~82.2 and ~58.8% after 2500 cycles for the OP-AC||OP-AC (aqueous), OP-AC||OP-AC (organic), OP-AC||Li4Ti5O12 and OP-AC||LiC6 configurations, respectively.
In addition, characterization studies were performed by elemental and proximate composition analysis, thermogravimetry, field emission scanning electron microscopy (FE-SEM), Raman spectroscopy, X-ray diffraction (XRD), Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy (XPS), and N2 sorption isotherms. The morphological features from FE-SEM exhibited well-developed porous structures. Two typical broad peaks observed in the XRD pattern of the synthesized carbon imply an amorphous graphitic structure. The ID/IG ratio of 0.86 in the Raman spectra indicates a high degree of graphitization in the sample. The C 1s band spectra in XPS display well-resolved peaks related to carbon atoms in various chemical environments; for instance, the characteristic binding energies appear at ~283.83, ~284.83, ~286.13, ~288.56, and ~290.70 eV, which correspond to sp2-graphitic C, sp3-graphitic C, C-O, C=O and π-π*, respectively. The characterization studies revealed the synthesized carbon to be a promising electrode material for energy storage devices. The findings open up the possibility of developing high-energy LICs from abundant, low-cost, renewable biomass waste.
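As context for the reported Wh kg-1 figures, the standard capacitor energy relation E = ½CV² shows why a wider organic-electrolyte voltage window raises energy density roughly with V². The cell parameters below are hypothetical, chosen only to illustrate the arithmetic, not taken from the paper.

```python
def energy_density_wh_per_kg(capacitance_f, voltage_v, mass_kg):
    """Gravimetric energy density from E = 1/2 * C * V^2 (1 Wh = 3600 J)."""
    energy_j = 0.5 * capacitance_f * voltage_v ** 2
    return energy_j / 3600.0 / mass_kg

# Hypothetical symmetric cell: 10 F, 0.21 g of active material.
# Aqueous electrolytes are limited to ~1 V; organic windows reach ~2.7 V.
aqueous = energy_density_wh_per_kg(capacitance_f=10.0, voltage_v=1.0, mass_kg=0.21e-3)
organic = energy_density_wh_per_kg(capacitance_f=10.0, voltage_v=2.7, mass_kg=0.21e-3)
print(round(aqueous, 1), round(organic, 1))  # V^2 scaling dominates the gap
```

In practice the organic cell's capacitance is lower than the aqueous one's, which is why the measured gap (6.6 vs. 16.3 Wh kg-1) is smaller than the pure V² scaling suggests.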

Keywords: lithium-ion capacitors, orange peel, pre-lithiated graphite, supercapacitors

Procedia PDF Downloads 243
198 Discovery of Exoplanets in Kepler Data Using a Graphics Processing Unit Fast Folding Method and a Deep Learning Model

Authors: Kevin Wang, Jian Ge, Yinan Zhao, Kevin Willis

Abstract:

Kepler has discovered over 4000 exoplanets and candidates. However, current transit planet detection techniques based on wavelet analysis and the Box Least Squares (BLS) algorithm have limited sensitivity in detecting small planets with a low signal-to-noise ratio (SNR) and planets with long periods that yield only 3-4 repeated signals over the 4-year mission lifetime. This paper presents a novel precise-period transit signal detection methodology based on a new Graphics Processing Unit (GPU) Fast Folding algorithm in conjunction with a Convolutional Neural Network (CNN) to detect low-SNR and/or long-period transit planet signals. A comparison with BLS is conducted on both simulated light curves and real data, demonstrating that the new method has higher speed, sensitivity, and reliability. For instance, the new system can detect transits with an SNR as low as three, while the performance of BLS drops off quickly around an SNR of 7. Meanwhile, the GPU Fast Folding method folds light curves 25 times faster than BLS, a significant gain that allows exoplanet detection to occur at unprecedented period precision. This new method has been tested with all known transit signals with 100% confirmation. In addition, this new method has been successfully applied to the Kepler Object of Interest (KOI) data and identified a few new Earth-sized ultra-short-period (USP) exoplanet candidates and habitable planet candidates. The results highlight the promise of GPU Fast Folding as a replacement for the traditional BLS algorithm for finding small and/or long-period habitable and Earth-sized planet candidates in transit data taken with Kepler and other space transit missions such as TESS (Transiting Exoplanet Survey Satellite) and PLATO (PLAnetary Transits and Oscillations of stars).
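The benefit of folding can be illustrated with a simple phase fold. This is a minimal stand-in for the GPU Fast Folding algorithm, run on a synthetic light curve: folding at the correct trial period stacks the repeated transits, so a dip with a per-point SNR of only 2 becomes clearly visible in the binned profile.

```python
import numpy as np

def fold(time, flux, period, n_bins=500):
    """Bin a light curve by phase at a trial period."""
    phase = (time % period) / period
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    return np.array([flux[bins == b].mean() for b in range(n_bins)])

rng = np.random.default_rng(2)
t = np.arange(0.0, 1460.0, 0.02)             # ~4 years at ~30-min cadence
flux = 1.0 + rng.normal(0.0, 0.001, t.size)  # white noise, sigma = 1000 ppm
flux[(t % 200.0) < 0.3] -= 0.002             # 200-day period, 0.3-day transit
profile = fold(t, flux, period=200.0)        # ~7 transits stack in one phase bin
print(float(profile.min()))                  # the recovered transit depth
```

A folding search repeats this over a dense grid of trial periods; the GPU algorithm's contribution is doing that grid search efficiently, after which a CNN vets the folded profiles.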

Keywords: algorithms, astronomy data analysis, deep learning, exoplanet detection methods, small planets, habitable planets, transit photometry

Procedia PDF Downloads 225
197 Radar Fault Diagnosis Strategy Based on Deep Learning

Authors: Bin Feng, Zhulin Zong

Abstract:

Radar systems are critical in modern military, aviation, and maritime operations, and their proper functioning is essential for the success of these operations. However, due to the complexity and sensitivity of radar systems, they are susceptible to various faults that can significantly affect their performance. Traditional radar fault diagnosis strategies rely on expert knowledge and rule-based approaches, which are often limited in effectiveness and require considerable time and resources. Deep learning has recently emerged as a promising approach for fault diagnosis due to its ability to automatically learn features and patterns from large amounts of data. In this paper, we propose a radar fault diagnosis strategy based on deep learning that can accurately identify and classify faults in radar systems. Our approach uses convolutional neural networks (CNNs) to extract features from radar signals and classify faults from those features. The proposed strategy is trained and validated on a dataset of measured radar signals with various types of faults. The results show that it achieves high accuracy in fault diagnosis. To further evaluate the effectiveness of the proposed strategy, we compare it with traditional rule-based approaches and other machine learning-based methods, including decision trees, support vector machines (SVMs), and random forests. The results demonstrate that our deep learning-based approach outperforms the traditional approaches in terms of accuracy and efficiency. Finally, we discuss the potential applications and limitations of the proposed strategy, as well as future research directions. Our study highlights the importance and potential of deep learning for radar fault diagnosis and suggests that it can be a valuable tool for improving the performance and reliability of radar systems.
In summary, this paper presents a radar fault diagnosis strategy based on deep learning that achieves high accuracy and efficiency in identifying and classifying faults in radar systems. The proposed strategy has significant potential for practical applications and can pave the way for further research.
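The convolutional feature-extraction idea at the heart of such an approach can be sketched in miniature. This toy is illustrative only: the paper's CNN learns its kernels from measured radar signals, whereas this sketch uses a single fixed kernel and synthetic data with an injected fault transient.

```python
import numpy as np

def conv1d(signal, kernel):
    """1-D convolution, the basic CNN building block."""
    return np.convolve(signal, kernel, mode="valid")

def fault_score(signal):
    """Slide a spike-sensitive kernel over the signal, then global-max pool."""
    edge_kernel = np.array([-1.0, 2.0, -1.0])  # responds to sharp transients
    feature_map = conv1d(signal, edge_kernel)
    return float(np.max(np.abs(feature_map)))

rng = np.random.default_rng(3)
healthy = np.sin(np.linspace(0, 20 * np.pi, 1000)) + rng.normal(0, 0.05, 1000)
faulty = healthy.copy()
faulty[500] += 3.0  # injected fault transient
print(fault_score(healthy), fault_score(faulty))
```

A real CNN stacks many such convolutions with learned kernels and a trained classifier head; the principle of turning a localized fault signature into a strong feature response is the same.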

Keywords: radar system, fault diagnosis, deep learning, radar fault

Procedia PDF Downloads 90
196 A Digital Clone of an Irrigation Network Based on Hardware/Software Simulation

Authors: Pierre-Andre Mudry, Jean Decaix, Jeremy Schmid, Cesar Papilloud, Cecile Munch-Alligne

Abstract:

In most of the Swiss Alpine regions, the availability of water resources is usually adequate even in times of drought, as evidenced by the summers of 2003 and 2018. Indeed, important natural stocks are for the moment available in the form of snow and ice, but the situation is likely to change in the future due to global and regional climate change. In addition, alpine mountain regions are areas where climate change will be felt very rapidly and with high intensity. For instance, the ice regime of these regions has already been affected in recent years, with modified monthly availability and extreme precipitation events. The current research, focusing on the municipality of Val de Bagnes, located in the canton of Valais, Switzerland, is part of a project led by the Altis company and carried out in collaboration with WSL, BlueArk Entremont, and HES-SO Valais-Wallis. In this region, water occupies a key position, notably for winter and summer tourism. Thus, multiple actors want to anticipate the future needs and availability of water on both the 2050 and 2100 horizons in order to plan modifications to the water supply and distribution networks. For those changes to be sound and efficient, good knowledge of the current water distribution networks is of utmost importance. In the present case, the drinking water network is well documented, but this is not the case for the irrigation network. Since water consumption for irrigation is ten times higher than for drinking water, data acquisition on the irrigation network is a major point in determining future scenarios. This paper first presents the instrumentation and simulation of the irrigation network using custom-designed IoT devices, which are coupled with a simulated digital clone to reduce the number of measuring locations. The developed ad-hoc IoT devices are energy-autonomous and can measure flows and pressures using industrial sensors such as calorimetric water flow meters.
Measurements are periodically transmitted using the LoRaWAN protocol over a dedicated infrastructure deployed in the municipality. The gathered values can then be visualized in real time on a dashboard, which also provides historical data for analysis. In a second phase, a digital clone of the irrigation network was modeled using EPANET, a software package for water distribution systems that performs extended-period simulations of flows and pressures in pressurized networks composed of reservoirs, pipes, junctions, and sinks. As preliminary work, only a part of the irrigation network was modeled and validated by comparison with the measurements. The simulations are carried out by imposing the consumption of water at several locations. The validation is performed by comparing the simulated pressures at different nodes with the measured ones. An accuracy of +/- 15% is observed at most of the nodes, which is acceptable for the operator of the network and demonstrates the validity of the approach. Future steps will focus on the deployment of the measurement devices on the whole network and the complete modeling of the network. Then, scenarios of future consumption will be investigated. Acknowledgment: The authors would like to thank the Swiss Federal Office for the Environment (FOEN) and the Swiss Federal Office for Agriculture (OFAG) for their financial support, and ALTIS for technical support; this project is part of the Swiss pilot program 'Adaptation aux changements climatiques'.
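The validation criterion described above can be sketched as follows. The pressure values here are hypothetical; the real comparison uses the deployed IoT measurements against the EPANET model output.

```python
import numpy as np

# Hypothetical node pressures in bar (not project data).
measured = np.array([5.2, 4.8, 6.1, 3.9, 7.4])    # from the IoT sensors
simulated = np.array([5.5, 4.0, 6.3, 4.2, 7.2])   # from the EPANET clone

# Flag nodes whose simulated pressure falls outside a +/-15% relative band.
rel_error = (simulated - measured) / measured
within_band = np.abs(rel_error) <= 0.15
print(within_band.tolist())  # nodes outside the band need model refinement
```

Nodes flagged False would prompt either a recheck of the imposed consumptions or a refinement of the pipe roughness and geometry in the model.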

Keywords: hydraulic digital clone, IoT water monitoring, LoRaWAN water measurements, EPANET, irrigation network

Procedia PDF Downloads 145
195 Outcome of Bowel Management Program in Patient with Spinal Cord Injury

Authors: Roongtiwa Chobchuen, Angkana Srikhan, Pattra Wattanapan

Abstract:

Background: Neurogenic bowel is a common condition after spinal cord injury. Most spinal cord injured patients have motor weakness and mobility impairment, which lead to constipation. Moreover, the neural pathway involved in bowel function is interrupted. Therefore, a bowel management program should be implemented in nursing care as early as possible after the onset of the disease to prevent morbidity and mortality. Objective: To study the outcome of a bowel management program for patients with spinal cord injury admitted for a rehabilitation program. Study design: Descriptive study. Setting: Rehabilitation ward in Srinagarind Hospital. Population: Patients with subacute to chronic spinal cord injury admitted to the rehabilitation ward, Srinagarind Hospital, aged over 18 years. Instrument: The neurogenic bowel dysfunction score (NBDS) was used to determine the severity of neurogenic bowel. Procedure and statistical analysis: All participants were asked to complete the demographic data: age, gender, duration of disease, and diagnosis. Individual bowel function was assessed using the NBDS at admission. The patients and caregivers were trained by nurses in the bowel management program, which consisted of diet modification, abdominal massage, digital stimulation, and stool evacuation, including medication and physical activity. The outcome of the bowel management program was assessed by the NBDS at discharge. The chi-square test was used to detect differences in the severity of neurogenic bowel between admission and discharge. Results: Sixteen spinal cord injured patients were enrolled in the study (age 45 ± 17 years; 69% male). Half of them (50%) had tetraplegia. On admission, 12.5%, 12.5%, 43.75% and 31.25% were categorized as very minor (NBDS 0-6), minor (NBDS 7-9), moderate (NBDS 10-13) and severe (NBDS 14+), respectively.
The severity of neurogenic bowel decreased significantly at discharge (56.25%, 18.75%, 18.75% and 6.25% for the very minor, minor, moderate and severe groups, respectively; p < 0.001) compared with the NBDS at admission. Conclusions: Implementation of an effective bowel program decreases the severity of neurogenic bowel in patients with spinal cord injury.
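The NBDS severity banding used above can be sketched directly from the cut-offs given in the text (very minor 0-6, minor 7-9, moderate 10-13, severe 14+); the patient scores below are hypothetical.

```python
def nbds_severity(score):
    """Map an NBDS score to its severity band (cut-offs from the text)."""
    if score <= 6:
        return "very minor"
    if score <= 9:
        return "minor"
    if score <= 13:
        return "moderate"
    return "severe"

# Hypothetical admission/discharge NBDS scores for a small cohort.
admission = [5, 8, 11, 12, 15, 14, 10, 7]
discharge = [3, 5, 6, 9, 10, 8, 4, 2]
improved = sum(
    nbds_severity(a) != nbds_severity(d) for a, d in zip(admission, discharge)
)
print(improved, "of", len(admission), "patients changed severity band")
```

Comparing the banded counts at admission and discharge, as the study does with a chi-square test, then quantifies whether the shift toward milder bands is significant.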

Keywords: neurogenic bowel, NBDS, spinal cord injury, bowel program

Procedia PDF Downloads 243
194 Joubert Syndrome and Related Disorders: A Single Center Experience

Authors: Ali Al Orf, Khawaja Bilal Waheed

Abstract:

Background and objective: Joubert syndrome (JS) is a rare, autosomal-recessive condition. Early recognition is important for management and counseling. Magnetic resonance imaging (MRI) can help in diagnosis. Therefore, we sought to evaluate the clinical presentation and MRI findings in Joubert syndrome and related disorders. Method: Genetically proven cases of Joubert syndrome and related disorders from the last 10 years were retrospectively reviewed for clinical presentation, demographic information, and magnetic resonance imaging findings. Two radiologists documented the MRI findings. The presence of hypoplasia of the cerebellar vermis with hypoplasia of the superior cerebellar peduncles, resembling the "molar tooth sign" in the midbrain, was documented. Genetic testing results were collected to identify the genes linked to the diagnoses. Results: Of 12 genetically proven JS cases, most were females (9/12), and nearly all presented with hypotonia, ataxia, developmental delay, intellectual impairment, and speech disorders. 5/12 children presented at age 1 or below. The molar tooth sign was seen in 10/12 cases. Two cases were associated with other brain findings. Most of the cases were associated with consanguineous marriage. Conclusion and discussion: The molar tooth sign is a frequent and reliable sign of JS and related disorders. Genes related to defective cilia result in malfunction in the retina, renal tubules, and neural cell migration, producing the heterogeneous syndrome complexes known as "ciliopathies." Other ciliopathies, such as Senior-Loken syndrome, Bardet-Biedl syndrome, and isolated nephronophthisis, must be considered in the differential diagnosis of JS. The main imaging findings are partial or complete absence of the cerebellar vermis, hypoplastic superior cerebellar peduncles (giving the molar tooth sign), and fourth ventricular deformity (bat-wing appearance).
Limitations: The single-center design, small sample size, and retrospective nature of the study were among its limitations.

Keywords: Joubert syndrome, magnetic resonance imaging, molar tooth sign, hypotonia

Procedia PDF Downloads 95
193 Image Processing-Based Maize Disease Detection Using Mobile Application

Authors: Nathenal Thomas

Abstract:

Corn, also known as maize (scientific name Zea mays subsp.), is a widely produced agricultural product in the food chain and in many other agricultural sectors. Corn is highly adaptable: it comes in many different types, is employed in many different industrial processes, and tolerates a wide range of agro-climatic conditions. In Ethiopia, maize is among the most widely grown crops, and small-scale corn farming may be a household's only source of food in developing nations such as Ethiopia. These facts demonstrate that the country's requirement for this crop is very high while, conversely, the crop's productivity is very low for a variety of reasons. The most damaging factor contributing to this imbalance between the crop's supply and demand is corn disease. The failure to diagnose diseases in maize plants until it is too late is one of the most important factors limiting crop output in Ethiopia. This study aids the early detection of such diseases and supports farmers during the cultivation process, directly affecting the amount of maize produced. Diseases of maize plants, such as northern leaf blight and cercospora leaf spot, have distinct, visible symptoms. This study aims to detect the most frequent and damaging maize diseases using deep learning, an efficient and widely used subset of machine learning, applied to image processing. Deep learning uses networks that can be trained from unlabeled data without supervision (unsupervised learning), simulating the processes the human brain goes through when digesting data. Its applications include speech recognition, language translation, object classification, and decision-making. The Convolutional Neural Network (CNN), also known as a ConvNet, is a deep learning architecture widely used for image classification, object detection, face recognition, and related problems.
This study uses this algorithm as the state of the art to detect maize diseases by photographing maize leaves with a mobile phone.
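The core operation of the CNNs mentioned above can be illustrated with a minimal sketch of a 2D convolution in plain Python; the 3x3 kernel and the toy leaf "image" below are hypothetical examples, not the model or data used in this study:

```python
# Minimal sketch of the 2D convolution that underlies CNN feature
# extraction. The kernel and the toy image patch are illustrative only.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# Toy 5x5 "leaf patch": a bright vertical stripe on a dark background.
patch = [
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
]
# Vertical-edge kernel: responds where intensity changes horizontally,
# the kind of low-level feature early CNN layers learn automatically.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
feature_map = convolve2d(patch, kernel)
```

A trained CNN stacks many such learned kernels with nonlinearities and pooling; frameworks implement this operation efficiently, but the arithmetic is the same.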

Keywords: CNN, zea mays subsp, leaf blight, cercospora leaf spot

Procedia PDF Downloads 74
192 Adaptor Protein APPL2 Could Be a Therapeutic Target for Improving Hippocampal Neurogenesis and Attenuating Depressant Behaviors and Olfactory Dysfunctions in Chronic Corticosterone-induced Depression

Authors: Jiangang Shen

Abstract:

Olfactory dysfunction is a common symptom accompanied by anxiety- and depressive-like behaviors in depressive patients. Chronic stress triggers hormone responses and inhibits the proliferation and differentiation of neural stem cells (NSCs) in the hippocampus and the subventricular zone (SVZ)-olfactory bulb (OB), contributing to depressive behaviors and olfactory dysfunction. However, the cellular signaling molecules that regulate chronic stress mediated olfactory dysfunction are largely unclear. Adaptor proteins containing the pleckstrin homology domain, phosphotyrosine binding domain, and leucine zipper motif (APPLs) are multifunctional adaptor proteins. Herein, we tested the hypothesis that APPL2 could inhibit hippocampal neurogenesis by affecting glucocorticoid receptor (GR) signaling, subsequently contributing to depressive and anxiety behaviors as well as olfactory dysfunctions. The major discoveries include: (1) APPL2 Tg mice had enhanced GR phosphorylation under basal conditions but no difference in plasma corticosterone (CORT) levels or GR phosphorylation under stress stimulation. (2) APPL2 Tg mice had impaired hippocampal neurogenesis and displayed depressive and anxiety behaviors. (3) The GR antagonist RU486 reversed the impaired hippocampal neurogenesis in APPL2 Tg mice. (4) APPL2 Tg mice displayed higher GR activity and less capacity for neurogenesis in the olfactory system, with lower olfactory sensitivity than WT mice. (5) APPL2 negatively regulates olfactory functions by switching the fate commitments of NSCs in adult olfactory bulbs via interaction with Notch1 signaling. Furthermore, baicalin, a natural medicinal compound, was found to be a promising agent targeting APPL2/GR signaling and promoting adult neurogenesis in APPL2 Tg mice and chronic corticosterone-induced depression mouse models. Behavioral tests revealed that baicalin had antidepressant and olfactory-improving effects.
Taken together, these findings indicate that APPL2 is a critical therapeutic target for antidepressant treatment.

Keywords: APPL2, hippocampal neurogenesis, depressive behaviors and olfactory dysfunction, stress

Procedia PDF Downloads 76
191 Time's Arrow and Entropy: Violations to the Second Law of Thermodynamics Disrupt Time Perception

Authors: Jason Clarke, Michaela Porubanova, Angela Mazzoli, Gulsah Kut

Abstract:

What accounts for our perception that time inexorably passes in one direction, from the past to the future, the so-called arrow of time, given that the laws of physics permit motion in one temporal direction to also happen in the reverse temporal direction? Modern physics says that the reason for time’s unidirectional physical arrow is the relationship between time and entropy, the degree of disorder in the universe, which is evolving from low entropy (high order; thermal disequilibrium) toward high entropy (high disorder; thermal equilibrium), in accordance with the second law of thermodynamics. Accordingly, our perception of the direction of time, from past to future, is believed to emerge from the natural evolution of entropy from low to high, with low entropy defining our notion of ‘before’ and high entropy defining our notion of ‘after’. Here we explored this proposed relationship between entropy and the perception of time’s arrow. We predicted that if the brain has some mechanism for detecting entropy, whose output feeds into processes involved in constructing our perception of the direction of time, presentation of violations of the expectation that low entropy defines ‘before’ and high entropy defines ‘after’ would alert this mechanism, leading to measurable behavioral effects, namely a disruption in duration perception. To test this hypothesis, participants were shown briefly presented (1000 ms or 500 ms) computer-generated dynamic visual events: novel 3D shapes that were seen either to evolve from whole figures into parts (low to high entropy condition) or in the reverse direction, as parts that coalesced into whole figures (high to low entropy condition). On each trial, participants were instructed to reproduce the duration of their visual experience of the stimulus by pressing and releasing the space bar.
To ensure that attention was being deployed to the stimuli, a secondary task was to report the direction of the visual event (forward or reverse motion). Participants completed 60 trials. As predicted, we found that duration reproduction was significantly longer for the high to low entropy condition compared to the low to high entropy condition (p=.03). These preliminary data suggest the presence of a neural mechanism that detects entropy, whose output is used by other processes to construct our perception of the direction of time, or time’s arrow.
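The duration-reproduction comparison described above can be sketched as a simple comparison of condition means; the reproduced durations below are invented for illustration and are not the study's data:

```python
# Illustrative analysis of a duration-reproduction task: mean reproduced
# duration per entropy condition and the difference between conditions.
# All trial values here are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

# Reproduced durations in ms for a 1000 ms stimulus (made-up numbers).
low_to_high = [900, 950, 880, 920, 910]    # whole -> parts (entropy rises)
high_to_low = [980, 1010, 990, 1000, 970]  # parts -> whole (entropy falls)

# Positive difference: high-to-low trials reproduced as longer,
# the direction of the effect reported in the abstract.
diff = mean(high_to_low) - mean(low_to_high)
```

In the actual study the comparison would be made within participants and tested with an appropriate paired statistic, which this sketch omits.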

Keywords: time perception, entropy, temporal illusions, duration perception

Procedia PDF Downloads 172
190 Machine Learning in Agriculture: A Brief Review

Authors: Aishi Kundu, Elhan Raza

Abstract:

"Necessity is the mother of invention" - the rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is food, which is satisfied through farming. Farming is one of the major revenue generators for the Indian economy: agriculture is not only a source of employment but also fulfils humans’ basic needs, making it a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing machine learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities, making their availability in the market faster and more effective. The paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.). Crop production is affected by climate change; machine learning can analyse the changing patterns and come up with a suitable approach to minimize loss and maximize yield. Machine learning algorithms/models (regression, support vector machines, Bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes that can be vital in increasing the productivity of the agricultural food industry, and the paper illustrates how machine learning is applied to agricultural sensor data. Machine learning is an evolving technology that benefits farmers by improving gains in agriculture and minimizing losses. This paper discusses how irrigation and farming management systems evolve to operate efficiently in real time.
Artificial intelligence (AI)-enabled programs are emerging that support farmers with in-depth examination of data.
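As a minimal illustration of the regression models mentioned above, an ordinary-least-squares fit of crop yield against a single predictor can be sketched in plain Python; the rainfall/yield pairs are hypothetical, not real agricultural data:

```python
# Sketch of the simplest model class named in the review (regression),
# applied to a crop-yield setting. All data points are invented.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

rainfall_mm = [100, 200, 300, 400]   # seasonal rainfall (hypothetical)
yield_t_ha = [1.0, 2.0, 3.0, 4.0]    # crop yield in t/ha (hypothetical)

a, b = fit_line(rainfall_mm, yield_t_ha)
predicted = a * 250 + b              # predicted yield at 250 mm rainfall
```

Real agricultural models use many predictors (soil, weather, sensor streams) and richer learners, but the fit-then-predict workflow is the same.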

Keywords: machine Learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting

Procedia PDF Downloads 105
189 i-Plastic: Surface and Water Column Microplastics From the Coastal North Eastern Atlantic (Portugal)

Authors: Beatriz Rebocho, Elisabete Valente, Carla Palma, Andreia Guilherme, Filipa Bessa, Paula Sobral

Abstract:

The global accumulation of plastic in the oceans is a growing problem. Plastic is transported from its source to the oceans via rivers, which are considered the main route for plastic particles from land-based sources to the ocean. These plastics undergo physical and chemical degradation, resulting in microplastics. The i-Plastic project aims to understand and predict the dispersion, accumulation and impacts of microplastics (5 mm to 1 µm) and nanoplastics (below 1 µm) in marine environments, from the tropical and temperate land-ocean interface to the open ocean, under distinct flow and climate regimes. Seasonal monitoring of the fluxes of microplastics was carried out in three coastal areas in Brazil, Portugal and Spain. The present work shows the first results of in-situ seasonal monitoring and mapping of microplastics in ocean waters between Ovar and Vieira de Leiria (Portugal), in which 43 surface water samples and 43 water column samples were collected in contrasting seasons (spring and autumn). The spring and autumn surface water samples were collected with 300 µm and 150 µm mesh neuston nets, respectively. In both campaigns, water column samples were collected using a conical net with a 150 µm mesh. The experimental procedure comprises the following steps: i) sieving through a metal sieve; ii) digestion with potassium hydroxide to remove the organic matter originating from the sample matrix. After a filtration step, the content retained on a membrane is observed under a stereomicroscope, and physical and chemical characterization (type, color, size, and polymer composition) of the microparticles is performed. Results showed that 84% and 88% of the surface water and water column samples, respectively, were contaminated with microplastics. Surface water samples collected during the spring campaign averaged 0.35 MP.m-3, while surface water samples collected during autumn recorded 0.39 MP.m-3.
Water column samples from the spring campaign averaged 1.46 MP.m-3, while those from the autumn recorded 2.54 MP.m-3. In spring, all microplastics found were fibers, predominantly black and blue. In autumn, the dominant particles found in surface waters were fibers, while in the water column fragments were dominant. In spring, the average size of surface water particles was 888 μm, while in the water column it was 1063 μm. In autumn, the average sizes of surface and water column microplastics were 1333 μm and 1393 μm, respectively. The main polymers identified by Attenuated Total Reflectance (ATR) and micro-ATR Fourier Transform Infrared (FTIR) spectroscopy across all samples were low-density polyethylene (LDPE), polypropylene (PP), polyethylene terephthalate (PET), and polyvinyl chloride (PVC). The significant difference in microplastic concentration in the water column between the two campaigns could be due to the mixing of the water masses caused by a storm that occurred that week. This work presents preliminary results, since the i-Plastic project is still in progress. These results will contribute to the understanding of the spatial and temporal dispersion and accumulation of microplastics in this marine environment.
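The reported unit (MP.m-3) follows from a simple count-per-filtered-volume calculation; the particle count and volume below are hypothetical, chosen only to reproduce the order of magnitude reported above:

```python
# Back-of-envelope computation of a microplastic concentration in
# particles per cubic metre of filtered water (MP.m-3). The count and
# the filtered volume are invented, not the project's data.

def mp_per_m3(particle_count, volume_m3):
    """Concentration = particles counted / water volume filtered."""
    return particle_count / volume_m3

# e.g. 70 particles found in 200 m3 of filtered surface water
concentration = mp_per_m3(70, 200)
```

In practice the filtered volume is estimated from the net mouth area and tow length (or a flow meter), which dominates the uncertainty of the final figure.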

Keywords: microplastics, Portugal, Atlantic Ocean, water column, surface water

Procedia PDF Downloads 80
188 Antimicrobial and Aroma Finishing of Organic Cotton Knits Using Vetiver Oil Microcapsules for Health Care Textiles

Authors: K. J. Sannapapamma, H. Malligawad Lokanath, Sakeena Naikwadi

Abstract:

Eco-friendly textiles are gaining importance among consumers and textile manufacturers in the healthcare sector due to increased environmental pollution, which leads to several health and environmental hazards. Hence, this research was designed to cultivate and develop organic cotton knits, to prepare and characterize vetiver oil microcapsules for textile finishing, and to assess the wash durability of the finished knits. Cotton of the SAHANA variety, grown under organic production systems, was processed and spun into 30 single yarn, dyed with four natural colorants (arecanut slurry, eucalyptus leaves, pomegranate rind and indigo), and the eco-dyed yarn was further used for the development of a single jersey knitted fabric. Vetiveria zizanioides is an aromatic grass traditionally used in medicine and perfumery. Vetiver essential oil was used to prepare microcapsules by the interfacial polymerization technique, which were subjected to Gas Chromatography Mass Spectrometry (GCMS), Fourier Transform Infrared Spectroscopy (FTIR), Thermo Gravimetric Analysis (TGA) and Scanning Electron Microscopy (SEM) for characterization. The knitted fabric was finished with vetiver oil microcapsules by exhaust and pad dry cure methods. The finished organic knit was assessed for antimicrobial efficiency and aroma intensity after laundering. GCMS spectral analysis showed that diethyl phthalate (28%) was the major compound found in vetiver oil, followed by isoaromadendrene epoxide (7.72%), beta-vetivenene (6.92%), solavetivone (5.58%), aromadendrene, azulene and khusimol. Bioassays showed that vetiver oil and diluted vetiver oil possessed a greater zone of inhibition against S. aureus and E. coli than coconut oil. FTIR spectra of the vetiver oil and the microcapsules showed similar peaks, viz., C-H, C=C and C꞊O stretching; additionally, the oil microcapsules showed a peak at 3331.24 cm-1 (91.14% transmittance), attributed to N-H stretching.
TGA of the oil microcapsules revealed a minimal weight loss (5.835%) at 467.09°C, compared to -3.026% for vetiver oil at 396.24°C. The microcapsules were regular and round in shape; some were spherical, and a few were surrounded by small aggregates. Irrespective of the method of application, organic cotton knits finished with microcapsules by the pad dry cure method showed a greater zone of inhibition against S. aureus and E. coli than knits finished by the exhaust method. The antimicrobial activity of the finished samples was assessed after multiple washes, which indicated that knits finished by the pad dry cure method showed a zone of inhibition even after the 20th wash and better aroma retention than knits finished by the exhaust method. Further, a group of respondents rated the 5th-washed samples as having greater aroma intensity than the other samples in both methods. Thus, vetiver microencapsulated organic cotton knits are free from hazardous chemicals and have multifunctional properties suitable for medical and healthcare textiles.

Keywords: exhaust and pad dry cure finishing, interfacial polymerization, organic cotton knits, vetiver oil microcapsules

Procedia PDF Downloads 281
187 Principal Component Analysis Combined Machine Learning Techniques on Pharmaceutical Samples by Laser Induced Breakdown Spectroscopy

Authors: Kemal Efe Eseller, Göktuğ Yazici

Abstract:

Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy technique used for material identification and analysis, with the advantages of in-situ analysis, elimination of intensive sample preparation, and micro-destructive properties for the material to be tested. LIBS delivers short laser pulses onto the material to create a plasma by exciting the material above a certain threshold. The plasma characteristics, which consist of wavelength values and intensity amplitudes, depend on the material and the experiment’s environment. In the present work, medicine samples’ spectrum profiles were obtained via LIBS. The medicine datasets include two different concentrations for each of two paracetamol-based medicines, namely Aferin and Parafon. The spectrum data of the samples were preprocessed by filling outliers based on quartiles, smoothing the spectra to eliminate noise, and normalizing both the wavelength and intensity axes. Statistical information was obtained, and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. The machine learning models were set up with two different train-test splits: 70% training – 30% test and 80% training – 20% test. Cross-validation was preferred to protect the models against overfitting, since the sample amount is small. The machine learning results for the preprocessed and raw datasets were compared for both splits. This is the first time that all supervised machine learning classification algorithms (Decision Trees, Discriminant Analysis, naïve Bayes, Support Vector Machines (SVM), k-NN (k-Nearest Neighbor), Ensemble Learning and Neural Network algorithms) have been applied to LIBS data of paracetamol-based pharmaceutical samples at different concentrations, on both preprocessed and raw datasets, in order to observe the effect of preprocessing.
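Two of the preprocessing steps named above, smoothing and normalization, can be sketched on a toy spectrum; the intensity values below are invented and the window size is an assumption, not the authors' settings:

```python
# Sketch of spectrum preprocessing: moving-average smoothing to reduce
# noise, then min-max normalization of the intensity axis. The toy
# "LIBS intensities" are hypothetical values.

def moving_average(signal, window=3):
    """Centred moving average; edge samples keep their original values."""
    half = window // 2
    out = list(signal)
    for i in range(half, len(signal) - half):
        out[i] = sum(signal[i - half:i + half + 1]) / window
    return out

def min_max_normalize(signal):
    """Rescale intensities to the [0, 1] range."""
    lo, hi = min(signal), max(signal)
    return [(v - lo) / (hi - lo) for v in signal]

spectrum = [2.0, 8.0, 3.0, 9.0, 4.0, 10.0, 5.0]  # toy intensities
smoothed = moving_average(spectrum)
normalized = min_max_normalize(smoothed)
```

The normalized, smoothed spectra would then feed into PCA and the classifiers; quartile-based outlier filling, the third step named in the abstract, is omitted here for brevity.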

Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing

Procedia PDF Downloads 87
186 Local Energy and Flexibility Markets to Foster Demand Response Services within the Energy Community

Authors: Eduardo Rodrigues, Gisela Mendes, José M. Torres, José E. Sousa

Abstract:

Following the liberalisation of the electricity sector, a progressive engagement of consumers has been considered and targeted by sector regulatory policies. With the objective of promoting market competition while protecting consumers’ interests, by transferring some of the upstream benefits to end users while reaching a fair distribution of system costs, different market models to value consumers’ demand flexibility at the energy community level are envisioned. Local Energy and Flexibility Markets (LEFM) involve stakeholders interested in providing or procuring local flexibility for community, service and market value. Under the scope of DOMINOES, a European research project supported by Horizon 2020, the local market concept developed is expected to: • Enable consumer/prosumer empowerment, by allowing them to value their demand flexibility and Distributed Energy Resources (DER); • Value local liquid flexibility to support innovative distribution grid management, e.g., local balancing and congestion management, voltage control and grid restoration; • Ease the wholesale market uptake of DER, namely the aggregation of small-scale flexible loads as Virtual Power Plants (VPPs), facilitating Demand Response (DR) service provision; • Optimise the management and local sharing of Renewable Energy Sources (RES) in Medium Voltage (MV) and Low Voltage (LV) grids, through energy transactions within an energy community; • Enhance the development of energy markets through innovative business models, compatible with ongoing policy developments, that promote easy access for retailers and other service providers to the local markets, allowing them to take advantage of communities’ flexibility to optimise their portfolios and subsequently their participation in external markets. The general concept proposed foresees a flow of market actions, technical validations, subsequent deliveries of energy and/or flexibility, and balance settlements.
Since the market operation should be dynamic and capable of addressing different requests, either prioritising balancing and prosumer services or the system’s operation, direct procurement of flexibility within the local market must also be considered. This paper aims to highlight the research on the definition of suitable DR models to be used by the Distribution System Operator (DSO), in case of technical needs, and by the retailer, mainly for portfolio optimisation and resolving imbalances. The models, to be proposed and implemented within relevant smart distribution grid and microgrid validation environments, are focused on day-ahead and intraday operation scenarios, for predictive management and near-real-time control respectively, under the DSO’s perspective. At the local level, the DSO will be able to procure flexibility in advance to tackle different grid constraints (e.g., demand peaks, forecasted voltage and current problems, and maintenance works), or during the operating day, to answer unpredictable constraints (e.g., outages, frequency deviations and voltage problems). Due to the inherent risks of their active market participation, retailers may resort to DR models to manage their portfolio, by optimising their market actions and resolving imbalances. The interaction among the market actors involved in DR activation and flexibility exchange is explained by a set of sequence diagrams for the DR modes of use from the DSO and energy provider perspectives: • DR for the DSO’s predictive management – before the operating day; • DR for the DSO’s real-time control – during the operating day; • DR for the retailer’s day-ahead operation; • DR for the retailer’s intraday operation.
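The DSO's day-ahead procurement of flexibility described above can be sketched as a simple merit-order clearing, in which the cheapest offers are accepted until the required flexibility is covered; the offers and the flexibility need below are hypothetical, and real LEFM clearing would also include the technical validation step the abstract mentions:

```python
# Toy merit-order clearing for day-ahead flexibility procurement.
# Offers are (price, quantity) pairs from prosumers/aggregators; all
# numbers are invented for illustration.

def clear_flexibility(offers, need_kw):
    """Accept offers in ascending price order until need_kw is covered.

    offers: list of (price_eur_per_kwh, quantity_kw)
    returns: list of (price, accepted_kw) in acceptance order
    """
    accepted = []
    remaining = need_kw
    for price, qty in sorted(offers):
        if remaining <= 0:
            break
        take = min(qty, remaining)   # partial acceptance is allowed
        accepted.append((price, take))
        remaining -= take
    return accepted

offers = [(0.12, 40), (0.08, 30), (0.10, 50)]  # (EUR/kWh, kW) per offer
procured = clear_flexibility(offers, need_kw=70)
```

In the intraday/real-time mode the same matching would run over shorter horizons against unpredictable constraints such as outages or voltage deviations.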

Keywords: demand response, energy communities, flexible demand, local energy and flexibility markets

Procedia PDF Downloads 99