Search results for: gamma neural oscillation

245 Nanoceutical Intervention (Nanodrug) of Neonatal Hyperbilirubinemias Compared to Conventional Phototherapy

Authors: Samir Kumar Pal

Abstract:

Background: Targeted rapid degradation of bilirubin has the potential to thwart incipient bilirubin encephalopathy. Uncontrolled hyperbilirubinemia is a potential problem in developing countries, including India, because of the lack of reliable healthcare institutes for conventional phototherapy. In India, most rural subjects dwell in the exchange limit during transport, leading to a risk of kernicterus when they arrive at the treatment centre. Thus, an alternative pharmaceutical agent is the need of the hour. Objective: Exploration of a distinct therapeutic strategy for the control of neonatal hyperbilirubinemia compared to conventional phototherapy in a clinical setting. Method: We synthesized, characterized and investigated a spinel-structured manganese citrate nanocomplex (C-Mn₃O₄ NC, the nanodrug) along with conventional phototherapy in neonatal subjects. We also observed BIND scores in order to assess neurological dysfunction. Results: Our observational study clearly reveals that the rate of decline of bilirubin in neonatal subjects receiving oral administration of the nanodrug together with phototherapy is faster than with phototherapy only. The associated neural dysfunctions were also found to be significantly lower in the case of combined therapy. Conclusion: This study demonstrates that combined therapy works better than conventional phototherapy alone for the control of hyperbilirubinemia. We observed that a significant portion of neonatal subjects was prevented from requiring blood exchange with the combined therapeutic strategy. Further compilation of a drug-safety dossier is warranted to translate this novel chemopreventive therapeutic approach to clinical settings.

Keywords: nanodrug, nanoparticle, Neonatal hyperbilirubinemia, alternative to phototherapy, redox modulation, redox medicine

Procedia PDF Downloads 38
244 Network Conditioning and Transfer Learning for Peripheral Nerve Segmentation in Ultrasound Images

Authors: Harold Mauricio Díaz-Vargas, Cristian Alfonso Jimenez-Castaño, David Augusto Cárdenas-Peña, Guillermo Alberto Ortiz-Gómez, Alvaro Angel Orozco-Gutierrez

Abstract:

Precise identification of the nerves is a crucial task performed by anesthesiologists for effective Peripheral Nerve Blocking (PNB). Anesthesiologists currently use ultrasound imaging equipment to guide the PNB and detect nervous structures. However, visual identification of the nerves from ultrasound images is difficult, even for trained specialists, due to artifacts and low contrast. Recent advances in deep learning make neural networks a potential tool for accurate nerve segmentation systems, addressing the above issues directly from raw data. The widely used U-Net network yields pixel-by-pixel segmentation by encoding the input image and decoding the attained feature vector into a semantic image. This work proposes a conditioning approach and encoder pre-training to enhance the nerve segmentation of traditional U-Nets. Conditioning is achieved by one-hot encoding of the kind of target nerve at the network input, while the pre-training considers five well-known deep networks for image classification. The proposed approach is tested on a collection of 619 ultrasound images, where the best C-UNet architecture yields an 81% Dice coefficient, outperforming the 74% of the best traditional U-Net. Results prove that pre-trained models with the conditional approach outperform their equivalent baselines by supporting the learning of new features and enriching the discriminant capability of the tested networks.
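
To illustrate the conditioning mechanism, the sketch below (assuming PyTorch; the channel count, nerve classes and layer sizes are illustrative, not taken from the paper) shows one way of injecting a one-hot encoding of the target nerve type at the network input by concatenating it with the ultrasound image as extra channels:

```python
import torch
import torch.nn as nn

class ConditionedUNetInput(nn.Module):
    """Prepends one-hot nerve-type channels to the image before a U-Net encoder."""
    def __init__(self, n_nerve_types=4, base_channels=16):
        super().__init__()
        # The first encoder block sees 1 image channel plus n one-hot channels.
        self.first_block = nn.Sequential(
            nn.Conv2d(1 + n_nerve_types, base_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.n_nerve_types = n_nerve_types

    def forward(self, image, nerve_id):
        # image: (B, 1, H, W); nerve_id: (B,) integer class of the target nerve
        b, _, h, w = image.shape
        one_hot = torch.zeros(b, self.n_nerve_types, h, w, device=image.device)
        one_hot[torch.arange(b), nerve_id] = 1.0           # broadcast the condition spatially
        x = torch.cat([image, one_hot], dim=1)             # condition as extra input channels
        return self.first_block(x)

# usage sketch
net = ConditionedUNetInput()
us_batch = torch.randn(2, 1, 128, 128)                     # fake ultrasound patches
nerve_ids = torch.tensor([0, 2])                           # illustrative nerve classes
features = net(us_batch, nerve_ids)
```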

Keywords: nerve segmentation, U-Net, deep learning, ultrasound imaging, peripheral nerve blocking

Procedia PDF Downloads 86
243 Artificial Intelligence in Bioscience: The Next Frontier

Authors: Parthiban Srinivasan

Abstract:

With recent advances in computational power and access to sufficient data in the biosciences, artificial intelligence methods are increasingly being used in drug discovery research. These methods are essentially a series of advanced statistics-based exercises that review the past to indicate the likely future. Our goal is to develop a model that accurately predicts biological activity and toxicity parameters for novel compounds. We have compiled a robust library of over 150,000 chemical compounds with different pharmacological properties from the literature and public domain databases. The compounds are stored in the simplified molecular-input line-entry system (SMILES), a commonly used text encoding for organic molecules. We utilize an automated process to generate an array of numerical descriptors (features) for each molecule. Redundant and irrelevant descriptors are eliminated iteratively. Our prediction engine is based on a portfolio of machine learning algorithms. We found the Random Forest algorithm to be a better choice for this analysis. We captured the non-linear relationships in the data and formed a prediction model with reasonable accuracy by averaging across a large number of randomized decision trees. Our next step is to apply a deep neural network (DNN) algorithm to predict the biological activity and toxicity properties. We expect the DNN algorithm to give better results and improve the accuracy of the prediction. This presentation will review these prominent machine learning and deep learning methods and our implementation protocols, and discuss the usefulness of these techniques in biomedical and health informatics.
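
As an illustration of the descriptor-plus-Random-Forest pipeline, a minimal sketch is given below; it assumes RDKit and scikit-learn and uses a tiny illustrative descriptor set and toy labels rather than the 150,000-compound library described above:

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles_list):
    """Compute a small numeric descriptor array per molecule (illustrative subset)."""
    feats, valid = [], []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:                            # skip unparsable SMILES
            continue
        feats.append([Descriptors.MolWt(mol),
                      Descriptors.MolLogP(mol),
                      Descriptors.NumHDonors(mol),
                      Descriptors.NumHAcceptors(mol),
                      Descriptors.TPSA(mol)])
        valid.append(smi)
    return np.array(feats), valid

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]      # toy molecules
labels = [0, 0, 1]                                         # toy activity labels
X, _ = featurize(smiles)
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X, labels)
print(model.predict(X))
```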

Keywords: deep learning, drug discovery, health informatics, machine learning, toxicity prediction

Procedia PDF Downloads 343
242 Design of Robust and Intelligent Controller for Active Removal of Space Debris

Authors: Shabadini Sampath, Jinglang Feng

Abstract:

With huge kinetic energy, space debris poses a major threat to astronauts’ space activities and spacecraft in orbit if a collision happens. Active removal of space debris is required in order to avoid the frequent collisions that would otherwise occur; without it, the amount of space debris will increase uncontrollably, posing a threat to the safety of the entire space system. However, the safe and reliable removal of large-scale space debris has remained a huge challenge to date. While capturing and deorbiting space debris, the space manipulator has to achieve high control precision. Due to uncertainties and unknown disturbances, coordinating the control of the space manipulator is difficult. To address this challenge, this paper focuses on developing a robust and intelligent control algorithm that controls joint movement and restricts it to the sliding manifold by reducing uncertainties. A neural network adaptive sliding mode controller (NNASMC) is applied with the objective of finding a control law such that the joint motions of the space manipulator follow the given trajectory. Computed torque control (CTC), an effective motion control strategy, is used in this paper for computing the space manipulator arm torque needed to generate the required motion. Based on the Lyapunov stability theorem, the proposed intelligent NNASMC and CTC controllers guarantee the robustness and global asymptotic stability of the closed-loop control system. Finally, the controllers used in the paper are modeled and simulated using MATLAB Simulink. The results are presented to prove the effectiveness of the proposed controller approach.
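
A minimal numerical sketch of the computed-torque-plus-sliding-mode idea is given below for a single rigid link; the plant parameters, gains and trajectory are assumptions, and the paper's neural-network adaptive term is replaced here by fixed nominal model parameters plus a smoothed switching term, purely to illustrate how the joint motion is kept on the sliding manifold:

```python
import numpy as np

# Single-link arm: I*qdd + b*qd + m*g*l*sin(q) = u   (true plant, assumed values)
I_true, b_true, m, g, l = 0.9, 0.15, 1.0, 9.81, 0.5
I_hat, b_hat = 1.0, 0.10                    # imperfect model used by the controller
lam, K, eps = 8.0, 6.0, 0.05                # sliding-surface slope, switching gain, boundary layer

def desired(t):                             # reference joint trajectory (assumed)
    return np.sin(t), np.cos(t), -np.sin(t) # q_d, qd_d, qdd_d

q, qd, dt = 0.3, 0.0, 1e-3
for step in range(int(10 / dt)):
    t = step * dt
    q_d, qd_d, qdd_d = desired(t)
    e, ed = q - q_d, qd - qd_d
    s = ed + lam * e                        # sliding variable
    # computed-torque (feedback linearisation) term + robust switching term
    u = I_hat * (qdd_d - lam * ed) + b_hat * qd + m * g * l * np.sin(q) \
        - K * np.tanh(s / eps)              # smooth sign() to limit chattering
    qdd = (u - b_true * qd - m * g * l * np.sin(q)) / I_true
    qd += qdd * dt
    q += qd * dt

print("final tracking error:", q - desired(10.0)[0])
```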

Keywords: GNC, active removal of space debris, AI controllers, MATLAB Simulink

Procedia PDF Downloads 113
241 The Importance of Visual Communication in Artificial Intelligence

Authors: Manjitsingh Rajput

Abstract:

Visual communication plays an important role in artificial intelligence (AI) because it enables machines to understand and interpret visual information, similar to how humans do. This abstract explores the importance of visual communication in AI and highlights applications such as computer vision, object recognition, image classification and autonomous systems, in which deep learning techniques and neural networks underpin visual understanding. It also discusses the challenges facing visual interfaces for AI, such as data scarcity, domain optimization, and interpretability, and examines the integration of visual communication with other modalities, such as natural language processing and speech recognition. The methodology explores the importance of visual communication in AI development and implementation, highlighting its potential to enhance the effectiveness and accessibility of AI systems, and provides a comprehensive approach to integrating visual elements into AI systems, making them more user-friendly and efficient. In conclusion, visual communication is crucial in AI systems for object recognition, facial analysis, and augmented reality, but challenges such as data quality, interpretability, and ethics must be addressed. Visual communication enhances user experience, decision-making, accessibility, and collaboration, and developers can integrate visual elements to build efficient and accessible AI systems.

Keywords: visual communication AI, computer vision, visual aid in communication, essence of visual communication.

Procedia PDF Downloads 72
240 Impact of Climate Change on Sea Level Rise along the Coastline of Mumbai City, India

Authors: Chakraborty Sudipta, A. R. Kambekar, Sarma Arnab

Abstract:

Sea-level rise is one of the most important impacts of anthropogenic climate change resulting from global warming and the melting of icebergs at the Arctic and Antarctic; the investigations carried out by various researchers, both on the Indian coast and elsewhere during the last decade, are reviewed in this paper. The paper aims to ascertain the consistency of different suggested methods in predicting near-accurate future sea level rise along the coast of Mumbai. Case studies on the East Coast, the Southern Tip, and the West and South West coasts of India have been reviewed. The Coastal Vulnerability Index of several important international places has been compared and found to match Intergovernmental Panel on Climate Change forecasts. The application of Geographic Information System mapping and the use of remote sensing technology, with both Multi Spectral Scanner and Thematic Mapper data from Landsat classified through the Iterative Self-Organizing Data Analysis Technique to arrive at high, moderate and low Coastal Vulnerability Index values at various important coastal cities, have been observed. Instead of a purely data-driven, hindcast-based forecast for Significant Wave Height, inclusion of the additional impact of sea level rise has been suggested. The efficacy and limitations of numerical methods vis-à-vis Artificial Neural Networks have been assessed, and the importance of Root Mean Square error in numerical results is mentioned. Among the computerized methods compared, forecast results obtained from MIKE 21 were opined to be more reliable than those from the Delft 3D model.

Keywords: climate change, Coastal Vulnerability Index, global warming, sea level rise

Procedia PDF Downloads 117
239 Modeling the Demand for the Healthcare Services Using Data Analysis Techniques

Authors: Elizaveta S. Prokofyeva, Svetlana V. Maltseva, Roman D. Zaitsev

Abstract:

Rapidly evolving modern data analysis technologies in healthcare play a large role in understanding the operation of the system and its characteristics. Nowadays, one of the key tasks in urban healthcare is to optimize resource allocation. Thus, the application of data analysis in medical institutions to solve optimization problems determines the significance of this study. The purpose of this research was to establish the dependence between the indicators of the effectiveness of a medical institution and its resources. Hospital discharges by diagnosis, hospital days of in-patients and in-patient average length of stay were selected as the performance indicators and the demand of the medical facility. The hospital beds by type of care, medical technology (magnetic resonance tomography, gamma cameras, angiographic complexes and lithotripters) and physicians characterized the resource provision of medical institutions for the developed models. The data source for the research was the open database of the statistical service Eurostat. The choice of this source is due to the fact that the database contains complete and open information necessary for research tasks in the field of public health; in addition, the statistical database has a user-friendly interface that allows analytical reports to be built quickly. The study provides information on 28 European countries for the period from 2007 to 2016. For all countries included in the study, with the most accurate and complete data for the period under review, predictive models were developed based on historical panel data. An attempt to improve the quality and the interpretation of the models was made by cluster analysis of the investigated set of countries. The main idea was to assess the similarity of the joint behavior of the variables throughout the time period under consideration, to identify groups of similar countries and to construct separate regression models for them. Therefore, the original time series were used as the objects of clustering. The hierarchical agglomerative algorithm k-medoids was used. The sampled objects were used as the centers of the clusters obtained, since determining the centroid when working with time series involves additional difficulties. The number of clusters was chosen using the silhouette coefficient. After the cluster analysis, it was possible to significantly improve the predictive power of the models: for example, in one of the clusters, the MAPE error was only 0.82%, which makes it possible to conclude that this forecast is highly reliable in the short term. The predicted values obtained from the developed models have a relatively low level of error and can be used to make decisions on the resource provision of the hospital with medical personnel. The research reveals strong dependencies between the demand for medical services and the modern medical equipment variable, which highlights the importance of the technological component for the successful development of a medical facility. Currently, data analysis has huge potential, which makes it possible to significantly improve health services. Medical institutions that are the first to introduce these technologies will certainly have a competitive advantage.
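
A minimal sketch of the silhouette-guided k-medoids step is given below; it uses a plain alternating k-medoids (not necessarily the exact variant used in the study) on toy data standing in for the Eurostat country time series:

```python
import numpy as np
from sklearn.metrics import silhouette_score

def k_medoids(X, k, n_iter=50, seed=0):
    """Plain alternating k-medoids on Euclidean distances (illustrative, not PAM-exact)."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    labels = np.argmin(d[:, medoids], axis=1)
    for _ in range(n_iter):
        labels = np.argmin(d[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members):
                # medoid = member minimising total distance to the other members
                new_medoids[c] = members[np.argmin(d[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels

# rows = countries, columns = yearly indicator values (toy stand-in for Eurostat series)
X = np.random.default_rng(1).normal(size=(28, 10)).cumsum(axis=1)
scores = {k: silhouette_score(X, k_medoids(X, k)) for k in range(2, 7)}
best_k = max(scores, key=scores.get)          # silhouette-based choice of cluster count
print(best_k, round(scores[best_k], 3))
```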

Keywords: data analysis, demand modeling, healthcare, medical facilities

Procedia PDF Downloads 130
238 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline

Authors: Kenan Morani, Esra Kaya Ayana

Abstract:

This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification part, with the addition of optional slice removal techniques following the segmentation part. In this work, a batch normalization was added to the original UNet model to produce lighter and better localization, which is then utilized to build a full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of their performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, resulting in a higher dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating the efficiency of predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score compared to other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices and rigorously annotated for COVID-19 detection, was utilized for classification. The proposed pipeline outperformed many other alternatives on the dataset.
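
The exact network is not reproduced here, but the sketch below (assuming PyTorch) illustrates the general idea of adding batch normalization to the convolutional blocks of a U-Net encoder:

```python
import torch
import torch.nn as nn

class DoubleConvBN(nn.Module):
    """U-Net style double convolution with batch normalization after each conv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),          # added normalisation for a lighter, more stable encoder
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# encoder stage sketch: conv block followed by downsampling, as in a standard U-Net
stage = nn.Sequential(DoubleConvBN(1, 32), nn.MaxPool2d(2))
ct_slice = torch.randn(1, 1, 256, 256)       # one CT slice (illustrative size)
print(stage(ct_slice).shape)                 # -> torch.Size([1, 32, 128, 128])
```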

Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation

Procedia PDF Downloads 115
237 Intelligent Fault Diagnosis for the Connection Elements of Modular Offshore Platforms

Authors: Jixiang Lei, Alexander Fuchs, Franz Pernkopf, Katrin Ellermann

Abstract:

Within the Space@Sea project, funded by the Horizon 2020 program, an island consisting of multiple platforms was designed. The platforms are connected by ropes and fenders. The connection is critical with respect to the safety of the whole system. Therefore, fault detection systems are investigated, which could detect early warning signs of a possible failure in the connection elements. Previously, a model-based method, the Extended Kalman Filter, was developed to detect the reduction of rope stiffness. This method detected several types of faults reliably, but some types of faults were much more difficult to detect. Furthermore, the model-based method is sensitive to environmental noise: when the wave height is low, a long time is needed to detect a fault and the accuracy is not always satisfactory. In this sense, it is necessary to develop a more accurate and robust technique that can detect all rope faults under a wide range of operational conditions. Building on this work on the Space@Sea design, we introduce a fault diagnosis method based on deep neural networks. Our method can not only detect rope degradation by using the acceleration data from each platform but also estimate the contributions of the specific acceleration sensors using methods from explainable AI. In order to adapt to different operational conditions, the domain adaptation technique DANN is applied. The proposed model can accurately estimate rope degradation under a wide range of environmental conditions and help users understand the relationship between the output and the contributions of each acceleration sensor.
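
A generic sketch of the DANN gradient-reversal mechanism is shown below (assuming PyTorch); the feature sizes and heads are illustrative and do not correspond to the network used in the project:

```python
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass, gradient multiplied by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(64, 32), nn.ReLU())   # e.g. acceleration features
fault_head = nn.Linear(32, 2)              # rope degraded / intact
domain_head = nn.Linear(32, 2)             # source / target operating condition

x = torch.randn(8, 64)
feat = feature_extractor(x)
fault_logits = fault_head(feat)                            # trained normally
domain_logits = domain_head(GradReverse.apply(feat, 1.0))  # adversarial via reversed gradients
```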

Keywords: fault diagnosis, deep learning, domain adaptation, explainable AI

Procedia PDF Downloads 161
236 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the Deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction which penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive variant among the CV methods, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV and utilise the existing MCMC results, avoiding expensive computation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly; in contrast, the larger weights are replaced by their modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect the goodness-of-fit in an absolute sense, the differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student’s t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered to be approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles for the models, conditional on equal posterior variances in lppds, were observed. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
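
A minimal sketch of the IS-LOO and TIS-LOO computations from a matrix of pointwise log predictive densities is given below (NumPy only); the truncation rule shown is a commonly used one and is an assumption here, and PSIS smoothing is not implemented:

```python
import numpy as np

def is_and_tis_loo(loglik):
    """loglik: (S draws, N observations) pointwise log predictive densities.
    Returns IS-LOO and truncated-IS-LOO estimates of sum_i log p(y_i | y_-i)."""
    S, N = loglik.shape
    raw_w = np.exp(-loglik)                     # raw weights = 1 / p(y_i | theta_s)
    # IS-LOO: weighted average of the predictive densities with these weights
    # (which reduces to the harmonic mean of p(y_i | theta_s) over draws)
    p_is = (raw_w * np.exp(loglik)).sum(axis=0) / raw_w.sum(axis=0)
    # TIS-LOO: truncate large weights at sqrt(S) * mean(w) before averaging (assumed rule)
    w_bar = raw_w.mean(axis=0, keepdims=True)
    w_trunc = np.minimum(raw_w, np.sqrt(S) * w_bar)
    p_tis = (w_trunc * np.exp(loglik)).sum(axis=0) / w_trunc.sum(axis=0)
    return np.log(p_is).sum(), np.log(p_tis).sum()

# toy posterior draws for a normal model, just to exercise the function
rng = np.random.default_rng(0)
y = rng.normal(size=50)
mu_draws = rng.normal(scale=0.1, size=(2000, 1))
loglik = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu_draws) ** 2
print(is_and_tis_loo(loglik))
```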

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 377
235 Fight against Money Laundering with Optical Character Recognition

Authors: Saikiran Subbagari, Avinash Malladhi

Abstract:

Anti-Money Laundering (AML) regulations are designed to prevent money laundering and terrorist financing activities worldwide. Financial institutions around the world are legally obligated to identify, assess and mitigate the risks associated with money laundering and report any suspicious transactions to governing authorities. With increasing volumes of data to analyze, financial institutions seek to automate their AML processes. With the rise of financial crimes, optical character recognition (OCR), in combination with machine learning (ML) algorithms, serves as a crucial tool for automating AML processes by extracting the data from documents and identifying suspicious transactions. In this paper, we examine the utilization of OCR for AML and delve into various OCR techniques employed in AML processes. These techniques encompass template-based, feature-based and neural network-based approaches, natural language processing (NLP), hidden Markov models (HMMs), conditional random fields (CRFs), binarization, pattern matching and the stroke width transform (SWT). We evaluate each technique, discussing their strengths and constraints. We also emphasize how OCR can improve the accuracy of customer identity verification by comparing the extracted text with the Office of Foreign Assets Control (OFAC) watchlist, and discuss how OCR helps to overcome language barriers in AML compliance. Finally, we address the implementation challenges that OCR-based AML systems may face and offer recommendations for financial institutions based on data from previous research studies, which illustrate the effectiveness of OCR-based AML.
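
A minimal sketch of the OCR-plus-watchlist screening step is given below; it assumes pytesseract (with a local Tesseract installation) and uses a tiny illustrative watchlist and a simple string-similarity threshold rather than any production screening logic:

```python
import difflib
import pytesseract                      # assumes Tesseract OCR is installed locally
from PIL import Image

WATCHLIST = ["JOHN DOE", "ACME TRADING LLC"]        # illustrative stand-in for an OFAC-style list

def screen_document(image_path, threshold=0.85):
    """OCR a scanned document and flag lines that closely match watchlist entries."""
    text = pytesseract.image_to_string(Image.open(image_path)).upper()
    hits = []
    for entry in WATCHLIST:
        for line in text.splitlines():
            score = difflib.SequenceMatcher(None, entry, line.strip()).ratio()
            if score >= threshold:
                hits.append((entry, line.strip(), round(score, 2)))
    return hits

# usage sketch
# print(screen_document("customer_id_scan.png"))
```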

Keywords: anti-money laundering, compliance, financial crimes, fraud detection, machine learning, optical character recognition

Procedia PDF Downloads 124
234 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network

Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin

Abstract:

The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoue Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoue Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoue Lake up to a forecast horizon of t+10 days. Performance metrics such as the Root Mean Square Error (RMSE), the coefficient of correlation (R²), the Nash-Sutcliffe Efficiency (NSE), and the Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days. The values of these metrics remain stable for forecast horizons of t+1 day, t+2 days, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoue Lake basin. Based on the evaluation indices used to assess the model's performance for the appropriate forecast horizon of water level in the Nokoue Lake basin, the forecast horizon of t+3 days is chosen for predicting future daily water levels.
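
A minimal sketch of the LSTM forecasting setup is given below (assuming TensorFlow/Keras); the windowing choices and the synthetic series are illustrative stand-ins for the daily Nokoue Lake water-level data:

```python
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=30, horizon=3):
    """Supervised pairs: `lookback` past daily levels -> level `horizon` days ahead."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback + horizon - 1])
    return np.array(X)[..., None], np.array(y)

# toy stand-in for the daily water-level series
levels = np.sin(np.linspace(0, 20, 1000)) + 0.05 * np.random.default_rng(0).normal(size=1000)
X, y = make_windows(levels)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))     # MSE; RMSE/MAE/NSE would be computed similarly
```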

Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake

Procedia PDF Downloads 44
233 Gas Systems of the Amadeus Basin, Australia

Authors: Chris J. Boreham, Dianne S. Edwards, Amber Jarrett, Justin Davies, Robert Poreda, Alex Sessions, John Eiler

Abstract:

The origins of natural gases in the Amadeus Basin have been assessed using molecular and stable isotope (C, H, N, He) systematics. A dominant end-member thermogenic, oil-associated gas is considered for the Ordovician Pacoota−Stairway sandstones of the Mereenie gas and oil field. In addition, an abiogenic end-member is identified in the latest Proterozoic lower Arumbera Sandstone of the Dingo gasfield, being most likely associated with radiolysis of methane with polymerisation to wet gases. The latter source assignment is based on a similar geochemical fingerprint derived from laboratory gamma irradiation experiments on methane. A mixed gas source is considered for the Palm Valley gasfield in the Ordovician Pacoota Sandstone. Gas wetness (%∑C₂−C₅/∑C₁−C₅) decreases in the order Mereenie (19.1%) > Palm Valley (9.4%) > Dingo (4.1%). Non-produced gases at Magee-1 (23.5%; Late Proterozoic Heavitree Quartzite) and Mount Kitty-1 (18.9%; Paleo-Mesoproterozoic fractured granitoid basement) are very wet. Methane thermometry based on clumped isotopes of methane (¹³CDH₃) is consistent with an abiogenic origin for the Dingo gas field, with a methane formation temperature of 254°C. However, the low methane formation temperature of 57°C for the Mereenie gas suggests either a mixed thermogenic-biogenic methane source or that there is no thermodynamic equilibrium between the methane isotopomers. The shallow reservoir depth and present-day formation temperature below 80°C would support microbial methanogenesis, but there is no accompanying alteration of the C- and H-isotopes of the wet gases and CO₂ that is typically associated with biodegradation. The Amadeus Basin gases show low to extremely high inorganic gas contents. Carbon dioxide is low in abundance (< 1% CO₂) and becomes increasingly depleted in ¹³C from the Palm Valley (av. δ¹³C 0‰) to the Mereenie (av. δ¹³C -6.6‰) and Dingo (av. δ¹³C -14.3‰) gas fields. Although the wide range in carbon isotopes for CO₂ is consistent with multiple origins from inorganic to organic inputs, the most likely process is fluid-rock alteration with enrichment in ¹²C in the residual gaseous CO₂ accompanying progressive carbonate precipitation within the reservoir. Nitrogen ranges from low to moderate abundance (1.7−9.9% N₂; Palm Valley av. 1.8%; Mereenie av. 9.1%; Dingo av. 9.4%) to extremely high abundance in Magee-1 (43.6%) and Mount Kitty-1 (61.0%). The nitrogen isotopes for the production gases (δ¹⁵N = -3.0‰ for Mereenie, -3.0‰ for Palm Valley and -7.1‰ for Dingo) suggest mixed inorganic and thermogenic nitrogen sources in all cases. Helium (He) abundance varies over a wide range from a low of 0.17% to one of the world’s highest at 9% (Mereenie av. 0.23%; Palm Valley av. 0.48%; Dingo av. 0.18%; Magee-1 6.2%; Mount Kitty-1 9.0%). Complementary helium isotopes (R/Ra = (³He/⁴He)sample / (³He/⁴He)air) range from 0.013 to 0.031 R/Ra, indicating a dominant crustal origin for helium with a sustained input of radiogenic ⁴He from the decomposition of U- and Th-bearing minerals, effectively diluting any original mantle helium input. The high helium content in the non-produced gases compared to the shallower producing wells most likely reflects their stratigraphic position relative to the Tonian Bitter Springs Group, with the former below and the latter above an effective carbonate-salt seal.
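
The wetness parameter quoted above follows directly from the stated definition; a small sketch with an illustrative (not measured) composition:

```python
# Gas wetness as defined in the text: %(C2..C5) / (C1..C5), from mole-fraction composition.
def gas_wetness(c1, c2, c3, c4, c5):
    wet = c2 + c3 + c4 + c5
    return 100.0 * wet / (c1 + wet)

# illustrative composition (mole %), not measured values from the basin
print(round(gas_wetness(c1=80.0, c2=10.0, c3=5.0, c4=3.0, c5=1.0), 1))
```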

Keywords: Amadeus gas, thermogenic, abiogenic, C, H, N, He isotopes

Procedia PDF Downloads 180
232 Chaotic Electronic System with Lambda Diode

Authors: George Mahalu

Abstract:

The Chua diode has been configured over time in various ways, using electronic structures such as operational amplifiers (OAs) or devices with gas or semiconductors. When discussing the use of semiconductor devices, tunnel diodes (Esaki diodes) are most often considered and, more recently, transistorized configurations such as lambda diodes. The work proposed here uses in the modeling a lambda-diode-type configuration consisting of two Junction Field Effect Transistors (JFETs). The original scheme is created in the MULTISIM electronic simulation environment and is analyzed in order to identify the conditions for the appearance of the evolutionary unpredictability specific to nonlinear dynamic systems with chaos-induced behavior. The chaotic deterministic oscillator is of the autonomous type, a fact that places it in the class of Chua-type oscillators, the only significant and most important difference being the presence of a nonlinear device like the structure mentioned above. The chaotic behavior is identified both by means of strange attractor-type trajectories visible during the simulation and by highlighting the hypersensitivity of the system to small variations of one of the input parameters. The results obtained through simulation and the conclusions drawn are useful in further research on ways to implement such constructive electronic solutions in theoretical and practical applications related to modern small-signal amplification structures, to systems for encoding and decoding messages through various modern means of communication, as well as to new structures that can be imagined both in modern neural networks and in the physical implementation of some requirements imposed by current research, with the aim of obtaining practically usable solutions in quantum computing and quantum computers.
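
The MULTISIM lambda-diode model itself is not reproduced here; the sketch below simulates a generic Chua-type autonomous oscillator with a piecewise-linear nonlinearity standing in for the lambda-diode branch, using textbook parameters, to illustrate the double-scroll attractor and the sensitivity to initial conditions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless Chua-type oscillator; f(x) is the piecewise-linear negative-resistance
# characteristic standing in for the nonlinear (lambda-diode-like) element.
alpha, beta, m0, m1 = 15.6, 28.0, -1.143, -0.714   # textbook parameter set

def f(x):
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))

def chua(t, s):
    x, y, z = s
    return [alpha * (y - x - f(x)), x - y + z, -beta * y]

sol = solve_ivp(chua, (0, 100), [0.1, 0.0, 0.0], max_step=0.01)
x, y = sol.y[0], sol.y[1]          # plotting x against y reveals the double-scroll attractor
print(x.min(), x.max())

# sensitivity check: a tiny change in the initial condition leads to diverging trajectories
sol2 = solve_ivp(chua, (0, 100), [0.1 + 1e-6, 0.0, 0.0], max_step=0.01)
print(abs(sol.y[0, -1] - sol2.y[0, -1]))
```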

Keywords: chaos, lambda diode, strange attractor, nonlinear system

Procedia PDF Downloads 66
231 Designing and Simulation of the Rotor and Hub of the Unmanned Helicopter

Authors: Zbigniew Czyz, Ksenia Siadkowska, Krzysztof Skiba, Karol Scislowski

Abstract:

Today’s progress in rotorcraft is mostly associated with an optimization of aircraft performance achieved by active and passive modifications of main rotor assemblies and the tail propeller. The key task is to improve their performance and the hover quality factor for rotors, but without a change in specific fuel consumption. One of the tasks in improving the helicopter is an active optimization of the main rotor providing for the flight stages, i.e., ascent, flight, and descent. An active interference with the airflow around the rotor blade section can significantly change the characteristics of the aerodynamic airfoil. The efficiency of actuator systems modifying aerodynamic coefficients in the current solutions is relatively high and significantly affects the increase in strength. The solution to actively change aerodynamic characteristics assumes a periodic change of geometric features of blades depending on flight stages. Changing geometric parameters of blade warping enables an optimization of main rotor performance depending on helicopter flight stages. Structurally, an adaptation of shape memory alloys does not significantly affect rotor blade fatigue strength, which contributes to reduced costs associated with adapting the system to the existing blades, and gains from better performance can easily amortize such a modification and improve the profitability of such a structure. In order to obtain quantitative and qualitative data to solve this research problem, a number of numerical analyses have been necessary. The main problem is a selection of design parameters of the main rotor and a preliminary optimization of its performance to improve the hover quality factor. This design concept assumes a three-bladed main rotor with a chord of 0.07 m and radius R = 1 m. The rotor speed is a calculated parameter of the optimization function. To specify the initial distribution of geometric warping, special software has been created that uses a numerical blade element method respecting dynamic design features such as fluctuations of a blade in its joints. A number of performance analyses as a function of rotor speed, forward speed, and altitude have been performed. The calculations were carried out for the full model assembly. This approach makes it possible to observe the behavior of components and their mutual interaction resulting from the acting forces. The key elements of each rotor are the shaft, the hub and the pins holding the joints and blade yokes. These components are exposed to the highest loads. As a result of the analysis, the safety factor was determined at the level of k > 1.5, which gives grounds to obtain certification for the strength of the structure. The joint rotor has numerous moving elements in its structure. Despite the high safety factor, the places with the highest stresses, where signs of wear and tear may appear, have been indicated. The numerical analysis carried out showed that the most loaded element is the pin connecting the modular bearing of the blade yoke with the element of the horizontal oscillation joint. The stresses in this element result in a safety factor of k = 1.7. The other analysed rotor components have a safety factor of more than 2 and, in the case of the shaft, this factor is more than 3. However, it must be remembered that the structure is only as strong as its weakest element. The designed rotor for unmanned aerial vehicles, adapted to work with blades incorporating intelligent materials in their structure, meets the requirements for certification testing. Acknowledgement: This work has been financed by the Polish National Centre for Research and Development under the LIDER program, Grant Agreement No. LIDER/45/0177/L-9/17/NCBR/2018.
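
A simplified blade-element/momentum hover estimate for a rotor with the stated geometry (three blades, chord 0.07 m, R = 1 m) is sketched below; the lift-curve slope, collective pitch and rotor speed are assumptions, and the closed-form inflow is the standard untwisted-blade hover result rather than the dynamic blade-element software described above:

```python
import numpy as np

# Blade-element/momentum hover sketch for the rotor described above:
# three blades, chord 0.07 m, radius 1.0 m. Lift slope, pitch and rotor speed are assumptions.
B, c, R = 3, 0.07, 1.0
a0 = 5.7                        # lift-curve slope per rad (assumed)
theta0 = np.radians(8.0)        # collective pitch, untwisted blade (assumed)
omega = 150.0                   # rotor speed, rad/s (assumed design variable)
rho = 1.225
sigma = B * c / (np.pi * R)     # rotor solidity

r = np.linspace(0.1, 1.0, 200)  # non-dimensional radial stations (root cut-out at 0.1R)
# uniform-annulus BEMT inflow for hover (closed form per station, untwisted blade)
lam = (sigma * a0 / 16.0) * (np.sqrt(1.0 + 32.0 * theta0 * r / (sigma * a0)) - 1.0)
dCT = 0.5 * sigma * a0 * (theta0 * r**2 - lam * r)      # thrust coefficient distribution
CT = np.trapz(dCT, r)
T = CT * rho * np.pi * R**2 * (omega * R)**2
print(f"CT = {CT:.4f}, thrust approx. {T:.0f} N")
```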

Keywords: main rotor, rotorcraft aerodynamics, shape memory alloy, materials, unmanned helicopter

Procedia PDF Downloads 134
230 A Review on the Hydrologic and Hydraulic Performances in Low Impact Development-Best Management Practices Treatment Train

Authors: Fatin Khalida Abdul Khadir, Husna Takaijudin

Abstract:

The bioretention system is one of the alternatives to conventional stormwater management within the low impact development (LID) strategy for best management practices (BMPs). Incorporating both filtration and infiltration, initial research on bioretention systems has shown that this practice extensively decreases runoff volumes and peak flows. The LID-BMP treatment train is one of the latest LID-BMPs for stormwater treatment in urbanized watersheds. The treatment train is developed to overcome the drawbacks that arise from conventional LID-BMPs and aims to enhance the performance of the existing practices. In addition, it is also used to improve treatment in both water quality and water quantity controls, as well as maintaining the natural hydrology of an area despite the current massive developments. The objective of this paper is to review the effectiveness of conventional LID-BMPs on hydrologic and hydraulic performance through column studies in different configurations. Previous studies on the applications of the LID-BMP treatment train, developed to overcome the drawbacks of conventional LID-BMPs, are reviewed and used as guidelines for implementing this system in Universiti Teknologi Petronas (UTP) and elsewhere. Reviews of the analyses of hydrologic and hydraulic performance conducted using the artificial neural network (ANN) model are carried out for utilization in this study. In this study, the role of the LID-BMP treatment train is tested by arranging bioretention cells in series, to be implemented for controlling floods occurring currently and in the future, when construction of the new buildings in UTP is completed. A summary of the research findings on the performance of the system is provided, which includes the proposed modifications to the designs.

Keywords: bioretention system, LID-BMP treatment train, hydrological and hydraulic performance, ANN analysis

Procedia PDF Downloads 107
229 Injection of Bradykinin in Femoral Artery Elicits Cardiorespiratory Reflexes Involving Perivascular Afferents in Rat Models

Authors: Sanjeev K. Singh, Maloy B. Mandal, Revand R.

Abstract:

The physiology of the baroreceptors and chemoreceptors present in the large blood vessels of the heart is well known with respect to the regulation of cardiorespiratory functions. Since large blood vessels and peripheral blood vessels are of the same mesodermal origin, involvement of the latter in the regulation of the cardiorespiratory system is expected. The role of perivascular nerves in mediating cardiorespiratory alterations produced after intra-arterial injection of a nociceptive agent (bradykinin) was examined in urethane-anesthetized male rats. Respiratory frequency, blood pressure, and heart rate were recorded for 30 min after the retrograde injection of bradykinin/saline into the femoral artery. In addition, paw edema was determined and water content was expressed as a percentage of wet weight. Injection of bradykinin produced immediate tachypnoeic, hypotensive and bradycardic responses of short latency (5-8 s), favoring the neural mechanisms involved. Injection of an equal volume of saline did not produce any responses and served as a time-matched control. Paw edema was observed in the ipsilateral hind limb. Pretreatment with diclofenac sodium significantly attenuated the bradykinin-induced responses and also blocked the paw edema. Ipsilateral femoral and sciatic nerve sectioning attenuated the bradykinin-induced responses significantly, indicating that the responses originate from the local vascular bed. Administration of bradykinin in a segment of an artery produced reflex cardiorespiratory changes by stimulating the perivascular nociceptors, involving prostaglandins. This is a novel study exhibiting the role of peripheral blood vessels in the regulation of the cardiorespiratory system.

Keywords: vasosensory reflex, cardiorespiratory changes, nociceptive agent, bradykinin, VR1 receptors

Procedia PDF Downloads 129
228 Current Methods for Drug Property Prediction in the Real World

Authors: Jacob Green, Cecilia Cabrera, Maximilian Jakobs, Andrea Dimitracopoulos, Mark van der Wilk, Ryan Greenhalgh

Abstract:

Predicting drug properties is key in drug discovery to enable de-risking of assets before expensive clinical trials and to find highly active compounds faster. Interest from the machine learning community has led to the release of a variety of benchmark datasets and proposed methods. However, it remains unclear for practitioners which method or approach is most suitable, as different papers benchmark on different datasets and methods, leading to varying conclusions that are not easily compared. Our large-scale empirical study links together numerous earlier works on different datasets and methods, thus offering a comprehensive overview of the existing property classes, datasets, and their interactions with different methods. We emphasise the importance of uncertainty quantification and the time and, therefore, cost of applying these methods in the drug development decision-making cycle. To the best of the authors' knowledge, the optimal approach varies depending on the dataset, and engineered features with classical machine learning methods often outperform deep learning. Specifically, QSAR datasets are typically best analysed with classical methods such as Gaussian Processes, while ADMET datasets are sometimes better described by trees or deep learning methods such as Graph Neural Networks or language models. Our work highlights that practitioners do not yet have a straightforward, black-box procedure to rely on and sets a precedent for creating practitioner-relevant benchmarks. Deep learning approaches must be proven on these benchmarks to become the practical method of choice in drug property prediction.
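
A minimal sketch of the classical Gaussian Process baseline on engineered features is given below (assuming scikit-learn); the features and labels are toy stand-ins for molecular descriptors and QSAR activities:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# toy data standing in for engineered molecular features and a QSAR activity label
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                     # e.g. physicochemical descriptors
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:150], y[:150])

mean, std = gp.predict(X[150:], return_std=True)  # predictive mean and uncertainty estimate
print(float(np.mean(np.abs(mean - y[150:]))), float(std.mean()))
```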

Keywords: activity (QSAR), ADMET, classical methods, drug property prediction, empirical study, machine learning

Procedia PDF Downloads 57
227 Implementation of Synthesis and Quality Control Procedures of ¹⁸F-Fluoromisonidazole Radiopharmaceutical

Authors: Natalia C. E. S. Nascimento, Mercia L. Oliveira, Fernando R. A. Lima, Leonardo T. C. do Nascimento, Marina B. Silveira, Brigida G. A. Schirmer, Andrea V. Ferreira, Carlos Malamut, Juliana B. da Silva

Abstract:

Tissue hypoxia is a common characteristic of solid tumors, leading to decreased sensitivity to radiotherapy and chemotherapy. In the clinical context, tumor hypoxia assessment employing the positron emission tomography (PET) tracer ¹⁸F-fluoromisonidazole ([¹⁸F]FMISO) is helpful for physicians for planning and adjusting therapy. The aim of this work was to implement the synthesis of [¹⁸F]FMISO in a TRACERlab® MXFDG module and to establish the quality control procedure. [¹⁸F]FMISO was synthesized at Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN/Brazil) using an automated synthesizer (TRACERlab® MXFDG, GE) adapted for the production of [¹⁸F]FMISO. The FMISO chemical standard was purchased from ABX. ¹⁸O-enriched water was acquired from the Center of Molecular Research. Reagent kits containing eluent solution, acetonitrile, ethanol, 2.0 M HCl solution, buffer solution, water for injections and the [¹⁸F]FMISO precursor (dissolved in 2 ml acetonitrile) were purchased from ABX. The [¹⁸F]FMISO samples were purified by the Solid Phase Extraction method. The quality requirements of [¹⁸F]FMISO are established in the European Pharmacopeia. According to that reference, quality control of [¹⁸F]FMISO should include appearance, pH, radionuclidic identity and purity, radiochemical identity and purity, chemical purity, residual solvents, bacterial endotoxins, and sterility. The duration of the synthesis process was 53 min, with a radiochemical yield of (37.00 ± 0.01) % and a specific activity of more than 70 GBq/µmol. The syntheses were reproducible and showed satisfactory results. In relation to the quality control analysis, the samples were clear and colorless at pH 6.0. The emission spectrum, measured by using a High-Purity Germanium Detector (HPGe), presented a single peak at 511 keV, and the half-life, determined by the decay method in an activimeter, was (111.0 ± 0.5) min, indicating no presence of radioactive contaminants other than the desired radionuclide (¹⁸F). The samples showed a concentration of tetrabutylammonium (TBA) < 50 μg/mL, assessed by visual comparison to a TBA standard applied on the same thin-layer chromatographic plate. Radiochemical purity was determined by high performance liquid chromatography (HPLC), and the results were 100%. Regarding the residual solvents tested, ethanol and acetonitrile presented concentrations lower than 10% and 0.04%, respectively. Healthy female mice were injected via the lateral tail vein with [¹⁸F]FMISO; microPET imaging studies (15 min) were performed at 2 h post injection (p.i.), and the biodistribution was analyzed at five time points (30, 60, 90, 120 and 180 min) after injection. Subsequently, organs/tissues were assayed for radioactivity with a gamma counter. All parameters of the quality control tests were in agreement with the quality criteria, confirming that [¹⁸F]FMISO is suitable for use in non-clinical and clinical trials, following the legal requirements for the production of new radiopharmaceuticals in Brazil.

Keywords: automatic radiosynthesis, hypoxic tumors, pharmacopeia, positron emitters, quality requirements

Procedia PDF Downloads 179
226 Luminescent Properties of Plastic Scintillator with Large Area Photonic Crystal Prepared by a Combination of Nanoimprint Lithography and Atomic Layer Deposition

Authors: Jinlu Ruan, Liang Chen, Bo Liu, Xiaoping Ouyang, Zhichao Zhu, Zhongbing Zhang, Shiyi He, Mengxuan Xu

Abstract:

Plastic scintillators play an important role in the measurement of mixed neutron/gamma pulsed radiation, neutron radiography and pulse shape discrimination technology. For some research, it is necessary that the photons produced by the interactions between a plastic scintillator and radiation be detected as completely as possible by the photoelectric detectors, and that more photons be emitted from the scintillator along the specific direction where the detectors are located. Unfortunately, a majority of the photons produced are trapped in the plastic scintillator due to total internal reflection (TIR), because there is a significant light-trapping effect when the incident angle of the internal scintillation light is larger than the critical angle. Some of the photons trapped in the scintillator may be absorbed by the scintillator itself, and the others are emitted from the edges of the scintillator. This makes the light extraction of plastic scintillators very low. Moreover, only a small portion of the photons emitted from the scintillator can easily be detected by detectors, because the distribution of the emission directions of this portion of photons exhibits an approximately Lambertian angular profile following a cosine emission law. Therefore, enhancing the light extraction efficiency and adjusting the emission angular profile become the keys to improving the number of photons detected by the detectors. In recent years, photonic crystal structures have been applied to inorganic scintillators to successfully enhance the light extraction efficiency and adjust the angular profile of scintillation light. However, because preparation methods of photonic crystals can deteriorate the performance of plastic scintillators and even destroy them, investigations of preparation methods of photonic crystals for plastic scintillators and of the luminescent properties of plastic scintillators with photonic crystal structures remain inadequate. Although we have successfully made photonic crystal structures on the surface of plastic scintillators by a modified self-assembly technique and achieved a great enhancement of light extraction efficiency without evident angular dependence of the angular profile of scintillation light, the preparation of photonic crystal structures with large area (diameter larger than 6 cm) and a perfect periodic structure is still difficult. In this paper, large-area photonic crystals on the surface of scintillators were first prepared by nanoimprint lithography, and then a conformal layer of a high-refractive-index material was deposited on the surface of the photonic crystal by the atomic layer deposition technique, in order to enhance the stability of the photonic crystal structures and increase the number of leaky modes for improving the light extraction efficiency. The luminescent properties of the plastic scintillator with photonic crystals prepared by the mentioned method are compared with those of a plastic scintillator without a photonic crystal. The results indicate that the number of photons detected by detectors is increased by the enhanced light extraction efficiency, and that the angular profile of scintillation light exhibits evident angular dependence for the scintillator with photonic crystals. The described preparation of photonic crystals is beneficial to scintillation detection applications and lays an important technical foundation for plastic scintillators to meet special requirements under different application backgrounds.

Keywords: angular profile, atomic layer deposition, light extraction efficiency, plastic scintillator, photonic crystal

Procedia PDF Downloads 182
225 Supervisory Controller with Three-State Energy Saving Mode for Induction Motor in Fluid Transportation

Authors: O. S. Ebrahim, K. O. Shawky, M. O. S. Ebrahim, P. K. Jain

Abstract:

An induction motor (IM) driving a pump is the main consumer of electricity in a typical fluid transportation system (FTS). It has been shown that changing the connection of the stator windings from delta to star at no load can achieve noticeable active and reactive energy savings. This paper proposes a supervisory hysteresis liquid-level control with a three-state energy saving mode (ESM) for the IM in an FTS including a storage tank. The IM pump drive comprises a modified star/delta switch and a hydromantic coupler. Three ESM states are defined alongside normal running and named by analogy with computer ESMs as follows: the sleeping mode, in which the motor runs at no load with the delta stator connection; the hibernate mode, in which the motor runs at no load with the star connection; and motor shutdown, which is the third energy-saver mode. A logic flow-chart is synthesized to select the motor state at no load for the best energy cost reduction, considering the motor thermal capacity used. An artificial neural network (ANN) state estimator, based on a recurrent architecture, is constructed and trained in order to provide fault-tolerant capability for the supervisory controller. Wald's sequential test is used for sensor fault detection. Theoretical analysis, preliminary experimental testing and computer simulations are performed to show the effectiveness of the proposed control in terms of reliability, power quality and energy/coenergy cost reduction, with the suggestion of power factor correction.

Keywords: ANN, ESM, IM, star/delta switch, supervisory control, FT, reliability, power quality

Procedia PDF Downloads 172
224 The Effect of an Abnormal Prefrontal Cortex on the Symptoms of Attention Deficit/Hyperactivity Disorder

Authors: Irene M. Arora

Abstract:

Hypothesis: Attention Deficit Hyperactivity Disorder (ADHD) is the result of an underdeveloped prefrontal cortex, which is the primary cause of the signs and symptoms seen as defining features of ADHD. Methods: Through PubMed, Wiley and Google Scholar, studies published between 2011 and 2018 were evaluated to determine whether a dysfunctional prefrontal cortex causes the characteristic symptoms associated with ADHD. The search terms "prefrontal cortex", "Attention-Deficit/Hyperactivity Disorder", "cognitive control" and "frontostriatal tract", among others, were used to maximize the assortment of relevant studies. Excluded papers were systematic reviews, meta-analyses and publications published before 2010, to ensure clinical relevance. Results: Nine publications were analyzed in this review, all of which were non-randomized matched control studies. Three studies found a decrease in the functional integrity of the frontostriatal tract fibers, while four studies found impaired frontal cortex stimulation. Prefrontal dysfunction, specifically of the medial and orbitofrontal areas, was associated with abnormal reward processing in ADHD patients when compared to their normal counterparts. A total of 807 subjects were studied in this review, of whom a little over half (54%) presented with remission of symptoms in adulthood. Conclusion: While the prefrontal cortex shows the highest consistency of impaired activity and thinner volumes in patients with ADHD, this is a heterogeneous disorder whose pathophysiology also implicates dysfunction of other neural structures. However, remission of ADHD symptomatology in adulthood was found to be attributable to increased prefrontal functional connectivity and integration, suggesting a key role for the prefrontal cortex in the development of ADHD.

Keywords: prefrontal cortex, ADHD, inattentive, impulsivity, reward processing

Procedia PDF Downloads 103
223 Real Time Classification of Political Tendency of Twitter Spanish Users based on Sentiment Analysis

Authors: Marc Solé, Francesc Giné, Magda Valls, Nina Bijedic

Abstract:

What people say on social media has turned into a rich source of information for understanding social behavior. Specifically, the growing use of the Twitter social media platform for political communication has created great opportunities to know the opinion of large numbers of politically active individuals in real time and to predict the global political tendencies of a specific country. This has led to an increasing body of research on the topic. The majority of these studies have focused on polarized political contexts characterized by only two alternatives. Unlike them, this paper tackles the challenge of forecasting Spanish political trends, characterized by multiple political parties, by analyzing the Twitter users' political tendency. Accordingly, a new strategy, named the Tweets Analysis Strategy (TAS), is proposed. It is based on analyzing the users' tweets by discovering their sentiment (positive, negative or neutral) and classifying them according to the political party they support. From this individual political tendency, the global political prediction for each political party is calculated. In order to do this, two different strategies for the sentiment analysis are proposed: the first is based on Positive and Negative word Matching (PNM) and the second on a Neural Network Strategy (NNS). The complete TAS strategy has been executed in a Big Data environment. The experimental results presented in this paper reveal that the NNS strategy performs much better than the PNM strategy in analyzing tweet sentiment. In addition, this research analyzes the viability of the TAS strategy to obtain the global trend in a political context made up of multiple parties with an error lower than 23%.
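
A minimal sketch of the word-matching (PNM-style) sentiment step is given below; the lexicons and party keyword lists are illustrative placeholders, not the resources used in the study:

```python
# Word-matching sentiment plus party assignment (PNM-style); lexicons are illustrative only.
POSITIVE = {"great", "support", "win", "good", "best"}
NEGATIVE = {"bad", "corrupt", "fail", "worst", "against"}
PARTY_KEYWORDS = {"PartyA": {"#partya", "partya"}, "PartyB": {"#partyb", "partyb"}}

def tweet_sentiment(text):
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def tweet_party(text):
    tokens = set(text.lower().split())
    for party, kws in PARTY_KEYWORDS.items():
        if tokens & kws:
            return party
    return None

tweet = "Great rally today, full support for #PartyA"
print(tweet_party(tweet), tweet_sentiment(tweet))   # -> PartyA positive
```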

Keywords: political tendency, prediction, sentiment analysis, Twitter

Procedia PDF Downloads 218
222 Innovative Predictive Modeling and Characterization of Composite Material Properties Using Machine Learning and Genetic Algorithms

Authors: Hamdi Beji, Toufik Kanit, Tanguy Messager

Abstract:

This study aims to construct a predictive model proficient in foreseeing the linear elastic and thermal characteristics of composite materials, drawing on a multitude of influencing parameters. These parameters encompass the shape of inclusions (circular, elliptical, square, triangle), their spatial coordinates within the matrix, orientation, volume fraction (ranging from 0.05 to 0.4), and variations in contrast (spanning from 10 to 200). A variety of machine learning techniques are deployed, including decision trees, random forests, support vector machines, k-nearest neighbors, and an artificial neural network (ANN), to facilitate this predictive model. Moreover, this research goes beyond the predictive aspect by delving into an inverse analysis using genetic algorithms. The intent is to unveil the intrinsic characteristics of composite materials by evaluating their thermomechanical responses. The foundation of this research lies in the establishment of a comprehensive database that accounts for the array of input parameters mentioned earlier. This database, enriched with this diversity of input variables, serves as a bedrock for the creation of machine learning and genetic algorithm-based models. These models are meticulously trained to not only predict but also elucidate the mechanical and thermal conduct of composite materials. Remarkably, the coupling of machine learning and genetic algorithms has proven highly effective, yielding predictions with remarkable accuracy, boasting scores ranging between 0.97 and 0.99. This achievement marks a significant breakthrough, demonstrating the potential of this innovative approach in the field of materials engineering.
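
A minimal sketch of the inverse-analysis idea is given below: a mutation-only genetic algorithm searches the inclusion volume fraction and contrast so that a surrogate predictor matches a target effective response; the surrogate here is a toy mixture estimate standing in for the trained models, and the GA operators are deliberately simple:

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_modulus(vf, contrast):
    """Toy stand-in for the trained predictor: normalised effective property from
    volume fraction and inclusion/matrix contrast (simple average of mixture bounds)."""
    return 0.5 / ((1.0 - vf) + vf / contrast) + 0.5 * (vf * contrast + (1.0 - vf))

target = 3.2                                   # measured effective response to invert

def fitness(pop):
    return -np.abs(surrogate_modulus(pop[:, 0], pop[:, 1]) - target)

# population columns: [volume fraction in (0.05, 0.4), contrast in (10, 200)]
pop = np.column_stack([rng.uniform(0.05, 0.4, 60), rng.uniform(10, 200, 60)])
for gen in range(100):
    fit = fitness(pop)
    parents = pop[np.argsort(fit)[-20:]]                     # truncation selection
    children = parents[rng.integers(0, 20, 60)].copy()       # clone parents
    children[:, 0] += rng.normal(0, 0.01, 60)                # Gaussian mutation
    children[:, 1] += rng.normal(0, 5.0, 60)
    children[:, 0] = np.clip(children[:, 0], 0.05, 0.4)
    children[:, 1] = np.clip(children[:, 1], 10, 200)
    pop = children

best = pop[np.argmax(fitness(pop))]
print("identified (vf, contrast):", best, "-> predicted:", surrogate_modulus(*best))
```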

Keywords: machine learning, composite materials, genetic algorithms, mechanical and thermal properties

Procedia PDF Downloads 44
221 Adaptive Motion Compensated Spatial Temporal Filter of Colonoscopy Video

Authors: Nidhal Azawi

Abstract:

The colonoscopy procedure is widely used around the world to detect abnormalities, and early diagnosis can help to heal many patients. Because of the unavoidable artifacts present in colon images, doctors cannot examine the colon surface precisely. The purpose of this work is to improve the visual quality of colonoscopy videos, and thus provide better information for physicians, by removing some of these artifacts. This work complements a series consisting of three previously published papers. In this paper, optic flow is used for motion compensation, and consecutive images are then aligned/registered to integrate information and create a new image that reveals more information than the original one. Colon images have been classified into informative and non-informative images using a deep neural network. Two different strategies were then used to treat informative and non-informative images. Informative images were treated using Lucas-Kanade (LK) with an adaptive temporal mean/median filter, whereas non-informative images were treated using Lucas-Kanade with a derivative of Gaussian (LKDOG) and an adaptive temporal median filter. A comparison showed that this work achieved better results than the state-of-the-art strategies on the same degraded colon image data set, which consists of 1000 images. The new proposed algorithm reduced the alignment error by about a factor of 0.3, with a 100% successful image alignment ratio. In conclusion, this algorithm achieved better results than the state-of-the-art approaches in enhancing the informative images, as shown in the results section; it also succeeded in converting the non-informative images (those with very few or no details because of blurriness, defocus, or a specular highlight dominating a significant portion of the image) into informative images.
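
To make the motion-compensated temporal filtering idea concrete, the sketch below aligns neighbouring frames to a reference frame with dense optical flow and then takes a temporal median over the aligned stack. OpenCV's Farneback flow is used here as a convenient dense stand-in for the Lucas-Kanade alignment described in the paper, and the synthetic grayscale frames are placeholders for real colonoscopy frames.

```python
# Hedged sketch: motion compensation via dense optical flow + temporal median.
import cv2
import numpy as np

def warp_to_reference(frame, reference):
    """Warp `frame` onto `reference` using Farneback dense optical flow."""
    flow = cv2.calcOpticalFlowFarneback(reference, frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = reference.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)

def temporal_median(frames, ref_index):
    """Motion-compensated temporal median filter around frames[ref_index]."""
    reference = frames[ref_index]
    aligned = [warp_to_reference(f, reference) for f in frames]
    return np.median(np.stack(aligned), axis=0).astype(np.uint8)

# Usage with synthetic grayscale frames (stand-ins for colonoscopy frames)
frames = [np.random.randint(0, 255, (120, 160), np.uint8) for _ in range(5)]
denoised = temporal_median(frames, ref_index=2)
print(denoised.shape)
```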

Keywords: optic flow, colonoscopy, artifacts, spatial temporal filter

Procedia PDF Downloads 97
220 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments

Authors: Tahani Aljohani, Jialin Yu, Alexandra I. Cristea

Abstract:

The more an educational system knows about a learner, the more personalised the interaction it can provide, which leads to better learning. However, asking a learner directly is potentially disruptive and often ignored by learners. Especially in the booming realm of Massive Open Online Course (MOOC) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners' demographic characteristics by proposing an approach that uses linguistically motivated deep learning architectures for learner profiling, particularly targeting gender prediction on the FutureLearn MOOC platform. Additionally, we tackle the difficult problem of predicting the gender of learners based on their comments only, which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, treating sentences as sequences. However, human language also has structure. In this research, rather than considering sentences as plain sequences, we hypothesise that higher semantic- and syntactic-level sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against more recent models that take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN), and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore using different word-level encoding functions. We have implemented these methods on our MOOC dataset, on which they perform best, as well as on a public sentiment analysis dataset that is further used to cross-examine the models' results.
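
The sketch below shows the kind of LSTM baseline the abstract refers to: an embedding layer, an LSTM over token ids, and a binary classifier on the final hidden state. The vocabulary size, dimensions, and toy batch are assumptions for illustration; the tree-structured and syntax-aware variants (SPINN, SATA) are not reproduced here.

```python
# Minimal sketch of an LSTM-based binary text classifier (PyTorch).
import torch
import torch.nn as nn

class CommentGenderLSTM(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)   # two gender classes

    def forward(self, token_ids):
        embedded = self.embed(token_ids)             # (batch, seq, embed_dim)
        _, (hidden, _) = self.lstm(embedded)         # final hidden state
        return self.classifier(hidden[-1])           # (batch, 2) logits

model = CommentGenderLSTM()
toy_batch = torch.randint(1, 5000, (4, 30))          # 4 comments, 30 tokens each
logits = model(toy_batch)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 1, 0]))
loss.backward()
print(logits.shape, float(loss))
```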

Keywords: deep learning, data mining, gender prediction, MOOCs

Procedia PDF Downloads 123
219 Enhancing Neural Connections through Music and tDCS: Insights from an fNIRS Study

Authors: Dileep G., Akash Singh, Dalchand Ahirwar, Arkadeep Ghosh, Ashutosh Purohit, Gaurav Guleria, Kshatriya Om Prashant, Pushkar Patel, Saksham Kumar, Vanshaj Nathani, Vikas Dangi, Shubhajit Roy Chowdhury, Varun Dutt

Abstract:

Transcranial direct current stimulation (tDCS) has shown promise as a novel approach to enhance cognitive performance and provide therapeutic benefits for various brain disorders. However, the exact underlying brain mechanisms are not fully understood. We conducted a study to examine the brain's functional changes when subjected to simultaneous tDCS and music (an Indian classical raga). During the study, participants in the experimental group underwent a 20-minute session of tDCS at 2 mA while listening to the music (raga), over a period of seven days. In contrast, the control group received a two-minute sham stimulation at 2 mA over the same seven-day period. The objective was to examine whether repetitive tDCS could lead to the formation of additional functional connections between the medial prefrontal cortex (mPFC, the stimulated area) and the auditory cortex in comparison with the sham stimulation group. In this study, 26 participants (5 female) underwent pre- and post-intervention scans, and changes were compared after one week of either tDCS or sham stimulation in conjunction with music. The study revealed significant effects of tDCS on functional connectivity between the stimulated area and the auditory cortex: the combination of tDCS applied over the mPFC and music resulted in newly formed connections. Based on our findings, it can be inferred that applying anodal tDCS over the mPFC enhances functional connectivity between the stimulated area and the auditory cortex compared with the effects observed after sham stimulation.
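
One common way to quantify "newly formed connections" from channel time series is to correlate the channels and count pairs whose correlation crosses a threshold after the intervention but not before; the sketch below illustrates that idea under those assumptions, with synthetic signals and an arbitrary threshold rather than the authors' actual fNIRS pipeline.

```python
# Hedged sketch of correlation-based functional connectivity on fNIRS channels.
import numpy as np

def connectivity_matrix(signals):
    """signals: (channels, samples) array of channel time series."""
    return np.corrcoef(signals)

def new_connections(pre, post, threshold=0.5):
    """Channel pairs connected post-intervention but not pre-intervention."""
    pre_c, post_c = connectivity_matrix(pre), connectivity_matrix(post)
    n = pre_c.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if post_c[i, j] > threshold >= pre_c[i, j]]

rng = np.random.default_rng(1)
pre = rng.standard_normal((8, 500))            # 8 channels, 500 samples
shared = rng.standard_normal(500)
post = pre.copy()
post[2] = shared + 0.1 * rng.standard_normal(500)   # induce a shared component
post[5] = shared + 0.1 * rng.standard_normal(500)
print(new_connections(pre, post))              # expected to report pair (2, 5)
```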

Keywords: fNIRS, tDCS, neuroplasticity, music

Procedia PDF Downloads 54
218 Chemopreventive Efficacy of Andrographolide in Rat Colon Carcinogenesis Model Using Aberrant Crypt Foci (ACF) as Endpoint Marker

Authors: Maryam Hajrezaie, Mahmood Ameen Abdulla, Nazia Abdul Majid, Hapipa Mohd Ali, Pouya Hassandarvish, Maryam Zahedi Fard

Abstract:

Background: Colon cancer is one of the most prevalent cancers in the world and is the third leading cause of cancer death in both males and females. The incidence of colon cancer ranks fourth among all cancers but varies in different parts of the world. Cancer chemoprevention is defined as the use of natural or synthetic compounds capable of inducing the biological mechanisms necessary to preserve genomic fidelity. Andrographolide is the major labdane diterpenoid constituent of the plant Andrographis paniculata (family Acanthaceae), used extensively in traditional medicine. Extracts of the plant and their constituents are reported to exhibit a wide spectrum of biological activities of therapeutic importance. Laboratory animal model studies have provided evidence that andrographolide plays a role in inhibiting the risk of certain cancers. Objective: Our aim was to evaluate the chemopreventive efficacy of andrographolide in the AOM-induced rat model. Methods: To evaluate the inhibitory properties of andrographolide on colonic aberrant crypt foci (ACF), five groups of 7-week-old male rats were used. Group 1 (control group) was fed 10% Tween 20 once a day, Group 2 (cancer control) rats were intraperitoneally injected with 15 mg/kg azoxymethane, Group 3 (drug control) rats were injected with 15 mg/kg azoxymethane and 5-fluorouracil, and Groups 4 and 5 (experimental groups) were fed 10 and 20 mg/kg andrographolide, respectively, once a day. After 1 week, the treatment group rats received subcutaneous injections of azoxymethane, 15 mg/kg body weight, once weekly for 2 weeks. Control rats continued on Tween 20 feeding once a day, and the experimental groups continued on 10 and 20 mg/kg andrographolide feeding once a day for 8 weeks. All rats were sacrificed 8 weeks after the azoxymethane treatment. Colons were evaluated grossly and histopathologically for ACF. Results: Administration of 10 mg/kg and 20 mg/kg andrographolide was found to be effectively chemoprotective, as evidenced microscopically and biochemically. Andrographolide suppressed total colonic ACF formation by up to 40% and 60%, respectively, when compared with the control group. Pre-treatment with andrographolide significantly reduced the impact of AOM toxicity on plasma protein and urea levels as well as on plasma aspartate aminotransferase (AST), alanine aminotransferase (ALT), lactate dehydrogenase (LDH), and gamma-glutamyl transpeptidase (GGT) activities. Grossly, colorectal specimens revealed that andrographolide treatments decreased the mean score of the number of crypts in AOM-treated rats. Importantly, rats fed andrographolide showed 75% inhibition of foci containing four or more aberrant crypts. The results also showed a significant increase in glutathione (GSH), superoxide dismutase (SOD), nitric oxide (NO), and prostaglandin E2 (PGE2) activities and a decrease in malondialdehyde (MDA) level. Histologically, all treatment groups showed a significant decrease in dysplasia compared to the control group. Immunohistochemical staining showed up-regulation of Hsp70 and down-regulation of Bax proteins. Conclusion: The current study demonstrated that andrographolide reduces the number of ACF. According to these data, andrographolide might have promising chemoprotective activity in the AOM-induced ACF model.

Keywords: chemopreventive, andrographolide, colon cancer, aberrant crypt foci (ACF)

Procedia PDF Downloads 418
217 Sleep Disturbance in Indonesian School-Aged Children and Its Relationship to Nutritional Aspect

Authors: William Cheng, Rini Sekartini

Abstract:

Background: Sleep is essential for children because it enhances nervous system activity, which has physiological effects that support the body's growth and development. One of the modifiable factors related to sleep is nutrition, which includes nutritional status, iron intake, and magnesium intake. Nutritional status represents the balance between nutritional intake and expenditure, while iron and magnesium are micronutrients related to sleep regulation. The aim of this study is to identify the prevalence of sleep disturbance among Indonesian children and to evaluate its relation to nutritional aspects. Methods: A cross-sectional study involving children aged 5 to 7 years old in an urban primary health care centre between 2012 and 2013 was carried out. Related data include anthropometric status, iron intake, and magnesium intake. Iron and magnesium intake was obtained by a 24-hour food recall procedure. The Sleep Disturbance Scale for Children (SDSC) was used as the diagnostic tool for sleep disturbance, with a score under 39 indicating the presence of a problem. Results: Out of 128 school-aged children included in this study, 28 (23.1%) were found to have sleep disturbance. The majority of children had good nutritional status, with only 15.7% being underweight or severely underweight and 12.4% identified as stunted. In contrast, 99 children (81.8%) were identified as having inadequate magnesium intake and 56 children (46.3%) as having inadequate iron intake. Our analysis showed no significant relation between any of the nutritional status indicators and sleep disturbance (p>0.05). Moreover, inadequate iron and magnesium intake also failed to show a significant relation with sleep disturbance in this population. Conclusion: Almost a fourth of school-aged children in Indonesia were found to have sleep disturbance, and further studies are needed to address this problem. According to our findings, there is no correlation between nutritional status, iron intake, magnesium intake, and sleep disturbance.
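
For readers unfamiliar with this kind of association test, the sketch below runs a chi-square test between a binary nutritional indicator and sleep disturbance; the contingency table is a made-up illustration roughly matching the reported proportions, not the study's data.

```python
# Hedged sketch of a chi-square association test on a 2x2 contingency table.
from scipy.stats import chi2_contingency

# Rows: underweight yes/no; columns: sleep disturbance yes/no (hypothetical counts)
table = [[5, 15],
         [23, 85]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.3f}, p={p:.3f}, dof={dof}")
print("No significant association at p < 0.05" if p > 0.05
      else "Significant association at p < 0.05")
```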

Keywords: iron intake, magnesium intake, nutritional status, school-aged children, sleep disturbance

Procedia PDF Downloads 448
216 Lucilia Sericata Netrin-A: Secreted by Salivary Gland Larvae as a Potential to Neuroregeneration

Authors: Hamzeh Alipour, Masoumeh Bagheri, Tahereh Karamzadeh, Abbasali Raz, Kourosh Azizi

Abstract:

Netrin-A, a protein identified for its role in guiding commissural axons, plays a similar role in angiogenesis. In addition, studies have shown that one of the Netrin-A receptors is expressed in the growing cells of small capillaries. It will be interesting to study this group of molecules because, owing to this link with angiogenesis, their role in wound healing will become clearer in the future. Larvae of the greenbottle blowfly Lucilia sericata (L. sericata) are increasingly used in maggot therapy of chronic wounds. The aim of this work was the identification of the molecular features of Netrin-A in L. sericata larvae. Larvae were reared under standard maggotarium conditions. The nucleic acid sequence of L. sericata Netrin-A (LSN-A) was then identified using Rapid Amplification of cDNA Ends (RACE) and Rapid Amplification of Genomic Ends (RAGE). Parts of the Netrin-A gene, including the middle, 3′-, and 5′-ends, were identified, TA-cloned into the pTG19 plasmid, and transferred into DH5ɑ Escherichia coli. Each part was sequenced and assembled using SeqMan software. This gene structure was further subjected to in silico analysis. The DNA of LSN-A was identified to be 2407 bp, while its mRNA sequence was determined to be 2115 bp using Oligo 0.7 software. It encodes a Netrin-A protein of 704 amino acid residues, with a molecular weight estimated at 78.6 kDa. The 3-D structure of Netrin-A modelled with SWISS-MODEL revealed its similarity to human Netrin-1, with 66.8% identity. The LSN-A protein contributes to repairing the myelin membrane in neuronal cells. Ultimately, it can be an effective candidate for neural regeneration and wound healing. Furthermore, our next step is to deploy recombinant proteins for use in the medical sciences.
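
As a small illustration of the in silico steps mentioned (translating an open reading frame and estimating the protein's molecular weight), the Biopython sketch below uses a short hypothetical mini-ORF as a placeholder; it is not the actual 2115 bp LSN-A sequence.

```python
# Hedged sketch: translate an ORF and estimate the protein's molecular weight.
from Bio.Seq import Seq
from Bio.SeqUtils import molecular_weight

orf = Seq("ATGGCTGCCAAGTGCCTGCTGCTGGCCTAA")   # hypothetical mini-ORF, not LSN-A
protein = orf.translate(to_stop=True)        # translate up to the stop codon
print("Protein:", protein)
print("Length (aa):", len(protein))
print("Approx. MW (Da):", round(molecular_weight(protein, seq_type="protein"), 1))
```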

Keywords: maggot therapy, netrin-A, RACE, RAGE, lucilia sericata

Procedia PDF Downloads 88