Search results for: edge detection algorithm
788 Rescaled Range Analysis of Seismic Time-Series: Example of the Recent Seismic Crisis of Alhoceima
Authors: Marina Benito-Parejo, Raul Perez-Lopez, Miguel Herraiz, Carolina Guardiola-Albert, Cesar Martinez
Abstract:
Persistency, long-term memory and randomness are intrinsic properties of time series of earthquakes. Rescaled Range Analysis (RS-Analysis) was introduced by Hurst in 1956 and modified by Mandelbrot and Wallis in 1964. This method represents a simple and elegant analysis which determines the range of variation of one natural property (here, the seismic energy released) over a time interval. Despite its simplicity, there is complexity inherent in the property measured. The cumulative curve of the energy released in time has the well-known fractal geometry of a devil's staircase. This geometry is used to determine the maximum and minimum values of the range, which is normalized by the standard deviation. The rescaled range obtained obeys a power law in time, and the exponent is the Hurst value. Depending on this value, time series can be classified as having long-term or short-term memory. Hence, an algorithm has been developed for compiling the RS-Analysis for time series of earthquakes by days. Completeness of the time distribution and local stationarity of the time series are required. The interest of this analysis lies in its application to a complex seismic crisis where different earthquakes take place in clusters in a short period. Therefore, the Hurst exponent has been obtained for the seismic crisis of Alhoceima (Mediterranean Sea) of January-March 2016, where at least five medium-sized earthquakes were triggered. According to the values obtained from the Hurst exponent for each cluster, a different mechanical origin can be detected, corroborated by the focal mechanisms calculated by the official institutions. Therefore, this type of analysis not only allows an approach to a greater understanding of a seismic series but also makes it possible to discern different types of seismic origins.
Keywords: Alhoceima crisis, earthquake time series, Hurst exponent, rescaled range analysis
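The procedure described in this abstract (cumulative deviations from the mean, the range of that curve normalized by the standard deviation, and a log-log fit whose slope is the Hurst exponent) can be sketched in a few lines of Python. The window sizes and the synthetic daily energy series below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rescaled_range(series):
    """R/S statistic of one window: range of cumulative deviations over std."""
    cumdev = np.cumsum(series - series.mean())  # staircase-like cumulative curve
    r = cumdev.max() - cumdev.min()             # range between max and min
    s = series.std()
    return r / s if s > 0 else np.nan

def hurst_exponent(series, window_sizes):
    """Slope of log(R/S) vs log(window length) is the Hurst exponent H."""
    rs = [np.nanmean([rescaled_range(series[i:i + n])
                      for i in range(0, len(series) - n + 1, n)])
          for n in window_sizes]
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return slope

# Illustrative daily energy-release series; for pure noise H should be near 0.5
rng = np.random.default_rng(0)
energy = rng.lognormal(mean=10.0, sigma=1.0, size=512)
H = hurst_exponent(energy, [8, 16, 32, 64, 128])
print(f"Hurst exponent ~ {H:.2f} (H > 0.5 persistent, H < 0.5 anti-persistent)")
```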
787 Context-Aware Point-Of-Interests Recommender Systems Using Integrated Sentiment and Network Analysis
Authors: Ho Yeon Park, Kyoung-Jae Kim
Abstract:
Recently, users' interest in location-based social network services has increased with advances in the social web and location-based technologies. Recommending preferred items becomes easier if users' preferences, context, and social network information can be used simultaneously. In this study, we propose context-aware POI (point-of-interest) recommender systems using location-based network analysis and sentiment analysis, which consider context, social network information, and implicit user preference scores. We propose a context-aware POI recommendation system consisting of three sub-modules and an integrated recommendation system combining them. First, we develop a recommendation module based on network analysis. This module combines social network analysis and cluster-indexing collaborative filtering. Next, this study develops a recommendation module using social singular value decomposition (SVD) and implicit SVD. In this research, we develop a recommendation module that estimates preference scores based on the frequency of a user's POI visits by using social and implicit SVD, which can reflect implicit feedback in collaborative filtering. Third, this study proposes a recommendation module using opinion mining and sentiment analysis of data such as reviews of POIs extracted from location-based social networks. Finally, we develop an integration algorithm that combines the results of the three recommendation modules proposed in this research. Experimental results show the usefulness of the proposed model in relation to recommendation performance.
Keywords: sentiment analysis, network analysis, recommender systems, point-of-interests, business analytics
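The SVD step of such a recommender can be illustrated with a truncated decomposition of a user-POI feedback matrix. The matrix, the rank k, and the visit counts below are invented for illustration; the authors' social and implicit SVD variants add social-network and implicit-feedback terms that are not reproduced here:

```python
import numpy as np

# Illustrative user x POI implicit-feedback matrix (visit counts); zeros = unobserved
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4]], dtype=float)

# Rank-k truncated SVD gives latent user/item factors
k = 2
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # reconstructed preference scores

user = 1
unvisited = np.where(R[user] == 0)[0]
best = unvisited[np.argmax(R_hat[user, unvisited])]
print(f"Recommend POI {best} to user {user} (score {R_hat[user, best]:.2f})")
```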
786 Assessing the Survival Time of Hospitalized Patients in Eastern Ethiopia During 2019–2020 Using the Bayesian Approach: A Retrospective Cohort Study
Authors: Chalachew Gashu, Yoseph Kassa, Habtamu Geremew, Mengestie Mulugeta
Abstract:
Background and Aims: Severe acute malnutrition remains a significant health challenge, particularly in low- and middle-income countries. The aim of this study was to determine the survival time of under-five children with severe acute malnutrition. Methods: A retrospective cohort study was conducted at a hospital, focusing on under-five children with severe acute malnutrition. The study included 322 inpatients admitted to the Chiro hospital in Chiro, Ethiopia, between September 2019 and August 2020, whose data were obtained from medical records. Survival functions were analyzed using Kaplan-Meier plots and log-rank tests. The survival time of severe acute malnutrition was further analyzed using the Cox proportional hazards model and Bayesian parametric survival models, employing integrated nested Laplace approximation methods. Results: Among the 322 patients, 118 (36.6%) died as a result of severe acute malnutrition. The estimated median survival time for inpatients was found to be 2 weeks. Model selection criteria favored the Bayesian Weibull accelerated failure time model, which demonstrated that age, body temperature, pulse rate, nasogastric (NG) tube usage, hypoglycemia, anemia, diarrhea, dehydration, malaria, and pneumonia significantly influenced the survival time of severe acute malnutrition. Conclusions: This study revealed that children below 24 months of age, those with altered body temperature or pulse rate, NG tube usage, hypoglycemia, and comorbidities such as anemia, diarrhea, dehydration, malaria, and pneumonia had a shorter survival time when affected by severe acute malnutrition. To reduce the death rate of children under 5 years of age, it is necessary to design community-based management of acute malnutrition to ensure early detection and improve access to and coverage of treatment for children who are malnourished.
Keywords: Bayesian analysis, severe acute malnutrition, survival data analysis, survival time
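The survival workflow here (a Kaplan-Meier estimate of median survival, then a Weibull accelerated failure time model with covariates) can be sketched with the Python lifelines library. The Bayesian INLA fit itself is not reproduced, and all columns and values below are simulated for illustration:

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, WeibullAFTFitter

rng = np.random.default_rng(1)
n = 322
df = pd.DataFrame({
    "weeks": rng.weibull(1.3, n) * 4 + 0.1,   # follow-up time (illustrative)
    "died": rng.integers(0, 2, n),            # 1 = death observed, 0 = censored
    "age_months": rng.integers(1, 60, n),
    "ng_tube": rng.integers(0, 2, n),
})

# Kaplan-Meier estimate of the median survival time
kmf = KaplanMeierFitter().fit(df["weeks"], df["died"])
print("median survival (weeks):", kmf.median_survival_time_)

# Weibull accelerated failure time model with covariates
aft = WeibullAFTFitter().fit(df, duration_col="weeks", event_col="died")
print(aft.summary[["coef", "p"]])
```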
785 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods
Authors: Matthew D. Baffa
Abstract:
Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies such as locations and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor, and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of exterior above-grade walls in a home provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of the convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflected temperatures, and emissivity values for various components of the exterior above-grade wall assemblies, were carried out during winter months at the study home using a basic thermal imager device. The overall U-values estimated from each methodology using the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from materials and dimensions detailed in architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home, as well as various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value by between ±2% and ±33%. This study suggests that infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair amount of accuracy.
Keywords: emissivity, heat loss, infrared thermography, thermal conductance
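The heat-balance idea, equating conduction through the wall to the convective plus radiative losses at the exterior surface, reduces to a short calculation once the surface, reflected, and air temperatures and the emissivity are measured. A minimal sketch; the convective coefficient h_c and all temperatures below are illustrative assumptions, and the three literature methodologies differ mainly in how the convective term is obtained:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def u_value(t_in, t_out, t_surf, t_refl, emissivity, h_c=3.0):
    """U-value from an outdoor IR survey: conduction through the wall is
    equated to convective plus radiative losses at the exterior surface."""
    q_conv = h_c * (t_surf - t_out)                        # convective loss, W/m^2
    q_rad = emissivity * SIGMA * (t_surf**4 - t_refl**4)   # radiative loss, W/m^2
    return (q_conv + q_rad) / (t_in - t_out)               # W/(m^2 K)

# Illustrative winter field measurements, all in kelvin
print(u_value(t_in=294.0, t_out=268.0, t_surf=271.5, t_refl=266.0, emissivity=0.90))
```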
784 Dengue Virus Infection Rate in Mosquitoes Collected in Thailand Related to Environmental Factors
Authors: Chanya Jetsukontorn
Abstract:
Dengue hemorrhagic fever is the most important mosquito-borne disease and a major public health problem in Thailand. The most important vector is Aedes aegypti. Environmental factors such as temperature, relative humidity, and biting rate affect dengue virus infection. The most effective measure for prevention is control of the vector mosquitoes. In addition, surveillance of field-caught mosquitoes is imperative for determining the natural vector and can provide an early warning sign of transmission risk in an area. In this study, Aedes aegypti mosquitoes were collected in Amphur Muang, Phetchabun Province, Thailand. The mosquitoes were collected in the rainy season and the dry season, both indoors and outdoors. During mosquito collection, data on environmental factors such as temperature, humidity, and breeding sites were observed and recorded. After identification to species, mosquitoes were pooled according to genus/species and sampling location. Pools consisted of a maximum of 10 Aedes mosquitoes. 70 pools of 675 Aedes aegypti were screened with RT-PCR for flaviviruses. To confirm individual infection and determine the true infection rate, individual mosquitoes from pools that gave positive results for flavivirus detection were tested for dengue virus by RT-PCR. The minimum infection rate was 5.93 per 1,000 mosquitoes (4 positive individuals out of 675). The probability of detecting dengue virus in mosquitoes at neighbouring houses was 1.25 times higher, especially where distances between neighbouring houses and patients' houses were less than 50 meters. The relative humidity in dengue-infected villages with dengue-infected mosquitoes was significantly higher than in villages free from dengue-infected mosquitoes. The indoor biting rate of Aedes aegypti was 14.87 times higher than the outdoor rate, and the biting periods 09.00-10.00, 10.00-11.00, and 11.00-12.00 yielded 1.77, 1.46, and 0.68 mosquitoes/man-hour, respectively. These findings confirm that environmental factors are related to dengue infection in Thailand. Data obtained from this study will be useful for the prevention and control of the disease.
Keywords: Aedes aegypti, Dengue virus, environmental factors, one health, PCR
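With pooled screening like this, the rate reported is normally a minimum infection rate (MIR, positives per 1,000 tested, assuming one infected mosquito per positive pool); a maximum-likelihood estimate from the pool results is a common refinement. A small sketch, where the counts come from the abstract and the equal pool size of 10 is an assumption:

```python
def minimum_infection_rate(n_positive, n_tested, per=1000):
    """MIR assumes at most one infected mosquito per positive pool."""
    return n_positive / n_tested * per

def pooled_mle(n_positive_pools, n_pools, pool_size):
    """MLE of the per-mosquito infection probability from equal-size pools:
    P(pool negative) = (1 - p)^k  =>  p = 1 - (negative fraction)^(1/k)."""
    negative_fraction = 1 - n_positive_pools / n_pools
    return 1 - negative_fraction ** (1 / pool_size)

print(minimum_infection_rate(4, 675))    # 5.93 per 1,000 mosquitoes
print(pooled_mle(4, 70, 10) * 1000)      # per-1,000 MLE estimate, ~5.9
```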
783 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models
Authors: Azadeh Jafari, Robert G. Owens
Abstract:
In this study, a geometrical multiscale approach, i.e., coupling the 2-D Navier-Stokes equations and constitutive equations with 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part, or even the whole, of the cardiovascular system at acceptable computational cost. In this study, we introduce a new velocity correction scheme to decouple the velocity computation from the pressure computation. To evaluate the capability of our new scheme, a comparison has been performed between the results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure, and those obtained using coupling with the lumped parameter model. Comprehensive studies have been done on the sensitivity of the numerical scheme to the initial conditions, elasticity, and number of spectral modes. Improvement of the computational algorithm with stable convergence has been demonstrated for at least moderate Weissenberg numbers. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect, the geometrical multiscale approach can be regarded as a new method for solving a class of biofluids problems whose application goes significantly beyond the one addressed in this work.
Keywords: geometrical multiscale models, haemorheology model, coupled 2-D navier-stokes 0-D lumped parameter modeling, computational fluid dynamics
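The 0-D lumped parameter models referred to here are typically Windkessel-type circuits. A minimal two-element Windkessel sketch (all parameter values and the pulsatile inflow are invented for illustration) shows how an outflow boundary pressure can be computed from the flow leaving the 2-D domain:

```python
import numpy as np
from scipy.integrate import odeint

# Two-element Windkessel: C dP/dt = Q(t) - P/R
R = 1.0e8   # peripheral resistance, Pa*s/m^3 (illustrative)
C = 1.0e-8  # vessel compliance, m^3/Pa (illustrative)

def inflow(t):                       # pulsatile flow leaving the 2-D domain
    return 5e-6 * max(np.sin(2 * np.pi * t), 0.0)

def dPdt(P, t):
    return (inflow(t) - P / R) / C

t = np.linspace(0, 3, 600)           # three cardiac cycles
P = odeint(dPdt, 0.0, t).ravel()
print(f"outflow pressure range: {P.min():.0f} - {P.max():.0f} Pa")
```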
782 Assessment of Land Use Land Cover Change-Induced Climatic Effects
Authors: Mahesh K. Jat, Ankan Jana, Mahender Choudhary
Abstract:
Rapid population and economic growth have resulted in large-scale land use land cover (LULC) changes. Changes in the biophysical properties of the Earth's surface and their impact on climate are of primary concern nowadays. Different approaches, ranging from location-based relationships to modelling of Earth surface-atmosphere interaction through techniques like surface energy balance (SEB), have been used in the recent past to examine the relationship between changes in Earth surface land cover and climatic characteristics like temperature and precipitation. A remote sensing-based model, i.e., the Surface Energy Balance Algorithm for Land (SEBAL), has been used to estimate the surface heat fluxes over the Mahi Bajaj Sagar catchment (India) from 2001 to 2020. Landsat ETM and OLI satellite data are used to model the SEB of the area. Changes in observed precipitation and temperature, obtained from the India Meteorological Department (IMD), have been correlated with changes in surface heat fluxes to understand the relative contributions of LULC change to changes in these climatic variables. Results indicate a noticeable impact of LULC changes on climatic variables, which is aligned with respective changes in SEB components. Results suggest that precipitation increases at a rate of 20 mm/year. The maximum and minimum temperatures decrease and increase at 0.007 °C/year and 0.02 °C/year, respectively. The average temperature increases at 0.009 °C/year. Changes in latent heat flux and sensible heat flux positively correlate with precipitation and temperature, respectively. Variation in surface heat fluxes influences the climate parameters and is an adequate explanation for the observed climate change. So, SEB modelling is helpful for understanding LULC change and its impact on climate.
Keywords: LULC, sensible heat flux, latent heat flux, SEBAL, landsat, precipitation, temperature
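In SEBAL, the latent heat flux is obtained as the residual of the surface energy balance Rn = G + H + LE once net radiation, soil heat flux, and sensible heat flux are estimated per pixel. A toy per-pixel sketch with invented flux values; the G = 0.1 Rn fraction is a simplifying assumption, not SEBAL's actual soil heat flux model:

```python
import numpy as np

# Surface energy balance per pixel: Rn = G + H + LE
rn = np.array([520.0, 480.0, 610.0])   # net radiation, W/m^2 (illustrative pixels)
g  = 0.1 * rn                          # soil heat flux, here a simple fraction of Rn
h  = np.array([180.0, 210.0, 150.0])   # sensible heat flux, W/m^2

le = rn - g - h                        # latent heat flux as the balance residual
evap_fraction = le / (le + h)          # moisture indicator per pixel
print(le, evap_fraction.round(2))
```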
781 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane
Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo
Abstract:
Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable route for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical production. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. First, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models would inherently not be able to obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction, as well as accuracy similar to the RF model, with R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining
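Greedy layer-wise pretraining trains each hidden layer in turn (here as a small autoencoder on the previous layer's output) before stacking the layers and fine-tuning end to end. A minimal Keras sketch under stated assumptions: the data are synthetic, and the layer widths, epochs, and autoencoder-style pretraining objective are illustrative choices, not the authors' exact configuration:

```python
import numpy as np
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8)).astype("float32")   # illustrative DRM process inputs
y = rng.normal(size=(500, 6)).astype("float32")   # six outputs (flow, H2/CO, ...)

# Greedy layer-wise pretraining: each Dense layer learns to reconstruct the
# output of the previous layer before the stack is assembled.
widths, pretrained, inputs = [32, 16], [], X
for w in widths:
    enc = layers.Dense(w, activation="relu")
    ae = models.Sequential([enc, layers.Dense(inputs.shape[1])])
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(inputs, inputs, epochs=5, verbose=0)   # reconstruct this layer's input
    pretrained.append(enc)
    inputs = enc(inputs).numpy()

# Stack the pretrained encoders, add a regression head, fine-tune end to end
model = models.Sequential(pretrained + [layers.Dense(6)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)
print("fine-tuned MSE:", model.evaluate(X, y, verbose=0))
```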
780 Application of Human Biomonitoring and Physiologically-Based Pharmacokinetic Modelling to Quantify Exposure to Selected Toxic Elements in Soil
Authors: Eric Dede, Marcus Tindall, John W. Cherrie, Steve Hankin, Christopher Collins
Abstract:
Current exposure models used in contaminated land risk assessment are highly conservative. Use of these models may lead to over-estimation of actual exposures, possibly resulting in negative financial implications due to unnecessary remediation. Thus, we are carrying out a study seeking to improve our understanding of human exposure to selected toxic elements in soil: arsenic (As), cadmium (Cd), chromium (Cr), nickel (Ni), and lead (Pb), resulting from allotment land-use. The study employs biomonitoring and physiologically-based pharmacokinetic (PBPK) modelling to quantify human exposure to these elements. We recruited 37 allotment users (adults > 18 years old) in Scotland, UK, to participate in the study. Concentrations of the elements (and their bioaccessibility) were measured in allotment samples (soil and allotment produce). Records of the amount of produce consumed by the participants and the participants' biological samples (urine and blood) were collected for up to 12 consecutive months. Ethical approval was granted by the University of Reading Research Ethics Committee. PBPK models (coded in MATLAB) were used to estimate the distribution and accumulation of the elements in key body compartments, thus indicating the internal body burden. Simulating low element intake (based on estimated 'doses' from produce consumption records), predictive models suggested that detection of these elements in urine and blood was possible within a given period of time following exposure. This information was used in planning the biomonitoring and is currently being used in the interpretation of test results from biological samples. Evaluation of the models is being carried out using biomonitoring data, by comparing model-predicted concentrations and measured biomarker concentrations. The PBPK models will be used to generate bioavailability values, which could be incorporated in contaminated land exposure models. Thus, the findings from this study will promote a more sustainable approach to contaminated land management.
Keywords: biomonitoring, exposure, PBPK modelling, toxic elements
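The authors' PBPK models are coded in MATLAB and are not shown; the compartmental idea can be illustrated with a toy Python analogue in which an ingested element moves from gut to blood and then to urine or tissue. All rate constants and the weekly dosing pattern below are invented for illustration:

```python
import numpy as np
from scipy.integrate import odeint

# Minimal PBPK-style compartments: gut -> blood -> (urine, tissue)
k_abs, k_urine, k_tissue, k_release = 0.8, 0.15, 0.05, 0.01  # 1/day (illustrative)

def model(state, t, dose_rate):
    gut, blood, tissue = state
    d_gut = dose_rate(t) - k_abs * gut
    d_blood = k_abs * gut - (k_urine + k_tissue) * blood + k_release * tissue
    d_tissue = k_tissue * blood - k_release * tissue
    return [d_gut, d_blood, d_tissue]

dose = lambda t: 1.0 if (t % 7) < 1 else 0.0   # weekly produce consumption, ug/day
t = np.linspace(0, 84, 841)                    # twelve weeks of simulated exposure
sol = odeint(model, [0.0, 0.0, 0.0], t, args=(dose,))
print("peak blood burden (ug):", sol[:, 1].max())
```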
779 Clinical Applications of Amide Proton Transfer Magnetic Resonance Imaging: Detection of Brain Tumor Proliferative Activity
Authors: Fumihiro Ima, Shinichi Watanabe, Shingo Maeda, Haruna Imai, Hiroki Niimi
Abstract:
It is important to know the growth rate of brain tumors before surgery because it influences treatment planning, including not only the surgical resection strategy but also adjuvant therapy after surgery. Amide proton transfer (APT) imaging is an emerging molecular magnetic resonance imaging (MRI) technique based on chemical exchange saturation transfer without administration of a contrast medium. The underlying assumption in APT imaging of tumors is that there is a close relationship between the proliferative activity of the tumor and mobile protein synthesis. We aimed to evaluate the diagnostic performance of APT imaging of pre- and post-treatment brain tumors. Ten patients with brain tumors underwent conventional and APT-weighted sequences on a 3.0 Tesla MRI scanner before clinical intervention. The maximum and minimum APT-weighted signals (APTWmax and APTWmin) in each solid tumor region were obtained and compared before and after clinical intervention. All surgical specimens were examined for histopathological diagnosis. Eight of the ten patients underwent adjuvant therapy after surgery. The histopathological diagnosis was glioma in 7 patients (WHO grade 2 in 2 patients, WHO grade 3 in 3 patients, and WHO grade 4 in 2 patients), meningioma WHO grade 1 in 2 patients, and primary lymphoma of the brain in 1 patient. High-grade gliomas showed significantly higher APTW signals than low-grade gliomas. APTWmax in one huge parasagittal meningioma infiltrating into the skull bone was higher than that in glioma WHO grade 4. On the other hand, APTWmax in another convexity meningioma was the same as that in glioma WHO grade 3. Diagnosis of primary lymphoma of the brain was possible with APT imaging before pathological confirmation. APTW signals in residual tumors decreased dramatically within one year after adjuvant therapy in all patients. APT imaging demonstrated excellent diagnostic performance for the planning of surgery and adjuvant therapy of brain tumors.
Keywords: amides, magnetic resonance imaging, brain tumors, cell proliferation
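The APT-weighted signal is commonly quantified as the magnetization transfer ratio asymmetry of the Z-spectrum at ±3.5 ppm, the amide proton offset. A small sketch in which the Z-spectrum itself is synthetic and purely illustrative:

```python
import numpy as np

offsets = np.linspace(-5, 5, 41)                    # saturation offsets, ppm
z = 1 - 0.6 * np.exp(-offsets**2 / 2)               # toy Z-spectrum, S_sat / S0
z -= 0.05 * np.exp(-(offsets - 3.5)**2 / 0.5)       # amide dip at +3.5 ppm

def mtr_asym(offsets, z, ppm=3.5):
    """APT-weighted signal: Z(-ppm) - Z(+ppm)."""
    plus = np.interp(ppm, offsets, z)
    minus = np.interp(-ppm, offsets, z)
    return minus - plus

print(f"APTw signal at 3.5 ppm: {mtr_asym(offsets, z):.3f}")
```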
778 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods
Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie
Abstract:
The early warning of high stream flow events (HSF) and floods is an important aspect of the management of surface water and river systems. This process can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting the real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models has been improved through the integration of lag time (Lt) estimation in the modelling process. A total of 8753 patterns of Q, water level, and rainfall (RF) hourly records representing a one-year period (2011) were utilized in the modelling process. Six hydrological scenarios have been arranged through hypothetical cases of input variables to investigate how changes in RF intensity at upstream stations can lead to the formation of floods. The initial stream flow was changed for each scenario in order to include a wide range of hydrological situations in this study. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The AI-based model has been successfully employed in early warning through the advance detection of the hydrological conditions that could lead to the formation of floods and HSF, represented by three levels of severity (i.e., alert, warning, and danger). Based on the results of the scenarios, reaching the danger level in the downstream area required high RF intensity in at least two upstream areas. According to the results of the applications, it can be concluded that AI-based models are beneficial tools for the local authorities for flood control and awareness.
Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence
777 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks
Authors: Van Trieu, Shouhuai Xu, Yusheng Feng
Abstract:
Tracking attack trajectories can be difficult, given limited information about the nature of the attack. It is even more difficult when attack information is collected by Intrusion Detection Systems (IDSs), since current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events but do not show how the events relate to each other or which event possibly causes another event to happen. Because of this, it is important to investigate new methods capable of tracking attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident responses. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect the most probable attack events that can cause another event to occur in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools, such as IDSs, and would cost expert human analysts significant time, if it were possible at all. The computational results from the proposed two-level graph network model reveal obvious patterns and trends. In fact, more than 85% of causal pairs have an average time difference between the causal and effect events, in both computed and observed data, within 5 minutes. This result can be used as a preventive measure against future attacks. Although the forecast window may be short, from 0.24 seconds to 5 minutes, it is long enough to be used to design a prevention protocol to block those attacks.
Keywords: causality, multilevel graph, cyber-attacks, prediction
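As a first-level illustration of the idea, candidate cause-effect pairs can be screened by counting how often one event type precedes another within the 5-minute window the abstract reports. The event log below is invented, and the full framework would follow this screen with conditional independence tests:

```python
import pandas as pd

# Illustrative IDS event log: timestamps in seconds and coarse event types
events = pd.DataFrame({
    "t":    [0, 40, 130, 150, 400, 420, 900],
    "type": ["scan", "exploit", "scan", "exploit", "scan", "exploit", "scan"],
})

window = 300   # the 5-minute window reported in the abstract
pairs = {}
for _, a in events.iterrows():
    for _, b in events.iterrows():
        if a["type"] != b["type"] and 0 < b["t"] - a["t"] <= window:
            key = (a["type"], b["type"])
            pairs[key] = pairs.get(key, 0) + 1
print(pairs)   # a dominating ('scan', 'exploit') count suggests a causal pair
```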
776 Relative Entropy Used to Determine the Divergence of Cells in Single Cell RNA Sequence Data Analysis
Authors: An Chengrui, Yin Zi, Wu Bingbing, Ma Yuanzhu, Jin Kaixiu, Chen Xiao, Ouyang Hongwei
Abstract:
Single cell RNA sequencing (scRNA-seq) is one of the effective tools for studying the transcriptomics of biological processes. Currently, the similarity between cells is usually measured with Euclidean distance or its derivatives. However, the process of scRNA-seq follows a multivariate Bernoulli event model; thus, we hypothesize that it would be more efficient to value the divergence between cells with relative entropy than with Euclidean distance. In this study, we compared the performance of Euclidean distance, Spearman correlation distance, and relative entropy using scRNA-seq data of the early, medial, and late stages of limb development generated in our lab. Relative entropy is better than the other methods according to a cluster potential test. Furthermore, we developed KL-SNE, an algorithm modifying t-SNE by changing its definition of divergence between cells from Euclidean distance to Kullback-Leibler divergence. Results showed that KL-SNE was more effective at dissecting cell heterogeneity than t-SNE, indicating the better performance of relative entropy over Euclidean distance. Specifically, the chondrocytes expressing Comp were clustered together with KL-SNE but not with t-SNE. Surprisingly, cells in the early stage were surrounded by cells in the medial stage with KL-SNE, while medial cells neighbored the late stage with t-SNE. These results parallel the heatmap, which showed that cells in the medial stage were more heterogeneous than cells in other stages. In addition, we also found that the results of KL-SNE tend to follow a Gaussian distribution compared with those of t-SNE, which could also be verified with the analysis of scRNA-seq data from another study on human embryo development. Therefore, it is also an effective way to convert a non-Gaussian distribution to a Gaussian distribution and facilitate subsequent statistical processes. Thus, relative entropy is potentially a better way to determine the divergence of cells in scRNA-seq data analysis.
Keywords: single cell RNA sequence, similarity measurement, relative entropy, KL-SNE, t-SNE
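Treating each cell's expression profile as a probability distribution, the pairwise relative entropy between cells is a few lines of Python; the counts below are simulated, the pseudocount smoothing is an assumption, and the full KL-SNE algorithm (which embeds these divergences) is not reproduced:

```python
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(3)
counts = rng.poisson(5, size=(4, 1000))   # 4 cells x 1000 genes (illustrative)
# Pseudocount smoothing, then normalize each cell to a probability distribution
p = (counts + 1) / (counts + 1).sum(axis=1, keepdims=True)

def kl_matrix(p):
    """Pairwise relative entropy D_KL(cell_i || cell_j)."""
    n = p.shape[0]
    return np.array([[entropy(p[i], p[j]) for j in range(n)] for i in range(n)])

D = kl_matrix(p)
D_sym = 0.5 * (D + D.T)   # symmetrize if a distance-like quantity is needed
print(D_sym.round(4))
```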
775 Cross Reactivity of Risperidone in Fentanyl Point of Care Devices
Authors: Barry D. Kyle, Jessica Boyd, Robin Pickersgill, Nicole Squires, Cynthia Balion
Abstract:
Background-Aim: Fentanyl is a highly potent synthetic μ-opioid receptor agonist used for exceptional pain management. Its main metabolite, norfentanyl, is typically present in urine at significantly high concentrations (i.e., ~20%), representing an effective targeting molecule for immunoassay detection. Here, we evaluated the NCS™ One Step Fentanyl Test Device and the BTNX Rapid Response™ Single Drug Test Strip point of care (POC) test strips, targeting norfentanyl (20 ng/ml) and fentanyl (100 ng/ml) molecules, for potential risperidone interference. Methods: POC tests calibrated against norfentanyl (20 ng/ml) used immunochromatographic lateral flow devices to provide qualitative results within five minutes of urine sample contact. Results were recorded as negative if lines appeared in both the test and control regions, according to the manufacturer's instructions. Positive results were recorded if no line appeared in the test region (i.e., only the control line was visible). Pooled patient urine (n=20), which screened negative for drugs of abuse (using the NCS One Step Multi-Line Screen) and fentanyl (using the BTNX Rapid Response Strip), was used for spiking studies. Urine was spiked with risperidone alone and with combinations of fentanyl, norfentanyl, and/or risperidone to evaluate cross-reactivity in each test device. Results: A positive screen result was obtained when 8,000 ng/mL of risperidone was spiked into drug-free urine using the NCS test device. Positive screen results were also obtained in spiked urine samples containing fentanyl and norfentanyl combinations below the cut-off concentrations when 4,000 ng/mL risperidone was present, using the NCS testing device. There were no positive screen results using the BTNX test strip with up to 8,000 ng/mL risperidone alone or in combination with concentrations of fentanyl and norfentanyl below the cut-off. Both devices screened positive when either fentanyl or norfentanyl exceeded the cut-off threshold, in both the absence and presence of risperidone. Conclusion: We report that urine samples containing risperidone may give a false positive result using the NCS One Step Fentanyl Test Device.
Keywords: fentanyl, interferences, point of care test, Risperidone
774 The Link between Corporate Governance and EU Competition Law Enforcement: A Conditional Logistic Regression Analysis of the Role of Diversity, Independence and Corporate Social Responsibility
Authors: Jeroen De Ceuster
Abstract:
This study is the first empirical analysis of the link between corporate governance and European Union competition law. Although competition law enforcement is often studied through the lens of competition law, we offer an alternative perspective by looking at a number of corporate governance factors at the level of the board of directors. We find that undertakings where the Chief Executive Officer is also chairman of the board are twice as likely to violate European Union competition law. No significant relationship was found between European Union competition law infringements and gender diversity of the board, the size of the board, the percentage of directors appointed after the Chief Executive Officer, the percentage of independent directors, or the presence of a corporate social responsibility (CSR) committee. This contribution is based on a 1-1 matched peer study. Our sample includes all ultimate parent companies with a board that have been sanctioned by the European Commission for either anticompetitive agreements or abuse of dominance for the period from 2004 to 2018. Each of these companies was matched to a company that has headquarters in the same country, belongs to the same industry group, is active in the European Economic Area, and is the nearest neighbor to the infringing company in terms of revenue. Our final sample includes 121 pairs. As is common with matched peer studies, we use conditional logistic regression (CLR) to analyze the differences within these pairs. The only statistically significant independent variable after controlling for size and performance is CEO/chair duality. The results indicate that companies whose Chief Executive Officer also functions as chairman of the board are twice as likely to infringe European Union competition law. This is in line with the monitoring theory of the board of directors, which states that its primary function is to monitor top management. Since competition law infringements are mostly organized by management and hidden from board directors, the results suggest that a Chief Executive Officer who is also chairman is more likely to be either complicit in the infringement or less critical towards his day-to-day colleagues, and thus impedes proper detection of competition law infringements by the board.
Keywords: corporate governance, competition law, board of directors, board independence, gender diversity, corporate social responsibility
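Conditional logistic regression on 1-1 matched pairs can be sketched with the statsmodels ConditionalLogit class. The data frame below is simulated rather than the study's data; a coefficient whose exponential is about 2 would correspond to the doubled odds reported for CEO/chair duality:

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(2)
n_pairs = 121
df = pd.DataFrame({
    "pair": np.repeat(np.arange(n_pairs), 2),      # infringer/peer matched pairs
    "infringed": np.tile([1, 0], n_pairs),         # outcome within each pair
    "ceo_chair": rng.integers(0, 2, 2 * n_pairs),  # CEO duality dummy
    "board_size": rng.normal(10, 2, 2 * n_pairs),
})

# The grouping variable conditions the likelihood on each matched pair
model = ConditionalLogit(df["infringed"], df[["ceo_chair", "board_size"]],
                         groups=df["pair"])
res = model.fit()
print(np.exp(res.params))   # odds ratios; ~2 for ceo_chair would match the paper
```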
773 Optimizing Parallel Computing Systems: A Java-Based Approach to Modeling and Performance Analysis
Authors: Maher Ali Rusho, Sudipta Halder
Abstract:
The purpose of the study is to develop optimal solutions for models of parallel computing systems using the Java language. During the study, programmes were written for the examined models of parallel computing systems. The result of the parallel sorting code is the output of a sorted array of random numbers. When processing data in parallel, the time spent on processing and the first elements of the list of squared numbers are displayed. When processing requests asynchronously, processing completion messages are displayed for each task with a slight delay. The main results include the development of optimisation methods for algorithms and processes, such as the division of tasks into subtasks, the use of non-blocking algorithms, effective memory management, and load balancing, as well as the construction of diagrams and comparison of these methods by characteristics, including descriptions, implementation examples, and advantages. In addition, various specialised libraries were analysed to improve the performance and scalability of the models. The results of the work performed showed a substantial improvement in response time, bandwidth, and resource efficiency in parallel computing systems. Scalability and load analysis assessments were conducted, demonstrating how the system responds to an increase in data volume or the number of threads. Profiling tools were used to analyse performance in detail and identify bottlenecks in models, which improved the architecture and implementation of parallel computing systems. The obtained results emphasise the importance of choosing the right methods and tools for optimising parallel computing systems, which can substantially improve their performance and efficiency.
Keywords: algorithm optimisation, memory management, load balancing, performance profiling, asynchronous programming
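The task-division and asynchronous-processing patterns described here are language-agnostic. The study's own code is in Java and is not reproduced; the following compact Python analogue sketches parallel sorting by subtask division and asynchronous request handling with invented sizes and delays:

```python
import asyncio
import time
from concurrent.futures import ProcessPoolExecutor
from heapq import merge
from random import random

def sort_chunk(chunk):
    # Each subtask sorts its own slice independently
    return sorted(chunk)

async def handle_request(i):
    await asyncio.sleep(0.1)          # non-blocking wait, e.g. simulated I/O
    return f"task {i} done"

async def handle_all():
    return await asyncio.gather(*(handle_request(i) for i in range(3)))

if __name__ == "__main__":
    data = [random() for _ in range(1_000_000)]
    chunks = [data[i::4] for i in range(4)]            # divide task into subtasks
    t0 = time.perf_counter()
    with ProcessPoolExecutor(max_workers=4) as pool:   # sort the chunks in parallel
        parts = list(pool.map(sort_chunk, chunks))
    result = list(merge(*parts))                       # combine sorted sub-results
    print(f"sorted {len(result)} numbers in {time.perf_counter() - t0:.2f}s")
    print(asyncio.run(handle_all()))                   # asynchronous request handling
```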
772 The Role of Hypothalamus Mediators in Energy Imbalance
Authors: Maftunakhon Latipova, Feruza Khaydarova
Abstract:
Obesity is considered a chronic metabolic disease that can occur at any age. Regulation of body weight is carried out through the complex interaction of interrelated systems that control the body's energy balance. Energy imbalance, in which the supply of energy from food exceeds the energy needs of the body, is the cause of obesity and overweight. Obesity is closely related to impaired appetite regulation, and the hypothalamus is a key site for the neural regulation of food consumption. The nuclei of the hypothalamus are interconnected and interdependent in receiving, integrating, and sending hunger signals to regulate appetite. Purpose of the study: to identify markers of eating behavior. Materials and methods: Screening was carried out to identify eating disorders in 200 men and women aged 18 to 35 years with overweight and obesity and to check the levels of the markers Orexin A and Neuropeptide Y. Questionnaires were administered to 200 people aged 18 to 35 years, covering eating disorders and hidden depression (on the Zung scale). Anthropometry was measured by waist circumference, hip circumference, BMI, weight, and height. Based on the results of the collected data, 3 groups were formed: people with obesity, people with overweight, and a control group of healthy people. Results: Of the 200 persons analysed, 86% had eating disorders. Of these, 60% of the eating disorders were associated with childhood. According to the Zung test results, about 37% were in a normal condition, 20% had mild depressive disorder, 25% had moderate depressive disorder, and 18% suffered from severe depressive disorder without knowing it. One group of people with obesity had eating disorders and moderate or severe depressive disorder, and group 2 was overweight with mild depressive disorder. According to the laboratory data, the first group had the lowest serum concentrations of Orexin A and Neuropeptide Y. Conclusions: Being overweight and obese are the first signals of many diseases, and prevention and detection of these disorders will prevent various diseases, including type 2 diabetes. The etiology of obesity is associated with eating disorders and signal transmission of the orexinergic system of the hypothalamus.
Keywords: obesity, endocrinology, hypothalamus, overweight
771 Design and Optimization of a Small Hydraulic Propeller Turbine
Authors: Dario Barsi, Marina Ubaldi, Pietro Zunino, Robert Fink
Abstract:
A design and optimization procedure is proposed and developed to provide the geometry of a high-efficiency compact hydraulic propeller turbine for low heads. For the preliminary design of the machine, classic design criteria, based on the use of statistical correlations for the definition of the fundamental geometric parameters and the blade shapes, are used. These relationships are based on the fundamental design parameters (i.e., specific speed, flow coefficient, work coefficient) in order to provide a simple yet reliable procedure. Particular attention is paid, from the initial steps, to the correct conformation of the meridional channel and to the correct arrangement of the blade rows. The preliminary geometry thus obtained is used as a starting point for the hydrodynamic optimization procedure, carried out using CFD calculation software coupled with a genetic algorithm that generates and updates a large database of turbine geometries. The optimization process is performed using a commercial approach that solves the Reynolds-averaged Navier-Stokes (RANS) equations by exploiting the axisymmetric geometry of the machine. The geometries generated within the database are therefore calculated in order to determine the corresponding overall performance. In order to speed up the optimization calculation, an artificial neural network (ANN) based on the use of an objective function is employed. The procedure was applied to the specific case of a propeller turbine with an innovative modular design, specific to applications characterized by very low heads. The procedure is tested in order to verify its validity and its ability to automatically obtain the targeted net head and the maximum total-to-total internal efficiency.
Keywords: renewable energy conversion, hydraulic turbines, low head hydraulic energy, optimization design
770 Enhancing Healthcare Delivery in Low-Income Markets: An Exploration of Wireless Sensor Network Applications
Authors: Innocent Uzougbo Onwuegbuzie
Abstract:
Healthcare delivery in low-income markets is fraught with numerous challenges, including limited access to essential medical resources, inadequate healthcare infrastructure, and a significant shortage of trained healthcare professionals. These constraints lead to suboptimal health outcomes and a higher incidence of preventable diseases. This paper explores the application of Wireless Sensor Networks (WSNs) as a transformative solution to enhance healthcare delivery in these underserved regions. WSNs, comprising spatially distributed sensor nodes that collect and transmit health-related data, present opportunities to address critical healthcare needs. Leveraging WSN technology facilitates real-time health monitoring and remote diagnostics, enabling continuous patient observation and early detection of medical issues, especially in areas with limited healthcare facilities and professionals. The implementation of WSNs can enhance the overall efficiency of healthcare systems by enabling timely interventions, reducing the strain on healthcare facilities, and optimizing resource allocation. This paper highlights the potential benefits of WSNs in low-income markets, such as cost-effectiveness, increased accessibility, and data-driven decision-making. However, deploying WSNs involves significant challenges, including technical barriers like limited internet connectivity and power supply, alongside concerns about data privacy and security. Moreover, robust infrastructure and adequate training for local healthcare providers are essential for successful implementation. The paper further examines future directions for WSNs, emphasizing innovation, scalable solutions, and public-private partnerships. By addressing these challenges and harnessing the potential of WSNs, it is possible to revolutionize healthcare delivery and improve health outcomes in low-income markets.
Keywords: wireless sensor networks (WSNs), healthcare delivery, low-income markets, remote patient monitoring, health data security
769 Multi-Objective Multi-Period Allocation of Temporary Earthquake Disaster Response Facilities with Multi-Commodities
Authors: Abolghasem Yousefi-Babadi, Ali Bozorgi-Amiri, Aida Kazempour, Reza Tavakkoli-Moghaddam, Maryam Irani
Abstract:
All over the world, natural disasters (e.g., earthquakes, floods, volcanoes, and hurricanes) cause many deaths. Earthquakes are catastrophic events caused by unusual phenomena that lead to great losses around the world. Such disasters strongly demand long-term help and relief, which can be hard to manage. Supplies and facilities are very important challenges after any earthquake and should be prepared for the disaster regions to satisfy the demands of the people suffering from the earthquake. This paper proposes a disaster response facility allocation problem for disaster relief operations as a mathematical programming model. Earthquake victims need not only consumable commodities (e.g., food and water) but also non-consumable commodities (e.g., clothes) to protect themselves. Therefore, it is concluded that paying attention to disaster points and people's demands is very necessary. To deal with this objective, both consumable and non-consumable commodities are considered in the presented model. This paper presents a multi-objective multi-period mathematical programming model that minimizes the average of the weighted response times and minimizes the total operational cost and penalty costs of unmet demand and unused commodities simultaneously. Furthermore, a Chebycheff multi-objective solution procedure, as a powerful solution algorithm, is applied to solve the proposed model. Finally, to illustrate the model's applicability, a case study of the Tehran earthquake is studied, and to validate the model, a sensitivity analysis is carried out.
Keywords: facility location, multi-objective model, disaster response, commodity
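The Chebycheff (weighted Chebyshev) procedure scalarizes multiple objectives by minimizing the worst weighted deviation from the ideal point. A toy sketch over a handful of pre-evaluated candidate plans; the objective values and weights are invented, and the real model optimizes over decision variables rather than enumerated plans:

```python
import numpy as np

# Candidate allocation plans: (avg weighted response time, total cost)
objectives = np.array([[3.2, 120.0],
                       [2.1, 180.0],
                       [4.0,  90.0],
                       [2.6, 140.0]])

ideal = objectives.min(axis=0)       # best achievable value of each objective
weights = np.array([0.6, 0.4])       # decision-maker preference

# Weighted Chebyshev scalarization: minimize the worst weighted, normalized
# deviation from the ideal point
deviations = weights * (objectives - ideal) / objectives.ptp(axis=0)
best = deviations.max(axis=1).argmin()
print("chosen plan:", best, "objective values:", objectives[best])
```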
768 High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections
Authors: Anthony D. Rhodes, Manan Goel
Abstract:
We provide a high-fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks, using a dense convolutional network with context-aware skip connections and compressed 'hypercolumn' image features combined with a convolutional tessellation procedure. In order to maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without utilizing downsampling or pooling procedures. We maintain this consistent, high-grade fidelity efficiently in our model chiefly through two means: (1) we use a statistically principled tensor decomposition procedure to modulate the number of hypercolumn features and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models for interactive segmentation tasks using high-resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains over 27,046 high-resolution video frames, including green screen and various composited scenes with corresponding, hand-crafted, pixel-level segmentations. Our work improves state-of-the-art segmentation fidelity with high-resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines.
Keywords: computer vision, object segmentation, interactive segmentation, model compression
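Boundary losses for segmentation are commonly formulated as the predicted foreground probability integrated against a signed distance map of the ground-truth boundary; whether HyperSeg uses exactly this form is not stated, so the sketch below is one common formulation on toy data:

```python
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def boundary_loss(probs, gt_mask):
    """Predicted foreground probability integrated against a signed distance
    map of the ground truth: positive outside the object, negative inside."""
    signed = (distance_transform_edt(1 - gt_mask)
              - distance_transform_edt(gt_mask))
    return (probs * torch.from_numpy(signed).float()).mean()

gt = np.zeros((64, 64), dtype=np.uint8)
gt[20:40, 20:40] = 1                             # toy ground-truth mask
probs = torch.rand(64, 64, requires_grad=True)   # network's foreground probabilities
loss = boundary_loss(probs, gt)
loss.backward()                                  # differentiable, so trainable
print(float(loss))
```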
767 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)
Authors: Tarek Duzan
Abstract:
Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and later in the production of any field, because fractures can completely change our thinking, efforts, and planning for producing a specific field properly. From the structural point of view, all reservoirs are fractured to some extent. The North Gialo field is thought to be a naturally fractured reservoir to some extent. Historically, naturally fractured reservoirs are more complicated in terms of their exploration and production efforts, and most geologists tend to deny the presence of fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing so that, consequently, our evaluation and planning can be done properly and efficiently from day one. The challenging part in this field is that there are not enough data or straightforward well tests to make us completely comfortable with the idea of fracturing; however, we cannot ignore the fractures completely. Logging images, available well testing, and limited core studies are our tools at this stage to evaluate, model, and predict possible fracture effects in this reservoir. The aims of this study are both fundamental and practical: to improve the prediction and diagnosis of natural-fracture attributes in the N. Gialo hydrocarbon reservoirs and accurately simulate their influence on production. Moreover, the production of this field comes from a two-phase plan: self-depletion of oil and then a gas injection period for pressure maintenance and an increased ultimate recovery factor. Therefore, a good understanding of the fracture network is essential before proceeding with the targeted plan. New analytical methods will lead to more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well test and seismic interpretations and that can readily be used in reservoir simulators.
Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data
766 A Step Magnitude Haptic Feedback Device and Platform for Better Way to Review Kinesthetic Vibrotactile 3D Design in Professional Training
Authors: Biki Sarmah, Priyanko Raj Mudiar
Abstract:
In the modern world of remotely interactive, virtual reality-based learning and teaching, including professional skill-building training and acquisition practices, as well as data acquisition and robotic systems, the application of field-programmable neurostimulator aids and first-hand interactive sensitisation techniques in 3D holographic audio-visual platforms has been a coveted dream of many scholars, professionals, scientists, and students. Integration of kinaesthetic vibrotactile haptic perception, along with actuated step magnitude contact profiloscopy, in augmented reality-based learning platforms and professional training can be implemented by using carefully calculated and well-coordinated image telemetry, including remote data mining and control techniques. A real-time, computer-aided (PLC-SCADA) field-calibration-based algorithm must be designed for the purpose. Most importantly, in order to actually realise, and to 'interact' with, 3D holographic models displayed over a remote screen using remote laser image telemetry and control, all spatio-physical parameters, such as cardinal alignment, gyroscopic compensation, surface profile, and thermal composition, must be implemented using zero-order type 1 actuators (or transducers), because they provide zero hysteresis, zero backlash, and low dead time, as well as linear, absolutely controllable, intrinsically observable, and smooth performance with the least amount of error compensation, while ensuring the best ergonomic comfort possible for the users.
Keywords: haptic feedback, kinaesthetic vibrotactile 3D design, medical simulation training, piezo diaphragm based actuator
765 Experimental Investigation of Beams Having Spring Mass Resonators
Authors: Somya R. Patro, Arnab Banerjee, G. V. Ramana
Abstract:
A flexural beam carrying elastically mounted concentrated masses, such as engines, motors, oscillators, or vibration absorbers, is often encountered in the mechanical, civil, and aeronautical engineering domains. To prevent resonance conditions, designers must predict the natural frequencies of such a constrained beam system. This paper presents experimental and analytical studies on vibration suppression in a cantilever beam with a tip mass with the help of a spring-mass resonator to achieve local resonance conditions. The system consists of a 3D-printed polylactic acid (PLA) beam screwed to the base plate of the shaker system. An accelerometer, which also acts as the tip mass, is attached at the free end. A spring and a mass are attached at the bottom to replicate the mechanism of the spring-mass resonator. The Fast Fourier Transform (FFT) algorithm converts acceleration time histories into frequency-amplitude plots, from which transmittance is calculated as a function of the excitation frequency. The mathematical formulation is based on the transfer matrix method, and the governing differential equations are based on Euler-Bernoulli beam theory. The experimental results are successfully validated with the analytical results, providing essential confidence in our proposed methodology. The beam spring-mass system is then converted to an equivalent two-degree-of-freedom system, from which the frequency response function is obtained. The H2 optimization technique is also used to obtain the closed-form expression for the optimum spring stiffness, which shows the influence of spring stiffness on the system's natural frequency and vibration response.
Keywords: Euler-Bernoulli beam theory, fast Fourier transform, natural frequencies, polylactic acid, transmittance, vibration absorbers
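The equivalent two-degree-of-freedom step (beam modal mass plus tip mass as the primary system, the spring-mass as the absorber) reduces to a small eigenvalue problem. A sketch with invented masses and stiffnesses, not the paper's measured values:

```python
import numpy as np

# Equivalent two-DOF system: beam modal mass + tip mass (m1, k1) with an
# attached spring-mass absorber (m2, k2)
m1, k1 = 0.10, 250.0   # kg, N/m (illustrative, not the paper's values)
m2, k2 = 0.02, 50.0

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])

# Undamped natural frequencies from the generalized eigenproblem K v = w^2 M v
w2 = np.linalg.eigvals(np.linalg.inv(M) @ K)
freqs_hz = np.sqrt(np.sort(w2.real)) / (2 * np.pi)
print("natural frequencies (Hz):", freqs_hz.round(2))
```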
764 Clinical Applications of Amide Proton Transfer Magnetic Resonance Imaging: Detection of Brain Tumor Proliferative Activity
Authors: Fumihiro Imai, Shinichi Watanabe, Shingo Maeda, Haruna Imai, Hiroki Niimi
Abstract:
It is important to know the growth rate of brain tumors before surgery because it influences treatment planning, including not only the surgical resection strategy but also adjuvant therapy after surgery. Amide proton transfer (APT) imaging is an emerging molecular magnetic resonance imaging (MRI) technique based on chemical exchange saturation transfer without the administration of a contrast medium. The underlying assumption in APT imaging of tumors is that there is a close relationship between the proliferative activity of the tumor and mobile protein synthesis. We aimed to evaluate the diagnostic performance of APT imaging of pre- and post-treatment brain tumors. Ten patients with brain tumors underwent conventional and APT-weighted sequences on a 3.0 Tesla MRI scanner before clinical intervention. The maximum and minimum APT-weighted signals (APTWmax and APTWmin) in each solid tumor region were obtained and compared before and after clinical intervention. All surgical specimens were examined for histopathological diagnosis. Eight of the ten patients underwent adjuvant therapy after surgery. The histopathological diagnosis was glioma in 7 patients (WHO grade 2 in 2 patients, WHO grade 3 in 3 patients, and WHO grade 4 in 2 patients), meningioma WHO grade 1 in 2 patients, and primary lymphoma of the brain in 1 patient. High-grade gliomas showed significantly higher APTW signals than low-grade gliomas. APTWmax in one huge parasagittal meningioma infiltrating into the skull bone was higher than that in glioma WHO grade 4. On the other hand, APTWmax in another convexity meningioma was the same as that in glioma WHO grade 3. Diagnosis of primary lymphoma of the brain was possible with APT imaging before pathological confirmation. APTW signals in residual tumors decreased dramatically within one year after adjuvant therapy in all patients. APT imaging demonstrated excellent diagnostic performance for the planning of surgery and adjuvant therapy of brain tumors.
Keywords: amides, magnetic resonance imaging, brain tumors, cell proliferation
763 Comparison of Fatty Acids Composition of Three Commercial Fish Species Farmed in the Adriatic Sea
Authors: Jelka Pleadin, Greta Krešić, Tina Lešić, Ana Vulić, Renata Barić, Tanja Bogdanović, Dražen Oraić, Ana Legac, Snježana Zrnčić
Abstract:
Fish has been acknowledged as an integral component of a well-balanced diet, providing a healthy source of energy, high-quality proteins, vitamins, essential minerals and, especially, n-3 long-chain polyunsaturated fatty acids (n-3 LC PUFA), mainly eicosapentaenoic acid (20:5 n-3, EPA) and docosahexaenoic acid (22:6 n-3, DHA), whose pleiotropic effects in terms of health promotion and disease prevention have been increasingly recognised. In this study, the fatty acid composition of three commercially important farmed fish species, sea bream (Sparus aurata), sea bass (Dicentrarchus labrax), and dentex (Dentex dentex), was investigated. In total, 60 fish samples were retrieved during 2015 (n = 30) and 2016 (n = 30) from different locations in the Adriatic Sea. Methyl esters of fatty acids were analysed using gas chromatography (GC) with flame ionization detection (FID). The results show that the most represented fatty acid in all three analysed species is oleic acid (C18:1n-9, OA), followed by linoleic acid (C18:2n-6, LA) and palmitic acid (C16:0, PA). Dentex was shown to have two to four times higher eicosapentaenoic (EPA) and docosahexaenoic (DHA) acid content compared to sea bream and sea bass. The recommended n-6/n-3 ratio was determined in all fish species, but the obtained results pointed to statistically significant differences (p < 0.05) in fatty acid composition among the analysed fish species and in their potential as dietary sources of valuable fatty acids. Sea bass and sea bream had a significantly higher proportion of n-6 fatty acids, while dentex had a significantly higher proportion of n-3 (C18:4n-3, C20:4n-3, EPA, DHA) fatty acids. A higher ratio of hypocholesterolaemic to hypercholesterolaemic fatty acids (HH) was determined for sea bass and sea bream, which is a consequence of the lower share of saturated fatty acids (SFA) determined in these two species in comparison to dentex. Since the analysed fish species vary in their fatty acid composition, consumption of diverse fish species would be advisable. Based on the established lipid quality indicators, dentex, a fish species underutilised by aquaculture, seems to be a highly recommendable and important source of fatty acids to include in the human diet.
Keywords: dentex, fatty acids, farmed fish, sea bass, sea bream
Procedia PDF Downloads 392762 Monitoring of Cannabis Cultivation with High-Resolution Images
Authors: Levent Basayigit, Sinan Demir, Burhan Kara, Yusuf Ucar
Abstract:
Cannabis is mostly used for drug production. In some countries, an excessive amount of illegal cannabis is cultivated and sold. Most illegal cannabis cultivation occurs on land far from settlements; on farmland, it is grown among other crops, concealed by tall plants such as corn and sunflower, or as part of a mixed culture of tall crops. The common method for detecting illegal cultivation areas is to follow up information obtained from people, which is insufficient in remote areas; for this reason, more effective methods are needed. Remote sensing is one of the most important technologies for monitoring plant growth on land. The aim of this study was to monitor cannabis cultivation areas using satellite imagery and to develop an applicable monitoring method. For this purpose, cannabis was grown in plots either alone or surrounded by corn and sunflower. The morphological characteristics of cannabis were recorded twice a month during the vegetation period. A spectral signature library was created with a spectroradiometer. The parcels were monitored with high-resolution satellite imagery, and the processed imagery was used to classify the cannabis cultivation areas. To separate the cannabis plots from the other plants, the multiresolution segmentation algorithm was found to be the most successful for classification, and the WorldView Improved Vegetative Index (WV-VI) was the most accurate method for monitoring plant density. As a result, an object-based classification method combined with vegetation indices was sufficient for monitoring cannabis cultivation in multi-temporal WorldView images.Keywords: Cannabis, drug, remote sensing, object-based classification
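As an illustration of the index-based density mapping described above, the sketch below assumes WV-VI behaves like a normalized difference of a WorldView near-infrared band and the red band; the band pairing, the class thresholds, and the reflectance values are assumptions, not the authors' settings:

```python
import numpy as np

def wv_vi(nir2, red):
    """Normalized-difference vegetation index from WorldView bands."""
    return (nir2 - red) / (nir2 + red)

def density_classes(index, thresholds=(0.2, 0.5)):
    """Bin index values into low (0), medium (1), high (2) plant density."""
    return np.digitize(index, thresholds)

nir2 = np.array([0.45, 0.60, 0.30])  # synthetic reflectances per segment
red = np.array([0.10, 0.08, 0.20])
index = wv_vi(nir2, red)
print(index, density_classes(index))
```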
Procedia PDF Downloads 272761 Characterization of Defense-Related Genes and Metabolite Profiling in Oil Palm Elaeis guineensis during Interaction with Ganoderma boninense
Authors: Mohammad Nazri Abdul Bahari, Nurshafika Mohd Sakeh, Siti Nor Akmar Abdullah
Abstract:
Basal stem rot (BSR) is the most devastating disease of oil palm. Among oil palm pathogenic fungi, the most prevalent and virulent species associated with BSR is Ganoderma boninense. Early detection of G. boninense attack, before physical symptoms appear, can offer opportunities to prevent the spread of this necrotrophic fungus. However, a poor understanding of the molecular defense responses and of the roles of antifungal metabolites in oil palm against G. boninense has hampered control measures. Hence, characterization of defense-related molecular responses and of the production of antifungal compounds during early interaction with G. boninense is of utmost importance. Four-month-old oil palm (Elaeis guineensis) seedlings were artificially infected with a G. boninense-inoculated rubber wood block via the sitting technique. RNA was extracted from root and leaf tissues at 0, 3, 7 and 11 days post inoculation (d.p.i.), followed by RNA-Seq sequencing. Differentially expressed genes (DEGs) of the oil palm-G. boninense interaction were identified, and changes in the metabolite profile were scrutinized in relation to the DEGs. The RNA-Seq data generated a total of 113,829,376 and 313,293,229 paired-end clean reads from untreated (0 d.p.i.) and treated (3, 7, 11 d.p.i.) samples, respectively, each with two biological replicates. The paired-end reads were mapped to the Elaeis guineensis reference genome to screen out non-oil palm genes, yielding 74,794 coding sequences. DEG analysis of phytohormone biosynthetic genes in oil palm roots revealed that, at p-value ≤ 0.01, ethylene and jasmonic acid may act in an antagonistic manner with salicylic acid to coordinate the defense response at early interaction with G. boninense. Findings on the metabolite profiling of G. boninense-infected oil palm roots and leaves are expected to explain the defense-related compounds elicited by Elaeis guineensis in response to G. boninense colonization. The study aims to shed light on the molecular defense response of oil palm at early interaction with G. boninense and to promote preventive measures against Ganoderma infection.Keywords: Ganoderma boninense, metabolites, phytohormones, RNA-Seq
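The DEG screen at p-value ≤ 0.01 can be pictured with a minimal sketch like the one below; the gene identifiers, the fold-change column, and the |log2FC| ≥ 1 cutoff are hypothetical additions, not values from the study:

```python
import pandas as pd

# Hypothetical per-gene statistics for treated vs. untreated (0 d.p.i.) roots
counts = pd.DataFrame({
    "gene": ["EgERF1", "EgJAZ3", "EgPR1"],   # invented gene IDs
    "log2_fold_change": [2.4, 1.8, -2.1],
    "p_value": [0.002, 0.009, 0.03],
})

# Keep genes passing both the significance and fold-change filters
degs = counts[(counts["p_value"] <= 0.01)
              & (counts["log2_fold_change"].abs() >= 1.0)]
print(degs)  # EgERF1 and EgJAZ3 pass; EgPR1 fails the p-value cutoff
```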
Procedia PDF Downloads 264760 Design and Development of On-Line, On-Site, In-Situ Induction Motor Performance Analyser
Authors: G. S. Ayyappan, Srinivas Kota, Jaffer R. C. Sheriff, C. Prakash Chandra Joshua
Abstract:
In the present scenario of energy crises, energy conservation in electrical machines is very important in industry. To conserve energy, one needs to monitor the performance of an induction motor on-site and in-situ, yet the instruments available for this purpose are scarce and very expensive. This paper deals with the design and development of an on-line, on-site, in-situ induction motor performance analyser. The system measures only a few electrical input parameters: input voltage, line current, power factor, frequency, powers, and motor shaft speed. These measured data are combined with nameplate details to compute the operating efficiency of the induction motor. The system computes motor losses with the help of equivalent circuit parameters; the equivalent circuit parameters of the motor under test are estimated using the developed algorithm at any load condition and stored in the system memory. The developed instrument is reliable, accurate, compact, rugged, and cost-effective, and this portable instrument can be used as a handy tool to study the performance of both slip-ring and cage induction motors. During the analysis, the data can be stored on an SD memory card, and various analyses can be performed, such as load vs. efficiency and torque vs. speed characteristics. With the help of the developed instrument, one can operate the motor around its Best Operating Point (BOP). Continuous monitoring of motor efficiency could lead to Life Cycle Assessment (LCA) of motors, which helps in deciding whether to replace, retain, or refurbish a motor.Keywords: energy conservation, equivalent circuit parameters, induction motor efficiency, life cycle assessment, motor performance analysis
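A hedged sketch of the loss-segregation idea behind such an analyser, computing efficiency from per-phase equivalent circuit parameters; the circuit values, the omission of the core-loss branch, and the interface are simplifying assumptions rather than the instrument's actual algorithm:

```python
def motor_efficiency(v_phase, r1, x1, r2, x2, xm, slip, p_mech_loss=0.0):
    """Efficiency of a three-phase induction motor from its per-phase
    equivalent circuit (core-loss branch omitted for brevity)."""
    z_rotor = complex(r2 / slip, x2)      # rotor branch referred to stator
    z_mag = complex(0.0, xm)              # magnetizing branch
    z_par = (z_rotor * z_mag) / (z_rotor + z_mag)
    z_total = complex(r1, x1) + z_par
    i1 = v_phase / z_total                # stator current
    e = v_phase - i1 * complex(r1, x1)    # air-gap voltage
    i2 = e / z_rotor                      # rotor current
    p_in = 3 * (v_phase * i1.conjugate()).real
    p_scl = 3 * abs(i1) ** 2 * r1         # stator copper loss
    p_rcl = 3 * abs(i2) ** 2 * r2         # rotor copper loss
    p_out = (p_in - p_scl) - p_rcl - p_mech_loss
    return p_out / p_in

# Placeholder parameters (ohms, volts per phase), not real nameplate data
print(motor_efficiency(v_phase=230, r1=0.5, x1=1.0, r2=0.4, x2=1.0,
                       xm=30.0, slip=0.04))  # roughly 0.91
```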
Procedia PDF Downloads 384759 Multi-Objective Optimal Design of a Cascade Control System for a Class of Underactuated Mechanical Systems
Authors: Yuekun Chen, Yousef Sardahi, Salam Hajjar, Christopher Greer
Abstract:
This paper presents a multi-objective optimal design of a cascade control system for an underactuated mechanical system. Cascade control structures usually include two control algorithms (inner and outer). To design such a control system properly, the following conflicting objectives should be considered at the same time: 1) the inner closed loop must be faster than the outer one, 2) the inner loop should quickly reject any disturbance and prevent it from propagating to the outer loop, 3) the controlled system should be insensitive to measurement noise, and 4) the controlled system should be driven by optimal energy. Such a control problem can be formulated as a multi-objective optimization problem in which the optimal trade-offs among these design goals are found. To the authors' best knowledge, this problem has not been studied in a multi-objective setting so far. In this work, an underactuated mechanical system consisting of a rotary servo motor and a ball-and-beam is used for the computer simulations; the setup parameters of the inner and outer control systems are tuned by NSGA-II (Non-dominated Sorting Genetic Algorithm II), and the dominance concept is used to find the optimal design points. The solution of this problem is not a single optimal cascade controller but rather a set of optimal cascade controllers (called the Pareto set), which represents the optimal trade-offs among the selected design criteria; the image of the Pareto set under the objective functions is called the Pareto front. The solution set is presented to the decision-maker, who can choose any point to implement. The simulation results, in terms of the Pareto front and time responses to external signals, show the competing nature of the design objectives. The presented study may become the basis for the multi-objective optimal design of multi-loop control systems.Keywords: cascade control, multi-loop control systems, multiobjective optimization, optimal control
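The dominance concept that NSGA-II relies on can be shown in a few lines; the four objective values below are stand-ins for the paper's criteria (response speed, disturbance rejection, noise sensitivity, control energy), all treated as quantities to minimize:

```python
def dominates(a, b):
    """a dominates b if no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(designs):
    """Keep only the designs that no other design dominates."""
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o is not d)]

# Each tuple: (settling time, disturbance rejection cost, noise gain, energy)
candidates = [(1.2, 0.8, 0.3, 5.0), (1.0, 0.9, 0.4, 6.0), (1.5, 1.0, 0.5, 7.0)]
print(pareto_set(candidates))  # the third design is dominated by the first
```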
Procedia PDF Downloads 153