Search results for: Genetic Algorithm
Paper Count: 4651

781 Numerical Investigation of Beam-Columns Subjected to Non-Proportional Loadings under Ambient Temperature Conditions

Authors: George Adomako Kumi

Abstract:

The response of structural members subjected to various forms of non-proportional loading plays a major role in the overall stability and integrity of a structure. This research seeks to present the outcome of a finite element investigation, conducted with the finite element software ABAQUS, to validate experimental results on the elastic and inelastic behavior and strength of beam-columns subjected to axial loading, biaxial bending, and torsion under ambient temperature conditions. The ABAQUS finite element models will account for material and geometric non-linearity, large deformations, and, more specifically, the contact behavior between the beam-columns and the support surfaces. The three-dimensional model will be compared with the results of actual tests and with results from a solution algorithm developed using the finite difference method in order to verify the validity of the developed model. The results of this research will seek to provide structural engineers with much-needed knowledge about the behavior of steel beam-columns and their response to various non-proportional loading conditions under ambient temperature.

Keywords: beam-columns, axial loading, biaxial bending, torsion, ABAQUS, finite difference method

Procedia PDF Downloads 150
780 Numerical Analysis of a Pilot Solar Chimney Power Plant

Authors: Ehsan Gholamalizadeh, Jae Dong Chung

Abstract:

The solar chimney power plant is a feasible solar thermal system that produces electricity from the sun. The objective of this study is to investigate buoyancy-driven flow and heat transfer through a built pilot solar chimney system called the 'Kerman Project'. The system has a chimney with a height of 60 m and a diameter of 3 m; the average radius of its solar collector is about 20 m, and its average collector height is about 2 m. A three-dimensional simulation was conducted to analyze the system using computational fluid dynamics (CFD). In this model, the radiative transfer equation was solved using the discrete ordinates (DO) radiation model, taking into account non-gray radiation behavior. To model solar irradiation from the sun's rays, a solar ray tracing algorithm was coupled to the computation via a source term in the energy equation. The model was validated by comparison with the experimental data of the Manzanares prototype and with the performance of the built pilot system. Then, based on the numerical simulations, the velocity and temperature distributions through the system, the temperature profile of the ground surface and the system performance were presented. The analysis accurately shows the flow and heat transfer characteristics through the pilot system and predicts its performance.

Keywords: buoyancy-driven flow, computational fluid dynamics, heat transfer, renewable energy, solar chimney power plant

Procedia PDF Downloads 231
779 Fault Detection and Isolation in Sensors and Actuators of Wind Turbines

Authors: Shahrokh Barati, Reza Ramezani

Abstract:

Due to countries' growing attention to renewable energy production, the demand for energy from renewable sources has gone up; among renewable energy sources, wind energy has shown the fastest growth in recent years. In this regard, in order to increase the availability of wind turbines, the use of a Fault Detection and Isolation (FDI) system is necessary. Wind turbines are subject to various faults, such as sensor faults, actuator faults, network connection faults, mechanical faults and faults in the generator subsystem. Although sensors and actuators account for a large number of faults in wind turbines, they have been discussed less in the literature. Therefore, in this work, we focus our attention on designing a sensor and actuator fault detection and isolation algorithm and a fault-tolerant control system (FTCS) for wind turbines. The aim of this research is to propose a comprehensive fault detection and isolation system for the sensors and actuators of wind turbines based on data-driven approaches. To achieve this goal, features of the measurable signals in a real wind turbine are extracted under all operating conditions. The next step is feature selection among the extracted features. The features that lead to maximum separation are selected and fed to classifiers implemented in parallel, and the results of the classifiers are fused together. In order to maximize the reliability of the decision on a fault, the property of fault repeatability is used.
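
A minimal, hypothetical sketch of the kind of data-driven pipeline described above (feature extraction, feature selection, parallel classifiers and decision fusion). The feature set, classifiers and fusion rule used here are illustrative assumptions, not the authors' exact design, and the data are synthetic.

```python
# Illustrative data-driven FDI sketch: extract features from turbine signals,
# select the most separating ones, train parallel classifiers, and fuse votes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical dataset: windowed sensor/actuator signals with fault labels
# (0 = healthy, 1 = sensor fault, 2 = actuator fault).
X_raw = rng.normal(size=(600, 20, 50))          # 600 windows, 20 channels, 50 samples
y = rng.integers(0, 3, size=600)

# Simple statistical features per channel (mean, std, peak) -> 60 features per window
feats = np.concatenate([X_raw.mean(2), X_raw.std(2), np.abs(X_raw).max(2)], axis=1)

# Keep the features with the largest class separation (ANOVA F-score)
selector = SelectKBest(f_classif, k=20).fit(feats, y)
X = selector.transform(feats)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Two classifiers run in parallel; their class-probability outputs are fused by averaging
clf_a = SVC(probability=True).fit(X_tr, y_tr)
clf_b = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
fused = (clf_a.predict_proba(X_te) + clf_b.predict_proba(X_te)) / 2
y_hat = fused.argmax(axis=1)
print("fused accuracy:", (y_hat == y_te).mean())
```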

Keywords: FDI, wind turbines, sensors and actuators faults, renewable energy

Procedia PDF Downloads 378
778 Effects of Starvation, Glucose Treatment and Metformin on Resistance in Chronic Myeloid Leukemia Cells

Authors: Nehir Nebioglu

Abstract:

Chemotherapy is widely used for the treatment of cancer. Doxorubicin (DOX) is an anti-cancer chemotherapy drug classified as an anthracycline antibiotic. Antitumor antibiotics are natural products produced by species of the soil fungus Streptomyces. These drugs act in multiple phases of the cell cycle and are known to be cell-cycle specific. Although DOX is a valuable clinical antineoplastic agent, resistance is a problem that limits its utility, in addition to the problem of cardiotoxicity. The drug resistance of cancer cells results from multiple factors, including individual variation, genetic heterogeneity within a tumor, and cellular evolution. The mechanism of resistance is thought to involve, in particular, ABCB1 (MDR1, Pgp) and ABCC1 (MRP1), as well as other transporters. Several studies on DOX-resistant cell lines have shown that resistance can be overcome by inhibition of ABCB1, ABCC1, and ABCC2. This study attempts to understand the effects of different concentrations of glucose treatment and of starvation on the proliferation of doxorubicin-resistant cancer cell lines. To understand the effect of starvation, K562/Dox and K562 cell lines were treated with 0, 5 nM, 50 nM, 500 nM, 5 uM and 50 uM Dox concentrations under both starvation and normal medium conditions. In addition, to interpret the effect of glucose treatment, different concentrations of glucose (0, 1 mM, 5 mM, 25 mM) were applied to Dox-treated (with 0, 5 nM, 50 nM, 500 nM, 5 uM and 50 uM) K562/Dox and K562 cell lines. All results show a significant decrease in the cell count of K562/Dox when cells were starved. However, while the proliferation of K562/Dox lines decreases with increasing applied Dox concentration, the starved K562/Dox cells remain at the same proliferation level. Thus, the results imply that a fraction of the K562/Dox cells gain starvation resistance and remain resistant. Furthermore, for K562/Dox, there is no clear effect of glucose treatment in terms of cell proliferation. In the presence of a moderate level of glucose (5 mM), proliferation increases compared with the other glucose concentrations for each Dox application; however, a significant increase in cell proliferation at the moderate glucose level is only observed at the 5 uM Dox concentration. This moderate concentration of Dox can be examined in further studies. For the high amount of glucose (25 mM), cell proliferation levels are lower than with the moderate glucose application; the reason could be that such a high amount of glucose may not be absorbable by the cells. Also, in the presence of a low amount of glucose, proliferation decreases steadily with increasing Dox concentration. This situation can be explained by the glucose depletion (Warburg effect) described in the literature.

Keywords: drug resistance, cancer cells, chemotherapy, doxorubicin

Procedia PDF Downloads 153
777 Deep Reinforcement Learning Model for Autonomous Driving

Authors: Boumaraf Malak

Abstract:

The development of intelligent transportation systems (ITS) and artificial intelligence (AI) is spurring us to pave the way for the widespread adoption of autonomous vehicles (AVs). This opens up new opportunities for smart roads, smart traffic safety, and mobility comfort. A highly intelligent decision-making system is essential for autonomous driving around dense, dynamic objects. It must be able to handle complex road geometry and topology, as well as complex multi-agent interactions, and closely follow higher-level commands such as routing information. Autonomous vehicles have become a very hot research topic in recent years due to their significant potential to reduce traffic accidents and personal injuries. New artificial-intelligence-based technologies handle important functions in scene understanding, motion planning, decision making, vehicle control, social behavior, and communication for AVs. This paper focuses only on deep reinforcement learning-based methods; it does not include traditional planning techniques, which have been the subject of extensive research in the past, because reinforcement learning (RL) has become a powerful learning framework that is now capable of learning complex policies in high-dimensional environments. The DRL algorithms used so far have found solutions to the four main problems of autonomous driving; in this paper, we highlight the challenges and point to possible future research directions.

Keywords: deep reinforcement learning, autonomous driving, deep deterministic policy gradient, deep Q-learning

Procedia PDF Downloads 55
776 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable

Authors: Xinyuan Y. Song, Kai Kang

Abstract:

Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption in conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not always be supported, because unobservable traits, namely latent variables, which are indirectly observable and should be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach to deal with this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of latent variables. The third part is a proportional hazard model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on the hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is used to perform statistical inference. An application of the proposed joint model to a study on the Alzheimer's Disease Neuroimaging Initiative is presented.
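
To make the three-part structure concrete, the following is a minimal simulation sketch, assuming a single latent factor with a linear random-coefficient trajectory, three observed indicators, a constant baseline hazard, and one binary time-invariant predictor. These modelling choices and all numerical values are illustrative and are not taken from the paper.

```python
# Sketch of the three model parts: (1) indicators loading on a latent factor,
# (2) a random-coefficient trajectory for the factor, (3) a proportional hazard
# whose risk depends on the time-varying latent factor.
import numpy as np

rng = np.random.default_rng(0)
n, J, n_visits = 200, 3, 5

# Part 2: latent trajectory eta_i(t) = a_i + b_i * t (random coefficients)
a = rng.normal(0.0, 1.0, n)
b = np.abs(rng.normal(0.5, 0.2, n)) + 0.05
visits = np.arange(n_visits)
eta = a[:, None] + b[:, None] * visits[None, :]

# Part 1: dynamic factor model, y_ij(t) = lambda_j * eta_i(t) + measurement error
lam = np.array([1.0, 0.8, 1.2])
y = eta[:, :, None] * lam[None, None, :] + rng.normal(0, 0.3, (n, n_visits, J))

# Part 3: proportional hazard h_i(t) = h0 * exp(gamma * x_i + beta * eta_i(t))
h0, gamma, beta = 0.05, 0.4, 0.6
x = rng.binomial(1, 0.5, n)                     # a time-invariant predictor

# Closed-form cumulative hazard lets us simulate event times by inversion:
# H_i(T) = h0*exp(gamma*x_i + beta*a_i) * (exp(beta*b_i*T) - 1) / (beta*b_i) = E, E ~ Exp(1)
E = rng.exponential(1.0, n)
k = h0 * np.exp(gamma * x + beta * a)
event_time = np.log(1.0 + E * beta * b / k) / (beta * b)
censor_time = rng.uniform(2.0, 10.0, n)
observed = np.minimum(event_time, censor_time)
delta = (event_time <= censor_time).astype(int)  # event indicator
print("events observed:", delta.sum(), "of", n)
```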

Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data

Procedia PDF Downloads 117
775 Teaching Tools for Web Processing Services

Authors: Rashid Javed, Hardy Lehmkuehler, Franz Josef-Behr

Abstract:

Web Processing Services (WPS) are of growing interest in geoinformation research. However, teaching about them is difficult because of the generally complex circumstances of their use, which limit the possibilities for hands-on exercises on Web Processing Services. To support understanding, a Training Tools Collection was developed at the University of Applied Sciences Stuttgart (HFT). It is limited to the scope of geostatistical interpolation of sample point data, where different algorithms can be used, such as IDW, Nearest Neighbor, etc. The Tools Collection aims to support understanding of the scope, definition and deployment of Web Processing Services. For example, it is necessary to characterize the input of an interpolation by the data set, the parameters for the algorithm and the interpolation results (here a grid of interpolated values is assumed). This paper reports on first experiences using a pilot installation. This was intended to find suitable software interfaces for later full implementations and to draw conclusions on potential user interface characteristics. Experience was gained with the Deegree software, one of several service suites (collections). Being written in Java, Deegree offers several OGC-compliant service implementations that also promise to be of benefit for the project. The mentioned parameters for a WPS were formalized following the paradigm that any meaningful component should be defined in terms of suitable standards; e.g., the data output can be defined as a GML file. However, the choice of meaningful information pieces and user interactions is not free but is partially determined by the selected WPS processing suite.
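
As an illustration of the interpolation task that the Tools Collection addresses, the following is a minimal inverse distance weighting (IDW) sketch that fills a regular grid from scattered sample points. The sample data, power parameter and grid extent are arbitrary assumptions, not values from the training tools.

```python
# Minimal IDW interpolation of scattered sample points onto a regular grid.
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(30, 2))            # sample point coordinates
vals = np.sin(pts[:, 0] / 20) + 0.1 * pts[:, 1]    # sample values

def idw(grid_xy, pts, vals, power=2.0, eps=1e-12):
    """Inverse distance weighting: w_i = 1 / d_i**power, normalized over samples."""
    d = np.linalg.norm(grid_xy[:, None, :] - pts[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w * vals).sum(axis=1) / w.sum(axis=1)

# Regular 50 x 50 output grid, as in a WPS returning a grid of interpolated values
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
grid = idw(grid_xy, pts, vals).reshape(gx.shape)
print(grid.shape, grid.min(), grid.max())
```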

Keywords: deegree, interpolation, IDW, web processing service (WPS)

Procedia PDF Downloads 332
774 Multi-Objective Evolutionary Computation Based Feature Selection Applied to Behaviour Assessment of Children

Authors: F. Jiménez, R. Jódar, M. Martín, G. Sánchez, G. Sciavicco

Abstract:

Attribute or feature selection is one of the basic strategies for improving the performance of data classification tasks and, at the same time, for reducing the complexity of classifiers; it is particularly fundamental when the number of attributes is relatively high. Its application to unsupervised classification is restricted to a limited number of experiments in the literature. Evolutionary computation has already proven itself to be a very effective choice for consistently reducing the number of attributes towards a better classification rate and a simpler semantic interpretation of the inferred classifiers. We present a feature selection wrapper model composed of a multi-objective evolutionary algorithm, the clustering method Expectation-Maximization (EM), and the classifier C4.5, for the unsupervised classification of data extracted from a psychological test named BASC-II (Behavior Assessment System for Children - II ed.), with two objectives: maximizing the likelihood of the clustering model and maximizing the accuracy of the obtained classifier. We present a methodology to integrate feature selection for unsupervised classification, model evaluation, decision making (to choose the most satisfactory model according to an a posteriori process in a multi-objective context), and testing. We compare the performance of the classifiers obtained by the multi-objective evolutionary algorithms ENORA and NSGA-II, and the best solution is then validated by the psychologists who collected the data.
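
A minimal sketch of a wrapper of this kind, assuming a Gaussian-mixture EM clustering (standing in for EM), a decision tree (standing in for C4.5), and a simple random search over feature subsets in place of ENORA/NSGA-II. The two objectives are the clustering log-likelihood and the accuracy of a tree that reproduces the cluster labels; the data are synthetic.

```python
# Wrapper sketch: score feature subsets on two objectives
# (clustering likelihood, classifier accuracy on the induced cluster labels).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, _ = make_classification(n_samples=300, n_features=12, n_informative=5, random_state=0)

def evaluate(mask):
    Xs = X[:, mask]
    gm = GaussianMixture(n_components=3, random_state=0).fit(Xs)
    labels = gm.predict(Xs)
    loglik = gm.score(Xs)                        # objective 1: avg log-likelihood
    Xa, Xb, la, lb = train_test_split(Xs, labels, test_size=0.3, random_state=0)
    acc = DecisionTreeClassifier(random_state=0).fit(Xa, la).score(Xb, lb)  # objective 2
    return loglik, acc, mask

# Random search over subsets as a stand-in for the evolutionary algorithm
candidates = []
for _ in range(40):
    mask = rng.random(X.shape[1]) < 0.5
    if mask.sum() >= 2:
        candidates.append(evaluate(mask))

# Keep the non-dominated (Pareto-optimal) feature subsets
pareto = []
for i, (li, ai, mi) in enumerate(candidates):
    dominated = any(lj >= li and aj >= ai and (lj > li or aj > ai)
                    for j, (lj, aj, mj) in enumerate(candidates) if j != i)
    if not dominated:
        pareto.append(mi)
print(len(pareto), "non-dominated feature subsets")
```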

Keywords: evolutionary computation, feature selection, classification, clustering

Procedia PDF Downloads 342
773 Using Deep Learning Real-Time Object Detection Convolution Neural Networks for Fast Fruit Recognition in the Tree

Authors: K. Bresilla, L. Manfrini, B. Morandi, A. Boini, G. Perulli, L. C. Grappadelli

Abstract:

Image/video processing for fruit detection in the tree using hard-coded feature extraction algorithms has shown high accuracy in recent years. While accurate, these approaches, even with high-end hardware, are computationally intensive and too slow for real-time systems. This paper details the use of deep convolutional neural networks (CNNs), specifically an algorithm (YOLO - You Only Look Once) with 24+2 convolution layers. Using deep-learning techniques eliminated the need to hand-code specific features for specific fruit shapes, colors and/or other attributes. This CNN was trained on more than 5000 images of apple and pear fruits on a 960-core GPU (graphics processing unit). The test set showed an accuracy of 90%. After this, the trained model was transferred to an embedded device (Raspberry Pi gen. 3) with a camera for greater portability. Based on the correlation between the number of fruits visible or detected in one frame and the real number of fruits on one tree, a model was created to accommodate this error rate. The processing and detection speed of the whole platform was higher than 40 frames per second. This speed is fast enough for any grasping/harvesting robotic arm or other real-time applications.
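
The correction step described above, mapping per-frame detections to the true per-tree count, can be sketched with a simple linear fit. The counts below are made-up example numbers, not the paper's data.

```python
# Fit a simple correction model between fruits detected per frame and
# the actual number of fruits on the tree, then apply it to new detections.
import numpy as np

# Hypothetical calibration data: (detected in one frame, counted on the tree)
detected = np.array([38, 52, 41, 67, 59, 73, 45, 80])
actual   = np.array([55, 74, 60, 95, 83, 104, 66, 112])

# Least-squares line actual ~ a * detected + b accounts for occluded fruits
a, b = np.polyfit(detected, actual, deg=1)
print(f"correction: actual ~ {a:.2f} * detected + {b:.1f}")

new_detection = 62
print("estimated fruits on tree:", round(a * new_detection + b))
```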

Keywords: artificial intelligence, computer vision, deep learning, fruit recognition, harvesting robot, precision agriculture

Procedia PDF Downloads 389
772 Segmentation of the Liver and Spleen From Abdominal CT Images Using Watershed Approach

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

Segmentation is an important step in the processing and interpretation of medical images. In this paper, we focus on the segmentation of the liver and spleen from abdominal computed tomography (CT) images. The importance of our study comes from the fact that the segmentation of regions of interest (ROIs) from CT images is usually a difficult task: the grey level of the ROIs is similar to that of other organs, and the ROIs are connected to the ribs, heart, kidneys, etc. Our proposed method is based on anatomical information and mathematical morphology tools used in the image processing field. At first, we remove the surrounding and connected organs and tissues by applying morphological filters. This first step makes the extraction of the regions of interest easier. The second step consists of improving the quality of the image gradient. In this step, we propose a method for improving the image gradient to reduce its deficiencies by applying spatial filters followed by morphological filters. Thereafter, we proceed to the segmentation of the liver and spleen. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with the manual segmentation performed by an expert, and the experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented contours of the liver and spleen and the contours traced manually by radiological experts.
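
A minimal sketch of the morphological-filtering, gradient-improvement and watershed chain on a generic grayscale image, using scikit-image. The structuring element sizes and marker threshold are illustrative assumptions, not the authors' tuned values, and a sample image stands in for a CT slice.

```python
# Morphological smoothing -> gradient -> marker-controlled watershed on a CT-like slice.
import numpy as np
from scipy import ndimage as ndi
from skimage import data, filters, morphology, segmentation

image = data.coins().astype(float) / 255.0          # stand-in for an abdominal CT slice

# 1) Morphological opening/closing to suppress small connected structures
smoothed = morphology.closing(morphology.opening(image, morphology.disk(3)),
                              morphology.disk(3))

# 2) Improve the gradient image (edges of the organs of interest)
gradient = filters.sobel(filters.gaussian(smoothed, sigma=1.0))

# 3) Markers from low-gradient regions, then watershed flooding from the markers
markers, _ = ndi.label(gradient < 0.05)
labels = segmentation.watershed(gradient, markers)
print("number of segmented regions:", labels.max())
```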

Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm

Procedia PDF Downloads 468
771 Using Machine Learning to Classify Human Fetal Health and Analyze Feature Importance

Authors: Yash Bingi, Yiqiao Yin

Abstract:

Reduction of child mortality is an ongoing struggle and a commonly used factor in determining progress in the medical field. The number of under-5 deaths is around 5 million worldwide, with many of the deaths being preventable. In light of this issue, cardiotocograms (CTGs) have emerged as a leading tool to determine fetal health. By using ultrasound pulses and reading the responses, CTGs help healthcare professionals assess the overall health of the fetus and determine the risk of child mortality. However, interpreting the results of CTGs is time-consuming and inefficient, especially in underdeveloped areas where an expert obstetrician is hard to come by. Using a support vector machine (SVM) and oversampling, this paper proposes a model that classifies fetal health with an accuracy of 99.59%. To further explain the CTG measurements, an algorithm based on Randomized Input Sampling for Explanation (RISE) of black-box models was created, called Feature Alteration for explanation of Black Box Models (FAB), and its findings were compared to Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). This allows doctors and medical professionals to classify fetal health with high accuracy and to determine which features were most influential in the process.
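
A minimal sketch of the two ingredients mentioned above: class balancing by simple random oversampling followed by an SVM, and a perturbation-style feature attribution loop in the spirit of the proposed FAB method. The CTG feature matrix here is synthetic, and the attribution rule (accuracy drop when a feature is shuffled) is an illustrative simplification, not the paper's exact algorithm.

```python
# Oversample the minority classes, train an SVM, then rank features by how much
# shuffling each one degrades accuracy (a simple perturbation-based attribution).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=800, n_features=10, n_informative=6,
                           n_classes=3, weights=[0.7, 0.2, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random oversampling: duplicate minority-class rows until classes are balanced
counts = np.bincount(y_tr)
idx = np.concatenate([rng.choice(np.where(y_tr == c)[0], counts.max(), replace=True)
                      for c in range(len(counts))])
X_bal, y_bal = X_tr[idx], y_tr[idx]

clf = SVC(kernel="rbf", C=10).fit(X_bal, y_bal)
base_acc = clf.score(X_te, y_te)

# Feature-alteration importance: accuracy drop when one feature is permuted
importance = []
for j in range(X.shape[1]):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importance.append(base_acc - clf.score(X_perm, y_te))
print("baseline accuracy:", round(base_acc, 3))
print("most influential feature index:", int(np.argmax(importance)))
```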

Keywords: machine learning, fetal health, gradient boosting, support vector machine, Shapley values, local interpretable model agnostic explanations

Procedia PDF Downloads 121
770 Exploring the Correlation between Body Constitution of an Individual as Per Ayurveda and Gut Microbiome in Healthy, Multi Ethnic Urban Population in Bangalore, India

Authors: Shalini TV, Gangadharan GG, Sriranjini S Jaideep, ASN Seshasayee, Awadhesh Pandit

Abstract:

Introduction: Prakriti (the body-mind constitution of an individual) is a conventional, customized and unique concept, an understanding of which is essential for the personalized medicine described in Ayurveda, the Indian System of Medicine. Based on the Doshas (the functional, bio-humoral units in the body), individuals are categorized into three major Prakriti: Vata, Pitta, and Kapha. The human gut microbiome hosts plenty of highly diverse and metabolically active microorganisms, mainly dominated by bacteria, which are known to influence the physiology of an individual. A few studies have shown a correlation between Prakriti and biochemical parameters. In this study, an attempt was made to explore any correlation between the Prakriti (the phenotype of an individual) and the genetic makeup of the gut microbiome in healthy individuals. Materials and methods: 270 multi-ethnic, healthy volunteers of both sexes, aged between 18 and 40 years, with no history of antibiotic use in the last 6 months, were recruited into the three groups of Vata, Pitta, and Kapha. The Prakriti of each individual was determined using Ayusoft, a software package designed by CDAC, Pune, India. The volunteers underwent initial screening for the assessment of height, weight, body mass index, vital signs and blood investigations to ensure that they were healthy. Stool and saliva samples of the recruited volunteers were collected as per the standard operating procedure developed, and bacterial DNA was isolated using Qiagen kits. The extracted DNA was subjected to 16S rRNA sequencing using Illumina kits. The sequencing libraries target the variable V3 and V4 regions of the 16S rRNA gene. Paired-end sequencing was done on the MiSeq system, and the data were analyzed using the CLC Genomics Workbench 11. Results: The 16S rRNA sequencing of the V3 and V4 regions showed a diverse pattern in both the oral and stool microbial DNA. The study did not reveal any specific pattern of bacterial flora amongst the Prakriti types. All the p-values were greater than the effective alpha values for all OTUs in both the buccal cavity and stool samples. Therefore, no significant enrichment of an OTU was observed in the samples from either the buccal cavity or the stool. Conclusion: In healthy volunteers of multiple ethnicities, possibly due to the influence of various factors, no correlation between the Prakriti and the gut microbiome was observed.

Keywords: gut microbiome, ayurveda Prakriti, sequencing, multi-ethnic urban population

Procedia PDF Downloads 111
769 Innovative Preparation Techniques: Boosting Oral Bioavailability of Phenylbutyric Acid Through Choline Salt-Based API-Ionic Liquids and Therapeutic Deep Eutectic Systems

Authors: Lin Po-Hsi, Sheu Ming-Thau

Abstract:

Urea cycle disorders (UCD) are rare genetic metabolic disorders that compromise the body's urea cycle. Sodium phenylbutyrate (SPB) is a medication commonly administered in tablet or powder form to lower ammonia levels. Nonetheless, its high sodium content poses risks to sodium-sensitive UCD patients. This necessitates the creation of an alternative drug formulation to mitigate sodium load and optimize drug delivery for UCD patients. This study focused on crafting a novel oral drug formulation for UCD, leveraging choline bicarbonate and phenylbutyric acid. The active pharmaceutical ingredient-ionic liquids (API-ILs) and therapeutic deep eutectic systems (THEDES) were formed by combining these with choline chloride. These systems display characteristics like maintaining a liquid state at room temperature and exhibiting enhanced solubility. This in turn amplifies drug dissolution rate, permeability, and ultimately oral bioavailability. Incorporating choline-based phenylbutyric acid as a substitute for traditional SPB can effectively curtail the sodium load in UCD patients. Our in vitro dissolution experiments revealed that the ILs and DESs, synthesized using choline bicarbonate and choline chloride with phenylbutyric acid, surpassed commercial tablets in dissolution speed. Pharmacokinetic evaluations in SD rats indicated a notable uptick in the oral bioavailability of phenylbutyric acid, underscoring the efficacy of choline salt ILs in augmenting its bioavailability. Additional in vitro intestinal permeability tests on SD rats authenticated that the ILs, formulated with choline bicarbonate and phenylbutyric acid, demonstrate superior permeability compared to their sodium and acid counterparts. To conclude, choline salt ILs developed from choline bicarbonate and phenylbutyric acid present a promising avenue for UCD treatment, with the added benefit of reduced sodium load. They also hold merit in formulation engineering. The sustained-release capabilities of DESs position them favorably for drug delivery, while the low toxicity and cost-effectiveness of choline chloride signal potential in formulation engineering. Overall, this drug formulation heralds a prospective therapeutic avenue for UCD patients.

Keywords: phenylbutyric acid, sodium phenylbutyrate, choline salt, ionic liquids, deep eutectic systems, oral bioavailability

Procedia PDF Downloads 79
768 Effects of a Dwarfing Gene sd1-d (Dee-Geo-Woo-Gen Dwarf) on Yield and Related Traits in Rice: Preliminary Report

Authors: M. Bhattarai, B. B. Rana, M. Kamimukai, I. Takamure, T. Kawano, M. Murai

Abstract:

The sd1-d allele at the sd1 locus on chromosome 1, originating from the Taiwanese variety Dee-geo-woo-gen, has been playing an important role in developing short-culm and lodging-resistant indica varieties such as IR36 in rice. The dominant allele SD1 for long culm at this locus is differentiated into SD1-in and SD1-ja, which are harbored in the indica and japonica subspecies, respectively. The sd1-d of the indica variety IR36 was substituted with SD1-in or SD1-ja by 17 recurrent backcrosses with IR36, and two isogenic tall lines carrying the respective dominant alleles were developed by using the indica variety IR5867 and the japonica variety 'Koshihikari' as donors; these were denoted '5867-36' and 'Koshi-36', respectively. The present study was conducted to examine the effect of sd1-d on yield and related traits as compared with SD1-in and SD1-ja, by using the two isogenic tall lines. Seedlings of IR36 and the two isogenic lines were transplanted to an experimental field of Kochi University, at a planting distance of 30 cm × 15 cm with two seedlings per hill, on May 3, 2017. Chemical fertilizers were supplied by basal application and top-dressing at a rate of 8.00, 6.57 and 7.52 g/m², respectively, for N, P₂O₅ and K₂O in total. Yield, yield components, and other traits were measured. Culm length (cm) was in the order of 5867-36 (101.9) > Koshi-36 (80.1) > IR36 (60.0), where '>' indicates a statistically significant difference at the 5% level. Accordingly, sd1-d reduced culm length by 41.9 and 20.1 cm, compared with SD1-in and SD1-ja, respectively, and the culm-elongating effect was higher in the former allele than in the latter one. Total brown rice yield (g/m²), including unripened grains, was in the order of IR36 (611) ≧ 5867-36 (586) ≧ Koshi-36 (572), indicating non-significant differences among them. Yield-1.5mm sieve (g/m²) was in the order of IR36 (596) ≧ 5867-36 (575) ≧ Koshi-36 (558). Spikelet number per panicle was in the order of 5867-36 (89.2) ≧ IR36 (84.7) ≧ Koshi-36 (79.8), and 5867-36 > Koshi-36. Panicle number per m² was in the order of IR36 (428) ≧ Koshi-36 (403) ≧ 5867-36 (353), and IR36 > 5867-36, suggesting that sd1-d increased the number of panicles compared with SD1-in. Ripened-grain percentage-1.5mm sieve was in the order of Koshi-36 (86.0) ≧ 5867-36 (85.0) ≧ IR36 (82.7), and Koshi-36 > IR36. Thousand brown-rice-grain weight-1.5mm sieve (g) was in the order of 5867-36 (21.5) > Koshi-36 (20.2) ≧ IR36 (19.9). Total dry weight at maturity (g/m²) was in the order of 5867-36 (1404) ≧ IR36 (1310) ≧ Koshi-36 (1290). Harvest index of total brown rice (%) was in the order of IR36 (39.6) > Koshi-36 (37.7) > 5867-36 (35.5). Hence, sd1-d did not exert a significant effect on yield in the indica genetic background. However, lodging was observed from the late stage of maturity in 5867-36 and Koshi-36, particularly in the former, which was principally due to their long culms. Consequently, sd1-d enables higher yield with higher fertilizer application, by enhancing lodging resistance, particularly in the indica subspecies.

Keywords: rice, dwarfing gene, sd1-d, SD1-in, SD1-ja, yield

Procedia PDF Downloads 145
767 Multimodal Content: Fostering Students’ Language and Communication Competences

Authors: Victoria L. Malakhova

Abstract:

The research is devoted to multimodal content and its effectiveness in developing students' linguistic and intercultural communicative competences as an indefeasible constituent of their future professional activity. The description of multimodal content as both a linguistic and a didactic phenomenon makes the study relevant. The objective of the article is the analysis of creolized texts and the effect they have on fostering higher education students' skills and productivity. The main methods used are linguistic text analysis, qualitative and quantitative methods, deduction, and generalization. The author studies texts with full and partial creolization, their features and their role in composing multimodal textual space. The main verbal and non-verbal markers and paralinguistic means that enhance the linguo-pragmatic potential of creolized texts are covered. To reveal the efficiency of applying multimodal content in English teaching, the author conducts an experiment among both undergraduate students and teachers. This allows the author to specify the main functions of creolized texts in the process of language learning, to detect ways of enhancing students' competences, and to increase their motivation. The described stages of using creolized texts can serve as an algorithm for working with multimodal content in teaching English as a foreign language. The findings contribute to improving the efficiency of the academic process.

Keywords: creolized text, English language learning, higher education, language and communication competences, multimodal content

Procedia PDF Downloads 93
766 Exploring Public Opinions Toward the Use of Generative Artificial Intelligence Chatbot in Higher Education: An Insight from Topic Modelling and Sentiment Analysis

Authors: Samer Muthana Sarsam, Abdul Samad Shibghatullah, Chit Su Mon, Abd Aziz Alias, Hosam Al-Samarraie

Abstract:

Generative Artificial Intelligence chatbots (GAI chatbots) have emerged as promising tools in various domains, including higher education. However, their specific role within the educational context and the level of legal support for their implementation remain unclear. Therefore, this study aims to investigate the role of Bard, a newly developed GAI chatbot, in higher education. To achieve this objective, English tweets were collected from Twitter's free streaming Application Programming Interface (API). The Latent Dirichlet Allocation (LDA) algorithm was applied to extract latent topics from the collected tweets. User sentiments, including disgust, surprise, sadness, anger, fear, joy, anticipation, and trust, as well as positive and negative sentiments, were extracted using the NRC Affect Intensity Lexicon and SentiStrength tools. This study explored the benefits, challenges, and future implications of integrating GAI chatbots in higher education. The findings shed light on the potential power of such tools, exemplified by Bard, in enhancing the learning process and providing support to students throughout their educational journey.
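
A minimal sketch of the topic-modelling step with scikit-learn's LDA, plus a toy lexicon-based sentiment count. The example tweets and the tiny sentiment lexicon are placeholders: the study uses Twitter data with the NRC Affect Intensity Lexicon and SentiStrength, which are not reproduced here.

```python
# Latent Dirichlet Allocation over a small set of example tweets,
# plus a toy lexicon-based sentiment count as a placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "Bard helped me outline my essay for class",
    "worried that chatbots make students stop thinking",
    "our university is testing an AI tutor for feedback",
    "AI chatbot gave a wrong citation again",
    "excited to use generative AI for exam revision",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")

# Placeholder sentiment lexicon (illustrative only, not the NRC lexicon)
lexicon = {"worried": "fear", "wrong": "anger", "excited": "joy", "helped": "trust"}
for t in tweets:
    hits = [lexicon[w] for w in t.lower().split() if w in lexicon]
    print(t[:40], "->", hits or ["neutral"])
```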

Keywords: generative artificial intelligence chatbots, bard, higher education, topic modelling, sentiment analysis

Procedia PDF Downloads 53
765 Interval Bilevel Linear Fractional Programming

Authors: F. Hamidi, N. Amiri, H. Mishmast Nehi

Abstract:

The bilevel programming (BP) model has been presented for a decision-making process that consists of two decision makers in a hierarchical structure. In fact, BP is a model for a two-person game (the leader player at the upper level and the follower player at the lower level) wherein each player tries to optimize his/her personal objective function under dependent constraints; this game is sequential and non-cooperative. The decision-making variables are divided between the two players, and each one's choice affects the other's benefit and choices. In other words, BP consists of two nested optimization problems with two objective functions (upper and lower), where the constraint region of the upper level problem is implicitly determined by the lower level problem. In real cases, the coefficients of an optimization problem may not be precise, i.e., they may be intervals. In this paper we develop an algorithm for solving interval bilevel linear fractional programming problems, that is to say, bilevel problems in which both objective functions are linear fractional, the coefficients are intervals and the common constraint region is a polyhedron. From the original problem, the best and the worst bilevel linear fractional problems are derived, and then, using the extended Charnes and Cooper transformation, each fractional problem can be reduced to a linear problem. We can then find the best and the worst optimal values of the leader objective function by two algorithms.
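
To illustrate the reduction step, here is a minimal sketch of the Charnes-Cooper transformation applied to a single-level linear fractional program max (c'x + alpha)/(d'x + beta) subject to Ax <= b, x >= 0, solved as a linear program with SciPy. The numerical data are arbitrary, and the bilevel and interval structure of the paper's problem is not reproduced.

```python
# Charnes-Cooper: with y = t*x and t = 1/(d'x + beta), the fractional program
# becomes the LP  max c'y + alpha*t  s.t.  A y - b t <= 0,  d'y + beta*t = 1,  y,t >= 0.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 1.0]); alpha = 1.0         # numerator   c'x + alpha
d = np.array([1.0, 2.0]); beta = 4.0          # denominator d'x + beta (assumed > 0)
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([6.0, 8.0])

n = len(c)
obj = -np.concatenate([c, [alpha]])           # linprog minimizes, so negate
A_ub = np.hstack([A, -b.reshape(-1, 1)])      # A y - b t <= 0
b_ub = np.zeros(len(b))
A_eq = np.concatenate([d, [beta]]).reshape(1, -1)   # d'y + beta*t = 1
b_eq = np.array([1.0])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1))
y, t = res.x[:n], res.x[n]
x = y / t                                      # recover the fractional-program solution
print("optimal x:", x, " objective:", (c @ x + alpha) / (d @ x + beta))
```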

Keywords: best and worst optimal solutions, bilevel programming, fractional, interval coefficients

Procedia PDF Downloads 420
764 Deep Reinforcement Learning Approach for Optimal Control of Industrial Smart Grids

Authors: Niklas Panten, Eberhard Abele

Abstract:

This paper presents a novel approach for real-time, near-optimal control of industrial smart grids by deep reinforcement learning (DRL). To achieve highly energy-efficient factory systems, the energetic linkage of machines, technical building equipment and the building itself is desirable. However, the increased complexity of the interacting sub-systems, multiple time-variant target values and stochastic influences from the production environment, weather and energy markets make it difficult to efficiently control energy production, storage and consumption in hybrid industrial smart grids. The studied deep reinforcement learning approach makes it possible to explore the solution space for proper control policies that minimize a cost function. The deep neural network of the DRL agent is based on a multilayer perceptron (MLP), Long Short-Term Memory (LSTM) and convolutional layers. The agent is trained within multiple Modelica-based factory simulation environments using the Advantage Actor Critic algorithm (A2C). The DRL controller is evaluated by means of the simulation and then compared to a conventional, rule-based approach. Finally, the results indicate that the DRL approach is able to improve the control performance and to significantly reduce the energy and operating costs of industrial smart grids.
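
A minimal advantage actor-critic (A2C) update sketch in PyTorch for a generic discrete-action environment interface. The network sizes, the single-step advantage estimate, and the toy state and action dimensions are assumptions made for illustration; the paper's MLP/LSTM/convolutional agent and its Modelica factory environments are not reproduced.

```python
# One A2C-style update: critic estimates V(s), advantage = r + gamma*V(s') - V(s),
# actor is updated with a policy gradient weighted by the advantage.
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 8, 4, 0.99      # illustrative sizes

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
critic = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=3e-4)

def update(state, action, reward, next_state, done):
    logits = actor(state)
    dist = torch.distributions.Categorical(logits=logits)
    v = critic(state).squeeze(-1)
    with torch.no_grad():
        v_next = critic(next_state).squeeze(-1)
        target = reward + gamma * v_next * (1.0 - done)
    advantage = target - v
    actor_loss = -(dist.log_prob(action) * advantage.detach()).mean()
    critic_loss = advantage.pow(2).mean()
    entropy_bonus = dist.entropy().mean()
    loss = actor_loss + 0.5 * critic_loss - 0.01 * entropy_bonus
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)

# Example call with made-up transition data (batch of one)
s, s2 = torch.randn(1, state_dim), torch.randn(1, state_dim)
print(update(s, torch.tensor([1]), torch.tensor([0.5]), s2, torch.tensor([0.0])))
```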

Keywords: industrial smart grids, energy efficiency, deep reinforcement learning, optimal control

Procedia PDF Downloads 169
763 A Rare Case Report of Non-Langerhans Cell Cutaneous Histiocytosis in a 6-Month Old Infant

Authors: Apoorva D. R.

Abstract:

INTRODUCTION: Hemophagocytic lymphohistiocytosis (HLH) is a severe, potentially fatal syndrome in which there is excessive immune activation. The disease is seen in children and people of all ages, but infants from birth to 18 months are most frequently affected. HLH is a sporadic or familial condition that can be triggered by various events that disturb immunological homeostasis. In cases with a genetic predisposition and in sporadic occurrences, infection is a frequent trigger. Because of the rarity of this disease, the diverse clinical presentation, and the lack of specificity in the clinical and laboratory results, prompt treatment is essential, but the biggest obstacle to a favorable outcome is frequently a delay in identification. CASE REPORT: Here we report the case of a 6-month-old male infant who presented to the dermatology outpatient department with disseminated skin lesions over the face, abdomen, scalp, and bilateral upper and lower limbs for the past month. The lesions were insidious in onset, initially started over the abdomen, and gradually progressed to involve other body parts. The patient also had a history of fever, moderate in grade and on and off in nature, for 1 month. There was no significant past, family, or drug history. There was no history of feeding difficulties in the baby, and the parents gave a history of developmental milestones appropriate for age. Examination findings included multiple well-defined monomorphic erythematous papules with a central crater over the bilateral cheeks, and a few lichenoid shiny papules over the bilateral arms, legs, and abdomen. Ultrasound of the abdomen and pelvis showed mild hepatosplenomegaly, intraabdominal lymphadenopathy, and bilateral inguinal lymphadenopathy. Routine blood investigations showed anemia and lymphopenia. X-rays of the skull, chest, and bilateral upper and lower limbs were normal. Histopathology features were suggestive of non-Langerhans cell cutaneous histiocytosis. CONCLUSION: HLH is a fatal and rare disease. A high level of suspicion and an interdisciplinary approach among experienced clinicians, pathologists, and microbiologists to define the diagnosis and the causative disease are key to diagnosing such cases. Early detection and treatment can reduce patient morbidity and mortality.

Keywords: histiocytosis, non langerhans cell, case report, fatal, rare

Procedia PDF Downloads 69
762 Intelligent Recognition of Diabetes Disease via FCM Based Attribute Weighting

Authors: Kemal Polat

Abstract:

In this paper, an attribute weighting method called fuzzy C-means clustering based attribute weighting (FCMAW) has been used for the classification of a diabetes disease dataset. The aims of this study are to reduce the variance within the attributes of the diabetes dataset and to improve the classification accuracy of classifier algorithms by transforming non-linearly separable datasets into linearly separable ones. The Pima Indians Diabetes dataset has two classes, comprising normal subjects (500 instances) and diabetes subjects (268 instances). Fuzzy C-means clustering is an improved version of the K-means clustering method and is one of the most widely used clustering methods in data mining and machine learning applications. In this study, as the first stage, fuzzy C-means clustering was used to find the centers of the attributes in the Pima Indians diabetes dataset, and the dataset was then weighted according to the ratios of the means of the attributes to their centers. Secondly, after the weighting process, classifier algorithms including the support vector machine (SVM) and the k-NN (k-nearest neighbor) classifier were used to classify the weighted Pima Indians diabetes dataset. Experimental results show that the proposed attribute weighting method (FCMAW) obtained very promising results in the classification of the Pima Indians diabetes dataset.
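
A compact sketch of the idea behind FCM-based attribute weighting: a small hand-rolled fuzzy C-means is run on each attribute, the attribute is scaled by the ratio of its mean to the mean of its cluster centers, and the weighted data are classified with an SVM. The exact weighting rule in the paper may differ, and the data here are synthetic rather than the Pima Indians dataset.

```python
# Fuzzy C-means per attribute, then weight each attribute by the ratio of its mean
# to the mean of its cluster centers; classify the weighted data with an SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fcm_centers(x, c=2, m=2.0, iters=50, seed=0):
    """1-D fuzzy C-means; returns the c cluster centers of attribute vector x."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c)); u /= u.sum(1, keepdims=True)
    for _ in range(iters):
        centers = (u**m).T @ x / (u**m).sum(0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d**(2/(m-1)) * (1.0 / d**(2/(m-1))).sum(1, keepdims=True))
    return centers

X, y = make_classification(n_samples=400, n_features=8, n_informative=5, random_state=0)
X = X - X.min(axis=0) + 1.0          # shift to positive values, as clinical attributes would be

weights = np.array([X[:, j].mean() / fcm_centers(X[:, j]).mean() for j in range(X.shape[1])])
X_weighted = X * weights

for name, data in [("raw", X), ("FCM-weighted", X_weighted)]:
    acc = cross_val_score(SVC(), data, y, cv=5).mean()
    print(f"{name:14s} SVM accuracy: {acc:.3f}")
```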

Keywords: fuzzy C-means clustering, fuzzy C-means clustering based attribute weighting, Pima Indians diabetes, SVM

Procedia PDF Downloads 387
761 Efficient Estimation of Maximum Theoretical Productivity from Batch Cultures via Dynamic Optimization of Flux Balance Models

Authors: Peter C. St. John, Michael F. Crowley, Yannick J. Bomble

Abstract:

Production of chemicals from engineered organisms in a batch culture typically involves a trade-off between productivity, yield, and titer. However, strategies for strain design typically involve designing mutations to achieve the highest yield possible while maintaining growth viability. Such approaches tend to follow the principle of designing static networks with minimum metabolic functionality to achieve desired yields. While these methods are computationally tractable, optimum productivity is likely achieved by a dynamic strategy, in which intracellular fluxes change their distribution over time. One can use multi-stage fermentations to increase either productivity or yield. Such strategies range from simple manipulations (an aerobic growth phase followed by an anaerobic production phase) to more complex genetic toggle switches. Additionally, computational methods can be developed to aid in optimizing two-stage fermentation systems. One can assume an initial control strategy (i.e., a single reaction target) in maximizing productivity, but it is unclear how close this productivity would come to a global optimum. The calculation of maximum theoretical yield in metabolic engineering can help guide strain and pathway selection for static strain design efforts. Here, we present a method for the calculation of the maximum theoretical productivity of a batch culture system. This method follows the traditional assumptions of dynamic flux balance analysis: internal metabolite fluxes are governed by a pseudo-steady state, and external metabolite fluxes are represented by a dynamic system including Michaelis-Menten or Hill-type regulation. The productivity optimization is achieved via dynamic programming and accounts explicitly for an arbitrary number of fermentation stages and flux variable changes. We have applied our method to succinate production in two common microbial hosts: E. coli and A. succinogenes. The method can be further extended to calculate the complete productivity versus yield Pareto surface. Our results demonstrate that nearly optimal yields and productivities can indeed be achieved with only two discrete flux stages.
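
A toy two-stage dynamic simulation that captures the trade-off described above: an uptake rate governed by Michaelis-Menten kinetics is split between growth and product formation, the split changes at a stage-switch time, and productivity, yield, and titer are reported. The stoichiometry, kinetic constants and switch times are all made-up illustrative values, not results from the paper's dynamic flux balance models.

```python
# Toy two-stage batch: stage 1 favors growth, stage 2 diverts flux to product.
import numpy as np

v_max, Km = 10.0, 0.5          # Michaelis-Menten uptake parameters (illustrative)
Y_xs, Y_ps = 0.1, 0.9          # biomass / product yields on substrate (illustrative)
dt, t_end = 0.01, 24.0

def simulate(t_switch):
    X, S, P = 0.05, 50.0, 0.0          # biomass, substrate, product (g/L)
    for t in np.arange(0.0, t_end, dt):
        v = v_max * S / (Km + S)       # specific uptake rate (g substrate / g biomass / h)
        growth_frac = 0.9 if t < t_switch else 0.1   # flux split changes at the switch
        dX = Y_xs * growth_frac * v * X
        dP = Y_ps * (1.0 - growth_frac) * v * X
        dS = -v * X
        X, S, P = X + dX*dt, max(S + dS*dt, 0.0), P + dP*dt
        if S <= 0.0:
            break
    return P / (t + dt), P / 50.0, P   # productivity (g/L/h), yield (g/g), titer (g/L)

for ts in (4.0, 8.0, 12.0):
    prod, yld, titer = simulate(ts)
    print(f"switch at {ts:4.1f} h -> productivity {prod:.2f}, yield {yld:.2f}, titer {titer:.1f}")
```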

Keywords: A. succinogenes, E. coli, metabolic engineering, metabolite fluxes, multi-stage fermentations, succinate

Procedia PDF Downloads 190
760 Establishing Community-Based Pro-Biodiversity Enterprise in the Philippines: A Climate Change Adaptation Strategy towards Agro-Biodiversity Conservation and Local Green Economic Development

Authors: Dina Magnaye

Abstract:

In the Philippines, the performance of the agricultural sector is gauged through crop productivity and returns from farm production rather than the biodiversity in the agricultural ecosystem. Agricultural development hinges on the overall goal of increasing productivity through intensive agriculture, a monoculture system, the utilization of high-yielding plant varieties, and genetic upgrading in animals. This merits an analysis of the role of agro-biodiversity in terms of increasing productivity, food security and economic returns from community-based pro-biodiversity enterprises, that is, enterprises that conserve biodiversity while equitably sharing the production income from the utilization of biological resources. The study aims to determine how community-based pro-biodiversity enterprises become instrumental in local climate change adaptation and agro-biodiversity conservation, as input to local green economic development planning. The perceptions of local community members in both urban and upland rural areas regarding community-based pro-biodiversity enterprises were evaluated. These served as a basis for developing a planning modality that can be mainstreamed in the management of local green economic enterprises to benefit the environment, provide local income opportunities, conserve species diversity, and sustain environment-friendly farming systems and practices. The interviews conducted with organic farmer-owners, entrepreneur-organic farmers, and organic farm workers revealed that pro-biodiversity enterprises such as organic farming involve the cyclic use of natural resources within the carrying capacity of a farm; recognition of the value of tradition and culture, especially in the upland rural area; enhancement of socio-economic capacity; conservation of ecosystems in harmony with nature; and climate change mitigation. The suggested planning modality for community-based pro-biodiversity enterprises for a green economy encompasses four phases: community resource or capital asset profiling; stakeholder vision development; strategy formulation for sustained enterprises; and monitoring and evaluation.

Keywords: agro-biodiversity, agro-biodiversity conservation, local green economy, organic farming, pro-biodiversity enterprise

Procedia PDF Downloads 336
759 Real-Time Multi-Vehicle Tracking Application at Intersections Based on Feature Selection in Combination with Color Attribution

Authors: Qiang Zhang, Xiaojian Hu

Abstract:

In multi-vehicle tracking based on feature selection in combination with color attribution, the tracking system efficiently tracks vehicles in a video with minimal error. The focus is on presenting a simple and fast, yet accurate and robust, solution to problems such as the inaccurate and untimely responses of statistics-based adaptive traffic control systems in the intersection scenario. In this study, a real-time tracking system is proposed for multi-vehicle tracking in the intersection scene. Considering the complexity and application feasibility of the algorithm, in the object detection step the detection results provided by virtual loops were post-processed and then used as the input for the tracker. For the tracker, lightweight methods were designed to extract and select features and to incorporate them into the adaptive color tracking (ACT) framework, and the proposed online feature selection algorithms are integrated into the mature ACT system with good compatibility. The proposed feature selection methods and multi-vehicle tracking method are evaluated on the KITTI datasets and show efficient vehicle tracking performance when compared to other state-of-the-art approaches in the same category. The system also performs excellently on video sequences recorded at the intersection. Furthermore, the presented vehicle tracking system is suitable for surveillance applications.
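
One common way to select discriminative color features online, close in spirit to the lightweight feature selection described above, is to score candidate color channels by the variance ratio of the object-versus-background log-likelihood. The sketch below applies that score to synthetic pixel samples; it is an illustrative stand-in, not the authors' algorithm, and the candidate features and sample distributions are assumptions.

```python
# Rank candidate color features by how well they separate object pixels from
# background pixels, using the variance ratio of the log-likelihood ratio.
import numpy as np

rng = np.random.default_rng(0)

def variance_ratio(obj_pixels, bg_pixels, bins=32):
    p, _ = np.histogram(obj_pixels, bins=bins, range=(0, 1), density=True)
    q, _ = np.histogram(bg_pixels, bins=bins, range=(0, 1), density=True)
    p, q = p + 1e-6, q + 1e-6
    L = np.log(p / q)                                  # log-likelihood ratio per bin
    def var(weights):
        w = weights / weights.sum()
        return (w * L**2).sum() - ((w * L).sum())**2
    # high between-class spread, low within-class spread -> discriminative feature
    return var((p + q) / 2) / (var(p) + var(q) + 1e-12)

# Synthetic object/background samples for three candidate color features
features = {
    "R-G":        (rng.normal(0.7, 0.05, 500), rng.normal(0.4, 0.05, 2000)),
    "hue":        (rng.normal(0.5, 0.15, 500), rng.normal(0.5, 0.15, 2000)),
    "R/(R+G+B)":  (rng.normal(0.6, 0.08, 500), rng.normal(0.45, 0.08, 2000)),
}
scores = {k: variance_ratio(np.clip(o, 0, 1), np.clip(b, 0, 1))
          for k, (o, b) in features.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```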

Keywords: real-time, multi-vehicle tracking, feature selection, color attribution

Procedia PDF Downloads 133
758 Neuron Efficiency in Fluid Dynamics and Prediction of Groundwater Reservoirs' Properties Using Pattern Recognition

Authors: J. K. Adedeji, S. T. Ijatuyi

Abstract:

A neural network using pattern recognition has been applied in this research to study fluid dynamics and predict the properties of groundwater reservoirs. Conventional manual geophysical survey methods have proven inadequate in basement environments; hence, intelligent computation such as neural network prediction is needed. A non-linear neural network with an 8-bit XOR (exclusive OR) output configuration has been used in this research to predict the nature of groundwater reservoirs and the fluid dynamics of a typical basement crystalline rock. The control variables are the apparent resistivity of the weathered layer (p1), the fractured layer (p2), and the depth (h), while the dependent variable is the flow parameter (F=λ). The algorithm used in training the neural network is back-propagation, coded in C++ with 300 epoch runs. The neural network was able to map out the flow channels and detect how they behave to form viable storage within the strata. The neural network model showed that an important variable, gr (gravitational resistance), can be deduced from the elevation and the apparent resistivity pa. The model results from SPSS showed that the coefficients a, b and c are statistically significant, with reduced standard error, at the 5% level.
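
The original network was coded in C++; as a language-neutral illustration of the same back-propagation idea, here is a minimal NumPy sketch that trains a small sigmoid network on the XOR truth table for 300 epochs. This is an illustrative toy, not the 8-bit configuration or the resistivity data used in the study.

```python
# Minimal back-propagation on the XOR problem: one hidden layer, sigmoid units.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 5.0

for epoch in range(300):                      # 300 epochs, as in the study
    h = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - y                             # gradient of the squared error
    d_out = err * out * (1 - out)             # back-propagate through the sigmoids
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(np.round(out, 2).ravel())               # outputs move toward [0, 1, 1, 0]
```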

Keywords: gravitational resistance, neural network, non-linear, pattern recognition

Procedia PDF Downloads 190
757 Data Mining Approach: Classification Model Evaluation

Authors: Lubabatu Sada Sodangi

Abstract:

The rapid growth in the exchange and accessibility of information via the internet has led many organisations to acquire data on their own operations. The aim of data mining is to analyse the different behaviours in a dataset using observation, although the subset of the dataset being analysed may not display all the behaviours and relationships of the entire data and, therefore, may not represent other parts that exist in the dataset. There is a range of techniques used in data mining to determine the hidden or unknown information in datasets. In this paper, the performance of two algorithms, Chi-Square Automatic Interaction Detection (CHAID) and the multilayer perceptron (MLP), is compared using the Adult dataset to find out the percentage of adults that earn > 50k and those that earn <= 50k per year. The two algorithms were studied and compared using the IBM SPSS Statistics software. The result for CHAID shows that the most important predictors are relationship and education; the algorithm shows that those who are married (husband) and hold a Bachelor's, Master's, Doctorate or Prof-school qualification, and whose age is between 41 and 57, earn > 50k. Multilayer perceptron displays marital status and capital gain as the most important predictors of income. It also shows that individuals whose capital gain is less than 6,849 and who are single, separated or widowed earn <= 50k, whereas individuals whose capital gain is > 6,849, who work > 35 hrs/wk and who are older than 27 years earn > 50k. By comparing the two algorithms, it is observed that both algorithms are reliable, but there is stronger reliability in CHAID, which clearly shows that relationship and education contribute to the prediction, as displayed in the data visualisation.
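
A minimal sketch of a comparable comparison in Python, assuming the OpenML copy of the Adult census dataset (network access required) and substituting scikit-learn's decision tree for CHAID, which SPSS provides but scikit-learn does not. The preprocessing is deliberately simple and not the study's procedure.

```python
# Compare a decision tree (stand-in for CHAID) and an MLP on the Adult income data.
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

adult = fetch_openml("adult", version=2, as_frame=True)   # assumes network access
X = pd.get_dummies(adult.data, drop_first=True)           # one-hot encode categoricals
y = (adult.target == ">50K").astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)
print("tree accuracy:", round(tree.score(X_te, y_te), 3))

scaler = StandardScaler().fit(X_tr)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
mlp.fit(scaler.transform(X_tr), y_tr)
print("MLP accuracy:", round(mlp.score(scaler.transform(X_te), y_te), 3))
```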

Keywords: data mining, CHAID, multi-layer perceptron, SPSS, Adult dataset

Procedia PDF Downloads 359
756 Effectiveness Assessment of a Brazilian Larvicide on Aedes Control

Authors: Josiane N. Muller, Allan K. R. Galardo, Tatiane A. Barbosa, Evan P. Ferro, Wellington M. Dos Santos, Ana Paula S. A. Correa, Edinaldo C. Rego, Jose B. P. Lima

Abstract:

The susceptibility status of an insect population to any larvicide depends on several factors, including genetic constitution, environmental conditions and others. The mosquito Aedes aegypti is the primary vector of three important viral diseases: Zika, dengue, and chikungunya. The frequent outbreaks of these diseases in different parts of Brazil demonstrate the importance of testing the susceptibility of vectors in different environments. Since control of this mosquito leads to control of the diseases, alternatives for vector control that take into account the different Brazilian environmental conditions are needed for effective action. The aim of this study was to evaluate a new commercial formulation of Bacillus thuringiensis israelensis (DengueTech: Brazilian innovative technology) in the Brazilian Legal Amazon, considering the climate conditions. Semi-field tests were conducted at the Institute of Scientific and Technological Research of the State of Amapa in two different environments, one in a shaded area and the other exposed to sunlight. The mosquito larvae were exposed to the larvicide concentration and a control; each group was tested in three containers of 40 liters each. To assess persistence, 50 third-instar larvae of an Aedes aegypti laboratory lineage (Rockefeller) and 50 larvae of Aedes aegypti collected in the municipality of Macapa, in Brazil's Amapa state, were added weekly, and after 24 hours the mortality was assessed. In total, 16 tests were performed, of which 12 were done with replacement of water (1/5 of the volume, three times per week). The effectiveness of the product was determined through mortality of ≥ 80%, as recommended by the World Health Organization. The results demonstrated that high water temperatures (26-35 °C) in the containers influenced the residual time of the product: the maximum effect achieved was 21 days in the shaded area, and the 60-day effectiveness expected according to the larvicide company was not found in any of the tests. The tests with and without water replacement did not present significant differences in the mortality rate. Considering the different environments and climates, these results highlight the need to test larvicides and their effectiveness in specific environmental settings in order to identify the parameters required for better results. Thus, we see the importance of semi-field research that considers local climate conditions for the successful control of Aedes aegypti.

Keywords: Aedes aegypti, bioassay, larvicida, vector control

Procedia PDF Downloads 104
755 Unstructured-Data Content Search Based on Optimized EEG Signal Processing and Multi-Objective Feature Extraction

Authors: Qais M. Yousef, Yasmeen A. Alshaer

Abstract:

Over the last few years, the amount of data available around the globe has increased rapidly. This has come with the emergence of recent concepts, such as big data and the Internet of Things, which have furnished suitable solutions for making data available all over the world. However, managing this massive amount of data remains a challenge due to the large variety of types and distributions. Therefore, locating a required file, particularly on the first trial, has turned out to be no easy task, due to the large similarity of names of different files distributed on the web. Consequently, the accuracy and speed of search have been negatively affected. This work presents a method that uses electroencephalography signals to locate files based on their contents. Building on the concept of natural mind-wave processing, this work analyses the mind-wave signals of different people, extracting their most appropriate features using a multi-objective metaheuristic algorithm and then classifying them using an artificial neural network to distinguish among files with similar names. The aim of this work is to provide the ability to find files based on their contents using human thoughts only. Implementing this approach and testing it on real people proved its ability to find the desired files accurately within a noticeably shorter time and to retrieve them as a first choice for the user.
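
A minimal sketch of the signal-to-classifier part of such a pipeline: band-power features computed from EEG-like signals with Welch's method, followed by a small neural network classifier. The synthetic signals, frequency bands and network size are illustrative assumptions, and the multi-objective metaheuristic feature extraction of the paper is not reproduced.

```python
# Band-power features from EEG-like signals, classified with a small MLP.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
fs, n_trials = 128, 200
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(sig):
    f, pxx = welch(sig, fs=fs, nperseg=256)
    return [np.trapz(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
            for lo, hi in bands.values()]

# Synthetic 2-class data: class 1 trials carry extra alpha-band (10 Hz) power
X, y = [], []
for i in range(n_trials):
    label = i % 2
    t = np.arange(4 * fs) / fs
    sig = rng.normal(0, 1, t.size) + label * 1.5 * np.sin(2 * np.pi * 10 * t)
    X.append(band_powers(sig)); y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```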

Keywords: artificial intelligence, data contents search, human active memory, mind wave, multi-objective optimization

Procedia PDF Downloads 152
754 Classification of Potential Biomarkers in Breast Cancer Using Artificial Intelligence Algorithms and Anthropometric Datasets

Authors: Aref Aasi, Sahar Ebrahimi Bajgani, Erfan Aasi

Abstract:

Breast cancer (BC) continues to be the most frequent cancer in females and causes the highest number of cancer-related deaths in women worldwide. Inspired by recent advances in studying the relationship between different patient attributes and features and the disease, in this paper we investigate different classification methods for better diagnosis of BC in its early stages. In this regard, datasets from the University Hospital Centre of Coimbra were chosen, and different machine learning (ML)-based and neural network (NN)-based classifiers have been studied. For this purpose, we selected favorable features among the nine provided attributes of the clinical dataset by using a random forest algorithm. This dataset consists of both healthy controls and BC patients, and it was noted that glucose, BMI, resistin, and age have the most importance, in that order. Moreover, we analyzed these features with various ML-based classifier methods, including Decision Tree (DT), K-Nearest Neighbors (KNN), eXtreme Gradient Boosting (XGBoost), Logistic Regression (LR), Naive Bayes (NB), and Support Vector Machine (SVM), along with an NN-based Multi-Layer Perceptron (MLP) classifier. The results revealed that among the different techniques, the SVM and MLP classifiers have the highest accuracy, with values of 96% and 92%, respectively. These results show that the adopted procedure can be used effectively for the classification of cancer cells, and they encourage further experimental investigations with more collected data for other types of cancer.
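
A minimal sketch of this workflow, assuming the Coimbra-style feature table is available as a local CSV with a 'Classification' column; the file name is hypothetical, and the model settings are defaults rather than the tuned configurations reported in the paper.

```python
# Rank attributes with a random forest, then compare SVM and MLP classifiers.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("coimbra_breast_cancer.csv")        # hypothetical local file
X = df.drop(columns=["Classification"])              # the nine clinical attributes
y = df["Classification"]                             # class label (control vs. patient)

# Feature importance from a random forest (glucose, BMI, resistin, age expected on top)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
ranking = sorted(zip(X.columns, rf.feature_importances_), key=lambda t: -t[1])
print("top attributes:", ranking[:4])

# Compare classifiers with 5-fold cross-validation on the selected attributes
selected = [name for name, _ in ranking[:4]]
models = [("SVM", SVC(kernel="rbf")),
          ("MLP", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))]
for label, model in models:
    pipe = make_pipeline(StandardScaler(), model)
    print(label, "accuracy:", cross_val_score(pipe, X[selected], y, cv=5).mean().round(3))
```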

Keywords: breast cancer, diagnosis, machine learning, biomarker classification, neural network

Procedia PDF Downloads 107
753 Proposed Framework based on Classification of Vertical Handover Decision Strategies in Heterogeneous Wireless Networks

Authors: Shidrokh Goudarzi, Wan Haslina Hassan

Abstract:

Heterogeneous wireless networks are converging towards an all-IP network as part of the so-called next-generation network. In this paradigm, different access technologies need to be interconnected; thus, vertical handovers, or vertical handoffs, are necessary for seamless mobility. In this paper, we conduct a review of existing vertical handover decision-making mechanisms that aim to provide ubiquitous connectivity to mobile users. To offer a systematic comparison, we categorize these vertical handover measurement and decision structures based on their respective methodologies and parameters. Subsequently, we analyze several vertical handover approaches in the literature and compare them according to their advantages and weaknesses. The paper compares the algorithms based on their network selection methods, the complexity of the technologies used, and their efficiency in order to introduce our vertical handover decision framework. We find that vertical handovers in heterogeneous wireless networks suffer from the lack of a standard and efficient method to satisfy both user and network quality of service requirements at different levels, including the architectural, decision-making and protocol levels. Also, the consolidation of network terminals, cross-layer information, multi-packet casting and an intelligent network selection algorithm appears to be an optimal solution for achieving seamless service continuity and facilitating seamless connectivity.
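
Many of the network selection methods compared in such surveys reduce to multi-attribute scoring of the candidate networks. The following is a minimal weighted-sum (SAW-style) sketch with made-up attribute values and weights, intended only to illustrate that family of decision algorithms rather than the framework proposed here.

```python
# Simple weighted-sum network selection across heterogeneous candidate networks.
import numpy as np

networks = ["WLAN", "LTE", "UMTS"]
# Columns: received signal strength (dBm), bandwidth (Mbps), delay (ms), monetary cost
attrs = np.array([
    [-65.0, 54.0, 40.0, 1.0],
    [-80.0, 30.0, 60.0, 3.0],
    [-90.0,  2.0, 90.0, 2.0],
])
benefit = np.array([True, True, False, False])   # larger-is-better vs smaller-is-better
weights = np.array([0.3, 0.3, 0.2, 0.2])         # e.g. from user/operator preferences

# Min-max normalize each attribute, inverting cost-type attributes
lo, hi = attrs.min(0), attrs.max(0)
norm = (attrs - lo) / (hi - lo)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

scores = norm @ weights
best = networks[int(np.argmax(scores))]
print(dict(zip(networks, scores.round(3))), "-> handover target:", best)
```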

Keywords: heterogeneous wireless networks, vertical handovers, vertical handover metric, decision-making algorithms

Procedia PDF Downloads 369
752 Microbial Resource Research Infrastructure: A Large-Scale Research Infrastructure for Microbiological Services

Authors: R. Hurtado-Ortiz, D. Clermont, M. Schüngel, C. Bizet, D. Smith, E. Stackebrandt

Abstract:

Microbiological resources and their derivatives are the essential raw material for the advancement of human health, agro-food, food security, biotechnology, and research and development in all the life sciences. Microbial resources, and their genetic and metabolic products, are utilised in many areas, such as the production of healthy and functional food, the identification of new antimicrobials against emerging and resistant pathogens, fighting agricultural disease, identifying novel energy sources on the basis of microbial biomass, and screening for new active molecules for the bio-industries. The complexity of public collections, the distribution and use of living biological material (not only living material but also DNA, services, training, consultation, etc.) and the service offer demand the coordination and sharing of policies, processes and procedures. The Microbial Resource Research Infrastructure (MIRRI) is an initiative within the European Strategy Forum on Research Infrastructures (ESFRI), bringing together 16 partners, including 13 European public microbial culture collections and biological resource centres (BRCs), supported by several European and non-European associated partners. The objective of MIRRI is to support innovation in microbiology by providing a one-stop shop for well-characterized microbial resources and high-quality services on a not-for-profit basis for biotechnology, in support of microbiological research. In addition, MIRRI contributes to the structuring of microbial resource capacity at both the national and European levels. This will facilitate access to microorganisms for biotechnology and the enhancement of the bio-economy in Europe. MIRRI will overcome the fragmentation of access to current resources and services, develop harmonised strategies for the delivery of associated information, ensure that bio-security and other regulatory conditions are met, and promote the uptake of these resources in European research. Data mining of the landscape of current information is needed to discover potential, drive innovation and ensure the uptake of high-quality microbial resources into research. MIRRI is in its Preparatory Phase, focusing on governance and structure, including technical, legal, governance and financial issues. MIRRI will help the biological resource centres to work more closely with policy makers, stakeholders, funders and researchers to deliver the resources and services needed for innovation.

Keywords: culture collections, microbiology, infrastructure, microbial resources, biotechnology

Procedia PDF Downloads 421