Search results for: objective function clustering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11601

9861 Numerical Simulation of Two-Dimensional Flow over a Stationary Circular Cylinder Using Feedback Forcing Scheme Based Immersed Boundary Finite Volume Method

Authors: Ranjith Maniyeri, Ahamed C. Saleel

Abstract:

Two-dimensional fluid flow over a stationary circular cylinder is one of the benchmark problems in the field of fluid-structure interaction in computational fluid dynamics (CFD). Motivated by this, in the present work, a two-dimensional computational model is developed using an improved version of the immersed boundary method which combines the feedback forcing scheme of the virtual boundary method with Peskin's regularized delta function approach. Lagrangian coordinates are used to represent the cylinder and Eulerian coordinates are used to describe the fluid flow. A two-dimensional Dirac delta function is used to transfer quantities between the solid and fluid domains. Further, the continuity and momentum equations governing the fluid flow are solved using a fractional step based finite volume method on a staggered Cartesian grid system. The developed code is validated by comparing the values of drag coefficient obtained for different Reynolds numbers with those reported by other researchers. The flow behavior at different Reynolds numbers is also well captured in the numerical simulations. The stability of the improved immersed boundary method is tested for different values of the feedback forcing coefficients.
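As a rough illustration of the Lagrangian-to-Eulerian transfer the abstract describes (this is our own sketch, not the authors' code; the 4-point Peskin kernel and the periodic grid handling are assumptions), the regularized delta and a force-spreading step might look like:

```python
import numpy as np

def peskin_delta(r):
    """Peskin's 4-point regularized 1D delta kernel (r in grid units)."""
    r = abs(r)
    if r < 1.0:
        return (3 - 2 * r + np.sqrt(1 + 4 * r - 4 * r**2)) / 8
    if r < 2.0:
        return (5 - 2 * r - np.sqrt(-7 + 12 * r - 4 * r**2)) / 8
    return 0.0

def spread_force(f_lag, x_lag, grid_n, h):
    """Spread Lagrangian point forces to a periodic 2D Eulerian grid.

    Each point force is distributed over a 4x4 stencil of cells using the
    tensor product of two 1D Peskin kernels, divided by the cell area h^2.
    """
    F = np.zeros((grid_n, grid_n))
    for f, (X, Y) in zip(f_lag, x_lag):
        i0, j0 = int(X / h), int(Y / h)
        for i in range(i0 - 2, i0 + 3):
            for j in range(j0 - 2, j0 + 3):
                w = peskin_delta((i * h - X) / h) * peskin_delta((j * h - Y) / h) / h**2
                F[i % grid_n, j % grid_n] += f * w
    return F
```

The kernel satisfies a discrete partition of unity, so the spread force integrates back to the original point force.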

Keywords: Feedback Forcing Scheme, Finite Volume Method, Immersed Boundary Method, Navier-Stokes Equations

Procedia PDF Downloads 297
9860 Extension of Moral Agency to Artificial Agents

Authors: Sofia Quaglia, Carmine Di Martino, Brendan Tierney

Abstract:

Artificial Intelligence (A.I.) now permeates many aspects of modern life, from the machine learning algorithms predicting stocks on Wall Street to the killing of belligerents and innocents alike on the battlefield. Moreover, the end goal is to create autonomous A.I., meaning that humans will be absent from the decision-making process. The question arises naturally: when an A.I. does something wrong, when its behavior harms the community and its actions break the law, who is to be held responsible? This research in A.I. and robot ethics focuses mainly on robot rights, and its ultimate objective is to answer the questions: (i) What is the function of rights? (ii) Who is a right holder, what is personhood, and what requirements must be met to be a moral agent (and therefore accountable)? (iii) Can an A.I. be a moral agent (the ontological requirements)? And finally, (iv) ought it to be one (the ethical implications)? To answer these questions, the project was carried out as a collaboration between the School of Computer Science at the Technical University of Dublin, which oversaw the technical aspects of the work, and the Department of Philosophy at the University of Milan, which supervised the philosophical framework and argumentation. Firstly, it was found that all rights are positive and based on consensus; they change over time with circumstances. Their function is to protect the social fabric and avoid dangerous situations. The same goes for the requirements considered necessary for moral agency: they are not absolute and are in fact constantly redesigned. Hence, the next logical step was to identify which requirements are regarded as fundamental in real-world judicial systems and to compare them with those used in philosophy.
Autonomy, free will, intentionality, consciousness, and responsibility were identified as the requirements for moral agency. The work then built a symmetrical comparison between personhood and A.I. to bring out the ontological differences between the two. Each requirement is introduced, explained through the most relevant theories of contemporary philosophy, and observed in its manifestation in A.I. Finally, after the philosophical and technical analysis was complete, conclusions were drawn. As outlined in the research questions, there are two issues regarding the assignment of moral agency to artificial agents: first, whether all the ontological requirements are present, and second, whether an A.I. ought to be considered an artificial moral agent regardless. From an ontological point of view, it is very hard to prove that an A.I. could be autonomous, free, intentional, conscious, and responsible. The philosophical accounts are often highly theoretical and inconclusive, making it difficult to detect these requirements at an experimental level of demonstration. From an ethical point of view, however, it makes sense to consider some A.I. systems artificial moral agents, and hence responsible for their own actions. When artificial agents are considered responsible, norms that already exist in our judicial system can be applied, such as removing them from society and re-educating them in order to re-introduce them to society. This is in line with how the highest-profile correctional facilities ought to work. Admittedly, this is a provisional conclusion and research must continue; nevertheless, the strength of the presented argument lies in its immediate applicability to real-world scenarios. To return to the aforementioned incidents involving the killing of innocents, under this thesis it is possible to hold an A.I. accountable and responsible for its actions. This implies removing it from society by virtue of its unusability, re-programming it, and, only when it is properly functioning, re-introducing it.

Keywords: artificial agency, correctional system, ethics, natural agency, responsibility

Procedia PDF Downloads 174
9859 Nanoparaquat Effects on Oxidative Stress Status and Liver Function in Male Rats

Authors: Zahra Azizi, Ashkan Karbasi, Farzin Firouzian, Sara Soleimani Asl, Akram Ranjbar

Abstract:

Background: One of the most often used herbicides in agriculture is paraquat (PQ), which is very harmful to both people and animals. Chitosan is a well-known, non-toxic polymer commonly used in preparing particles via ionotropic gelation facilitated by negatively charged agents such as sodium alginate. This study aimed to compare the effects of PQ and nanoparaquat (PQNPs) on liver function in male rats. Materials & Methods: Rats were exposed to PQ and PQNPs (4 mg/kg/day, intraperitoneally) for seven days. Then, rats were anesthetized, and serum and liver samples were collected. Enzymatic activities of alanine transaminase (ALT), aspartate transaminase (AST), and alkaline phosphatase (ALP) in serum, and oxidative stress biomarkers such as lipid peroxidation (LPO), total antioxidant capacity (TAC), and total thiol group (TTG) levels in liver tissue, were measured by colorimetric methods. Histological changes in the liver were also evaluated. Results: PQ altered the levels of ALT, AST, and ALP while inducing oxidative stress in the liver; liver homogenates from PQ-exposed rats showed altered LPO, TAC, and TTG levels. Severe liver damage was indicated by a significant increase in the serum enzyme activities of AST, ALT, and ALP. Compared to PQ and the control group, PQNPs lowered ALT, AST, ALP, and LPO levels while increasing TAC and TTG levels. Conclusion: According to the biochemical and histological investigations, PQ loaded in chitosan-alginate particles is more efficient than free PQ at reducing liver toxicity.

Keywords: paraquat, paraquat nanoparticles, liver, oxidative stress

Procedia PDF Downloads 49
9858 Overexpression of Mapk8ip3 Patient Variants in Zebrafish to Establish a Spectrum of Phenotypes in a Rare Neurodevelopmental Disorder

Authors: Kinnsley Travis, Camerron M. Crowder

Abstract:

Mapk8ip3 (Mitogen-Activated Protein Kinase 8 Interacting Protein 3) is a gene that codes for the JIP3 protein, a member of the JIP scaffolding protein family involved in axonal vesicle transport, elongation, and regeneration. Variants in the Mapk8ip3 gene are associated with a rare genetic condition: a neurodevelopmental disorder that can cause a range of phenotypes, including global developmental delay and intellectual disability. Currently, 18 individuals are known to have sequence-confirmed Mapk8ip3 genetic disorders. This project examines the impact of a subset of missense patient variants on Jip3 protein function by overexpressing the mRNA of these variants in a zebrafish knockout model for Jip3. Plasmids containing cDNA with individual missense variants were transcribed in vitro, purified, and the resulting mRNA injected into single-cell zebrafish embryos (wild type, Jip3 -/+, and Jip3 -/-). At 6 days post-microinjection, morphological, behavioral, and microscopic phenotypes were examined in zebrafish larvae. Morphologically, we compared the size and shape of the zebrafish over a 5-day developmental period. Total locomotive activity was assessed using the Microtracker assay, and patterns of movement over time were examined using the DanioVision assay. Lastly, we used confocal microscopy to examine sensory axons for swelling and shortened length, phenotypes observed in the loss-of-function Jip3 knockout zebrafish model. Using these assays during embryonic development, we determined the impact of the missense variants on Jip3 protein function compared to knockout and wild-type embryos. In this way, the spectrum of the disorder can be characterized phenotypically, and the impact of variant location can be compared across knockout and wild-type zebrafish models.

Keywords: rare disease, neurodevelopmental disorders, mrna overexpression, zebrafish research

Procedia PDF Downloads 107
9857 U.S. Trade and Trade Balance with China: Testing for Marshall-Lerner Condition and the J-Curve Hypothesis

Authors: Anisul Islam

Abstract:

The U.S. has a very strong trade relationship with China, but with a large and persistent trade deficit. Some have argued that the undervalued Chinese yuan is to blame for the persistent deficit, but the empirical results are mixed at best. This paper empirically estimates the U.S. export and import functions for its trade with China in order to test for the existence of the Marshall-Lerner (ML) condition as well as the possible existence of the J-curve hypothesis. Annual export and import data are utilized for as long a period as the time series data exist. The export and import functions are estimated using advanced econometric techniques, with appropriate diagnostic tests performed to examine the validity and reliability of the estimated results. The annual time series covers 1975 to 2022, a sample of 48 years, the longest period utilized in any previous study. The data are collected from several sources, such as the World Bank's World Development Indicators, IMF International Financial Statistics, IMF Direction of Trade Statistics, and several others. The paper is expected to shed important light on the ongoing debate regarding the persistent U.S. trade deficit with China and the policies that may be useful in reducing such deficits over time. As such, the paper will be of great interest to academics, researchers, think tanks, global organizations, and policy makers in both China and the U.S.
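The Marshall-Lerner condition the paper tests states that a depreciation improves the trade balance when the absolute export and import demand elasticities sum to more than one. A minimal sketch of such a test on synthetic data (the data, the log-log OLS specification, and the "true" elasticities are our inventions, not the paper's econometrics):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 48  # annual observations, mirroring the paper's 1975-2022 sample

# Synthetic data: log real exchange rate and log trade volumes.
ln_rer = rng.normal(0.0, 0.1, n).cumsum()
eps_x_true, eps_m_true = 0.7, 0.6   # assumed "true" elasticities
ln_x = 1.0 + eps_x_true * ln_rer + rng.normal(0, 0.01, n)
ln_m = 2.0 - eps_m_true * ln_rer + rng.normal(0, 0.01, n)

def elasticity(ln_q, ln_p):
    """OLS slope of log quantity on the log exchange rate."""
    A = np.column_stack([np.ones_like(ln_p), ln_p])
    beta, *_ = np.linalg.lstsq(A, ln_q, rcond=None)
    return beta[1]

eps_x = elasticity(ln_x, ln_rer)
eps_m = -elasticity(ln_m, ln_rer)

# Marshall-Lerner: depreciation improves the trade balance if the
# absolute elasticities sum to more than one.
ml_holds = abs(eps_x) + abs(eps_m) > 1.0
```

A real test would add income terms, unit-root and cointegration diagnostics, and lag structure for the J-curve dynamics.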

Keywords: exports, imports, marshall-lerner condition, j-curve hypothesis, united states, china

Procedia PDF Downloads 53
9856 Hydro-Gravimetric ANN Model for Prediction of Groundwater Level

Authors: Jayanta Kumar Ghosh, Swastik Sunil Goriwale, Himangshu Sarkar

Abstract:

Groundwater is one of the most valuable natural resources that society consumes for its domestic, industrial, and agricultural water supply. Bulk and indiscriminate consumption depletes the groundwater resource, and the groundwater recharge rate is often found to be much lower than the demand. Thus, to maintain water and food security, it is necessary to monitor and manage groundwater storage. However, it is challenging to estimate groundwater storage (GWS) using existing hydrological models. To overcome these difficulties, machine learning (ML) models are being introduced for the evaluation of groundwater level (GWL). The objective of this research work is therefore to develop an ML-based model for the prediction of GWL. This objective has been realized through the development of an artificial neural network (ANN) model based on hydro-gravimetry. The model has been developed using training samples from field observations spread over 8 months and tested for the prediction of GWL in an observation well. The root mean square error (RMSE) for the test samples was found to be 0.390 meters. Thus, it can be concluded that the hydro-gravimetric ANN model can be used for the prediction of GWL. However, to improve the accuracy, more hydro-gravimetric parameters may be considered and tested in the future.
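As a hedged sketch of the kind of hydro-gravimetric ANN described (a toy one-hidden-layer network on synthetic inputs; the features, network size, and training setup are our assumptions, not the authors' model or data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for 8 months of observations:
# inputs = (gravity residual, rainfall index), target = groundwater level (m).
X = rng.uniform(-1, 1, size=(200, 2))
y = 5.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.05 * rng.normal(size=200)

# One-hidden-layer ANN trained by plain full-batch gradient descent.
H = 8
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)               # hidden activations
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    # Backpropagate mean-squared-error gradients.
    gW2 = h.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h**2) / len(y)
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
rmse = np.sqrt(np.mean((pred - y) ** 2))   # analogue of the reported 0.390 m
```

The RMSE computed at the end plays the same role as the 0.390 m test error reported in the abstract, though here it is a training-set figure on toy data.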

Keywords: machine learning, hydro-gravimetry, ground water level, predictive model

Procedia PDF Downloads 114
9855 The Impact of Board Director Characteristics on the Quality of Information Disclosure

Authors: Guo Jinhong

Abstract:

The purpose of this study is to explore the association between board member functions and the level of information disclosure. Drawing on variables from the prior literature, such as characteristics of the board of directors, a single comprehensive indicator is constructed as a proxy for board functions, and the information disclosure evaluation results published by the Securities and Foundation are used to measure the company's level of information disclosure. The study focuses on companies listed on the Taiwan Stock Exchange from 2006 to 2010 and uses descriptive statistics, univariate analysis, correlation analysis, and ordered probit regression for the empirical analysis. The empirical results show a significant positive correlation between the function of board members and the level of information disclosure. A sensitivity test draws similar conclusions, showing that boards with better member functions achieve higher levels of information disclosure. In addition, the study finds that higher board independence, a lower director shareholding pledge ratio, a higher director shareholding ratio, and directors with rich professional knowledge and practical experience all help improve the level of information disclosure. The empirical results provide strong support for the regulations formulated by the competent authorities in recent years to improve the level of information disclosure.
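Ordered probit, the estimator used here, regresses an ordinal outcome (such as a disclosure grade) on covariates through a latent normal variable and ordered cutpoints. A minimal maximum-likelihood sketch on synthetic data (the variable names, grades, and cutpoints are illustrative, not the study's):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
n = 500
# Synthetic stand-in: x = composite board-function indicator,
# y = disclosure grade in {0, 1, 2} generated by ordered cutpoints.
x = rng.normal(size=n)
latent = 1.0 * x + rng.normal(size=n)       # true slope = 1.0
y = np.digitize(latent, [-0.5, 0.8])        # true cutpoints

def negloglik(params):
    beta, c0, dc = params
    cuts = np.array([c0, c0 + np.exp(dc)])  # exp() keeps cutpoints ordered
    xb = beta * x
    cdf = stats.norm.cdf
    p = np.select(
        [y == 0, y == 1, y == 2],
        [cdf(cuts[0] - xb),
         cdf(cuts[1] - xb) - cdf(cuts[0] - xb),
         1 - cdf(cuts[1] - xb)])
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

res = optimize.minimize(negloglik, x0=[0.0, -0.1, 0.0], method="Nelder-Mead")
beta_hat = res.x[0]   # should recover a slope near 1.0
```

A positive, significant slope on the board-function indicator is the analogue of the paper's main finding.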

Keywords: function of board members, information disclosure, securities, foundation

Procedia PDF Downloads 88
9854 Carbon Fiber Manufacturing Conditions to Improve Interfacial Adhesion

Authors: Filip Stojcevski, Tim Hilditch, Luke Henderson

Abstract:

Although carbon fiber composites are becoming ever more prominent in the engineering industry, interfacial failure remains one of the most common limits on material performance. Carbon fiber surface treatments have played a major role in advancing composite properties; however, research into the influence of manufacturing variables on a fiber manufacturing line is lacking. This project investigates the impact of altering carbon fiber manufacturing conditions on a production line (specifically electrochemical oxidization and sizing variables) on fiber-matrix adhesion. Pristine virgin fibers were manufactured, and interfacial adhesion was systematically assessed from the microscale (single fiber) to the mesoscale (12k tow), and ultimately the macroscale (laminate). Correlations between interfacial shear strength (IFSS) at each level are explored as a function of the known interfacial bonding mechanisms, namely mechanical interlocking, chemical adhesion, and fiber wetting. The impact of these bonding mechanisms is assessed through extensive mechanical, topological, and chemical characterisation and correlated to performance via IFSS. Ultimately, this study provides a bottom-up approach to improving composite laminates: by understanding the scaling effects from a single fiber to a composite laminate and linking this knowledge to specific bonding mechanisms, material scientists can make informed decisions about the manufacturing conditions most beneficial for interfacial adhesion.

Keywords: carbon fibers, interfacial adhesion, surface treatment, sizing

Procedia PDF Downloads 255
9853 An Optimal Control Model to Determine Body Forces of Stokes Flow

Authors: Yuanhao Gao, Pin Lin, Kees Weijer

Abstract:

In this paper, we determine the external body force distribution from an analysis of Stokes fluid motion using mathematical modelling and numerical approximation. The body force distribution is regarded as the unknown variable and is determined using ideas from optimal control theory. The Stokes flow and its velocity field are generated by given forces in a unit square domain. A regularized objective functional is built to match the numerically computed flow velocity with the generated velocity data, so that the force distribution can be determined by minimizing the value of the objective functional, i.e., the difference between the numerical and experimental velocities. Applying the Lagrange multiplier method then yields a system of partial differential equations constituting the optimal control system to be solved. The finite element method and the conjugate gradient method are used to discretize the equations and to derive the iterative expression for the target body force, from which the velocity and the body force distribution are computed numerically. The programming environment FreeFEM++ supports the implementation of this model.
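The regularized objective functional and the adjoint system the abstract alludes to can be written out; a plausible form (the symbols u_d, alpha, mu and the sign conventions are our assumptions, not taken from the paper):

```latex
% Match the computed Stokes velocity u(f) to the given data u_d,
% with a Tikhonov term weighting the unknown body force f:
J(f) = \tfrac{1}{2}\int_\Omega \lvert u(f) - u_d \rvert^2 \, dx
     + \tfrac{\alpha}{2}\int_\Omega \lvert f \rvert^2 \, dx,
\qquad \text{s.t.} \quad -\mu \Delta u + \nabla p = f, \quad
\nabla\!\cdot u = 0 \ \text{in } \Omega .

% Introducing the adjoint pair (w, q) via Lagrange multipliers gives
% the optimality system solved alongside the state equations:
-\mu \Delta w + \nabla q = u(f) - u_d, \qquad \nabla\!\cdot w = 0,

% and the gradient driving the conjugate-gradient iteration:
J'(f) = \alpha f + w .
```

Each conjugate-gradient step updates f along a descent direction built from J'(f), with the state and adjoint Stokes problems discretized by finite elements in FreeFEM++.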

Keywords: optimal control model, Stokes equation, finite element method, conjugate gradient method

Procedia PDF Downloads 389
9852 Hybrid Wind Solar Gas Reliability Optimization Using Harmony Search under Performance and Budget Constraints

Authors: Meziane Rachid, Boufala Seddik, Hamzi Amar, Amara Mohamed

Abstract:

Today's energy industry seeks maximum benefit with maximum reliability. In order to achieve this goal, design engineers depend on reliability optimization techniques. This work uses the harmony search (HS) meta-heuristic optimization method to solve the design optimization problem for wind-solar-gas power systems. We consider the case where redundant electrical components are chosen to achieve a desirable level of reliability. The electrical power components of the system are characterized by their cost, capacity, and reliability. Reliability is considered here as the ability to satisfy the consumer demand, represented as a piecewise cumulative load curve; this definition of the reliability index is widely used for power systems. The proposed meta-heuristic seeks the optimal design of series-parallel power systems in which multiple choices of wind generators, transformers, and lines are allowed from a list of products available in the market. Our approach has the advantage of allowing electrical power components with different parameters to be allocated within electrical power systems. To allow fast reliability estimation, a universal moment generating function (UMGF) method is applied. A computer program has been developed to implement the UMGF method and the HS algorithm, and an illustrative example is presented.
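A bare-bones harmony search over redundancy choices, with a series-parallel reliability objective and a budget penalty (the component reliabilities, costs, and penalty weight are invented for illustration; the paper's UMGF-based evaluation is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy problem: pick redundancy counts (1-4) for 3 component types to
# maximize series-parallel system reliability under a budget.
rel = np.array([0.85, 0.90, 0.80])   # per-component reliability (assumed)
cost = np.array([3.0, 4.0, 2.0])     # per-component cost (assumed)
budget = 20.0

def fitness(x):
    system_rel = np.prod(1 - (1 - rel) ** x) if np.all(x > 0) else 0.0
    penalty = max(0.0, cost @ x - budget)   # soft budget constraint
    return system_rel - 0.1 * penalty

HMS, HMCR, PAR, iters = 10, 0.9, 0.3, 500
memory = rng.integers(1, 5, size=(HMS, 3))          # harmony memory
scores = np.array([fitness(h) for h in memory])

for _ in range(iters):
    new = np.empty(3, dtype=int)
    for j in range(3):
        if rng.random() < HMCR:                      # memory consideration
            new[j] = memory[rng.integers(HMS), j]
            if rng.random() < PAR:                   # pitch adjustment
                new[j] = np.clip(new[j] + rng.choice([-1, 1]), 1, 4)
        else:                                        # random selection
            new[j] = rng.integers(1, 5)
    s = fitness(new)
    worst = scores.argmin()
    if s > scores[worst]:                            # replace worst harmony
        memory[worst], scores[worst] = new, s

best = memory[scores.argmax()]
```

The single-redundancy design [1, 1, 1] scores 0.85 * 0.90 * 0.80 = 0.612; the search should find a redundant design that does noticeably better within the budget.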

Keywords: reliability optimization, harmony search optimization (HSA), universal generating function (UMGF)

Procedia PDF Downloads 566
9851 Application of WHO's Guideline to Evaluating Apps for Smoking Cessation

Authors: Suin Seo, Sung-Il Cho

Abstract:

Background: The use of mobile apps for smoking cessation has grown exponentially in recent years, yet to our knowledge there has been limited research evaluating the quality of smoking cessation apps. In most cases, a clinical practice guideline aimed at clinical physicians has been used as the evaluation tool. Objective: The objective of this study was to develop a user-centered measure of the quality of mobile smoking cessation apps. Methods: A literature search was conducted to identify articles containing explicit smoking cessation guidelines for smokers published up to January 2018. WHO's guide for tobacco users to quit was adopted as the evaluation tool for assessing the smoker-oriented content of smoking cessation apps; unlike clinical practice guidelines, the WHO guideline is designed for smokers (non-specialists). On the basis of existing criteria developed from the 2008 clinical practice guideline for Treating Tobacco Use and Dependence, the evaluation tool was modified and developed by an expert panel. Results: Five broad categories of criteria were identified, comprising five objective quality scales: enhancing motivation, assistance with planning and making quit attempts, preparation for relapse, self-efficacy, and connection to smoking (environments or habits that act as reminders of smoking). Enhancing motivation and assistance with planning and making quit attempts were similar to the contents of the clinical practice guideline, but preparation for relapse, self-efficacy, and connection to smoking appear only in the WHO guideline. The WHO guideline thus has more user-centered elements than the clinical guideline; in particular, self-efficacy is the most important determinant of behavior change in many health behavior change models. With the WHO guideline, it is now possible to analyze the content of an app from the perspective of a health participant, not a provider.
Conclusion: The WHO guideline evaluation tool is a simple, reliable, and smoker-centered tool for assessing the quality of mobile smoking cessation apps. It can also be used as a checklist for the development of new high-quality smoking cessation apps.

Keywords: smoking cessation, evaluation, mobile application, WHO, guideline

Procedia PDF Downloads 179
9850 M-Machine Assembly Scheduling Problem to Minimize Total Tardiness with Non-Zero Setup Times

Authors: Harun Aydilek, Asiye Aydilek, Ali Allahverdi

Abstract:

Our objective is to minimize the total tardiness in an m-machine two-stage assembly flowshop scheduling problem. This objective is an important performance measure because the fulfillment of customers' due dates has to be taken into account when making scheduling decisions. In the literature, the problem is considered with zero setup times, which may not be realistic or appropriate for some scheduling environments. Considering setup times separately from processing times increases machine utilization by decreasing idle time, and reduces total tardiness. We propose two new algorithms and adapt four existing algorithms from the literature, which are different versions of simulated annealing and genetic algorithms. Moreover, a dominance relation is developed based on the mathematical formulation of the problem and incorporated into our proposed algorithms. Computational experiments are conducted to investigate the performance of the newly proposed algorithms. We find that one of the proposed algorithms performs significantly better than the others: its error is at least 50% smaller than those of the other algorithms. The newly proposed algorithm is also efficient in the case of zero setup times and performs better than the best existing algorithm in the literature.
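As a hedged illustration of the simulated-annealing family the paper adapts, here is a toy single-machine version with separate setup times minimizing total tardiness (the instance data and swap neighborhood are our own; the paper's m-machine two-stage assembly flowshop is considerably more involved):

```python
import math
import random

random.seed(0)

# Toy instance: jobs given as (processing time, setup time, due date).
jobs = [(5, 2, 10), (3, 1, 6), (8, 3, 25), (4, 2, 12), (6, 1, 18)]

def total_tardiness(seq):
    t, tard = 0, 0
    for j in seq:
        p, s, d = jobs[j]
        t += s + p                 # setup counted separately from processing
        tard += max(0, t - d)
    return tard

def simulated_annealing(iters=5000, temp=10.0, cooling=0.999):
    cur = list(range(len(jobs)))
    random.shuffle(cur)
    cur_cost = total_tardiness(cur)
    best, best_cost = cur[:], cur_cost
    for _ in range(iters):
        i, k = random.sample(range(len(jobs)), 2)
        cand = cur[:]
        cand[i], cand[k] = cand[k], cand[i]          # swap neighborhood
        c = total_tardiness(cand)
        # Accept improvements always; accept worse moves with Boltzmann prob.
        if c < cur_cost or random.random() < math.exp((cur_cost - c) / temp):
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        temp *= cooling                              # geometric cooling
    return best, best_cost

best_seq, best_cost = simulated_annealing()
```

On an instance this small the schedule found can be checked against brute-force enumeration of all permutations.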

Keywords: algorithm, assembly flowshop, scheduling, simulation, total tardiness

Procedia PDF Downloads 319
9849 A General Framework for Knowledge Discovery from Echocardiographic and Natural Images

Authors: S. Nandagopalan, N. Pradeep

Abstract:

The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework comprises four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as the active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks, such as clustering and classification, with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.

Keywords: active contour, Bayesian, echocardiographic image, feature vector

Procedia PDF Downloads 431
9848 Identifying the Goals of a Multicultural Curriculum for the Primary Education Course

Authors: Fatemeh Havas Beigi

Abstract:

The purpose of this study is to identify the objectives of a multicultural curriculum for the primary education period from the perspective of ethnic teachers, education experts, and cultural professionals. The research paradigm is interpretive, the approach is qualitative, and the strategy is content analysis; purposive snowball sampling was used, and the sample of informants (Iranian ethnic teachers and experts) reached an estimated 67 people at theoretical saturation. Data were collected through semi-structured individual and focus-group interviews, recorded in audio format, and analyzed using first- and second-cycle coding. Based on the data analysis, 11 objectives for a multicultural curriculum were identified: paying attention to ethnic equality, expanding educational opportunities and justice, peaceful coexistence, anti-ethnic and anti-racial discrimination education, paying attention to human value and dignity, accepting religious diversity, getting to know ethnicities and cultures, promoting teaching-learning, fostering self-confidence, building national unity, and developing cultural commonalities.

Keywords: objective, multicultural curriculum, connect, elementary education period

Procedia PDF Downloads 87
9847 Analyzing and Predicting the CL-20 Detonation Reaction Mechanism Based on Artificial Intelligence Algorithm

Authors: Kaining Zhang, Lang Chen, Danyang Liu, Jianying Lu, Kun Yang, Junying Wu

Abstract:

In order to address the large computational cost and limited scale of first-principles molecular dynamics simulations of energetic material detonation reactions, we established an artificial intelligence model for analyzing and predicting the detonation reaction mechanism of CL-20, based on first-principles molecular dynamics simulation with the multiscale shock technique (MSST). We employed principal component analysis to identify the dominant charge features governing molecular reactions, adopted the K-means clustering algorithm to cluster the reaction paths and screen out the key reactions, and introduced a neural network to construct the mapping between the charge characteristics of the molecular structure and the key reaction characteristics. This establishes a method for predicting detonation reactions from the charge characteristics of CL-20 and enables rapid analysis of the reaction mechanisms of energetic materials.
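The PCA-then-cluster step described above can be sketched as follows (synthetic stand-in features rather than CL-20 trajectory data; PCA via SVD and a plain Lloyd's k-means, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-reaction charge features (two "mechanisms").
A = rng.normal(0, 1, (60, 5)) + np.array([5.0, 0, 0, 0, 0])
B = rng.normal(0, 1, (60, 5)) - np.array([5.0, 0, 0, 0, 0])
X = np.vstack([A, B])

# PCA via SVD: the leading components capture the dominant charge variation.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T          # project onto the two leading components

# Plain Lloyd's k-means (k=2) on the PCA scores to group reaction paths;
# centers start at the two extremes of PC1 so the initialization is spread.
centers = scores[[scores[:, 0].argmin(), scores[:, 0].argmax()]]
for _ in range(20):
    d2 = ((scores[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)
    centers = np.array([scores[labels == k].mean(axis=0) for k in range(2)])
```

In the paper's setting, each cluster would correspond to a family of reaction paths, from which the key reactions are screened.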

Keywords: energetic material detonation reaction, first-principle molecular dynamics simulation of multiscale shock technique, neural network, CL-20

Procedia PDF Downloads 96
9846 Discovering Word-Class Deficits in Persons with Aphasia

Authors: Yashaswini Channabasavegowda, Hema Nagaraj

Abstract:

Aim: The current study aims at discovering word-class deficits with respect to the noun-verb ratio in confrontation naming, picture description, and picture-word matching tasks. A total of ten persons with aphasia (PWA) and ten age-matched neurotypical individuals (NTI) were recruited. The research includes both behavioural and objective measures to assess word-class deficits in PWA. Objective: The main objective of the research is to identify the word-class deficits seen in persons with aphasia using various speech-eliciting tasks. Method: The study was conducted in the participants' L1, Kannada. The Kannada-adapted versions of the Action Naming Test and the Boston Naming Test were administered, along with a picture description task. A picture-word matching task was carried out using E-Prime software (version 2) to measure accuracy and reaction time for the identification of verbs and nouns, with the stimuli presented through auditory and visual modes. Data were analysed to identify errors in naming nouns versus verbs on the Boston Naming Test and the Action Naming Test, as well as the use of nouns and verbs in the picture description task. Reaction time and accuracy for picture-word matching were extracted from the software. Results: PWA showed significantly different sentence structure compared to age-matched NTI. PWA also showed impairment in syntactic measures on the picture description task, with fewer grammatically correct sentences and less correct usage of verbs and nouns, and they produced a greater proportion of nouns than verbs. PWA had poorer accuracy and slower reaction times in the picture-word matching task compared to NTI, and accuracy was higher for nouns than for verbs in PWA. The deficits were noticed irrespective of the cause of the aphasia.

Keywords: nouns, verbs, aphasia, naming, description

Procedia PDF Downloads 95
9845 Smelling Our Way through Names: Understanding the Potential of Floral Volatiles as Taxonomic Traits in the Fragrant Ginger Genus Hedychium

Authors: Anupama Sekhar, Preeti Saryan, Vinita Gowda

Abstract:

Plants, due to their sedentary lifestyle, have evolved mechanisms to synthesize a huge diversity of complex, specialized chemical metabolites, a majority of them volatile organic compounds (VOCs), which are heavily involved in their biotic and abiotic interactions. Since chemical composition could be under the same selection processes as other morphological characters, we test whether VOCs can be used to taxonomically distinguish species in the well-studied, fragrant ginger genus Hedychium (Zingiberaceae). We propose that variations in the volatile profiles are suggestive of adaptation to divergent environments, and that their presence could be explained by either phylogenetic conservatism or ecological factors. In this study, we investigate the volatile chemistry within Hedychium, a genus endemic to the Asian palaeotropics. We used an unsupervised clustering approach, which clearly distinguished most taxa, and ancestral state reconstruction to estimate phylogenetic signals and chemical trait evolution in the genus. We propose that chemical composition could aid taxonomic species identification, especially in species complexes where taxa are not morphologically distinguishable, and extensive, targeted chemical libraries will help in this effort.

Keywords: chemotaxonomy, dynamic headspace sampling, floral fragrance, floral volatile evolution, gingers, Hedychium

Procedia PDF Downloads 83
9844 Electron Beam Melting Process Parameter Optimization Using Multi-Objective Reinforcement Learning

Authors: Michael A. Sprayberry, Vincent C. Paquit

Abstract:

Process parameter optimization in metal powder bed electron beam melting (MPBEBM) is crucial to ensure the technology's repeatability, control, and continued industry adoption. Despite continued efforts to address the challenges via traditional design-of-experiments and process-mapping techniques, there has yet to be a successful on-the-fly optimization framework that can be adapted to MPBEBM systems. Additionally, data-intensive physics-based modeling and simulation methods are difficult to sustain for every metal AM alloy or system due to cost restrictions. To mitigate the challenge of resource-intensive experiments and models, this paper introduces a Multi-Objective Reinforcement Learning (MORL) methodology that frames MPBEBM parameter selection as an optimization problem. An off-policy MORL framework based on policy gradient is proposed to discover optimal sets of beam power (P) and beam velocity (v) combinations that maintain a steady-state melt pool depth and phase transformation. For this, an experimentally validated Eagar-Tsai melt pool model is used to simulate the MPBEBM environment, where the beam acts as the agent across the P-v space to maximize returns for the uncertain powder bed environment, producing a melt pool and phase transformation closer to the optimum. The culmination of the training process yields a set of process parameters {power, speed, hatch spacing, layer depth, and preheat} where the state (P, v) with the highest returns corresponds to a refined process parameter mapping. The resultant objectives and mapping of returns to the P-v space show convergence with experimental observations. The framework therefore provides a model-free multi-objective approach to discovery without the need for trial-and-error experiments.
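The policy-gradient idea can be illustrated on a toy version of the problem. In the sketch below, a single-state REINFORCE agent learns a softmax policy over a small grid of (P, v) pairs; a simple depth surrogate stands in for the Eagar-Tsai model, and all constants (powers, velocities, target depth, learning rate) are illustrative assumptions rather than the paper's values:

```python
# Toy single-state REINFORCE sketch for choosing a beam power-velocity
# (P, v) pair that keeps melt-pool depth near a target. Illustrative only.
import math, random

random.seed(0)

# Discrete action space of (power in W, velocity in m/s) pairs.
ACTIONS = [(P, v) for P in (100, 200, 300) for v in (0.5, 1.0, 1.5)]
TARGET_DEPTH = 0.2  # mm, assumed steady-state target

def melt_pool_depth(P, v):
    """Toy surrogate for the Eagar-Tsai model: depth grows with power
    and shrinks with velocity (illustrative, not physical)."""
    return 0.001 * P / v

def reward(P, v):
    return -abs(melt_pool_depth(P, v) - TARGET_DEPTH)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

logits = [0.0] * len(ACTIONS)
LR = 0.5
for _ in range(2000):
    probs = softmax(logits)
    a = random.choices(range(len(ACTIONS)), weights=probs)[0]
    r = reward(*ACTIONS[a])
    # REINFORCE update for a softmax policy: grad log pi(a) = 1[a] - probs
    for i in range(len(ACTIONS)):
        logits[i] += LR * r * ((1.0 if i == a else 0.0) - probs[i])

probs = softmax(logits)
best_P, best_v = ACTIONS[probs.index(max(probs))]
```

After training, the policy concentrates on parameter pairs whose surrogate depth matches the target, mirroring the paper's mapping of high returns in P-v space.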

Keywords: additive manufacturing, metal powder bed fusion, reinforcement learning, process parameter optimization

Procedia PDF Downloads 83
9843 Portfolio Selection with Active Risk Monitoring

Authors: Marc S. Paolella, Pawel Polak

Abstract:

The paper proposes a framework for large-scale portfolio optimization which accounts for all the major stylized facts of multivariate financial returns, including volatility clustering, dynamics in the dependency structure, asymmetry, heavy tails, and non-ellipticity. It introduces a so-called risk fear portfolio strategy, which combines portfolio optimization with active risk monitoring. The former selects optimal portfolio weights. The latter, independently, initiates market exit in case of excessive risks. The strategy agrees with the stylized fact of major stock market sell-offs during the initial stage of market downturns. The advantages of the new framework are illustrated with an extensive empirical study. It leads to superior multivariate density and Value-at-Risk forecasting, and better portfolio performance. The proposed risk fear portfolio strategy outperforms various competing types of optimal portfolios, even in the presence of conservative transaction costs and frequent rebalancing. The risk monitoring of the optimal portfolio can serve as an early warning system against large market risks. In particular, the new strategy avoids all the losses during the 2008 financial crisis, and it profits from the subsequent market recovery.
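The risk-monitoring component can be sketched as follows. A rolling-volatility proxy stands in for the paper's multivariate risk measures: when recent volatility breaches a limit, the strategy exits to cash until risk subsides. The return series and thresholds below are illustrative assumptions, not the paper's data:

```python
# Sketch of the "risk fear" idea: hold the optimized portfolio, but exit
# to cash whenever a risk monitor (here, rolling volatility rather than
# the paper's multivariate VaR) signals excessive risk. Illustrative only.
import statistics

def risk_fear_equity(returns, window=5, vol_limit=0.015):
    """Track equity: invest when recent volatility is below the limit,
    otherwise sit in cash (zero return) until risk subsides."""
    equity = [1.0]
    for t, r in enumerate(returns):
        recent = returns[max(0, t - window):t]
        risky = len(recent) == window and statistics.pstdev(recent) > vol_limit
        equity.append(equity[-1] * (1.0 if risky else 1.0 + r))
    return equity

calm  = [0.002, 0.001, -0.001, 0.002, 0.001] * 4   # low-volatility regime
panic = [0.05, -0.06, 0.04, -0.07, 0.05, -0.08]    # crisis-like swings
returns = calm + panic

monitored = risk_fear_equity(returns)[-1]
unmanaged = 1.0
for r in returns:
    unmanaged *= 1.0 + r
```

In this toy series, the monitored portfolio steps aside early in the "crisis" and ends above its starting value, while the unmanaged portfolio rides the sell-off down, echoing the paper's 2008 observation.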

Keywords: comfort, financial crises, portfolio optimization, risk monitoring

Procedia PDF Downloads 514
9842 Structural Design Optimization of Reinforced Thin-Walled Vessels under External Pressure Using Simulation and Machine Learning Classification Algorithm

Authors: Lydia Novozhilova, Vladimir Urazhdin

Abstract:

An optimization problem for reinforced thin-walled vessels under uniform external pressure is considered. Conventional approaches to optimization generally start with pre-defined geometric parameters of the vessels and then employ analytic or numeric calculations and/or experimental testing to verify functionality, such as stability under the projected conditions. The proposed approach consists of two steps. First, the feasibility domain is identified in the multidimensional parameter space. Every point in the feasibility domain defines a design satisfying both geometric and functional constraints. Second, an objective function defined on this domain is formulated and optimized. The broader applicability of the suggested methodology is achieved by implementing the Support Vector Machine (SVM) classification algorithm of machine learning for identification of the feasible design region. Training data for the SVM classifier are obtained using the Simulation package of SOLIDWORKS®. Based on the data, the SVM algorithm produces a curvilinear boundary separating admissible and inadmissible sets of design parameters with maximal margin. Optimization of the vessel parameters in the feasibility domain is then performed using standard algorithms for constrained optimization. As an example, optimization of a ring-stiffened closed cylindrical thin-walled vessel with semi-spherical caps under high external pressure is implemented. As a functional constraint, the von Mises stress criterion is used, but any other stability constraint admitting mathematical formulation can be incorporated into the proposed approach. The suggested methodology has good potential for reducing design time when finding optimal parameters of thin-walled vessels under uniform external pressure.
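The two-step scheme can be sketched on a toy two-parameter vessel. A linear soft-margin SVM trained by sub-gradient descent on the hinge loss stands in for a full SVM package, and the feasibility rule, simulated training data, and mass objective are illustrative assumptions (in the paper, the training data come from SOLIDWORKS® Simulation runs):

```python
# Two-step sketch: (1) learn the feasibility boundary with a linear
# soft-margin SVM, (2) minimise an objective over designs the learned
# classifier admits. All rules and constants are illustrative.
import random

random.seed(1)

def truly_feasible(t, s):
    """Toy ground-truth rule: wall thickness t must exceed half the ring
    spacing s for the design to resist buckling (illustrative only)."""
    return t - 0.5 * s > 0.0

# Simulated training data standing in for simulation runs on candidates.
data = [(random.uniform(0, 2), random.uniform(0, 2)) for _ in range(200)]
labels = [1 if truly_feasible(t, s) else -1 for t, s in data]

# Step 1: linear soft-margin SVM, primal sub-gradient descent on hinge loss.
w, b, lam = [0.0, 0.0], 0.0, 0.01
for epoch in range(200):
    lr = 0.1 / (1 + epoch)
    for (t, s), y in zip(data, labels):
        if y * (w[0] * t + w[1] * s + b) < 1:   # hinge is active
            w[0] += lr * (y * t - lam * w[0])
            w[1] += lr * (y * s - lam * w[1])
            b += lr * y
        else:                                    # only regularisation
            w[0] -= lr * lam * w[0]
            w[1] -= lr * lam * w[1]

def predicted_feasible(t, s):
    return w[0] * t + w[1] * s + b > 0

acc = sum(predicted_feasible(t, s) == (y == 1)
          for (t, s), y in zip(data, labels)) / len(data)

# Step 2: minimise the objective (mass ~ thickness t) over a design grid,
# restricted to the learned feasibility region.
grid = [(t / 50, s / 50) for t in range(1, 101) for s in range(1, 101)]
best_t, best_s = min((p for p in grid if predicted_feasible(*p)),
                     key=lambda p: p[0])
```

The learned half-plane approximates the true feasibility boundary, and the constrained minimum lands on the thinnest wall the classifier still admits.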

Keywords: design parameters, feasibility domain, von Mises stress criterion, Support Vector Machine (SVM) classifier

Procedia PDF Downloads 314
9841 Energy Audit and Renovation Scenarios for a Historical Building in Rome: A Pilot Case Towards the Zero Emission Building Goal

Authors: Domenico Palladino, Nicolandrea Calabrese, Francesca Caffari, Giulia Centi, Francesca Margiotta, Giovanni Murano, Laura Ronchetti, Paolo Signoretti, Lisa Volpe, Silvia Di Turi

Abstract:

The aim of achieving a fully decarbonized building stock by 2050 stands as one of the most challenging issues within the spectrum of energy and climate objectives. Numerous strategies are imperative, particularly emphasizing the reduction and optimization of energy demand. Ensuring the high energy performance of buildings emerges as a top priority, with measures aimed at cutting energy consumption. Concurrently, it is imperative to decrease greenhouse gas emissions by using renewable energy sources for on-site energy production, thereby striving for an energy balance leading towards zero-emission buildings. Italy's building stock predominantly comprises ancient buildings, many of which hold historical significance and are subject to stringent preservation and conservation regulations. Attaining high levels of energy efficiency and reducing CO₂ emissions in such buildings poses a considerable challenge, given their unique characteristics and the imperative to adhere to principles of conservation and restoration. Additionally, conducting a meticulous analysis of these buildings' current state is crucial for accurately quantifying their energy performance and predicting the potential impacts of proposed renovation strategies on energy consumption reduction. Within this framework, the paper presents a pilot case in Rome, outlining a methodological approach for the renovation of historic buildings towards achieving the Zero Emission Building (ZEB) objective. The building has a mixed function, with offices, a conference hall, and an exposition area. The building envelope is made of historical and precious materials used as cladding, which must be preserved. A thorough understanding of the building's current condition serves as a prerequisite for analyzing its energy performance. This involves conducting comprehensive archival research, undertaking on-site diagnostic examinations to characterize the building envelope and its systems, and evaluating actual energy usage data derived from energy bills. An energy simulation and audit of the current state are the first step in the analysis. Subsequently, different renovation scenarios are proposed, encompassing advanced building techniques, to pinpoint the key actions necessary for improving mechanical systems, automation and control systems, and the integration of renewable energy production. These scenarios entail different levels of renovation, ranging from meeting minimum energy performance goals to achieving the highest possible energy efficiency level. The proposed interventions are meticulously analyzed and compared to ascertain the feasibility of attaining the Zero Emission Building objective. In conclusion, the paper provides valuable insights that can be extrapolated to inform a broader approach towards the energy-efficient refurbishment of historical buildings that may have limited potential for renovation of their building envelopes. By adopting a methodical and nuanced approach, it is possible to reconcile the imperative of preserving cultural heritage with the pressing need to transition towards a sustainable, low-carbon future.

Keywords: energy conservation and transition, energy efficiency in historical buildings, buildings energy performance, energy retrofitting, zero emission buildings, energy simulation

Procedia PDF Downloads 50
9840 Orthodontic Treatment Using CAD/CAM System

Authors: Cristiane C. B. Alves, Livia Eisler, Gustavo Mota, Kurt Faltin Jr., Cristina L. F. Ortolani

Abstract:

The correct positioning of brackets is essential for the success of orthodontic treatment. The indirect bonding technique has the main objective of eliminating the positioning errors that commonly occur with the direct bracket placement technique. The objective of this study is to demonstrate that the exact positioning of brackets is of extreme relevance for the success of the treatment. The present work reports the case of an adult female patient who attended the clinic with the complaint of having been in orthodontic treatment for more than 5 years without noticing any progress. On intra-oral clinical examination and documentation analysis, a Class III malocclusion, an anterior open bite, and the absence of all third molars and of the first upper and lower bilateral premolars were observed. For the treatment, the indirect bonding technique with self-ligating ceramic brackets was applied. The trays were prepared after intraoral digital scanning and the printing of models with a 3D printer. Brackets were positioned virtually using specialized software. After twelve months of treatment, correction of the malocclusion was observed, as well as closure of the anterior open bite. It is concluded that adequate and precise positioning of brackets is necessary for a successful treatment.

Keywords: anterior open-bite, CAD/CAM, orthodontics, malocclusion, angle class III

Procedia PDF Downloads 174
9839 Multiscale Computational Approach to Enhance the Understanding, Design and Development of CO₂ Catalytic Conversion Technologies

Authors: Agnieszka S. Dzielendziak, Lindsay-Marie Armstrong, Matthew E. Potter, Robert Raja, Pier J. A. Sazio

Abstract:

Reducing carbon dioxide, CO₂, is one of the greatest global challenges. Conversion of CO₂ for utilisation across the synthetic fuel, pharmaceutical, and agrochemical industries offers a promising option, yet requires significant research to understand the complex multiscale processes involved. Experimentally understanding and optimizing such processes at the catalytic site, while exploring their impact at the reactor scale, is too expensive. Computational methods offer significant insight and flexibility but require a more detailed multi-scale approach, which is a significant challenge in itself. This work introduces a computational approach which incorporates detailed catalytic models, taken from experimental investigations, into a larger-scale computational fluid dynamics framework. The reactor-scale species transport approach is modified near the catalytic walls to determine the influence of catalytic clustering regions. This coupling approach enables more accurate modelling of velocities, pressures, temperatures, species concentrations and near-wall surface characteristics, which will ultimately enable the impact of overall reactor design on chemical conversion performance to be assessed.

Keywords: catalysis, CCU, CO₂, multi-scale model

Procedia PDF Downloads 241
9838 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms

Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano

Abstract:

In this article, we deal with a variant of the classical course timetabling problem that has practical application in many areas of education. In particular, in this paper we are interested in high school remedial courses. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies. A student might be under-prepared in an entire course, or only in a part of it. The limited availability of funds, as well as the limited amount of time and teachers at their disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling, with the goal of ensuring the highest possible teaching quality while meeting the above-mentioned financial, time and resource constraints. Moreover, there are prerequisites between the teaching units that must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints inevitably increases the complexity of the mathematical model. Thus, solving it through a general-purpose solver is viable for small instances only, while solving real-life-sized instances of the model requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach, in which we make use of a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches, we perform extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%. On the other hand, the heuristic algorithm is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases achieves an improvement over the objective function value obtained by the MIP model. Such an improvement ranges between 18% and 66%.
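The flavour of a fast constructive procedure can be conveyed with a minimal sketch: teaching units are placed greedily into the earliest time slot at which their prerequisites are already scheduled and their teacher is free. The units, teachers, and slot count below are hypothetical illustrations, not the paper's instances or its exact heuristic:

```python
# Greedy constructive heuristic sketch for timetabling teaching units
# with prerequisites and single-teacher availability. Illustrative only.

def greedy_timetable(units, n_slots):
    """units: {name: (teacher, [prerequisite names])}.
    Returns {name: slot} or None if some unit cannot be placed."""
    schedule, busy = {}, set()            # busy holds (teacher, slot) pairs
    pending = dict(units)
    while pending:
        placed = False
        for name, (teacher, prereqs) in sorted(pending.items()):
            if any(p not in schedule for p in prereqs):
                continue                  # prerequisites not yet scheduled
            earliest = 1 + max((schedule[p] for p in prereqs), default=-1)
            for slot in range(earliest, n_slots):
                if (teacher, slot) not in busy:
                    schedule[name] = slot
                    busy.add((teacher, slot))
                    del pending[name]
                    placed = True
                    break
            if placed:
                break
        if not placed:
            return None                   # heuristic failed to place a unit
    return schedule

units = {
    "algebra_1": ("t1", []),
    "algebra_2": ("t1", ["algebra_1"]),
    "grammar_1": ("t2", []),
    "essay_1":   ("t2", ["grammar_1"]),
}
timetable = greedy_timetable(units, n_slots=4)
```

Any schedule it returns respects prerequisite ordering and teacher availability, giving the feasible starting solution that a refinement phase could then improve.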

Keywords: heuristic, MIP model, remedial course, school, timetabling

Procedia PDF Downloads 592
9837 Teaching English as a Second Language to Primary Students with Autism Spectrum Disorder

Authors: Puteri Zarina M. K., Haddi J. K., Zolkepli N., Shu M. H. B., Hosshan H., Saad M. A.

Abstract:

This paper provides an overview of the current state of ESL instruction for children with autism in Malaysia. Equal rights, independence, and active participation are guaranteed by the 2006 Convention on the Rights of Persons with Disabilities. Every child is entitled to receive education in an inclusive atmosphere that embraces diversity and ensures equal opportunity for all. The primary objective of the research was to investigate whether English as a Second Language (ESL) teachers employ distinct instructional methods and strategies when teaching children diagnosed with autism. A further objective was to assess the similarities in the challenges faced by teachers when teaching ESL to children with autism in Malaysia. The study aimed to increase understanding of the challenges faced by ESL teachers in teaching autistic students, and was structured as a qualitative study. A total of twelve (12) ESL teachers from selected primary schools in Malaysia were involved. The research findings depict the actual state of teaching ESL to autistic children and confirm the imperative need for additional support in order to facilitate the successful integration of these children into the educational system.

Keywords: autism spectrum disorder, ESL, inclusion, Malaysia, special educational needs

Procedia PDF Downloads 52
9836 Pitch Processing in Autistic Mandarin-Speaking Children with Hypersensitivity and Hypo-Sensitivity: An Event-Related Potential Study

Authors: Kaiying Lai, Suiping Wang, Luodi Yu, Yang Zhang, Pengmin Qin

Abstract:

Abnormalities in auditory processing are among the most commonly reported sensory processing impairments in children with Autism Spectrum Disorder (ASD). Tonal-language speakers with autism have enhanced neural sensitivity to pitch changes in pure tones. However, not all children with ASD exhibit the same performance in pitch processing, due to differences in auditory sensitivity. The current study aimed to examine auditory change detection in ASD with different auditory sensitivity. The K-means clustering method was adopted to classify ASD participants into two groups according to the auditory processing scores of the Sensory Profile: 11 autistic children with hypersensitivity (mean age = 11.36; SD = 1.46) and 18 with hypo-sensitivity (mean age = 10.64; SD = 1.89) participated in a passive auditory oddball paradigm designed to elicit the mismatch negativity (MMN) under the pure tone condition. Results revealed that, compared to hypersensitive autistic children, the children with hypo-sensitivity showed smaller MMN responses to pure tone stimuli. These results suggest that children with ASD with auditory hypersensitivity and hypo-sensitivity perform differently in processing pure tones, so neural responses to pure tones hold promise for predicting auditory sensitivity and informing targeted treatment in children with ASD.
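The participant-grouping step can be sketched with a one-dimensional K-means (k = 2) on auditory-processing scores; the scores below are hypothetical illustrations, not the study's Sensory Profile data:

```python
# Sketch of the grouping step: 1-D k-means (k = 2) on Sensory Profile
# auditory-processing scores, splitting hypersensitive from hypo-sensitive
# participants. The scores are hypothetical.

def kmeans_1d(values, k=2, iters=50):
    centers = [min(values), max(values)]      # simple initialisation
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

scores = [12, 14, 13, 15, 11, 30, 33, 31, 29, 32]   # hypothetical
centers, groups = kmeans_1d(scores)
```

The low-score and high-score participants separate into the two clusters, each summarised by its mean.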

Keywords: ASD, sensory profile, pitch processing, mismatch negativity, MMN

Procedia PDF Downloads 374
9835 The Effect of Particulate Matter on Cardiomyocyte Apoptosis Through Mitochondrial Fission

Authors: Tsai-chun Lai, Szu-ju Fu, Tzu-lin Lee, Yuh-Lien Chen

Abstract:

There is much evidence that exposure to fine particulate matter (PM) from air pollution increases the risk of cardiovascular morbidity and mortality. According to previous reports, PM in the air enters the respiratory tract, contacts the alveoli, and enters the blood circulation, leading to the progression of cardiovascular disease. PM pollution may also lead to cardiometabolic disturbances, increasing the risk of cardiovascular disease. The effects of PM on cardiac function and mitochondrial damage are currently unknown. We used mice and rat cardiomyocytes (H9c2) as animal and in vitro cell models, respectively, to simulate an air pollution environment using PM. These results indicate that the apoptosis-related factor PUMA (p53 upregulated modulator of apoptosis) is increased in mice treated with PM. Apoptosis was aggravated in cardiomyocytes treated with PM, as measured by TUNEL assay and Annexin V/PI staining. Western blot results showed that CASPASE3 was significantly increased and BCL2 (B-cell lymphoma 2) was significantly decreased under PM treatment. Exposure to PM also increases mitochondrial reactive oxygen species (ROS) production, as shown by MitoSOX Red staining. Furthermore, MitoTracker staining showed that PM treatment significantly shortened mitochondrial length, indicating mitochondrial fission. The expression of the mitochondrial fission-related proteins p-DRP1 (phosphorylated dynamin-related protein 1) and FIS1 (mitochondrial fission 1 protein) was significantly increased. Based on these results, exposure to PM impairs mitochondrial function and leads to cardiomyocyte apoptosis.

Keywords: particulate matter, cardiomyocyte, apoptosis, mitochondria

Procedia PDF Downloads 91
9834 Computational Fluid Dynamic Modeling of Mixing Enhancement by Stimulation of Ferrofluid under Magnetic Field

Authors: Neda Azimi, Masoud Rahimi, Faezeh Mohammadi

Abstract:

A computational fluid dynamics (CFD) simulation was performed to investigate the effect of ferrofluid stimulation on the hydrodynamic and mass transfer characteristics of two immiscible liquid phases in a Y-micromixer. The main purpose of this work was to develop a numerical model able to simulate the hydrodynamics of ferrofluid flow under a magnetic field and determine its effect on mass transfer characteristics. A uniform external magnetic field was applied perpendicular to the flow direction. The volume of fluid (VOF) approach was used to simulate the multiphase flow of the ferrofluid and the two immiscible liquids. The geometric reconstruction scheme (Geo-Reconstruct), based on piecewise linear interpolation (PLIC), was used for reconstruction of the interface in the VOF approach. The mass transfer rate was defined via an equation as a function of the mass concentration gradient of the transported species and added into the phase interaction panel using a user-defined function (UDF). The magnetic field was solved numerically by the Fluent MHD module, based on solving the magnetic induction equation. The CFD results were validated against experimental data, and good agreement was achieved, with a maximum relative error in extraction efficiency of about 7.52%. It was shown that ferrofluid actuation by a magnetic field can be considered an efficient mixing agent for liquid-liquid two-phase mass transfer in microdevices.
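The gradient-driven mass-transfer closure can be sketched in miniature: a per-cell source term proportional to a finite-difference concentration gradient, of the kind one might evaluate inside a solver user-defined function. The transfer coefficient, cell size, and concentration profile are illustrative assumptions (Fluent UDFs are written in C; Python is used here only to illustrate the arithmetic):

```python
# Sketch of a gradient-based interphase mass-transfer source term.
# Coefficient, grid spacing, and concentration field are illustrative.

K_M = 1e-4   # assumed mass-transfer coefficient, m/s
DX = 1e-5    # cell size, m

def mass_transfer_source(c, i):
    """Source term in cell i from a central-difference concentration
    gradient (one-sided at the domain ends)."""
    if i == 0:
        grad = (c[1] - c[0]) / DX
    elif i == len(c) - 1:
        grad = (c[-1] - c[-2]) / DX
    else:
        grad = (c[i + 1] - c[i - 1]) / (2 * DX)
    return -K_M * grad

c = [1.0, 0.8, 0.5, 0.2, 0.1]    # mass-fraction profile near the interface
sources = [mass_transfer_source(c, i) for i in range(len(c))]
```

With a decreasing concentration profile the gradient is negative, so the source term is positive: mass transfers down the concentration gradient.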

Keywords: CFD modeling, hydrodynamic, micromixer, ferrofluid, mixing

Procedia PDF Downloads 184
9833 Constructions of Linear and Robust Codes Based on Wavelet Decompositions

Authors: Alla Levina, Sergey Taranov

Abstract:

The classical approach to providing noise immunity and integrity of information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient algorithms for encoding and decoding information, but these codes concentrate their detection and correction abilities on certain error configurations. Robust codes, by contrast, can protect against any configuration of errors with a predetermined probability. This is accomplished by the use of perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are denoising of signals, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust code. The first class of robust code is based on the multiplicative inverse in a finite field. In the second robust code construction, the redundancy part is the cube of the information part. This paper also investigates the characteristics of the proposed robust and linear codes.
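The second robust-code construction can be sketched over a prime field GF(p) for simplicity (such codes are more commonly stated over extension fields GF(2^m)): the redundancy part of each codeword is the cube of the information part, so any fixed nonzero additive error goes undetected for only a small fraction of codewords. The field size and the error pair below are illustrative assumptions:

```python
# Sketch of a cubic robust code over GF(p): codeword = (x, x^3 mod p).
# A fixed additive error (ex, er) is masked only for the x that satisfy
# (x+ex)^3 = x^3 + er, a low-degree equation with few roots. Illustrative.

P = 257  # prime field size (illustrative)

def encode(x):
    """Codeword (x, x^3 mod P): information part plus cubic redundancy."""
    return (x, pow(x, 3, P))

def check(word):
    x, r = word
    return pow(x, 3, P) == r

def masking_fraction(ex, er):
    """Fraction of codewords for which the additive error (ex, er)
    goes undetected, i.e. (x+ex)^3 = x^3 + er holds in GF(P)."""
    undetected = sum(
        1 for x in range(P)
        if pow((x + ex) % P, 3, P) == (pow(x, 3, P) + er) % P)
    return undetected / P

frac = masking_fraction(3, 5)
```

Expanding (x+ex)^3 shows the masking condition is a quadratic in x when ex is nonzero, so at most 2 of the P codewords mask any such error, which is the "uniform protection" property in miniature.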

Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability

Procedia PDF Downloads 482
9832 The Classification Accuracy of Finance Data through Holder Functions

Authors: Yeliz Karaca, Carlo Cattani

Abstract:

This study focuses on the local Holder exponent as a measure of function regularity for time series related to finance data. The attributes of a finance dataset for 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 different continents (Asia, Europe, Australia, North America and South America) have been examined. These are the countries most affected by the attributes related to financial development, and the data cover the period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the development of finance for the countries identified. Our method comprises the following stages: (a) among the multifractal methods and Brownian motion Holder regularity functions (polynomial, exponential), significant and self-similar attributes have been identified; (b) the significant and self-similar attributes have been applied to Artificial Neural Network (ANN) algorithms (Feed Forward Back Propagation (FFBP) and Cascade Forward Back Propagation (CFBP)); (c) the classification accuracy outcomes have been compared with respect to the attributes that affect the countries' financial development. This study has enabled us to reveal, through the application of ANN algorithms, how the most significant attributes are identified within the relevant dataset via the Holder functions (polynomial and exponential).
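A local Holder exponent at a point can be estimated by regressing the logarithm of the function's oscillation in a shrinking ball against the logarithm of the ball's radius. The sketch below recovers the known exponent 0.5 of the function |t - t0|^0.5; the test function, radii, and sampling density are illustrative assumptions, not the study's estimator:

```python
# Sketch of a local Holder exponent estimate at t0: oscillation of f over
# [t0-r, t0+r] scales like r^h, so h is the least-squares slope of
# log(oscillation) versus log(r). Illustrative only.
import math

def holder_exponent(f, t0, radii):
    xs, ys = [], []
    for r in radii:
        # Oscillation: max - min of f over a fine sampling of [t0-r, t0+r].
        samples = [f(t0 + r * (k / 50 - 1)) for k in range(101)]
        xs.append(math.log(r))
        ys.append(math.log(max(samples) - min(samples)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

f = lambda t: abs(t - 1.0) ** 0.5          # known local exponent 0.5 at t0=1
h = holder_exponent(f, 1.0, [10 ** (-k) for k in range(1, 6)])
```

Applied to a financial time series (suitably interpolated), the same regression yields the pointwise regularity values that the study feeds, alongside other attributes, into the ANN classifiers.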

Keywords: artificial neural networks, finance data, Holder regularity, multifractals

Procedia PDF Downloads 237