Search results for: the soil variables
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7102

3532 Multi-Objective Optimization (Pareto Sets) and Multi-Response Optimization (Desirability Function) of Microencapsulation of Emamectin

Authors: Victoria Molina, Wendy Franco, Sergio Benavides, José M. Troncoso, Ricardo Luna, José R. Pérez-Correa

Abstract:

Emamectin Benzoate (EB) is a crystalline antiparasitic drug that belongs to the avermectin family. It is one of the most common treatments used in Chile to control Caligus rogercresseyi in Atlantic salmon. However, sea lice acquire resistance to EB when exposed to sublethal doses. The low solubility rate of EB and its degradation at the acidic pH of the fish digestive tract cause its slow absorption in the intestine. To protect EB from degradation and enhance its absorption, specific microencapsulation technologies must be developed. Amorphous solid dispersion techniques such as Spray Drying (SD) and Ionic Gelation (IG) seem adequate for this purpose. Recently, Soluplus® (SOL) has been used to increase the solubility rate of several drugs with characteristics similar to those of EB. In addition, alginate (ALG) is a polymer widely used in IG for biomedical applications. Regardless of the encapsulation technique, the quality of the obtained microparticles is evaluated with the following responses: yield (Y%), encapsulation efficiency (EE%), and loading capacity (LC%). In addition, it is important to know the percentage of EB released from the microparticles in gastric (GD%) and intestinal (ID%) digestions. In this work, we microencapsulated EB with SOL (EB-SD) and with ALG (EB-IG) using SD and IG, respectively. Quality microencapsulation responses and in vitro gastric and intestinal digestions at pH 3.35 and 7.8, respectively, were obtained. A central composite design was used to find the optimum microencapsulation variables (amount of EB, amount of polymer, and feed flow). In each formulation, the behavior of these variables was predicted with statistical models. Then, response surface methodology was used to find the combination of factors that allowed a lower EB release under gastric conditions while permitting a greater release during intestinal digestion. Two approaches were used to determine this.
These were the desirability approach (DA) and multi-objective optimization (MOO) with multi-criteria decision making (MCDM). Both microencapsulation techniques maintained the integrity of EB at acid pH, given the small amount of EB released in the gastric medium, while EB-IG microparticles showed greater EB release during intestinal digestion. For EB-SD, the optimal conditions obtained with MOO plus MCDM yielded a good compromise among the microencapsulation responses. In addition, using these conditions, it is possible to reduce microparticle costs through a 60% reduction in EB relative to the optimal amount proposed by DA. For EB-IG, the optimization techniques used (DA and MOO) yielded solutions with different advantages and limitations. Applying DA, costs can be reduced by 21%, while Y, GD, and ID showed values 9.5%, 84.8%, and 2.6% lower than the best condition. In turn, MOO yielded better microencapsulation responses, but at a higher cost. Overall, EB-SD with the operating conditions selected by MOO seems the best option, since a good compromise between costs and encapsulation responses was obtained.
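As an illustration of the desirability approach described above, the sketch below combines individual desirabilities into an overall score via their geometric mean. The response values and bounds are hypothetical, not taken from the study.

```python
import math

def d_larger_is_better(y, lo, hi, weight=1.0):
    """Desirability for a response to maximize (e.g. EE%, ID%)."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** weight

def d_smaller_is_better(y, lo, hi, weight=1.0):
    """Desirability for a response to minimize (e.g. GD%)."""
    if y <= lo:
        return 1.0
    if y >= hi:
        return 0.0
    return ((hi - y) / (hi - lo)) ** weight

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    if any(d == 0.0 for d in ds):
        return 0.0
    return math.exp(sum(math.log(d) for d in ds) / len(ds))

# Hypothetical responses for one formulation: Y%, EE%, GD% (minimize), ID% (maximize)
ds = [
    d_larger_is_better(80.0, 50.0, 100.0),   # yield
    d_larger_is_better(90.0, 60.0, 100.0),   # encapsulation efficiency
    d_smaller_is_better(10.0, 0.0, 40.0),    # gastric release (protect EB at acid pH)
    d_larger_is_better(70.0, 20.0, 100.0),   # intestinal release
]
D = overall_desirability(ds)
```

A formulation that drives any single response to zero desirability is rejected outright, which is the property that makes the geometric mean (rather than an arithmetic mean) the standard choice here.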

Keywords: microencapsulation, multiple decision-making criteria, multi-objective optimization, Soluplus®

Procedia PDF Downloads 132
3531 The Classical Conditioning Effect of Animated Spokes-Characters

Authors: Chia-Ching Tsai, Ting-Hsiu Chen

Abstract:

This paper adopted a 2×2 factorial design. One factor was experimental versus control condition; the other was the type of animated spokes-character, with an expert type at one level and an attractive type at the other. We used conditioning (control versus experimental) and type of animated spokes-character as independent variables, and brand attitude as the dependent variable, to examine the conditioning effect of the type of animated spokes-character on brand attitude. A total of 123 subjects participated in the experiment. The results showed that the conditioning group exhibited a significantly superior product-endorsement effect compared to the non-conditioning group, while the type of animated spokes-character had no significant impact on brand attitude.

Keywords: classical conditioning, animated spokes-character, brand attitude, factorial design

Procedia PDF Downloads 275
3530 The Effect of Non-Normality on CB-SEM and PLS-SEM Path Estimates

Authors: Z. Jannoo, B. W. Yap, N. Auchoybur, M. A. Lazim

Abstract:

The two common approaches to Structural Equation Modeling (SEM) are Covariance-Based SEM (CB-SEM) and Partial Least Squares SEM (PLS-SEM). There is much debate on the performance of CB-SEM and PLS-SEM for small sample sizes and when distributions are non-normal. This study evaluates the performance of CB-SEM and PLS-SEM under normality and non-normality conditions via simulation. Monte Carlo simulation in the R programming language was employed to generate data based on a theoretical model with one endogenous and four exogenous variables, each latent variable having three indicators. For normal distributions, CB-SEM estimates were found to be inaccurate for small sample sizes, while PLS-SEM could still produce path estimates. Meanwhile, for larger sample sizes, CB-SEM estimates have lower variability than those of PLS-SEM. Under non-normality, CB-SEM path estimates were inaccurate for small sample sizes; however, CB-SEM estimates are more accurate than those of PLS-SEM for sample sizes of 50 and above. The PLS-SEM estimates are not accurate unless the sample size is very large.
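The study generated data in R; the sketch below shows the same kind of data-generating step in Python for a population model with four exogenous latents, one endogenous latent, and three indicators per latent. The loading, path values, and the skewed error construction are illustrative assumptions, not the study's settings.

```python
import random

def simulate_sem(n, loadings=0.8, paths=(0.3, 0.3, 0.3, 0.3),
                 non_normal=False, seed=0):
    """Generate indicator data for 4 exogenous latents and 1 endogenous latent,
    each measured by 3 indicators (a hypothetical population model)."""
    rng = random.Random(seed)

    def err():
        e = rng.gauss(0, 1)
        # Centered, unit-variance chi-square(1) gives a skewed (non-normal) error.
        return (e * e - 1) / 2 ** 0.5 if non_normal else e

    data = []
    for _ in range(n):
        exo = [rng.gauss(0, 1) for _ in range(4)]
        eta = sum(p * x for p, x in zip(paths, exo)) + err()  # structural equation
        latents = exo + [eta]
        # Measurement model: 3 indicators per latent, 15 columns per row.
        row = [loadings * lv + (1 - loadings ** 2) ** 0.5 * err()
               for lv in latents for _ in range(3)]
        data.append(row)
    return data
```

Fitting CB-SEM and PLS-SEM to many such replicated samples, at varying `n` and with `non_normal` toggled, reproduces the kind of comparison the abstract reports.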

Keywords: CB-SEM, Monte Carlo simulation, normality conditions, non-normality, PLS-SEM

Procedia PDF Downloads 414
3529 Financial Assets Return, Economic Factors and Investor's Behavioral Indicators Relationships Modeling: A Bayesian Networks Approach

Authors: Nada Souissi, Mourad Mroua

Abstract:

The main purpose of this study is to examine the interaction between financial asset volatility, economic factors, and investor behavioral indicators related to both company and market stocks for the period from January 2000 to January 2020. Using multiple linear regression and Bayesian network modeling, the results show both positive and negative relationships among the investor psychology index, economic factors, and predicted stock market returns. We reveal that applying a discrete Bayesian network helps identify the various cause-and-effect relationships among the economic and financial variables and the psychology index.
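To illustrate the kind of cause-and-effect reasoning a discrete Bayesian network supports, the sketch below performs inference by enumeration on a toy network with invented conditional probability tables; the node names and numbers are hypothetical, not the study's estimates.

```python
from itertools import product

# Hypothetical discrete Bayesian network:  Econ -> Return <- Psych
P_econ = {"good": 0.5, "bad": 0.5}
P_psych = {"optimistic": 0.6, "pessimistic": 0.4}
P_return_up = {  # P(Return = "up" | Econ, Psych)
    ("good", "optimistic"): 0.8, ("good", "pessimistic"): 0.5,
    ("bad", "optimistic"): 0.4, ("bad", "pessimistic"): 0.1,
}

def posterior_psych(evidence_return="up"):
    """P(Psych | Return) by enumeration over the joint distribution."""
    score = {}
    for e, p in product(P_econ, P_psych):
        pr_up = P_return_up[(e, p)]
        pr = pr_up if evidence_return == "up" else 1 - pr_up
        score[p] = score.get(p, 0.0) + P_econ[e] * P_psych[p] * pr
    z = sum(score.values())
    return {k: v / z for k, v in score.items()}

post = posterior_psych("up")
```

Observing a rising return shifts belief toward an optimistic investor psychology, which is the directional "cause and effect" reading the network representation makes explicit.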

Keywords: Financial asset return predictability, Economic factors, Investor's psychology index, Bayesian approach, Probabilistic networks, Parametric learning

Procedia PDF Downloads 152
3528 Robust Variable Selection Based on Schwarz Information Criterion for Linear Regression Models

Authors: Shokrya Saleh A. Alshqaq, Abdullah Ali H. Ahmadini

Abstract:

The Schwarz information criterion (SIC) is a popular tool for selecting the best variables in regression datasets. However, SIC is defined using an unbounded estimator, namely, least squares (LS), which is highly sensitive to outlying observations, especially bad leverage points. A method for robust variable selection based on SIC for linear regression models is thus needed. This study investigates the robustness properties of SIC by deriving its influence function and proposes a robust SIC based on the MM-estimation scale. The aim of this study is to produce a criterion that can effectively select accurate models in the presence of vertical outliers and high leverage points. The advantages of the proposed robust SIC are demonstrated through a simulation study and an analysis of a real dataset.
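The sketch below shows the core idea: SIC depends on a residual scale, and swapping the LS scale for a robust one changes the criterion's sensitivity to outliers. For simplicity it uses a MAD-based scale as a stand-in for the MM-scale the paper proposes, on a simple regression with one injected outlier.

```python
import math
import statistics

def ols_residuals(x, y):
    """Residuals of a simple least-squares fit y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def sic(residuals, n_params, scale="ls"):
    """SIC = n*log(sigma^2) + p*log(n), with an LS or robust residual scale."""
    n = len(residuals)
    if scale == "ls":
        s2 = sum(r * r for r in residuals) / n
    else:
        # Robust stand-in: MAD-based scale instead of the paper's MM-scale.
        med = statistics.median(residuals)
        mad = statistics.median(abs(r - med) for r in residuals)
        s2 = (1.4826 * mad) ** 2
    return n * math.log(s2) + n_params * math.log(n)

# y = 2x with one gross vertical outlier at the last point
x = list(range(10))
y = [2.0 * xi for xi in x]
y[9] = 100.0
res = ols_residuals(x, y)
```

The LS scale is inflated by the single outlier, while the MAD-based scale stays close to the bulk of the residuals, so the robust criterion penalizes the contaminated fit far less, which is the behavior a robust SIC exploits when comparing candidate models.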

Keywords: influence function, robust variable selection, robust regression, Schwarz information criterion

Procedia PDF Downloads 142
3527 Analysis of Erosion Quantity in the Application of Conservation Techniques in the Ci Liwung Hulu Watershed

Authors: Zaenal Mutaqin

Abstract:

The level of erosion that occurs in the upstream watershed leads to limited infiltration, land degradation, and the silting of rivers and estuaries. One of the watersheds that has been degraded by land use is the upstream Ci Liwung. The high degradation occurring in the upstream Ci Liwung is indicated by the higher rate of erosion in the region, especially in agricultural areas. Here, agricultural cultivation refers to agricultural land to which conservation techniques have been applied. This study determines the quantity of erosion by reviewing the Hydrologic Response Units (HRUs) of cultivated agricultural land in the upstream Ci Liwung using the Soil and Water Assessment Tool (SWAT). The conservation techniques applied are terracing, agroforestry, and gulud terraces. It was concluded that agroforestry shows the best (lowest) erosion value compared with the other conservation techniques, with an erosion contribution of 25.22 tonnes/ha/year. The calibration results between the modeled and observed discharge (R² = 0.9014 and NS = 0.79) indicate that the model is acceptable and feasible to apply to the Ci Liwung Hulu watershed.
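The NS value cited above is the Nash-Sutcliffe efficiency, a standard goodness-of-fit measure for hydrological calibration. A minimal sketch, with hypothetical discharge values rather than the study's data:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the
    model is no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [12.0, 15.0, 30.0, 22.0, 18.0]   # hypothetical observed discharge (m3/s)
sim = [13.0, 14.0, 27.0, 23.0, 17.0]   # hypothetical simulated discharge (m3/s)
ns = nash_sutcliffe(obs, sim)
```

An NS of 0.79, as reported, is conventionally read as a good calibration for watershed models such as SWAT.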

Keywords: conservation, erosion, SWAT analysis, watershed

Procedia PDF Downloads 295
3526 Comparison of Two Transcranial Magnetic Stimulation Protocols on Spasticity in Multiple Sclerosis - Pilot Study of a Randomized and Blind Cross-over Clinical Trial

Authors: Amanda Cristina da Silva Reis, Bruno Paulino Venâncio, Cristina Theada Ferreira, Andrea Fialho do Prado, Lucimara Guedes dos Santos, Aline de Souza Gravatá, Larissa Lima Gonçalves, Isabella Aparecida Ferreira Moretto, João Carlos Ferrari Corrêa, Fernanda Ishida Corrêa

Abstract:

Objective: To compare two protocols of Transcranial Magnetic Stimulation (TMS) on quadriceps muscle spasticity in individuals diagnosed with Multiple Sclerosis (MS). Method: A clinical crossover study in which six adults diagnosed with MS and spasticity in the lower limbs were randomized to receive one session of high-frequency (≥5 Hz) and one of low-frequency (≤1 Hz) TMS over the motor cortex (M1) hotspot for the quadriceps muscle, with a one-week interval between sessions. Spasticity was assessed with the Ashworth scale, and the latency time (ms) of the motor evoked potential (MEP) and the central motor conduction time (CMCT) of the bilateral quadriceps were analyzed. Assessments were performed before and after each intervention. The difference between groups was analyzed using the Friedman test, with a significance level of 0.05. Results: All statistical analyses were performed using SPSS Statistics version 26, with significance established at p<0.05. Normality was verified with the Shapiro-Wilk test. Parametric data were represented as mean and standard deviation; non-parametric variables as median and interquartile range; and categorical variables as frequency and percentage. There was no clinical change in quadriceps spasticity assessed with the Ashworth scale for the 1 Hz (p=0.813) or 5 Hz (p=0.232) protocols for either limb. Motor evoked potential latency time: in the 5 Hz protocol, there was no significant change on the contralateral side from pre- to post-treatment (p>0.05), while on the ipsilateral side there was a decrease in latency time of 0.07 seconds (p<0.05); in the 1 Hz protocol, there was an increase of 0.04 seconds in latency time (p<0.05) on the side contralateral to the stimulus, and on the ipsilateral side a decrease of 0.04 seconds (p<0.05), with a significant difference between the contralateral (p=0.007) and ipsilateral (p=0.014) groups.
Central motor conduction time: in the 1 Hz protocol, there was no change on either the contralateral (p>0.05) or ipsilateral (p>0.05) side. In the 5 Hz protocol, there was a small decrease in conduction time on the contralateral side (p<0.05) and a decrease of 0.6 seconds on the ipsilateral side (p<0.05), with a significant difference between groups (p=0.019). Conclusion: A single high- or low-frequency session does not change spasticity, but when the low-frequency protocol was performed, latency time increased on the stimulated side and decreased on the non-stimulated side, suggesting that inhibiting the motor cortex increases cortical excitability on the opposite side.

Keywords: multiple sclerosis, spasticity, motor evoked potential, transcranial magnetic stimulation

Procedia PDF Downloads 92
3525 A Formal Microlectic Framework for Biological Circularchy

Authors: Ellis D. Cooper

Abstract:

“Circularchy” is supposed to be an adjustable formal framework with enough expressive power to articulate biological theory about Earthly Life in the sense of multi-scale biological autonomy constrained by non-equilibrium thermodynamics. “Formal framework” means specifically a multi-sorted first-order theory with equality (for each sort). Philosophically, such a theory is one kind of “microlect,” which means a “way of speaking” (or, more generally, a “way of behaving”) for overtly expressing a “mental model” of some “referent.” Other kinds of microlect include “natural microlect,” “diagrammatic microlect,” and “behavioral microlect,” with examples such as “political theory,” “Euclidean geometry,” and “dance choreography,” respectively. These are all describable in terms of a vocabulary conforming to grammar. As aspects of human culture, they are possibly reminiscent of Ernst Cassirer’s idea of “symbolic form;” as vocabularies, they are akin to Richard Rorty’s idea of “final vocabulary” for expressing a mental model of one’s life. A formal microlect is presented by stipulating sorts, variables, calculations, predicates, and postulates. Calculations (a.k.a., “terms”) may be composed to form more complicated calculations; predicates (a.k.a., “relations”) may be logically combined to form more complicated predicates; and statements (a.k.a., “sentences”) are grammatically correct expressions which are true or false. Conclusions are statements derived using logical rules of deduction from postulates, other assumed statements, or previously derived conclusions. A circularchy is a formal microlect constituted by two or more sub-microlects, each with its distinct stipulations of sorts, variables, calculations, predicates, and postulates. Within a sub-microlect some postulates or conclusions are equations which are statements that declare equality of specified calculations.
An equational bond between an equation in one sub-microlect and an equation in either the same sub-microlect or in another sub-microlect is a predicate that declares equality of symbols occurring in a side of one equation with symbols occurring in a side of the other equation. Briefly, a circularchy is a network of equational bonds between sub-microlects. A circularchy is solvable if there exist solutions for all equations that satisfy all equational bonds. If a circularchy is not solvable, then a challenge would be to discover the obstruction to solvability and then conjecture what adjustments might remove the obstruction. Adjustment means changes in stipulated ingredients (sorts, etc.) of sub-microlects, or changes in equational bonds between sub-microlects, or introduction of new sub-microlects and new equational bonds. A circularchy is modular insofar as each sub-microlect is a node in a network of equational bonds. Solvability of a circularchy may be conjectured. Efforts to prove solvability may be thwarted by a counter-example or may lead to the construction of a solution. An automated theorem-proof assistant would likely be necessary for investigating a substantial circularchy, such as one purported to represent Earthly Life. Such investigations (chains of statements) would be concurrent with and no substitute for simulations (chains of numbers).
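The solvability notion above can be made concrete with a deliberately tiny toy: two invented sub-microlects, each contributing one equation, joined by one equational bond, with solvability checked by brute-force search over a small integer domain. Everything here (the equations, the bond, the domain) is a hypothetical illustration, not part of the paper's framework itself.

```python
from itertools import product

# Two hypothetical sub-microlects, each a set of equations over its own
# variables, expressed as predicates over a candidate assignment.
eq_A = lambda v: v["x"] == 2 * v["a"]      # sub-microlect A:  x = 2a
eq_B = lambda v: v["y"] == v["b"] + 3      # sub-microlect B:  y = b + 3
bond = lambda v: v["x"] == v["y"]          # equational bond:  x ≡ y

def solve(domain=range(0, 10)):
    """Brute-force search for an assignment satisfying all equations and bonds;
    returns None when the circularchy is not solvable over this domain."""
    for a, x, b, y in product(domain, repeat=4):
        v = {"a": a, "x": x, "b": b, "y": y}
        if eq_A(v) and eq_B(v) and bond(v):
            return v
    return None

solution = solve()
```

Shrinking the domain removes the solution, mirroring the paper's point that an obstruction to solvability prompts adjustment of the stipulated ingredients; a real investigation would use a theorem-proof assistant rather than search.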

Keywords: autonomy, first-order theory, mathematics, thermodynamics

Procedia PDF Downloads 221
3524 Exergy: An Effective Tool to Quantify Sustainable Development of Biodiesel Production

Authors: Mahmoud Karimi, Golmohammad Khoobbakht

Abstract:

This study focuses on the exergy flow analysis of the transesterification of waste cooking oil (WCO) with methanol, to decrease the consumption of materials and energy and promote the use of renewable resources. The exergy analysis is based on the thermodynamic performance parameters, namely exergy destruction and exergy efficiency, to investigate the effects of variable parameters on the renewability of transesterification. The experimental variables were the methanol-to-WCO ratio, catalyst concentration, and reaction temperature in the transesterification reaction. The optimum condition, with a yield of 90.2% and an exergy efficiency of 95.2%, was obtained at a methanol-to-oil molar ratio of 8:1 and 1 wt.% KOH at 55 °C. In this condition, the total waste exergy was found to be 45.4 MJ per kg of biodiesel produced. The high yield at the optimal condition resulted in the high exergy efficiency of the transesterification of WCO with methanol.
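The two reported performance parameters are related by a simple exergy balance: efficiency is useful exergy out over exergy in, and the difference is the destroyed (waste) exergy. The sketch below uses hypothetical exergy flows chosen so that the outputs reproduce the abstract's reported 95.2% and 45.4 MJ figures; the individual flow values themselves are not from the study.

```python
def exergy_efficiency(exergy_products, exergy_inputs):
    """Exergy efficiency = useful exergy out / exergy in;
    the destroyed (waste) exergy is the balance of the two."""
    eff = exergy_products / exergy_inputs
    destroyed = exergy_inputs - exergy_products
    return eff, destroyed

# Hypothetical exergy flows (MJ per kg biodiesel), back-solved to match
# the reported 95.2% efficiency and 45.4 MJ waste exergy.
eff, waste = exergy_efficiency(exergy_products=898.0, exergy_inputs=943.4)
```
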

Keywords: biodiesel, exergy, thermodynamic analysis, transesterification, waste cooking oil

Procedia PDF Downloads 195
3523 Modeling of Digital and Settlement Consolidation of Soil under Oedometer

Authors: Yu-Lin Shen, Ming-Kuen Chang

Abstract:

In addition to a considerable amount of machinery and equipment, intricate transmission pipelines exist in petrochemical plants. Long-term corrosion may lead to pipeline thinning and rupture, causing serious safety concerns. With advances in non-destructive testing technology, rapid and long-range ultrasonic detection techniques are often used for pipeline inspection. EMAT detects without couplant; it is a non-contact ultrasonic technique suitable for inspecting elevated-temperature or roughened pipeline surfaces. In this study, we prepared artificial defects in a pipeline for Electromagnetic Acoustic Transducer (EMAT) testing to survey the relationship between defect location and sizing and the EMAT signal. It was found that the EMAT signal amplitude exhibited greater attenuation with larger defect depth and length. In addition, a bigger flat hole diameter produced greater amplitude attenuation. In summary, the signal amplitude attenuation of EMAT was affected by the defect depth, defect length, and hole diameter.

Keywords: EMAT, artificial defect, NDT, ultrasonic testing

Procedia PDF Downloads 334
3522 Naphtha Catalytic Reforming: Modeling and Simulation of the Unit

Authors: Leal Leonardo, Pires Carlos Augusto de Moraes, Casiraghi Magela

Abstract:

In this work, the catalytic reforming process was modeled and simulated comprehensively, considering all the equipment that influences operating performance. A semi-regenerative reformer was considered, with four reactors in series interspersed with four furnaces, two heat exchangers, one product separator, and one recycle compressor. A simplified reaction system was considered, involving only ten chemical compounds related through five reactions. The process considered was applied to aromatics production (benzene, toluene, and xylene). The models developed for the various equipment were interconnected in a simulator consisting of a computer program written in FORTRAN 77. The simulation of the global model representing the reformer unit achieved results compatible with those in the literature. It was then possible to study the effects of operational variables on product concentrations and on the performance of the unit equipment.
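The modular structure described (reactors in series, each preceded by a furnace, with unit models interconnected in a simulator) can be sketched as a pipeline of unit operations. The kinetics below (a single lumped naphthenes-to-aromatics conversion per bed, with a fixed endothermic temperature drop) are invented placeholders, far simpler than the paper's ten-compound, five-reaction system.

```python
def furnace(stream, t_out=500.0):
    """Reheat the stream to the reactor inlet temperature (°C)."""
    stream = dict(stream)
    stream["T"] = t_out
    return stream

def reactor(stream, conversion=0.25, dT=-40.0):
    """One catalytic bed: lumped naphthenes -> aromatics, endothermic drop."""
    stream = dict(stream)
    reacted = conversion * stream["naphthenes"]
    stream["naphthenes"] -= reacted
    stream["aromatics"] += reacted
    stream["T"] += dT
    return stream

def reformer_train(feed, n_beds=4):
    """Semi-regenerative layout: furnace + reactor, four times in series."""
    s = dict(feed)
    for _ in range(n_beds):
        s = furnace(s)
        s = reactor(s)
    return s

out = reformer_train({"naphthenes": 40.0, "aromatics": 10.0, "T": 500.0})
```

Each unit takes a stream and returns a transformed stream, so units can be rearranged or replaced independently; this is the same interconnection idea the FORTRAN 77 simulator implements for the full equipment set.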

Keywords: catalytic reforming, modeling, simulation, petrochemical engineering

Procedia PDF Downloads 518
3521 Empirical and Indian Automotive Equity Portfolio Decision Support

Authors: P. Sankar, P. James Daniel Paul, Siddhant Sahu

Abstract:

A brief review of the empirical studies on stock market decision support indicates that they are at the threshold of validating the accuracy of traditional models against fuzzy, artificial neural network, and decision tree approaches. Many researchers have attempted to compare these models using various data sets worldwide. However, the research community has yet to reach conclusive confidence in the models that have emerged. This paper uses automotive sector stock prices from the National Stock Exchange (NSE), India, and analyzes them for intra-sectoral support for stock market decisions. The study identifies the significant variables, and their lags, which affect stock prices, using OLS analysis and decision tree classifiers.
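The lag-identification step can be sketched as follows: build lagged versions of one stock's series and score each lag against another stock in the sector. The sketch uses plain lagged correlation as a simple stand-in for the paper's OLS and decision-tree analysis, and the price series are invented.

```python
def lagged_corr(series, driver, max_lag=3):
    """Correlation between `series` and lagged values of `driver`
    (another stock in the same sector), for lags 1..max_lag."""
    def corr(u, v):
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        su = sum((a - mu) ** 2 for a in u) ** 0.5
        sv = sum((b - mv) ** 2 for b in v) ** 0.5
        return cov / (su * sv)
    return {k: corr(series[k:], driver[:-k]) for k in range(1, max_lag + 1)}

# Hypothetical prices: stock B follows stock A with a lag of one period.
a = [1.0, 2.0, 3.0, 2.0, 4.0, 5.0, 4.0, 6.0]
b = [0.0] + a[:-1]
best = max(lagged_corr(b, a).items(), key=lambda kv: kv[1])
```

The lag with the highest score is the candidate "significant lag" that would then be fed into the OLS regression or the decision tree as a feature.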

Keywords: Indian automotive sector, stock market decisions, equity portfolio analysis, decision tree classifiers, statistical data analysis

Procedia PDF Downloads 486
3520 Neural Network Based Control Algorithm for Inhabitable Spaces Applying Emotional Domotics

Authors: Sergio A. Navarro Tuch, Martin Rogelio Bustamante Bello, Leopoldo Julian Lechuga Lopez

Abstract:

In recent years, Mexico’s population has seen a rise in different negative physiological and mental states. Two main consequences of this problem are deficient work performance and high levels of stress, generating an important impact on a person’s physical, mental, and emotional health. Several approaches, such as the use of audiovisual stimuli to induce emotions and modify a person’s emotional state, can be applied in an effort to decrease these negative effects. With the use of different non-invasive physiological sensors such as EEG, luminosity, and face recognition, we gather information on the subject’s current emotional state. In a controlled environment, a subject is shown a series of selected images from the International Affective Picture System (IAPS) in order to induce a specific set of emotions and obtain information from the sensors. The raw data obtained are statistically analyzed in order to filter only the specific groups of information that relate to the subject’s emotions and the current values of the physical variables in the controlled environment, such as luminosity, RGB light color, temperature, oxygen level, and noise. Finally, a neural-network-based control algorithm is given the data obtained in order to feed back the system and automate the modification of the environment variables and the audiovisual content shown, in the hope that these changes can positively alter the subject’s emotional state. During the research, it was found that light color was directly related to the type of impact generated by the audiovisual content on the subject’s emotional state. Red illumination increased the impact of violent images, and green illumination along with relaxing images decreased the subject’s levels of anxiety. Specific differences between men and women were found as to which type of images generated a greater impact in either gender.
The population sample was mainly constituted of college students, whose data analysis showed decreased sensitivity to violence toward humans. Despite the early stage of the control algorithm, the results obtained from the population sample give us better insight into the possibilities of emotional domotics and the applications that can be created to improve performance in people’s lives. The objective of this research is to create a positive impact through the application of technology to everyday activities; nonetheless, an ethical problem arises, since this can also be applied to control a person’s emotions and shift their decision making.
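The learned interaction reported above (red illumination amplifying the impact of violent content) is the kind of mapping the control network must capture. As a minimal stand-in for that network, the sketch below trains a single logistic neuron on invented labels where high impact occurs only when red light and violent content co-occur; the real system uses a larger network over sensor features.

```python
import math
import random

def train_logistic(samples, labels, epochs=500, lr=0.5, seed=1):
    """SGD on log-loss for one logistic neuron (a stand-in for the network)."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(len(samples[0]))]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Features: (red illumination on, violent content shown) -> label: high impact.
X = [(1, 1), (1, 0), (0, 1), (0, 0)]
y = [1, 0, 0, 0]  # hypothetical: high impact only when both co-occur
w, b = train_logistic(X, y)
predict = lambda x: 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
```

Once such a mapping is learned, the controller can invert it: to reduce predicted impact, it lowers the feature it controls (the red illumination) while the content plays.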

Keywords: data analysis, emotional domotics, performance improvement, neural network

Procedia PDF Downloads 143
3519 A Seismic Study on the Settlement of Superstructures Due to Tunnel Construction

Authors: Seyed Abolhasan Naeini, Saeideh Mohammadi

Abstract:

Rapid urban development leads to the construction of urban tunnels for transport. The passage of tunnels under surface structures and utilities changes the site conditions and hence alters the dynamic response of the surface structures. Therefore, in this study, the effect of the tunnel-superstructure interaction on the site response is investigated numerically. For this purpose, Fast Lagrangian Analysis of Continua (FLAC 2D) is used, and the stratification and properties of the soil layers are selected based on line No. 7 of the Tehran subway. The superstructure is modeled both as an equivalent surcharge and as the actual structure, and the results are compared. The comparison shows that considering the structure's geometry is necessary for dynamic analysis, as it changes the displacements and accelerations. Consequently, the geometry of the superstructure should be modeled completely instead of applying an equivalent load. The effect of tunnel diameter and depth on the settlement of superstructures is also studied. Results show that as the tunnel depth and diameter grow, the settlements increase considerably.

Keywords: tunnel, FLAC2D, settlement, dynamic analysis

Procedia PDF Downloads 133
3518 Removal of Per- and Polyfluoroalkyl Substances (PFASs) Contaminants from the Aqueous Phase Using Chitosan Beads

Authors: Rahim Shahrokhi, Junboum Park

Abstract:

Per- and polyfluoroalkyl substances (PFASs) are environmentally persistent halogenated hydrocarbons that have been widely used in many industrial and commercial applications. Recently, the contamination of soil and groundwater due to the ubiquity of PFASs in the environment has raised great concern. Adsorption is one of the most promising technologies for PFAS removal. Chitosan is a biopolymer with abundant amine and hydroxyl functional groups, which make it a good adsorbent. This study tried to enhance the adsorption capacity of chitosan by grafting more amine functional groups onto its surface for the removal of two long-chain (PFOA and PFOS) and two short-chain (PFBA and PFBS) PFAS substances from the aqueous phase. A series of batch adsorption tests was performed to evaluate the adsorption capacity of the sorbents, which were also characterized by SEM, FT-IR, zeta potential, and XRD tests. The results demonstrated that both the plain and grafted chitosan beads have good potential for adsorbing short- and long-chain PFASs from the aqueous phase.

Keywords: PFAS, chitosan beads, adsorption, grafted chitosan

Procedia PDF Downloads 67
3517 The Fiscal-Monetary Policy and Economic Growth in Algeria: VECM Approach

Authors: K. Bokreta, D. Benanaya

Abstract:

The objective of this study is to examine the relative effectiveness of monetary and fiscal policy in Algeria, using the econometric modelling techniques of cointegration and vector error correction modelling to analyse and draw policy inferences. The chosen fiscal policy variables are government expenditure and net taxes on products, while the effect of monetary policy is represented by the inflation rate and the official exchange rate. From the results, we find that in the long run, the impact of government expenditure is positive, while the effect of taxes on growth is negative. Additionally, we find that the inflation rate has little effect on GDP per capita, and the impact of the exchange rate is insignificant. We conclude that fiscal policy is more powerful than monetary policy in promoting economic growth in Algeria.
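The error-correction mechanism at the heart of a VECM can be sketched on synthetic data: one series error-corrects toward a long-run relationship with another, and the adjustment speed is recovered by regressing the differenced series on the lagged disequilibrium. This is a single-equation sketch with the cointegrating vector assumed known; the study's full VECM would be estimated jointly (e.g. via Johansen's procedure).

```python
import random

def simulate_cointegrated(n=400, alpha=-0.5, beta=1.0, seed=7):
    """Synthetic pair: x is a random walk, y error-corrects toward beta*x."""
    rng = random.Random(seed)
    x, y = [0.0], [0.0]
    for _ in range(n - 1):
        x.append(x[-1] + rng.gauss(0, 1))
        ect = y[-1] - beta * x[-2]          # previous-period disequilibrium
        y.append(y[-1] + alpha * ect + rng.gauss(0, 0.2))
    return x, y

def estimate_alpha(x, y, beta=1.0):
    """OLS of dy_t on the lagged error-correction term y_{t-1} - beta*x_{t-1}."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    ect = [y[t - 1] - beta * x[t - 1] for t in range(1, len(y))]
    m_e = sum(ect) / len(ect)
    m_d = sum(dy) / len(dy)
    num = sum((e - m_e) * (d - m_d) for e, d in zip(ect, dy))
    den = sum((e - m_e) ** 2 for e in ect)
    return num / den

alpha_hat = estimate_alpha(*simulate_cointegrated())
```

A significantly negative adjustment coefficient is what licenses long-run inferences like the expenditure and tax effects reported above.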

Keywords: economic growth, monetary policy, fiscal policy, VECM

Procedia PDF Downloads 312
3516 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

There exists a plethora of methods in the scientific literature which tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc. and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority being classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted loss for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal.
A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, of which we mention the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score, and Kolmogorov-Smirnov statistic, respectively. Finally, we obtain encouraging results, which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
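The expression-tree representation, its pre-order flattening, and a point mutation can be sketched in a few lines. Python tuples stand in for the C# LINQ expression trees the paper actually uses, and the operator set, variable names, and mutation rule are simplified illustrations.

```python
import random

OPS = {"add": lambda a, b: a + b,
       "sub": lambda a, b: a - b,
       "mul": lambda a, b: a * b}

def evaluate(node, env):
    """Recursively evaluate a tuple-encoded expression tree."""
    if isinstance(node, tuple):
        op, left, right = node
        return OPS[op](evaluate(left, env), evaluate(right, env))
    return env.get(node, node)  # leaf: variable name or numeric constant

def flatten(node):
    """Pre-order traversal, the flat form the mutation/crossover operators use."""
    if isinstance(node, tuple):
        op, l, r = node
        return [op] + flatten(l) + flatten(r)
    return [node]

def mutate(node, rng, var_names=("age", "duration")):
    """Point mutation: descend randomly, then replace a leaf with another
    variable or a small constant (illustrative rule)."""
    if isinstance(node, tuple):
        op, l, r = node
        if rng.random() < 0.5:
            return (op, mutate(l, rng, var_names), r)
        return (op, l, mutate(r, rng, var_names))
    return rng.choice(list(var_names) + [rng.randint(0, 9)])

# Hypothetical evolved formula:  score = age * 2 + duration
tree = ("add", ("mul", "age", 2), "duration")
score = evaluate(tree, {"age": 30, "duration": 12})
```

A variable that never appears as a leaf in the evolved tree is exactly a discarded client property, which is the dimensionality-reduction side effect noted above.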

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 120
3515 The Factors That Determine the Content of Gender and Sexuality Education Among Adolescents in China

Authors: Yixiao Tang

Abstract:

The risks of adolescents being exposed to sexually transmitted diseases (STDs) and engaging in unsafe sexual practices are increasing. Providing adolescents with appropriate sex education is therefore necessary and significant, considering that they are at a stage of life exploration and risk-taking. However, in delivering sex education, the contents and instruction methods are usually discussed with contextual differences. In the Chinese context, socially prejudiced perceptions of homosexuality can be attributed to traditional Chinese Confucian philosophy, which has dominated Chinese education for thousands of years. In China, students rarely receive adequate information about HIV, STDs, the use of contraceptives, pregnancy, and other sexually related topics in their formal education. Against this Confucian cultural background, this essay analyzes the variables that determine the subject matter of sex education for adolescents and then discusses how this cultural form affects social views and policy on sex education.

Keywords: homosexuality education, adolescent, China, education policy

Procedia PDF Downloads 79
3514 A Strategy for the Application of Second-Order Monte Carlo Algorithms to Petroleum Exploration and Production Projects

Authors: Obioma Uche

Abstract:

Due to the recent volatility in oil & gas prices as well as increased development of non-conventional resources, it has become even more essential to critically evaluate the profitability of petroleum prospects prior to making any investment decisions. Traditionally, simple Monte Carlo (MC) algorithms have been used to randomly sample probability distributions of economic and geological factors (e.g. price, OPEX, CAPEX, reserves, productive life, etc.) in order to obtain probability distributions for profitability metrics such as Net Present Value (NPV). In recent years, second-order MC algorithms have been shown to offer an advantage over simple MC techniques due to the added consideration of uncertainties associated with the probability distributions of the relevant variables. Here, a strategy for the application of the second-order MC technique to a case study is demonstrated to analyze its effectiveness as a tool for portfolio management.
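The two-level structure described above can be sketched as follows. This is a minimal illustration of a second-order Monte Carlo for NPV, not the paper's actual model: the outer loop samples uncertain distribution parameters (here, the mean oil price), while the inner loop performs ordinary first-order sampling given those parameters. All figures (prices, OPEX, CAPEX, productive life) are assumptions for illustration only.

```python
import random

def npv(cash_flows, rate):
    """Discount a list of yearly cash flows at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def second_order_npv(outer=200, inner=500, seed=42):
    random.seed(seed)
    mean_npvs = []
    for _ in range(outer):
        # Outer level: uncertainty about the price distribution itself.
        mean_price = random.gauss(70.0, 10.0)        # $/bbl, assumed
        inner_npvs = []
        for _ in range(inner):
            # Inner level: first-order sampling given the sampled parameters.
            price = random.gauss(mean_price, 5.0)
            production = random.uniform(0.9, 1.1)    # MMbbl/yr, assumed
            opex, capex = 20.0, 150.0                # $MM, assumed
            flows = [price * production - opex] * 10  # 10 productive years
            inner_npvs.append(npv(flows, 0.10) - capex)
        mean_npvs.append(sum(inner_npvs) / inner)
    return mean_npvs

results = second_order_npv()
```

The spread of `results` reflects parameter uncertainty, which a simple first-order Monte Carlo collapses into a single distribution.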

Keywords: Monte Carlo algorithms, portfolio management, profitability, risk analysis

Procedia PDF Downloads 340
3513 Agent-Based Modeling of IoT Applications by Using Software Product Line

Authors: Asad Abbas, Muhammad Fezan Afzal, Muhammad Latif Anjum, Muhammad Azmat

Abstract:

The Internet of Things (IoT) links physical objects that interact over the internet. IoT applications allow equipment to be handled and operated according to environmental needs in domains such as transportation and healthcare. IoT devices are linked together via a number of agents that act as middlemen for communication. The operation of a heat sensor, for example, differs indoors and outdoors because agent applications work with environmental variables. In this article, we suggest using a Software Product Line (SPL) to model the features of IoT agents and applications on an XML basis. XML-based feature modelling handles contextual variability within the same application domain and increases the reusability of features. For the purpose of managing contextual variability, we have adopted XML for modelling IoT applications, agents, and internet-connected devices.
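As a minimal sketch of the idea, assuming an invented feature model for a heat-sensor agent, an XML document can declare mandatory and optional features, and an application can select the variant that fits its context. The feature names and structure below are illustrative, not the authors' schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML feature model for one IoT agent; names are invented.
FEATURE_MODEL = """
<agent name="heat-sensor-agent">
  <feature name="sensing" mandatory="true"/>
  <feature name="indoor-calibration" mandatory="false"/>
  <feature name="outdoor-calibration" mandatory="false"/>
</agent>
"""

def select_features(xml_text, context):
    """Keep mandatory features plus optional ones enabled for this context."""
    root = ET.fromstring(xml_text)
    chosen = []
    for f in root.findall("feature"):
        if f.get("mandatory") == "true" or f.get("name") in context:
            chosen.append(f.get("name"))
    return chosen

indoor = select_features(FEATURE_MODEL, {"indoor-calibration"})
```

Selecting the indoor context keeps the mandatory sensing feature plus the indoor calibration, while an outdoor variant would reuse the same model with a different context set.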

Keywords: IoT agents, IoT applications, software product line, feature model, XML

Procedia PDF Downloads 98
3512 Parental Rejection and Psychological Adjustment among Adolescents: Does the Peer Rejection Mediate?

Authors: Sultan Shujja, Farah Malik

Abstract:

The study examined the mediating role of peer rejection in the direct relationship between parental rejection and psychological adjustment among adolescents. Researchers used self-report measures, i.e., the Parental Acceptance-Rejection Questionnaire (PARQ), the Children's Rejection Sensitivity Questionnaire (CRSQ), and the Personality Assessment Questionnaire (PAQ), to assess perceptions of parent-peer rejection and psychological adjustment among adolescents (14-18 years). Findings revealed that peer rejection did not mediate the relationship between parental rejection and psychological adjustment, whereas parental rejection emerged as a strong predictor when demographic variables were statistically controlled. On average, girls were psychologically less adjusted than boys. Despite an equal perception of peer rejection, girls anticipated peer rejection more anxiously than boys did. It is suggested that peer influence on adolescents, specifically girls, should not be underestimated.
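For readers unfamiliar with mediation testing, the indirect effect the study examines can be sketched as the product of two regression slopes, in the spirit of the Baron and Kenny decomposition. The data below are synthetic illustrations, not the study's questionnaire scores, and path b is simplified to a single-predictor slope rather than one controlling for the predictor.

```python
# Toy mediation sketch: indirect effect = a * b. Data are invented.
def slope(x, y):
    """OLS slope of y on x (single predictor, with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

parental = [1, 2, 3, 4, 5]           # predictor: parental rejection
peer = [1.1, 2.1, 2.9, 4.2, 4.9]     # candidate mediator: peer rejection
adjust = [2, 4, 6, 8, 10]            # outcome: maladjustment score

a = slope(parental, peer)    # path a: predictor -> mediator
b = slope(peer, adjust)      # path b: mediator -> outcome (simplified)
indirect = a * b             # estimated indirect (mediated) effect
```

A null mediation finding, as reported in the abstract, corresponds to an indirect effect that is not significantly different from zero once tested (e.g., with a Sobel test or bootstrapping).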

Keywords: peer relationships, parental perception, psychological adjustment, applied psychology

Procedia PDF Downloads 514
3511 Studies and Full Scale Tests for the Development of a Ravine Filling with a Depth of about 12.00m

Authors: Dana Madalina Pohrib, Elena Irina Ciobanu

Abstract:

In compaction works, the most often used codes and standards are those for road embankments, which refer to a maximum filling height of 3.00 m. When filling to a height greater than 3.00 m, such codes are no longer valid, and their application may lead to technical difficulties in the compaction process and in achieving a sufficient degree of compaction. For this reason, in the case of controlled fillings with heights greater than 3.00 m, it is necessary to formulate and apply a number of special techniques, which can be determined by performing a full-scale test. This paper presents the results of the studies and full-scale tests conducted for the stabilization of a ravine with vertical banks and a depth of about 12.00 m. The filling will support a heavy-traffic road connecting the two parts of a village in Vaslui County, Romania. After analyzing two comparative intervention solutions, the variant of a controlled filling bordered by a monolithic concrete retaining wall was chosen. The results obtained by the authors highlighted the need to insert a geogrid reinforcement every 2.00 m to create a 12.00 m thick compacted fill.

Keywords: compaction, dynamic probing, stability, soil stratification

Procedia PDF Downloads 315
3510 Response of Okra (Abelmoschus esculentus (L.) Moench) to Soil Amendments and Weeding Regime

Authors: Olusegun Raphael Adeyemi, Samuel Oluwaseun Osunleti, Abiddin Adekunle Bashiruddin

Abstract:

Field trials were conducted in 2020 and 2021 at the Teaching and Research Farm of the Federal University of Agriculture, Abeokuta, Ogun State, Nigeria, to evaluate the effect of biochar application under different weeding regimes on the growth and yield of okra. Treatments were laid out in a split-plot arrangement in a randomized complete block design with three replications. Main plot treatments were three levels of biochar, namely 0 t/ha, 10 t/ha and 20 t/ha, while sub-plot treatments consisted of four weeding regimes (weeding at 3, 6 and 9 WAS; weeding at 3 and 6 WAS; weeding at 3 WAS; and a weedy check as control). Data collected on the growth and yield of okra and on weed parameters were subjected to analysis of variance, and treatment means were separated using the least significant difference at p < 0.05. Results showed that biochar applied at 20 t/ha increased okra yield by 47.5% compared to the control. Weeding at 3, 6 and 9 WAS gave the highest okra yield, while uncontrolled weed infestation throughout crop growth resulted in an 87.3% yield reduction in okra. It is concluded that weed suppression and the growth and yield of okra can be enhanced by the application of biochar at 20 t/ha combined with weeding at 3, 6 and 9 WAS, which is hence recommended.

Keywords: biochar, okra, weeding, weed competition

Procedia PDF Downloads 65
3509 Relationship of Workplace Stress and Mental Wellbeing among Health Professionals

Authors: Rabia Mushtaq, Uroosa Javaid

Abstract:

It has been observed that health professionals are at a higher risk of stress because their work is physically and emotionally demanding. The study aimed to investigate the relationship between workplace stress and mental wellbeing among health professionals. A sample of 120 male and female health professionals belonging to two age groups, i.e., early adulthood and middle adulthood, was recruited through a purposive sampling technique. The Job Stress Scale, the Mindful Attention Awareness Scale, and the Warwick-Edinburgh Mental Wellbeing Scale were used to measure the study variables. Results indicated that job stress has a significant negative relationship with mental wellbeing among health professionals. The current study opens the door for more exploratory work on mindfulness among health professionals, and its findings can help in developing coping strategies that improve workers' mental wellbeing and lessen job stress.

Keywords: health professionals, job stress, mental wellbeing, mindfulness

Procedia PDF Downloads 177
3508 Corporate Governance in Africa: A Review of Literature

Authors: Kisanga Arsene

Abstract:

The abundant literature on corporate governance identifies four main objectives: the configuration of power within firms, control, conflict prevention, and the equitable distribution of the value created. The persistent dysfunctions of companies in developing countries in general, and in African countries in particular, show that these objectives are generally not achieved, which supports the idea of analyzing corporate governance practices in Africa. Indeed, the objective of this paper is to review the literature on corporate governance in Africa, to outline its specific practices and challenges, and to identify reliable indicators and variables for capturing corporate governance in Africa. In light of the existing literature, we argue that corporate governance in Africa can only be studied in relation to African realities and by taking the institutional environment into account. These studies show the existence of a divide between governance practices and the legislative and regulatory texts in force in the African context.

Keywords: institutional environment, transparency, accountability, Africa

Procedia PDF Downloads 180
3507 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and is projected to cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
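The ensemble step the framework relies on, averaging the outputs of the three top-performing models, can be sketched as follows. The toy "models" here are stand-ins for the trained logistic regression, random forest, and neural network, not the paper's actual classifiers, and the feature values are invented.

```python
# Minimal sketch of the averaging ensemble; models and inputs are stand-ins.
def ensemble_predict(models, features):
    """Average the probability output of several models for one sample."""
    preds = [m(features) for m in models]
    return sum(preds) / len(preds)

# Illustrative stand-ins for logistic regression, random forest, neural net:
# each returns a fixed probability that the next day's air is unhealthy.
logistic = lambda x: 0.6
forest = lambda x: 0.8
neural = lambda x: 0.7

prob_unhealthy = ensemble_predict([logistic, forest, neural], {"pm25": 40})
```

Averaging smooths out the idiosyncratic errors of any single model, which is why the combined model can outperform each individual one.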

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 130
3506 A Statistical Approach to Rationalise the Number of Working Load Test for Quality Control of Pile Installation in Singapore Jurong Formation

Authors: Nuo Xu, Kok Hun Goh, Jeyatharan Kumarasamy

Abstract:

Pile load testing is significant during foundation construction due to its traditional role in design validation and routine quality control of piling works. In order to verify whether piles can carry loads at specified settlements, piles have to undergo a working load test, where the test load is normally up to 150% of the working load of a pile. Selection or sampling of piles for the working load test is done subject to the number specified in the Singapore National Annex to Eurocode 7, SS EN 1997-1:2010. This paper presents an innovative way to rationalize the number of pile load tests by adopting a statistical analysis approach and examining the coefficient of variation of pile elastic modulus, using a case study at the Singapore Tuas depot. Results are very promising and show that it is possible to reduce the number of working load tests without compromising the reliability of, and confidence in, pile quality. Moving forward, it is suggested that load test data from other geological formations be examined and compared with the findings from this paper.

Keywords: elastic modulus of pile under soil interaction, jurong formation, kentledge test, pile load test

Procedia PDF Downloads 388
3505 Palliative Care Referral Behavior Among Nurse Practitioners in Hospital Medicine

Authors: Sharon Jackson White

Abstract:

Purpose: Nurse practitioners (NPs) practicing within hospital medicine play a significant role in caring for patients who might benefit from palliative care (PC) services. Using the Theory of Planned Behavior, the purpose of this study was to examine the relationships among facilitators to referral, barriers to referral, self-efficacy with end-of-life discussions, history of referral, and referring to PC among NPs in hospital medicine. Hypotheses: 1) Perceived facilitators to referral will be associated with a higher history of referral and a higher number of referrals to PC. 2) Perceived barriers to referral will be associated with a lower history of referral and a lower number of referrals to PC. 3) Increased self-efficacy with end-of-life discussions will be associated with a higher history of referral and a higher number of referrals to PC. 4) Perceived facilitators to referral, perceived barriers to referral, and self-efficacy with end-of-life discussions will contribute to a significant variance in the history of referral to PC. 5) Perceived facilitators to referral, perceived barriers to referral, and self-efficacy with end-of-life discussions will contribute to a significant variance in the number of referrals to PC. Significance: Previous studies of referring patients to PC within the hospital setting have focused on physician practices. Identifying factors that influence NPs referring hospitalized patients to PC is essential to ensure that patients have access to these important services. This study incorporates the SNRS mission of advancing nursing research through the dissemination of research findings and the promotion of nursing science. Methods: A cross-sectional, predictive correlational study was conducted.
History of referral to PC, facilitators to referring to PC, barriers to referring to PC, self-efficacy in end-of-life discussions, and referral to PC were measured using the PC referral case study survey, facilitators and barriers to PC referral survey, and self-assessment with end-of-life discussions survey. Data were analyzed descriptively and with Pearson’s Correlation, Spearman’s Rho, point-biserial correlation, multiple regression, logistic regression, Chi-Square test, and the Mann-Whitney U test. Results: Only one facilitator (PC team being helpful with establishing goals of care) was significantly associated with referral to PC. Three variables were statistically significant in relation to the history of referring to PC: “Inclined to refer: PC can help decrease the length of stay in hospital”, “Most inclined to refer: Patients with serious illnesses and/or poor prognoses”, and “Giving bad news to a patient or family member”. No predictor variables contributed a significant variance in the number of referrals to PC for all three case studies. There were no statistically significant results showing a relationship between the history of referral and referral to PC. All five hypotheses were partially supported. Discussion: Findings from this study emphasize the need for further research on NPs who work in hospital settings and what factors influence their behaviors of referring to PC. Since there is an increase in NPs practicing within hospital settings, future studies should use a larger sample size and incorporate hospital medicine NPs and other types of NPs that work in hospitals.

Keywords: palliative care, nurse practitioners, hospital medicine, referral

Procedia PDF Downloads 75
3504 Optimum Dispatching Rule in Solar Ingot-Wafer Manufacturing System

Authors: Wheyming Song, Hung-Hsiang Lin, Scott Lian

Abstract:

In this research, we investigate the optimal dispatching rule for machine and manpower allocation in solar ingot-wafer systems. The performance of a method is measured by the sales profit per dollar paid to the operators over one week at steady state. The decision variables are the identification numbers of the machines and operators assigned when each job is required to be served in each process. We propose a rule that is a function of an operator's ability, corresponding salary, and standing location in the factory, named the 'Multi-nominal distribution dispatch rule'. The proposed rule performs better than many traditional rules, including the genetic algorithm and particle swarm optimization. Simulation results show that the proposed Multi-nominal distribution dispatch rule improves the sales profit dramatically.

Keywords: dispatching, solar ingot, simulation, flexsim

Procedia PDF Downloads 301
3503 Targeted Effects of Subsidies on Prices of Selected Commodities in Iran Market

Authors: Sayedramin Hashemianesfehani, Seyed Hossein Hosseinilargani

Abstract:

In this study, we attempt to determine to what extent the price increases of selected commodities in the Iranian market originated from the implementation of the targeted subsidies law. Hence, an econometric model based on existing theories of price increases and inflation transmission is developed. In other words, the world price index and the dummy variables defined for the targeted subsidies have a significant and positive impact on the producer price index. The results obtained indicate that the targeted subsidies act in Iran has influential long- and short-term impacts on producer price indexes. Finally, an analysis of world dairy prices and domestic dairy prices with respect to the major parameters is carried out to obtain some managerial results.

Keywords: econometric models, targeted subsidies, consumer price index (CPI), producer price index (PPI)

Procedia PDF Downloads 361