Search results for: constriction factor based particle swarm optimization (CPSO)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 34340

31580 Design and Analysis of Shielding Magnetic Field for Active Space Radiation Protection

Authors: Chaoyan Huang, Hongxia Zheng

Abstract:

For deep space exploration and long-duration interplanetary manned missions, protecting astronauts from cosmic radiation is an unavoidable problem. However, passive shielding is of little effect against particles whose energies exceed 1 GeV/nucleon. In this study, an active magnetic protection method is adopted. Taking into account the structure and size of the end-cap, eight shielding magnetic field configurations are designed based on the Hoffman configuration. The shielding effects of the magnetic field structure, intensity B, and thickness L on 2 GeV H particles are compared by test particle simulation. The results show that the shielding effect is better with the linear-type magnetic field structure in the end-cap region. Furthermore, the two configurations with the best shielding effect are investigated using the H and He galactic cosmic ray spectra, and the linear-type configuration adopted in both the barrel and end-cap regions performs best.
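As a rough illustration of why particles above ~1 GeV/nucleon are hard to deflect (not part of the study itself; the field strength and proton rest mass below are assumed values), the relativistic gyroradius can be sketched from the particle's momentum:

```python
import math

def gyroradius_m(kinetic_energy_gev, b_tesla, rest_mass_gev=0.938272, charge_number=1):
    """Relativistic gyroradius r = pc / (Z * 0.2998 * B), with pc in GeV and B in tesla."""
    total_energy = kinetic_energy_gev + rest_mass_gev
    pc = math.sqrt(total_energy ** 2 - rest_mass_gev ** 2)  # momentum times c, in GeV
    return pc / (charge_number * 0.299792458 * b_tesla)

# A 2 GeV proton in an assumed 5 T shielding field curves on a radius near 1.9 m:
r = gyroradius_m(2.0, 5.0)
```

A gyroradius of this order suggests why both the field intensity B and the thickness L of the shielded region matter for the shielding effect.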

Keywords: galactic cosmic rays, active protection, shielding magnetic field configuration, shielding effect

Procedia PDF Downloads 144
31579 Memory and Narratives Rereading before and after One Week

Authors: Abigail M. Csik, Gabriel A. Radvansky

Abstract:

As people read through event-based narratives, they construct an event model that captures information about the characters, goals, location, time, and causality. Memory for such narratives is represented at different levels, namely the surface form, textbase, and event model levels. Rereading has been shown to decrease surface form memory while increasing textbase and event model memories. More generally, distributed practice has consistently shown memory benefits over massed practice for different types of materials, including texts. However, little research has investigated distributed practice of narratives at different inter-study intervals and its effects on these three levels of memory. Recent work in our lab has indicated that there may be dramatic changes in patterns of forgetting around one week, which may affect the three levels of memory. The present experiment aimed to determine the effects of rereading on the three levels of memory as a function of whether the texts were reread before versus after one week. Participants (N = 42) read a set of stories, re-read them either before or after one week (with an inter-study interval of three, seven, or fourteen days), and then took a recognition test, from which the three levels of representation were derived. Signal detection results from this study reveal differential patterns at the three levels depending on whether the narratives were re-read prior to one week or after. In particular, an ANOVA revealed that surface form memory was lower (p = .08) while textbase (p = .02) and event model memory (p = .04) were greater when narratives were re-read 14 days later compared to 3 days later. These results have implications for which types of memory benefit from distributed practice at various inter-study intervals.

Keywords: memory, event cognition, distributed practice, consolidation

Procedia PDF Downloads 225
31578 Stress Concentration Trend for Combined Loading Conditions

Authors: Aderet M. Pantierer, Shmuel Pantierer, Raphael Cordina, Yougashwar Budhoo

Abstract:

Stress concentration occurs when there is an abrupt change in the geometry of a mechanical part under loading. These changes in geometry can include holes, notches, or cracks within the component, and they create larger stresses within the part. This maximum stress is difficult to measure, as it occurs directly at the point of minimum area, and strain gauges cannot resolve stresses over such minute areas. Therefore, a stress concentration factor must be utilized. The stress concentration factor is a dimensionless parameter calculated solely from the geometry of a part. The factor is multiplied by the nominal, or average, stress of the component, which can be found analytically or experimentally. Stress concentration graphs exist for common loading conditions and geometrical configurations to aid in determining the maximum stress a part can withstand. These graphs were developed from historical experimental data. This project seeks to verify a stress concentration graph for combined loading conditions. The aforementioned graph was developed using CATIA finite element analysis software, and the results of this analysis will be validated through further testing. The 3D-modeled parts will be subjected to further finite element analysis using Patran-Nastran software. The finite element models will then be verified by testing physical specimens on a tensile testing machine. Once the data is validated, the unique stress concentration graph will be submitted for publication so it can aid engineers in future projects.
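The relationship described above can be sketched in a few lines. The classical hole-in-plate value Kt = 3 and the 120 MPa nominal stress are illustrative assumptions, not figures from this project:

```python
def max_stress(nominal_stress, kt):
    """Peak stress obtained by scaling the nominal stress with a concentration factor Kt."""
    return kt * nominal_stress

def kt_circular_hole_infinite_plate():
    """Classical Kt for a small circular hole in a wide plate under uniaxial tension."""
    return 3.0

sigma_nom = 120.0  # MPa, average stress over the net section (assumed value)
sigma_max = max_stress(sigma_nom, kt_circular_hole_infinite_plate())  # 360.0 MPa
```

The same two-step pattern (read Kt from a graph, multiply by the nominal stress) applies to any of the published stress concentration charts.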

Keywords: stress concentration, finite element analysis, finite element models, combined loading

Procedia PDF Downloads 444
31577 Unstructured-Data Content Search Based on Optimized EEG Signal Processing and Multi-Objective Feature Extraction

Authors: Qais M. Yousef, Yasmeen A. Alshaer

Abstract:

Over the last few years, the amount of data available around the globe has increased rapidly. This has coincided with the emergence of concepts such as big data and the Internet of Things, which furnish a suitable solution for making data available all over the world. However, managing this massive amount of data remains a challenge due to the large variety of types and distribution. Consequently, locating the required file on the first attempt is not an easy task, owing to the similar names of different files distributed on the web, and the accuracy and speed of search have been negatively affected. This work presents a method that uses electroencephalography (EEG) signals to locate files based on their contents. Building on the concept of natural mind-wave processing, this work records the mind-wave signals of different people, extracts their most appropriate features using a multi-objective metaheuristic algorithm, and then classifies them using an artificial neural network to distinguish among files with similar names. The aim of this work is to provide the ability to find files based on their contents using human thoughts only. Implementing this approach and testing it on real people proved its ability to find the desired files accurately within a noticeably shorter time and to retrieve them as a first choice for the user.

Keywords: artificial intelligence, data contents search, human active memory, mind wave, multi-objective optimization

Procedia PDF Downloads 175
31576 Optimal Solutions for Real-Time Scheduling of Reconfigurable Embedded Systems Based on Neural Networks with Minimization of Power Consumption

Authors: Ghofrane Rehaiem, Hamza Gharsellaoui, Samir Benahmed

Abstract:

In this study, Artificial Neural Networks (ANNs) are used to model the parameters that allow the real-time scheduling of embedded systems under resource constraints. The objective of this work is to implement a neural-network-based approach for real-time scheduling of embedded systems in order to handle real-time constraints in execution scenarios. In our proposed approach, techniques are combined for both task planning and reduced energy consumption. In particular, a combination of Dynamic Voltage Scaling (DVS) and time feedback can be used to scale the frequency dynamically while adjusting the operating voltage. We present in this paper a hybrid contribution that handles the real-time scheduling of embedded systems with low power consumption, based on the combination of DVS and Neural Feedback Scheduling (NFS) with the energy-aware Priority Earliest Deadline First (PEDF) algorithm. Experimental results illustrate the efficiency of our proposed approach.
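The PEDF variant itself is not specified in the abstract. As a hedged sketch of the underlying earliest-deadline-first selection step, with hypothetical task fields:

```python
def edf_pick(tasks, now):
    """Earliest-Deadline-First: among released, unfinished tasks, run the one due soonest."""
    ready = [t for t in tasks if t["release"] <= now and t["remaining"] > 0]
    if not ready:
        return None
    return min(ready, key=lambda t: t["deadline"])["name"]

# Hypothetical task set (times in arbitrary ticks):
tasks = [
    {"name": "A", "release": 0, "deadline": 10, "remaining": 2},
    {"name": "B", "release": 0, "deadline": 4,  "remaining": 1},
    {"name": "C", "release": 5, "deadline": 6,  "remaining": 1},
]
# edf_pick(tasks, 0) selects "B", the ready task with the earliest deadline.
```

A DVS layer would sit on top of this choice, lowering frequency and voltage whenever the selected task's slack allows it.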

Keywords: optimization, neural networks, real-time scheduling, low-power consumption

Procedia PDF Downloads 371
31575 Study on the Electrochemical Performance of Graphene Effect on Cadmium Oxide in Lithium Battery

Authors: Atef Y. Shenouda, Anton A. Momchilov

Abstract:

Graphene and CdO samples with different stoichiometric ratios of Cd(CH₃COO)₂ and graphene were prepared by hydrothermal reaction. The crystalline phases of pure CdO and 3CdO:1 graphene were identified by X-ray diffraction (XRD). The particle morphology was studied with SEM, and impedance measurements were also carried out. Galvanostatic measurements for the cells were performed between potential limits of 0.01 and 3 V vs. Li/Li⁺ at a cycling current of 10⁻⁴ A. The specific discharge capacity of the 3CdO-1G cell was about 450 Ah·kg⁻¹ over more than 100 cycles.

Keywords: CdO, graphene, negative electrode, lithium battery

Procedia PDF Downloads 162
31574 Application of Global Predictive Real Time Control Strategy to Improve Flooding Prevention Performance of Urban Stormwater Basins

Authors: Shadab Shishegar, Sophie Duchesne, Genevieve Pelletier

Abstract:

Sustainability, one of the key elements of smart cities, can be realized by employing real-time control strategies for a city's infrastructure. Stormwater management systems play an important role in mitigating the impacts of urbanization on the natural hydrological cycle, and they can be managed in such a way that they meet smart city standards. In fact, there is huge potential for sustainable management of urban stormwater and for its adaptation to global challenges like climate change. Hence, a dynamically managed system that can adapt itself to unstable environmental conditions is desirable. A global predictive real-time control approach is proposed in this paper to optimize the performance of stormwater management basins in terms of flooding prevention. To do so, a mathematical optimization model is developed and then solved using a Genetic Algorithm (GA). Results show an improved system-level performance for the stormwater basins in comparison to a static strategy.
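A minimal sketch of the genetic algorithm step, assuming a hypothetical one-variable cost curve in place of the paper's basin model (tournament selection, blend crossover, Gaussian mutation):

```python
import random

def genetic_minimize(fitness, bounds, pop_size=30, generations=60, seed=1):
    """A minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)                # tournament of size 2
            parent1 = a if fitness(a) < fitness(b) else b
            c, d = rng.sample(pop, 2)
            parent2 = c if fitness(c) < fitness(d) else d
            w = rng.random()                         # blend crossover
            child = w * parent1 + (1 - w) * parent2
            child += rng.gauss(0, 0.1 * (hi - lo))   # Gaussian mutation
            nxt.append(min(max(child, lo), hi))      # clip to bounds
        pop = nxt
    return min(pop, key=fitness)

# Toy flooding-cost curve with its minimum at an outflow setting of 2.0 (illustrative only):
best = genetic_minimize(lambda q: (q - 2.0) ** 2, bounds=(0.0, 5.0))
```

In the paper's setting, the chromosome would encode basin outflow decisions over the control horizon and the fitness would penalize predicted overflow volumes.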

Keywords: environmental sustainability, optimization, real time control, storm water management

Procedia PDF Downloads 177
31573 Stator Short-Circuits Fault Diagnosis in Induction Motors

Authors: K. Yahia, M. Sahraoui, A. Guettaf

Abstract:

This paper deals with the problem of stator fault diagnosis in induction motors. Using the discrete wavelet transform (DWT) for analysis of the Current Park's Vector Modulus (CPVM), inter-turn short-circuit faults can be diagnosed. The method is based on the decomposition of the CPVM signal, from which wavelet approximation and detail coefficients are extracted. Evaluating the energy of a detail of known bandwidth permits the definition of a fault severity factor (FSF). The method has been tested through the simulation of an induction motor using a mathematical model based on the winding-function approach. Simulation as well as experimental results show the effectiveness of the method.
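The Current Park's Vector Modulus can be sketched directly from the three phase currents; the amplitude and the balanced waveforms below are illustrative assumptions. A healthy, balanced machine yields a nearly constant modulus, and inter-turn faults perturb it, which is what the DWT decomposition then picks up:

```python
import math

def park_vector_modulus(ia, ib, ic):
    """Current Park's vector modulus from the three stator phase currents."""
    i_d = math.sqrt(2.0 / 3.0) * ia - (1.0 / math.sqrt(6.0)) * (ib + ic)
    i_q = (1.0 / math.sqrt(2.0)) * (ib - ic)
    return math.hypot(i_d, i_q)

# A balanced three-phase set of amplitude 10 A over one electrical period:
amp = 10.0
mods = []
for k in range(100):
    wt = 2 * math.pi * k / 100
    ia = amp * math.cos(wt)
    ib = amp * math.cos(wt - 2 * math.pi / 3)
    ic = amp * math.cos(wt + 2 * math.pi / 3)
    mods.append(park_vector_modulus(ia, ib, ic))
# For balanced currents, every modulus equals sqrt(3/2) * amp.
```

An unbalanced (faulty) current set would instead make `mods` oscillate at twice the supply frequency, and the energy of that oscillation in a chosen detail band is what a severity factor can be built on.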

Keywords: induction motors (IMs), inter-turn short-circuits diagnosis, discrete wavelet transform (DWT), Current Park’s Vector Modulus (CPVM)

Procedia PDF Downloads 457
31572 Reliability Analysis of Dam under Quicksand Condition

Authors: Manthan Patel, Vinit Ahlawat, Anshh Singh Claire, Pijush Samui

Abstract:

This paper focuses on the analysis of the quicksand condition for a dam foundation. The quicksand condition occurs in cohesionless soil when the effective stress of the soil becomes zero. In a dam, the saturated sediment may appear quite solid until a sudden change in pressure or a shock initiates liquefaction. This causes the sand to form a suspension and lose strength, resulting in failure of the dam. A soil profile shows different properties at different points, and the values obtained are uncertain; thus, a reliability analysis is performed. Reliability is defined as the probability of safety of a system in a given environment and loading condition, and it is assessed by a reliability index. The reliability analysis of dams under the quicksand condition is carried out using Gaussian Process Regression (GPR): the reliability index and the factor of safety relating to liquefaction of the soil are analyzed with GPR, and the results are compared to those of the conventional method. It is demonstrated that applying GPR to the probabilistic analysis reduces the computational time and effort.
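The deterministic core of the quicksand check, before any GPR-based reliability analysis, is the critical hydraulic gradient at which effective stress vanishes. The soil parameters below are typical assumed values, not data from the study:

```python
def critical_hydraulic_gradient(specific_gravity, void_ratio):
    """i_c = (G_s - 1) / (1 + e): the upward gradient at which effective stress vanishes."""
    return (specific_gravity - 1.0) / (1.0 + void_ratio)

def factor_of_safety(i_critical, i_exit):
    """Factor of safety against quicksand (piping) given the exit gradient at the toe."""
    return i_critical / i_exit

ic = critical_hydraulic_gradient(2.65, 0.65)  # close to 1.0 for a typical sand
fs = factor_of_safety(ic, 0.4)                # assumed exit gradient of 0.4
```

In the reliability setting, G_s, e, and the exit gradient become random variables, and GPR approximates the mapping from those inputs to FS so that the reliability index can be estimated cheaply.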

Keywords: factor of safety, GPR, reliability index, quicksand

Procedia PDF Downloads 482
31571 Membrane Bioreactor versus Activated Sludge Process for Aerobic Wastewater Treatment and Recycling

Authors: Sarra Kitanou

Abstract:

Membrane bioreactor (MBR) systems are among the most widely used wastewater treatment processes for various municipal and industrial waste streams. They are based on complex interactions between biological processes, the filtration process, and the rheological properties of the liquid to be treated. This complexity makes understanding system operation and optimization difficult, and traditional methods based on experimental analysis are costly and time-consuming. The present study compared a pilot-scale external membrane bioreactor with ceramic membranes to a conventional activated sludge process (ASP) plant; both systems received their influent from domestic wastewater. The MBR produced an effluent of much better quality than the ASP in terms of total suspended solids (TSS), organic matter such as biological oxygen demand (BOD) and chemical oxygen demand (COD), total phosphorus, and total nitrogen. Other effluent quality parameters also indicate substantial differences between the ASP and the MBR. This study leads to the conclusion that, in the case of domestic wastewater, MBR treatment yields excellent effluent quality. Hence, the replacement of the ASP by MBRs may be justified on the basis of their improved removal of solids, nutrients, and micropollutants. Furthermore, the high quality of the treated water allows it to be reused for irrigation.

Keywords: aerobic wastewater treatment, conventional activated sludge process, membrane bioreactor, reuse for irrigation

Procedia PDF Downloads 78
31570 Validation of the Trait Emotional Intelligence Questionnaire: Adolescent Short Form (TEIQue-ASF) among Adolescents in Vietnam

Authors: Anh Nguyen, Jane Fisher, Thach Tran, Anh T. T. Tran

Abstract:

Trait emotional intelligence is the knowledge, beliefs, and attitudes an individual has about their own and other people's emotions; it is believed to be a component of personality. Petrides' Trait Emotional Intelligence Questionnaire (TEIQue) is well regarded and well established, with validation data about its functioning among adults from many countries. However, there is little data yet about its use among Asian populations, including adolescents. The aims were to translate and culturally verify the Trait Emotional Intelligence Questionnaire: Adolescent Short Form (TEIQue-ASF) and to investigate its content validity, construct validity, and reliability among adolescents attending high schools in Vietnam. The content of the TEIQue-ASF was translated (English to Vietnamese) and back-translated (Vietnamese to English) in consultation with bilingual and bicultural health researchers and pilot-tested among 51 potential respondents. Phraseology and wording were then adjusted, and the final version was named the VN-TEIQue-ASF. The VN-TEIQue-ASF's properties were investigated in a cross-sectional self-report survey among high school students in Central Vietnam. In total, 1,546 of 1,573 (98.3%) eligible students from nine high schools in rural, urban, and coastal areas completed the survey. Exploratory factor analysis yielded a four-factor solution, including some facets that loaded differently compared to the original version: Well-being, Emotion in Relationships, Emotion Self-management, and Emotion Sensitivity. The Cronbach's alpha of the global score for the VN-TEIQue-ASF was .77. The VN-TEIQue-ASF is comprehensible and has good content and construct validity and reliability among adolescents in Vietnam, although the factor structure only partly replicated that of the original version.
The VN-TEIQue-ASF is recommended for use in school or community surveys and professional study in education, psychology, and public health to investigate the trait emotional intelligence of adolescents in Vietnam.
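The global-score reliability reported above is a Cronbach's alpha. A minimal sketch of the computation on toy data (not the study's responses):

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(item_scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

    item_scores: one list per item, each holding every respondent's score on that item.
    """
    k = len(item_scores)
    item_var = sum(variance(item) for item in item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # each respondent's total score
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Three items answered by five respondents (toy data):
items = [[3, 4, 3, 5, 4],
         [2, 4, 4, 5, 3],
         [3, 5, 4, 5, 4]]
alpha = cronbach_alpha(items)
```

The same formula, applied per subscale, yields the construct-level alphas quoted in validation studies such as this one.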

Keywords: adolescents, construct validity, content validity, factor analysis, questionnaire validity, trait emotional intelligence, Vietnam

Procedia PDF Downloads 268
31569 Analysis of the Result for the Accelerated Life Cycle Test of the Motor for Washing Machine by Using Acceleration Factor

Authors: Youn-Sung Kim, Jin-Ho Jo, Mi-Sung Kim, Jae-Kun Lee

Abstract:

Accelerated life cycle testing is applied to various products and components in industry in order to reduce the duration of life cycle tests. Many test conditions must be considered according to the characteristics of the product under test, and the selection of the acceleration parameter is especially important. We have carried out both a general life cycle test and an accelerated life cycle test by applying an acceleration factor (AF) that considers the characteristics of a brushless DC (BLDC) motor for a washing machine. The final purpose of this study is to verify the validity of the approach by comparing the results of the general and accelerated life cycle tests, which will make it possible to reduce life-test time through a reasonable accelerated life cycle test.
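The abstract does not state which acceleration model was used. Assuming, purely for illustration, the common Arrhenius temperature-acceleration model, the AF and the shortened test time can be sketched as follows (the activation energy and temperatures are hypothetical):

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_c, t_test_c):
    """Acceleration factor between the use temperature and a higher test temperature."""
    t_use = t_use_c + 273.15   # convert to kelvin
    t_test = t_test_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_test))

# Hypothetical values: 0.7 eV activation energy, 40 C use, 85 C test
af = arrhenius_af(ea_ev=0.7, t_use_c=40.0, t_test_c=85.0)
test_hours_for_10_years = 10 * 365 * 24 / af  # test duration equivalent to 10 use-years
```

Under these assumed numbers, a decade of field life compresses into a few thousand test hours, which is the kind of reduction the abstract is after.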

Keywords: accelerated life cycle test, reliability test, motor for washing machine, brushless dc motor test

Procedia PDF Downloads 611
31568 Mathematical Modeling Pressure Losses of Trapezoidal Labyrinth Channel and Bi-Objective Optimization of the Design Parameters

Authors: Nina Philipova

Abstract:

The influence of the geometric parameters of a trapezoidal labyrinth channel on the pressure losses along the labyrinth length is investigated in this work. The impact of the dentate height is studied at fixed values of the dentate angle and the dentate spacing. The objective of the work presented in this paper is to derive a mathematical model of the pressure losses along the labyrinth length as a function of the dentate height. Numerical simulations of the water flow are performed using the commercial codes ANSYS GAMBIT and FLUENT, with the dripper inlet pressure set to 1 bar. As a result, the mathematical model of the pressure losses is determined as a second-order polynomial by means of the commercial code STATISTICA. Bi-objective optimization is performed using the mean algebraic utility function, and the optimum value of the dentate height is defined at fixed values of the dentate angle and the dentate spacing. The derived model of the pressure losses and the optimum value of the dentate height serve as a basis for more successful emitter design.
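A second-order polynomial model of this kind can be fitted by ordinary least squares through the normal equations. The data below are synthetic, generated from an assumed quadratic, not the paper's simulation results:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def quadratic_fit(hs, ys):
    """Least-squares fit y ~ a + b*h + c*h^2 by solving the normal equations (Cramer's rule)."""
    n = len(hs)
    s1 = sum(hs); s2 = sum(h ** 2 for h in hs)
    s3 = sum(h ** 3 for h in hs); s4 = sum(h ** 4 for h in hs)
    t0 = sum(ys)
    t1 = sum(h * y for h, y in zip(hs, ys))
    t2 = sum(h * h * y for h, y in zip(hs, ys))
    A = [[n, s1, s2], [s1, s2, s3], [s2, s3, s4]]
    rhs = [t0, t1, t2]
    d = det3(A)
    def replaced(col):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][col] = rhs[i]
        return M
    return tuple(det3(replaced(col)) / d for col in range(3))

# Synthetic pressure-loss data generated from 2 + 3h + 0.5h^2 (illustrative only):
hs = [0.5, 0.7, 0.9, 1.1, 1.3]
a, b, c = quadratic_fit(hs, [2 + 3 * h + 0.5 * h * h for h in hs])
```

With real CFD output, the residuals of this fit indicate whether a second-order polynomial in the dentate height is an adequate pressure-loss model.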

Keywords: drip irrigation, labyrinth channel hydrodynamics, numerical simulations, Reynolds stress model

Procedia PDF Downloads 154
31567 Microwave Dielectric Properties and Microstructures of Nd(Ti₀.₅W₀.₅)O₄ Ceramics for Application in Wireless Gas Sensors

Authors: Yih-Chien Chen, Yue-Xuan Du, Min-Zhe Weng

Abstract:

Carbon monoxide is produced by incomplete combustion. It is toxic even at concentrations below 100 ppm, and since it is colorless and odorless, it is difficult to detect. CO sensors have been developed using a variety of physical mechanisms, including semiconductor oxides, solid electrolytes, and organic semiconductors. Many works have focused on semiconducting sensors composed of sensitive layers, such as ZnO, TiO₂, and NiO, with high sensitivity for gases; however, these sensors operate at high temperatures, which increases their power consumption. On the other hand, the dielectric resonator (DR) is attractive for gas detection due to its large surface area and its sensitivity to the external environment. Materials to be employed in sensing devices must have a high quality factor. Numerous studies have explored the fergusonite-type structure and related ceramic systems, and extensive research into RENbO₄ ceramics has examined their potential application in resonators, filters, and antennas for modern communication systems operated at microwave frequencies. In this work, Nd(Ti₀.₅W₀.₅)O₄ ceramics were synthesized using the conventional mixed-oxide solid-state method. Dielectric constants (εᵣ) of 15.4-19.4 and quality factors (Q×f) of 3,600-11,100 GHz were obtained at sintering temperatures in the range 1425-1525°C for 4 h; the dielectric properties at microwave frequencies were found to vary with the sintering temperature. For a further understanding of these microwave dielectric properties, the ceramics were analyzed by densification, X-ray diffraction (XRD), and microstructural observation.

Keywords: dielectric constant, dielectric resonators, sensors, quality factor

Procedia PDF Downloads 260
31566 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model

Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle

Abstract:

In the workplace, the exposure level of airborne contaminants should be evaluated for health and safety reasons. This can be done by numerical models or experimental measurements, and the numerical approach is useful when experiments are challenging to perform. One of the simplest models is the well-mixed room (WMR) model, which has proven useful for predicting inhalation exposure in many situations. However, since the WMR model is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition; three deposition models were implemented. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer under two air-changes-per-hour (ACH) conditions. The well-mixed condition and the chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables of the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, particles were generated until reaching the steady-state condition (emission period); then generation was stopped, and concentration measurements continued until reaching the background concentration (decay period). The tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) × 10⁻² m/s and (8.88 ± 0.38) × 10⁻² m/s, respectively.

The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared in the emission and decay periods. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model, although a difference between the measured and predicted values remains. In the emission period, the modified WMR results closely follow the experimental data; however, the model significantly overestimates the experimental concentrations during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainty related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, while the rate given by the deposition mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant, affects the airborne concentration in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should also be investigated.
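A minimal sketch of the modified WMR balance, assuming a single lumped deposition rate k_dep in dC/dt = S/V - (lambda + k_dep)*C, where lambda = Q/V is the air-change rate. The study implements three separate deposition models; the values below are illustrative:

```python
def wmr_concentration(c0, emission, flow, volume, k_dep, dt, steps):
    """Forward-Euler integration of dC/dt = S/V - (lambda + k_dep) * C, lambda = Q/V."""
    lam = flow / volume
    c = c0
    history = [c]
    for _ in range(steps):
        c += dt * (emission / volume - (lam + k_dep) * c)
        history.append(c)
    return history

# Decay period (emission off) in a 0.512 m^3 chamber at 3 ACH, with an assumed
# lumped deposition rate of 2e-4 1/s, simulated over one hour at 1 s steps:
volume = 0.512                    # m^3, from the chamber described above
flow = 3.0 * volume / 3600.0      # m^3/s corresponding to 3 air changes per hour
hist = wmr_concentration(c0=1000.0, emission=0.0, flow=flow,
                         volume=volume, k_dep=2e-4, dt=1.0, steps=3600)
```

Setting `k_dep=0` recovers the classic gas-phase WMR decay, which is exactly the comparison the study draws: the aerosol decays faster than ventilation alone predicts.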

Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model

Procedia PDF Downloads 103
31565 Multi-Agent System Based Distributed Voltage Control in Distribution Systems

Authors: A. Arshad, M. Lehtonen, M. Humayun

Abstract:

With increasing Distributed Generation (DG) penetration, distribution systems are advancing towards smart grid technology for least-latency, distributed handling of the voltage control problem. This paper proposes a multi-agent-based distributed voltage control. In this method, a flat architecture of agents is used; the agents involved in the control procedure are the On-Load Tap Changer Agent (OLTCA), the Static VAR Compensator Agent (SVCA), and the agents associated with DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailments while maintaining the voltage within statutory limits, as close as possible to the nominal value. The total loss cost is the sum of the network losses cost, the DG curtailment costs, and a voltage damage cost (based on a penalty function). The total cost is iteratively calculated for various stricter limits by plotting the voltage damage cost and the losses cost against a varying voltage limit band; the method provides the optimal limits, closer to the nominal value, with minimum total loss cost. In order to achieve the objective of voltage control, the whole network is divided into multiple control regions, downstream from each controlling device. The OLTCA behaves as a supervisory agent and performs all the optimizations. First, a token is generated by the OLTCA at each time step and transfers from node to node until a node with a voltage violation is detected. Upon detection of such a node, the token grants permission to the Load Agent (LA) to initiate possible remedial actions. The LA contacts the respective controlling devices depending on the vicinity of the violated node. If the violated node does not lie in the vicinity of a controller, or if the controlling capabilities of all the downstream control devices are at their limits, the OLTC is used as a last resort.

For a realistic study, simulations are performed for a typical Finnish residential medium-voltage distribution system using MATLAB®. These simulations are executed for two cases: simple Distributed Voltage Control (DVC), and DVC with optimized loss cost (DVC + penalty function). A sensitivity analysis is performed with respect to DG penetration. The results indicate that the costs of losses and DG curtailments are directly proportional to the DG penetration, while in case 2 there is a significant reduction in total loss. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration the loss reduction is less significant. Another observation is that the stricter limits calculated by the cost optimization move towards the statutory limits of ±10% of the nominal value as DG penetration increases: for 25, 45, and 65% penetration, the calculated limits are ±5, ±6.25, and ±8.75%, respectively. The observed results lead to the conclusion that the voltage control algorithm proposed in case 1 deals with the voltage control problem instantly but with higher losses; in contrast, case 2 reduces the network losses over time through the proposed iterative loss cost optimization by the OLTCA.
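A toy sketch of the token-passing step, with hypothetical node voltages and controller positions; the ±6.25% band is one of the optimized limits quoted above:

```python
NOMINAL = 1.0   # per-unit voltage
LIMIT = 0.0625  # +/- 6.25 % band (one of the optimized bands from the study)

def first_violation(feeder_voltages):
    """Pass the token node-to-node; return the first node outside the band, else None."""
    for node, v in enumerate(feeder_voltages):
        if abs(v - NOMINAL) > LIMIT:
            return node
    return None

def nearest_controller(node, controller_nodes):
    """The load agent contacts the controlling device closest to the violated node."""
    return min(controller_nodes, key=lambda c: abs(c - node))

voltages = [1.01, 1.02, 1.04, 1.08, 1.05]  # p.u. profile along a hypothetical feeder
viol = first_violation(voltages)            # node 3 violates (1.08 p.u.)
ctrl = nearest_controller(viol, controller_nodes=[0, 4])
```

In the full scheme, the contacted controller (an SVC or DG agent, or ultimately the OLTC) would then adjust its setpoint before the token resumes its traversal.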

Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids

Procedia PDF Downloads 312
31564 An Improved Parallel Algorithm of Decision Tree

Authors: Jiameng Wang, Yunfei Yin, Xiyu Deng

Abstract:

Parallel optimization is one of the important research topics in data mining at this stage. Taking the parallelization of Classification and Regression Trees (CART) as an example, this paper proposes a parallel data mining algorithm based on SSP-OGini-PCCP. Aiming at the problem of choosing the best CART split point, this paper designs an S-SP model without data association; and, in order to calculate the Gini index efficiently, a parallel OGini calculation method is designed. In addition, in order to improve the efficiency of the pruning algorithm, a synchronous PCCP pruning strategy is proposed. The optimal split calculation, the Gini index calculation, and the pruning algorithm, all important components of parallel data mining, are studied in depth. Data mining based on SSP-OGini-PCCP is tested on a distributed cluster simulation system built on Spark. Experimental results show that this method increases the search efficiency of the best split point by an average of 89%, the computation efficiency of the Gini index by 3853%, and the pruning efficiency by 146% on average; and as the size of the data set increases, the performance of the algorithm remains stable, which meets the requirements of contemporary massive data processing.
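The Gini computation that OGini parallelizes can be sketched serially; the class labels below are illustrative:

```python
def gini_index(labels):
    """Gini impurity of a set of class labels: 1 - sum of squared class proportions."""
    n = len(labels)
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def split_gini(left, right):
    """Weighted Gini of a candidate CART split; the best split point minimizes this."""
    n = len(left) + len(right)
    return (len(left) / n) * gini_index(left) + (len(right) / n) * gini_index(right)

# A candidate split sending three pure "a" labels left and a mixed group right:
g = split_gini(["a", "a", "a"], ["b", "b", "a"])
```

A CART builder evaluates `split_gini` for every candidate split point of every feature, which is exactly the per-split work that the paper distributes across the cluster.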

Keywords: classification, Gini index, parallel data mining, pruning ahead

Procedia PDF Downloads 124
31563 Psychometric Properties of the Secondary School Stressor Questionnaire among Adolescents at Five Secondary Schools

Authors: Muhamad Saiful Bahri Yusoff

Abstract:

This study aimed to evaluate the construct, convergent, and discriminant validity of the Secondary School Stressor Questionnaire (3SQ), as well as its internal consistency, among adolescents in Malaysian secondary schools. A cross-sectional study was conducted on 700 secondary school students in five secondary schools; stratified random sampling was used to select the schools and participants. Confirmatory factor analysis was performed in AMOS to examine construct, convergent, and discriminant validity, and reliability analysis was performed in SPSS to determine internal consistency. The results showed that the original six-factor model with 44 items failed to achieve acceptable values of the goodness-of-fit indices, suggesting poor model fit. A new five-factor model of the 3SQ with 22 items demonstrated an acceptable level of goodness-of-fit indices, signifying model fit. The overall Cronbach's alpha for the new version of the 3SQ was 0.93, while those of the five constructs ranged from 0.68 to 0.94. The composite reliability values of the constructs ranged between 0.68 and 0.93, indicating satisfactory to high convergent validity. Our study did not support the construct validity of the original version of the 3SQ; the new version showed more convincing evidence of validity and reliability for measuring the stressors of adolescents. Continued research is needed to verify and maximize the psychometric credentials of the 3SQ across countries.

Keywords: stressors, adolescents, secondary school students, 3SQ, psychometric properties

Procedia PDF Downloads 403
31562 Optimization Model for Support Decision for Maximizing Production of Mixed Fruit Tree Farms

Authors: Andrés I. Ávila, Patricia Aros, César San Martín, Elizabeth Kehr, Yovana Leal

Abstract:

We consider a linear programming model to help farmers decide whether it is convenient to choose among three kinds of export fruit for their future investment. The model includes area, investment, water, minimum-productivity-unit, and harvest restrictions, and works on a monthly basis to compute the average income over five years. Field conditions such as area, water availability, and initial investment are required as inputs. Using Chilean costs and the dollar-peso exchange rate, we can simulate several scenarios to understand the possible risks associated with this market.
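A toy sketch of the kind of allocation decision the model supports, solved here by brute-force enumeration over hectare units rather than by an LP/MIP solver; the crop names and all coefficients are invented for illustration only, and the paper's actual model is a monthly LP over five years.

```python
from itertools import product

def best_allocation(area, water, budget, crops, step=1.0):
    """Enumerate hectare allocations (in `step`-sized units) over the crop
    options and return (income, plan) with the highest income that
    respects the area, water, and investment limits.
    crops: dict name -> (income/ha, water/ha, investment/ha)."""
    names = list(crops)
    units = int(area / step)
    best_income, best_plan = -1.0, None
    for alloc in product(range(units + 1), repeat=len(names)):
        ha = [a * step for a in alloc]
        if sum(ha) > area:
            continue
        used_water = sum(h * crops[n][1] for h, n in zip(ha, names))
        cost = sum(h * crops[n][2] for h, n in zip(ha, names))
        if used_water > water or cost > budget:
            continue  # infeasible allocation
        income = sum(h * crops[n][0] for h, n in zip(ha, names))
        if income > best_income:
            best_income, best_plan = income, dict(zip(names, ha))
    return best_income, best_plan

# Hypothetical data: income, water, and investment per hectare.
crops = {"cherry": (10, 2, 5), "apple": (6, 1, 2), "blueberry": (8, 3, 4)}
```

For real farm-scale instances, enumeration is replaced by a mixed-integer solver, as in the paper.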

Keywords: mixed integer problem, fruit production, support decision model, fruit tree farms

Procedia PDF Downloads 456
31561 Numerical Solution of Portfolio Selecting Semi-Infinite Problem

Authors: Alina Fedossova, Jose Jorge Sierra Molina

Abstract:

SIP problems belong to non-classical optimization: the number of variables is finite, while the number of constraints is infinite. These are semi-infinite programming problems. Most algorithms for semi-infinite programming reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, some of the constraints or the objective function are nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific purposes of investors. The risk of the entire portfolio may be less than the risks of the individual investments in the portfolio. For example, suppose we invest M euros in N shares for a specified period. Let yi > 0 be the return at the end of the period per unit of money invested in stock i (i = 1, ..., N). The goal is then to determine the amount xi to be invested in stock i, i = 1, ..., N, so as to maximize the end-of-period value yTx, where x = (x1, ..., xN) and y = (y1, ..., yN). For us, the optimal portfolio is the one with the best risk-return trade-off for the investor, meeting the investor's goals and risk profile. Investment goals and risk appetite are therefore the factors that influence the choice of an appropriate portfolio of assets. Since the investment returns are uncertain, we obtain a semi-infinite programming problem. We solve this semi-infinite optimization problem of portfolio selection using outer approximation methods. The approach can be viewed as a development of the Eaves-Zangwill method that applies a multi-start technique in every iteration to search for the relevant constraint parameters. The stochastic outer approximations method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution, and others, is based on optimality criteria for quasi-optimal functions. As a result, we obtain a mathematical model and the optimal investment portfolio when the yields are not known in advance. Finally, we apply this algorithm to a specific case of a Colombian bank.
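A minimal one-variable sketch of the outer approximation idea described above: the infinite constraint family is handled through a finite, growing working set, and the most violated constraint is added at each iteration until no constraint is violated. The toy problem and function names are ours; the paper's stochastic, multi-start variant on the full portfolio model is considerably richer.

```python
def outer_approximation(g, t_grid, tol=1e-8, max_iter=100):
    """Solve the toy semi-infinite program
        min x  subject to  x >= g(t) for all t in T
    by outer approximation. `t_grid` is a fine discretization of T used
    to locate the most violated constraint."""
    working = [t_grid[0]]  # initial finite constraint set
    x = g(working[0])
    for _ in range(max_iter):
        # Finite subproblem: min x s.t. x >= g(t) for t in working set.
        x = max(g(t) for t in working)
        # Separation step: find the most violated constraint over T.
        t_star = max(t_grid, key=lambda t: g(t) - x)
        if g(t_star) - x <= tol:
            return x, sorted(working)  # all constraints satisfied
        working.append(t_star)
    return x, sorted(working)
```

With g(t) = t - t² on [0, 1], the method converges to x = 0.25 after adding the single binding constraint at t = 0.5, illustrating how only a few of the infinitely many constraints are ever active.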

Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution

Procedia PDF Downloads 309
31560 Cable De-Commissioning of Legacy Accelerators at CERN

Authors: Adya Uluwita, Fernando Pedrosa, Georgi Georgiev, Christian Bernard, Raoul Masterson

Abstract:

CERN is an international organisation funded by 23 member states that provides the particle physics community with excellence in particle accelerators and other related facilities. Founded in 1954, CERN operates a wide range of accelerators that allow groundbreaking science to be conducted. Accelerators bring particles to high levels of energy and make them collide with each other or with fixed targets, creating specific conditions that are of high interest to physicists. A chain of accelerators is used to ramp up the energy of particles and eventually inject them into the largest and most recent one: the Large Hadron Collider (LHC). Among this chain of machines is, for instance, the Proton Synchrotron, which started up in 1959 and is still in operation. These machines, called "injectors", keep evolving over time, as does the related infrastructure. Massive decommissioning of obsolete cables started at CERN in 2015 as part of the "injectors de-cabling project phase 1". Its goal was to replace aging cables and remove unused ones, freeing space for the new cables needed for upgrade and consolidation campaigns. To carry out the de-cabling, a project coordination team was assembled. The start of this project led to the investigation of legacy cables throughout the organisation; identifying cables stacked over half a century proved arduous. Phase 1 of the injectors de-cabling ran for three years and was completed successfully after overcoming some difficulties. Phase 2, started three years later, focused on improving safety and structure through the introduction of a quality assurance procedure. This paper discusses the implementation of this quality assurance procedure throughout phase 2 of the project and the transition between the two phases. Hundreds of kilometres of cable were removed in the injectors complex at CERN between 2015 and 2023.

Keywords: CERN, de-cabling, injectors, quality assurance procedure

Procedia PDF Downloads 93
31559 From Intuitive to Constructive Audit Risk Assessment: A Complementary Approach to CAATTs Adoption

Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy

Abstract:

The use of the audit risk model in auditing has faced limitations and difficulties, leading auditors to rely on a conceptual level of its application. The qualitative approach to assessing risks has resulted in divergent risk assessments, affecting the quality of audits and decision-making on the adoption of CAATTs. This study aims to investigate the risk factors impacting the implementation of the audit risk model and to propose a complementary risk-based instrument, key risk indicators (KRIs), to form substantive risk judgments and mitigate a heightened risk of material misstatement (RMM). The study addresses the question of how risk factors impact the implementation of the audit risk model, improve risk judgments, and aid in the adoption of CAATTs. The study uses a three-stage scale development procedure involving a pretest and a subsequent study with two independent samples. The pretest involves an exploratory factor analysis, while the subsequent study employs confirmatory factor analysis for construct validation. Additionally, the authors test the ability of the KRIs to predict the audit effort needed to mitigate a heightened RMM. Data were collected through two independent samples involving 767 participants and analyzed using exploratory and confirmatory factor analysis to assess scale validity and construct validation. The suggested KRIs, comprising two risk components and seventeen risk items, are found to have high predictive power in determining the audit effort needed to reduce the RMM. The study validates the suggested KRIs as an effective instrument for risk assessment and for decision-making on the adoption of CAATTs. This study contributes to the existing literature by implementing a holistic approach to risk assessment and providing a quantitative expression of assessed risks. It bridges the gap between intuitive risk evaluation and the theoretical domain, clarifying the mechanism of risk assessments. It also helps improve the uniformity and quality of risk assessments, aiding audit standard-setters in issuing updated guidelines on CAATT adoption. A few limitations and recommendations for future research should be mentioned. First, the scale was developed in the Israeli auditing market, which follows the International Standards on Auditing (ISAs). Although ISAs are adopted in European countries, for greater generalizability, future studies could focus on other countries that adopt additional or local auditing standards. Second, this study revealed risk factors that have a material impact on the assessed risk, but there could be additional risk factors that influence the assessment of the RMM. Future research could therefore investigate other risk segments, such as operational and financial risks, to bring broader generalizability to our results. Third, although the sample size in this study fits acceptable scale development procedures and enables drawing conclusions from the body of research, future research may develop standardized measures based on larger samples to reduce the generation of equivocal results and to suggest an extended risk model.

Keywords: audit risk model, audit efforts, CAATTs adoption, key risk indicators, sustainability

Procedia PDF Downloads 77
31558 Osteoprotegerin and Osteoprotegerin/TRAIL Ratio are Associated with Cardiovascular Dysfunction and Mortality among Patients with Renal Failure

Authors: Marek Kuźniewski, Magdalena B. Kaziuk, Danuta Fedak, Paulina Dumnicka, Ewa Stępień, Beata Kuśnierz-Cabala, Władysław Sułowicz

Abstract:

Background: A high prevalence of cardiovascular morbidity and mortality is observed among patients with chronic kidney disease (CKD), especially those undergoing dialysis. Osteoprotegerin (OPG) and its ligands, receptor activator of nuclear factor kappa-B ligand (RANKL) and tumor necrosis factor-related apoptosis-inducing ligand (TRAIL), have been associated with cardiovascular complications. Our aim was to study their role as cardiovascular risk factors in stage 5 CKD patients. Methods: OPG, RANKL, and TRAIL concentrations were measured in 69 hemodialyzed CKD patients and 35 healthy volunteers. In CKD patients, cardiovascular dysfunction was assessed with aortic pulse wave velocity (AoPWV), carotid artery intima-media thickness (CCA-IMT), coronary artery calcium score (CaSc), and N-terminal pro-B-type natriuretic peptide (NT-proBNP) serum concentration. Cardiovascular and overall mortality data were collected during a 7-year follow-up. Results: OPG plasma concentrations were higher in CKD patients compared with controls. Total soluble RANKL was lower and the OPG/RANKL ratio higher in patients. Soluble TRAIL concentrations did not differ between the groups, and the OPG/TRAIL ratio was higher in CKD patients. OPG and OPG/TRAIL positively predicted long-term all-cause and cardiovascular mortality in CKD patients. OPG correlated positively with AoPWV, CCA-IMT, and NT-proBNP, whereas OPG/TRAIL correlated with AoPWV and NT-proBNP. The described relationships were independent of classical and non-classical cardiovascular risk factors, with the exception of age. Conclusions: Our study confirmed the role of OPG as a biomarker of cardiovascular dysfunction and a predictor of mortality in stage 5 CKD. The OPG/TRAIL ratio can be proposed as a predictor of cardiovascular dysfunction and mortality.

Keywords: osteoprotegerin, tumor necrosis factor-related apoptosis-inducing ligand, receptor activator of nuclear factor kappa-B ligand, hemodialysis, chronic kidney disease, cardiovascular disease

Procedia PDF Downloads 334
31557 Optimizing Logistics for Courier Organizations with Considerations of Congestions and Pickups: A Courier Delivery System in Amman as Case Study

Authors: Nader A. Al Theeb, Zaid Abu Manneh, Ibrahim Al-Qadi

Abstract:

The traveling salesman problem (TSP) is a combinatorial integer optimization problem that asks: "What is the optimal route for a vehicle to traverse in order to deliver requests to a given set of customers?" It is widely used by package carrier companies' distribution centers. The main goal of applying the TSP in courier organizations is to minimize the time it takes the courier on each trip to deliver or pick up shipments during a day. In this article, an optimization model is constructed to create a new TSP variant that optimizes routing in a courier organization while accounting for congestion in Amman, the capital of Jordan. Real data were collected by different methods and analyzed. IBM ILOG CPLEX (via Concert Technology) was then used to solve the proposed model for randomly generated data instances and for the real collected data. The results show a great improvement in trip time compared with the current trips, and an economic study was subsequently conducted to assess the impact of using such models.
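As a minimal baseline illustration of the underlying TSP (not the paper's CPLEX model with pickups), here is an exact brute-force search over a small congestion-adjusted travel-time matrix; the matrix entries are invented, and the asymmetry (time[i][j] != time[j][i]) is what congestion introduces.

```python
from itertools import permutations

def best_route(time, depot=0):
    """Exact TSP by enumeration on a small instance: find the visit
    order minimizing total travel time, starting and ending at the
    depot. time[i][j] is the congestion-adjusted travel time from stop
    i to stop j (need not be symmetric)."""
    stops = [i for i in range(len(time)) if i != depot]
    best_t, best_r = float("inf"), None
    for perm in permutations(stops):
        route = (depot,) + perm + (depot,)
        t = sum(time[a][b] for a, b in zip(route, route[1:]))
        if t < best_t:
            best_t, best_r = t, route
    return best_t, best_r

# Hypothetical asymmetric travel times (minutes) between a depot (0)
# and three customers.
times = [
    [0, 10, 15, 20],
    [5,  0,  9, 10],
    [6, 13,  0, 12],
    [8,  8,  9,  0],
]
```

Enumeration is only viable for a handful of stops (n! routes), which is why realistic instances are handed to an integer-programming solver such as CPLEX.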

Keywords: traveling salesman problem, congestion, pick-up, integer programming, package carriers, service engineering

Procedia PDF Downloads 429
31556 Chronic Impact of Silver Nanoparticle on Aerobic Wastewater Biofilm

Authors: Sanaz Alizadeh, Yves Comeau, Arshath Abdul Rahim, Sunhasis Ghoshal

Abstract:

The application of silver nanoparticles (AgNPs) in personal care products and various household and industrial products has resulted in an inevitable environmental exposure to such engineered nanoparticles (ENPs). Ag ENPs, released via household and industrial wastes, reach water resource recovery facilities (WRRFs), yet the fate and transport of ENPs in WRRFs and their potential risk to biological wastewater processes are poorly understood. Accordingly, our main objective was to elucidate the impact of long-term continuous exposure to AgNPs on the biological activity of aerobic wastewater biofilm. The fate, transport, and toxicity of 10 μg/L and 100 μg/L PVP-stabilized AgNPs (50 nm) were evaluated in an attached-growth biological treatment process using lab-scale moving bed bioreactors (MBBRs). Two MBBR systems for organic matter removal were fed with a synthetic influent and operated at a hydraulic retention time (HRT) of 180 min and a 60% volumetric filling ratio of Anox-K5 carriers with a specific surface area of 800 m²/m³. Both reactors were operated for 85 days after reaching steady-state conditions to develop a mature biofilm. The impact of AgNPs on the biological performance of the MBBRs was characterized over a period of 64 days in terms of filtered biodegradable COD (SCOD) removal efficiency, biofilm viability, and key enzymatic activities (α-glucosidase and protease). The AgNPs were quantitatively characterized using single-particle inductively coupled plasma mass spectrometry (spICP-MS), which simultaneously determines the particle size distribution, particle concentration, and dissolved silver content in influent, bioreactor, and effluent samples. The generation of reactive oxygen species and the resulting oxidative stress were assessed as the proposed toxicity mechanism of AgNPs.
Results indicated that a low concentration of AgNPs (10 μg/L) did not significantly affect the SCOD removal efficiency, whereas a significant reduction in treatment efficiency (37%) was observed at 100 μg/L AgNPs. Neither the viability nor the enzymatic activities of the biofilm were affected at 10 μg/L AgNPs, but the higher concentration of AgNPs induced cell membrane integrity damage, resulting in a 31% loss of viability and reducing the α-glucosidase and protease enzymatic activities by 31% and 29%, respectively, over the 64-day exposure period. The elevated intracellular ROS in the biofilm at the higher AgNPs concentration over time was consistent with the reduced biological performance of the biofilm, confirming the occurrence of nanoparticle-induced oxidative stress in the heterotrophic biofilm. The spICP-MS analysis demonstrated a decrease in nanoparticle concentration over the first 25 days, indicating significant partitioning of AgNPs into the biofilm matrix in both reactors. However, the concentration of nanoparticles in the effluent of both reactors increased after 25 days, indicating a decreased retention capacity of AgNPs in the biofilm. The observed significant detachment of biofilm also contributed to a higher release of nanoparticles, owing to the cell-wall-destabilizing properties of AgNPs as an antimicrobial agent. The removal efficiency of PVP-AgNPs and the biological responses of the biofilm were a function of nanoparticle concentration and exposure time. This study contributes to a better understanding of the fate and behavior of AgNPs in biological wastewater processes, providing key information that can be used to predict the environmental risks of ENPs in aquatic ecosystems.

Keywords: biofilm, silver nanoparticle, single particle ICP-MS, toxicity, wastewater

Procedia PDF Downloads 268
31555 Stability and Rheology of Sodium Diclofenac-Loaded and Unloaded Palm Kernel Oil Esters Nanoemulsion Systems

Authors: Malahat Rezaee, Mahiran Basri, Raja Noor Zaliha Raja Abdul Rahman, Abu Bakar Salleh

Abstract:

Sodium diclofenac is one of the most commonly used nonsteroidal anti-inflammatory drugs (NSAIDs). It is especially effective in controlling severe conditions of inflammation and pain, musculoskeletal disorders, arthritis, and dysmenorrhea. Formulation as nanoemulsions is one of the nanoscience approaches that have been progressively considered in pharmaceutical science for the transdermal delivery of drugs. Nanoemulsions are a type of emulsion with particle sizes ranging from 20 nm to 200 nm. An emulsion is formed by the dispersion of one liquid, usually the oil phase, in another immiscible liquid, the water phase, stabilized using a surfactant. Palm kernel oil esters (PKOEs), in comparison to other oils, contain higher amounts of shorter-chain esters, making them suitable for micro- and nanoemulsion systems as a carrier for actives, with excellent wetting behavior and no oily feel. This research aimed to study the effect of the O/S ratio on the stability and rheological behavior of sodium diclofenac-loaded and unloaded palm kernel oil ester nanoemulsion systems. The effect of O/S ratios of 0.25, 0.50, 0.75, 1.00, and 1.25 on the stability of drug-loaded and unloaded nanoemulsion formulations was evaluated by centrifugation, freeze-thaw cycle, and storage stability tests. Lecithin and Cremophor EL were used as surfactants. The stability of the prepared nanoemulsion formulations was assessed based on the change in zeta potential and droplet size as a function of time. Instability mechanisms for the nanoemulsion system, including coalescence and Ostwald ripening, are discussed. Compared with the drug-unloaded formulations, the drug-loaded formulations exhibited smaller particle sizes and higher stability. In addition, an O/S ratio of 0.5 was found to be the best ratio of oil to surfactant for producing a nanoemulsion with the highest stability.
The effect of the O/S ratio on the rheological properties of drug-loaded and unloaded nanoemulsion systems was studied by plotting flow curves of shear stress (τ) and viscosity (η) as a function of shear rate (γ). The data were fitted to the Power Law model. The results showed that all nanoemulsion formulations exhibited non-Newtonian, shear-thinning flow behaviour. Viscosity and yield stress were also evaluated. The nanoemulsion formulation with an O/S ratio of 0.5 showed higher viscosity and K values. In addition, the sodium diclofenac-loaded formulations had higher viscosity and yield stress than the drug-unloaded formulations.
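A minimal sketch of the Power Law (Ostwald-de Waele) fit mentioned above, recovering the consistency index K and flow behaviour index n from flow-curve data by least squares on the log-log form; the function name and sample data are ours.

```python
import math

def fit_power_law(shear_rates, shear_stresses):
    """Fit tau = K * gamma**n by linear least squares on
    log(tau) = log(K) + n * log(gamma). Returns (K, n);
    n < 1 indicates shear-thinning behaviour."""
    xs = [math.log(g) for g in shear_rates]
    ys = [math.log(t) for t in shear_stresses]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    n = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    K = math.exp(ybar - n * xbar)
    return K, n
```

On synthetic data generated with K = 2 and n = 0.5, the fit recovers both parameters, and the recovered n < 1 is exactly the shear-thinning signature reported for the nanoemulsions.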

Keywords: nanoemulsions, palm kernel oil esters, sodium diclofenac, rheology, stability

Procedia PDF Downloads 423
31554 Heavy Metals (Pb, Cu, Fe, and Zn) Level in Shellfish (Etheria elliptica), Water, and Sediments of River Ogbese, Ondo State, Nigeria

Authors: O. O. Olawusi-Peters, O. E. Aguda, F. O. Okoye

Abstract:

Investigations of the accumulation of heavy metals in the water and sediments of River Ogbese were carried out between December 2010 and February 2011 using an Atomic Absorption Spectrophotometer. Etheria elliptica, a sessile organism, was also used to determine the concentration of heavy metals in the aquatic environment. In water, Cu had the highest concentration (0.13-0.55 mg/l ± 0.1), while in sediments the highest values were obtained for Fe (1.46-3.89 mg/l ± 0.27). The minimum concentrations recorded were for Pb, which was below the detectable level. The results also revealed that the shell accumulated more heavy metals than the flesh of the mussel, with Cu in the shell exhibiting a negative correlation with all the metals in the flesh. However, the condition factor (K) value of 6.44 is an indication of good health. The length-weight relationship is expressed as W = -0.48·L^1.94 (r² = 0.29), showing the growth pattern to be negatively allometric.
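The two summary statistics reported here can be sketched as follows. Note that we read the length-weight relationship in its usual log-transformed form (log W = -0.48 + 1.94 log L); that reading, and the function names, are our assumption about the abstract's notation rather than something the abstract states.

```python
def fulton_condition_factor(weight_g, length_cm):
    """Fulton's condition factor K = 100 * W / L**3 (W in g, L in cm);
    values well above ~1 are usually read as good condition."""
    return 100.0 * weight_g / length_cm ** 3

def allometric_weight(length_cm, log_a=-0.48, b=1.94):
    """Length-weight relationship read as log W = log_a + b * log L,
    i.e. W = 10**log_a * L**b; b < 3 indicates negatively
    allometric growth, as reported for this mussel population."""
    return 10 ** log_a * length_cm ** b
```

For example, a 10 g mussel of 5 cm length gives K = 8.0; the default coefficients reproduce the abstract's fitted curve under the log-form assumption above.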

Keywords: condition factor, Etheria elliptica, heavy metals, River Ogbese

Procedia PDF Downloads 477
31553 Development of Multilayer Capillary Copper Wick Structure using Microsecond CO₂ Pulsed Laser

Authors: Talha Khan, Surendhar Kumaran, Rajeev Nair

Abstract:

The development of economical, efficient, and reliable next-generation thermal and water management systems to provide efficient cooling and water management technologies is being pursued for application in compact and lightweight spacecraft. Liquid-vapor phase-change-based thermal and water management systems are being phased out because of issues with the reliability and robustness of that technology. This study therefore proposes an innovative evaporator and condenser design utilizing bimodal wicks manufactured using a microsecond pulsed CO₂ laser. Cylindrical, multilayered capillary copper wicks with a substrate diameter of 39 mm are additively manufactured using a pulsed laser. The copper particles used for layer-by-layer addition on the substrate measure 225 to 450 micrometers in diameter. The primary objective is to develop a novel, high-quality, fast-turnaround, laser-based additive manufacturing process that will eliminate the technical challenges of traditional manufacturing processes for nano/micro-sized powders, such as particle agglomeration. A raster-scanned, pulsed-laser sintering process has been developed to manufacture 3D wicks with controlled porosity and permeability.

Keywords: liquid-vapor phase change, bimodal wicks, multilayered, capillary, raster-scanned, porosity, permeability

Procedia PDF Downloads 191
31552 Business Continuity Risk Review for a Large Petrochemical Complex

Authors: Michel A. Thomet

Abstract:

A discrete-event simulation model was used to perform a Reliability-Availability-Maintainability (RAM) study of a large petrochemical complex comprising sixteen process units and seven feed and intermediate streams. All the feeds and intermediate streams have associated storage tanks, so that if a processing unit fails and shuts down, the downstream units can keep producing their outputs. This also helps the upstream units, which do not have to reduce their outputs but can store their excess production until the failed unit restarts. Each process unit and each pipe section carrying the feeds and intermediate streams has a probability of failure with an associated distribution and a Mean Time Between Failures (MTBF), as well as a distribution of the time to restore and a Mean Time To Restore (MTTR). The utilities supporting the process units can also fail and have their own distributions with specific MTBF and MTTR values. The model is run for ten years or more, and the runs are repeated several times to obtain statistically relevant results. One of the main results is the On-Stream Factor (OSF) of each process unit, the percentage of hours in a year when the unit is running in nominal conditions. One objective of the study was to investigate whether the storage capacity for each feed and intermediate stream was adequate. This was done by increasing the storage capacities in several steps and rerunning the simulation to see whether, and by how much, the OSFs improved. Other objectives were to determine whether utility failures were an important factor in the overall OSF, and what could be done to reduce their failure rates through redundant equipment.
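A minimal single-unit sketch of the availability logic behind the On-Stream Factor: alternating exponentially distributed up and down periods with the given MTBF and MTTR. The full study's multi-unit model with storage buffers and utility failures is far richer; the parameters and function name here are invented for illustration.

```python
import random

def on_stream_factor(mtbf_h, mttr_h, horizon_h=10 * 8760, seed=1):
    """Monte Carlo estimate of one unit's On-Stream Factor: simulate
    alternating up times (exponential, mean MTBF) and repair times
    (exponential, mean MTTR) over the horizon and return the fraction
    of time the unit runs."""
    rng = random.Random(seed)
    t, up_time = 0.0, 0.0
    while t < horizon_h:
        up = rng.expovariate(1.0 / mtbf_h)        # time until next failure
        up_time += min(up, horizon_h - t)          # clip at end of horizon
        t += up
        if t >= horizon_h:
            break
        t += rng.expovariate(1.0 / mttr_h)         # repair (down) time
    return up_time / horizon_h
```

For a single unit with no buffering, the long-run OSF approaches the analytic availability MTBF / (MTBF + MTTR); the simulation becomes necessary once storage tanks, pipe failures, and utility outages couple the units together.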

Keywords: business continuity, on-stream factor, petrochemical, RAM study, simulation, MTBF

Procedia PDF Downloads 219
31551 Meta Mask Correction for Nuclei Segmentation in Histopathological Image

Authors: Jiangbo Shi, Zeyu Gao, Chen Li

Abstract:

Nuclei segmentation is a fundamental task in digital pathology analysis and can be automated by deep learning-based methods. However, developing such an automated method requires a large amount of data with precisely annotated masks, which are hard to obtain. Training with weakly labeled data is a popular solution for reducing the annotation workload. In this paper, we propose a novel meta-learning-based nuclei segmentation method that follows the label correction paradigm to leverage data with noisy masks. Specifically, we design a fully convolutional meta-model that can correct noisy masks using a small amount of clean meta-data. The corrected masks are then used to supervise the training of the segmentation model. Meanwhile, a bi-level optimization method is adopted to alternately update the parameters of the main segmentation model and the meta-model. Extensive experimental results on two nuclei segmentation datasets show that our method achieves state-of-the-art results. In particular, in some noise scenarios, it even exceeds the performance of training on fully supervised data.

Keywords: deep learning, histopathological image, meta-learning, nuclei segmentation, weak annotations

Procedia PDF Downloads 140