Search results for: linear acceleration method
19386 Multivalued Behavior for a Two-Level System Using Homotopy Analysis Method
Authors: Angelo I. Aquino, Luis Ma. T. Bo-ot
Abstract:
We use the Homotopy Analysis Method (HAM) to solve the system of equations modeling the two-level system and extract results that point to turbulent behavior. We treat multi-valued solutions as indicative of turbulence or turbulent-like behavior. We examine different specific cases that result in multi-valued velocities. The solutions are in series form, and the application of HAM ensures convergence in some region.
Keywords: multivalued solutions, homotopy analysis method, two-level system, equation
Procedia PDF Downloads 593
19385 Inference for Synthetic Control Methods with Multiple Treated Units
Authors: Ziyan Zhang
Abstract:
Although the Synthetic Control Method (SCM) is now widely applied, its most commonly used inference method, the placebo test, is often problematic, especially when the treatment is not uniquely assigned. This paper discusses the problems with the placebo test in the multivariate treatment case. To improve the power of inference, I further propose an Andrews-type procedure, as it potentially resolves some drawbacks of the placebo test. Simulations show that the Andrews test is often valid and powerful compared with the placebo test.
Keywords: synthetic control method, multiple treatments, Andrews' test, placebo test
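The placebo test under discussion can be sketched as a permutation-style p-value; the effect sizes below are simulated, not from the paper:

```python
import numpy as np

# Simulated effects: the treated unit's post-treatment gap is compared with
# gaps from placebo runs on the untreated units; the p-value is the
# rank-based tail share.
def placebo_p_value(treated_effect, placebo_effects):
    """Share of gaps (placebos plus the treated unit itself) at least as
    extreme in absolute value as the treated gap."""
    gaps = np.append(placebo_effects, treated_effect)
    return float(np.mean(np.abs(gaps) >= abs(treated_effect)))

rng = np.random.default_rng(0)
placebos = rng.normal(0.0, 1.0, size=19)   # gaps for 19 untreated units
p = placebo_p_value(3.5, placebos)         # a clearly large treated gap
```

With 19 placebos the smallest attainable p-value is 1/20 = 0.05, one reason inference becomes awkward when several units are treated and the donor pool shrinks.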
Procedia PDF Downloads 164
19384 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step-Stress Acceleration Life Testing Plan under Weibull Life Distribution
Authors: Saleem Z. Ramadan
Abstract:
This paper discusses the effects of using progressive Type-I right censoring on the design of simple step-stress accelerated life testing, using a Bayesian approach for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the pth percentile time to failure. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results show that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. The results also show that the choice of direct or indirect priors affects the precision of the test.
Keywords: reliability, accelerated life testing, cumulative exposure model, Bayesian estimation, progressive Type-I censoring, Weibull distribution
Procedia PDF Downloads 505
19383 Application of Compressed Sensing Method for Compression of Quantum Data
Authors: M. Kowalski, M. Życzkowski, M. Karol
Abstract:
Current quantum key distribution (QKD) systems offer low bit rates of up to single megahertz. Compared with conventional optical fiber links with multi-gigahertz bit rates, the parameters of recent QKD systems are significantly lower. In this article we present the concept of applying the compressed sensing method to the compression of quantum information. The compression methodology, the signal reconstruction method, and initial results on improving the throughput of a quantum information link are presented.
Keywords: quantum key distribution systems, fiber optic system, compressed sensing
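The reconstruction side of compressed sensing can be sketched with orthogonal matching pursuit on synthetic data; this is a generic CS illustration, not the authors' QKD pipeline:

```python
import numpy as np

# Recover a k-sparse vector from m < n random projections y = A @ x
# using orthogonal matching pursuit (OMP).
def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)      # random sensing matrix
x = np.zeros(n)
x[[5, 20, 40]] = [1.0, -2.0, 1.5]             # sparse "signal"
x_hat = omp(A, A @ x, k)                      # recovery from 32 of 64 samples
```

The signal is measured with half as many samples as its length yet recovered exactly, which is the compression effect the abstract aims to exploit.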
Procedia PDF Downloads 694
19382 The Effect of Critical Activity on Critical Path and Project Duration in Precedence Diagram Method
Abstract:
The additional relationships, i.e., start-to-start, finish-to-finish, and start-to-finish, between activities in the Precedence Diagram Method (PDM) provide a more flexible schedule than the traditional Critical Path Method (CPM). However, changing the duration of critical activities in a PDM network can have an anomalous effect on the critical path and the project completion date. In this study, we classified critical activities into two groups, i.e., (1) activities on a single critical path and (2) activities on multiple critical paths, and six classes, i.e., normal, reverse, neutral, perverse, decrease-reverse, and increase-normal, based on their effects on project duration in PDM. Furthermore, we determined the maximum float by which the duration of each type of critical activity can be changed without affecting the project duration. This study helps the project manager clearly understand the behavior of each critical activity on the critical path, so that he/she can change the project duration by shortening or lengthening activities based on the project budget and deadline.
Keywords: construction management, critical path method, project scheduling network, precedence diagram method
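The "reverse" anomaly described above, where lengthening a critical activity shortens the project, can be reproduced with a minimal PDM forward pass; the three-activity network and lags below are hypothetical:

```python
# Hypothetical three-activity network: B is tied to A by a finish-to-finish
# link and drives C by a start-to-start link.
def pdm_duration(durs, links):
    """durs: {activity: duration}, topologically ordered; links: tuples
    (pred, succ, type, lag) with type in {'FS', 'SS', 'FF', 'SF'}.
    Returns the project duration from a PDM forward pass."""
    es = {a: 0 for a in durs}                 # earliest starts
    for a in durs:
        for pred, succ, typ, lag in links:
            if succ != a:
                continue
            if typ == 'FS':
                cand = es[pred] + durs[pred] + lag
            elif typ == 'SS':
                cand = es[pred] + lag
            elif typ == 'FF':                 # constraint on the finish
                cand = es[pred] + durs[pred] + lag - durs[a]
            else:                             # 'SF'
                cand = es[pred] + lag - durs[a]
            es[a] = max(es[a], cand)
    return max(es[a] + durs[a] for a in durs)

links = [('A', 'B', 'FF', 3), ('B', 'C', 'SS', 2)]
d1 = pdm_duration({'A': 5, 'B': 4, 'C': 6}, links)  # B lasts 4: project = 12
d2 = pdm_duration({'A': 5, 'B': 6, 'C': 6}, links)  # B lasts 6: project = 10
```

Because B's finish is pinned by A while its start drives C, stretching B pulls its start (and hence C) earlier, so the project finishes sooner, which is exactly the kind of anomalous behavior the abstract classifies.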
Procedia PDF Downloads 222
19381 From Paper to the Ether: The Innovative and Historical Development of Distance Education from Correspondence to On-Line Learning and Teaching in Queensland Universities over the past Century
Authors: B. Adcock, H. van Rensburg
Abstract:
Education is ever-changing to keep up with innovative technological development and the rapid acceleration of globalisation. This chapter introduces the historical development and transformation of teaching in distance education, from correspondence to on-line learning, in Queensland universities. It furthermore investigates changes to the delivery models of distance education that have impacted teaching at the tertiary level in Queensland, and reflects on the social changes that have taken place during the past 100 years. This includes an analysis of the following five periods in time: the foundation period (1911-1919), including World War I; 1920-1939, including the Great Depression; 1940-1970s, including World War II and the post-war reconstruction; and the current technological era (1980s to present). In Queensland, the concept of distance education began at the University of Queensland (UQ) in 1911, when it started offering extension courses. The introduction of modern technology, in the form of electronic delivery, dramatically changed tertiary distance education due to political initiatives. The inclusion of electronic delivery in education signifies change at many levels, including policy, pedagogy, curriculum, and governance. Changes in delivery affect not only the way study materials are delivered but also the way courses are taught and the adjustments academics make to their teaching methods.
Keywords: distance education, innovative technological development, on-line education, tertiary education
Procedia PDF Downloads 504
19380 Culvert Blockage Evaluation Using Australian Rainfall and Runoff 2019
Authors: Rob Leslie, Taher Karimian
Abstract:
The blockage of cross drainage structures is a risk that needs to be understood and managed, or lessened through design. A blockage is a random event, influenced by site-specific factors, which needs to be quantified for design. Under- and overestimation of blockage can have major impacts on flood risk and on the cost associated with drainage structures. The importance of this matter is heightened for projects located within sensitive lands. It is a particularly complex problem for large linear infrastructure projects (e.g., rail corridors) located within floodplains, where blockage factors can influence flooding upstream and downstream of the infrastructure. The selection of appropriate blockage factors for hydraulic modeling has been subject to extensive research by hydraulic engineers. This paper reviews the current Australian Rainfall and Runoff 2019 (ARR 2019) methodology for blockage assessment by applying it to a transport corridor brownfield upgrade case study in New South Wales. The results of applying the method are also validated against asset data and maintenance records. ARR 2019, Book 6, Chapter 6 includes advice and an approach for estimating the blockage of bridges and culverts. This paper concentrates specifically on the blockage of cross drainage structures. The method has been developed to estimate the blockage level for culverts affected by sediment or debris due to flooding. The objective of the approach is to evaluate a numerical blockage factor that can be utilized in a hydraulic assessment of cross drainage structures. The project included an assessment of over 200 cross drainage structures. In order to estimate a blockage factor for use in the hydraulic model, a process was developed that considers the qualitative factors (e.g., debris type, debris availability) and the site-specific hydraulic factors that influence blockage. A site rating associated with the debris potential (i.e., availability, transportability, mobility) at each crossing was completed using the method outlined in the ARR 2019 guidelines. The hydraulic inputs (i.e., flow velocity, flow depth) and qualitative factors at each crossing were fed into a spreadsheet in which the design blockage level for each cross drainage structure was determined based on the condition relating the Inlet Clear Width, L10 (the average length of the longest 10% of the debris reaching the site), and the Adjusted Debris Potential. Asset data, including site photos and maintenance records, were then reviewed and compared with the blockage assessment to check the validity of the results. The results of this assessment demonstrate that the blockage factors estimated at each crossing location using the ARR 2019 guidelines are well validated by the asset data. The primary finding of the study is that the ARR 2019 methodology is a suitable approach for culvert blockage assessment, validated against a case study spanning a large geographical area and multiple sub-catchments. The study also found that the methodology can be effectively coded within a spreadsheet or similar analytical tool to automate its application.
Keywords: ARR 2019, blockage, culverts, methodology
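The spreadsheet decision logic described above can be sketched as a lookup on the debris-length condition and the site rating. The percentage values and the simple W-versus-L10 comparison below are placeholders chosen for demonstration only; they are not the ARR 2019 tables, which must be consulted directly for any design work:

```python
# ILLUSTRATIVE ONLY: placeholder blockage fractions, not ARR 2019 values.
def design_blockage_factor(inlet_clear_width, L10, adjusted_debris_potential):
    """Return a design inlet-blockage fraction for a culvert.

    inlet_clear_width: clear opening width (m)
    L10: average length of the longest 10% of debris reaching the site (m)
    adjusted_debris_potential: 'high' | 'medium' | 'low' site rating
    """
    large_debris = inlet_clear_width < L10   # debris longer than the opening
    table = {                                # placeholder percentages
        ('high', True): 1.00, ('medium', True): 0.50, ('low', True): 0.25,
        ('high', False): 0.25, ('medium', False): 0.10, ('low', False): 0.00,
    }
    return table[(adjusted_debris_potential, large_debris)]

bf = design_blockage_factor(inlet_clear_width=1.2, L10=2.5,
                            adjusted_debris_potential='high')
```

Coding the condition this way is what lets the assessment of 200+ structures be automated once the hydraulic inputs and site ratings are tabulated.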
Procedia PDF Downloads 363
19379 Magnetotelluric Method Approach for the 3-D Inversion of Geothermal System's Dissemination in Indonesia
Authors: Pelangi Wiyantika
Abstract:
Sustainable energy is a central concern in solving problems in the energy sector. One sustainable energy source that remains under-represented is geothermal energy, which has lately been developed as a promising renewable resource. Indonesia, which lies on the Ring of Fire, has many geothermal sources. This is a good opportunity to elaborate and learn more about geothermal power as a sustainable and renewable energy. Geothermal systems have a special characteristic: the source zone can be detected by measuring the resistivity of the subsurface. There are many methods for measuring the anomaly of such systems, and one of the best is the magnetotelluric approach. Magnetotellurics is a passive method in which resistivity is obtained from eddy currents induced in subsurface rocks by natural sources. These sources are the solar wind, with frequencies below 1 Hz, and lightning, with frequencies above 1 Hz.
Keywords: geothermal, magnetotelluric, renewable energy, resistivity, sustainable energy
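The link between the measured fields and subsurface resistivity rests on the Cagniard apparent-resistivity formula; a small sketch (the field amplitudes and frequency in any usage are illustrative):

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m

def apparent_resistivity(E, H, freq):
    """Cagniard apparent resistivity (ohm*m) from orthogonal electric
    (V/m) and magnetic (A/m) field amplitudes at frequency freq (Hz)."""
    omega = 2.0 * math.pi * freq
    Z = E / H                        # scalar impedance magnitude, ohms
    return Z ** 2 / (MU0 * omega)

def skin_depth(rho, freq):
    """Penetration depth (m): lower frequencies probe deeper, which is
    why MT exploits natural sources both below and above 1 Hz."""
    return math.sqrt(2.0 * rho / (MU0 * 2.0 * math.pi * freq))
```

For a 100 ohm·m half-space at 1 Hz the skin depth is roughly 5 km, matching the usual rule of thumb δ ≈ 503·√(ρ/f).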
Procedia PDF Downloads 303
19378 Oxidosqualene Cyclase: A Novel Inhibitor
Authors: Devadrita Dey Sarkar
Abstract:
Oxidosqualene cyclase (OSC) is a membrane-bound enzyme that helps form the steroid scaffold in higher organisms. In a highly selective cyclization reaction, oxidosqualene cyclase forms lanosterol, with seven chiral centres, starting from the linear substrate 2,3-oxidosqualene. In human cholesterol biosynthesis, OSC represents a target for the discovery of novel anticholesteraemic drugs that could complement the widely used statins. The enzyme oxidosqualene:lanosterol cyclase represents a novel target for the treatment of hypercholesterolemia. OSC catalyzes the cyclization of the linear 2,3-monoepoxysqualene to lanosterol, the initial four-ringed sterol intermediate in the cholesterol biosynthetic pathway. OSC also catalyzes the formation of 24(S),25-epoxycholesterol, a ligand activator of the liver X receptor. Inhibition of OSC reduces cholesterol biosynthesis and selectively enhances 24(S),25-epoxycholesterol synthesis. Through this dual mechanism, OSC inhibition decreases plasma levels of low-density lipoprotein (LDL) cholesterol and prevents cholesterol deposition within macrophages. The recent crystallization of OSC identifies the mechanism of action of this complex enzyme, setting the stage for the design of OSC inhibitors with improved pharmacological properties for cholesterol lowering and the treatment of atherosclerosis. While studying and designing an inhibitor of oxidosqualene cyclase, I worked on PDB entry 1W6K, the most extensively studied structure for this enzyme, and used several methods, techniques, and software packages to identify and validate the top candidate molecules that could act as inhibitors of oxidosqualene cyclase. By partial blockage of this enzyme, both an inhibition of lanosterol and subsequently cholesterol formation, as well as a concomitant effect on HMG-CoA reductase, can be achieved. Both effects complement each other and lead to an effective control of cholesterol biosynthesis. It is therefore concluded that 2,3-oxidosqualene cyclase plays a crucial role in the regulation of intracellular cholesterol homeostasis. 2,3-Oxidosqualene cyclase inhibitors offer an attractive approach to novel lipid-lowering agents.
Keywords: anticholesteraemic, crystallization, statins, homeostasis
Procedia PDF Downloads 351
19377 Relation between Roots and Tangent Lines of Function in Fractional Dimensions: A Method for Optimization Problems
Authors: Ali Dorostkar
Abstract:
In this paper, a basic schematic of the fractional-dimensional optimization problem is presented. A method is developed based on a relation between the roots and tangent lines of a function in fractional dimensions, starting from an arbitrary initial point. It is shown that for each polynomial function of order N, at least N tangent lines must exist in fractional dimensions 0 < α < N+1 that pass exactly through all roots of the proposed function. A geometrical analysis of tangent lines in fractional dimensions is also presented to clarify the proposed method more intuitively. Results show that with an appropriate selection of fractional dimensions, we can directly find the roots. The method thus offers a different direction for optimization problems through the use of fractional dimensions.
Keywords: tangent line, fractional dimension, root, optimization problem
Procedia PDF Downloads 192
19376 A Mixed Method Design to Studying the Effects of Lean Production on Job Satisfaction and Health Work in a French Context
Authors: Gregor Bouville, Celine Schmidt
Abstract:
This article presents a French case study on lean production drawing on a mixed method design, which has received little attention in French management research, especially in French human resources research. The purpose is to show that using a mixed method approach in this particular case overcomes the limitations of previous lean production studies. The authors use the embedded design, a particular articulation of mixed methods, to analyse and understand the effects of three organizational practices on job satisfaction and workers' health. Results show that low scheduled autonomy, quality management, and time constraints have deleterious effects on job satisfaction. Furthermore, these three practices have ambivalent effects on workers' health. Interest in mixed methods has been growing among French health researchers and practitioners, and recently among French management researchers as well. This study reinforces and refines how mixed methods may offer interesting perspectives in an integrated framework spanning the human resources, management, and health fields. Finally, the potential benefits and limits of such interdisciplinary research programs are discussed.
Keywords: lean production, mixed method, work organization practices, job satisfaction
Procedia PDF Downloads 359
19375 Effect of Thermal Radiation and Chemical Reaction on MHD Flow of Blood in Stretching Permeable Vessel
Authors: Binyam Teferi
Abstract:
In this paper, a theoretical analysis of blood flow in the presence of thermal radiation and chemical reaction, under the influence of a time-dependent magnetic field intensity, has been carried out. The unsteady nonlinear partial differential equations of blood flow consider a time-dependent stretching velocity; the energy equation also accounts for the time-dependent temperature of the vessel wall, and the concentration equation includes a time-dependent blood concentration. The governing nonlinear partial differential equations of motion, energy, and concentration are converted into ordinary differential equations using similarity transformations and solved numerically with ode45; MATLAB code is used for the analysis. The effects of the physical parameters, viz. the permeability parameter, unsteadiness parameter, Prandtl number, Hartmann number, thermal radiation parameter, chemical reaction parameter, and Schmidt number, on the flow variables, viz. the velocity, temperature, and concentration of the blood in the vessel, have been analyzed and discussed graphically. From the simulation study, the following important results are obtained: the velocity of blood flow increases as both the permeability and unsteadiness parameters increase; the temperature of the blood at the vessel wall increases as the Prandtl and Hartmann numbers increase; and the concentration of the blood decreases as the time-dependent chemical reaction parameter and the Schmidt number increase.
Keywords: stretching velocity, similarity transformations, time dependent magnetic field intensity, thermal radiation, chemical reaction
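The paper's coupled blood-flow equations are not reproduced in the abstract, so as a hedged illustration of the same workflow (similarity transformation, then an ode45-style Runge-Kutta integration with shooting on the unknown wall condition) the classical Blasius boundary-layer equation f''' + f·f'' = 0, f(0) = f'(0) = 0, f'(∞) = 1, can be solved for the unknown f''(0):

```python
# Analogous example only: Blasius equation, not the authors' blood-flow ODEs.
def fprime_at_infinity(s, eta_max=10.0, n=2000):
    """Integrate y = (f, f', f'') with classical RK4; return f'(eta_max)."""
    def F(y):
        return (y[1], y[2], -y[0] * y[2])   # f''' = -f * f''
    h = eta_max / n
    y = (0.0, 0.0, s)                        # shoot on f''(0) = s
    for _ in range(n):
        k1 = F(y)
        k2 = F(tuple(y[i] + 0.5 * h * k1[i] for i in range(3)))
        k3 = F(tuple(y[i] + 0.5 * h * k2[i] for i in range(3)))
        k4 = F(tuple(y[i] + h * k3[i] for i in range(3)))
        y = tuple(y[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
    return y[1]

lo, hi = 0.1, 1.0                            # bisection bracket for f''(0)
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if fprime_at_infinity(mid) < 1.0 else (lo, mid)
shoot = 0.5 * (lo + hi)                      # known value is about 0.4696
```

The similarity transformation is what collapses the unsteady PDEs to such a boundary-value ODE system in the first place; ode45 plays the role of the RK4 loop here.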
Procedia PDF Downloads 92
19374 Tool for Maxillary Sinus Quantification in Computed Tomography Exams
Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina
Abstract:
The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, heat or humidify inspired air, assist thermoregulation, impart resonance to the voice, among other roles. Thus, the real function of the MS is still uncertain. Furthermore, MS anatomy is complex and varies from person to person. Many diseases may affect the development of the sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables quantitative analysis. However, this is not always possible in the clinical routine, and when possible, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Currently, the available methods for MS segmentation are manual or semi-automatic, and manual methods present inter- and intra-individual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of the paranasal sinuses. This study was developed with ethical approval from the authors' institutions and national review panels. The research involved 30 retrospective exams from the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM) with features such as pixel value, spatial distribution, and shape.
The detected pixels are used as seed point for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied in all slices of CT exam, obtaining the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation realized by an experienced radiologist. For comparison, we used Bland-Altman statistics, linear regression, and Jaccard similarity coefficient. From the statistical analyses for the comparison between both methods, the linear regression showed a strong association and low dispersion between variables. The Bland-Altman analyses showed no significant differences between the analyzed methods. The Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to quantify MS volume proved to be robust, fast, and efficient, when compared with manual segmentation. Furthermore, it avoids the intra and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice. Thus, it may be useful in the diagnosis and treatment determination of MS diseases.
Keywords: maxillary sinus, support vector machine, region growing, volume quantification
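The region-growing stage of the pipeline can be illustrated with a minimal 2-D sketch; the SVM seed detection and the morphological clean-up are omitted, and the image and tolerance below are synthetic assumptions:

```python
import numpy as np
from collections import deque

# Synthetic 20x20 "slice": a dark 10x10 region on a bright background
# stands in for the sinus air space.
def region_grow(img, seed, tol=10):
    """4-connected region growing from seed: accept neighbouring pixels
    whose intensity is within tol of the seed intensity."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    ref = int(img[seed])
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < h and 0 <= cc < w and not mask[rr, cc]
                    and abs(int(img[rr, cc]) - ref) <= tol):
                mask[rr, cc] = True
                q.append((rr, cc))
    return mask

img = np.full((20, 20), 200, dtype=np.uint8)
img[5:15, 5:15] = 40                      # dark "sinus" region
mask = region_grow(img, (10, 10), tol=10)
```

Summing the mask over all slices, scaled by the voxel volume, is what turns per-slice segmentations into the MS volume estimate.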
Procedia PDF Downloads 504
19373 Training Can Increase Teachers' Knowledge and Skill in Measuring and Assessing Children's Nutritional Status
Authors: Herawati Tri Siswati, Nurhidayat Ana Sıdık Fatimah
Abstract:
The 2013 Indonesia Basic Health Research showed that, among children aged 6-12 years, the prevalence of stunting was 35.6%, wasting 12.2%, and obesity 9.2%. The Indonesian Government runs a School Health Program, held in a coordinated, planned, directed, and accountable manner, to develop and implement student health. However, its implementation is still below expectations, and the Indonesian Ministry of Health has initiated an acceleration of the School Health Program. This study aimed to determine the influence of training on the knowledge and skill of elementary school teachers in measuring and assessing the nutritional status of children. The research is quasi-experimental with a pre-post design, conducted in Sleman district, Yogyakarta province, Indonesia, in 2015. The subjects were all elementary school teachers responsible for the School Health Program in Gamping sub-district, Sleman, Yogyakarta, i.e., 32 persons. The independent variable is training, while the dependent variables are the teachers' knowledge and skill in measuring and assessing children's nutritional status. The data were analyzed by t-test. The results showed that the knowledge score before training was 31.6±9.7 and after 56.4±12.6, an increase of 24.8±15.7 (p=0.00). The skill score before training was 46.6±11.1 and after 61.7±13, an increase of 15.2±14.2 (p=0.00). Training can increase teachers' knowledge and skill in measuring and assessing nutritional status.
Keywords: training, school health program, nutritional status, children
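The reported gains can be sanity-checked by recomputing the paired t statistics from the summary data in the abstract (mean difference, SD of the differences, n = 32):

```python
import math

# Paired t statistic: t = mean_diff / (sd_diff / sqrt(n)),
# using the difference summaries reported in the abstract.
n = 32
t_knowledge = 24.8 / (15.7 / math.sqrt(n))   # knowledge gain 24.8 +/- 15.7
t_skill = 15.2 / (14.2 / math.sqrt(n))       # skill gain 15.2 +/- 14.2
```

Both statistics (about 8.9 and 6.1) far exceed the two-sided critical value of roughly 2.04 for 31 degrees of freedom, consistent with the reported p = 0.00.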
Procedia PDF Downloads 392
19372 Optimization of Processing Parameters of Acrylonitrile-Butadiene-Styrene Sheets Integrated by Taguchi Method
Authors: Fatemeh Sadat Miri, Morteza Ehsani, Seyed Farshid Hosseini
Abstract:
The present research is concerned with the optimization of the extrusion parameters of ABS sheets by the Taguchi experimental design method. In this design method, the effects of three parameters, recycled-ABS content, processing temperature, and degassing time, on the mechanical properties, hardness, HDT, and color matching of ABS sheets were investigated. According to the experimental test data, the highest tensile strength and HDT belong to the sample with 5% recycled ABS, a processing temperature of 230°C, and a degassing time of 3 hours. Additionally, the minimum MFI and color-matching values belong to this sample, too. The present results are in good agreement with the Taguchi method. Based on the outcomes of the Taguchi design method, degassing time has the greatest effect on the mechanical properties of ABS sheets.
Keywords: ABS, process optimization, Taguchi, mechanical properties
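The larger-the-better signal-to-noise ratio that Taguchi analysis uses to rank factor levels can be sketched as follows; the tensile-strength replicates are hypothetical, not the paper's measurements:

```python
import math

# Larger-the-better S/N ratio (dB): -10 * log10(mean(1 / y^2)).
def sn_larger_better(ys):
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

trial_a = [41.0, 42.5, 40.8]   # hypothetical replicates, e.g. 5% recycled ABS
trial_b = [35.2, 34.1, 36.0]   # hypothetical replicates for another trial
sn_a = sn_larger_better(trial_a)
sn_b = sn_larger_better(trial_b)
```

The trial with the higher S/N ratio is preferred; averaging S/N ratios per factor level across the orthogonal array is how the dominant factor (here, degassing time) is identified.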
Procedia PDF Downloads 73
19371 Optimization Method of Dispersed Generation in Electrical Distribution Systems
Authors: Mahmoud Samkan
Abstract:
Dispersed generation (DG) is a promising solution to many power system problems, such as voltage regulation and power loss. This paper proposes a heuristic two-step method to optimize the location and size of DG units so as to reduce active power losses and thereby improve the voltage profile in radial distribution networks. In addition to a DG unit placed at the system's load gravity center, the method assigns a DG unit to each lateral of the network. After the central DG placement has been determined, the location and size of each lateral DG unit are predetermined in the first step. The results are then refined in the second step. The method is tested on the 33-bus system for 100% DG penetration. The results obtained are compared with those of other methods found in the literature.
Keywords: optimal location, optimal size, dispersed generation (DG), radial distribution networks, reducing losses
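The idea of scanning candidate buses for the loss-minimizing DG placement can be sketched on a toy 4-bus radial feeder. All data below are hypothetical, and a real study would run a full power flow rather than this per-unit flow approximation:

```python
# Hypothetical radial line: buses 1..4; branch b carries the load of every
# bus at or beyond b. Voltages are taken as ~1.0 p.u., so branch current
# is approximated by downstream power flow.
loads = {1: 0.4, 2: 0.6, 3: 0.5, 4: 0.3}          # per-unit bus loads
branch_r = {1: 0.02, 2: 0.03, 3: 0.025, 4: 0.02}  # per-unit branch resistances

def feeder_losses(dg_bus, dg_size):
    """Approximate total I^2*R losses with one DG of dg_size at dg_bus."""
    total = 0.0
    for b in sorted(loads):
        flow = sum(p for bus, p in loads.items() if bus >= b)
        if dg_bus >= b:                # DG sits downstream of branch b
            flow -= dg_size
        total += branch_r[b] * flow ** 2
    return total

base = feeder_losses(1, 0.0)                           # no DG at all
best = min((feeder_losses(b, 0.8), b) for b in loads)  # fixed 0.8 p.u. DG
```

Placing the DG deep in the feeder (bus 3 here) relieves every upstream branch, which is the intuition behind the load-gravity-center placement in the first step.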
Procedia PDF Downloads 443
19370 Static and Dynamic Analysis of Hyperboloidal Helix Having Thin-Walled Open and Closed Sections
Authors: Merve Ermis, Murat Yılmaz, Nihal Eratlı, Mehmet H. Omurtag
Abstract:
The static and dynamic analyses of a hyperboloidal helix having closed and open square box sections are investigated via a mixed finite element formulation based on Timoshenko beam theory. The Frenet triad is considered as the local coordinate system for the helix geometry. The helix domain is discretized with two-noded curved elements, and linear shape functions are used. Each node of the curved element has 12 degrees of freedom, namely three translations, three rotations, two shear forces, one axial force, two bending moments, and one torque. The finite element matrices are derived using exact nodal values of curvature and arc length, interpolated linearly over the element's axial length. The torsional moments of inertia for the closed and open square box sections are obtained by a finite element solution of the St. Venant torsion formulation. With the proposed method, the torsional rigidity of simply and multiply connected cross-sections can also be calculated in the same manner. The influence of the closed and open square box cross-sections on the static and dynamic analyses of the hyperboloidal helix is investigated, and benchmark problems are presented for the literature.
Keywords: hyperboloidal helix, squared cross section, thin walled cross section, torsional rigidity
Procedia PDF Downloads 377
19369 Optimal Placement of the Unified Power Controller to Improve the Power System Restoration
Authors: Mohammad Reza Esmaili
Abstract:
One of the most important parts of the restoration process of a power network is the synchronization of its subsystems. In this situation, the biggest concern of the system operators is the reduction of the standing phase angle (SPA) between the endpoints of the two islands. In this regard, the system operators perform various actions and maneuvers so that the synchronization of the subsystems is successfully carried out and the system finally reaches acceptable stability. The most common of these actions include load control, generation control, and, in some cases, changing the network topology. Although these maneuvers are simple and common, due to the weak network and extreme load changes, the restoration proceeds slowly. One of the best ways to control the SPA is to use FACTS devices. By applying a soft control signal, these tools can reduce the SPA between two subsystems with greater speed and accuracy, and the synchronization process can be completed in less time. The unified power flow controller (UPFC), a series-parallel compensating device that changes the transmission line power and properly adjusts the phase angle, is the device proposed in this research. With the optimal placement of the UPFC in a power system, in addition to improving the normal conditions of the system, it is expected to be effective in reducing the SPA during power system restoration. The paper therefore provides an optimal structure that coordinates three problems: improving the division of subsystems, reducing the SPA, and optimal power flow, with the aim of determining the optimal location of the UPFC and the optimal subsystems. The proposed objective functions include maximizing the quality of the subsystems, reducing the SPA at the endpoints of the subsystems, and reducing the losses of the power system. Since the simultaneous optimization of these objective functions may create contradictions, the problem is structured as a non-linear multi-objective problem, and the Pareto optimization method is used to solve it. The optimization process is implemented with the water cycle algorithm (WCA). To evaluate the proposed method, the IEEE 39-bus power system is used.
Keywords: UPFC, SPA, water cycle algorithm, multi-objective problem, pareto
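The Pareto step used to reconcile the competing objectives can be sketched as a non-dominated filter over candidate objective vectors; minimization of every component is assumed, and the candidate points are hypothetical:

```python
# Each tuple is an objective vector, e.g. (SPA, losses), both minimized.
def dominates(q, p):
    """q dominates p: no worse in every objective, strictly better in one."""
    return (all(qi <= pi for qi, pi in zip(q, p))
            and any(qi < pi for qi, pi in zip(q, p)))

def pareto_front(points):
    """Keep only the non-dominated candidates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

cands = [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5), (4.0, 4.0)]
front = pareto_front(cands)
```

A metaheuristic such as the WCA evolves the candidate set; the front it converges to is the trade-off curve from which the operator picks a compromise placement.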
Procedia PDF Downloads 66
19368 Development of Extended Trapezoidal Method for Numerical Solution of Volterra Integro-Differential Equations
Authors: Fuziyah Ishak, Siti Norazura Ahmad
Abstract:
Volterra integro-differential equations appear in many models of real-life phenomena. Since analytical solutions for this type of equation are hard, and at times impossible, to obtain, engineers and scientists resort to numerical solutions that can be made as accurate as desired. Conventionally, numerical methods for ordinary differential equations are adapted to solve Volterra integro-differential equations. In this paper, a numerical solution of Volterra integro-differential equations using the extended trapezoidal method is described. Formulae for the integral and differential parts of the equation are presented. Numerical results show that the extended method is suitable for solving first-order Volterra integro-differential equations.Keywords: accuracy, extended trapezoidal method, numerical solution, Volterra integro-differential equations
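A trapezoidal scheme of this kind can be sketched as follows: the derivative and the memory integral are both discretized with the trapezoidal rule, and the implicit step is resolved by fixed-point iteration. This is a generic illustration of the approach, not the authors' exact formulae; it is checked on a first-order test equation with known solution y = sin(t).

```python
import math

def solve_vide(f_ext, kernel, y0, t_end, n):
    """Trapezoidal solver for y'(t) = f_ext(t) + integral_0^t K(t,s) y(s) ds."""
    h = t_end / n
    t = [i * h for i in range(n + 1)]
    y = [y0]

    def deriv(i, ys):
        # composite trapezoid for the memory integral up to t[i]
        if i == 0:
            integral = 0.0
        else:
            vals = [kernel(t[i], t[j]) * ys[j] for j in range(i + 1)]
            integral = h * (vals[0] / 2 + sum(vals[1:-1]) + vals[-1] / 2)
        return f_ext(t[i]) + integral

    for i in range(n):
        fi = deriv(i, y)
        guess = y[i] + h * fi              # explicit Euler predictor
        for _ in range(5):                 # trapezoidal corrector iterations
            guess = y[i] + h / 2 * (fi + deriv(i + 1, y + [guess]))
        y.append(guess)
    return t, y

# Test problem: y' = 1 - integral_0^t y(s) ds, y(0) = 0, exact solution sin(t)
t, y = solve_vide(lambda t: 1.0, lambda t, s: -1.0, 0.0, 2.0, 200)
err = max(abs(yi - math.sin(ti)) for ti, yi in zip(t, y))
print(f"max error = {err:.2e}")
```

Both discretizations are second-order, so halving the step size should reduce the error roughly fourfold, which is an easy way to sanity-check an implementation like this.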
Procedia PDF Downloads 426
19367 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System
Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu
Abstract:
In long-haul, high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers from various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, with remarkable results. However, chaining separate impairment-compensation algorithms increases transmission delay. With the widespread application of deep neural networks (DNNs) in communication, multi-impairment compensation based on a DNN is a promising scheme. In this paper, we propose and apply a DNN to compensate for multiple impairments of a 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system. The models optimize the constellation-mapped signal at the transmitter and compensate for multiple impairments of the decoded OFDM signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for a 16-QAM coherent optical OFDM signal and demonstrate and analyze the transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN. This shows that a DNN with a suitable loss function and network structure can optimize the transmitted signal, learn the channel features, and effectively compensate for multiple impairments in fiber transmission.Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission
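The PAPR metric the models reduce is the peak instantaneous power of the time-domain OFDM symbol divided by its mean power. A minimal sketch of the measurement, assuming a 256-subcarrier 16-QAM layout for illustration (not the paper's experimental configuration):

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
# Random 16-QAM symbols on 256 subcarriers -> time-domain OFDM symbol via IFFT
levels = np.array([-3, -1, 1, 3])
sym = rng.choice(levels, 256) + 1j * rng.choice(levels, 256)
ofdm = np.fft.ifft(sym)
papr = papr_db(ofdm)
print(f"PAPR = {papr:.2f} dB")
```

Unclipped OFDM with this many subcarriers typically lands near 9-12 dB, which is why pre-distorting the constellation mapping, as the abstract describes, is attractive.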
Procedia PDF Downloads 143
19366 Correlation of SPT N-Value and Equipment Drilling Parameters in Deep Soil Mixing
Authors: John Eric C. Bargas, Maria Cecilia M. Marcos
Abstract:
One of the most common ground improvement techniques is deep soil mixing (DSM). As the technique has matured, quality control at depth has remained underdeveloped. This was the issue experienced during the installation of DSM in one of the national projects in the Philippines. This study assesses the feasibility of using equipment drilling parameters such as hydraulic pressure, drilling speed, and rotational speed to determine the Standard Penetration Test (SPT) N-value of a given soil. Hydraulic pressure and drilling speed, at a constant rotational speed of 30 rpm, have a positive correlation with the SPT N-value for cohesive soil and sand. A linear trend was observed for cohesive soil: the correlation of SPT N-value with hydraulic pressure yielded R² = 0.5377, while the correlation with drilling speed yielded R² = 0.6355. The best-fitted model for sand, in contrast, was a polynomial trend: the correlation of SPT N-value with hydraulic pressure yielded R² = 0.7088, while the correlation with drilling speed yielded R² = 0.4354. The low correlation may be attributed to the behavior of sand as the auger penetrates; very loose to medium dense sand was observed to follow the rotation of the auger rather than resist it. Specific energy and the product of hydraulic pressure and drilling speed yielded the same R², with a positive correlation: a linear trend for cohesive soil and a polynomial trend for sand. Cohesive soil yielded R² = 0.7320, a strong relationship. Sand also showed a strong relationship, with a coefficient of determination R² = 0.7203. It is therefore feasible to use hydraulic pressure and drilling speed to estimate the SPT N-value of a soil, and the product of hydraulic pressure and drilling speed can substitute for specific energy in that estimate.
However, additional considerations are necessary to account for other influencing factors such as groundwater and the physical and mechanical properties of the soil.Keywords: ground improvement, equipment drilling parameters, standard penetration test, deep soil mixing
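The coefficient of determination quoted throughout can be computed from any fitted trend as below. The calibration pairs are illustrative values, not the study's measurements:

```python
import numpy as np

def r_squared(y, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

# Hypothetical calibration data: hydraulic pressure (MPa) vs SPT N-value
pressure = np.array([2.1, 3.0, 3.8, 4.9, 6.2, 7.5])
n_value = np.array([4.0, 7.0, 9.0, 13.0, 18.0, 21.0])

coeffs = np.polyfit(pressure, n_value, 1)   # degree 1 = the linear trend
fit = np.polyval(coeffs, pressure)
r2 = r_squared(n_value, fit)
print(f"N ~ {coeffs[0]:.2f}*P + {coeffs[1]:.2f}, R^2 = {r2:.4f}")
```

The polynomial trend reported for sand is the same call with a higher degree passed to `np.polyfit`; comparing the resulting R² values is how one would reproduce the study's model selection.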
Procedia PDF Downloads 49
19365 Matrix Method Posting
Authors: Varong Pongsai
Abstract:
The objective of this paper is to introduce a new method of accounting posting called Matrix Method Posting. The method is based on the matrix operations of pure mathematics. Although accounting is classified as a social science, many accounting operations are expressed with mathematical signs and operations, so it seems natural that mathematical operations can be applied to accounting. This paper therefore attempts to map mathematical logic onto accounting logic. Following the context of discovery, a deductive approach is employed to prove a logical concept shared by mathematics and accounting. The result shows that matrices can represent accounting operations well, because matrix logic and accounting logic share the concept of balancing two sides during operations. Moreover, matrix posting offers several benefits. It can help financial analysts calculate financial ratios conveniently. Furthermore, the matrix determinant, itself a signature operation, helps auditors check the correctness of clients' records: if the determinant is not equal to 0, it indicates a problem in the client's recording process. Finally, the matrix approach may make the treatment of mergers and consolidations far easier than present-day practice.Keywords: matrix method posting, deductive approach, determinant, accounting application
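One way to see the two-sided balance the abstract refers to is to store journal entries as matrix rows, debits positive and credits negative: a correctly posted row sums to zero, and the column sums give the trial balance per account. This is a simplified sketch of the idea, not the author's exact matrix formulation; the chart of accounts and entries are hypothetical.

```python
import numpy as np

# Columns: Cash, Inventory, Capital, Revenue (hypothetical chart of accounts)
# Each row is one journal entry: debits positive, credits negative,
# so a correctly posted row sums to zero.
postings = np.array([
    [1000,    0, -1000,    0],   # owner invests cash
    [-400,  400,     0,    0],   # buy inventory for cash
    [ 250, -150,     0, -100],   # sell inventory at a margin
])

# Row-sum check: any nonzero row sum flags an out-of-balance entry
assert np.all(postings.sum(axis=1) == 0), "an entry is out of balance"

balances = postings.sum(axis=0)   # trial balance per account
print(dict(zip(["Cash", "Inventory", "Capital", "Revenue"], balances)))
```

Because every row sums to zero, the grand total of the trial balance is zero as well, which is the matrix form of "debits equal credits" that the paper builds on.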
Procedia PDF Downloads 367
19364 Optimization of Spatial Light Modulator to Generate Aberration Free Optical Traps
Authors: Deepak K. Gupta, T. R. Ravindran
Abstract:
Holographic optical tweezers (HOTs) generally use iterative algorithms such as weighted Gerchberg-Saxton (WGS) to generate multiple traps, which theoretically produce traps with 99% uniformity. In experiments, however, it is the phase response of the spatial light modulator (SLM) that ultimately determines the efficiency, uniformity, and quality of the trap spots. In general, SLMs show a nonlinear phase response and may even have an asymmetric phase modulation depth before and after π. This affects the resolution with which the gray levels are addressed before and after π, leading to degraded trap performance. We present a method to optimize the SLM for a linear phase response along with a symmetric phase modulation depth around π. Further, we optimize the SLM for its varying phase response over different spatial regions by optimizing the brightness/contrast and gamma of the hologram in different subsections. We show the effect of the optimization on an array of trap spots, resulting in improved efficiency and uniformity. We also calculate the spot sharpness metric and trap performance metric and show a tightly focused spot with reduced aberration. The trap performance is compared by calculating the trap stiffness of a trapped particle in a given trap spot before and after aberration correction. The trap stiffness is found to improve by 200% after the optimization.Keywords: spatial light modulator, optical trapping, aberration, phase modulation
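The trap stiffness comparison can be made with the standard equipartition estimate, k = k_B T / Var(x), applied to a recorded bead position trace. A sketch with synthetic thermal data; the true stiffness, temperature, and sample count below are assumed, not values from the paper:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness(positions_m, temperature_k=295.0):
    """Equipartition estimate of trap stiffness: k = k_B * T / Var(x)."""
    return K_B * temperature_k / np.var(positions_m)

# Synthetic position trace of a trapped bead: thermal motion in a harmonic
# trap of (assumed) stiffness 1e-5 N/m is Gaussian with var = k_B*T/k.
rng = np.random.default_rng(1)
k_true = 1e-5
x = rng.normal(0.0, np.sqrt(K_B * 295.0 / k_true), 200_000)
k_est = trap_stiffness(x)
print(f"k ~ {k_est:.3e} N/m")
```

Running the same estimator on traces taken before and after aberration correction gives the stiffness ratio from which an improvement figure such as the quoted 200% is derived.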
Procedia PDF Downloads 188
19363 Parameter Estimation of Induction Motors by PSO Algorithm
Authors: A. Mohammadi, S. Asghari, M. Aien, M. Rashidinejad
Abstract:
After the emergence and popularization of alternating current networks, asynchronous motors became more widespread than other kinds of industrial motors. In order to control and run these motors efficiently, an accurate estimate of the motor parameters is needed. There are different methods to obtain these parameters, such as the locked-rotor test, no-load test, DC test, and analytical methods. The most common drawback of these methods is their inaccuracy in estimating some motor parameters. To address this concern, a novel method for parameter estimation of induction motors using the particle swarm optimization (PSO) algorithm is proposed. In the proposed method, the transient state of the motor is used for parameter estimation. Comparison of the simulation results pertaining to the PSO algorithm with other available methods confirms the effectiveness of the proposed method.Keywords: induction motor, motor parameter estimation, PSO algorithm, analytical method
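A minimal PSO of the kind used here fits model parameters by minimizing the squared error between a simulated and a measured response. To stay self-contained, the motor model is replaced by a toy exponential transient; the swarm settings are typical defaults, not the authors' tuning:

```python
import math
import random

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over the box `bounds`."""
    dim = len(bounds)
    rng = random.Random(42)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull + social pull, clamped to bounds
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy "transient" record: i(t) = I0 * exp(-t/tau), true (I0, tau) = (5.0, 0.8)
ts = [k * 0.05 for k in range(40)]
measured = [5.0 * math.exp(-t / 0.8) for t in ts]
cost = lambda p: sum((p[0] * math.exp(-t / p[1]) - m) ** 2 for t, m in zip(ts, measured))
best, err = pso(cost, bounds=[(0.1, 10.0), (0.1, 5.0)])
print(best, err)
```

For an actual induction motor the cost function would instead re-run a transient simulation with the candidate stator/rotor parameters and compare against the recorded currents, which is the substitution the paper makes.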
Procedia PDF Downloads 633
19362 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage
Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng
Abstract:
Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archaeological fields. Attempts have been made to explore optimal adult age markers, and the rib is considered a potential one. The traditional approach extracts age-related features, designed by experts, from macroscopic or radiological images, followed by classification or regression analysis. Those results still have not met the high-level requirements of practice, and a limitation of feature design and manual extraction is loss of information, since the features are unlikely to be designed explicitly to capture age-relevant information. Deep learning (DL) has recently garnered much interest in image analysis and computer vision. It enables learning important features without a prior bias or hypothesis and could therefore support AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to a manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into the networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary metric. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used.
The radiation density of the first costal cartilage was recorded from the CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored on the VR images using an eight-stage staging technique. Following the prior study, the optimal models were a decision tree regression model for males and a stepwise multiple linear regression equation for females. Predicted ages of the test set were calculated separately using the different models by sex. A total of 2600 patients (training and validation sets, mean age = 45.19 ± 14.20 [SD] years; test set, mean age = 46.57 ± 9.66 years) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, far better than the MAEs of 8.90 and 6.42, respectively, for the manual method. These results show that DL with the ResNeXt model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE.Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning
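The MAE figures compared above are simply the mean of the absolute age errors over the test cases. A sketch with hypothetical chronological and predicted ages (not the study's data):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error between true and predicted ages, in years."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Hypothetical chronological vs predicted ages for five test cases
true_ages = [23.4, 31.0, 44.7, 52.1, 67.9]
pred_ages = [27.0, 29.5, 49.0, 50.0, 61.5]
err = mae(true_ages, pred_ages)
print(f"MAE = {err:.2f} years")
```

This is the quantity reported per sex for both the ResNeXt model and the manual scoring method, so lower is better in every comparison the abstract makes.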
Procedia PDF Downloads 73
19361 Computational Homogenization of Thin Walled Structures: On the Influence of the Global vs Local Applied Plane Stress Condition
Authors: M. Beusink, E. W. C. Coenen
Abstract:
The increased application of novel structural materials, such as high-grade asphalt, concrete and laminated composites, has sparked the need for a better understanding of the often complex, non-linear mechanical behavior of such materials. The effective macroscopic mechanical response is generally dependent on the applied load path. Moreover, it is also significantly influenced by the microstructure of the material, e.g. embedded fibers, voids and/or grain morphology. At present, multiscale techniques are widely adopted to assess micro-macro interactions in a numerically efficient way. Computational homogenization techniques have been successfully applied over a wide range of engineering cases, e.g. cases involving first-order and second-order continua, thin shells and cohesive zone models. Most of these homogenization methods rely on Representative Volume Elements (RVE), which model the relevant microstructural details in a confined volume. Imposed through kinematical constraints or boundary conditions, an RVE can be subjected to a microscopic load sequence. This provides the RVE's effective stress-strain response, which can serve as constitutive input for macroscale analyses. Simultaneously, such a study of an RVE gives insight into fine-scale phenomena such as microstructural damage and its evolution. It has been reported by several authors that the type of boundary conditions applied to the RVE affects the resulting homogenized stress-strain response. As a consequence, dedicated boundary conditions have been proposed to deal appropriately with this concern. For the specific case of a planar assumption for the analyzed structure, e.g. plane strain, axisymmetric or plane stress, this assumption needs to be addressed consistently across all considered scales. Although in many multiscale studies a planar condition has been employed, the related impact on the multiscale solution has not been explicitly investigated.
This work therefore focuses on the influence of the planar assumption in multiscale modeling. In particular, the plane stress case is highlighted by proposing three different implementation strategies compatible with a first-order computational homogenization framework. The first method applies classical plane stress theory at the microscale, whereas in the second method a generalized plane stress condition is assumed at the RVE level. In the third method, the plane stress condition is applied at the macroscale by requiring that the resulting macroscopic out-of-plane forces equal zero. These strategies are assessed through a numerical study of a thin-walled structure, and the resulting effective macroscale stress-strain responses are compared. It is shown that there is a clear influence of the length scale at which the planar condition is applied.Keywords: first-order computational homogenization, planar analysis, multiscale, microstructures
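The first strategy, classical plane stress at the microscale, amounts to statically condensing the out-of-plane components of the constitutive tangent so that the out-of-plane stresses vanish. A sketch in Voigt notation, checked against the closed-form isotropic plane stress stiffness E/(1-ν²); the material constants are assumed for illustration:

```python
import numpy as np

def plane_stress_condense(C):
    """Condense a 6x6 (Voigt) 3D stiffness to a 3x3 plane stress stiffness.

    In-plane components [11, 22, 12] are kept; the out-of-plane set
    [33, 23, 13] is eliminated by enforcing zero out-of-plane stress:
    C_ps = C_aa - C_ab * inv(C_bb) * C_ba.
    """
    a = [0, 1, 5]          # Voigt indices 11, 22, 12
    b = [2, 3, 4]          # Voigt indices 33, 23, 13
    Caa = C[np.ix_(a, a)]
    Cab = C[np.ix_(a, b)]
    Cba = C[np.ix_(b, a)]
    Cbb = C[np.ix_(b, b)]
    return Caa - Cab @ np.linalg.solve(Cbb, Cba)

# Isotropic check with assumed constants: E = 210 GPa, nu = 0.3
E, nu = 210e9, 0.3
lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame's first parameter
mu = E / (2 * (1 + nu))                    # shear modulus
C = np.diag([mu] * 6)
C[:3, :3] += lam
C[np.diag_indices(3)] += mu                # normal diagonal terms: lam + 2*mu
C_ps = plane_stress_condense(C)
print(C_ps[0, 0], E / (1 - nu**2))         # these two should coincide
```

The second and third strategies move the same zero-stress (or zero-force) requirement to the RVE boundary problem and the macroscale equilibrium, respectively, which is exactly the length-scale distinction the study investigates.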
Procedia PDF Downloads 233
19360 Combined Safety and Cybersecurity Risk Assessment for Intelligent Distributed Grids
Authors: Anders Thorsén, Behrooz Sangchoolie, Peter Folkesson, Ted Strandberg
Abstract:
As more parts of the power grid become connected to the internet, the risk of cyberattacks increases. To identify cybersecurity threats and subsequently reduce vulnerabilities, the common practice is to carry out a cybersecurity risk assessment. For safety-classified systems and products, a safety risk assessment is also needed in addition to the cybersecurity risk assessment in order to identify and reduce safety risks. These two risk assessments are usually done separately, but since cybersecurity and functional safety are often related, a more comprehensive method covering both aspects is needed. Some work addressing this has been done in specific domains, such as the automotive domain, but more general methods suitable for, e.g., intelligent distributed grids are still missing. One such method from the automotive domain is the Security-Aware Hazard Analysis and Risk Assessment (SAHARA) method, which combines safety and cybersecurity risk assessments. This paper presents an approach in which the SAHARA method has been modified to be more suitable for larger distributed systems. The adapted SAHARA method has a more general risk assessment approach than the original SAHARA. The proposed method has been successfully applied to two use cases of an intelligent distributed grid.Keywords: intelligent distribution grids, threat analysis, risk assessment, safety, cybersecurity
Procedia PDF Downloads 153
19359 Measuring the Quality of Business Education: Employment Readiness Assessment
Authors: Gulbakhyt Sultanova
Abstract:
Business education institutions assess the progress of their students by grading completed courses and calculating a Grade Point Average (GPA). Whether participation in these courses has developed the competences that enable graduates to compete successfully in the labor market should be measured with a new index: the Employment Readiness Assessment (ERA). The higher the ERA, the higher the quality of education at a business school. This is applied empirical research conducted using a linear optimization method. The aim of the research is to identify factors that minimize the deviation of GPA from ERA and maximize ERA. ERA is composed of three components resulting from testing proficiency in business English, testing work and personal skills, and a job interview simulation. The quality of education is improving if GPA approaches ERA and ERA increases. Factors that have had a positive effect on quality enhancement are academic mobility of students and staff, practice-oriented courses taught by staff with work experience, and research-based courses taught by staff with research experience. ERA is a better index than traditional indexes such as GPA for measuring the quality of business education, due to its greater accuracy in assessing the level of graduates' competences demanded in the labor market. When optimizing the educational process in pursuit of quality enhancement, ERA has to be used in parallel with GPA to find out which changes worked and resulted in improvement.Keywords: assessment and evaluation, competence evaluation, education quality, employment readiness
Procedia PDF Downloads 445
19358 Numerical Simulation of Fluid Structure Interaction Using Two-Way Method
Authors: Samira Laidaoui, Mohammed Djermane, Nazihe Terfaya
Abstract:
Fluid-structure coupling is a natural phenomenon reflecting the reciprocal action of two continua of different types, fluid and structure, on each other; its study involves both elasticity and fluid mechanics. The solution of such problems is based on the relations of continuum mechanics and is mostly obtained with numerical methods. Solving them is a computational challenge because of the complex geometries, the intricate physics of fluids, and the complicated fluid-structure interactions. The way in which the interaction between fluid and solid is described offers the largest opportunity for reducing the computational effort. In this paper, a fluid-structure interaction problem is investigated with the two-way coupling method. The Arbitrary Lagrangian-Eulerian (ALE) formulation was used, considering a dynamic grid, where the solid is described by a Lagrangian formulation and the fluid by an Eulerian formulation. The simulation was carried out in ANSYS.Keywords: ALE, coupling, FEM, fluid-structure, interaction, one-way method, two-way method
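A two-way (partitioned) coupling loop of the kind described can be sketched with stand-in solvers: within each time step, the fluid and structure solvers exchange interface data until the interface state converges under relaxation. This toy model, a spring-mass structure with a drag-like fluid force and all constants assumed, only illustrates the staggered algorithm, not the ANSYS ALE setup:

```python
def fluid_solver(velocity, c=0.5):
    """Stand-in fluid: returns the interface force for a given interface velocity."""
    return -c * velocity

def structure_solver(x, v, force, m=1.0, k=4.0, dt=0.01):
    """Stand-in structure: one implicit Euler step of m*a + k*x = force."""
    # From v_new = v + dt*(force - k*x_new)/m and x_new = x + dt*v_new:
    v_new = (v + dt * (force - k * x) / m) / (1 + dt**2 * k / m)
    return x + dt * v_new, v_new

x, v = 1.0, 0.0                   # initial interface displacement and velocity
for step in range(500):           # march 5 s of physical time
    v_guess = v
    for _ in range(50):           # two-way coupling iterations within the step
        force = fluid_solver(v_guess)              # fluid sees latest guess
        x_new, v_new = structure_solver(x, v, force)
        if abs(v_new - v_guess) < 1e-12:           # interface residual converged
            break
        v_guess = 0.5 * v_guess + 0.5 * v_new      # under-relaxation
    x, v = x_new, v_new
print(f"x(5 s) = {x:.4f}")
```

In a one-way scheme the inner loop would run exactly once with no feedback of the updated structural state into the fluid load, which is the distinction between the two methods named in the keywords.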
Procedia PDF Downloads 678
19357 A Task Scheduling Algorithm in Cloud Computing
Authors: Ali Bagherinia
Abstract:
An efficient task scheduling method can meet users' requirements, improve resource utilization, and thus increase the overall performance of a cloud computing environment. Cloud computing has new features such as flexibility and virtualization. In this paper, we propose a two-level task scheduling method based on load balancing in cloud computing. The method meets users' requirements and achieves high resource utilization, as simulation results in the CloudSim simulator confirm.Keywords: cloud computing, task scheduling, virtualization, SLA
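Load-balancing schedulers of this kind ultimately assign each task to the currently least-loaded resource. A minimal greedy sketch (the task lengths and VM count are hypothetical, and this is a generic heuristic, not the paper's exact two-level algorithm):

```python
import heapq

def schedule(tasks, n_vms):
    """Greedy load balancing: send each task to the least-loaded VM.

    `tasks` are (name, length) pairs; longer tasks are placed first,
    the classic longest-processing-time heuristic for a small makespan.
    """
    heap = [(0.0, vm) for vm in range(n_vms)]   # (current load, vm id)
    heapq.heapify(heap)
    assignment = {}
    for name, length in sorted(tasks, key=lambda t: -t[1]):
        load, vm = heapq.heappop(heap)          # least-loaded VM
        assignment[name] = vm
        heapq.heappush(heap, (load + length, vm))
    makespan = max(load for load, _ in heap)
    return assignment, makespan

tasks = [("t1", 8), ("t2", 5), ("t3", 4), ("t4", 3), ("t5", 3), ("t6", 1)]
assignment, makespan = schedule(tasks, n_vms=2)
print(assignment, makespan)   # the 24 units of work split evenly: makespan 12
```

In a two-level design the first level would dispatch batches to host groups and the second level would run a balancing step like this within each group; CloudSim experiments then measure the resulting utilization and makespan.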
Procedia PDF Downloads 401