Search results for: probabilistic scoring distribution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5535


4815 Orientational Pair Correlation Functions Modelling of LiCl·6H2O by the Hybrid Reverse Monte Carlo Using an Environment-Dependent Interaction Potential

Authors: Mohammed Habchi, Sidi Mohammed Mesli, Rafik Benallal, Mohammed Kotbi

Abstract:

On the basis of four partial correlation functions and some geometric constraints obtained from neutron scattering experiments, a Reverse Monte Carlo (RMC) simulation has been performed to study the aqueous electrolyte LiCl·6H2O in the glassy state. The obtained three-dimensional model allows the computation of pair radial and orientational distribution functions in order to explore the structural features of the system. Unrealistic features appeared in some coordination peaks. To remedy this, we use the Hybrid Reverse Monte Carlo (HRMC) method, which incorporates an energy constraint in addition to the usual constraints derived from experiments. The energy of the system is calculated using an Environment-Dependent Interaction Potential (EDIP). Ion effects are studied by comparing correlations between water molecules in the solution and in pure water at room temperature. Our results show good agreement between experimental and computed partial distribution functions (PDFs) as well as a significant improvement in the orientational distribution curves.
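
The HRMC acceptance step can be illustrated with a short sketch: a Metropolis-style test whose cost mixes the experimental chi-squared constraint with the EDIP energy penalty. This is a minimal sketch assuming a generic weighting scheme; the weights, temperature factor, and function names are illustrative, not the authors' implementation.

```python
import numpy as np

def hrmc_accept(chi2_old, chi2_new, e_old, e_new,
                w_chi2=1.0, w_e=1.0, kT=1.0, rng=None):
    """Metropolis-style acceptance test for one Hybrid RMC move.

    Combines the usual RMC chi-squared term (fit to the experimental
    partial distribution functions) with an energy penalty, assumed here
    to come from an EDIP-style potential evaluated elsewhere.
    """
    rng = rng or np.random.default_rng()
    cost_old = w_chi2 * chi2_old + w_e * e_old / kT
    cost_new = w_chi2 * chi2_new + w_e * e_new / kT
    if cost_new <= cost_old:
        return True  # downhill moves are always accepted
    # uphill moves accepted with a Boltzmann-like probability
    return rng.random() < np.exp(-(cost_new - cost_old))
```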

Keywords: LiCl·6H2O, glassy state, RMC, HRMC

Procedia PDF Downloads 460
4814 Multivariate Control Chart to Determine Efficiency Measurements in Industrial Processes

Authors: J. J. Vargas, N. Prieto, L. A. Toro

Abstract:

Control charts are commonly used to monitor processes involving either variable or attribute quality characteristics, and determining the control limits is a critical task for quality engineers seeking to improve processes. Nonetheless, in some applications it is necessary to include an estimation of efficiency. In this paper, the ability to assess the efficiency of an industrial process was added to a control chart by incorporating a data envelopment analysis (DEA) approach. Specifically, Bayesian estimation was performed to calculate the posterior probability distribution of the parameters, namely the means and the variance-covariance matrix. This technique allows the data set to be analysed without the hypothetically large sample implied in the problem and to be treated as an approximation to the finite-sample distribution. A rejection simulation method was carried out to generate random variables from the posterior parameter distributions. Each resulting vector was used by the stochastic DEA model over several cycles to establish the distribution of the efficiency measures for each DMU (decision-making unit). A control limit was calculated with the resulting model; if a DMU presents a low efficiency level, the system efficiency is deemed out of control. A global optimum was reached in the efficiency calculation, which ensures model reliability.
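
As a sketch of the rejection simulation step, the generic accept/reject scheme below draws parameter vectors from a posterior known only up to a normalizing constant; the proposal distribution, envelope constant, and all names are placeholders rather than the study's exact choices.

```python
import numpy as np

def rejection_sample(log_post, proposal_draw, proposal_logpdf, log_m, n, seed=0):
    """Rejection sampler: accepts a candidate x with probability
    post(x) / (M * proposal(x)), where post may be unnormalised.

    log_post        : unnormalised log posterior density
    proposal_draw   : function returning one candidate vector
    proposal_logpdf : log density of the proposal
    log_m           : log of an envelope constant M with post <= M * proposal
    """
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n:
        x = proposal_draw()
        if np.log(rng.random()) < log_post(x) - proposal_logpdf(x) - log_m:
            out.append(x)  # accepted draw from the posterior
    return np.array(out)
```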

Keywords: data envelopment analysis, DEA, multivariate control chart, rejection simulation method

Procedia PDF Downloads 372
4813 Radial Distribution Network Reliability Improvement by Using Imperialist Competitive Algorithm

Authors: Azim Khodadadi, Sahar Sadaat Vakili, Ebrahim Babaei

Abstract:

This study presents a numerical method to optimize the failure rate and repair time of a typical radial distribution system. Failure rate and repair time are effective parameters in the customer- and energy-based indices of reliability; decreasing these parameters improves the reliability indices, and system stability is thus boosted. Penalty functions indirectly reflect the cost of the investment spent to improve these indices. Constraints on the customer- and energy-based indices, i.e., SAIFI, SAIDI, CAIDI, and AENS, have been handled by a new method that reduces the number of controlling parameters of the optimization algorithm. The Imperialist Competitive Algorithm (ICA) was used as the main optimization technique, and particle swarm optimization (PSO), simulated annealing (SA), and differential evolution (DE) were applied for further investigation. These algorithms were implemented on a test system in MATLAB, and the results were compared with each other. The optimized values of repair time and failure rate are much lower than the current values, which reduces the investment cost; moreover, ICA gives better answers than the other algorithms used.
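
For reference, the four constrained indices can be computed directly from per-load-point failure rates, repair times, customer counts, and average loads. The sketch below uses the standard index definitions with hypothetical feeder data; it is not the paper's MATLAB code.

```python
import numpy as np

def reliability_indices(lam, r, customers, load):
    """Customer- and energy-based reliability indices for a radial feeder.

    lam       : failure rate of each load point (failures/yr)
    r         : repair time of each load point (h)
    customers : number of customers at each load point
    load      : average load at each load point (kW)
    """
    lam, r, customers, load = map(np.asarray, (lam, r, customers, load))
    u = lam * r                                # annual outage time per point (h/yr)
    n_total = customers.sum()
    saifi = (lam * customers).sum() / n_total  # interruptions/customer/yr
    saidi = (u * customers).sum() / n_total    # outage hours/customer/yr
    caidi = saidi / saifi                      # hours per interruption
    aens = (load * u).sum() / n_total          # kWh not supplied/customer/yr
    return saifi, saidi, caidi, aens

# Hypothetical three-load-point feeder
print(reliability_indices([0.2, 0.25, 0.3], [4, 5, 6], [200, 150, 100], [500, 400, 300]))
```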

Keywords: imperialist competitive algorithm, failure rate, repair time, radial distribution network

Procedia PDF Downloads 663
4812 Relationship between Functionality and Cognitive Impairment in Older Adult Women from the Southeast of Mexico

Authors: Estrella C. Damaris, Ingrid A. Olais, Gloria P. Uicab

Abstract:

This study explores the relationship between the level of functionality and cognitive impairment in older adult women from the southeast of Mexico. It is a descriptive, cross-sectional study performed with 172 participants in total who attended a health institute and live in Merida, Yucatan, Mexico. After non-probabilistic sampling, the Barthel and Pfeiffer scales were applied. The results show a statistically significant correlation between cognitive impairment (Pfeiffer) and the levels of independence and function (Barthel) (r = 0.489; p = 0.001). Both scales indicate a level of dependence, meaning the participants need either a little or a lot of help. Society needs older women to be healthy, and mental health professionals should develop preventive and rehabilitative activities, because cognitive impairment and functionality are directly related to quality of life.

Keywords: functionality, cognition, routine activities, cognitive impairment

Procedia PDF Downloads 288
4811 Determining Inventory Replenishment Policy for Major Component in Assembly-to-Order of Cooling System Manufacturing

Authors: Tippawan Nasawan

Abstract:

The objective of this study is to find a replenishment policy for assembly-to-order (ATO) manufacturing in which some of the major components have lead-times longer than the customer lead-time. The variety of products, independent component demand, and long component lead-times are the difficulties that have resulted in the overstock problem. In addition, the ordering cost is trivial when compared to the material cost of the major component. A conceptual design of a Decision Support System (DSS) is introduced to assist with the replenishment policy. Component replenishment decided using the Available-to-Promise (ATP) variable is one of the keys. The Poisson distribution is adopted to model demand patterns in order to calculate the Safety Stock (SS) at a specified Customer Service Level (CSL); when the distribution cannot be identified, a nonparametric approach is applied instead. After comparing the ending inventory between the new policy and the old policy, the overstock was significantly reduced by 46.9 percent, or about 469,891.51 US dollars, in the cost of the major component (material cost only). Besides, the major component inventory count is also reduced by about 41 percent, which helps to mitigate the chance of damage and the cost of keeping stock.
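
The Poisson-based safety stock step can be sketched as follows: the order-up-to level is the smallest integer whose Poisson CDF reaches the specified CSL, and the safety stock is the excess over the expected lead-time demand. The numbers and names are illustrative, not the paper's DSS logic.

```python
from scipy.stats import poisson

def poisson_order_up_to(mean_demand_lt, csl=0.95):
    """Order-up-to level S and safety stock for Poisson lead-time demand.

    mean_demand_lt : expected component demand over the lead-time
    csl            : target Customer Service Level, P(demand <= S) >= csl
    """
    s = int(poisson.ppf(csl, mean_demand_lt))  # smallest S with CDF >= CSL
    return s, s - mean_demand_lt               # level and safety stock

print(poisson_order_up_to(mean_demand_lt=20, csl=0.95))  # -> (27, 7.0)
```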

Keywords: assembly-to-order, decision support system, component replenishment, Poisson distribution

Procedia PDF Downloads 122
4810 Study on Moisture-Induced-Damage of Semi-Rigid Base under Hydrodynamic Pressure

Authors: Baofeng Pan, Heng Liu

Abstract:

Because of their high strength and large carrying capacity, semi-rigid bases are widely used in modern road engineering. However, hydrodynamic pressure, one of the main causes of early damage to semi-rigid bases, cannot be avoided in the natural environment when the pavement is subjected to loadings such as passing vehicles. In order to investigate how moisture-induced damage of the semi-rigid base is influenced by hydrodynamic pressure, a new and effective experimental research method is provided in this paper. The results show that: (a) the washing action of high hydrodynamic pressure is the direct cause of the strength reduction of the road semi-rigid base; (b) the damage from high hydrodynamic pressure mainly occurs at the beginning of the scouring test, and its influence decreases with increasing testing time; (c) under the same hydrodynamic pressure, the longer the specimen curing age, the stronger its ability to resist moisture-induced damage.

Keywords: semi-rigid base, hydrodynamic pressure, moisture-induced-damage, experimental research

Procedia PDF Downloads 316
4809 Finite Element Modeling of Ultrasonic Shot Peening Process using Multiple Pin Impacts

Authors: Chao-xun Liu, Shi-hong Lu

Abstract:

In spite of its importance to the aerospace and automobile industries, little or no attention has been devoted to the accurate modeling of the ultrasonic shot peening (USP) process. It is therefore the purpose of this study to conduct a finite element analysis of the process using a realistic multiple-pin impact model with the explicit solver of ABAQUS. The effects of several key parameters on the residual stress distribution within the target were investigated, including impact velocity, incident angle, friction coefficient between pins and target, and number of impacts. The results reveal that the impact velocity and the number of impacts have an obvious effect, and that impacting vertically produces the most favorable residual stress distribution. The results were then compared with USP experimental data to verify the accuracy of the model. The analysis of the multiple-pin impact data reveals the relationships between peening process parameters and peening quality, which are useful for identifying the parameters that need to be controlled and regulated in order to produce a more beneficial compressive residual stress distribution within the target.

Keywords: ultrasonic shot peening, finite element, multiple pins, residual stress, numerical simulation

Procedia PDF Downloads 444
4808 Advanced Hybrid Particle Swarm Optimization for Congestion and Power Loss Reduction in Distribution Networks with High Distributed Generation Penetration through Network Reconfiguration

Authors: C. Iraklis, G. Evmiridis, A. Iraklis

Abstract:

Renewable energy sources and distributed power generation units already play an important role in electrical power generation. A mixture of different technologies penetrating the electrical grid adds complexity to the management of distribution networks. High penetration of distributed power generation units creates node over-voltages, large power losses, unreliable power management, reverse power flow, and congestion. This paper presents an optimization algorithm capable of reducing congestion and power losses, both combined in a weighted-sum objective. Two factors that describe congestion are proposed. An upgraded selective particle swarm optimization (SPSO) algorithm is used as the solution tool, focusing on the technique of network reconfiguration. The upgrade of the SPSO algorithm is achieved by adding a heuristic algorithm specializing in the reduction of power losses, with several scenarios being tested. Results show significant improvement in the minimization of losses and congestion while achieving very small calculation times.
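
A minimal sketch of the weighted-sum objective is given below. The two congestion descriptors used here (maximum branch loading and worst voltage deviation) are assumptions standing in for the paper's two proposed factors, which are not reproduced.

```python
def fitness(p_loss, p_loss_base, max_line_loading, max_v_dev, w=(0.5, 0.25, 0.25)):
    """Weighted-sum score of one radial configuration (illustrative).

    p_loss / p_loss_base : active power losses normalised to the base case
    max_line_loading     : worst branch loading ratio S/S_rated (assumed
                           congestion descriptor)
    max_v_dev            : worst per-unit voltage deviation (assumed
                           congestion descriptor)
    """
    return w[0] * p_loss / p_loss_base + w[1] * max_line_loading + w[2] * max_v_dev
```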

Keywords: congestion, distribution networks, loss reduction, particle swarm optimization, smart grid

Procedia PDF Downloads 441
4807 Sensor Validation Using Bottleneck Neural Network and Variable Reconstruction

Authors: Somia Bouzid, Messaoud Ramdani

Abstract:

The success of any diagnosis strategy critically depends on the sensors measuring process variables. This paper presents a sensor fault detection and diagnosis method based on a Bottleneck Neural Network (BNN). The BNN approach is used as a statistical process control tool for drinking water distribution (DWD) systems to detect and isolate sensor faults. The variable reconstruction approach is very useful for sensor fault isolation; the method is validated in simulation on a nonlinear system, an actual drinking water distribution system. Several results are presented.
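
The core idea of a bottleneck (auto-associative) network for fault detection can be sketched with a small autoencoder trained on healthy sensor data: faults are flagged when the reconstruction residual exceeds a control limit. The network size, threshold, and synthetic data below are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_healthy = rng.normal(size=(500, 6))        # stand-in for 6 DWD sensor channels
scaler = StandardScaler().fit(X_healthy)
Xs = scaler.transform(X_healthy)

bnn = MLPRegressor(hidden_layer_sizes=(8, 2, 8),  # 2-unit bottleneck layer
                   activation="tanh", max_iter=2000, random_state=0)
bnn.fit(Xs, Xs)                              # auto-associative: target = input

resid = ((Xs - bnn.predict(Xs)) ** 2).sum(axis=1)
threshold = np.quantile(resid, 0.99)         # control limit from healthy data

def is_faulty(x_new):
    """Flag a new sensor vector whose reconstruction error exceeds the limit."""
    xs = scaler.transform(x_new.reshape(1, -1))
    return ((xs - bnn.predict(xs)) ** 2).sum() > threshold
```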

Keywords: fault detection, localization, PCA, NLPCA, auto-associative neural network

Procedia PDF Downloads 384
4806 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

Over recent years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model is developed at Dun and Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards from sparse cases, which cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern over the difficulty of explaining the models for regulatory purposes.
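
The distribution-matching idea for WoE can be sketched as follows: instead of counting goods and bads per bin, use the ML model's predicted bad probabilities to obtain a model-implied bad rate in each bin, and take the WoE as the bin's good-to-bad log odds relative to the overall log odds. This is an illustrative reconstruction of the idea, not Dun and Bradstreet's actual procedure.

```python
import numpy as np

def woe_from_ml_scores(p_bad_pred, bin_ids):
    """Estimate per-bin WoE from ML-predicted bad probabilities.

    p_bad_pred : model-predicted probability of 'bad' for each observation
    bin_ids    : scorecard bin assigned to each observation
    """
    p = np.asarray(p_bad_pred, float)
    bins = np.asarray(bin_ids)
    p_bar = p.mean()
    overall = np.log((1 - p_bar) / p_bar)   # overall good-to-bad log odds
    woe = {}
    for b in np.unique(bins):
        pb = p[bins == b].mean()            # model-implied bad rate in bin b
        woe[b] = np.log((1 - pb) / pb) - overall
    return woe
```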

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 130
4805 Research on Modern Semiconductor Converters and the Usage of SiC Devices in the Technology Centre of Ostrava

Authors: P. Vaculík, P. Kaňovský

Abstract:

The following article presents the Technology Centre of Ostrava (TCO) in the Czech Republic. It describes the structure and main research areas realized under the project ENET - Energy Units for Utilization of Non-Traditional Energy Sources. More details are presented for the research program dealing with the transformation, accumulation, and distribution of electric energy. The Technology Centre has its own energy mix consisting of alternative fuel sources that use process gases from the storage section, as well as energy from the distribution network. The article focuses on the properties and application possibilities of SiC semiconductor devices for power semiconductor converters in photovoltaic systems.

Keywords: SiC, Si, technology centre of Ostrava, photovoltaic systems, DC/DC Converter, simulation

Procedia PDF Downloads 604
4804 Temperature-Dependent Barrier Characteristics of Inhomogeneous Pd/n-GaN Schottky Barrier Diodes Surface

Authors: K. Al-Heuseen, M. R. Hashim

Abstract:

The current-voltage (I-V) characteristics of a Pd/n-GaN Schottky barrier were studied at temperatures from room temperature upward (300-470 K). The values of the ideality factor (n), zero-bias barrier height (φB0), flat barrier height (φBF), and series resistance (Rs) obtained from I-V-T measurements were found to be strongly temperature dependent: φB0 increases while n, φBF, and Rs decrease with increasing temperature. The apparent Richardson constant was found to be 2.1×10⁻⁹ A·cm⁻²·K⁻² with a mean barrier height of 0.19 eV. After correcting for barrier height inhomogeneities, by assuming a Gaussian distribution (GD) of the barrier heights, the Richardson constant and the mean barrier height were obtained as 23 A·cm⁻²·K⁻² and 1.78 eV, respectively. The corrected Richardson constant is much closer to the theoretical value of 26 A·cm⁻²·K⁻².
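
The inhomogeneity correction can be sketched as a modified Richardson plot: with a Gaussian distribution of barrier heights of standard deviation σ0, one fits ln(Is/T²) − σ0²/(2(kT)²) against 1/(kT) to recover the mean barrier height and the effective Richardson constant. The function below is a generic sketch of this standard analysis, with placeholder inputs rather than the paper's data.

```python
import numpy as np

k = 8.617e-5  # Boltzmann constant (eV/K)

def modified_richardson(T, I_s, sigma0, area):
    """Modified Richardson analysis for a Gaussian barrier distribution.

    T      : temperatures (K); I_s : saturation currents (A)
    sigma0 : standard deviation of barrier heights (eV)
    area   : diode area (cm^2)
    """
    T, I_s = np.asarray(T, float), np.asarray(I_s, float)
    y = np.log(I_s / T**2) - sigma0**2 / (2 * (k * T) ** 2)
    slope, intercept = np.polyfit(1.0 / (k * T), y, 1)
    phi_mean = -slope                    # mean barrier height (eV)
    a_star = np.exp(intercept) / area    # effective Richardson constant (A cm^-2 K^-2)
    return phi_mean, a_star
```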

Keywords: electrical properties, Gaussian distribution, Pd-GaN Schottky diodes, thermionic emission

Procedia PDF Downloads 273
4803 Temporal Variation of Shorebirds Population in Two Different Mudflats Areas

Authors: N. Norazlimi, R. Ramli

Abstract:

A study was conducted to determine the diversity and abundance of shorebird species inhabiting the mudflat areas of Jeram Beach and Remis Beach, Selangor, Peninsular Malaysia. Direct observation (using binoculars and a video camera) was applied to record the presence of bird species at the sampling sites from August 2013 until July 2014. A total of 32 shorebird species were recorded during both migratory and non-migratory seasons. Of these, eleven species (47.8%) are migrants, six species (26.1%) have both migrant and resident populations, four species (17.4%) are vagrants, and two species (8.7%) are residents. The composition of the birds differed significantly across months (χ2 = 84.35, p < 0.001). There is a significant difference in avian abundance between migratory and non-migratory seasons (Mann-Whitney, t = 2.39, p = 0.036). Avian abundance differed significantly between Jeram and Remis Beaches during migratory periods (t = 4.39, p = 0.001) but not during non-migratory periods (t = 0.78, p = 0.456). Shorebird diversity was also affected by the tidal cycle, with a significant difference between high tide and low tide (Mann-Whitney, t = 78.0, p < 0.005). The frequency of disturbance also affected shorebird distribution (Mann-Whitney, t = 57.0, p = 0.0134). Therefore, this study concludes that tides and disturbances are two factors affecting the temporal distribution of shorebirds in mudflat areas.

Keywords: biodiversity, distribution, migratory birds, direct observation

Procedia PDF Downloads 389
4802 The Use of Thermal Infrared Wavelengths to Determine the Volcanic Soils

Authors: Levent Basayigit, Mert Dedeoglu, Fadime Ozogul

Abstract:

In this study, an application was carried out to identify volcanic soils using remote sensing. The study area was located on the Golcuk formation in Isparta, Turkey. The thermal bands of a Landsat 7 image were used for processing. A climate model based on the water index was implemented in ERDAS Imagine software together with pixel-based image classification. The Soil Moisture Index (SMI) was modeled using the surface temperature (Ts) obtained from the thermal bands and the vegetation index (NDVI) derived from Landsat 7. Surface moisture values were grouped and classified using a scoring system, and the thematic layers were compared with field studies. Consequently, the different moisture levels of volcanic soils served as indicators for their determination and separation. These thermal wavelengths are preferable bands for separating volcanic soils using moisture and temperature models.
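
One common Ts/NDVI formulation of the SMI interpolates the surface temperature between "dry" and "wet" edges fitted in Ts-NDVI space. The sketch below assumes that triangle formulation with linear edges; the study's exact model parameters are not reproduced.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and NIR reflectance."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red + 1e-10)

def smi(ts, ndvi_vals, dry_edge, wet_edge):
    """Soil Moisture Index from the Ts/NDVI triangle (assumed formulation).

    dry_edge, wet_edge : (slope, intercept) of the warm/dry and cold/wet
                         edges fitted in Ts-NDVI space
    """
    ts_max = dry_edge[0] * ndvi_vals + dry_edge[1]  # warm (dry) edge
    ts_min = wet_edge[0] * ndvi_vals + wet_edge[1]  # cold (wet) edge
    return (ts_max - ts) / (ts_max - ts_min)        # 0 = dry, 1 = wet
```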

Keywords: Landsat 7, soil moisture index, temperature models, volcanic soils

Procedia PDF Downloads 301
4801 Quantum Mechanics as a Limiting Case of Relativistic Mechanics

Authors: Ahmad Almajid

Abstract:

The idea of unifying quantum mechanics with general relativity is still a dream for many researchers, as physics has only two paths, no more: Einstein's path, which is mainly based on particle mechanics, and the path of Paul Dirac and others, which is based on wave mechanics. The incompatibility of the two approaches is due to the radical difference in the initial assumptions and the mathematical nature of each approach. Logical thinking in modern physics leads us to two problems: (1) in quantum mechanics, despite its success, the problem of measurement and the problem of wave function interpretation are still obscure; (2) in special relativity, despite the success of the equivalence of rest mass and energy, the fact that the energy becomes infinite at the speed of light is contrary to logic, because the speed of light is not infinite and the mass of the particle is not infinite either. These contradictions arise from the overlap of relativistic and quantum mechanics in the neighborhood of the speed of light, and in order to solve these problems, one must understand well how to move from relativistic mechanics to quantum mechanics, or rather, how to unify them in a way different from Dirac's method, in order to go along with God or Nature, since, as Einstein said, "God doesn't play dice." From de Broglie's hypothesis of wave-particle duality, Léon Brillouin's definition of the new proper time was deduced, and thus the quantum Lorentz factor was obtained. Finally, using the Euler-Lagrange equation, we arrive at new equations in quantum mechanics. In this paper, the two problems in modern physics mentioned above are solved; it can be said that this new approach to quantum mechanics will enable us to unify it with general relativity quite simply. If experiments prove the validity of the results of this research, we will be able in the future to transport matter at speeds close to the speed of light. Finally, this research yielded three important results: (1) the Lorentz quantum factor; (2) Planck energy as a limiting case of Einstein energy; (3) real quantum mechanics, in which new equations for quantum mechanics match and exceed Dirac's equations, reached in a completely different way from Dirac's method. These equations show that quantum mechanics is a limiting case of relativistic mechanics. At the Solvay Conference in 1927, the debate about quantum mechanics between Bohr, Einstein, and others reached its climax: when Bohr suggested that unobserved particles are in a probabilistic state, Einstein made his famous claim ("God does not play dice"). Thus, Einstein was right, especially when he did not accept the principle of indeterminacy in quantum theory, although experiments support quantum mechanics. However, the results of our research indicate that God really does not play dice: when the electron disappears, it turns into amicable particles or an elastic medium, according to the equations above. Likewise, Bohr was also right when he indicated that there must be a science like quantum mechanics to monitor and study the motion of subatomic particles; but the picture before him was blurry and unclear, so he resorted to the probabilistic interpretation.

Keywords: Lorentz quantum factor, Planck's energy as a limiting case of Einstein's energy, real quantum mechanics, new equations for quantum mechanics

Procedia PDF Downloads 72
4800 Repair Workshop Queue System Modification Using Priority Scheme

Authors: C. Okonkwo Ugochukwu, E. Sinebe Jude, N. Odoh Blessing, E. Okafor Christian

Abstract:

In this paper, a modification of a repair workshop queueing system using a multi-priority scheme was carried out. A chi-square goodness-of-fit test was used to determine the distributions of the inter-arrival times and service times of crankshafts that come for maintenance in the workshop. The chi-square values obtained for all the prioritized classes show that the distributions conform to the Poisson distribution. The mean waiting times in queue under non-preemptive priority for the 1st, 2nd, and 3rd classes are 0.066, 0.09, and 0.224 day respectively, while under preemptive priority they are 0.007, 0.036, and 0.258 day. When no priority is used, which has no class distinction, the mean waiting time amounts to 0.17 day. From these results, one can observe that the preemptive priority system provides a dramatic improvement over non-preemptive priority for arrivals of higher priority; however, the improvement has a detrimental effect on the lowest priority class. The trend of the results is similar for the mean waiting time in the system, obtained by adding the actual service time. Even though the mean waiting times in queue and in system under no priority are lower than those of the lowest-priority class, urgent and semi-urgent jobs would suffer badly, most likely resulting in reneging or balking of many urgent jobs. Hence, the adoption of a priority scheme in this type of scenario will result in greater profit for the company and more customer satisfaction.
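
For a non-preemptive priority queue, the class mean waiting times follow Cobham's formula, which reproduces the qualitative pattern reported above (high classes gain at the expense of the lowest class). The sketch below implements that standard M/G/1 result with hypothetical arrival and service parameters, not the workshop's measured data.

```python
def nonpreemptive_waits(lam, es, es2):
    """Mean queue waiting time per class, non-preemptive M/G/1 priority
    (Cobham); list order = priority order, highest first.

    lam : arrival rate per class; es : mean service time per class
    es2 : second moment of service time per class
    """
    w0 = 0.5 * sum(l * m2 for l, m2 in zip(lam, es2))  # mean residual service
    waits, sigma_prev = [], 0.0
    for l, m in zip(lam, es):
        sigma = sigma_prev + l * m                      # cumulative utilisation
        waits.append(w0 / ((1 - sigma_prev) * (1 - sigma)))
        sigma_prev = sigma
    return waits

# Hypothetical urgent / semi-urgent / routine classes; exponential service,
# so E[S^2] = 2 * E[S]^2.
print(nonpreemptive_waits([1.0, 0.8, 0.5], [0.1, 0.15, 0.2],
                          [2 * 0.1**2, 2 * 0.15**2, 2 * 0.2**2]))
```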

Keywords: queue, priority class, preemptive, non-preemptive, mean waiting time

Procedia PDF Downloads 393
4799 Slip Limit Prediction of High-Strength Bolt Joints Based on Local Approach

Authors: Chang He, Hiroshi Tamura, Hiroshi Katsuchi, Jiaqi Wang

Abstract:

In this study, the aim is to infer the slip limit (static friction limit) of contact interfaces in bolted friction joints by analyzing other bolted friction joints with the same contact surface but of a different shape. By using the Weibull distribution to treat the microelements on the contact surface statistically, the slip limit of a certain type of bolt joint was predicted from other types of bolt joint with the same contact surface. As a result, this research succeeded in predicting the slip limit of bolt joints with different numbers of contact surfaces and different numbers of bolt rows.
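
The statistical treatment can be sketched by fitting a Weibull distribution to measured slip coefficients and reading a lower percentile off the fitted model as a conservative slip limit. The data values and the 5th-percentile choice below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical slip-coefficient measurements for one joint type
slips = np.array([0.52, 0.55, 0.49, 0.58, 0.51, 0.54, 0.50, 0.56])

# Two-parameter Weibull fit (location fixed at zero), treating the contact
# surface microelements statistically
shape, loc, scale = weibull_min.fit(slips, floc=0)

# Conservative slip limit: the 5th percentile of the fitted distribution
slip_limit = weibull_min.ppf(0.05, shape, loc=loc, scale=scale)
print(shape, scale, slip_limit)
```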

Keywords: bolt joints, slip coefficient, finite element method, Weibull distribution

Procedia PDF Downloads 165
4798 Real-Time Monitoring of Drinking Water Quality Using Advanced Devices

Authors: Amani Abdallah, Isam Shahrour

Abstract:

The quality of drinking water is a major public health concern. Control of this quality is generally performed in the laboratory, which requires a long time. This type of control is not suited to accidental pollution from sudden events, which can have serious consequences for population health. Therefore, it is of major interest to develop real-time innovative solutions for the detection of accidental contamination in drinking water systems. This paper presents research conducted within the SunRise Demonstrator for 'Smart and Sustainable Cities' with a particular focus on the supervision of water quality. This work aims at (i) implementing a smart water system in a large water network (the campus of the University Lille1) including innovative equipment for real-time detection of abnormal events, such as those related to the contamination of drinking water, and (ii) developing a numerical model of contamination diffusion in the water distribution system. The first step included verification of the water quality sensors and of their effectiveness on a network prototype 50 m in length. This part included the evaluation of the efficiency of these sensors in detecting both bacterial and chemical contamination events in drinking water distribution systems. An on-line optical sensor integrated with a laboratory-scale distribution system (LDS) was shown to respond rapidly to changes in refractive index induced by injected loads of chemical (cadmium, mercury) and biological (Escherichia coli) contaminants. All injected substances were detected by the sensor; the magnitude of the response depends on the type of contaminant introduced and is proportional to the injected substance concentration.

Keywords: distribution system, drinking water, refraction index, sensor, real-time

Procedia PDF Downloads 351
4797 Reliability Based Topology Optimization: An Efficient Method for Material Uncertainty

Authors: Mehdi Jalalpour, Mazdak Tootkaboni

Abstract:

We present a computationally efficient method for reliability-based topology optimization under material property uncertainty, where the material properties are assumed to be lognormally distributed and correlated within the domain. Computational efficiency is achieved by estimating the response statistics with second-order stochastic perturbation, using these statistics to fit an appropriate distribution that follows the empirical distribution of the response, and employing an efficient gradient-based optimizer. The proposed algorithm is utilized for the design of new structures, and the changes in the optimized topology are discussed for various levels of target reliability and correlation strength. Predictions were verified through comparison with results obtained using Monte Carlo simulation.
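
The "fit an appropriate distribution" step can be sketched by moment matching: given the response mean and variance from the second-order perturbation, a lognormal with the same moments yields failure probabilities directly. The numbers below are placeholders, not results from the paper.

```python
import numpy as np
from scipy.stats import lognorm

def lognormal_from_moments(mean, var):
    """Lognormal matched to a given mean/variance (moment matching)."""
    sigma2 = np.log(1.0 + var / mean**2)
    mu = np.log(mean) - 0.5 * sigma2
    return lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))

# e.g. probability that the response exceeds a limit, from perturbation stats
dist = lognormal_from_moments(mean=1.0, var=0.04)
p_fail = dist.sf(1.5)   # survival function = exceedance probability
print(p_fail)
```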

Keywords: material uncertainty, stochastic perturbation, structural reliability, topology optimization

Procedia PDF Downloads 601
4796 Optimal Capacitors Placement and Sizing Improvement Based on Voltage Reduction for Energy Efficiency

Authors: Zilaila Zakaria, Muhd Azri Abdul Razak, Muhammad Murtadha Othman, Mohd Ainor Yahya, Ismail Musirin, Mat Nasir Kari, Mohd Fazli Osman, Mohd Zaini Hassan, Baihaki Azraee

Abstract:

Energy efficiency can be realized by minimizing power losses while a sufficient amount of energy is delivered in an electrical distribution system. In this report, a detailed analysis of the energy efficiency of an electric distribution system was carried out with an implementation of optimal capacitor placement and sizing (OCPS). Particle swarm optimization (PSO) is used to determine the optimal locations and sizes of the capacitors, whereas the minimization of energy consumption and power losses improves the energy efficiency. In addition, a certain number of busbars or locations are identified in advance, before the PSO is performed to solve the OCPS problem; in this case study, three techniques are applied for the pre-selection of busbars or locations, one of which is the power-loss-index (PLI). The PSO is designed to provide a new population with improved sizing and location of capacitors. The total costs of power losses, energy consumption, and capacitor installation are the components considered in the objective and fitness functions of the proposed optimization technique. The voltage magnitude limit, total harmonic distortion (THD) limit, power factor limit, and capacitor size limit are the parameters considered as constraints. In this research, the proposed methodologies implemented in the MATLAB® software transfer the information, execute the three-phase unbalanced load-flow solution, and then retrieve and collect the results from the three-phase unbalanced electrical distribution systems modeled in the SIMULINK® software. The effectiveness of the proposed methods in improving energy efficiency has been verified through several case studies, with results obtained from the IEEE 13-bus unbalanced electrical distribution test system and from a practical distribution system model of the Sultan Salahuddin Abdul Aziz Shah (SSAAS) government building in Shah Alam, Selangor.
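
The PLI pre-selection step can be sketched as follows: the loss reduction obtained by compensating each candidate bus (from repeated load-flow runs) is rescaled to [0, 1], and the highest-ranked buses are shortlisted before PSO runs. The data values below are hypothetical.

```python
import numpy as np

def power_loss_index(loss_reduction):
    """Power-loss-index: per-bus loss reductions rescaled to [0, 1]."""
    lr = np.asarray(loss_reduction, float)
    return (lr - lr.min()) / (lr.max() - lr.min())

pli = power_loss_index([12.0, 30.5, 8.2, 25.1, 18.4])  # hypothetical kW values
candidates = np.argsort(pli)[::-1][:3]                 # top-3 candidate buses
print(pli, candidates)
```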

Keywords: particle swarm optimization, pre-determination of capacitor locations, optimal capacitor placement and sizing, unbalanced electrical distribution system

Procedia PDF Downloads 429
4795 Distribution and Historical Trends of PAHs Deposition in Recent Sediment Cores of the Imo River, SE Nigeria

Authors: Miranda I. Dosunmu, Orok E. Oyo-Ita, Inyang O. Oyo-Ita

Abstract:

Polycyclic aromatic hydrocarbons (PAHs) are a class of priority-listed organic pollutants due to their carcinogenicity, mutagenicity, acute toxicity, and persistence in the environment. The distribution and historical changes of PAH contamination in recent sediment cores from the Imo River were investigated using gas chromatography coupled with mass spectrometry. The concentrations of total PAHs (TPAHs), ranging from 402.37 ng/g dry weight (dw) at the surface layer of the Estuary zone (ESC6; 0-5 cm) to 92,388.59 ng/g dw at the near-surface layer of the Afam zone (ASC5; 5-10 cm), indicate that PAH contamination was localized not only between sample sites but also within the same cores. Sediment-depth profiles for the four cores (Afam, Mangrove, Estuary, and illegal petroleum refinery) revealed irregular distribution patterns in the TPAH concentrations, except that these levels peaked in the near-surface layers (5-10 cm), corresponding to a time frame of about 1996-2004. This time scale coincided with the period of intensive bunkering and oil pipeline vandalization by the Niger Delta militant groups. A general slight decline was also found in the TPAH levels from the near-surface layers (5-10 cm) to the most recent top layers (0-5 cm) of the cores, attributable to the recent effort by the Nigerian government in clamping down on the illegal activities of the economic saboteurs. Therefore, the recent amnesty period granted to the militant groups should be extended. Although the mechanism of perylene formation remains enigmatic, examination of its distribution down the cores indicates natural biogenic, pyrogenic, and petrogenic origins for the compound in different zones. Thus, the characteristic features of the Imo River environment provide a means of tracing diverse origins for perylene.

Keywords: perylene, historical trend, distribution, origin, Imo River

Procedia PDF Downloads 248
4794 Analysis of Urban Population Using Twitter Distribution Data: Case Study of Makassar City, Indonesia

Authors: Yuyun Wabula, B. J. Dewancker

Abstract:

In the past decade, social networking apps have grown very rapidly. Geolocation data is one of the important features of social media, attaching the user's location coordinates in the real world. This paper proposes the use of geolocation data from the Twitter social media application to gain knowledge about urban dynamics, especially human mobility behavior, and aims to explore the relation between Twitter geolocation and the presence of people in urban areas. Firstly, the study analyzes the spread of people within particular areas of the city using Twitter data. Secondly, we match and categorize the existing places based on visits by the same individuals. Then, we combine the Twitter data from the tracking results with questionnaire data to capture the Twitter user profiles; to do so, we used frequency distribution analysis to determine the percentage of visitors. To validate the hypothesis, we compare the results with local population statistics and the land-use mapping released by the city planning department of the Makassar local government. The results show that there is a correlation between the Twitter geolocation and the questionnaire data. Thus, integrating Twitter data and survey data can reveal the profiles of social media users.

Keywords: geolocation, Twitter, distribution analysis, human mobility

Procedia PDF Downloads 312
4793 Assessing Significance of Correlation with Binomial Distribution

Authors: Vijay Kumar Singh, Pooja Kushwaha, Prabhat Ranjan, Krishna Kumar Ojha, Jitendra Kumar

Abstract:

Present-day high-throughput genomic technologies, NGS and microarrays, are producing large volumes of data that require improved analysis methods to make sense of them. The correlation between genes and samples has been regularly used to gain insight into many biological phenomena including, but not limited to, co-expression/co-regulation, gene regulatory networks, clustering, and pattern identification. However, the presence of outliers and violations of the assumptions underlying Pearson correlation are frequent and may distort the actual correlation between genes, leading to spurious conclusions. Here, we report a method to measure the strength of association between genes. The method assumes that the expression values of a gene are Bernoulli random variables whose outcome depends on the sample being probed. The method considers two genes uncorrelated if the number of samples with the same outcome for both genes (Ns) is equal to the expected number (Es); the extent of correlation depends on how far Ns deviates from Es. The method does not assume normality of the parent population, is fairly unaffected by the presence of outliers, can be applied to qualitative data, and uses the binomial distribution to assess the significance of association. At this stage, we would not claim superiority of the method over existing correlation methods, but it could be another way of calculating correlation in addition to them. The method uses the binomial distribution, which has not been used for this purpose before, to assess the significance of association between two variables. We are evaluating the performance of our method on NGS/microarray data, which is noisy and riddled with outliers, to see if it can differentiate between spurious and actual correlation. While working with the method, it has not escaped our notice that it could also be generalized to measure the association of more than two variables, which has proven difficult with existing methods.
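
The Ns-versus-Es test described above can be illustrated with a short sketch: binarize each gene's expression, count the samples where the two genes agree (Ns), compute the agreement expected under independence (Es), and assess the deviation with a binomial test. Thresholding at the median is an assumption for illustration, not the authors' algorithm.

```python
import numpy as np
from scipy.stats import binomtest

def binomial_association(x, y):
    """Association between two genes via per-sample Bernoulli outcomes.

    x, y : expression vectors over the same samples
    """
    bx = np.asarray(x) > np.median(x)
    by = np.asarray(y) > np.median(y)
    n = len(bx)
    ns = int((bx == by).sum())              # samples with the same outcome
    p1, p2 = bx.mean(), by.mean()
    p_same = p1 * p2 + (1 - p1) * (1 - p2)  # expected agreement if independent
    es = n * p_same
    return ns, es, binomtest(ns, n, p_same).pvalue

rng = np.random.default_rng(1)
g1 = rng.normal(size=50)
g2 = g1 + rng.normal(scale=0.5, size=50)    # correlated toy genes
print(binomial_association(g1, g2))
```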

Keywords: binomial distribution, correlation, microarray, outliers, transcriptome

Procedia PDF Downloads 410
4792 Undercooling of Refractory High-Entropy Alloy

Authors: Liang Hu

Abstract:

Refractory high-entropy alloys (RHEAs) formed from the refractory metals W, Ta, Mo, Nb, Hf, V, and Zr were first introduced in 2010 to obtain better strength at high temperature than conventional HEAs based on Al, Co, Cr, Cu, Fe, and Ni. Because of their refractory character and high chemical activity at elevated temperature, the electrostatic levitation (ESL) technique has been utilized to achieve the rapid solidification of RHEAs. Several RHEAs consisting of W, Ta, Mo, Nb, and Zr were selected for undercooling and rapid solidification by ESL; they were substantially undercooled by up to 0.2 TL. The evolution of the as-solidified microstructure and the component redistribution with undercooling were investigated by SEM, EBSD, and EPMA analysis. Based on the EPMA results for the constituent elements at different undercooling levels, the dependence of the chemical distribution on undercooling was also analyzed.

Keywords: chemical distribution, high-entropy alloy, rapid solidification, undercooling

Procedia PDF Downloads 126
4791 Reliability Indices Evaluation of SEIG Rotor Core Magnetization with Minimum Capacitive Excitation for WECs

Authors: Lokesh Varshney, R. K. Saket

Abstract:

This paper presents a reliability indices evaluation of the rotor core magnetization of an induction motor operated as a self-excited induction generator (SEIG), using a probability distribution approach and Monte Carlo simulation. Parallel capacitors with a calculated minimum capacitive value were connected across the terminals of the induction motor, operating as a SEIG with unregulated shaft speed, during the experimental study. A three-phase, 4-pole, 50 Hz, 5.5 hp, 12.3 A, 230 V induction motor coupled with a DC shunt motor was tested in the electrical machine laboratory with variable reactive loads. Based on this experimental study, it is possible to choose a reliable induction machine operating as a SEIG for unregulated renewable energy applications in remote areas or where the grid is not available. The failure density function, cumulative failure distribution function, survivor function, hazard model, probability of success, and probability of failure for the reliability evaluation of the three-phase induction motor operating as a SEIG are presented graphically in this paper.
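
The reliability functions named above are related by standard identities; the sketch below evaluates them for a constant-failure-rate (exponential) model as one simple choice, whereas the paper fits empirical distributions to the SEIG test data.

```python
import numpy as np

def exponential_reliability(t, lam):
    """Failure density f, cumulative failure F, survivor R, and hazard h
    for an exponential life model with failure rate lam."""
    t = np.asarray(t, float)
    F = 1 - np.exp(-lam * t)      # cumulative failure distribution (prob. of failure)
    R = 1 - F                     # survivor function (probability of success)
    f = lam * np.exp(-lam * t)    # failure density function
    h = np.full_like(t, lam)      # hazard rate: constant for the exponential model
    return f, F, R, h

print(exponential_reliability(np.linspace(0, 10, 5), lam=0.3))
```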

Keywords: residual magnetism, magnetization curve, induction motor, self excited induction generator, probability distribution, Monte Carlo simulation

Procedia PDF Downloads 554
4790 Operations Research Applications in Audit Planning and Scheduling

Authors: Abdel-Aziz M. Mohamed

Abstract:

This paper presents a state-of-the-art survey of the operations research models developed for internal audit planning. Two alternative approaches have been followed in the literature for audit planning: (1) identifying the optimal audit frequency; and (2) determining the optimal audit resource allocation. The first approach identifies the elapsed time between two successive audits, which can be presented as the optimal number of audits in a given planning horizon, or the optimal number of transactions after which an audit should be performed. It also includes the optimal audit schedule. The second approach determines the optimal allocation of audit frequency among all auditable units in the firm. In our review, we discuss both the deterministic and probabilistic models developed for audit planning. In addition, game theory models are reviewed to find the optimal auditing strategy based on the interactions between the auditors and the clients.

Keywords: operations research applications, audit frequency, audit-staff scheduling, audit planning

Procedia PDF Downloads 812
4789 Financial Assets Return, Economic Factors and Investor's Behavioral Indicators Relationships Modeling: A Bayesian Networks Approach

Authors: Nada Souissi, Mourad Mroua

Abstract:

The main purpose of this study is to examine the interaction between financial asset volatility, economic factors, and investors' behavioral indicators related to both company and market stocks for the period from January 2000 to January 2020. Using multiple linear regression and Bayesian network modeling, the results show positive and negative relationships between the investor psychology index, economic factors, and predicted stock market returns. We show that the application of a discrete Bayesian network contributes to identifying the cause-and-effect relationships among the economic and financial variables and the psychology index.

Keywords: financial asset return predictability, economic factors, investor's psychology index, Bayesian approach, probabilistic networks, parametric learning

Procedia PDF Downloads 144
4788 Optical and Double Folding Analysis for 6Li+16O Elastic Scattering

Authors: Abd Elrahman Elgamala, N. Darwish, I. Bondouk, Sh. Hamada

Abstract:

Available experimental angular distributions for 6Li elastically scattered from the 16O nucleus in the energy range 13.0-50.0 MeV are investigated and reanalyzed using the optical model with a conventional phenomenological potential and also using the double folding optical model with different interaction models: DDM3Y1, CDM3Y1, CDM3Y2, and CDM3Y3. All the interaction models involved are of the M3Y Paris type, except DDM3Y1, which is of the M3Y Reid type; the main difference between them lies in the values of the parameters of the incorporated density distribution function F(ρ). We have extracted the renormalization factor NR for the 6Li+16O nuclear system in the energy range 13.0-50.0 MeV using the aforementioned interaction models.

Keywords: elastic scattering, optical model, folding potential, density distribution

Procedia PDF Downloads 139
4787 A Succinct Method for Allocation of Reactive Power Loss in Deregulated Scenario

Authors: J. S. Savier

Abstract:

Real power is the component of power that is converted into useful energy, whereas reactive power is the component that cannot be converted into useful energy but is required for the magnetization of various electrical machines. If the reactive power is compensated at the consumer end, the need for reactive power flow from generators to the load can be avoided, and hence the overall power loss can be reduced. In this scenario, this paper presents a succinct method, called the JSS method, for the allocation of reactive power losses to consumers connected to radial distribution networks in a deregulated environment. The proposed method has the advantage that no assumptions are made in deriving the reactive power loss allocation.

Keywords: deregulation, reactive power loss allocation, radial distribution systems, succinct method

Procedia PDF Downloads 370
4786 A Prediction Method of Pollutants Distribution Pattern: Flare Motion Using Computational Fluid Dynamics (CFD) Fluent Model with Weather Research Forecast Input Model during Transition Season

Authors: Benedictus Asriparusa, Lathifah Al Hakimi, Aulia Husada

Abstract:

A large amount of energy is wasted through the release of natural gas associated with the oil industry. This release disturbs the environment, particularly the condition of the atmospheric layers globally, which contributes to global warming. This research presents an overview of the methods employed by researchers at PT. Chevron Pacific Indonesia in the Minas area to develop a new prediction method for measuring and reducing gas flaring and its emissions. The approach emphasizes advanced research involving analytical studies, numerical studies, modeling, and computer simulations, amongst other techniques. A flaring system is the controlled burning of natural gas in the course of routine oil and gas production operations; this burning occurs at the end of a flare stack or boom. The combustion process releases emissions of greenhouse gases such as NO2, CO2, and SO2. This condition affects the chemical composition of the air and the environment around the boundary layer, mainly during the transition season. The transition season in Indonesia is very difficult to predict because of the interplay of two different air mass conditions; this research focused on the 2013 transition season, for which a simulation of the new pollutant distribution pattern is needed. This paper outlines trends in gas flaring modeling and current developments in predicting the dominant variables of pollutant distribution. A Fluent model is used to simulate the distribution of the pollutant gas coming out of the stack, whereas WRF model output is used to overcome the limitations of the analysis of meteorological data and atmospheric conditions in the study area. Based on the model runs, the most influential factor was wind speed. The goal of the simulation is to predict the new pollutant distribution pattern based on the times when the fastest and slowest winds occur. According to the simulation results, the fastest wind (end of March) moves pollutants horizontally, while the slowest wind (middle of May) moves pollutants vertically. In addition, a flare stack designed in compliance with the EPA Oil and Gas Facility Stack Parameters keeps the pollutant concentration below the NAAQS (National Ambient Air Quality Standards) threshold.

Keywords: flare motion, new prediction, pollutants distribution, transition season, WRF model

Procedia PDF Downloads 548