Search results for: skewed generalized error distribution
6174 Power System Stability Enhancement Using Self Tuning Fuzzy PI Controller for TCSC
Authors: Salman Hameed
Abstract:
In this paper, a self-tuning fuzzy PI controller (STFPIC) is proposed for a thyristor controlled series capacitor (TCSC) to improve power system dynamic performance. In a STFPIC, the output scaling factor is adjusted on-line by an updating factor (α). The value of α is determined from a fuzzy rule-base defined on the error (e) and change of error (Δe) of the controlled variable. The proposed self-tuning controller is designed using a very simple control rule-base and the most natural and unbiased membership functions (MFs): symmetric triangles with equal base and 50% overlap with neighboring MFs. The comparative performances of the proposed STFPIC and the standard fuzzy PI controller (FPIC) have been investigated on a multi-machine power system (namely, a four-machine, two-area system) through detailed non-linear simulation studies using MATLAB/SIMULINK. The simulation studies show that for damping oscillations, the performance of the proposed STFPIC is better than that of the standard FPIC. Moreover, both the proposed STFPIC and the FPIC have been found to be quite effective in damping oscillations over a wide range of operating conditions and in significantly enhancing the power-carrying capability of the power system.
Keywords: genetic algorithm, power system stability, self-tuning fuzzy controller, thyristor controlled series capacitor
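As a rough illustration of the gain-updating idea, a minimal Python sketch of one self-tuning fuzzy PI step is given below; the five-MF partition, the α rule table, and the PI gains are illustrative assumptions, not the authors' rule-base.

```python
# Minimal self-tuning fuzzy PI sketch: the updating factor alpha is inferred
# from (e, delta_e) with symmetric triangular MFs (50% overlap), then scales
# the controller output. Rule table and gains are illustrative assumptions.
import numpy as np

CENTERS = np.linspace(-1, 1, 5)                  # 5 triangular MFs on [-1, 1]

def mu(x):
    """Triangular membership grades with equal base and 50% overlap."""
    half = CENTERS[1] - CENTERS[0]
    return np.clip(1 - np.abs(x - CENTERS) / half, 0, 1)

# Assumed rule table for alpha over the (e, delta_e) grid.
ALPHA = np.array([[1.0, 0.9, 0.7, 0.4, 0.2],
                  [0.9, 0.7, 0.4, 0.2, 0.4],
                  [0.7, 0.4, 0.0, 0.4, 0.7],
                  [0.4, 0.2, 0.4, 0.7, 0.9],
                  [0.2, 0.4, 0.7, 0.9, 1.0]])

def alpha(e, de):
    """Product-inference with weighted-average defuzzification."""
    w = np.outer(mu(e), mu(de))
    return (w * ALPHA).sum() / w.sum()

def stfpic_step(e, de, kp=0.5, ki=0.1, integral=0.0):
    integral += ki * e                           # PI integral term
    u = alpha(e, de) * (kp * e + integral)       # self-tuned output gain
    return u, integral

print(stfpic_step(0.4, -0.1))
```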
Procedia PDF Downloads 423
6173 Lee-Carter Mortality Forecasting Method with Dynamic Normal Inverse Gaussian Mortality Index
Authors: Funda Kul, İsmail Gür
Abstract:
Pension scheme providers have to price mortality risk with an accurate mortality forecasting method. Many mortality forecasting methods have been constructed and used in the literature. The Lee-Carter model was the first model to consider stochastic improvement trends in life expectancy, and it is still widely used. In the Lee-Carter model, mortality forecasting is done through the mortality index, which is assumed to follow an ARIMA time series model. In this paper, we propose a dynamic normal inverse Gaussian distribution to model the mortality index in the Lee-Carter model. Using population mortality data for Italy, France, and Turkey, the model's forecasting capability is investigated, and a comparative analysis with other models is provided using some well-known benchmarking criteria.
Keywords: mortality, forecasting, lee-carter model, normal inverse gaussian distribution
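For reference, the standard Lee-Carter decomposition and random-walk index are shown below; in the proposed variant the Gaussian innovation of the index would be replaced by a normal inverse Gaussian one (a sketch of the standard setup, not the paper's exact notation):

```latex
\ln m_{x,t} = a_x + b_x\,k_t + \varepsilon_{x,t}, \qquad
k_t = k_{t-1} + d + \xi_t, \qquad
\xi_t \sim \mathrm{NIG}(\alpha,\beta,\delta,\mu)\ \text{instead of}\ \mathcal{N}(0,\sigma^2)
```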
Procedia PDF Downloads 360
6172 Metazoan Meiofauna and Their Abundance in Relation to Environmental Variables in the Northern Red Sea
Authors: Hamed A. El-Serehy, Khaled A. Al-Rasheid, Fahad A. Al-Misned
Abstract:
The composition and distribution of the benthic meiofauna assemblages of the Egyptian coasts along the Red Sea are described in relation to abiotic variables. Sediment samples were collected seasonally from twelve stations chosen along the northern part of the Red Sea to observe the meiofaunal community structure, its temporal distribution and horizontal fluctuation in relation to the environmental conditions of the Red Sea marine ecosystem. The meiofaunal assemblage in the area of study was well diversified, including 140 taxa. The temperature, salinity, pH, dissolved oxygen, and redox potential were measured at the time of collection. The water content of the sediments, total organic matter and chlorophyll a values were determined, and sediment samples were subjected to granulometric analysis. A total of 10 major meiofauna taxa were identified, with the meiofauna being primarily represented by nematodes (on annual average from 42% to 84%), harpacticoids, polychaetes and ostracods, and with meiofauna abundances ranging from 41 to 167 ind./10 cm². The meiofaunal population density fluctuated seasonally, with a peak of 192.52 ind./10 cm² during summer at station II. The vertical zonation in the distribution of the meiofaunal community was significantly correlated with interstitial water, chlorophyll a and total organic matter values. The present study indicates the existence of a well-diversified meiofaunal group that can serve as food for higher trophic levels in the Red Sea interstitial environment.
Keywords: benthos, diversity, meiofauna, Red Sea
Procedia PDF Downloads 388
6171 New Kinetic Effects in Spatial Distribution of Electron Flux and Excitation Rates in Glow Discharge Plasmas in Middle and High Pressures
Authors: Kirill D. Kapustin, Mikhail B. Krasilnikov, Anatoly A. Kudryavtsev
Abstract:
Physical formation mechanisms of differential electron fluxes in high-pressure positive-column gas discharges are discussed. It is shown that the spatial differential fluxes of the electrons are directed both inward and outward depending on the energy relaxation law. In some cases, the direction of the differential energy flux at intermediate energies (5-10 eV) appeared to be directed downward throughout the volume, except in the region near the wall, so electrons in this region dissipate more energy than they gain from the axial electric field. This paradoxical behaviour of the electron flux in spatial-energy space is presented.
Keywords: plasma kinetics, electron distribution function, excitation and radiation rates, local and nonlocal EDF
Procedia PDF Downloads 400
6170 Microfluidic Continuous Approaches to Produce Magnetic Nanoparticles with Homogeneous Size Distribution
Authors: Ane Larrea, Victor Sebastian, Manuel Arruebo, Jesus Santamaria
Abstract:
We present a gas-liquid microfluidic system as a reactor to obtain magnetite nanoparticles with an excellent degree of control regarding their crystalline phase, shape and size. Several types of microflow approaches were selected to prevent nanomaterial aggregation and to promote a homogeneous size distribution. The selected reactor consists of a mixing stage aided by ultrasound waves and a reaction stage using an N2-liquid segmented flow to prevent magnetite oxidation to non-magnetic phases. A milli-fluidic reactor was developed to increase the production rate, achieving a magnetite throughput close to 450 mg/h in continuous operation.
Keywords: continuous production, magnetic nanoparticles, microfluidics, nanomaterials
Procedia PDF Downloads 592
6169 Heavy Minerals Distribution in the Recent Stream Sediments of Diyala River Basin, Northeastern Iraq
Authors: Abbas R. Ali, Daroon Hasan Khorsheed
Abstract:
Twenty-one samples of stream sediments were collected from the Diyala River Basin (DRB), which represents one of the three major tributaries of the Tigris River in northeastern Iraq. This study is concerned with the heavy mineral (HM) analysis of the +63 μm fraction of the Diyala River sediments and their distribution pattern in the various river basin sectors, as well as comparing the present results with previous works. The metastable heavy minerals (epidote, staurolite, garnet) represent more than 30%, whereas the unstable heavy minerals (pyroxene and amphibole) make up only about 19%. Opaques are present in high proportions, about 29% on average. The ultrastable heavy minerals (zircon, tourmaline, rutile) are minor constituents (7%) of the sediments. According to the laboratory analytical data on heavy mineral distributions, the studied sediments are derived from the mafic and ultramafic rocks found in northeastern Iraq, which represent the Walash-Nawpordan Series and Mawat complexes in the Zagros zones. The presence of zircon and tourmaline in trace amounts may indicate a weak role of acidic rocks in the source area, whereas the epidote group minerals indicate a role of metamorphic rocks.
Keywords: heavy minerals, mineral distribution, recent stream sediment, Diyala river, northeastern Iraq
Procedia PDF Downloads 518
6168 Development of Advanced Linear Calibration Technique for Air Flow Sensing by Using CTA-Based Hot Wire Anemometry
Authors: Ming-Jong Tsai, T. M. Wu, R. C. Chu
Abstract:
The purpose of this study is to develop an advanced linear calibration technique for air flow sensing using CTA-based hot-wire anemometry. The system contains a host PC with a human-machine interface, a wind tunnel, a wind speed controller, an automatic data acquisition module, and a nonlinear calibration model. To reduce the fitting error of a single fitting polynomial, this study proposes a Multiple Third-Order Polynomial Fitting Method (MPFM) for fitting the non-linear output of a CTA-based hot-wire anemometer. The CTA-based anemometer with built-in fitting parameters is installed in the wind tunnel, and the wind speed is controlled by the PC-based controller. The hot-wire anemometer's thermistor resistance change is converted into a voltage signal or temperature difference and then sent to the PC through a DAQ card. After the measurements of the original signal are completed, the multiple polynomial coefficients are automatically calculated and then sent to the microprocessor in the hot-wire anemometer. Finally, the corrected hot-wire anemometer is verified for linearity, repeatability, and error percentage, and the system outputs quality-control reports.
Keywords: flow rate sensing, hot wire, constant temperature anemometry (CTA), linear calibration, multiple third-order polynomial fitting method (MPFM), temperature compensation
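A minimal sketch of the piecewise idea behind MPFM, assuming the calibration range is simply split into equal segments with one cubic fit per segment; the synthetic King's-law-style data and the three-segment split are assumptions, not the paper's exact procedure.

```python
# Piecewise third-order polynomial calibration sketch: fit one cubic per
# velocity segment instead of a single global polynomial.
import numpy as np

volts = np.linspace(1.2, 2.4, 40)                    # anemometer output (V)
speed = ((volts**2 - 1.2) / 0.4) ** (1 / 0.45)       # synthetic King's-law-like data

segments = np.array_split(np.arange(volts.size), 3)  # 3 segments (assumed)
fits = [np.polyfit(volts[s], speed[s], 3) for s in segments]
edges = [volts[s[-1]] for s in segments]

def calibrated_speed(v):
    """Evaluate the cubic belonging to the segment that contains v."""
    for edge, coeffs in zip(edges, fits):
        if v <= edge:
            return np.polyval(coeffs, v)
    return np.polyval(fits[-1], v)

print(calibrated_speed(1.8))
```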
Procedia PDF Downloads 416
6167 Differences in Factors of Distributor Selection for Food and Non-Food OTOP Entrepreneurs in Thailand
Authors: Phutthiwat Waiyawuththanapoom
Abstract:
This study has one objective: to identify the differences in the factors of choosing a distributor between food and non-food OTOP entrepreneurs in Thailand. In this research, OTOP products are divided into two groups, food and non-food. The sample for the food-type OTOP product was processed fruit and vegetables from Nakorn Pathom province, and the sample for the non-food-type OTOP product was the court doll from Ang Thong province. The research was divided into 3 parts: a study of the distribution pattern and distributor selection for the food-type OTOP product, a study of the distribution pattern and distributor selection for the non-food-type OTOP product, and a comparison between the 2 product types to find the differences in distributor-selection factors. The data and information were collected through interviews. The populations in the research were 5 producers of processed fruit and vegetables from Nakorn Pathom province and 5 producers of the court doll from Ang Thong province. The significant factors in choosing a distributor for the food-type OTOP product are material-handling efficiency and on-time delivery, whereas for the non-food-type OTOP product the focus is on the channel of distribution and the cost of the distributor.
Keywords: distributor, OTOP, food and non-food, selection
Procedia PDF Downloads 355
6166 Design an Algorithm for Software Development in CBSE Environment Using Feed Forward Neural Network
Authors: Amit Verma, Pardeep Kaur
Abstract:
In software development organizations, component-based software engineering (CBSE) is an emerging paradigm for software development that has gained wide acceptance, as it often results in increased quality of the software product within development time and budget. In component reusability, the main challenge is identifying the right component from large repositories at the right time. The major objective of this work is to provide an efficient algorithm for the storage and effective retrieval of components using a neural network and parameters based on user choice through clustering. This research paper aims to propose an algorithm that provides an error-free and automatic process for the retrieval of components for reuse. In this algorithm, keywords (or components) are extracted from software documents, after which a k-means clustering algorithm is applied. Weights are then assigned to those keywords based on their frequency, and an ANN predicts whether the correct weight has been assigned to each keyword (or component); otherwise, the process back-propagates to the initial step and re-assigns the weights. Finally, all keywords are stored in repositories for effective retrieval. The proposed algorithm is very effective in error correction and detection, with user-based choice of components for reusability and efficient retrieval.
Keywords: component based development, clustering, back propagation algorithm, keyword based retrieval
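A minimal sketch of the indexing-and-retrieval part of such a pipeline, using keyword weighting and k-means from scikit-learn; the toy documents, the TF-IDF weighting variant, and the cluster count are assumptions, and the ANN weight-verification step is omitted.

```python
# Keyword extraction, frequency-based weighting, and k-means clustering for
# component retrieval, followed by within-cluster cosine ranking.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [                                     # component documentation (toy)
    "parse csv files into records",
    "render html tables from records",
    "read csv input stream",
    "http client with retry logic",
]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)                  # keyword weight matrix
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

def retrieve(query, top=2):
    """Rank components in the query's cluster by cosine similarity."""
    q = vec.transform([query])
    members = [i for i, c in enumerate(km.labels_) if c == km.predict(q)[0]]
    sims = (X[members] @ q.T).toarray().ravel()
    return [docs[members[i]] for i in sims.argsort()[::-1][:top]]

print(retrieve("csv parser"))
```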
Procedia PDF Downloads 378
6165 An Automatic Speech Recognition of Conversational Telephone Speech in Malay Language
Authors: M. Draman, S. Z. Muhamad Yassin, M. S. Alias, Z. Lambak, M. I. Zulkifli, S. N. Padhi, K. N. Baharim, F. Maskuriy, A. I. A. Rahim
Abstract:
The performance of a Malay automatic speech recognition (ASR) system for the call centre environment is presented. The system utilizes the Kaldi toolkit as the platform for the entire library and the algorithms used in performing the ASR task. The acoustic model implemented in this system uses a deep neural network (DNN) to model the acoustic signal and a standard n-gram model for language modelling. With 80 hours of training data from call centre recordings, the ASR system achieves 72% accuracy, corresponding to a 28% word error rate (WER). Testing was done using 20 hours of audio data. Despite the implementation of a DNN, the system shows low accuracy owing to the variety of noises, accents and dialects that typically occur in the Malaysian call centre environment. This significant variation across speakers is reflected by the large standard deviation of the average word error rate (WERav) (i.e., ~10%). The lowest WER (13.8%) was obtained from a recording sample with a standard Malay dialect (central Malaysia) of a native speaker, compared with 49% for the sample with the highest WER, which contains conversation of a speaker using a non-standard Malay dialect.
Keywords: conversational speech recognition, deep neural network, Malay language, speech recognition
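For reference, WER is the word-level edit distance between the reference and hypothesis transcripts divided by the reference length; a minimal sketch is below (the Malay phrases are invented for illustration).

```python
def wer(ref: str, hyp: str) -> float:
    """Word error rate: Levenshtein distance over words / reference length."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, sub)
    return d[-1][-1] / len(r)

print(wer("saya pergi ke pejabat", "saya pergi pejabat"))  # 0.25
```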
Procedia PDF Downloads 322
6164 The Effect of Exposure to High Noise Level on the Performance and Rate of Error in Manual Activities
Authors: Zahra Zamanian, Alireza Zamanian, Jafar Hasanzadeh
Abstract:
Background: Unwanted sound, as one of the most important physical factors in the majority of production units, imposes a great number of problems on industrial workers. Sound is an environmental factor that can cause physical as well as psychological damage and also affects individuals' performance and productivity. Therefore, the present study aimed to determine the effect of noise exposure on human performance. Methods: The present study assessed the effect of noise on the performance of 50 students of Shiraz University of Medical Sciences (25 males and 25 females) at sound pressures of 70, 90, and 110 dB, using two factors (physical features and the creation of different sound-pressure-source conditions) and applying the Two-Arm Coordination Test. Results: The results of the present study revealed no significant difference between male and female subjects or between the different sound-pressure conditions regarding the length of performance (p > 0.05). In addition, as the sound pressure increased, the length of performance increased as well. According to the results, no significant difference was found between performance at 70 and 90 dB. On the other hand, performance at 110 dB was significantly different from performance at 70 and 90 dB (p < 0.05 and p < 0.001). Conclusion: In general, as the sound pressure increases, performance decreases, which results in a considerable increase in the individuals' rate of error.
Keywords: physical factors, two-arm coordination test, Shiraz University of Medical Sciences, noise
Procedia PDF Downloads 305
6163 Spatial Differentiation Patterns and Influencing Mechanism of Urban Greening in China: Based on Data of 289 Cities
Authors: Fangzheng Li, Xiong Li
Abstract:
Significant differences in urban greening have emerged across Chinese cities, accompanying China's rapid urbanization. However, few studies have focused on the spatial differentiation of urban greening in China with large amounts of data. The spatial differentiation pattern, spatial correlation characteristics and the distribution shape of the urban green space ratio, urban green coverage rate and public green area per capita were calculated and analyzed using Global and Local Moran's I with data from 289 cities in 2014. We employed a Spatial Lag Model and a Spatial Error Model to assess the impacts of the urbanization process on the urban greening of China, and then used Geographically Weighted Regression (GWR) to estimate the spatial variation of these impacts. The results showed: 1. significant spatial dependence and heterogeneity existed in urban greening values, and the differentiation patterns were shaped by administrative grade and spatial agglomeration simultaneously; 2. urbanization has a negative correlation with urban greening in Chinese cities. Among the indices, the proportion of secondary industry, the urbanization rate, population, and the scale of urban land use have significant negative correlations with the urban greening of China, while automobile density and per capita Gross Domestic Product have no significant impact. The results of GWR modeling showed that the relationship between urbanization and urban greening is not constant in space; the local parameter estimates suggest significant spatial variation in the impacts of the various urbanization factors on urban greening.
Keywords: China’s urbanization, geographically weighted regression, spatial differentiation pattern, urban greening
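For reference, the Global Moran's I statistic used for the spatial-dependence test takes the standard form below, with $w_{ij}$ the spatial weight between cities $i$ and $j$ and $\bar{x}$ the mean greening value:

```latex
I = \frac{n}{\sum_{i}\sum_{j} w_{ij}} \cdot
    \frac{\sum_{i}\sum_{j} w_{ij}\,(x_i - \bar{x})(x_j - \bar{x})}
         {\sum_{i} (x_i - \bar{x})^{2}}
```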
Procedia PDF Downloads 461
6162 Spatial Interpolation of Aerosol Optical Depth Pollution: Comparison of Methods for the Development of Aerosol Distribution
Authors: Sahabeh Safarpour, Khiruddin Abdullah, Hwee San Lim, Mohsen Dadras
Abstract:
Air pollution is a growing problem arising from domestic heating, high-density vehicle traffic, electricity production, and expanding commercial and industrial activities, all increasing in parallel with urban population. Monitoring and forecasting of air quality parameters are important due to their health impact. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light extinction coefficient over a vertical atmospheric column of unit cross section, which represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. Seasonal AOD values at 550 nm derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard NASA's Terra satellite, for the 10-year period 2000-2010, were used to test seven different spatial interpolation methods in the present study. The accuracy of the estimations was assessed through visual analysis as well as independent validation based on basic statistics, such as the root mean square error (RMSE) and the correlation coefficient. Based on the RMSE and R values of predictions made using measured values from 2000 to 2010, Radial Basis Functions (RBFs) yielded the best results for spring, summer, and winter, and ordinary kriging yielded the best results for fall.
Keywords: aerosol optical depth, MODIS, spatial interpolation techniques, Radial Basis Functions
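A minimal sketch of the RBF interpolation step with RMSE and correlation validation on held-out points; the coordinates, AOD values, and kernel choice below are synthetic assumptions, not the study's MODIS data.

```python
# Fit an RBF surface to sampled AOD values and validate on held-out points.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(60, 2))                   # sample locations
aod = 0.3 + 0.002 * pts[:, 0] + 0.05 * rng.standard_normal(60)  # synthetic AOD

train, test = pts[:45], pts[45:]
rbf = RBFInterpolator(train, aod[:45], kernel="thin_plate_spline")
pred = rbf(test)

rmse = np.sqrt(np.mean((pred - aod[45:]) ** 2))
r = np.corrcoef(pred, aod[45:])[0, 1]
print(f"RMSE={rmse:.4f}, r={r:.3f}")
```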
Procedia PDF Downloads 407
6161 Cross-Sectional Study Investigating the Prevalence of Uncorrected Refractive Error and Visual Acuity through Mobile Vision Screening in the Homeless in Wales
Authors: Pakinee Pooprasert, Wanxin Wang, Tina Parmar, Dana Ahnood, Tafadzwa Young-Zvandasara, James Morgan
Abstract:
Homelessness has been shown to be correlated with poor health outcomes, including increased visual health morbidity. Despite this, there are relatively few studies regarding visual health in the homeless population, especially in the UK. This research aims to investigate the visual disability and access barriers prevalent in the homeless population in Cardiff, South Wales. Data were collected from 100 homeless participants in three different shelters. Visual outcomes included near and distance visual acuity as well as non-cycloplegic refraction. Qualitative data were collected via a questionnaire and included socio-demographic profile, ocular history, subjective visual acuity and level of access to healthcare facilities. Based on the participants' presenting visual acuity, the total prevalence of myopia and hyperopia was 17.0% and 19.0% respectively, based on the spherical equivalent of the eye with the greatest absolute value. The prevalence of astigmatism was 8.0%. The mean absolute spherical equivalent was 0.841 D and 0.853 D for the right and left eye respectively. The proportion of participants with sight loss (defined as VA = 6/12 to 6/60 in the better-seeing eye) was 27.0%, in comparison to 0.89% and 1.1% in the general Cardiff and Wales populations respectively (p < 0.05). Additionally, 1.0% of the homeless subjects were registered blind (VA less than 3/60), in comparison to 0.17% for the national census after age standardization. Most participants had good knowledge regarding access to prescription glasses and eye examination services. Despite this, 85.0% had never had their eyes examined by a doctor, and 73.0% had had their last optometrist appointment more than 5 years earlier. These findings suggest a significant disparity in ocular health, including visual acuity and refractive error, between the homeless and the general population. Further, the homeless were less likely to receive the same level of support and continued care in the community due to access barriers. These included a number of socio-economic factors, such as travel expenses and regional availability of services, as well as administrative shortcomings. In conclusion, this research demonstrates unmet visual health needs among the homeless, and inclusive policy changes may need to be implemented for better healthcare outcomes within this marginalized community.
Keywords: homelessness, refractive error, visual disability, Wales
Procedia PDF Downloads 172
6160 Determining Factors Influencing the Total Funding in Islamic Banking of Indonesia
Authors: Euphrasia Susy Suhendra, Lies Handrijaningsih
Abstract:
The banking sector, as an intermediary, occupies a very important position in bridging the working capital and investment needs of the real sector with fund owners, making funds work more effectively to improve economic value added. As an intermediary, Islamic banks raise funds from the public and then distribute them in the form of financing. In practice, the distribution of financing run by Islamic banking is not as easy as in theory, because there are many financing problems, some of which are caused by a lack of assessment and supervision of customers by the banks. This study aims to analyze the influence of Third Party Funds, Return on Assets (ROA), Non-Performing Financing (NPF), and the Financing-to-Deposit Ratio (FDR) on the total financing provided to the community by Islamic banks in Indonesia. The data used are monthly data released by Bank Indonesia in its Islamic Banking Statistics for the period January 2009 - December 2013. This study uses a cointegration test to examine the long-term relationship and an error correction model to examine the short-term relationship. The results indicate that Third Party Funds have a short-term effect on total financing; Return on Assets and Non-Performing Financing have long-term effects on total financing; and the Financing-to-Deposit Ratio has both short-term and long-term effects on the total financing provided by Islamic banks in Indonesia.
Keywords: Islamic banking, third party fund, return on asset, non-performing financing, financing deposit ratio
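A minimal sketch of the two-step Engle-Granger workflow (cointegration test, then an error correction model) on synthetic series; the variable construction and specification are assumptions, not the paper's exact model.

```python
# Step 1: test for a long-run relationship; Step 2: fit an ECM whose lagged
# residual coefficient measures the speed of adjustment to the long run.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(1)
n = 60  # monthly observations, Jan 2009 - Dec 2013
tpf = np.cumsum(rng.normal(1.0, 0.5, n))              # third-party funds (synthetic)
financing = 2.0 + 0.8 * tpf + rng.normal(0, 0.3, n)   # total financing (synthetic)

t_stat, p_value, _ = coint(financing, tpf)            # Engle-Granger test
print(f"Engle-Granger p-value: {p_value:.3f}")

resid = sm.OLS(financing, sm.add_constant(tpf)).fit().resid
d_fin, d_tpf = np.diff(financing), np.diff(tpf)
X = sm.add_constant(np.column_stack([d_tpf, resid[:-1]]))
ecm = sm.OLS(d_fin, X).fit()
print(ecm.params)   # last coefficient: speed of adjustment
```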
Procedia PDF Downloads 466
6159 Proposal of Optimality Evaluation for Quantum Secure Communication Protocols by Taking the Average of the Main Protocol Parameters: Efficiency, Security and Practicality
Authors: Georgi Bebrov, Rozalina Dimova
Abstract:
In the field of quantum secure communication, there is no evaluation that characterizes quantum secure communication (QSC) protocols in a complete, general manner. The current paper addresses the lack of such an evaluation for QSC protocols by introducing an optimality evaluation, expressed as the average over the three main parameters of QSC protocols: efficiency, security, and practicality. For the efficiency evaluation, the common expression of this parameter is used, which incorporates all the classical and quantum resources (bits and qubits) utilized for transferring a certain amount of information (bits) in a secure manner. Using a criteria-based approach (whether or not certain criteria are met), an expression for the practicality evaluation is presented, which accounts for the complexity of the practical realization of a QSC protocol. Based on the error rates induced by the common quantum attacks (measure-and-resend, intercept-and-resend, probe attack, and entanglement-swapping attack), the security evaluation for a QSC protocol is proposed as the minimum function taken over the error rates of the mentioned quantum attacks. For the sake of clarity, an example is presented to show how the optimality is calculated.
Keywords: quantum cryptography, quantum secure communication, quantum secure direct communication security, quantum secure direct communication efficiency, quantum secure direct communication practicality
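A minimal sketch of the proposed evaluation as defined in the abstract: security is the minimum over the attack-induced error rates, and optimality is the plain average of the three parameters. The numeric values below are invented for illustration.

```python
def optimality(efficiency, practicality, attack_error_rates):
    """Average of efficiency, security, and practicality, where security
    is the minimum error rate over the considered quantum attacks."""
    security = min(attack_error_rates)
    return (efficiency + security + practicality) / 3.0

# Illustrative (made-up) values for a hypothetical QSC protocol:
score = optimality(
    efficiency=0.5,
    practicality=0.75,
    attack_error_rates=[0.25, 0.25, 0.5, 0.33],  # MR, IR, probe, ES attacks
)
print(round(score, 3))  # 0.5
```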
Procedia PDF Downloads 184
6158 A Nonlocal Means Algorithm for Poisson Denoising Based on Information Geometry
Authors: Dongxu Chen, Yipeng Li
Abstract:
This paper presents an information-geometry Nonlocal Means (NLM) algorithm for Poisson denoising. NLM estimates a noise-free pixel as a weighted average of image pixels, where each pixel is weighted according to the similarity between image patches in Euclidean space. In this work, every pixel is a Poisson distribution locally estimated by Maximum Likelihood (ML), and all distributions constitute a statistical manifold. The NLM denoising algorithm is conducted on this statistical manifold, where the Fisher information matrix can be used to compute geodesic distances between distributions, which serve as the similarity between patches. This approach was demonstrated to be competitive with related state-of-the-art methods.
Keywords: image denoising, Poisson noise, information geometry, nonlocal-means
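A minimal sketch of the information-geometric similarity, using the standard Fisher-Rao geodesic distance for the Poisson family, d(l1, l2) = 2|sqrt(l1) - sqrt(l2)|; the exponential weight form and the bandwidth are assumptions, not the paper's exact choices.

```python
# Geodesic patch distance on the Poisson manifold, plugged into an
# NLM-style weight. The ML estimate of each pixel's intensity lambda is
# the pixel value itself.
import numpy as np

def poisson_geodesic(lam1, lam2):
    """Fisher-Rao distance between Poisson intensities."""
    return 2.0 * np.abs(np.sqrt(lam1) - np.sqrt(lam2))

def nlm_weight(patch_a, patch_b, h=1.0):
    """Weight from the mean per-pixel geodesic distance (assumed form)."""
    d = poisson_geodesic(patch_a, patch_b).mean()
    return np.exp(-(d ** 2) / (h ** 2))

a = np.array([[4.0, 9.0], [16.0, 25.0]])
b = np.array([[5.0, 8.0], [15.0, 24.0]])
print(nlm_weight(a, b))
```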
Procedia PDF Downloads 285
6157 A Comparative Evaluation of the SIR and SEIZ Epidemiological Models to Describe the Diffusion Characteristics of COVID-19 Polarizing Viewpoints on Online
Authors: Maryam Maleki, Esther Mead, Mohammad Arani, Nitin Agarwal
Abstract:
This study examines how opposing viewpoints related to COVID-19 were diffused on Twitter. To accomplish this, six datasets were analyzed using two epidemiological models, SIR (Susceptible, Infected, Recovered) and SEIZ (Susceptible, Exposed, Infected, Skeptics). The six datasets were chosen because they represent opposing viewpoints on the COVID-19 pandemic: three contain anti-subject hashtags, while the other three contain pro-subject hashtags. The time frame for all datasets is three years, from January 2020 to December 2022. The findings revealed that while both models were effective in evaluating the propagation trends of these polarizing viewpoints, the SEIZ model was more accurate, with a relatively lower error rate (6.7%) compared to the SIR model (17.3%). Additionally, the relative error for both models was lower for anti-subject hashtags than for pro-subject hashtags. By leveraging epidemiological models, insights into the propagation trends of polarizing viewpoints on Twitter were gained. This study paves the way for the development of methods to prevent the spread of ideas that lack scientific evidence while promoting the dissemination of scientifically backed ideas.
Keywords: mathematical modeling, epidemiological model, seiz model, sir model, covid-19, twitter, social network analysis, social contagion
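A minimal sketch of the SIR compartment model that such studies fit to hashtag adoption (the SEIZ variant adds Exposed and Skeptic compartments); the population size and rate parameters are illustrative, not estimates from the paper's Twitter data.

```python
# Integrate the SIR ODEs: S' = -b*S*I/N, I' = b*S*I/N - g*I, R' = g*I.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma, n):
    s, i, r = y
    ds = -beta * s * i / n
    di = beta * s * i / n - gamma * i
    return [ds, di, -ds - di]            # dr = gamma*i

n = 10_000                               # users exposed to the hashtag
sol = solve_ivp(sir, (0, 120), [n - 10, 10, 0],
                args=(0.35, 0.1, n), dense_output=True)
print(sol.y[1].max())                    # peak number of "infected" posters
```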
Procedia PDF Downloads 62
6156 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history dynamic analysis of structures is considered an exact method, but it is computationally intensive. Filtering earthquake strong ground motions by applying the wavelet transform is one approach to reducing the computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since an earthquake strong ground motion record is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out non-effective frequencies of the strong ground motion. The filtration process may be repeated several times, although each approximation introduces more error. In this paper, the strong ground motion is filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered using various wavelets, and dynamic analysis of sample shear and moment frames is implemented. The error associated with each wavelet is computed by comparing the dynamic response of the sample structures with the exact responses, which are computed by dynamic analysis of the structures using the non-filtered strong ground motion.
Keywords: wavelet transform, computational error, computational duration, strong ground motion data
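A minimal sketch of a single filtration pass, assuming a one-level discrete wavelet decomposition with the detail (high-frequency) band discarded before reconstruction; the synthetic record and wavelet choice are illustrative.

```python
# One-level DWT filtering of a ground-motion record with PyWavelets.
import numpy as np
import pywt

dt = 0.02                                    # record time step (assumed)
t = np.arange(0, 40, dt)
accel = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)

approx, detail = pywt.dwt(accel, "db4")      # one-level decomposition
filtered = pywt.idwt(approx, np.zeros_like(detail), "db4")[: t.size]

err = np.linalg.norm(filtered - accel) / np.linalg.norm(accel)
print(f"relative filtration error: {err:.3f}")
```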
Procedia PDF Downloads 378
6155 Using Arellano-Bover/Blundell-Bond Estimator in Dynamic Panel Data Analysis – Case of Finnish Housing Price Dynamics
Authors: Janne Engblom, Elias Oikarinen
Abstract:
A panel dataset follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models, which form a wide range of linear models. A special case of panel data models is dynamic in nature. A complication of a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of the estimates; several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Arellano-Bover/Blundell-Bond Generalized Method of Moments (GMM) estimator, an extension of the Arellano-Bond estimator in which past values, and different transformations of past values, of the potentially problematic independent variable are used as instruments together with other instrumental variables. The Arellano-Bover/Blundell-Bond estimator augments Arellano-Bond by making the additional assumption that first differences of the instrument variables are uncorrelated with the fixed effects. This allows the introduction of more instruments and can dramatically improve efficiency. It builds a system of two equations - the original equation and the transformed one - and is also known as system GMM. In this study, Finnish housing price dynamics were examined empirically using the Arellano-Bover/Blundell-Bond estimation technique together with ordinary OLS. The aim of the analysis was to provide a comparison between conventional fixed-effects panel data models and dynamic panel data models. The Arellano-Bover/Blundell-Bond estimator is suitable for this analysis for a number of reasons: it is a general estimator designed for situations with 1) a linear functional relationship; 2) one left-hand-side variable that is dynamic, depending on its own past realizations; 3) independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realizations of the error; 4) fixed individual effects; and 5) heteroskedasticity and autocorrelation within individuals but not across them. Based on data from 14 Finnish cities over 1988-2012, the estimates of short-run housing price dynamics differed considerably depending on the model and instrumenting used. In particular, the use of different instrumental variables caused variation in the model estimates and their statistical significance. This was especially clear when comparing the OLS estimates with those of the different dynamic panel data models. The estimates provided by the dynamic panel data models were more in line with the theory of housing price dynamics.
Keywords: dynamic model, fixed effects, panel data, price dynamics
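For reference, the dynamic panel specification at issue has the generic form below, where the lagged dependent variable is correlated with the fixed effect; system GMM adds the levels-equation moment condition shown on the right, using lagged first differences as instruments (a sketch of the standard setup, not the paper's exact specification):

```latex
y_{it} = \gamma\, y_{i,t-1} + \mathbf{x}_{it}'\boldsymbol{\beta} + \mu_i + \varepsilon_{it},
\qquad
\mathbb{E}\!\left[\Delta y_{i,t-1}\,(\mu_i + \varepsilon_{it})\right] = 0
```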
Procedia PDF Downloads 1508
6154 Optimal Planning of Dispatchable Distributed Generators for Power Loss Reduction in Unbalanced Distribution Networks
Authors: Mahmoud M. Othman, Y. G. Hegazy, A. Y. Abdelaziz
Abstract:
This paper proposes a novel heuristic algorithm that aims to determine the best size and location of distributed generators in unbalanced distribution networks. The proposed heuristic algorithm can deal with planning cases where power loss is to be optimized without violating the system's practical constraints. The distributed generation units in the proposed algorithm are modeled as voltage-controlled nodes, with the flexibility to be converted to constant-power-factor nodes in case of reactive power limit violation. The proposed algorithm is implemented in MATLAB and tested on the IEEE 37-node feeder. The results obtained show the effectiveness of the proposed algorithm.
Keywords: distributed generation, heuristic approach, optimization, planning
Procedia PDF Downloads 525
6153 Evaluation of Complications Observed in Porcelain Fused to Metal Crowns Placed at a Teaching Institution
Authors: Shizrah Jamal, Robia Ghafoor, Farhan Raza
Abstract:
The porcelain fused to metal (PFM) crown is the most versatile variety of crown and is commonly placed worldwide. Various complications have been reported in PFM crowns with use over time, including chipping of the porcelain, recurrent caries, loss of retention, open contacts, and tooth fracture. The objective of the present study was to determine the frequency of these complications in crowns cemented over a period of five years in a tertiary care hospital, and also to report the survival of these crowns. A retrospective study was conducted in the dental clinics of Aga Khan University Hospital, in which 150 PFM crowns cemented over a period of five years were evaluated. Patient demographics, oral hygiene habits, para-functional habits, and crown insertion and follow-up dates were recorded in a specially designed proforma. All PFM crowns fulfilling the inclusion criteria were assessed both clinically and radiographically for the presence of any complication. SPSS version 22.0 was used for statistical analysis. The frequency distribution and proportion of complications were determined. A chi-square test was used to determine the association of PFM crown complications with multiple variables, including tooth wear, opposing dentition and betel nut chewing. Kaplan-Meier survival analysis was used to determine the survival of PFM crowns over the period of five years. The level of significance was kept at 0.05. A total of 107 patients, with a mean age of 43.51 ± 12.4 years, having 150 PFM crowns were evaluated. The most common complication observed was open proximal contacts (8.7%), followed by porcelain chipping (6%), decementation (5.3%), and abutment fracture (1.3%). The chi-square test showed no statistically significant association of PFM crown complications with tooth wear, betel nut or opposing dentition (p > 0.05). The overall success and survival rates of PFM crowns turned out to be 78.7% and 84.7% respectively. Within the limitations of the study, it can be concluded that PFM crowns are an effective treatment modality with high success and survival rates. Since this was a single-centered study, the results should be generalized with caution.
Keywords: chipping, complication, crown, survival rate
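A minimal sketch (synthetic data, assumed variable names) of the Kaplan-Meier estimation used for the five-year crown follow-up, via the lifelines package:

```python
# Kaplan-Meier survival estimate for crowns, with censoring for crowns that
# never developed a complication during follow-up.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)
months = rng.uniform(1, 60, 150)          # follow-up time per crown (months)
failed = rng.random(150) < 0.15           # True if a complication occurred

kmf = KaplanMeierFitter()
kmf.fit(durations=months, event_observed=failed, label="PFM crowns")
print(kmf.survival_function_.tail())      # estimated survival near 5 years
```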
Procedia PDF Downloads 208
6152 Reductive Control in the Management of Redundant Actuation
Authors: Mkhinini Maher, Knani Jilani
Abstract:
We present in this work the performance of an omnidirectional mobile robot, evaluated through its management of actuation redundancy, leading to the predictive control implemented. The distribution of the wrench over the robot's actuators, through the Moore-Penrose pseudo-inverse, corresponds to a geometric distribution of efforts. We show that the load on the vehicle wheels is not equally distributed, depending on the wheel configuration and the robot's movement; thus, the sliding threshold is not the same for the three wheels of the vehicle. We suggest exploiting the redundancy of actuation to reduce the risk of wheel sliding and thereby improve the robot's displacement accuracy. This kind of approach has previously been studied for legged robots.
Keywords: mobile robot, actuation, redundancy, omnidirectional, Moore-Penrose pseudo-inverse, reductive control
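A minimal sketch of wrench distribution via the Moore-Penrose pseudo-inverse; a four-wheel omnidirectional layout is assumed here purely for illustration, since redundancy requires more actuators than the three planar wrench components (Fx, Fy, Mz).

```python
# Distribute a desired planar body wrench over wheel actuators with the
# minimum-norm (pseudo-inverse) solution.
import numpy as np

r = 0.2                                   # wheel lever arm (m), assumed
angles = np.deg2rad([45, 135, 225, 315])  # wheel positions on the chassis
# Each column maps one wheel's tangential force to the body wrench.
J = np.vstack([-np.sin(angles), np.cos(angles), np.full(4, r)])

wrench = np.array([5.0, 0.0, 0.5])        # desired (Fx, Fy, Mz)
forces = np.linalg.pinv(J) @ wrench       # minimum-norm force distribution
print(forces)
print(J @ forces)                         # check: reproduces the wrench
```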
Procedia PDF Downloads 511
6151 The Use of the MATLAB Software as the Best Way to Recognize Penumbra Region in Radiotherapy
Authors: Alireza Shayegan, Morteza Amirabadi
Abstract:
The gamma (γ) tool was developed to quantitatively compare dose distributions, either measured or calculated. Before computing γ, the dose and distance scales of the two distributions, referred to as evaluated and reference, are re-normalized by dose and distance criteria, respectively. The re-normalization allows the dose distribution comparison to be conducted simultaneously along the dose and distance axes. Several two-dimensional images were acquired using a Scanning Liquid Ionization Chamber EPID and Extended Dose Range (EDR2) films for regular and irregular radiation fields. The raw images were then converted into two-dimensional dose maps. Translational and rotational manipulations were performed on the images using the MATLAB software. As evaluated dose distribution maps, they were then compared with the corresponding original dose maps as the reference dose maps.
Keywords: energetic electron, gamma function, penumbra, MATLAB software
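A minimal sketch of the γ evaluation in one dimension (the 2D case searches a pixel neighborhood instead of a line); the 3%/3 mm criteria and the Gaussian-like test profiles are assumptions, not the study's settings.

```python
# Brute-force 1D gamma index: for each reference point, take the minimum of
# the combined dose-difference and distance-to-agreement terms.
import numpy as np

def gamma_index(ref, eva, spacing=1.0, dose_crit=0.03, dist_crit=3.0):
    """spacing and dist_crit in mm; dose_crit as a fraction of max dose."""
    x = np.arange(ref.size) * spacing
    g = np.empty(ref.size)
    for i in range(ref.size):
        dd = (eva - ref[i]) / (dose_crit * ref.max())   # dose term
        dx = (x - x[i]) / dist_crit                      # distance term
        g[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return g                                             # pass where g <= 1

ref = np.exp(-((np.arange(50) - 25) / 8.0) ** 2)
eva = np.exp(-((np.arange(50) - 26) / 8.0) ** 2)         # 1 mm shifted field
print((gamma_index(ref, eva) <= 1).mean())               # pass rate
```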
Procedia PDF Downloads 301
6150 Statistical Analysis for Overdispersed Medical Count Data
Authors: Y. N. Phang, E. F. Loh
Abstract:
Many researchers have suggested the use of zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models for modeling over-dispersed medical count data with extra variation caused by excess zeros and unobserved heterogeneity. Studies indicate that ZIP and ZINB consistently provide a better fit than the normal Poisson and negative binomial models in modeling over-dispersed medical count data. In this study, we propose the use of zero-inflated inverse trinomial (ZIIT), zero-inflated Poisson inverse Gaussian (ZIPIG) and zero-inflated strict arcsine (ZISA) models for modeling over-dispersed medical count data. These proposed models are not widely used by many researchers, especially in the medical field. The results show that these three suggested models can serve as alternatives in modeling over-dispersed medical count data, which is supported by their application to a real-life medical data set. The inverse trinomial, Poisson inverse Gaussian, and strict arcsine distributions are discrete distributions with a cubic variance function of the mean; therefore, ZIIT, ZIPIG and ZISA are able to accommodate data with excess zeros and very heavy tails. They are recommended for modeling over-dispersed medical count data when ZIP and ZINB are inadequate.
Keywords: zero inflated, inverse trinomial distribution, Poisson inverse Gaussian distribution, strict arcsine distribution, Pearson’s goodness of fit
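For reference, the zero-inflation mechanism shared by all these models mixes a point mass at zero with a count distribution; for ZIP, with inflation probability $\pi$ and Poisson mean $\lambda$:

```latex
P(Y = 0) = \pi + (1 - \pi)\,e^{-\lambda}, \qquad
P(Y = y) = (1 - \pi)\,\frac{e^{-\lambda}\lambda^{y}}{y!}, \quad y = 1, 2, \ldots
```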
Procedia PDF Downloads 542
6149 Effectiveness of Self-Learning Module on the Academic Performance of Students in Statistics and Probability
Authors: Aneia Rajiel Busmente, Renato Gunio Jr., Jazin Mautante, Denise Joy Mendoza, Raymond Benedict Tagorio, Gabriel Uy, Natalie Quinn Valenzuela, Ma. Elayza Villa, Francine Yezha Vizcarra, Sofia Madelle Yapan, Eugene Kurt Yboa
Abstract:
The rapid spread of COVID-19 caused a dramatic change in the nation, especially in the educational system. The Department of Education was forced to adopt a practical learning platform without neglecting health: printed modular distance learning. The Philippines' K-12 curriculum includes Statistics and Probability as one of the key courses, as it offers students the knowledge to evaluate and comprehend data, yet students have difficulty with and lack understanding of the concepts of the normal distribution. The Self-Learning Module on the Normal Distribution created by the Department of Education has several problems, including too many activities, unclear illustrations, and insufficient examples of concepts, which make it difficult for learners to accomplish the module. The purpose of this study is to determine the effectiveness of a self-learning module on the academic performance of students in the subject Statistics and Probability; it will also explore students' perceptions of the quality of the created Self-Learning Module. Despite the availability of Self-Learning Modules in Statistics and Probability in the Philippines, little literature discusses their effectiveness in improving the performance of Senior High School students. In this study, a Self-Learning Module on the Normal Distribution is evaluated using a quasi-experimental design. Grade 11 STEM students from National University's Nazareth School will be the study's participants, chosen by purposive sampling; Google Forms will be utilized to find at least 100 such students. The research instrument consists of a 20-item pre- and post-test to assess participants' knowledge and performance regarding the normal distribution, and a Likert-scale survey to evaluate how the students perceived the self-learning module. Pre-test, post-test, and Likert-scale surveys will be utilized to gather data, with Jeffreys' Amazing Statistics Program (JASP) software being used for analysis.
Keywords: self-learning module, academic performance, statistics and probability, normal distribution
Procedia PDF Downloads 114
6148 Towards Automatic Calibration of In-Line Machine Processes
Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales
Abstract:
In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used 'black-box' linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a 'white-box' rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the de-tension of the material being wound in the first case, and the friction of the material passing through the die in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, goes over a traction reel and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) calibration, i.e. to find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, then two winding units wind an outer layer around the core, followed by a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) calibration to find the input values which result in a given friction on die 2. Different machine learning approaches were tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel) and MPART (rule induction with a continuous value as output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between expected and real friction on die 2); modeling the error behavior with explicative rules helps improve the overall process model. Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained for the trained models and for the calibration process. The learning step is the slowest part of the process (max. 5 minutes for this data), but this can be done offline just once. The calibration step is much faster and in under one minute obtained a precision error of less than 1×10⁻³ for both outputs. To summarize, in the present work two processes have been modeled and calibrated. Fast processing times and high precision have been achieved, and these can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve the overall process understanding. This has relevance for the quick, optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by Horizon 2020 European Union funding for Research & Innovation, Grant Agreement number 680820.
Keywords: data model, machine learning, industrial winding, calibration
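A minimal sketch of the Gaussian calibration step described above, under an assumed model interface: sample candidate inputs from per-input normal distributions and keep the candidate whose predicted output is closest to the target. The model and parameter values are hypothetical.

```python
# Random-search calibration driven by per-input Gaussian sampling.
import numpy as np

def calibrate(model, means, stds, target, n_samples=10_000, seed=0):
    """Return the sampled input vector whose model output best fits target."""
    rng = np.random.default_rng(seed)
    candidates = rng.normal(means, stds, size=(n_samples, len(means)))
    errors = np.abs(model(candidates) - target)
    best = errors.argmin()
    return candidates[best], errors[best]

# Hypothetical trained model: predicted die-2 friction from two inputs.
model = lambda x: 1.5 * x[:, 0] - 0.4 * x[:, 1] ** 2
inputs, err = calibrate(model, means=[2.0, 1.0], stds=[0.3, 0.1], target=2.5)
print(inputs, err)
```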
Procedia PDF Downloads 241
6147 Optimal Design of Step-Stress Partially Accelerated Life Test Using Multiply Censored Exponential Data with Random Removals
Authors: Showkat Ahmad Lone, Ahmadur Rahman, Ariful Islam
Abstract:
The major assumption in accelerated life tests (ALT) is that the mathematical model relating the lifetime of a test unit to the stress is known or can be assumed. In some cases, such life-stress relationships are not known and cannot be assumed, i.e. ALT data cannot be extrapolated to use conditions. In such cases, a partially accelerated life test (PALT) is more suitable, in which tested units are subjected to both normal and accelerated conditions. This study deals with estimating information about the failure times of items under step-stress partially accelerated life tests using progressive failure-censored hybrid data with random removals. The life data of the units under test are assumed to follow an exponential life distribution, and the removals from the test are assumed to have binomial distributions. Point and interval maximum likelihood estimates are obtained for the unknown distribution parameters and the tampering coefficient. An optimum test plan is developed using the D-optimality criterion. The performance of the resulting estimators of the developed model parameters is evaluated and investigated by means of a simulation algorithm.
Keywords: binomial distribution, d-optimality, multiple censoring, optimal design, partially accelerated life testing, simulation study
Procedia PDF Downloads 320
6146 Developing an ANN Model to Predict Anthropometric Dimensions Based on Real Anthropometric Database
Authors: Waleed A. Basuliman, Khalid S. AlSaleh, Mohamed Z. Ramadan
Abstract:
Applying anthropometric dimensions is considered one of the important factors when designing any human-machine system. In this study, the estimation of anthropometric dimensions has been improved by developing an artificial neural network that aims to predict the anthropometric measurements of males in Saudi Arabia. A total of 1427 Saudi males aged 6 to 60 participated in measuring twenty anthropometric dimensions. These anthropometric measurements are important for designing the majority of work and life applications in Saudi Arabia. The data were collected over 8 months from different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining fifteen dimensions were set to be the measured variables (outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was able to predict the body dimensions of the population of Saudi Arabia significantly well. The network's mean absolute percentage error (MAPE) and root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively. The accuracy of the developed neural network was evaluated by comparing the predicted outcomes with those of a multiple regression model. The ANN model performed better and resulted in excellent correlation coefficients between the predicted and actual dimensions.
Keywords: artificial neural network, anthropometric measurements, backpropagation, real anthropometric database
Procedia PDF Downloads 576
6145 Enhancing Signal Reception in a Mobile Radio Network Using Adaptive Beamforming Antenna Arrays Technology
Authors: Ugwu O. C., Mamah R. O., Awudu W. S.
Abstract:
This work is aimed at enhancing signal reception in a mobile radio network and minimizing outage probability using adaptive beamforming antenna arrays. In this research work, an empirical real-time drive measurement was done in a cellular network of Globalcom Nigeria Limited located at Ikeja, the headquarters of Lagos State, Nigeria, with reference base station number KJA 004. The empirical measurement included Received Signal Strength and Bit Error Rate, which were recorded for exact prediction of the signal strength of the network at the time of carrying out this research work. The Received Signal Strength and Bit Error Rate were measured with a spectrum-monitoring van with the help of a ray tracer at intervals of 100 meters up to 700 meters from the transmitting base station. The distance and angular location measurements from the reference network were done with the help of a Global Positioning System (GPS). The other equipment used was transmitting-equipment measurement software (Temsoftware), laptops, and log files, which showed received signal strength versus distance from the base station. The real-time experiment showed an outage of about 11%, indicating that mobile radio networks are prone to signal failure; this can be minimized using an adaptive beamforming antenna array, in terms of a significant reduction in Bit Error Rate, which implies improved performance of the mobile radio network. In addition, this work did not only include experiments done through empirical measurement but also enhanced mathematical models that were developed and implemented as reference models for accurate prediction. The proposed signal models were based on the analysis of continuous time and discrete space, and some other assumptions. These proposed enhanced models were validated using the MATLAB (version 7.6.3.35) program and compared with the conventional antenna for accuracy. The outage models were used to manage the blocked-call experience in the mobile radio network. A 20% improvement was obtained when the adaptive beamforming antenna arrays were implemented on the wireless mobile radio network.
Keywords: beamforming algorithm, adaptive beamforming, simulink, reception
Procedia PDF Downloads 41