Search results for: least squares estimation
Paper Count: 2173

1333 Enhancing Cybersecurity Protective Behaviour: Role of Information Security Competencies and Procedural Information Security Countermeasure Awareness

Authors: Norshima Humaidi, Saif Hussein Abdallah Alghazo

Abstract:

Cybersecurity threats have become a serious issue recently, and one of the causes is human error, which usually arises from carelessness, ignorance, and failure to practice cybersecurity behaviour adequately. Using data from a quantitative survey, Partial Least Squares-Structural Equation Modelling (PLS-SEM) analysis was used to determine the factors that affect cybersecurity protective behaviour (CPB). This study adapts a cybersecurity protective behaviour model by focusing on two constructs that can enhance CPB: manager's information security competencies (MISI) and procedural information security countermeasure (PCM) awareness. The theory of leadership competencies was adapted to measure users' perception of competencies among security managers/leaders in the organization. Confirmatory factor analysis (CFA) testing shows that the measurement items of each construct were individually adequate in their validity based on their factor loading values. Moreover, each construct is valid based on its parameter estimates and statistical significance. The quantitative research findings show that PCM awareness influences CPB more strongly than MISI, while MISI significantly influenced PCM awareness. This study believes that the research findings can contribute to human behaviour in IS studies and are particularly beneficial to policy makers in improving organizations' strategic plans in information security, especially in this new era. Most organizations spend time and resources to provide and establish strategic plans for information security; however, if employees are not willing to comply and practice information security behaviour appropriately, then these efforts are in vain.

Keywords: cybersecurity, protection behaviour, information security, information security competencies, countermeasure awareness

Procedia PDF Downloads 93
1332 Employment Mobility and the Effects of Wage Level and Tenure

Authors: Idit Kalisher, Israel Luski

Abstract:

One result of the growing dynamicity of labor markets in recent decades is a wider scope of employment mobility – i.e., transitions between employers, either within or between careers. Employment mobility decisions are primarily affected by the current employment status of the worker, which is reflected in wage and tenure. Using 34,328 observations from the National Longitudinal Survey of Youth 1979 (NLSY79), drawn from the USA population between 1990 and 2012, this paper aims to investigate the effects of wage and tenure on employment mobility choices, and additionally to examine the effects of other personal characteristics, individual labor market characteristics and macroeconomic factors. The estimation strategy was designed to address two challenges that arise from the combination of the model and the data: (a) endogeneity of the wage and the tenure in the choice equation; and (b) unobserved heterogeneity, as the data of this research is longitudinal. To address (a), estimation was performed using a two-stage limited dependent variable procedure (2SLDV); and to address (b), the second stage was estimated using femlogit – an implementation of the multinomial logit model with fixed effects. Among workers who have experienced at least one turnover, the wage was found to have a main effect on the career turnover likelihood of all workers, whereas the wage effect on job turnover likelihood was found to depend on individual characteristics. The wage was found to negatively affect the turnover likelihood, and the effect varied across wage levels: high-wage workers were more affected than low-wage workers. Tenure was found to have a main positive effect on both turnover types' likelihoods, though the effect was moderated by the wage. The findings also reveal that as their wage increases, women are more likely to turnover than men, and academically educated workers are more likely to turnover within careers. Minorities were found to be as likely as Caucasians to turnover post wage-increase, but less likely to turnover with each additional tenure year. The wage and tenure effects were also found to vary between careers. Differences in attitude towards money, labor market opportunities and risk aversion could explain these findings. Additionally, the likelihood of a turnover was found to be affected by previous unemployment spells, age, and other labor market and personal characteristics. The results of this research could assist policymakers as well as business owners and employers. The former may be able to encourage women and older workers' employment by considering the effects of gender and age on the probability of a turnover, and the latter may be able to assess their employees' likelihood of a turnover by considering the effects of their personal characteristics.
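
To illustrate the two-stage idea behind the 2SLDV procedure, the hedged sketch below first predicts the endogenous wage from an instrument by OLS and then feeds the prediction into a multinomial logit of the stay/job-turnover/career-turnover choice. All data, instruments and coefficients are synthetic assumptions for illustration only, and the fixed-effects (femlogit) part of the paper's estimator is not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000  # synthetic worker-year observations (illustrative only)
instrument = rng.normal(size=n)           # e.g., local labour-market conditions
tenure = rng.exponential(3.0, size=n)
wage = 2.0 + 0.8 * instrument + 0.1 * tenure + rng.normal(size=n)

# Stage 1: regress the endogenous wage on the instrument and exogenous controls
X1 = sm.add_constant(np.column_stack([instrument, tenure]))
wage_hat = sm.OLS(wage, X1).fit().fittedvalues

# Stage 2: multinomial choice (0 = stay, 1 = job turnover, 2 = career turnover)
# using the predicted wage in place of the observed, endogenous wage
utility = np.column_stack([np.zeros(n),
                           -0.5 * wage_hat + 0.1 * tenure,
                           -0.8 * wage_hat + 0.2 * tenure])
choice = np.array([rng.choice(3, p=np.exp(u) / np.exp(u).sum()) for u in utility])

X2 = sm.add_constant(np.column_stack([wage_hat, tenure]))
print(sm.MNLogit(choice, X2).fit(disp=False).summary())
```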

Keywords: employment mobility, endogeneity, femlogit, turnover

Procedia PDF Downloads 146
1331 Non-Linear Assessment of Chromatographic Lipophilicity of Selected Steroid Derivatives

Authors: Milica Karadžić, Lidija Jevrić, Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Anamarija Mandić, Aleksandar Oklješa, Andrea Nikolić, Marija Sakač, Katarina Penov Gaši

Abstract:

Using a chemometric approach, the relationships between the chromatographic lipophilicity and in silico molecular descriptors for twenty-nine selected steroid derivatives were studied. The chromatographic lipophilicity was predicted using the artificial neural networks (ANNs) method. The most important in silico molecular descriptors were selected applying stepwise selection (SS) paired with the partial least squares (PLS) method. Molecular descriptors with satisfactory variable importance in projection (VIP) values were selected for ANN modeling. The usefulness of the generated models was confirmed by detailed statistical validation. High agreement between experimental and predicted values indicated that the obtained models have good quality and high predictive ability. Global sensitivity analysis (GSA) confirmed the importance of each molecular descriptor used as an input variable. High-quality networks indicate a strong non-linear relationship between chromatographic lipophilicity and the used in silico molecular descriptors. Applying the selected molecular descriptors and the generated ANNs, a good prediction of the chromatographic lipophilicity of the studied steroid derivatives can be obtained. This article is based upon work from COST Actions (CM1306 and CA15222), supported by COST (European Cooperation in Science and Technology).
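
As an illustration of the descriptor-selection step, the sketch below computes variable importance in projection (VIP) scores from a fitted PLS model and keeps descriptors with VIP > 1, a common retention rule. The data are synthetic placeholders and the VIP formula is the standard one; the subsequent ANN modeling stage is not shown.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """VIP of each predictor from a fitted sklearn PLSRegression model."""
    T = pls.x_scores_        # (n_samples, n_components)
    W = pls.x_weights_       # (n_features, n_components)
    Q = pls.y_loadings_      # (n_targets, n_components)
    p = W.shape[0]
    # Variance of y explained by each latent component
    ssy = np.sum(T ** 2, axis=0) * np.sum(Q ** 2, axis=0)
    w_norm = W / np.linalg.norm(W, axis=0)
    return np.sqrt(p * (w_norm ** 2 @ ssy) / ssy.sum())

rng = np.random.default_rng(1)
X = rng.normal(size=(29, 10))              # 29 compounds, 10 hypothetical descriptors
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=29)
pls = PLSRegression(n_components=3).fit(X, y)
vip = vip_scores(pls)
selected = np.where(vip > 1.0)[0]          # the usual VIP > 1 retention rule
print(selected, np.round(vip, 2))
```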

Keywords: artificial neural networks, chemometrics, global sensitivity analysis, liquid chromatography, steroids

Procedia PDF Downloads 341
1330 Sustainable Practices through Organizational Internal Factors among South African Construction Firms

Authors: Oluremi I. Bamgbade, Oluwayomi Babatunde

Abstract:

Governments and nonprofits have supported sustainability as a goal of businesses, especially in the construction industry, because of its considerable impacts on the environment, economy, and society. However, measuring the degree to which an organisation is sustainable or pursuing sustainable growth can be difficult, given the clear sustainability strategy required to demonstrate commitment to the goal and to competitive advantage. This research investigated the influence of organisational culture and organisational structure in achieving sustainable construction among South African construction firms. A total of 132 consultants from the nine provinces in South Africa participated in the survey. The data collected were initially screened using SPSS (version 21), while the Partial Least Squares Structural Equation Modeling (PLS-SEM) algorithm and bootstrap techniques were employed to test the hypothesised paths. The empirical evidence supported the hypothesised direct effects of organisational culture and organisational structure on sustainable construction. Similarly, the result regarding the relationship between organisational culture and organisational structure was supported. Therefore, the construction industry can record a considerable level of construction sustainability by establishing suitable cultures and structures within construction organisations. Drawing upon organisational control theory, these findings support the view that these organisational internal factors have a strong contingent effect on sustainability adoption in construction project execution. The paper makes theoretical, practical and methodological contributions within the domain of sustainable construction, especially in the context of South Africa. Some limitations of the study are indicated, suggesting opportunities for future research.

Keywords: organisational culture, organisational structure, South African construction firms, sustainable construction

Procedia PDF Downloads 282
1329 Breast Cancer Incidence Estimation in Castilla-La Mancha (CLM) from Mortality and Survival Data

Authors: C. Romero, R. Ortega, P. Sánchez-Camacho, P. Aguilar, V. Segur, J. Ruiz, G. Gutiérrez

Abstract:

Introduction: Breast cancer is a leading cause of death in CLM (2.8% of all deaths in women and 13.8% of deaths from tumours in women). It is the tumour with the highest incidence in the CLM region, accounting for 26.1% of all tumours excluding nonmelanoma skin (Cancer Incidence in Five Continents, Volume X, IARC). Cancer registries are a good information source to estimate cancer incidence; however, the data are usually available with a lag, which makes their use difficult for health managers. By contrast, mortality and survival statistics have less delay. In order to serve resource planning and respond to this problem, a method is presented to estimate incidence from mortality and survival data. Objectives: To estimate the incidence of breast cancer by age group in CLM in the period 1991-2013, and to compare the data obtained from the model with current incidence data. Sources: Annual number of women by single year of age (National Statistics Institute). Annual number of deaths from all causes and from breast cancer (Mortality Registry CLM). Breast cancer relative survival probability (EUROCARE, Spanish registries data). Methods: A Weibull parametric survival model is obtained from EUROCARE data. From the survival model, the population data and the mortality data, the Mortality and Incidence Analysis MODel (MIAMOD) regression model is obtained to estimate the incidence of cancer by age (1991-2013). Results: The resulting model is logit[I(x, t)] = const + age1·x + age2·x² + coh1·(t − x) + coh2·(t − x)², where I(x, t) is the incidence at age x in year t, and the parameter estimates are: const (constant term) = -7.03; age1 = 3.31; age2 = -1.10; coh1 = 0.61; and coh2 = -0.12. It is estimated that 662 cases of breast cancer were diagnosed in CLM in 1991 (81.51 per 100,000 women) and 1,152 cases in 2013 (112.41 per 100,000 women), representing an increase of 40.7% in the gross incidence rate (1.9% per year). The annual average increases in incidence by age were: 2.07% in women aged 25-44 years, 1.01% (45-54 years), 1.11% (55-64 years) and 1.24% (65-74 years). The cancer registries in Spain that send data to IARC declared an average annual incidence rate of 98.6 cases per 100,000 women for 2003-2007; our model yields an incidence of 100.7 cases per 100,000 women. Conclusions: A sharp and steady increase in the incidence of breast cancer in the period 1991-2013 is observed. The increase was seen in all age groups considered, although it seems more pronounced in young women (25-44 years). With this method, a good estimate of the incidence can be obtained.
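
The fitted age-cohort model can be evaluated directly. The sketch below codes the reported coefficients and returns incidence as the inverse logit of the polynomial. Note that the abstract does not state how age and cohort are scaled before entering the model, so standardized covariates are assumed here and the outputs are illustrative only.

```python
import numpy as np

# Reported MIAMOD parameter estimates (from the abstract)
const, age1, age2, coh1, coh2 = -7.03, 3.31, -1.10, 0.61, -0.12

def incidence(x, t):
    """Incidence I(x, t) as the inverse logit of the age-cohort polynomial.

    The abstract does not state how age x and cohort (t - x) are scaled
    before entering the polynomial; standardized covariates are assumed
    here, so the numbers below are purely illustrative.
    """
    c = t - x  # birth cohort
    eta = const + age1 * x + age2 * x**2 + coh1 * c + coh2 * c**2
    return 1.0 / (1.0 + np.exp(-eta))

for x in np.linspace(-2, 2, 5):           # standardized age values (assumption)
    print(f"age={x:+.1f}  I(x, t=0): {incidence(x, 0.0):.2e}")
```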

Keywords: breast cancer, incidence, cancer registries, castilla-la mancha

Procedia PDF Downloads 309
1328 Chaotic Sequence Noise Reduction and Chaotic Recognition Rate Improvement Based on Improved Local Geometric Projection

Authors: Rubin Dan, Xingcai Wang, Ziyang Chen

Abstract:

A chaotic time series noise reduction method based on the fusion of the local projection method, wavelet transform, and particle swarm algorithm (referred to as the LW-PSO method) is proposed to address the problem of false recognition due to noise in the recognition process of chaotic time series containing noise. The method first uses phase space reconstruction to recover the original dynamical system characteristics and removes the noise subspace by selecting the neighborhood radius; then it uses wavelet transform to remove D1-D3 high-frequency components to maximize the retention of signal information while least-squares optimization is performed by the particle swarm algorithm. The Lorenz system containing 30% Gaussian white noise is simulated and verified, and the phase space, SNR value, RMSE value, and K value of the 0-1 test method before and after noise reduction of the Schreiber method, local projection method, wavelet transform method, and LW-PSO method are compared and analyzed, which proves that the LW-PSO method has a better noise reduction effect compared with the other three common methods. The method is also applied to the classical system to evaluate the noise reduction effect of the four methods and the original system identification effect, which further verifies the superiority of the LW-PSO method. Finally, it is applied to the Chengdu rainfall chaotic sequence for research, and the results prove that the LW-PSO method can effectively reduce the noise and improve the chaos recognition rate.
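
The wavelet stage of the pipeline, discarding the D1-D3 high-frequency detail components, can be sketched in a few lines with PyWavelets. The signal, noise level and wavelet choice below are assumptions; the local projection and particle swarm optimization stages of the LW-PSO method are not reproduced.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.linspace(0, 8 * np.pi, 4096)
clean = np.sin(t) + 0.5 * np.sin(3.3 * t)        # stand-in for a chaotic series
noisy = clean + 0.3 * rng.normal(size=t.size)    # roughly 30% additive noise

# Decompose and discard the three finest detail levels (D1-D3), as in the
# wavelet step of the LW-PSO pipeline
coeffs = pywt.wavedec(noisy, "db4", level=5)
for d in (-1, -2, -3):                           # D1, D2, D3
    coeffs[d] = np.zeros_like(coeffs[d])
denoised = pywt.waverec(coeffs, "db4")[: t.size]

snr = 10 * np.log10(np.sum(clean**2) / np.sum((denoised - clean) ** 2))
print(f"SNR after removing D1-D3: {snr:.1f} dB")
```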

Keywords: Schreiber noise reduction, wavelet transform, particle swarm optimization, 0-1 test method, chaotic sequence denoising

Procedia PDF Downloads 191
1327 Efficacy of Agrobacterium Tumefaciens as a Possible Entomopathogenic Agent

Authors: Fouzia Qamar, Shahida Hasnain

Abstract:

The objective of the present study was to evaluate the possible role of Agrobacterium tumefaciens as a possible insect biocontrol agent. Pests selected for the present challenge were adult males of Periplaneta americana and last instar larvae of Pieris brassicae and Spodoptera litura. Different ranges of bacterial doses were selected and tested to score the mortalities of the insects after 24 hours, for the lethal dose estimation studies. Mode of application for the inoculation of the bacteria, was the microinjection technique. The evaluation of the possible entomopathogenic carrying attribute of bacterial Ti plasmid, led to the conclusion that the loss of plasmid was associated with the loss of virulence against target insects.

Keywords: agrobacterium tumefaciens, toxicity assessment, biopesticidal attribute, entomopathogenic agent

Procedia PDF Downloads 376
1326 Investigating the Impact of Job-Related and Organisational Factors on Employee Engagement: An Emotionally Relevant Approach Based on Psychological Climate and Organisational Emotional Intelligence (OEI)

Authors: Nuno Da Camara, Victor Dulewicz, Malcolm Higgs

Abstract:

Although theorists have described the critical role of emotional cognition of the workplace environment as an antecedent to employee engagement, empirical research on the impact of emotional cognition on employee engagement is limited. Previous researchers have typically provided evidence of the link between emotional cognition of the workplace environment and workplace attitudes such as job satisfaction and organisational commitment. This study therefore aims to investigate the impact of emotional cognition of the job, role, leader and organisation domains of the work environment, as represented by measures of psychological climate and organizational emotional intelligence (OEI), on employee engagement. The research is based on a quantitative cross-sectional survey of employees in a UK charity organization (n=174). The research instruments applied include the psychological climate scale, the organisational emotional intelligence questionnaire (OEIQ) and the Utrecht Work Engagement Scale (UWES). The data were analysed using hierarchical regression and partial least squares (PLS) analytical techniques. The results of the study show that both psychological climate and OEI, which represent emotional cognition of the job, role, leader and organisation domains in the workplace, are significant drivers of employee engagement. In particular, the study found that a sense of contribution and challenge at work are the strongest drivers of vigour, dedication and absorption, and highlights the importance of emotionally relevant approaches in furthering our understanding of workplace engagement.
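
The hierarchical regression strategy, entering predictor blocks in sequence and reading off the increment in explained variance, can be sketched as follows. The variables and effect sizes are synthetic placeholders; only the sample size matches the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 174  # matches the study's sample size; the data here are synthetic
controls = rng.normal(size=(n, 2))          # e.g., age, tenure (assumed)
climate = rng.normal(size=(n, 4))           # psychological climate domains
oei = rng.normal(size=(n, 1))               # organisational EI score
w = np.array([0.4, 0.3, 0.2, 0.1])
engagement = climate @ w + 0.3 * oei[:, 0] + rng.normal(size=n)

# Hierarchical regression: enter predictor blocks in steps and track the
# increment in R-squared attributable to each block.
blocks = {"controls": controls, "+ climate": climate, "+ OEI": oei}
X, prev_r2 = np.empty((n, 0)), 0.0
for name, block in blocks.items():
    X = np.column_stack([X, block])
    r2 = sm.OLS(engagement, sm.add_constant(X)).fit().rsquared
    print(f"{name:10s}  R2={r2:.3f}  delta-R2={r2 - prev_r2:.3f}")
    prev_r2 = r2
```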

Keywords: employee engagement, organisational emotional intelligence, psychological climate, workplace attitudes

Procedia PDF Downloads 503
1325 Kinetic Studies on CO₂ Gasification of Low and High Ash Indian Coals in Context of Underground Coal Gasification

Authors: Geeta Kumari, Prabu Vairakannu

Abstract:

Underground coal gasification (UCG) technology is an efficient and economic in-situ clean coal technology, which converts unmineable coals into gases of useful calorific value. This technology avoids ash disposal, coal mining, and storage problems. CO₂ gas can be a potential gasifying medium for UCG. CO₂ is a greenhouse gas, and the liberation of this gas to the atmosphere from thermal power plants leads to global warming. Hence, the capture and reutilization of CO₂ gas are crucial for clean energy production. However, the reactivity of high ash Indian coals with CO₂ needs to be assessed. In the present study, two varieties of Indian coals (low ash and high ash) are used for thermogravimetric analyses (TGA). Two low ash north east Indian coals (LAC) and a typical high ash Indian coal (HAC) were procured from the coal mines of India. Low ash coal with 9% ash (LAC-1) and 4% ash (LAC-2) and high ash coal (HAC) with 42% ash are used for the study. TGA studies are carried out to evaluate the activation energy for pyrolysis and gasification of coal under N₂ and CO₂ atmospheres. The Coats and Redfern method is used to estimate the activation energy of coal under different temperature regimes, assuming a volumetric reaction model. The inherent properties of coals play a major role in their reactivity. The results show that the activation energy decreases with the decrease in the inherent percentage of coal ash due to the ash layer hindrance. A reverse trend was observed with volatile matter: high volatile matter content leads to a low estimated activation energy. It was observed that the activation energy under a CO₂ atmosphere at 400-600°C is lower than under an inert N₂ atmosphere; in this temperature range, a 15-23% reduction in the activation energy under CO₂ is estimated. This shows the reactivity of CO₂ gas with the higher hydrocarbons of the coal volatile matter. The reactivity of CO₂ with the volatile matter of coal might occur through the dry reforming reaction, in which CO₂ reacts with higher hydrocarbons and aromatics of the tar content. The observed trend of Ea in the temperature ranges of 150-200˚C and 400-600˚C is HAC > LAC-1 > LAC-2 in both N₂ and CO₂ atmospheres. In the temperature range of 850-1000˚C, higher activation energies are estimated than in the range of 400-600°C. Above 800°C, char gasification through the Boudouard reaction progresses under a CO₂ atmosphere, and the activation energy increases by 8-20 kJ/mol during char gasification compared to volatile matter pyrolysis between 400-600°C. The overall activation energy of the coals in the temperature range of 30-1000˚C is higher in the N₂ atmosphere than in the CO₂ atmosphere. It can be concluded that higher hydrocarbons such as tar effectively undergo cracking and reforming reactions in the presence of CO₂. Thus, CO₂ gas is beneficial for the production of high calorific value syngas using high ash Indian coals.
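
For the first-order (volumetric) model assumed in the study, the Coats and Redfern linearization reduces activation energy estimation to a least-squares line: ln[g(α)/T²] plotted against 1/T has slope -Ea/R. A minimal sketch with synthetic TGA data follows; the heating rate, pre-exponential factor and Ea are assumed values, not the study's measurements.

```python
import numpy as np

R = 8.314  # J/(mol K)

def coats_redfern_ea(T, alpha):
    """Estimate activation energy (J/mol) from TGA conversion data.

    Uses the first-order (volumetric) form g(alpha) = -ln(1 - alpha):
    ln[g(alpha)/T^2] = ln[A*R/(beta*Ea)] - Ea/(R*T), so a least-squares
    line of ln[g/T^2] against 1/T has slope -Ea/R.
    """
    y = np.log(-np.log(1.0 - alpha) / T**2)
    slope, _ = np.polyfit(1.0 / T, y, 1)
    return -slope * R

# Synthetic conversion curve over 400-600 C (673-873 K), illustrative only
T = np.linspace(673, 873, 50)
Ea_true, A, beta = 90e3, 1e3, 10 / 60          # 10 K/min heating rate
g = (A * R * T**2) / (beta * Ea_true) * np.exp(-Ea_true / (R * T))
alpha = 1.0 - np.exp(-g)
print(f"Recovered Ea: {coats_redfern_ea(T, alpha) / 1e3:.1f} kJ/mol")
```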

Keywords: clean coal technology, CO₂ gasification, activation energy, underground coal gasification

Procedia PDF Downloads 166
1324 The Normal-Generalized Hyperbolic Secant Distribution: Properties and Applications

Authors: Hazem M. Al-Mofleh

Abstract:

In this paper, a new four-parameter univariate continuous distribution called the Normal-Generalized Hyperbolic Secant Distribution (NGHS) is defined and studied. Some general and structural distributional properties are investigated and discussed, including: central and non-central n-th moments and incomplete moments; quantile and generating functions; the hazard function; Rényi and Shannon entropies; shapes (skewed right, skewed left, and symmetric); modality regions (unimodal and bimodal); and maximum likelihood (MLE) estimators for the parameters. Finally, two real data sets are used to demonstrate empirically the flexibility and strength of the new distribution.
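
The NGHS density itself is not reproduced in the abstract, so as a simplified stand-in the sketch below carries out maximum likelihood estimation for the two-parameter (location-scale) hyperbolic secant distribution that underlies it, both by direct optimization and with SciPy's built-in fit.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(4)
data = stats.hypsecant.rvs(loc=2.0, scale=1.5, size=500, random_state=rng)

def nll(theta):
    """Negative log-likelihood, with the scale log-parametrized to stay positive."""
    loc, log_scale = theta
    return -np.sum(stats.hypsecant.logpdf(data, loc=loc, scale=np.exp(log_scale)))

res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
loc_hat, scale_hat = res.x[0], np.exp(res.x[1])
print(f"MLE: loc={loc_hat:.3f}, scale={scale_hat:.3f}")
print(stats.hypsecant.fit(data))   # SciPy's built-in fit gives the same estimates
```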

Keywords: bimodality, estimation, hazard function, moments, Shannon’s entropy

Procedia PDF Downloads 342
1323 Localization of Radioactive Sources with a Mobile Radiation Detection System using Profit Functions

Authors: Luís Miguel Cabeça Marques, Alberto Manuel Martinho Vale, José Pedro Miragaia Trancoso Vaz, Ana Sofia Baptista Fernandes, Rui Alexandre de Barros Coito, Tiago Miguel Prates da Costa

Abstract:

The detection and localization of hidden radioactive sources are of significant importance in countering the illicit traffic of Special Nuclear Materials and other radioactive sources and materials. Radiation portal monitors are commonly used at airports, seaports, and international land borders for inspecting cargo and vehicles. However, this equipment can be expensive and is not available at all checkpoints. Consequently, the localization of SNM and other radioactive sources often relies on handheld equipment, which can be time-consuming. The current study presents the advantages of real-time analysis of gamma-ray count rate data from a mobile radiation detection system based on simulated data and field tests. The incorporation of profit functions and decision criteria to optimize the detection system's path significantly enhances the radiation field information and reduces survey time during cargo inspection. For source position estimation, a maximum likelihood estimation algorithm is employed, and confidence intervals are derived using the Fisher information. The study also explores the impact of uncertainties, baselines, and thresholds on the performance of the profit function. The proposed detection system, utilizing a plastic scintillator with silicon photomultiplier sensors, boasts several benefits, including cost-effectiveness, high geometric efficiency, compactness, and lightweight design. This versatility allows for seamless integration into any mobile platform, be it air, land, maritime, or hybrid, and it can also serve as a handheld device. Furthermore, integration of the detection system into drones, particularly multirotors, and its affordability enable the automation of source search and substantial reduction in survey time, particularly when deploying a fleet of drones. While the primary focus is on inspecting maritime container cargo, the methodologies explored in this research can be applied to the inspection of other infrastructures, such as nuclear facilities or vehicles.
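
A hedged sketch of the position estimation step: Poisson count rates following an inverse-square law are maximized over source position and strength, and the numerical Hessian of the negative log-likelihood (the observed Fisher information) yields confidence intervals. The geometry, source strength and background rate are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
src, strength, bkg = np.array([12.0, 5.0]), 5.0e4, 20.0   # invented scenario
path = np.column_stack([np.linspace(0, 25, 60), np.full(60, 2.0)])  # survey track

def expected_counts(pos, s):
    d2 = np.sum((path - pos) ** 2, axis=1)
    return bkg + s / d2                  # inverse-square law, no shielding assumed

counts = rng.poisson(expected_counts(src, strength))

def nll(theta):                          # Poisson negative log-likelihood
    lam = expected_counts(theta[:2], np.exp(theta[2]))  # log-parametrized strength
    return np.sum(lam - counts * np.log(lam))

res = minimize(nll, x0=[10.0, 0.0, np.log(1e4)], method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 1e-8, "fatol": 1e-8})
x_hat = res.x

# Observed Fisher information = numerical Hessian of the NLL at the optimum
k, steps = len(x_hat), 1e-4 * np.maximum(np.abs(res.x), 1.0)
H = np.zeros((k, k))
for i in range(k):
    for j in range(k):
        ei = np.zeros(k); ei[i] = steps[i]
        ej = np.zeros(k); ej[j] = steps[j]
        H[i, j] = (nll(x_hat + ei + ej) - nll(x_hat + ei - ej)
                   - nll(x_hat - ei + ej) + nll(x_hat - ei - ej)) / (4 * steps[i] * steps[j])
se = np.sqrt(np.diag(np.linalg.inv(H)))
print(f"source at ({x_hat[0]:.2f}, {x_hat[1]:.2f}) m, "
      f"95% CI half-widths ({1.96 * se[0]:.2f}, {1.96 * se[1]:.2f}) m")
```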

Keywords: plastic scintillators, profit functions, path planning, gamma-ray detection, source localization, mobile radiation detection system, security scenario

Procedia PDF Downloads 108
1322 Optimal Image Representation for Linear Canonical Transform Multiplexing

Authors: Navdeep Goel, Salvador Gabarda

Abstract:

Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted in a multiplexing time-frequency domain channel. Multiplexing involves packing signals together whose representations are compact in the working domain. In order to optimize transmission resources, each 4×4 pixel block of the image is transformed by a suitable polynomial approximation into a minimal number of coefficients. Using fewer than 4×4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares + gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. Polynomial coefficients have been later encoded and handled for generating chirps at a target rate of about two chirps per 4×4 pixel block and then submitted to a transmission multiplexing operation in the time-frequency domain.
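
The block-approximation step can be sketched as an ordinary least-squares problem over a Vandermonde-style design matrix. The basis ordering, coefficient count and test block below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
yy, xx = np.mgrid[0:4, 0:4] / 3.0                  # normalized block coordinates
block = 60 + 35 * xx + 20 * yy + rng.normal(0, 2, (4, 4))  # smooth-ish test block

def block_poly_approx(block, n_coeffs=6):
    """Least-squares fit of a 4x4 block with the first n_coeffs polynomial
    basis terms; keeping fewer than 16 coefficients is what saves bandwidth."""
    terms = [np.ones_like(xx), xx, yy, xx * yy, xx**2, yy**2, xx**2 * yy, xx * yy**2]
    V = np.column_stack([t.ravel() for t in terms[:n_coeffs]])  # Vandermonde-style
    coeffs, *_ = np.linalg.lstsq(V, block.ravel(), rcond=None)
    return coeffs, (V @ coeffs).reshape(4, 4)

coeffs, approx = block_poly_approx(block, n_coeffs=6)
mse = np.mean((block - approx) ** 2)
print(f"PSNR with 6 of 16 coefficients: {10 * np.log10(255**2 / mse):.1f} dB")
```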

Keywords: chirp signals, image multiplexing, image transformation, linear canonical transform, polynomial approximation

Procedia PDF Downloads 408
1321 Optimizing Protection of Medieval Glass Mosaic

Authors: J. Valach, S. Pospisil, S. Kuznecov

Abstract:

The paper deals with experimental estimation of the future environmental load on the medieval mosaic of the Last Judgement at the entrance to St. Vitus Cathedral at Prague Castle. The mosaic suffers from seasonal changes in weather patterns, as well as from rain and its acidity, deposition of dust and soot particles from polluted air, and freeze-thaw cycles. These phenomena influence the state of the mosaic, whose elements, the tesserae, are mostly made from glass prone to weathering. To determine the best future maintenance procedure, the relation between various weather scenarios and their effects on the mosaic was investigated. At the same time, a local method for evaluation of protective coatings was developed. Together, both methods will contribute to better care for the mosaic and to visitors' aesthetic experience.

Keywords: environmental load, cultural heritage, glass mosaic, protection

Procedia PDF Downloads 275
1320 Estimation of Delay Due to Loading–Unloading of Passengers by Buses and Reduction of Number of Lanes at Selected Intersections in Dhaka City

Authors: Sumit Roy, A. Uddin

Abstract:

One of the significant reasons for increased delay time at intersections under heterogeneous traffic conditions is a sudden reduction of road capacity. In this study, the delay due to this sudden capacity reduction is estimated. Two intersections in Dhaka city were brought into the study: the Kakrail intersection and the SAARC Foara intersection. At the Kakrail intersection, a sudden reduction of capacity is seen at three downstream legs of the intersection, caused by buses slowing down or stopping for loading and unloading of passengers. At the SAARC Foara intersection, a sudden reduction of capacity was seen at two downstream legs: at one leg due to loading and unloading of buses, and at the other due to both loading and unloading of buses and a reduction in the number of lanes. With these considerations, the delay due to intentional stopping or slowing down of buses and reduction of the number of lanes at these two intersections is estimated. The delay was calculated by two approaches. The first approach comes from the concept of shock waves in traffic streams: the delay is calculated by determining the flow, density, and speed before and after the sudden capacity reduction. The second approach comes from the deterministic analysis of queues: the delay is calculated by determining the volume, capacity and reduced capacity of the road. After determining the delay from these two approaches, the results were compared. For this study, video of each of the two intersections was recorded for one hour at the evening peak. Necessary geometric data were also taken to determine speed, flow, density, and other parameters. The delay was calculated for one hour with one-hour data at both intersections. At the Kakrail intersection, the per-hour delays for the Kakrail circle leg were 5.79 and 7.15 minutes, for the Shantinagar cross intersection leg 13.02 and 15.65 minutes, and for the Paltan T intersection leg 3 and 1.3 minutes for the first and second approaches, respectively. At the SAARC Foara intersection, the delay at the Shahbag leg was only due to intentional stopping or slowing down of buses, at 3.2 and 3 minutes for the two approaches. For the Karwan Bazar leg, the delays for buses by both approaches were 5 and 7.5 minutes, and for reduction of the number of lanes 2 and 1.78 minutes, respectively. Measuring the delay per hour for the Kakrail leg at Kakrail circle, it is seen that, under the first estimation approach, intentional stoppage and lowering of speed by buses contribute 26.24% of the total delay at Kakrail circle. If loading and unloading of buses near intersections is forbidden, and other measures for loading and unloading of passengers are established far enough from the intersections, then the delay at intersections can be reduced at a significant scale and the performance of the intersections can be enhanced.
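
Both delay calculations can be summarized compactly. In the shock wave approach the wave speed follows from the flow-density states on either side of the bottleneck; in the deterministic queue approach the delay is the triangular area between cumulative arrival and departure curves. The sketch below uses invented traffic states, not the study's field data:

```python
def shockwave_speed(q1, k1, q2, k2):
    """Speed (km/h) of the shock wave between two traffic states
    (flow q in veh/h, density k in veh/km)."""
    return (q2 - q1) / (k2 - k1)

def deterministic_queue_delay(demand, reduced_cap, full_cap, t_bottleneck):
    """Total delay (veh-h) from a temporary capacity drop, as the area
    between the cumulative arrival and departure curves."""
    growth = demand - reduced_cap               # queue growth rate (veh/h)
    queue_max = growth * t_bottleneck
    t_clear = queue_max / (full_cap - demand)   # recovery after the buses leave
    return 0.5 * queue_max * (t_bottleneck + t_clear)

# Illustrative numbers only (veh/h, veh/km, h)
w = shockwave_speed(q1=1800, k1=30, q2=900, k2=90)
print(f"shock wave speed: {w:.1f} km/h (negative = moves upstream)")
delay = deterministic_queue_delay(demand=1600, reduced_cap=1000,
                                  full_cap=1900, t_bottleneck=0.25)
print(f"total delay: {delay:.1f} veh-h over the analysis hour")
```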

Keywords: delay, deterministic queue analysis, shock wave, passenger loading-unloading

Procedia PDF Downloads 172
1319 Adaptive Target Detection of High-Range-Resolution Radar in Non-Gaussian Clutter

Authors: Lina Pan

Abstract:

In non-Gaussian clutter modeled as a spherically invariant random vector, and in cases where the estimated covariance matrix may become singular, the adaptive target detection of high-range-resolution radar is addressed. Firstly, the restricted maximum likelihood (RML) estimates of the unknown covariance matrix and scatterer amplitudes are derived for non-Gaussian clutter. Then the RML estimate of the texture is obtained. Finally, a novel detector is devised. It is shown that, without secondary data, the proposed detector outperforms the existing Kelly binary integrator.

Keywords: non-Gaussian clutter, covariance matrix estimation, target detection, maximum likelihood

Procedia PDF Downloads 462
1318 Estimation of Scour Using a Coupled Computational Fluid Dynamics and Discrete Element Model

Authors: Zeinab Yazdanfar, Dilan Robert, Daniel Lester, S. Setunge

Abstract:

Scour has been identified as the most common threat to bridge stability worldwide. Traditionally, scour around bridge piers is calculated using empirical approaches that have considerable limitations and are difficult to generalize. The multi-physics nature of scouring, which involves turbulent flow, soil mechanics and solid-fluid interactions, cannot be captured by simple empirical equations developed from limited laboratory data. These limitations can be overcome by direct numerical modeling of the coupled hydro-mechanical scour process, which provides a robust prediction of bridge scour and valuable insights into the scour process. Several numerical models have been proposed in the literature for bridge scour estimation, including Eulerian flow models and coupled Euler-Lagrange models incorporating an empirical sediment transport description. However, the contact forces between particles and the flow-particle interaction have not been taken into consideration. Incorporating collisional and frictional forces between soil particles as well as the effect of flow-driven forces on particles will facilitate accurate modeling of the complex nature of scour. In this study, a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) has been developed to simulate the scour process that directly models the hydro-mechanical interactions between the sediment particles and the flowing water. This approach obviates the need for an empirical description, as the fundamental fluid-particle and particle-particle interactions are fully resolved. The sediment bed is simulated as a dense pack of particles and the frictional and collisional forces between particles are calculated, whilst the turbulent fluid flow is modeled using a Reynolds-Averaged Navier-Stokes (RANS) approach. The CFD-DEM model is validated against experimental data in order to assess its reliability. The modeling results reveal the criticality of particle impact on the assessment of scour depth which, to the authors' best knowledge, has not been considered in previous studies. The results of this study open new perspectives on scour depth and time assessment, which is the key to managing the failure risk of bridge infrastructure.

Keywords: bridge scour, discrete element method, CFD-DEM model, multi-phase model

Procedia PDF Downloads 129
1317 Understanding Mathematics Achievements among U. S. Middle School Students: A Bayesian Multilevel Modeling Analysis with Informative Priors

Authors: Jing Yuan, Hongwei Yang

Abstract:

This paper aims to understand U.S. middle school students' mathematics achievements by examining relevant student and school-level predictors. Through a variance component analysis, the study first identifies evidence supporting the use of multilevel modeling. Then, a multilevel analysis is performed under Bayesian statistical inference, where prior information is incorporated into the modeling process. During the analysis, independent variables are entered sequentially in the order of theoretical importance to create a hierarchy of models. By evaluating each model using Bayesian fit indices, a best-fit and most parsimonious model is selected, under which Bayesian statistical inference is performed for the purpose of result interpretation and discussion. The primary dataset for Bayesian modeling is derived from the Program for International Student Assessment (PISA) in 2012, with a secondary PISA dataset from 2003 analyzed under the traditional ordinary least squares method to provide the information needed to specify informative priors for a subset of the model parameters. The dependent variable is a composite measure of mathematics literacy, calculated from an exploratory factor analysis of all five PISA 2012 mathematics achievement plausible values, for which multiple pieces of evidence support data unidimensionality. The independent variables include demographic variables and content-specific variables: mathematics efficacy, teacher-student ratio, proportion of girls in the school, etc. Finally, the entire analysis is performed using the MCMCpack and MCMCglmm packages in R.
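
The first step, a variance-component analysis establishing that outcomes cluster by school, can be illustrated with a frequentist null model; a non-negligible intraclass correlation (ICC) justifies multilevel modeling. The study itself works in a Bayesian framework with the R packages MCMCpack and MCMCglmm; the sketch below uses synthetic data and Python's statsmodels as a stand-in.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic two-level data: students nested in schools (illustrative only)
rng = np.random.default_rng(7)
n_schools, n_students = 40, 25
school = np.repeat(np.arange(n_schools), n_students)
u = rng.normal(0, 2.0, n_schools)               # school-level random effect
math = 500 + u[school] + rng.normal(0, 8.0, school.size)
df = pd.DataFrame({"math": math, "school": school})

# Unconditional (null) model: multilevel modeling is warranted when the
# intraclass correlation is non-negligible.
m = smf.mixedlm("math ~ 1", df, groups=df["school"]).fit()
var_school = m.cov_re.iloc[0, 0]                # between-school variance
var_resid = m.scale                             # within-school variance
icc = var_school / (var_school + var_resid)
print(f"ICC = {icc:.3f}")                       # expect roughly 4/(4+64) = 0.059
```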

Keywords: Bayesian multilevel modeling, mathematics education, PISA, multilevel

Procedia PDF Downloads 331
1316 Parameters Estimation of Power Function Distribution Based on Selective Order Statistics

Authors: Moh'd Alodat

Abstract:

In this paper, we discuss the power function distribution and derive the maximum likelihood estimator of its parameter as well as the reliability parameter. We derive the large sample properties of the estimators based on the selective order statistic scheme. We conduct simulation studies to investigate the significance of the selective order statistic scheme in our setup and to compare the efficiency of the new proposed estimators.
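
For the standard power function distribution f(x; a) = a·x^(a-1) on (0, 1), the full-sample MLE has the closed form a_hat = -n / sum(ln x_i), and by invariance it plugs into the reliability R(x0) = 1 - x0^a. The sketch below verifies this baseline on simulated data; the paper's selective order statistics scheme is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)
a_true, n = 2.5, 400
x = rng.random(n) ** (1.0 / a_true)   # inverse-CDF sample: F(x) = x^a on (0, 1)

# Closed-form MLE from the full sample: a_hat = -n / sum(log x)
a_hat = -n / np.log(x).sum()

# Invariance of the MLE: plug a_hat into the reliability R(x0) = 1 - x0^a
x0 = 0.8
print(f"a_hat = {a_hat:.3f}, R({x0}) = {1 - x0**a_hat:.3f}")
```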

Keywords: fisher information, maximum likelihood estimator, power function distribution, ranked set sampling, selective order statistics sampling

Procedia PDF Downloads 458
1315 Evaluation of Environmental Disclosures on Financial Performance of Quoted Industrial Goods Manufacturing Sectors in Nigeria (2011 – 2020)

Authors: C. C. Chima, C. J. M. Anumaka

Abstract:

This study evaluates environmental disclosures on the financial performance of quoted industrial goods manufacturing sectors in Nigeria. The study employed a quasi-experimental research design to establish the relationship that exists between the environmental disclosure index and financial performance indices (return on assets - ROA, return on equity - ROE, and earnings per share - EPS). A purposeful sampling technique was employed to select five (5) industrial goods manufacturing sectors quoted on the Nigerian Stock Exchange. Secondary data covering the 2011 to 2020 financial years were extracted from annual reports of the study sectors using a content analysis method. The data were analyzed using SPSS, Version 23. The Panel Ordinary Least Squares (OLS) regression method was employed in estimating the unknown parameters in the study's regression model after conducting diagnostic and preliminary tests to ascertain that the data set is reliable and not misleading. Empirical results show that there is an insignificant negative relationship between the environmental disclosure index (EDI) and the performance indices (ROA, ROE, and EPS) of the industrial goods manufacturing sectors in Nigeria. The study recommends that: only relevant information which increases the performance indices should appear on the disclosure checklist; environmental disclosure practices should be country-specific; and company executives in Nigeria should increase and monitor the level of investment (resources, time, and energy) in order to ensure that environmental disclosure has a significant impact on financial performance.

Keywords: earnings per share, environmental disclosures, return on assets, return on equity

Procedia PDF Downloads 82
1314 An Investigation on Hot-Spot Temperature Calculation Methods of Power Transformers

Authors: Ahmet Y. Arabul, Ibrahim Senol, Fatma Keskin Arabul, Mustafa G. Aydeniz, Yasemin Oner, Gokhan Kalkan

Abstract:

In the standards IEC 60076-2 and IEC 60076-7, three different hot-spot temperature estimation methods are suggested. In this study, the algorithms used in hot-spot temperature calculations are analyzed by comparing them with the results of an experimental set-up built around a Transformer Monitoring System (TMS) in use. In the tested system, the TMS uses only the top-oil temperature and the load ratio for the hot-spot temperature calculation, together with constants taken from the standards' tables of agreed values. During the tests, it emerged that the hot-spot temperature calculation method performs only a simple calculation and does not use all the other significant variables that could affect the hot-spot temperature.
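
A minimal sketch of the kind of calculation described, assuming the simple IEC 60076-7 style steady-state relation between top-oil temperature, load ratio and hot-spot rise; the constants H, g_r and y are transformer-specific placeholders, not values from the tested system.

```python
def hotspot_from_top_oil(theta_top_oil, K, H=1.3, g_r=14.5, y=1.3):
    """Steady-state hot-spot estimate from measured top-oil temperature and
    load ratio K, following the simple IEC 60076-7 style relation
        theta_h = theta_top_oil + H * g_r * K**y.
    H, g_r and y are transformer-specific constants taken from the
    standard's tables or heat-run tests; the values above are placeholders.
    """
    return theta_top_oil + H * g_r * K**y

print(hotspot_from_top_oil(theta_top_oil=65.0, K=0.9))  # ~81 C, illustrative
```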

Keywords: Hot-spot temperature, monitoring system, power transformer, smart grid

Procedia PDF Downloads 568
1313 Residual Life Estimation of K-out-of-N Cold Standby System

Authors: Qian Zhao, Shi-Qi Liu, Bo Guo, Zhi-Jun Cheng, Xiao-Yue Wu

Abstract:

Cold standby redundancy is considered to be an effective mechanism for improving system reliability and is widely used in industrial engineering. However, because of the complexity of the reliability structure, there is little literature studying the residual life of cold standby systems consisting of complex components. In this paper, a simulation method is presented to predict the residual life of a k-out-of-n cold standby system. In practical cases, failure information of a system is either unknown, partly unknown or completely known. Our proposed method is designed to deal with these three scenarios, respectively. Differences between the procedures are analyzed. Finally, numerical examples are used to validate the proposed simulation method.
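
A hedged sketch of the completely-known-failure-information case: component lives are drawn from an assumed Weibull model, k units run with n - k cold spares switched in on failure, and the mean residual life at a given age is estimated by conditional Monte Carlo. Distribution, parameters and ages are illustrative assumptions.

```python
import numpy as np

def system_life(k, n, rng, scale=100.0, shape=1.5):
    """One realization of a k-out-of-n cold standby system: k units run in
    parallel, n - k cold spares replace failures instantly, and the system
    fails when a unit fails with no spare left. Weibull lives are assumed."""
    life = lambda m: scale * rng.weibull(shape, m)
    fail_at = sorted(life(k))             # absolute failure times of active units
    for _ in range(n - k):                # consume the cold spares one by one
        t = fail_at.pop(0)                # earliest failure is replaced
        fail_at.append(t + life(1)[0])    # spare starts aging only when switched in
        fail_at.sort()
    return fail_at[0]                     # next failure brings the system down

def residual_life(k, n, age, n_sims=20000, seed=9):
    """Mean residual life E[T - age | T > age] by conditional Monte Carlo."""
    rng = np.random.default_rng(seed)
    t = np.array([system_life(k, n, rng) for _ in range(n_sims)])
    surv = t[t > age]
    return (surv - age).mean(), surv.size / n_sims

mrl, p_surv = residual_life(k=2, n=4, age=80.0)
print(f"P(T > 80) = {p_surv:.3f}, mean residual life at age 80 = {mrl:.1f}")
```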

Keywords: cold standby system, k-out-of-n, residual life, simulation sampling

Procedia PDF Downloads 396
1312 Estimation of Antiurolithiatic Activity of a Biochemical Medicine, Magnesia phosphorica, in Ethylene Glycol-Induced Nephrolithiasis in Wistar Rats by Urine Analysis, Biochemical, Histopathological, and Electron Microscopic Studies

Authors: Priti S. Tidke, Chandragouda R. Patil

Abstract:

The present study was designed to investigate the effect of Magnesia phosphorica, a biochemic medicine, on urine screening, biochemical, histopathological, and electron microscopic parameters in ethylene glycol-induced nephrolithiasis in rats. Male Wistar albino rats were divided into six groups and were orally administered saline once daily (IR-sham and IR-control) or Magnesia phosphorica 100 mg/kg twice daily for 24 days. The effect of various dilutions of biochemic Mag phos (3x, 6x, 30x) on urine output was determined by comparing the urine volume collected by keeping individual animals in metabolic cages. Calcium oxalate urolithiasis and hyperoxaluria in male Wistar rats were induced by oral administration of 0.75% ethylene glycol daily for 24 days. Simultaneous administration of biochemic Mag phos 3x, 6x, or 30x (100 mg/kg p.o. twice a day) along with ethylene glycol significantly decreased calcium oxalate, urea, creatinine, calcium, magnesium, chloride, phosphorus, albumin, and alkaline phosphatase content in urine compared with the vehicle-treated control group. After the completion of the treatment period, the animals were sacrificed, and the kidneys were removed and subjected to microscopic examination for possible stone formation. Histological examination of kidneys treated with biochemic Mag phos (3x, 6x, 30x; 100 mg/kg, p.o.) along with ethylene glycol showed inhibited growth of calculi and a reduced number of stones in the kidney compared with the control group. Biochemic Mag phos at 3x dilution and its crude equivalent also showed potent diuretic and antiurolithiatic activity in ethylene glycol-induced urolithiasis. A significant decrease in the weight of stones was observed after treatment in animals which received biochemic Mag phos at 3x dilution and its crude equivalent in comparison with control groups. From this study, it can be proposed that the 3x dilution of biochemic Mag phos exhibits a significant inhibitory effect on crystal growth, with improvement of kidney function, and substantiates claims on the biological activity of the twelve tissue remedies, which can be proved scientifically through laboratory animal studies.

Keywords: Mag phos, Magnesia phosphorica, biochemic medicine, urolithiasis, kidney stone, ethylene glycol

Procedia PDF Downloads 423
1311 Development of a Model Based on Wavelets and Matrices for the Treatment of Weakly Singular Partial Integro-Differential Equations

Authors: Somveer Singh, Vineet Kumar Singh

Abstract:

We present a new model based on viscoelasticity for non-Newtonian fluids. We use a matrix-formulated algorithm to approximate solutions of a class of partial integro-differential equations with given initial and boundary conditions. Some numerical results are presented to simplify application of the operational matrix formulation and reduce the computational cost. Convergence analysis, error estimation and numerical stability of the method are also investigated. Finally, some test examples are given to demonstrate the accuracy and efficiency of the proposed method.

Keywords: Legendre Wavelets, operational matrices, partial integro-differential equation, viscoelasticity

Procedia PDF Downloads 330
1310 Hansen Solubility Parameter from Surface Measurements

Authors: Neveen AlQasas, Daniel Johnson

Abstract:

Membranes for water treatment are an established technology that attracts great attention due to its simplicity and cost effectiveness. However, membranes in operation suffer from the adverse effect of membrane fouling. Bio-fouling is a phenomenon that occurs at the water-membrane interface and is a dynamic process initiated by the adsorption of dissolved organic material, including biomacromolecules, on the membrane surface. After initiation, attachment of microorganisms occurs, followed by biofilm growth. The biofilm blocks the pores of the membrane and consequently reduces the water flux. Moreover, the presence of a fouling layer can have a substantial impact on the membrane separation properties. Understanding the mechanism of the initiation phase of biofouling is a key point in eliminating biofouling on membrane surfaces. The adhesion and attachment of different fouling materials are affected by the surface properties of the membrane materials. Therefore, the surface properties of different polymeric materials were studied in terms of their surface energies and Hansen solubility parameters (HSP). The difference between the combined HSP parameters (the HSP distance) allows prediction of the affinity of two materials to each other. The possibility of measuring the HSP of different polymer films via surface measurements, such as contact angle, has been thoroughly investigated. Knowing the HSP of a membrane material and the HSP of a specific foulant facilitates estimation of the HSP distance between the two, and therefore the strength of attachment to the surface. Contact angle measurements using fourteen different solvents on five different polymeric films were carried out using the sessile drop method. Solvents were ranked as good or bad using different ranking methods, and the ranking was used to calculate the HSP of each polymeric film. Results clearly indicate the absence of a direct relation between the contact angle values of each film and the HSP distance between each polymer film and the solvents used. Therefore, estimating HSP via contact angle alone is not sufficient. However, it was found that if the surface tensions and viscosities of the used solvents are taken into account in the analysis of the contact angle values, a prediction of the HSP from contact angle measurements is possible. This was carried out via training of a neural network model. The trained neural network model has three inputs: contact angle value, surface tension and viscosity of the solvent used. The model is able to predict the HSP distance between the used solvent and the tested polymer (material). The HSP distance prediction is further used to estimate the total and individual HSP parameters of each tested material. The results showed an accuracy of about 90% for all five studied films.
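
The described three-input network can be sketched with a small multilayer perceptron. All training data below are synthetic placeholders; the paper's measured contact angles and the true mapping to HSP distance are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in data: (contact angle [deg], surface tension [mN/m],
# viscosity [mPa s]) -> HSP distance. The fitted relationship is illustrative.
rng = np.random.default_rng(10)
n = 14 * 5                                       # 14 solvents x 5 polymer films
X = np.column_stack([rng.uniform(10, 110, n),    # contact angle
                     rng.uniform(18, 73, n),     # surface tension
                     rng.uniform(0.3, 60, n)])   # viscosity
hsp_distance = (0.15 * X[:, 0] - 0.1 * X[:, 1] + 0.02 * X[:, 2]
                + rng.normal(0, 0.5, n))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X, hsp_distance)
print(f"training R^2 = {model.score(X, hsp_distance):.2f}")
```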

Keywords: surface characterization, hansen solubility parameter estimation, contact angle measurements, artificial neural network model, surface measurements

Procedia PDF Downloads 87
1309 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures

Authors: Rui Teixeira, Alan O’Connor, Maria Nogal

Abstract:

The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology. The POT methodology allows a more refined fit of the statistical distribution by truncating the data at a minimum value given by a predefined threshold u. For mathematically approximating the tail of the empirical statistical distribution, the Generalised Pareto distribution is widely used, although in the case of exceedances of significant wave data (H_s) the 2-parameter Weibull and the Exponential distribution, the latter a special case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto, despite the existence of practical cases where it is applied, is not universally recognized as the adequate solution to model exceedances over a certain threshold u, and references that treat it as a secondary solution in the case of significant wave data can be identified in the literature. In this framework, the current study tackles the discussion of the application of statistical models to characterize exceedances of wave data. A comparison of the Generalised Pareto, the 2-parameter Weibull and the Exponential distribution is presented for different values of the threshold u. Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully, and in each particular case one of the statistical models mentioned fits the data better than the others. Depending on the value of the threshold u, different results are obtained. Other variables of the fit, such as the number of points and the estimation of the model parameters, are analyzed, and the respective conclusions are drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, due to its growing importance, should be addressed carefully for an efficient estimation of very low-occurrence events.
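
A minimal POT comparison in the spirit of the study: exceedances above a high quantile threshold are extracted, and the three candidate distributions are fitted by maximum likelihood with the location fixed at zero so their log-likelihoods can be compared directly. The wave record below is simulated, not the Irish buoy data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
hs = stats.weibull_min.rvs(1.6, scale=1.8, size=20000, random_state=rng)  # synthetic Hs

u = np.quantile(hs, 0.95)               # threshold choice drives the whole analysis
exc = hs[hs > u] - u                    # peaks-over-threshold exceedances

candidates = {"Generalised Pareto": stats.genpareto,
              "2-parameter Weibull": stats.weibull_min,
              "Exponential": stats.expon}
for name, dist in candidates.items():
    params = dist.fit(exc, floc=0)      # location fixed at the threshold
    loglik = np.sum(dist.logpdf(exc, *params))
    print(f"{name:20s} log-likelihood = {loglik:8.1f}  params = {np.round(params, 3)}")
```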

Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data

Procedia PDF Downloads 267
1308 Outlier Detection in Stock Market Data using Tukey Method and Wavelet Transform

Authors: Sadam Alwadi

Abstract:

Outlier values are a problem that frequently occurs in the data observation or recording process, so the need for data imputation has become an essential matter. This work makes use of the methods described in prior work to detect outlier values in a collection of stock market data. In order to implement the detection and find solutions that may be helpful for investors, real closing price data were obtained from the Amman Stock Exchange (ASE). The Tukey and Maximal Overlap Discrete Wavelet Transform (MODWT) methods are used to detect and impute the outlier values.
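
The Tukey half of the procedure is a simple fence rule on the interquartile range, sketched below on a synthetic price series; the MODWT screening stage is omitted, as MODWT is not part of the standard Python scientific stack.

```python
import numpy as np

def tukey_outliers(x, k=1.5):
    """Flag values outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

rng = np.random.default_rng(12)
prices = np.cumsum(rng.normal(0, 0.5, 250)) + 100   # synthetic closing prices
prices[[40, 180]] += [15, -12]                      # injected spikes

mask = tukey_outliers(np.diff(prices))              # screen daily changes
print("outlier days:", np.where(mask)[0] + 1)
# A simple imputation would replace a flagged price with its neighbours' mean.
```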

Keywords: outlier values, imputation, stock market data, detecting, estimation

Procedia PDF Downloads 79
1307 Markov-Chain-Based Optimal Filtering and Smoothing

Authors: Garry A. Einicke, Langford B. White

Abstract:

This paper describes an optimum filter and smoother for recovering a Markov process message from noisy measurements. The developments follow from an equivalence between a state space model and a hidden Markov chain. The ensuing filter and smoother employ transition probability matrices and approximate probability distribution vectors. The properties of the optimum solutions are retained, namely, the estimates are unbiased and minimize the variance of the output estimation error, provided that the assumed parameter set is correct. Methods for estimating unknown parameters from noisy measurements are discussed. Signal recovery examples are described in which performance benefits are demonstrated at an increased calculation cost.
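
The filtering recursion over transition probability matrices and probability distribution vectors is the standard hidden-Markov forward algorithm; a minimal sketch with an assumed two-state chain and binary measurements follows.

```python
import numpy as np

def hmm_filter(A, C, y):
    """Forward filter for a hidden Markov chain.

    A[i, j] = P(x_k = j | x_{k-1} = i) is the transition probability matrix,
    C[i, m] = P(y_k = m | x_k = i) the observation likelihoods, and the
    returned rows are the filtered distributions P(x_k | y_1..y_k).
    """
    n = A.shape[0]
    p = np.full(n, 1.0 / n)                 # uniform prior
    out = []
    for obs in y:
        p = (p @ A) * C[:, obs]             # predict, then correct
        p /= p.sum()                        # normalize (avoids underflow)
        out.append(p)
    return np.array(out)

A = np.array([[0.95, 0.05],
              [0.10, 0.90]])                # slowly switching 2-state chain
C = np.array([[0.8, 0.2],                   # noisy binary measurements
              [0.3, 0.7]])
y = [0, 0, 1, 1, 1, 0, 1, 1]
print(np.round(hmm_filter(A, C, y), 3))
```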

Keywords: optimal filtering, smoothing, Markov chains

Procedia PDF Downloads 314
1306 Digital Forgery Detection by Signal Noise Inconsistency

Authors: Bo Liu, Chi-Man Pun

Abstract:

A novel technique for digital forgery detection by signal noise inconsistency is proposed in this paper. A forged area spliced from another picture contains features which may be inconsistent with the rest of the image, and the noise pattern and level are possible factors revealing such inconsistency. To detect such noise discrepancies, the test picture is initially segmented into small pieces. The noise pattern and level of each segment are then estimated using various filters. The noise features constructed in this step are utilized in an energy-based graph cut to expose the forged area in the final step. Experimental results show that our method provides a good illustration of regions with noise inconsistency in various scenarios.
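
The per-segment noise estimation step can be sketched with a fast filter-based estimator; Immerkaer's Laplacian-difference method is used here as one plausible choice, since the paper's specific filters are not named in the abstract. Blocks whose estimated noise level deviates strongly from their neighbours are splicing candidates.

```python
import numpy as np
from scipy.ndimage import convolve

def noise_sigma(block):
    """Fast noise standard-deviation estimate (Immerkaer's method)."""
    M = np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]], dtype=float)
    h, w = block.shape
    return (np.sqrt(np.pi / 2) / (6.0 * (w - 2) * (h - 2))
            * np.abs(convolve(block.astype(float), M))[1:-1, 1:-1].sum())

rng = np.random.default_rng(13)
img = rng.normal(128.0, 5.0, size=(128, 128))         # "authentic" area, sigma ~ 5
img[32:64, 32:64] += rng.normal(0.0, 12.0, (32, 32))  # "spliced" area, extra noise

# Segment into 32x32 blocks and compare the per-block noise levels
sigmas = np.array([[noise_sigma(img[r:r+32, c:c+32]) for c in range(0, 128, 32)]
                   for r in range(0, 128, 32)])
print(np.round(sigmas, 1))
```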

Keywords: forgery detection, splicing forgery, noise estimation, noise

Procedia PDF Downloads 454
1305 A Comparison of Smoothing Spline Method and Penalized Spline Regression Method Based on Nonparametric Regression Model

Authors: Autcha Araveeporn

Abstract:

This paper presents a study of a nonparametric regression model estimated by a smoothing spline method and by a penalized spline regression method, and compares the techniques used for estimation and prediction under the nonparametric regression model. We tried both methods on crude oil prices in dollars per barrel and on the Stock Exchange of Thailand (SET) index. According to the results, it is concluded that the smoothing spline method performs better than the penalized spline regression method.
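
Of the two competitors, the smoothing spline is the easier to sketch: SciPy's UnivariateSpline minimizes a penalized least-squares criterion whose smoothing parameter s controls the fidelity-roughness trade-off. The series below is synthetic; a penalized regression spline would instead fit a fixed B-spline basis with a difference penalty.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(14)
x = np.linspace(0, 10, 200)                        # e.g., trading days (scaled)
y = 60 + 8 * np.sin(x) + rng.normal(0, 2, x.size)  # synthetic "oil price" series

# Smoothing spline: s -> 0 interpolates the data, larger s gives a smoother fit
spl = UnivariateSpline(x, y, k=3, s=len(x) * 4.0)
y_hat = spl(x)
print(f"RMSE of the smoothed fit: {np.sqrt(np.mean((y_hat - y) ** 2)):.2f}")
```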

Keywords: nonparametric regression model, penalized spline regression method, smoothing spline method, Stock Exchange of Thailand (SET)

Procedia PDF Downloads 433
1304 The Role of Urban Development Patterns for Mitigating Extreme Urban Heat: The Case Study of Doha, Qatar

Authors: Yasuyo Makido, Vivek Shandas, David J. Sailor, M. Salim Ferwati

Abstract:

Mitigating extreme urban heat is challenging in a desert climate such as that of Doha, Qatar, since outdoor daytime temperatures are often too high for the human body to tolerate. Recent studies demonstrate that cities in arid and semiarid areas can exhibit 'urban cool islands' - urban areas that are cooler than the surrounding desert. However, the variation of temperatures with time of day, and the factors leading to temperature change, remain in question. To address these questions, we examined the spatial and temporal variation of air temperature in Doha, Qatar by conducting multiple vehicle-based local temperature observations. We also employed three statistical approaches to model surface temperatures using relevant predictors: (1) Ordinary Least Squares, (2) Regression Tree Analysis and (3) Random Forest, for three time periods. Although the most important determinant factors varied by day and time, distance to the coast was the significant determinant at midday. A 70%/30% holdout method was used to create a testing dataset to validate the results through Pearson's correlation coefficient. The Pearson analysis suggests that the Random Forest model predicts the surface temperatures more accurately than the other methods. We conclude with recommendations about the types of development patterns that show the greatest potential for reducing extreme heat in arid climates.
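
A hedged sketch of the validation scheme described: a Random Forest regressor trained on plausible (here synthetic) predictors and evaluated on a 70%/30% holdout with Pearson's r. Predictor names and effect sizes are invented for illustration.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the traverse data: predictors such as distance to
# the coast, vegetation fraction and building height -> air temperature.
rng = np.random.default_rng(15)
n = 800
X = np.column_stack([rng.uniform(0, 20, n),      # distance to coast (km)
                     rng.uniform(0, 0.5, n),     # vegetation fraction
                     rng.uniform(2, 40, n)])     # building height (m)
temp = 44 + 0.25 * X[:, 0] - 6 * X[:, 1] - 0.05 * X[:, 2] + rng.normal(0, 0.8, n)

# 70%/30% holdout, as in the study, validated with Pearson's r
X_tr, X_te, y_tr, y_te = train_test_split(X, temp, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
r, _ = pearsonr(y_te, rf.predict(X_te))
print(f"holdout Pearson r = {r:.3f}")
print(dict(zip(["coast", "veg", "height"], np.round(rf.feature_importances_, 2))))
```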

Keywords: desert cities, tree-structure regression model, urban cool island, vehicle temperature traverse

Procedia PDF Downloads 389