Search results for: least square estimation (LSE)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3381

2661 Neural Network Supervisory Proportional-Integral-Derivative Control of the Pressurized Water Reactor Core Power Load Following Operation

Authors: Derjew Ayele Ejigu, Houde Song, Xiaojing Liu

Abstract:

This work presents a particle swarm optimization trained neural network (PSO-NN) supervisory proportional-integral-derivative (PID) control method to regulate the pressurized water reactor (PWR) core power for safe operation. The proposed control approach is implemented on the transfer function of the PWR core, which is computed from the state-space model. The PWR core state-space model is built from the neutronics, thermal-hydraulics, and reactivity models using perturbation around the equilibrium value. The proposed control approach computes the control rod speed needed to maneuver the core power to track the reference in a closed-loop scheme. The particle swarm optimization (PSO) algorithm is used to train the neural network (NN) and to tune the PID simultaneously. The controller performance is examined using the integral absolute error, integral time absolute error, integral square error, and integral time square error functions, and the stability of the system is analyzed using the Bode diagram. The simulation results indicate that the controller controls and tracks the load power more effectively and smoothly than the PSO-PID control technique. This study should benefit the design of supervisory controllers for control applications in nuclear engineering research.
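
As a minimal sketch of the tuning idea only: the snippet below uses a basic PSO to pick PID gains that minimize the integral absolute error on a toy first-order plant. The plant time constant, gain bounds, and swarm settings are illustrative assumptions; the paper's actual PWR transfer function and the NN supervisory layer are not reproduced here.

```python
import numpy as np

def simulate_pid(gains, setpoint=1.0, dt=0.1, steps=300):
    """Closed-loop step response of a PID on a first-order lag plant.
    The plant y' = (-y + u)/tau is only a stand-in for the PWR core
    transfer function derived from the state-space model."""
    kp, ki, kd = gains
    tau = 5.0                          # assumed plant time constant
    y, integ, prev_err, iae = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u) / tau
        prev_err = err
        iae += abs(err) * dt           # integral absolute error criterion
    return iae

def pso(cost, lo, hi, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Basic particle swarm optimization over box-bounded gains."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, pcost.min()

gains, best_iae = pso(simulate_pid, lo=np.zeros(3), hi=np.array([10.0, 5.0, 5.0]))
print("PSO-tuned PID gains (Kp, Ki, Kd):", gains, "IAE:", best_iae)
```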

Keywords: machine learning, neural network, pressurized water reactor, supervisory controller

Procedia PDF Downloads 143
2660 Presuppositions and Implicatures in Four Selected Speeches of Osama Bin Laden's Legitimisation of 'Jihad'

Authors: Sawsan Al-Saaidi, Ghayth K. Shaker Al-Shaibani

Abstract:

This paper investigates certain linguistic properties of four selected speeches by Al-Qaeda's former leader Osama bin Laden, who legitimated the use of jihad by Muslims in various countries when he was alive. The researchers adopt van Dijk's (2009; 1998) Socio-Cognitive approach and Ideological Square theory, respectively. The Socio-Cognitive approach revolves around the various cognitive, socio-political, and discursive aspects that can be found in political discourse such as Osama bin Laden's. Political discourse can be defined in terms of textual properties and contextual models. The ideological square refers to positive self-presentation and negative other-presentation, which help to enhance the textual and contextual analyses. Among the most significant properties in Osama bin Laden's discourse are the use of presuppositions and implicatures, which are based on background knowledge and contextual models as well. The paper concludes that Osama bin Laden used a number of manipulative strategies which augmented and embellished the use of 'jihad' in order to develop a more effective discourse for his audience. In addition, the findings reveal that bin Laden used different implicit and embedded interpretations of different topics, accepted as taken-for-granted truths by him, to legitimate jihad against his enemies. There are many presuppositions in the speeches analysed that result in particular common-sense assumptions and a world-view about the selected speeches. More importantly, the assumptions in the analysed speeches help consolidate the ideological analysis in terms of in-group and out-group members.

Keywords: Al-Qaeda, cognition, critical discourse analysis, Osama Bin Laden, jihad, implicature, legitimisation, presupposition, political discourse

Procedia PDF Downloads 225
2659 Poverty Dynamics in Thailand: Evidence from Household Panel Data

Authors: Nattabhorn Leamcharaskul

Abstract:

This study aims to examine the determining factors of poverty dynamics in Thailand using panel data on 3,567 households over 2007-2017. Four estimation techniques are employed to analyze the situation of poverty across households and time periods: the multinomial logit model, the sequential logit model, the quantile regression model, and the difference-in-differences model. Households are categorized by their experiences into five groups, namely chronically poor, falling into poverty, re-entering poverty, exiting from poverty, and never poor households. Estimation results emphasize the effects of demographic and socioeconomic factors as well as unexpected events on the economic status of a household. Remittances are found to have a positive impact on a household's economic status, in that they lower the probability of falling into or remaining trapped in poverty while increasing the probability of exiting from poverty. In addition, receiving a secondary source of household income not only raises the probability of being a never poor household, but also significantly increases household income per capita among the chronically poor and falling into poverty households. Public work programs are recommended as an important tool to relieve household financial burden and uncertainty and consequently increase households' chances of escaping poverty.
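
For a flavor of the first technique: a hedged multinomial logit sketch with statsmodels on synthetic household data. The covariate names and the data-generating rule are hypothetical, not the paper's panel.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 600
# Hypothetical household covariates echoing the paper's determinants.
df = pd.DataFrame({
    "remittance": rng.exponential(1.0, n),
    "second_income": rng.integers(0, 2, n).astype(float),
    "hh_size": rng.integers(1, 9, n).astype(float),
})
# Assign one of five poverty states (0 = never poor ... 4 = chronically poor)
# with probabilities that worsen as the protective covariates fall.
risk = -0.8 * df["remittance"] - 0.7 * df["second_income"] + 0.15 * df["hh_size"]
p_poor = 1 / (1 + np.exp(-risk))
df["state"] = np.minimum((p_poor * 5).astype(int) + rng.integers(0, 2, n), 4)

X = sm.add_constant(df[["remittance", "second_income", "hh_size"]])
res = sm.MNLogit(df["state"], X).fit(disp=False)
print(res.params)   # one coefficient column per non-base poverty state
```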

Keywords: difference in difference, dynamic, multinomial logit model, panel data, poverty, quantile regression, remittance, sequential logit model, Thailand, transfer

Procedia PDF Downloads 97
2658 Three Foci of Trust as Potential Mediators in the Association Between Job Insecurity and Dynamic Organizational Capability: A Quantitative, Exploratory Study

Authors: Marita Heyns

Abstract:

Job insecurity is a distressing phenomenon which has far-reaching consequences for both employees and their organizations. Previously, much attention has been given to the link between job insecurity and individual-level performance outcomes, while less is known about how subjectively perceived job insecurity might transfer beyond the individual level to affect performance of the organization on an aggregated level. Research on how employees' fear of job loss might affect the organization's ability to respond proactively to volatility and drastic change through its capabilities of sensing, seizing, and reconfiguring appears to be practically non-existent. Equally little is known about the potential underlying mechanisms through which job insecurity might affect the dynamic capabilities of an organization. This study examines how job insecurity might affect dynamic organizational capability through trust as an underlying process. More specifically, it considers the simultaneous roles of trust at an impersonal (organizational) level as well as trust at an interpersonal level (in leaders and co-workers) as potential underlying mechanisms through which job insecurity might affect the organization's dynamic capability to respond to opportunities and imminent, drastic change. A quantitative research approach and a stratified random sampling technique enabled the collection of data among 314 managers at four different plant sites of a large South African steel manufacturing organization undergoing dramatic changes. To assess the study hypotheses, the following statistical procedures were employed: structural equation modelling was performed in Mplus to evaluate the measurement and structural models. The chi-square test for absolute fit as well as alternative fit indices such as the Comparative Fit Index, the Tucker-Lewis Index, the Root Mean Square Error of Approximation, and the Standardized Root Mean Square Residual were used as indicators of model fit. Composite reliabilities were calculated to evaluate the reliability of the factors. Finally, interaction effects were tested using PROCESS and the construction of two-sided 95% confidence intervals. The findings indicate that job insecurity had a lower-than-expected detrimental effect on evaluations of the organization's dynamic capability through the buffering effects of trust in the organization and in its leaders, respectively. In contrast, trust in colleagues did not seem to have any noticeable facilitative effect. The study proposes that both job insecurity and dynamic capability can be managed more effectively by also paying attention to factors that could promote trust in the organization and its leaders; some practical recommendations are given in this regard.
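
The study was run in Mplus; as an open-source stand-in only, the sketch below fits a structurally similar model (one trust focus mediating insecurity and capability) with the semopy package on synthetic data and prints its fit statistics. The model syntax, indicator names, and generating coefficients are all assumptions, not the paper's model.

```python
import numpy as np
import pandas as pd
from semopy import Model, calc_stats

rng = np.random.default_rng(13)
n = 314                                  # sample size from the study
ji = rng.normal(0, 1, n)                 # job insecurity (observed)
trust = -0.5 * ji + rng.normal(0, 1, n)  # assumed latent mediator
cap = 0.4 * trust - 0.1 * ji + rng.normal(0, 1, n)

df = pd.DataFrame({"ji": ji})
for i in range(3):                       # three indicators per latent factor
    df[f"t{i}"] = 0.8 * trust + rng.normal(0, 0.6, n)
    df[f"c{i}"] = 0.8 * cap + rng.normal(0, 0.6, n)

desc = """
trust =~ t0 + t1 + t2
capability =~ c0 + c1 + c2
trust ~ ji
capability ~ trust + ji
"""
model = Model(desc)
model.fit(df)
print(calc_stats(model))   # chi-square, CFI, TLI, RMSEA, etc.
```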

Keywords: dynamic organizational capability, impersonal trust, interpersonal trust, job insecurity

Procedia PDF Downloads 73
2657 Designing a 3D Model: Castle of the Mall-Dern

Authors: Nanadcha Sinjindawong

Abstract:

This article discusses the design process of a community mall called Castle of the Mall-dern. The concept behind this mall is to combine elements of a medieval castle with modern architecture. The author aims to create a building that fits into its surroundings while also giving users the atmosphere of an older era. The total area used for the mall is 4,000 square meters over three floors: the first floor is 1,500 square meters, the second floor is 1,750 square meters, and the third floor is 750 square meters. Research Aim: The aim of this research is to design a community mall that sells period clothes and accessories, and to combine sustainable architectural design with the ideas of historical architecture in an urban area with convenient transportation. Methodology: The research utilizes qualitative research methods in architectural design. The process begins with calculating the given area and dividing it into different zones. The author then sketches and draws the plan of each floor, adding the necessary rooms based on the floor areas mentioned earlier. The program SketchUp is used to create an online 3D model of the community mall, and a physical model is built for presentation purposes on A1 paper, explaining all the details. Findings: The result of this research is a community mall with various amenities. The first floor includes retail shops, clothing stores, a food center, and a service zone; additionally, there is an indoor garden with a fountain and a tree for relaxation. The second and third floors feature a void in the middle, with a few stores, cafes, restaurants, and studios on the second floor. The third floor houses the administration and security control room, as well as a community gathering area designed as a public library with a café inside. Theoretical Importance: This research contributes to the field of sustainable architectural design by combining historical architectural ideas with modern elements. It showcases the potential for creating buildings that blend historical aesthetics with contemporary functionality. Data Collection and Analysis Procedures: The data for this research were collected through a combination of area calculation, sketching, and building a 3D model. The analysis involves evaluating the design based on the allocated area, zoning, and functional requirements for a community mall. Question Addressed: The research addresses the question of how to design a community mall themed on the medieval and Victorian eras. It explores how to combine sustainable architectural design principles with historical aesthetics to create a functional and visually appealing space. Conclusion: In conclusion, this research successfully designs a community mall called Castle of the Mall-dern that incorporates elements of medieval and Victorian architecture. The building encompasses various zones, including retail shops, restaurants, community gathering areas, and service zones. It also features an interior garden and a public library within the mall. The research contributes to the field of sustainable architectural design by showcasing the potential for combining historical architectural ideas with modern elements in an urban setting.

Keywords: 3D model, community mall, modern architecture, medieval architecture

Procedia PDF Downloads 83
2656 Instant Location Detection of Objects Moving at High Speed in C-OTDR Monitoring Systems

Authors: Andrey V. Timofeev

Abstract:

A practical and efficient approach is suggested for estimating the instantaneous location of high-speed objects in C-OTDR monitoring systems. For super-dynamic objects (trains, cars), it is difficult to obtain an adequate estimate of the instantaneous object localization because of estimation lag. In other words, reliable estimation of a monitored object's coordinates requires time for the C-OTDR system to collect observations, and only once the required sample volume has been collected can the final decision be issued. This is contrary to the requirements of many real applications. For example, rail traffic management systems need localization data for dynamic objects in real time. The way to solve this problem is to use a set of statistically independent parameters of C-OTDR signals to obtain the most reliable solution in real time. Parameters of this type can be called 'signaling parameters' (SPs). For each C-OTDR channel there are several SPs which carry information about the instantaneous localization of dynamic objects. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources but are unstable, while SPs that are very stable tend, as a rule, to be insensitive. This report describes a method for co-processing SPs designed to obtain the most effective dynamic object localization estimates within the C-OTDR monitoring system framework.
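
The abstract does not specify the co-processing rule, so the following is only one hedged illustration of the general idea of fusing statistically independent parameter estimates: inverse-variance weighting, in which stable (low-variance) SPs dominate while noisy-but-sensitive SPs still contribute. All numbers are invented.

```python
import numpy as np

# Hypothetical per-SP position estimates (km along the cable) for one event,
# each with its own noise variance reflecting that SP's stability.
sp_estimates = np.array([12.41, 12.55, 12.38, 12.47])   # km, illustrative
sp_variances = np.array([0.02, 0.10, 0.01, 0.05])

# Inverse-variance weighted fusion of the independent estimates.
weights = 1.0 / sp_variances
fused = np.sum(weights * sp_estimates) / np.sum(weights)
fused_var = 1.0 / np.sum(weights)
print(f"fused location: {fused:.3f} km +/- {np.sqrt(fused_var):.3f} km")
```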

Keywords: C-OTDR-system, co-processing of signaling parameters, high-speed objects localization, multichannel monitoring systems

Procedia PDF Downloads 456
2655 Developing Allometric Equations for More Accurate Aboveground Biomass and Carbon Estimation in Secondary Evergreen Forests, Thailand

Authors: Titinan Pothong, Prasit Wangpakapattanawong, Stephen Elliott

Abstract:

Shifting cultivation is an indigenous agricultural practice among upland people and has long been one of the major land-use systems in Southeast Asia. As a result, fallows and secondary forests have come to cover a large part of the region. However, they are increasingly being replaced by monocultures, such as corn cultivation. This is believed to be a main driver of deforestation and forest degradation, and one of the reasons behind the recurring winter smog crisis in Thailand and around Southeast Asia. Accurate biomass estimation of trees is important for quantifying valuable carbon stocks and changes to these stocks in the case of land-use change. However, Thailand presently lacks proper tools and optimal equations to quantify its carbon stocks, especially for secondary evergreen forests, including fallow areas after shifting cultivation and smaller trees with a diameter at breast height (DBH) of less than 5 cm. New allometric equations for estimating biomass are urgently needed to accurately estimate and manage carbon storage in tropical secondary forests. This study established new equations using a destructive method at three study sites: an approximately 50-year-old secondary forest, a 4-year-old fallow, and a 7-year-old fallow. Tree biomass was collected by harvesting 136 individual trees (including coppiced trees) from 23 species, with DBH ranging from 1 to 31 cm. Oven-dried samples were sent for carbon analysis. Wood density was calculated from disk samples and samples collected with an increment borer from 79 species, including 35 species currently missing from the Global Wood Density database. Several models were developed, showing that aboveground biomass (AGB) is strongly related to DBH, height (H), and wood density (WD). Including WD in the model was found to improve the accuracy of the AGB estimation. This study provides insights for reforestation management and can be used to prepare baseline data on Thailand's carbon stocks for REDD+ and other carbon trading schemes, which may provide monetary incentives to stop illegal logging and deforestation for monoculture.
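
A common allometric form relating AGB to DBH, H, and WD is the log-log regression ln(AGB) = a + b ln(DBH² H WD). The sketch below fits that form to invented harvest numbers; the real coefficients must come from the 136 destructively sampled trees, not from this toy fit.

```python
import numpy as np

# Hypothetical harvest data: DBH (cm), height H (m), wood density WD (g/cm^3),
# oven-dry aboveground biomass AGB (kg).
dbh = np.array([2.1, 4.5, 8.0, 12.3, 18.9, 25.4, 31.0])
h   = np.array([3.0, 5.2, 8.5, 11.0, 15.8, 19.2, 22.5])
wd  = np.array([0.45, 0.52, 0.48, 0.55, 0.60, 0.58, 0.62])
agb = np.array([0.9, 4.2, 18.5, 55.0, 180.0, 420.0, 760.0])

# Fit ln(AGB) = a + b * ln(DBH^2 * H * WD) by ordinary least squares.
x = np.log(dbh**2 * h * wd)
b, a = np.polyfit(x, np.log(agb), 1)
print(f"ln(AGB) = {a:.3f} + {b:.3f} * ln(D2HWD)")
pred = np.exp(a + b * x)   # back-transformed biomass estimates (kg)
```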

Keywords: aboveground biomass, allometric equation, carbon stock, secondary forest

Procedia PDF Downloads 273
2654 The Effect of Fetal Movement Counting on Maternal Antenatal Attachment

Authors: Esra Güney, Tuba Uçar

Abstract:

Aim: This study was conducted to determine the effects of fetal movement counting on maternal antenatal attachment. Material and Method: This research used a true experimental design with pre-test/post-test control groups. The study population consists of pregnant women registered at six Family Health Centers located in the central Malatya districts of Yeşilyurt and Battalgazi. A power analysis indicated a required sample size of at least 55 pregnant women per group (55 experimental, 55 control). The data were collected using a Personal Information Form and the Maternal Antenatal Attachment Scale (MAAS) between July 2015 and June 2016. After the pre-test data collection, the researchers trained the pregnant women in the experimental group in fetal movement counting; no intervention was applied to the control group. Post-test data for both groups were collected after four weeks. Data were evaluated with percentages, arithmetic means, the chi-square test, and dependent and independent groups t-tests. Results: The pre-test mean MAAS total score was 70.78±6.78 in the experimental group and 71.58±7.54 in the control group, with no significant difference between the two groups (p>0.05). The post-test mean MAAS total score was 78.41±6.65 in the experimental group and 72.25±7.16 in the control group, a statistically significant difference between the groups (p<0.05). Conclusion: Fetal movement counting was found to increase maternal antenatal attachment.
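
A minimal sketch of the between-groups comparison: an independent t-test on values drawn around the reported post-test means and standard deviations (the raw data are not available, so the samples are simulated).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated post-test MAAS totals around the reported statistics
# (experimental 78.41 +/- 6.65, control 72.25 +/- 7.16, n = 55 each).
experimental = rng.normal(78.41, 6.65, 55)
control = rng.normal(72.25, 7.16, 55)

t, p = stats.ttest_ind(experimental, control)
print(f"independent t-test: t = {t:.2f}, p = {p:.4f}")
```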

Keywords: antenatal maternal attachment, fetal movement counting, pregnancy, midwifery

Procedia PDF Downloads 254
2653 Parameter Identification Analysis in the Design of Rock Fill Dams

Authors: G. Shahzadi, A. Soulaimani

Abstract:

This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis was utilized for numerical simulation. Polynomial and neural network-based response surfaces were generated to analyze the relationship between soil parameters and displacements. The performance of the surrogate models was analyzed and compared by evaluating the root mean square error. A comparative study was done based on objective functions and optimization techniques. Objective functions are categorized by considering measured data with and without instrument uncertainty and are defined by the least squares method, which estimates the norm between the predicted displacements and the measured values. Hydro Quebec provided the data sets of measured values for the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and solve non-convex and non-differentiable problems with ease, was used to obtain an optimum value. Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE) were compared on the minimization problem; although all these techniques take time to converge to an optimum value, PSO provided the best convergence and the best soil parameters. Overall, parameter identification analysis can be effectively used for rockfill dam applications and has the potential to become a valuable tool for geotechnical engineers in assessing dam performance and dam safety.
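
The inverse-analysis loop can be sketched with SciPy's differential evolution (the DE variant the paper compares) minimizing a least-squares misfit. The forward model below is a made-up stand-in for the Plaxis finite element run, and the parameter names, bounds, and measured values are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

measured = np.array([0.012, 0.034, 0.051, 0.060])  # hypothetical displacements (m)

def forward_model(params):
    """Stand-in for the Plaxis run: maps soil parameters (a stiffness E and
    a friction angle phi) to predicted displacements at four gauge points."""
    E, phi = params
    return 0.6 / E + 0.001 * np.array([1.0, 3.0, 4.5, 5.2]) * (45.0 - phi)

def objective(params):
    # Least-squares norm between predicted and measured displacements.
    return np.sum((forward_model(params) - measured) ** 2)

result = differential_evolution(objective, bounds=[(10, 200), (20, 45)], seed=0)
print("identified parameters (E, phi):", result.x, "misfit:", result.fun)
```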

Keywords: rockfill dam, parameter identification, stochastic analysis, regression, PLAXIS

Procedia PDF Downloads 134
2652 Health Status among Government and Private Primary School Children in the Central Region of Thailand

Authors: Petcharat Kerdonfag, Supunnee Thrakul

Abstract:

School health services, through regular screening of students' health status, have been a main responsibility of community and school health nurses. The purposes of this retrospective study were to assess and compare health problems between government and private primary school students in the central region of Thailand. The data were collected from school health records in October, at the end of the first semester of the 2018 academic year. Two thousand and fifty primary school health records from government and private primary schools were gathered to assess health problems regarding anthropometric measurements, physical examination/personal hygiene, and clinical findings. Descriptive statistics and the chi-square test were used for analysis. The results revealed that the prevalence of health problems among the school students remained high. The five most prevalent health problems were dental caries (36.6%), visual acuity problems (27.7%), over-nutrition (16.8%), head lice (12.8%), and under-nutrition (6.8%), respectively. When the five health problems were compared between government and private schools with chi-square analysis (dental caries 55.0% vs 19.9%, visual acuity problems 23.1% vs 31.9%, over-nutrition 20.2% vs 13.8%, head lice 26.5% vs 0.3%, and under-nutrition 10.6% vs 3.4%), the differences were statistically significant (p < .001). The problem of visual acuity appears more serious in private schools, while the other health problems tend to be more critical in government schools. The findings suggest that parents with children in private primary schools should pay more attention to visual defects, whereas parents with children in government schools should pay more attention to hygiene and health behavior problems.
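
The comparison reduces to a chi-square test on a contingency table. The sketch below runs it on hypothetical counts chosen to echo the reported dental caries prevalences (55.0% vs 19.9%); the group sizes are invented, not the study's raw data.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of dental caries for government vs. private students.
#               caries   no caries
table = [[550,  450],     # government (n = 1000, 55.0% prevalence)
         [209,  841]]     # private    (n = 1050, 19.9% prevalence)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```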

Keywords: community health nursing, school health service, health status, primary school children

Procedia PDF Downloads 106
2651 The Role of Human Capital in the Evolution of Inequality and Economic Growth in Latin-America

Authors: Luis Felipe Brito-Gaona, Emma M. Iglesias

Abstract:

There is a growing literature that studies the main determinants and drivers of inequality and economic growth in several countries, using panel data and different estimation methods (fixed effects, the Generalized Method of Moments (GMM), and Two-Stage Least Squares (TSLS)). A recent study of the evolution of these variables over the period 1980-2009 in the 18 countries of Latin America found that one of the main variables explaining their evolution was Foreign Direct Investment (FDI). We extend this study to 2015 in the same 18 countries and find that FDI no longer plays a significant role, while schooling levels have a significant negative effect on inequality and a significant positive effect on economic growth. We also find that the point estimates associated with human capital are the largest among the variables included in the analysis, meaning that an increase in human capital (measured by schooling levels in secondary education) is the main determinant that can help reduce inequality and increase economic growth in Latin America. Therefore, we advise that economic policies in Latin America be directed towards increasing the level of education. We use fixed effects, GMM, and TSLS estimation to check the robustness of our results, and our conclusion is the same regardless of the estimation method chosen. We also find that the 2008 international recession significantly reduced economic growth in the Latin American countries.
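
As a sketch of the first estimation method: a fixed-effects (within) panel regression on a synthetic country-year panel using the linearmodels package (the paper does not name its software, so this tooling is an assumption, as are the variable names and generating coefficients).

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(3)
countries = [f"c{i}" for i in range(18)]          # 18 Latin American economies
idx = pd.MultiIndex.from_product([countries, range(1980, 2016)],
                                 names=["country", "year"])
df = pd.DataFrame(index=idx)
df["schooling"] = rng.normal(60, 15, len(df))     # secondary enrolment proxy
df["fdi"] = rng.normal(3, 1, len(df))
df["growth"] = 0.05 * df["schooling"] - 0.01 * df["fdi"] + rng.normal(0, 1, len(df))

# Within estimator with country fixed effects and clustered standard errors.
res = PanelOLS.from_formula(
    "growth ~ 1 + schooling + fdi + EntityEffects", data=df
).fit(cov_type="clustered", cluster_entity=True)
print(res.params)
```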

Keywords: economic growth, human capital, inequality, Latin-America

Procedia PDF Downloads 214
2650 Application of KL Divergence for Estimation of Each Gene's Metabolic Pathway

Authors: Shohei Maruyama, Yasuo Matsuyama, Sachiyo Aburatani

Abstract:

The development of methods to annotate unknown gene functions is an important task in bioinformatics. One approach to annotation is the identification of the metabolic pathway that a gene is involved in. Gene expression data have been utilized for this identification, since they reflect various intracellular phenomena. However, it has been difficult to estimate gene function with high accuracy. The low accuracy of the estimation is considered to be caused by the difficulty of measuring gene expression precisely: even when measured under the same conditions, gene expression values usually vary. In this study, we propose a feature extraction method focusing on the variability of gene expression to estimate genes' metabolic pathways accurately. First, we estimated the distribution of each gene's expression from replicate data. Next, we calculated the dissimilarity between all gene pairs by Kullback–Leibler (KL) divergence, a measure of the divergence between distributions. Finally, we utilized the resulting vectors as feature vectors and trained a multiclass SVM to identify genes' metabolic pathways. To evaluate the developed method, we applied it to budding yeast and trained a multiclass SVM to identify seven metabolic pathways. As a result, the accuracy achieved by our method was higher than that calculated from the raw gene expression data. Thus, our developed method combined with KL divergence is useful for identifying genes' metabolic pathways.
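
A minimal end-to-end sketch of the pipeline, assuming (as one concrete choice) that each gene's replicate expression is modelled as a univariate Gaussian so the KL divergence has a closed form; the data, labels, and the Gaussian assumption are mine, not the paper's.

```python
import numpy as np
from sklearn.svm import SVC

def kl_gauss(mu0, s0, mu1, s1):
    """Closed-form KL(N(mu0, s0^2) || N(mu1, s1^2)) for univariate Gaussians."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1) ** 2) / (2 * s1**2) - 0.5

rng = np.random.default_rng(0)
n_genes = 60
# Hypothetical replicate expression matrix: genes x 8 replicates.
expr = rng.normal(rng.uniform(0, 5, (n_genes, 1)), 1.0, (n_genes, 8))
pathway = rng.integers(0, 3, n_genes)            # stand-in pathway labels

# Step 1: estimate each gene's expression distribution from replicates.
mu, sd = expr.mean(axis=1), expr.std(axis=1, ddof=1)

# Step 2: feature vector for each gene = KL divergences to all other genes.
features = np.array([[kl_gauss(mu[i], sd[i], mu[j], sd[j])
                      for j in range(n_genes)] for i in range(n_genes)])

# Step 3: train a multiclass SVM on the KL feature vectors.
clf = SVC(kernel="rbf").fit(features, pathway)
print("training accuracy:", clf.score(features, pathway))
```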

Keywords: metabolic pathways, gene expression data, microarray, Kullback–Leibler divergence, KL divergence, support vector machines, SVM, machine learning

Procedia PDF Downloads 391
2649 InSAR Time-Series Phase Unwrapping for Urban Areas

Authors: Hui Luo, Zhenhong Li, Zhen Dong

Abstract:

The analysis of multi-temporal InSAR (MTInSAR) techniques such as persistent scatterer (PS) and small baseline subset (SBAS) usually relies on temporal/spatial phase unwrapping (PU). Unfortunately, PU often fails for two reasons: 1) spatial phase jumps between adjacent pixels larger than π, as in layover and highly discontinuous terrain; and 2) temporal phase discontinuities such as time-varying atmospheric delay. To overcome these limitations, a least-squares-based PU method is introduced in this paper, which incorporates baseline-combination interferograms and an adjacent phase gradient network. First, permanent scatterers (PS) are selected for study. Starting with the linear baseline-combination method, we obtain equivalent 'small baseline interferograms' to limit the spatial phase differences. Then, phase differencing is performed between connected PSs (connected by a specific networking rule) to suppress spatially correlated phase errors such as atmospheric artifacts. After that, the interval phase differences along arcs are computed by the least-squares method, followed by an outlier detector to remove arcs with phase ambiguities. The unwrapped phase is then obtained by spatial integration. The proposed method is tested on real TerraSAR-X data, and the results are compared with those obtained by StaMPS (a software package with 3D PU capabilities). The comparison shows that the proposed method can successfully unwrap interferograms in urban areas even when high discontinuities exist, while StaMPS fails. Finally, precise DEM errors can be derived from the unwrapped interferograms.
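
The core step, least-squares integration of wrapped phase gradients, can be shown on a 1-D chain of PS points (the real network is 2-D and includes the outlier-detection stage, which is omitted here). The assumption that true arc differences stay below π is what the baseline-combination step is meant to guarantee.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def wrap(p):
    return (p + np.pi) % (2 * np.pi) - np.pi

# Hypothetical 1-D chain of PS points: smooth true phase, observed wrapped.
n = 50
rng = np.random.default_rng(2)
true_phase = np.linspace(0, 12, n) + 0.1 * rng.normal(size=n)
wrapped = wrap(true_phase)

# Arc gradients between adjacent PSs (valid while |true difference| < pi).
grad = wrap(np.diff(wrapped))

# Least-squares integration: solve D x = grad for the sparse difference
# operator D, then anchor the solution at the first point.
rows = np.repeat(np.arange(n - 1), 2)
cols = np.column_stack([np.arange(n - 1), np.arange(1, n)]).ravel()
vals = np.tile([-1.0, 1.0], n - 1)
D = coo_matrix((vals, (rows, cols)), shape=(n - 1, n))
x = lsqr(D, grad)[0]
unwrapped = x - x[0] + wrapped[0]
print("max abs error vs truth:", np.abs(unwrapped - true_phase).max())
```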

Keywords: phase unwrapping, time series, InSAR, urban areas

Procedia PDF Downloads 140
2648 Specification Requirements for a Combined Dehumidifier/Cooling Panel: A Global Scale Analysis

Authors: Damien Gondre, Hatem Ben Maad, Abdelkrim Trabelsi, Frédéric Kuznik, Joseph Virgone

Abstract:

The use of a radiant cooling solution lowers cooling needs, which is of great interest when the demand is initially high (hot climates). However, radiant systems are not naturally compatible with humid climates, since a low-temperature surface leads to condensation risks as soon as the surface temperature is close to or lower than the dew point temperature. A radiant cooling system combined with a dehumidification system would remove humidity from the space, thereby lowering the dew point temperature. The humidity removal needs to be especially effective near the cooled surface. This requirement could be fulfilled by a system using a single desiccant fluid for the removal of both excessive heat and moisture. This work aims at estimating the specification requirements of such a system in terms of the cooling power and dehumidification rate required to fulfill comfort requirements and to prevent any condensation risk on the cool panel surface. The present paper develops a preliminary study of the specification requirements, performance, and behavior of a combined dehumidifier/cooling ceiling panel under different operating conditions. The study was carried out using the TRNSYS software, which allows nodal calculations of thermal systems. It consists of dynamic modeling of the heat and vapor balances of a 5 m x 3 m x 2.7 m office space. In a first design estimation, this room is equipped with an ideal heating, cooling, humidification, and dehumidification system so that the room temperature is always maintained between 21 °C and 25 °C with a relative humidity between 40% and 60%. The room is also equipped with a ventilation system that includes a heat recovery heat exchanger and another heat exchanger connected to a heat sink. The main results show that the system should be designed to meet a cooling power of 42 W/m² and a desiccant rate of 45 g H₂O/h. Subsequently, a parametric study of comfort and system performance was carried out on a more realistic system (including a chilled ceiling) under different operating conditions, enabling an estimation of the acceptable range of operating conditions. This preliminary study is intended to provide useful information for the system design.
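
The condensation constraint hinges on the dew point. A quick sketch using the standard Magnus approximation (constants a = 17.27, b = 237.7 °C) shows why the panel surface temperature is bounded from below, which is what drives the dehumidification duty.

```python
import math

def dew_point(t_air_c, rh_percent):
    """Magnus approximation of the dew point temperature (deg C)."""
    a, b = 17.27, 237.7
    gamma = a * t_air_c / (b + t_air_c) + math.log(rh_percent / 100.0)
    return b * gamma / (a - gamma)

# Room at the comfort bound of 25 C / 60% RH: the chilled panel surface
# must stay above this dew point to avoid condensation.
td = dew_point(25.0, 60.0)
print(f"dew point: {td:.1f} C -> keep panel surface above this temperature")
```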

Keywords: dehumidification, nodal calculation, radiant cooling panel, system sizing

Procedia PDF Downloads 159
2647 Estimating the Receiver Operating Characteristic Curve from Clustered Data and Case-Control Studies

Authors: Yalda Zarnegarnia, Shari Messinger

Abstract:

Receiver operating characteristic (ROC) curves have been widely used in medical research to illustrate the performance of a biomarker in correctly distinguishing diseased and non-diseased groups. Correlated biomarker data arise in study designs that include subjects who share genetic or environmental factors. Information about correlation might help to identify family members at increased risk of disease development and may lead to initiating treatment to slow or stop progression to disease. Approaches appropriate to a case-control design matched by family identification must accommodate the correlation inherent in the design when estimating the biomarker's ability to differentiate between cases and controls, as well as handle estimation from a matched case-control design. This talk reviews methods developed for ROC curve estimation in settings with correlated data from case-control designs and discusses the limitations of current methods for analyzing correlated familial paired data. An alternative approach using conditional ROC curves will be demonstrated to provide appropriate ROC curves for correlated paired data. The proposed approach uses the information about the correlation among biomarker values, producing conditional ROC curves that evaluate the ability of a biomarker to discriminate between diseased and non-diseased subjects in a familial paired design.
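
For reference, the standard ROC estimator that the conditional approach corrects is easy to sketch on simulated independent data; the point of the talk is that this estimator's independence assumption is violated by familial pairing.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(5)
# Hypothetical biomarker values: diseased subjects shifted upward.
y = np.concatenate([np.ones(100), np.zeros(100)])
marker = np.concatenate([rng.normal(1.0, 1.0, 100), rng.normal(0.0, 1.0, 100)])

fpr, tpr, thresholds = roc_curve(y, marker)
print("AUC:", roc_auc_score(y, marker))
# Note: this estimator assumes independent observations, which familial
# paired designs violate, motivating the conditional ROC curves.
```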

Keywords: biomarker, correlation, familial paired design, ROC curve

Procedia PDF Downloads 227
2646 Bayesian Inference for High Dimensional Dynamic Spatio-Temporal Models

Authors: Sofia M. Karadimitriou, Kostas Triantafyllopoulos, Timothy Heaton

Abstract:

Reduced-dimension Dynamic Spatio-Temporal Models (DSTMs) jointly describe the spatial and temporal evolution of a function observed subject to noise. A basic state-space model is adopted for the discrete temporal variation, while a continuous autoregressive structure describes the continuous spatial evolution. Application of such a DSTM relies upon the pre-selection of a suitable reduced set of basis functions, and this can present a challenge in practice. In this talk, we propose an online estimation method for high-dimensional spatio-temporal data based upon DSTMs, and we attempt to resolve this issue by allowing the basis to adapt to the observed data. Specifically, we present a wavelet decomposition in order to obtain a parsimonious approximation of the spatial continuous process. This parsimony can be achieved by placing a Laplace prior distribution on the wavelet coefficients. The aim of using the Laplace prior is to filter out wavelet coefficients with low contribution and thus achieve dimension reduction with significant computational savings. We then propose a hierarchical Bayesian state-space model, for the estimation of which we offer an appropriate particle filter algorithm. The proposed methodology is illustrated using real environmental data.
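
The dimension-reduction mechanism can be illustrated concretely: a Laplace prior on wavelet coefficients yields soft thresholding as the MAP estimate, so low-contribution coefficients are zeroed out. The PyWavelets sketch below shows this on a synthetic 1-D field; the wavelet family, level, and threshold constant are assumptions.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
x = np.linspace(0, 4 * np.pi, 256)
signal = np.sin(x) + 0.5 * np.sin(3 * x)        # stand-in spatial process
noisy = signal + rng.normal(0, 0.3, x.size)

# Wavelet decomposition of the observed field.
coeffs = pywt.wavedec(noisy, "db4", level=4)

# Soft thresholding = MAP estimate under a Laplace prior on coefficients:
# small-contribution coefficients are filtered out (dimension reduction).
thr = 0.3 * np.sqrt(2 * np.log(noisy.size))
shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
recon = pywt.waverec(shrunk, "db4")
print("nonzero detail coefficients kept:",
      sum(int(np.count_nonzero(c)) for c in shrunk[1:]))
```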

Keywords: multidimensional Laplace prior, particle filtering, spatio-temporal modelling, wavelets

Procedia PDF Downloads 416
2645 Assessment of DNA Degradation Using Comet Assay: A Versatile Technique for Forensic Application

Authors: Ritesh K. Shukla

Abstract:

Degradation of biological samples at the level of macromolecules (DNA, RNA, and protein) is a major challenge in forensic investigation, as it can mislead the interpretation of results. Currently, there are no precise methods available to circumvent this problem, so preliminary screening methods are urgently needed. To this end, the comet assay is one of the most versatile, rapid, and sensitive molecular biology techniques for assessing DNA degradation. The technique can assess DNA degradation even with a very small amount of sample. Moreover, the method conveniently requires no additional DNA extraction and isolation step during the assessment. Samples are embedded directly on an agarose pre-coated microscope slide, and electrophoresis is performed on the same slide after the lysis step. After electrophoresis, the slide is stained with a DNA-binding dye and observed under a fluorescence microscope equipped with Komet software. With this technique, the extent of DNA degradation can be assessed in order to screen a sample before DNA fingerprinting and determine whether it is appropriate for DNA analysis. The technique not only helps to assess the degradation of DNA; many other challenges in forensic investigation, such as estimating the time since deposition of biological fluids, repairing genetic material from degraded biological samples, and early estimation of time since death, could also be addressed. This study attempts to explore the application of this well-known molecular biology technique, the comet assay, in the field of forensic science. The assay will open avenues in forensic research and development.

Keywords: comet assay, DNA degradation, forensic, molecular biology

Procedia PDF Downloads 143
2644 Estimation of Normalized Glandular Doses Using a Three-Layer Mammographic Phantom

Authors: Kuan-Jen Lai, Fang-Yi Lin, Shang-Rong Huang, Yun-Zheng Zeng, Po-Chieh Hsu, Jay Wu

Abstract:

The normalized glandular dose (DgN) is used to estimate the energy deposited in glandular tissue during mammography in clinical practice. Monte Carlo simulations frequently use a uniformly mixed phantom for calculating the conversion factor. However, breast tissues are not uniformly distributed, leading to errors in conversion factor estimation. This study constructed a three-layer phantom to estimate the normalized glandular dose more accurately. The Monte Carlo N-Particle (MCNP) code was used to create the geometric structure. We simulated three target/filter combinations (Mo/Mo, Mo/Rh, Rh/Rh), six voltages (25-35 kVp), six HVL parameters, and nine breast phantom thicknesses (2-10 cm) for the three-layer mammographic phantom. The conversion factors for 25%, 50%, and 75% glandularity were calculated. The error of the conversion factors compared with the results of the American College of Radiology (ACR) was within 6%; for Rh/Rh, the difference was within 9%. The difference between 50% average glandularity and the uniform phantom was 7.1% to -6.7% for the Mo/Mo combination at a voltage of 27 kVp, a half-value layer of 0.34 mmAl, and a breast thickness of 4 cm. According to the simulation results, regression analysis found that the three-layer mammographic phantom can be used to accurately calculate conversion factors at 0% to 100% glandularity. Differences in glandular tissue distribution lead to errors in conversion factor calculation; the three-layer mammographic phantom can provide accurate estimates of glandular dose in clinical practice.
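
To make the layered-phantom idea concrete, the toy Monte Carlo below samples first-interaction depths in a three-layer slab from the exponential attenuation law. The layer thicknesses and attenuation coefficients are illustrative guesses for a ~20 keV beam, not MCNP cross-section data, and a real DgN calculation would additionally transport secondary particles and score energy deposited in the glandular layer.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical three-layer breast phantom: adipose / gland / adipose.
thickness = [1.0, 2.0, 1.0]      # cm
mu = [0.46, 0.80, 0.46]          # linear attenuation coefficients, 1/cm

n = 100_000
interactions = [0, 0, 0]
transmitted = 0
for _ in range(n):
    for layer, (t, m) in enumerate(zip(thickness, mu)):
        # Memoryless free path: sample an exponential depth in each layer.
        if rng.exponential(1.0 / m) < t:
            interactions[layer] += 1
            break
    else:
        transmitted += 1

print("first-interaction fraction per layer:",
      [round(c / n, 4) for c in interactions],
      "transmitted:", round(transmitted / n, 4))
```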

Keywords: Monte Carlo simulation, mammography, normalized glandular dose, glandularity

Procedia PDF Downloads 179
2643 Vocational Education: A Synergy for Skills Acquisition and Global Learning in Colleges of Education in Ogun State, Nigeria

Authors: Raimi, Kehinde Olawuyi, Omoare Ayodeji Motunrayo

Abstract:

In the last two decades, there has been rising youth unemployment, restiveness, and social vices in Nigeria. The relevance of vocational education for skills acquisition, global learning, and national development in addressing these problems cannot be underestimated. The need to economically empower Nigerian youths to develop the nation and keep up with ever-changing global learning and the global economy thus led to this assessment of vocational education as a synergy for skills acquisition and global learning in Ogun State, Nigeria. One hundred and twenty out of 1,500 students were randomly selected for this study. Data were obtained through a questionnaire and analyzed with descriptive statistics and the chi-square test. The results showed that 59.2% of the respondents were between 20 and 24 years of age, 60.8% were male, and 65.8% had a keen interest in vocational education. Also, 90% of the respondents acquired skills in extension/advisory services, 78.3% in poultry production, and 69.1% in fisheries/aquaculture. The major constraints to vocational education were inadequate resource personnel (χ² = 10.25, p = 0.02), inadequate training facilities (x̅ = 2.46), and unstable power supply (x̅ = 2.38). Chi-square results showed a significant association between constraints and skills acquisition (χ² = 12.54, p = 0.00) at the p < 0.05 level of significance. It was established that vocational education contributed significantly to students' skills acquisition and global learning. This study therefore recommends that the school authority address the shortage of personnel so as not to overstretch the institution's available staff, while the provision of an alternative stable power supply (solar power) is also essential for an effective teaching and learning process.

Keywords: vocational education, skills acquisition, national development, global learning

Procedia PDF Downloads 115
2642 Earnings vs Cash Flows: The Valuation Perspective

Authors: Megha Agarwal

Abstract:

This research paper compares earnings-based and cash-flow-based methods of valuing an enterprise. Theoretically equivalent methods based on earnings, such as the Residual Earnings Model (REM), the Abnormal Earnings Growth Model (AEGM), the Residual Operating Income Method (ReOIM), the Abnormal Operating Income Growth Model (AOIGM), and extensions thereof such as the price/earnings and price/book value multipliers, and cash-flow-based models, such as the Dividend Valuation Method (DVM) and the Free Cash Flow Method (FCFM), all provide different valuation estimates for the Indian corporate giant Reliance India Limited (RIL). An ex-post analysis of published accounting and financial data for four financial years from 2008-09 to 2011-12 was conducted. A comparison of these valuation estimates with the actual market capitalization of the company shows that the more complex accounting-based model, AOIGM, provides the closest forecasts. The estimates differ due to inconsistencies in discount rates, growth rates, and the other forecasted variables. Although inputs for earnings-based models may be available to investors and analysts through published statements, precise estimation of free cash flows may be better undertaken by internal management. Estimation of value from more stable parameters, such as residual operating income and RNOA, could be considered superior to valuations based on the more volatile return on equity.
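
Of the models compared, the Residual Earnings Model is the simplest to illustrate: value equals current book value plus the present value of forecast residual earnings. The worked sketch below uses invented numbers and an assumed 40% payout under clean-surplus accounting, not RIL's published figures.

```python
# Residual Earnings Model: V0 = B0 + sum RE_t/(1+r)^t + discounted terminal value,
# where RE_t = E_t - r * B_{t-1}. All inputs here are illustrative.
book_value = 100.0                       # current book value per share, B0
cost_of_equity = 0.12                    # discount rate r
forecast_earnings = [15.0, 16.5, 18.0]   # EPS forecasts, years 1-3
growth = 0.04                            # terminal growth of residual earnings

value, bv = book_value, book_value
residuals = []
for t, eps in enumerate(forecast_earnings, start=1):
    re = eps - cost_of_equity * bv       # residual earnings for year t
    residuals.append(re)
    value += re / (1 + cost_of_equity) ** t
    bv += eps * 0.6                      # clean surplus with assumed 40% payout

# Terminal value of residual earnings growing at g, discounted to today.
tv = residuals[-1] * (1 + growth) / (cost_of_equity - growth)
value += tv / (1 + cost_of_equity) ** len(forecast_earnings)
print(f"REM intrinsic value per share: {value:.2f}")
```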

Keywords: earnings, cash flows, valuation, Residual Earnings Model (REM)

Procedia PDF Downloads 361
2641 Immunosuppressive Effect of Chloroquine through the Inhibition of Myeloperoxidase

Authors: J. B. Minari, O. B. Oloyede

Abstract:

Polymorphonuclear neutrophils (PMNs) play a crucial role in defence against a variety of infections caused by bacteria, fungi, and parasites. Indeed, the involvement of PMNs in host defence against Plasmodium falciparum is well documented both in vitro and in vivo. Many antimalarial drugs, such as chloroquine, used in the treatment of human malaria significantly reduce the immune response of the host in vitro and in vivo. Myeloperoxidase is the most abundant enzyme in the polymorphonuclear neutrophil and plays a crucial role in its function. This study was carried out to investigate the effect of chloroquine on the enzyme. In investigating the effects of the drug on myeloperoxidase, the influence of concentration and pH, partition ratio estimation, and the kinetics of inhibition were studied. This study showed that chloroquine is a concentration-dependent inhibitor of myeloperoxidase with an IC50 of 0.03 mM. Partition ratio estimation showed that 40 enzymatic turnover cycles are required for complete inhibition of myeloperoxidase in the presence of chloroquine. The influence of pH showed significant inhibition of myeloperoxidase at physiological pH. The kinetic inhibition studies showed that chloroquine causes non-competitive inhibition with an inhibition constant Ki of 0.27 mM. The results obtained from this study show that chloroquine is a potent inhibitor of myeloperoxidase, capable of inactivating the enzyme. The inhibition of myeloperoxidase in the presence of chloroquine, as revealed in this study, may therefore partly explain the impairment of polymorphonuclear neutrophils and the consequent immunosuppression of the host defence system against secondary infections.
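
An IC50 like the 0.03 mM reported here is typically read off a fitted dose-response curve. The sketch below fits a logistic (Hill-type) curve to synthetic activity points constructed to be consistent with that IC50; the functional form and data are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc, ic50, hill):
    """Fractional MPO activity remaining at a given inhibitor concentration."""
    return 1.0 / (1.0 + (conc / ic50) ** hill)

# Hypothetical activity measurements (fraction of control) vs. [CQ] in mM.
conc = np.array([0.001, 0.003, 0.01, 0.03, 0.1, 0.3])
activity = np.array([0.97, 0.90, 0.70, 0.50, 0.25, 0.10])

(ic50, hill), _ = curve_fit(dose_response, conc, activity, p0=[0.03, 1.0])
print(f"fitted IC50 = {ic50:.3f} mM (paper reports 0.03 mM), Hill = {hill:.2f}")
```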

Keywords: myeloperoxidase, chloroquine, inhibition, neutrophil, immune

Procedia PDF Downloads 358
2640 Developing a Health Promotion Program to Prevent and Solve the Problem of Frailty among the Elderly in the Community

Authors: Kunthida Kulprateepunya, Napat Boontiam, Bunthita Phuasa, Chatsuda Kankayant, Bantoeng Polsawat, Sumran Poontong

Abstract:

Frailty is the thin line between good health and illness, and the syndrome is more common in the elderly as they transition from strength to weakness (vulnerability). Frailty can be prevented, and healthy recovery promoted, before it progresses to disability. This research and development study aims to analyze the frailty situation among the elderly, develop a program, and evaluate the effect of a health promotion program to prevent and solve the problem of frailty among the elderly. The research consisted of three phases: 1) analysis of the frailty situation, 2) development of a model, and 3) evaluation of the effectiveness of the model. The samples comprised 328 and 122 elderly participants, selected using the multi-stage random sampling method. The research instrument was a frailty questionnaire covering five symptoms, the main characteristics being muscle weakness, slow walking, low physical activity, fatigue, and unintentional weight loss; meeting three or more of these criteria indicates frailty. Data were analyzed by descriptive statistics and the dependent t-test. The findings comprised three parts. First, 23.05% of the elderly were frail and 56.70% were pre-frail. Second, a health promotion program to prevent and solve the problem of frailty among the elderly was developed, combining the nine-square exercise, elastic band exercise, and elastic coconut shell exercise. Third, the effectiveness of the model was evaluated by comparing the elderly participants' get-up-and-go test results: the average time was 14.42 seconds before the program and 8.57 seconds after, a difference statistically significant at the .05 level. In conclusion, the findings can be used to develop guidelines to promote the health of the frail elderly.
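
The before/after comparison is a dependent (paired) t-test; the sketch below runs one on synthetic paired times centred on the reported means (14.42 s before, 8.57 s after), with the sample size and spreads assumed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Simulated paired timed get-up-and-go times (seconds).
before = rng.normal(14.42, 2.5, 30)
after = before - rng.normal(5.85, 1.5, 30)   # mean improvement ~ 14.42 - 8.57

t, p = stats.ttest_rel(before, after)
print(f"dependent t-test: t = {t:.2f}, p = {p:.4f}")
```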

Keywords: elderly, fragile, nine-square exercise, elastic coconut shell

Procedia PDF Downloads 98
2639 Effect Analysis of an Improved Adaptive Speech Noise Reduction Algorithm in Online Communication Scenarios

Authors: Xingxing Peng

Abstract:

With the development of society, online communication scenarios such as teleconferencing and online education are increasingly common. In conference communication, voice quality is a very important component, and noise may greatly degrade the communication experience for participants. Voice noise reduction therefore has an important impact on scenarios such as voice calls. This research focuses on the key technologies of the sound transmission process, with the purpose of preserving audio quality as much as possible so that the listener hears clearer and smoother sound. To address the problem that traditional speech enhancement algorithms perform poorly on non-stationary noise, this paper studies an adaptive speech noise reduction algorithm. Traditional noise estimation methods are mainly designed for stationary noise. Here, we study the spectral characteristics of different noise types, especially the characteristics of non-stationary burst noise, and design a noise estimator module to deal with non-stationary noise. Noise features are extracted from non-speech segments, and the noise estimation module is adjusted in real time according to the different noise characteristics. This adaptive algorithm can enhance speech according to the noise characteristics, improving the performance of traditional algorithms on non-stationary noise and thereby achieving a better enhancement effect. The experimental results show that the proposed algorithm is effective and can better adapt to different types of noise, yielding a better speech enhancement effect.
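
A minimal frame-wise Wiener-style gain illustrates the baseline this work adapts: the noise PSD is estimated from a noise-only segment, whereas the paper's estimator would update that PSD online per detected noise type. Frame sizes, gain floor, and the toy signal are all assumptions.

```python
import numpy as np

def wiener_denoise(x, noise_clip, frame=512, hop=256):
    """Frame-wise Wiener-style spectral gain with a noise PSD estimated
    from a noise-only clip (a static stand-in for the adaptive estimator)."""
    window = np.hanning(frame)
    frames = [noise_clip[i:i + frame] * window
              for i in range(0, len(noise_clip) - frame, hop)]
    noise_psd = np.mean([np.abs(np.fft.rfft(f)) ** 2 for f in frames], axis=0)
    out = np.zeros(len(x))
    for start in range(0, len(x) - frame, hop):
        seg = x[start:start + frame] * window
        spec = np.fft.rfft(seg)
        psd = np.abs(spec) ** 2
        # Wiener-like gain, floored to limit musical-noise artifacts.
        gain = np.maximum(1.0 - noise_psd / np.maximum(psd, 1e-12), 0.05)
        out[start:start + frame] += np.fft.irfft(gain * spec, n=frame) * window
    return out

fs = 16000
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(9)
noise = 0.3 * rng.normal(size=t.size)
clean = 0.8 * np.sin(2 * np.pi * 440 * t) * (t > 0.5)   # speech stand-in
enhanced = wiener_denoise(clean + noise, noise_clip=noise[: fs // 2])
```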

Keywords: speech noise reduction, speech enhancement, self-adaptation, Wiener filter algorithm

Procedia PDF Downloads 46
2638 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis

Authors: Petr Gurný

Abstract:

One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. This paper discusses the possibility of determining a financial institution's PD using credit-scoring models. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression, and probit regression) from a sample of almost three hundred US commercial banks; these models are then compared and verified on a control sample in order to choose the best one. The second part of the paper applies the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks. Values of particular indicators are randomly sampled and the PD distribution estimated, under the assumption that the indicators are distributed according to a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, in particular). Although the obtained results show that all the banks are relatively healthy, there is still a high chance that 'a financial crisis' will occur, at least in terms of probability, as indicated by estimates of various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
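
One of the three model families, the logit credit-scoring model, can be sketched in a few lines with statsmodels. The bank ratios, their names, and the generating coefficients below are synthetic placeholders, not the paper's data or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 300   # roughly the size of the US commercial bank estimation sample
df = pd.DataFrame({
    "capital": rng.normal(0.12, 0.03, n),   # hypothetical capital adequacy
    "roa": rng.normal(0.01, 0.01, n),       # hypothetical return on assets
    "npl": rng.normal(0.05, 0.03, n),       # hypothetical NPL ratio
})
logit_true = -25 * df["capital"] - 60 * df["roa"] + 30 * df["npl"]
df["default"] = (rng.random(n) < 1 / (1 + np.exp(-logit_true))).astype(float)

X = sm.add_constant(df[["capital", "roa", "npl"]])
model = sm.Logit(df["default"], X).fit(disp=False)
pd_hat = model.predict(X)                   # estimated probabilities of default
print(model.params)
print("mean estimated PD:", round(float(pd_hat.mean()), 3))
```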

Keywords: credit-scoring models, multidimensional subordinated Lévy model, probability of default

Procedia PDF Downloads 442
2637 Full-Field Estimation of Cyclic Threshold Shear Strain

Authors: E. E. S. Uy, T. Noda, K. Nakai, J. R. Dungca

Abstract:

Cyclic threshold shear strain is the cyclic shear strain amplitude that serves as the indicator of the development of pore water pressure. The parameter can be obtained by performing cyclic triaxial, shaking table, cyclic simple shear, or resonant column tests. In a cyclic triaxial test, other researchers install measuring devices in close proximity to the soil to measure the parameter. In this study, an attempt was made to estimate the cyclic threshold shear strain parameter using a full-field measurement technique, which uses a camera to monitor and measure the movement of the soil. The technique was incorporated into a strain-controlled consolidated undrained cyclic triaxial test. Calibration of the camera was first performed to ensure that it could properly measure deformation under cyclic loading; its capacity to measure deformation was also investigated using a cylindrical rubber dummy. Two-dimensional image processing was implemented, and the Lucas-Kanade optical flow algorithm was applied to track the movement of the soil particles. Results from the full-field measurement technique were compared with those from a linear variable displacement transducer. A range of values was determined from the estimation, due to the non-homogeneous deformation of the soil observed during cyclic loading. The minimum values were on the order of 10⁻² % in some areas of the specimen.
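
The tracking step can be sketched with OpenCV's pyramidal Lucas-Kanade implementation on a synthetic frame pair standing in for the calibrated specimen images; deriving shear strain from the displacement field, and the camera calibration itself, are not reproduced here.

```python
import cv2
import numpy as np

# Synthetic grayscale frame pair: a bright block shifted by (3, 2) pixels,
# standing in for two images of the soil specimen under cyclic loading.
old_gray = np.zeros((480, 640), np.uint8)
cv2.rectangle(old_gray, (300, 220), (360, 260), 255, -1)
new_gray = np.zeros((480, 640), np.uint8)
cv2.rectangle(new_gray, (303, 222), (363, 262), 255, -1)

# Pick trackable corner features, then track them with Lucas-Kanade flow.
p0 = cv2.goodFeaturesToTrack(old_gray, maxCorners=50, qualityLevel=0.3,
                             minDistance=7)
p1, status, err = cv2.calcOpticalFlowPyrLK(old_gray, new_gray, p0, None,
                                           winSize=(21, 21), maxLevel=2)
disp = (p1 - p0)[status.ravel() == 1].reshape(-1, 2)
# Shear strain would then be derived from the relative displacement field;
# here we just report the mean tracked displacement in pixels.
print("mean displacement (px):", disp.mean(axis=0))
```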

Keywords: cyclic loading, cyclic threshold shear strain, full-field measurement, optical flow

Procedia PDF Downloads 223
2636 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data

Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer

Abstract:

This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average process of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) model are derived iteratively from some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties. The forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is proposed in which BINARMA(1,1) count data are generated using R code for the multivariate Poisson innovation terms. The performance of the BINARMA(1,1) model is then assessed through a simulation experiment, and the mean estimates of the model parameters are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, based on several covariates: policemen, daily patrols, speed cameras, traffic lights, and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
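
The binomial thinning operator that defines the model is easy to demonstrate. The sketch below simulates a single series of an INARMA(1,1)-type process with Poisson innovations; the paper's bivariate extension with correlated innovations, and its CML estimation, are not reproduced, and the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)

def thin(x, alpha):
    """Binomial thinning operator: alpha o x ~ Binomial(x, alpha)."""
    return rng.binomial(x, alpha)

# X_t = a o X_{t-1} + R_t + b o R_{t-1}, with R_t ~ Poisson(lam).
a, b, lam, T = 0.5, 0.3, 2.0, 500
innov = rng.poisson(lam, T)
x = np.zeros(T, dtype=int)
for t in range(1, T):
    x[t] = thin(x[t - 1], a) + innov[t] + thin(innov[t - 1], b)

print("sample mean:", x.mean(), "sample variance:", x.var())
```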

Keywords: non-stationary, BINARMA(1, 1) model, Poisson innovations, conditional maximum likelihood, CML

Procedia PDF Downloads 119
2635 Deliberation of Daily Evapotranspiration and Evaporative Fraction Based on Remote Sensing Data

Authors: J. Bahrawi, M. Elhag

Abstract:

Estimation of evapotranspiration is always a major component of water resources management. Traditional techniques for calculating daily evapotranspiration based on field measurements are valid only at local scales. Earth observation satellite sensors are thus used to overcome the difficulties of obtaining daily evapotranspiration measurements at regional scale. The Surface Energy Balance System (SEBS) model was adopted to estimate daily evapotranspiration and relative evaporation, along with other land surface energy fluxes. The model requires agro-climatic data that improve its outputs. Advanced Along-Track Scanning Radiometer (AATSR) and Medium Resolution Imaging Spectrometer (MERIS) imagery was used to estimate daily evapotranspiration and relative evaporation over the entire Nile Delta region of Egypt, supported by meteorological data collected from six weather stations located within the study area. Daily evapotranspiration maps derived from the SEBS model show strong agreement with ground-truth data taken from 92 points uniformly distributed over the study area. Moreover, daily evapotranspiration and relative evaporation are strongly correlated. Reliable estimation of daily evapotranspiration supports decision makers in reviewing current land-use practices in terms of water management, while enabling them to propose appropriate land-use changes.
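
A common upscaling step in SEBS-type workflows is the evaporative fraction: instantaneous fluxes at satellite overpass give EF = LE/(Rn - G), which is assumed roughly constant over the day and converts daily available energy into daily ET. The numbers below are illustrative, not Nile Delta values.

```python
# Instantaneous energy balance at overpass (W/m^2), all values illustrative.
rn_inst, g_inst, h_inst = 520.0, 60.0, 140.0
le_inst = rn_inst - g_inst - h_inst          # latent heat flux as the residual
ef = le_inst / (rn_inst - g_inst)            # evaporative fraction

# Daily upscaling: EF applied to daily available energy (G ~ 0 over a day).
rn_daily, g_daily = 14.0e6, 0.0              # J/m^2/day
lambda_v = 2.45e6                            # latent heat of vaporization, J/kg
et_daily = ef * (rn_daily - g_daily) / lambda_v   # kg/m^2/day = mm/day
print(f"EF = {ef:.2f}, daily ET = {et_daily:.2f} mm/day")
```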

Keywords: daily evapotranspiration, relative evaporation, SEBS, AATSR, MERIS, Nile Delta

Procedia PDF Downloads 247
2634 Evaluation of Heat Transfer and Entropy Generation by Al2O3-Water Nanofluid

Authors: Houda Jalali, Hassan Abbassi

Abstract:

In this numerical work, natural convection and entropy generation of an Al₂O₃-water nanofluid in a square cavity have been studied. Two-dimensional steady laminar natural convection in a differentially heated square cavity of length L, filled with a nanofluid, is investigated numerically. The horizontal walls are considered adiabatic, while the vertical walls at x=0 and x=L are maintained at the hot temperature Th and the cold temperature Tc, respectively. The solution is computed with the CFD code FLUENT in combination with GAMBIT as mesh generator. The simulations are performed for Rayleigh numbers in the range 10³ ≤ Ra ≤ 10⁶, solid volume fractions from 1% to 5%, a fixed particle size of dp = 33 nm, and temperatures ranging from 20 to 70 °C. We used models of nanofluid thermophysical properties based on experimental measurements to study the effect of adding solid particles to water on natural convection heat transfer and entropy generation, namely thermal conductivity and dynamic viscosity models that depend on solid volume fraction, particle size, and temperature. The average Nusselt number is calculated at the hot wall of the cavity for different solid volume fractions. The most important result is that at low temperatures (below 40 °C), the addition of Al₂O₃ nanosolids to water leads to a decrease in heat transfer and entropy generation instead of the expected increase, whereas at high temperatures, heat transfer and entropy generation increase with the addition of nanosolids. This behavior is due to the contradictory effects of the viscosity and thermal conductivity of the nanofluid, which are discussed in this work.
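
The abstract does not give the experimental correlations the authors used, so as a hedged illustration of how the solid volume fraction enters the competing properties, the sketch below evaluates the classical Maxwell conductivity and Brinkman viscosity models (which, unlike the authors' correlations, carry no temperature dependence). Property values are rough figures at about 25 °C.

```python
def maxwell_conductivity(k_f, k_p, phi):
    """Maxwell model for the effective thermal conductivity of a nanofluid."""
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

def brinkman_viscosity(mu_f, phi):
    """Brinkman model for the effective dynamic viscosity."""
    return mu_f / (1 - phi) ** 2.5

k_water, k_al2o3 = 0.613, 40.0    # W/(m.K), approximate values near 25 C
mu_water = 8.9e-4                  # Pa.s
for phi in (0.01, 0.03, 0.05):
    print(f"phi={phi:.2f}: k_nf={maxwell_conductivity(k_water, k_al2o3, phi):.3f}"
          f" W/m.K, mu_nf={brinkman_viscosity(mu_water, phi):.2e} Pa.s")
```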

Keywords: entropy generation, heat transfer, nanofluid, natural convection

Procedia PDF Downloads 259
2633 Linear Regression Estimation of Tactile Comfort for Denim Fabrics Based on In-Plane Shear Behavior

Authors: Nazli Uren, Ayse Okur

Abstract:

The tactile comfort of a textile product is an essential property and a major concern for customer perceptions and preferences. The subjective nature of comfort and the difficulty of simulating human hand sensory feelings make it hard to establish a well-accepted link between tactile comfort and objective evaluations. On the other hand, the shear behavior of a fabric is a mechanical parameter that can be measured by various objective test methods. The principal aim of this study is to determine the tactile comfort of commercially available denim fabrics by subjective measurements, create a tactile score database for denim fabrics, and investigate the relations between tactile comfort and shear behavior. The in-plane shear behaviors of 17 commercially available denim fabrics with a variety of raw materials and weave structures were measured with a custom-designed shear frame and the conventional bias extension method in the two corresponding diagonal directions. The tactile comfort of the denim fabrics was determined via subjective customer evaluations as well. The aforesaid relations were statistically investigated and expressed as regression equations. The analyses of the relations between tactile comfort and shear behavior showed considerably high correlation coefficients, and the suggested regression equations were likewise found to be statistically significant. Accordingly, it was concluded that the tactile comfort of denim fabrics can be estimated with high precision based on the results of in-plane shear behavior measurements.
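
The estimation itself is an ordinary least-squares line. The sketch below fits one to hypothetical shear-rigidity/comfort pairs; the actual coefficients and units belong to the paper's measurements, not this toy data.

```python
import numpy as np

# Hypothetical pairs: shear rigidity from the bias-extension test vs.
# subjective tactile comfort score (1-5) for denim samples.
shear = np.array([0.8, 1.1, 1.4, 1.9, 2.3, 2.8, 3.5])
comfort = np.array([4.6, 4.3, 4.0, 3.4, 3.1, 2.6, 2.0])

slope, intercept = np.polyfit(shear, comfort, 1)
r = np.corrcoef(shear, comfort)[0, 1]
print(f"comfort = {intercept:.2f} + {slope:.2f} * shear (r = {r:.2f})")
```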

Keywords: denim fabrics, in-plane shear behavior, linear regression estimation, tactile comfort

Procedia PDF Downloads 287
2632 Correlation Analysis between the Corporate Governance and Financial Performance of Banking Sectors Using Parameter Estimation

Authors: Vishwa Nath Maurya, Rama Shanker Sharma, Saad Talib Hasson Aljebori, Avadhesh Kumar Maurya, Diwinder Kaur Arora

Abstract:

This paper deals with the problem of determining the relationship between corporate governance variables and the financial performance of Islamic banks. Corporate governance is of increasing importance in the banking sector because of banks' special nature: the bankruptcy of a bank affects not only the relevant parties, from customers to depositors and lenders, but also financial stability and, in turn, the economy as a whole. We address the specificity of governance in Islamic banks, which face a dual governance structure: the Anglo-Saxon governance system and the Islamic governance system. In addition, we measure the impact of corporate governance variables on financial performance through an empirical study of a sample of Islamic banks in the GCC region over the period 2005-2012. Our study shows a very strong relationship between governance variables and the financial performance of Islamic banks: there is a positive relationship between return on assets and the composition of the board of directors, the size of the board of directors, the number of board committees, and the number of members of the Sharia Supervisory Board, while there is a clear negative relationship between return on assets and ownership concentration.
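
A minimal sketch of the correlation analysis on synthetic bank-year data; the variable names and the signs built into the generating rule are chosen only to mirror the stated findings, not estimated from real data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(12)
# Hypothetical bank-year observations of governance variables and ROA.
df = pd.DataFrame({
    "board_size": rng.integers(5, 15, 80).astype(float),
    "sharia_board": rng.integers(3, 8, 80).astype(float),
    "ownership_conc": rng.uniform(0.1, 0.8, 80),
})
df["roa"] = (0.002 * df["board_size"] + 0.003 * df["sharia_board"]
             - 0.02 * df["ownership_conc"] + rng.normal(0, 0.01, 80))

print(df.corr()["roa"].round(3))   # sign pattern mirrors the reported results
```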

Keywords: correlation analysis, parametric estimation, corporate governance, financial performance, financial stability, conventional banks, bankruptcy, Islamic governance system

Procedia PDF Downloads 505