Search results for: higher dimensional pmf estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14388

13728 Measures for Conflict Management in Nigerian Higher Institutions

Authors: Oyelade Oluwatoyin

Abstract:

The phenomenon of crises in the Nigerian educational sector has reached its peak in the 21st century. This paper therefore examines the strategies that can be used to manage conflict situations in Nigerian higher institutions of learning. Causes of conflict such as inadequate funding, insufficient school facilities, poor working conditions, poor enrolment, the proliferation of higher institutions and unfavourable administrative decisions are major threats to law and order, manifesting in strike actions, destruction of property and disruption of programmes, coupled with student unrest. This paper draws on the available information with the aim of adding value to existing knowledge. It is recommended that policy makers take steps to prevent the scourge of conflict in tertiary institutions in Nigeria.

Keywords: conflicts, higher institutions, management, measures

Procedia PDF Downloads 347
13727 Estimation of Consolidating Settlement Based on a Time-Dependent Skin Friction Model Considering Column Surface Roughness

Authors: Jiang Zhenbo, Ishikura Ryohei, Yasufuku Noriyuki

Abstract:

Improvement of soft clay deposits by the combination of surface stabilization and floating-type cement-treated columns is one of the most popular techniques worldwide. On the basis of a one-dimensional consolidation model, a time-dependent skin friction model for the column-soil interaction is proposed. The nonlinear relationship between the column shaft shear stresses and the effective vertical pressure of the surrounding soil can be described by this model. The influence of column-soil surface roughness can be represented using a roughness coefficient R, which plays an important role in the design of the column length. Based on the homogenization method, part of the floating-type improved ground is treated as an unimproved portion whose length αH1 is defined as a time-dependent equivalent skin friction length; the compression settlement of this unimproved portion can be predicted using the soft clay parameters only. Apart from calculating the settlement of this composite ground, the load transfer mechanism is discussed using model tests. The proposed model is validated against calculations and laboratory results from model and ring shear tests, which indicate the suitability and accuracy of the solutions presented in this paper.

Keywords: floating type improved foundation, time-dependent skin friction, roughness, consolidation

Procedia PDF Downloads 455
13726 A Generalized Family of Estimators for Estimation of Unknown Population Variance in Simple Random Sampling

Authors: Saba Riaz, Syed A. Hussain

Abstract:

This paper addresses the estimation of the unknown population variance of the variable of interest. A new generalized class of estimators of the finite population variance is suggested using auxiliary information. To improve the precision of the proposed class, the known population variance of the auxiliary variable is used. Mathematical expressions for the biases and the asymptotic variances of the suggested class are derived under a large-sample approximation. Theoretical and numerical comparisons are made to investigate the performance of the proposed class of estimators. The empirical study reveals that the suggested class of estimators performs better than the usual estimator, the classical ratio estimator, the classical product estimator and the classical linear regression estimator. It has also been found that the suggested class of estimators is more efficient than some recently published estimators.
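
For readers unfamiliar with auxiliary-information variance estimation, the sketch below illustrates the classical ratio-type variance estimator that generalized classes of this kind build on, together with a Monte Carlo estimate of percent relative efficiency. The population model, parameter values and sample size are illustrative assumptions, not the data or the estimators of the study.

```python
# Illustrative sketch (not the paper's estimator class): classical ratio-type
# estimator of the finite population variance S_y^2 using the known variance
# S_x^2 of an auxiliary variable x, under simple random sampling.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite population with y correlated to the auxiliary x
N = 5000
x = rng.gamma(shape=4.0, scale=2.0, size=N)
y = 3.0 + 1.5 * x + rng.normal(0.0, 2.0, size=N)
S2_y = y.var(ddof=1)          # true population variance (estimation target)
S2_x = x.var(ddof=1)          # known auxiliary population variance

def estimates(n=100, reps=5000):
    usual, ratio = [], []
    for _ in range(reps):
        idx = rng.choice(N, size=n, replace=False)
        s2_y, s2_x = y[idx].var(ddof=1), x[idx].var(ddof=1)
        usual.append(s2_y)                    # usual unbiased estimator
        ratio.append(s2_y * S2_x / s2_x)      # classical ratio estimator
    return np.array(usual), np.array(ratio)

usual, ratio = estimates()
mse = lambda est: np.mean((est - S2_y) ** 2)
print("PRE of ratio estimator vs usual: %.1f%%" % (100 * mse(usual) / mse(ratio)))
```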

Keywords: study variable, auxiliary variable, finite population variance, bias, asymptotic variance, percent relative efficiency

Procedia PDF Downloads 203
13725 Two-Dimensional CFD Simulation of the Behaviors of Ferromagnetic Nanoparticles in Channel

Authors: Farhad Aalizadeh, Ali Moosavi

Abstract:

This paper presents a two-dimensional computational fluid dynamics (CFD) simulation of steady particle tracking, with the purpose of examining the effect of an applied magnetic field on the velocity distribution of magnetic nanoparticles. It is shown that the permeability of the particles determines the effect of the magnetic field on their deposition, and that the deposition of the particles is inversely proportional to the Reynolds number. Using MHD and its properties, it is possible to control the flow velocity, remove fouling from the walls and return the system to its original form. A two-dimensional channel geometry is considered, and the resulting spatial distribution of particles is solved for. According to the results obtained, when magnetic fields are applied perpendicular to the flow, the local particle velocity decreases due to the direct effect of the magnetic field, returning the system to its original form. In the intended application, the ferromagnetic particles are first covered with a gel-like chemical composition, in order to avoid mixing with blood, and injected into the blood vessels; a magnetic field source at a specified distance from the vessel is then used to guide the particles to the affected area. The paper also presents a two-dimensional CFD simulation of the steady, laminar flow of an incompressible magnetorheological (MR) fluid between two fixed parallel plates in the presence of a uniform magnetic field, the purpose being to develop a numerical tool able to simulate MR fluid flow in valve mode and to determine the effect of the applied magnetic field B0 on the flow velocities and pressure distributions.
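
As a rough illustration of the particle-tracking idea only (not the authors' solver), the sketch below integrates the trajectory of a single magnetizable particle in a channel flow under Stokes drag and a magnetophoretic force proportional to the gradient of the squared field magnitude. The field profile, particle properties and fluid properties are hypothetical values chosen for the example.

```python
# Minimal sketch of Lagrangian tracking of one magnetic nanoparticle in a 2D
# channel flow.  Particle inertia is neglected (overdamped limit), so the
# particle velocity is the local fluid velocity plus the magnetophoretic
# drift  F_mag / (6*pi*mu*r),  with  F_mag = V*dchi/mu0 * grad(|B|^2 / 2).
# All numerical values are illustrative assumptions, not the paper's data.
import numpy as np

mu0 = 4e-7 * np.pi            # vacuum permeability [H/m]
r = 50e-9                     # particle radius [m]
V = 4.0 / 3.0 * np.pi * r**3  # particle volume [m^3]
dchi = 3.0                    # susceptibility contrast (particle - fluid)
mu_f = 3.5e-3                 # blood-like dynamic viscosity [Pa s]
h, u_max = 1e-3, 0.1          # channel half-height [m], centreline speed [m/s]
drag = 6.0 * np.pi * mu_f * r # Stokes drag coefficient

def u_fluid(y):               # parabolic (Poiseuille) velocity profile
    return np.array([u_max * (1.0 - (y / h) ** 2), 0.0])

def grad_half_B2(x, y):       # gradient of |B|^2/2 for a field decaying away
    B0, L = 0.5, 2e-3         # from a magnet placed below the lower wall (y=-h)
    B = B0 * np.exp(-(y + h) / L)
    return np.array([0.0, -B**2 / L])

pos, dt = np.array([0.0, 0.0]), 1e-4
for _ in range(20000):        # explicit Euler update of the particle position
    v = u_fluid(pos[1]) + V * dchi / mu0 * grad_half_B2(*pos) / drag
    pos = pos + dt * v
    if abs(pos[1]) >= h:      # stop once the particle deposits on a wall
        break
print("final position [m]:", pos)
```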

Keywords: MHD, channel clots, magnetic nanoparticles, simulations

Procedia PDF Downloads 353
13724 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics as well as in various areas of science and engineering. The monograph Inequalities by Hardy, Littlewood and Pólya was the first significant work of this kind; it presents fundamental ideas, results and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated in terms of operators; in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from a weighted Sobolev space to a weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have then been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results have appeared involving the Copson and Hardy inequalities on time scales, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics, with many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on the Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale Hardy and Copson inequalities in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that can be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing and maximization problems. The proofs can be carried out by introducing restrictions on the operator in several cases, and they use concepts of time scale calculus, which allows many problems from the theories of differential and difference equations to be unified and extended, together with the chain rule, some properties of multiple integrals on time scales, Fubini-type theorems and Hölder's inequality.
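
For reference, the classical one-dimensional Hardy inequality that the study extends to higher-dimensional time-scale versions can be stated as follows, for p > 1 and a non-negative measurable function f:

```latex
\int_{0}^{\infty}\left(\frac{1}{x}\int_{0}^{x} f(t)\,dt\right)^{p} dx
\;\le\;\left(\frac{p}{p-1}\right)^{p}\int_{0}^{\infty} f(x)^{p}\,dx ,
\qquad p>1,\quad f\ge 0,
```

where the constant (p/(p-1))^p is the best possible.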

Keywords: time scales, Hardy inequality, Copson inequality, Steklov operator

Procedia PDF Downloads 74
13723 Objective vs. Perceived Quality in the Cereal Industry

Authors: Albena Ivanova, Jill Kurp, Austin Hampe

Abstract:

Cereal products in the US carry rich information on the front of the package (FOP) as well as point-of-purchase (POP) summaries provided by the store. These summaries are frequently confusing and misleading to the consumer. This study explores the relationship between perceived quality, objective quality, price, and value in the cold cereal industry. A total of 270 cold cereal products were analyzed, and the price, quality and value for different summaries were compared using ANOVA tests. The results provide evidence that the United States Department of Agriculture Organic FOP/POP is related to higher objective quality and a higher price, but not to higher value. Whole grain FOP/POP is related to higher objective quality, a lower or similar price, and higher value. Heart-healthy POP is related to higher objective quality, a similar price, and higher value. Gluten-free FOP/POP is related to lower objective quality, a higher price, and lower value. Kids' cereals were of lower objective quality, the same price, and lower value compared to the family and adult markets. The findings point to a disturbing tendency of companies to continue to produce lower quality products for the kids' market, pricing them the same as high-quality products. The paper outlines strategies that marketers and policymakers can utilize to contribute to the increased objective quality and value of breakfast cereal products in the United States.

Keywords: cereals, certifications, front-of-package claims, consumer health

Procedia PDF Downloads 108
13722 Heat Transfer Performance for Turbulent Flow through a Tube Using Baffles

Authors: Amina Benabderrahmane, Abdelylah Benazza, Samir Laouedj

Abstract:

A three-dimensional numerical investigation of heat transfer enhancement inside a non-uniformly heated parabolic trough solar collector fitted with baffles under turbulent flow is presented in the current paper. Molten salt is used as the heat transfer fluid, and the simulations are carried out in ANSYS computational fluid dynamics (CFD). The present data were validated against the empirical correlations available in the literature, and good agreement was obtained. The Nusselt number and friction factor values obtained with baffles are considerably higher than those for a smooth pipe. The placement of the baffles and the distance between two consecutive baffles have a non-negligible effect on the heat transfer characteristics; the results demonstrate that the temperature gradient is reduced with the inclusion of inserts.
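
The abstract does not state which empirical correlations were used for validation; as a hedged illustration, the sketch below evaluates two standard smooth-tube baselines against which baffled-tube results are commonly compared: the Dittus-Boelter correlation for the Nusselt number and the Blasius correlation for the friction factor. The Reynolds and Prandtl numbers chosen are illustrative.

```python
# Smooth-pipe baselines often used to benchmark turbulent-flow CFD results.
# These are generic textbook correlations, not necessarily the ones used in
# the paper; the property values below are illustrative.
def dittus_boelter_nu(re, pr, heating=True):
    """Nu = 0.023 Re^0.8 Pr^n, with n = 0.4 for heating and 0.3 for cooling."""
    n = 0.4 if heating else 0.3
    return 0.023 * re**0.8 * pr**n

def blasius_friction_factor(re):
    """Darcy friction factor f = 0.316 Re^-0.25 (turbulent flow, smooth pipe)."""
    return 0.316 * re**-0.25

if __name__ == "__main__":
    re, pr = 2.0e4, 9.0   # hypothetical Reynolds and Prandtl numbers
    print("baseline Nu =", dittus_boelter_nu(re, pr))
    print("baseline f  =", blasius_friction_factor(re))
```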

Keywords: Baffles, heat transfer enhancement, molten salt, Monte Carlo ray trace technique, numerical investigation

Procedia PDF Downloads 282
13721 The Economic Benefits of the Graduates of Higher Education in Philippines

Authors: Christia C. Baltar

Abstract:

Everybody goes through primary education, but not all proceed to secondary education because of poverty, and this is evident in the Philippines. Moreover, the number goes down further at the level of higher education. The researcher believes that higher education may improve the standard of living of the family, considering its economic benefits. Once one graduates with a particular degree, one may be employed at a higher wage than a non-degree holder. Every year the Philippines produces more than five hundred thousand higher education graduates, and the number keeps increasing; thus, competition for employment is very high. It is therefore better to pursue higher education than to settle for being a high school graduate, because a degree is what most employers are looking for. The Philippine government, through the Department of Labor and Employment, offers job fairs in as many cities as possible to provide employment for graduates away from urban areas such as Manila, and the private sector also organizes job fairs. The researcher conducted a survey in her institution and further used secondary information to strengthen its findings. Descriptive measures, the chi-square test for independence, and the correlation coefficient were used to analyze the survey data. The survey results show that there was an increase in the income of the families of higher education graduates, and the graduates believed that their standard of living improved because they were able to work in better jobs. The analysis shows no significant relationship between the graduates' sex, age or marital status and their economic status, but the degree program in which they enrolled in tertiary education does affect their economic status. The impact of higher education can also be seen indirectly in the economic growth of the Philippines. Finally, the researcher concludes that higher education has both a direct and an indirect impact on the economic status of graduates.

Keywords: economic, economic benefits, higher education, standard of living

Procedia PDF Downloads 282
13720 Parameter Estimation with Uncertainty and Sensitivity Analysis for the SARS Outbreak in Hong Kong

Authors: Afia Naheed, Manmohan Singh, David Lucy

Abstract:

This work is based on a mathematical as well as statistical study of an SEIJTR deterministic model for interpreting the transmission of severe acute respiratory syndrome (SARS). Based on the SARS epidemic in 2003, the parameters are estimated using Runge-Kutta (Dormand-Prince pairs) and least squares methods. Graphical and numerical techniques are used to validate the estimates. The effect of the model parameters on the dynamics of the disease is then examined using sensitivity and uncertainty analysis. Sensitivity and uncertainty analysis techniques are used in order to analyze the effect of the uncertainty in the obtained parameter estimates and to determine which parameters have the largest impact on controlling the disease dynamics.
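
The SEIJTR model itself is not reproduced here; as a hedged sketch of the estimation pipeline described (Dormand-Prince integration plus least squares), the code below fits a simpler SEIR model to synthetic incidence data with scipy, whose RK45 integrator is a Dormand-Prince pair and whose least_squares routine also offers a Levenberg-Marquardt option for unbounded problems. The population size, true parameters and noise model are invented for the example.

```python
# Sketch of ODE parameter estimation in the spirit of the paper: integrate the
# epidemic model with a Dormand-Prince pair (scipy's RK45) and fit parameters
# by least squares.  A plain SEIR model and synthetic data stand in for the
# SEIJTR model and the SARS data used in the study.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

N = 7e6                                   # population size (illustrative)

def seir(t, y, beta, sigma, gamma):
    s, e, i, r = y
    return [-beta * s * i / N,
            beta * s * i / N - sigma * e,
            sigma * e - gamma * i,
            gamma * i]

def infected_curve(params, t):
    beta, sigma, gamma = params
    sol = solve_ivp(seir, (t[0], t[-1]), [N - 1, 0, 1, 0],
                    t_eval=t, args=(beta, sigma, gamma), method="RK45")
    return sol.y[2]

t_obs = np.arange(0.0, 120.0, 1.0)
true = np.array([0.35, 1 / 6, 1 / 8])     # "true" beta, sigma, gamma
rng = np.random.default_rng(1)
data = infected_curve(true, t_obs) * rng.lognormal(0.0, 0.05, t_obs.size)

fit = least_squares(lambda p: infected_curve(p, t_obs) - data,
                    x0=[0.5, 0.2, 0.1], bounds=(1e-4, 2.0))
print("estimated (beta, sigma, gamma):", fit.x)
```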

Keywords: infectious disease, severe acute respiratory syndrome (SARS), parameter estimation, sensitivity analysis, uncertainty analysis, Runge-Kutta methods, Levenberg-Marquardt method

Procedia PDF Downloads 341
13719 Evaluating Accuracy of Foetal Weight Estimation by Clinicians in Christian Medical College Hospital, India and Its Correlation to Actual Birth Weight: A Clinical Audit

Authors: Aarati Susan Mathew, Radhika Narendra Patel, Jiji Mathew

Abstract:

A retrospective study was conducted at Christian Medical College (CMC) Teaching Hospital, Vellore, India on 14th August 2014 to assess the accuracy of clinically estimated foetal weight upon labour admission. Estimating foetal weight is a crucial factor in assessing maternal and foetal complications during and after labour. Medical notes of ninety-eight postnatal women who fulfilled the inclusion criteria were studied to evaluate the correlation between the estimated foetal weight (EFW) recorded on admission and the actual birth weight (ABW) of the newborn after delivery. Data concerning maternal and foetal demographics were also noted. Accuracy was determined by the absolute percentage error and the proportion of estimates within 10% of ABW. Actual birth weights ranged from 950-4080 g. A strong positive correlation between EFW and ABW (r=0.904) was noted. Term deliveries (≥40 weeks) in the normal weight range (2500-4000 g) had a 59.5% estimation accuracy (n=74), compared with an estimation accuracy of 0% (n=2) for pre-term deliveries (<40 weeks). Among the term deliveries, macrosomic babies (>4000 g) were underestimated by 25% (n=3) and low birthweight (LBW) babies were overestimated by 12.7% (n=9). Registrars who estimated foetal weight were accurate for babies within the normal weight range; however, prediction of the weight of macrosomic and LBW foetuses needs to improve. We suggest the use of an amended version of Johnson's formula for the Indian population and a re-audit once it is implemented.
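
A minimal sketch of the two accuracy measures used in the audit (absolute percentage error and the proportion of estimates within 10% of the actual birth weight) is given below. The example weights are made up, and the version of Johnson's formula shown is one commonly quoted form, not the amended version the authors propose.

```python
# Accuracy measures used in the audit, applied to made-up example data.
import numpy as np

def johnson_efw(fundal_height_cm, station_above_spines=True):
    """One commonly quoted form of Johnson's formula (grams); the paper
    proposes an amended version for the Indian population."""
    n = 12 if station_above_spines else 11
    return 155.0 * (fundal_height_cm - n)

efw = np.array([2800, 3100, 3500, 2400, 3900], dtype=float)  # estimated (g)
abw = np.array([2950, 3000, 3800, 2300, 4100], dtype=float)  # actual (g)

ape = np.abs(efw - abw) / abw * 100.0            # absolute percentage error
within_10pct = np.mean(ape <= 10.0) * 100.0      # proportion within 10% of ABW
print("mean APE: %.1f%%, within 10%%: %.0f%%" % (ape.mean(), within_10pct))
```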

Keywords: clinical palpation, estimated foetal weight, pregnancy, India, Johnson’s formula

Procedia PDF Downloads 353
13718 Copula Markov Switching Multifractal Models for Forecasting Value-at-Risk

Authors: Giriraj Achari, Malay Bhattacharyya

Abstract:

In this paper, the effectiveness of Copula Markov Switching Multifractal (MSM) models at forecasting the Value-at-Risk of a two-stock portfolio is studied. The innovations are allowed to be drawn from distributions that can capture skewness and leptokurtosis, which are well-documented empirical characteristics of financial returns. The candidate distributions considered for this purpose are the Johnson-SU, Pearson Type-IV and α-stable distributions. The two univariate marginal distributions are combined using the Student-t copula. All parameters are estimated by maximum likelihood estimation. Finally, the models are compared in terms of accurate Value-at-Risk (VaR) forecasts using tests of unconditional coverage and independence. It is found that Copula-MSM models with leptokurtic innovation distributions perform slightly better than the Copula-MSM model with normal innovations. Copula-MSM models, in general, produce better VaR forecasts than traditional methods such as the historical simulation method, the variance-covariance approach and Copula-Generalized Autoregressive Conditional Heteroscedasticity (Copula-GARCH) models.
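
As an illustration of the unconditional coverage test mentioned above (often attributed to Kupiec), the sketch below computes the likelihood-ratio statistic for a sequence of VaR violations. The violation series is synthetic, not data from the study.

```python
# Kupiec unconditional coverage (proportion-of-failures) test for VaR backtests.
# LR_uc is asymptotically chi-square with 1 degree of freedom under the null
# hypothesis that the violation probability equals the VaR level p.
import numpy as np
from scipy.stats import chi2

def kupiec_lr(violations, p):
    T = len(violations)
    x = int(np.sum(violations))          # number of VaR violations
    pi_hat = x / T                       # empirical violation rate
    log_l0 = (T - x) * np.log(1 - p) + x * np.log(p)
    log_l1 = (T - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)
    lr = -2.0 * (log_l0 - log_l1)
    return lr, chi2.sf(lr, df=1)

rng = np.random.default_rng(2)
hits = rng.random(1000) < 0.012          # synthetic violation indicator series
lr, pval = kupiec_lr(hits, p=0.01)
print("LR_uc = %.2f, p-value = %.3f" % (lr, pval))
```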

Keywords: Copula, Markov Switching, multifractal, value-at-risk

Procedia PDF Downloads 148
13717 The Economic Benefits of Higher Education to the Graduates in the Philippines

Authors: Christia C. Baltar

Abstract:

Everybody goes through primary education, but not all proceed to secondary education because of poverty, and this is evident in the Philippines. Moreover, the number goes down further at the level of higher education. The researcher believes that higher education may improve the standard of living of the family, considering its economic benefits. Once one graduates with a particular degree, one may be employed at a higher wage than a non-degree holder. Every year the Philippines produces more than five hundred thousand higher education graduates, and the number keeps increasing; thus, competition for employment is very high. It is therefore better to pursue higher education than to settle for being a high school graduate, because a degree is what most employers are looking for. The Philippine government, through the Department of Labor and Employment, offers job fairs in as many cities as possible to provide employment for graduates away from urban areas such as Manila, and the private sector also organizes job fairs. The researcher conducted a survey in her institution and further used secondary information to strengthen its findings. Descriptive measures, the chi-square test for independence, and the correlation coefficient were used to analyze the survey data. The survey results show that there was an increase in the income of the families of higher education graduates, and the graduates believed that their standard of living improved because they were able to work in better jobs. The analysis shows no significant relationship between the graduates' sex, age or marital status and their economic status, but the degree program in which they enrolled in tertiary education does affect their economic status. The impact of higher education can also be seen indirectly in the economic growth of the Philippines. Finally, the researcher concludes that higher education has both a direct and an indirect impact on the economic status of graduates.

Keywords: economic benefits, economic status, graduate, higher education

Procedia PDF Downloads 361
13716 The Agency of Black Women Professors in Higher Education: A Critical Consciousness Perspective

Authors: Ncamisile T. Zulu, Nicholas Munro

Abstract:

Black women academics in higher education institutions are predominantly portrayed in the literature as individuals who usually lack a sense of belonging, progression, and workload management. The oversaturation of this literature can, over time, perpetuate a stereotypical idea that Black women academics are incapable of coping and succeeding in higher education institutions. The current article explores the agency, motivated by critical consciousness, that Black women professors have and utilise in higher education institutions. In order to provide an understanding of how Black women academics can progress, manage their workloads and succeed in higher education institutions, the article considers how these women take responsibility for their self-development, adaptation, and self-renewal in their academic endeavours. As a result, the article presents a line of thought that could help in challenging the stereotype about Black women academics. The study was conducted at two higher education institutions and involved Black women professors from different disciplines. A combination of purposive and snowball sampling was used to recruit nine women participants, and data were collected through interviews. A critical consciousness perspective was adopted to formulate an understanding of the agency of Black women professors in higher education institutions, while thematic analysis was used to analyse the data. The results challenge the widely disseminated view that portrays Black women academics as incapable of coping and succeeding in higher education institutions. The findings highlight Black women professors as proactive, flexible, and self-regulating in their academic endeavours. These findings contribute to the literature by adding a more constructive narrative of Black women academics in higher education.

Keywords: agency, Black women academics, critical consciousness, higher education institutions

Procedia PDF Downloads 136
13715 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning

Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park

Abstract:

This study proposes a method for estimating the stress distribution of beam structures based on terrestrial laser scanning (TLS). The main components of the method are the creation of lattices from the raw TLS data so that they satisfy suitable conditions, and the application of cubic smoothing spline interpolation (CSSI) to estimate the stress distribution. Estimating the stress distribution of a structural member or of the whole structure is one of the important factors in the safety evaluation of a structure. Existing sensors, which include the electric strain gauge (ESG) and the linear variable differential transformer (LVDT), are contact-type sensors that must be installed on the structural members, and they have various limitations, such as the need for separate space for network cables and difficult access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, TLS, a LiDAR (light detection and ranging) system that can measure the displacement of a target at long range without being influenced by the surrounding environment and that can also capture the whole shape of the structure, has been applied in the field of structural health monitoring. An important characteristic of TLS measurement is the formation of point clouds containing many points with local coordinates. Point clouds are not linearly distributed but scattered, so interpolation is essential for their analysis. Through the formation of averaged lattices and the application of CSSI to the raw data, a method that can estimate the displacement of a simple beam was developed. The developed method can also be extended to calculate the strain and, finally, to estimate the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured with TLS. Through a comparison of the estimated stress and the reference stress, the validity of the method is confirmed.
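
To make the CSSI step concrete, the sketch below fits a cubic smoothing spline to synthetic TLS-like deflection points on a simply supported beam and converts its curvature to bending strain and stress under Euler-Bernoulli assumptions (strain = -c·w''(x), stress = E·strain). The beam properties, load shape and noise level are illustrative assumptions, not those of the experiment.

```python
# Sketch: cubic smoothing spline over noisy deflection measurements, then
# curvature -> bending strain -> stress under Euler-Bernoulli assumptions.
# Beam properties, loading and noise level are illustrative assumptions.
import numpy as np
from scipy.interpolate import UnivariateSpline

E = 210e9                 # Young's modulus of steel [Pa]
c = 0.05                  # distance from the neutral axis to the extreme fibre [m]
L, w_mid = 2.0, 2e-3      # span [m], midspan deflection [m]

x = np.linspace(0.0, L, 200)
# Synthetic "measured" deflection: simply supported beam under uniform load,
# plus noise that mimics TLS point-cloud scatter.
w_exact = 16 * w_mid / (5 * L**4) * (x**4 - 2 * L * x**3 + L**3 * x)
rng = np.random.default_rng(3)
w_meas = w_exact + rng.normal(0.0, 2e-5, x.size)

spline = UnivariateSpline(x, w_meas, k=3, s=len(x) * (2e-5) ** 2)  # smoothing spline
curvature = spline.derivative(n=2)(x)      # w''(x)
strain = -c * curvature                    # bending strain at the extreme fibre
stress = E * strain                        # bending stress [Pa]
print("max |stress| = %.1f MPa" % (np.abs(stress).max() / 1e6))
```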

Keywords: structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation

Procedia PDF Downloads 418
13714 Effect of Blade Shape on the Performance of Wells Turbine for Wave Energy Conversion

Authors: Katsuya Takasaki, Manabu Takao, Toshiaki Setoguchi

Abstract:

The effect of a three-dimensional (3D) blade on the turbine characteristics of a Wells turbine for wave energy conversion has been investigated experimentally by model testing under steady flow conditions, in order to improve the peak efficiency and the stall characteristics. The aim of using a 3D blade is to prevent flow separation on the suction surface near the tip. The chord length is constant with radius, and the blade profile changes gradually from the mean radius to the tip. The blade profiles proposed in the study are NACA0015 from the hub to the mean radius and NACA0025 at the tip. The performance of the Wells turbine with 3D blades has been compared with that of the original Wells turbine, i.e. the turbine with two-dimensional (2D) blades. As a result, it was concluded that although the peak efficiency of the Wells turbine can be improved by the use of the proposed 3D blade, the blade does not overcome the weakness of stalling.

Keywords: fluid machinery, ocean engineering, stall, wave energy conversion, wells turbine

Procedia PDF Downloads 284
13713 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer adaptive tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on item response theory (IRT), in which item selection and ability estimation are carried out using the statistical methods of maximum-information selection (or selection from the posterior) and maximum-likelihood (ML) or maximum a posteriori (MAP) estimation, respectively. This study aims at combining classical and Bayesian approaches to IRT to create a dataset which is then fed to a neural network that automates the process of ability estimation, and at comparing this to traditional CAT models designed using IRT. The study uses Python as the base coding language, pymc for the statistical modelling of the IRT and scikit-learn for the neural network implementations. On creating the model and comparing, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimation. Although it performs worse than the IRT model, the neural network model can be beneficially used in back-ends to reduce time complexity, as the IRT model would have to re-calculate the ability every time it receives a request, whereas the prediction from an already trained neural network regressor can be done in a single step. This study also proposes a new kind of framework whereby the neural network model could be used to incorporate feature sets other than the normal IRT feature set, using a neural network's capacity to learn unknown functions to give rise to better CAT models. Categorical features such as test type could be learnt and incorporated into IRT functions with the help of techniques like logistic regression, and could be used to learn functions and express models that are not trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessment. This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher quality testing.
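
As a minimal illustration of the IRT side of the pipeline (the neural-network part is omitted), the sketch below estimates a test-taker's ability by maximum likelihood under a two-parameter logistic (2PL) model. The item parameters and response pattern are invented for the example.

```python
# Maximum-likelihood ability estimation under a 2PL IRT model:
# P(correct | theta) = 1 / (1 + exp(-a * (theta - b))).
# Item parameters and responses below are invented for illustration.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])   # item discriminations
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5]) # item difficulties
u = np.array([1, 1, 1, 0, 0])             # observed responses (1 = correct)

def neg_log_likelihood(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return -np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))

res = minimize_scalar(neg_log_likelihood, bounds=(-4.0, 4.0), method="bounded")
print("ML ability estimate:", round(res.x, 3))
```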

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 160
13712 2D PbS Nanosheets Synthesis and Their Applications as Field Effect Transistors or Solar Cells

Authors: T. Bielewicz, S. Dogan, C. Klinke

Abstract:

Two-dimensional, solution-processable semiconductor materials are interesting for low-cost electronic applications [1]. We demonstrate the synthesis of lead sulfide nanosheets and show how their size, shape and height can be tuned by varying the concentrations of precursors and ligands and by varying the reaction temperature. In particular, the charge carrier confinement in the nanosheets' height, adjustable from 2 to 20 nm, has a decisive impact on their electronic properties. This is demonstrated by their use as the conduction channel in a field effect transistor [2]. Recently we also showed that especially thin nanosheets exhibit a high carrier multiplication (CM) efficiency [3], which could make them, through the confinement-induced band gap and high photoconductivity, very attractive for application in photovoltaic devices. We are already able to manufacture photovoltaic devices from single nanosheets, which show promising results.

Keywords: physical sciences, chemistry, materials, colloids, physics, condensed-matter physics, semiconductors, two-dimensional materials

Procedia PDF Downloads 284
13711 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method that the data science community has widely used. It helps with two main tasks: displaying results by coloring items according to the item class or feature value, and forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure preservation property and its answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, and two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While this approach is highly scalable, points could be mapped at exactly the same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with the newly obtained embedding, and the successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution, and death of clusters to be observed. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of the dynamics of high-dimensional datasets.
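
The paper's coupled-cost optimization is not reproduced here. As a hedged sketch of the underlying idea of anchoring a new embedding on a previous one, the code below re-embeds an updated version of the same dataset while initializing t-SNE with the previously obtained coordinates, which tends to keep cluster positions coherent across the two embeddings.

```python
# Sketch: reuse a previous t-SNE embedding as the initialization of the next
# one so that cluster positions stay comparable across successive embeddings.
# This uses scikit-learn's init parameter and is only an approximation of the
# support-matching optimization described in the paper.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X0 = load_digits().data.astype(float)
rng = np.random.default_rng(4)
X1 = X0 + rng.normal(0.0, 0.5, X0.shape)   # the "same" data after a small drift

emb0 = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X0)

# Second embedding seeded with the first one's coordinates: points start where
# they ended up previously, so clusters tend not to jump around.
emb1 = TSNE(n_components=2, init=emb0, random_state=0).fit_transform(X1)

drift = np.linalg.norm(emb1 - emb0, axis=1)
print("median point displacement between embeddings:", np.median(drift))
```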

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 129
13710 Age Estimation from Upper Anterior Teeth by Pulp/Tooth Ratio Using Peri-Apical X-Rays among Egyptians

Authors: Fatma Mohamed Magdy Badr El Dine, Amr Mohamed Abd Allah

Abstract:

Introduction: Age estimation of individuals is one of the crucial steps in forensic practice. Traditional methods rely on the length of the diaphysis of the long bones of the limbs, epiphyseal-diaphyseal union, fusion of the primary ossification centers, as well as dental eruption. However, there is a growing need for the development of precise and reliable methods to estimate age, especially in cases where dismembered corpses, burnt bodies, or putrefied or fragmented parts are recovered. Teeth are the hardest and most indestructible structures in the human body. In recent years, assessment of the pulp/tooth area ratio, as an indirect quantification of secondary dentine deposition, has received considerable attention. However, scanty work has been done in Egypt on the applicability of the pulp/tooth ratio for age estimation. Aim of the Work: The present work was designed to assess Cameriere's method for age estimation from the pulp/tooth ratio of maxillary canines, central and lateral incisors in a sample from the Egyptian population, and to formulate regression equations to be used as population-based standards for age determination. Material and Methods: The present study was conducted on 270 peri-apical X-rays of maxillary canines, central and lateral incisors (collected from 131 males and 139 females aged between 19 and 52 years). The pulp and tooth areas were measured using the Adobe Photoshop software program, and the pulp/tooth area ratio was computed. Linear regression equations were determined separately for canines, central and lateral incisors. Results: A significant correlation was recorded between the pulp/tooth area ratio and chronological age. The linear regression analysis revealed coefficients of determination of R² = 0.824 for canines, 0.588 for central incisors and 0.737 for lateral incisors. Three regression equations were derived. Conclusion: The pulp/tooth ratio is a useful technique for estimating age among Egyptians, and the regression equation derived from the canines gave better results than those derived from the incisors.
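
A minimal sketch of the kind of linear calibration reported (age regressed on the pulp/tooth area ratio, with the coefficient of determination) is shown below on invented data; the actual regression equations are derived in the paper from the Egyptian sample.

```python
# Sketch of a population-specific calibration: simple linear regression of
# chronological age on the pulp/tooth area ratio, reporting R^2.
# The ratio/age pairs below are invented, not the study's measurements.
import numpy as np

ratio = np.array([0.14, 0.12, 0.11, 0.09, 0.08, 0.07, 0.06, 0.05])
age   = np.array([20.0, 24.0, 27.0, 32.0, 36.0, 41.0, 45.0, 50.0])

slope, intercept = np.polyfit(ratio, age, deg=1)     # age = intercept + slope * ratio
pred = intercept + slope * ratio
r2 = 1.0 - np.sum((age - pred) ** 2) / np.sum((age - age.mean()) ** 2)
print("age = %.1f + (%.1f) * ratio,  R^2 = %.3f" % (intercept, slope, r2))
```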

Keywords: age determination, canines, central incisors, Egypt, lateral incisors, pulp/tooth ratio

Procedia PDF Downloads 170
13709 Quality and Quantity in the Strategic Network of Higher Education Institutions

Authors: Juha Kettunen

Abstract:

This study analyses the quality and the size of a strategic network of higher education institutions. The study analyses the concept of fitness for purpose in quality assurance, as well as the transaction costs of networking that have consequences for the number of members in the network. Empirical evidence is presented from the Consortium on Applied Research and Professional Education, which is a European strategic network of six higher education institutions. The results of the study support the argument that the number of members in a strategic network should be relatively small to provide high quality results. The practical importance is that networking has been able to promote international research and development projects. The results of this study are important for those who want to design and improve international networks in higher education.

Keywords: balanced scorecard, higher education, social networking, strategic planning

Procedia PDF Downloads 324
13708 Rainfall Estimation Using Himawari-8 Meteorological Satellite Imagery in Central Taiwan

Authors: Chiang Wei, Hui-Chung Yeh, Yen-Chang Chen

Abstract:

The objective of this study is to estimate rainfall using the new-generation Himawari-8 meteorological satellite, with its multi-band, high-bit-format and high spatiotemporal resolution imagery, together with ground rainfall data, at the Chen-Yu-Lan watershed of the Joushuei River Basin (443.6 square kilometers) in Central Taiwan. Accurate and fine-scale rainfall information is essential in rugged terrain with high local variation for early warning of flood, landslide, and debris flow disasters. Ten-minute, 2 km pixel-based rainfall estimates for Typhoon Megi in 2016 and the meiyu event of June 1-4, 2017 were tested to demonstrate that the new-generation Himawari-8 meteorological satellite can capture rainfall variation in rugged mountainous areas at both the fine scale and the watershed scale. The results provide valuable rainfall information for early warning of future disasters.

Keywords: estimation, Himawari-8, rainfall, satellite imagery

Procedia PDF Downloads 175
13707 Adaptive Multipath Mitigation Acquisition Approach for Global Positioning System Software Receivers

Authors: Animut Meseret Simachew

Abstract:

The parallel code phase search acquisition (PCSA) algorithm has been considered a promising method in GPS software receivers for detecting and accurately estimating the correlation peak between the received Global Positioning System (GPS) signal and locally generated replicas. GPS signal acquisition in highly dense multipath environments is the main research challenge. In this work, we propose a robust variable step-size (RVSS) PCSA algorithm based on a fast Fourier transform (FFT) filtering technique to mitigate short-time-delay multipath signals. Simulation results reveal the effectiveness of the proposed algorithm over the conventional PCSA algorithm. The proposed RVSS-PCSA algorithm equalizes the received carrier-wiped-off signal with the locally generated C/A code.
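
The conventional PCSA step that the proposed RVSS variant builds on can be summarized in a few lines: after wiping off a candidate carrier frequency, the circular correlation with the local C/A code over all code phases is computed at once via FFTs. The sketch below uses a random ±1 sequence as a stand-in for a true C/A Gold code, and the sampling and Doppler parameters are illustrative.

```python
# Conventional parallel code phase search (PCSA) over a grid of Doppler bins:
# circular correlation of the carrier-wiped-off signal with the local code,
# computed for all code phases at once with FFTs.  A random +/-1 sequence
# stands in for the real C/A Gold code; sampling parameters are illustrative.
import numpy as np

fs = 4.092e6                     # sample rate [Hz]
n = 4092                         # samples in one 1 ms code period
t = np.arange(n) / fs
rng = np.random.default_rng(5)
ca_code = rng.choice([-1.0, 1.0], size=n)       # stand-in for the C/A code

true_shift, doppler = 1234, 1500.0              # hypothetical signal parameters
received = np.roll(ca_code, true_shift) * np.exp(2j * np.pi * doppler * t)
received += 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))

def pcsa_search(signal, code, doppler_bins):
    code_fft_conj = np.conj(np.fft.fft(code))
    best = (0.0, None, None)
    for fd in doppler_bins:
        wiped = signal * np.exp(-2j * np.pi * fd * t)      # carrier wipe-off
        corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft_conj)) ** 2
        if corr.max() > best[0]:
            best = (corr.max(), fd, int(corr.argmax()))
    return best

peak, fd_hat, phase_hat = pcsa_search(received, ca_code, np.arange(-5000, 5001, 500))
print("estimated Doppler %.0f Hz, code phase %d samples" % (fd_hat, phase_hat))
```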

Keywords: adaptive PCSA, detection and estimation, GPS signal acquisition, GPS software receiver

Procedia PDF Downloads 106
13706 Development of Single Layer of WO3 on Large Spatial Resolution by Atomic Layer Deposition Technique

Authors: S. Zhuiykov, Zh. Hai, H. Xu, C. Xue

Abstract:

Unique and distinctive properties can be obtained in a two-dimensional (2D) semiconductor such as tungsten trioxide (WO3) when its thickness is reduced from multiple layers to one fundamental layer. Achieving this transition without damaging the single layer over a large spatial area remained elusive until the atomic layer deposition (ALD) technique was utilized. Here we report the ALD-enabled, atomic-layer-precision growth of a single WO3 layer with a thickness of 0.77±0.07 nm over a large area, using (tBuN)2W(NMe2)2 as the tungsten precursor and H2O as the oxygen precursor, without affecting the underlying SiO2/Si substrate. The versatility of ALD lies in tuning the recipe in order to achieve WO3 films with the desired number of layers, including a monolayer. Governed by self-limiting surface reactions, the ALD-enabled approach is versatile, scalable and applicable to a broader range of 2D semiconductors and various device applications.

Keywords: Atomic Layer Deposition (ALD), tungsten oxide, WO₃, two-dimensional semiconductors, single fundamental layer

Procedia PDF Downloads 226
13705 Vehicle Speed Estimation Using Image Processing

Authors: Prodipta Bhowmik, Poulami Saha, Preety Mehra, Yogesh Soni, Triloki Nath Jha

Abstract:

In India, the smart city concept is growing day by day, and a better traffic management and monitoring system is a very important requirement for smart city development. Nowadays, road accidents increase as more vehicles take to the road, and reckless driving is mainly responsible for a huge number of accidents, so an efficient traffic management system is required for all kinds of roads to control traffic speed. The speed limit varies from road to road. Radar systems have been used previously, but their high cost and limited precision have kept them from becoming the favoured option in traffic management systems. Traffic management faces different types of problems every day, and how to solve them has become a topic of research. This paper proposes a computer vision and machine learning-based automated system for multiple vehicle detection, tracking, and speed estimation using image processing. Detecting vehicles and estimating their speed from real-time video is a challenging task, and the objective of this paper is to do so as accurately as possible. A real-time video is first captured, the frames are extracted from that video, the vehicles are detected in those frames, the vehicles are then tracked, and finally the speed of the moving vehicles is estimated. The goal of this method is to develop a cost-friendly system that can detect multiple types of vehicles at the same time.
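
The speed-estimation step itself reduces to a simple scaling of the tracked centroid displacement. The sketch below shows that conversion for one tracked vehicle, assuming a known metres-per-pixel calibration and a fixed frame rate; the detection and tracking stages (e.g. YOLOv3 and the centroid tracker) are not reproduced, and the centroid track and calibration values are invented.

```python
# Sketch of the final speed-estimation step: once a tracker (e.g. a centroid
# tracker fed by a detector such as YOLOv3) yields per-frame centroids for a
# vehicle, speed follows from pixel displacement, a metres-per-pixel scale and
# the frame rate.  The centroid track and calibration below are invented.
import numpy as np

fps = 30.0                     # camera frame rate [frames/s]
metres_per_pixel = 0.05        # assumed ground-plane calibration

# (x, y) centroid of one tracked vehicle in consecutive frames [pixels]
centroids = np.array([[100, 400], [108, 398], [117, 396], [125, 395], [134, 393]],
                     dtype=float)

step_px = np.linalg.norm(np.diff(centroids, axis=0), axis=1)   # per-frame motion
speed_mps = step_px.mean() * metres_per_pixel * fps            # metres per second
print("estimated speed: %.1f km/h" % (speed_mps * 3.6))
```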

Keywords: OpenCV, Haar Cascade classifier, DLIB, YOLOV3, centroid tracker, vehicle detection, vehicle tracking, vehicle speed estimation, computer vision

Procedia PDF Downloads 63
13704 Human Motion Capture: New Innovations in the Field of Computer Vision

Authors: Najm Alotaibi

Abstract:

Human motion capture has become one of the major areas of interest in the field of computer vision. Some of the major application areas that have been rapidly evolving include advanced human interfaces, virtual reality and security/surveillance systems. This study provides a brief overview of the techniques and applications used for markerless human motion capture, which deals with analyzing human motion in the form of mathematical formulations. The major contribution of this research is that it classifies the computer vision based techniques of human motion capture according to a taxonomy and then breaks them down into four systematically different categories: tracking, initialization, pose estimation and recognition. Detailed descriptions, and descriptions of the relationships between them, are given for the tracking and pose estimation techniques, and the subcategories of each process are further described. The various hypotheses used by researchers in this domain are surveyed, and the evolution of these techniques is explained. The survey concludes that most researchers have focused on using mathematical body models for markerless motion capture.

Keywords: human motion capture, computer vision, vision-based, tracking

Procedia PDF Downloads 301
13703 Series Network-Structured Inverse Models of Data Envelopment Analysis: Pitfalls and Solutions

Authors: Zohreh Moghaddas, Morteza Yazdani, Farhad Hosseinzadeh

Abstract:

Nowadays, data envelopment analysis (DEA) models featuring network structures have gained widespread usage for evaluating the performance of production systems and activities (decision-making units, DMUs) across diverse fields. By examining the relationships between the internal stages of the network, these models offer valuable insights to managers and decision-makers regarding the performance of each stage and its impact on the overall network. To further empower system decision-makers, the inverse data envelopment analysis (IDEA) model has been introduced. This model allows crucial parameter information to be estimated while keeping the efficiency score unchanged or improved, enabling analysis of the sensitivity of system inputs or outputs according to managers' preferences. This empowers managers to apply their preferences and policies to resources, such as inputs and outputs, and to analyze various aspects like production, resource allocation processes, and resource efficiency enhancement within the system; the results obtained can be instrumental in making informed decisions in the future. The main result of this study is an analysis of the infeasibility and incorrect estimation that may arise in the theory and application of inverse data envelopment analysis models with network structures. To address these pitfalls, novel protocols are proposed that circumvent these shortcomings effectively. Subsequently, several theoretical and applied problems are examined and resolved through insightful case studies.
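
For readers new to DEA, the sketch below solves the standard input-oriented CCR efficiency score of one DMU as a linear program with scipy. This is the plain single-stage model, not the network or inverse formulations analysed in the paper, and the input/output data are invented.

```python
# Input-oriented CCR (envelopment form) efficiency of one DMU, solved as an LP:
#   min theta  s.t.  X @ lam <= theta * x_o,   Y @ lam >= y_o,   lam >= 0.
# This is the plain single-stage DEA model, not the network/inverse models of
# the paper; the input/output data below are invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 2.0, 5.0, 6.0],      # inputs  (rows = inputs, cols = DMUs)
              [3.0, 5.0, 2.0, 4.0]])
Y = np.array([[2.0, 3.0, 2.5, 4.0]])     # outputs (rows = outputs, cols = DMUs)

def ccr_efficiency(o):
    m, n = X.shape
    s = Y.shape[0]
    # decision variables: [theta, lam_1, ..., lam_n]
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    A_ub = np.zeros((m + s, 1 + n))
    A_ub[:m, 0] = -X[:, o]                            # X lam - theta * x_o <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                                 # -Y lam <= -y_o
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0.0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(X.shape[1]):
    print("DMU %d efficiency: %.3f" % (o + 1, ccr_efficiency(o)))
```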

Keywords: inverse models of data envelopment analysis, series network, estimation of inputs and outputs, efficiency, resource allocation, sensitivity analysis, infeasibility

Procedia PDF Downloads 24
13702 Application of UAS in Forest Firefighting for Detecting Ignitions and 3D Fuel Volume Estimation

Authors: Artur Krukowski, Emmanouela Vogiatzaki

Abstract:

The article presents results from the AF3 project "Advanced Forest Fire Fighting", focused on Unmanned Aircraft Systems (UAS)-based 3D surveillance and 3D area mapping using high-resolution photogrammetric methods from multispectral imaging, also taking advantage of the 3D scanning techniques from the SCAN4RECO project. We also present a proprietary embedded sensor system used for the detection of fire ignitions in the forest, based on a near-infrared scanner whose weight and form factor allow it to be easily deployed on standard commercial micro-UAVs, such as the DJI Inspire or Mavic. Results from real-life pilot trials in Greece, Spain, and Israel demonstrated the added value of UAS for precise and reliable detection of forest fires, as well as of high-resolution 3D aerial modeling for accurate quantification of the human resources and equipment required for firefighting.

Keywords: forest wildfires, surveillance, fuel volume estimation, firefighting, ignition detectors, 3D modelling, UAV

Procedia PDF Downloads 128
13701 Plasma-Induced Modification of Biomolecules: A Tool for Analysis of Protein Structures

Authors: Yuting Wu, Faraz Choudhury, Daniel Benjamin, James Whalin, Joshua Blatz, Leon Shohet, Michael Sussman, Mark Richards

Abstract:

Plasma-Induced Modification of Biomolecules (PLIMB) has been developed as a technology that, together with mass spectrometry, measures three-dimensional structural characteristics of proteins. The technique uses hydroxyl radicals generated by an atmospheric-pressure plasma discharge to react with the solvent-accessible side chains of a protein in aqueous solution. In this work, we investigate the three-dimensional structure of hemoglobin and myoglobin using PLIMB. Additional modifications to these proteins, such as oxidation, fragmentation, and conformational changes caused by PLIMB, are also explored. The results show that PLIMB, coupled with mass spectrometry, is an effective way to determine solvent access to hemoproteins. Furthermore, we show that many factors, including pH and the electrical parameters used to generate the plasma, have a significant influence on solvent accessibility.

Keywords: plasma, hemoglobin, myoglobin, solvent access

Procedia PDF Downloads 168
13700 The Hyperbolic Smoothing Approach for Automatic Calibration of Rainfall-Runoff Models

Authors: Adilson Elias Xavier, Otto Corrêa Rotunno Filho, Paulo Canedo De Magalhães

Abstract:

This paper addresses the issue of automatic parameter estimation in conceptual rainfall-runoff (CRR) models. Due to the threshold structures commonly occurring in CRR models, the associated mathematical optimization problems have the significant characteristic of being strongly non-differentiable. In order to face this task, the proposed resolution method adopts a smoothing strategy using a special class of C∞ differentiable functions. The final estimate is obtained by solving a sequence of differentiable subproblems which gradually approach the original conceptual problem. The use of this technique, called the hyperbolic smoothing method (HSM), makes it possible to apply the most powerful minimization algorithms and also allows the main difficulties presented by the original CRR problem to be overcome. A set of computational experiments is presented for the purpose of illustrating both the reliability and the efficiency of the proposed approach.
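
Although the abstract does not give the smoothing function explicitly, a commonly used C∞ hyperbolic smoothing of the non-differentiable term max(x, 0), of the kind such methods employ, is φ(x, τ) = (x + √(x² + τ²))/2, which tends to max(x, 0) as τ → 0. The sketch below applies it to a generic CRR-style threshold term (reservoir overflow) so that the resulting function is differentiable everywhere; the overflow term is an illustration, not the authors' specific model.

```python
# Hedged sketch: smoothing a CRR-style threshold with a C-infinity hyperbolic
# approximation  phi(x, tau) = (x + sqrt(x^2 + tau^2)) / 2  ->  max(x, 0)  as
# tau -> 0.  The overflow term below is a generic illustration, not the
# authors' specific rainfall-runoff model.
import numpy as np

def smooth_max0(x, tau):
    return 0.5 * (x + np.sqrt(x * x + tau * tau))

def overflow(storage, capacity, tau=0.0):
    """Spill from a conceptual reservoir: max(storage - capacity, 0),
    replaced by its smooth approximation when tau > 0."""
    excess = storage - capacity
    return np.maximum(excess, 0.0) if tau == 0.0 else smooth_max0(excess, tau)

s = np.linspace(0.0, 200.0, 5)
# Decreasing tau shows the subproblems approaching the original threshold model.
for tau in (0.0, 10.0, 1.0, 0.1):
    print("tau=%5.1f ->" % tau, np.round(overflow(s, capacity=100.0, tau=tau), 3))
```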

Keywords: rainfall-runoff models, automatic calibration, hyperbolic smoothing method

Procedia PDF Downloads 134
13699 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour

Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling

Abstract:

Digital twin (DT) technology is a new technology that appeared in the early 21st century. A DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space that speeds up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were then used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model constructed offline speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy for predicting the damage severity, while the deep learning algorithms were found to be useful for estimating the location of damage of small severity.
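
As a minimal illustration of the offline reduced-basis idea (not the full FE/DT pipeline), the sketch below builds a low-dimensional basis from snapshot solutions via singular value decomposition and projects a new response onto it. The snapshot data are synthetic stand-ins for FE results.

```python
# Offline construction of a reduced basis by proper orthogonal decomposition
# (SVD of a snapshot matrix), then projection of a new response onto the
# low-dimensional space.  Snapshots here are synthetic stand-ins for FE output.
import numpy as np

rng = np.random.default_rng(6)
n_dof, n_snap = 500, 40
x = np.linspace(0.0, 1.0, n_dof)

# Synthetic snapshots: combinations of a few smooth modes plus small noise,
# mimicking FE displacement fields under varying parameters.
modes = np.stack([np.sin(k * np.pi * x) for k in (1, 2, 3)], axis=1)
snapshots = modes @ rng.normal(size=(3, n_snap)) + 1e-3 * rng.normal(size=(n_dof, n_snap))

U, svals, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(svals**2) / np.sum(svals**2)
r = int(np.searchsorted(energy, 0.9999)) + 1      # keep 99.99% of the energy
basis = U[:, :r]                                  # reduced basis (n_dof x r)
print("reduced dimension:", r)

# Online stage (illustrative): project a new full-order response and measure
# the reconstruction error of the low-dimensional representation.
u_new = modes @ np.array([0.7, -0.2, 0.4])
u_rb = basis @ (basis.T @ u_new)
print("relative projection error: %.2e" % (np.linalg.norm(u_new - u_rb) / np.linalg.norm(u_new)))
```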

Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model

Procedia PDF Downloads 80