Search results for: estimation after selection
3764 Combination of Unmanned Aerial Vehicle and Terrestrial Laser Scanner Data for Citrus Yield Estimation
Authors: Mohammed Hmimou, Khalid Amediaz, Imane Sebari, Nabil Bounajma
Abstract:
Annual crop production is one of the most important macroeconomic indicators for the majority of countries around the world. This information is especially valuable for exporting countries, which need a yield estimate before harvest in order to plan the supply chain correctly. When it comes to estimating agricultural yield, especially in arboriculture, conventional methods are mostly applied. In the citrus industry, sale before harvest is widely practiced, which requires an estimate of production while the fruit is still on the tree. However, the conventional method, based on sampling surveys of some trees within the field, is still the norm, and the success of this process depends mainly on the expertise of the 'estimator agent'. The present study proposes a methodology based on the combination of unmanned aerial vehicle (UAV) images and terrestrial laser scanner (TLS) point clouds to estimate citrus production. During data acquisition, fixed-wing and rotary-wing drones, as well as a terrestrial laser scanner, were tested. A pre-processing step was then performed to generate the point cloud and the digital surface model. At the processing stage, a machine vision workflow was implemented to extract the points corresponding to fruits from the whole-tree point cloud, cluster them into individual fruits, and model them geometrically in 3D space. By linking the resulting geometric properties to fruit weight, the yield can be estimated and the statistical distribution of fruit sizes generated. This latter property, which is required by citrus-importing countries, cannot be estimated before harvest using the conventional method. Since the terrestrial laser scanner is static, data gathering with this technology can cover only a few trees, so drone data were integrated in order to estimate the yield over a whole orchard. To achieve that, features derived from the drone digital surface model were linked to the laser-scanner yield estimates of some trees to build a regression model that predicts the yield of a tree given its features. Several missions were carried out to collect drone and laser scanner data in citrus orchards of different varieties, testing several data acquisition parameters (flight height, image overlap, flight mission plan). The accuracy of the results obtained by the proposed methodology, compared with the yield estimates of the conventional method, varies from 65% to 94%, depending mainly on the phenological stage of the studied citrus variety during the data acquisition mission. The proposed approach demonstrates strong potential for early estimation of citrus production and could be extended to other fruit trees.
Keywords: citrus, digital surface model, point cloud, terrestrial laser scanner, UAV, yield estimation, 3D modeling
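The core processing chain (isolate fruit points, cluster them into individual fruits, derive a size per fruit, and convert size to weight) can be sketched as follows. This is a minimal illustration rather than the authors' pipeline: the DBSCAN parameters and the allometric coefficients a and b linking diameter to weight are hypothetical placeholders for values that would be fitted to field data.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def estimate_yield(fruit_points, a=0.55, b=2.9):
    """Cluster fruit-classified points into individual fruits and estimate
    total weight. `a` and `b` are hypothetical allometric coefficients
    linking diameter (cm) to weight (g); the study fits this link
    empirically."""
    labels = DBSCAN(eps=0.03, min_samples=20).fit_predict(fruit_points)
    diameters = []
    for k in set(labels) - {-1}:              # -1 marks noise points
        cluster = fruit_points[labels == k]
        centroid = cluster.mean(axis=0)
        # Radius from mean point-to-centroid distance (metres -> cm)
        r = np.linalg.norm(cluster - centroid, axis=1).mean()
        diameters.append(2.0 * r * 100.0)
    diameters = np.asarray(diameters)
    weights = a * diameters ** b               # grams per fruit
    return weights.sum() / 1000.0, diameters   # kg, plus size distribution
```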
Procedia PDF Downloads 142
3763 Hydrological, Hydraulics, Analysis and Design of the Aposto-Yirgalem Road Upgrading Project, Ethiopia
Authors: Azazhu Wassie
Abstract:
This study analyzed and identified the drainage pattern and catchment characteristics of the river basin and assessed the impact of the hydrologic parameters (catchment area, rainfall intensity, runoff coefficient, land use, and soil type) on the referenced study area. Since there is no river gauging station near the road, even for large rivers, rainfall-runoff models are adopted for flood estimation: for catchment areas less than 50 ha, the rational method is used; for catchment areas less than 65 km², the SCS unit hydrograph method is used; and for catchment areas greater than 65 km², HEC-HMS is adopted.
Keywords: ArcGIS, catchment area, land use/land cover, peak flood, rainfall intensity
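The area-based selection rule above is easy to capture in code. The sketch below implements the rational method in its standard metric form, Q = 0.278·C·i·A, and a dispatcher that picks the estimation approach by catchment area; the function names are illustrative.

```python
def rational_peak_flow(c, i_mm_per_hr, area_km2):
    """Rational method: Q = 0.278 * C * i * A, with i in mm/h, A in km²,
    Q in m³/s (0.278 = 1/3.6 unit conversion). Applied here only to
    catchments under 50 ha (0.5 km²)."""
    return 0.278 * c * i_mm_per_hr * area_km2

def select_flood_method(area_km2):
    """Dispatch to the flood-estimation approach the study adopts,
    based on catchment area."""
    if area_km2 < 0.5:            # < 50 ha
        return "rational method"
    elif area_km2 < 65.0:
        return "SCS unit hydrograph"
    else:
        return "HEC-HMS rainfall-runoff model"
```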
Procedia PDF Downloads 34
3762 Housing Prices and Travel Costs: Insights from Origin-Destination Demand Estimation in Taiwan's Science Parks
Authors: Kai-Wei Ji, Dung-Ying Lin
Abstract:
This study investigates the impact of transportation on housing prices in regions surrounding Taiwan's science parks. As these parks evolve into crucial economic and population growth centers, they attract an increasing number of residents and workers, significantly influencing local housing markets. This demographic shift raises important questions about the role of transportation in shaping real estate values. Our research examines four major science parks in Taiwan, providing a comparative analysis of how transportation conditions and population dynamics interact to affect housing price premiums. We employ an origin-destination (OD) matrix derived from pervasive traffic data to model travel patterns and their effects on real estate values. The methodology utilizes a bi-level framework: a genetic algorithm optimizes OD demand estimation at the upper level, while a user equilibrium (UE) model simulates traffic flow at the lower level. This approach enables a nuanced exploration of how population growth impacts transportation conditions and housing price premiums. By analyzing the interplay between travel costs based on OD demand estimation and housing prices, we offer valuable insights for urban planners and policymakers. These findings are crucial for informed decision-making in rapidly developing areas, where understanding the relationship between mobility and real estate values is essential for sustainable urban development.
Keywords: demand estimation, genetic algorithm, housing price, transportation
Procedia PDF Downloads 20
3761 Computer-Based Model for Design Selection of Lightning Arrester for 132/33kV Substation
Authors: Uma U. Uma, Uzoechi Laz
Abstract:
Protection of equipment insulation against lightning over-voltages and selection of a lightning arrester that will discharge at a lower voltage level than the voltage required to break down the electrical equipment insulation are examined. The objective of this paper is to design a computer-based model, using standard equations, for the selection of the appropriate lightning arrester: the lowest-rated surge arrester that will provide adequate protection of equipment insulation and equally have a satisfactory service life when connected to a specified line voltage in a power system network. The effectiveness or non-effectiveness of the substation earthing system determines the arrester properties. A MATLAB program with a GUI (graphical user interface) subprogram is used in the development of the model for the determination of required parameters such as voltage rating, impulse spark-over voltage, power frequency spark-over voltage, discharge current, current rating, and protection level of a lightning arrester for a specified voltage level of a particular line.
Keywords: lightning arrester, GUIs, MATLAB program, computer-based model
Procedia PDF Downloads 417
3760 Prioritization of Customer Order Selection Factors by Utilizing Conjoint Analysis: A Case Study for a Structural Steel Firm
Authors: Burcu Akyildiz, Cigdem Kadaifci, Y. Ilker Topcu, Burc Ulengin
Abstract:
In today's business environment, companies must make strategic decisions to gain sustainable competitive advantage. Order selection is a crucial issue among these decisions, especially in the steel production industry. When companies allocate a high proportion of their design and production capacities to ongoing projects, determining which customer order should be chosen among the potential orders, without exceeding the remaining capacity, is the major critical problem. This study aims to identify and prioritize the evaluation factors for the customer order selection problem. Conjoint analysis is used to examine the importance level of each factor; the factors are determined as the potential profit rate per unit of time, the compatibility of the potential order with available capacity, the prospect of future orders with higher profit, the customer's credit for future business opportunities, and the negotiability of the production schedule for the order.
Keywords: conjoint analysis, order prioritization, profit management, structural steel firm
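Conjoint part-worth utilities are typically recovered by regressing respondents' preference ratings on dummy-coded attribute levels. A minimal sketch follows; the profiles, levels, and ratings are fabricated stand-ins for the study's survey design.

```python
import numpy as np
import pandas as pd

# Hypothetical order profiles rated by a respondent (two of the five
# factors shown, with illustrative levels and scores).
profiles = pd.DataFrame({
    "profit_rate":  ["high", "low", "high", "low"],
    "capacity_fit": ["good", "good", "poor", "poor"],
    "rating":       [9, 6, 5, 2],      # respondent preference scores
})
X = pd.get_dummies(profiles[["profit_rate", "capacity_fit"]], drop_first=True)
X = np.column_stack([np.ones(len(X)), X.to_numpy(dtype=float)])
beta, *_ = np.linalg.lstsq(X, profiles["rating"].to_numpy(dtype=float),
                           rcond=None)
# beta[1:] are the part-worth utilities; the range of part-worths per
# factor gives that factor's relative importance in order selection.
print(dict(zip(["intercept", "profit_rate=low", "capacity_fit=poor"], beta)))
```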
Procedia PDF Downloads 384
3759 Multi Tier Data Collection and Estimation, Utilizing Queue Model in Wireless Sensor Networks
Authors: Amirhossein Mohajerzadeh, Abolghasem Mohajerzadeh
Abstract:
In this paper, a target parameter is estimated with desirable precision in hierarchical wireless sensor networks (WSNs), while the proposed algorithm also tries to prolong network lifetime as much as possible through an efficient data collecting algorithm. The target parameter distribution function is considered unknown. Sensor nodes sense the environment and send the data to the base station, called the fusion center (FC), using a hierarchical data collecting algorithm. The FC reconstructs the underlying phenomenon based on the collected data. Considering the aggregation level x, the goal is to provide the essential infrastructure for finding the best value of the aggregation level in order to prolong network lifetime as much as possible while the desired accuracy is guaranteed (the required sample size depends entirely on the desired precision). First, the sample size calculation algorithm is discussed; second, the average queue length based on the M/M[x]/1/K queue model is determined and used for the energy consumption calculation. Nodes can decrease transmission cost by aggregating incoming data. Furthermore, the performance of the new algorithm is evaluated in terms of lifetime and estimation accuracy.
Keywords: aggregation, estimation, queuing, wireless sensor network
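The sample-size step rests on the classic large-sample bound for estimating a mean to a given margin at a given confidence, n = (z·σ/E)². A sketch, with σ assumed to come from a pilot estimate of the sensed parameter:

```python
from math import ceil
from scipy.stats import norm

def required_sample_size(sigma, margin, confidence=0.95):
    """Large-sample size for estimating a mean to within `margin` at the
    given confidence: n = (z * sigma / margin)^2. The paper ties n to
    the desired estimation precision."""
    z = norm.ppf(0.5 + confidence / 2.0)
    return ceil((z * sigma / margin) ** 2)

print(required_sample_size(sigma=2.0, margin=0.5))  # -> 62 at 95% confidence
```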
Procedia PDF Downloads 186
3758 National Assessment for Schools in Saudi Arabia: Score Reliability and Plausible Values
Authors: Dimiter M. Dimitrov, Abdullah Sadaawi
Abstract:
The National Assessment for Schools (NAFS) in Saudi Arabia consists of standardized tests in Mathematics, Reading, and Science for school grade levels 3, 6, and 9. One main goal is to classify students into four categories of NAFS performance (minimal, basic, proficient, and advanced) by school and for the entire national sample. The NAFS scoring and equating are performed on a bounded scale (D-scale, ranging from 0 to 1) in the framework of the recently developed "D-scoring method of measurement." The specificity of the NAFS measurement framework and the complexity of the data presented both challenges and opportunities for (a) the estimation of score reliability for schools, (b) setting cut-scores for the classification of students into categories of performance, and (c) generating plausible values for distributions of student performance on the D-scale. The estimation of score reliability at the school level was performed in the framework of generalizability theory (GT), with students "nested" within schools and test items "nested" within test forms. The GT design was executed via a multilevel modeling syntax code in R. Cut-scores (on the D-scale) for the classification of students into performance categories were derived via a recently developed method of standard setting, referred to as the "Response Vector for Mastery" (RVM) method. For each school, the classification of students into categories of NAFS performance was based on distributions of plausible values for the students' scores on NAFS tests by grade level (3, 6, and 9) and subject (Mathematics, Reading, and Science). Plausible values (on the D-scale) for each individual student were generated via random selection from a statistical logit-normal distribution with parameters derived from the student's D-score and its conditional standard error, SE(D). All procedures related to D-scoring, equating, generating plausible values, and classification of students into performance levels were executed via a computer program in R developed for the purpose of NAFS data analysis.
Keywords: large-scale assessment, reliability, generalizability theory, plausible values
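The plausible-value step (random draws from a logit-normal distribution parameterized by a student's D-score and its conditional standard error) can be sketched as below. The original work was done in R; this Python version, and in particular the delta-method conversion of SE(D) to the logit scale, is my assumption about the details.

```python
import numpy as np

rng = np.random.default_rng(7)

def plausible_values(d_score, se_d, n_draws=5):
    """Draw plausible values on the bounded D-scale (0, 1) from a
    logit-normal distribution: map D to the logit scale, add normal
    noise, and map back. The delta-method conversion of SE(D) to the
    logit scale is an assumption, not the paper's stated formula."""
    d = float(np.clip(d_score, 1e-6, 1 - 1e-6))
    mu = np.log(d / (1 - d))                    # logit of the D-score
    se_logit = se_d / (d * (1 - d))             # delta-method conversion
    z = rng.normal(mu, se_logit, size=n_draws)
    return 1.0 / (1.0 + np.exp(-z))             # back to (0, 1)

print(plausible_values(0.62, 0.04))
```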
Procedia PDF Downloads 18
3757 Managing the Local Manager: A Comparative Study of Core HRM Functions in Multinationals
Authors: Maria Khan
Abstract:
Well-framed core Human Resource Management (HRM) functions, such as recruitment, selection, and training and development, can become a strategic advantage for a company if executed effectively. HRM policies concerning mid-level managers can depend on the type of top management, possibly owing to differences between expatriate and local leadership in their perception of effective HRM policy. This comparative case study assesses how local mid-level managers are managed in leading multinational telecom companies in Pakistan. Core HRM functions related to managers were analysed through field research based on semi-structured interviews with the relevant Human Resource Managers. Results suggest that recruitment and selection practices are not very different and comply with best HRM practices. However, there is a difference in the effective implementation of training and development policies. Changing global management trends and skill development dictate that MNCs continuously and effectively develop local talent for local and international success.
Keywords: recruitment, selection, training, development, core HRM, human resource management, subsidiary, international staffing, managers, MNC, expatriate
Procedia PDF Downloads 327
3756 Evaluation of QSRR Models by Sum of Ranking Differences Approach: A Case Study of Prediction of Chromatographic Behavior of Pesticides
Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević
Abstract:
The present study deals with the selection of the most suitable quantitative structure-retention relationship (QSRR) models to be used in predicting the retention behavior of basic, neutral, acidic, and phenolic pesticides belonging to different classes: fungicides, herbicides, metabolites, insecticides, and plant growth regulators. The sum of ranking differences (SRD) approach can give a different point of view on the selection of the most consistent QSRR model. The SRD approach can be applied not only for ranking QSRR models but also for detecting similarity or dissimilarity among them; applying SRD analysis, the most similar models can be found easily. In this study, selection of the best model was carried out on the basis of the reference ranking ("gold standard"), defined as the row-average values of the logarithm of retention time (log tr) determined by high performance liquid chromatography (HPLC). Also, SRD analysis based on experimental log tr values as the reference ranking revealed a grouping of the established QSRR models similar to that already obtained by hierarchical cluster analysis (HCA).
Keywords: chemometrics, chromatography, pesticides, sum of ranking differences
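SRD itself is a simple statistic: rank the objects under each model and under the reference, then sum the absolute rank differences. A sketch with made-up retention values:

```python
import numpy as np
from scipy.stats import rankdata

def srd(model_values, reference_values):
    """Sum of ranking differences: rank the objects by the model and by
    the reference (here, e.g., row-average experimental log tr), then
    sum absolute rank differences. Smaller SRD = closer to reference."""
    diff = rankdata(model_values) - rankdata(reference_values)
    return int(np.abs(diff).sum())

reference = [1.2, 2.4, 0.8, 3.1]   # e.g. mean experimental log tr values
model_a   = [1.1, 2.6, 0.9, 3.0]   # predictions from one QSRR model
print(srd(model_a, reference))      # 0 -> identical object ordering
```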
Procedia PDF Downloads 375
3755 Estimation of Train Operation Using an Exponential Smoothing Method
Authors: Taiyo Matsumura, Kuninori Takahashi, Takashi Ono
Abstract:
The purpose of this research is to improve the convenience of waiting for trains at level crossings and stations, and to prevent accidents resulting from forcible entry into level crossings, by providing level crossing users and passengers with information telling them when the next train will pass through or arrive. In this paper, we propose methods for estimating operation by means of an average value method, a variable response smoothing method, and an exponential smoothing method, on the basis of open data, which have low accuracy but for which operation schedules are distributed in real time. We then examined the accuracy of the estimations. The results showed that the application of an exponential smoothing method is valid.
Keywords: exponential smoothing method, open data, operation estimation, train schedule
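Simple exponential smoothing is the recursion s_t = α·x_t + (1 − α)·s_{t−1}. A sketch, with hypothetical per-train delays as the input series:

```python
def exponential_smoothing(observations, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}.
    A sketch of the estimator the paper found valid; `observations`
    could be successive delays (in seconds) from the open data feed,
    and alpha=0.3 is an illustrative choice."""
    s = observations[0]
    for x in observations[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

delays = [40, 55, 50, 70, 65]            # hypothetical per-train delays
print(exponential_smoothing(delays))      # smoothed estimate of current delay
```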
Procedia PDF Downloads 388
3754 Enhancement of Primary User Detection in Cognitive Radio by Scattering Transform
Authors: A. Moawad, K. C. Yao, A. Mansour, R. Gautier
Abstract:
Detecting an occupied frequency band is a major issue in cognitive radio systems. The detection process becomes difficult if the signal occupying the band of interest has faded amplitude due to multipath effects, which make an occupying user hard to detect. This work mitigates the missed-detection problem in the context of cognitive radio in a frequency-selective fading channel by proposing a blind channel estimation method based on the scattering transform. Conventional energy detection is applied first and the missed-detection probability is evaluated; if it is greater than or equal to 50%, channel estimation is applied to the received signal, followed by channel equalization to reduce the channel effects. In the proposed channel estimator, we modify the Morlet wavelet by using its first derivative for better frequency resolution. A mathematical description of the modified function and its frequency resolution is formulated in this work. The improved frequency resolution is required to follow the spectral variation of the channel. The channel estimation error is evaluated in the mean-square sense for different channel settings, and energy detection is applied to the equalized received signal. The simulation results show an improvement in reducing the missed-detection probability compared with detection based on principal component analysis. This improvement is achieved at the expense of increased estimator complexity, which depends on the number of wavelet filters as related to the channel taps. The detection performance also shows an improvement in detection probability over principal component analysis-based energy detection in low signal-to-noise scenarios.
Keywords: channel estimation, cognitive radio, scattering transform, spectrum sensing
Procedia PDF Downloads 196
3753 An Efficient Strategy for Relay Selection in Multi-Hop Communication
Authors: Jung-In Baik, Seung-Jun Yu, Young-Min Ko, Hyoung-Kyu Song
Abstract:
This paper proposes an efficient relaying algorithm that obtains diversity to improve the reliability of a signal. The algorithm achieves time or space diversity gain by delivering multiple versions of the same signal through two routes. Relays are placed between a source and a destination. The routes between the source and the destination are set adaptively in order to deal with different channels and noise. The routes consist of one or more relays, and the source transmits its signal to the destination through these routes. The signals from the relays are combined and detected at the destination. The proposed algorithm provides better bit error rate (BER) performance than the conventional algorithms.
Keywords: multi-hop, OFDM, relay, relaying selection
Procedia PDF Downloads 445
3752 Credit Risk Prediction Based on Bayesian Estimation of Logistic Regression Model with Random Effects
Authors: Sami Mestiri, Abdeljelil Farhat
Abstract:
The aim of this paper is to predict the credit risk of banks in Tunisia over the period 2000-2005. For this purpose, two methods for the estimation of the logistic regression model with random effects are applied: the Penalized Quasi-Likelihood (PQL) method and the Gibbs sampler algorithm. Using information on a sample of 528 Tunisian firms and 26 financial ratios, we show that the Bayesian approach improves the quality of model predictions in terms of correct classification as well as the ROC curve result.
Keywords: forecasting, credit risk, Penalized Quasi-Likelihood, Gibbs sampler, logistic regression with random effects, ROC curve
Procedia PDF Downloads 542
3751 Deep Learning Based 6D Pose Estimation for Bin-Picking Using 3D Point Clouds
Authors: Hesheng Wang, Haoyu Wang, Chungang Zhuang
Abstract:
Estimating the 6D pose of objects is a core step for robot bin-picking tasks. The problem is that various objects are usually randomly stacked with heavy occlusion in real applications. In this work, we propose a method to regress 6D poses by predicting three points for each object in the 3D point cloud through deep learning. To solve the ambiguity of symmetric pose, we propose a labeling method to help the network converge better. Based on the predicted pose, an iterative method is employed for pose optimization. In real-world experiments, our method outperforms the classical approach in both precision and recall.
Keywords: pose estimation, deep learning, point cloud, bin-picking, 3D computer vision
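Turning three predicted points into a 6D pose is a rigid-fitting problem, for which the SVD-based Kabsch algorithm is the standard tool. The sketch below is my reading of that final step, not the authors' exact code:

```python
import numpy as np

def pose_from_points(model_pts, pred_pts):
    """Recover the rigid 6D pose (R, t) mapping three reference points
    on the object model onto the three points the network predicts in
    the scene, via the SVD-based Kabsch algorithm."""
    mc, pc = model_pts.mean(axis=0), pred_pts.mean(axis=0)
    H = (model_pts - mc).T @ (pred_pts - pc)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = pc - R @ mc
    return R, t

# Hypothetical check: three non-collinear model points, rotated 90° about z
model = np.array([[0.0, 0, 0], [0.1, 0, 0], [0, 0.1, 0]])
R0 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
scene = model @ R0.T + np.array([0.5, 0.2, 0.3])
R, t = pose_from_points(model, scene)          # recovers R0 and the shift
```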
Procedia PDF Downloads 161
3750 On Periodic Integer-Valued Moving Average Models
Authors: Aries Nawel, Bentarzi Mohamed
Abstract:
This paper deals with the study of some probabilistic and statistical properties of a periodic integer-valued moving average model (PINMA_S(q)). Closed forms of the mean, the second moment, and the periodic autocovariance function are obtained, and the time reversibility of the model is discussed in detail. Estimates of the underlying parameters are obtained by the Yule-Walker method, the Conditional Least Squares (CLS) method, and the Weighted Conditional Least Squares (WCLS) method. A simulation study is carried out to evaluate the performance of the estimation methods, and an application to a real data set is provided.
Keywords: periodic integer-valued moving average, periodically correlated process, time reversibility, count data
Procedia PDF Downloads 202
3749 Evidence of Natural Selection Footprints among Some African Chicken Breeds and Village Ecotypes
Authors: Ahmed Elbeltagy, Francesca Bertolini, Damarius Fleming, Angelica Van Goor, Chris Ashwell, Carl Schmidt, Donald Kugonza, Susan Lamont, Max Rothschild
Abstract:
Natural selection is likely the major factor shaping the genomic variation of African indigenous rural chickens, driving the development of genetic footprints in their genomes. To investigate this selection-footprint hypothesis, a total of 292 birds were randomly sampled from three indigenous ecotypes from East Africa (Uganda, Rwanda) and North Africa (Egypt), two registered Egyptian breeds (Fayoumi and Dandarawi), and the synthetic Kuroiler breed. Samples were genotyped using the Affymetrix 600K Axiom® Array, and 526,652 SNPs were utilized in the downstream analysis after quality control. Intra-population runs of homozygosity (ROH) that were consensus in >50% of individuals of an ecotype or >75% of a breed were studied. To identify inter-population differentiation due to genetic structure, FST was calculated for North- vs. East-African populations, in addition to pairwise population combinations, over overlapping windows (500 kb with an overlap of 250 kb). A total of 28,563 ROH were determined and classified into three length categories. ROH- and FST-detected sweeps were identified on several autosomes. Several genes in these regions are likely related to adaptation to local environmental stresses, including high altitude, disease resistance, poor nutrition, and oxidative and heat stress, and were linked to gene ontology (GO) terms related to immune response, oxygen consumption and heme binding, carbohydrate metabolism, oxidation-reduction, and behavior. The results indicate a possible effect of natural selection forces in shaping genomic structure for adaptation to local environmental stresses.
Keywords: African chicken, runs of homozygosity, FST, selection footprints
Procedia PDF Downloads 313
3748 Non-Local Simultaneous Sparse Unmixing for Hyperspectral Data
Authors: Fanqiang Kong, Chending Bian
Abstract:
Sparse unmixing is a promising semisupervised approach that assumes the observed pixels of a hyperspectral image can be expressed as a linear combination of only a few pure spectral signatures (endmembers) from an available spectral library. However, finding the optimal subset of endmembers for the observed data in a large standard spectral library, without considering spatial information, remains a great challenge for sparse unmixing. Under such circumstances, a sparse unmixing algorithm termed non-local simultaneous sparse unmixing (NLSSU) is presented. In NLSSU, a non-local simultaneous sparse representation method for endmember selection is used to find the optimal subset of endmembers for sets of similar image patches in the hyperspectral image. The non-local means method is then used as a regularizer for abundance estimation, exploiting the non-local self-similarity of the abundance image. Experimental results on both simulated and real data demonstrate that NLSSU outperforms the other algorithms, with better spectral unmixing accuracy.
Keywords: hyperspectral unmixing, simultaneous sparse representation, sparse regression, non-local means
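The underlying linear mixing model, y ≈ Ma with non-negative abundances a, can be illustrated with a plain non-negative least squares solver. This baseline deliberately omits the joint sparsity over similar patches and the non-local-means regularizer that define NLSSU:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(library, pixel):
    """Baseline linear unmixing of one pixel against a spectral library
    by non-negative least squares (y ≈ M a, a >= 0)."""
    abundances, residual = nnls(library, pixel)
    return abundances, residual

rng = np.random.default_rng(0)
M = np.abs(rng.normal(size=(100, 12)))        # 100 bands, 12 library signatures
true_a = np.zeros(12)
true_a[[2, 7]] = [0.6, 0.4]                   # only two endmembers active
y = M @ true_a + 0.01 * rng.normal(size=100)
a_hat, _ = unmix_pixel(M, y)
print(np.round(a_hat, 2))                     # mass concentrates on indices 2 and 7
```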
Procedia PDF Downloads 245
3747 Machine Learning Assisted Prediction of Sintered Density of Binary W(Mo) Alloys
Authors: Hexiong Liu
Abstract:
Powder metallurgy is the optimal method for the consolidation and preparation of W(Mo) alloys, which exhibit excellent application prospects at high temperatures. The properties of W(Mo) alloys are closely related to the sintered density. However, controlling the sintered density and porosity of these alloys is still challenging. In the past, the regulation methods mainly relied on time-consuming and costly trial-and-error experiments. In this study, the sintering data for more than a dozen W(Mo) alloys constituted a small-scale dataset covering both solid- and liquid-phase sintering. Simple descriptors were used to predict the sintered density of W(Mo) alloys based on a descriptor selection strategy and machine learning (ML) methods, where the ML algorithms included least absolute shrinkage and selection operator (Lasso) regression, k-nearest neighbors (k-NN), random forest (RF), and the multi-layer perceptron (MLP). The results showed that the interpretable descriptors extracted by our proposed selection strategy, together with the MLP neural network, achieved a high prediction accuracy (R > 0.950). In further predictions of the sintered density of W(Mo) alloys under different sintering processes, the error between the predicted and experimental values was less than 0.063, confirming the application potential of the model.
Keywords: sintered density, machine learning, interpretable descriptors, W(Mo) alloy
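Benchmarking the four learners named above on a descriptor table takes a few lines with scikit-learn. The data here are synthetic stand-ins for the paper's sintering dataset:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: descriptor matrix (e.g. composition, sintering temperature, time);
# y: measured sintered density. Synthetic placeholders for the real data.
rng = np.random.default_rng(1)
X = rng.uniform(size=(40, 6))
y = 0.7 + 0.25 * X[:, 0] + 0.05 * rng.normal(size=40)

models = {
    "Lasso": Lasso(alpha=0.01),
    "k-NN":  KNeighborsRegressor(n_neighbors=3),
    "RF":    RandomForestRegressor(n_estimators=200, random_state=0),
    "MLP":   MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                          random_state=0),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)   # scale, then fit
    score = cross_val_score(pipe, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.3f}")
```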
Procedia PDF Downloads 82
3746 Tensile Force Estimation for Real-Size Pre-Stressed Concrete Girder using Embedded Elasto-Magnetic Sensor
Authors: Junkyeong Kim, Jooyoung Park, Aoqi Zhang, Seunghee Park
Abstract:
The tensile force of a pre-stressed concrete (PSC) girder is the most important factor in evaluating the performance of PSC girder bridges. Several NDT methods for measuring the tensile force of PSC girders have been studied, but conventional NDT methods cannot be applied to real-size PSC girders because the PS tendons cannot be accessed. To measure the tensile force of a real-size PSC girder, this study proposes an embedded elasto-magnetic (EM) sensor-based tensile force estimation method. The embedded EM sensor can be installed inside the PSC girder as a sheath joint before concrete casting. After the curing process, the PS tendons were installed, and the tensile force was induced step by step using a hydraulic jacking machine. The B-H loop was measured with the embedded EM sensor at each tensile force step, and to compare with the actual tensile force, a load cell was installed at each end of the girder. The magnetization energy loss, that is, the enclosed area of the B-H loop, decreased in a regular pattern as the tensile force increased. Thus, the tensile force can be estimated by tracking the change in the magnetization energy loss of the PS tendons. The experimental results show that the proposed method can be used to estimate the tensile force of an in-situ real-size PSC girder bridge.
Keywords: tensile force estimation, embedded EM sensor, magnetization energy loss, PSC girder
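The loop-area quantity driving the method is directly computable from sampled (H, B) pairs with the shoelace formula. A sketch, validated on a synthetic elliptical loop; calibrating area to tensile force would use the jacking-step data:

```python
import numpy as np

def bh_loop_area(h, b):
    """Magnetization energy loss per cycle as the enclosed area of the
    measured B-H loop (shoelace formula over the closed H, B polygon)."""
    h, b = np.asarray(h), np.asarray(b)
    return 0.5 * abs(np.dot(h, np.roll(b, -1)) - np.dot(b, np.roll(h, -1)))

# Hypothetical loop: an ellipse traced once (area should be pi * 3 * 1)
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
area = bh_loop_area(3.0 * np.cos(t), 1.0 * np.sin(t))
print(area, np.pi * 3.0)    # both ≈ 9.42
```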
Procedia PDF Downloads 337
3745 Estimation of Source Parameters Using Source Parameters Imaging Method From Digitised High Resolution Airborne Magnetic Data of a Basement Complex
Authors: O. T. Oluriz, O. D. Akinyemi, J. A.Olowofela, O. A. Idowu, S. A. Ganiyu
Abstract:
This study was carried out using aeromagnetic data, which record variations in the magnitude of the Earth's magnetic field in order to detect local changes in the properties of the underlying geology. The aeromagnetic data (Sheet No. 261), acquired in 2009, were obtained from the archives of the Nigerian Geological Survey Agency. The study presents the estimation of source parameters within an area of about 3,025 square kilometres within Ibadan and its environs in Oyo State, southwestern Nigeria; the area belongs to part of the basement complex of southwestern Nigeria. Estimation of the source parameters of the aeromagnetic data was achieved through the application of source parameter imaging (SPI) techniques, which provide the delineation, depth, dip contact, susceptibility contrast, and mineral potential of magnetic signatures within the region. The depth to the magnetic sources in the area ranges from 0.675 km to 4.48 km; the estimated depth limit to shallow sources is 0.695 km, and the depth to deep sources is 4.48 km. The apparent susceptibility values obtained for the entire study area range from 0.005 to 0.01 [SI]. This study has shown that the magnetic susceptibility within the study area is controlled mainly by superparamagnetic minerals.
Keywords: aeromagnetic, basement complex, meta-sediment, precambrian
Procedia PDF Downloads 430
3744 Age Estimation and Sex Determination by CT-Scan Analysis of the Hyoid Bone: Application on a Tunisian Population
Authors: N. Haj Salem, M. Belhadj, S. Ben Jomâa, R. Dhouieb, S. Saadi, M. A. Mesrati, A. Chadly
Abstract:
Introduction: The hyoid bone is considered one of the many bones used to identify a missing person, and each population group has its own specificities in human identification. Objective: To analyze the relationship between age, sex, and metric parameters of the hyoid bone in a Tunisian population sample, using CT scans. Materials and Methods: A prospective study was conducted in the Department of Forensic Medicine of Fattouma Bourguiba Hospital of Monastir, Tunisia, over 4 years. A total of 240 hyoid bone samples were studied. The age of the cases ranged from 18 days to 81 years, and the specimens were collected only from deceased persons of known age. Once dried, each hyoid bone was scanned using CT. For each specimen, 10 measurements were taken using a computer program: 6 lengths and 4 widths. Regression analysis was used to estimate the relationship between age, sex, and the different measurements. For age estimation, a multiple logistic regression was carried out for samples ≤ 35 years. For sex determination, a ROC curve was performed, and the discriminant value finally retained was based on the best specificity combined with the best sensitivity. Results: The correlation between real age and estimated age was good (r² = 0.72) for samples aged 35 years or less. The unstandardised canonical function equation was estimated using three variables: the maximum length of the right greater cornu, the length from the middle of the left joint space to the middle of the right joint space, and the perpendicular length from the centre point of a line between the distal ends of the right and left greater cornua to the centre point of the anterior view of the body of the hyoid bone. For sex determination, ROC curve analysis revealed an area under the curve of 81.8%. The discriminant value was 0.451, with a specificity of 73% and a sensitivity of 79%; the function was estimated from two variables, the maximum length of the greater cornua and the maximum length of the hyoid bone. Conclusion: The findings of the current study suggest that metric analysis of the hyoid bone may predict age ≤ 35 years; sex estimation seems to be even more reliable. Further studies dealing with the fusion of the hyoid bone, together with the current study, could help achieve more accurate age estimation rates.
Keywords: anthropology, age estimation, CT scan, sex determination, Tunisia
Procedia PDF Downloads 172
3743 Application of an Analytical Model to Obtain Daily Flow Duration Curves for Different Hydrological Regimes in Switzerland
Authors: Ana Clara Santos, Maria Manuela Portela, Bettina Schaefli
Abstract:
This work assesses the performance of an analytical model framework for generating daily flow duration curves (FDCs) based on the climatic characteristics of catchments and on their streamflow recession coefficients. In the analytical framework, precipitation is treated as a stochastic process, modeled as a marked Poisson process, while recession is treated as deterministic, with parameters that can be computed from different models. The framework was tested for three case studies with different hydrological regimes located in Switzerland: pluvial, snow-dominated, and glacial. For that purpose, five time intervals were analyzed (the four meteorological seasons and the civil year) and two developments of the model were tested, one considering a linear recession model and the other adopting a nonlinear recession model. These developments were combined with recession coefficients obtained from two different approaches: forward and inverse estimation. The performance of the analytical framework with forward parameter estimation is poor in comparison with inverse estimation for both the linear and nonlinear models. For the pluvial catchment, inverse estimation gives exceptionally good results, especially for the nonlinear model, clearly suggesting that the model has the ability to describe FDCs. For the snow-dominated and glacial catchments, the seasonal results are better than the annual ones, suggesting that the model can describe streamflows under those conditions and that future efforts should focus on improving and combining seasonal curves instead of considering single annual ones.
Keywords: analytical streamflow distribution, stochastic process, linear and non-linear recession, hydrological modelling, daily discharges
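For context, the observed curves such a model must reproduce are built by sorting daily discharges and attaching exceedance probabilities. A sketch using the common Weibull plotting position:

```python
import numpy as np

def flow_duration_curve(daily_flows):
    """Empirical daily FDC: sort discharges in decreasing order and
    attach Weibull plotting-position exceedance probabilities
    p_i = i / (n + 1). The analytical curves would be compared against
    exactly this kind of observed curve."""
    q = np.sort(np.asarray(daily_flows))[::-1]
    p = np.arange(1, q.size + 1) / (q.size + 1)
    return p, q    # exceedance probability, corresponding discharge

flows = np.random.default_rng(3).gamma(shape=2.0, scale=5.0, size=365)
p, q = flow_duration_curve(flows)
print(q[np.searchsorted(p, 0.95)])   # Q95, a common low-flow index
```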
Procedia PDF Downloads 162
3742 Cooling Profile Analysis of Hot Strip Coil Using Finite Volume Method
Authors: Subhamita Chakraborty, Shubhabrata Datta, Sujay Kumar Mukherjea, Partha Protim Chattopadhyay
Abstract:
The manufacture of multiphase high-strength steel in hot strip mills has drawn significant attention due to the possibility of forming low-temperature transformation products of austenite under continuous cooling conditions. In such an endeavor, reliable prediction of the temperature profile of the hot strip coil is essential in order to assess the evolution of the microstructure at different locations of the coil, on the basis of the corresponding continuous cooling transformation (CCT) diagram. The temperature distribution profile of the hot strip coil has been determined using the finite volume method (FVM) vis-à-vis the finite difference method (FDM). It has been demonstrated that FVM offers greater computational reliability in the estimation of the contact pressure distribution, and hence of the temperature distribution, for curved and irregular profiles, owing to the flexibility in the selection of grid geometry and discrete point positions. Moreover, the finite volume concept allows enforcing the conservation of mass, momentum, and energy, leading to enhanced prediction accuracy.
Keywords: simulation, modeling, thermal analysis, coil cooling, contact pressure, finite volume method
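The conservation property comes from writing the update in terms of shared face fluxes, so whatever leaves one control volume enters its neighbor. A deliberately simplified 1D explicit sketch (the actual coil model is 2D, curved, and couples interface conductance to contact pressure):

```python
import numpy as np

def fvm_cooling_step(T, dx, dt, alpha, T_ambient, h_over_rho_cp):
    """One explicit finite-volume update for 1D transient conduction,
    with a convective loss term in every cell. Each interface flux is
    added to one cell and subtracted from its neighbor, so internal
    energy is conserved by construction."""
    flux = alpha * np.diff(T) / dx            # conductive flux at faces
    dT = np.zeros_like(T)
    dT[:-1] += flux                            # inflow from right face
    dT[1:]  -= flux                            # outflow through left face
    dT = dT * dt / dx - h_over_rho_cp * dt * (T - T_ambient)
    return T + dT

T = np.full(50, 900.0)                         # initial coil temperature, °C
for _ in range(1000):                          # alpha*dt/dx² = 0.005, stable
    T = fvm_cooling_step(T, dx=0.01, dt=0.05, alpha=1e-5,
                         T_ambient=25.0, h_over_rho_cp=1e-3)
print(T.min(), T.max())
```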
Procedia PDF Downloads 473
3741 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given other predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks when external x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variations in y not or less related to x. Unlike existing CDE methods, the proposed approach coined Augmented Posterior CDE (AP-CDE) only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations due to factors such as lighting condition and subject id from the other random variations. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction
Procedia PDF Downloads 96
3740 Electromagnetic Source Direction of Arrival Estimation via Virtual Antenna Array
Authors: Meiling Yang, Shuguo Xie, Yilong Zhu
Abstract:
Nowadays, owing to diverse electric products and a complex electromagnetic environment, the localization and troubleshooting of electromagnetic radiation sources are urgent and necessary, especially under far-field conditions. However, systems and devices based on existing DOA positioning methods are complex, bulky, and expensive. To address this issue, this paper proposes a single-antenna radiation source localization method: a single antenna is moved to form a virtual antenna array, which is combined with DOA estimation and the MUSIC algorithm to obtain an accurate position while reducing cost and simplifying the equipment. As shown by the results of simulations and experiments, the virtual antenna array DOA estimation model is correct and its positioning is credible.
Keywords: virtual antenna array, DOA, localization, far field
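A minimal MUSIC implementation for a uniform linear (virtual) array shows the core of the estimator; with a single moving antenna, the source must stay stationary over the scan, as the virtual-array construction assumes. The test scenario below is hypothetical:

```python
import numpy as np

def music_spectrum(snapshots, n_sources, d_over_lambda=0.5):
    """MUSIC pseudospectrum for a uniform linear array. `snapshots` is
    (n_elements, n_snapshots). Peaks of the returned spectrum mark the
    estimated directions of arrival."""
    grid = np.deg2rad(np.arange(-90, 90.5, 0.5))
    m = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # covariance
    _, vecs = np.linalg.eigh(R)              # eigenvalues ascending
    En = vecs[:, : m - n_sources]            # noise subspace
    k = np.arange(m)[:, None]
    A = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(grid)[None, :])
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return np.rad2deg(grid), 1.0 / denom

# Hypothetical test: one source at 20 degrees, 8 virtual elements
m, theta = 8, np.deg2rad(20.0)
a = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(theta))
rng = np.random.default_rng(4)
s = rng.normal(size=200) + 1j * rng.normal(size=200)
noise = 0.05 * (rng.normal(size=(m, 200)) + 1j * rng.normal(size=(m, 200)))
X = np.outer(a, s) + noise
angles, P = music_spectrum(X, n_sources=1)
print(angles[np.argmax(P)])                  # ≈ 20
```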
Procedia PDF Downloads 372
3739 Multi-Class Text Classification Using Ensembles of Classifiers
Authors: Syed Basit Ali Shah Bukhari, Yan Qiang, Saad Abdul Rauf, Syed Saqlaina Bukhari
Abstract:
Text classification is the task of assigning a given text to the appropriate category from a given set of categories. It is vital to use a proper set of pre-processing, feature selection, and classification techniques to achieve this purpose. In this paper, we use different ensemble techniques, along with varied feature selection parameters, to observe the change in overall accuracy as well as in class-level measures such as the precision of each individual category. After subjecting our data to pre-processing and feature selection, individual classifiers were tested first, and the classifiers were then combined into ensembles to increase their accuracy. We also studied the impact of decreasing the number of classification categories on overall accuracy. Text classification is widely used in sentiment analysis on social media sites such as Twitter to gauge people's opinions about a cause, and to analyze customers' reviews of products or services. Opinion mining is a vital task in data mining, and text categorization is a backbone of opinion mining.
Keywords: Natural Language Processing, Ensemble Classifier, Bagging Classifier, AdaBoost
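A compact scikit-learn version of the pipeline (TF-IDF features, chi-squared feature selection, then bagging and boosting ensembles) is sketched below on a stand-in public corpus; the paper's dataset, parameters, and category set differ:

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Stand-in multi-class corpus (downloads on first use).
data = fetch_20newsgroups(subset="train",
                          categories=["sci.space", "rec.autos",
                                      "talk.politics.misc"])

for name, clf in [("Bagging", BaggingClassifier(MultinomialNB(),
                                                n_estimators=25)),
                  ("AdaBoost", AdaBoostClassifier(n_estimators=100))]:
    pipe = make_pipeline(TfidfVectorizer(stop_words="english"),
                         SelectKBest(chi2, k=2000),   # feature selection knob
                         clf)
    pipe.fit(data.data, data.target)
    print(name, pipe.score(data.data, data.target))   # training accuracy
```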
Procedia PDF Downloads 232
3738 An Improved Data Aided Channel Estimation Technique Using Genetic Algorithm for Massive Multi-Input Multiple-Output
Authors: M. Kislu Noman, Syed Mohammed Shamsul Islam, Shahriar Hassan, Raihana Pervin
Abstract:
With the increasing number of wireless devices and high-bandwidth operations, wireless networks and communications are becoming overcrowded. To cope with this congestion, massive MIMO is designed to work with hundreds of low-cost serving antennas at a time while improving spectral efficiency. TDD is used for beamforming, a major part of massive MIMO, to best exploit the transmitted and received pilot sequences. All of these benefits are only possible if the channel state information is estimated properly. The common methods used so far to estimate the channel matrix are LS, MMSE, and a linear version of MMSE, as proposed in many research works. We optimize these methods using a genetic algorithm to minimize the mean squared error and find the best channel matrix among the existing algorithms with less computational complexity. Our simulation results show that the GA works well on the existing algorithms in a Rayleigh slow-fading channel with additive white Gaussian noise. We found that the GA-optimized LS is better than the existing algorithms, as the GA provides an optimal result within a few iterations in terms of MSE versus SNR and computational complexity.
Keywords: channel estimation, LMMSE, LS, MIMO, MMSE
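A toy version of the GA refinement idea: seed a population around the LS solution and evolve it against the pilot-domain squared error. This is real-valued and simplified relative to the paper's complex-channel setting, and the GA operators shown are generic choices rather than the authors':

```python
import numpy as np

rng = np.random.default_rng(5)

def ga_refine_ls(X, y, generations=50, pop=40, sigma=0.05):
    """Seed a population around the LS estimate h_ls = X^+ y and evolve
    it to minimise the pilot fit error ||y - X h||^2 (selection,
    uniform crossover, Gaussian mutation, elitism)."""
    h_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
    P = h_ls + sigma * rng.normal(size=(pop, h_ls.size))
    for _ in range(generations):
        err = ((y - P @ X.T) ** 2).sum(axis=1)       # fitness (lower = better)
        elite = P[np.argsort(err)[: pop // 4]]       # selection
        parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
        mask = rng.random((pop, h_ls.size)) < 0.5    # uniform crossover
        P = np.where(mask, parents[:, 0], parents[:, 1])
        P += sigma * 0.1 * rng.normal(size=P.shape)  # mutation
        P[0] = elite[0]                              # keep the best (elitism)
    return P[0]

# Pilot model: y = X h + noise
h_true = rng.normal(size=4)
X = rng.normal(size=(16, 4))
y = X @ h_true + 0.01 * rng.normal(size=16)
print(np.abs(ga_refine_ls(X, y) - h_true).max())     # small estimation error
```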
Procedia PDF Downloads 191
3737 Proficient Estimation Procedure for a Rare Sensitive Attribute Using Poisson Distribution
Authors: S. Suman, G. N. Singh
Abstract:
The present manuscript addresses the estimation of a population parameter using the Poisson probability distribution when the characteristic under study is a rare sensitive attribute. A generalized form of the unrelated randomized response model is suggested in order to obtain truthful responses from respondents. The resultant estimators are proposed for two situations: when information on an unrelated rare non-sensitive characteristic is known, and when it is unknown. The properties of the proposed estimators are derived, and a measure of respondent confidentiality is also suggested. Empirical studies are carried out in support of the discussed theory.
Keywords: Poisson distribution, randomized response model, rare sensitive attribute, non-sensitive attribute
Procedia PDF Downloads 266
3736 Metaheuristic to Align Multiple Sequences
Authors: Lamiche Chaabane
Abstract:
In this study, a new method for solving the multiple sequence alignment problem is proposed, named ITS (Improved Tabu Search). The algorithm is based on classical Tabu Search (TS) and is implemented in order to obtain multiple sequence alignments. Several ideas concerning neighbourhood generation, move selection mechanisms, and intensification/diversification strategies for the proposed ITS are investigated. ITS generates high-quality results in terms of alignment scores in comparison with classical TS and a simple iterative search algorithm.
Keywords: multiple sequence alignment, tabu search, improved tabu search, neighbourhood generation, selection mechanisms
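The skeleton that ITS elaborates is the classical tabu loop: move to the best admissible neighbour, keep recent solutions tabu, and remember the best solution found. A generic sketch with a toy objective; for MSA, the solution would encode gap placements and the neighbourhood would shift or insert gap blocks:

```python
def tabu_search(initial, neighbours, score, iterations=500, tenure=20):
    """Generic tabu search: explore the best non-tabu neighbour, keep a
    fixed-length tabu list of recent solutions, and track the best
    solution seen so far."""
    current = best = initial
    tabu = []                                   # recently visited solutions
    for _ in range(iterations):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break
        current = max(candidates, key=score)    # best admissible move
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                         # expire the oldest entry
        if score(current) > score(best):
            best = current
    return best

# Toy use: maximise -(x - 7)^2 over integers via unit moves
result = tabu_search(0, lambda x: [x - 1, x + 1], lambda x: -(x - 7) ** 2)
print(result)   # 7
```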
Procedia PDF Downloads 305
3735 Criterion-Referenced Test Reliability through Threshold Loss Agreement: Fuzzy Logic Analysis Approach
Authors: Mohammad Ali Alavidoost, Hossein Bozorgian
Abstract:
Criterion-referenced tests (CRTs) are designed to measure student performance against a fixed set of predetermined criteria or learning standards, and the reliability of such tests cannot be based on internal-consistency reliability. Threshold loss agreement is one way to calculate the reliability of CRTs; however, the selection of masters and non-masters in such agreement is determined by the threshold point. The problem is that if the threshold point undergoes even a minute change, the classification of masters and non-masters may change drastically, leading to a change in the reliability results. Therefore, in this study, the fuzzy logic approach is employed as a remedial procedure for data analysis to obviate the threshold point problem. Forty-one Iranian students, all between 20 and 30 years old, were selected. A quantitative approach was used to address the research questions; in doing so, a quasi-experimental design was utilized, since the selection of the participants was not randomized. With the fuzzy logic approach, the threshold point is more stable during the analysis, resulting in rather constant reliability results and more precise assessment.
Keywords: criterion-referenced tests, threshold loss agreement, threshold point, fuzzy logic approach
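One common way fuzzy logic softens the cut-score is to replace the crisp master/non-master split with a graded membership function around the threshold, so a tiny shift in the cut-score no longer flips classifications wholesale. A sketch with an illustrative band width, not the study's actual membership function:

```python
def mastery_membership(score, threshold, band=0.1):
    """Fuzzy alternative to crisp classification: scores within `band`
    of the threshold get graded membership in the 'master' set instead
    of flipping from 0 to 1 exactly at the cut-score."""
    lo, hi = threshold - band, threshold + band
    if score <= lo:
        return 0.0
    if score >= hi:
        return 1.0
    return (score - lo) / (hi - lo)    # linear ramp across the band

for s in (0.45, 0.55, 0.60, 0.65, 0.75):
    print(s, mastery_membership(s, threshold=0.60))
```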
Procedia PDF Downloads 369