Search results for: missing data estimation
25917 GIS Application in Surface Runoff Estimation for Upper Klang River Basin, Malaysia
Authors: Suzana Ramli, Wardah Tahir
Abstract:
Estimation of surface runoff depth is a vital part of any rainfall-runoff modeling. It leads to stream flow calculation and later predicts flood occurrences. GIS (Geographic Information System) is an advanced and apposite tool for simulating hydrological models due to its realistic handling of topography. The paper discusses the calculation of surface runoff depth for two selected events using GIS with the Curve Number method for the Upper Klang River basin. GIS enables map intersection between soil type and land use, which then produces a curve number map. The results show good correlation between simulated and observed values, with R² greater than 0.7. Acceptable performance on statistical measures, namely mean error, absolute mean error, RMSE, and bias, is also reported in the paper.
Keywords: surface runoff, geographic information system, curve number method, environment
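The Curve Number step above follows the standard SCS-CN runoff relation; a minimal Python sketch, assuming the conventional initial abstraction of 0.2S (the paper's exact variant is not stated):

```python
def scs_runoff_depth(p_mm: float, cn: float) -> float:
    """Runoff depth Q (mm) from event rainfall P (mm) via the SCS Curve Number method."""
    s = 25400.0 / cn - 254.0      # potential maximum retention (mm) for a given CN
    ia = 0.2 * s                  # initial abstraction (standard assumption)
    if p_mm <= ia:
        return 0.0                # rainfall fully absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# e.g. a 60 mm storm on a mixed land-use cell with CN = 80
print(f"Q = {scs_runoff_depth(60.0, 80.0):.1f} mm")
```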
Procedia PDF Downloads 283
25916 Agricultural Water Consumption Estimation in the Helmand Basin
Authors: Mahdi Akbari, Ali Torabi Haghighi
Abstract:
The Hamun Lakes, located in the Helmand Basin and consisting of four water bodies, were the largest (>8500 km2) freshwater bodies on the Iranian plateau but have almost entirely desiccated over the last 20 years. The desiccation of the lakes caused dust storms in the region, with severe economic and health consequences for the inhabitants. The flow of the Hirmand (or Helmand) River, the most important feeding river, has decreased from 4 to 1.9 km3 downstream due to anthropogenic activities. In this basin, water is mainly consumed for farming. Due to the lack of in-situ data in the basin, this research utilizes remote-sensing data to show how croplands, and consequently the water consumed in the agricultural sector, have changed. Based on Landsat NDVI, we suggest using a threshold of around 0.35-0.4 to detect croplands in the basin. Croplands in this basin have doubled since 1990, especially downstream of the Kajaki Dam (the biggest dam in the basin). Using PML V2 Actual Evapotranspiration (AET) data and considering irrigation efficiency (≈0.3), we estimate the consumed water (CW) for farming and find that CW increased from 2.5 to over 7.5 km3 between 2002 and 2017. Also, the annual average Potential Evapotranspiration (PET) of the basin has shown a negative trend in recent years, although the AET over croplands has an increasing trend. Using remote-sensing data, we compensated for the lack of data in the studied area and highlighted the anthropogenic activities upstream that led to the desiccation of the lakes downstream.
Keywords: Afghanistan-Iran transboundary basin, Iran-Afghanistan water treaty, water use, lake desiccation
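The NDVI thresholding described above can be sketched as follows; the band arrays and the 0.35 default are placeholders within the 0.35-0.4 range the abstract suggests:

```python
import numpy as np

def cropland_mask(nir: np.ndarray, red: np.ndarray, threshold: float = 0.35) -> np.ndarray:
    """Flag pixels as cropland where Landsat NDVI exceeds the threshold."""
    ndvi = (nir - red) / (nir + red + 1e-9)   # epsilon guards against divide-by-zero
    return ndvi > threshold

# cropland area in km^2 for 30 m Landsat pixels:
# area_km2 = cropland_mask(nir, red).sum() * 30 * 30 / 1e6
```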
Procedia PDF Downloads 131
25915 Non-Local Simultaneous Sparse Unmixing for Hyperspectral Data
Authors: Fanqiang Kong, Chending Bian
Abstract:
Sparse unmixing is a promising semisupervised approach that assumes the observed pixels of a hyperspectral image can be expressed as a linear combination of only a few pure spectral signatures (endmembers) from an available spectral library. However, sparse unmixing remains challenging: finding the optimal subset of endmembers for the observed data from a large standard spectral library is difficult when spatial information is not considered. Under such circumstances, a sparse unmixing algorithm termed non-local simultaneous sparse unmixing (NLSSU) is presented. In NLSSU, a non-local simultaneous sparse representation method for endmember selection is used to find the optimal subset of endmembers for each set of similar image patches in the hyperspectral image. Then, the non-local means method, as a regularizer for abundance estimation, is used to exploit the non-local self-similarity of the abundance images. Experimental results on both simulated and real data demonstrate that NLSSU outperforms the other algorithms, with better spectral unmixing accuracy.
Keywords: hyperspectral unmixing, simultaneous sparse representation, sparse regression, non-local means
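For orientation, the per-pixel sparse regression underlying such methods can be written as a non-negative L1-regularized fit against the library — a generic baseline sketch, not the NLSSU algorithm itself; alpha is an assumed tuning value:

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_unmix(pixel: np.ndarray, library: np.ndarray, alpha: float = 1e-3) -> np.ndarray:
    """Estimate sparse non-negative abundances a with pixel ≈ library @ a.
    library: (bands, endmembers) spectral dictionary."""
    model = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=10_000)
    model.fit(library, pixel)
    return model.coef_   # mostly zeros: only a few endmembers are active
```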
Procedia PDF Downloads 248
25914 An Efficient Fundamental Matrix Estimation for Moving Object Detection
Authors: Yeongyu Choi, Ju H. Park, S. M. Lee, Ho-Youl Jung
Abstract:
In this paper, an improved method for estimating the fundamental matrix is proposed. The method is applied effectively to monocular-camera-based moving object detection. The method consists of corner point detection, motion estimation of moving objects, and fundamental matrix calculation. The corner points are obtained using the Harris corner detector, and the motions of moving objects are calculated with the pyramidal Lucas-Kanade optical flow algorithm. Through epipolar geometry analysis using RANSAC, the fundamental matrix is calculated. In this method, we improve the performance of moving object detection by using two threshold values that determine whether a point is an inlier or an outlier. Through simulations, we compare performance while varying the two threshold values.
Keywords: corner detection, optical flow, epipolar geometry, RANSAC
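A minimal OpenCV sketch of the pipeline described (corner detection, pyramidal Lucas-Kanade flow, RANSAC fundamental matrix); all parameter values are illustrative, not the paper's thresholds:

```python
import cv2
import numpy as np

def fundamental_from_frames(prev_gray: np.ndarray, cur_gray: np.ndarray):
    # Harris corners in the previous frame
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01,
                                   minDistance=7, useHarrisDetector=True)
    # track them with pyramidal Lucas-Kanade optical flow
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts0, None)
    good0 = pts0[status.ravel() == 1].reshape(-1, 2)
    good1 = pts1[status.ravel() == 1].reshape(-1, 2)
    # RANSAC reprojection threshold plays the role of the inlier/outlier cutoff
    F, inlier_mask = cv2.findFundamentalMat(good0, good1, cv2.FM_RANSAC,
                                            ransacReprojThreshold=1.0)
    return F, inlier_mask
```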
Procedia PDF Downloads 409
25913 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements
Authors: Henok Hailemariam, Frank Wuttke
Abstract:
Collapsible soils are weak soils that appear stable in their natural state, normally a dry condition, but rapidly deform under saturation (wetting), generating large and unexpected settlements which often have disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain an accurate estimation of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed from an existing relative subsidence prediction model (which depends on soil moisture condition) and an advanced theoretical frequency- and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original model). For large-scale sub-surface soil exploration, spatial sub-surface dielectric data over wide areas and great depths of weak (collapsible) soil deposits can be obtained using non-destructive high-frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small-scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small- or large-scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement. The resulting benefits include preservation of the undisturbed nature of the soil as well as reduced investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of the model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions, and a good match between the model prediction and the experimental results is obtained.
Keywords: collapsible soil, dielectric permittivity, moisture content, relative subsidence
Procedia PDF Downloads 363
25912 Urban Areas Management in Developing Countries: Analysis of the Urban Areas Crossed with Risk of Storm Water Drains, Aswan-Egypt
Authors: Omar Hamdy, Schichen Zhao, Hussein Abd El-Atty, Ayman Ragab, Muhammad Salem
Abstract:
One of the riskiest areas in Aswan is Abouelreesh, which suffers from flood disasters, as heavy deluges inundate urban areas and cause considerable damage to buildings and infrastructure. The problem is compounded by urban sprawl towards this risky area. This paper aims to identify the urban areas located in zones prone to flash floods. Analyzing this phenomenon needs a lot of data to ensure satisfactory results; however, in this case official data and field data were limited, and therefore free sources of satellite data were used. This paper used ArcGIS tools to obtain the storm water drain network by analyzing DEM files. Additionally, historical imagery in Google Earth was studied to determine the age of each building. The last step was to overlay the urban area layer and the storm water drain layer to identify the vulnerable areas. The results of this study would be helpful to urban planners and government officials in estimating disaster risk and developing preliminary plans to protect the risky areas, especially urban areas located in torrent paths.
Keywords: risk area, DEM, storm water drains, GIS
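The DEM analysis step is essentially a flow-direction computation; a minimal D8 steepest-descent sketch (distance weighting of diagonal neighbours omitted for brevity), illustrative of what the ArcGIS tools perform internally:

```python
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def d8_flow_direction(dem: np.ndarray) -> np.ndarray:
    """Index (0-7) of the steepest-descent neighbour per interior cell; -1 = pit/flat."""
    rows, cols = dem.shape
    direction = np.full((rows, cols), -1, dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            drops = [dem[r, c] - dem[r + dr, c + dc] for dr, dc in OFFSETS]
            if max(drops) > 0:
                direction[r, c] = int(np.argmax(drops))
    return direction   # accumulate flow along these directions to trace drain lines
```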
Procedia PDF Downloads 459
25911 Stability-Indicating High-Performance Thin-Layer Chromatography Method for Estimation of Naftopidil
Authors: P. S. Jain, K. D. Bobade, S. J. Surana
Abstract:
A simple, selective, precise and stability-indicating high-performance thin-layer chromatographic method for the analysis of Naftopidil, both in bulk and in pharmaceutical formulation, has been developed and validated. The method employed HPTLC aluminium plates precoated with silica gel as the stationary phase. The solvent system consisted of hexane: ethyl acetate: glacial acetic acid (4:4:2 v/v). The system was found to give a compact spot for Naftopidil (Rf value of 0.43±0.02). Densitometric analysis of Naftopidil was carried out in the absorbance mode at 253 nm. The linear regression analysis data for the calibration plots showed a good linear relationship, with r²=0.999±0.0001 with respect to peak area in the concentration range 200-1200 ng per spot. The method was validated for precision, recovery and robustness. The limits of detection and quantification were 20.35 and 61.68 ng per spot, respectively. Naftopidil was subjected to acid and alkali hydrolysis, oxidation and thermal degradation, and was found to degrade under all four stress conditions. The degradation product was well resolved from the pure drug, with a significantly different Rf value. Statistical analysis proves that the method is repeatable, selective and accurate for the estimation of the investigated drug. The proposed HPTLC method can be applied for the identification and quantitative determination of Naftopidil in bulk drug and pharmaceutical formulation.
Keywords: naftopidil, HPTLC, validation, stability, degradation
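The reported LOD and LOQ follow from the calibration line in the standard ICH way, LOD = 3.3σ/S and LOQ = 10σ/S; a sketch with hypothetical peak-area data, not the study's measurements:

```python
import numpy as np

conc = np.array([200, 400, 600, 800, 1000, 1200])            # ng per spot
peak_area = np.array([1010, 2020, 3050, 4010, 5030, 6040])   # hypothetical

slope, intercept = np.polyfit(conc, peak_area, 1)
residuals = peak_area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)            # ddof=2: two fitted parameters

print(f"LOD = {3.3 * sigma / slope:.1f} ng/spot, "
      f"LOQ = {10.0 * sigma / slope:.1f} ng/spot")
```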
Procedia PDF Downloads 400
25910 Modelling High-Frequency Crude Oil Dynamics Using Affine and Non-Affine Jump-Diffusion Models
Authors: Katja Ignatieva, Patrick Wong
Abstract:
We investigate the dynamics of high-frequency energy prices, including crude oil and electricity prices. The returns of the underlying quantities are modelled using parametric models, such as a stochastic volatility framework with jumps (SVCJ), as well as non-parametric alternatives, which are purely data-driven and do not require specification of the drift or the diffusion coefficient function. Using different statistical criteria, we investigate the performance of the considered parametric and non-parametric models in their ability to forecast price series and volatilities. Our models incorporate possible seasonalities in the underlying dynamics and utilise advanced estimation techniques for the dynamics of energy prices.
Keywords: stochastic volatility, affine jump-diffusion models, high frequency data, model specification, Markov chain Monte Carlo
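A minimal Euler discretization of an SVCJ-type process (Heston variance plus compound-Poisson jumps; here jumps enter returns only, whereas the full SVCJ specification also jumps the variance); all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_svcj_returns(n=1000, dt=1/252, mu=0.05, kappa=2.0, theta=0.04,
                          sigma_v=0.3, rho=-0.5, lam=0.5,
                          jump_mean=-0.02, jump_std=0.05):
    v, out = theta, np.empty(n)
    for t in range(n):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal()
        jump = rng.normal(jump_mean, jump_std) if rng.random() < lam * dt else 0.0
        out[t] = (mu - 0.5 * v) * dt + np.sqrt(v * dt) * z1 + jump
        # mean-reverting variance, floored to stay positive
        v = max(v + kappa * (theta - v) * dt + sigma_v * np.sqrt(v * dt) * z2, 1e-8)
    return out
```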
Procedia PDF Downloads 106
25909 The Use of Software and Internet Search Engines to Develop the Encoding and Decoding Skills of a Dyslexic Learner: A Case Study
Authors: Rabih Joseph Nabhan
Abstract:
This case study explores the impact of two major computer software programs, Learn to Speak English and Learn English Spelling and Pronunciation, and of Internet search engines such as Google, on mending the decoding and spelling deficiency of Simon X, a dyslexic student. Improvement in decoding and spelling may result in better reading comprehension and composition writing. Some computer programs and Internet materials can help him regain the missing awareness and consequently restore his self-confidence and self-esteem. In addition, this study provides a systematic plan comprising a set of activities (four computer programs and Internet materials) which address the problem from the lowest to the highest levels of phoneme and phonological awareness. Four methods of data collection (accounts, observations, published tests, and interviews) provide the triangulation needed to collect data validly and reliably before, during, and after the plan. The data collected are analyzed quantitatively, qualitatively, or through a combination of both, and tables and figures are utilized to provide a clear and uncomplicated illustration of some of the data. The improvement that occurred in decoding, spelling, reading comprehension, and composition writing is demonstrated through authentic materials produced by the student under study: a comparison between two sample passages written by the learner before and after the plan, a genuine computer chat conversation, and the scores of the academic year that followed the execution of the plan. Based on these results, the researcher recommends further studies on other Lebanese dyslexic learners using the computer to mend their language problems, in order to design a more reliable software program that can address this disability more efficiently and successfully.
Keywords: analysis, awareness, dyslexic, software
Procedia PDF Downloads 227
25908 Parallel Self Organizing Neural Network Based Estimation of Archie’s Parameters and Water Saturation in Sandstone Reservoir
Authors: G. M. Hamada, A. A. Al-Gathe, A. M. Al-Khudafi
Abstract:
Determination of water saturation in sandstone is vital for determining the initial oil or gas in place in reservoir rocks. Water saturation determination using electrical measurements is mainly based on Archie's formula; consequently, the accuracy of Archie's parameters strongly affects the computed water saturation values. Archie's parameters a, m, and n are determined by three techniques: the conventional technique, Core Archie-Parameter Estimation (CAPE), and the 3-D technique. This work introduces a hybrid parallel self-organizing neural network (PSONN) system targeting accepted values of Archie's parameters and, consequently, reliable water saturation values. The work covers the three Archie's parameter determination techniques and the subsequent calculation of water saturation. Using the same data, the hybrid PSONN algorithm is used to estimate Archie's parameters and predict water saturation. Results show that the estimated Archie's parameters m, a, and n are highly acceptable under statistical analysis, indicating that the PSONN model has a lower statistical error and a higher correlation coefficient. This study was conducted using a high number of measurement points for 144 core plugs from a sandstone reservoir. The PSONN algorithm can provide reliable water saturation values, and it can supplement or even replace the conventional techniques for determining Archie's parameters and thereby calculating water saturation profiles.
Keywords: water saturation, Archie's parameters, artificial intelligence, PSONN, sandstone reservoir
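The underlying relation is Archie's equation, Sw = ((a·Rw)/(φ^m·Rt))^(1/n); a direct sketch with textbook default parameters (the paper's point is precisely that a, m, and n must be estimated for the rock at hand):

```python
def archie_water_saturation(rt, rw, phi, a=1.0, m=2.0, n=2.0):
    """rt: formation resistivity, rw: brine resistivity, phi: porosity.
    a, m, n defaults are textbook values, not the estimated ones."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# e.g. rt = 20 ohm.m, rw = 0.05 ohm.m, porosity 20%
print(f"Sw = {archie_water_saturation(20.0, 0.05, 0.20):.2f}")
```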
Procedia PDF Downloads 128
25907 Understanding Project Failures in Construction: The Critical Impact of Financial Capacity
Authors: Nnadi Ezekiel Oluwaseun Ejiofor
Abstract:
This research investigates the effects of poor cost estimation, material cost variations, and payment punctuality on the financial health and execution of construction projects in Nigeria. A quantitative research approach was employed, and data were gathered through an online survey of 74 construction industry professionals, consisting of quantity surveyors, contractors, and other professionals. The survey collected input on cost estimation errors, price fluctuations, and payment delays, among other factors, and the responses were analyzed using a five-point Likert scale and the Relative Importance Index (RII). The findings demonstrated that errors in the cost estimates of the Bill of Quantities (BOQ) have a strongly negative impact on the reputation and image of project participants. The greatest effect was on contractors' likelihood of obtaining future work (mean value = 3.42), followed by quantity surveyors' likelihood of obtaining new commissions (mean value = 3.40). Cost underestimation exposes firms to serious risks, most notably shortages of funds that can end in bankruptcy (mean value = 3.78), and it also causes considerable financial damage, with contractors suffering the worst loss of profit (mean value = 3.88). Pressure on the cost of materials and of every other expense involved in completing a structure adds risk to a project's performance figures: the greatest weight was attributed to market inflation in building materials (mean importance score = 4.92), followed by increased transportation charges (mean importance score = 4.76). Delays in payments, arising from client issues such as poor availability of funds (RII = 0.71) and contracting issues such as disagreements on the valuation of work done (RII = 0.72), were also found to lead to project delays and additional costs. The results affirm the importance of proper cost estimation for organizational financial health, project risk, and completion within set time limits. The study recommends improving costing methods, strengthening communication with stakeholders, and managing delays through contractual and financial controls. It enhances the existing literature on construction project management by suggesting ways to deal with adverse cost inaccuracies and payment-delay-driven material availability problems, which, if addressed, would greatly improve the economic performance of the construction business.
Keywords: cost estimation, construction project management, material price fluctuations, payment delays, financial impact
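The RII used above is conventionally computed as RII = ΣW/(A·N), where W are the Likert ratings, A the maximum rating (5), and N the number of respondents; a sketch with hypothetical responses:

```python
def relative_importance_index(ratings: list[int], max_rating: int = 5) -> float:
    """RII = sum of ratings / (max rating * number of respondents)."""
    return sum(ratings) / (max_rating * len(ratings))

sample = [5, 4, 4, 3, 5, 4, 2, 5]        # hypothetical ratings for one factor
print(f"RII = {relative_importance_index(sample):.2f}")
```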
Procedia PDF Downloads 12
25906 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus using U-Net Trained with Finite-Differential Time-Domain Simulation
Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang
Abstract:
Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring the tissue's displacement in response to mechanical waves. The estimated metrics of tissue elasticity or stiffness have been shown to be valuable for monitoring the physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive, due either to the inverse-problem nature or to noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches the feature characteristics of the training data but inevitably increases computation time and produces outcomes with patch artifacts. In this study, simulated wave images generated by finite-difference time-domain (FDTD) simulation are used for network training, and a U-Net extracts features from each training image without cutting it into patches. Using simulated data for model training offers the flexibility of customizing training datasets to match specific applications. The proposed method aims to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (with a range of 1 kPa to 15 kPa) containing randomly positioned objects were simulated, and their corresponding wave images were generated. The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method was compared against DI and LFE using the relative error (root mean square error, or RMSE, divided by the average shear modulus) between the true shear modulus map and the estimated one. The results showed that the shear modulus estimated by the proposed method achieved a relative error of 4.91%±0.66%, substantially lower than the 78.20%±1.11% achieved by LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes of shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method's performance on phantoms and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method shows much promise in quantifying tissue shear modulus from MRE with high robustness and efficiency.
Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation
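The comparison metric defined above (RMSE divided by the average shear modulus) is straightforward to express; a sketch:

```python
import numpy as np

def relative_error(true_mu: np.ndarray, est_mu: np.ndarray) -> float:
    """Relative error as defined in the abstract: RMSE / mean true shear modulus."""
    rmse = np.sqrt(np.mean((est_mu - true_mu) ** 2))
    return rmse / true_mu.mean()
```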
Procedia PDF Downloads 68
25905 A Generative Adversarial Framework for Bounding Confounded Causal Effects
Authors: Yaowei Hu, Yongkai Wu, Lu Zhang, Xintao Wu
Abstract:
Causal inference from observational data is receiving wide application in many fields. However, unidentifiable situations, where causal effects cannot be uniquely computed from observational data, pose critical barriers to applying causal inference to complicated real applications. In this paper, we develop a bounding method for estimating the average causal effect (ACE) under unidentifiable situations due to hidden confounders. We propose to parameterize the unknown exogenous random variables and structural equations of a causal model using neural networks and implicit generative models. Then, within an adversarial learning framework, we search the parameter space to explicitly traverse causal models that agree with the given observational distribution and find those that minimize or maximize the ACE, obtaining its lower and upper bounds. The proposed method makes no assumption about the data-generating process or the types of the variables. Experiments using both synthetic and real-world datasets show the effectiveness of the method.
Keywords: average causal effect, hidden confounding, bound estimation, generative adversarial learning
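The quantity being bounded is ACE = E[Y | do(X=1)] − E[Y | do(X=0)]; a toy illustration with a known structural model and a hidden confounder U, purely for orientation (not the paper's adversarial search):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
u = rng.normal(size=n)                                   # hidden confounder

def y_given_do(x: float) -> np.ndarray:
    """Structural equation for Y under the intervention do(X=x)."""
    return (2.0 * x + u + rng.normal(size=n) > 0).astype(float)

ace = y_given_do(1.0).mean() - y_given_do(0.0).mean()    # interventional contrast
print(f"true ACE ≈ {ace:.3f}")
```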
Procedia PDF Downloads 193
25904 Modified Clusterwise Regression for Pavement Management
Authors: Mukesh Khadka, Alexander Paz, Hanns de la Fuente-Mella
Abstract:
Typically, pavement performance models are developed in two steps: (i) pavement segments with similar characteristics are grouped together to form a cluster, and (ii) the corresponding performance models are developed using statistical techniques. A challenge is to select the characteristics that define the clusters and the segments associated with them. If inappropriate characteristics are used, clusters may include homogeneous segments with different performance behavior or heterogeneous segments with similar performance behavior. The prediction accuracy of performance models can be improved by grouping the pavement segments into more uniform clusters using both segment characteristics and a performance measure. This grouping is not always possible due to limited information: it is impractical to include all potentially significant factors because some of them are unobserved or difficult to measure. The historical performance of pavement segments can be used as a proxy to incorporate the effect of the missing significant factors in the clustering process. The current state of the art proposes Clusterwise Linear Regression (CLR) to determine the pavement clusters and the associated performance models simultaneously; CLR incorporates the effect of significant factors as well as a performance measure. In this study, a mathematical program was formulated for CLR models including multiple explanatory variables. Pavement data collected recently over the entire state of Nevada were used. The International Roughness Index (IRI) was used as the pavement performance measure because it serves as a unified standard that is widely accepted for evaluating pavement performance, especially in terms of riding quality. Results illustrate the advantage of using CLR. Previous studies have used CLR along with experimental data; this study uses actual field data collected across a variety of environmental, traffic, design, and construction and maintenance conditions.
Keywords: clusterwise regression, pavement management system, performance model, optimization
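A minimal clusterwise-regression heuristic conveys the idea: alternate between assigning each segment to the linear model that fits it best and refitting each model on its members. This Lloyd-style sketch is illustrative, not the paper's mathematical program:

```python
import numpy as np

def clusterwise_regression(X, y, k=3, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(y))
    coefs = np.zeros((k, X.shape[1]))
    for _ in range(iters):
        for j in range(k):                                  # refit each cluster's model
            mask = labels == j
            if mask.sum() >= X.shape[1]:
                coefs[j], *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        sq_err = (X @ coefs.T - y[:, None]) ** 2            # error under each model
        labels = sq_err.argmin(axis=1)                      # reassign segments
    return coefs, labels
```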
Procedia PDF Downloads 252
25903 Language Errors Used in “The Space between Us” Movie and Their Effects on Translation Quality: Translation Study toward Discourse Analysis Approach
Authors: Mochamad Nuruz Zaman, Mangatur Rudolf Nababan, M. A. Djatmika
Abstract:
Both society and education emphasize good communication as a way of building interpersonal skills. Everyone has the capacity to understand something new, whether well or poorly; poor understanding produces language errors, particularly in first encounters between people who do not know each other beforehand because of geographical distance. "The Space between Us" delivers a love-adventure story between a Mars boy and an Earth girl, with many missed exchanges caused by their different climates and environments. Moviegoers must also focus on the subtitles in order to enjoy the movie fully. Furthermore, the Indonesian subtitles and the English dialogue in the movie still show overlapping understanding in the translation. Translation here consists of the source language, SL (the English dialogue), and the target language, TL (the Indonesian subtitles). This research gap is formulated in the research questions of how language errors occur in the movie and how they affect translation quality, analyzed through a discourse-analysis approach within translation studies. The research goal is to describe the language errors and their translation quality in order to create a good atmosphere in movie media. The study uses an embedded qualitative design. The research setting comprises place, participants, and events as the focused boundary. The sources of data are "The Space between Us" movie and informants (translation quality raters). The sampling is criterion-based (purposive) sampling. The data collection techniques are content analysis and questionnaires, and data validation applies data-source and method triangulation. Data analysis covers domain, taxonomy, componential, and cultural-theme analysis. The language errors found in the movie are referential, register, society, textual, receptive, expressive, individual, group, analogical, transfer, local, and global errors. Their effects on translation quality are discussed in terms of the translation techniques applied: amplification, borrowing, description, discursive creation, established equivalent, generalization, literal translation, modulation, particularization, reduction, substitution, and transposition.
Keywords: discourse analysis, language errors, The Space between Us movie, translation techniques, translation quality instruments
Procedia PDF Downloads 219
25902 Cycle Number Estimation Method on Fatigue Crack Initiation Using Voronoi Tessellation and the Tanaka Mura Model
Authors: Mohammad Ridzwan Bin Abd Rahim, Siegfried Schmauder, Yupiter HP Manurung, Peter Binkele, Meor Iqram B. Meor Ahmad, Kiarash Dogahe
Abstract:
This paper deals with short crack initiation in the material P91 under cyclic loading at two different temperatures, concluding with an estimation of the short crack initiation Wöhler (S/N) curve. An artificial but representative model microstructure was generated using Voronoi tessellation, and the resulting non-uniform stress distribution was calculated with the Finite Element Method. The number of cycles needed for crack initiation is then estimated on the basis of the stress distribution in the model by applying the physically based Tanaka-Mura model. Initial results show that the number of cycles to crack initiation is strongly correlated with temperature.
Keywords: short crack initiation, P91, Wöhler curve, Voronoi tessellation, Tanaka-Mura model
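One commonly used form of the Tanaka-Mura relation for cycles to microcrack nucleation on a slip band is N = 8GWs/(π(1−ν)d(Δτ−2k)²); a sketch with illustrative symbols, not P91 values from the paper:

```python
import math

def tanaka_mura_cycles(G, W_s, nu, d, d_tau, k):
    """G: shear modulus, W_s: specific fracture energy, nu: Poisson's ratio,
    d: slip-band (grain) length, d_tau: shear stress range on the band,
    k: frictional stress (CRSS)."""
    if d_tau <= 2.0 * k:
        return math.inf            # below threshold: no crack nucleates
    return 8.0 * G * W_s / (math.pi * (1.0 - nu) * d * (d_tau - 2.0 * k) ** 2)
```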
Procedia PDF Downloads 101
25901 A Convolutional Neural Network Based Vehicle Theft Detection, Location, and Reporting System
Authors: Michael Moeti, Khuliso Sigama, Thapelo Samuel Matlala
Abstract:
One of the principal challenges the world is confronted with is insecurity. The crime rate is increasing exponentially, and protecting our physical assets, especially in the motorist industry, is becoming impossible through human effort alone. The need to develop technological solutions that detect and report theft without any human interference is inevitable. This is critical for vehicle owners in particular, to ensure theft detection and speedy identification for recovery efforts in cases where a vehicle is missing or attempted theft is taking place. The vehicle theft detection system uses a Convolutional Neural Network (CNN) to recognize the driver's face, captured with an installed mobile phone device. The location identification function uses the Global Positioning System (GPS) to determine the real-time location of the vehicle, and Global System for Mobile Communications (GSM) technology is then used to notify the vehicle owner of the vehicle's whereabouts. The mobile app was implemented in Python, which allows easy access to machine learning algorithms through its widely developed library ecosystem, while the graphical user interface was developed in Java, which is better suited for mobile development. Google's online database (Firebase) was used as the application's storage. The system integration test was performed using a simple percentage analysis: sixty (60) vehicle owners participated in this study as a sample, and questionnaires were used to establish the acceptability of the developed system. The results indicate the efficiency of the proposed system; the paper therefore proposes that the system can effectively monitor a vehicle at any given place, even when it is driven outside its normal jurisdiction. Moreover, the system can be used as a database to detect, locate, and report missing vehicles to different security agencies.
Keywords: CNN, location identification, tracking, GPS, GSM
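A minimal sketch of the kind of CNN face classifier described (authorized drivers plus an "unknown" class); the architecture and input size are illustrative assumptions, not the paper's network:

```python
import tensorflow as tf

def build_driver_classifier(n_drivers: int) -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(96, 96, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(n_drivers + 1, activation="softmax"),  # +1: unknown
    ])

model = build_driver_classifier(n_drivers=3)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```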
Procedia PDF Downloads 172
25900 Evaluation of Illegal Hunting of Red Deer and Conservation Policy of Department of Environment in Iran
Authors: Tahere Fazilat
Abstract:
The Caspian red deer, or maral (Cervus elaphus maral), is the largest type of deer in Iran. In the past, maral lived in the northern forests of Iran, from the Caspian Sea coast and the Alborz mountain chain to the marginal oak forests of the Zagros, from Azerbaijan down to Fars province. However, the population in the northwest and west of Iran was completely destroyed about 50 years ago, according to reports, and is now out of reach of humans. In the present study, data were collected from 2004 to 2014 in the Hyrcanian forests of Mazandaran province by the environmental guards and the judiciary office of the Department of Environment of Mazandaran; all recorded cases of illegal hunting of red deer were compiled, together with population censuses and estimates, and the correlation of these data was assessed. We provide a first evaluation of how suitable these methods are by comparing the results with population estimates obtained using cohort analysis and by analyzing the within-season variation in the number of deer seen. The data indicate the future of red deer in the northern forests of Iran and the outcomes of the Department of Environment's red deer conservation policy.
Keywords: illegal hunting, red deer, census, conservation
Procedia PDF Downloads 553
25899 Price Effect Estimation of Tobacco on Low-wage Male Smokers: A Causal Mediation Analysis
Authors: Kawsar Ahmed, Hong Wang
Abstract:
The study's goal was to estimate the causal mediation impact of tobacco taxation before and after price hikes among low-income male smokers, with particular emphasis on the pathway framework for effect estimation with continuous and dichotomous variables. From July to December 2021, cross-sectional observational data (n=739) were collected from Bangladeshi low-wage smokers. A quasi-Bayesian technique, a binomial probit model, and simulation-based sensitivity analysis, implemented with the R mediation package, were used to estimate the effects. After the price rise for tobacco products, the average number of cigarette or bidi sticks consumed decreased from 6.7 to 4.56. Rising tobacco product prices have a direct effect on low-income people's decisions to quit or reduce their daily smoking: the Average Causal Mediation Effect (ACME) [effect=2.31, 95% confidence interval (C.I.) = (4.71-0.00), p<0.01], the Average Direct Effect (ADE) [effect=8.6, 95% C.I. = (6.8-0.11), p<0.001], and the overall effects are significant (p<0.001). The mediated proportion of the income effect on the choice to smoke is 26.1% lower following the price rise. In the sensitivity analysis, the ACME and ADE curves, based on the observed coefficients of determination, support the hypothesized model of a substantial consequence of price rises. To reduce smoking behavior, price increases through taxation show a positive causal mediation through income that affects the decision to limit tobacco use, and they support healthcare policy for low-income men.
Keywords: causal mediation analysis, directed acyclic graphs, tobacco price policy, sensitivity analysis, pathway estimation
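The paper used the R mediation package; the product-of-coefficients logic behind ACME and ADE can be sketched in Python on synthetic data (treatment T = price-rise exposure, mediator M = income effect, outcome Y = sticks per day; all coefficients are made up):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 739
T = rng.integers(0, 2, n).astype(float)          # exposed to the price rise?
M = 0.5 * T + rng.normal(size=n)                 # mediator
Y = 2.0 * T + 1.5 * M + rng.normal(size=n)       # outcome

m_fit = sm.OLS(M, sm.add_constant(T)).fit()                          # a path
y_fit = sm.OLS(Y, sm.add_constant(np.column_stack([T, M]))).fit()    # b, direct
a, direct, b = m_fit.params[1], y_fit.params[1], y_fit.params[2]
print(f"ACME ≈ {a * b:.2f}, ADE ≈ {direct:.2f}")
```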
Procedia PDF Downloads 114
25898 Applications of Analytical Probabilistic Approach in Urban Stormwater Modeling in New Zealand
Authors: Asaad Y. Shamseldin
Abstract:
The analytical probabilistic approach is an innovative approach to urban stormwater modeling. It can provide information about the long-term performance of a stormwater management facility without being computationally demanding. This paper explores the application of the analytical probabilistic approach in New Zealand. It presents the results of a case study aimed at developing an objective way of identifying what constitutes a rainfall storm event and at estimating the corresponding statistical properties of storms, using two selected automatic rainfall stations located in the Auckland region of New Zealand. Storm identification and the estimation of storm statistical properties are regarded as the first step in the development of analytical probabilistic models. The paper provides a recommendation on the definition of the storm inter-event time to be used in conjunction with the analytical probabilistic approach.
Keywords: hydrology, rainfall storm, storm inter-event time, New Zealand, stormwater management
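A minimal sketch of storm-event separation using a minimum inter-event time definition (IETD): dry gaps of at least ietd_hours close a storm. The 6-hour default is an assumption, not the paper's recommendation:

```python
def split_storms(rain_hourly: list[float], ietd_hours: int = 6) -> list[list[float]]:
    storms, current, dry = [], [], 0
    for depth in rain_hourly:
        if depth > 0:
            current.append(depth)
            dry = 0
        elif current:
            dry += 1
            if dry >= ietd_hours:        # gap long enough: the storm is over
                storms.append(current)
                current, dry = [], 0
    if current:
        storms.append(current)
    return storms    # per-storm depths feed volume/duration statistics
```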
Procedia PDF Downloads 344
25897 On the Optimality of Blocked Main Effects Plans
Authors: Rita SahaRay, Ganesh Dutta
Abstract:
In this article, experimental situations are considered where a main effects plan is to be used to study m two-level factors using n runs partitioned into b blocks, not necessarily of the same size. Assuming the block sizes to be even for all blocks, for the case n ≡ 2 (mod 4), optimal designs are obtained with respect to type 1 and type 2 optimality criteria in the class of designs providing estimation of all main effects orthogonal to the block effects. In practice, such orthogonal estimation of main effects is often a desirable condition. In the wider class of all available even-sized blocked main effects plans for m two-level factors, where the factors do not occur at high and low levels equally often in each block, E-optimal designs are also characterized. Simple construction methods for these optimal designs, based on Hadamard matrices and the Kronecker product, are presented.
Keywords: design matrix, Hadamard matrix, Kronecker product, type 1 criteria, type 2 criteria
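The Kronecker-product construction alluded to can be illustrated with the Sylvester recursion H₂ₙ = H₂ ⊗ Hₙ, whose columns give mutually orthogonal two-level factor settings; a sketch, not the paper's specific construction:

```python
import numpy as np

def sylvester_hadamard(order: int) -> np.ndarray:
    assert order > 0 and order & (order - 1) == 0, "order must be a power of 2"
    h = np.array([[1]])
    h2 = np.array([[1, 1], [1, -1]])
    while h.shape[0] < order:
        h = np.kron(h2, h)               # Kronecker product doubles the order
    return h

H8 = sylvester_hadamard(8)
assert (H8 @ H8.T == 8 * np.eye(8)).all()   # columns are mutually orthogonal
```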
Procedia PDF Downloads 366
25896 Divergence of Innovation Capabilities within the EU
Authors: Vishal Jaunky, Jonas Grafström
Abstract:
The development of the European Union's (EU) single economic market and rapid technological change have resulted in major structural changes in the economies of the EU's member states. The general liberalization process that the countries have undergone together has convinced the governments of the member states of the need to upgrade their economic and training systems in order to be able to face economic globalization. Several signs of economic convergence have been found, but less is known about knowledge production. This paper addresses the convergence pattern of technological innovation in 13 European Union (EU) states over the period 1990-2011 by means of parametric and non-parametric techniques. The parametric approaches revolve around the neoclassical convergence theories. This paper reveals divergence of both the β and σ types. Further, we find evidence of stochastic divergence, and a non-parametric approach based on distribution dynamics likewise shows a tendency towards divergence. This result is supported by the occurrence of γ-divergence. The policies of the EU to reduce the technological gap among its member states seem to be missing their target, something that can have negative long-run consequences for the market.
Keywords: convergence, patents, panel data, European Union
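For orientation, β-convergence is tested by regressing average growth on the log of the initial level; a significantly negative slope indicates convergence, a positive one divergence. A sketch on synthetic data (13 states, mirroring the sample size):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
initial = rng.lognormal(mean=2.0, sigma=0.5, size=13)     # initial patents p.c.
growth = 0.01 + 0.02 * np.log(initial) + rng.normal(0, 0.01, 13)  # diverging data

fit = sm.OLS(growth, sm.add_constant(np.log(initial))).fit()
beta = fit.params[1]
print(f"beta = {beta:.3f} -> {'divergence' if beta > 0 else 'convergence'}")
```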
Procedia PDF Downloads 290
25895 Estimation of Stress-Strength Parameter for Burr Type XII Distribution Based on Progressive Type-II Censoring
Authors: A. M. Abd-Elfattah, M. H. Abu-Moussa
Abstract:
In this paper, the estimation of the stress-strength parameter R = P(Y < X) is considered, where X and Y, the strength and stress respectively, are two independent random variables following the Burr Type XII distribution. The samples taken for X and Y are progressively Type-II censored. The maximum likelihood estimator (MLE) of R is obtained when the common parameter is unknown. When the common parameter is known, the MLE, the uniformly minimum variance unbiased estimator (UMVUE) and the Bayes estimator of R = P(Y < X) are obtained. The exact confidence interval of R based on the MLE is derived. The performance of the proposed estimators is compared using computer simulation.
Keywords: Burr Type XII distribution, progressive Type-II censoring, stress-strength model, unbiased estimator, maximum likelihood estimator, uniformly minimum variance unbiased estimator, confidence intervals, Bayes estimator
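R = P(Y < X) is easy to verify by Monte Carlo using inverse-transform sampling from the Burr XII CDF F(x) = 1 − (1 + x^c)^(−k); the parameter values below are illustrative, and with a shared c the closed form is k_Y/(k_X + k_Y):

```python
import numpy as np

rng = np.random.default_rng(5)

def burr_xii(c: float, k: float, size: int) -> np.ndarray:
    """Inverse-transform sample: x = ((1-u)**(-1/k) - 1)**(1/c)."""
    u = rng.uniform(size=size)
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

n = 1_000_000
X = burr_xii(c=2.0, k=3.0, size=n)     # strength
Y = burr_xii(c=2.0, k=1.5, size=n)     # stress
print(f"R = P(Y < X) ≈ {(Y < X).mean():.4f}")   # closed form: 1.5/(3+1.5) = 1/3
```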
Procedia PDF Downloads 457
25894 Estimation of the External Force for a Co-Manipulation Task Using the Drive Chain Robot
Authors: Sylvain Devie, Pierre-Philippe Robet, Yannick Aoustin, Maxime Gautier
Abstract:
The aim of this paper is to show that the observation of external effort and the sensor-less control of a system are limited by the mechanical system. First, the model of a one-joint robot with a prismatic joint is presented. Based on this model, two different procedures were performed in order to identify the mechanical parameters of the system and to observe the external effort applied to it. Experiments have proven that the accuracy of the force observer, based on the DC motor current, is limited by the mechanics of the robot. Sensor-less control is limited by the accuracy of the mechanical parameter estimates and by the maximum static friction force, which is the minimum force that can be observed in this case. The consequence of this limitation is that industrial robots without a specific design are not well adapted to performing sensor-less precision tasks. Finally, an efficient control law is presented for high-effort applications.
Keywords: control, identification, robot, co-manipulation, sensor-less
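A minimal current-based force observer for a prismatic joint subtracts inertia and modelled friction from the motor force; all constants are illustrative assumptions, and the final line reflects the paper's point that forces below the static friction level cannot be observed:

```python
import math

def estimate_external_force(current_a, accel, vel,
                            k_t=0.1,    # torque constant (N*m/A), assumed
                            r=0.01,     # transmission radius (m), assumed
                            mass=5.0,   # moving mass (kg), assumed
                            f_c=2.0,    # Coulomb friction (N), assumed
                            f_v=10.0):  # viscous coefficient (N*s/m), assumed
    motor_force = k_t * current_a / r
    friction = math.copysign(f_c, vel) + f_v * vel if vel != 0 else 0.0
    f_ext = motor_force - mass * accel - friction
    return f_ext if abs(f_ext) > f_c else 0.0   # below f_c: unobservable
```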
Procedia PDF Downloads 161
25893 Results of EPR Dosimetry Study of Population Residing in the Vicinity of the Uranium Mines and Uranium Processing Plant
Authors: K. Zhumadilov, P. Kazymbet, A. Ivannikov, M. Bakhtin, A. Akylbekov, K. Kadyrzhanov, A. Morzabayev, M. Hoshi
Abstract:
The aim of the study is to evaluate the possible excess dose received by uranium processing plant workers. The possible excess dose of the workers was evaluated by comparison with a population pool (Stepnogorsk) and a control pool (Astana city). The measured tooth samples were extracted according to medical indications. In total, twenty-seven tooth enamel samples from residents of Stepnogorsk city (180 km from Astana city, Kazakhstan) were analyzed, and six tooth samples were collected from workers of the uranium processing plant. The results of the tooth enamel dose estimation show a small influence of working conditions on the workers; the maximum excess dose is less than 100 mGy. This is a pilot study of EPR dose estimation, and additional samples are required for a final conclusion.
Keywords: EPR dose, workers, uranium mines, tooth samples
Procedia PDF Downloads 413
25892 A Method to Determine Cutting Force Coefficients in Turning Using Mechanistic Approach
Authors: T. C. Bera, A. Bansal, D. Nema
Abstract:
During turning operations, the cutting force plays a significant role in the metal cutting process, affecting tool-workpiece deflection, vibration, and eventually part quality. The present research work aims to develop a mechanistic cutting force model and to study the mechanistic constants used in the force model in the case of turning. The proposed model can be used for reliable and accurate estimation of the cutting forces, establishing the relationship of the force components (cutting force and feed force) with the uncut chip thickness. Accurate estimation of the cutting force is required to improve the accuracy of thin-walled parts by controlling the surface errors induced by tool-workpiece deflection and the tool-workpiece vibration.
Keywords: turning, cutting forces, cutting constants, uncut chip thickness
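A minimal identification sketch for a mechanistic model of this kind: each force component is taken as linear in uncut chip thickness, F = K_c·b·h + K_e·b, and the cutting and edge coefficients are recovered by least squares from forces measured at several feeds (all numbers are hypothetical):

```python
import numpy as np

b = 2.0                                        # depth of cut (mm)
h = np.array([0.05, 0.10, 0.15, 0.20])         # uncut chip thickness (mm)
Fc = np.array([212.0, 412.0, 615.0, 808.0])    # hypothetical measured forces (N)

A = np.column_stack([b * h, b * np.ones_like(h)])
(k_c, k_e), *_ = np.linalg.lstsq(A, Fc, rcond=None)
print(f"K_c = {k_c:.0f} N/mm^2, K_e = {k_e:.1f} N/mm")   # repeat for feed force
```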
Procedia PDF Downloads 523
25891 Design of a Graphical User Interface for Data Preprocessing and Image Segmentation Process in 2D MRI Images
Authors: Enver Kucukkulahli, Pakize Erdogmus, Kemal Polat
Abstract:
2D image segmentation is a significant process for finding a suitable region in medical images such as MRI, PET, and CT. In this study, we have focused on 2D MRI images for the image segmentation process. We have designed a GUI (graphical user interface) written in MATLAB for 2D MRI images. The program has two interfaces: data pre-processing and image clustering or segmentation. The data pre-processing section offers a median filter, an average filter, an unsharp mask filter, a Wiener filter, and a custom filter (a filter designed by the user in MATLAB). For image clustering, there are seven different segmentation algorithms for 2D MR images: PSO (particle swarm optimization), GA (genetic algorithm), Lloyd's algorithm, k-means, the combination of Lloyd's algorithm and k-means, mean shift clustering, and finally BBO (biogeography-based optimization). To find a suitable cluster number for a 2D MRI, we have designed a histogram-based cluster estimation method, whose estimates are then passed to the segmentation algorithms to cluster an image automatically. We have also selected the best hybrid method for each 2D MR image using this GUI software.
Keywords: image segmentation, clustering, GUI, 2D MRI
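One plausible reading of histogram-based cluster estimation is peak counting on a smoothed intensity histogram; a sketch whose smoothing window and prominence threshold are assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_cluster_count(image: np.ndarray, bins: int = 256) -> int:
    hist, _ = np.histogram(image.ravel(), bins=bins)
    smooth = np.convolve(hist, np.ones(9) / 9.0, mode="same")  # damp noisy spikes
    peaks, _ = find_peaks(smooth, prominence=smooth.max() * 0.05)
    return max(len(peaks), 2)      # at least background vs. foreground
```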
Procedia PDF Downloads 377
25890 A Generalized Family of Estimators for Estimation of Unknown Population Variance in Simple Random Sampling
Authors: Saba Riaz, Syed A. Hussain
Abstract:
This paper addresses the estimation of the unknown population variance of the variable of interest. A new generalized class of estimators of the finite population variance is suggested using auxiliary information. To improve the precision of the proposed class, the known population variance of the auxiliary variable is used. Mathematical expressions for the biases and the asymptotic variances of the suggested class are derived under a large-sample approximation. Theoretical and numerical comparisons are made to investigate the performance of the proposed class of estimators. The empirical study reveals that the suggested class of estimators performs better than the usual estimator, the classical ratio estimator, the classical product estimator and the classical linear regression estimator. It is also found to be more efficient than some recently published estimators.
Keywords: study variable, auxiliary variable, finite population variance, bias, asymptotic variance, percent relative efficiency
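One of the baselines mentioned, the classical ratio estimator of the population variance, scales the sample variance of y by the known-to-sample variance ratio of x: s²y·(S²x/s²x). A sketch on synthetic data (not the paper's generalized class):

```python
import numpy as np

rng = np.random.default_rng(11)
x_pop = rng.gamma(4.0, 2.0, size=10_000)                   # auxiliary variable
y_pop = 3.0 * x_pop + rng.normal(0.0, 4.0, size=10_000)    # study variable
S2_x = x_pop.var(ddof=1)                # known population variance of x

idx = rng.choice(len(x_pop), size=200, replace=False)      # simple random sample
s2_y, s2_x = y_pop[idx].var(ddof=1), x_pop[idx].var(ddof=1)
estimate = s2_y * S2_x / s2_x
print(f"ratio estimate = {estimate:.1f}, true = {y_pop.var(ddof=1):.1f}")
```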
Procedia PDF Downloads 226
25889 Assessment of Spectral Indices for Soil Salinity Estimation in Irrigated Land
Authors: R. Lhissou , A. El Harti , K. Chokmani, E. Bachaoui, A. El Ghmari
Abstract:
Soil salinity is a serious environmental hazard in many countries around the world, especially arid and semi-arid countries like Morocco. Salinization has negative effects on the ground: it affects agricultural production, infrastructure, water resources and biodiversity. Remote sensing can provide soil salinity information for large areas in a relatively short time, and it is not limited by extremes in terrain or hazardous conditions. By contrast, experimental methods for monitoring soil salinity by direct in-situ measurements are very demanding of time and resources, and also very limited in spatial coverage. In the irrigated perimeter of the Tadla plain in central Morocco, the increased use of saline groundwater and surface water, coupled with agricultural intensification, leads to the deterioration of soil quality, especially through salinization. In this study, we assessed several spectral indices of soil salinity cited in the literature, using Landsat TM satellite images and field measurements of electrical conductivity (EC). Three Landsat TM satellite images were acquired over three months in the dry season (September, October and November 2011). Based on the EC measurements collected in three field campaigns simultaneous with the acquisition dates of the Landsat TM images, two techniques were used to validate the soil salinity spectral indices. First, the spectral indices were validated locally, pixel by pixel; the second validation technique used a window of 3x3 pixels. The results indicated that the second technique provides a more accurate validation, while the per-pixel assessment showed its limits. In addition, the EC values measured in the field correlate well with some spectral indices derived from the Landsat TM data; the best results show an r² of 0.88, 0.79 and 0.65 for the Salinity Index (SI) on the three dates, respectively. The results demonstrate the usefulness of spectral indices as auxiliary variables in the spatial estimation and mapping of salinity in irrigated land.
Keywords: remote sensing, spectral indices, soil salinity, irrigated land
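One common Salinity Index formulation in the literature is SI = √(green × red); whether it matches this paper's SI is an assumption. The 3x3-window validation described above reduces to averaging the index around each sample point:

```python
import numpy as np

def salinity_index(green: np.ndarray, red: np.ndarray) -> np.ndarray:
    """One common SI variant; several definitions exist in the literature."""
    return np.sqrt(green.astype(float) * red.astype(float))

def window_mean(img: np.ndarray, row: int, col: int, half: int = 1) -> float:
    """Mean of the (2*half+1)^2 window centred on a field sample point."""
    patch = img[row - half:row + half + 1, col - half:col + half + 1]
    return float(patch.mean())
```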
Procedia PDF Downloads 392
25888 Parameter Estimation with Uncertainty and Sensitivity Analysis for the SARS Outbreak in Hong Kong
Authors: Afia Naheed, Manmohan Singh, David Lucy
Abstract:
This work is based on a mathematical and statistical study of an SEIJTR deterministic model for the interpretation of the transmission of severe acute respiratory syndrome (SARS). Based on the SARS epidemic of 2003, the parameters are estimated using Runge-Kutta (Dormand-Prince pairs) and least squares methods. Possible graphical and numerical techniques are used to validate the estimates. Then the effect of the model parameters on the dynamics of the disease is examined using sensitivity and uncertainty analysis, in order to analyze the effect of the uncertainty in the obtained parameter estimates and to determine which parameters have the largest impact on controlling the disease dynamics.
Keywords: infectious disease, severe acute respiratory syndrome (SARS), parameter estimation, sensitivity analysis, uncertainty analysis, Runge-Kutta methods, Levenberg-Marquardt method
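The estimation loop described (integrate with a Dormand-Prince solver, fit by least squares) can be sketched with a simple SIR model standing in for the full SEIJTR system; data and parameter values are synthetic:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def sir(t, y, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

t_obs = np.linspace(0.0, 60.0, 30)
truth = solve_ivp(sir, (0, 60), [0.999, 0.001, 0.0], t_eval=t_obs,
                  args=(0.4, 0.1), method="RK45")          # Dormand-Prince pair
i_obs = truth.y[1] + np.random.default_rng(2).normal(0, 1e-4, t_obs.size)

def residuals(params):
    sol = solve_ivp(sir, (0, 60), [0.999, 0.001, 0.0], t_eval=t_obs,
                    args=tuple(params), method="RK45")
    return sol.y[1] - i_obs

fit = least_squares(residuals, x0=[0.5, 0.2])
print(f"beta = {fit.x[0]:.3f}, gamma = {fit.x[1]:.3f}")
```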
Procedia PDF Downloads 362