Search results for: missing data estimation
25770 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
Skew detection and correction form an important part of digital document analysis, because uncompensated skew can degrade document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed even at a small angle. Once documents have been digitized through the scanning system and binarization has been achieved, skew correction is required before further image analysis. Research efforts in this area have produced algorithms that eliminate document skew. Skew angle correction algorithms can be compared on performance criteria, the most important of which are the accuracy of skew angle detection, the range of detectable skew angles, the processing speed, the computational complexity and, consequently, the memory space used. The standard Hough Transform has been successfully applied to text document skew angle estimation. However, the accuracy of the standard Hough Transform depends largely on how fine a step size is used for the angle, so increased accuracy costs more time and memory space, especially when the number of pixels is considerably large. Whenever the Hough transform is used, there is a tradeoff between accuracy and speed, so a more efficient solution that optimizes space as well as time is needed. In this paper, an improved Hough transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the contradiction between memory space, running time and accuracy. Our algorithm starts by estimating the angle to zero decimal places using the standard Hough Transform, which achieves minimal running time and space but lacks accuracy.
To increase accuracy, suppose the angle estimated by the basic Hough algorithm is x degrees; we then rerun the basic algorithm over a narrow range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction procedure for text images is implemented using MATLAB. The memory space and processing time are also tabulated, under the assumption that the skew angle lies between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document
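The coarse-to-fine refinement described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' MATLAB implementation: the projection-based angle scoring, the histogram bin count, and the synthetic test image are all assumptions made for the sketch.

```python
import numpy as np

def hough_skew_score(xs, ys, theta_deg):
    """Project foreground pixel coordinates perpendicular to a candidate
    text direction and measure how sharply the projection clusters into
    text lines (a Hough-style accumulator, evaluated one angle at a time)."""
    t = np.deg2rad(theta_deg)
    rho = ys * np.cos(t) - xs * np.sin(t)
    hist, _ = np.histogram(rho, bins=400)
    return np.var(hist)  # aligned lines -> spiky histogram -> high variance

def coarse_to_fine_skew(xs, ys, max_angle=45.0, passes=3):
    """Coarse-to-fine search: each pass narrows the angle range around the
    current estimate and refines the step by a factor of 10, instead of
    scanning the full +/-45 degree range at the finest step."""
    lo, hi, step, estimate = -max_angle, max_angle, 1.0, 0.0
    for _ in range(passes):
        candidates = np.arange(lo, hi + step / 2, step)
        scores = [hough_skew_score(xs, ys, a) for a in candidates]
        estimate = candidates[int(np.argmax(scores))]
        lo, hi = estimate - step, estimate + step  # narrow around estimate
        step /= 10.0
    return float(estimate)

# synthetic "document": 11 text baselines tilted by 3.4 degrees
true_skew = 3.4
x = np.arange(20.0, 180.0)
xs = np.concatenate([x for _ in range(20, 180, 15)])
ys = np.concatenate([c + x * np.tan(np.deg2rad(true_skew))
                     for c in range(20, 180, 15)])
print(round(coarse_to_fine_skew(xs, ys), 2))  # close to 3.4
```

With three passes the search evaluates roughly 91 + 21 + 21 candidate angles to reach 0.01° resolution, versus about 9000 for a single fine-grained scan, which is the space/time saving the abstract argues for.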
Procedia PDF Downloads 159
25769 Cross-Comparison between Land Surface Temperature from Polar and Geostationary Satellite over Heterogenous Landscape: A Case Study in Hong Kong
Authors: Ibrahim A. Adeniran, Rui F. Zhu, Man S. Wong
Abstract:
Owing to the insufficient spatial representativeness and continuity of in situ temperature measurements from weather stations (WS), the use of WS temperature measurements for large-range diurnal analysis in heterogeneous landscapes has been limited. This has made the accurate estimation of land surface temperature (LST) from remotely sensed data more crucial. Moreover, the study of the dynamic interaction between the atmosphere and the physical surface of the Earth could be enhanced at both annual and diurnal scales by using optimal LST data derived from satellite sensors. The tradeoff between the spatial and temporal resolution of LSTs from satellites' thermal infrared sensors (TIRS) has, however, been a major challenge, especially when high spatiotemporal LST data are required. It is well known from the existing literature that polar satellites have the advantage of high spatial resolution, while geostationary satellites have high temporal resolution. Hence, this study aims to design a framework for the cross-comparison of LST data from polar and geostationary satellites in a heterogeneous landscape. This could help to understand the relationship between the LST estimates from the two satellites and, consequently, their integration in diurnal LST analysis. Landsat-8 satellite data will be used as the representative of the polar satellite due to the availability of its long-term series, while the Himawari-8 satellite will be used as the data source for the geostationary satellite because of its improved TIRS. The Hong Kong Special Administrative Region (HK SAR) will be selected as the study area due to the heterogeneity of its landscape. LST data will be retrieved from both satellites using the split window algorithm (SWA), and the resulting data will be validated by comparing satellite-derived LST data with temperature data from automatic WS in HK SAR.
The LST data from the satellites will then be separated based on the land use classification in HK SAR, using the Global Land Cover by National Mapping Organizations version 3 (GLCNMO 2013) data. The relationship between LST data from Landsat-8 and Himawari-8 will then be investigated by land-use class and over the different seasons of the year, in order to account for seasonal variation in their relationship. The resulting relationship will be spatially and statistically analyzed and graphically visualized for detailed interpretation. Findings from this study will reveal the relationship between the two satellite datasets based on the land use classification within the study area and the seasons of the year. While the information provided by this study will help in the optimal combination of LST data from polar (Landsat-8) and geostationary (Himawari-8) satellites, it will also serve as a roadmap for annual and diurnal urban heat island (UHI) analysis in Hong Kong SAR.
Keywords: automatic weather station, Himawari-8, Landsat-8, land surface temperature, land use classification, split window algorithm, urban heat island
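The split window algorithm (SWA) mentioned in the abstract combines brightness temperatures from two adjacent thermal channels with atmospheric and emissivity correction terms. A minimal sketch of the generic functional form is shown below; the coefficient values are placeholders for illustration only, since operational coefficients are fitted per sensor (e.g., for Landsat-8 TIRS or Himawari-8 AHI) from radiative-transfer simulations.

```python
def split_window_lst(t_i, t_j, w, eps_mean, d_eps,
                     c=(-0.268, 1.378, 0.183, 54.30, -2.238, -129.20, 16.40)):
    """Estimate land surface temperature (K) from two thermal brightness
    temperatures t_i, t_j (K), total column water vapour w (g/cm^2), mean
    channel emissivity and channel emissivity difference, using a generic
    split-window form. Coefficients c are illustrative placeholders."""
    dt = t_i - t_j
    return (t_i + c[1] * dt + c[2] * dt ** 2 + c[0]
            + (c[3] + c[4] * w) * (1.0 - eps_mean)
            + (c[5] + c[6] * w) * d_eps)

# brightness temperatures 295 K / 293 K, 2 g/cm^2 water vapour,
# mean emissivity 0.98, channel emissivity difference 0.005
lst = split_window_lst(295.0, 293.0, 2.0, 0.98, 0.005)
print(round(lst, 2))  # 298.73
```

The correction terms grow with the two-channel difference (atmospheric absorption) and with departures of emissivity from unity, which is why the same form can be recalibrated for both satellites compared in the study.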
Procedia PDF Downloads 75
25768 An Industrial Workplace Alerting and Monitoring Platform to Prevent Workplace Injury and Accidents
Authors: Sanjay Adhikesaven
Abstract:
Workplace accidents are a critical problem causing many deaths, injuries, and financial losses. Climate change, driven in part by global warming, also has a severe impact on industrial workers. To reduce such casualties, it is important to proactively find unsafe environments where injuries could occur, by detecting the use of personal protective equipment (PPE) and identifying unsafe activities. Thus, we propose an industrial workplace alerting and monitoring platform that detects PPE use and classifies unsafe activity in group settings involving multiple humans and objects over a long period of time. Our proposed method is the first to analyze prolonged actions involving multiple people or objects, and it benefits from combining pose estimation with PPE detection in one platform. Additionally, we propose the first open-source annotated data set of video from industrial workplaces, annotated with action classifications and detected PPE. The proposed system can be implemented within the surveillance cameras already present in industrial settings, making it a practical and effective solution.
Keywords: computer vision, deep learning, workplace safety, automation
Procedia PDF Downloads 103
25767 Designing Creative Events with Deconstructivism Approach
Authors: Maryam Memarian, Mahmood Naghizadeh
Abstract:
Deconstruction is an approach that is entirely incompatible with the traditional prevalent architecture: it attempts to place architecture in sharp contrast with the events it opposes, attends to the neglected and missing aspects of architecture, and deconstructs its stable structures. It also proceeds recklessly beyond the existing frameworks and intends to create a different and more efficient prospect for space. The aim of deconstruction architecture is to satisfy both prospective and retrospective visions, taking into account all tastes of the present in order to transcend time. Likewise, it ventures to fragment the facts and symbols of the past and extract from them new concepts that coincide with today's circumstances. Since this approach attempts to surpass the limits of the prevalent architecture, it can be employed to design places in which creative events occur and imagination and ambition flourish. Thought-provoking artistic events can grow and mature in such places and be represented in the best way possible to all people. The concept of event proposed in the plan grows out of the interaction between space and creation. In addition to triggering surprise and strong impressions, it is also a bold journey into the suspended realms of traditional conflicts in architecture, such as architecture-landscape, interior-exterior, center-margin, product-process, and stability-instability. In this project, recognition and organizing first take place through an interpretive-historical research method, by examining the inputs and collecting data. The obtained data are then evaluated using deductive reasoning and eventually interpreted.
Given that the research topic is in its infancy, with no similar case in Iran and only a limited number of corresponding instances across the world, the selected topic helps to shed light on unrevealed and neglected aspects of architecture. Similarly, criticizing, investigating and comparing specific, highly prized cases in other countries with the project under study can serve as an introduction to this architectural style.
Keywords: anti-architecture, creativity, deconstruction, event
Procedia PDF Downloads 322
25766 The Critical Relevance of Credit and Debt Data in Household Food Security Analysis: The Risks of Ineffective Response Actions
Authors: Siddharth Krishnaswamy
Abstract:
Problem Statement: Currently, when analyzing household food security, the most commonly studied food access indicators are household income and expenditure. Larger studies do take into account other indices such as credit and employment, but these are baseline studies and by definition are conducted infrequently. Food security analysis for access is usually dedicated to analyzing income and expenditure indicators, and both of these indicators are notoriously inconsistent. Yet this data can very often end up being the basis on which household food access is calculated and, by extension, be used for decision making. Objectives: This paper argues that along with income and expenditure, credit and debt information should be collected so that an accurate analysis of household food security, and in particular food access, can be determined. Because this information is not routinely collected and analyzed, the actual situation is often "masked": a household's food access and food availability patterns may appear adequate mainly as a result of borrowing, and may even reflect a long-term dependency (a debt cycle). In other words, such a household is, in reality, worse off than it appears, a fact masked by its performance on basic access indicators. Procedures/methodologies/approaches: Existing food security data sets collected in 2005 in Azerbaijan, in 2010 across Myanmar and in 2014-15 across Uganda were used to support the theory that analyzing household income and expenditure alone, versus analyzing them in addition to data on credit and borrowing patterns, results in an entirely different picture of household food access. Furthermore, the data analyzed depicts food consumption patterns across groups of households and relates these to the extent of dependency on credit, i.e. households borrowing money in order to meet food needs.
Finally, response options based on analyzing only income and expenditure, and response options based on income, expenditure, credit, and borrowing, from the same geographical area of operation, are studied and discussed. Results: The purpose of this work was to see if existing methods of household food security analysis could be improved. It is hoped that food security analysts will collect household-level information on credit and debt and analyze it against income, expenditure and consumption patterns. This will help determine if a household's food access and availability depend on unsustainable strategies such as borrowing money for food or carrying sustained debts. Conclusions: The results clearly show how much relevant information is missing from food access analysis if the debt and borrowing of the household are not analyzed along with the food access indicators that are usually analyzed, and the serious repercussions this has on programmatic response and interventions.
Keywords: analysis, food security indicators, response, resilience analysis
Procedia PDF Downloads 332
25765 Fatigue Life Estimation Using N-Code for Drive Shaft of Passenger Vehicle
Authors: Tae An Kim, Hyo Lim Kang, Hye Won Han, Seung Ho Han
Abstract:
The drive shaft of a passenger vehicle transmits the engine torque from the gearbox and differential gears to the wheels. It must also compensate for all variations in angle or length resulting from manoeuvring and deflection, for perfect synchronization between joints. Torsional fatigue failures occur frequently at the connection parts of the spline joints at the end of the drive shaft. In this study, the fatigue life of a passenger-vehicle drive shaft was estimated using finite element analysis. The commercial software n-Code was applied under twisting load conditions, i.e. 0~134 kgf·m and 0~188 kgf·m, in which the shear strain range-fatigue life relationship was taken into account, considering the Signed Shear method, the Smith-Watson-Topper equation, the Neuber-Hoffmann-Seeger method, the size sensitivity factor and the surface roughness effect. The estimated fatigue life was verified by a twisting load test of the real drive shaft in a test rig. (Human Resource Training Project for Industry Matched R & D, KIAT, N036200004.)
Keywords: drive shaft, fatigue life estimation, passenger vehicle, shear strain range-fatigue life relationship, torsional fatigue failure
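The shear strain range-fatigue life relationship used above belongs to the strain-life family of models. Below is a minimal sketch of a uniaxial strain-life (Coffin-Manson-Basquin) solver inverted by bisection; the material constants are illustrative steel-like values, not the paper's drive-shaft properties, and the shear-specific variants (Signed Shear, SWT correction) applied in n-Code are not reproduced here.

```python
import math

# Illustrative steel-like strain-life constants (assumptions, not the
# paper's material data)
E     = 206_000.0  # elastic modulus, MPa
SIG_F = 1000.0     # fatigue strength coefficient, MPa
B     = -0.09      # fatigue strength exponent
EPS_F = 0.59       # fatigue ductility coefficient
C     = -0.58      # fatigue ductility exponent

def strain_amplitude(n_rev):
    """Strain amplitude sustainable for n_rev reversals to failure
    (elastic Basquin term plus plastic Coffin-Manson term)."""
    return SIG_F / E * n_rev ** B + EPS_F * n_rev ** C

def life_reversals(eps_a, lo=1.0, hi=1e9):
    """Invert the strain-life curve for life by bisection in log space;
    the curve is monotonically decreasing in life, so bisection is safe."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if strain_amplitude(mid) > eps_a:
            lo = mid   # material survives more reversals at this strain
        else:
            hi = mid
    return mid

n1 = life_reversals(0.004)
n2 = life_reversals(0.002)
print(n1 < n2)  # higher strain amplitude -> shorter life
```

In practice a solver like n-Code evaluates this kind of relation per finite element, after converting the FE shear strain ranges and applying mean-stress, size and surface-roughness corrections.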
Procedia PDF Downloads 276
25764 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers
Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran
Abstract:
With the exponential growth of social networks and video streaming and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increased amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, where the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, the determination of a properly excited optical field inside the MM fiber core is among the key parameters when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, decreases insertion losses (IL) and achieves effective modal bandwidth (EMB). The main parameter in this case is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent differences in mode-field distribution. In this paper, we present a detailed investigation and measurements of the mode field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges.
Furthermore, we focused on particular defects and errors that can realistically occur, such as eccentricity, connector shift or dust; these were simulated and measured, and their effect on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
Keywords: optical fiber, multi-mode, data centers, encircled flux
Procedia PDF Downloads 377
25763 Institutional and Economic Determinants of Foreign Direct Investment: Comparative Analysis of Three Clusters of Countries
Authors: Ismatilla Mardanov
Abstract:
There are three types of countries: the first is willing to attract foreign direct investment (FDI) in enormous amounts and does whatever it takes to make this happen; therefore, FDI pours into such countries. In the second cluster of countries, even if the country is suffering tremendously from a shortage of investments, the governments are hesitant to attract investments because they are in the hands of local oligarchs/cartels; therefore, FDI inflows are moderate to low in such countries. The third type is countries whose companies prefer investing in the most efficient locations globally and are hesitant to invest in the homeland. Sorting countries into such clusters, the present study examines the essential institutional and economic factors that make these countries different. Past literature has discussed various determinants of FDI in all kinds of countries, but it did not classify countries based on government motivation, institutional setup, and economic factors. A specific approach to each target country is vital for corporate foreign direct investment risk analysis and decisions. The research questions are: 1. What specific institutional and economic factors paint the pictures of the three clusters? 2. What specific institutional and economic factors are determinants of FDI? 3. Which of the determinants are endogenous and which are exogenous variables? 4. How can institutions and economic and political variables impact corporate investment decisions? Hypothesis 1: In the first type, country institutions and economic factors will be favorable for FDI. Hypothesis 2: In the second type, even if country economic factors favor FDI, institutions will not. Hypothesis 3: In the third type, even if country institutions favor FDI, economic factors will not favor domestic investments; therefore, FDI outflows occur in large amounts. Methods: Data come from open sources of the World Bank, the Fraser Institute, the Heritage Foundation, and other reliable sources.
The dependent variable is FDI inflows. The independent variables are institutions (economic and political freedom indices) and economic factors (natural, material, and labor resources, government consumption, infrastructure, minimum wage, education, unemployment, tax rates, consumer price index, inflation, and others), whose endogeneity or exogeneity is tested in the instrumental variable estimation. Political rights and civil liberties are used as instrumental variables. Results indicate that in the first type, both country institutions and economic factors, specifically labor and logistics/infrastructure/energy intensity, are favorable for potential investors. In the second category of countries, the risk of loss of assets is very high due to governments hijacked by local oligarchs/cartels/special interest groups. In the third category of countries, the local economic factors are unfavorable for domestic investment even if the institutions are well acceptable. Cluster analysis and instrumental variable estimation were used to reveal cause-effect patterns in each of the clusters.
Keywords: foreign direct investment, economy, institutions, instrumental variable estimation
Procedia PDF Downloads 161
25762 Practicing Inclusion for Hard of Hearing and Deaf Students in Regular Schools in Ethiopia
Authors: Mesfin Abebe Molla
Abstract:
This research aims to examine the practices of inclusion of hard of hearing and deaf students in regular schools. It also focuses on exploring strategies for hard of hearing and deaf (HH-D) students to benefit optimally from inclusion. A concurrent mixed methods research design was used to collect quantitative and qualitative data. The instruments used to gather data for this study were a questionnaire, semi-structured interviews, and observations. A total of 102 HH-D students and 42 primary and high school teachers were selected using a simple random sampling technique as participants for the quantitative data. A non-probability sampling technique was also employed to select 14 participants (4 school principals, 6 teachers and 4 parents of HH-D students), who were interviewed to collect qualitative data. Descriptive and inferential statistical techniques (independent sample t-test, one-way ANOVA and multiple regressions) were employed to analyze the quantitative data. Qualitative data were analyzed by theme analysis. The findings reported strong individual commitment and effort by principals, teachers and parents toward practicing the inclusion of HH-D students effectively; however, most of the core values of inclusion were missing in both schools. Most of the teachers (78.6%) and HH-D students (75.5%) had negative attitudes and considerable reservations about the feasibility of inclusion of HH-D students in both schools. Furthermore, there was a statistically significant difference in attitude toward inclusion between the two schools' teachers, and between teachers who had and had not taken additional training in inclusive education and sign language. The study also indicated a statistically significant difference in attitude toward inclusion between hard of hearing and deaf students.
However, the overall contribution of the demographic variables of teachers and HH-D students to their attitude toward inclusion was not statistically significant. The findings also showed that HH-D students did not have access to a modified curriculum that would maximize their abilities and help them learn together with their hearing peers. In addition, there is no clear and adequate direction on the medium of instruction. Poor school organization and management; lack of commitment, financial resources and collaboration; teachers' inadequate training in inclusive education (IE) and sign language; large class sizes; inappropriate assessment procedures; lack of trained deaf adult personnel who can serve as role models for HH-D students; and lack of involvement of parents and community members were some of the major factors affecting the practice of inclusion of HH-D students. Finally, based on the findings, recommendations are made to improve the practice of inclusion of HH-D students and to make it an integrated part of Ethiopian education.
Keywords: deaf, hard of hearing, inclusion, regular schools
Procedia PDF Downloads 345
25761 Study on Errors in Estimating the 3D Gaze Point for Different Pupil Sizes Using Eye Vergences
Authors: M. Pomianek, M. Piszczek, M. Maciejewski
Abstract:
Binocular eye tracking technology is increasingly used in industry, entertainment and marketing analysis. In the case of virtual reality, eye tracking systems are already the basis for user interaction with the environment. In such systems, high accuracy in determining the user's eye fixation point is very important due to the specificity of the virtual reality head-mounted display (HMD). Often, however, unknown errors occur in the eye tracking technology used, as well as errors resulting from the positioning of the devices in relation to the user's eyes. But can the virtual environment itself influence estimation errors? The paper presents mathematical analyses and empirical studies of the determination of the fixation point and of the errors resulting from changes in pupil size in response to the intensity of the displayed scene. The article covers both static laboratory tests and tests on a real user. Based on the research results, optimization solutions are proposed that would reduce gaze estimation errors. The studies show that errors in estimating the fixation point can be minimized both by improving the pupil positioning algorithm in the video image and by using more precise methods to calibrate the eye tracking system in three-dimensional space.
Keywords: eye tracking, fixation point, pupil size, virtual reality
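The abstract does not give the authors' equations, but a common vergence-based estimate of the 3D fixation point intersects the two gaze rays: since measured rays rarely cross exactly, the midpoint of the shortest segment between them is used. The eye positions and target below are illustrative values (64 mm interpupillary distance), not the paper's setup.

```python
import numpy as np

def triangulate_gaze(o_left, d_left, o_right, d_right):
    """Return the midpoint of the shortest segment between two gaze rays
    (origins o, unit directions d) -- a standard vergence-based estimate
    of the 3D fixation point."""
    w0 = o_left - o_right
    a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
    d, e = d_left @ w0, d_right @ w0
    denom = a * c - b * b            # approaches 0 for parallel rays
    t1 = (b * e - c * d) / denom     # parameter along the left ray
    t2 = (a * e - b * d) / denom     # parameter along the right ray
    p1 = o_left + t1 * d_left
    p2 = o_right + t2 * d_right
    return 0.5 * (p1 + p2)

# eyes 64 mm apart, fixating a point 0.8 m away (coordinates in metres)
target = np.array([0.05, 0.02, 0.80])
o_l, o_r = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
d_l = (target - o_l) / np.linalg.norm(target - o_l)
d_r = (target - o_r) / np.linalg.norm(target - o_r)
print(triangulate_gaze(o_l, d_l, o_r, d_r))  # recovers the target point
```

The geometry also explains the paper's concern with pupil-size effects: a small angular bias in either measured ray direction shifts the estimated depth, and the shift grows quadratically with fixation distance relative to the interpupillary baseline.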
Procedia PDF Downloads 133
25760 Statistical Data Analysis of Migration Impact on the Spread of HIV Epidemic Model Using Markov Monte Carlo Method
Authors: Ofosuhene O. Apenteng, Noor Azina Ismail
Abstract:
Over the last several years, concern has developed over how to minimize the spread of the HIV/AIDS epidemic in many countries. The AIDS epidemic has tremendously stimulated the development of mathematical models of infectious diseases, and the transmission dynamics of HIV infection that eventually develops into AIDS has played a pivotal role in the building of such models. Since the initial HIV and AIDS models introduced in the 1980s, various improvements have been made in how HIV/AIDS frameworks are modeled. In this paper, we present the impact of migration on the spread of HIV/AIDS. The epidemic model is formulated as a system of nonlinear differential equations to supplement the statistical approach. The model is calibrated using HIV incidence data from Malaysia between 1986 and 2011. Bayesian inference based on Markov Chain Monte Carlo is used to validate the model by fitting it to the data and to estimate the unknown parameters of the model. The results suggest that migrants who stay for a long time contribute to the spread of HIV. The model also indicates that susceptible individuals become infected and move to the HIV compartment at a rate that is greater than the removal rate from the HIV compartment to the AIDS compartment. The disease-free steady state is unstable, since the basic reproduction number is 1.627309. This is a big concern and not a good indicator from the public health point of view, since the aim is to stabilize the epidemic at the disease-free equilibrium.
Keywords: epidemic model, HIV, MCMC, parameter estimation
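The compartmental structure described above can be sketched as a minimal susceptible-HIV-AIDS system. This is an illustrative model, not the paper's: the parameter values below are assumptions (the paper estimates its parameters from Malaysian data by MCMC), and migration is folded into a single recruitment term.

```python
# Illustrative parameters; not the paper's fitted Malaysian values
LAMBDA = 100.0   # recruitment into the susceptibles (incl. migration), per year
BETA   = 0.5     # transmission rate (standard incidence)
SIGMA  = 0.1     # progression rate, HIV compartment -> AIDS compartment
MU     = 0.02    # natural removal rate
D_A    = 0.3     # additional AIDS-related death rate

def r0(beta=BETA, sigma=SIGMA, mu=MU):
    """Basic reproduction number for this S-H-A model with standard
    incidence: transmission over total outflow from the HIV compartment."""
    return beta / (sigma + mu)

def simulate(years=50.0, dt=0.01):
    """Forward-Euler integration of dS/dt, dH/dt, dA/dt starting near
    the disease-free state with one initial infection."""
    s, h, a = LAMBDA / MU - 1.0, 1.0, 0.0
    for _ in range(int(years / dt)):
        n = s + h + a
        new_inf = BETA * s * h / n
        s += dt * (LAMBDA - new_inf - MU * s)
        h += dt * (new_inf - (SIGMA + MU) * h)
        a += dt * (SIGMA * h - (MU + D_A) * a)
    return s, h, a

print(round(r0(), 3))   # 4.167 here: > 1, so the infection invades
s, h, a = simulate()
```

With R0 > 1 the disease-free steady state is unstable, which is exactly the interpretation the abstract gives to its fitted value of 1.627309.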
Procedia PDF Downloads 602
25759 The Current Situation and Perspectives of Electricity Demand and Estimation of Carbon Dioxide Emissions and Efficiency
Abstract:
This article presents the current and future energy situation in Libya. Electric power efficiency and operating hours in power plants are evaluated from 2005 to 2010, and carbon dioxide emissions are estimated for most of the power plants. In 2005, the efficiency of steam power plants ranged from 20% to 28%, while the efficiency of gas turbine power plants ranged between 9% and 25%, which can be considered low. However, efficiency improvements have been clearly observed in some power plants from 2008 to 2010, especially in the North Benghazi and west Tripoli power plants; in fact, these power plants have been converted to combined cycle. The efficiency of the North Benghazi power plant has increased from 25% to 46.6%, while in Tripoli it has increased from 22% to 34%. On the other hand, no efficiency improvement has been observed in the gas turbine power plants. Compared to the quantity of fuel used, the carbon dioxide emissions resulting from electricity generation plants were very high. Finally, the energy demand has been estimated in terms of the maximum load and the annual load factor (i.e., the ratio between the output power and the installed power).
Keywords: power plant, efficiency improvement, carbon dioxide emissions, energy situation in Libya
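The two ratios the abstract works with, thermal efficiency and annual load factor, can be computed directly from plant-level totals. The figures below are illustrative, not the paper's Libyan data.

```python
def load_factor(energy_mwh, installed_mw, hours=8760):
    """Annual load factor: average output power divided by installed
    power (equivalently, energy generated over energy at full capacity)."""
    return energy_mwh / (installed_mw * hours)

def thermal_efficiency(energy_mwh, fuel_gj):
    """Net electrical output over fuel energy input (1 MWh = 3.6 GJ)."""
    return energy_mwh * 3.6 / fuel_gj

energy = 1_500_000.0      # MWh generated in a year (illustrative)
installed = 400.0         # MW installed capacity (illustrative)
fuel = 18_000_000.0       # GJ of fuel burned (illustrative)

print(round(load_factor(energy, installed), 3))    # 0.428
print(round(thermal_efficiency(energy, fuel), 3))  # 0.3
```

A combined-cycle conversion raises the second ratio by recovering gas-turbine exhaust heat in a steam cycle, which matches the jump the abstract reports for North Benghazi (25% to 46.6%).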
Procedia PDF Downloads 478
25758 An Integrated Label Propagation Network for Structural Condition Assessment
Authors: Qingsong Xiong, Cheng Yuan, Qingzhao Kong, Haibei Xiong
Abstract:
Deep-learning approaches driven by vibration responses have attracted growing attention in rapid structural condition assessment, while obtaining sufficient measured training data with corresponding labels is relatively costly and even inaccessible in practical engineering. This study proposes an integrated label propagation network for structural condition assessment, which diffuses labels from continuously generated measurements of the intact structure to unlabeled measurements from damage scenarios. The integrated network embeds damage-sensitive feature extraction by a deep autoencoder and pseudo-label propagation by optimized fuzzy clustering, whose architecture and mechanism are elaborated. With a sophisticated network design and specific strategies for improving performance, the present network extends the strengths of self-supervised representation learning, unsupervised fuzzy clustering and supervised classification algorithms into an integrated approach for assessing damage conditions. Both numerical simulations and full-scale laboratory shaking table tests of a two-story building structure were conducted to validate its capability of detecting post-earthquake damage. The identification accuracy of the present network was 0.95 in the numerical validations and 0.86 on average in the laboratory case studies. It should be noted that the whole training procedure of all models involved in the network does not rely on any labeled data from damage scenarios, but only on several samples of the intact structure, which indicates a significant advantage in model adaptability and feasible applicability in practice.
Keywords: autoencoder, condition assessment, fuzzy clustering, label propagation
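The pseudo-label propagation step can be sketched in miniature: cluster unlabeled feature vectors with fuzzy c-means, then let the cluster nearest the labeled intact samples inherit the "intact" label. This is an illustrative reconstruction, not the paper's optimized network; the 2D Gaussian blobs below stand in for autoencoder embeddings, which are an assumption of the sketch.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: returns soft memberships U (n x c) and the
    cluster centers; U[i, k] says how strongly sample i belongs to k."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = dist ** (-2.0 / (m - 1.0))       # closer -> stronger membership
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# feature vectors standing in for autoencoder embeddings: labeled intact
# samples cluster in one region, unlabeled measurements are mixed states
rng = np.random.default_rng(1)
intact = rng.normal([0.0, 0.0], 0.3, (50, 2))            # labeled "intact"
unlabeled = np.vstack([rng.normal([0.0, 0.0], 0.3, (50, 2)),
                       rng.normal([4.0, 4.0], 0.3, (50, 2))])
U, centers = fuzzy_c_means(unlabeled)
# propagate: the cluster whose center is nearest the labeled intact
# samples inherits "intact"; the remaining cluster becomes "damaged"
intact_k = int(np.argmin(np.linalg.norm(centers - intact.mean(axis=0), axis=1)))
pseudo = np.where(np.argmax(U, axis=1) == intact_k, "intact", "damaged")
print(pseudo[:2], pseudo[-2:])
```

The soft memberships U are what make the scheme usable downstream: low-confidence samples (memberships near 0.5) can be excluded before the pseudo-labels are fed to a supervised classifier.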
Procedia PDF Downloads 98
25757 Estimation of Aquifer Properties Using Pumping Tests: Case Study of Pydibhimavaram Industrial Area, Srikakulam, India
Authors: G. Venkata Rao, P. Kalpana, R. Srinivasa Rao
Abstract:
Adequate and reliable estimates of aquifer parameters are of utmost importance for the proper management of vital groundwater resources. At present, the groundwater is polluted because industrial waste is disposed of over the land, and the contaminants are transported through the aquifer from one area to another depending on the characteristics of the aquifer and of the contaminants. To understand contaminant transport, accurate estimation of aquifer properties is essential. Conventionally, these properties are estimated through pumping tests carried out on water wells. The occurrence and movement of groundwater in the aquifer are characteristically defined by the aquifer parameters. The pumping (aquifer) test is the standard technique for estimating various hydraulic properties of aquifer systems, viz. transmissivity (T), hydraulic conductivity (K), storage coefficient (S), etc., for which the graphical method is widely used. The study area for conducting the pumping test is the Pydibheemavaram industrial area near the coastal belt of Srikakulam, AP, India. The main objective of the present work is to estimate the aquifer properties for developing a contaminant transport model for the study area.
Keywords: aquifer, contaminant transport, hydraulic conductivity, industrial waste, pumping test
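One widely used graphical method of the kind the abstract refers to is the Cooper-Jacob straight-line analysis of a semilog drawdown plot. The sketch below implements its standard formulas; the test values are illustrative, not the study's field data.

```python
import math

def cooper_jacob(Q, ds_per_logcycle, t0, r, b):
    """Cooper-Jacob straight-line estimates from a semilog drawdown plot:
      T = 2.303 Q / (4 pi * ds)   transmissivity
      K = T / b                   hydraulic conductivity
      S = 2.25 T t0 / r^2         storage coefficient
    where Q is the pumping rate, ds the drawdown per log cycle of time,
    t0 the zero-drawdown time intercept, r the distance to the
    observation well and b the aquifer thickness."""
    T = 2.303 * Q / (4.0 * math.pi * ds_per_logcycle)
    return T, T / b, 2.25 * T * t0 / r ** 2

# illustrative values: 1000 m^3/day pumping rate, 0.5 m drawdown per log
# cycle, intercept t0 = 0.001 day, observation well 50 m away, 20 m
# aquifer thickness (not the study's measurements)
T, K, S = cooper_jacob(1000.0, 0.5, 0.001, 50.0, 20.0)
print(round(T, 1), round(K, 2), f"{S:.2e}")  # 366.5 18.33 3.30e-04
```

The method assumes a confined aquifer and late pumping times (small well-function argument); where those assumptions fail, a full Theis curve match is used instead.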
Procedia PDF Downloads 447
25756 Inverse Heat Conduction Analysis of Cooling on Run-Out Tables
Authors: M. S. Gadala, Khaled Ahmed, Elasadig Mahdi
Abstract:
In this paper, we introduce a gradient-based inverse solver to obtain the missing boundary conditions based on the readings of internal thermocouples. The results show that the method is very sensitive to measurement errors and becomes unstable when small time steps are used. Artificial neural networks are shown to be capable of capturing the whole thermal history on the run-out table but are not very effective in restoring the detailed behavior of the boundary conditions. They also behave poorly in nonlinear cases and where the boundary condition profile differs. GA and PSO are more effective in finding a detailed representation of the time-varying boundary conditions, as well as in nonlinear cases; however, their convergence takes longer. A variation of the basic PSO, called CRPSO, showed the best performance among the three versions. PSO also proved to be effective in handling noisy data, especially when its performance parameters were tuned. An increase in the self-confidence parameter was likewise found to be effective, as it increased the global search capabilities of the algorithm. RPSO was the most effective variation in dealing with noise, closely followed by CRPSO. The latter variation is recommended for inverse heat conduction problems, as it combines the efficiency and effectiveness required by these problems. Keywords: inverse analysis, function specification, neural networks, particle swarm, run-out table
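A minimal PSO sketch applied to a toy inverse problem — recovering a scalar "boundary condition" from one synthetic reading — illustrates the search mechanism; the CRPSO/RPSO variants discussed above are not reproduced, and all parameter values are generic defaults:

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Basic particle swarm: minimize f over R^dim, particles start in [-5, 5]."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# Toy inverse problem: find the heat flux q reproducing a "measured" reading
# generated by q_true = 3 (a stand-in for matching thermocouple temperatures).
measured = 3.0
best, err = pso(lambda q: (q[0] - measured) ** 2, dim=1)
```

In the actual inverse heat conduction setting, f would run a forward conduction simulation and return the squared mismatch with the thermocouple readings.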
Procedia PDF Downloads 241
25755 Estimation of Emanation Properties of Kimberlites and Host Rocks of Lomonosov Diamond Deposit in Russia
Authors: E. Yu. Yakovlev, A. V. Puchkov
Abstract:
The study is devoted to experimental work on the assessment of the emanation properties of kimberlites and host rocks of the Lomonosov diamond deposit of the Arkhangelsk diamondiferous province. The aim of the study is to estimate the factors influencing the formation of the radon field over kimberlite pipes. For the various types of rocks composing the kimberlite pipe and the near-pipe space, the following parameters were measured: porosity, density, radium-226 activity, activity of free radon, and emanation coefficient. The results showed that the largest amount of free radon is produced by the rocks of the near-pipe space, which are the Vendian host deposits and are characterized by high values of the emanation coefficient, radium activity, and porosity. The lowest values of these parameters are characteristic of vent-facies kimberlites, which limits the formation of free radon activity in the body of the pipe. The results of the experimental work confirm the prospects of using emanation methods for the prospecting of kimberlite pipes. Keywords: emanation coefficient, kimberlites, porosity, radon volumetric activity
Procedia PDF Downloads 139
25754 Weighted Rank Regression with Adaptive Penalty Function
Authors: Kang-Mo Jung
Abstract:
The use of regularization in statistical methods has become popular. The least absolute shrinkage and selection operator (LASSO) framework has become the standard tool for sparse regression. However, it is well known that the LASSO is sensitive to outliers and leverage points. We consider a new robust estimator composed of a weighted loss function of the pairwise differences of residuals and an adaptive penalty function regulating the tuning parameter for each variable. Rank regression is resistant to regression outliers, but not to leverage points. By adopting a weighted loss function, the proposed method is robust to leverage points of the predictor variables. Furthermore, the adaptive penalty function gives good statistical properties in variable selection, such as the oracle property and consistency. We develop an efficient algorithm to compute the proposed estimator using basic functions in the program R, with an optimal tuning parameter chosen by the Bayesian information criterion (BIC). Numerical simulation shows that the proposed estimator is effective for analyzing real and contaminated data sets. Keywords: adaptive penalty function, robust penalized regression, variable selection, weighted rank regression
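The pairwise-difference (rank) loss at the core of the method can be illustrated in a few lines — here in Python rather than the authors' R code, with toy data and a crude grid search instead of their algorithm, and without the adaptive penalty:

```python
import numpy as np

def rank_loss(beta, x, y, weights=None):
    """Jaeckel-type dispersion: sum over pairs of |e_i - e_j| for residuals e = y - beta*x.
    An optional pair-weight matrix would down-weight pairs involving leverage points."""
    e = y - beta * x
    diffs = np.abs(e[:, None] - e[None, :])
    if weights is not None:
        diffs = diffs * weights
    return diffs.sum() / 2   # each pair counted once

# Toy through-origin regression y = 2x with one response outlier.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = 2 * x
y[5] = 30.0                                   # outlier in the response
grid = np.linspace(0, 4, 401)
beta_rank = grid[np.argmin([rank_loss(b, x, y) for b in grid])]
beta_ols = (x @ y) / (x @ x)                  # least squares, pulled by the outlier
```

The rank estimate stays at the true slope 2 while least squares is dragged above 3, illustrating the outlier resistance claimed above.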
Procedia PDF Downloads 477
25753 Automatic Post Stroke Detection from Computed Tomography Images
Authors: C. Gopi Jinimole, A. Harsha
Abstract:
For detecting strokes, a Computed Tomography (CT) scan is preferred for imaging the abnormalities or infarction in the brain. Because of the problems in the window settings used to evaluate brain CT images, they perform very poorly in early-stage infarction detection. This paper presents an automatic estimation method for the window settings of CT images to obtain proper contrast of the hyper infarction present in the brain. In the proposed work, the window width is estimated automatically for each slice, and the window centre is changed to a new value of 31 HU, which is the average of the HU values of the grey matter and white matter in the brain. The automatic window width estimation is based on the average of the median of statistical central moments. With the new suggested window centre and the estimated window width, the hyper infarction or post-stroke regions in CT brain images are properly detected. The proposed approach assists radiologists in CT evaluation of early quantitative signs of delayed stroke, so that severe hemorrhage in the future can be prevented by providing timely medication to the patients. Keywords: computed tomography (CT), hyper infarction or post stroke region, Hounsfield Unit (HU), window centre (WC), window width (WW)
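The windowing operation itself is standard and easy to sketch. The 31 HU centre follows the abstract; the 80 HU width below is a hypothetical choice, since the paper estimates the width per slice:

```python
def apply_window(hu_values, wc=31, ww=80):
    """Map Hounsfield Units to the [0, 255] display range for
    window centre wc and window width ww."""
    lo, hi = wc - ww / 2, wc + ww / 2
    out = []
    for hu in hu_values:
        hu = min(max(hu, lo), hi)                   # clamp to the window
        out.append(round((hu - lo) / (hi - lo) * 255))
    return out

# White matter (~25 HU) and grey matter (~37 HU) average to ~31 HU,
# the window centre suggested in the abstract.
pixels = [-100, 25, 31, 37, 200]
display = apply_window(pixels)
```

Pixels below the window map to black, pixels above to white, and the grey/white matter range spreads over the middle of the display scale.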
Procedia PDF Downloads 203
25752 An Integrated 5G, Geomagnetic, and Inertial Measurement Unit Fusion Approach for Indoor Positioning
Authors: Chen Zhang, Wei He, Yue Jin, Zengshan Tian, Kaikai Liu
Abstract:
With the widespread adoption of the Internet of Things and smart devices, the demand for indoor positioning technology with high accuracy and robustness continues to grow. Traditional positioning methods such as fingerprinting, channel parameter estimation techniques (TDoA, AoA), and Pedestrian Dead Reckoning (PDR) each have their limitations. Fingerprinting is highly sensitive to environmental changes, channel parameter estimation is only effective in line-of-sight conditions, and PDR is prone to sensor errors and magnetic interference. To overcome these limitations, multisensor fusion-based positioning methods have become a mainstream solution. This paper proposes a dynamic positioning system that integrates 5G TDoA, geomagnetic fingerprinting, and PDR. The system uses 5G TDoA for high-precision starting point positioning, corrects PDR heading with geomagnetic declination, and refines PDR positioning accuracy using geomagnetic fingerprints. Experimental results demonstrate that this method improves positioning accuracy and stability in complex indoor environments, overcoming the limitations of traditional methods and providing a reliable indoor positioning solution. Keywords: 5G TDoA, magnetic fields, pedestrian dead reckoning, fusion location
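The heading-corrected PDR update at the heart of the fusion can be sketched as follows — a toy dead-reckoning step with a deliberately exaggerated, hypothetical declination (real declinations are a few degrees), not the paper's calibrated system:

```python
import math

def pdr_step(x, y, heading_deg, step_len, declination_deg=0.0):
    """One Pedestrian Dead Reckoning update: advance step_len metres along the
    magnetic heading corrected by the local geomagnetic declination."""
    h = math.radians(heading_deg + declination_deg)
    return x + step_len * math.sin(h), y + step_len * math.cos(h)

# Walk two 0.7 m steps that the magnetometer reports as due magnetic north;
# a (toy) +90 degree declination means the true track is due east.
x, y = 0.0, 0.0
for _ in range(2):
    x, y = pdr_step(x, y, heading_deg=0.0, step_len=0.7, declination_deg=90.0)
```

In the full system, the starting (x, y) would come from the 5G TDoA fix and the accumulated track would be further corrected against geomagnetic fingerprints.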
Procedia PDF Downloads 3
25751 E-Waste Generation in Bangladesh: Present and Future Estimation by Material Flow Analysis Method
Authors: Rowshan Mamtaz, Shuvo Ahmed, Imran Noor, Sumaiya Rahman, Prithvi Shams, Fahmida Gulshan
Abstract:
The last few decades have witnessed a phenomenal rise in the use of electrical and electronic equipment globally in our everyday life. As these items reach the end of their lifecycle, they turn into e-wastes and contribute to the waste stream. Bangladesh, in conformity with the global trend and due to its ongoing rapid growth, is also using electronics-based appliances and equipment at an increasing rate. This has caused a corresponding increase in the generation of e-wastes. Bangladesh is a developing country; its overall waste management system is not yet efficient, nor is it environmentally sustainable. Most of its solid wastes are disposed of in a crude way at dumping sites. The addition of e-wastes, which often contain toxic heavy metals, into its waste stream has made the situation more difficult and challenging. Assessing the generation of e-wastes is an important step towards addressing the challenges posed by e-wastes, setting targets, and identifying the best practices for their management. Understanding and proper management of e-wastes is a stated item of the Sustainable Development Goals (SDG) campaign, and Bangladesh is committed to fulfilling it. A better understanding and availability of reliable baseline data on e-wastes will help in preventing illegal dumping, promote recycling, and create jobs in the recycling sectors, and thus facilitate sustainable e-waste management. With this objective in mind, the present study has attempted to estimate the amount of e-wastes and their future generation trend in Bangladesh. To achieve this, sales data on eight selected electrical and electronic products (TV, Refrigerator, Fan, Mobile phone, Computer, IT equipment, CFL (Compact Fluorescent Lamp) bulbs, and Air Conditioner) have been collected from different sources.
Primary and secondary data on the collection, recycling, and disposal of e-wastes have also been gathered by questionnaire survey, field visits, interviews, and formal and informal meetings with the stakeholders. The Material Flow Analysis (MFA) method has been applied, and mathematical models have been developed in the present study to estimate e-waste amounts and their future trends up to the year 2035 for the eight selected electrical and electronic products. The end-of-life (EOL) method is adopted in the estimation. Model inputs are the products' annual sales/import data, past and future sales data, and average life span. From the model outputs, it is estimated that the generation of e-wastes in Bangladesh in 2018 is 0.40 million tons, and by 2035 the amount will be 4.62 million tons, with an average annual growth rate of 20%. Among the eight selected products, the amounts of e-waste generated from seven products are increasing, whereas only one product, the CFL bulb, shows a decreasing trend of waste generation. The average growth rate of e-waste from TV sets is the highest (28%), while those from fans and IT equipment are the lowest (11%). Field surveys conducted in the e-waste recycling sector also revealed that every year around 0.0133 million tons of e-wastes enter the recycling business in Bangladesh, which may increase in the near future. Keywords: Bangladesh, end of life, e-waste, material flow analysis
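The EOL calculation described above can be sketched in a few lines: units sold a lifespan ago become waste this year. All sales figures, lifespans, and unit weights below are hypothetical placeholders, not the study's data:

```python
def eol_waste_units(sales_by_year, lifespan, year):
    """End-of-life method: units sold `lifespan` years before `year` become waste."""
    return sales_by_year.get(year - lifespan, 0)

def total_waste_tons(products, year):
    """Sum EOL waste across products, converting units to tons via unit weight (kg)."""
    return sum(eol_waste_units(p["sales"], p["lifespan"], year) * p["unit_kg"] / 1000
               for p in products.values())

# Hypothetical sales (units), average lifespans (years), and unit weights (kg).
products = {
    "TV":     {"sales": {2008: 100_000, 2009: 120_000}, "lifespan": 10, "unit_kg": 20.0},
    "Mobile": {"sales": {2013: 500_000, 2014: 600_000}, "lifespan": 5,  "unit_kg": 0.2},
}
w2018 = total_waste_tons(products, 2018)
```

Refinements in practice replace the single lifespan with a lifespan distribution, so each sales cohort is spread over several waste years.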
Procedia PDF Downloads 203
25750 Use of Multistage Transition Regression Models for Credit Card Income Prediction
Authors: Denys Osipenko, Jonathan Crook
Abstract:
Because of the variety of cardholders' behaviour types and income sources, each consumer account can move between a number of states: inactive, transactor, revolver, delinquent, or defaulted, and each state requires an individual model for income prediction. Estimating the transition probabilities between statuses at the account level helps to avoid the memorylessness of the Markov chain approach. This paper investigates transition-probability estimation approaches to credit card income prediction at the account level. The key question of the empirical research is which approach gives more accurate results: multinomial logistic regression, or multistage conditional logistic regression with a binary target. Both models have shown moderate predictive power. Prediction accuracy for conditional logistic regression depends on the order of stages for the conditional binary logistic regression. On the other hand, multinomial logistic regression is easier to use and gives integrated estimates for all states without priorities. Further investigations can therefore concentrate on alternative modeling approaches such as discrete choice models. Keywords: multinomial regression, conditional logistic regression, credit account state, transition probability
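As a minimal illustration of account-level transition probabilities — simple empirical counting rather than the paper's multinomial or conditional logistic models, on toy account histories:

```python
from collections import Counter

def transition_probs(sequences):
    """Estimate P(next_state | current_state) by counting observed transitions
    across account state histories (states such as inactive, transactor,
    revolver, delinquent, defaulted)."""
    counts, totals = Counter(), Counter()
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[(cur, nxt)] += 1
            totals[cur] += 1
    return {pair: c / totals[pair[0]] for pair, c in counts.items()}

# Toy monthly state histories for three accounts.
histories = [
    ["transactor", "transactor", "revolver"],
    ["transactor", "revolver", "delinquent"],
    ["revolver", "revolver", "defaulted"],
]
P = transition_probs(histories)
```

A regression approach replaces these raw frequencies with probabilities conditioned on account covariates, which is what distinguishes the models compared in the paper.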
Procedia PDF Downloads 487
25749 Statistical Model to Examine the Impact of the Inflation Rate and Real Interest Rate on the Bahrain Economy
Authors: Ghada Abo-Zaid
Abstract:
Introduction: Oil is one of the main income sources in Bahrain. Low oil prices influence economic growth and the investment rate in Bahrain. For example, economic growth was 3.7% in 2012 and fell to 2.9% in 2015. The investment rate was 9.8% in 2012 and fell to 5.9% and -12.1% in 2014 and 2015, respectively. The inflation rate peaked at 3.3% in 2013. Objectives: The objective here is to build statistical models to examine the effect of the real interest rate and the inflation rate on economic growth in Bahrain from 2000 to 2018. Methods: This study is based on 18 years of data, and a multiple regression model is used for the analysis. All missing data are omitted from the analysis. Results: A regression model is used to examine the association between Gross National Product (GNP), the inflation rate, and the real interest rate. We found that (i) an increase in the real interest rate decreases GNP, and (ii) an increase in the inflation rate has no effect on economic growth in Bahrain, since the average inflation rate was almost 2%, which is considered low. Conclusion: The real interest rate has a significant impact on GNP in Bahrain, while the inflation rate does not show any negative influence on GNP, as it was not large enough to negatively affect the economic growth rate in Bahrain. Keywords: gross national product, egypt, regression model, interest rate
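A hedged sketch of such a multiple regression, fitted by ordinary least squares. The yearly figures are synthetic, constructed only to illustrate recovering a negative interest-rate coefficient, and are not Bahrain's data:

```python
import numpy as np

# Synthetic yearly data: real interest rate (%), inflation rate (%), GNP growth (%).
interest  = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
inflation = np.array([2.1, 1.9, 2.0, 2.2, 1.8, 2.0])
gnp = 5.0 - 0.5 * interest + 0.1 * inflation   # built-in negative interest effect

# Design matrix with an intercept column; solve the normal equations by least squares.
X = np.column_stack([np.ones_like(interest), interest, inflation])
beta, *_ = np.linalg.lstsq(X, gnp, rcond=None)
intercept, b_interest, b_inflation = beta
```

With real data, one would also inspect standard errors and p-values (e.g. via a statistics package) before interpreting the signs of the coefficients.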
Procedia PDF Downloads 167
25748 Determining the Width and Depths of Cut in Milling on the Basis of a Multi-Dexel Model
Authors: Jens Friedrich, Matthias A. Gebele, Armin Lechler, Alexander Verl
Abstract:
Chatter vibrations and process instabilities are the most important factors limiting the productivity of the milling process. Chatter can lead to damage of the tool, the part, or the machine tool. Therefore, the estimation and prediction of process stability are very important. The process stability depends on the spindle speed, the depth of cut, and the width of cut. In milling, the process conditions are defined in the NC program. While the spindle speed is directly coded in the NC program, the depth and width of cut are unknown. This paper presents a new simulation-based approach for the prediction of the depth and width of cut of a milling process. The prediction is based on a material removal simulation with an analytically represented tool shape and a multi-dexel approach for the workpiece. The new calculation method allows the direct estimation of the depth and width of cut, which are the influencing parameters of the process stability, instead of the removed volume as in existing approaches. This knowledge can be used to predict the stability of new, unknown parts. Moreover, with an additional vibration sensor, the stability lobe diagram of a milling process can be estimated and improved based on the estimated depth and width of cut. Keywords: dexel, process stability, material removal, milling
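The dexel idea can be caricatured in one dimension: each dexel stores a surface height, and the engaged depth and width of cut follow from comparing those heights with the tool's position. This is a deliberately simplified sketch with illustrative values, not the paper's multi-dexel implementation:

```python
def depth_of_cut(dexel_heights, tool_bottom):
    """Depth of cut per dexel: material standing above the tool's bottom plane
    is removed; dexels below the tool are not engaged."""
    return [max(h - tool_bottom, 0.0) for h in dexel_heights]

def width_of_cut(cuts, dexel_spacing):
    """Width of cut: number of engaged dexels times the lateral dexel spacing."""
    return sum(1 for c in cuts if c > 0) * dexel_spacing

heights = [10.0, 10.0, 10.0, 8.0, 8.0]   # workpiece surface heights (mm) per dexel
cuts = depth_of_cut(heights, tool_bottom=9.0)
w = width_of_cut(cuts, dexel_spacing=0.5)
```

A real multi-dexel model uses three orthogonal dexel fields and an analytic tool envelope, but the engagement logic per dexel is the same comparison.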
Procedia PDF Downloads 525
25747 Concept for Planning Sustainable Factories
Authors: T. Mersmann, P. Nyhuis
Abstract:
In the current economic climate, it is generally no longer sufficient for many businesses to pursue exclusively economic interests. Instead, integrating ecological and social goals into the corporate targets is becoming ever more important. However, the holistic integration of these new goals is missing from current factory planning approaches. This article describes the conceptual framework for a planning methodology for sustainable factories. To this end, the description of the key areas for action is followed by a description of the principal components for the systematization of sustainability for factories and their stakeholders. Finally, a conceptual framework is presented which integrates the formulated components into an established factory planning procedure. Keywords: factory planning, stakeholder, systematization, sustainability
Procedia PDF Downloads 455
25746 Mining Big Data in Telecommunications Industry: Challenges, Techniques, and Revenue Opportunity
Authors: Hoda A. Abdel Hafez
Abstract:
Mining big data represents a big challenge nowadays. Much research is concerned with mining massive amounts of data and big data streams. Mining big data faces many challenges, including scalability, speed, heterogeneity, accuracy, provenance, and privacy. In the telecommunication industry, mining big data is like mining for gold; it represents a big opportunity for maximizing the revenue streams in this industry. This paper discusses the characteristics of big data (volume, variety, velocity, and veracity), data mining techniques and tools for handling very large data sets, mining big data in telecommunication, and the benefits and opportunities gained from them. Keywords: mining big data, big data, machine learning, telecommunication
Procedia PDF Downloads 410
25745 Maximum Deformation Estimation for Reinforced Concrete Buildings Using Equivalent Linearization Method
Authors: Chien-Kuo Chiu
Abstract:
In displacement-based seismic design and evaluation, the equivalent linearization method is one of the approximation methods used to estimate the maximum inelastic displacement response of a system. In this study, the accuracy of two equivalent linearization methods is investigated. The investigation covers three soil conditions in Taiwan (Taipei Basin 1, 2, and 3) and five building heights (H_r = 10, 20, 30, 40, and 50 m). The first method is the Taiwan equivalent linearization method (TELM), which was proposed based on the Japanese equivalent linearization method considering the modification factor α_T = 0.85. On the basis of the study by Lin and Miranda, the second method is proposed with some modifications considering Taiwanese soil conditions. This study shows that the Taiwanese equivalent linearization method gives better estimates than the modified Lin and Miranda method (MLM). The error indices for the Taiwanese equivalent linearization method are 16%, 13%, and 12% for Taipei Basin 1, 2, and 3, respectively. Furthermore, a ductility demand spectrum of a single-degree-of-freedom (SDOF) system is presented in this study as a guide for engineers to estimate the ductility demand of a structure. Keywords: displacement-based design, ductility demand spectrum, equivalent linearization method, RC buildings, single-degree-of-freedom
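As background, one common textbook form of equivalent linearization for an elastoplastic SDOF system — not necessarily the exact TELM or MLM formulas — replaces the nonlinear system with a secant-stiffness period and an equivalent damping ratio derived from the ductility μ:

```python
import math

def equivalent_linear(T0, mu, gamma=0.25, h0=0.05):
    """Textbook equivalent linearization for an elastoplastic SDOF system:
    secant period  T_eq = T0 * sqrt(mu)
    equiv. damping h_eq = gamma * (1 - 1/sqrt(mu)) + h0
    (gamma and the initial damping h0 are common default values, assumptions here)."""
    T_eq = T0 * math.sqrt(mu)
    h_eq = gamma * (1 - 1 / math.sqrt(mu)) + h0
    return T_eq, h_eq

# Elastic period 1.0 s, ductility demand mu = 4.
T_eq, h_eq = equivalent_linear(T0=1.0, mu=4.0)
```

The maximum displacement is then read from an elastic response spectrum at (T_eq, h_eq); method-specific modification factors such as α_T = 0.85 scale this estimate.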
Procedia PDF Downloads 162
25744 The Role of Digital Technology in Crime Prevention
Authors: Muhammad Ashfaq
Abstract:
Main theme: The prime focus of this study is the role of digital technology in crime prevention, with special focus on the Cellular Forensic Unit, Capital City Police Peshawar, Khyber Pakhtunkhwa, Pakistan. Objective(s) of the study: The prime objective of this study is to provide statistics, strategies, and patterns of analysis used for crime prevention in the Cellular Forensic Unit of Capital City Police Peshawar, Khyber Pakhtunkhwa, Pakistan. Research method and procedure: A qualitative research method was used, obtaining secondary data from the research wing and the Information Technology (IT) section of the Peshawar police. Content analysis was the method used for conducting the study. This study is delimited to the Capital City Police and the Cellular Forensic Unit Peshawar, KP, Pakistan. Major finding(s): It is evident that the old traditional approach will never provide solutions for better management in controlling crime. The best way to control crime and promote proactive policing is to adopt new technologies. The study reveals that technology has made the police more effective and vigilant compared to traditional policing. Heinous crimes like abduction, missing persons, snatching, burglaries, and blind murder cases are now traceable with the help of technology. Recommendation(s): The analysis of the data suggests that Information Technology (IT) experts should be recruited, along with research analysts, to assist and facilitate the operational as well as investigation units of the police in a timely manner. A mobile locator should be provided to the Cellular Forensic Unit to apprehend criminals in time, and the latest digital analysis software should be provided to equip the Cellular Forensic Unit. Keywords: crime prevention, digital technology, Pakistan, police
Procedia PDF Downloads 65
25743 Least Squares Solution for Linear Quadratic Gaussian Problem with Stochastic Approximation Approach
Authors: Sie Long Kek, Wah June Leong, Kok Lay Teo
Abstract:
The linear quadratic Gaussian model is a standard mathematical model for the stochastic optimal control problem. The combination of linear quadratic estimation and the linear quadratic regulator allows the state estimation and the optimal control policy to be designed separately; this is known as the separation principle. In this paper, an efficient computational method is proposed to solve the linear quadratic Gaussian problem. In our approach, the Hamiltonian function is defined, and the necessary conditions are derived. In addition, the output error is defined, and a least-squares optimization problem is introduced. By determining the first-order necessary condition, the gradient of the sum of squares of the output error is established. From this point of view, the stochastic approximation approach is employed to update the optimal control policy. Once a given tolerance is reached, the iteration procedure is stopped, and the optimal solution of the linear quadratic Gaussian problem is obtained. For illustration, an example of the linear quadratic Gaussian problem is studied. The result shows the efficiency of the proposed approach. In conclusion, the applicability of the proposed approach for solving the linear quadratic Gaussian problem is clearly demonstrated. Keywords: iteration procedure, least squares solution, linear quadratic Gaussian, output error, stochastic approximation
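The gradient-based update described above can be sketched as a Robbins-Monro stochastic approximation iteration on a toy scalar output-error problem; the quadratic objective is purely illustrative, not the paper's LQG model:

```python
import random

def stochastic_approximation(grad, u0, iters=2000, a0=0.5, seed=0):
    """Robbins-Monro iteration: u_{k+1} = u_k - a_k * noisy_grad(u_k),
    with decaying gains a_k = a0 / (k + 1) so the noise averages out."""
    rng = random.Random(seed)
    u = u0
    for k in range(iters):
        g = grad(u) + rng.gauss(0, 0.1)    # gradient observed with noise
        u -= (a0 / (k + 1)) * g
    return u

# Toy least-squares output error J(u) = (u - 2)^2 with gradient 2*(u - 2);
# the optimal "control" is u* = 2.
u_star = stochastic_approximation(lambda u: 2 * (u - 2), u0=0.0)
```

In the paper's setting, the gradient would come from the first-order necessary condition of the Hamiltonian, and u would be the control policy parameters.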
Procedia PDF Downloads 188
25742 Kindergarten Children’s Reactions to the COVID-19 Pandemic: Creating a Sense of Coherence
Authors: Bilha Paryente, Roni Gez Langerman
Abstract:
Background and Objectives: The current study focused on how kindergarten children have experienced the COVID-19 pandemic. The main goals were understanding children’s emotions, coping strategies, and thoughts regarding the presence of the COVID-19 virus in their daily lives, using the salutogenic approach to study their sense of coherence, and promoting relevant professional instruction. Design and Method: Semi-structured in-depth interviews were held with 130 five- to six-year-old children, with an equal number of boys and girls. All of the children were recruited from kindergartens affiliated with the state's secular education system. Results: Data were structured into three themes: 1) the child’s perception of the pandemic as manageable through meaningful accompanying and missing figures; 2) the child’s comprehension of the virus as dangerous, age-differentiating, and contagious; 3) the child’s emotional processing of the pandemic as arousing fear of death and, through images, as thorny and as a monster. Conclusions: The results demonstrate the young children’s sense of coherence, characterized as extrapersonal perception, interpersonal coping, and intrapersonal emotional processing, and the need for greater acknowledgement of informed interventions by parents and educators that could give children a partial feeling of the adults’ awareness of their needs. Keywords: kindergarten children, continuous stress, COVID-19, salutogenic approach
Procedia PDF Downloads 177
25741 Nonparametric Path Analysis with Truncated Spline Approach in Modeling Rural Poverty in Indonesia
Authors: Usriatur Rohma, Adji Achmad Rinaldo Fernandes
Abstract:
Nonparametric path analysis is a statistical method that does not rely on the assumption that the form of the curve is known. The purpose of this study is to determine the best nonparametric truncated spline path function between linear and quadratic polynomial degrees with 1, 2, and 3 knot points, and to determine the significance of the estimated best function in the model of the effect of population migration and agricultural economic growth on rural poverty through the unemployment-rate variable, using the t-test statistic at the jackknife resampling stage. The data used in this study are secondary data obtained from statistical publications. The results showed that the best nonparametric truncated spline path model is the quadratic polynomial with 3 knot points. In addition, the significance test of the best truncated spline path function estimates using jackknife resampling shows that all exogenous variables have a significant influence on the endogenous variables. Keywords: nonparametric path analysis, truncated spline, linear, quadratic, rural poverty, jackknife resampling
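The truncated spline basis underlying the method can be written down directly: powers of x up to the chosen degree plus one truncated power term per knot. A sketch of the quadratic basis with 3 knots (the form reported as best above; the knot locations are hypothetical):

```python
def truncated_spline_basis(x, degree, knots):
    """Truncated power basis: 1, x, ..., x^degree, plus (x - k)_+^degree
    for each knot k, where (.)_+ is the positive part."""
    basis = [x**d for d in range(degree + 1)]
    basis += [max(x - k, 0.0) ** degree for k in knots]
    return basis

# One basis row for x = 2.5, quadratic degree, knots at 1, 2, and 3.
row = truncated_spline_basis(2.5, degree=2, knots=[1.0, 2.0, 3.0])
```

Stacking such rows over the observations gives the design matrix on which the path coefficients are estimated, e.g. by least squares within each structural equation.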
Procedia PDF Downloads 50