Search results for: iterative calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1605

555 Machine Learning Based Approach for Measuring Promotion Effectiveness in Multiple Parallel Promotions’ Scenarios

Authors: Revoti Prasad Bora, Nikita Katyal

Abstract:

Promotion is a key element of the retail business, so analysis of promotions to quantify their effectiveness in terms of Revenue and/or Margin is an essential activity in the retail industry. However, measuring the sales/revenue uplift relies on estimation, as the actual sales/revenue without the promotion cannot be observed. Further, the presence of Halo and Cannibalization in a multiple parallel promotions scenario complicates the problem. Calculating the Baseline from inter-brand/competitor items, or treating the Baseline as an interpolation of an item's unit sales in neighboring non-promotional weeks and applying the Halo and Cannibalization impacts to the Revenue calculation item by item, may not capture the overall Revenue uplift in the case of multiple parallel promotions. Hence, this paper proposes a Machine Learning based method for calculating the Revenue uplift that considers the Halo and Cannibalization impact on both the Baseline and the Revenue. In the first part of the proposed methodology, the Baseline of an item is calculated by incorporating the impact of promotions on its related items. In the second part, the Revenue of an item is calculated by considering both the Halo and Cannibalization impacts. This methodology thereby enables correct calculation of the overall Revenue uplift due to a given promotion.
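
The abstract does not disclose the model's exact inputs; as a rough sketch of the Baseline idea, the snippet below trains a random forest on non-promotional weeks only, with the promotional state of a related item included as a feature, and measures uplift as actual minus predicted baseline revenue. All column names and the synthetic data are hypothetical:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
weeks = 104
df = pd.DataFrame({
    'week': np.arange(weeks),
    'price': rng.uniform(4.0, 5.0, weeks),
    'promo_flag': rng.integers(0, 2, weeks),        # focal item on promotion?
    'related_promo': rng.integers(0, 2, weeks),     # related item on promotion?
})
# synthetic demand: the related item's promotion cannibalizes the focal item
df['units'] = (200 - 20 * df['price'] + 60 * df['promo_flag']
               - 15 * df['related_promo'] + rng.normal(0, 5, weeks))

features = ['price', 'related_promo', 'week']       # includes related items' state
train = df[df['promo_flag'] == 0]                   # non-promotional weeks only
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(train[features], train['units'])

promo = df[df['promo_flag'] == 1]
baseline = model.predict(promo[features])           # expected sales without promo
uplift = ((promo['units'] - baseline) * promo['price']).sum()
print(f'estimated revenue uplift: {uplift:.0f}')
```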

Keywords: Halo, Cannibalization, promotion, Baseline, temporary price reduction, retail, elasticity, cross price elasticity, machine learning, random forest, linear regression

Procedia PDF Downloads 177
554 The Analysis of the Education Sector and Poverty Alleviation with the Benefit Incidence Analysis Approach: Budget Allocation Policy in East Java

Authors: Wildan Syafitri

Abstract:

The main purpose of development is to realize public welfare. This is indicated by increasing public prosperity, which is related to the consumption level as a consequence of rising public income. One of the government's efforts to increase public welfare is to create equitable development in order to alleviate poverty. The poverty problem is not merely about the number and percentage of poor people; it also includes the gap and severity of poverty. The analysis method used is Benefit Incidence Analysis (BIA), which reveals the impact of government policy or individual access based on the income distribution in society. Further, the study revealed that the highest number of poor people in the villages are those who are unemployed and have family members who are still in Junior High School. The income distribution calculation shows a fairly good budget allocation, with a mass ratio of 0.31. In addition, this study also discloses that the Indonesian Government's policy of subsidizing education costs for Elementary and Junior High School students has reached the right target. This is indicated by the greater benefits received by poor and very poor Elementary and Junior High School students compared with other income groups.
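
As an illustration of the Benefit Incidence Analysis step, a minimal sketch: benefits are imputed to each income quintile as an (assumed) unit subsidy times reported enrolment, and a pro-poor allocation shows larger shares in the lower quintiles. The survey rows and the subsidy amount are invented for the example:

```python
import pandas as pd

# hypothetical household survey: income quintile and number of enrolled students
df = pd.DataFrame({
    'quintile': [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    'enrolled': [3, 2, 2, 2, 1, 2, 1, 1, 1, 0],
})
unit_subsidy = 1_000_000   # assumed government spending per enrolled student (Rp)

benefits = df.groupby('quintile')['enrolled'].sum() * unit_subsidy
shares = benefits / benefits.sum()
print(shares)              # pro-poor if lower quintiles capture larger shares
```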

Keywords: benefit incidence analysis, budget allocation, poverty, education

Procedia PDF Downloads 393
553 A Case Study of User Rating Prediction in an Automobile Recommendation System Using MapReduce

Authors: Jiao Sun, Li Pan, Shijun Liu

Abstract:

Recommender systems have been widely used in contemporary industry, and plenty of work has been done in this field to help users identify items of interest. The Collaborative Filtering (CF) algorithm is an important technology in recommender systems. However, despite the sharp increase in the number of automobiles, little work has been done on automobile recommendation systems. Moreover, computational speed is a major weakness of collaborative filtering technology, so using the MapReduce framework to optimize the CF algorithm is a vital solution to this performance problem. In this paper, we present a method for predicting users' ratings of industrial automobiles with various properties, based on real-world industrial datasets of user-automobile rating data, and provide recommendations for automobile providers to help them predict users' ratings of automobiles with new properties. First, we address matrix sparseness through prior construction of the score matrix. Second, we solve the data normalization problem by removing dimensional effects from the raw automobile data, since differently scaled automobile properties introduce large errors into the CF calculation. Finally, we use the MapReduce framework to optimize the CF algorithm, substantially improving the computational speed. The UV decomposition used in this paper is a common matrix factorization technique in CF that does not require calculating the interpolation weights of neighbors, which makes it more convenient in industry.
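
A minimal sketch of the two ingredients named above, under the assumption that the UV decomposition is learned by stochastic gradient descent on the observed ratings (the paper's MapReduce distribution is omitted): z-score normalization removes dimensional effects, and U @ V fills in missing user-automobile ratings:

```python
import numpy as np

def uv_factorize(R, k=2, steps=2000, lr=0.01, reg=0.02):
    """Factorize a rating matrix R (~ U @ V) by SGD on the observed entries."""
    rng = np.random.default_rng(0)
    n, m = R.shape
    U = rng.normal(scale=0.1, size=(n, k))
    V = rng.normal(scale=0.1, size=(k, m))
    rows, cols = np.nonzero(~np.isnan(R))
    for _ in range(steps):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[:, j]
            ui = U[i].copy()                       # keep old value for V's update
            U[i] += lr * (err * V[:, j] - reg * ui)
            V[:, j] += lr * (err * ui - reg * V[:, j])
    return U, V

# z-score normalization of automobile properties removes dimensional effects
X = np.array([[150., 5.], [200., 4.], [120., 7.]])    # e.g. horsepower, seats
Xn = (X - X.mean(axis=0)) / X.std(axis=0)

R = np.array([[5., np.nan], [3., 4.], [np.nan, 2.]])  # user-automobile ratings
U, V = uv_factorize(R)
print(np.round(U @ V, 2))                             # filled-in predictions
```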

Keywords: collaborative filtering, recommendation, data normalization, mapreduce

Procedia PDF Downloads 217
552 Challenges in the Material and Action-Resistance Factor Design for Embedded Retaining Wall Limit State Analysis

Authors: Kreso Ivandic, Filip Dodigovic, Damir Stuhec

Abstract:

The paper deals with the proposed 'Material' and 'Action-resistance factor' design methods for the design of embedded retaining walls. A parametric analysis was performed to evaluate the differences between the output values of the two methods and to compare them with the classic-approach computation. Choosing among the calculation design methods proposed in Eurocode 7 is challenging with respect to current technical regulations and regular engineering practice. The basic criterion for applying a particular design method is to ensure at minimum an equal degree of reliability relative to current practice. The procedure of combining the relevant partial coefficients according to the design methods was carried out. The use of these partial coefficients should result in the same level of safety regardless of load combinations, material characteristics and problem geometry. This proposed approach to partial coefficients related to the material and/or action-resistance is aimed at building a bridge between the calculations used so far and pure probability analysis. The measure used to compare the results was an equivalent safety factor determined for each analysis. The results show a visibly wide span of equivalent values of the classic safety factors.

Keywords: action-resistance factor design, classic approach, embedded retaining wall, Eurocode 7, limit states, material factor design

Procedia PDF Downloads 231
551 A Comparative Analysis of Solid Waste Treatment Technologies on Cost and Environmental Basis

Authors: Nesli Aydin

Abstract:

Waste management decision making in developing countries has moved towards being more pragmatic, transparent, sustainable and comprehensive. Turkey is required to make its waste-related legislation compatible with European legislation, as it is a candidate country of the European Union. Improper practices in Turkey, such as open burning and open dumping, must be abandoned urgently, and robust waste management systems have to be put in place. Determining an optimum waste management system for any region requires a comprehensive analysis in which many criteria are taken into account by stakeholders. In conducting this sort of analysis, there are two main criteria evaluated by waste management analysts: economic viability and environmental friendliness. From an analytical point of view, a central characteristic of sustainable development is economic-ecological integration. Building a robust waste management system will require significant effort and cooperation between stakeholders in developing countries such as Turkey. In this regard, this study aims to provide data on the cost and environmental burdens of waste treatment technologies such as an incinerator, an autoclave (with different capacities), a hydroclave and a microwave, coupled with updated information on calculation methods and a framework for comparing the performance of any proposed scenario on a cost and environmental basis.

Keywords: decision making, economic viability, environmental friendliness, waste management systems

Procedia PDF Downloads 305
550 A Non-Destructive Estimation Method for Internal Time in Perilla Leaf Using Hyperspectral Data

Authors: Shogo Nagano, Yusuke Tanigaki, Hirokazu Fukuda

Abstract:

Vegetables harvested early in the morning or late in the afternoon are valued in plant production, so the time of harvest is important. The biological functions known as circadian clocks have a significant effect on this harvest timing. The purpose of this study was to non-destructively estimate the state of the circadian clock and thereby construct a method for determining a suitable harvest time. We took eight samples of green basil (Perilla frutescens var. crispa) every 4 hours, six times over 1 day, and analyzed all samples at the same time. A hyperspectral camera was used to collect spectrum intensities at 141 different wavelengths (350–1050 nm). Calculating correlations between the spectrum intensity at each wavelength and harvest time suggested the suitability of the hyperspectral camera for non-destructive estimation. However, even the most highly correlated wavelength showed only a weak correlation, so we used machine learning to raise the accuracy of estimation and constructed a machine learning model to estimate the internal time of the circadian clock. Artificial neural networks (ANN) were used because they are an effective analysis method for large amounts of data. Using the estimation model resulted in an error between estimated and real times of 3 min, and the estimations were made in less than 2 hours. Thus, we successfully demonstrated this method of non-destructively estimating internal time.
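
A minimal sketch of the regression step, with synthetic spectra standing in for the real hyperspectral data; scikit-learn's MLPRegressor is an assumption, since the abstract does not name the ANN implementation. Regressing on (sin, cos) of the clock phase keeps 23:59 and 00:01 adjacent:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, n_wl = 48, 141                        # 48 leaf samples x 141 wavelengths
hours = rng.uniform(0, 24, n)            # internal (sampling) time
phase = 2 * np.pi * hours / 24
# synthetic spectra: each wavelength oscillates weakly with the clock + noise
X = (np.sin(phase)[:, None] * rng.normal(0.5, 0.2, n_wl)
     + np.cos(phase)[:, None] * rng.normal(0.5, 0.2, n_wl)
     + rng.normal(0, 0.3, (n, n_wl)))
Y = np.column_stack([np.sin(phase), np.cos(phase)])   # circular targets

net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=5000,
                   random_state=0).fit(X, Y)
pred = net.predict(X)
est = (np.arctan2(pred[:, 0], pred[:, 1]) % (2 * np.pi)) * 24 / (2 * np.pi)
print(np.round(est[:5], 2), np.round(hours[:5], 2))   # estimated vs real time
```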

Keywords: artificial neural network (ANN), circadian clock, green basil, hyperspectral camera, non-destructive evaluation

Procedia PDF Downloads 299
549 Integrating Participatory Action and Arts-Based Research: A Methodology for Investigating Generative AI in Elementary Art Education

Authors: Jihane Mossalim

Abstract:

This study proposes a methodological framework that combines Participatory Action Research (PAR) with Arts-Based Research (ABR) to explore the potential of generative AI in elementary art education. By integrating PAR, this framework emphasizes elementary school students' active participation as co-researchers, engaging with AI technologies and reflecting on their creative journeys. PAR's iterative cycles of planning, action, observation, and reflection provide a solid structure for involving children in the research process, ensuring that the study is inclusive and reflective of the children's perspectives. Arts-Based Research, on the other hand, allows for the exploration of AI not just as a tool but as a medium of creative expression. ABR's emphasis on visual, performative, and creative outputs complements PAR's inclusive approach, offering a dynamic and flexible way of studying the intersection of technology and art in educational contexts. This combination is particularly valuable as it encourages students to express their ideas and emotions through art, making the learning process more engaging and personally meaningful. Despite the recognized benefits of both PAR and ABR, there remains a notable gap in research that applies these methodologies in combination with elementary school students, particularly in the context of emerging technologies like generative AI. Addressing this gap is crucial, as integrating these approaches can lead to more inclusive and innovative educational practices that cater to the diverse needs of young learners. This chapter seeks to demonstrate how integrating PAR and ABR can empower young learners, giving them a voice in the research process while enriching their creative and critical thinking skills. It develops a methodology that integrates both theoretical and practical aspects of PAR and ABR, highlighting the challenges and opportunities that emerge when these approaches are combined, and discusses how to adapt these methods for research in elementary art education, providing a foundation for future inquiry. Further, the chapter situates these methodological developments in relation to a study that seeks to understand the potential of generative AI in fostering creativity, collaboration, and critical thinking among young learners. Ultimately, this work aims to provide a pioneering example that inspires further exploration and development of educational practices in the digital age.

Keywords: participatory action research, arts-based research, generative AI, elementary art education

Procedia PDF Downloads 24
548 The School Governing Council as the Impetus for Collaborative Education Governance: A Case Study of Two Benguet Municipalities in the Highlands of Northern Philippines

Authors: Maria Consuelo Doble

Abstract:

For decades, basic public education in the Philippines has been beleaguered by a governance scenario of multi-layered decision-making and a lack of collaboration between sectors in addressing issues of poor access to schools, high dropout rates, low survival rates, and poor student performance. These chronic problems have persisted despite multiple reform efforts, making it appear that the education system is incapable of reforming itself. In the mountainous rural towns of La Trinidad and Tuba, in the province of Benguet in Northern Philippines, collaborative education governance was catalyzed by the intervention of Synergeia Foundation, a coalition of individuals, institutions and organizations that aims to improve the quality of education in the Philippines. Its major thrust is to empower the major stakeholders at the community level to make education work by building the capacities of School Governing Councils (SGCs). Although mandated by the Department of Education in 2006, the SGCs in Philippine public elementary schools have remained dysfunctional. After one year of capacity-building by Synergeia Foundation, some SGCs are already exhibiting active community-based multi-sectoral collaboration, while many are not. Given the myriad factors hindering collaboration, Synergeia Foundation is now confronted with the pressing question: what factors promote collaborative governance in the SGCs so that they can address the education-related issues they are facing? Using Emerson's (2011) framework on collaborative governance, this study analyzes the application of collaborative governance by highly functioning SGCs in the public elementary schools of Tuba and La Trinidad. Findings of this action research indicate how the dynamics of collaboration, composed of three interactive and iterative components – principled engagement, shared motivation and capacity for joint action – have resulted in meaningful short-term impacts such as stakeholder engagement and a decreased number of dropouts. The change in the behavior of stakeholders is indicative of adaptation to a more collaborative approach to governing education in Benguet highland settings such as Tuba and La Trinidad.

Keywords: basic public education, Benguet highlands, collaborative governance, School Governing Council

Procedia PDF Downloads 290
547 Correlations between Wear Rate and Energy Dissipation Mechanisms in a Ti6Al4V–WC/Co Sliding Pair

Authors: J. S. Rudas, J. M. Gutiérrez Cabeza, A. Corz Rodríguez, L. M. Gómez, A. O. Toro

Abstract:

The prediction of the wear rate of rubbing pairs has attracted the interest of researchers for years. It has recently been proposed that the sliding wear rate can be inferred from the rate of energy dissipated by the tribological pair. In this paper, some of the dissipative mechanisms present in a pin-on-disc configuration are discussed, and both analytical and numerical calculations are carried out. Three dissipative mechanisms were studied: first, the energy release due to temperature gradients within the solid; second, the heat flow from the solid to the environment; and third, the energy loss due to abrasive damage of the surface. The Finite Element Method was used to calculate the dynamics of heat transfer within the solid, with the aid of commercial software. Validation of the FEM model was assisted by virtual and laboratory experimentation using different operating points (sliding velocity and contact geometry). The materials for the experiments were Ti6Al4V alloy and Tungsten Carbide (WC-Co). The results showed that the sliding wear rate has a linear relationship with the energy dissipation flow. It was also found that energy loss due to micro-cutting is relevant for the system; this mechanism changes if the sliding velocity and pin geometry are modified, though the degradation coefficient continues to exhibit linear behavior. We found that the least relevant dissipation mechanism for all the cases studied is the energy release by temperature gradients in the solid.

Keywords: degradation, dissipative mechanism, dry sliding, entropy, friction, wear

Procedia PDF Downloads 502
546 Estimation of Normalized Glandular Doses Using a Three-Layer Mammographic Phantom

Authors: Kuan-Jen Lai, Fang-Yi Lin, Shang-Rong Huang, Yun-Zheng Zeng, Po-Chieh Hsu, Jay Wu

Abstract:

The normalized glandular dose (DgN) estimates the energy deposition of mammography in clinical practice. Monte Carlo simulations frequently use a uniformly mixed phantom for calculating the conversion factor. However, breast tissues are not uniformly distributed, leading to errors in conversion factor estimation. This study constructed a three-layer phantom to estimate the normalized glandular dose more accurately. The MCNP (Monte Carlo N-Particle) code was used to create the geometric structure. We simulated three target/filter combinations (Mo/Mo, Mo/Rh, Rh/Rh), six voltages (25–35 kVp), six HVL parameters and nine breast phantom thicknesses (2–10 cm) for the three-layer mammographic phantom. The conversion factors for 25%, 50% and 75% glandularity were calculated. The error of the conversion factors compared with the results of the American College of Radiology (ACR) was within 6%; for Rh/Rh, the difference was within 9%. The difference between 50% average glandularity and the uniform phantom was 7.1% to -6.7% for the Mo/Mo combination at a voltage of 27 kVp, a half-value layer of 0.34 mmAl, and a breast thickness of 4 cm. According to the simulation results, regression analysis found that the three-layer mammographic phantom at 0%–100% glandularity can be used to accurately calculate the conversion factors. The difference in glandular tissue distribution leads to errors in the conversion factor calculation; the three-layer mammographic phantom can provide accurate estimates of glandular dose in clinical practice.

Keywords: Monte Carlo simulation, mammography, normalized glandular dose, glandularity

Procedia PDF Downloads 189
545 Research on the Optimization of the Facility Layout of Efficient Cafeterias for Troops

Authors: Qing Zhang, Jiachen Nie, Yujia Wen, Guanyuan Kou, Peng Yu, Kun Xia, Qin Yang, Li Ding

Abstract:

BACKGROUND: A facility layout problem (FLP) is an NP-complete (non-deterministic polynomial-time) problem, for which it is hard to obtain an exact optimal solution. FLP has been widely studied in various limited spaces and workflows. For example, troop cafeterias with many types of equipment suffer chaotic processes during dining. OBJECTIVE: This article aimed to optimize the layout of a troops' cafeteria and to improve the overall efficiency of the dining process. METHODS: First, the original cafeteria layout design scheme was analyzed from an ergonomic perspective, and two new design schemes were generated. Next, three facility layout models were designed, and simulation was applied to compare the total time and density of troops under each scheme. Last, an experiment on the dining process with video observation and analysis verified the simulation results. RESULTS: In simulation, the dining time under the second new layout was shortened by 2.25% and 1.89% (p<0.0001, p=0.0001) compared with the other two layouts, while troop-flow density and interference were both greatly reduced in the two new layouts. In the experiment, process completion time and the number of interferences were reduced as well, which verified the corresponding simulation results. CONCLUSIONS: Our two new layout schemes were shown to be optimal by a series of simulations and field experiments. In future research, similar approaches could be applied when layout-design algorithm calculation is taken into consideration.

Keywords: layout optimization, dining efficiency, troops' cafeteria, AnyLogic simulation, field experiment

Procedia PDF Downloads 143
544 The Effect of Body Positioning on Upper-Limb Arterial Occlusion Pressure and the Reliability of the Method during Blood Flow Restriction Training

Authors: Stefanos Karanasios, Charkleia Koutri, Maria Moutzouri, Sofia A. Xergia, Vasiliki Sakellari, George Gioftsos

Abstract:

The precise calculation of arterial occlusion pressure (AOP) is a critical step in accurately prescribing individualized pressures during blood flow restriction training (BFRT). AOP is usually measured in a supine position before training; however, previous reports suggested a significant influence of body position on lower-limb AOP. The aim of the study was to investigate the effect of three different body positions on upper-limb AOP and the reliability of the method, in order to standardize it in clinical practice. Forty-two healthy participants (mean age: 28.1, SD: ±7.7) underwent measurements of upper-limb AOP in supine, seated, and standing positions by three blinded raters, using a cuff with a manual pump and a pocket Doppler ultrasound. A significantly higher upper-limb AOP was found in the seated compared with the supine position (p < 0.031) and in the supine compared with the standing position (p < 0.031) by all raters. An excellent intraclass correlation coefficient (0.858–0.984, p < 0.001) was found in all positions. Upper-limb AOP is strongly dependent on body position changes, so the appropriate measurement position should be selected to accurately calculate AOP before BFRT. The excellent inter-rater reliability and repeatability of the method suggest reliable and consistent results across repeated measurements.
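
The abstract does not state which ICC form was used; the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single rater), a common choice for inter-rater reliability with a fixed set of raters:

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1) for a (n_subjects, k_raters) matrix of ratings."""
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)                         # per-subject means
    col_means = Y.mean(axis=0)                         # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters MS
    sse = np.sum((Y - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# toy data: 6 participants' AOP (mmHg) rated by 3 raters
Y = np.array([[140., 142., 139.], [155., 154., 157.], [130., 133., 131.],
              [165., 162., 166.], [150., 151., 149.], [145., 147., 146.]])
print(round(icc_2_1(Y), 3))   # values near 1 indicate excellent agreement
```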

Keywords: Kaatsu training, blood flow restriction training, arterial occlusion, reliability

Procedia PDF Downloads 212
543 PathoPy2.0: Application of Fractal Geometry for Early Detection and Histopathological Analysis of Lung Cancer

Authors: Rhea Kapoor

Abstract:

Fractal dimension provides a way to characterize non-geometric shapes like those found in nature. The purpose of this research is to estimate the Minkowski fractal dimension of human lung images for early detection of lung cancer. Lung cancer is the leading cause of death among all types of cancer, and early histopathological analysis will help reduce deaths primarily due to late diagnosis. A Python application program, PathoPy2.0, was developed for analyzing medical images in pixelated format and estimating the Minkowski fractal dimension using a new box-counting algorithm that allows windowing of images for more accurate calculation in suspected areas of cancerous growth. Benchmark geometric fractals were used to validate the accuracy of the program, and changes in the fractal dimension of lung images were used to indicate the presence of abnormalities in the lung. The accuracy of the program on the benchmark examples was between 93% and 99% of the known fractal dimensions. Fractal dimensions were then calculated for lung images from the National Cancer Institute taken over time to detect the presence of cancerous growth. For example, when the fractal dimension of a given lung increased from 1.19 to 1.27 due to cancerous growth, this represented a significant change, given that the fractal dimension of a 2-D image lies between 1 and 2. Based on the results obtained on many lung test cases, it was concluded that the fractal dimension of human lungs can be used to diagnose lung cancer early. The ideas behind PathoPy2.0 can also be applied to study patterns in the electrical activity of the human brain and DNA matching.
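
PathoPy2.0 itself is not reproduced in this listing; the following is a generic box-counting sketch of the Minkowski dimension estimate the abstract describes (the windowing refinement is omitted). The dimension is the slope of log N(s) against log(1/s):

```python
import numpy as np

def minkowski_dimension(img, threshold=0.5):
    """Estimate the box-counting (Minkowski) dimension of a 2-D image."""
    pixels = (img > threshold).astype(int)
    size = min(pixels.shape)
    sizes = 2 ** np.arange(1, int(np.log2(size)))   # box edge lengths
    counts = []
    for s in sizes:
        # count boxes of edge s containing at least one foreground pixel
        S = np.add.reduceat(
            np.add.reduceat(pixels, np.arange(0, pixels.shape[0], s), axis=0),
            np.arange(0, pixels.shape[1], s), axis=1)
        counts.append((S > 0).sum())
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

img = np.zeros((256, 256))
img[64:192, 64:192] = 1.0
print(round(minkowski_dimension(img), 2))   # a filled square -> close to 2
```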

Keywords: fractals, histopathological analysis, image processing, lung cancer, Minkowski dimension

Procedia PDF Downloads 178
542 Performing Diagnosis in Buildings with Partially Valid Heterogeneous Tests

Authors: Houda Najeh, Mahendra Pratap Singh, Stéphane Ploix, Antoine Caucheteux, Karim Chabir, Mohamed Naceur Abdelkrim

Abstract:

Building systems are highly vulnerable to different kinds of faults and human misbehavior. Energy efficiency and user comfort are directly affected by abnormalities in building operation. Available fault diagnosis tools and methodologies rely chiefly on rules or pure model-based approaches, and it is assumed that a model- or rule-based test can be applied to any situation without taking the actual testing context into account. Contextual tests with a validity domain could greatly reduce the design effort for detection tests. The main objective of this paper is to consider test validity when validating the test model, taking into account non-modeled events such as occupancy, weather conditions, and door and window openings, and integrating the expert's knowledge of the state of the system. The concept of heterogeneous tests is combined with test validity to generate fault diagnoses. A combination of rule-based, range and model-based tests, known as heterogeneous tests, is proposed to reduce modeling complexity. The calculation of logical diagnoses by artificial-intelligence techniques provides a global explanation consistent with the test results. An application example, an office setting at the Grenoble Institute of Technology, shows the efficiency of the proposed technique.

Keywords: heterogeneous tests, validity, building system, sensor grids, sensor fault, diagnosis, fault detection and isolation

Procedia PDF Downloads 293
541 An Investigation of Rainfall Changes in Kangan City during the Years 1964 to 2003

Authors: Borzou Faramarzi, Farideh Azimi, Azam Gohardoust, Abbas Ghasemi Ghasemvand, Maryam Mirzaei, Mandana Amani

Abstract:

In this study, attempts were made to examine and analyze the trend of rainfall changes in Kangan City, Booshehr Province, over the time span 1964 to 2003, using seven rainfall threshold indices drawn from the 50 climate extremes indices approved by WMO–CCL/CLIVAR. These indices include days with heavy precipitation, days with rainfall, frequency of rainfall threshold values, intensity of rainfall threshold values, percentage of rainfall threshold values, consecutive days of rainfall, and consecutive days with no precipitation. The results indicate that Kangan City's climatic conditions have become drier than before. The indices for days with heavy precipitation and days with rainfall do not show a definite trend in Kangan City, and the frequency, intensity, and percentage of rainfall threshold values at the station under investigation likewise do not indicate a definite trend. The analysis of the time series of rainfall extreme indices generally revealed that Kangan City is influenced by the general factors of global warming. Projections for the next 10 years based on ARIMA models demonstrate a continuation of the warming trend in Kangan City. On the whole, rainfall conditions in Kangan City have shown more dry periods compared to the past, a trend that is also projected for the next 10 years.
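
As a sketch of the projection step, the snippet below fits an ARIMA model with statsmodels and forecasts 10 years ahead; the series and the (1, 1, 1) order are illustrative, since the abstract does not report the fitted model orders:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
years = pd.date_range('1964', '2003', freq='YS')      # annual index, 1964-2003
rain = pd.Series(220 - 0.8 * np.arange(len(years))    # synthetic drying trend
                 + rng.normal(0, 25, len(years)), index=years)  # mm per year

model = ARIMA(rain, order=(1, 1, 1)).fit()            # order chosen for illustration
print(model.forecast(steps=10))                       # projected next 10 years
```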

Keywords: climatic indices, climate change, extreme temperature and precipitation, time series

Procedia PDF Downloads 272
540 The Identification of Combined Genomic Expressions as a Diagnostic Factor for Oral Squamous Cell Carcinoma

Authors: Ki-Yeo Kim

Abstract:

Trends in genetics are shifting toward identifying differential co-expression of correlated genes rather than significant individual genes. Moreover, it is known that a combined biomarker pattern improves the discrimination of a specific cancer, and the identification of such combined biomarkers is also necessary for the early detection of invasive oral squamous cell carcinoma (OSCC). To identify a combined biomarker that could improve the discrimination of OSCC, we explored the appropriate number of genes in a combined gene set in order to attain the highest level of accuracy. After detecting a significant gene set containing the pre-defined number of genes, a combined expression was identified using the weights of the genes in the set. We used Principal Component Analysis (PCA) for the weight calculation. In this process, we used three public microarray datasets: one for identifying the combined biomarker and the other two for validation. Discrimination accuracy was measured by the out-of-bag (OOB) error. There was no relationship between the significance of an individual gene and its discrimination accuracy, and the identified gene set included both significant and insignificant genes. One of the most significant gene sets in the classification of normal versus OSCC included MMP1, SOCS3 and ACOX1. Furthermore, for the discrimination of oral dysplasia versus OSCC, two combined biomarkers were identified. The combined genomic expression achieved better performance in discriminating between conditions than any single significant gene. Therefore, accurate diagnosis of cancer can be expected with a combined biomarker.
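
A minimal sketch of the pipeline described above, with synthetic expressions standing in for the microarray data: PCA supplies the gene weights that collapse a 3-gene set (e.g., MMP1, SOCS3, ACOX1) into one combined expression, and discrimination is scored by random forest out-of-bag error:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 60                                    # samples: 30 normal, 30 OSCC
y = np.repeat([0, 1], n // 2)
# hypothetical expressions of a 3-gene set, shifted between the two classes
genes = rng.normal(0, 1, (n, 3)) + y[:, None] * np.array([1.2, -0.8, 0.5])

pca = PCA(n_components=1).fit(genes)      # PCA loadings act as gene weights
combined = pca.transform(genes)           # the combined biomarker expression

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(combined, y)
print('OOB error:', round(1 - rf.oob_score_, 3))   # discrimination accuracy
```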

Keywords: oral squamous cell carcinoma, combined biomarker, microarray dataset, correlated genes

Procedia PDF Downloads 423
539 Application of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Multipoint Optimal Minimum Entropy Deconvolution in Railway Bearings Fault Diagnosis

Authors: Yao Cheng, Weihua Zhang

Abstract:

Although the measured vibration signal contains rich information on machine health conditions, white noise interference and the discrete harmonics coming from blades, shafts and gear mesh make the fault diagnosis of rolling element bearings difficult. In order to overcome the interference of useless signals, a new fault diagnosis method combining Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Multipoint Optimal Minimum Entropy Deconvolution (MOMED) is proposed for the fault diagnosis of high-speed train bearings. First, the CEEMDAN technique is applied to adaptively decompose the raw vibration signal into a series of finite intrinsic mode functions (IMFs) and a residue. Compared with Ensemble Empirical Mode Decomposition (EEMD), CEEMDAN provides an exact reconstruction of the original signal and a better spectral separation of the modes, which improves the accuracy of fault diagnosis. An effective sensitivity index based on the Pearson correlation coefficients between the IMFs and the raw signal is adopted to select the sensitive IMFs that contain bearing fault information, and the composite signal of the sensitive IMFs is used for further fault identification. Second, for the purpose of identifying the fault information precisely, MOMED is utilized to enhance the periodic impulses in the composite signal. As a non-iterative method, MOMED has better deconvolution performance than classical deconvolution methods such as Minimum Entropy Deconvolution (MED) and Maximum Correlated Kurtosis Deconvolution (MCKD). Third, envelope spectrum analysis is applied to detect the existence of a bearing fault. Simulated bearing fault signals with white noise and discrete harmonic interference are used to validate the effectiveness of the proposed method. Finally, the superiority of the proposed method is further demonstrated on high-speed train bearing fault datasets measured on a test rig. The analysis results indicate that the proposed method has strong practicability.
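
A sketch of the sensitivity-index step, assuming the IMFs have already been produced by a CEEMDAN implementation (e.g., the PyEMD package); IMFs are ranked by absolute Pearson correlation with the raw signal and the top ones are summed into the composite signal:

```python
import numpy as np

def select_sensitive_imfs(imfs, signal, top=3):
    """Rank IMFs by |Pearson correlation| with the raw signal and return the
    composite of the most sensitive ones. imfs: (n_imfs, n_samples) array."""
    corrs = np.array([abs(np.corrcoef(imf, signal)[0, 1]) for imf in imfs])
    order = np.argsort(corrs)[::-1][:top]
    return imfs[order].sum(axis=0), corrs

# toy demonstration: a 'fault' tone buried in noise, with hand-made IMFs
t = np.linspace(0, 1, 2048)
fault = np.sin(2 * np.pi * 97 * t)                 # periodic impulse surrogate
signal = fault + np.random.default_rng(0).normal(0, 1, t.size)
imfs = np.stack([fault,
                 np.sin(2 * np.pi * 5 * t),        # slow drift component
                 np.random.default_rng(1).normal(0, 1, t.size)])  # noise IMF
composite, corrs = select_sensitive_imfs(imfs, signal, top=1)
print(np.round(corrs, 3))                          # the fault IMF correlates most
```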

Keywords: bearing, complete ensemble empirical mode decomposition with adaptive noise, fault diagnosis, multipoint optimal minimum entropy deconvolution

Procedia PDF Downloads 374
538 Estimation of Time Loss and Costs of Traffic Congestion: The Contingent Valuation Method

Authors: Amira Mabrouk, Chokri Abdennadher

Abstract:

The reduction of road congestion, which is inherent to the use of vehicles, is an obvious priority for public authorities. Assessing an individual's willingness to pay to save trip time is therefore akin to estimating the change in price that would result from a new transport policy intended to increase network fluidity and improve the level of social welfare. This study takes an innovative perspective: it sets up an economic calculation whose objective is to estimate the monetized value of time for trips made in Sfax. The aims of this study are to i) estimate the monetized value of an hour dedicated to trips, ii) determine whether or not consumers consider the environmental variables to be significant, and iii) analyze the impact of public congestion management that imposes city tolls on urban dwellers. This article is built upon a rich field survey conducted in the city of Sfax. Using the contingent valuation method, we analyze the 'declared time preferences' of 450 drivers during rush hours. Giving due consideration to the biases attributed to this method, we highlight the sensitivity of the approach with regard to the revelation mode and the questioning techniques, following the NOAA panel recommendations (with the exception of the valuation point) and other similar studies on the estimation of transportation externalities.

Keywords: willingness to pay, contingent valuation, time value, city toll

Procedia PDF Downloads 434
537 Development of Precise Ephemeris Generation Module for Thaichote Satellite Operations

Authors: Manop Aorpimai, Ponthep Navakitkanok

Abstract:

In this paper, the development of the ephemeris generation module used for the Thaichote satellite operations is presented. It is a vital part of the flight dynamics system, which comprises the orbit determination, orbit propagation, event prediction and station-keeping maneuver modules. In generating the spacecraft ephemeris data, the estimated orbital state vector from the orbit determination module is used as the initial condition, and the equations of motion are integrated forward in time to predict the satellite states. The higher geopotential harmonics, as well as other disturbing forces, are taken into account to resemble the environment in low Earth orbit. Using a highly accurate numerical integrator based on the Bulirsch-Stoer algorithm, the ephemeris data can be generated for long-term predictions with a relatively small computational burden and short calculation time. Events occurring during the prediction that are relevant to mission operations, such as the satellite's rise/set as viewed from the ground station, Earth and Moon eclipses, the drift in ground track, and the drift in the local solar time of the orbital plane, are all detected and reported. When combined with the other modules to form a flight dynamics system, this application is intended for the Thaichote satellite and successive Thailand Earth-observation missions.
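
As an illustration of the propagation step, the sketch below integrates two-body motion with the J2 zonal harmonic, the dominant geopotential perturbation in low Earth orbit. SciPy does not ship a Bulirsch-Stoer integrator, so the high-order DOP853 method stands in; the initial state is illustrative, not Thaichote's:

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 3.986004418e14      # Earth's GM, m^3/s^2
RE = 6378137.0           # Earth equatorial radius, m
J2 = 1.08262668e-3       # second zonal harmonic

def dynamics(t, s):
    """Two-body motion plus the J2 geopotential perturbation."""
    x, y, z, vx, vy, vz = s
    r2 = x * x + y * y + z * z
    r = np.sqrt(r2)
    k = 1.5 * J2 * MU * RE ** 2 / r ** 5
    zz = 5.0 * z * z / r2
    ax = -MU * x / r ** 3 + k * x * (zz - 1.0)
    ay = -MU * y / r ** 3 + k * y * (zz - 1.0)
    az = -MU * z / r ** 3 + k * z * (zz - 3.0)
    return [vx, vy, vz, ax, ay, az]

# ~820 km near-polar initial state (illustrative values only)
s0 = [RE + 820e3, 0.0, 0.0, 0.0, 1075.0, 7335.0]
sol = solve_ivp(dynamics, (0.0, 86400.0), s0, method='DOP853',
                rtol=1e-10, atol=1e-9, t_eval=np.arange(0, 86401, 60))
print(sol.y[:3, -1])     # predicted position after one day
```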

Keywords: flight dynamics system, orbit propagation, satellite ephemeris, Thailand’s Earth Observation Satellite

Procedia PDF Downloads 377
536 Technology in the Calculation of People's Health Level: Design of a Computational Tool

Authors: Sara Herrero Jaén, José María Santamaría García, María Lourdes Jiménez Rodríguez, Jorge Luis Gómez González, Adriana Cercas Duque, Alexandra González Aguna

Abstract:

Background: The concept of health has evolved throughout history. The health level is determined by the individual's own perception and is a dynamic process over time, so variations can be seen from one moment to the next. Knowing the health of the patients under one's care will therefore facilitate decision-making in care. Objective: To design a technological tool that calculates a person's health level sequentially over time. Material and Methods: Deductive methodology through text analysis, extraction and logical knowledge formalization, and knowledge elicitation with an expert group. Study period: September 2015 to the present. Results: A computational tool for use by health personnel has been designed. It has 11 variables, and each variable can be given a value from 1 to 5, with 1 being the minimum value and 5 being the maximum value. By adding the results of the 11 variables, we obtain a magnitude at a given time: the person's health level. The health calculator makes it possible to represent a person's health level at a point in time, establishing temporal cuts that are useful for determining the individual's evolution over time. Conclusion: Information and Communication Technologies (ICT) enable training and support in various disciplinary areas, and their relevance in the field of health should be highlighted. Based on the formalization of health, care acts can be directed towards some of the propositional elements of the concept above, and these care acts will modify the person's health level. The health calculator allows the prioritization and prediction of different health care strategies in hospital units.
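
The abstract does not name the 11 variables, so the sketch below uses hypothetical labels; it shows the stated scoring rule (eleven 1-5 ratings summed into a health level between 11 and 55) and two temporal cuts:

```python
VARIABLES = ['breathing', 'feeding', 'elimination', 'movement', 'rest',
             'dressing', 'temperature', 'hygiene', 'safety',
             'communication', 'beliefs']   # 11 hypothetical variable names

def health_level(scores: dict) -> int:
    """Sum of 11 variables, each rated 1 (minimum) to 5 (maximum): 11-55."""
    assert set(scores) == set(VARIABLES)
    assert all(1 <= v <= 5 for v in scores.values())
    return sum(scores.values())

# temporal cuts: one assessment per time point shows the patient's evolution
t0 = health_level(dict.fromkeys(VARIABLES, 3))   # e.g. all variables rated 3
t1 = health_level({**dict.fromkeys(VARIABLES, 3), 'movement': 5, 'rest': 4})
print(t0, t1)   # 33 -> 36: health level improved between the two cuts
```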

Keywords: calculator, care, eHealth, health

Procedia PDF Downloads 264
535 Theoretical Study of Structural, Magnetic, and Magneto-Optical Properties of Ultrathin Films of Fe/Cu (001)

Authors: Mebarek Boukelkoul, Abdelhalim Haroun

Abstract:

By means of first-principles calculations, we have investigated the structural, magnetic and magneto-optical properties of ultra-thin films of Fen/Cu(001) with (n=1, 2, 3). We adopted a relativistic approach using DFT with the local spin density approximation (LSDA). The electronic structure is computed within the framework of the Spin-Polarized Relativistic (SPR) Linear Muffin-Tin Orbital (LMTO) method with the Atomic Sphere Approximation (ASA). In the variational procedure, the crystal wave function is expressed as a linear combination of the Bloch sums of the so-called relativistic muffin-tin orbitals centered on the atomic sites. The crystalline structure is calculated after an atomic relaxation process by optimizing the total energy with respect to the atomic interplane distance. A body-centered tetragonal (BCT) pseudomorphic crystalline structure with a tetragonality ratio c/a larger than unity is found. The magnetic behaviour is characterized by an enhanced magnetic moment and ferromagnetic interplane coupling. The polar magneto-optical Kerr effect spectra are given over a photon energy range extending to 15 eV, and the microscopic origin of the most interesting features is interpreted in terms of interband transitions. Unlike in thin layers, the anisotropy in the ultra-thin films is characterized by a magnetization perpendicular to the film plane.

Keywords: ultrathin films, magnetism, magneto-optics, pseudomorphic structure

Procedia PDF Downloads 335
534 Stochastic Edge-Based Anomaly Detection for Supervisory Control and Data Acquisition Systems: Considering the Zambian Power Grid

Authors: Lukumba Phiri, Simon Tembo, Kumbuso Joshua Nyoni

Abstract:

In Zambia, recent initiatives by various power operators like ZESCO and CEC, and by consumers like the mines, to upgrade power systems into smart grids target an even tighter integration with information technologies, enabling the integration of renewable energy sources, local and bulk generation, and demand response. For the reliable operation of smart grids, the information infrastructure must therefore be secure and reliable in the face of both failures and cyberattacks. Due to the nature of the systems, ICS/SCADA cybersecurity and governance face additional challenges compared to corporate networks, and critical systems may be left exposed. Control frameworks exist internationally, such as the NIST framework; however, they are generic and do not meet the domain-specific needs of SCADA systems. Zambia is also lagging in cybersecurity awareness and adoption, so there is concern about securing the ICS controlling infrastructure critical to the Zambian economy, as few facts are known about the true security posture. In this paper, we introduce a Stochastic Edge-based Anomaly Detection for SCADA systems (SEADS) framework for threat modeling and risk assessment. SEADS enables the calculation of steady-state probabilities that are further applied to establish metrics like system availability, maintainability, and reliability.
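
As a sketch of the steady-state calculation, assume a (hypothetical) three-state Markov model of a SCADA node; the stationary distribution solves pi Q = 0 with the probabilities summing to one, and availability follows from the non-failed states:

```python
import numpy as np

# hypothetical 3-state SCADA node model: 0=healthy, 1=under attack, 2=failed
# continuous-time transition rate matrix Q (rows sum to zero), rates per hour
Q = np.array([[-0.010,  0.008,  0.002],
              [ 0.500, -0.600,  0.100],
              [ 0.250,  0.000, -0.250]])

# steady-state probabilities pi solve pi @ Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = 1.0 - pi[2]      # system delivers service unless failed
print(np.round(pi, 4), 'availability =', round(availability, 4))
```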

Keywords: anomaly, availability, detection, edge, maintainability, reliability, stochastic

Procedia PDF Downloads 110
533 An Activity Based Trajectory Search Approach

Authors: Mohamed Mahmoud Hasan, Hoda M. O. Mokhtar

Abstract:

With the enormous increase in mobile application use and the spread of positioning and location-aware technologies that we are seeing today, new procedures and methodologies for location-based strategies are required. Location recommendation is one of the most highly demanded location-aware applications, especially with the wide availability of location-aware social network applications such as Facebook check-ins, Foursquare, and others. In this paper, we present a new methodology for location recommendation. The proposed approach combines customary spatial attributes with other essential components, including shortest distance and user interests. We also present a new concept, the 'activity trajectory', which represents a trajectory that fulfills the set of activities the user is interested in doing. The proposed approach uses the associated distance value to select the trajectory(ies) with minimum cost (distance) and uses the spatial area to prune unneeded directions. The algorithm uses the idea of the activity trajectory to recommend the N most similar trajectories that match the user's required activity pattern with the least traveling distance. To improve performance, parallel processing is applied through a MapReduce-based approach. Experiments based on real datasets were set up to assess the proposed approach. The tests show how the proposed approach beats other strategies, giving better precision and run time.

Keywords: location-based recommendation, MapReduce, recommendation system, trajectory search

Procedia PDF Downloads 223
532 Measuring Fluctuating Asymmetry in Human Faces Using High-Density 3D Surface Scans

Authors: O. Ekrami, P. Claes, S. Van Dongen

Abstract:

Fluctuating asymmetry (FA) has been studied for many years as an indicator of developmental stability or 'genetic quality', based on the assumption that perfect symmetry is ideally the expected outcome for a bilateral organism. Further studies have also investigated the possible link between FA and attractiveness or levels of masculinity or femininity. These hypotheses have mostly been examined using 2D images, with the structure of interest usually represented by a limited number of landmarks. Such methods have the downside of simplifying and reducing the dimensionality of the structure, which in turn increases the error of the analysis. In an attempt to reach more conclusive and accurate results, in this study we have used high-resolution 3D scans of human faces and have developed an algorithm to measure and localize FA, taking a spatially dense approach. A symmetric, spatially dense anthropometric mask with paired vertices is non-rigidly mapped onto target faces using an Iterative Closest Point (ICP) registration algorithm. A set of 19 manually indicated landmarks was used to examine the precision of our mapping step. The protocol's accuracy in measuring and localizing FA is assessed using simulated faces with known amounts of asymmetry added to them. The validation results show that the algorithm is fully capable of locating and measuring FA in 3D simulated faces. With such an algorithm, the additional information captured on asymmetry can be used to improve studies of FA as an indicator of fitness or attractiveness. The algorithm can be of particular benefit in studies with high numbers of subjects due to its automated and time-efficient nature. Additionally, taking a spatially dense approach provides information about the locality of FA, which is impossible to obtain using conventional methods. It also enables us to analyze the asymmetry of a morphological structure in a multivariate manner; this can be achieved by using methods such as Principal Components Analysis (PCA) or Factor Analysis, which can be a step towards understanding the underlying processes of asymmetry. The method can also be used in combination with genome-wide association studies to help unravel the genetic bases of FA. To conclude, we introduce an algorithm to study and analyze asymmetry in human faces, with the possibility of extending the application to other morphological structures, in an automated, accurate and multivariate framework.

Keywords: developmental stability, fluctuating asymmetry, morphometrics, 3D image processing

Procedia PDF Downloads 140
531 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus

Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo

Abstract:

The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of a building by using the collected data to monitor consumption patterns and energy leakages; one example is the integration of predictive models for anomaly detection. In this paper, we use the GAM (Generalised Additive Model) for anomaly detection in the Air Handling Unit (AHU) power consumption pattern. There is ample research on the use of GAM for the prediction of power consumption at the office-building and nationwide levels; however, there is limited illustration of its anomaly detection capabilities, of prescriptive analytics case studies, and of its integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical AHU power consumption and cooling load data of a building on an education campus in Singapore, collected between Jan 2018 and Aug 2019, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns, illustrated with real-world use cases.
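
A minimal sketch of the GAM-based detector on synthetic data, assuming the pygam package (the paper does not name its implementation): a smooth term per feature, 95% prediction intervals, and a reading flagged when it falls outside its interval:

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
hour = rng.uniform(0, 24, 2000)
load = rng.uniform(50, 200, 2000)                     # cooling load, kW
power = (20 + 0.4 * load + 10 * np.sin(np.pi * hour / 12)
         + rng.normal(0, 3, 2000))                    # synthetic AHU power, kW

gam = LinearGAM(s(0) + s(1)).fit(np.column_stack([hour, load]), power)

X_new = np.array([[14.0, 120.0]])                     # context of a new reading
lo, hi = gam.prediction_intervals(X_new, width=0.95)[0]
observed = 75.0
if not lo <= observed <= hi:
    print(f'anomaly: {observed} kW outside [{lo:.1f}, {hi:.1f}]')
```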

Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning

Procedia PDF Downloads 154
530 A Practical and Theoretical Study on the Electromotor Bearing Defect Detection in a Wet Mill Using the Vibration Analysis Method and Defect Length Calculation in the Bearing

Authors: Mostafa Firoozabadi, Alireza Foroughi Nematollahi

Abstract:

Wet mills are among the most important equipment in the mining industries, and any defect occurring in them can stop the production line and cause irrecoverable damage to the system. Electromotors are significant parts of a mill, and monitoring them is a necessary process to prevent unwanted defects. The purpose of this study is to investigate electromotor bearing defects, theoretically and practically, using the vibration analysis method. When a defect happens in a bearing, it can be transferred to the other parts, such as the inner ring, outer ring, balls, and the bearing cage. The source of electromotor defects can be electrical or mechanical; sometimes, the electrical and mechanical defect frequencies are modulated, and bearing defect detection becomes difficult. In this paper, to detect the electromotor bearing defects, the electrical and mechanical defect frequencies are extracted first. Then, by calculating the bearing defect frequencies and analyzing the spectrum and time signal, the bearing defects are detected. In addition, the obtained frequency identifies the bearing component in which the defect has happened, and comparing its level to the standards determines the bearing's remaining lifetime. Finally, the defect length is calculated by theoretical equations to demonstrate that there is no need to replace the bearing. The results of the proposed method, which has been implemented on the wet mills in the Golgohar mining and industrial company in Iran, show that this method is capable of detecting electromotor bearing defects accurately and on time.
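
As a sketch of the bearing defect frequency calculation mentioned above, the classical kinematic formulas depend only on shaft speed and bearing geometry; the geometry values below are illustrative:

```python
import numpy as np

def bearing_defect_frequencies(shaft_hz, n_balls, ball_d, pitch_d, contact_deg=0.0):
    """Classical bearing defect frequencies (Hz) from the bearing geometry."""
    r = (ball_d / pitch_d) * np.cos(np.radians(contact_deg))
    ftf = shaft_hz / 2.0 * (1.0 - r)                 # cage (train) frequency
    bpfo = n_balls * shaft_hz / 2.0 * (1.0 - r)      # outer-race defect
    bpfi = n_balls * shaft_hz / 2.0 * (1.0 + r)      # inner-race defect
    bsf = pitch_d / (2.0 * ball_d) * shaft_hz * (1.0 - r * r)   # ball spin
    return {'FTF': ftf, 'BPFO': bpfo, 'BPFI': bpfi, 'BSF': bsf}

# illustrative geometry: 25 Hz shaft, 9 balls, 7.9 mm balls on a 34.5 mm pitch
print({k: round(v, 1) for k, v in
       bearing_defect_frequencies(25.0, 9, 7.9, 34.5, 0.0).items()})
```

A spectral line at one of these frequencies (or its harmonics and sidebands) points to the corresponding bearing component, which is how the spectrum analysis localizes the defect.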

Keywords: bearing defect length, defect frequency, electromotor defects, vibration analysis

Procedia PDF Downloads 502
529 Increasing Access to Upper Limb Reconstruction in Cervical Spinal Cord Injury

Authors: Michelle Jennett, Jana Dengler, Maytal Perlman

Abstract:

Background: Cervical spinal cord injury (SCI) is a devastating event that results in upper limb paralysis, loss of independence, and disability. People living with cervical SCI have identified improvement of upper limb function as a top priority. Nerve and tendon transfer surgery has successfully restored upper limb function in cervical SCI but is not universally used or available to all eligible individuals. This exploratory mixed-methods study used an implementation science approach to better understand the factors that influence access to upper limb reconstruction in the Canadian context and to design an intervention to increase access to care. Methods: Data from the Canadian Institute for Health Information's Discharge Abstract Database (CIHI-DAD) and the National Ambulatory Care Reporting System (NACRS) were used to determine the annual rates of nerve transfer and tendon transfer surgeries performed in cervical SCI in Canada over the last 15 years. Semi-structured interviews informed by the Consolidated Framework for Implementation Research (CFIR) were used to explore Ontario healthcare providers' knowledge and practices around upper limb reconstruction. An inductive, iterative constant comparative process involving descriptive and interpretive analyses was used to identify themes that emerged from the data. Results: Healthcare providers (n = 10 upper extremity surgeons, n = 10 SCI physiatrists, n = 12 physical and occupational therapists working with individuals with SCI) were interviewed about their knowledge and perceptions of upper limb reconstruction and their current practices and discussions around it. Data analysis is currently underway and will be presented, as will regional variation in rates of upper limb reconstruction and trends over time. Conclusions: Utilization of nerve and tendon transfer surgery to improve upper limb function in Canada remains low. A complex array of interrelated individual-, provider- and system-level barriers prevents individuals with cervical SCI from accessing upper limb reconstruction. In order to offer equitable access to care, a multi-modal approach addressing current barriers is required.

Keywords: cervical spinal cord injury, nerve and tendon transfer surgery, spinal cord injury, upper extremity reconstruction

Procedia PDF Downloads 97
528 Assessing the Walkability and Urban Design Qualities of Campus Streets

Authors: Zhehao Zhang

Abstract:

Walking has become an indispensable and sustainable way of travel for college students in their daily lives, and campus streets are an important carrier for students' walking and varied activities. Improving the walkability of campus streets therefore plays an important role in optimizing the quality of the campus spatial environment, promoting the campus walking system, and inducing diverse walking behaviors. The purpose of this paper is to explore the effect of campus layout, facility distribution, and location site selection on the walkability of campus streets, to assess street design qualities in terms of the elements of imageability, enclosure, complexity, transparency, and human scale, and to further examine the relationship between street-level urban design perceptual qualities and walkability and their effect on walking behavior on campus. Taking Tianjin University as the research object, this paper uses an optimized walk score method based on walking frequency, variety, and distance to evaluate the walkability of streets from a macro perspective. It measures urban design qualities through the calculation of the streets' physical environment characteristics, and uses behavior annotation and street image data to establish a spatio-temporal behavior database for analyzing walking activity from a microscopic view. Based on the conclusions, improvement and design strategies are presented covering the built walking environment, street vitality, and walking behavior.
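
As a sketch of a walk-score-style calculation, the snippet below weights amenity categories and applies a distance-decay function to walking times, then normalizes to a 0-100 scale; the categories, weights, and decay constant are assumptions, not the paper's calibration:

```python
import math

def decay(minutes):
    """Distance-decay weight: full credit within a 5 min walk, fading beyond."""
    return 1.0 if minutes <= 5 else math.exp(-0.2 * (minutes - 5))

# hypothetical campus street: (amenity category, weight, walk time in minutes)
amenities = [('canteen', 3.0, 4), ('library', 2.5, 9),
             ('gym', 1.5, 14), ('shop', 1.0, 22)]

raw = sum(w * decay(t) for _, w, t in amenities)
max_raw = sum(w for _, w, _ in amenities)
walk_score = 100.0 * raw / max_raw       # normalized to the 0-100 scale
print(round(walk_score, 1))
```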

Keywords: walkability, streetscapes, pedestrian activity, walk score

Procedia PDF Downloads 144
527 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink

Authors: Sanjay Rathee, Arti Kashyap

Abstract:

Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine; to meet the demands of this ever-growing data, a multi-machine Apriori algorithm is needed. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks using the MapReduce approach for distributed storage and distributed processing of huge datasets on clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their inbuilt support for distributed computations. Earlier, we proposed a Reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel of our work: it targets implementing, testing and benchmarking Apriori, Reduced-Apriori, and our new algorithm ReducedAll-Apriori on Apache Flink, and compares them with the Spark implementation. Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelining-based structure allows a subsequent iteration to start as soon as partial results of the earlier iteration are available, so there is no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink.
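
For reference, a plain single-machine Apriori sketch showing the iterative structure that makes disk-heavy MapReduce rounds expensive and Flink's pipelining attractive; the distributed R-Apriori logic itself is not reproduced here:

```python
from itertools import combinations

def apriori(transactions, min_support=2):
    """Plain single-machine Apriori: each iteration joins frequent (k-1)-itemsets
    into k-candidates and rescans the data to count support."""
    items = {frozenset([i]) for t in transactions for i in t}
    freq, k_sets = {}, items
    while k_sets:
        counts = {c: sum(1 for t in transactions if c <= t) for c in k_sets}
        current = {c: n for c, n in counts.items() if n >= min_support}
        freq.update(current)
        # candidate generation: unions of frequent k-itemset pairs of size k+1
        keys = list(current)
        size = len(keys[0]) + 1 if keys else 0
        k_sets = {a | b for a, b in combinations(keys, 2) if len(a | b) == size}
    return freq

data = [frozenset(t) for t in (['a', 'b', 'c'], ['a', 'c'], ['a', 'd'], ['b', 'c'])]
print({tuple(sorted(s)): n for s, n in apriori(data).items()})
```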

Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining

Procedia PDF Downloads 294
526 Re-Inhabiting the Roof: Han Slawick Covered Roof Terrace, Amsterdam

Authors: Simone Medio

Abstract:

If we observe many modern cities from above, we are typically confronted with a sea of asphalt-clad flat rooftops. In contrast to the modernist expectation of a populated flat roof, flat rooftops in modern multi-storey buildings are rarely used; on the contrary, they typify a desolate and abandoned landscape given over to mechanical system allocation. Flat-roof technology continues to be taken as a given in most multi-storey building designs, with greening as its prevalent environmental justification. This paper seeks a change in the approach to flat roofing. It makes a case for the opportunity at hand for architectonically resolute, sheltered, livable spaces that make better use of the environment at rooftop level. The researcher looks for the triggers that allow that change to happen in the design process of case study buildings. The paper begins by exploring Han Slawick's covered roof terrace in Amsterdam as a simple and essential example of transforming the flat roof into a usable, inhabitable space. It investigates the design challenges and the logistic, financial and legislative hurdles faced by the architect, and the outcomes in terms of building performance and occupant use and satisfaction. The researcher uses a grounded research methodology, with direct interviews with the architect in charge of the building and with the building user. Energy simulation tools and calculation of running costs are also used as further means of validating the change.

Keywords: environmental design, flat rooftop persistence, roof re-habitation, tectonics

Procedia PDF Downloads 273