Search results for: measurement models
5340 Validity and Reliability of Lifestyle Measurement of the LSAS among Recurrent Stroke Patients in Selected Hospital, Central Java, Indonesia
Authors: Meida Laely Ramdani, Earmporn Thongkrajai, Dedy Purwito
Abstract:
Lifestyle is one of the most important factors affecting health. Measurement of lifestyle behaviors is necessary for identifying causal associations between unhealthy lifestyles and health outcomes. Many instruments have been developed to measure lifestyle, but none is specific to stroke recurrence. This study aimed to develop a new questionnaire, the Lifestyle Adjustment Scale (LSAS), for recurrent stroke patients in Indonesia and to measure its reliability and validity. The instrument, consisting of 33 items, was developed from the responses of 30 recurrent stroke patients with a maximum age of 60 years. Data were collected from October to November 2015. The properties of the instrument were evaluated by validity assessment and reliability measures. The content validity was judged adequate by a panel of five experts, with an I-CVI of 0.97. Cronbach’s alpha analysis was carried out to measure the reliability of the LSAS; the overall coefficient was 0.819. The LSAS items were classified under the domains of dietary habit, smoking habit, physical activity, and stress management, with subscale Cronbach’s alpha coefficients of 0.60, 0.39, 0.67, 0.65 and 0.76, respectively. The LSAS was found valid and reliable and can therefore be used as a research tool among recurrent stroke patients. The development of this questionnaire has been adapted to the socio-cultural context in Indonesia.
Keywords: LSAS, recurrent stroke patients, lifestyle, Indonesia
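The abstract's headline reliability figure is Cronbach's alpha. For readers unfamiliar with the statistic, here is a minimal Python sketch of the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores); the 30 x 33 score matrix below is synthetic, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data only: 30 respondents x 33 items scored 1-4.
# Random, uncorrelated items give alpha near zero; real scale data is needed
# to reproduce a value like the reported 0.819.
rng = np.random.default_rng(0)
scores = rng.integers(1, 5, size=(30, 33)).astype(float)
print(f"alpha = {cronbach_alpha(scores):.3f}")
```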
Procedia PDF Downloads 249
5339 Application of Signature Verification Models for Document Recognition
Authors: Boris M. Fedorov, Liudmila P. Goncharenko, Sergey A. Sybachin, Natalia A. Mamedova, Ekaterina V. Makarenkova, Saule Rakhimova
Abstract:
In modern economic conditions, the question of whether a signature on a digital document can be correctly recognized, in order to verify an expression of will or confirm a certain operation, is highly relevant. Additional processing complexity lies in the dynamic variability of each individual's signature, as well as in the way the information is processed, because a signature is biometric data. The article discusses the use of artificial intelligence models to improve the quality of signature confirmation in document recognition. Several possible options for using the model are analyzed. The results of the study show that the authenticity of a signature can be correctly determined even on small samples.
Keywords: signature recognition, biometric data, artificial intelligence, neural networks
Procedia PDF Downloads 148
5338 Analog Input Output Buffer Information Specification Modelling Techniques for Single Ended Inter-Integrated Circuit and Differential Low Voltage Differential Signaling I/O Interfaces
Authors: Monika Rawat, Rahul Kumar
Abstract:
Input Output Buffer Information Specification (IBIS) models are used for describing the analog behavior of the input/output (I/O) buffers of a digital device. They are widely used to perform signal integrity analysis. Advantages of IBIS models include their simple structure, IP protection, and fast simulation time with reasonable accuracy. As the design complexity of drivers and receivers increases, capturing the exact behavior of the transistor-level model in the IBIS model becomes an essential task for achieving better accuracy. In this paper, an improvement to the existing methodology for generating IBIS models for complex I/O interfaces, such as Inter-Integrated Circuit (I2C) and Low Voltage Differential Signaling (LVDS), is proposed. Furthermore, the accuracy and computational performance of the standard method and the proposed approach with respect to SPICE are presented. The investigations will be useful for further improving the accuracy of IBIS models and enhancing their wider acceptance.
Keywords: IBIS, signal integrity, open-drain buffer, low voltage differential signaling, behavior modelling, transient simulation
Procedia PDF Downloads 196
5337 Experiences of Timing Analysis of Parallel Embedded Software
Authors: Muhammad Waqar Aziz, Syed Abdul Baqi Shah
Abstract:
Execution time analysis is fundamental to the successful design and execution of real-time embedded software. In such analysis, the Worst-Case Execution Time (WCET) of a program is a key measure, on the basis of which system tasks are scheduled. WCET analysis of embedded software is also needed for system understanding and to guarantee its behavior. WCET analysis can be performed statically (without executing the program) or dynamically (through measurement). Traditionally, research on WCET analysis assumes sequential code running on single-core platforms. However, as computation steadily moves towards a combination of parallel programs and multi-core hardware, new challenges in WCET analysis need to be addressed. In this article, we report our experiences of performing WCET analysis of Parallel Embedded Software (PES) running on a multi-core platform. The primary purpose was to investigate how WCET estimates of PES can be computed statically and how they can be derived dynamically. Our experiences, as reported in this article, include the challenges we faced, possible ways to address these challenges, and the workarounds that were developed. This article also provides observations on the benefits and drawbacks of deriving WCET estimates using the said methods and gives useful recommendations for further research in this area.
Keywords: embedded software, worst-case execution-time analysis, static flow analysis, measurement-based analysis, parallel computing
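For readers new to measurement-based timing analysis, the following Python sketch illustrates the general idea of a high-water-mark WCET estimate over repeated runs. It is a generic illustration, not the authors' tooling, and a measured maximum is only a lower bound on the true WCET, never a guarantee.

```python
import time

def measure_wcet(task, runs: int = 1000, *args, **kwargs) -> float:
    """Measurement-based high-water-mark estimate: the largest observed
    execution time over repeated runs of `task`."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        task(*args, **kwargs)
        elapsed = time.perf_counter() - start
        worst = max(worst, elapsed)
    return worst

def demo_task(n=10_000):
    # Placeholder workload standing in for a real embedded task.
    return sum(i * i for i in range(n))

print(f"observed WCET: {measure_wcet(demo_task):.6f} s")
```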
Procedia PDF Downloads 324
5336 Plot Scale Estimation of Crop Biophysical Parameters from High Resolution Satellite Imagery
Authors: Shreedevi Moharana, Subashisa Dutta
Abstract:
The present study focuses on the estimation of crop biophysical parameters such as crop chlorophyll, nitrogen, and water stress at plot scale in crop fields. To achieve this, we used high-resolution LISS IV satellite imagery. A new methodology is proposed in this research work: the spectral shape function of the paddy crop is employed to identify the significant wavelengths sensitive to paddy crop parameters. From the shape functions, regression index models were established relating the critical wavelength to the minimum and maximum wavelengths of the multi-spectral high-resolution LISS IV data, and these functional relationships were utilized to develop the index models. From these index models, crop biophysical parameters were estimated and mapped from LISS IV imagery at plot scale at the crop field level. The results showed that the nitrogen content of the paddy crop varied from 2-8%, chlorophyll from 1.5-9%, and water content from 40-90%. It was observed that the variability in the rice agriculture system in India was purely a function of field topography.
Keywords: crop parameters, index model, LISS IV imagery, plot scale, shape function
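The "regression index models" described here relate a band-based index to a measured crop parameter. A hedged Python sketch of that general workflow follows; the reflectances and nitrogen values are hypothetical stand-ins, since the actual critical wavelengths come from the paddy shape function described in the abstract.

```python
import numpy as np

def normalized_index(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    """Normalized-difference index of two reflectance bands."""
    return (band_a - band_b) / (band_a + band_b)

# Hypothetical reflectances at a crop-sensitive and a reference wavelength,
# with matching ground-truth nitrogen samples (illustration only).
r_sensitive = np.array([0.42, 0.38, 0.51, 0.47, 0.35])
r_reference = np.array([0.20, 0.22, 0.18, 0.19, 0.24])
nitrogen_pct = np.array([3.1, 2.6, 6.8, 5.2, 2.1])

index = normalized_index(r_sensitive, r_reference)
slope, intercept = np.polyfit(index, nitrogen_pct, 1)  # linear index model
print(f"N% ~ {slope:.2f} * index + {intercept:.2f}")
```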
Procedia PDF Downloads 168
5335 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods
Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie
Abstract:
The early warning of high stream flow events (HSF) and floods is an important aspect of the management of surface water and river systems. This process can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting the real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models was improved through the integration of lag time (Lt) estimation into the modelling process. A total of 8753 patterns of hourly Q, water level, and rainfall records, representing a one-year period (2011), were utilized in the modelling process. Six hydrological scenarios were arranged through hypothetical cases of input variables to investigate how changes in rainfall (RF) intensity at upstream stations can lead to the formation of floods; the initial stream flow was varied for each scenario in order to cover a wide range of hydrological situations. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The AI-based model was successfully employed for early warning through the advance detection of hydrological conditions that could lead to the formation of floods and HSF, represented by three levels of severity (i.e., alert, warning, and danger). Based on the results of the scenarios, reaching the danger level in the downstream area required high RF intensity in at least two upstream areas. From these applications, it can be concluded that AI-based models are beneficial tools for local authorities for flood control and awareness.
Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence
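A key step described here is integrating the lag time (Lt) into the modelling, so upstream rainfall is causally aligned with the downstream flow it produces. A minimal Python sketch of such lag alignment is shown below; the three-hour lag and the synthetic series are assumptions for illustration only, not the study's estimated lag times.

```python
import numpy as np

def lagged_design(rain: np.ndarray, flow: np.ndarray, lag: int):
    """Pair each hourly flow with the upstream rainfall `lag` hours earlier,
    so the model sees causally aligned inputs."""
    X = np.column_stack([rain[:-lag], flow[:-lag]])  # predictors at t - lag
    y = flow[lag:]                                   # target flow at t
    return X, y

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 2.0, size=100)                  # synthetic hourly rain
flow = np.convolve(rain, [0.2, 0.5, 0.3])[:100] + 5   # delayed flow response
X, y = lagged_design(rain, flow, lag=3)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)          # simple linear stand-in
print(coef)
```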
Procedia PDF Downloads 248
5334 The Prediction of Effective Equation on Drivers' Behavioral Characteristics of Lane Changing
Authors: Khashayar Kazemzadeh, Mohammad Hanif Dasoomi
Abstract:
Given the increasing volume of traffic, lane changing plays a crucial role in traffic flow. Lane changing depends on several factors, including road geometric design, speed, and drivers’ behavioral characteristics, and a great deal of research has been carried out on these fields. Among these factors, the drivers’ behavioral characteristics of lane changing are emphasized in this paper, which predicts the effective equation based on the personal characteristics of lane changing using regression models.
Keywords: effective equation, lane changing, drivers’ behavioral characteristics, regression models
Procedia PDF Downloads 450
5333 Developing and Evaluating Clinical Risk Prediction Models for Coronary Artery Bypass Graft Surgery
Authors: Mohammadreza Mohebbi, Masoumeh Sanagou
Abstract:
The ability to predict clinical outcomes is of great importance to physicians and clinicians. A number of different methods have been used in efforts to predict these outcomes accurately, including scoring systems based on multivariate statistical modelling and models involving the use of classification and regression trees. The process usually consists of two consecutive phases, namely model development and external validation. The model development phase consists of building a multivariate model, evaluating its predictive performance by examining calibration and discrimination, and internal validation. External validation tests the predictive performance of a model by assessing its calibration and discrimination in different but plausibly related patients. A motivating example, prediction modelling using a sample of patients who underwent coronary artery bypass graft (CABG) surgery, is used for illustration, and a set of primary considerations for evaluating prediction model studies is proposed, using specific quality indicators as criteria to help stakeholders judge the quality of a prediction model study.
Keywords: clinical prediction models, clinical decision rule, prognosis, external validation, model calibration, biostatistics
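As a rough illustration of the two validation criteria named here, discrimination and calibration, the following Python sketch fits a model on synthetic data and reports an AUC plus a calibration-in-the-large check. It is a minimal sketch on invented data, not the authors' CABG model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                               # synthetic risk factors
y = (X @ [0.8, -0.5, 0.3, 0.0] + rng.normal(size=500) > 0).astype(int)

# Development / validation split mirrors the two-phase process.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, random_state=1)
model = LogisticRegression().fit(X_dev, y_dev)
p = model.predict_proba(X_val)[:, 1]

print(f"discrimination (AUC): {roc_auc_score(y_val, p):.3f}")
# Crude calibration check: mean predicted risk vs. observed event rate.
print(f"calibration-in-the-large: {p.mean():.3f} vs {y_val.mean():.3f}")
```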
Procedia PDF Downloads 297
5332 An Approach for Pattern Recognition and Prediction of Information Diffusion Model on Twitter
Authors: Amartya Hatua, Trung Nguyen, Andrew Sung
Abstract:
In this paper, we study the information diffusion process on Twitter as a multivariate time series problem. Our model concerns three measures (volume, network influence, and sentiment of tweets) based on 10 features, and we collected 27 million tweets to build our information diffusion time series dataset for analysis. Then, different time series clustering techniques with Dynamic Time Warping (DTW) distance were used to identify different patterns of information diffusion. Finally, we built information diffusion prediction models for new hashtags, comprising two phases: the first phase recognizes the pattern using k-NN with DTW distance; the second phase builds the forecasting model using the traditional Autoregressive Integrated Moving Average (ARIMA) model and the non-linear recurrent neural network of Long Short-Term Memory (LSTM). Preliminary performance evaluation between the different forecasting models shows that LSTM with clustering information notably outperforms the other models. Therefore, our approach can be applied in real-world applications to analyze and predict the information diffusion characteristics of selected topics or memes (hashtags) on Twitter.
Keywords: ARIMA, DTW, information diffusion, LSTM, RNN, time series clustering, time series forecasting, Twitter
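The first phase, recognizing a new hashtag's pattern with k-NN under DTW distance, can be sketched in a few lines of Python. The templates below are toy series standing in for the discovered diffusion patterns; the paper's actual clusters come from 27 million tweets.

```python
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def nearest_pattern(series, templates):
    """1-NN pattern recognition with DTW, as in the first phase."""
    dists = [dtw(series, t) for t in templates]
    return int(np.argmin(dists))

templates = [np.sin(np.linspace(0, 3, 50)),   # pattern 0: slow rise
             np.exp(-np.linspace(0, 3, 50))]  # pattern 1: rapid decay
print(nearest_pattern(np.exp(-np.linspace(0, 3, 40)), templates))  # -> 1
```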
Procedia PDF Downloads 391
5331 The Mechanical and Electrochemical Properties of DC-Electrodeposited Ni-Mn Alloy Coating with Low Internal Stress
Authors: Chun-Ying Lee, Kuan-Hui Cheng, Mei-Wen Wu
Abstract:
The nickel-manganese (Ni-Mn) alloy coating prepared by a DC electrodeposition process in a sulphamate bath was studied. The effects of process parameters, such as current density and electrolyte composition, on the cathodic current efficiency, microstructure, internal stress, and mechanical properties were investigated. Because of its crucial role in the electroforming of microelectronic components, the development of a low-internal-stress coating with high leveling power was emphasized. It was found that both the coating’s manganese content and the cathodic current efficiency increased with the rise in current density. In addition, the internal stress of the deposited coating was compressive at low current densities but became tensile at higher current densities. Moreover, metallographic observation, X-ray diffraction measurement, transmission electron microscope (TEM) examination, and polarization curve measurement were conducted. The Ni-Mn coating consisted of nano-sized columnar grains, and the maximum hardness of the coating was associated with a (111) preferred orientation in the microstructure. The grain size was refined as the manganese content of the coating increased, which accordingly raised its hardness and mechanical tensile strength. In summary, the Ni-Mn coating prepared at the lower current density of 1-2 A/dm² had low internal stress, high leveling power, and better corrosion resistance.
Keywords: Ni-Mn coating, DC plating, internal stress, leveling power
Procedia PDF Downloads 369
5330 Construction of QSAR Models to Predict Potency on a Series of Substituted Imidazole Derivatives as Anti-fungal Agents
Authors: Sara El Mansouria Beghdadi
Abstract:
Quantitative structure–activity relationship (QSAR) modelling is one of the main computational tools used in medicinal chemistry. Over the past two decades, the incidence of fungal infections has increased due to the development of resistance. In this study, QSAR was performed on a series of esters of 2-carboxamido-3-(1H-imidazole-1-yl) propanoic acid derivatives, compounds that have shown moderate to very good antifungal activity. Multiple linear regression (MLR) was used to generate the linear 2D-QSAR models. The dataset consists of 115 compounds with their antifungal activity (log MIC) against Candida albicans (ATCC SC5314). Descriptors were calculated, and different models were generated using ChemOffice, Avogadro, and GaussView software; the selected model was then validated. The study suggests that an increase in lipophilicity and a reduction in the electronic character of the substituent at R1, as well as a reduction in the steric hindrance of the substituent at R2 and in its aromatic character, support the potentiation of the antifungal effect. The results of QSAR could help scientists propose new compounds with higher antifungal activity, intended for immunocompromised patients susceptible to multi-resistant nosocomial infections.
Keywords: quantitative structure–activity relationship, imidazole, antifungal, Candida albicans (ATCC SC5314)
Procedia PDF Downloads 84
5329 Testing and Validation of Stochastic Models in Epidemiology
Authors: Snigdha Sahai, Devaki Chikkavenkatappa Yellappa
Abstract:
This study outlines approaches for testing and validating stochastic models used in epidemiology, focusing on the integration and functional testing of simulation code. It details methods for combining simple functions into comprehensive simulations, distinguishing between deterministic and stochastic components, and applying tests to ensure robustness. Techniques include isolating stochastic elements, utilizing large sample sizes for validation, and handling special cases. Practical examples are provided using R code to demonstrate integration testing, handling of incorrect inputs, and special cases. The study emphasizes the importance of both functional and defensive programming to enhance code reliability and user-friendliness.
Keywords: computational epidemiology, epidemiology, public health, infectious disease modeling, statistical analysis, health data analysis, disease transmission dynamics, predictive modeling in health, population health modeling, quantitative public health, random sampling simulations, randomized numerical analysis, simulation-based analysis, variance-based simulations, algorithmic disease simulation, computational public health strategies, epidemiological surveillance, disease pattern analysis, epidemic risk assessment, population-based health strategies, preventive healthcare models, infection dynamics in populations, contagion spread prediction models, survival analysis techniques, epidemiological data mining, host-pathogen interaction models, risk assessment algorithms for disease spread, decision-support systems in epidemiology, macro-level health impact simulations, socioeconomic determinants in disease spread, data-driven decision making in public health, quantitative impact assessment of health policies, biostatistical methods in population health, probability-driven health outcome predictions
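The study's examples are in R; purely for illustration, here is an analogous Python sketch of the two testing ideas named in the abstract: isolating the deterministic part and validating the stochastic part on a large sample. The toy SIR-type step function is an assumption of this sketch, not the study's code.

```python
import numpy as np

def sir_step(s, i, beta, gamma, rng):
    """One stochastic step of a toy SIR-type process."""
    new_inf = rng.binomial(s, 1 - np.exp(-beta * i))  # new infections
    new_rec = rng.binomial(i, gamma)                  # new recoveries
    return s - new_inf, i + new_inf - new_rec

def test_deterministic_part():
    # With beta = 0 no one can be infected, regardless of randomness.
    rng = np.random.default_rng(0)
    s, i = sir_step(100, 10, beta=0.0, gamma=0.1, rng=rng)
    assert s == 100

def test_stochastic_part_large_sample():
    # Validate the random component against its expectation on a large sample:
    # E[i'] = 1000 - 1000 * gamma = 900 when no new infections are possible.
    rng = np.random.default_rng(0)
    recs = [sir_step(0, 1000, 0.0, 0.1, rng)[1] for _ in range(500)]
    assert abs(np.mean(recs) - 900) < 5

test_deterministic_part()
test_stochastic_part_large_sample()
print("all tests passed")
```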
Procedia PDF Downloads 7
5328 Non-Pharmacological Approach to the Improvement and Maintenance of the Convergence Parameter
Authors: Andreas Aceranti, Guido Bighiani, Francesca Crotto, Marco Colorato, Stefania Zaghi, Marino Zanetti, Simonetta Vernocchi
Abstract:
The management of eye parameters such as convergence, accommodation, and miosis is very complex; in fact, both the neurovegetative system and the complex oculocephalogyric system come into play. We have found the "high-velocity low-amplitude" (HVLA) technique directed at C7-T1 (where the ciliospinal nucleus of Budge is located) to be effective in improving the convergence parameter, as assessed through measurement of the point of maximum convergence. With this research, we set out to investigate whether the improvement obtained through the HVLA maneuver lasts over time, carrying out a pre-manipulation measurement, one immediately after manipulation, and one a month after manipulation. We took a population of 30 subjects with both refractive and non-refractive problems. Of the 30 patients tested, 27 gave a positive result after the HVLA maneuver, showing an improvement in the point of maximum convergence. After a month, we retested all 27 subjects: some further improved the result, others maintained it, and three subjects slightly lost the gain obtained. None of the re-tested patients returned to their pre-manipulation point of maximum convergence. This result opens the door to a multidisciplinary approach between ophthalmologists and osteopaths aimed at addressing the oculomotricity and convergence deficits that increasingly afflict our society due to the massive use of devices and to lifestyles conducted in closed and restricted environments.
Keywords: point of maximum convergence, HVLA, improvement in PPC, convergence
Procedia PDF Downloads 77
5327 Validation of an Acuity Measurement Tool for Maternity Services
Authors: Cherrie Lowe
Abstract:
The TrendCare Patient Dependency System is currently utilized by a large number of maternity services across Australia, New Zealand, and Singapore. In 2012, 2013, and 2014, validation studies were initiated in all three countries to validate the acuity tools used for women in labour and for postnatal mothers and babies. This paper will present the findings of the validation study. Aim: The aims of this study were to identify whether the care hours provided by the TrendCare acuity system accurately reflected the care required by women and babies, and to obtain evidence of changes required to acuity indicators and/or category timings to ensure the TrendCare acuity system remains reliable and valid across a range of maternity care models in three countries. Method: A non-experimental action research methodology was used across four District Health Boards in New Zealand, two large public Australian maternity services, and a large tertiary maternity service in Singapore. Standardized data collection forms and timing devices were used to record midwife contact times with the women and babies included in the study. Rejection processes excluded samples where care was not completed or was rationed. The variances between the actual timed midwife/mother/baby contact and the TrendCare acuity times were identified and investigated. Results: 87.5% (18) of the TrendCare acuity category timings matched the actual timings recorded for midwifery care; 12.5% (3) of the TrendCare night-duty categories provided fewer minutes of care than the actual timings. 100% of labour ward TrendCare categories matched the actual timings for midwifery care. The actual times for assistance given to New Zealand independent midwives in the labour ward deviated significantly from previous studies, demonstrating the need for additional time allocations in TrendCare. Conclusion: The results demonstrated the importance of regularly validating the TrendCare category timings against the care hours required, as variances in models of care and length of stay in maternity units have increased midwifery workloads on the night shift. The level of assistance provided by the core labour ward staff to independent midwives has increased substantially. Outcomes: As a consequence of this study, changes were made to the night-duty TrendCare maternity categories, additional acuity indicators were developed, and the times for assisting independent midwives were increased. The updated TrendCare version was delivered to maternity services in 2014.
Keywords: maternity, acuity, research, nursing workloads
Procedia PDF Downloads 378
5326 From the Sharing Economy to Social Manufacturing: Analyzing Collaborative Service Networks in the Manufacturing Domain
Authors: Babak Mohajeri
Abstract:
In recent years, the conventional ownership-based business model has shifted towards accessibility in a variety of markets. Two trends can be observed in the evolution of this rental-like business model. The first is the technological development that enables the emergence of new business models, which are increasingly agile and flexible. For example, Spotify, an online music streaming company, provides consumers access to millions of music tracks conveniently through a smartphone, tablet, or computer; similarly, Car2Go, a car sharing company, provides its members flexible access to nearby shared cars. The second trend is the increase in communication and connections via social networks, which enables a shift to peer-to-peer accessibility-based business models. Conventionally, companies provide customers access to the company's own products or services. In the peer-to-peer model, by contrast, companies facilitate access and connections across their customers, so that customers can use property, skills, competencies, or services owned by other customers. This is the so-called sharing economy business model. The aim of this study is to investigate a new and emerging type of sharing economy model in which the roles of customers and service providers may change dramatically. This new model is called collaborative service networks, and we propose a mechanism for this business model. Uber and Airbnb, two successful and growing companies, were selected for our case studies, and their business models are analyzed. Finally, we study the emergence of collaborative service networks in the manufacturing domain. Our findings point to a new manufacturing paradigm called social manufacturing.
Keywords: sharing economy, collaborative service networks, social manufacturing, manufacturing development
Procedia PDF Downloads 317
5325 Airport Pavement Crack Measurement Systems and Crack Density for Pavement Evaluation
Authors: Ali Ashtiani, Hamid Shirazi
Abstract:
This paper reviews the status of existing practice and research related to measuring pavement cracking and using crack density as a pavement surface evaluation protocol. Crack density is currently not widely used for pavement evaluation within the airport community, and its use by the highway community is limited. However, surface cracking is a distress that is closely monitored by airport staff and significantly influences the development of maintenance, rehabilitation, and reconstruction plans for airport pavements. Therefore, crack density has the potential to become an important indicator of pavement condition if the type, severity, and extent of surface cracking can be accurately measured. A pavement distress survey is an essential component of any pavement assessment. Manual crack surveying has been widely used for decades to measure pavement performance. However, the accuracy and precision of manual surveys can vary depending upon the surveyor, and performing surveys may disrupt normal operations. Given this variability, manual surveying has shown inconsistencies in distress classification and measurement, which can potentially impact the planning of pavement maintenance, rehabilitation, and reconstruction and the associated funding strategies. A substantial effort has been devoted over the past 20 years to reducing human intervention, and the error associated with it, by moving toward automated distress collection methods. Automated methods are systems that identify, classify, and quantify pavement distresses through processes that require no or very minimal human intervention, principally by using digital recognition software to analyze and characterize pavement distresses. The lack of established protocols for the measurement and classification of pavement cracks captured in digital images is a challenge to developing a reliable automated system for distress assessment. Variations in the types and severity of distresses, different pavement surface textures and colors, and the presence of pavement joints and edges all complicate automated image processing and crack measurement and classification. This paper summarizes the commercially available systems and technologies for automated pavement distress evaluation. A comprehensive automated pavement distress survey involves the collection, interpretation, and processing of surface images to identify the type, quantity, and severity of surface distresses; the outputs can be used to quantitatively calculate the crack density. The systems for automated distress survey using digital images reviewed in this paper can assist the airport industry in developing a pavement evaluation protocol based on crack density. Analysis of automated distress survey data can lead to a crack density index, which can be used as a means of assessing pavement condition and predicting pavement performance, allowing airport owners to determine the type of pavement maintenance and rehabilitation in a more consistent way.
Keywords: airport pavement management, crack density, pavement evaluation, pavement management
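Once cracks have been extracted from an image, crack density can be defined, for example, as total crack length per unit surface area. The Python sketch below shows one such definition applied to a binary crack mask; the skeletonization step, the millimetre scale, and the definition itself are assumptions of this sketch, not a protocol from the paper.

```python
import numpy as np
from skimage.morphology import skeletonize  # assumed available

def crack_density(mask: np.ndarray, mm_per_px: float) -> float:
    """Crack density as total crack length per unit surface area
    (mm of crack per mm^2), from a binary crack mask."""
    skeleton = skeletonize(mask.astype(bool))   # 1-px-wide crack centrelines
    crack_length_mm = skeleton.sum() * mm_per_px
    area_mm2 = mask.size * mm_per_px ** 2
    return crack_length_mm / area_mm2

mask = np.zeros((100, 100), dtype=np.uint8)
mask[50, 10:90] = 1                              # one horizontal crack
print(f"{crack_density(mask, mm_per_px=2.0):.4f} mm/mm^2")
```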
Procedia PDF Downloads 185
5324 Numerical Model Validation Using Durbin Method
Authors: H. Al-Hajeri
Abstract:
The computation of the effectiveness of turbulence-enhancement surface features, such as ribs, as a means of promoting mixing and hence heat transfer has attracted the continued attention of the engineering community. In this study, the simulation of a three-dimensional cooling passage is carried out employing a number of turbulence models, including the Durbin model. The cooling passage consists of a square-section duct whose upper and lower surfaces feature staggered cuboid ribs. The main objective of this paper is to compare the performance of the v2-f model against other established turbulence models as implemented in the commercial CFD code Ansys Fluent. The present study demonstrates that the v2-f model can successfully capture the isothermal air flow phenomena in flow over obstacles.
Keywords: CFD, cooling passage, Durbin model, turbulence model
Procedia PDF Downloads 503
5323 Study on the Non-Contact Sheet Resistance Measuring of Silver Nanowire Coated Film Using Terahertz Wave
Authors: Dong-Hyun Kim, Wan-Ho Chung, Hak-Sung Kim
Abstract:
In this work, non-destructive evaluation was conducted to measure the sheet resistance of a silver nanowire coated film and to detect damage in the film using terahertz (THz) waves. A pulse-type THz instrument was used, and measurements were performed in transmission mode and in pitch-catch reflection mode with a 30-degree incidence angle. In the transmission mode, the intensity of the THz wave gradually increased as the conductivity decreased; in the pitch-catch reflection mode, the intensity decreased as the conductivity decreased. To confirm the conductivity of the film, the sheet resistance was measured with a 4-point probe station, and a correlation formula was derived from the relation between THz intensity and sheet resistance. By substituting the sheet resistance into the formula and comparing the resulting value with the measured maximum THz wave intensity, measuring the sheet resistance using the THz wave was found more suitable than using the 4-point probe station. In addition, damage on the silver nanowire coated film was detected by applying the THz imaging system, so the reliability of the entire film can also be ensured. In conclusion, real-time monitoring using THz waves can be applied to transparent electrodes, detecting damaged areas as well as measuring the sheet resistance.
Keywords: terahertz wave, sheet resistance, non-destructive evaluation, silver nanowire
Procedia PDF Downloads 490
5322 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method
Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang
Abstract:
Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location that it operates in. Each product has its own sell-in and sell-out time series, which are forecasted on weekly and monthly scales for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house solution that uses machine learning models for forecasting. Similar products are combined so that there is one model per product category; in this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution scales to all product categories and is flexible enough to include any new product, or eliminate any existing one, in a product category as requirements change. We show how the machine learning development environment on Amazon Web Services (AWS) can be used to explore a set of forecasting models and create business intelligence dashboards that work with the existing demand planning tools in Nestlé. We explored recent deep learning networks (DNNs), which show promising results for a variety of time series forecasting problems. Specifically, we used a DeepAR autoregressive model that can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and include domain-specific knowledge, we designed an ensemble approach combining DeepAR with an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that a future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%. The machine learning (ML) pipeline implemented in the cloud is currently being extended to other product categories and is being adopted by other geomarkets.
Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series
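The abstract does not spell out the blending rule, so as one plausible reading, the Python sketch below blends a DeepAR-style forecast with an XGBoost-style forecast using a weight chosen on a validation window. All numbers and the grid-search weighting scheme are placeholders for illustration.

```python
import numpy as np

def ensemble_forecast(deepar_pred, xgb_pred, w: float = 0.5):
    """Weighted blend of two forecasters; w is tuned on a validation window."""
    return w * np.asarray(deepar_pred) + (1 - w) * np.asarray(xgb_pred)

def pick_weight(deepar_val, xgb_val, actual):
    """Grid-search the blend weight that minimizes validation MAPE."""
    weights = np.linspace(0, 1, 21)
    mapes = [np.mean(np.abs(ensemble_forecast(deepar_val, xgb_val, w) - actual)
                     / np.abs(actual)) for w in weights]
    return weights[int(np.argmin(mapes))]

actual = np.array([100., 120., 90., 110.])            # hypothetical sell-in
w = pick_weight([95, 125, 85, 100], [110, 115, 95, 118], actual)
print(f"chosen weight for the DeepAR component: {w:.2f}")
```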
Procedia PDF Downloads 274
5321 Manual Wheelchair Propulsion Efficiency on Different Slopes
Authors: A. Boonpratatong, J. Pantong, S. Kiattisaksophon, W. Senavongse
Abstract:
In this study, an integrated sensing and modeling system for manual wheelchair propulsion measurement and propulsion efficiency calculation was used to indicate the level of overuse. Seven subjects participated in the measurement. On the level surface, the propulsion efficiencies did not differ significantly as the riding speed increased. By contrast, the propulsion efficiencies on the 15-degree incline were restricted to around 0.5. The results are supported by previously reported wheeling-resistance and propulsion-torque relationships, implying a margin of overuse. Upper limb musculoskeletal injuries and syndromes in manual wheelchair riders are common, chronic, and may be caused at different levels by overuse, i.e., repetitive riding on steep inclines. Qualitative analysis such as the mechanical effectiveness of manual wheeling, establishing the relationship between riding difficulty, mechanical effort, and propulsion output, is scarce, possibly due to the challenge of simultaneously measuring those factors in conventional manual wheelchairs and everyday environments. In this study, the integrated sensing and modeling system was used to measure manual wheelchair propulsion efficiency in conventional manual wheelchairs and everyday environments. The sensing unit comprises contact pressure and inertia sensors, which are portable and universal. Four healthy male and three healthy female subjects participated in measurements on a level surface and on a 15-degree incline. Subjects were asked to ride the manual wheelchair at three different self-selected speeds on the level surface and only at their preferred speed on the 15-degree incline. Five trials were performed in each condition. The kinematic data of the subject’s dominant hand, a wheel spoke, and the trunk of the wheelchair were collected through the inertia sensors. The compression force applied from the thumb of the dominant hand to the push rim was collected through the contact pressure sensors. The signals from all sensors were recorded synchronously. The subject-selected speeds for slow, preferred, and fast riding on the level surface and the subject-preferred speed on the 15-degree incline were recorded. The propulsion efficiency, a ratio between the pushing force in the tangential direction to the push rim and the net force resulting from the three-dimensional riding motion, was derived by inverse dynamics in the modeling unit. The intra-subject variability of the riding speed did not differ significantly as the self-selected speed increased on the level surface; since the riding speed on the 15-degree incline was difficult to regulate, the intra-subject variability was not applied there. On the level surface, the propulsion efficiencies did not differ significantly as the riding speed increased. However, the propulsion efficiencies on the 15-degree incline were restricted to around 0.5 for all subjects at their preferred speed. The results are supported by the previously reported relationship between wheeling resistance and propulsion torque, in which the wheelchair axle torque increased but muscle activities did not increase when the resistance was high. This implies that the margin of dynamic effort at relatively high resistance is similar to the margin of overuse indicated by the restricted propulsion efficiency on the 15-degree incline.
Keywords: contact pressure sensor, inertia sensor, integrated sensing and modeling system, manual wheelchair propulsion efficiency, manual wheelchair propulsion measurement, tangential force, resultant force, three-dimensional riding motion
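The efficiency reported here is the ratio of the tangential push-rim force to the resultant force. A minimal Python sketch of that ratio follows; the force vector and rim coordinate frame are hypothetical, and the real study derives the forces by inverse dynamics from the sensor data.

```python
import numpy as np

def propulsion_efficiency(force_xyz: np.ndarray, tangent_xyz: np.ndarray) -> float:
    """Mechanical effectiveness of a push: tangential component of the
    hand-rim force divided by the magnitude of the resultant force."""
    t = tangent_xyz / np.linalg.norm(tangent_xyz)   # unit tangent at contact
    tangential = float(np.dot(force_xyz, t))
    resultant = float(np.linalg.norm(force_xyz))
    return tangential / resultant

# Hypothetical push: mostly tangential with some radial/lateral loss.
f = np.array([40.0, 25.0, 10.0])     # N, in rim coordinates
t = np.array([1.0, 0.0, 0.0])        # tangential direction at contact point
print(f"efficiency = {propulsion_efficiency(f, t):.2f}")   # ~ 0.83
```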
Procedia PDF Downloads 290
5320 Hidden Markov Movement Modelling with Irregular Data
Authors: Victoria Goodall, Paul Fatti, Norman Owen-Smith
Abstract:
Hidden Markov Models have become popular for the analysis of animal tracking data and are being used to model the movements of a variety of species in many areas around the world. A common assumption of the model is that the observations have regular time steps, which in many ecological studies will not be the case. The objective of this research is to modify the movement model to allow for irregularly spaced locations and to investigate the effect on the inferences that can be made about the latent states. A modification of the likelihood function to allow for irregularly spaced locations is investigated, without interpolating or averaging the movement rate. The suitability of the modification is investigated using GPS tracking data for lion (Panthera leo) in South Africa, with many observations obtained during the night and few during the day. Many nocturnal-predator tracking studies are set up this way, to obtain many locations at night when the animal is most active and difficult to observe, and few during the day, when the animal is expected to rest and is potentially easier to observe. Modifying the likelihood function allows the popular Hidden Markov Model framework to be used to model these irregularly spaced locations, making use of all the observed data.
Keywords: hidden Markov models, irregular observations, animal movement modelling, nocturnal predator
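One standard way to let an HMM cope with irregular gaps, shown in the hedged Python sketch below, is to embed the unit-step transition matrix in continuous time and scale it by the gap length. The abstract says the authors modify the likelihood directly, so this is an analogy illustrating the underlying issue rather than their exact method; the two-state matrix is invented.

```python
import numpy as np

def transition_over_gap(P: np.ndarray, dt: float) -> np.ndarray:
    """Transition probabilities across an irregular gap dt, via the
    generator Q of the embedded continuous-time chain: P(dt) = expm(Q * dt)."""
    from scipy.linalg import expm, logm
    Q = logm(P)                 # generator implied by the unit-step matrix P
    return np.real(expm(Q * dt))

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # unit-step (e.g., hourly) transitions
print(transition_over_gap(P, 0.5))  # half-step transition matrix
print(transition_over_gap(P, 3.0))  # three-step gap, equals P @ P @ P
```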
Procedia PDF Downloads 244
5319 Comparison of Adsorbents for Ammonia Removal from Mining Wastewater
Authors: F. Al-Sheikh, C. Moralejo, M. Pritzker, W. A. Anderson, A. Elkamel
Abstract:
Ammonia in mining wastewater is a significant problem, and treatment can be especially difficult in cold climates where biological treatment is not feasible. An adsorption process is one of the alternative processes that can be used to reduce ammonia concentrations to acceptable limits; therefore, a LEWATIT strongly acidic H+-form ion exchange resin and a Bowie chabazite Na-form AZLB-Na zeolite were tested to assess their effectiveness. For these adsorption tests, two packed-bed columns (a mini-column constructed from a 32-cm long x 1-cm diameter piece of glass tubing, and a 60-cm long x 2.5-cm diameter Ace Glass chromatography column) containing varying quantities of the adsorbents were used. A mining wastewater with an ammonia concentration of 22.7 mg/L was fed through the columns at controlled flow rates. In the experimental work, the maximum capacities of the LEWATIT ion exchange resin were 0.438, 0.448, and 1.472 mg/g for 3, 6, and 9 g, respectively, in the mini-column and 1.739 mg/g for 141.5 g in the larger Ace column, while the capacities of the AZLB-Na zeolite were 0.424 and 0.784 mg/g for 3 and 6 g, respectively, in the mini-column and 1.1636 mg/g for 38.5 g in the Ace column. In the theoretical work, Thomas, Adams-Bohart, and Yoon-Nelson models were constructed to describe the breakthrough curves of the adsorption process and to find the constants of the above-mentioned models. In the regeneration tests, 5% hydrochloric acid, HCl (v/v), and 10% sodium hydroxide, NaOH (w/v), were used to regenerate the LEWATIT resin and AZLB-Na zeolite, with 44% and 63.8% recovery, respectively. In conclusion, continuous-flow adsorption using a LEWATIT ion exchange resin and an AZLB-Na zeolite is efficient when using a co-flow technique for removing ammonia from wastewater. The Thomas, Adams-Bohart, and Yoon-Nelson models fit the data satisfactorily, with R² close to 1 in all cases.
Keywords: AZLB-Na zeolite, continuous adsorption, Lewatit resin, models, regeneration
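Of the three breakthrough models named, the Yoon-Nelson form is the most compact: C/C0 = 1 / (1 + exp(k_YN * (tau - t))), where tau is the time to 50% breakthrough. The Python sketch below fits it to hypothetical breakthrough data with SciPy; the concentration ratios are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def yoon_nelson(t, k_yn, tau):
    """Yoon-Nelson breakthrough curve: C/C0 = 1 / (1 + exp(k_YN * (tau - t)))."""
    return 1.0 / (1.0 + np.exp(k_yn * (tau - t)))

# Hypothetical breakthrough data: time (min) vs. outlet/inlet concentration.
t = np.array([10, 30, 60, 90, 120, 150, 180], dtype=float)
c_ratio = np.array([0.02, 0.08, 0.25, 0.52, 0.78, 0.91, 0.97])

(k_yn, tau), _ = curve_fit(yoon_nelson, t, c_ratio, p0=[0.05, 90.0])
print(f"k_YN = {k_yn:.4f} 1/min, tau (50% breakthrough) = {tau:.1f} min")
```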
Procedia PDF Downloads 389
5318 Density Measurement of Underexpanded Jet Using Stripe Patterned Background Oriented Schlieren Method
Authors: Shinsuke Udagawa, Masato Yamagishi, Masanori Ota
Abstract:
The Schlieren method, which has conventionally been used to visualize high-speed flows, has disadvantages such as the complexity of the experimental setup and the inability to quantitatively analyze the amount of light refraction. The Background Oriented Schlieren (BOS) method proposed by Meier is one measurement method that solves the problems mentioned above. The refraction of light is used in the BOS method, as in the Schlieren method; however, the BOS method is characterized by using a digital camera to capture images of the background behind the observation area, which are later analyzed by a computer to quantitatively detect the shift of the background image. The experimental setup for BOS does not require the concave mirrors, pinholes, or color filters that are necessary in the conventional Schlieren method, thus simplifying the setup. However, the BOS method causes defocusing of the observation results, since focusing the camera on the background leads to defocusing of the observed object. The defocusing of the object becomes greater with increasing distance between the background and the object; on the other hand, higher sensitivity is obtained. Therefore, the distance between the background and the object must be set appropriately for the experiment, considering the relation between defocus and sensitivity. The purpose of this study is to experimentally clarify the effect of defocus on density field reconstruction. In this study, visualization experiments on an underexpanded jet were performed using a BOS measurement system we constructed with a Ronchi ruling as the background. The reservoir pressure of the jet and the distance between the camera and the jet axis were fixed, and the distance between the background and the jet axis was varied as the parameter. The images were later analyzed on a personal computer to quantitatively detect the shift of the background image by comparing the background pattern with the captured image of the underexpanded jet. The measured shifts were reconstructed into a density field using the Abel transformation and the Gladstone-Dale equation. From the experimental results, it is found that the reconstructed density image becomes more blurred and the noise decreases with increasing distance between the background and the axis of the underexpanded jet. Consequently, it is clarified that, at least in this experimental setup, the sensitivity constant should be greater than 20 and the circle-of-confusion diameter should be less than 2.7 mm.
Keywords: BOS method, underexpanded jet, Abel transformation, density field visualization
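For reference, the two reconstruction relations named here are commonly written as follows. This is a textbook statement under the axisymmetry assumption, with K the Gladstone-Dale constant of the gas; the exact deflection-to-index formulation used in the study may differ in detail.

```latex
% Abel inversion: recover the radial refractive-index profile n(r) of the
% axisymmetric jet from the line-integrated deflection data \varepsilon(y):
n(r) - n_\infty = -\frac{1}{\pi} \int_r^{R}
    \frac{\mathrm{d}\varepsilon/\mathrm{d}y}{\sqrt{y^2 - r^2}} \,\mathrm{d}y
% Gladstone--Dale relation linking refractive index to gas density:
n - 1 = K\,\rho \quad\Longrightarrow\quad \rho(r) = \frac{n(r) - 1}{K}
```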
Procedia PDF Downloads 78
5317 Enhanced Calibration Map for a Four-Hole Probe for Measuring High Flow Angles
Authors: Jafar Mortadha, Imran Qureshi
Abstract:
This research explains and compares modern techniques for measuring the flow angles of a flowing fluid with the traditional technique of using multi-hole pressure probes. In particular, the focus of the study is on four-hole probes, which offer great reliability and benefits in several applications where the use of modern measurement techniques is either inconvenient or impractical. Thanks to modern advancements in manufacturing, small multi-hole pressure probes can be made with high precision, which eliminates the need to calibrate every manufactured probe. This study aims to improve the range of calibration maps for a four-hole probe to allow high flow angles to be measured accurately. The research methodology comprises a literature review of the successful calibration definitions that have been implemented on five-hole probes. These definitions are then adapted and applied to a four-hole probe using a set of raw pressure data. A comparison of the different definitions is carried out in MATLAB, and the results are analyzed to determine the best calibration definition. Taking both simplicity of implementation and reliability of flow angle estimation into account, a technique adapted from a research paper written in 2002 offered the most promising outcome. Consequently, the method is seen as a good enhancement for four-hole probes, and it can substitute for existing calibration definitions that offer less accuracy.
Keywords: calibration definitions, calibration maps, flow measurement techniques, four-hole probes, multi-hole pressure probes
Procedia PDF Downloads 295
5316 A Test Methodology to Measure the Open-Loop Voltage Gain of an Operational Amplifier
Authors: Maninder Kaur Gill, Alpana Agarwal
Abstract:
It is practically not feasible to measure the open-loop voltage gain of an operational amplifier in the open-loop configuration, because the open-loop voltage gain is very large. To avoid saturating the output voltage, a very small input must be applied to the operational amplifier, too small to be measured practically with a digital multimeter. A test circuit for measuring the open-loop voltage gain of an operational amplifier has been proposed and verified using simulation tools as well as experimentally on a breadboard. The main advantage of this test circuit is that it is simple, fast, accurate, cost-effective, and easy to handle, even on a breadboard. The test circuit requires only the device under test (DUT) along with resistors. The circuit has been tested for measuring the open-loop voltage gain of different operational amplifiers. The underlying goal is to design testable circuits for various analog devices that are simple to realize in VLSI systems, give accurate results, and do not change the characteristics of the original system. The DUTs used are the LM741CN and UA741CP. For the LM741CN, the simulated gain and the experimentally measured gain (average) are 89.71 dB and 87.71 dB, respectively. For the UA741CP, the simulated gain and the experimentally measured gain (average) are 101.15 dB and 105.15 dB, respectively. These values are close to the datasheet values.
Keywords: device under test (DUT), open-loop voltage gain, operational amplifier, test circuit
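The dB figures quoted follow directly from a measured output swing and the inferred effective differential input. A small Python sketch of that conversion is shown below; the 10 V and 33 µV readings are hypothetical, not values from the paper.

```python
import math

def open_loop_gain_db(v_out: float, v_in_effective: float) -> float:
    """Open-loop voltage gain in dB from the output swing and the
    (indirectly obtained) effective differential input voltage."""
    return 20 * math.log10(v_out / v_in_effective)

# Hypothetical readings: 10 V output swing for a 33 uV effective input.
print(f"{open_loop_gain_db(10.0, 33e-6):.1f} dB")   # ~ 109.6 dB
```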
Procedia PDF Downloads 447
5315 A Low Cost Non-Destructive Grain Moisture Embedded System for Food Safety and Quality
Authors: Ritula Thakur, Babankumar S. Bansod, Puneet Mehta, S. Chatterji
Abstract:
Moisture plays an important role in the storage, harvesting, and processing of food grains and related agricultural products, and it is an important characteristic of most agricultural products for the maintenance of quality. Accurate knowledge of the moisture content can be of significant value in maintaining quality and preventing contamination of cereal grains. The present work reports the design and development of a microcontroller-based, low-cost, non-destructive moisture meter, which uses a complex impedance measurement method for moisture measurement of wheat in a parallel-plate capacitor arrangement. Moisture can conveniently be sensed by measuring the complex impedance of a small parallel-plate capacitive sensor filled with the kernels in between its two plates, exciting the sensor at 30 kHz and 100 kHz. The effects of density and temperature variations were compensated by suitable corrections in the developed algorithm. The results were compared with the standard dry-oven technique, and the developed method was found to be highly accurate, with less than 1% error. The developed moisture meter is a low-cost, highly accurate, non-destructive instrument for determining the moisture of grains, utilizing the fast computing capabilities of a microcontroller.
Keywords: complex impedance, moisture content, electrical properties, safety of food
Procedia PDF Downloads 462
5314 Ultrasonic Evaluation of Periodic Rough Inaccessible Surfaces from Back Side
Authors: Chanh Nghia Nguyen, Yu Kurokawa, Hirotsugu Inoue
Abstract:
Surface roughness is an important parameter for evaluating the quality of material surfaces, since it affects the function and performance of industrial components. Although stylus and optical techniques are commonly used for measuring surface roughness, they are applicable only to accessible surfaces. In practice, surface roughness measurement from the back side is sometimes demanded, for example, in the inspection of safety-critical parts such as the inner surfaces of pipes. However, little attention has been paid to the measurement of back-surface roughness so far. Since a back surface is usually inaccessible to stylus or optical techniques, the ultrasonic technique is one of the most effective alternatives. In this research, an ultrasonic pulse-echo technique is considered for evaluating the pitch and height of a back surface having a periodic triangular profile, as a very first step. The pitch of the surface profile is measured by applying the diffraction grating theory for oblique incidence; the height is then evaluated by numerical analysis based on the Kirchhoff theory for normal incidence. The validity of the proposed method was verified by both numerical simulation and experiment. It was confirmed that the pitch is accurately measured in most cases. The height was also evaluated with good accuracy when it was smaller than half the pitch, because of the approximation in the Kirchhoff theory.
Keywords: back side, inaccessible surface, periodic roughness, pulse-echo technique, ultrasonic NDE
Procedia PDF Downloads 275
5313 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation
Authors: Aicha Majda, Abdelhamid El Hassani
Abstract:
Lung CT image segmentation is a prerequisite for lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans, such as those with lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor transformations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined based on a patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are based on this new term, and the graph is then created using these weights between its nodes. Finally, the segmentation is completed with the min-cut/max-flow algorithm. Experimental results show that the proposed method is accurate and efficient and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
Keywords: graph cuts, lung CT scan, lung parenchyma segmentation, patch-based similarity metric
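The core change described, replacing the intensity difference in the boundary term with a patch distance, can be sketched as below in Python. The Gaussian weighting, the patch radius, and the mean-squared patch distance are assumptions of this sketch; the paper's exact metric is not given in the abstract.

```python
import numpy as np

def patch_similarity_weight(img, p, q, radius=2, sigma=100.0):
    """Boundary weight between neighbouring pixels p and q, from the
    distance between the patches centred on them (instead of the plain
    intensity difference used in the standard graph cut)."""
    def patch(c):
        r, s = c
        return img[r - radius:r + radius + 1, s - radius:s + radius + 1]
    d2 = np.mean((patch(p).astype(float) - patch(q).astype(float)) ** 2)
    return np.exp(-d2 / (2 * sigma ** 2))   # high weight = similar patches

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)  # toy "CT slice"
print(patch_similarity_weight(img, (10, 10), (10, 11)))
```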
Procedia PDF Downloads 169
5312 Assessment of Material Type, Diameter, Orientation and Closeness of Fibers in Vulcanized Reinforced Rubbers
Authors: Ali Osman Güney, Bahattin Kanber
Abstract:
In this work, the effects of material type, diameter, orientation, and closeness of fibers on the general performance of reinforced vulcanized rubbers are investigated using the finite element method with experimental verification. Various fiber materials such as hemp, nylon, and polyester are used with different fiber diameters, orientations, and closeness. 3D finite element models are developed by considering bonded contact elements at the fiber/rubber sheet interfaces. The fibers are assumed to be linear elastic, while the vulcanized rubber is considered hyper-elastic. After experimental verification of the finite element results, the developed models are analyzed under a prescribed displacement that causes tension. The normal stresses in the fibers and the shear stresses between the fibers and the rubber sheet are investigated in all models. The large deformation of the reinforced rubber sheet is also represented for various fiber conditions under incremental loading. A general assessment is made of the best fiber properties for reinforced rubber sheets under tension-load conditions.
Keywords: reinforced vulcanized rubbers, fiber properties, out-of-plane loading, finite element method
Procedia PDF Downloads 347
5311 An Application of Graph Theory to the Electrical Circuit Using Matrix Method
Authors: Samai'la Abdullahi
Abstract:
A graph is a pair of two sets, nodes and edges; it is a pictorial representation of a system using these two basic elements. A node is represented by a circle (either hollow or shaded), and an edge is represented by a line segment connecting two nodes. In this paper, we present a circuit network as an application of graph theory: circuit models are represented as graphs using the logical connection method, where we formulate the adjacency and incidence matrices and apply truth tables.
Keywords: Euler circuit and path, graph representation of circuit networks, representation of graph models, representation of circuit network using logical truth table
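To make the matrix formulation concrete, here is a minimal Python sketch of the adjacency and incidence matrices for a toy three-node loop circuit; the node and branch labels are invented for illustration.

```python
import numpy as np

# A toy circuit graph: nodes are junctions, edges are circuit branches.
nodes = ["n1", "n2", "n3"]
edges = [("n1", "n2"), ("n2", "n3"), ("n3", "n1")]   # a simple loop

idx = {n: i for i, n in enumerate(nodes)}

# Adjacency matrix: A[i, j] = 1 if a branch connects node i and node j.
A = np.zeros((len(nodes), len(nodes)), dtype=int)
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1

# Incidence matrix: rows are nodes, columns are branches (+1 tail, -1 head).
M = np.zeros((len(nodes), len(edges)), dtype=int)
for k, (u, v) in enumerate(edges):
    M[idx[u], k] = 1
    M[idx[v], k] = -1

print(A)
print(M)
```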
Procedia PDF Downloads 561