Search results for: micro data
24865 Implementation of Data Science in Field of Homologation
Authors: Shubham Bhonde, Nekzad Doctor, Shashwat Gawande
Abstract:
For the use and import of keys, ID transmitters, and body control modules with radio transmission in many countries, homologation is required. The final deliverables in the homologation of a product are certificates. Across the world of homologation there are approximately 200 certificates per product, most of them in local languages. It is challenging to manually investigate each certificate and extract relevant data such as the expiry date and approval date. Accurate extraction is essential, since inaccuracy may lead to missed re-homologation of certificates and, in turn, a non-compliance situation. There is therefore scope for automating the reading of certificate data in the field of homologation. We use deep learning as the automation tool. We first trained a model using machine learning by providing basic data for all countries; the model is trained only once, by feeding PDF and JPG files through an ETL process. The trained model then yields increasingly accurate results over time. As an outcome, the expiry date and approval date of a certificate are obtained with a single click. This will eventually help to implement automation features on a broader level in the database where certificates are stored, reducing human error to almost negligible levels.
Keywords: homologation, re-homologation, data science, deep learning, machine learning, ETL (extract transform loading)
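The date-extraction step described above can be sketched as a labelled-pattern search over OCR'd certificate text. The label strings, date formats, and function names below are illustrative assumptions, not the paper's actual pipeline:

```python
import re
from datetime import datetime

# Date pattern: day/month/year with ., / or - separators (an assumption;
# real certificates use many locale-specific formats).
DATE_PATTERN = r"(\d{1,2}[./-]\d{1,2}[./-]\d{4})"

def extract_dates(text):
    """Return (approval_date, expiry_date) found near their labels, or None."""
    def find(label):
        # Allow a short run of non-digits (": ", " - ", ...) between
        # the label and the date itself.
        m = re.search(label + r"\D{0,10}" + DATE_PATTERN, text, re.IGNORECASE)
        if not m:
            return None
        day, month, year = re.split(r"[./-]", m.group(1))
        return datetime(int(year), int(month), int(day)).date()
    return find(r"approval date"), find(r"expiry date")

sample = "Approval date: 12.05.2020 ... Expiry date: 12/05/2025"
approved, expires = extract_dates(sample)
```

A production system would back such rules with the trained model described in the abstract, since certificates arrive in many languages and layouts that fixed patterns cannot cover.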
Procedia PDF Downloads 163
24864 Additive Weibull Model Using Warranty Claim and Finite Element Analysis Fatigue Analysis
Authors: Kanchan Mondal, Dasharath Koulage, Dattatray Manerikar, Asmita Ghate
Abstract:
This paper presents an additive reliability model using warranty data and Finite Element Analysis (FEA) data. Warranty data for any product gives insight into its underlying issues and is often used by reliability engineers to build prediction models that forecast the failure rate of parts. But there is one major limitation in using warranty data for prediction: warranty periods constitute only a small fraction of the total lifetime of a product, and most of the time they cover only the infant-mortality and useful-life zones of a bathtub curve. Predicting with warranty data alone in these cases does not generally provide results with the desired accuracy. The failure rate of a mechanical part is driven by random issues initially and by wear-out or usage-related issues at later stages of the lifetime. For better predictability of the failure rate, one needs to explore the failure-rate behavior in the wear-out zone of the bathtub curve. Due to cost and time constraints, it is not always possible to test samples to failure, but FEA fatigue analysis can provide the failure-rate behavior of a part well beyond the warranty period, more quickly and at lower cost. In this work, the authors propose an Additive Weibull Model, which makes use of both warranty and FEA fatigue analysis data for predicting failure rates. It involves modeling two data sets for a part, one with existing warranty claims and the other with fatigue life data. Hazard-rate-based Weibull estimation is used for modeling the warranty data, whereas S-N-curve-based Weibull parameter estimation is used for the FEA data. The two separate sets of Weibull parameters are estimated and combined to form the proposed Additive Weibull Model for prediction.
Keywords: bathtub curve, fatigue, FEA, reliability, warranty, Weibull
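The additive combination of two Weibull models described above can be sketched as follows; the parameter names and values are illustrative, not estimates from the paper:

```python
import math

def additive_weibull_reliability(t, beta1, eta1, beta2, eta2):
    """Reliability under an additive (competing-risk) Weibull model:
    R(t) = exp(-(t/eta1)^beta1 - (t/eta2)^beta2)."""
    return math.exp(-(t / eta1) ** beta1 - (t / eta2) ** beta2)

def additive_weibull_hazard(t, beta1, eta1, beta2, eta2):
    """The total hazard is simply the sum of the two Weibull hazards."""
    h1 = (beta1 / eta1) * (t / eta1) ** (beta1 - 1)
    h2 = (beta2 / eta2) * (t / eta2) ** (beta2 - 1)
    return h1 + h2

# Illustrative parameters (not from the paper): an early-life Weibull
# fitted to warranty claims (beta < 1) plus a wear-out Weibull fitted
# to FEA fatigue life data (beta > 1). Time in years.
R = additive_weibull_reliability(5.0, 0.8, 50.0, 3.5, 12.0)
```

Because the hazards add, the reliability factorises into the product of the two Weibull survival functions, which is what the closed form above exploits.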
Procedia PDF Downloads 73
24863 Data-Focused Digital Transformation for Smart Net-Zero Cities: A Systems Thinking Approach
Authors: Farzaneh Mohammadi Jouzdani, Vahid Javidroozi, Monica Mateo Garcia, Hanifa Shah
Abstract:
The emergence of smart net-zero cities in recent years has attracted significant attention and interest from communities and scholars worldwide as a potential solution to the critical requirement for urban sustainability. This research-in-progress paper investigates the development of smart net-zero cities in order to propose a digital transformation roadmap with a primary focus on data. Employing systems thinking as an underpinning theory, the study advocates a holistic strategy for understanding the complex interdependencies and interrelationships that characterise urban systems. The proposed methodology involves an in-depth investigation of current data-driven approaches in the smart net-zero city, followed by predictive analysis to evaluate the holistic impact of these approaches on moving toward a smart net-zero city. The study is expected to yield a systemic intervention and a data-focused, systemic digital transformation roadmap for smart net-zero cities, contributing to a more holistic understanding of urban sustainability.
Keywords: smart city, net-zero city, digital transformation, systems thinking, data integration, data-driven approach
Procedia PDF Downloads 23
24862 Mathematical Modeling and Analysis of Forced Vibrations in Micro-Scale Microstretch Thermoelastic Simply Supported Beam
Authors: Geeta Partap, Nitika Chugh
Abstract:
The present paper deals with the flexural vibrations of homogeneous, isotropic, generalized micropolar microstretch thermoelastic thin Euler-Bernoulli beam resonators subjected to an exponentially time-varying load. Both axial ends of the beam are assumed to be simply supported. The governing equations have been solved analytically by applying the Laplace transform technique twice, with respect to the time and space variables respectively. The inversion of the Laplace transform in the time domain has been performed using the calculus of residues to obtain the deflection. The analytical results have been evaluated numerically with the help of MATLAB for a magnesium-like material. Graphical representations and interpretations are discussed for the deflection of the beam under the simply supported boundary condition and for distinct values of time and space. The obtained results are easy to apply in the engineering analysis and design of resonators (sensors), modulators, and actuators.
Keywords: microstretch, deflection, exponential load, Laplace transforms, residue theorem, simply supported
Procedia PDF Downloads 311
24861 Analysis of an Alternative Data Base for the Estimation of Solar Radiation
Authors: Graciela Soares Marcelli, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Claudineia Brazil, Rafael Haag
Abstract:
The sun is a source of renewable energy, and its use as both a source of heat and light is one of the most promising energy alternatives for the future. To size thermal or photovoltaic systems, a solar irradiation database is necessary. Brazil still has a limited number of meteorological stations providing radiation measurements, which makes reanalysis systems a significant alternative to observed data. ERA-Interim is a global atmospheric reanalysis produced by the European Centre for Medium-Range Weather Forecasts (ECMWF). The data assimilation system used for the production of ERA-Interim is based on a 2006 version of the IFS (Cy31r2) and includes a four-dimensional variational analysis (4D-Var) with a 12-hour analysis window. The spatial resolution of the dataset is approximately 80 km, with 60 vertical levels from the surface up to 0.1 hPa. This work makes a comparative analysis between the ERA-Interim data and the data observed in the Solarimetric Atlas of the State of Rio Grande do Sul, to verify its applicability in the absence of an observed data network. The results are analysed for a study region to assess the use of reanalysis data for estimating the solar energy potential of that region.
Keywords: energy potential, reanalyses, renewable energy, solar radiation
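A minimal sketch of the kind of station-versus-reanalysis comparison described above, assuming simple mean bias and RMSE as the agreement metrics (the radiation values below are made up, not Atlas or ERA-Interim data):

```python
import math

def bias_and_rmse(observed, modeled):
    """Mean bias (modeled minus observed) and root-mean-square error
    between station observations and reanalysis estimates."""
    n = len(observed)
    bias = sum(m - o for o, m in zip(observed, modeled)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for o, m in zip(observed, modeled)) / n)
    return bias, rmse

# Monthly mean global radiation in MJ/m^2/day (illustrative numbers)
obs = [18.2, 16.5, 14.9, 12.1, 10.0, 9.1]
era = [17.8, 16.9, 15.2, 12.5, 10.4, 9.0]
bias, rmse = bias_and_rmse(obs, era)
```

A positive bias would indicate the reanalysis overestimates the observed radiation on average; the RMSE summarises the typical month-to-month disagreement.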
Procedia PDF Downloads 164
24860 Big Data Analytics and Public Policy: A Study in Rural India
Authors: Vasantha Gouri Prathapagiri
Abstract:
Innovations in the ICT sector facilitate a better quality of life for citizens across the globe. Countries that facilitate the usage of new ICT techniques such as big data analytics find it easier to fulfil the needs of their citizens. Big data is characterised by its volume, variety, and velocity; analytics involves processing it in a cost-effective way in order to draw conclusions for useful application. Big data analytics also draws on machine learning and artificial intelligence, leading to accuracy in data presentation that is useful for public policy making. Hence, using data analytics in public policy making is a sound way to march towards the all-round development of any country. Data-driven insights can help a government take important strategic decisions with regard to the socio-economic development of its country. Developed nations like the UK and USA are already far ahead on the path of digitization with the support of big data analytics. India is a huge country and is currently on the path of massive digitization, being realised through the Digital India Mission. Internet connections per household are rising every year. This translates into a massive data set that has the potential to turn the public services delivery system into an effective service mechanism for Indian citizens. In fact, when compared to developed nations, this capacity is underutilized in India, particularly in the administrative system in rural areas. The present paper focuses on the need for big data analytics adoption in Indian rural administration and its contribution towards the development of the country at a faster pace. The results of the research point to the need for increased awareness and serious capacity building among government personnel working in rural development with regard to big data analytics and its utility for the development of the country.
Multiple public policies are framed and implemented for rural development, yet the results are not as effective as they should be. Big data has a major role to play in this context, as it can assist in improving both policy making and implementation aimed at the all-round development of the country.
Keywords: Digital India Mission, public service delivery system, public policy, Indian administration
Procedia PDF Downloads 159
24859 4G LTE Dynamic Pricing: The Drivers, Benefits, and Challenges
Authors: Ahmed Rashad Harb Riad Ismail
Abstract:
The purpose of this research is to study the potential of dynamic pricing if deployed by mobile operators and to analyse its effects from both the operator and consumer sides. The study concludes with recommended conditions for successful dynamic pricing deployment, recommended factors identifying the types of market where dynamic pricing can be effective, and a proposed dynamic pricing stakeholders' framework. Currently, the mobile telecommunications industry is witnessing a dramatic growth rate in data consumption, fostered mainly by higher-speed data technologies such as 4G LTE and by smart-device penetration rates. However, operators' revenue from data services lags behind and is decoupled from this growth in data consumption. Pricing strategy is a key factor affecting this ecosystem. Since the introduction of 4G LTE technology will multiply the pace of data growth, if pricing strategies remain constant the gap between revenue and usage will grow wider, risking the sustainability of the ecosystem. Therefore, this research study focuses on dynamic pricing for 4G LTE data services, examining the drivers, benefits, and challenges of 4G LTE dynamic pricing and the feasibility of its deployment in practice from different perspectives, including those of operators, regulators, consumers, and telecommunications equipment manufacturers.
Keywords: LTE, dynamic pricing, EPC, research
Procedia PDF Downloads 333
24858 Prediction of Wind Speed by Artificial Neural Networks for Energy Application
Authors: S. Adjiri-Bailiche, S. M. Boudia, H. Daaou, S. Hadouche, A. Benzaoui
Abstract:
In this work, the variation of wind speed with altitude is modeled using artificial neural networks. Measured wind speed and direction, temperature, and humidity at 10 m are used as input data, with the wind speed at 50 m above sea level as the target. The predicted wind speeds are then compared with values extrapolated to 50 m above sea level. The results show that prediction by the artificial neural network method is very accurate.
Keywords: MATLAB, neural network, power law, vertical extrapolation, wind energy, wind speed
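The power-law vertical extrapolation used as the benchmark above can be sketched as follows; the shear exponent of 1/7 is a common textbook assumption for open terrain, not a value from the study:

```python
def power_law_extrapolate(v_ref, z_ref, z, alpha=1.0 / 7.0):
    """Power-law vertical extrapolation of wind speed:
    v(z) = v_ref * (z / z_ref)^alpha.
    alpha is the wind shear exponent; 1/7 is a common default
    for open terrain (an assumption here)."""
    return v_ref * (z / z_ref) ** alpha

# Extrapolate a 5 m/s wind measured at 10 m up to 50 m above sea level
v50 = power_law_extrapolate(5.0, 10.0, 50.0)
```

The neural network in the study learns this height dependence directly from data, including the influence of direction, temperature, and humidity that a fixed exponent cannot capture.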
Procedia PDF Downloads 693
24857 A Case Study at PT Bank XYZ on The Role of Compensation, Career Development, and Employee Engagement towards Employee Performance
Authors: Ahmad Badawi Saluy, Novawiguna Kemalasari
Abstract:
This study aims to examine, analyze, and explain the impacts of compensation, career development, and employee engagement on employee performance, both partially and simultaneously (a case study at PT Bank XYZ). The research design used is quantitative descriptive causal research involving 30 respondents. Data come from primary and secondary sources: primary data obtained through questionnaire distribution, and secondary data obtained from journals and books. Data analysis used model testing in the SmartPLS 3 application, consisting of outer model and inner model tests. The results showed that compensation, career development, and employee engagement each have a positive partial impact on employee performance, and together have a positive and significant simultaneous impact. Among the independent variables, employee engagement has the greatest impact.
Keywords: compensation, career development, employee engagement, employee performance
Procedia PDF Downloads 152
24856 Spectral Anomaly Detection and Clustering in Radiological Search
Authors: Thomas L. McCullough, John D. Hague, Marylesa M. Howard, Matthew K. Kiser, Michael A. Mazur, Lance K. McLean, Johanna L. Turk
Abstract:
Radiological search and mapping depend on the successful recognition of anomalies in large data sets which contain varied and dynamic backgrounds. We present a new algorithmic approach for real-time anomaly detection which is resistant to common detector imperfections, avoids the limitations of a source template library, and provides immediate, easily interpretable user feedback. This algorithm is based on a continuous wavelet transform for variance reduction and evaluates the deviation between a foreground measurement and a local background expectation using methods from linear algebra. We also present a technique for recognizing and visualizing spectrally similar clusters of data. This technique uses Laplacian Eigenmap manifold learning to perform dimensional reduction, which preserves the geometric "closeness" of the data while maintaining sensitivity to outlying data. We illustrate the utility of both techniques on real-world data sets.
Keywords: radiological search, radiological mapping, radioactivity, radiation protection
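The foreground-versus-local-background evaluation described above can be illustrated, in a much simplified form, as a per-channel deviation score against recent background measurements. The spectra below are toy data, and the wavelet variance-reduction step of the actual algorithm is omitted:

```python
import statistics

def background_deviation(foreground, background_runs):
    """Per-channel z-score of a foreground spectrum against the mean and
    standard deviation of recent background measurements (illustrative)."""
    scores = []
    for ch in range(len(foreground)):
        samples = [run[ch] for run in background_runs]
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples) or 1.0  # guard a flat channel
        scores.append((foreground[ch] - mu) / sigma)
    return scores

# Three recent background spectra and one foreground measurement,
# three energy channels each (toy counts).
bg = [[100, 52, 20], [104, 50, 22], [96, 48, 18]]
fg = [101, 50, 60]          # channel 2 carries an anomalous peak
z = background_deviation(fg, bg)
```

A channel whose score greatly exceeds the background scatter (here channel 2) would flag a candidate anomaly for the operator.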
Procedia PDF Downloads 696
24855 Knowledge Engineering Based Smart Healthcare Solution
Authors: Rhaed Khiati, Muhammad Hanif
Abstract:
In the past decade, smart healthcare systems have been on an upward trend, especially with the evolution of hospitals and their increasing reliance on bioinformatics and software specializing in healthcare. Doctors have become more reliant on technology than ever, something that in the past would have been looked down upon, as technology has become imperative in reducing overall costs and improving the quality of patient care. With patient-doctor interactions becoming more necessary and more complicated than ever, systems must be developed while taking into account costs, patient comfort, and patient data, among other things. In this work, we propose a smart hospital bed, which mixes the complexity and big-data usage of traditional healthcare systems with the comfort found in soft beds, while taking concerns like data confidentiality, security, and the maintenance of SLAs into account. This work potentially provides users, namely patients and doctors, with seamless interaction with their respective nurses, as well as faster access to up-to-date personal data, including prescriptions and the severity of the condition, in contrast to previous research in the area, which lacks consideration of such provisions.
Keywords: big data, smart healthcare, distributed systems, bioinformatics
Procedia PDF Downloads 198
24854 Transformation of the Business Model in an Occupational Health Care Company Embedded in an Emerging Personal Data Ecosystem: A Case Study in Finland
Authors: Tero Huhtala, Minna Pikkarainen, Saila Saraniemi
Abstract:
Information technology has long been used as an enabler of exchange for goods and services. Services are evolving from generic to personalized, and the reverse use of customer data has been discussed in both academia and industry for the past few years. This article presents the results of an empirical case study in the area of preventive healthcare services. The primary data were gathered in workshops, in which future personal data-based services were conceptualized by analyzing future scenarios from a business perspective. The aim of this study is to understand business model transformation in emerging personal data ecosystems. The work was done as a case study in the context of occupational healthcare. The results have implications for theory and practice, indicating that adopting personal data management principles requires transformation of the business model, which, if successfully managed, may provide access to more resources, the potential to offer better value, and additional customer channels. These advantages correlate with the broadening of the business ecosystem. Expanding the scope of this study to include more actors would improve the validity of the research. The results draw from existing literature and are based on findings from a case study and the economic properties of the healthcare industry in Finland.
Keywords: ecosystem, business model, personal data, preventive healthcare
Procedia PDF Downloads 249
24853 Superordinated Control for Increasing Feed-in Capacity and Improving Power Quality in Low Voltage Distribution Grids
Authors: Markus Meyer, Bastian Maucher, Rolf Witzmann
Abstract:
The ever increasing amount of distributed generation in low voltage distribution grids (mainly PV and micro-CHP) can lead to reverse load flows from the low to the medium/high voltage level at times of high feed-in. Reverse load flow leads to rising voltages that may even exceed the limits specified in the grid codes. Furthermore, the share of electrical loads connected to low voltage distribution grids via switched power supplies continuously increases. In combination with inverter-based feed-in, this results in high harmonic levels that reduce overall power quality. In particular, high levels of third-order harmonic currents can lead to neutral conductor overload, which is even more critical if lines with reduced neutral conductor cross-sections are used. This paper illustrates a possible concept for smart grids that increases the feed-in capacity, improves power quality, and ensures safe operation of low voltage distribution grids at all times. The key feature of the concept is a hierarchically structured control strategy that runs on a superordinated controller, which is connected to several distributed grid analyzers and inverters via broadband powerline (BPL). The strategy is devised to ensure both quick response times and the technically and economically reasonable use of the available inverters in the grid (PV inverters, batteries, stepless line voltage regulators). These inverters are provided with standard features for voltage control, e.g., voltage-dependent reactive power control. In addition, they can receive reactive power set points transmitted by the superordinated controller. To further improve power quality, the inverters are capable of active harmonic filtering as well as voltage balancing, whereas the latter is primarily done by the stepless line voltage regulators.
By additionally connecting the superordinated controller to the control center of the grid operator, supervisory control and data acquisition capabilities for the low voltage distribution grid are enabled, which allows easy monitoring and manual input. Such a low voltage distribution grid can also be used as a virtual power plant.
Keywords: distributed generation, distribution grid, power quality, smart grid, virtual power plant, voltage control
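The voltage-dependent reactive power control mentioned above, a standard inverter feature, can be sketched as a Q(U) droop curve; the dead band and slope break points below are illustrative values, not parameters from the paper:

```python
def q_of_u(u_pu, q_max, dead_band=0.02, slope_end=0.06):
    """Voltage-dependent reactive power set point (Q(U) droop curve).
    u_pu is the measured voltage in per unit; the returned Q lies in
    [-q_max, +q_max]. Sign convention (an assumption here): positive Q
    raises the local voltage, negative Q lowers it."""
    dev = u_pu - 1.0
    if abs(dev) <= dead_band:
        return 0.0                     # inside the dead band: no reaction
    span = slope_end - dead_band
    if dev > 0:                        # overvoltage: absorb reactive power
        return -q_max * min((dev - dead_band) / span, 1.0)
    return q_max * min((-dev - dead_band) / span, 1.0)

q = q_of_u(1.05, 10.0)                 # 5 % overvoltage, Q_max = 10 kvar
```

The superordinated controller described in the abstract can override such local curves by transmitting explicit set points when grid-wide coordination is preferable.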
Procedia PDF Downloads 267
24852 Design of an Instrumentation Setup and Data Acquisition System for a GAS Turbine Engine Using Suitable DAQ Software
Authors: Syed Nauman Bin Asghar Bukhari, Mohtashim Mansoor, Mohammad Nouman
Abstract:
An engine test-bed system is a fundamental tool to measure the dynamic parameters, economic performance, and reliability of an aircraft engine, and its automation and accuracy directly influence the precision of the acquired and analysed data. In this paper, we present the design of a digital Data Acquisition (DAQ) system for a vintage aircraft engine test bed that lacks the capability of displaying all the analysed parameters at one convenient location (one panel, one screen). Recording such measurements in the vintage test bed is not only time consuming but also prone to human error. Digitizing such a measurement system requires a DAQ system capable of recording these parameters and displaying them on a one-screen, one-panel monitor. The challenge in designing an upgrade to the vintage system arises from the need to build and integrate a digital measurement system from scratch with a minimal budget and minimal modifications to the existing vintage system. The proposed design not only displays all the key performance and maintenance parameters of the gas turbine engine for the operator as well as the quality inspector on separate screens but also records the data for further processing and archiving.
Keywords: gas turbine engine, engine test cell, data acquisition, instrumentation
Procedia PDF Downloads 123
24851 Microstructure and Mechanical Properties of Mg-Zn Alloys
Authors: Young Sik Kim, Tae Kwon Ha
Abstract:
The effect of Zn addition on the microstructure and mechanical properties of Mg-Zn alloys with Zn contents from 6 to 10 weight percent was investigated in this study. Through calculation of the phase equilibria of Mg-Zn alloys, carried out using FactSage® and the FTLite database, the solution treatment temperature was chosen in the range 300 to 400 °C, where a supersaturated solid solution can be obtained. Solution treatment of the Mg-Zn alloys was successfully conducted at 380 °C, producing a supersaturated microstructure with all of the beta phase dissolved into the matrix. After solution treatment, hot rolling was successfully conducted with a reduction of 60%. Compression and tension tests were carried out at room temperature on samples as-cast, solution treated, hot-rolled, and recrystallized after rolling. After solution treatment, each alloy was annealed at temperatures of 180 and 200 °C for time intervals from 1 min to 48 hrs, and the hardness in each condition was measured by the micro-Vickers method. The peak aging condition was deduced to be 200 °C for 10 hrs. With the addition of Zn up to 10 weight percent, hardness and strength were enhanced.
Keywords: Mg-Zn alloy, heat treatment, microstructure, mechanical properties, hardness
Procedia PDF Downloads 279
24850 A PHREEQC Reactive Transport Simulation for Simply Determining Scaling during Desalination
Authors: Andrew Freiburger, Sergi Molins
Abstract:
Freshwater is a vital resource; yet the supply of clean freshwater is diminishing as a consequence of melting snow and ice from global warming, pollution from industry, and increasing demand from human population growth. This unsustainable trajectory of diminishing water resources is projected to jeopardize water security for billions of people in the 21st century. Membrane desalination technologies may resolve the growing discrepancy between supply and demand by filtering arbitrary feed water into a fraction of renewable, clean water and a fraction of highly concentrated brine. The leading hindrance of membrane desalination is fouling, whereby the highly concentrated brine solution encourages micro-organismal colonization and/or the precipitation of occlusive minerals (i.e., scale) upon the membrane surface. Thus, an understanding of brine formation is necessary to mitigate membrane fouling and to develop efficacious desalination technologies that can bolster the supply of available freshwater. This study presents a reactive transport simulation of brine formation and scale deposition during reverse osmosis (RO) desalination. The simulation conceptually represents the RO module as a one-dimensional domain, where feed water enters the domain with a prescribed fluid velocity and is iteratively concentrated in the immobile layer of a dual porosity model. The geochemical code PHREEQC numerically evaluated the conceptual model with parameters for the BW30-400 RO module and for real feed water sources, e.g., the Red and Mediterranean seas and produced waters from American oil wells, based upon peer-reviewed data. The presented simulation is computationally simpler, and hence less resource intensive, than existing and more rigorous simulations of desalination phenomena, like TOUGHREACT. The end user may readily prepare input files and execute simulations on a personal computer with open-source software.
The graphical results of fouling potential and brine characteristics may therefore be particularly useful as an initial tool for screening candidate feed water sources and/or informing the selection of an RO module.
Keywords: desalination, PHREEQC, reactive transport, scaling
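The scaling assessment underlying such a simulation can be illustrated with a saturation-index check on the concentrated brine. The mineral choice, feed activities, and the crude concentration-factor shortcut below are illustrative assumptions, far simpler than PHREEQC's full speciation and transport treatment:

```python
import math

def saturation_index(ion_activity_product, k_sp):
    """SI = log10(IAP / Ksp); SI > 0 means the brine is supersaturated
    with respect to the mineral, which may then precipitate as scale."""
    return math.log10(ion_activity_product / k_sp)

def concentrate(activity, recovery):
    """Crude concentration-factor estimate for the rejected brine at a
    given RO recovery fraction (ignores speciation and ionic strength,
    an assumption of this sketch)."""
    return activity / (1.0 - recovery)

# Illustrative gypsum (CaSO4·2H2O) check at 75 % recovery,
# with Ksp ~ 10^-4.58 and made-up feed activities in mol/L.
ca = concentrate(2e-3, 0.75)
so4 = concentrate(3e-3, 0.75)
si = saturation_index(ca * so4, 10 ** -4.58)
```

A positive SI here would flag the feed water as a gypsum-scaling risk at that recovery, which is the kind of screening output the abstract describes.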
Procedia PDF Downloads 136
24849 Gender, Occupational Status, Work-to-Family Conflict, and the Roles of Stressors among Korean Immigrants: Rethinking the Concept of the 'Stress of Higher Status'
Authors: Il-Ho Kim, Samuel Noh, Kwame McKenzie, Cyu-Chul Choi
Abstract:
Introduction: The ‘stress of higher status’ hypothesis suggests that workers in higher-status occupations are more likely to experience work-to-family conflict (WFC) than those in lower-status occupations. Yet the occupational difference in WFC and its mechanisms have not been explicitly explored within Asian cultures. The present study examines (a) the association between occupational status and WFC and (b) the mediating roles of work-related stressors and resources, from a gender perspective, using a sample of Korean immigrants. Methods: Data were derived from a cross-sectional survey of foreign-born Korean immigrants who had been working for at least two years in the Greater Toronto Area or surrounding towns. The sample was stratified for equivalent representation of micro-business owners (N=555) and paid employees in diverse occupational categories (N=733). Results: We found gender differences and similarities in the link between occupational status and WFC and in the mediating roles of work-related variables. Compared to their skilled/unskilled counterparts, male immigrants in professional, service, and micro-business jobs reported higher levels of WFC, whereas female immigrants in higher-status occupations were more likely to experience WFC, with the exception that micro-business owners showed the highest levels of WFC. Regardless of gender, longer weekly work hours, shift work schedules, and high emotional and psychological demands were significantly associated with high levels of WFC. However, skill development was related to WFC only among male immigrants. Regarding the mediating roles of work-related factors, among female immigrants the occupational difference in WFC was fully mediated by weekly work hours, shift work schedule, and emotional and psychological demands, with the exception of the case of micro-business workers.
Among male immigrants, the occupational differences remained virtually unchanged after controlling for these mediators. Conclusions: Our results partially confirm the ‘stress of higher status’ hypothesis among female immigrants. Additionally, work-related stressors appear to be critical mediators of the link between occupation and WFC only for female immigrants.
Keywords: work-to-family conflict, gender, work conditions, job demands, job resources
Procedia PDF Downloads 186
24848 Water End-Use Classification with Contemporaneous Water-Energy Data and Deep Learning Network
Authors: Khoi A. Nguyen, Rodney A. Stewart, Hong Zhang
Abstract:
‘Water-related energy’ is energy use that is directly or indirectly influenced by changes in water use. Informatics applying a range of mathematical, statistical, and rule-based approaches can reveal important information on demand from data provided at second, minute, or hourly intervals. This study combines these two concepts to improve on the current water end-use disaggregation problem by applying a wide range of advanced pattern recognition techniques to analyse concurrent high-resolution water-energy consumption data. The obtained results show that the recognition accuracies of all end uses increased significantly, especially for mechanised categories including the clothes washer, dishwasher, and evaporative air cooler, where over 95% of events were correctly classified.
Keywords: deep learning network, smart metering, water end use, water-energy data
Procedia PDF Downloads 306
24847 Strategy Management of Soybean (Glycine max L.) for Dealing with Extreme Climate through the Use of Cropsyst Model
Authors: Aminah Muchdar, Nuraeni, Eddy
Abstract:
The aims of the research are: (1) to verify the CropSyst crop model against experimental field data for soybean, and (2) to predict the planting time and potential yield of soybean using the CropSyst model. The research is divided into several stages: (1) a calibration stage, conducted in the field from June until September 2015, and (2) a model application stage, in which the data obtained from the field calibration are fed into the CropSyst model. The required inputs are climate data, soil data, and crop genetic data. The agreement between the field results and the CropSyst simulation is indicated by an Efficiency Index (EF) of 0.939, showing that the CropSyst model performs well. The calculated RRMSE of 1.922% shows that the prediction error of the simulation relative to the field results is about 1.92%. It is concluded that the CropSyst-based prediction of soybean planting time is valid for use, and that the appropriate planting time for soybean, mainly on rain-fed land, is at the end of the rainy season; in this study, the first planting time (June 2, 2015) gave the highest production, because some rain still fell at that time. The Tanggamus variety is more resistant to delayed planting, as its percentage yield decrease per decade of delay is lower than the average of all varieties.
Keywords: soybean, CropSyst, calibration, Efficiency Index, RRMSE
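The two agreement statistics reported above can be sketched as follows, assuming the EF is computed in the Nash-Sutcliffe style and the RRMSE is normalised by the observed mean (the yield numbers below are made up, not the study's data):

```python
import math

def efficiency_index(observed, simulated):
    """Nash-Sutcliffe-style Efficiency Index: 1 - SSE / SST, with SST
    taken about the observed mean. EF close to 1 means a good fit."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

def rrmse_percent(observed, simulated):
    """Relative RMSE expressed as a percentage of the observed mean."""
    n = len(observed)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)
    return 100.0 * rmse / (sum(observed) / n)

# Illustrative soybean yields (t/ha) for four planting dates
obs = [2.10, 1.95, 1.70, 1.40]
sim = [2.05, 1.98, 1.74, 1.38]
ef = efficiency_index(obs, sim)
rrmse = rrmse_percent(obs, sim)
```

With these toy numbers the fit is good (EF near 1, RRMSE of a few percent), the same pattern the study reports for its calibrated model.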
Procedia PDF Downloads 180
24846 Effect Of Selected Food And Nutrition Environments On Prevalence Of Cardio-Metabolic Risk Factors With Emphasis On Worksite Environment In Urban Delhi
Authors: Deepa Shokeen, Bani Tamber Aeri
Abstract:
Food choice is a complex process influenced by the interplay of multiple factors, including physical, socio-cultural, and economic factors comprising macro- or micro-level food environments. While a clear understanding of the relationship between what we eat and the environmental context in which these food choices are made is still needed, it has now been shown that food environments do play a significant role in the obesity epidemic and in increasing cardio-metabolic risk factors. Evidence from other countries indicates that the food environment may strongly influence the prevalence of obesity and cardio-metabolic risk factors among young adults. In the Indian context, data do indicate associations between sedentary lifestyles, stress, and faulty diets, but very little evidence supports the role of the food environment in influencing cardio-metabolic health among employed adults. Thus, this research is required to establish how different environments affect different individuals, as individuals interact with the environment on a number of levels. Methodology: The objective of the present study is to assess the effect of selected food and nutrition environments, with emphasis on the worksite environment, and to analyse their impact on the food choices and dietary behaviour of the employees (25-45 years of age) of the organizations under study. In the proposed study, an attempt will be made to randomly select various worksite environments from Delhi and the NCR. The study will be conducted in two phases. In phase I, information will be obtained on the socio-demographic profile of the employees and the various factors influencing their food choices, including the most commonly consumed foods and most frequently visited eating outlets in and around the workplace. Data will also be gathered on anthropometry (height, weight, waist circumference), biochemical parameters (lipid profile and fasting glucose), blood pressure, and dietary intake.
Based on the findings of Phase I, a list of the most frequently visited eating outlets in and around the workplace will be prepared in Phase II. These outlets will then be subjected to the Nutrition Environment Measures Survey (NEMS). On the basis of the information gathered from Phase I and Phase II, the influence of selected food and nutrition environments on food choice, dietary behaviour and the prevalence of cardio-metabolic risk factors among employed adults will be assessed. Expected outcomes: The proposed study will try to ascertain the impact of selected food and nutrition environments on the food choices and dietary intake of working adults, as it is important to learn how these food environments influence the eating perceptions and health behaviour of adults. In addition, anthropometric, blood pressure and biochemical assessments of the subjects will be done to assess the prevalence of cardio-metabolic risk factors. If the findings indicate that the work environment, where most of these young adults spend their productive hours of the day, influences their health, then perhaps steps may be needed to make these environments more conducive to health.
Keywords: food and nutrition environment, cardio-metabolic risk factors, India, worksite environment
Procedia PDF Downloads 281
24845 An Assessment of Water and Sediment Quality of the Danube River: Polycyclic Aromatic Hydrocarbons and Trace Metals
Authors: A. Szabó Nagy, J. Szabó, I. Vass
Abstract:
Water and sediment samples from the Danube River and the Moson Danube Arm (Hungary) were collected and analyzed for contamination by 18 polycyclic aromatic hydrocarbons (PAHs) and eight trace metal(loid)s (As, Cu, Pb, Ni, Cr, Cd, Hg and Zn) in the period of 2014-2015. Moreover, the trace metal(loid) concentrations were measured in the Rába and Marcal rivers (parts of the tributary system feeding the Danube). Total PAH contents in water were found to vary from 0.016 to 0.133 µg/L, and concentrations in sediments varied between 0.118 and 0.283 mg/kg. Source analysis of PAHs using diagnostic concentration ratios indicated that the PAHs found in sediments were of pyrolytic origin. The dissolved trace metal and arsenic concentrations were relatively low in the surface waters. However, higher concentrations were detected in the water samples of the Rába (Zn, Cu, Ni, Pb) and Marcal (As, Cu, Ni, Pb) compared to the Danube and Moson Danube. The concentrations of trace metals in sediments were higher than those found in the water samples.
Keywords: surface water, sediment, PAH, trace metal
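The diagnostic-ratio source analysis mentioned in the abstract can be illustrated with the widely used fluoranthene/(fluoranthene + pyrene) ratio. The abstract does not state which ratios and cutoffs the authors used, so this sketch relies on the conventional literature thresholds (below 0.4 petrogenic, 0.4-0.5 petroleum combustion, above 0.5 pyrolytic):

```python
# Illustrative PAH source classification by the Fl/(Fl+Pyr) diagnostic ratio.
# Thresholds are the commonly cited literature values, not taken from this paper.

def pah_source(fluoranthene, pyrene):
    """Classify the likely PAH source from concentrations of two marker PAHs."""
    ratio = fluoranthene / (fluoranthene + pyrene)
    if ratio > 0.5:
        return "pyrolytic (combustion)"
    elif ratio > 0.4:
        return "petroleum combustion"
    return "petrogenic"
```

A sediment sample dominated by combustion-derived PAHs, as reported here, would give a ratio above 0.5.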
Procedia PDF Downloads 315
24844 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction
Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan
Abstract:
Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing rapidly, and medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models for identifying those at risk is very helpful in reducing the effects of the disease. The present study aimed to collect data related to risk factors of myocardial infarction from patients' medical records and to develop predictive models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. Data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011. Data were collected using a four-sectioned data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, as well as positive and negative predictive values were determined, and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential for facilitating the management of a patient with a specific disease. Health interventions or lifestyle changes can therefore be planned based on these models to improve the health of individuals at risk.
Keywords: decision trees, neural network, myocardial infarction, data mining
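The evaluation metrics named above (accuracy, precision, sensitivity, specificity, and the predictive values) follow the standard confusion-matrix definitions; a small helper, not taken from the paper, makes them concrete:

```python
# Standard confusion-matrix metrics used to compare the predictive models.
# tp/fp/tn/fn: true positives, false positives, true negatives, false negatives.

def metrics(tp, fp, tn, fn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),  # recall / true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

A model with the highest sensitivity, as the neural network here, is the one that misses the fewest true myocardial infarction cases.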
Procedia PDF Downloads 429
24843 Sparse Coding Based Classification of Electrocardiography Signals Using Data-Driven Complete Dictionary Learning
Authors: Fuad Noman, Sh-Hussain Salleh, Chee-Ming Ting, Hadri Hussain, Syed Rasul
Abstract:
In this paper, a data-driven dictionary approach is proposed for the automatic detection and classification of cardiovascular abnormalities. The electrocardiography (ECG) signal is represented by trained complete dictionaries that contain prototypes or atoms, avoiding the limitations of pre-defined dictionaries. The data-driven trained dictionaries simply take the ECG signal as input, rather than extracting features, to study the set of parameters that yield the most descriptive dictionary. The approach inherently learns the complicated morphological changes in the ECG waveform, which are then used to improve the classification. The classification performance was evaluated with ECG data under two different preprocessing environments. In the first category, the QT database is baseline-drift corrected and a notch filter removes the 60 Hz power-line noise. In the second category, the data are further filtered using a fast moving-average smoother. The experimental results on the QT database confirm that the proposed algorithm achieves a classification accuracy of 92%.
Keywords: electrocardiogram, dictionary learning, sparse coding, classification
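The residual-based classification idea behind per-class trained dictionaries can be sketched as follows. This toy version is an assumption-laden stand-in for the paper's complete-dictionary learning: it assumes each class dictionary has orthonormal atoms, so the sparse-coding step reduces to projection, and a signal is assigned to the class whose dictionary reconstructs it best:

```python
# Hedged sketch: classify a signal by reconstruction residual against
# per-class dictionaries of orthonormal atoms (illustrative, not the
# paper's trained ECG dictionaries).

def project_residual(signal, atoms):
    """Residual norm after projecting the signal onto the span of atoms."""
    recon = [0.0] * len(signal)
    for a in atoms:
        coef = sum(s * ai for s, ai in zip(signal, a))  # coding coefficient
        recon = [r + coef * ai for r, ai in zip(recon, a)]
    return sum((s - r) ** 2 for s, r in zip(signal, recon)) ** 0.5

def classify(signal, dictionaries):
    """dictionaries: {label: [atom, ...]}; smallest residual wins."""
    return min(dictionaries, key=lambda lbl: project_residual(signal, dictionaries[lbl]))
```

In the paper the atoms are learned from raw ECG beats, so the residual captures morphological (mis)match rather than simple subspace membership.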
Procedia PDF Downloads 386
24842 Microplastics in Two Bivalves of the Bay of Bengal Coast, Bangladesh
Authors: Showmitra Chowdhury, M. Shahadat Hossain, S. M. Sharifuzzaman, Sayedur Rahman Chowdhury, Subrata Sarker, M. Shah Nawaz Chowdhury
Abstract:
Microplastics were identified in the mussel (Perna viridis) and the oyster (Crassostrea madrasensis) from the southeast coast of Bangladesh. Samples were collected from four sites along the coast based on their availability, and gastrointestinal tracts were assessed following isolation, floatation, filtration, microscopic observation, and polymer identification by micro-Fourier Transform Infrared Spectroscopy (μ-FTIR) for microplastics determination. A total of 1527 microplastics were identified from 130 samples. The amount of microplastics varied from 0.66 to 3.10 microplastics/g and from 3.20 to 27.60 items/individual. Crassostrea madrasensis contained on average 1.64 items/g and exhibited the highest level of microplastics by weight. Fiber was the most dominant type, accounting for 72% of total microplastics. Polyethylene, polypropylene, polystyrene, polyester, and nylon were the major polymer types. In both species, transparent/black colors and the filamentous shape were dominant. The most common size range was 0.005 to 0.25 mm, accounting for 39% to 67% of items. The study revealed that microplastics pollution is widespread and relatively high in the bivalves of Bangladesh.
Keywords: microplastics, bivalves, mussel, oyster, Bay of Bengal, Bangladesh
Procedia PDF Downloads 111
24841 Synthesis and Characterization of Novel Hollow Silica Particle through DODAB Vesicle Templating
Authors: Eun Ju Park, Wendy Rusli, He Tao, Alexander M. Van Herk, Sanggu Kim
Abstract:
Hollow micro-/nano-structured materials have proven promising in a wide range of applications, such as catalysis, drug delivery and controlled release, biotechnology, and personal and consumer care. Hollow sphere structures can be obtained through various templating approaches: colloid templates, emulsion templates, multi-surfactant templates, and single crystal templates. Vesicles are generally the self-directed assemblies of amphiphilic molecules, including cationic and anionic surfactants, in aqueous solutions. The directed silica capsule formation was performed at the surface of dioctadecyldimethylammonium bromide (DODAB) bilayer vesicles as a soft template. The size of the DODAB bilayer vesicles could be tuned by extrusion of a preheated dispersion of DODAB. The synthesized hollow silica particles were characterized by conventional TEM, cryo-TEM and SEM to determine the morphology and structure of the particles, and by the dynamic light scattering (DLS) method to measure the particle size and particle size distribution.
Keywords: characterization, DODAB, hollow silica particle, synthesis, vesicle
Procedia PDF Downloads 307
24840 A Deletion-Cost Based Fast Compression Algorithm for Linear Vector Data
Authors: Qiuxiao Chen, Yan Hou, Ning Wu
Abstract:
As the classic Douglas-Peucker Algorithm (DPA) has deficiencies such as a high risk of deleting key nodes by mistake, high complexity, and relatively slow execution speed, a new Deletion-Cost Based Compression Algorithm (DCA) for linear vector data is proposed. For each curve, the basic element of linear vector data, the deletion costs of all its middle nodes are calculated, and the minimum deletion cost is compared with a pre-defined threshold. If the former is greater than or equal to the latter, all remaining nodes are reserved and the curve's compression process is finished. Otherwise, the node with the minimal deletion cost is deleted, the deletion costs of its two neighbors are updated, and the same loop is repeated on the compressed curve until termination. In several comparative experiments using different types of linear vector data, DPA and DCA were compared in terms of compression quality and computing efficiency. The experimental results showed that DCA outperforms DPA in both compression accuracy and execution efficiency.
Keywords: Douglas-Peucker algorithm, linear vector data, compression, deletion cost
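The loop described above can be sketched directly. The abstract does not fix the deletion-cost function, so this illustration assumes the area of the triangle a node forms with its two neighbors; for brevity it also recomputes all costs on each pass, whereas the paper updates only the two neighbors of a deleted node for efficiency:

```python
# Minimal sketch of the Deletion-Cost Based Compression Algorithm (DCA).
# Cost function (triangle area) is an assumption for illustration.

def triangle_area(a, b, c):
    """Deletion cost of middle node b: area of triangle (a, b, c)."""
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def dca_compress(curve, threshold):
    """Repeatedly delete the middle node with the minimal deletion cost
    until that minimum reaches the threshold; endpoints are always kept."""
    pts = list(curve)
    while len(pts) > 2:
        costs = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        i_min = min(range(len(costs)), key=costs.__getitem__)
        if costs[i_min] >= threshold:
            break  # all remaining nodes are reserved
        del pts[i_min + 1]  # costs[i] corresponds to node pts[i + 1]
    return pts
```

Deleting the globally cheapest node first is what distinguishes DCA from DPA's top-down splitting and reduces the risk of removing key nodes by mistake.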
Procedia PDF Downloads 251
24839 Multimedia Container for Autonomous Car
Authors: Janusz Bobulski, Mariusz Kubanek
Abstract:
The main goal of the research is to develop a multimedia container structure containing three types of images: RGB, lidar and infrared, properly calibrated to each other. An additional goal is to develop program libraries for creating, saving and restoring this type of file. It will also be necessary to develop a method of synchronizing the data from the lidar, RGB and infrared cameras. Autonomous cars are increasingly breaking into our consciousness, and no one seems to doubt that self-driving cars are the future of motoring. Manufacturers promise that the first of them will reach showrooms within the next few years. Many experts believe that a network of communicating autonomous cars will be able to completely eliminate accidents. To make this possible, however, effective methods of detecting objects around the moving vehicle are needed. In bad weather conditions, this task is difficult on the basis of the RGB (red, green, blue) image alone; in such situations, the system should be supported by information from other sources, such as lidar or infrared cameras. The problem is that individual types of devices return different data formats, and these data must additionally be synchronized and consistently formatted. The goal of the project is therefore to develop a file structure that can contain different types of data; such a file is called a multimedia container. A multimedia container holds many data streams, allowing complete multimedia material to be stored in one file. The data streams in such a container should include streams of images, films, sounds and subtitles, as well as additional information, i.e., metadata.
This type of file could be used in autonomous vehicles, which would certainly facilitate data processing by the intelligent autonomous vehicle management system. As preliminary studies show, combining RGB and infrared images with lidar data allows for easier data analysis. Thanks to this approach, it will be possible to display the distance to an object in a color photo. Such information can be very useful for drivers and for systems in autonomous cars.
Keywords: autonomous car, image processing, lidar, obstacle detection
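The container layout itself is not specified in the abstract; the following sketch shows one plausible structure under assumptions of our own: a JSON header indexing timestamp-synchronized RGB, lidar and infrared payloads, preceded by a 4-byte header length:

```python
# Hedged sketch of a multimedia container for synchronized RGB/lidar/infrared
# frames. The layout (length-prefixed JSON header + concatenated payloads)
# is illustrative, not the authors' actual format.

import io
import json
import struct

def write_container(fp, frames, metadata):
    """frames: list of dicts {"ts": float, "rgb": bytes, "lidar": bytes, "ir": bytes}.
    Layout: 4-byte little-endian header length, JSON header, then payloads."""
    header = {"metadata": metadata, "frames": []}
    payload = io.BytesIO()
    for f in frames:
        entry = {"ts": f["ts"]}  # one timestamp synchronizes all three streams
        for stream in ("rgb", "lidar", "ir"):
            data = f[stream]
            entry[stream] = {"offset": payload.tell(), "size": len(data)}
            payload.write(data)
        header["frames"].append(entry)
    blob = json.dumps(header).encode()
    fp.write(struct.pack("<I", len(blob)))
    fp.write(blob)
    fp.write(payload.getvalue())

def read_container(fp):
    """Restore the header and replace each stream index with its raw bytes."""
    hlen = struct.unpack("<I", fp.read(4))[0]
    header = json.loads(fp.read(hlen))
    body = fp.read()
    for entry in header["frames"]:
        for stream in ("rgb", "lidar", "ir"):
            s = entry[stream]
            entry[stream] = body[s["offset"]: s["offset"] + s["size"]]
    return header
```

Because every frame carries one timestamp for all three payloads, a consumer such as an obstacle-detection module can fuse the calibrated images without any further alignment step.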
Procedia PDF Downloads 226
24838 Mobile Crowdsensing Scheme by Predicting Vehicle Mobility Using Deep Learning Algorithm
Authors: Monojit Manna, Arpan Adhikary
Abstract:
Mobile crowdsensing is an emerging paradigm in which users are selected to perform sensing tasks. In today's urban areas, mobile vehicles are well suited to data sensing and collection owing to their universality and mobility. In this work, we focus on optimally selecting the mobile nodes that can collect the maximum amount of data from urban areas and fulfill the required data for a future period within a couple of minutes. We formulate vehicle recruitment as a data-maximization problem under a budget constraint. The implementation generalizes a realistic online platform in which vehicles move continuously in real time, and the data center has the authority to select a set of vehicles immediately. A deep learning-based scheme with the help of mobile vehicles (DLMV) is proposed to collect sensing data from the urban environment. For the future period, an offline deep learning-based algorithm is proposed to predict vehicle mobility. Since selecting the best subset of vehicles under a limited budget is an NP-complete problem, we further propose a greedy online selection step. Extensive experimental evaluations are conducted on a real mobility dataset from Rome. The results not only confirm the efficiency of the proposed solution but also prove the validity of DLMV and show an improvement in the quantity of sensing data collected compared with other algorithms.
Keywords: mobile crowdsensing, deep learning, vehicle recruitment, sensing coverage, data collection
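The greedy budgeted selection step can be sketched as below. The vehicle costs and predicted data volumes are illustrative placeholders (in DLMV they would come from the deep learning mobility predictor), and coverage overlap between vehicles is ignored for simplicity:

```python
# Hedged sketch of greedy vehicle recruitment under a budget: repeatedly
# pick the affordable vehicle with the best predicted-data-per-cost ratio.
# Inputs are illustrative; the paper's gains come from mobility prediction.

def greedy_select(vehicles, budget):
    """vehicles: list of (vehicle_id, predicted_data, cost).
    Returns (selected ids in pick order, total data collected)."""
    chosen, spent, collected = [], 0.0, 0.0
    remaining = list(vehicles)
    while remaining:
        affordable = [v for v in remaining if spent + v[2] <= budget]
        if not affordable:
            break  # budget exhausted
        best = max(affordable, key=lambda v: v[1] / v[2])  # best ratio
        chosen.append(best[0])
        spent += best[2]
        collected += best[1]
        remaining.remove(best)
    return chosen, collected
```

A full implementation would recompute each vehicle's marginal gain after every pick so that overlapping sensing coverage is not counted twice.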
Procedia PDF Downloads 78
24837 Wear Performance of Stellite 21 Cladded Overlay on AISI 304L
Authors: Sandeep Singh Sandhua, Karanvir Singh Ghuman, Arun Kumar
Abstract:
Stellite 21 is a cobalt-based superalloy used to improve the wear performance of stainless steel engineering components subjected to harsh environmental conditions. This piece of research focuses on the wear analysis of Stellite 21 cladded on an AISI 304L substrate using the SMAW process. Bead-on-plate experiments were carried out by varying the current and electrode manipulation techniques to optimize dilution and microhardness. A current of 80 A with the weaving technique was found to be the optimum set of parameters for overlaying, and these were further used for multipass, multilayer cladding of the AISI 304L substrate. The wear performance was examined on a pin-on-disc wear testing machine under room temperature conditions. The results from this study show that Stellite 21 overlays exhibit a significant improvement in frictional wear resistance after TIG remelting. It is also established that low-dilution procedures are important in controlling the metallurgical composition of these overlays, which in turn enhances their hardness and wear resistance.
Keywords: surfacing, stellite 21, dilution, SMAW, frictional wear, micro-hardness
Procedia PDF Downloads 250
24836 A Biometric Template Security Approach to Fingerprints Based on Polynomial Transformations
Authors: Ramon Santana
Abstract:
The use of biometric identifiers in the field of information security, access control to resources, and authentication in ATMs and banking, among others, raises great concern about the safety of biometric data. Eight vulnerabilities have been detected in the general architecture of a biometric system; six of them allow obtaining the minutiae template in plain text. The main consequence of obtaining minutiae templates is the loss of the biometric identifier for life. To mitigate these vulnerabilities, several models to protect minutiae templates have been proposed; however, vulnerabilities in the cryptographic security of these models still allow biometric data to be obtained in plain text. In order to increase the cryptographic security and ease of reversibility, a minutiae template protection model is proposed. The model aims to provide cryptographic protection and facilitate the reversibility of data using two levels of security. The first level is the data transformation level, which generates data invariant to rotation and translation; this transformation is irreversible. The second level is the evaluation level, where the encryption key is generated and the data are evaluated using a defined evaluation function. The model is aimed at mitigating the known vulnerabilities of previously proposed models, basing its security on the infeasibility of polynomial reconstruction.
Keywords: fingerprint, template protection, bio-cryptography, minutiae protection
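The two security levels can be illustrated with a minimal sketch. Both the invariant feature set (pairwise distances and orientation differences, which do not change when the fingerprint is rotated or translated) and the key-derived polynomial evaluation are assumptions for illustration, not the authors' exact functions:

```python
# Hedged sketch of the two-level minutiae protection scheme described above.
# Level 1: rotation/translation-invariant pairwise features (irreversible,
# since absolute positions are discarded). Level 2: evaluation with a
# polynomial whose coefficients are derived from a secret key.

import hashlib
import math

def invariant_features(minutiae):
    """minutiae: list of (x, y, orientation). Returns pairwise
    (distance, orientation difference) tuples, invariant to rotation
    and translation of the whole print."""
    feats = []
    for i in range(len(minutiae)):
        for j in range(i + 1, len(minutiae)):
            (x1, y1, t1), (x2, y2, t2) = minutiae[i], minutiae[j]
            feats.append((math.hypot(x2 - x1, y2 - y1),
                          (t2 - t1) % (2 * math.pi)))
    return feats

def evaluate(feats, key, degree=3):
    """Evaluate each feature with key-derived polynomial coefficients;
    the protected template stores only these evaluations."""
    coeffs = [int.from_bytes(hashlib.sha256(key + bytes([k])).digest()[:4], "big")
              for k in range(degree + 1)]
    return [sum(c * (d ** k) for k, c in enumerate(coeffs)) + a
            for d, a in feats]
```

Matching would then be performed between evaluated templates, so the raw minutiae never need to be recovered by the verifier.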
Procedia PDF Downloads 170