Search results for: minimum data set
25723 Design of a Low Cost Motion Data Acquisition Setup for Mechatronic Systems
Authors: Baris Can Yalcin
Abstract:
Motion sensors are commonly used as valuable components in mechatronic systems; however, many mechatronic designs and applications that need motion sensors, especially high-tech systems, cost enormous amounts of money. Designing software for the communication protocol between the data acquisition card and the motion sensor is another issue that has to be solved. This study presents how to design a low-cost motion data acquisition setup consisting of an MPU 6050 motion sensor (3-axis gyroscope and accelerometer) and an Arduino Mega2560 microcontroller. The design parameters are calibration of the sensor, identification of and communication between the sensor and the data acquisition card, and interpretation of the data collected by the sensor.
Keywords: design, mechatronics, motion sensor, data acquisition
Procedia PDF Downloads 588
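A minimal host-side sketch of such a setup, assuming the Arduino streams raw MPU 6050 readings as comma-separated lines over USB serial; the port name, baud rate, message format, and full-scale settings below are illustrative assumptions, not details from the paper.

```python
# Host-side reader for an Arduino streaming MPU 6050 samples as CSV lines:
# "ax,ay,az,gx,gy,gz" (raw 16-bit values). Port and scale factors are assumptions.
import serial  # pyserial

PORT = "/dev/ttyACM0"      # hypothetical port; adjust for your system
ACCEL_SCALE = 16384.0      # LSB/g at the +/-2 g full-scale setting
GYRO_SCALE = 131.0         # LSB/(deg/s) at the +/-250 deg/s setting

with serial.Serial(PORT, 115200, timeout=1) as link:
    for _ in range(100):                      # read 100 samples
        line = link.readline().decode(errors="ignore").strip()
        if not line:
            continue
        ax, ay, az, gx, gy, gz = (int(v) for v in line.split(","))
        print(f"accel [g]: {ax/ACCEL_SCALE:+.3f} {ay/ACCEL_SCALE:+.3f} {az/ACCEL_SCALE:+.3f}  "
              f"gyro [deg/s]: {gx/GYRO_SCALE:+.2f} {gy/GYRO_SCALE:+.2f} {gz/GYRO_SCALE:+.2f}")
```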
25722 Bee Colony Optimization Applied to the Bin Packing Problem
Authors: Kenza Aida Amara, Bachir Djebbar
Abstract:
We treat the two-dimensional bin packing problem, which involves packing a given set of rectangles into a minimum number of larger identical rectangles called bins. This combinatorial problem is NP-hard. We propose a pretreatment for the oriented version of the problem that allows the valorization of the lost areas in the bins and the reduction of the problem size. A heuristic method based on the first-fit strategy, adapted to this problem, is presented. We then present a resolution approach based on bee colony optimization. Computational results compare the number of bins used with and without pretreatment.
Keywords: bee colony optimization, bin packing, heuristic algorithm, pretreatment
Procedia PDF Downloads 633
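As an illustration of the first-fit idea the abstract builds on, here is a minimal shelf-based first-fit-decreasing sketch for oriented (no-rotation) rectangles. It shows only the general heuristic, not the authors' pretreatment or the bee colony optimization, and it assumes each rectangle fits inside a single bin.

```python
# Shelf-based first-fit for 2-D bin packing: sort rectangles tallest-first,
# place each on the first shelf of the first bin where it fits, opening a new
# shelf or a new bin when nothing fits.
def shelf_first_fit(rects, bin_w, bin_h):
    """rects: list of (w, h); returns bins, each a list of shelves
    [used_width, shelf_height, y_offset]."""
    bins = []
    for w, h in sorted(rects, key=lambda r: r[1], reverse=True):
        placed = False
        for shelves in bins:
            for shelf in shelves:
                if shelf[0] + w <= bin_w and h <= shelf[1]:
                    shelf[0] += w              # place on an existing shelf
                    placed = True
                    break
            if placed:
                break
            y = shelves[-1][2] + shelves[-1][1]
            if y + h <= bin_h and w <= bin_w:
                shelves.append([w, h, y])      # open a new shelf in this bin
                placed = True
                break
        if not placed:
            bins.append([[w, h, 0]])           # open a new bin
    return bins

rects = [(4, 3), (5, 2), (2, 2), (6, 4), (3, 3)]
print(len(shelf_first_fit(rects, bin_w=10, bin_h=10)), "bins used")
```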
25721 The Optimization of Decision Rules in Multimodal Decision-Level Fusion Scheme
Authors: Andrey V. Timofeev, Dmitry V. Egorov
Abstract:
This paper introduces an original method for the parametric optimization of the structure of a multimodal decision-level fusion scheme, which combines the results of the partial solutions of the classification task obtained from an assembly of mono-modal classifiers. As a result, a multimodal fusion classifier with the minimum value of the total error rate has been obtained.
Keywords: classification accuracy, fusion solution, total error rate, multimodal fusion classifier
Procedia PDF Downloads 466
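Decision-level fusion itself can be sketched in a few lines: mono-modal classifier votes are combined with weights derived from each classifier's error rate. The AdaBoost-style weighting below is an illustrative assumption, not the authors' optimization method.

```python
# Weighted-vote fusion of mono-modal classifier decisions.
import numpy as np

def fuse(votes, error_rates):
    """votes: (n_classifiers, n_samples) predicted labels;
    error_rates: per-classifier error estimates in (0, 0.5)."""
    w = np.log((1 - np.asarray(error_rates)) / np.asarray(error_rates))
    labels = np.unique(votes)
    scores = np.array([[w[votes[:, j] == c].sum() for c in labels]
                       for j in range(votes.shape[1])])
    return labels[scores.argmax(axis=1)]

votes = np.array([[0, 1, 1, 0],    # classifier A
                  [0, 1, 0, 0],    # classifier B
                  [1, 1, 1, 0]])   # classifier C
print(fuse(votes, error_rates=[0.10, 0.25, 0.40]))  # -> [0 1 1 0]
```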
25720 Speed Characteristics of Mixed Traffic Flow on Urban Arterials
Authors: Ashish Dhamaniya, Satish Chandra
Abstract:
Speed and traffic volume data were collected on different sections of four-lane and six-lane roads in three metropolitan cities in India. The speed data were analyzed to fit statistical distributions to individual vehicle speeds and to the combined speeds of all vehicles. It is noted that individual vehicle speed data generally follow a normal distribution, but the combined speed data of all vehicles at a section of urban road may or may not follow a normal distribution, depending upon the composition of the traffic stream. A new term, the Speed Spread Ratio (SSR), is introduced in this paper: the ratio of the difference between the 85th and 50th percentile speeds to the difference between the 50th and 15th percentile speeds. If the SSR is unity, then the speed data are truly normally distributed. It is noted that on six-lane urban roads, speed data follow a normal distribution only when the SSR is in the range of 0.86–1.11. The range of SSR was also validated on four-lane roads.
Keywords: normal distribution, percentile speed, speed spread ratio, traffic volume
Procedia PDF Downloads 422
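The SSR computation is straightforward from spot-speed observations; a minimal sketch on synthetic normally distributed speeds, for which the SSR should be close to 1 by symmetry:

```python
# SSR = (V85 - V50) / (V50 - V15); equals 1 for a symmetric (normal) distribution.
import numpy as np

rng = np.random.default_rng(0)
speeds = rng.normal(loc=48.0, scale=8.0, size=500)   # synthetic spot speeds, km/h

v15, v50, v85 = np.percentile(speeds, [15, 50, 85])
ssr = (v85 - v50) / (v50 - v15)
print(f"V15={v15:.1f}  V50={v50:.1f}  V85={v85:.1f}  SSR={ssr:.2f}")
```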
25719 An Exploratory Analysis of Brisbane's Commuter Travel Patterns Using Smart Card Data
Authors: Ming Wei
Abstract:
Over the past two decades, Location Based Service (LBS) data have been increasingly applied to urban and transportation studies due to their comprehensiveness and consistency. However, compared to other LBS data, including mobile phone data, GPS and social networking platforms, smart card data collected from public transport users have arguably yet to be fully exploited in urban systems analysis. Using five weekdays of passenger travel transaction data taken from go card – Southeast Queensland’s transit smart card – this paper analyses the spatiotemporal distribution of passenger movement with regard to the land use patterns in Brisbane. Work and residential places for public transport commuters were identified after extracting journey-to-work patterns. Our results show that the locations of the workplaces identified from the go card data and the residential suburbs are largely consistent with those marked in the land use map. However, the intensity of some residential locations, in terms of population or commuter densities, does not match well between the map and the values derived from the go card data. This indicates a misalignment between residential areas and workplaces to a certain extent, shedding light on how enhancements to service management and infrastructure expansion might be undertaken.
Keywords: big data, smart card data, travel pattern, land use
Procedia PDF Downloads 285
25718 Impact of Climate Change on Flow Regime in Himalayan Basins, Nepal
Authors: Tirtha Raj Adhikari, Lochan Prasad Devkota
Abstract:
This research studied the hydrological regime of three glacierized river basins in the Khumbu, Langtang and Annapurna regions of Nepal using the Hydrologiska Byråns Vattenbalansavdelning (HBV) model, HBV-light 3.0. Future discharge scenarios were also studied using downscaled climate data derived from a statistical downscaling method. General Circulation Models (GCMs) successfully simulate future climate variability and climate change on a global scale; however, poor spatial resolution constrains their application for impact studies at a regional or local level. The dynamically downscaled precipitation and temperature data from the Coupled Global Circulation Model 3 (CGCM3) were used for the climate projection, under the A2 and A1B SRES scenarios. In addition, observed historical temperature, precipitation and discharge data were collected from 14 different hydro-meteorological locations for the implementation of this study, which included watershed and hydro-meteorological characterization, trend analysis and water balance computation. The simulated precipitation and temperature were corrected for bias before being implemented in the HBV-light 3.0 conceptual rainfall-runoff model to predict the flow regime, in which the GAP optimization approach and then calibration were used to obtain several parameter sets that finally reproduced the observed stream flow. Except in summer, the analysis showed increasing trends in annual as well as seasonal precipitation during the period 2001–2060 for both the A2 and A1B scenarios over the three basins under investigation. In these river basins, the model projected warmer days in every season of the entire period from 2001 to 2060 for both scenarios. These warming trends are stronger in maximum than in minimum temperatures throughout the year, indicating an increasing trend in the daily temperature range due to the recent global warming phenomenon. Furthermore, summer discharge shows a decreasing trend in the Langtang Khola (Langtang region) but an increasing trend in the Modi Khola (Annapurna region) and Dudh Koshi (Khumbu region) river basins. The changes in flow regime are more pronounced during the later parts of the future decades than during the earlier parts in all basins. Annual water surpluses of 1419 mm, 177 mm and 49 mm were observed in the Annapurna, Langtang and Khumbu regions, respectively.
Keywords: temperature, precipitation, water discharge, water balance, global warming
Procedia PDF Downloads 344
25717 Whitnall’s Sling Will Be an Alternative Method for the Surgical Correction of Poor Function Ptosis
Authors: Titap Yazicioglu
Abstract:
The aim was to examine the results of two different surgeries in patients with severe ptosis and poor levator function. The records of 10 bilateral congenital ptosis patients, who underwent Whitnall’s sling surgery on one eyelid and frontalis sling surgery on the other, were analyzed retrospectively. All patients had severe congenital ptosis (>4 mm) and poor levator function (LF<4 mm). Data regarding eyelid position, cosmetic outcomes, and postoperative complications were evaluated. All patients were assessed for a minimum of one year with regard to the amount of correction, residual ptosis and lagophthalmos. The study consisted of 10 patients, with an average age of 9.2±2.4 years. Preoperatively, the average LF was 3.4±0.51 mm, the vertical lid height was 3.5±0.52 mm and the margin reflex distance-1 (MRD-1) was 0.4±0.51 mm. The mean vertical lid height was measured as 7.1±0.73 mm in the frontalis sling group and 7.2±0.63 mm in the Whitnall’s sling group at the postoperative 1st-month control. However, in patients with Whitnall’s sling, revision was performed with frontalis sling surgery due to failure of vertical lid height in the late postoperative period, and an average of 7.5±0.52 mm was achieved. Satisfactory results were obtained in all patients. Although postoperative lagophthalmos developed in the frontalis sling group, none of the patients developed exposure keratitis. Granuloma was observed as a sling infection in 2 (20%) of the patients. Although the Whitnall’s sling technique provides a natural appearance without interfering with the functional result, we did not find it as successful as frontalis sling surgery in severe ptosis.
Keywords: congenital ptosis, frontalis suspension, Whitnall ligament, complications
Procedia PDF Downloads 106
25716 Pattern Recognition Using Feature Based Die-Map Clustering in the Semiconductor Manufacturing Process
Authors: Seung Hwan Park, Cheng-Sool Park, Jun Seok Kim, Youngji Yoo, Daewoong An, Jun-Geol Baek
Abstract:
As big data analysis becomes more important, yield prediction using data from the semiconductor process is essential. In general, yield prediction and analysis of the causes of failure are closely related. The purpose of this study is to analyze the patterns that affect the final test results using die-map-based clustering. Many studies have been conducted using die data from the semiconductor test process. However, such analysis has limitations, as the test data are less directly related to the final test results. Therefore, this study proposes a framework for analysis through clustering using more detailed data than the existing die data. This study consists of three phases. In the first phase, a die map is created from the fail-bit data in each sub-area of the die. In the second phase, clustering using the map data is performed. The third phase is to find the patterns that affect the final test result. Finally, the proposed three steps are applied to actual industrial data, and the experimental results show potential for field application.
Keywords: die-map clustering, feature extraction, pattern recognition, semiconductor manufacturing process
Procedia PDF Downloads 402
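A minimal sketch of the die-map clustering idea as we read it: summarize each die map by its fail-bit counts per sub-area, then cluster the resulting feature vectors. The grid size, normalization, choice of k-means, and number of clusters are illustrative assumptions, not the authors' settings.

```python
# Cluster die maps by their spatial fail-bit distribution (synthetic data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_dies, grid = 200, (8, 8)                       # 200 dies, 8x8 sub-areas
fail_bits = rng.poisson(lam=2.0, size=(n_dies, *grid)).astype(float)
fail_bits[:50, :4, :] += rng.poisson(6.0, size=(50, 4, 8))  # synthetic "top-half" pattern

features = fail_bits.reshape(n_dies, -1)         # one feature vector per die map
features /= features.sum(axis=1, keepdims=True)  # normalize to a spatial distribution

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))                       # cluster sizes
```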
25715 Effects of Test Environment on the Sliding Wear Behaviour of Cast Iron, Zinc-Aluminium Alloy and Its Composite
Authors: Mohammad M. Khan, Gajendra Dixit
Abstract:
The partially lubricated sliding wear behaviour of a zinc-based alloy reinforced with 10 wt% SiC particles has been studied as a function of applied load and solid lubricant particle size and has been compared with that of the matrix alloy and conventionally used grey cast iron. The wear tests were conducted at a sliding velocity of 2.1 m/s under various partially lubricated conditions using a pin-on-disc machine as per ASTM G99-05. Base oil (SAE 20W-40) or mixtures of the base oil with 5 wt% graphite of particle sizes 7-10 µm and 100 µm were used for creating the lubricated conditions. The matrix alloy revealed primary dendrites of α and eutectoid α + η and ε phases in the interdendritic regions. A similar microstructure was depicted by the composite, with the additional presence of the dispersoid SiC particles. In the case of cast iron, flakes of graphite were observed in the matrix; the latter comprised (a majority of) pearlite and (a limited quantity of) ferrite. The results show a large improvement in the wear resistance of the zinc-based alloy after reinforcement with SiC particles. The cast iron shows an intermediate response between the matrix alloy and the composite. Solid lubrication improved the wear resistance and friction behaviour of both the reinforced and base alloys. Moreover, the minimum wear rate was obtained in the oil + 5 wt% graphite (7-10 µm) lubricated environment for the matrix alloy and composite, while for cast iron the addition of solid lubricant increased the wear rate, and the minimum wear rate was obtained in the oil-lubricated environment. The cast iron experienced higher frictional heating than the matrix alloy and composite in all cases, especially at higher loads. As far as the friction coefficient is concerned, a mixed trend of behaviour was noted. The wear rate and frictional heating increased with load, while the friction coefficient was affected in the opposite manner. Test duration influenced the frictional heating and friction coefficient of the samples in a mixed manner.
Keywords: solid lubricant, sliding wear, grey cast iron, zinc based metal matrix composites
Procedia PDF Downloads 317
25714 Spatial Integrity of Seismic Data for Oil and Gas Exploration
Authors: Afiq Juazer Rizal, Siti Zaleha Misnan, M. Zairi M. Yusof
Abstract:
Seismic data are the fundamental tool utilized by exploration companies to determine potential hydrocarbons. However, the importance of seismic trace data will be undermined unless the geospatial component of the data is understood. Deriving a proposed well to be drilled from data that have positional ambiguity will jeopardize business decisions and the millions of dollars of investment that every oil and gas company would like to protect. A spatial integrity QC workflow has been introduced in PETRONAS to ensure positional errors within seismic data are recognized throughout the exploration lifecycle, from acquisition and processing to seismic interpretation. This includes, amongst other tests, verifying that the data are referenced to the appropriate coordinate reference system, validating the survey configuration, and verifying the geometry loading. The direct outcome of the workflow implementation helps improve the reliability and integrity of the sub-surface geological models produced by geoscientists and provides important input to potential hazard assessment, where positional accuracy is crucial. This workflow development initiative is part of a bigger geospatial integrity management effort, whereby nearly eighty percent of oil and gas data are location-dependent.
Keywords: oil and gas exploration, PETRONAS, seismic data, spatial integrity QC workflow
Procedia PDF Downloads 222
25713 Patients’ Trust in Health Care Systems
Authors: Dilara Usta, Fatos Korkmaz
Abstract:
Background: Individuals who utilise health services maintain relationships with health professionals, insurers and institutions. The nature of these relationships requires service receivers to have trust in the service providers, because maintaining health services without reciprocal trust is very difficult. Therefore, individual evaluations of trust within the scope of health services have become increasingly important. Objective: To investigate patients’ trust in the health-care system and the relevant socio-demographic characteristics. Methods: This research was conducted using a descriptive design and included 493 literate patients aged 18-65 years who were hospitalised for a minimum of two days at public, university and training-and-research hospitals in Ankara, Turkey. Patients’ trust in health-care professionals, insurers and institutions was investigated. Data were collected using a demographic questionnaire and the Multidimensional Trust in Health-Care Systems Scale between September 2015 and April 2016. Results: The participants’ mean age was 47.7±13.1 years; 70% had a moderate income, 69% had a prior hospitalisation and 63.5% of the patients were satisfied with the health-care services. The mean Multidimensional Trust in Health-Care Systems Scale score for the sample was 61.5±8.3; the provider subscale had a mean of 38.1±5, the insurers subscale had a mean of 12.9±3.7 and the institutions subscale had a mean of 10.6±1.9. Conclusion: Patients’ level of trust in the health-care system was above average, and the trust level of patients with higher educational and socio-economic levels was lower compared to the other patients. Health-care professionals should raise awareness about the significance of trust in the health-care system.
Keywords: delivery of health care, health care system, nursing, patients, trust
Procedia PDF Downloads 369
25712 Food Losses Reducing by Extending the Minimum Durability Date of Thermally Processed Products
Authors: Dorota Zielińska, Monika Trząskowska, Anna Łepecka, Katarzyna Neffe-Skocińska, Beata Bilska, Marzena Tomaszewska, Danuta Kołożyn-Krajewska
Abstract:
Food labeled with a minimum durability date (MDD) is known to have a long shelf life. Properly stored or transported food retains its physical, chemical, microbiological and sensory properties up to the MDD. The aim of the study was to assess the sensory quality and microbiological safety of selected thermally processed products, i.e., mayonnaise, jam and canned tuna, within and after the MDD. The scope of the study was to determine the markers of microbiological quality, i.e., the total viable count (TVC), the Enterobacteriaceae count and the total yeast and mold count (TYMC), on the last day of the MDD and after 1 and 3 months of storage after the MDD expired. In addition, the presence of Salmonella and Listeria monocytogenes was examined on the last day of the MDD. The sensory quality of the products was assessed by quantitative descriptive analysis (QDA); the intensity of the differentiators (quality features) and the overall quality were defined and determined. It was found that during three months of storage of the tested food products after the MDD expired, the microbiological quality slightly decreased; however, regardless of the tested sample, the TVC was at the level of <3 log cfu/g, as was the Enterobacteriaceae count, which indicates the good microbiological quality of the tested foods. The TYMC increased during storage but did not exceed 2 log cfu/g of product. Salmonella and Listeria monocytogenes were not found in any of the tested food samples. The sensory quality of the mayonnaise changed negatively during storage. Three months after the expiry of the MDD, a decrease in the intensity of the "fat" and "egg" taste and aroma, as well as in the "density", was found. The "sour" taste intensity of the blueberry jam after three months of storage was slightly higher compared to the jam tested on the last day of the MDD, without affecting the overall quality. In the case of the tuna samples, an increase in the "fishy" taste and aroma intensity was observed during storage, and the overall quality did not change. The tested thermally processed products (mayonnaise, jam and canned tuna) were characterized by good microbiological and sensory quality on the last day of the MDD, as well as after three months of storage under the conditions recommended by the producer. These findings indicate the possibility of reducing food losses by extending or completely abolishing the MDD of selected thermally processed food products.
Keywords: food wastes, food quality and safety, mayonnaise, jam, tuna
Procedia PDF Downloads 129
25711 Efficiency of Membrane Distillation to Produce Fresh Water
Authors: Sabri Mrayed, David Maccioni, Greg Leslie
Abstract:
Seawater desalination has been accepted as one of the most effective solutions to the growing problem of a diminishing clean drinking water supply. Currently, two desalination technologies dominate the market – thermally driven multi-stage flash distillation (MSF) and membrane-based reverse osmosis (RO). However, in recent years membrane distillation (MD) has emerged as a potential alternative to the established means of desalination. This research project set out to determine the viability of MD as an alternative to MSF and RO for seawater desalination. Specifically, the project involved conducting a thermodynamic analysis of the process based on the second law of thermodynamics to determine the efficiency of MD. Data were obtained from experiments carried out on a laboratory rig. In order to determine the exergy values required for the exergy analysis, two separate models were built in Engineering Equation Solver – the ’Minimum Separation Work Model’ and the ‘Stream Exergy Model’. The efficiency of the MD process was found to be 17.3%, and the energy consumption was determined to be 4.5 kWh to produce one cubic meter of fresh water. The results indicate that MD has potential as a technique for seawater desalination compared to RO and MSF. However, this was shown to be the case only if an alternative energy source, such as green or waste energy, was available to provide the thermal energy input to the process. If the process was required to power itself, it was shown to be highly inefficient and in no way thermodynamically viable as a commercial desalination process.
Keywords: desalination, exergy, membrane distillation, second law efficiency
Procedia PDF Downloads 363
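As a worked check of how the two reported figures relate, the second-law efficiency is the ratio of the minimum (least) work of separation to the actual specific energy consumption; the least-work value below is implied by the abstract's numbers, not stated in it.

```latex
% Second-law (exergy) efficiency of a desalination process.
\eta_{II} = \frac{W_{\mathrm{least}}}{W_{\mathrm{actual}}}
\quad\Rightarrow\quad
W_{\mathrm{least}} \approx 0.173 \times 4.5\ \mathrm{kWh/m^3}
\approx 0.78\ \mathrm{kWh/m^3}
```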
25710 Evaluating Data Maturity in Riyadh's Nonprofit Sector: Insights Using the National Data Maturity Index (NDI)
Authors: Maryam Aloshan, Imam Mohammad Ibn Saud, Ahmad Khudair
Abstract:
This study assesses the data governance maturity of nonprofit organizations in Riyadh, Saudi Arabia, using the National Data Maturity Index (NDI) framework developed by the Saudi Data and Artificial Intelligence Authority (SDAIA). Employing a survey designed around the NDI model, data maturity levels were evaluated across 14 dimensions using a 5-point Likert scale. The results reveal a spectrum of maturity levels among the organizations surveyed: while some medium-sized associations reached the ‘Defined’ stage, others, including large associations, fell within the ‘Absence of Capabilities’ or ‘Building’ phases, with no organizations achieving the advanced ‘Established’ or ‘Pioneering’ levels. This variation suggests an emerging recognition of data governance but underscores the need for targeted interventions to bridge the maturity gap. The findings point to a significant opportunity to elevate data governance capabilities in Saudi nonprofits through customized capacity-building initiatives, including training, mentorship and best-practice sharing. This study contributes valuable insights into the digital transformation journey of the Saudi nonprofit sector, aligning with national goals for data-driven governance and organizational efficiency.
Keywords: nonprofit organizations, national data maturity index (NDI), Saudi Arabia, SDAIA, data governance, data maturity
Procedia PDF Downloads 14
25709 Cloud Data Security Using Map/Reduce Implementation of Secret Sharing Schemes
Authors: Sara Ibn El Ahrache, Tajje-eddine Rachidi, Hassan Badir, Abderrahmane Sbihi
Abstract:
Recently, there has been increasing confidence in the favorable usage of big data drawn from the huge amount of information deposited in cloud computing systems. Data kept on such systems can be retrieved through the network at the user’s convenience. However, the data that users send include private information, and therefore information leakage from these data is now a major social problem. The usage of secret sharing schemes for cloud computing has lately been shown to be relevant, whereby users distribute their data to several servers. Notably, in a (k,n) threshold scheme, data security is assured if and only if, throughout the whole life of the secret, the opponent cannot compromise k or more of the n servers. In fact, a number of secret sharing algorithms have been suggested to deal with these security issues. In this paper, we present a MapReduce implementation of Shamir’s secret sharing scheme to increase its performance and to achieve optimal security for cloud data. Different tests were run, and through them the contributions of the proposed approach have been demonstrated. These contributions are quite considerable in terms of both security and performance.
Keywords: cloud computing, data security, Mapreduce, Shamir's secret sharing
Procedia PDF Downloads 306
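A minimal single-machine sketch of Shamir's (k, n) threshold scheme over a prime field; the paper's MapReduce distribution of this computation is not shown, and the choice of modulus is an assumption.

```python
# Shamir's secret sharing: the secret is the constant term of a random
# degree-(k-1) polynomial; any k shares recover it by Lagrange interpolation.
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus (secret < P)

def split(secret, k, n):
    """Create n shares; any k of them reconstruct the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # modular inverse via Fermat
    return secret

shares = split(secret=123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice -> 123456789
```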
25708 Planning Fore Stress II: Study on Resiliency of New Architectural Patterns in Urban Scale
Authors: Amir Shouri, Fereshteh Tabe
Abstract:
Master planning and thoughtful, sequential design strategies for urban infrastructure will play a major role in reducing the damage caused to cities by natural disasters, war, and social or population-related conflicts. Defensive strategies have been revised throughout the history of mankind after damage from natural disasters, experiences of war, and terrorist attacks on cities – lessons learnt from earthquakes, from the casualties of the two world wars of the 20th century, and from terrorist activities of all times. Particularly after Hurricane Sandy in New York in 2012 and the September 11th attack on New York’s World Trade Center (WTC) in the 21st century, there have been a series of serious collaborations between law-making authorities, urban planners and architects, and defence-related organizations to, firstly, prepare for and/or prevent such events and, secondly, reduce the human loss and economic damage to a minimum. This study will work on developing a model of planning for New York City in which its citizens suffer minimum impacts in threat-full times, with minimum economic damage to the city after the stress has passed. The main discussion in this proposal will focus on pre-hazard, hazard-time and post-hazard transformative policies and strategies that will reduce the “life casualties” and ease “economic recovery” in post-hazard conditions. This proposal scrutinizes the idea that one of the key solutions in this path might be focusing on all overlapping possibilities on the architectural platforms of three fundamental infrastructures – transportation, power-related sources and defensive abilities – within a dynamic, transformative framework that will provide maximum safety, a high level of flexibility and the fastest action-reaction opportunities in stressful periods of time. “Planning Fore Stress” will be carried out in an analytical, qualitative and quantitative framework, studying cases from all over the world. Technology, organic design, materiality, urban forms, city politics and sustainability will be discussed through different cases on an international scale: from the modern strategies of Copenhagen for living in harmony with nature to the traditional patterns of old Indonesian urban planning, from the “Iron Dome” of Israel to the “tunnels” in Gaza, from the “ultra-high-performance quartz-infused concrete” of Iran to the peaceful and nature-friendly strategies of Switzerland, and from “urban geopolitics” in cities, war and terrorism to the “design of sustainable cities” across the world. All will be studied with references and a detailed analysis of each case in order to propose the most resourceful, practical and realistic solutions to questions on “new city divisions”, “new city planning and social activities” and “new strategic architecture for safe cities”. This study is a developed version of a proposal that was announced as a winner at MoMA in 2013 in a call for ideas for Rockaway after Hurricane Sandy.
Keywords: urban scale, city safety, natural disaster, war and terrorism, city divisions, architecture for safe cities
Procedia PDF Downloads 484
25707 Principal Component Analysis on Colon Cancer Detection
Authors: N. K. Caecar Pratiwi, Yunendah Nur Fuadah, Rita Magdalena, R. D. Atmaja, Sofia Saidah, Ocky Tiaramukti
Abstract:
Colon cancer, or colorectal cancer, is a type of cancer that attacks the last part of the human digestive system. Lymphoma and carcinoma are types of cancer that attack the human colon. Colon cancer causes the deaths of about half a million people every year. In Indonesia, colon cancer is the third most common cancer in women and the second in men. Unhealthy lifestyles, such as minimal fiber consumption, rare exercise and a lack of awareness of early detection, are factors behind the high number of colon cancer cases. The aim of this project is to produce a system that can detect and classify images into the colon cancer types lymphoma and carcinoma, or normal tissue. The designed system used 198 colon cancer tissue pathology images, consisting of 66 images of lymphoma, 66 images of carcinoma and 66 images of normal/healthy colon tissue. The system classifies colon cancer starting from image preprocessing, followed by feature extraction using Principal Component Analysis (PCA) and classification using the K-Nearest Neighbor (K-NN) method. The preprocessing stages are resizing, RGB-to-grayscale conversion, edge detection and, finally, histogram equalization. Tests were done by trying several K-NN input parameter settings. The result of this project is an image processing system that can detect and classify the type of colon cancer with high accuracy and low computation time.
Keywords: carcinoma, colorectal cancer, k-nearest neighbor, lymphoma, principal component analysis
Procedia PDF Downloads 205
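A minimal sketch of the described pipeline shape – flattened grayscale images, PCA features, K-NN classification – on synthetic stand-in data; the image size, component count, and k below are assumptions, not the authors' settings.

```python
# PCA feature extraction followed by K-NN classification (synthetic data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((198, 64 * 64))          # stand-in for 198 flattened grayscale images
y = np.repeat([0, 1, 2], 66)            # lymphoma / carcinoma / normal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=5))
model.fit(X_tr, y_tr)
print(f"accuracy: {model.score(X_te, y_te):.2f}")  # ~chance on random data
```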
25706 A Systematic Review of Pedometer- or Accelerometer-Based Interventions for Increasing Physical Activity in Low Socioeconomic Groups
Authors: Shaun G. Abbott, Rebecca C. Reynolds, James B. Etter, John B. F. de Wit
Abstract:
The benefits of physical activity (PA) for health are well documented. Low socioeconomic status (SES) is associated with poor health, with PA a suggested mediator. Pedometers and accelerometers offer an effective behaviour change tool for increasing PA levels. While the role of pedometer and accelerometer use in increasing PA is recognized in many populations, little is known about low-SES groups. We are aiming to assess the effectiveness of pedometer- and accelerometer-based interventions for increasing PA step counts and improving subsequent health outcomes among low-SES groups in high-income countries. The Medline, Embase, PsycINFO, CENTRAL and SportDiscus databases were searched to identify articles published before 10 July 2015, using search terms developed from previous systematic reviews. The inclusion criteria are: low-SES participants, classified by income, geography, education, occupation or ethnicity; a study duration of at least 4 weeks; an intervention and a control group; and the wearing of an unsealed pedometer or accelerometer to objectively measure PA as step counts per day for the duration of the study. We retrieved 2,142 articles from our database searches after the removal of duplicates. Two investigators independently reviewed the titles and abstracts of these articles (50% each), and a combined 20% sample was reviewed to account for inter-assessor variation. We are currently verifying the full texts of 430 articles. Included studies will be critically appraised for risk of bias using guidelines suggested by the Cochrane Public Health Group. Two investigators will extract data concerning the intervention; study design; comparators; steps per day; participants; context; and the presence or absence of obesity and/or chronic disease. Heterogeneity amongst studies is anticipated; thus, a narrative synthesis of the data will be conducted, with the simplification of selected results into percentage increases from baseline to allow for between-study comparison. Results will be presented at the conference in December if selected.
Keywords: accelerometer, pedometer, physical activity, socioeconomic, step count
Procedia PDF Downloads 331
25705 A Modular Framework for Enabling Analysis for Educators with Different Levels of Data Mining Skills
Authors: Kyle De Freitas, Margaret Bernard
Abstract:
Enabling data mining analysis among a wider audience of educators is an active area of research within the educational data mining (EDM) community. This paper proposes a framework for developing an environment that caters both to educators who have little technical data mining skill and to more advanced users with some data mining expertise. The framework architecture was developed through a review of the strengths and weaknesses of existing models in the literature. The proposed framework provides a modular architecture that allows future researchers to focus on the development of specific areas within the EDM process. Finally, the paper highlights a strategy for enabling analysis through either the use of predefined questions or a guided data mining process, and shows how the developed questions and the analysis conducted can be reused and extended over time.
Keywords: educational data mining, learning management system, learning analytics, EDM framework
Procedia PDF Downloads 326
25704 Using Audit Tools to Maintain Data Quality for ACC/NCDR PCI Registry Abstraction
Authors: Vikrum Malhotra, Manpreet Kaur, Ayesha Ghotto
Abstract:
Background: Cardiac registries such as the ACC Percutaneous Coronary Intervention (PCI) Registry require high-quality data to be abstracted, including data elements such as nuclear cardiology, diagnostic coronary angiography and PCI. Introduction: The audit tool created is used by data abstractors to provide data audits and assess the accuracy and inter-rater reliability of the abstraction performed by the abstractors for a health system. This audit tool solution has been developed across 13 registries, including the ACC/NCDR registries, PCI, STS and Get With The Guidelines. Methodology: The data audit tool was used to audit internal registry abstraction of all data elements, including stress test performed, type of stress test, date of stress test, results of stress test, risk/extent of ischemia, diagnostic catheterization detail, and PCI data elements, for the ACC/NCDR PCI registries. It is being used internally across 20 hospital systems, providing abstraction and audit services for them. Results: The data audit tool showed inter-rater reliability and accuracy greater than 95% for the PCI registry in 50 PCI registry cases in 2021. Conclusion: The tool is being used internally for surgical societies and across hospital systems. The audit tool enables the abstractor to be assessed by an external abstractor and includes all of the data dictionary fields for each registry.
Keywords: abstraction, cardiac registry, cardiovascular registry, registry, data
Procedia PDF Downloads 105
25703 Artificial Intelligence Based Comparative Analysis for Supplier Selection in Multi-Echelon Automotive Supply Chains via GEP and ANN Models
Authors: Seyed Esmail Seyedi Bariran, Laysheng Ewe, Amy Ling
Abstract:
Since supplier selection is a vital decision, selecting suppliers in the best and most accurate way is of great importance for enterprises. In this study, a new artificial intelligence approach is exerted to remove the weaknesses of supplier selection. The paper has three parts. The first is choosing appropriate criteria for assessing supplier performance. The next is collecting the data set based on expert input. Afterwards, the data set is divided into two parts, the training data set and the testing data set. With the training data set, the best structures of the GEP and ANN models are selected, and to evaluate the power of the mentioned methods, the testing data set is used. The results obtained show that the accuracy of GEP is higher than that of ANN. Moreover, unlike ANN, GEP yields an explicit mathematical equation for supplier selection.
Keywords: supplier selection, automotive supply chains, ANN, GEP
Procedia PDF Downloads 631
25702 Time Series Modelling for Forecasting Wheat Production and Consumption of South Africa in Time of War
Authors: Yiseyon Hosu, Joseph Akande
Abstract:
Wheat has been one of the most important staple food grains for humans for centuries and is widely consumed in South Africa. It has a special place in the South African economy because of its significance for food security, trade and industry. This paper modelled and forecast the production and consumption of wheat in South Africa in the time of COVID-19 and the ongoing Russia-Ukraine war, using annual time series data from 1940–2021 and ARIMA models. Both the averaging forecast and the selected models' forecasts indicate the possibility of an increase in production. The minimum and maximum growth in production are projected to be 3 million and 10 million tons, respectively. However, the models also forecast the possibility of a depression in consumption in South Africa. Although COVID-19 and the war between Ukraine and Russia, two major producers and exporters of global wheat, are currently affecting price volatility, wheat production in South Africa is expected to increase, meet consumption demand, and provide an opportunity to increase exports relative to domestic consumption. Forecasting the production and consumption behaviour of major crops plays an important role in food and nutrition security; these findings can assist policymakers and will provide them with insights into the production and pricing policy of wheat in South Africa.
Keywords: ARIMA, food security, price volatility, staple food, South Africa
Procedia PDF Downloads 102
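A minimal ARIMA forecasting sketch in the spirit of the paper, on a synthetic annual series for 1940–2021; the (1, 1, 1) order and the trend parameters are illustrative assumptions, not the authors' selected model.

```python
# Fit an ARIMA model to an annual series and produce a 5-year-ahead forecast.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
years = pd.date_range("1940", "2021", freq="YS")
production = pd.Series(1.5 + 0.01 * np.arange(len(years))
                       + rng.normal(0, 0.15, len(years)), index=years)  # million tons

model = ARIMA(production, order=(1, 1, 1)).fit()
forecast = model.get_forecast(steps=5)
print(forecast.predicted_mean.round(2))   # point forecasts for the next 5 years
print(forecast.conf_int().round(2))       # 95% forecast intervals
```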
25701 Antibacterial Evaluation, in Silico ADME and QSAR Studies of Some Benzimidazole Derivatives
Authors: Strahinja Kovačević, Lidija Jevrić, Miloš Kuzmanović, Sanja Podunavac-Kuzmanović
Abstract:
In this paper, various derivatives of benzimidazole were evaluated against the Gram-negative bacterium Escherichia coli. For all investigated compounds, the minimum inhibitory concentration (MIC) was determined. Quantitative structure-activity relationship (QSAR) analysis attempts to find consistent relationships between the variations in the values of molecular properties and the biological activity for a series of compounds, so that these rules can be used to evaluate new chemical entities. The correlation between the MIC and some absorption, distribution, metabolism and excretion (ADME) parameters was investigated, and mathematical models for predicting the antibacterial activity of this class of compounds were developed. The quality of the multiple linear regression (MLR) models was validated by the leave-one-out (LOO) technique, as well as by the calculation of the statistical parameters for the developed models, and the results are discussed on the basis of the statistical data. The results of this study indicate that the ADME parameters have a significant effect on the antibacterial activity of this class of compounds. Principal component analysis (PCA) and agglomerative hierarchical clustering algorithms (HCA) confirmed that the investigated molecules can be classified into groups on the basis of the ADME parameters: Madin-Darby canine kidney cell permeability (MDCK), plasma protein binding (PPB%), human intestinal absorption (HIA%) and human colon carcinoma cell permeability (Caco-2).
Keywords: benzimidazoles, QSAR, ADME, in silico
Procedia PDF Downloads 375
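A minimal MLR + leave-one-out validation sketch for a QSAR-style model: predict an activity value (e.g., log 1/MIC) from ADME descriptors. All data below are synthetic, and mapping the four descriptor columns to MDCK, PPB%, HIA%, and Caco-2 is purely illustrative.

```python
# Multiple linear regression with leave-one-out cross-validation (Q^2).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
X = rng.normal(size=(20, 4))                   # 20 compounds x 4 ADME descriptors
w = np.array([0.8, -0.5, 0.3, 0.6])
y = X @ w + rng.normal(0, 0.2, size=20)        # synthetic activity values

model = LinearRegression().fit(X, y)
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print(f"R^2 (fit): {model.score(X, y):.3f}   Q^2 (LOO): {r2_score(y, y_loo):.3f}")
```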
25700 Activity of Commonly Used Intravenous Nutrient and Bisolvon in Neonatal Intensive Care Units against Biofilm Cells and Their Synergetic Effect with Antibiotics
Authors: Marwa Fady Abozed, Hemat Abd El Latif, Fathy Serry, Lotfi El Sayed
Abstract:
The purpose of this study was to investigate the efficacy of the intravenous nutrients (Soluvit, Vitalipid, Aminoven Infant, Lipovenos) and Bisolvon commonly used in neonatal intensive care units against biofilm cells of Staphylococcus aureus, Staphylococcus epidermidis, Pseudomonas aeruginosa and Klebsiella pneumoniae, as these are the most commonly isolated organisms and are biofilm producers. The synergetic activity of Soluvit, heparin and Bisolvon with antibiotics, and its effect on the minimum biofilm eradication concentration (MBEC), was also tested. Intravenous nutrients and bromhexine are widely used in newborns. The numbers of viable cells released from biofilm after treatment with the intravenous nutrients and bromhexine were counted to compare their efficacy. The percentage reduction in biofilm regrowth when using Soluvit was 43-51% and 36-42% for Gram-positive and Gram-negative organisms, respectively; on adding Vitalipid, the percentages were 45-50% and 37-41%, respectively. When using Bisolvon, the percentages were 46-52% and 47-48%, respectively, and adding Lipovenos gave reduction percentages of 48-52% and 48-49%, respectively. With Aminoven Infant, the percentages were 10-15% and 9-11% for Gram-positive and Gram-negative organisms, respectively. Adding Soluvit, heparin or Bisolvon to antibiotics had a synergetic effect. Soluvit with ciprofloxacin gave an 8-16-fold decrease in the minimum biofilm eradication concentration (MBEC) compared with ciprofloxacin alone, while adding Soluvit to vancomycin reduced the MBEC 16-fold compared with vancomycin alone. In the case of combining Soluvit with cefotaxime, amikacin and gentamicin, the reductions in MBEC were 16-, 8- and 6-32-fold, respectively. The synergetic effect of adding heparin to ciprofloxacin, vancomycin, cefotaxime, amikacin and gentamicin was a 2-fold reduction with all of them, except that for Gram-negative organisms the range of reduction was 0-2-fold with both gentamicin and ciprofloxacin. Bisolvon exhibited a synergetic effect with ciprofloxacin, vancomycin, cefotaxime, amikacin and gentamicin, with 16-, 32-, 32-, 8-, 32-64- and 32-fold decreases in MBEC, respectively.
Keywords: biofilm, neonatal intensive care units, antibiofilm agents, intravenous nutrient
Procedia PDF Downloads 327
25699 Increasing the Apparent Time Resolution of Tc-99m Diethylenetriamine Pentaacetic Acid Galactosyl Human Serum Albumin Dynamic SPECT by Use of a 180-Degree Interpolation Method
Authors: Yasuyuki Takahashi, Maya Yamashita, Kyoko Saito
Abstract:
In general, dynamic SPECT data acquisition needs a few minutes for one rotation. Thus, the time-activity curve (TAC) derived from dynamic SPECT is relatively coarse. In order to effectively shorten the interval between data points, we adopted a 180-degree interpolation method. This method is already used for the reconstruction of X-ray CT data. In this study, we applied this 180-degree interpolation method to SPECT and investigated its effectiveness. To briefly describe the 180-degree interpolation method: the 180-degree data from the second half of one rotation are combined with the 180-degree data from the first half of the next rotation to generate a 360-degree data set appropriate for the time halfway between the two rotations. In both a phantom and a patient study, the data points from the interpolated images fell in good agreement with the data points tracking the accumulation of 99mTc activity over time for the appropriate regions of interest. We conclude that data derived from interpolated images improve the apparent time resolution of dynamic SPECT.
Keywords: dynamic SPECT, time resolution, 180-degree interpolation method, 99mTc-GSA
Procedia PDF Downloads 493
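A minimal sketch of the stitching step described above: the second half of rotation i is joined to the first half of rotation i+1, yielding an extra 360-degree projection set timed halfway between the two rotations. The array shapes are assumptions, and the interpolated set's views are in acquisition order (starting at 180 degrees), which a reconstruction step would account for.

```python
# Interleave original and half-rotation-shifted projection sets.
import numpy as np

def interleave_rotations(projections):
    """projections: (n_rotations, n_views, ...) with n_views covering 360 degrees."""
    n_rot, n_views = projections.shape[:2]
    half = n_views // 2
    extra = [np.concatenate([projections[i, half:],       # second half of rotation i
                             projections[i + 1, :half]])  # first half of rotation i+1
             for i in range(n_rot - 1)]
    out = []
    for i in range(n_rot - 1):                # merge in temporal order
        out += [projections[i], extra[i]]
    out.append(projections[-1])
    return np.stack(out)                      # (2*n_rot - 1, n_views, ...)

sinograms = np.random.rand(4, 120, 64)        # 4 rotations, 120 views, 64 detector bins
print(interleave_rotations(sinograms).shape)  # -> (7, 120, 64): time points nearly doubled
```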
25698 AI-Driven Solutions for Optimizing Master Data Management
Authors: Srinivas Vangari
Abstract:
In the era of big data, ensuring the accuracy, consistency and reliability of critical data assets is crucial for data-driven enterprises, and Master Data Management (MDM) plays a crucial role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle. By integrating AI into processes such as data creation, maintenance, enrichment and usage, organizations can achieve significant improvements in data quality and operational efficiency. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI’s predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance and more accurate, real-time decision-making. These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.
Keywords: artificial intelligence, master data management, data governance, data quality
Procedia PDF Downloads 17
25697 Biomass and Biogas Yield of Maize as Affected by Nitrogen Rates with Varying Harvesting under Semi-Arid Condition of Pakistan
Authors: Athar Mahmood, Asad Ali
Abstract:
Management considerations, including harvesting time and nitrogen application, considerably influence biomass yield, quality and biogas production. Therefore, a field study was conducted to determine the effect of various harvesting times and nitrogen rates on the biomass yield, quality and biogas yield of a maize crop. The experiment consisted of various harvesting times, i.e., harvesting at 45, 55 and 65 days after sowing (DAS), and nitrogen rates of 0, 100, 150 and 200 kg ha-1. The data indicated that the maximum plant height, leaf area, dry matter (DM) yield, protein content, acid detergent fiber, neutral detergent fiber, crude fiber content and biogas yield were recorded at 65 DAS, while the lowest values were recorded at 45 DAS. In contrast, significantly higher chlorophyll contents were observed at 45 DAS. In the case of nitrogen rates, the maximum plant height, leaf area, DM yield, protein content, ash content, acid detergent fiber, neutral detergent fiber, crude fiber content and chlorophyll content were determined at the nitrogen rate of 200 kg ha-1, while the minimum values were observed when no N was applied. Therefore, harvesting at 65 DAS and N application at the rate of 200 kg ha-1 can be suitable for obtaining higher biomass and biogas production.
Keywords: chemical composition, fiber contents, biogas, nitrogen, harvesting time
Procedia PDF Downloads 160
25696 Use of McCloskey/Mueller Satisfaction Scale in Evaluating Satisfaction with Working Conditions of Nurses in Slovakia
Authors: Vladimir Siska, Lukas Kober
Abstract:
Introduction: The research deals with the work satisfaction of nurses working in healthcare institutions in the Slovak Republic and the factors influencing it. Employers should create working conditions that are consonant with the requirements of their employees and make the most of motivation strategies that help them respond to employees' needs in concordance with various needs and motivation process theories. Methodology: In our research, we aimed to investigate the level of work satisfaction of nurses by carrying out a quantitative analysis using the standardized McCloskey/Mueller Satisfaction Scale questionnaire. We used descriptive positional characteristics (mean, median and variability: standard deviation, minimum and maximum) to process the collected data, and to verify our hypotheses, we employed the two-sample Student's t-test, the Mann-Whitney U test and a one-way analysis of variance (one-way ANOVA). Results: Nurses' satisfaction with external rewards is influenced by their age, years of experience and level of completed education, with all of the abovementioned factors also impacting the nurses' satisfaction with their work schedule. The type of founding authority of the healthcare institution also influences the nurses' satisfaction concerning relationships in the workplace. Conclusion: The feeling of work dissatisfaction can influence employees in many ways; for example, it can take the form of burnout syndrome, absenteeism or increased staff turnover. Therefore, it is important to pay increased attention to all employees of an organisation, regardless of their position.
Keywords: motivation, nurse, work satisfaction, McCloskey/Mueller satisfaction scale
Procedia PDF Downloads 129
25695 Rapid and Sensitive Detection: Biosensors as Innovative Analytical Tools
Authors: Sylwia Baluta, Joanna Cabaj, Karol Malecha
Abstract:
The evolution of biosensors has been driven by the need for faster and more versatile analytical methods for application in important areas, including clinical diagnostics, food analysis and environmental monitoring, with minimal sample pretreatment. Rapid and sensitive detection of neurotransmitters is extremely important in modern medicine. These compounds mainly occur in the brain and central nervous system of mammals. Any changes in neurotransmitter concentrations may lead to many diseases, such as Parkinson's disease or schizophrenia. Classical techniques of chemical analysis, despite many advantages, do not permit immediate results or the automatization of measurements.
Keywords: adrenaline, biosensor, dopamine, laccase, tyrosinase
Procedia PDF Downloads 142
25694 Genetic Data of Deceased People: Solving the Gordian Knot
Authors: Inigo de Miguel Beriain
Abstract:
Genetic data of deceased persons are of great interest for both biomedical research and clinical use, for several reasons. On the one hand, many of our diseases have a genetic component; on the other hand, we share genes with a good part of our biological family. Therefore, it would be possible to improve our response to these pathologies considerably if we could use these data. Unfortunately, at the present moment, the status of data on the deceased is far from being satisfactorily resolved by EU data protection regulation. Indeed, the General Data Protection Regulation has explicitly excluded these data from the category of personal data. This decision has given rise to a fragmented legal framework on this issue. Consequently, each EU member state offers very different solutions. For instance, Denmark considers the data as personal data of the deceased person for a set period of time, while others, such as Spain, do not consider these data as such but have introduced regulations specifically focused on this type of data and their access by relatives. This is an extremely dysfunctional scenario from multiple angles, not least of which is scientific cooperation at the EU level. This contribution attempts to outline a solution to this dilemma through an alternative proposal. Its main hypothesis is that, in reality, health data are, in a sense, a rara avis among data in general, because they do not refer to one person but to several. Hence, it is possible to think that all of them can be considered data subjects (although not all of them can exercise the corresponding rights in the same way). When the person from whom the data were obtained dies, the data remain the personal data of his or her biological relatives. Hence, the general regime provided for in the GDPR may apply to them. As these are personal data, we could go back to thinking in terms of a general prohibition on data processing, with the exceptions provided for in Article 9(2) and on the legal bases included in Article 6. This may be complicated in practice, given that, since we are dealing with data that refer to several data subjects, it may be complex to rely on some of these bases, such as consent. Furthermore, there are theoretical arguments that may oppose this hypothesis. This contribution shows, however, that none of these objections is of sufficient substance to delegitimize the argument exposed. Therefore, the conclusion of this contribution is that we can indeed build a general framework for the processing of the personal data of deceased persons in the context of the GDPR. This would constitute a considerable improvement over the current regulatory framework, although some clarifications will be necessary for its practical application.
Keywords: collective data conceptual issues, data from deceased people, genetic data protection issues, GDPR and deceased people
Procedia PDF Downloads 154