Search results for: image and telemetric data
24547 Research Analysis of Urban Area Expansion Based on Remote Sensing
Authors: Sheheryar Khan, Weidong Li, Fanqian Meng
Abstract:
The Urban Heat Island (UHI) effect is one of the foremost ecological and socioeconomic problems of urbanization. In this phenomenon, human-made urban areas replace the rural landscape with surfaces that increase thermal conductivity and urban warmth; as a result, the temperature in the city is higher than in the surrounding rural areas. To assess the evidence of this phenomenon in the Zhengzhou city area, a scientific method was followed to observe temperature variations in the urban area. Landsat 8 satellite images taken from 2013 to 2015 were used to calculate the Urban Heat Island (UHI) effect, along with NPP-VIIRS night-time remote sensing data, to better understand the center of the built-up area. To further support the evidence, the correlation between land surface temperature and the normalized difference vegetation index (NDVI) was calculated using the red band (Band 4) and near-infrared band (Band 5) of the Landsat 8 data. The mono-window algorithm was applied to retrieve the land surface temperature (LST) distribution from the Landsat 8 data, using Bands 10 and 11 to convert digital numbers to top-of-atmosphere (TOA) radiance and then to satellite brightness temperature. Along with the Landsat 8 data, the NPP-VIIRS night-light data were preprocessed to extract the research area. The Landsat 8 and NPP night-light results were then compared to locate the center of the built-up area of Zhengzhou city.
Keywords: built-up area, land surface temperature, mono-window algorithm, NDVI, remote sensing, threshold method, Zhengzhou
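As an illustration of the index used above, a minimal NumPy sketch of the NDVI computation from Landsat 8's red (Band 4) and near-infrared (Band 5) rasters follows; the file names and the use of rasterio for I/O are assumptions, not details from the study.

```python
import numpy as np
import rasterio  # assumed I/O library; any reader yielding arrays works

# Load the Landsat 8 red (Band 4) and near-infrared (Band 5) rasters.
with rasterio.open("LC08_B4.TIF") as src:  # file names are placeholders
    red = src.read(1).astype(np.float64)
with rasterio.open("LC08_B5.TIF") as src:
    nir = src.read(1).astype(np.float64)

# NDVI = (NIR - Red) / (NIR + Red); guard against division by zero.
ndvi = np.where(nir + red == 0, 0.0, (nir - red) / (nir + red))
print("NDVI range:", ndvi.min(), ndvi.max())  # values lie in [-1, 1]
```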
Procedia PDF Downloads 145
24546 A Comparative Study of the Athlete Health Records' Minimum Data Set in Selected Countries and Presenting a Model for Iran
Authors: Robab Abdolkhani, Farzin Halabchi, Reza Safdari, Goli Arji
Abstract:
Background and purpose: The quality of a health record depends on the quality of its content and proper documentation. A minimum data set provides a standard method for collecting key data elements that makes them easy to understand and enables comparison. The aim of this study was to determine the minimum data set for Iranian athletes' health records. Methods: This applied, descriptive-comparative study was carried out in 2013. Using internal and external documentation forms, a checklist was created that included the data elements of an athlete health record; it was then debated by experts in the fields of sports medicine and health information management using the Delphi method. Results: Of the 97 elements put up for discussion, 85 were agreed upon by more than 75 percent of the participants (as the main elements) and 12 by 50 to 75 percent of the participants (as the proposed elements). Across the 97 elements, there was no significant difference between the responses of the sports pathology and sports medicine alumni groups and those of the medical record, medical informatics, and information management professionals. Conclusion: A minimum data set for the Iranian athletes' health record is presented, with four information categories: demographic information, health history, assessment, and treatment plan. The proposed model is suitable for both manual and electronic medical records.
Keywords: documentation, health record, minimum data set, sports medicine
Procedia PDF Downloads 487
24545 Atmospheric Punctuation and Ludic Presence in Ingmar Bergman’s The Seventh Seal
Authors: Bo Kampmann Walther
Abstract:
Drawing from key concepts in ludology, the essay examines the chess game as a literal and metaphorical element that bridges the narrative’s existential weight with the ludic dynamics of gameplay. Unlike traditional readings that interpret the game as a symbolic duel between humanity and mortality, this essay highlights its structural role as a rhythmic motif punctuating the film. The recurring presence of the chessboard provides a temporal and spatial framework that organizes the narrative’s episodic structure, setting the stage for dialogues, drama, and action. The key hypothesis advanced here is that the chess game functions as an atmospheric punctuation rather than a mere representation of existential struggle. It operates as the underlying fabric of the narrative, structuring the characters’ interactions, decisions, and reflections. The essay argues that the game’s ludic nature destabilizes the narrative hierarchy, with gameplay acting as a central mediating force that organizes and reframes the film’s philosophical themes. Through close analysis of key scenes, the essay explores how the chess game aligns with Bergman’s cinematic language to foreground a 'ludified' narrative. In particular, the essay identifies moments where gameplay drives the plot forward, such as the Knight’s strategic use of the game to delay Death and save the lives of Jof and Mia. These scenes illustrate how gameplay becomes a generative force, not only enabling narrative progression but also embedding ludic logic into the film’s structure. The essay also situates The Seventh Seal within a broader theoretical discourse, drawing on classics like Caillois and Huizinga to contextualize the game of chess as a ritualistic and agonistic act. It incorporates Derrida’s concept of différance to explore how the game defers resolution and creates a space of interpretive ambiguity, aligning with the film’s existential themes, and it engages with Deleuze’s notion of the "time-image," arguing that the chess game operates as a temporal motif that disrupts linearity and invites reflection on the passage of time. This novel interpretation positions The Seventh Seal as a work that transcends its existential narrative to become a cinematic "gameboard," where the act of play mediates meaning and structure. In conclusion, by foregrounding the ludic dimensions of the chess game, the essay opens up new avenues for understanding Bergman’s masterpiece as a film that is not only about playing but is, in itself, a 'play'.
Keywords: ludic narrative, narratology, time-image, rhythm, atmospheric punctuation
Procedia PDF Downloads 7
24544 Rapid Soil Classification Using Computer Vision, Electrical Resistivity and Soil Strength
Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, Lionel L. J. Ang, Algernon C. S. Hong, Danette S. E. Tan, Grace H. B. Foo, K. Q. Hong, L. M. Cheng, M. L. Leong
Abstract:
This paper presents a novel rapid soil classification technique that combines computer vision with the four-probe soil electrical resistivity method and the cone penetration test (CPT) to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from local construction projects are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, such that proper treatment and usage can be exercised. However, this process is time-consuming and labour-intensive. Thus, a rapid classification method is needed at the SGs. Computer vision, four-probe soil electrical resistivity and CPT were combined into an innovative, non-destructive, and instantaneous classification method for this purpose. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). Complementing the computer vision technique, the apparent electrical resistivity of the soil (ρ) is measured using a set of four probes arranged in Wenner’s array. A previous study found that the ANN model coupled with ρ can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the soil strength is measured using a modified mini cone penetrometer, and w is measured using a set of time-domain reflectometry (TDR) probes. A laboratory proof-of-concept was conducted through a series of seven tests with three types of soils: “Good Earth”, “Soft Clay” and an even mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay”. It is also found that these parameters can be integrated with the computer vision technique on-site to complete the rapid soil classification in less than three minutes.
Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification
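A minimal sketch of the GLCM-plus-ANN stage described above, using scikit-image and scikit-learn; the synthetic patches, feature choices, and network size are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(gray_img):
    # Texture descriptors from the grey level co-occurrence matrix.
    glcm = graycomatrix(gray_img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
# Synthetic 8-bit stand-ins: coarse-textured vs. nearly uniform patches.
coarse = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
smooth = [(120 + rng.integers(0, 8, (64, 64))).astype(np.uint8)
          for _ in range(20)]

X = np.array([glcm_features(img) for img in coarse + smooth])
y = [0] * 20 + [1] * 20  # 0 = "Good Earth", 1 = "Soft Clay" (stand-ins)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
print("training accuracy:", ann.score(X, y))
```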
Procedia PDF Downloads 222
24543 Data Collection in Protected Agriculture for Subsequent Big Data Analysis: Methodological Evaluation in Venezuela
Authors: Maria Antonieta Erna Castillo Holly
Abstract:
During the last decade, data analysis, strategic decision making, and the use of artificial intelligence (AI) tools in Latin American agriculture have been a challenge. In some countries, the availability, quality, and reliability of historical data, in addition to the current data recording methodology in the field, make it difficult to use information systems, complete data analysis, and rely on them for making the right strategic decisions. This is essential in Agriculture 4.0, where the increase in global demand for fresh agricultural products of tropical origin throughout the year requires a change in the production model and greater agility in responding to consumer market demands for quality, quantity, traceability, and sustainability (that is, extensive data). Having quality information available and updated in real time on what, how much, how, when, where, and at what cost, and on compliance with production quality standards, represents the greatest challenge for sustainable and profitable agriculture in the region. The objective of this work is to present a methodological proposal for the collection of georeferenced data from the protected agriculture sector, specifically in production units (UP) with tall structures (greenhouses), initially for Venezuela, taking the state of Mérida as the geographical framework and horticultural products as the target crops. The document presents some background information and explains the methodology and tools used in the three phases of the work: diagnosis, data collection, and analysis. As a result, an evaluation of the process is carried out, relevant data and dashboards are displayed, and the first satellite maps integrated with layers of information in a geographic information system are presented. Finally, some improvement proposals and tentatively recommended applications are added to the process, with the aim of providing better-qualified and traceable georeferenced data for subsequent analysis and more agile and accurate strategic decision making. One of the main points of this study is the lack of quality data treatment in Latin America, and especially in the Caribbean basin, a key question being how to manage the lack of complete official data. The methodology has been tested with horticultural products, but it can be extended to other tropical crops.
Keywords: greenhouses, protected agriculture, data analysis, geographic information systems, Venezuela
Procedia PDF Downloads 136
24542 The Influence of Green Supply Chain Management Practices' Implementation on Organizational Performance: An Empirical Case Study in Spain
Authors: Keivan Amirbagheri, Ana Nuñez-Carballosa, Laura Guitart-Tarrés
Abstract:
Over the last couple of decades, enterprises have begun to accept the need for environmental management and have started to implement environmental management programs to compete in the markets. The implementation of green supply chain management (GSCM) practices can provide valuable opportunities to improve firm performance. Prior investigations report a rising number of published papers in the field of green supply chain management practices, showing a high level of interest in this area. However, a gap remains in studying the relationship of GSCM to organizational performance (OP). The purpose of this research is therefore to study the practices related to green supply chain management that influence the results of the company, understood as organizational performance. Building on our previous work, we collected these GSCM practices (planning, operational, and communication practices) and classified them through literature reviews to analyze their effects on the OP factors (the balanced scorecard perspectives). To do so, we designed a case study methodology using semi-structured interviews and secondary data from several well-known multinational companies based in Spain. The cases were selected with the criterion of trying to cover members of the entire supply chain, to obtain as global a vision as possible. The results show a considerable influence of green supply chain management practices on the organizational performance of the companies studied. In addition, they indicate that the implementation of green supply chain management practices, especially from a long-term perspective, can be economically justified. From the personnel's point of view, employees feel better about being members of this type of company, structured around environmental issues. Also, for these companies, the image created by the implementation of these practices helps to facilitate their marketing programs.
Keywords: green supply chain management, organizational performance, case study, Spain
Procedia PDF Downloads 193
24541 Reliable Consensus Problem for Multi-Agent Systems with Sampled-Data
Authors: S. H. Lee, M. J. Park, O. M. Kwon
Abstract:
In this paper, the reliable consensus of multi-agent systems with sampled data is investigated. Using a suitable Lyapunov-Krasovskii functional and techniques such as the Wirtinger inequality, Schur complement, and Kronecker product, results for these systems are obtained by solving a set of linear matrix inequalities (LMIs). One numerical example is included to show the effectiveness of the proposed criteria.
Keywords: multi-agent, linear matrix inequalities (LMIs), Kronecker product, sampled-data, Lyapunov method
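A drastically simplified sketch of how such an LMI feasibility problem can be posed numerically, assuming cvxpy as the solver front end. It uses a Kronecker product to build the network-level matrix, but replaces the paper's sampled-data Lyapunov-Krasovskii criterion with a plain continuous-time Lyapunov inequality on a pinned (leader-follower) consensus network, so it only illustrates the mechanics, not the paper's actual conditions.

```python
import numpy as np
import cvxpy as cp

# Laplacian of a 4-agent ring graph; the Kronecker product lifts it to
# the network level, as in LMI-based consensus formulations.
L = np.array([[ 2, -1,  0, -1],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)
D = np.diag([1.0, 0.0, 0.0, 0.0])  # pinning term so the matrix is Hurwitz
A = -np.kron(L + D, np.eye(2))     # two states per agent
n = A.shape[0]

# LMI feasibility: find P > 0 with A'P + PA < 0 (a plain Lyapunov
# inequality standing in for the sampled-data criterion).
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)
```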
Procedia PDF Downloads 531
24540 Materialized View Effect on Query Performance
Authors: Yusuf Ziya Ayık, Ferhat Kahveci
Abstract:
Currently, database management systems provide various tools, such as backup and maintenance, and also provide statistical information such as resource usage and security. In terms of query performance, this paper covers query optimization, views, indexed tables, pre-computed materialized views, and query performance analysis, in which query plan alternatives can be created and the least costly one selected to optimize a query. Indexes and views can be created for related table columns. The literature review of this study showed that, over time, despite the growing capabilities of database management systems, only database administrators are aware of the need to deal with archival and transactional data types differently. These data may be constantly changing operational data used in everyday life, or they may be archival data, such as completed questionnaires whose data input is finished. For both types of data, the database uses the same capabilities; but as shown in the findings section, instead of repeating similar heavy calculations that produce the same results for the same query over survey results, using materialized view results is a simpler way. In this study, this performance difference was observed quantitatively by considering the cost of the query.
Keywords: cost of query, database management systems, materialized view, query performance
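A minimal sketch of the materialized-view idea on PostgreSQL via psycopg2; the connection string, table, and column names are hypothetical placeholders.

```python
import psycopg2  # assumed PostgreSQL driver; schema below is hypothetical

conn = psycopg2.connect("dbname=survey user=analyst")
cur = conn.cursor()

# Precompute the heavy aggregation once instead of re-running it per query.
cur.execute("""
    CREATE MATERIALIZED VIEW IF NOT EXISTS answer_stats AS
    SELECT question_id,
           COUNT(*)   AS n_answers,
           AVG(score) AS mean_score
    FROM survey_answers
    GROUP BY question_id;
""")

# Subsequent reads hit the stored result, not the base table.
cur.execute("SELECT * FROM answer_stats WHERE mean_score > 3;")
print(cur.fetchall())

# Archival data changes rarely, so an occasional refresh suffices.
cur.execute("REFRESH MATERIALIZED VIEW answer_stats;")
conn.commit()
```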
Procedia PDF Downloads 282
24539 An AK-Chart for the Non-Normal Data
Authors: Chia-Hau Liu, Tai-Yue Wang
Abstract:
Traditional multivariate control charts assume that measurements from manufacturing processes follow a multivariate normal distribution. However, this assumption may not hold, or may be difficult to verify, because not all measurements from manufacturing processes are normally distributed in practice. This study develops a new multivariate control chart for monitoring processes with non-normal data. We propose a mechanism based on integrating a one-class classification method with an adaptive technique. The adaptive technique is used to improve the sensitivity of one-class classification to small shifts in statistical process control. In addition, this design provides an easy way to allocate the value of the type I error, making it easier to implement. Finally, a simulation study and real data from industry are used to demonstrate the effectiveness of the proposed control chart.
Keywords: multivariate control chart, statistical process control, one-class classification method, non-normal data
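A minimal sketch of using a one-class classifier as a monitoring statistic for non-normal data, with scikit-learn's OneClassSVM standing in for the paper's adaptive method; the lognormal data and parameter choices are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Phase I: in-control training data from a skewed (non-normal) process.
X_train = rng.lognormal(mean=0.0, sigma=0.4, size=(500, 3))

# One-class classifier as the monitoring statistic; nu bounds the
# training false-alarm rate (an analogue of allocating the type I error).
monitor = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(X_train)

# Phase II: new observations, the second batch with a small mean shift.
X_new = np.vstack([rng.lognormal(0.0, 0.4, (50, 3)),
                   rng.lognormal(0.35, 0.4, (50, 3))])
signals = monitor.predict(X_new)  # -1 flags an out-of-control point
print("alarms:", int((signals == -1).sum()))
```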
Procedia PDF Downloads 426
24538 Text Mining of Veterinary Forums for Epidemiological Surveillance Supplementation
Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves
Abstract:
Web scraping and text mining are popular computer science methods deployed by public health researchers to augment traditional epidemiological surveillance. However, within veterinary disease surveillance, such techniques are still in the early stages of development and have not yet been fully utilised. This study presents an exploration of the utility of incorporating internet-based data to better understand the smallholder farming communities within Scotland, using online text extraction and the subsequent mining of this data. Web scraping of the livestock fora was conducted in conjunction with text mining of the data in search of common themes, words, and topics found within the text. Results from bi-grams and topic modelling uncover four main topics of interest within the data pertaining to aspects of livestock husbandry: feeding, breeding, slaughter, and disposal. These topics were found amongst both the poultry and pig sub-forums. Topic modelling appears to be a useful method of unsupervised classification for this form of data, as it produced clusters that relate to biosecurity and animal welfare. Internet data can be a very effective tool in aiding traditional veterinary surveillance methods, but the requirement for human validation of such data is crucial. This opens avenues of research via the incorporation of other dynamic social media data, namely Twitter and Facebook/Meta, in addition to time series analysis to highlight temporal patterns.
Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, smallholding, social media, web scraping, sentiment analysis, geolocation, text mining, NLP
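A minimal sketch of the bi-gram extraction and topic-modelling step using scikit-learn; the four example posts and the parameter choices are placeholders, not the scraped corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical forum posts standing in for the mined corpus.
posts = [
    "best feed mix for laying hens in winter",
    "home slaughter rules and carcass disposal for pigs",
    "breeding sows for the first time any advice",
    "disposal of fallen stock and biosecurity on smallholdings",
]

# Bi-gram counts, as in the study's bi-gram analysis.
vec = CountVectorizer(ngram_range=(2, 2), stop_words="english")
X = vec.fit_transform(posts)

# Unsupervised topic model; four components mirror the four themes found.
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```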
Procedia PDF Downloads 105
24537 Panel Application for Determining Impact of Real Exchange Rate and Security on Tourism Revenues: Countries with Middle and High Level Tourism Income
Authors: M. Koray Cetin, Mehmet Mert
Abstract:
The purpose of the study is to examine the impact of the exchange rate and a country's overall security level on tourism revenues. Numerous studies examine the bidirectional relation between macroeconomic factors and tourism revenues and tourism demand. Most of these studies support the existence of an impact of tourism revenues on the growth rate, but not vice versa. Few studies examine the impact of factors such as the real exchange rate or purchasing power parity on tourism revenues. In this context, the first aim is to examine the impact of the real exchange rate on tourism revenues, because the exchange rate is one of the main determinants of the price of international tourism services in the guest's currency. Another determinant of tourism demand for a country is the country's overall security level. This issue can be handled in the context of the relationship between tourism revenues and overall security, including turmoil, terrorism, border problems, and political violence. In this study, these factors are examined for several countries that have tourism revenues above a certain level. This structure yields panel data, which are evaluated with panel data analysis techniques. Panel data have at least two dimensions, one of which is time. The panel data analysis techniques are applied to data gathered from the World Bank data web page. The study is expected to find impacts of the real exchange rate and security factors on tourism revenues for countries with noteworthy tourism revenues.
Keywords: exchange rate, panel data analysis, security, tourism revenues
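A minimal sketch of one common way to fit such a model, a fixed-effects panel regression using the linearmodels package on synthetic data; the variable names and the package choice are assumptions, not the study's exact specification.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS  # assumed panel-econometrics package

rng = np.random.default_rng(2)
countries, years = ["A", "B", "C", "D"], range(2000, 2016)
idx = pd.MultiIndex.from_product([countries, years],
                                 names=["country", "year"])
df = pd.DataFrame({
    "real_exchange_rate": rng.normal(1.0, 0.2, len(idx)),
    "security_index": rng.uniform(0, 10, len(idx)),
}, index=idx)
df["tourism_revenue"] = (5 - 2 * df["real_exchange_rate"]
                         + 0.5 * df["security_index"]
                         + rng.normal(0, 0.5, len(idx)))

# Fixed-effects (within) estimator: country effects absorb
# time-invariant heterogeneity across countries.
res = PanelOLS.from_formula(
    "tourism_revenue ~ 1 + real_exchange_rate + security_index"
    " + EntityEffects", data=df).fit()
print(res.params)
```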
Procedia PDF Downloads 356
24536 The Effect of General Data Protection Regulation on South Asian Data Protection Laws
Authors: Sumedha Ganjoo, Santosh Goswami
Abstract:
The rising reliance on technology places national security at the forefront of 21st-century issues. It complicates the efforts of emerging and developed countries to combat cyber threats and increases the inherent risk factors connected with technology. The inability to preserve data securely could have devastating repercussions on a massive scale. Consequently, it is vital to establish national, regional, and global data protection rules and regulations that penalise individuals who participate in immoral technology usage and exploit the inherent vulnerabilities of technology. This paper seeks to analyse GDPR-inspired bills in the South Asian region and determine their suitability for the development of a worldwide data protection framework, considering that Asian countries are much more diversified than European ones. In light of this context, the objectives of this paper are to identify GDPR-inspired bills in the South Asian region, to identify their similarities and differences, and to examine the obstacles to developing a regional-level data protection mechanism, thereby addressing the need to develop a global-level mechanism. Given the qualitative character of this study, the researcher conducted a comprehensive literature review of prior research papers, journal articles, survey reports, and government publications on the aforementioned topics. Taking the survey results into consideration, the researcher then conducted a critical analysis of the significant parameters highlighted in the literature. According to the primary results of this study, many nations in the region are in the process of revising their present data protection measures in line with the GDPR. Consideration is given to the data protection laws of Thailand, Malaysia, China, and Japan. Significant parallels and differences in comparison to the GDPR are discussed in detail. The conclusion of the research analyses the development of various data protection legislation regimes in South Asia.
Keywords: data privacy, GDPR, Asia, data protection laws
Procedia PDF Downloads 85
24535 A Web Service Based Sensor Data Management System
Authors: Rose A. Yemson, Ping Jiang, Oyedeji L. Inumoh
Abstract:
The deployment of wireless sensor networks has rapidly increased; however, the increased capacity and diversity of sensors, with applications ranging from biological and environmental to military, generate tremendous volumes of data, and more attention is placed on distributed sensing than on how to manage, analyze, retrieve, and understand the data generated. This makes it quite difficult to process live sensor data and to run concurrent control and updates, because sensor data are heavyweight, complex, and slow to handle. This work focuses on developing a web service platform for the automatic detection of sensors, acquisition of sensor data, storage of sensor data in a database, and processing of sensor data using reconfigurable software components. It also creates a web service based sensor data management system to monitor the physical movement of an individual wearing wireless network sensor technology (SunSPOT). The sensor detects the individual's movement by sensing the acceleration along the X, Y, and Z axes and sends the sensed readings to a database interfaced with an internet platform. The collected data determine the posture of the person, such as standing, sitting, or lying down. The system is designed using the Unified Modeling Language (UML) and implemented using Java, JavaScript, HTML, and MySQL. This system allows close real-time monitoring of an individual and provides their physical activity details without requiring physical presence for in-situ measurement, which enables remote work instead of time-consuming in-person checks. These details can help in evaluating an individual's physical activity and generating feedback on medication. They can also help in keeping track of any mandatory physical activities the individual is required to perform. These evaluations and feedback can help in maintaining a better health status for the individual and in providing improved health care.
Keywords: HTML, Java, JavaScript, MySQL, SunSPOT, UML, web-based, wireless network sensor
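A minimal sketch of the threshold-style posture logic the abstract describes; the axis conventions and tolerance are illustrative assumptions, not the system's actual calibration.

```python
from dataclasses import dataclass

G = 9.81  # gravity, m/s^2

@dataclass
class SpotReading:
    """One SunSPOT accelerometer sample (axis conventions assumed)."""
    ax: float
    ay: float
    az: float

def classify_posture(r: SpotReading, tol: float = 2.5) -> str:
    # Gravity dominates a static reading; the axis it falls on
    # indicates body orientation (thresholds are illustrative).
    if abs(r.ay - G) < tol:
        return "standing"
    if abs(r.az - G) < tol:
        return "lying down"
    return "sitting"

print(classify_posture(SpotReading(0.3, 9.6, 0.8)))  # standing
print(classify_posture(SpotReading(0.2, 1.1, 9.7)))  # lying down
```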
Procedia PDF Downloads 214
24534 Unlocking Health Insights: Studying Data for Better Care
Authors: Valentina Marutyan
Abstract:
Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of and approach to providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. It enables doctors to intervene early to prevent problems or improve results for patients, and it assists in early disease detection and customized treatment planning for every person. Doctors can tailor a patient's care by looking at their medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, for example by helping them determine the number of beds or doctors they require given the number of patients they expect. In this project, models such as logistic regression, random forests, and neural networks were used for predicting diseases and analyzing medical images. Patients were grouped using algorithms such as k-means, and connections between treatments and patient responses were identified by association rule mining. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment. Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and work more efficiently; it comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
Keywords: data mining, healthcare, big data, large amounts of data
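A minimal sketch of the disease-prediction step using one of the models named above (a random forest), with a public diagnostic dataset standing in for patient records.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Public diagnostic dataset as a stand-in for electronic health records.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```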
Procedia PDF Downloads 83
24533 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features
Authors: Bushra Zafar, Usman Qamar
Abstract:
Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for collecting knowledge from a variety of databases; they provide supervised learning in the form of classification to design models that describe vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application data sets. The inherent nature of a data set greatly masks its quality analysis and leaves quite few practical approaches to use. To our knowledge, we present for the first time a new approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. Feature selection is a key pre-processing step in machine learning that allows us to select a few features from a larger number as a subset, reducing the space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features that may yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is used, with an external classifier for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. A sample dataset is used to demonstrate the proposed idea. The proposed method improved the average accuracy on different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.
Keywords: data mining, genetic algorithm, KNN algorithms, wrapper based feature selection
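A minimal sketch of a wrapper-based genetic-algorithm feature selector with a KNN classifier as the evaluator; this is a generic GA (truncation selection, one-point crossover), not the paper's occurrence-counting heuristic, and the dataset and parameters are illustrative.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_feat = X.shape[1]

def fitness(mask):
    # Wrapper evaluation: cross-validated KNN accuracy on the subset.
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Tiny genetic algorithm over feature bitmasks.
pop = rng.random((20, n_feat)) < 0.5
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]   # truncation selection
    kids = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_feat)              # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_feat) < 0.02           # mutation
        kids.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", int(best.sum()), "accuracy:", fitness(best))
```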
Procedia PDF Downloads 320
24532 Improve Student Performance Prediction Using Majority Vote Ensemble Model for Higher Education
Authors: Wade Ghribi, Abdelmoty M. Ahmed, Ahmed Said Badawy, Belgacem Bouallegue
Abstract:
In higher education institutions, the most pressing priority is to improve student performance and retention. Large volumes of student data are used in educational data mining techniques to find new hidden information in students' learning behavior, particularly to uncover early symptoms of at-risk pupils. On the other hand, data with noise, outliers, and irrelevant information may lead to incorrect conclusions. By identifying features of students' data that have the potential to improve performance prediction results, comparing and identifying the most appropriate ensemble learning technique after preprocessing the data, and optimizing the hyperparameters, this paper aims to develop a reliable student performance prediction model for higher education institutions. Data were gathered from two different systems: a student information system and an e-learning system for undergraduate students in the College of Computer Science of a Saudi Arabian state university. The cases of 4413 students were used in this article. The process includes data collection, data integration, data preprocessing (such as cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random Forest, Bagging, Stacking, Majority Vote, and two types of boosting techniques, AdaBoost and XGBoost, are the ensemble learning approaches, whereas Decision Tree, Support Vector Machine, and Artificial Neural Network are the supervised learning techniques. Hyperparameters of the ensemble learning systems were fine-tuned to provide enhanced performance and optimal output. The findings imply that combining features of students' behavior from the e-learning and student information systems using Majority Vote produced better outcomes than the other ensemble techniques.
Keywords: educational data mining, student performance prediction, e-learning, classification, ensemble learning, higher education
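A minimal sketch of a majority-vote (hard-voting) ensemble over the base learners named above, using scikit-learn on synthetic data; no hyperparameter tuning is shown.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the merged e-learning + student-information features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hard voting = majority vote over the base learners.
ensemble = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(random_state=0)),
                ("svm", SVC(random_state=0)),
                ("ann", MLPClassifier(max_iter=1000, random_state=0)),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="hard")
print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```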
Procedia PDF Downloads 112
24531 Foundation of the Information Model for Connected-Cars
Authors: Hae-Won Seo, Yong-Gu Lee
Abstract:
Recent progress in the next generation of automobile technology is geared towards incorporating information technology into cars. Collectively called smart cars, these vehicles bring intelligence to cars, providing comfort, convenience, and safety. One branch of smart cars is the connected-car system. The key concept in connected cars is the sharing of driving information among cars in a decentralized manner, enabling collective intelligence. This paper proposes a foundation of the information model necessary to define driving information for smart cars. Road conditions are modeled through a unique data structure that unambiguously represents the time-variant traffic in the streets. Additionally, the modeled data structure is exemplified in a navigational scenario, with its usage described in UML. Optimal driving route searching is also discussed, using the proposed data structure under dynamically changing road conditions.
Keywords: connected-car, data modeling, route planning, navigation system
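A minimal sketch of one way such a time-variant road-condition structure and route search could look; the hour-indexed travel times and the time-dependent Dijkstra search are illustrative assumptions, not the paper's UML model.

```python
import heapq
from dataclasses import dataclass

@dataclass
class RoadSegment:
    """One street segment with time-variant travel times (minutes),
    indexed by hour of day."""
    src: str
    dst: str
    travel_time: dict  # hour -> minutes

def best_route(segments, start, goal, depart_hour):
    graph = {}
    for s in segments:
        graph.setdefault(s.src, []).append(s)
    # Time-dependent Dijkstra: edge cost depends on the arrival hour.
    pq, seen = [(0.0, start)], set()
    while pq:
        cost, node = heapq.heappop(pq)
        if node == goal:
            return cost
        if node in seen:
            continue
        seen.add(node)
        hour = (depart_hour + int(cost) // 60) % 24
        for seg in graph.get(node, []):
            heapq.heappush(pq, (cost + seg.travel_time[hour], seg.dst))
    return float("inf")

roads = [RoadSegment("A", "B", {h: 10 if 7 <= h <= 9 else 4
                                for h in range(24)}),
         RoadSegment("A", "C", {h: 6 for h in range(24)}),
         RoadSegment("C", "B", {h: 3 for h in range(24)})]
print(best_route(roads, "A", "B", depart_hour=8))  # rush hour: via C, 9 min
```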
Procedia PDF Downloads 378
24530 Combined Analysis of Land Use Change and Natural Flow Path in Flood Analysis
Authors: Nowbuth Manta Devi, Rasmally Mohammed Hussein
Abstract:
Flooding is one of the most devastating climate impacts that many countries are facing. Many different causes have been associated with the increasing intensity of floods recorded over time: unplanned development, the low carrying capacity of drains, clogged drains, construction in flood plains, and the increasing intensity of rainfall events. While a combination of these causes can certainly aggravate flood conditions, in many cases increasing drainage capacity has not reduced flood risk to the level expected. The present study analyzed the extent to which land use contributes to aggravating the impacts of flooding in a city. Satellite images were analyzed over a period of 20 years at intervals of 5 years. Both unsupervised and supervised classification methods were used with the image processing module of ArcGIS. The unsupervised classification results were first compared to the basemap available in ArcGIS to get an initial overview; these results also guided on-site data collection for the supervised classification. The island of Mauritius is small, and there are large variations in land use over small areas, both within the built areas and in agricultural zones involving food crops. Larger plots of agricultural land under sugar cane plantations are relatively easier to identify. However, the growth stage and health of plants vary, and this had to be verified during ground truthing. The results show that although there were changes in land use over the span of 20 years, as expected, these were not significant enough to cause a major increase in flood risk levels. A digital elevation model was analyzed for further understanding. It was noted that, over time, development had tampered with natural flow paths in addition to increasing the impermeable areas. This situation results in backwater flows, hence increasing flood risks.
Keywords: climate change, flood, natural flow paths, small islands
Procedia PDF Downloads 18
24529 Localized and Time-Resolved Velocity Measurements of Pulsatile Flow in a Rectangular Channel
Authors: R. Blythman, N. Jeffers, T. Persoons, D. B. Murray
Abstract:
The exploitation of flow pulsation in micro- and mini-channels is a potentially useful technique for enhancing cooling of high-end photonics and electronics systems. It is thought that pulsation alters the thickness of the hydrodynamic and thermal boundary layers, and hence affects the overall thermal resistance of the heat sink. Although the fluid mechanics and heat transfer are inextricably linked, it can be useful to decouple the parameters to better understand the mechanisms underlying any heat transfer enhancement. Using two-dimensional, two-component particle image velocimetry, the current work intends to characterize the heat transfer mechanisms in pulsating flow with a mean Reynolds number of 48 by experimentally quantifying the hydrodynamics of a generic liquid-cooled channel geometry. Flows circulated through the test section by a gear pump are modulated using a controller to achieve sinusoidal flow pulsations with Womersley numbers of 7.45 and 2.36 and an amplitude ratio of 0.75. It is found that the transient characteristics of the measured velocity profiles are dependent on the speed of oscillation, in accordance with the analytical solution for flow in a rectangular channel. A large velocity overshoot is observed close to the wall at high frequencies, resulting from the interaction of near-wall viscous stresses and inertial effects of the main fluid body. The steep velocity gradients at the wall are indicative of augmented heat transfer, although the local flow reversal may reduce the upstream temperature difference in heat transfer applications. While unsteady effects remain evident at the lower frequency, the annular effect subsides and retreats from the wall. The shear rate at the wall is increased during the accelerating half-cycle and decreased during deceleration compared to steady flow, suggesting that the flow may experience both enhanced and diminished heat transfer during a single period. Hence, the thickness of the hydrodynamic boundary layer is reduced for positively moving flow during one half of the pulsation cycle at the investigated frequencies. It is expected that the size of the thermal boundary layer is similarly reduced during the cycle, leading to intervals of heat transfer enhancement.
Keywords: heat transfer enhancement, particle image velocimetry, localized and time-resolved velocity, photonics and electronics cooling, pulsating flow, Richardson’s annular effect
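For reference, the Womersley number is α = L√(ω/ν); a short sketch inverting it for the two reported values, under an assumed channel half-height and water properties (the actual rig dimensions are not restated in the abstract).

```python
import numpy as np

h = 1.0e-3   # channel half-height, m (assumed mini-channel scale)
nu = 1.0e-6  # kinematic viscosity of water, m^2/s

def womersley(freq_hz, length_scale=h, kinematic_visc=nu):
    # alpha = L * sqrt(omega / nu), with omega = 2*pi*f
    omega = 2.0 * np.pi * freq_hz
    return length_scale * np.sqrt(omega / kinematic_visc)

# Frequencies that would give the reported alpha = 7.45 and 2.36
# for the assumed geometry (a self-consistency check, not rig data).
for alpha in (7.45, 2.36):
    f = (alpha / h) ** 2 * nu / (2.0 * np.pi)
    print(f"alpha {alpha}: f ~ {f:.1f} Hz, check {womersley(f):.2f}")
```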
Procedia PDF Downloads 350
24528 Appearance-Based Discrimination in a Workplace: An Emerging Problem for Labor Law Relationships
Authors: Irmina Miernicka
Abstract:
Nowadays, dress codes and appearance in the broad sense are becoming more important in the workplace. They are often used to standardize the image of an employer, to communicate a corporate image, and to ensure that customers can easily identify it. It is also a way to build the employer's professionalism. Additionally, in many cases, an employer will introduce a dress code for health and safety reasons. Employers increasingly oblige employees to follow certain rules concerning their clothing, grooming, make-up, body art, or even weight. An important research problem is to find the limits of the employer's interference with the external appearance of employees. These limits are primarily determined by the employer's two main obligations, i.e., the obligation to respect the employee's personal rights and the principle of equal treatment and non-discrimination in employment. It should also be remembered that the limits of the employer's interference will be different when certain rules concerning the employee's appearance result directly from the provisions of laws and other acts of universally binding law (workwear, official clothing, and uniforms). The analysis of this issue was based on literature and jurisprudence, both domestic and foreign, including U.S. and European case law, and led the author to put forward the thesis that there are four main principles which will protect the employer from an allegation of discrimination. First, the principle of adequacy: the requirements regarding dress code must be appropriate to the position and type of work performed by the employee. Second, in accordance with the purpose limitation principle, an employer may introduce certain requirements regarding the appearance of employees if there is a legitimate, objective justification for this (such as work safety or the type of work performed), not dictated by the employer's subjective feelings and preferences. Third, these requirements must not place an excessive burden on workers or be disproportionate in relation to the employer's objective (the principle of proportionality). Fourth, the employer should ensure that the requirements imposed in the workplace are equally burdensome for, and enforceable against, all groups of employees. Otherwise, it may expose itself to allegations of discrimination based on sex or age. At the same time, it is possible to differentiate the situation of some employees if the differences are small and reflect established habits and traditions, and if employees are obliged to maintain the same level of professionalism in their positions. Although this subject may seem insignificant, the frequent application of dress codes and the increasing awareness of both employees and employers indicate that its legal aspects need to be thoroughly analyzed. Many legal cases brought before U.S. and European courts show that employees look for legal protection when they consider that their rights have been violated by a dress code introduced in the workplace.
Keywords: labor law, the appearance of an employee, discrimination in the workplace, dress code in a workplace
Procedia PDF Downloads 128
24527 Automated Multisensory Data Collection System for Continuous Monitoring of Refrigerating Appliances Recycling Plants
Authors: Georgii Emelianov, Mikhail Polikarpov, Fabian Hübner, Jochen Deuse, Jochen Schiemann
Abstract:
Recycling refrigerating appliances plays a major role in protecting the Earth's atmosphere from ozone depletion and emissions of greenhouse gases. The performance of refrigerator recycling plants in terms of material retention is the subject of strict environmental certifications and is reviewed periodically through specialized audits. The continuous collection of refrigerator data required for the input-output analysis is still mostly manual, error-prone, and not digitalized. In this paper, we propose an automated data collection system for recycling plants in order to deduce the expected material contents of individual end-of-life refrigerating appliances. The system utilizes laser scanner measurements and optical data to extract attributes of individual refrigerators by applying transfer learning with pre-trained vision models and optical character recognition. Based on the recognized features, the system automatically provides material categories and target values of the contained material masses, especially foaming and cooling agents. The presented data collection system paves the way for continuous performance monitoring and efficient control of refrigerator recycling plants.
Keywords: automation, data collection, performance monitoring, recycling, refrigerators
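A minimal sketch of the optical-character-recognition step, assuming pytesseract as the OCR engine; the image file, model key, and material masses are hypothetical placeholders.

```python
import pytesseract  # assumed OCR wrapper around Tesseract
from PIL import Image

# Expected material contents per model (table values hypothetical).
EXPECTED_CONTENTS = {
    "KG36": {"foaming_agent_g": 310, "cooling_agent_g": 95},
}

# Read the rating plate of an appliance to recover its model number,
# then look up the expected material masses.
text = pytesseract.image_to_string(Image.open("rating_plate.jpg"))
for model, masses in EXPECTED_CONTENTS.items():
    if model in text:
        print(model, "->", masses)
```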
Procedia PDF Downloads 169
24526 Sales Patterns Clustering Analysis on Seasonal Product Sales Data
Authors: Soojin Kim, Jiwon Yang, Sungzoon Cho
Abstract:
As a seasonal product is only in demand for a short time, inventory management is critical to profits. Both markdowns and stockouts decrease the return on perishable products; therefore, researchers have been interested in the distribution of seasonal products with the aim of maximizing profits. In this study, we propose a data-driven seasonal product sales pattern analysis method for individual retail outlets based on observed sales data clustering; the proposed method helps in determining distribution strategies.
Keywords: clustering, distribution, sales pattern, seasonal product
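A minimal sketch of shape-based clustering of seasonal sales curves with k-means; the synthetic early-peak/late-peak outlets and the L1 normalisation are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(3)

# Hypothetical weekly sales curves for 60 outlets over a 12-week season:
# some peak early, some peak late.
early = rng.poisson(np.linspace(40, 5, 12), size=(30, 12))
late = rng.poisson(np.linspace(5, 40, 12), size=(30, 12))
sales = np.vstack([early, late]).astype(float)

# Normalising rows compares the *shape* of each outlet's sales pattern
# rather than its volume.
X = normalize(sales, norm="l1")
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```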
Procedia PDF Downloads 606
24525 Probability Sampling in Matched Case-Control Study in Drug Abuse
Authors: Surya R. Niraula, Devendra B Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell
Abstract:
Background: Although random sampling is generally considered to be the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling, despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users, who then identified “friend controls,” and the other using a random sample of non-drug users (controls), who then identified “friend cases.” Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using the bootstrapping method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when the model was fitted to either data set (0.93 for the random-sample data vs. 0.91 for the snowball-sample data, p=0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be statistically superior to snowball sampling and may represent a viable alternative to snowball sampling.
Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling
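A minimal sketch of the bootstrap step, refitting a logistic regression on resampled data to estimate the variability of the beta coefficients; the synthetic data stand in for the survey data sets.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

# Synthetic case-control data standing in for the survey data sets.
X, y = make_classification(n_samples=300, n_features=4, random_state=0)

# Bootstrap the coefficient estimates, as in the stability check above.
rng = np.random.default_rng(0)
coefs = []
for _ in range(100):
    Xb, yb = resample(X, y, random_state=int(rng.integers(1 << 30)))
    coefs.append(LogisticRegression(max_iter=1000).fit(Xb, yb).coef_[0])
coefs = np.array(coefs)
print("bootstrap SE of each beta:", coefs.std(axis=0).round(3))
```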
Procedia PDF Downloads 496
24524 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the crazy amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even simulating an entire human body. This paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and shows how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. The paper also indicates solutions to optimization problems, the benefits of Big Data for computational biology, and the current state of the art and future generation of HPC computing with Big Data.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
Procedia PDF Downloads 366
24523 Evaluating the Effectiveness of Science Teacher Training Programme in National Colleges of Education: A Preliminary Study, Perceptions of Prospective Teachers
Authors: A. S. V Polgampala, F. Huang
Abstract:
This is an overview of what is entailed in an evaluation and of the issues to be aware of when class observation is carried out. This study examined the effects of evaluating the teaching practice of a 7-day 'block teaching' session in a pre-service science teacher training program at a reputed National College of Education in Sri Lanka. Effects were assessed in three areas: evaluation of the training process, evaluation of the training impact, and evaluation of the training procedure. Data for this study were collected by class observation of 18 teachers from 9th to 16th February 2017. The participants, prospective science teachers, were evaluated based on a newly introduced format by the NIE. The data collected were analyzed qualitatively using the Miles and Huberman procedure for analyzing qualitative data: data reduction, data display, and conclusion drawing/verification. It was observed that the trainees showed confidence in teaching the relevant competencies and skills. Teacher educators' dissatisfaction had a great impact on the evaluation process.
Keywords: evaluation, perceptions and perspectives, pre-service, science teaching
Procedia PDF Downloads 316
24522 Detecting Venomous Files in IDS Using an Approach Based on Data Mining Algorithm
Authors: Sukhleen Kaur
Abstract:
In security groundwork, the Intrusion Detection System (IDS) has become an important component and has received increasing attention in recent years. The IDS is one of the effective ways to detect different kinds of attacks and malicious code in a network and helps us to secure the network. Data mining techniques can be implemented in an IDS to analyse large amounts of data and give better results. Data mining can contribute to improving intrusion detection by adding a level of focus to anomaly detection. So far, most studies have been carried out on finding attacks, but this paper detects malicious files. Some intruders do not attack directly; instead, they hide harmful code inside files, or corrupt those files and attack the system. These files are detected according to defined parameters, which form two lists of files: normal files and harmful files. After that, data mining is performed. In this paper, a hybrid classifier is used, via the Naive Bayes and RIPPER classification methods. The results show how an uploaded file in the database is tested against the parameters and then characterised as either a normal or a harmful file, after which mining is performed. Moreover, when a user tries to mine a harmful file, an exception is generated stating that mining cannot be performed on corrupted or harmful files.
Keywords: data mining, association, classification, clustering, decision tree, intrusion detection system, misuse detection, anomaly detection, Naive Bayes, RIPPER
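A minimal sketch of the Naive Bayes stage of such a hybrid classifier on hypothetical per-file features; RIPPER-style rule induction (the second stage) is not available in scikit-learn, so only the first stage is shown, and the feature distributions are illustrative.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(4)

# Hypothetical per-file features (e.g., size in KB, byte entropy, header
# anomaly score); harmful files skew toward high entropy in this toy data.
normal = np.column_stack([rng.normal(200, 50, 300),
                          rng.normal(4.5, 0.5, 300),
                          rng.normal(0.1, 0.05, 300)])
harmful = np.column_stack([rng.normal(180, 60, 100),
                           rng.normal(7.2, 0.4, 100),
                           rng.normal(0.6, 0.1, 100)])
X = np.vstack([normal, harmful])
y = np.array([0] * 300 + [1] * 100)  # 0 = normal file, 1 = harmful file

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
nb = GaussianNB().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, nb.predict(X_te)))
```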
Procedia PDF Downloads 416
24521 Generalized Approach to Linear Data Transformation
Authors: Abhijith Asok
Abstract:
This paper presents a generalized approach to the simple linear data transformation, Y=bX, through an integration of multidimensional coordinate geometry, vector space theory, and polygonal geometry. The scaling is performed by adding an additional 'dummy dimension' to the n-dimensional data, which helps plot two-dimensional, component-wise straight lines on pairs of dimensions. The end result is a set of scaled extensions of the observations in any of the 2n spatial divisions, where n is the total number of applicable dimensions/dataset variables, created by shifting the n-dimensional plane along the 'dummy axis'. The derived scaling factor was found to depend on the coordinates of the common point of origin of the diverging straight lines and on the plane of extension, chosen on, and perpendicular to, the 'dummy axis', respectively. This result gives a geometrical interpretation of a linear data transformation and hence opportunities for a more informed choice of the factor 'b', based on a better choice of these coordinate values. The paper goes on to identify the effect of this transformation on certain popular distance metrics, where, for many of them, the distance metric retains the same scaling factor as the features.
Keywords: data transformation, dummy dimension, linear transformation, scaling
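A short numerical check of the closing claim: under Y = bX, the Euclidean distance between observations scales by the same factor b.

```python
import numpy as np

rng = np.random.default_rng(5)
X1, X2 = rng.normal(size=5), rng.normal(size=5)
b = 2.5

# The Euclidean distance of the scaled data retains the factor b.
d_before = np.linalg.norm(X1 - X2)
d_after = np.linalg.norm(b * X1 - b * X2)
print(d_after / d_before)  # -> 2.5, i.e. equal to b
```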
Procedia PDF Downloads 303
24519 Blockchain Platform Configuration for MyData Operator in Digital and Connected Health
Authors: Minna Pikkarainen, Yueqiang Xu
Abstract:
The integration of digital technology with existing healthcare processes has been painfully slow; a huge gap exists between the field of strictly regulated official medical care and the quickly moving field of health and wellness technology. We claim that the promises of preventive healthcare can only be fulfilled when this gap is closed and health care and self-care become a seamless continuum ('correct information, in the correct hands, at the correct time, allowing individuals and professionals to make better decisions'), which we call the connected health approach. Currently, issues related to security, privacy, consumer consent, and data sharing are hindering the implementation of this new paradigm of healthcare. This could be solved by following the MyData principle that individuals should have the right and practical means to manage their data and privacy. A MyData infrastructure enables decentralized management of personal data, improves interoperability, makes it easier for companies to comply with tightening data protection regulations, and allows individuals to change service providers without proprietary data lock-ins. This paper tackles today's unprecedented challenge of enabling and stimulating multiple healthcare data providers and stakeholders to participate more actively in the digital health ecosystem. First, the paper systematically proposes the MyData approach for the healthcare and preventive health data ecosystem. In this research, the work is targeted at health and wellness ecosystems. Each ecosystem consists of key actors, such as 1) the individual (citizen or professional controlling/using the services), i.e., the data subject; 2) services providing personal data (e.g., startups providing data collection apps or devices); 3) health and wellness services utilizing the aforementioned data; and 4) services authorizing access to this data under the individual's explicit consent. Second, the research extends the existing four archetypes of orchestrator-driven healthcare data business models for the healthcare industry and proposes a fifth type of healthcare data model, the MyData blockchain platform. This new architecture is developed using the Action Design Research approach, a prominent research methodology in the information systems domain. The key novelty of the paper is to expand the health data value chain architecture and design from centralization and pseudo-decentralization to full decentralization, enabled by blockchain, thus the MyData blockchain platform. The study not only broadens the healthcare informatics literature but also contributes to the theoretical development of the digital healthcare and blockchain research domains with a systemic approach.
Keywords: blockchain, health data, platform, action design
Procedia PDF Downloads 104
24519 The Pivotal Impact of Optimizing Target Margins and Reducing Setup Errors on Enhancing Clinical Outcomes and Precision in Cervical Cancer Radiotherapy Using Electronic Portal Imaging Device
Authors: Ahlam Azalmad, Younes Elmaadaoui, Mohamed Hilal
Abstract:
Background: This study highlights the impact of optimizing target margins by minimizing setup errors in cervical cancer radiotherapy using an electronic portal imaging device, aiming to improve treatment accuracy and patient outcomes. These findings are crucial for refining treatment protocols and enhancing the safety and effectiveness of radiation oncology practices. As a groundbreaking initiative within our department, this work marks a major advancement in treatment optimization and will be disseminated to other radiotherapy centers, encouraging the adoption of consistent and improved radiotherapy practices. Materials and Methods: The study involved 20 cervical cancer patients treated between January 30 and September 30, 2024, using fixed knee and foot supports for immobilization. Treatment setups were verified with electronic portal imaging and delivered via an Elekta linear accelerator to enhance radiotherapy precision. Displacement analysis used bony landmarks to assess setup errors, guiding the calculation of safety margins based on ICRU-62 guidelines and Stroom's and Van Herk's formulas. Results: The study revealed systematic errors of up to 0.22 cm and random errors of 0.74 cm along the X-axis, 0.2 cm and 0.08 cm along the Y-axis, and 0.18 cm and 0.08 cm along the Z-axis. Based on these findings, the calculated CTV-PTV margins for the X, Y, and Z axes were 0.61 cm, 0.57 cm, and 0.51 cm, respectively, using Van Herk's formula; 0.5 cm, 0.46 cm, and 0.42 cm with Stroom's formula; and 0.23 cm, 0.22 cm, and 0.2 cm according to ICRU-62 guidelines. Considering these calculations, a 6 mm safety margin is recommended as optimal. Discussion: While the electronic portal imaging device improves radiotherapy precision, it has limitations, including a limited field of view, difficulty with soft-tissue visualization, and insufficient resolution for small errors. Patient variations and setup errors occurring after imaging further complicate safety margin calculations. Time, image quality, and radiation dose concerns also pose challenges. Integrating the electronic portal imaging device with advanced imaging techniques such as 3D imaging can enhance treatment accuracy. Conclusion: This study highlights the significance of a 6 mm safety margin, showing that image-guided verification with electronic portal imaging enhances accuracy, reduces errors, and improves the precision of pelvic radiotherapy.
Keywords: cervical cancer, precision, PTV margins, radiotherapy, setup errors
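For reference, the two margin recipes cited are Van Herk's 2.5Σ + 0.7σ and Stroom's 2Σ + 0.7σ, where Σ is the systematic and σ the random error; a short sketch with illustrative Σ and σ values (not a restatement of the study's measurements) reproduces margins of the order reported.

```python
# CTV-to-PTV margin recipes (values in cm).
def van_herk(Sigma, sigma):
    # Van Herk et al.: 2.5 * systematic SD + 0.7 * random SD
    return 2.5 * Sigma + 0.7 * sigma

def stroom(Sigma, sigma):
    # Stroom et al.: 2 * systematic SD + 0.7 * random SD
    return 2.0 * Sigma + 0.7 * sigma

# Illustrative per-axis error SDs, chosen for the sketch only.
for axis, Sigma, sigma in [("X", 0.22, 0.08), ("Y", 0.20, 0.08),
                           ("Z", 0.18, 0.08)]:
    print(f"{axis}: Van Herk {van_herk(Sigma, sigma):.2f} cm, "
          f"Stroom {stroom(Sigma, sigma):.2f} cm")
```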
Procedia PDF Downloads 8
24518 Image Processing and Calculation of NGRDI Embedded System in Raspberry
Authors: Efren Lopez Jimenez, Maria Isabel Cajero, J. Irving-Vasqueza
Abstract:
The use and processing of digital images have opened up new opportunities for solving problems of various kinds, such as the calculation of different vegetation indices, which, among other things, differentiate healthy vegetation from humid vegetation. However, obtaining the images from which these indices are calculated is still the subject of active research. In the present work, we propose to obtain these images using a low-cost embedded system (Raspberry Pi) and to process them using the open-source library OpenCV, in order to obtain the Normalized Red-Green Difference Index (NGRDI).
Keywords: Raspberry Pi, vegetation index, Normalized Red-Green Difference Index (NGRDI), OpenCV
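A minimal sketch of the NGRDI computation with OpenCV, NGRDI = (G - R)/(G + R); the file name is a placeholder, and on a Raspberry Pi the frame could come from a camera capture instead.

```python
import cv2
import numpy as np

# Load a frame; on a Raspberry Pi this could come from
# cv2.VideoCapture(0) instead of a file ("field.jpg" is a placeholder).
img = cv2.imread("field.jpg").astype(np.float64)
blue, green, red = cv2.split(img)  # OpenCV stores channels as BGR

# NGRDI = (Green - Red) / (Green + Red); guard against zero division.
denom = green + red
ngrdi = np.where(denom == 0, 0.0, (green - red) / denom)
print("mean NGRDI:", float(ngrdi.mean()))  # values lie in [-1, 1]
```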
Procedia PDF Downloads 294