Search results for: geospatial data science

25054 A Programming Assessment Software Artefact Enhanced with the Help of Learners

Authors: Romeo A. Botes, Imelda Smit

Abstract:

The demands of an ever-changing and complex higher education environment, along with the profile of modern learners, challenge current approaches to assessment and feedback. More learners enter the education system every year. The younger generation expects immediate feedback, and at the same time feedback should be meaningful. The assessment of practical activities in programming poses a particular problem, since both lecturers and learners in the information and computer science discipline acknowledge that paper-based assessment for programming subjects lacks meaningful real-life testing, while feedback lacks promptness, consistency, comprehensiveness and individualisation. Most of these aspects may be addressed by modern, technology-assisted assessment. The focus of this paper is the continuous development of an artefact that is used to assist the lecturer in the assessment and feedback of practical programming activities in a senior database programming class. The artefact was developed using three Design Science Research cycles. The first implementation allowed one programming activity submission per assessment intervention. This pilot provided valuable insight into the obstacles regarding the implementation of this type of assessment tool. A second implementation improved the initial version to allow multiple programming activity submissions per assessment. The focus of this version is on providing scaffolded feedback to the learner, allowing improvement with each subsequent submission. It also has a built-in capability to provide the lecturer with information regarding the key problem areas of each assessment intervention.
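
A minimal sketch of the kind of automated checking and scaffolded feedback such an artefact could perform is shown below; it is not the authors' artefact, and the SQLite table, query, and feedback tiers are invented for illustration.

```python
# Minimal sketch (not the authors' artefact) of automated feedback for a database
# programming exercise: a submitted SQL query is run against a sample SQLite
# database, compared with the expected result, and tiered (scaffolded) feedback
# is returned. Table and column names here are hypothetical.
import sqlite3

def grade_submission(submitted_sql, expected_rows, conn):
    try:
        actual_rows = conn.execute(submitted_sql).fetchall()
    except sqlite3.Error as exc:
        return 0, f"Query failed to run: {exc}"          # first scaffold level
    if sorted(actual_rows) == sorted(expected_rows):
        return 100, "Correct result."
    if len(actual_rows) != len(expected_rows):
        return 40, "Query runs but returns the wrong number of rows."  # second level
    return 70, "Row count is right but some values differ."            # third level

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER, dept TEXT, salary REAL)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "IT", 500.0), (2, "IT", 700.0), (3, "HR", 450.0)])
score, feedback = grade_submission(
    "SELECT dept, AVG(salary) FROM employee GROUP BY dept",
    expected_rows=[("HR", 450.0), ("IT", 600.0)], conn=conn)
print(score, feedback)
```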

Keywords: programming, computer-aided assessment, technology-assisted assessment, programming assessment software, design science research, mixed-method

Procedia PDF Downloads 281
25053 Scientific Production on Lean Supply Chains Published in Journals Indexed by SCOPUS and Web of Science Databases: A Bibliometric Study

Authors: T. Botelho de Sousa, F. Raphael Cabral Furtado, O. Eduardo da Silva Ferri, A. Batista, W. Augusto Varella, C. Eduardo Pinto, J. Mimar Santa Cruz Yabarrena, S. Gibran Ruwer, F. Müller Guerrini, L. Adalberto Philippsen Júnior

Abstract:

Lean Supply Chain Management (LSCM) is an emerging research field in Operations Management (OM). As a strategic model that focuses on reducing cost and waste while fulfilling the needs of customers, LSCM attracts great interest among researchers and practitioners. The purpose of this paper is to present an overview of the Lean Supply Chains literature, based on a bibliometric analysis of 57 papers published in journals indexed by the SCOPUS and/or Web of Science databases. The results indicate that the last three years (2015, 2016, and 2017) were the most productive for LSCM research, especially in the journals Supply Chain Management and International Journal of Lean Six Sigma. India, the USA, and the UK are the most productive countries; moreover, social network analysis detected cross-country studies built on collaboration among researchers, a practice that appears to play an increasingly important role in LSCM studies. Despite limitations, such as the restricted set of indexed journals, the bibliometric analysis helps to illuminate ongoing LSCM research efforts, including the most frequently used technical procedures and the collaboration network, and reveals important research gaps, especially for researchers in developing countries.
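
The collaboration-network step of a bibliometric study of this kind can be illustrated with a small sketch; the paper records and author names below are invented, and networkx is only one possible tool for such an analysis.

```python
# Illustrative sketch (not the authors' code) of the social-network step of a
# bibliometric study: build a co-authorship graph from paper author lists and
# report simple collaboration metrics. The paper records below are made up.
import itertools
import networkx as nx

papers = [
    {"title": "Lean supply chain assessment", "authors": ["Silva", "Kumar", "Brown"]},
    {"title": "Waste reduction in logistics",  "authors": ["Kumar", "Patel"]},
    {"title": "Lean Six Sigma adoption",       "authors": ["Brown", "Silva"]},
]

G = nx.Graph()
for paper in papers:
    # every pair of co-authors on a paper gets (or strengthens) an edge
    for a, b in itertools.combinations(paper["authors"], 2):
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

print("authors:", G.number_of_nodes(), "collaboration links:", G.number_of_edges())
print("most connected:", max(G.degree, key=lambda kv: kv[1]))
```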

Keywords: Lean Supply Chains, Bibliometric Study, SCOPUS, Web of Science

Procedia PDF Downloads 329
25051 A Little-Known Side of Florence Nightingale: A Passionate Statistician

Authors: Gülcan Taşkıran, Ayla Bayık Temel

Abstract:

Background: Florence Nightingale, the founder of modern nursing, is most famous for her role as a nurse, but much less is known about her contributions as a mathematician and statistician. Aim: This conceptual article examines Florence Nightingale's education in statistics, how she used her passion for statistics and applied statistical data in nursing care, and her scientific contributions to statistics. Design: A literature review method was used. The Istanbul University Library Search Engine, Turkish Medical Directory, Thesis Scanning Center of the Higher Education Council, PubMed, Google Scholar, EBSCO Host, and Web of Science databases were searched to identify relevant studies. The keywords 'statistics' and 'Florence Nightingale' were used in Turkish and English during screening. As a result of the screening, a total of 41 studies from the national and international literature were examined. Results: Florence Nightingale was interested in mathematics and statistics from an early age and received various training in these subjects. The lessons Nightingale learned in a cultured family environment, her talent for mathematics and numbers, and her religious beliefs played a crucial role in directing her toward statistics. She was influenced by Quetelet's ideas in forming her statistical philosophy and received support from William Farr in her statistical studies. During the Crimean War, she applied statistical knowledge to nursing care and developed many statistical methods and graphics, and thereby brought about revolutionary reforms in the health field. Conclusions: Nightingale's interest in statistics, her broad vision, her statistical ideas fused with religious beliefs, the innovative graphics she developed, and the extraordinary statistical projects she carried out underpinned her professional achievements. Florence Nightingale has also become a model for women in statistics. Today, the use and teaching of statistics and research in nursing care practice and education programs continue in the light she provided.

Keywords: Crimean war, Florence Nightingale, nursing, statistics

Procedia PDF Downloads 281
25051 Impact of Weather Conditions on Non-Food Retailers and Implications for Marketing Activities

Authors: Noriyuki Suyama

Abstract:

This paper discusses purchasing behavior in retail stores, with a particular focus on the impact of weather changes on customers' purchasing behavior. Weather conditions are one of the factors that greatly affect the management and operation of retail stores. However, there is very little research on the relationship between weather conditions and marketing from an academic perspective, although the topic has practical importance and a body of experience-based knowledge. For example, customers are more hesitant to go out when it rains than when it is sunny, and they may postpone purchases or buy only the minimum necessary items even if they do go out. It is not difficult to imagine that weather has a significant impact on consumer behavior. To the best of the authors' knowledge, only a few studies have delved into the purchasing behavior of individual customers. According to Hirata (2018), the economic impact of weather in the United States is estimated to be 3.4% of GDP, or "$485 billion ± $240 billion per year." However, weather data is not yet fully utilized. Representative industries include transportation-related industries (e.g., airlines, shipping, roads, railroads), leisure-related industries (e.g., leisure facilities, event organizers), energy and infrastructure-related industries (e.g., construction, factories, electricity and gas), agriculture-related industries (e.g., agricultural organizations, producers), and retail-related industries (e.g., retail, food service, convenience stores). This paper focuses on the retail industry and advances research on weather. The first reason is that, as far as the author has investigated the retail industry, only grocery retailers use temperature, rainfall, wind, weather, and humidity as parameters for their products, and there are very few examples of academic use in other retail industries. Second, according to NBL's "Toward Data Utilization Starting from Consumer Contact Points in the Retail Industry," labor productivity in the retail industry is very low compared to other industries. According to Hirata (2018) mentioned above, improving labor productivity in the retail industry is recognized as a major challenge. On the other hand, according to the "Survey and Research on Measurement Methods for Information Distribution and Accumulation (2013)" by the Ministry of Internal Affairs and Communications, the amount of data accumulated in the retail industry is extremely large, so new applications are expected from analyzing these data together with weather data. Third, there is currently a wealth of weather-related information available. There are, for example, companies such as WeatherNews, Inc. that make weather information their business and not only disseminate weather information but also provide information that supports businesses in various industries. Despite the wide range of influences that weather has on business, the impact of weather has not been a subject of research in the retail industry, where business models need to be imagined, especially from a micro perspective. In this paper, the author discusses the important aspects of the impact of weather on marketing strategies in the non-food retail industry.
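
A minimal sketch of the kind of analysis the paper argues for, joining daily store sales with weather observations, is shown below; the data frames are invented examples, not data from the study.

```python
# A minimal sketch (not from the paper) of the kind of analysis discussed:
# joining daily non-food store sales with weather observations to compare
# sales on rainy and dry days. The data frames below are invented examples.
import pandas as pd

sales = pd.DataFrame({
    "date": pd.to_datetime(["2023-06-01", "2023-06-02", "2023-06-03", "2023-06-04"]),
    "sales_yen": [410_000, 298_000, 455_000, 300_500],
})
weather = pd.DataFrame({
    "date": pd.to_datetime(["2023-06-01", "2023-06-02", "2023-06-03", "2023-06-04"]),
    "rainfall_mm": [0.0, 12.5, 0.0, 8.0],
    "temp_c": [24.1, 19.8, 26.0, 20.3],
})

merged = sales.merge(weather, on="date")
merged["rainy"] = merged["rainfall_mm"] > 0
print(merged.groupby("rainy")["sales_yen"].mean())  # average sales, rainy vs. dry days
```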

Keywords: consumer behavior, weather marketing, marketing science, big data, retail marketing

Procedia PDF Downloads 60
25050 Bridging the Gap between Teaching and Learning: A 3-S (Strength, Stamina, Speed) Model for Medical Education

Authors: Mangala Sadasivan, Mary Hughes, Bryan Kelly

Abstract:

Medical education must focus on bridging the gap between teaching and learning when training pre-clinical year students in the skills needed to keep up with medical knowledge and to meet the demands of health care in the future. The authors were interested in showing that a 3-S Model (building strength, developing stamina, and increasing speed) using a bridged curriculum design helps connect teaching and learning and improves students' retention of basic science and clinical knowledge. The authors designed three learning modules using the 3-S Model within a systems course in a pre-clerkship medical curriculum. Each module focused on a bridge (concept map) designed by the instructor for specific content delivered to students in the course. This within-subjects design study included 304 registered MSU osteopathic medical students (3 campuses) ranked by quintile based on previous coursework. The instructors used the bridge to create self-directed learning exercises (building strength) to help students master basic science content. Students were video-coached on how to complete assignments and given pre-tests and post-tests designed to give them control to assess and identify gaps in learning and strengthen connections. The instructor who designed the modules also used video lectures to help students master clinical concepts and link them (building stamina) to previously learned material connected to the bridge. Board-style practice questions relevant to the modules were used to help students improve access (increasing speed) to stored content. Unit examinations covering the content within the modules and the material covered by other instructors in the units served as outcome measures in this study. These data were then compared to each student's performance on a final comprehensive exam and on the COMLEX medical board examinations taken some time after the course. The authors used mean comparisons to evaluate students' performance on module items (using the 3-S Model) versus non-module items on unit exams, the final course exam, and the COMLEX medical board examination. The data show that, on average, students performed significantly better on module items than on non-module items on exams 1 and 2. The module 3 exam was canceled due to a university shutdown. The difference in mean scores between module and non-module items disappeared on the final comprehensive exam, which was rescheduled once the university resumed session. Based on quintile designation, mean scores were higher for module items than for non-module items; the difference in scores between item types for Quintiles 1 and 2 was significant on exam 1, the gap widened for all quintile groups on exam 2, and it disappeared on exam 3. Based on COMLEX performance, all students on average, whether they passed or failed, performed better on module items than on non-module items in all three exams. The gap between module-item scores for students who passed the COMLEX and those who failed was greater on Exam 1 (14.3) than on Exam 2 (7.5) and Exam 3 (10.2). The data show that the 3-S Model using a bridge effectively connects teaching and learning.

Keywords: bridging gap, medical education, teaching and learning, model of learning

Procedia PDF Downloads 41
25049 Development of New Technology Evaluation Model by Using Patent Information and Customers' Review Data

Authors: Kisik Song, Kyuwoong Kim, Sungjoo Lee

Abstract:

Many global firms and corporations derive new technologies and opportunities by identifying vacant technology areas through patent analysis. However, previous studies failed to focus on technologies that promised continuous growth in industrial fields, and most studies that derive new technology opportunities do not test their practical effectiveness. Because previous studies depended on expert judgment, evaluating new technologies based on patent analysis has been costly and time-consuming. Therefore, this research suggests a quantitative and systematic approach to technology evaluation indicators that uses patent data together with data from customer communities. The first step involves collecting these two types of data, which are then used to construct evaluation indicators and to apply those indicators to the evaluation of new technologies. This type of data mining allows a new method of technology evaluation and provides a better predictor of how new technologies will be adopted.
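
As a hedged illustration of combining the two data types, the sketch below scales a patent-growth indicator and a review-attention indicator into a single score per technology; the indicator design and all numbers are assumptions for illustration, not the authors' model.

```python
# Hedged illustration (not the authors' model) of combining the two data types:
# a patent-growth indicator and a customer-review attention indicator are scaled
# and averaged into a single score per technology. All numbers are invented.
import pandas as pd

data = pd.DataFrame({
    "technology": ["solid-state battery", "flexible display", "UV sensor"],
    "patents_recent_3y": [180, 95, 22],          # filings in the last three years
    "patents_prior_3y": [60, 80, 20],            # filings in the three years before
    "review_mentions_per_1k": [14.0, 9.5, 1.2],  # mentions in customer reviews
})

data["patent_growth"] = data["patents_recent_3y"] / data["patents_prior_3y"]

def min_max(col):
    return (col - col.min()) / (col.max() - col.min())

# simple equally weighted composite indicator
data["score"] = 0.5 * min_max(data["patent_growth"]) + 0.5 * min_max(data["review_mentions_per_1k"])
print(data.sort_values("score", ascending=False)[["technology", "score"]])
```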

Keywords: data mining, evaluating new technology, technology opportunity, patent analysis

Procedia PDF Downloads 351
25048 Anomaly Detection Based on System Log Data

Authors: M. Kamel, A. Hoayek, M. Batton-Hubert

Abstract:

With the increase of network virtualization and the disparity of vendors, the continuous monitoring and detection of anomalies cannot rely on static rules. An advanced analytical methodology is needed to discriminate between ordinary events and unusual anomalies. In this paper, we focus on log data (textual data), which is a crucial source of information for network performance. We then introduce an algorithm used as a pipeline to help with the pretreatment of such data, group it into patterns, and dynamically label each pattern as an anomaly or not. Such tools will provide users and experts with continuous, real-time log monitoring capability to detect anomalies and failures in the underlying system that can affect performance. An application to real-world data illustrates the algorithm.
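
A simplified sketch of such a pipeline is shown below: log lines are reduced to templates by masking variable fields, grouped, and rare templates are labelled as anomalies. This is not the authors' algorithm, and the log lines and threshold are invented.

```python
# Simplified sketch, not the authors' pipeline: log lines are reduced to templates
# by masking numbers, grouped by template, and templates that occur rarely are
# labelled as anomalies. The example log lines are invented.
import re
from collections import Counter

logs = [
    "2023-05-01 10:00:01 connection from 10.0.0.7 accepted",
    "2023-05-01 10:00:04 connection from 10.0.0.9 accepted",
    "2023-05-01 10:00:09 connection from 10.0.0.4 accepted",
    "2023-05-01 10:00:12 disk /dev/sda1 read failure sector 99182",
]

def template(line):
    line = re.sub(r"\d+\.\d+\.\d+\.\d+", "<ip>", line)   # mask IP addresses
    line = re.sub(r"\d+", "<num>", line)                  # mask remaining numbers
    return line

counts = Counter(template(line) for line in logs)
threshold = 0.3  # patterns seen in under 30% of lines are treated as anomalies
for line in logs:
    rare = counts[template(line)] / len(logs) < threshold
    print("ANOMALY" if rare else "normal ", "|", line)
```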

Keywords: logs, anomaly detection, ML, scoring, NLP

Procedia PDF Downloads 73
25047 Soil Quality State and Trends in New Zealand’s Largest City after Fifteen Years

Authors: Fiona Curran-Cournane

Abstract:

Soil quality monitoring is a science-based soil management tool that assesses soil ecosystem health. A soil monitoring program in Auckland, New Zealand's largest city, extends from 1995 to the present. The objective of this study was, firstly, to determine changes in soil parameters (basic soil properties and heavy metals) that were assessed on rural land in 1995-2000 and re-assessed in 2008-2012. The second objective was to determine differences in soil parameters across land uses, including native bush, rural (horticulture, pasture and plantation forestry) and urban land uses, using soil data collected in more recent years (2009-2013). Across rural land, mean concentrations of Olsen P had significantly increased in the second sampling period, and Olsen P was identified as the indicator of most concern, followed by soil macroporosity, particularly for horticultural and pastoral land. Mean concentrations of Cd were also greatest for pastoral and horticultural land, and a positive correlation existed between these two parameters, which highlights the importance of analysing basic soil parameters in conjunction with heavy metals. In contrast, mean concentrations of As, Cr, Pb, Ni and Zn were greatest for urban sites. Native bush sites had the lowest concentrations of heavy metals and were used to calculate a 'pollution index' (PI). The mean PI was classified as high (PI > 3) for Cd and Ni and moderate for Pb, Zn, Cr, Cu, As, and Hg, indicating high levels of heavy metal pollution across both rural and urban soils. From a land use perspective, the mean 'integrated pollution index' was highest for urban sites at 2.9, followed by pasture, horticulture and plantation forests at 2.7, 2.6, and 0.9, respectively. It is recommended that soil sampling continue over time because a longer record will allow further identification of where soil problems exist and where resources need to be targeted in the future. Findings from this study will also inform policy and science direction in regional councils.
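
A pollution index of the kind described can be sketched as the ratio of a measured concentration to its native-bush background, with an integrated index averaging over metals; the study's exact formula may differ, and the concentrations below are illustrative only.

```python
# Hedged sketch of a pollution index calculation of the kind described: each
# metal's concentration is divided by its native-bush background value, and an
# integrated index averages the single-metal indices. The study's exact formula
# may differ, and all concentrations below (mg/kg) are illustrative only.
background = {"Cd": 0.10, "Pb": 12.0, "Zn": 35.0, "Ni": 6.0}      # native bush
urban_site = {"Cd": 0.32, "Pb": 48.0, "Zn": 120.0, "Ni": 14.0}    # measured values

pollution_index = {m: urban_site[m] / background[m] for m in background}
integrated_pi = sum(pollution_index.values()) / len(pollution_index)

for metal, pi in pollution_index.items():
    level = "high" if pi > 3 else "moderate" if pi > 1 else "low"
    print(f"{metal}: PI = {pi:.1f} ({level})")
print(f"integrated PI = {integrated_pi:.1f}")
```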

Keywords: heavy metals, pollution index, rural and urban land use, soil quality

Procedia PDF Downloads 361
25046 EnumTree: An Enumerative Biclustering Algorithm for DNA Microarray Data

Authors: Haifa Ben Saber, Mourad Elloumi

Abstract:

In a number of domains, such as DNA microarray data analysis, we need to cluster the rows (genes) and columns (conditions) of a data matrix simultaneously in order to identify groups of rows that are constant over a group of columns. This kind of clustering is called biclustering. Biclustering algorithms are extensively used in DNA microarray data analysis, and more effective biclustering algorithms are highly desirable and needed. We introduce a new algorithm, called Enumerative Tree (EnumTree), for the biclustering of binary microarray data. EnumTree adopts the approach of enumerating biclusters and extracts all biclusters of consistently good quality. Its main idea is the construction of a new tree structure that adequately represents the different biclusters discovered during the enumeration process, following the strategy of discovering all biclusters at a time. The performance of the proposed algorithm is assessed using both synthetic and real DNA microarray data; our algorithm outperforms other biclustering algorithms for binary microarray data and extracts biclusters with different numbers of rows. Moreover, we test the biological significance using a gene annotation web tool to show that the proposed method is able to produce biologically relevant biclusters.
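
The brute-force sketch below illustrates what enumerating biclusters in a binary matrix means: every column subset is tried, and rows that are all 1 on that subset form a candidate bicluster. EnumTree organises this search in a tree to avoid exhaustive enumeration, so this is only a naive illustration.

```python
# Naive illustration of bicluster enumeration on a binary matrix (not the
# EnumTree algorithm itself, which organises the search in a tree): every
# column subset is tried, and the rows that are all 1 on that subset form a
# candidate bicluster. Only small matrices are feasible with this brute force.
from itertools import combinations

matrix = [
    [1, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [1, 1, 0, 1],
]

def enumerate_biclusters(m, min_rows=2, min_cols=2):
    n_cols = len(m[0])
    found = []
    for k in range(min_cols, n_cols + 1):
        for cols in combinations(range(n_cols), k):
            rows = [r for r, row in enumerate(m) if all(row[c] == 1 for c in cols)]
            if len(rows) >= min_rows:
                found.append((rows, list(cols)))
    return found

for rows, cols in enumerate_biclusters(matrix):
    print("rows", rows, "x cols", cols)
```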

Keywords: DNA microarray, biclustering, gene expression data, tree, data mining

Procedia PDF Downloads 358
25045 The Impact of Financial Reporting on Sustainability

Authors: Lynn Ruggieri

Abstract:

The worldwide pandemic has only increased sustainability awareness. The public is demanding that businesses be held accountable for their impact on the environment. While financial data enjoys uniformity in reporting requirements, there are no uniform reporting requirements for non-financial data. Europe is leading the way with some standards being implemented for reporting non-financial sustainability data; however, there is no uniformity globally. And without uniformity, there is not a clear understanding of what information to include and how to disclose it. Sustainability reporting will provide important information to stakeholders and will enable businesses to understand their impact on the environment. Therefore, there is a crucial need for this data. This paper looks at the history of sustainability reporting in the countries of the European Union and throughout the world and makes a case for worldwide reporting requirements for sustainability.

Keywords: financial reporting, non-financial data, sustainability, global financial reporting

Procedia PDF Downloads 155
25044 Methods and Algorithms of Ensuring Data Privacy in AI-Based Healthcare Systems and Technologies

Authors: Omar Farshad Jeelani, Makaire Njie, Viktoriia M. Korzhuk

Abstract:

Recently, the application of AI-powered algorithms in healthcare continues to flourish. In particular, access to healthcare information, including patient health history, diagnostic data, and PII (Personally Identifiable Information), is paramount to the delivery of efficient patient outcomes. However, as the exchange of healthcare information between patients and healthcare providers through AI-powered solutions increases, protecting a person's information and privacy has become even more important. Arguably, the increased adoption of healthcare AI has resulted in a significant focus on the risks to, and protection measures for, the security and privacy of healthcare data, leading to intensified analysis and enforcement. Since these challenges arise from the use of AI-based healthcare solutions to manage healthcare data, AI-based data protection measures are used to resolve the underlying problems. Consequently, this project proposes AI-powered safeguards and policies/laws to protect the privacy of healthcare data. The project presents the best-in-class techniques used to preserve the data privacy of AI-powered healthcare applications. Popular privacy-protecting methods like federated learning, cryptographic techniques, differential privacy methods, and hybrid methods are discussed together with potential cyber threats, data security concerns, and prospects. The project also discusses some of the relevant data security acts/laws that govern the collection, storage, and processing of healthcare data to guarantee that owners' privacy is preserved. This inquiry discusses various gaps and uncertainties associated with healthcare AI data collection procedures and identifies potential correction/mitigation measures.
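
One of the methods mentioned, differential privacy, can be illustrated with the Laplace mechanism applied to a count query over patient records; the epsilon value and the records below are illustrative assumptions, not recommendations from the paper.

```python
# Sketch of one of the methods mentioned, the Laplace mechanism of differential
# privacy: calibrated noise is added to a count query over patient records so the
# released statistic reveals little about any single individual. Epsilon and the
# record values are illustrative, not recommendations.
import numpy as np

rng = np.random.default_rng(0)
has_condition = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # one flag per patient

true_count = has_condition.sum()
epsilon = 0.5          # privacy budget: smaller epsilon -> more noise, more privacy
sensitivity = 1        # adding/removing one patient changes the count by at most 1

noisy_count = true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)
print(f"true count: {true_count}, released (noisy) count: {noisy_count:.1f}")
```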

Keywords: data privacy, artificial intelligence (AI), healthcare AI, data sharing, healthcare organizations (HCOs)

Procedia PDF Downloads 60
25043 Effectiveness of Geogebra Training Activities through Teams for Junior High School Teachers

Authors: Idha Novianti, Suci Nurhayati, Puryati, Elang Krisnadi

Abstract:

Community service activities are activities of the academic community in practicing and cultivating science, knowledge, and technology to advance the general welfare and educate the life of the nation, as described in the Higher Education Law. Training activities on the use of the GeoGebra software are a natural option because GeoGebra is easy to operate and complete in its presentation of graphic design. The training activity was held for 3 hours online via Microsoft Teams and 3 hours offline, and involved 15 junior high school mathematics teachers located around South Tangerang. As a result, all teachers were satisfied with the activity, and they gained additional knowledge and skills for teaching mathematics in the topics of geometry and algebra. This new knowledge increased the participants' confidence in developing mathematics teaching for their students at school.

Keywords: GeoGebra, Microsoft Teams, junior high school teachers, mathematics

Procedia PDF Downloads 102
25042 Comparison of Wet and Microwave Digestion Methods for the Al, Cu, Fe, Mn, Ni, Pb and Zn Determination in Some Honey Samples by ICP-OES in Turkey

Authors: Huseyin Altundag, Emel Bina, Esra Altıntıg

Abstract:

The aim of this study is to determine the amounts of Al, Cu, Fe, Mn, Ni, Pb and Zn in honey samples gathered from the Sakarya and Istanbul regions of Turkey, and thereby to evaluate the trace elements in these honeys. The sample preparation phase was performed using a wet decomposition method and a microwave digestion system. The accuracy of the method was verified against the standard reference materials Tea Leaves (INCY-TL-1) and NIST SRM 1515 Apple Leaves. The gathered data were compared with literature values, and possible sources of contamination of the honey samples were considered. The obtained results will be presented at ICCIS 2015: XIII International Conference on Chemical Industry and Science.

Keywords: wet decomposition, microwave digestion, trace element, honey, ICP-OES

Procedia PDF Downloads 439
25041 Clustering Ethno-Informatics of Naming Village in Java Island Using Data Mining

Authors: Atje Setiawan Abdullah, Budi Nurani Ruchjana, I. Gede Nyoman Mindra Jaya, Eddy Hermawan

Abstract:

Ethnoscience examines culture from a scientific perspective and may help to explain how people develop various forms of knowledge and belief, initially focusing on ecology and history and the contributions made there. One of the areas studied in ethnoscience is ethno-informatics, the application of informatics to culture. In this study, the branch of informatics used is data mining, a process for automatically extracting knowledge from large databases in order to obtain interesting patterns and, ultimately, knowledge. The cultural application is described through a database of village names on the island of Java, obtained from the Indonesian Geospatial Information Agency (BIG) in 2014. The purpose of this study is, first, to classify the village names on the island of Java based on the structure of the name, including its prefix, the syllables it contains, and the complete word; second, to classify the meanings of the village names into specific categories, as well as their role in characterizing community behaviour; and third, to visualize the village names on a map in order to see the similarity of village naming in each province. In this research, we developed two theorems: an area theorem, derived from the collected intersections of village names in each province on the island of Java, and a theorem on the composition of the intersection (wedge) of the sets of provinces in Java, which is used to view the peculiarities of a study location. The methodology of this study is based on the Knowledge Discovery in Databases (KDD) process in data mining, which includes preprocessing, data mining, and post-processing. The results showed that Javanese communities prioritize merit in the way they live, always working hard to achieve a more prosperous life, and care for water and environmental sustainability. Village names in adjacent provinces have a high degree of similarity and influence each other. Cultural similarity among Central Java, East Java, and West Java-Banten is high, whereas Jakarta-Yogyakarta shows a low similarity. This research resulted in a characterization of community culture expressed in the meanings of village names on the island of Java, and this character is expected to serve as a guide to people's everyday behaviour on the island of Java.
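
The first classification step, grouping village names by their word structure, can be sketched as prefix-based grouping; the toy names and prefixes below merely stand in for the BIG village-name database and are not the authors' KDD workflow.

```python
# Minimal sketch (not the authors' KDD workflow) of the first classification
# step: grouping village names by a shared prefix and keeping the province of
# each name. The toy names below stand in for the BIG village-name database.
from collections import defaultdict

villages = [
    ("Sukamaju", "West Java"), ("Sukasari", "West Java"), ("Sukorejo", "East Java"),
    ("Karanganyar", "Central Java"), ("Karangsari", "Central Java"),
    ("Sidomulyo", "East Java"), ("Sidorejo", "Central Java"),
]
prefixes = ["Suka", "Suko", "Karang", "Sido"]

clusters = defaultdict(list)
for name, province in villages:
    prefix = next((p for p in prefixes if name.startswith(p)), "other")
    clusters[prefix].append((name, province))

for prefix, members in clusters.items():
    print(prefix, "->", members)
```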

Keywords: ethnoscience, ethno-informatics, data mining, clustering, Java island culture

Procedia PDF Downloads 260
25040 Mapping Tunnelling Parameters for Global Optimization in Big Data via Dye Laser Simulation

Authors: Sahil Imtiyaz

Abstract:

One of the biggest challenges has emerged from the ever-expanding, dynamic, and instantaneously changing space of Big Data; finding a data point and extracting wisdom from this space is a hard task. In this paper, we reduce the space of big data to a Hamiltonian formalism that is in concordance with the Ising model. For this formulation, we simulate the system as a dye laser in FORTRAN and analyse the dynamics of the data point in the energy well of the rhodium atom. After mapping the photon intensity and pulse width to energy and potential, we conclude that as the energy increases, the probability of tunnelling also increases up to a point, after which it starts decreasing and then shows randomizing behaviour. This is due to decoherence with the environment and hence a loss of 'quantumness'. This interprets the efficiency parameter and the extent of quantum evolution. The results strongly encourage the use of a 'topological property' as a source of information instead of the qubit.
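
For reference, the textbook Ising Hamiltonian that the formulation is said to be in concordance with is given below; the symbols are the usual ones and are not taken from the paper.

```latex
% Standard Ising Hamiltonian referred to above (textbook form, not the paper's
% exact formulation): s_i = ±1 are spins, J couples neighbouring sites, and h is
% an external field.
H = -J \sum_{\langle i, j \rangle} s_i s_j - h \sum_i s_i
```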

Keywords: big data, optimization, quantum evolution, Hamiltonian, dye laser, fermionic computations

Procedia PDF Downloads 181
25039 Applying Different Steganography Techniques in Cloud Computing Technology to Improve Cloud Data Privacy and Security Issues

Authors: Muhammad Muhammad Suleiman

Abstract:

Cloud computing is a versatile concept that refers to a service that allows users to outsource their data without having to worry about local storage issues. However, the most pressing issue to be addressed is maintaining a secure and reliable data repository rather than relying on untrustworthy service providers. In this study, we look at how steganography approaches, in collaboration with digital watermarking, can greatly improve the system's effectiveness and data security when used for cloud computing. The main requirement of such frameworks, where data is transferred or exchanged between servers and users, is safe data management in cloud environments. Steganography in the cloud is among the most effective methods for safe communication. Steganography is a method of writing coded messages in such a way that only the sender and recipient can safely interpret and display the information hidden in the communication channel. This study presents a new text steganography method for hiding a loaded hidden English text file in a cover English text file to ensure data protection in cloud computing. Data protection, data hiding capability, and time were all improved using the proposed technique.
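
A simple illustrative text steganography scheme, hiding a message in a cover text with zero-width characters, is sketched below; it is not the method proposed in the paper, only a minimal example of hiding text within text.

```python
# Illustrative text steganography (not the scheme proposed in the paper): the
# hidden message is encoded as bits and embedded in the cover text as zero-width
# characters, which are invisible when the cover text is displayed.
ZERO = "\u200b"   # zero-width space      -> bit 0
ONE = "\u200c"    # zero-width non-joiner -> bit 1

def hide(cover, secret):
    bits = "".join(f"{ord(ch):08b}" for ch in secret)
    payload = "".join(ONE if b == "1" else ZERO for b in bits)
    return cover + payload                      # invisible payload appended

def reveal(stego):
    bits = "".join("1" if ch == ONE else "0" for ch in stego if ch in (ZERO, ONE))
    chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)]
    return "".join(chars)

stego_text = hide("Quarterly report attached as requested.", "key:42")
print("recovered:", reveal(stego_text))
```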

Keywords: cloud computing, steganography, information hiding, cloud storage, security

Procedia PDF Downloads 172
25038 Investigation on Performance of Change Point Algorithm in Time Series Dynamical Regimes and Effect of Data Characteristics

Authors: Farhad Asadi, Mohammad Javad Mollakazemi

Abstract:

In this paper, Bayesian online inference in models of data series is constructed using a change-point algorithm, which separates the observed time series into independent segments and studies the change and variation of the regime of the data together with the related statistical characteristics. Variation in the statistical characteristics of time series data often represents distinct phenomena in a dynamical system, such as a change in brain state reflected in EEG signal measurements or a change in an important regime of the data in many other dynamical systems. In this paper, a prediction algorithm for studying change-point locations in time series data is simulated. It is verified that the pattern of the proposed data distribution is an important factor in obtaining simpler and smoother fluctuation of the hazard rate parameter and also in better identifying change-point locations. Finally, the conditions under which the distribution of the time series affects the factors in this approach are explained and validated with different time series databases from several dynamical systems.
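
A compact sketch in the spirit of Bayesian online change-point detection with a constant hazard rate and Gaussian observations is given below; it is a simplified textbook version with an assumed known observation variance, not the authors' algorithm.

```python
# Compact sketch of Bayesian online change-point detection in the spirit of the
# approach described (constant hazard rate, Gaussian observations with known
# variance); a simplified textbook version, not the authors' algorithm.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(4.0, 1.0, 60)])  # shift at t=60

hazard = 1.0 / 100.0          # prior probability of a change at each step
obs_var, mu0, var0 = 1.0, 0.0, 10.0

run_probs = np.array([1.0])   # P(run length r); starts with r = 0
means, variances = np.array([mu0]), np.array([var0])
map_run_length = []

for x in data:
    pred = norm.pdf(x, loc=means, scale=np.sqrt(variances + obs_var))
    growth = run_probs * pred * (1 - hazard)          # run continues
    change = (run_probs * pred * hazard).sum()        # run resets to 0
    run_probs = np.append(change, growth)
    run_probs /= run_probs.sum()

    # conjugate Gaussian update of the mean for every possible run length
    new_vars = 1.0 / (1.0 / variances + 1.0 / obs_var)
    new_means = new_vars * (means / variances + x / obs_var)
    means = np.append(mu0, new_means)
    variances = np.append(var0, new_vars)
    map_run_length.append(int(run_probs.argmax()))

detected = next((t for t in range(60, len(data)) if map_run_length[t] < 5), None)
print("run length collapsed (change detected) near t =", detected)
```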

Keywords: time series, fluctuation in statistical characteristics, optimal learning, change-point algorithm

Procedia PDF Downloads 410
25037 Scattered Places in Stories: Singularity and Pattern in Geographic Information

Authors: I. Pina, M. Painho

Abstract:

Increased knowledge about the nature of place, and about the conditions under which space becomes place, is a key factor for better urban planning and place-making. Although there is broad consensus on the relevance of this knowledge, difficulties remain in relating the theoretical framework about place to urban management. Issues related to the representation of places are among the greatest obstacles to overcoming this gap. With this critical discussion, based on a literature review, we intend to explore, within a common framework for geographical analysis, the potential of stories to spell out place meanings, bringing together qualitative text analysis and text mining in order to capture and represent both the singularity contained in each person's life history and the patterns of social processes that shape places. The development of this reasoning is based on extensive geographical thought about place and on theoretical advances in the field of Geographic Information Science (GISc).

Keywords: discourse analysis, geographic information science, place, place-making, stories

Procedia PDF Downloads 174
25036 Surface Pressure Distributions for a Forebody Using Pressure Sensitive Paint

Authors: Yi-Xuan Huang, Kung-Ming Chung, Ping-Han Chung

Abstract:

Pressure sensitive paint (PSP), which relies on the oxygen quenching of a luminescent molecule, is an optical technique used on wind-tunnel models. A full-field pressure pattern with low aerodynamic interference can be obtained, and it is becoming an alternative to pressure measurements using pressure taps. In this study, a polymer-ceramic PSP was used, with toluene as a solvent. The porous particle and polymer were silica gel (SiO₂) and RTV-118 (3g:7g), respectively. The compound was sprayed onto the model surface using a spray gun. The absorption and emission spectra for Ru(dpp) as a luminophore were 441-467 nm and 597 nm, respectively. A Revox SLG-55 light source with a short-pass filter (550 nm) and a 14-bit CCD camera with a long-pass filter (600 nm) were used to illuminate the PSP and to capture images. This study determines surface pressure patterns for the forebody of an AGARD B model in a compressible flow. Since no experimental data for the surface pressure distributions are available, a numerical simulation was conducted using ANSYS Fluent, and the computed lift and drag coefficients were compared with data in the open literature. The experiments were conducted in a transonic wind tunnel at the Aerospace Science and Research Center, National Cheng Kung University. The freestream Mach number was 0.83, and the angle of attack ranged from -4 to 8 degrees. The deviation between PSP and the numerical simulation is within 5%. However, the effect of the light source setup should be taken into account to address the relative error.
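
For context, PSP intensity is typically related to pressure through the Stern-Volmer relation shown below; this is the standard textbook form, not necessarily the exact calibration used in this study.

```latex
% Standard Stern-Volmer form used for PSP calibration (textbook relation, not
% necessarily this study's exact calibration): I_ref and p_ref are intensity and
% pressure at reference conditions, and A(T), B(T) are calibration constants.
\frac{I_{\mathrm{ref}}}{I} = A(T) + B(T)\,\frac{p}{p_{\mathrm{ref}}}
```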

Keywords: pressure sensitive paint, forebody, surface pressure, compressible flow

Procedia PDF Downloads 111
25035 Determination of the Risks of Heart Attack at the First Stage as Well as Their Control and Resource Planning with the Method of Data Mining

Authors: İbrahim Kara, Seher Arslankaya

Abstract:

Data mining, frequently preferred in the field of engineering in particular, has now begun to be used in the field of health as well, since the data in the health sector have reached great dimensions. Data mining aims to reveal models from large amounts of raw data in line with a given purpose and to search for the rules and relationships that enable predictions about the future to be made from large data sets. It helps the decision-maker find the relationships among the data that matter at the decision-making stage. This study aims to determine the risk of heart attack at the first stage, to control it, and to plan the associated resources using data mining. Through the early and correct diagnosis of heart attacks, the aim is to reveal the factors that affect the disease, to protect health and choose the right treatment methods, to reduce health expenditures, and to shorten the duration of patients' hospital stays. In this way, the diagnosis and treatment costs of a heart attack will be scrutinized, which will be useful for determining the risk of the disease at the first stage, controlling it, and planning its resources.
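
A hedged sketch of the kind of first-stage risk model such a data-mining study might build is shown below; the features, records, and model choice (logistic regression) are invented for illustration and are not taken from the study.

```python
# Hedged sketch of the kind of data-mining model discussed: a logistic regression
# trained on a few routine risk factors to flag elevated heart-attack risk at an
# early stage. The feature names and records are invented, not clinical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: age, systolic blood pressure, cholesterol, smoker (1/0)
X = np.array([
    [45, 120, 190, 0], [61, 150, 250, 1], [52, 135, 220, 0], [68, 160, 270, 1],
    [39, 118, 180, 0], [58, 148, 240, 1], [49, 128, 205, 0], [72, 165, 280, 1],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # 1 = had a heart attack

model = LogisticRegression(max_iter=1000).fit(X, y)
new_patient = np.array([[55, 142, 230, 1]])
print("predicted risk of heart attack:", model.predict_proba(new_patient)[0, 1].round(2))
```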

Keywords: data mining, decision support systems, heart attack, health sector

Procedia PDF Downloads 342
25034 Health Information Needs and Utilization of Information and Communication Technologies by Medical Professionals in a Northern City of India

Authors: Sonika Raj, Amarjeet Singh, Vijay Lakshmi Sharma

Abstract:

Introduction: In the 21st century, owing to the revolution in Information and Communication Technologies (ICTs), there has been phenomenal development in the quality and quantity of knowledge in the field of medical science. Physicians' access to relevant information is therefore critical to the delivery of effective healthcare services to patients. The study was conducted to assess the information needs and attitudes of medical professionals; to determine the sources and channels of information used by them; and to ascertain their current usage of ICTs and the barriers they face in using ICTs to access health information. Methodology: This descriptive cross-sectional study was carried out in 2015 on one hundred medical professionals working in the public and private sectors of Chandigarh. The study used both quantitative and qualitative methods for data collection. A semi-structured questionnaire and an interview schedule were used to collect data on information-seeking needs, access to ICTs, and barriers to healthcare information access. Quantitative data were analysed using SPSS-16, and qualitative data were analysed using a thematic approach. Results: The most preferred sources for accessing healthcare information were the internet (85%), trainings (61%), and communication with colleagues (57%). Respondents wanted information on new drug therapies and the latest developments in their respective fields. All had access to a computer, but almost half assessed their computer knowledge as average, and only 3% had received training in its use. The educational status (p=0.004), place of work (p=0.004), number of years in the job (p=0.004), and sector of the job (p=0.04) of doctors were found to be significantly associated with their active search for information. The major themes that emerged from the interviews were: the need for, types of, and sources of healthcare information; exchange of information among different levels of healthcare providers; usage of ICTs to obtain and share information; barriers to accessing healthcare information; and the quality of health information materials and involvement in their development process. Conclusion and Recommendations: Medical professionals need information in the course of their work; however, their information needs are not being adequately met. There should be training of professionals in internet skills, and a course on bioinformatics should be incorporated into the curricula of medical students. A policy framework must be formulated that encourages and promotes the use of ICTs as tools for health information access and dissemination.

Keywords: health information, ICTs, medical professionals, qualitative

Procedia PDF Downloads 332
25033 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder

Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen

Abstract:

Including data from previous studies (historical data) in the analysis of a current study may reduce the sample size requirement and/or increase the power of the analysis. The most common example is incorporating historical control data into the analysis of a current clinical trial. However, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for a single control arm and for multiple historical control arms. Here, we examine the performance of the MAP and MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and negative binomial models. We conducted an extensive simulation study to assess the performance of the Bayesian approaches, and we illustrate them on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to statistical power. When the means across the control arms differ, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not. In contrast, when the dispersion parameters differ, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data.
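
The basic mechanism of a power prior for Poisson counts with a conjugate Gamma prior can be sketched in closed form; note that the weight delta is fixed here, whereas the modified power prior treats it as unknown, and all counts below are invented.

```python
# Simplified sketch of a fixed-weight power prior for Poisson count data with a
# conjugate Gamma prior; the modified power prior in the paper treats the weight
# delta as unknown, so this is only the basic mechanism. All counts are invented.
import numpy as np

hist_events, hist_exposure = 42, 30.0   # historical control arm (events, patient-years)
curr_events, curr_exposure = 12, 10.0   # current control arm
a0, b0 = 0.5, 0.1                       # weak initial Gamma(a0, b0) prior on the rate
delta = 0.5                             # discounting weight for the historical data

# Gamma posterior: the historical likelihood enters raised to the power delta
a_post = a0 + delta * hist_events + curr_events
b_post = b0 + delta * hist_exposure + curr_exposure

post_mean = a_post / b_post
draws = np.random.default_rng(0).gamma(a_post, 1 / b_post, 100_000)
ci = np.quantile(draws, [0.025, 0.975])
print(f"posterior mean rate: {post_mean:.2f} events/patient-year, 95% CI {ci.round(2)}")
```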

Keywords: count data, meta-analytic prior, negative binomial, poisson

Procedia PDF Downloads 104
25032 Strategic Citizen Participation in Applied Planning Investigations: How Planners Use Etic and Emic Community Input Perspectives to Fill-in the Gaps in Their Analysis

Authors: John Gaber

Abstract:

Planners regularly use citizen input as empirical data to help them better understand community issues they know very little about. This type of community data is based on the lived experiences of local residents and is known as "emic" data. What is becoming more common practice for planners is their use of data from local experts and stakeholders (known as "etic" data or the outsider perspective) to help them fill in the gaps in their analysis of applied planning research projects. Utilizing international Health Impact Assessment (HIA) data, I look at who planners invite to their citizen input investigations. Research presented in this paper shows that planners access a wide range of emic and etic community perspectives in their search for the “community’s view.” The paper concludes with how planners can chart out a new empirical path in their execution of emic/etic citizen participation strategies in their applied planning research projects.

Keywords: citizen participation, emic data, etic data, Health Impact Assessment (HIA)

Procedia PDF Downloads 471
25031 Teachers’ Perception of Implementing a Norm Critical Pedagogical Perspective – A Case Study of a Swedish Behavioural Science Programme

Authors: Sophia Yakhlef

Abstract:

Norm-critical pedagogy is an approach originating from intersectional gender pedagogy, feminist pedagogy, queer pedagogy, and critical pedagogy. In the Swedish context, the norm critical approach is rising in popularity, and norms that are highlighted or challenged are, for example, various dimensions of power such as ’whiteness norm’, discourses of ’Swedishness’, ’middle class norm’, heteronormativity, and body functionality. Instead of seeing students as a homogenous group, intersectional pedagogy focuses on the consequences of differences and on critically paying attention to differences. The perspective encourages teachers to assess their teaching methods, material, and the course literature provided in their education. The classical sociological literature that most students encounter when studying behaviour science or sociology has, in recent years, been referred to as the sociological canon. The sociological perspectives of the classical scholars included in the canon have, in many ways, shaped how we perceive the history of sociology and theories of the modern world in general. The sociological canon has, in recent decades, been challenged by, amongst others, feminist, post-colonial, and queer theorists. This urges us to further investigate the implications that this might have on sociological and behavioural science education, as well as on pedagogical considerations and teaching methods. This qualitative case study focuses on the experiences of implementing a norm critical pedagogical perspective in an online behavioural science programme at Kristianstad University in Sweden. Interviews and informal conversations were conducted in 2022 with teachers regarding their experiences of teaching online, of implementing a student-centred learning approach, and their experiences of implementing a norm critical perspective in sociology and criminology courses. The study demonstrates the inclusion aspect of online education, the benefits of adopting a norm critical perspective, the challenges that arise when updating course literature, and the urgent need for guidance and education for teachers regarding inclusion and paying attention to power asymmetry.

Keywords: norm critical pedagogy, online education, sociological canon, Sweden

Procedia PDF Downloads 61
25030 Crime Prevention with Artificial Intelligence

Authors: Mehrnoosh Abouzari, Shahrokh Sahraei

Abstract:

Today, with the increase in the quantity, severity, and variety of crimes, crime prevention faces a serious challenge: human resources alone, using traditional methods, will not be effective. One of the developments of the modern world is the presence of artificial intelligence in various fields, including criminal law; in fact, the use of artificial intelligence in criminal investigations and in fighting crime is a necessity in today's world. The use of artificial intelligence goes far beyond, and is even separate from, other technologies in the struggle against crime, and its application in criminal science extends beyond prevention to the prediction of crime. Crime prevention considers three factors (the crime, the offender, and the victim) and seeks to change their conditions: on the assumption that the offender acts rationally, it increases the cost and risk of crime so that the offender desists from delinquency, and it makes the victim aware of self-care and of the possibility of exposure to danger, or otherwise makes crimes more difficult to commit. Artificial intelligence in the field of combating crime, social harm, and danger acts like an all-seeing eye that, regardless of time and place, looks into the future and predicts the occurrence of a possible crime, thus preventing crimes from occurring. The purpose of this article is to collect and analyze the studies conducted on the use of artificial intelligence in predicting and preventing crime: how capable is this technology of predicting and preventing crime? The results show that the artificial intelligence technologies in use are capable of predicting and preventing crime and can find patterns in large data sets in a much more efficient way than humans. In crime prediction and prevention, the term artificial intelligence refers to the increasing use of technologies that apply algorithms to large sets of data to assist or replace police. In our discussion, artificial intelligence is used for predicting and preventing crime, including predicting the time and place of future criminal activities and effectively identifying patterns and accurately predicting future behavior through data mining, machine learning, deep learning, data analysis, and neural networks. Because the knowledge of criminologists can provide insight into risk factors for criminal behavior, among other issues, computer scientists can match this knowledge with the datasets that artificial intelligence uses.

Keywords: artificial intelligence, criminology, crime, prevention, prediction

Procedia PDF Downloads 62
25029 Canada's "Flattened Curve": A Geospatial Temporal Analysis of Canada's Amelioration of the SARS-CoV-2 Pandemic Through Coordinated Government Intervention

Authors: John Ahluwalia

Abstract:

As an affluent first-world nation, Canada took swift and comprehensive action during the outbreak of the SARS-CoV-2 (COVID-19) pandemic compared to other countries in the same socio-economic cohort. The United States has stumbled to overcome obstacles most developed nations have faced, which has led to significantly more per capita cases and deaths. The initial outbreaks of COVID-19 occurred in the US and Canada within days of each other and posed similar potentially catastrophic threats to public health, the economy, and governmental stability. On a macro level, events that take place in the US have a direct impact on Canada. For example, both countries tend to enter and exit economic recessions at approximately the same time, they are each other’s largest trading partners, and their currencies are inexorably linked. Why is it that Canada has not shared the same fate as the US (and many other nations) that have realized much worse outcomes relative to the COVID-19 pandemic? Variables intrinsic to Canada’s national infrastructure have been instrumental in the country’s efforts to flatten the curve of COVID-19 cases and deaths. Canada’s coordinated multi-level governmental effort has allowed it to create and enforce policies related to COVID-19 at both the national and provincial levels. Canada’s policy of universal healthcare is another variable. Health care and public health measures are enforced on a provincial level, and it is within each province’s jurisdiction to dictate standards for public safety based on scientific evidence. Rather than introducing confusion and the possibility of competition for resources such as PPE and vaccines, Canada’s multi-level chain of government authority has provided consistent policies supporting national public health and local delivery of medical care. This paper will demonstrate that the coordinated efforts on provincial and federal levels have been the linchpin in Canada’s relative success in containing the deadly spread of the COVID-19 virus.

Keywords: COVID-19, Canada, GIS, temporal analysis, ESRI

Procedia PDF Downloads 135
25028 Data Augmentation for Automatic Graphical User Interface Generation Based on Generative Adversarial Network

Authors: Xulu Yao, Moi Hoon Yap, Yanlong Zhang

Abstract:

As a branch of artificial neural networks, deep learning is widely used in the field of image recognition, but a lack of training data leads to imperfect model learning. By analysing the data-scale requirements of deep learning for the application of GUI generation, we find that collecting a GUI dataset is a time-consuming and labour-intensive project that cannot easily meet the needs of current deep learning networks. To solve this problem, this paper proposes a semi-supervised deep learning model that relies on the original small-scale dataset to produce a large amount of reliable data. By combining a recurrent neural network with a generative adversarial network, the recurrent network learns the sequential relationships and characteristics of the data, the adversarial network generates plausible samples, and the Rico dataset is thereby expanded. Relying on this network structure, the characteristics of the collected data can be analysed well, and a large amount of reasonable data can be generated according to these characteristics. After data processing, a reliable dataset for model training can be formed, which alleviates the problem of dataset shortage in deep learning.
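
A minimal generative adversarial network in PyTorch is sketched below on a toy two-dimensional distribution; it is a generic GAN, not the paper's combined recurrent/adversarial model for GUI layouts, and the architecture and hyperparameters are illustrative.

```python
# Minimal generative adversarial network sketch in PyTorch (a generic toy GAN on
# 2-D points, not the paper's model for GUI layouts): the generator learns to
# produce samples resembling the real data and the discriminator learns to tell
# them apart. Architecture and hyperparameters are illustrative.
import torch
import torch.nn as nn

real_data = torch.randn(512, 2) * 0.3 + torch.tensor([2.0, -1.0])  # toy "real" distribution

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # discriminator update: real -> 1, fake -> 0
    z = torch.randn(64, 8)
    fake = G(z).detach()
    real = real_data[torch.randint(0, len(real_data), (64,))]
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator update: try to make the discriminator output 1 on fakes
    z = torch.randn(64, 8)
    g_loss = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("mean of generated samples:", G(torch.randn(1000, 8)).mean(dim=0).detach())
```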

Keywords: GUI, deep learning, GAN, data augmentation

Procedia PDF Downloads 166
25027 Modelling Rainfall-Induced Shallow Landslides in the Northern New South Wales

Authors: S. Ravindran, Y. Liu, I. Gratchev, D. Jeng

Abstract:

Rainfall-induced shallow landslides are common in northern New South Wales (NSW), Australia. From 2009 to 2017, around 105 rainfall-induced landslides occurred along road corridors and caused temporary road closures in northern NSW. The rainfall causing shallow landslides follows different intensity distributions, varying from uniform and normal to decreasing and increasing intensity, and according to historical data the duration of rainfall varied from one day to 18 days. The objective of this research is to analyse the slope instability of some of the sites in northern NSW under varying cumulative rainfall using SLOPE/W and SEEP/W, and to compare the results with field data on rainfall that caused shallow landslides. Rainfall and topographical data from public authorities and soil data obtained from laboratory tests will be used for this modelling. In accordance with the field data, shallow landslides are likely if the cumulative rainfall is between 100 mm and 400 mm.

Keywords: landslides, modelling, rainfall, suction

Procedia PDF Downloads 148
25026 Machine Learning-Enabled Classification of Climbing Using Small Data

Authors: Nicholas Milburn, Yu Liang, Dalei Wu

Abstract:

Athlete performance scoring within the climbing domain presents interesting challenges, as the sport does not have an objective way to assign skill. Assessing skill levels within any sport is valuable, as it can be used to mark progress while training, and it can help an athlete choose appropriate climbs to attempt. Machine learning-based methods are popular for complex problems like this. The dataset available was composed of dynamic force data recorded during climbing; however, this dataset came with challenges such as data scarcity, class imbalance, and temporal heterogeneity. Investigated solutions to these challenges include data augmentation, temporal normalization, conversion of time series to the spectral domain, and cross-validation strategies. The investigated solutions to the classification problem included lightweight machine learning classifiers (KNN and SVM) as well as deep learning with a CNN. The best-performing model had an 80% accuracy. In conclusion, there seems to be enough information within climbing force data to accurately categorize climbers by skill.
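
Two of the ideas described, spectral-domain features and a lightweight classifier, can be illustrated with a short sketch; the synthetic signals below stand in for real climbing force recordings and are not the study's data or pipeline.

```python
# Illustrative sketch (not the study's pipeline) of two of the ideas described:
# force time series are converted to the spectral domain with an FFT and then
# classified with a lightweight SVM. The synthetic signals stand in for real
# climbing force recordings.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_signal(skilled):
    t = np.linspace(0, 4, 256)
    freq = 1.5 if skilled else 4.0          # "skilled" climbers pull more smoothly
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)

signals = [make_signal(skilled) for skilled in ([True] * 20 + [False] * 20)]
labels = np.array([1] * 20 + [0] * 20)

# spectral features: magnitude of the first 32 FFT coefficients
X = np.array([np.abs(np.fft.rfft(s))[:32] for s in signals])

scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)
print("cross-validated accuracy:", scores.mean().round(2))
```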

Keywords: classification, climbing, data imbalance, data scarcity, machine learning, time sequence

Procedia PDF Downloads 128
25025 Analysis of Expression Data Using Unsupervised Techniques

Authors: M. A. I Perera, C. R. Wijesinghe, A. R. Weerasinghe

Abstract:

This study was conducted to review and identify the unsupervised techniques that can be employed to analyze gene expression data in order to identify better subtypes of tumors. Identifying cancer subtypes helps improve the efficacy and reduce the toxicity of treatments by providing clues for finding targeted therapeutics. The process of gene expression data analysis is described in three steps: preprocessing, clustering, and cluster validation. Feature selection is important since genomic data are high-dimensional, with a large number of features compared to the number of samples. Hierarchical clustering and K-Means are often used in the analysis of gene expression data, and several cluster validation techniques are used to validate the resulting clusters. Heatmaps are an effective external validation method that allows the identified classes to be compared with clinical variables and analysed visually.
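
A short sketch of the workflow outlined (variance-based feature selection, then K-Means and hierarchical clustering of samples) is given below; the expression matrix is randomly generated for illustration and is not tied to any real dataset.

```python
# Short sketch of the workflow outlined (not tied to any particular dataset):
# select the most variable genes, cluster the samples with K-Means and with
# hierarchical clustering, and compare the resulting labels. The expression
# matrix here is randomly generated for illustration.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(0)
# 30 samples x 500 genes, with two artificial sample groups
expr = rng.normal(0, 1, size=(30, 500))
expr[15:, :40] += 2.5                      # second group over-expresses 40 genes

# feature selection: keep the 50 most variable genes
top_genes = np.argsort(expr.var(axis=0))[-50:]
X = expr[:, top_genes]

km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
hc_labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print("K-Means groups:      ", km_labels)
print("Hierarchical groups: ", hc_labels)
```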

Keywords: cancer subtypes, gene expression data analysis, clustering, cluster validation

Procedia PDF Downloads 131