Search results for: harmonic data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25385

22115 Leveraging the Power of Dual Spatial-Temporal Data Scheme for Traffic Prediction

Authors: Yang Zhou, Heli Sun, Jianbin Huang, Jizhong Zhao, Shaojie Qiao

Abstract:

Traffic prediction is a fundamental problem in urban environments, facilitating the smart management of various businesses, such as taxi dispatching, bike relocation, and stampede alerts. Most earlier methods rely on identifying the intrinsic spatial-temporal correlation to forecast. However, the complex nature of this problem calls for a more sophisticated solution that can simultaneously capture the mutual influence of both adjacent and far-flung areas, with time-dimension information incorporated seamlessly. To tackle this difficulty, we propose a new multi-phase architecture, DSTDS (Dual Spatial-Temporal Data Scheme for traffic prediction), that aims to reveal the underlying relationships that determine future traffic trends. First, a graph-based neural network with an attention mechanism is devised to obtain the static features of the road network. Then, a multi-granularity recurrent neural network is built in conjunction with the knowledge from a grid-based model. Subsequently, the preceding output is fed into a spatial-temporal super-resolution module. With this three-phase structure, we carry out extensive experiments on several real-world datasets to demonstrate the effectiveness of our approach, which surpasses several state-of-the-art methods.
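
The first phase (a graph neural network with an attention mechanism over the road network) can be illustrated with a minimal sketch. The attention scheme, feature shapes, and toy road graph below are our own illustrative assumptions, not the DSTDS implementation:

```python
import numpy as np

def attention_aggregate(features, adjacency):
    """One illustrative graph-attention layer over a road network.

    Each node attends to its neighbors; attention scores come from the
    dot product of node features, masked by the adjacency matrix.
    """
    scores = features @ features.T                      # pairwise similarity
    scores = np.where(adjacency > 0, scores, -np.inf)   # keep graph edges only
    scores = scores - scores.max(axis=1, keepdims=True) # stabilized softmax
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=1, keepdims=True)
    return weights @ features                           # attention-weighted mix

# Toy road network: 3 nodes, node 0 linked to 1 and 2 (self-loops included)
adj = np.array([[1, 1, 1],
                [1, 1, 0],
                [1, 0, 1]], dtype=float)
feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
static_repr = attention_aggregate(feats, adj)
```

Stacking such layers lets each road segment's static representation absorb information from both adjacent and, transitively, far-flung areas.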

Keywords: traffic prediction, spatial-temporal, recurrent neural network, dual data scheme

Procedia PDF Downloads 117
22114 Practicing Inclusion for Hard of Hearing and Deaf Students in Regular Schools in Ethiopia

Authors: Mesfin Abebe Molla

Abstract:

This research aims to examine the practices of inclusion of hard of hearing and deaf students in regular schools. It also focuses on exploring strategies that allow students who are hard of hearing or deaf (HH-D) to benefit optimally from inclusion. A concurrent mixed-methods research design was used to collect quantitative and qualitative data. The instruments used to gather data for this study were a questionnaire, semi-structured interviews, and observations. A total of 102 HH-D students and 42 primary and high school teachers were selected using a simple random sampling technique as participants for the quantitative data. A non-probability sampling technique was also employed to select 14 participants (4 school principals, 6 teachers, and 4 parents of HH-D students), who were interviewed to collect qualitative data. Descriptive and inferential statistical techniques (independent-samples t-test, one-way ANOVA, and multiple regression) were employed to analyze the quantitative data. The qualitative data were analyzed by thematic analysis. The findings showed that individual principals, teachers, and parents demonstrated strong commitment and effort toward practicing inclusion of HH-D students effectively; however, most of the core values of inclusion were missing in both schools. Most of the teachers (78.6%) and HH-D students (75.5%) had negative attitudes and considerable reservations about the feasibility of inclusion of HH-D students in both schools. Furthermore, there was a statistically significant difference in attitude toward inclusion between the teachers of the two schools, and between teachers who had and had not taken additional training on inclusive education (IE) and sign language. The study also indicated a statistically significant difference in attitude toward inclusion between hard of hearing and deaf students.
However, the overall contribution of the demographic variables of teachers and HH-D students to their attitude toward inclusion was not statistically significant. The findings also showed that HH-D students did not have access to a modified curriculum that would maximize their abilities and help them learn together with their hearing peers. In addition, there is no clear and adequate direction for the medium of instruction. Poor school organization and management; lack of commitment, financial resources, and collaboration; teachers' inadequate training in inclusive education (IE) and sign language; large class sizes; inappropriate assessment procedures; lack of trained deaf adult personnel who can serve as role models for HH-D students; and lack of involvement by parents and community members were some of the major factors that affect the practice of inclusion of HH-D students. Finally, recommendations are made, based on the findings of the study, to improve the practices of inclusion of HH-D students and to make their inclusion an integrated part of Ethiopian education.

Keywords: deaf, hard of hearing, inclusion, regular schools

Procedia PDF Downloads 343
22113 Analysis and Identification of Different Factors Affecting Students’ Performance Using a Correlation-Based Network Approach

Authors: Jeff Chak-Fu Wong, Tony Chun Yin Yip

Abstract:

The transition from secondary school to university seems exciting for many first-year students but can be more challenging than expected. Enabling instructors to know students' learning habits and styles enhances their understanding of the students' learning backgrounds, allows teachers to provide better support for their students, and therefore has high potential to improve teaching quality and learning, especially in mathematics-related courses. The aim of this research is to collect students' data using online surveys, to analyze student factors using learning analytics and educational data mining, and to discover the characteristics of students at risk of falling behind in their studies based on their previous academic backgrounds and the collected data. In this paper, we use correlation-based distance methods and mutual information for measuring relationships between student factors. We then develop a factor network using the minimum spanning tree method and consider, as further study, analyzing the topological properties of these networks using social network analysis tools. Under the framework of mutual information, two graph-based feature filtering methods, i.e., unsupervised and supervised infinite feature selection algorithms, are used to rank and select appropriate subsets of features, yielding effective results in identifying the factors that put students at risk of failing. This discovered knowledge may help students as well as instructors enhance educational quality by identifying possible under-performers at the beginning of the first semester and giving them special attention in order to support their learning process and improve their learning outcomes.
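
The correlation-based distance and minimum-spanning-tree construction described above can be sketched as follows; the tiny data set and the Prim's-algorithm implementation are illustrative assumptions, not the authors' code:

```python
import numpy as np

def correlation_distance(X):
    """Correlation-based distance between factors (columns of X):
    d_ij = sqrt(2 * (1 - r_ij)), so highly correlated factors are close."""
    r = np.corrcoef(X, rowvar=False)
    return np.sqrt(np.clip(2.0 * (1.0 - r), 0.0, None))

def minimum_spanning_tree(dist):
    """Prim's algorithm on a dense distance matrix; returns MST edges."""
    n = dist.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        _, i, j = min((dist[i, j], i, j)
                      for i in in_tree for j in range(n) if j not in in_tree)
        edges.append((i, j))
        in_tree.add(j)
    return edges

# Toy data: 5 students x 3 factors; factor 1 is twice factor 0
X = np.array([[1, 2, 5], [2, 4, 4], [3, 6, 3], [4, 8, 1], [5, 10, 2]], float)
tree = minimum_spanning_tree(correlation_distance(X))
```

The MST keeps only the n-1 strongest relationships, which makes the resulting factor network sparse enough to inspect with social network analysis tools.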

Keywords: students' academic performance, correlation-based distance method, social network analysis, feature selection, graph-based feature filtering method

Procedia PDF Downloads 129
22112 Good Environmental Governance Realization among the Three King Mongkut's Institutes of Technology in Bangkok, Thailand

Authors: Pastraporn Thipayasothorn, Vipawan Tadapratheep, Jintana Nokyoo

Abstract:

This study examines the physical realization of good environmental governance, drawing on environmental principles, educational psychology, and architecture, at the three King Mongkut's Institutes of Technology. It investigates the physical environmental factors related to good environmental governance, the communication between good environmental governance and the physical environment, and physical environmental design policy. Data were collected through a survey, observations, and a questionnaire administered to students of the three King Mongkut's Institutes of Technology, and the relationship between building utilization and awareness of good environmental governance was analyzed. The data analysis showed that balanced and creative participation by project users and communities in promoting environmental good governance in the institutes supports good governance and environmental development in the future.

Keywords: built environment, good governance, environmental governance, physical environment

Procedia PDF Downloads 438
22111 Using Computer Vision to Detect and Localize Fractures in Wrist X-ray Images

Authors: John Paul Q. Tomas, Mark Wilson L. de los Reyes, Kirsten Joyce P. Vasquez

Abstract:

The most frequent type of fracture is a wrist fracture, which is often difficult for medical professionals to detect and localize. In this study, fractures in wrist x-ray images were localized and identified using deep learning and computer vision. The researchers used image filtering, masking, morphological operations, and data augmentation for image preprocessing, and trained RetinaNet and Faster R-CNN models with ResNet50 backbones and Adam optimizers separately for each image filtering technique and projection. The RetinaNet model with the anisotropic diffusion smoothing filter trained for 50 epochs obtained the greatest accuracy of 99.14%, precision of 100%, sensitivity/recall of 98.41%, specificity of 100%, and an IoU score of 56.44% for the posteroanterior projection using augmented data. For the lateral projection using augmented data, the RetinaNet model with the anisotropic diffusion filter trained for 50 epochs produced the highest accuracy of 98.40%, precision of 98.36%, sensitivity/recall of 98.36%, specificity of 98.43%, and an IoU score of 58.69%. When comparing the test results across the individual projections, models, and image filtering techniques, the anisotropic diffusion filter trained for 50 epochs produced the best classification and regression scores for both projections.
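
Anisotropic diffusion smoothing of the kind used in the preprocessing step is classically the Perona-Malik scheme, which smooths homogeneous regions while preserving edges such as bone contours. A minimal sketch follows; the iteration count, conduction constant kappa, and step size lam are illustrative defaults, not the paper's settings:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, lam=0.2):
    """Perona-Malik anisotropic diffusion smoothing (illustrative sketch)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences toward the four neighbors (zero flux at borders)
        dn = np.roll(u, 1, axis=0) - u;  dn[0, :] = 0
        ds = np.roll(u, -1, axis=0) - u; ds[-1, :] = 0
        de = np.roll(u, -1, axis=1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, axis=1) - u;  dw[:, 0] = 0
        # edge-stopping conduction: small where the gradient is large
        c = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u

rng = np.random.default_rng(0)
noisy = rng.normal(100.0, 10.0, (32, 32))   # stand-in for an x-ray patch
smoothed = anisotropic_diffusion(noisy)
```

Because the conduction coefficient shrinks across strong gradients, noise inside bone and soft-tissue regions is averaged away while fracture edges stay sharp for the detector.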

Keywords: artificial intelligence, computer vision, wrist fracture, deep learning

Procedia PDF Downloads 73
22110 Information Communication Technologies and Renewable Technologies' Impact on Irish People's Lifestyle: A Constructivist Grounded Theory Study

Authors: Hamilton V. Niculescu

Abstract:

This paper discusses findings relating to people's engagement with mobile communication technologies and remote automated systems. This interdisciplinary study employs a constructivist grounded theory methodology; qualitative data generated through in-depth semi-structured interviews with 18 people living in Ireland were corroborated with participant observations and quantitative data. Additional data were collected following participants' remote interaction with six custom-built automated enclosures located at six different sites around Dublin, Republic of Ireland. This paper argues that ownership and education play a vital role in people's engagement with and adoption of new technologies. Analysis of participants' behavior and attitudes towards information communication technologies (ICT) suggests that innovations do not always improve people's social inclusion. Technological innovations are sometimes perceived as destroying communities and creating a dysfunctional society. Moreover, the findings indicate that a lack of public information and support from Irish governmental institutions, as well as limited off-the-shelf availability, has led to low trust in and adoption of renewable technologies. Limited variation in participants' behavior and interaction patterns with the technologies was observed during the study. This suggests that people will eventually adopt new technologies according to their needs and experience, even if they initially rejected the idea of changing their lifestyle.

Keywords: automation, communication, ICT, renewables

Procedia PDF Downloads 112
22109 Use of Cloud Computing and Smart Devices in Healthcare

Authors: Nikunj Agarwal, M. P. Sebastian

Abstract:

Cloud computing can reduce the start-up expenses of implementing electronic health records (EHR). However, many healthcare institutions are yet to implement cloud computing due to the associated privacy and security issues. In this paper, we analyze the challenges and opportunities of implementing cloud computing in healthcare. We also analyze data from over 5,000 US hospitals that use telemedicine applications. This analysis helps to understand the importance of smartphones over desktop systems in different departments of healthcare institutions. The wide usage of smartphones and cloud computing allows ubiquitous and affordable access to health data by authorized persons, including patients and doctors. Cloud computing will prove to be beneficial to a majority of the departments in healthcare. Through this analysis, we attempt to identify the healthcare departments that may benefit significantly from the implementation of cloud computing.

Keywords: cloud computing, smart devices, healthcare, telemedicine

Procedia PDF Downloads 396
22108 Evaluation of the Impact of Pavement Roughness on Vehicle Emissions by HDM-4

Authors: Muhammad Azhar, Arshad Hussain

Abstract:

Vehicular emissions have increased in recent years due to rapid growth in world traffic, resulting in an increase in associated problems such as air pollution and climate change; it is therefore necessary to control vehicle emissions. This study looks at the effect of road maintenance on vehicle emissions. The Highway Development and Management tool (HDM-4) was used to evaluate this effect. Key data collected were traffic volume and composition, vehicle characteristics, pavement characteristics, and climate data of the study area. Two options were analysed using the HDM-4 software: the base case (do nothing) and overlay maintenance. The study showed a strong correlation between average roughness and yearly emission levels under both alternatives. Finally, the study showed that proper maintenance reduces both roughness and emissions.

Keywords: vehicle emissions, road roughness, IRI, maintenance, HDM-4, CO2

Procedia PDF Downloads 265
22107 Malaysian Students' Identity in Seminars by Observing, Interviewing and Conducting Focus Group Discussion

Authors: Zurina Khairuddin

Abstract:

The objective of this study is to explore the identities constructed and negotiated by Malaysian students in the UK and Malaysia when they interact in seminars. The study utilised classroom observation, interviews, and focus group discussions to collect the data. The participants of this study are first-year Malaysian students studying in the UK and Malaysia. The data collected were analysed utilising a combination of Conversation Analysis and the framework method. This study postulates that Malaysian students in the UK construct and negotiate flexible and different identities depending on the contexts they are in. It also shows that most Malaysian students in the UK and Malaysia are similar in the identities they construct and negotiate. This study suggests implications and recommendations for Malaysian students in the UK and Malaysia, and for other stakeholders such as the UK and Malaysian academic communities.

Keywords: conversation analysis, interaction patterns, Malaysian students, students' identity

Procedia PDF Downloads 182
22106 Generation of Knowledge with Self-Learning Methods for Ophthalmic Data

Authors: Klaus Peter Scherer, Daniel Knöll, Constantin Rieder

Abstract:

Problem and Purpose: Intelligent systems are available and helpful for supporting human decision processes, especially when complex surgical eye interventions are necessary and must be performed. Normally, such a decision support system consists of a knowledge-based module, which is responsible for the real assistance power, provided by explanation and logical reasoning processes. The interview-based acquisition and generation of the complex knowledge itself is crucial, because there are different correlations between the complex parameters. So, in this project, (semi-)automated self-learning methods are researched and developed to enhance the quality of such a decision support system. Methods: For ophthalmic data sets of real patients in a hospital, advanced data mining procedures are very helpful. In particular, subgroup analysis methods are developed, extended, and used to analyze and find the correlations and conditional dependencies between the structured patient data. After finding causal dependencies, a ranking must be performed for the generation of rule-based representations. For this, anonymized patient data are transformed into a special machine-readable format. The imported data are used as input for conditional probability algorithms to calculate the parameter distributions with respect to a given goal parameter. Results: In the field of knowledge discovery, advanced methods and applications could be applied to produce operation- and patient-related correlations. New knowledge was generated by finding causal relations between the operational equipment, the medical instances, and patient-specific history through a dependency ranking process. After transformation into association rules, logically based representations were available for the clinical experts to evaluate the new knowledge. The structured data sets take account of about 80 parameters as characteristic features per patient.
For patient groups of different sizes (100, 300, 500), both single-target and multi-target values were set for the subgroup analysis, so the newly generated hypotheses could be interpreted with regard to their dependency on, or independence of, the number of patients. Conclusions: The aim and the advantage of such a semi-automated self-learning process are the extension of the knowledge base by finding new parameter correlations. The discovered knowledge is transformed into association rules and serves as the rule-based representation of the knowledge in the knowledge base. Moreover, more than one goal parameter of interest can be considered by the semi-automated learning process. With ranking procedures, the strongest premises, as well as conjunctively associated conditions, can be found for concluding the goal parameter of interest. In this way, knowledge hidden in structured tables or lists can be extracted as a rule-based representation. This is a real assistance power for the communication with the clinical experts.
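
Ranking candidate association rules by conditional probability can be sketched as computing support and confidence over structured patient records. The record layout and field names below are hypothetical, not the project's actual schema:

```python
def rule_stats(records, premise, goal):
    """Support and confidence of the rule premise -> goal.

    Each record is a dict of parameter -> value; premise and goal are
    dicts of required parameter values. Confidence is the conditional
    probability P(goal | premise) estimated from the records.
    """
    def matches(rec, cond):
        return all(rec.get(k) == v for k, v in cond.items())

    n = len(records)
    n_premise = sum(matches(r, premise) for r in records)
    n_both = sum(matches(r, premise) and matches(r, goal) for r in records)
    support = n_both / n if n else 0.0
    confidence = n_both / n_premise if n_premise else 0.0
    return support, confidence

# Hypothetical ophthalmic records: equipment used vs. surgical outcome
records = [
    {"lens": "A", "outcome": "good"},
    {"lens": "A", "outcome": "good"},
    {"lens": "A", "outcome": "poor"},
    {"lens": "B", "outcome": "good"},
]
support, confidence = rule_stats(records, {"lens": "A"}, {"outcome": "good"})
```

Rules whose confidence (and support) exceed chosen thresholds would then be added to the knowledge base as rule-based representations for expert review.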

Keywords: expert system, knowledge-based support, ophthalmic decision support, self-learning methods

Procedia PDF Downloads 253
22105 Data-Driven Monitoring and Control of Water Sanitation and Hygiene for Improved Maternal Health in Rural Communities

Authors: Paul Barasa Wanyama, Tom Wanyama

Abstract:

Governments and development partners in low-income countries often prioritize building Water Sanitation and Hygiene (WaSH) infrastructure for healthcare facilities to improve maternal healthcare outcomes. However, the operation, maintenance, and utilization of this infrastructure are almost never considered. Many healthcare facilities in these countries use untreated water that is not monitored for quality or quantity. Consequently, it is common to run out of water while a patient is on their way to, or in, the operating theater. Further, the handwashing stations in healthcare facilities regularly run out of water or soap for months, and the latrines are typically not clean, in part due to the lack of water. In this paper, we present a system that uses the Internet of Things (IoT), big data, cloud computing, and AI to establish WaSH security in healthcare facilities, with a specific focus on maternal health. We have implemented smart sensors and actuators to monitor and control WaSH systems remotely to ensure their objectives are achieved. We have also developed a cloud-based system to analyze WaSH data in real time and communicate relevant information back to the healthcare facilities and their stakeholders (e.g., medical personnel, NGOs, ministry of health officials, facilities managers, community leaders, pregnant women, and new mothers and their families) to avert or mitigate problems before they occur.
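
The real-time monitoring-and-alert step might look like the following sketch; the sensor field names and thresholds are illustrative assumptions, not the deployed system's values:

```python
def wash_alerts(readings, min_level=0.2, max_turbidity=5.0):
    """Evaluate one facility's IoT sensor readings and return alert messages.

    readings: dict of sensor name -> latest value; missing sensors
    default to safe values so a partial payload raises no false alarms.
    """
    alerts = []
    if readings.get("tank_level_fraction", 1.0) < min_level:
        alerts.append("water level low: schedule refill before theater use")
    if readings.get("turbidity_ntu", 0.0) > max_turbidity:
        alerts.append("turbidity high: water treatment required")
    if not readings.get("soap_present", True):
        alerts.append("handwashing station out of soap")
    return alerts

# A reading that should trip all three alerts
critical = wash_alerts({"tank_level_fraction": 0.1,
                        "turbidity_ntu": 7.0,
                        "soap_present": False})
```

In the described architecture, messages like these would be pushed from the cloud back to facility staff and other stakeholders before the shortage affects a patient.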

Keywords: WaSH, internet of things, artificial intelligence, maternal health, rural communities, healthcare facilities

Procedia PDF Downloads 19
22104 Uncloaking Priceless Pieces of Evidence: Psychotherapy with an Older New Zealand Man; Contributions to Understanding Hidden Historical Phenomena and the Trans-Generation Transmission of Silent and Un-Witnessed Trauma

Authors: Joanne M. Emmens

Abstract:

This paper makes use of the case notes from a single psychoanalytically informed psychotherapy of a now 72-year-old man over a four-year period to explore the potential of qualitative data to be incorporated into a research methodology that can contribute theory and knowledge to the wider professional community involved in mental health care. The clinical material arising out of any psychoanalysis provides a potentially rich source of clinical data that could contribute valuably to our historical understanding of both individual and societal traumata. As psychoanalysis is primarily an investigation, it is argued that clinical case material is a rich source of qualitative data with relevance for sociological and historical understanding, and that it can illuminate important 'gaps' and collective blind spots that manifest unconsciously and are a contributing factor in the silent transmission of trauma across generations. By attending to this case material, the hope is to illustrate the value of using a psychoanalytically centred methodology. It is argued that the study of individual defences, and the manner in which they come into consciousness, allows insight into group defences and the unconscious forces that contribute to the silencing or un-noticing of important sources (or originators) of mental suffering.

Keywords: dream furniture (Bion) and psychotic functioning, reverie, screen memories, selected fact

Procedia PDF Downloads 199
22103 Exposure and Satisfaction toward Online News of Undergraduate Students in Thailand

Authors: Ekapon Thienthaworn

Abstract:

This research aims to study exposure and satisfaction toward online news among undergraduate students in Bangkok, Thailand. It is a survey study in which 400 questionnaires were used to collect data via the accidental sampling technique, and the data collected were analyzed with descriptive statistics. The results can be divided into two sections: (1) Undergraduate students in Bangkok consume online news mostly via smartphones; in most cases, they spend on average more than two hours per day, and the most common time for consuming news is 22:01-02:00. The primary source is Facebook, and the news genres of greatest interest are entertainment news and the headlines of the day. (2) Undergraduate students in Bangkok hold the positive attitude that online news is fast and easy to access; the main negative attitude concerns piracy. Finally, average satisfaction in consuming online news is at a high level.

Keywords: exposure, satisfaction, online news, Bangkok

Procedia PDF Downloads 247
22102 Using Real Truck Tours Feedback for Address Geocoding Correction

Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle

Abstract:

When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total travelled distance or the total time spent in tours by the trucks, and on maximizing the number of visited customers. They assume that the upstream real data used to carry out the optimization of a transporter's tours are free from errors, such as the customers' real constraints, their addresses, and their GPS coordinates. However, in real transport situations, upstream data are often of bad quality because of address geocoding errors and the irrelevance of addresses received from EDI (Electronic Data Interchange). In fact, geocoders are not exempt from errors and can return incorrect GPS coordinates. Also, even with a good geocoder, an inaccurate address can lead to bad geocoding. For instance, when the geocoder has trouble geocoding an address, it returns the coordinates of the center of the city. Another obvious geocoding issue is that the maps used by geocoders are not regularly updated, so new buildings may not exist on the maps until the next update. Thus, trying to optimize tours with incorrect customer GPS coordinates, which are the most important and basic input data for solving a vehicle routing problem, is not really useful and will lead to bad and incoherent solution tours, because the locations of the customers used for the optimization are very different from their real positions. Our work is supported by a logistics software editor, Tedies, and a transport company, Upsilon. We work with Upsilon's truck route data to carry out our experiments. These trucks are equipped with TomTom GPS units that continuously save their tour data (positions, speeds, tachograph information, etc.). We then retrieve these data to extract the real truck routes to work with.
The aim of this work is to use the experience of the driver and the feedback from real truck tours to validate the GPS coordinates of well-geocoded addresses and to correct badly geocoded addresses. Thereby, when a vehicle makes its tour, it should have trouble finding a visited customer's address at most once; in other words, the vehicle would be wrong at most once per customer address. Our method significantly improves the quality of the geocoding: we are able to automatically correct an average of 70% of the GPS coordinates of a tour's addresses. The remaining GPS coordinates are corrected manually, with the user given indications to help correct them. This study shows the importance of taking into account the feedback of the trucks to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and its GPS coordinates plays a major role in tour optimization. Unfortunately, address writing errors are very frequent. This feedback is naturally and usually taken into account by transporters (by asking drivers, calling customers…) to learn about their tours and bring corrections to upcoming tours. Hence, we develop a method to do a large part of this automatically.
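
The validation/correction step can be sketched by comparing each geocoded coordinate against the truck's observed stop position; the 200 m disagreement threshold below is an illustrative assumption, not the system's tuned value:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def correct_geocode(geocoded, observed_stop, threshold_m=200.0):
    """Validate or correct a geocoded (lat, lon) using the truck's
    observed stop position from the tour feedback."""
    if haversine_m(*geocoded, *observed_stop) > threshold_m:
        return observed_stop, True   # corrected from tour feedback
    return geocoded, False           # geocoding validated
```

A usage example: a stop observed a few dozen meters from the geocoded point validates the address, while a stop kilometers away (e.g., a city-center fallback) triggers a correction.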

Keywords: driver experience feedback, geocoding correction, real truck tours

Procedia PDF Downloads 674
22101 Filmic and Verbal Metaphors

Authors: Manana Rusieshvili, Rusudan Dolidze

Abstract:

This paper aims at 1) investigating the ways in which a traditional, monomodal written verbal metaphor can be transposed into a monomodal non-verbal (visual) or multimodal (aural-visual) filmic metaphor; and 2) exploring similarities and differences in the processes of encoding and decoding monomodal and multimodal metaphors. The empirical data on which the research is based embrace three sources: the novel by Harry Grey, 'The Hoods'; the script of the film 'Once Upon a Time in America' (English version by David Mills); and the resultant film by Sergio Leone. In order to achieve the above-mentioned goals, the research focuses on the following issues: 1) identification of verbal and non-verbal, monomodal and multimodal metaphors in the above-mentioned sources; 2) investigation of the ways and modes in which specific written monomodal metaphors appearing in the novel and the script are enacted in the film and become visual, aural, or visual-aural filmic metaphors; and 3) study of the factors that play an important role in the encoding and decoding of the filmic metaphor. The collection and analysis of the data were carried out in two stages: first, the relevant data, i.e., the monomodal metaphors from the novel, the script, and the film, were identified and collected. In the second, final stage, the metaphors taken from all three sources were analysed and compared, and two types of phenomena were selected for discussion: (1) monomodal written metaphors found in the novel and/or the script which become monomodal visual/aural metaphors in the film; (2) monomodal written metaphors found in the novel and/or the script which become multimodal, filmic (visual-aural) metaphors in the film.

Keywords: encoding, decoding, filmic metaphor, multimodality

Procedia PDF Downloads 526
22100 Normalized P-Laplacian: From Stochastic Game to Image Processing

Authors: Abderrahim Elmoataz

Abstract:

More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs, where each vertex represents measured data and each edge represents a relationship (connectivity, or certain affinities or interactions) between two vertices. Processing and analyzing these types of data is a major challenge for both the image processing and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools initially developed on usual Euclidean spaces and proven efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorics and graph theory. In recent years there has been an increasing interest in the investigation of one of the major mathematical tools for signal and image analysis: partial differential equations (PDEs) and variational methods on graphs. The normalized p-Laplacian operator was recently introduced to model a stochastic game called the tug-of-war game with noise. Part of the interest in this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator, and the traditional Laplacian operators, which have been extensively used to model and solve problems in image processing. The purpose of this paper is to introduce and study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of p-harmonious functions, introduced as discrete approximations for both the infinity-Laplacian and p-Laplacian equations. Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.
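
A p-harmonious function can be computed by a simple fixed-point iteration on a graph: each interior vertex value is a convex combination of the midrange of its neighbors (the tug-of-war term) and their mean (the random-walk, i.e., noise, term), with the mixing weight encoding p. The sketch below, including the choice of alpha and the toy path graph, is illustrative:

```python
import numpy as np

def p_harmonious(values, neighbors, boundary, alpha=0.5, n_iter=200):
    """Fixed-point iteration for p-harmonious functions on a graph.

    neighbors: dict vertex -> list of adjacent vertices.
    boundary: set of vertices whose values are held fixed (Dirichlet data).
    alpha weights the tug-of-war (midrange) term against the mean term.
    """
    u = np.array(values, dtype=float)
    for _ in range(n_iter):
        new = u.copy()
        for v, nbrs in neighbors.items():
            if v in boundary:
                continue
            nv = u[list(nbrs)]
            new[v] = alpha * 0.5 * (nv.max() + nv.min()) + (1 - alpha) * nv.mean()
        u = new
    return u

# Path graph 0-1-2-3 with fixed boundary values u(0)=0 and u(3)=1
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
u = p_harmonious([0.0, 0.0, 0.0, 1.0], nbrs, boundary={0, 3})
```

On this path the iteration converges to the linear interpolant [0, 1/3, 2/3, 1]; in image processing, the same update run on a pixel-adjacency graph interpolates or inpaints missing values under the chosen p.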

Keywords: normalized p-laplacian, image processing, stochastic game, inverse problems

Procedia PDF Downloads 512
22099 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4

Authors: Ryan A. Black, Stacey A. McCaffrey

Abstract:

Over the past few decades, great strides have been made towards improving the science in the measurement of psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now being used widely to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ and decide which models are the most appropriate to use in their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways including but not limited to the number of item-specific parameters estimated in a given model, the function which links the expected response and the predictor, response option formats, as well as dimensionality. As a result, inferior methods (a.k.a. Classical Test Theory methods) continue to be employed in efforts to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models; that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study will cover the most basic binary IRT model, known as the 1-parameter logistic (1-PL) model dating back to over 50 years ago, up until the most recent complex, 4-parameter logistic (4-PL) model. 
Binary IRT models will be defined mathematically, and the interpretation of each parameter will be provided. Next, all four binary IRT models will be employed on two sets of data: 1. simulated data of N=500,000 subjects who responded to four dichotomous items, and 2. a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to emotional consequences of alcohol use. Real-world data were based on responses to items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). IRT analyses conducted on both the simulated and real-world pilot data will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate the simulated data and analyses will be available upon request to allow for replication of results.
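
As an illustration of the simplest model in this family, the 1-PL (Rasch) model gives the probability of endorsing an item as a logistic function of the difference between the latent trait θ and the item difficulty b. A minimal simulation in the spirit of the paper's simulated design can be sketched as follows; the difficulties and sample size here are illustrative, not the study's:

```python
import math
import random

def p_correct_1pl(theta, b):
    """1-PL (Rasch) probability of endorsing an item:
    P(X=1 | theta, b) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def simulate_responses(n_subjects, difficulties, seed=0):
    """Simulate a binary response matrix under the 1-PL model,
    drawing each subject's latent trait from N(0, 1)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_subjects):
        theta = rng.gauss(0.0, 1.0)
        row = [1 if rng.random() < p_correct_1pl(theta, b) else 0
               for b in difficulties]
        data.append(row)
    return data

# Four dichotomous items of increasing difficulty (illustrative values)
responses = simulate_responses(10000, [-1.5, -0.5, 0.5, 1.5])
```

Endorsement rates fall as item difficulty rises, which is the basic behavior the more complex 2-PL to 4-PL models refine with discrimination, guessing, and ceiling parameters.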

Keywords: instrument development, item response theory, latent trait theory, psychometrics

Procedia PDF Downloads 357
22098 Prediction of Oil Recovery Factor Using Artificial Neural Network

Authors: O. P. Oladipo, O. A. Falode

Abstract:

The determination of the Recovery Factor is of great importance to the reservoir engineer, since it relates reserves to the initial oil in place. Reserves are the producible portion of reservoirs and give an indication of the profitability of a field development. The core objective of this project is to develop an artificial neural network model using selected reservoir data to predict the Recovery Factors (RF) of hydrocarbon reservoirs and to compare the model with a couple of existing correlations. The type of artificial neural network model developed was the single-layer feed-forward network. MATLAB was used as the network simulator, and the network was trained using the supervised learning method. Afterwards, the network was tested with input data never seen by the network. The predicted recovery factors from the artificial neural network model, the API correlation for water drive reservoirs (sands and sandstones), and the Guthrie and Greenberger correlation equation were obtained and compared. The coefficient of correlation of the artificial neural network model was higher than those of the other two correlation equations, making it a more accurate prediction tool. The artificial neural network, because of its accurate prediction ability, is helpful for the correct prediction of hydrocarbon reservoir factors. Artificial neural networks could also be applied to the prediction of other petroleum engineering parameters, because they are able to recognise complex patterns in data sets and establish relationships between them.
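
A minimal sketch of one reading of the approach, a feed-forward network with a single hidden layer trained by supervised gradient descent, is given below. The reservoir features, recovery factors, and network sizes are invented for illustration; the study's actual inputs and MATLAB setup are not reproduced here:

```python
import math
import random

def train_slfn(X, y, n_hidden=4, lr=0.1, epochs=2000, seed=0):
    """Train a single-hidden-layer feed-forward network (sigmoid units)
    with plain stochastic gradient descent on squared error."""
    rng = random.Random(seed)
    n_in = len(X[0])
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, t in zip(X, y):
            h = [sig(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
                 for j in range(n_hidden)]
            out = sig(sum(w * hj for w, hj in zip(W2, h)) + b2)
            d_out = (out - t) * out * (1 - out)   # backprop through sigmoid output
            for j in range(n_hidden):
                d_h = d_out * W2[j] * h[j] * (1 - h[j])
                W2[j] -= lr * d_out * h[j]
                for i in range(n_in):
                    W1[j][i] -= lr * d_h * x[i]
                b1[j] -= lr * d_h
            b2 -= lr * d_out
    def predict(x):
        h = [sig(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
             for j in range(n_hidden)]
        return sig(sum(w * hj for w, hj in zip(W2, h)) + b2)
    return predict

# Toy data: RF (as a fraction) versus two made-up normalized reservoir features
X = [[0.1, 0.2], [0.2, 0.4], [0.3, 0.6], [0.4, 0.8], [0.5, 1.0]]
y = [0.15, 0.25, 0.35, 0.45, 0.55]
predict = train_slfn(X, y)
```

After training, the network reproduces the increasing trend of RF with the input features, which is the pattern-recognition ability the abstract appeals to.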

Keywords: recovery factor, reservoir, reserves, artificial neural network, hydrocarbon, MATLAB, API, Guthrie, Greenberger

Procedia PDF Downloads 441
22097 Design of an Air and Land Multi-Element Expression Pattern of Navigation Electronic Map for Ground Vehicles under United Navigation Mechanism

Authors: Rui Liu, Pengyu Cui, Nan Jiang

Abstract:

At present, there is much research on the centralized management and cross-integrated application of basic geographic information. However, the idea of information integration and sharing between land, sea, and air navigation targets has not been deeply applied in research on navigation information services, especially in information expression. To address this problem, this paper studies the expression pattern of navigation electronic maps for ground vehicles under an air and land united navigation mechanism. First, supported by multi-source information fusion of GIS vector data, RS data, GPS data, etc., an air and land united information expression pattern is designed for the specific navigation task of earthquake emergency rescue. Then, the characteristics and specifications of the united expression of air and land navigation information under the constraints of map load are summarized and transferred into expression rules in a rule bank. Finally, a navigation experiment is implemented to evaluate the effect of the expression pattern. The experiment uses navigation task completion time and navigation error rate as the main evaluation indices and makes comparisons with the traditional single-information expression pattern. In sum, the research improves the theory of navigation electronic maps and lays a foundation for the design and realization of a united navigation system with respect to real-time navigation information delivery.

Keywords: navigation electronic map, united navigation, multi-element expression pattern, multi-source information fusion

Procedia PDF Downloads 199
22096 Analysis of Radial Pulse Using Nadi-Parikshan Yantra

Authors: Ashok E. Kalange

Abstract:

Diagnosis according to Ayurveda is to find the root cause of a disease. Of the eight different kinds of examination, Nadi-Pariksha (pulse examination) is important. Nadi-Pariksha is performed at the root of the thumb by examining the radial artery with three fingers. Ancient Ayurveda identifies health status by observing the wrist pulses in terms of 'Vata', 'Pitta', and 'Kapha', collectively called the tridosha, as the basic elements of the human body, and in their combinations. Diagnosis by traditional pulse analysis (Nadi-Pariksha) requires long experience in pulse examination and a high level of skill, and the interpretation tends to be subjective, depending on the expertise of the practitioner. The present work is part of efforts to make Nadi-Parikshan objective. A Nadi Parikshan Yantra (three-point pulse examination system) was developed in our laboratory using three pressure sensors (one each for the Vata, Pitta, and Kapha points on the radial artery). Radial pulse data were collected from a large number of subjects. The collected data are analyzed on the basis of the relative amplitudes of the three point pulses, as well as in the frequency and time domains. The same subjects were examined by an Ayurvedic physician (Nadi Vaidya), and the dominant dosha (Vata, Pitta, or Kapha) was identified. The results are discussed in detail in the paper.
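
The frequency-domain analysis mentioned above can be sketched with a naive discrete Fourier transform applied to one sensor channel. The sampling rate and the synthetic pulse signal below are assumptions for illustration only, not the laboratory's recordings:

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive DFT magnitude spectrum; adequate for short pulse windows."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def dominant_frequency(signal, fs):
    """Frequency (Hz) of the largest non-DC spectral peak,
    e.g. the heart-rate fundamental of a radial pulse."""
    mags = dft_magnitudes(signal)
    k = max(range(1, len(mags)), key=lambda i: mags[i])
    return k * fs / len(signal)

# Synthetic pulse: 1.2 Hz fundamental (72 bpm) plus a weaker second
# harmonic, sampled at 50 Hz for 10 s
fs = 50.0
pulse = [math.sin(2 * math.pi * 1.2 * t / fs)
         + 0.3 * math.sin(2 * math.pi * 2.4 * t / fs)
         for t in range(500)]
```

The same routine, run on each of the three channels, yields the spectral features that can then be compared across the Vata, Pitta, and Kapha sensor positions.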

Keywords: Nadi Parikshan Yantra, Tridosha, Nadi Pariksha, human pulse data analysis

Procedia PDF Downloads 189
22095 Risk-Based Institutional Evaluation of Trans Sumatera Toll Road Infrastructure Development to Improve Time Performance

Authors: Muhammad Ridho Fakhrin, Leni Sagita Riantini, Yusuf Latief

Abstract:

Based on 2015-2019 RPJMN data, 49% of the planned toll road infrastructure development in Indonesia (904 km) was delayed. Institutional factors are among the major causes of these delays. The case study taken in this research is the construction of the Trans Sumatra Toll Road (JTTS). The purpose of this research is to identify the institutional forms, functions, roles, duties, and responsibilities of each stakeholder, and the risks that occur, in Trans Sumatra Toll Road infrastructure development. Risk analysis is applied to the functions, roles, duties, and responsibilities of each existing stakeholder at the Funding, Technical Planning, and Construction Implementation stages of the JTTS. Data are collected through a questionnaire survey and processed with statistical methods, including homogeneity, data adequacy, validity, and reliability tests, followed by a risk assessment based on a risk matrix. The results show that risk-based evaluation and development of institutional functions in JTTS development can improve time performance and minimize delays in the construction process.
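
The risk-matrix step can be illustrated with a simple probability-impact classification. The 1-5 scales, thresholds, and the example risk below are assumed for illustration and are not taken from the study:

```python
def risk_level(probability, impact):
    """Classify a risk from probability and impact scores on 1-5 scales,
    as in a conventional 5x5 risk matrix (thresholds are illustrative)."""
    score = probability * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# e.g. a hypothetical land-acquisition delay risk at the Funding stage,
# scored probability 4 and impact 5 by survey respondents
example = risk_level(4, 5)
```

High-rated risks tied to a stakeholder's function would then be the ones flagged for institutional redesign.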

Keywords: institutional, risk management, time performance, toll road

Procedia PDF Downloads 164
22094 Teaching Tools for Web Processing Services

Authors: Rashid Javed, Hardy Lehmkuehler, Franz Josef-Behr

Abstract:

Web Processing Services (WPS) have attracted growing interest in geoinformation research. However, teaching about them is difficult because of the generally complex circumstances of their use, which limit the possibilities for hands-on exercises on Web Processing Services. To support understanding, a Training Tools Collection was initiated at the University of Applied Sciences Stuttgart (HFT). It is limited in scope to geostatistical interpolation of sample point data, where different algorithms such as IDW or Nearest Neighbor can be used. The Tools Collection aims to support understanding of the scope, definition, and deployment of Web Processing Services. For example, it is necessary to characterize the input to interpolation by the data set, the parameters for the algorithm, and the interpolation results (here a grid of interpolated values is assumed). This paper reports on first experiences with a pilot installation, which was intended to find suitable software interfaces for later full implementations and to draw conclusions on potential user interface characteristics. Experience was gained with the Deegree software, one of several Service Suites (Collections). Strictly programmed in Java, Deegree offers several OGC-compliant service implementations that also promise to be of benefit for the project. The mentioned parameters for a WPS were formalized following the paradigm that any meaningful component is defined in terms of suitable standards; e.g., the data output can be defined as a GML file. However, the choice of meaningful information pieces and user interactions is not free, but partially determined by the selected WPS Processing Suite.
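
The IDW algorithm named above can be sketched as follows. The sample points are invented; in a real WPS deployment they would arrive as standardized inputs (e.g. via GML) rather than as literals:

```python
def idw_interpolate(x, y, samples, power=2.0):
    """Inverse Distance Weighting: estimate the value at (x, y) from
    (xi, yi, zi) sample points, weighting each by 1 / distance**power."""
    num = den = 0.0
    for xi, yi, zi in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return zi                    # exact hit on a sample point
        w = d2 ** (-power / 2.0)
        num += w * zi
        den += w
    return num / den

# Invented sample points (x, y, value)
pts = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (0.0, 1.0, 30.0)]
```

Evaluating this on a regular grid of (x, y) positions produces the grid of interpolated values that the exercise assumes as the service output.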

Keywords: deegree, interpolation, IDW, web processing service (WPS)

Procedia PDF Downloads 355
22093 Multi-Objective Evolutionary Computation Based Feature Selection Applied to Behaviour Assessment of Children

Authors: F. Jiménez, R. Jódar, M. Martín, G. Sánchez, G. Sciavicco

Abstract:

Attribute or feature selection is one of the basic strategies to improve the performance of data classification tasks and, at the same time, to reduce the complexity of classifiers; it is particularly fundamental when the number of attributes is relatively high. Its application to unsupervised classification is restricted to a limited number of experiments in the literature. Evolutionary computation has already proven itself to be a very effective choice for consistently reducing the number of attributes towards a better classification rate and a simpler semantic interpretation of the inferred classifiers. We present a feature selection wrapper model composed of a multi-objective evolutionary algorithm, the clustering method Expectation-Maximization (EM), and the classifier C4.5, for the unsupervised classification of data extracted from a psychological test named BASC-II (Behavior Assessment System for Children - II ed.), with two objectives: maximizing the likelihood of the clustering model and maximizing the accuracy of the obtained classifier. We present a methodology to integrate feature selection for unsupervised classification, model evaluation, decision making (to choose the most satisfactory model according to an a posteriori process in a multi-objective context), and testing. We compare the performance of the classifiers obtained by the multi-objective evolutionary algorithms ENORA and NSGA-II, and the best solution is then validated by the psychologists who collected the data.
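
The selection principle shared by ENORA and NSGA-II (keeping only non-dominated candidates under the two objectives) can be sketched as a plain Pareto filter. The feature subsets and objective values below are invented for illustration, not results from the BASC-II data:

```python
def pareto_front(solutions):
    """Keep the non-dominated solutions. Each entry is
    (feature_subset, (objective1, objective2)), with both objectives
    to be maximized, e.g. clustering log-likelihood and classifier accuracy."""
    front = []
    for s, obj in solutions:
        dominated = any(
            all(o >= m for m, o in zip(obj, other)) and other != obj
            for _, other in solutions
        )
        if not dominated:
            front.append((s, obj))
    return front

# Toy wrapper evaluations: (subset, (EM log-likelihood, C4.5 accuracy))
evals = [
    (("f1",),              (-120.0, 0.70)),
    (("f2",),              (-110.0, 0.65)),
    (("f1", "f2"),         (-100.0, 0.80)),
    (("f1", "f3"),         (-105.0, 0.85)),
    (("f1", "f2", "f3"),   (-100.0, 0.78)),
]
front = pareto_front(evals)
```

The a posteriori decision-making step then picks one solution from this front, for instance the one the psychologists judge most interpretable.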

Keywords: evolutionary computation, feature selection, classification, clustering

Procedia PDF Downloads 371
22092 Policy Implications of Demographic Impacts on COVID-19, Pneumonia, and Influenza Mortality: A Multivariable Regression Approach to Death Toll Reduction

Authors: Saiakhil Chilaka

Abstract:

Understanding the demographic factors that influence mortality from respiratory diseases like COVID-19, pneumonia, and influenza is crucial for informing public health policy. This study utilizes multivariable regression models to assess the relationship between state, sex, and age group on deaths from these diseases using U.S. data from 2020 to 2023. The analysis reveals that age and sex play significant roles in mortality, while state-level variations are minimal. Although the model’s low R-squared values indicate that additional factors are at play, this paper discusses how these findings, in light of recent research, can inform future public health policy, resource allocation, and intervention strategies.
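
A minimal sketch of the multivariable regression approach is given below, using ordinary least squares solved via the normal equations. The design uses invented death counts and a reduced dummy coding for sex and one age group, not the study's U.S. data:

```python
def ols_fit(X, y):
    """Ordinary least squares via the normal equations, solved with
    Gaussian elimination. Each row of X starts with 1 for the intercept."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                       # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):             # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Hypothetical design: [intercept, male, age 65+]; outcome = death count
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 0, 0], [1, 1, 1]]
y = [10.0, 14.0, 40.0, 48.0, 11.0, 47.0]
beta = ols_fit(X, y)
```

In this toy data the age coefficient dwarfs the sex coefficient, mirroring the abstract's finding that age is the stronger demographic driver of mortality.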

Keywords: COVID-19, multivariable regression, public policy, data science

Procedia PDF Downloads 22
22091 Climate Change and Landslide Risk Assessment in Thailand

Authors: Shotiros Protong

Abstract:

The incidents of sudden landslides in Thailand during the past decade have occurred frequently and more severely. It is necessary to focus on the principal parameters used for analysis, such as land cover/land use, rainfall values, soil characteristics, and the digital elevation model (DEM). The combination of intense rainfall and severe monsoons is increasing due to global climate change. Landslide occurrences rapidly increase during intense rainfall, especially in the rainy season in Thailand, which usually starts around mid-May and ends in the middle of October. Rain-triggered landslide hazard analysis is the focus of this research. The combination of geotechnical and hydrological data is used to determine permeability, conductivity, bedding orientation, overburden, and the presence of loose blocks. The regional landslide hazard mapping is developed using the Stability Index Mapping (SINMAP) model supported by ArcGIS software version 10.1. Geological and land use data are used to define the probability of landslide occurrences in terms of geotechnical data. The geological data can indicate the shear strength and angle of friction values for soils above given rock types, which leads to the general applicability of the approach for landslide hazard analysis. To address the research objectives, the following methods are described in this study: setup and calibration of the SINMAP model, sensitivity analysis of the SINMAP model, geotechnical laboratory testing, landslide assessment at the present calibration, and landslide assessment under future climate simulation scenarios A2 and B2. In terms of hydrological data, average rainfall values in millimetres per twenty-four hours are used to assess the rain-triggered landslide hazard in slope stability mapping. Rainfall data for the 1954-2012 period are used as the baseline at the present calibration. For climate change in Thailand, future climate scenarios are simulated at spatial and temporal scales.
To predict the precipitation impact of the future climate, the Statistical Downscaling Model (SDSM) version 4.2 is used to assess the simulated scenarios of future change between latitudes 16°26' and 18°37' north and longitudes 98°52' and 103°05' east. The research allows the mapping of risk parameters for landslide dynamics and indicates the spatial and temporal trends of landslide occurrences. Thus, regional landslide hazard mapping under present-day climatic conditions from 1954 to 2012, and under simulations of climate change based on GCM scenarios A2 and B2 from 2013 to 2099, is related to the threshold rainfall values for the selected study area in Uttaradit province in the northern part of Thailand. Finally, the landslide hazard maps for the present and for the future under climate simulation scenarios A2 and B2 will be compared by area (km²) in Uttaradit province.
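
The stability index that SINMAP computes at each grid cell is the infinite-slope factor of safety FS = (C + cos θ (1 − w·r) tan φ) / sin θ, with relative wetness w = min(R·a / (T sin θ), 1). A minimal sketch follows; all parameter values are illustrative assumptions, not calibrated values for Uttaradit province:

```python
import math

def wetness(R, a, T, theta):
    """Relative wetness w = min(R*a / (T*sin(theta)), 1), where
    R = steady-state recharge, a = specific catchment area,
    T = soil transmissivity, theta = slope angle (radians)."""
    return min(R * a / (T * math.sin(theta)), 1.0)

def stability_index(theta, a, R, T, C=0.25, phi_deg=35.0, r=0.5):
    """Infinite-slope factor of safety used by SINMAP:
    FS = (C + cos(theta)*(1 - w*r)*tan(phi)) / sin(theta),
    with dimensionless cohesion C, friction angle phi, and
    water-to-soil density ratio r (values here are assumed)."""
    w = wetness(R, a, T, theta)
    phi = math.radians(phi_deg)
    return (C + math.cos(theta) * (1.0 - w * r) * math.tan(phi)) / math.sin(theta)
```

FS drops as slopes steepen and as recharge (rainfall) rises, which is exactly why the threshold rainfall values drive the hazard maps under the A2 and B2 scenarios.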

Keywords: landslide hazard, GIS, slope stability index (SINMAP), landslides, Thailand

Procedia PDF Downloads 564
22090 Topology-Based Character Recognition Method for Coin Date Detection

Authors: Xingyu Pan, Laure Tougne

Abstract:

For recognizing coins, the engraved release date is important information for precisely identifying the monetary type. However, reading characters on coins meets many more obstacles than traditional character recognition tasks in other fields, such as reading scanned documents or license plates. To address this challenging issue in a numismatic context, we propose a training-free approach dedicated to the detection and recognition of the release date of a coin. In the first step, the date zone is detected by comparing histogram features; in the second step, a topology-based algorithm is introduced to recognize coin numbers with various font types, represented by a binary gradient map. Our method obtained a recognition rate of 92% on synthetic data and of 44% on real noisy data.
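
One topological cue such a method can exploit is the number of enclosed background regions (holes) in a binary glyph: '8' has two, '0', '6', and '9' have one, and the remaining digits have none. A sketch via flood fill is shown below; the toy bitmap is an invented stand-in for a real binary gradient map:

```python
def count_holes(grid):
    """Count background regions (0s) fully enclosed by foreground (1s),
    using 4-connected flood fill; regions touching the border are not holes."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in grid]

    def flood(sr, sc):
        stack, touches_border = [(sr, sc)], False
        seen[sr][sc] = True
        while stack:
            r, c = stack.pop()
            if r in (0, h - 1) or c in (0, w - 1):
                touches_border = True
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and not seen[nr][nc] and grid[nr][nc] == 0:
                    seen[nr][nc] = True
                    stack.append((nr, nc))
        return touches_border

    holes = 0
    for r in range(h):
        for c in range(w):
            if grid[r][c] == 0 and not seen[r][c] and not flood(r, c):
                holes += 1
    return holes

# Crude 5x4 bitmap of an '8': two enclosed holes
EIGHT = [[0, 1, 1, 0],
         [1, 0, 0, 1],
         [0, 1, 1, 0],
         [1, 0, 0, 1],
         [0, 1, 1, 0]]
```

A hole count is invariant to font and scale, which is what makes topology attractive for the varied engravings found on coins.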

Keywords: coin, detection, character recognition, topology

Procedia PDF Downloads 253
22089 The State of Employee Motivation During Covid-19 Outbreak in Sri Lankan Construction Sector

Authors: Tharaki Hetti Arachchi

Abstract:

Sri Lanka has undergone numerous changes in its socio-economic and cultural processes during the past decades. Consequently, the Sri Lankan construction industry experienced rapid growth while contributing a considerable amount to the national economy. The prevailing situation under the Covid-19 pandemic posed challenges to almost all sectors of the country. Although productivity is one of the dimensions that measure the degree of project success, achieving sufficient productivity has become challenging due to the Covid-19 outbreak. As employee motivation is an influential factor in productivity, the present study is significant in discovering ways of enhancing construction productivity via employee motivation. The study adopted a combination of qualitative and quantitative methodologies to attain its objectives. While the research population refers to construction professionals in Sri Lanka, the study sample targets Quantity Surveyors in the bottom and middle managements of organizational hierarchies. Data collection was implemented via primary and secondary sources. Primary data collection was accomplished through semi-structured interviews and an online questionnaire survey, with respondents selected by purposive sampling. Questionnaire responses were gathered on a Likert scale to examine the degree of applicability to each respondent. Overall, 76.36% of the expected primary data were recovered, with 60 responses from the questionnaire survey and 24 responses from interviews. Secondary data were obtained by reviewing sources such as research articles, journals, newspapers, and books. The findings suggest adopting and enhancing sixteen motivational factors to achieve greater productivity in the Sri Lankan construction sector.

Keywords: Covid 19 pandemic, motivation, quantity surveying, Sri Lanka

Procedia PDF Downloads 95
22088 Impact of Weather Conditions on Non-Food Retailers and Implications for Marketing Activities

Authors: Noriyuki Suyama

Abstract:

This paper discusses purchasing behavior in retail stores, with a particular focus on the impact of weather changes on customers' purchasing behavior. Weather conditions are one of the factors that greatly affect the management and operation of retail stores. However, there is very little academic research on the relationship between weather conditions and marketing, although the topic has practical importance and a body of experience-based knowledge. For example, customers are more hesitant to go out when it rains than when it is sunny, and they may postpone purchases or buy only the minimum necessary items even if they do go out. It is not difficult to imagine that weather has a significant impact on consumer behavior. To the best of the author's knowledge, only a few studies have delved into the purchasing behavior of individual customers. According to Hirata (2018), the economic impact of weather in the United States is estimated at 3.4% of GDP, or $485 billion ± $240 billion per year. However, weather data are not yet fully utilized. Representative industries include transportation-related industries (e.g., airlines, shipping, roads, railroads), leisure-related industries (e.g., leisure facilities, event organizers), energy and infrastructure-related industries (e.g., construction, factories, electricity and gas), agriculture-related industries (e.g., agricultural organizations, producers), and retail-related industries (e.g., retail, food service, convenience stores). This paper focuses on the retail industry and advances research on weather. The first reason is that, as far as the author has investigated the retail industry, only grocery retailers use temperature, rainfall, wind, weather, and humidity as parameters for their products, and there are very few examples of academic use in other retail industries.
Second, according to NBL's "Toward Data Utilization Starting from Consumer Contact Points in the Retail Industry," labor productivity in the retail industry is very low compared to other industries, and according to Hirata (2018), improving it is recognized as a major challenge. On the other hand, according to the "Survey and Research on Measurement Methods for Information Distribution and Accumulation (2013)" by the Ministry of Internal Affairs and Communications, the amount of data accumulated in the retail industry is extremely large, so new applications are expected from analyzing these data together with weather data. Third, there is currently a wealth of weather-related information available. There are, for example, companies such as WeatherNews, Inc. that make weather information their business and not only disseminate weather information but also disseminate information that supports businesses in various industries. Despite the wide range of influences that weather has on business, its impact has not been a subject of research in the retail industry, where business models need to be imagined, especially from a micro perspective. In this paper, the author discusses the important aspects of the impact of weather on marketing strategies in the non-food retail industry.

Keywords: consumer behavior, weather marketing, marketing science, big data, retail marketing

Procedia PDF Downloads 82
22087 Suspended Sediment Concentration and Water Quality Monitoring Along Aswan High Dam Reservoir Using Remote Sensing

Authors: M. Aboalazayem, Essam A. Gouda, Ahmed M. Moussa, Amr E. Flifl

Abstract:

Field data collection is considered one of the most difficult tasks due to the difficulty of accessing large zones such as large lakes, and it is well known that obtaining field data is very expensive. Remote monitoring of lake water quality (WQ) provides an economically feasible approach compared to field data collection. Researchers have shown that lake WQ can be properly monitored via remote sensing (RS) analyses. Using satellite images as a method of WQ detection provides a realistic technique for measuring quality parameters across huge areas. Landsat (LS) data provide free access to frequently repeated satellite photos, enabling researchers to undertake large-scale temporal comparisons of parameters related to lake WQ. Satellite measurements have been extensively utilized to develop algorithms for predicting critical water quality parameters (WQPs). The goal of this paper is to use RS to derive WQ indicators in the Aswan High Dam Reservoir (AHDR), which is considered Egypt's primary and strategic reservoir of freshwater. This study focuses on using Landsat 8 (L-8) band surface reflectance (SR) observations to predict the water-quality characteristics turbidity (TUR), total suspended solids (TSS), and chlorophyll-a (Chl-a). ArcGIS Pro is used to retrieve L-8 SR data for the study region. Multiple linear regression analysis was used to derive new correlations between optical water-quality indicators observed in April and atmospherically corrected L-8 SR values of various bands, band ratios, and combinations. Field measurements taken in the month of May were used to validate the WQPs obtained from SR data of the L-8 Operational Land Imager (OLI) satellite.
The findings demonstrate a strong correlation between WQ indicators and L-8 SR. For TUR, the best validation correlation was derived with the OLI SR blue, green, and red bands, with a coefficient of determination (R²) of 0.96 and a root mean square error (RMSE) of 3.1 NTU. For TSS, two equations using band ratios and combinations were strongly correlated and verified; the logarithm of the ratio of blue to green SR was determined to be the best performing model, with R² and RMSE equal to 0.9861 and 1.84 mg/l, respectively. For Chl-a, eight methods were presented for calculating its value within the study area; a mix of blue, red, shortwave infrared 1 (SWIR1), and panchromatic SR yielded the best validation results, with R² and RMSE equal to 0.98 and 1.4 mg/l, respectively.
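
The TSS model form reported above (a linear fit to the logarithm of the blue/green SR ratio) can be sketched as follows. The calibration pairs below are invented, not the paper's April measurements:

```python
import math

def fit_line(x, y):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def tss_predictor(blue_sr, green_sr, a, b):
    """TSS (mg/l) from the log of the blue/green surface-reflectance
    ratio, mirroring the best-performing TSS model form."""
    return a * math.log(blue_sr / green_sr) + b

# Hypothetical calibration pairs: (blue SR, green SR, measured TSS mg/l)
cal = [(0.10, 0.08, 22.0), (0.09, 0.09, 15.0),
       (0.08, 0.10, 9.0), (0.11, 0.08, 25.0)]
xs = [math.log(blue / green) for blue, green, _ in cal]
ys = [t for _, _, t in cal]
a, b = fit_line(xs, ys)
```

Validation then amounts to applying the fitted predictor to the May scenes and comparing against the concurrent field measurements.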

Keywords: remote sensing, landsat 8, nasser lake, water quality

Procedia PDF Downloads 93
22086 Visual Template Detection and Compositional Automatic Regular Expression Generation for Business Invoice Extraction

Authors: Anthony Proschka, Deepak Mishra, Merlyn Ramanan, Zurab Baratashvili

Abstract:

Small and medium-sized businesses receive over 160 billion invoices every year. Since these documents exhibit many subtle differences in layout and text, automatically extracting structured fields such as sender name, amount, and VAT rate from them is an open research question. In this paper, existing work in template-based document extraction is extended, and a system is devised that is able to reliably extract all required fields for up to 70% of all documents in the data set, more than any other previously reported method. Approaches are described for 1) detecting, through visual features, which template a given document belongs to, 2) automatically generating extraction rules for a given new template by composing regular expressions from multiple components, and 3) computing confidence scores that indicate the accuracy of the automatic extractions. The system can generate templates with as few as one training sample and only requires the ground-truth field values instead of detailed annotations, such as bounding boxes, that are hard to obtain. The system is deployed and used inside commercial accounting software.
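
The compositional regex generation in step 2 can be sketched as follows. The component patterns, field labels, and sample lines below are illustrative assumptions, not the deployed system's actual rules:

```python
import re

# Reusable value components from which a template's field rules are composed
COMPONENTS = {
    "amount":  r"\d{1,3}(?:[.,]\d{3})*[.,]\d{2}",   # e.g. 1.234,56 or 1,234.56
    "percent": r"\d{1,2}(?:[.,]\d+)?\s?%",          # e.g. 19 % or 7.5%
    "date":    r"\d{1,2}[./-]\d{1,2}[./-]\d{2,4}",  # e.g. 01.02.2021
}

def build_field_rule(label, component):
    """Compose a field-extraction regex from a label keyword plus a value
    component: the label, up to 20 non-digit filler characters, then the
    captured value."""
    return re.compile(label + r"\D{0,20}?(" + COMPONENTS[component] + r")")

vat_rule = build_field_rule(r"VAT", "percent")
total_rule = build_field_rule(r"Total", "amount")

line1 = "VAT 19 % on net amount"
line2 = "Total due: 1.234,56 EUR"
```

Given one training sample per template, the system only needs to find which component pattern reproduces the known ground-truth value near a stable label to synthesize such a rule.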

Keywords: data mining, information retrieval, business, feature extraction, layout, business data processing, document handling, end-user trained information extraction, document archiving, scanned business documents, automated document processing, F1-measure, commercial accounting software

Procedia PDF Downloads 130