Search results for: data source
25598 A Review of Methods for Handling Missing Data in the Form of Dropouts in Longitudinal Clinical Trials
Abstract:
Much clinical-trial research is characterized by the unavoidable problem of dropout resulting from missing or erroneous values. This paper reviews various techniques for addressing the dropout problem in longitudinal clinical trials. The fundamental concepts of dropout patterns and mechanisms are discussed. The study presents five general techniques for handling dropout: (1) deletion methods; (2) imputation-based methods; (3) data augmentation methods; (4) likelihood-based methods; and (5) MNAR-based methods. Under each technique, several methods commonly used to deal with dropout are presented, including a review of the existing literature in which we examine the effectiveness of these methods in the analysis of incomplete data. Two application examples are presented to study the potential strengths or weaknesses of some of the methods under certain dropout mechanisms, as well as to assess the sensitivity of the modelling assumptions.
Keywords: incomplete longitudinal clinical trials, missing at random (MAR), imputation, weighting methods, sensitivity analysis
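Two of the imputation-based methods commonly reviewed in this literature can be sketched in a few lines. This is an illustrative sketch only, not code from the paper; the data values are invented, with `None` marking a dropped-out visit:

```python
# Illustrative sketch (not from the paper): two simple imputation-based
# methods for longitudinal dropout. None marks a missing visit.

def locf(series):
    """Last observation carried forward: repeat the last observed value."""
    filled, last = [], None
    for v in series:
        if v is not None:
            last = v
        filled.append(last)
    return filled

def visit_mean_impute(rows):
    """Replace missing entries with the mean of observed values per visit."""
    n_visits = len(rows[0])
    means = []
    for j in range(n_visits):
        observed = [r[j] for r in rows if r[j] is not None]
        means.append(sum(observed) / len(observed))
    return [[means[j] if r[j] is None else r[j] for j in range(n_visits)]
            for r in rows]

# A patient who drops out after visit 2:
print(locf([5.0, 4.2, None, None]))  # [5.0, 4.2, 4.2, 4.2]
```

Such single-imputation methods are simple but can understate variability; the review's likelihood-based and MNAR-based alternatives address that limitation.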
Procedia PDF Downloads 415
25597 Graphics Processing Unit-Based Parallel Processing for Inverse Computation of Full-Field Material Properties Based on Quantitative Laser Ultrasound Visualization
Authors: Sheng-Po Tseng, Che-Hua Yang
Abstract:
Motivation and Objective: Ultrasonic guided waves have become an important tool for the nondestructive evaluation of structures and components. Guided waves are used to identify defects or evaluate material properties nondestructively. When guided waves are applied to evaluate material properties, the properties are not obtained directly; preliminary signals such as time-domain signals or frequency-domain spectra are revealed first. With the measured ultrasound data, inversion calculations can then be employed to obtain the desired mechanical properties. Methods: This research develops a high-speed inversion calculation technique for obtaining full-field mechanical properties from the quantitative laser ultrasound visualization system (QLUVS). The QLUVS employs a mirror-controlled scanning pulsed laser to generate guided acoustic waves traveling in a two-dimensional target. Guided waves are detected with a piezoelectric transducer at a fixed location. With gyro-scanning of the generation source, the QLUVS has the advantage of fast, full-field, and quantitative inspection. Results and Discussions: This research introduces two important tools to improve computation efficiency. Firstly, graphics processing units (GPUs) with large numbers of cores are introduced. Furthermore, combining CPU and GPU cores, a parallel processing scheme is developed for the inversion of full-field mechanical properties based on QLUVS data. The newly developed inversion scheme is applied to investigate the computation efficiency for single-layered and double-layered plate-like samples. The computation is shown to be 80 times faster than the unparallelized scheme. Conclusions: This research demonstrates a high-speed inversion technique for the characterization of full-field material properties based on the quantitative laser ultrasound visualization system.
Significant gains in computation efficiency are shown, although the limit has not yet been reached; further improvement can be achieved by refining the parallel computation. With this full-field mechanical-property inspection technology, full-field mechanical properties can be obtained through nondestructive, high-speed, and high-precision measurements, yielding both qualitative and quantitative results. The developed high-speed computation scheme is ready for applications where full-field mechanical properties are needed in a nondestructive and nearly real-time way.
Keywords: guided waves, material characterization, nondestructive evaluation, parallel processing
Procedia PDF Downloads 202
25596 Feedback Preference and Practice of English Majors in Pronunciation Instruction
Authors: Claerchille Jhulia Robin
Abstract:
This paper discusses the perspectives of ESL learners on pronunciation instruction. It sought to determine how these learners view the type of feedback their speech teacher gives and its impact on their own classroom practice of providing feedback. The study utilized a quantitative-qualitative approach. The respondents were Education students majoring in English. A survey questionnaire and an interview guide were used for data gathering. The survey data were tabulated using frequency counts, and the interview data were transcribed and analyzed. Results showed that ESL learners favor immediate corrective feedback and do not find any issue in being corrected in front of their peers. They also practice the same corrective technique in their own classrooms.
Keywords: ESL, feedback, learner perspective, pronunciation instruction
Procedia PDF Downloads 234
25595 The Issues of Irrigation and Drainage in Kebbi State and Their Effective Solution for a Sustainable Agriculture in Kebbi State, Nigeria
Authors: Mumtaz Ahmed Sohag, Ishaq Ahmed Sohag
Abstract:
Kebbi State, located in the North-West of Nigeria, is rich in water resources, as the major rivers, the Niger and the Rima, irrigate a vast majority of the land. Besides, there is a significant amount of groundwater, which farmers use for agricultural purposes; wells installed in almost all parts of the region make groundwater a major source of agricultural and domestic water as well. Although Kebbi State is rich in water, some pertinent issues are hampering its agricultural productivity. The lowlands (locally called Fadama) spread over a vast area and are inundated every year during the rainy season, which lasts from June to September. The farmers grow rice during the rainy season when water is standing, but cannot carry out further agricultural activity for almost two months due to the high standing water. This has resulted in a widespread waterlogging problem. Besides, the impact of climate change is causing rapid variation in river and stream flows. Information about water bodies, regarding the availability of water for agricultural and other uses and the behavior of rivers at different flows, is seldom available. Furthermore, sediment load (suspended and bedload) is not measured, so land erosion cannot be countered effectively. This study, carried out in seven different irrigation regions of Kebbi State, found that diversion structures need to be constructed at strategic locations to supply surface water to farmers. The water table needs to be lowered through an effective drainage system. Monitoring of water bodies is crucial for sound data to support efficient regulation and management of water. Construction of embankments is necessary to control frequent floods in the Niger and Rima rivers. Furthermore, farmers need capacity building and awareness for participatory irrigation management.
Keywords: water bodies, floods, agriculture, waterlogging
Procedia PDF Downloads 238
25594 Automatic Tagging and Accuracy in Assamese Text Data
Authors: Chayanika Hazarika Bordoloi
Abstract:
This paper is an attempt to work on Assamese, a highly inflectional language. It is one of the national languages of India, and very little has been achieved in terms of computational research. Building a language processing tool for a natural language is not smooth, as the standard and language representation change at various levels. This paper presents inflectional suffixes of Assamese verbs and shows how statistical tools, along with linguistic features, can improve tagging accuracy. Conditional random fields (the CRF tool) were used to automatically tag and train the text data; accuracy improved after linguistic features were fed into the training data. Because Assamese is highly inflectional, standardizing its morphology is challenging. Inflectional suffixes are used as a feature of the text data. In order to analyze the inflections of Assamese word forms, a list comprising all possible suffixes that the various categories can take was prepared. Assamese words can be classified into inflected classes (noun, pronoun, adjective, and verb) and uninflected classes (adverb and particle). The corpus used for this morphological analysis contains a large number of tokens; it is a mixed corpus, and it has given satisfactory accuracy. The accuracy rate of the tagger gradually improved with the modified training data.
Keywords: CRF, morphology, tagging, tagset
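The suffix-feature idea described above can be illustrated with a minimal feature-extraction function of the kind typically fed to a CRF tagger. The suffix list and tokens below are invented placeholders, not the paper's actual Assamese suffix list or corpus:

```python
# Hypothetical sketch of suffix features for a CRF tagger of an
# inflectional language. SUFFIXES and the sample tokens are invented
# placeholders, not real data from the paper.

SUFFIXES = ["isu", "ise", "ile", "iba"]  # placeholder inflectional suffixes

def word_features(sentence, i):
    """Build a feature dict for token i, including suffix features."""
    word = sentence[i]
    feats = {
        "word": word,
        "suffix3": word[-3:],                      # character-suffix feature
        "is_first": i == 0,
        "prev_word": sentence[i - 1] if i > 0 else "<BOS>",
    }
    for s in SUFFIXES:
        if word.endswith(s):
            feats["known_suffix"] = s              # suffix-list match feature
    return feats

sent = ["ram", "khaise"]  # placeholder tokens
f = word_features(sent, 1)
print(f["suffix3"], f.get("known_suffix"))
```

In a real pipeline, such dicts would be passed per token to a CRF implementation for training and tagging; the suffix-list match is what injects the linguistic knowledge that improved accuracy in the study.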
Procedia PDF Downloads 194
25593 Building a Composite Approach to Employees' Motivational Needs by Combining Cognitive Needs
Authors: Alexis Akinyemi, Laurene Houtin
Abstract:
Measures of employee motivation at work are often based on the theory of self-determined motivation, which implies that human resources departments and managers seek to motivate employees in the most self-determined way possible and use strategies to achieve this goal. In practice, they often tend to assess employee motivation and then adapt their management to the most important source of motivation for their employees, for example by financially rewarding an extrinsically motivated employee and by rewarding an intrinsically motivated employee with congratulations and recognition. Thus, the use of motivation measures contradicts the theoretical positioning: the theory does not provide for the promotion of extrinsically motivated behaviour. In addition, a body of social psychology research on fundamental needs makes it possible to address a person’s different sources of motivation individually (need for cognition, need for uniqueness, need for affect, and need for closure). By developing a composite measure of motivation based on these needs, we provide human resources professionals, and in particular occupational psychologists, with a tool that complements the assessment of self-determined motivation, making it possible to adapt work not to the self-determination of behaviours, but to the motivational traits of employees. To develop such a model, we gathered the French versions of the cognitive needs scales (need for cognition, need for uniqueness, need for affect, need for closure) and conducted a study with 645 employees of several French companies. On the basis of the data collected, we conducted a confirmatory factor analysis to validate the model, studied the correlations between the various needs, and highlighted the different reference groups that could be used to apply these needs as a basis for interviews with employees (career, recruitment, etc.).
The results showed a coherent model and the expected links between the different needs. Taken together, these results make it possible to propose a valid and theoretically adjusted tool to managers who wish to adapt their management to their employees’ current motivations, whether or not these motivations are self-determined.
Keywords: motivation, personality, work commitment, cognitive needs
Procedia PDF Downloads 123
25592 A Human Activity Recognition System Based on Sensory Data Related to Object Usage
Authors: M. Abdullah Al-Wadud
Abstract:
Sensor-based activity recognition systems usually account only for which sensors have been activated to perform an activity. The system then combines the conditional probabilities of those sensors to represent different activities and makes its decision on that basis. However, information about the sensors that are not activated may also be of great help in deciding which activity has been performed. This paper proposes an approach in which sensory data related to both the usage and non-usage of objects are utilized to classify activities. Experimental results show the promising performance of the proposed method.
Keywords: Naïve Bayesian-based classification, activity recognition, sensor data, object-usage model
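The idea of letting non-activated sensors contribute to the classification can be sketched with a minimal Bernoulli naive-Bayes scorer. This is an assumed illustration, not the paper's implementation; the activities, sensors, and probabilities are invented:

```python
# Assumed Bernoulli naive-Bayes sketch: unlike a used-objects-only score,
# the likelihood of *inactive* sensors also contributes via (1 - p) factors.
# Activities, sensors, and probabilities below are invented.

from math import log

# P(sensor active | activity), e.g. estimated from training counts
P_ACTIVE = {
    "make_tea":   {"kettle": 0.9, "cup": 0.8, "stove": 0.1},
    "cook_pasta": {"kettle": 0.2, "cup": 0.1, "stove": 0.9},
}
PRIOR = {"make_tea": 0.5, "cook_pasta": 0.5}

def classify(active_sensors):
    best, best_score = None, float("-inf")
    for activity, probs in P_ACTIVE.items():
        score = log(PRIOR[activity])
        for sensor, p in probs.items():
            # used sensors contribute log p; unused ones log (1 - p)
            score += log(p) if sensor in active_sensors else log(1 - p)
        if score > best_score:
            best, best_score = activity, score
    return best

print(classify({"kettle", "cup"}))  # inactive "stove" also favors make_tea
```

Dropping the `log(1 - p)` terms would reduce this to a used-objects-only model, which is exactly the information loss the abstract argues against.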
Procedia PDF Downloads 322
25591 Application of Post-Stack and Pre-Stack Seismic Inversion for Prediction of Hydrocarbon Reservoirs in a Persian Gulf Gas Field
Authors: Nastaran Moosavi, Mohammad Mokhtari
Abstract:
Seismic inversion is a technique that has been in use for years; its main goal is to estimate and model the physical characteristics of rocks and fluids. Generally, it combines seismic and well-log data. Seismic inversion can be carried out through different methods; we have conducted and compared post-stack and pre-stack seismic inversion methods on real data from one of the fields in the Persian Gulf. Pre-stack seismic inversion can transform seismic data into rock-physics properties such as P-impedance, S-impedance, and density, while post-stack seismic inversion can estimate only P-impedance. These parameters can then be used in reservoir identification. Based on the results of inverting the seismic data, a gas reservoir was detected in one of the hydrocarbon fields in the south of Iran (Persian Gulf). Comparing post-stack and pre-stack seismic inversion, it can be concluded that pre-stack seismic inversion provides more reliable and detailed information for the identification and prediction of hydrocarbon reservoirs.
Keywords: density, P-impedance, S-impedance, post-stack seismic inversion, pre-stack seismic inversion
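The quantity that post-stack inversion estimates, acoustic (P-) impedance, is simply the product of density and P-wave velocity. A minimal sketch with invented rock values (not measurements from the studied field):

```python
# Minimal sketch of the rock-physics quantity post-stack inversion
# estimates: acoustic (P-) impedance = density x P-wave velocity.
# The numeric values below are illustrative, not from the Persian Gulf field.

def p_impedance(density_kg_m3, vp_m_s):
    """Acoustic impedance in kg/(m^2 s)."""
    return density_kg_m3 * vp_m_s

# Gas charge typically lowers both density and velocity, hence impedance:
brine_sand = p_impedance(2200, 3000)  # illustrative brine-filled sand
gas_sand = p_impedance(2050, 2600)    # illustrative gas-charged sand
print(gas_sand < brine_sand)  # True
```

It is this kind of impedance contrast, mapped over the survey, that makes a gas reservoir stand out; pre-stack inversion adds S-impedance and density, which help separate fluid effects from lithology.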
Procedia PDF Downloads 323
25590 A Data-Driven Monitoring Technique Using Combined Anomaly Detectors
Authors: Fouzi Harrou, Ying Sun, Sofiane Khadraoui
Abstract:
Anomaly detection based on Principal Component Analysis (PCA) has been studied intensively and is widely applied to multivariate processes with highly cross-correlated process variables. Monitoring metrics such as Hotelling's T² and the Q statistic are usually used in PCA-based monitoring to elucidate pattern variations in the principal and residual subspaces, respectively. However, these metrics are ill-suited to detecting small faults. In this paper, Exponentially Weighted Moving Average (EWMA) schemes based on the T² and Q statistics, T²-EWMA and Q-EWMA, were developed for detecting faults in the process mean. The performance of the proposed methods was compared with that of the conventional PCA-based fault detection method using synthetic data. The results clearly show the benefit and effectiveness of the proposed methods over the conventional PCA method, especially for detecting small faults in highly correlated multivariate data.
Keywords: data-driven method, process control, anomaly detection, dimensionality reduction
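The EWMA recursion underlying the proposed T²-EWMA and Q-EWMA charts can be sketched in a few lines. The input stream below stands in for a monitoring statistic (e.g. the Q statistic from the residual subspace); the numbers are illustrative, not the paper's synthetic data:

```python
# Sketch of the EWMA recursion applied to a monitoring-statistic stream.
# z_t = lam * x_t + (1 - lam) * z_{t-1}; small sustained shifts accumulate
# in z even when each raw x stays below a one-shot alarm threshold.

def ewma(xs, lam=0.2):
    z, out = xs[0], []
    for x in xs:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

# A small sustained mean shift that a raw per-sample threshold might miss:
stream = [0.0, 0.1, -0.1, 0.0, 0.5, 0.6, 0.5, 0.6]
smoothed = ewma(stream)
print(round(smoothed[-1], 3))  # drifts toward the shifted mean
```

In a full chart, `smoothed` would be compared against a control limit derived from the in-control variance of the statistic; smaller `lam` gives more memory and hence more sensitivity to small persistent faults.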
Procedia PDF Downloads 299
25589 Leveraging Power BI for Advanced Geotechnical Data Analysis and Visualization in Mining Projects
Authors: Elaheh Talebi, Fariba Yavari, Lucy Philip, Lesley Town
Abstract:
The mining industry generates vast amounts of data, necessitating robust data management systems and advanced analytics tools to support better decision-making in the development of mining production and the maintenance of safety. This paper highlights the advantages of Power BI, a powerful business intelligence tool, over traditional Excel-based approaches for effectively managing and harnessing mining data. Power BI enables professionals to connect and integrate multiple data sources, ensuring real-time access to up-to-date information. Its interactive visualizations and dashboards offer an intuitive interface for exploring and analyzing geotechnical data. Advanced analytics is a collection of data analysis techniques for improving decision-making. Leveraging some of the most complex techniques in data science, advanced analytics is used for everything from detecting data errors and ensuring data accuracy to directing the development of future project phases. However, while Power BI is a robust tool, the specific visualizations required by geotechnical engineers may face limitations. This paper studies the capability to use Python or R programming within the Power BI dashboard to enable advanced analytics, additional functionalities, and customized visualizations. The dashboard provides comprehensive tools for analyzing and visualizing key geotechnical data, including spatial representation on maps, field and lab test results, and subsurface rock and soil characteristics. Advanced visualizations such as borehole logs and stereonets were implemented using Python programming within the Power BI dashboard, enhancing the understanding and communication of geotechnical information. Moreover, the dashboard's flexibility allows for the incorporation of additional data and visualizations based on the project scope and available data, such as pit design, rockfall analyses, rock mass characterization, and drone data.
This further enhances the dashboard's usefulness in future projects, including the operation, development, closure, and rehabilitation phases, and helps minimize the need to use multiple software programs in a project. This geotechnical dashboard in Power BI serves as a user-friendly solution for analyzing, visualizing, and communicating both new and historical geotechnical data, aiding informed decision-making and efficient project management throughout the various project stages. Its ability to generate dynamic reports and share them with clients in a collaborative manner further enhances decision-making processes and facilitates effective communication within geotechnical projects in the mining industry.
Keywords: geotechnical data analysis, Power BI, visualization, decision-making, mining industry
Procedia PDF Downloads 92
25588 “Voiceless Memory” and Holodomor (Great Famine): The Power of Oral History to Challenge Official Historical Discourse
Authors: Tetiana Boriak
Abstract:
The study tests the correlation between official sources preserved in the archives and “unofficial” oral history regarding the Great Famine of 1932–1933 in Ukraine. The research shows poor preservation of the sources, which were deliberately destroyed by the totalitarian regime. It involves an analysis of five stages in the development of Holodomor oral history. It is oral history that reveals the mechanism of the mass killing. The research proves that using only one type of historical source leads to a particular reading of the history of the Holodomor, while using both types provides in-depth insight into the history of the famine.
Keywords: the Holodomor (the Great Famine), oral history, historical source, historical memory, totalitarianism
Procedia PDF Downloads 108
25587 An Investigation of E-Government by Using GIS and Establishing E-Government in Developing Countries Case Study: Iraq
Authors: Ahmed M. Jamel
Abstract:
Electronic government initiatives and public participation in them are among the indicators of today's development criteria for countries. After two consecutive wars, Iraq's current position in, for example, the UN's e-government ranking is quite concerning and has not improved in recent years. This work is motivated by the fact that handling geographic data on public facilities and resources is needed in most e-government projects. Geographical information systems (GIS) provide the most common tools not only to manage spatial data but also to integrate such data with nonspatial attributes of the features. With this background, this paper proposes that establishing a working GIS in the health sector of Iraq would improve e-government applications. As the case study, the investigation of hospital locations in Erbil was chosen.
Keywords: e-government, GIS, Iraq, Erbil
Procedia PDF Downloads 389
25586 Evaluation of Classification Algorithms for Diagnosis of Asthma in Iranian Patients
Authors: Taha SamadSoltani, Peyman Rezaei Hachesu, Marjan GhaziSaeedi, Maryam Zolnoori
Abstract:
Introduction: Data mining is defined as a process of finding patterns and relationships in the data in a database in order to build predictive models. The application of data mining has extended into vast sectors such as healthcare services. Medical data mining aims to solve real-world problems in the diagnosis and treatment of diseases. The method applies various techniques and algorithms that differ in accuracy and precision. The purpose of this study was to apply knowledge discovery and data mining techniques to the diagnosis of asthma based on patient symptoms and history. Method: Data mining includes several steps and decisions to be made by the user. It starts with developing an understanding of the scope of the application and of prior knowledge in the area, and with identifying the goals of the knowledge discovery process from the stakeholders' point of view; it finishes with acting on the discovered knowledge: deploying it, integrating it with other systems, and documenting and reporting it. In this study, a stepwise methodology was followed to achieve a logical outcome. Results: The sensitivity, specificity, and accuracy of the KNN, SVM, Naïve Bayes, NN, classification tree, and CN2 algorithms, together with related similar studies, were evaluated, and ROC curves were plotted to show the performance of the system. Conclusion: The results show that asthma can be diagnosed accurately, at approximately ninety percent, based on demographic and clinical data. The study also showed that methods based on pattern discovery and data mining have a higher sensitivity compared to expert and knowledge-based systems. On the other hand, medical guidelines and evidence-based medicine should remain the basis of diagnostic methods; it is therefore recommended that machine learning algorithms be used in combination with knowledge-based algorithms.
Keywords: asthma, data mining, classification, machine learning
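The evaluation metrics compared above can be computed directly from a classifier's confusion matrix. The counts below are invented for illustration, not the study's results:

```python
# Sketch of the evaluation metrics used in the study, computed from a
# confusion matrix of a hypothetical asthma classifier (counts invented).

def metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)            # true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, accuracy

sens, spec, acc = metrics(tp=85, fn=15, fp=10, tn=90)
print(sens, spec, acc)  # 0.85 0.9 0.875
```

Sweeping the classifier's decision threshold and plotting sensitivity against (1 - specificity) yields the ROC curves the study uses to compare the six algorithms.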
Procedia PDF Downloads 447
25585 Application of GPRS in Water Quality Monitoring System
Authors: V. Ayishwarya Bharathi, S. M. Hasker, J. Indhu, M. Mohamed Azarudeen, G. Gowthami, R. Vinoth Rajan, N. Vijayarangan
Abstract:
Identifying water quality conditions in a river system based on limited observations is an essential task for meeting the goals of environmental management. The traditional method of water quality testing is to collect samples manually and then send them to a laboratory for analysis; however, this can no longer meet the demands of water quality monitoring today. Therefore, an automatic measurement and reporting system for water quality has been developed. In this project, water quality parameters collected by a multi-parameter water quality probe are transmitted to a data processing and monitoring center through the GPRS wireless communication network of a mobile carrier. The multi-parameter sensor is placed directly above the water level. The monitoring center consists of a GPRS module and a microcontroller that monitor the data. The collected data can be monitored at any instant in time. At the pollution control board, the water quality sensor data are monitored on a computer using Visual Basic software. The system collects, transmits, and processes water quality parameters automatically, so production efficiency and economic benefit are greatly improved. GPRS technology performs well in complex environments where water quality would otherwise go unmonitored, and is specifically applicable to automatic data transmission and monitoring from field water-analysis equipment at the collection point.
Keywords: multiparameter sensor, GPRS, Visual Basic software, RS232
Procedia PDF Downloads 412
25584 Decision Support System in Air Pollution Using Data Mining
Authors: E. Fathallahi Aghdam, V. Hosseini
Abstract:
Environmental pollution is not limited to a specific region or country; that is why sustainable development, as a necessary process for improvement, pays attention to issues such as the destruction of natural resources, degradation of biological systems, global pollution, and climate change, especially in developing countries. According to the World Health Organization, Tehran (the capital of Iran) is one of the most polluted cities in the world in terms of air pollution. In this study, three pollutants, particulate matter less than 10 microns, nitrogen oxides, and sulfur dioxide, were evaluated in Tehran using data mining techniques through the CRISP approach. Data from 21 air pollution measuring stations in different areas of Tehran were collected from 1999 to 2013, and the commercial software Clementine was selected for the analysis. Tehran was divided into distinct clusters in terms of the mentioned pollutants using the software. As a data mining technique, clustering is usually used as a prologue to other analyses; therefore, the similarity of clusters was evaluated in this study by analyzing local conditions, traffic behavior, and industrial activities. The results of this research can support decision-making systems, help managers improve performance and decision making, and assist in urban studies.
Keywords: data mining, clustering, air pollution, CRISP approach
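The clustering step can be illustrated with a minimal one-dimensional k-means. The study used the Clementine software, so this is only a stand-in sketch, and the station readings are invented:

```python
# Illustrative k-means sketch for grouping monitoring stations by one
# pollutant level (1-D, invented values); a minimal stand-in for the
# clustering step the study performed in Clementine.

def kmeans_1d(values, centers, iters=10):
    groups = [[] for _ in centers]
    for _ in range(iters):
        # assignment step: each value joins its nearest center
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[i].append(v)
        # update step: each center moves to its group mean
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

pm10 = [22, 25, 24, 80, 85, 78]          # station readings (invented)
centers, groups = kmeans_1d(pm10, [20, 90])
print(sorted(round(c) for c in centers))  # [24, 81]
```

The resulting clusters (here, a low-pollution and a high-pollution group of stations) are then interpreted against local conditions, traffic, and industrial activity, as in the study.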
Procedia PDF Downloads 427
25583 Passive Greenhouse Systems in Poland
Authors: Magdalena Grudzińska
Abstract:
Passive systems allow solar radiation to be converted into thermal energy thanks to appropriate building construction. Greenhouse systems are particularly worth attention, due to the low costs of their realization and strong architectural appeal. The paper discusses the energy effects of using passive greenhouse systems, such as glazed balconies, in an example residential building. The research was carried out for five localities in Poland belonging to climatic zones that differ in external air temperature and insolation: Koszalin, Poznań, Lublin, Białystok, and Zakopane. The analysed apartment had a floor area of approximately 74 m². Three thermal zones were distinguished in the flat: the balcony, the room adjacent to it, and the remaining space, for which various internal conditions were defined. Calculations of the energy demand were made using a dynamic simulation program based on the control volume method. The climatic data were represented by Typical Meteorological Years, prepared on the basis of source data collected from 1971 to 2000. In each locality, the introduction of a passive greenhouse system led to a lower demand for heating in the apartment and a shortening of the heating season. The smallest effectiveness of passive solar energy systems was noted in Białystok, where demand for heating was reduced by 14.5% and the heating season remained the longest, due to low external air temperatures and small sums of solar radiation intensity. In Zakopane, energy savings came to 21% and the heating season was reduced to 107 days, thanks to the greatest insolation during winter. The introduction of greenhouse systems caused an increase in cooling demand in the warmer part of the year, but total energy demand declined in each of the discussed places. However, potential energy savings are smaller if the building's annual life cycle is taken into consideration, amounting to between 5.6% and 14%.
Koszalin and Zakopane are the localities in which the greenhouse system achieves the best energy results. It should be emphasized that the favourable conditions for introducing greenhouse systems arise from different climatic circumstances: in the seaside area (Koszalin) they result from high temperatures in the heating season and the smallest insolation in the summer period, while in the mountainous area (Zakopane) they result from high insolation in winter and low temperatures in summer. In the regions of middle and middle-eastern Poland, active systems (such as solar collectors or photovoltaic panels) could be more beneficial, due to high insolation during summer. It is assessed that passive systems do not eliminate the need for traditional heating in Poland. They can, however, substantially contribute to lower use of non-renewable fuels and the shortening of the heating season. The calculations showed diversification in the effectiveness of greenhouse systems resulting from climatic conditions, and allowed the identification of areas that are the most suitable for the passive use of solar radiation.
Keywords: solar energy, passive greenhouse systems, glazed balconies, climatic conditions
Procedia PDF Downloads 368
25582 Test Suite Optimization Using an Effective Meta-Heuristic BAT Algorithm
Authors: Anuradha Chug, Sunali Gandhi
Abstract:
Regression testing is a very expensive and time-consuming process carried out to ensure the validity of modified software. Because resources are insufficient to re-execute all the test cases in a time-constrained environment, efforts are underway to generate test data automatically without human intervention. Many search-based techniques have been proposed to generate efficient, effective, and optimized test data, so that the overall cost of software testing can be minimized. The generated test data should be able to uncover all potential lapses that exist in the software or product. Inspired by the natural foraging behavior of bats, the current study employed a meta-heuristic, search-based bat algorithm for optimizing the test data on the basis of certain parameters without compromising their effectiveness. Mathematical functions are also applied to effectively filter out redundant test data. As many as 50 Java programs were used to check the effectiveness of the proposed test data generation, and it was found that an 86% saving in testing effort can be achieved using the bat algorithm while covering 100% of the software code under test. The bat algorithm was found to be more efficient in terms of simplicity and flexibility when the results were compared with other nature-inspired algorithms such as the Firefly Algorithm (FA), Hill Climbing (HC), and Ant Colony Optimization (ACO). The output of this study would be useful to testers, as they can achieve 100% path coverage for testing with a minimum number of test cases.
Keywords: regression testing, test case selection, test case prioritization, genetic algorithm, bat algorithm
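The core position-update loop of the bat meta-heuristic can be sketched in simplified form. This is an illustrative sketch of the general algorithm on a toy cost function, not the study's test-data generator, and it omits the loudness and pulse-rate controls of the full algorithm:

```python
# Simplified bat-algorithm sketch (illustrative, not the study's
# implementation): bats update velocities relative to the current best
# solution and greedily accept improving moves, minimizing a toy cost.

import random

def bat_search(cost, lo, hi, n_bats=15, iters=100, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_bats)]
    vel = [0.0] * n_bats
    best = min(pos, key=cost)
    for _ in range(iters):
        for i in range(n_bats):
            freq = rng.uniform(0.0, 1.0)           # random pulse frequency
            vel[i] += (pos[i] - best) * freq       # velocity update vs. best
            cand = max(lo, min(hi, pos[i] + vel[i]))
            if cost(cand) < cost(pos[i]):          # greedy acceptance
                pos[i] = cand
            if cost(pos[i]) < cost(best):
                best = pos[i]
    return best

# Toy "testing cost", minimized at x = 3:
best = bat_search(lambda x: (x - 3) ** 2, lo=-10, hi=10)
print(best)  # typically converges near the optimum
```

In the test-suite setting, the position would encode a candidate set of test cases and the cost would penalize uncovered paths and suite size; the full algorithm also varies loudness and pulse emission rate to balance exploration and exploitation.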
Procedia PDF Downloads 381
25581 Modeling of Landslide-Generated Tsunamis in Georgia Strait, Southern British Columbia
Authors: Fatemeh Nemati, Lucinda Leonard, Gwyn Lintern, Richard Thomson
Abstract:
In this study, we will use modern numerical modeling approaches to estimate tsunami risks to the southern coast of British Columbia from landslides. Wave generation is to be simulated using the NHWAVE model, which solves the Navier-Stokes equations due to the more complex behavior of flow near the landslide source; far-field wave propagation will be simulated using the simpler model FUNWAVE_TVD with high-order Boussinesq-type wave equations, with a focus on the accurate simulation of wave propagation and regional- or coastal-scale inundation predictions.
Keywords: FUNWAVE-TVD, landslide-generated tsunami, NHWAVE, tsunami risk
Procedia PDF Downloads 155
25580 Disparities in Language Competence and Conflict: The Moderating Role of Cultural Intelligence in Intercultural Interactions
Authors: Catherine Peyrols Wu
Abstract:
Intercultural interactions are becoming increasingly common in organizations and life. These interactions are often the stage of miscommunication and conflict. In management research, these problems are commonly attributed to cultural differences in values and interactional norms. As a result, the notion that intercultural competence can minimize these challenges is widely accepted. Cultural differences, however, are not the only source of a challenge during intercultural interactions. The need to rely on a lingua franca – or common language between people who have different mother tongues – is another important one. In theory, a lingua franca can improve communication and ease coordination. In practice however, disparities in people’s ability and confidence to communicate in the language can exacerbate tensions and generate inefficiencies. In this study, we draw on power theory to develop a model of disparities in language competence and conflict in a multicultural work context. Specifically, we hypothesized that differences in language competence between interaction partners would be positively related to conflict such that people would report greater conflict with partners who have more dissimilar levels of language competence and lesser conflict with partners with more similar levels of language competence. Furthermore, we proposed that cultural intelligence (CQ) an intercultural competence that denotes an individual’s capability to be effective in intercultural situations, would weaken the relationship between disparities in language competence and conflict such that people would report less conflict with partners who have more dissimilar levels of language competence when the interaction partner has high CQ and more conflict when the partner has low CQ. We tested this model with a sample of 135 undergraduate students working in multicultural teams for 13 weeks. We used a round-robin design to examine conflict in 646 dyads nested within 21 teams. 
Results of analyses using social relations modeling provided support for our hypotheses. Specifically, we found that in intercultural dyads with large disparities in language competence, the partner with the lower level of language competence reported higher levels of interpersonal conflict. However, this relationship disappeared when the partner with higher language competence was also high in CQ. These findings suggest that communication in a lingua franca can be a source of conflict in intercultural collaboration when partners differ in their level of language competence, and that CQ can alleviate these effects during collaboration with partners who have relatively lower levels of language competence. Theoretically, this study underscores the benefits of CQ as a complement to language competence for intercultural effectiveness. Practically, these results further attest to the benefits of investing resources to develop language competence and CQ in employees engaged in multicultural work.
Keywords: cultural intelligence, intercultural interactions, language competence, multicultural teamwork
Procedia PDF Downloads 165
25579 Mobile App versus Website: A Comparative Eye-Tracking Case Study of Topshop
Authors: Zofija Tupikovskaja-Omovie, David Tyler, Sam Dhanapala, Steve Hayes
Abstract:
The UK is leading in online retail and mobile adoption. However, there is a dearth of information relating to mobile apparel retail, and developing an understanding of consumer browsing and purchase behaviour in the m-retail channel would provide apparel marketers, mobile website and app developers with the necessary understanding of consumers’ needs. Despite the rapid growth of mobile retail businesses, no published study has examined shopping behaviour on fashion mobile websites and apps. A mixed-method approach helped to understand why fashion consumers prefer websites on mobile devices when mobile apps are also available. The following research methods were employed: survey, eye-tracking experiments, observation, and interview with retrospective think-aloud. The mobile gaze-tracking device by SensoMotoric Instruments was used to understand frustrations in navigation and other issues facing consumers in the mobile channel. This method helped to validate and complement other traditional user-testing approaches in order to optimize user experience and enhance the development of the mobile retail channel. The study involved eight participants: females aged 18 to 35 years old who are existing mobile shoppers. The participants used the Topshop mobile app and website on a smart phone to complete a task according to a specified scenario leading to a purchase. The comparative study was based on: duration and time spent at different stages of the shopping journey, number of steps involved and product pages visited, search approaches used, layout and visual cues, as well as consumer perceptions and expectations. The results from the data analysis show significant differences in consumer behaviour when using a mobile app or website on a smart phone. Moreover, two types of problems were identified, namely technical issues and human errors. Having a mobile app does not guarantee success in satisfying mobile fashion consumers. 
The differences in layout and visual cues seem to influence the overall shopping experience on a smart phone. The layout of search results on the website was different from that of the mobile app. Therefore, participants, in most cases, behaved differently on the different platforms. The number of product pages visited on the mobile app was triple the number visited on the website, due to the limited visibility of products in the search results. Although the data on traffic trends held by retailers to date, including retail sector breakdowns for visits and views and data on device splits and duration, might seem a valuable source of information, it cannot explain why consumers visit many product pages, stay longer on the website or mobile app, or abandon the basket. A comprehensive list of pros and cons was developed by highlighting issues for the website and the mobile app, and recommendations are provided. The findings suggest that fashion retailers need to be aware of actual consumer behaviour on the mobile channel and of consumers' expectations in order to offer a seamless shopping experience. Added to this is the challenge of retaining existing and acquiring new customers. There seem to be differences in the way fashion consumers search and shop on mobile, which need to be explored in further studies.
Keywords: consumer behavior, eye-tracking technology, fashion retail, mobile app, m-retail, smart phones, topshop, user experience, website
Procedia PDF Downloads 459
25578 Modified InVEST for WhatsApp Messages Forensic Triage and Search through Visualization
Authors: Agria Rhamdhan
Abstract:
WhatsApp, as the most popular mobile messaging app, has been used as evidence in many criminal cases. Because mobile messaging generates large amounts of data, forensic investigation faces a large-data problem. Finding this important evidence is hard because current practice relies on tools and techniques that require manual analysis to check all messages. Analysing large sets of mobile messaging data in this way takes a great deal of time and effort. Our work offers a methodology based on forensic triage that reduces large data sets to manageable subsets which are easier to review in detail, and then shows the results through interactive visualization that highlights important terms, entities, and relationships via intelligent ranking using Term Frequency-Inverse Document Frequency (TF-IDF) and a Latent Dirichlet Allocation (LDA) model. By implementing this methodology, investigators can improve investigation processing time and the accuracy of results.
Keywords: forensics, triage, visualization, WhatsApp
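The TF-IDF ranking step can be sketched in a few lines of plain Python. The messages and query terms below are hypothetical stand-ins for a seized message set, and the scoring is a straightforward TF-IDF sum rather than the full triage pipeline:

```python
import math
from collections import Counter

def tfidf_rank(messages, query_terms):
    """Rank messages by the summed TF-IDF weight of the query terms."""
    n = len(messages)
    tokenized = [m.lower().split() for m in messages]
    # Document frequency: in how many messages does each query term appear?
    df = {t: sum(1 for toks in tokenized if t in toks) for t in query_terms}
    scores = []
    for i, toks in enumerate(tokenized):
        counts = Counter(toks)
        score = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue  # a term absent from the corpus contributes nothing
            tf = counts[t] / len(toks)  # term frequency within this message
            idf = math.log(n / df[t])   # inverse document frequency
            score += tf * idf
        scores.append((i, score))
    # Highest-scoring messages first: these are the triage candidates
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Hypothetical message corpus and investigator query
messages = [
    "meet me at the port tonight",
    "happy birthday, see you at dinner",
    "the port shipment arrives tonight",
]
ranking = tfidf_rank(messages, ["port", "tonight"])
```

An investigator would review only the top-ranked messages first; the LDA step would then group the retained messages into topics for the interactive visualization.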
Procedia PDF Downloads 168
25577 Low Cost Webcam Camera and GNSS Integration for Updating Home Data Using AI Principles
Authors: Mohkammad Nur Cahyadi, Hepi Hapsari Handayani, Agus Budi Raharjo, Ronny Mardianto, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan
Abstract:
PDAM (the local water company) determines customer charges by considering the customer's building or house. Charge determination significantly affects PDAM income and customer costs because the PDAM applies a subsidy policy for customers classified as small households. Periodic updates are needed so that pricing stays in line with the target. A thorough customer survey in Surabaya is needed to update customer building data. However, the surveys carried out so far have deployed officers to survey each PDAM customer one by one. Surveys with this method require a lot of effort and cost. For this reason, this research offers a technology called mobile mapping, a mapping method that is more efficient in terms of time and cost. The use of this tool is also quite simple: the device is installed on a car so that it can record the surrounding buildings while the car is moving. Mobile mapping technology generally uses lidar sensors together with GNSS, but this technology is costly. To overcome this problem, this research develops a low-cost mobile mapping technology using webcam camera sensors combined with GNSS and IMU sensors. The camera used has specifications of 3 MP with a resolution of 720p and a diagonal field of view of 78⁰. The principle of this invention is to integrate four webcam camera sensors with GNSS and an IMU to acquire photo data tagged with location data (latitude, longitude) from the GNSS and orientation data (roll, pitch, yaw) from the IMU. The device is also equipped with a tripod and a vacuum mount to attach it to the car's roof so it does not fall off while driving. The output data from this technology is analyzed with artificial intelligence to reduce similar data (cosine similarity) and then classify building types. Data reduction is used to eliminate similar data and retain the images that display the complete house, so that they can be processed for the subsequent classification of buildings. 
The AI method used is transfer learning, utilizing the pre-trained VGG-16 model. From the analysis of similarity data, it was found that data reduction reached 50%. Georeferencing is then done using the Google Maps API to get address information corresponding to the coordinates in the data. After that, a geographic join links the survey data with the customer data already held by PDAM Surya Sembada Surabaya.
Keywords: mobile mapping, GNSS, IMU, similarity, classification
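The cosine-similarity reduction step can be illustrated as follows. The three-element vectors stand in for the much longer VGG-16 feature embeddings, and the 0.9 threshold is an assumed value, not the one used in the study:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def deduplicate(vectors, threshold=0.9):
    """Keep a frame only if it is not a near-duplicate of any frame kept so far."""
    kept = []
    for i, v in enumerate(vectors):
        if all(cosine_similarity(v, vectors[j]) < threshold for j in kept):
            kept.append(i)
    return kept

# Toy embeddings: frames 0 and 1 show almost the same building, frame 2 a new one
frames = [[1.0, 0.0, 0.0], [0.99, 0.14, 0.0], [0.0, 1.0, 0.0]]
kept = deduplicate(frames)  # frame 1 is dropped as a near-duplicate of frame 0
```

In the actual pipeline the kept frames would then be georeferenced and passed to the building-type classifier.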
Procedia PDF Downloads 84
25576 An Investigation into the Views of Distant Science Education Students Regarding Teaching Laboratory Work Online
Authors: Abraham Motlhabane
Abstract:
This research analysed the written views of science education students regarding the teaching of laboratory work in the online mode. The research adopted a qualitative methodology. The qualitative research was aimed at investigating small, distinct groups, normally regarded as a single-site study. Qualitative research was used to describe and analyse the phenomena from the students' perspective. This means the research began with worldview assumptions and used theoretical lenses on the research problem to inquire into the meanings individual students attach to it. The research was conducted with three groups of students studying for a Postgraduate Certificate in Education, a Bachelor of Education, and an Honours Bachelor of Education, respectively. In each of these study programmes, the science education module is compulsory. Five science education students from each study programme were purposively selected to participate in this research; therefore, 15 students participated. To analyse the data, the data were first printed and hard copies were used in the analysis. The data were read several times, and key concepts and ideas were highlighted. Themes and patterns were identified to describe the data. Coding was used as a process of organising and sorting the data. The findings of the study are very diverse: some students are in favour of online laboratory work, whereas other students argue that science can only be learnt through hands-on experimentation.
Keywords: online learning, laboratory work, views, perceptions
Procedia PDF Downloads 144
25575 Left Posterior Pericardiotomy in the Prevention of Post-Operative Atrial Fibrillation and Cardiac Tamponade: A Retrospective Study of 2118 Isolated Coronary Artery Bypass Graft Patients
Authors: Ayeshmanthe Rathnayake, Siew Goh, Carmel Fenton, Ashutosh Hardikar
Abstract:
Post-operative atrial fibrillation (POAF) is the most frequent complication of cardiac surgery and is associated with reduced survival; increased rates of cognitive changes, cerebrovascular accident, heart failure, renal dysfunction, and infection; and increased length of stay and hospital costs. Cardiac tamponade, although less common, carries high morbidity and mortality. Shed mediastinal blood in the pericardial space is a major source of intrapericardial oxidative stress and inflammation that triggers POAF. A left posterior pericardiotomy aims to shunt blood from the pericardium into the pleural space and may have a role in the prevention of POAF as well as cardiac tamponade. 2118 patients underwent isolated coronary artery bypass grafting (CABG) at Royal Hobart Hospital from 2008 to 2021. They were divided into pericardiotomy and control groups. Patient baseline demographics, intraoperative data, and post-operative outcomes were reviewed retrospectively. The total incidence of new POAF and cardiac tamponade was 26.1% and 0.75%, respectively. Both primary outcomes, the incidence of POAF (22.9% vs 27.8%, OR 0.77, p<0.05) and of cardiac tamponade (0% vs 1.1%, OR 0.85, p<0.05), were lower in the pericardiotomy group. Increasing age, BMI, poor left ventricular function (EF <30%), and return to theatre were independent predictors of developing POAF. There were similar rates of return to theatre for bleeding; however, there were no cases of tamponade in the pericardiotomy group. There were no complications attributable to left posterior pericardiotomy, and the time added to the duration of surgery was minimal. Left posterior pericardiotomy is associated with a significant reduction in the incidence of POAF and cardiac tamponade, and is safe and efficient.
Keywords: cardiac surgery, pericardiotomy, post-operative atrial fibrillation, cardiac tamponade
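As a quick consistency check, the reported odds ratio for POAF follows directly from the two incidence rates; the sketch below assumes an unadjusted odds ratio computed from the raw group proportions:

```python
def odds_ratio(p_treated, p_control):
    """Unadjusted odds ratio for an event at rate p_treated vs p_control."""
    odds_treated = p_treated / (1 - p_treated)
    odds_control = p_control / (1 - p_control)
    return odds_treated / odds_control

# POAF: 22.9% in the pericardiotomy group vs 27.8% in the control group
poaf_or = odds_ratio(0.229, 0.278)  # approximately 0.77, as reported
```

The same calculation cannot be applied to the tamponade result, since a 0% event rate gives an odds of zero; the reported OR of 0.85 there presumably comes from an adjusted model.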
Procedia PDF Downloads 91
25574 Chemical and Physical Properties and Biocompatibility of Ti–6Al–4V Produced by Electron Beam Rapid Manufacturing and Selective Laser Melting for Biomedical Applications
Authors: Bing–Jing Zhao, Chang-Kui Liu, Hong Wang, Min Hu
Abstract:
Electron beam rapid manufacturing (EBRM) and selective laser melting (SLM) are additive manufacturing processes that use 3D CAD data as a digital information source and energy in the form of a high-power laser beam or electron beam to create three-dimensional metal parts by fusing fine metallic powders together. Objective: The present study was conducted to evaluate the mechanical properties, phase transformation, corrosion behaviour, and biocompatibility of Ti-6Al-4V produced by EBRM, SLM, and forging. Method: Ti-6Al-4V alloy standard test pieces were manufactured by EBRM, SLM, and forging according to AMS4999, GB/T228, and ISO 10993. The mechanical properties were analyzed with a universal testing machine. The phase transformation was analyzed by X-ray diffraction and scanning electron microscopy. The corrosion behaviour was analyzed by electrochemical methods. The biocompatibility was analyzed by co-culturing with mesenchymal stem cells, using scanning electron microscopy (SEM) and an alkaline phosphatase assay (ALP) to evaluate cell adhesion and differentiation, respectively. Results: The mechanical properties, phase transformation, corrosion behaviour, and biocompatibility of Ti-6Al-4V produced by EBRM and SLM were similar to those of the forged alloy and meet the mechanical property requirements of the AMS4999 standard. The EBRM product showed an α-phase microstructure, in contrast to the α′-phase microstructure of the SLM product. Mesenchymal stem cell adhesion and differentiation were good. Conclusion: This study shows that Ti-6Al-4V alloy manufactured by the EBRM and SLM techniques can meet the relevant medical standards, but further study is needed before it can be applied with confidence in clinical practice.
Keywords: 3D printing, Electron Beam Rapid Manufacturing (EBRM), Selective Laser Melting (SLM), Computer Aided Design (CAD)
Procedia PDF Downloads 454
25573 The Communication Library DIALOG for iFDAQ of the COMPASS Experiment
Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius
Abstract:
Modern experiments in high energy physics impose great demands on the reliability, the efficiency, and the data rate of Data Acquisition Systems (DAQ). This contribution focuses on the development and deployment of the new communication library DIALOG for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. The iFDAQ, utilizing a hardware event builder, is designed to be able to read out data at the maximum rate of the experiment. The DIALOG library is a communication system for both distributed and mixed environments; it provides a network-transparent inter-process communication layer. Built on the high-performance, modern C++ framework Qt and its Qt Network API, the DIALOG library presents an alternative to the previously used DIM library. The DIALOG library was fully incorporated into all processes of the iFDAQ during the 2016 run. From the software point of view, it might be considered a significant improvement of the iFDAQ in comparison with the previous run. To extend the debugging possibilities, online monitoring of the communication among processes via the DIALOG GUI is a desirable feature. In the paper, we present the DIALOG library from several perspectives and discuss it in detail. Moreover, an efficiency measurement and a comparison with the DIM library with respect to the iFDAQ requirements are provided.
Keywords: data acquisition system, DIALOG library, DIM library, FPGA, Qt framework, TCP/IP
Procedia PDF Downloads 316
25572 Mining Scientific Literature to Discover Potential Research Data Sources: An Exploratory Study in the Field of Haemato-Oncology
Authors: A. Anastasiou, K. S. Tingay
Abstract:
Background: Discovering suitable datasets is an important part of health research, particularly for projects working with clinical data from patients organized in cohorts (cohort data), but with the proliferation of so many national and international initiatives, it is becoming increasingly difficult for research teams to locate real world datasets that are most relevant to their project objectives. We present a method for identifying healthcare institutes in the European Union (EU) which may hold haemato-oncology (HO) data. A key enabler of this research was the bibInsight platform, a scientometric data management and analysis system developed by the authors at Swansea University. Method: A PubMed search was conducted using HO clinical terms taken from previous work. The resulting XML file was processed using the bibInsight platform, linking affiliations to the Global Research Identifier Database (GRID). GRID is an international, standardized list of institutions, including the city and country in which the institution exists, as well as a category of the main business type, e.g., Academic, Healthcare, Government, Company. Countries were limited to the 28 current EU members, and institute type to 'Healthcare'. An article was considered valid if at least one author was affiliated with an EU-based healthcare institute. Results: The PubMed search produced 21,310 articles, consisting of 9,885 distinct affiliations with correspondence in GRID. Of these articles, 760 were from EU countries, and 390 of these were healthcare institutes. One affiliation was excluded as being a veterinary hospital. Two EU countries did not have any publications in our analysis dataset. The results were analysed by country and by individual healthcare institute. Networks both within the EU and internationally show institutional collaborations, which may suggest a willingness to share data for research purposes. Geographical mapping can ensure that data has broad population coverage. 
Collaborations with industry or government may exclude healthcare institutes that have embargoes or additional costs associated with data access. Conclusions: Data reuse is becoming increasingly important, both for ensuring the validity of results and for the economical use of available resources. The ability to identify potential, specific data sources from over twenty thousand articles in less than an hour could assist in improving knowledge of, and access to, data sources. As our method has not yet established whether these healthcare institutes hold data, or merely publish on the topic, future work will involve text mining of data-specific concordant terms to identify numbers of participants, demographics, study methodologies, and sub-topics of interest.
Keywords: data reuse, data discovery, data linkage, journal articles, text mining
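The affiliation-filtering step described in the Method section can be sketched as below. The XML fragment is a heavily simplified stand-in for a real PubMed export, and the GRID-style lookup table, institute names, and shortened country list are all hypothetical:

```python
import xml.etree.ElementTree as ET

# Hypothetical GRID-style lookup: affiliation string -> (country, institute type)
GRID = {
    "Example University Hospital Berlin": ("Germany", "Healthcare"),
    "Swansea University": ("United Kingdom", "Academic"),
}
# A few member states for illustration (the study used all 28 then-current members)
EU_MEMBERS = {"Germany", "France", "United Kingdom", "Italy", "Spain"}

# Simplified stand-in for a PubMed XML export
xml_doc = """<PubmedArticleSet>
  <PubmedArticle>
    <Affiliation>Example University Hospital Berlin</Affiliation>
    <Affiliation>Swansea University</Affiliation>
  </PubmedArticle>
</PubmedArticleSet>"""

def is_valid(article):
    """Valid if at least one affiliation resolves to an EU healthcare institute."""
    for aff in article.iter("Affiliation"):
        country, kind = GRID.get(aff.text, (None, None))
        if country in EU_MEMBERS and kind == "Healthcare":
            return True
    return False

root = ET.fromstring(xml_doc)
valid_articles = [article for article in root if is_valid(article)]
```

The real pipeline works the same way at scale: resolve each affiliation through GRID, restrict to EU countries and the 'Healthcare' category, and keep any article with at least one qualifying author.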
Procedia PDF Downloads 115
25571 A Long Tail Study of eWOM Communities
Authors: M. Olmedilla, M. R. Martinez-Torres, S. L. Toral
Abstract:
Electronic word-of-mouth (eWOM) communities today represent an important source of information on which more and more customers base their purchasing decisions. They include thousands of reviews, concerning very different products and services, posted by many individuals geographically distributed all over the world. Due to their massive audience, eWOM communities can help users find the product they are looking for even if it is less popular or rare. This is known as the long tail effect, which leads to a larger number of lower-selling niche products. This paper analyzes the long tail effect in a well-known eWOM community and defines a tool for finding niche products unavailable through conventional channels.
Keywords: eWOM, online user reviews, long tail theory, product categorization, social network analysis
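The long tail effect the paper analyzes can be quantified with a simple concentration measure; the review counts below are invented for illustration:

```python
def head_share(review_counts, head_fraction=0.2):
    """Fraction of all reviews that fall on the top head_fraction of products.

    A value near 0.8 for head_fraction=0.2 is the classic Pareto pattern;
    the remaining products form the long tail.
    """
    counts = sorted(review_counts, reverse=True)
    head = max(1, int(len(counts) * head_fraction))
    return sum(counts[:head]) / sum(counts)

# Hypothetical review counts for ten products in an eWOM community
reviews = [500, 300, 50, 40, 30, 20, 20, 20, 10, 10]
share = head_share(reviews)  # the two most-reviewed products hold 80% of reviews
```

A community exhibiting a strong long tail would show a high head share together with a large number of products that each attract only a handful of reviews.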
Procedia PDF Downloads 421
25570 Using Data Mining Technique for Scholarship Disbursement
Authors: J. K. Alhassan, S. A. Lawal
Abstract:
This work concerns decision tree-based classification for the disbursement of scholarships. A tree-based data mining classification technique is used to determine the generic rules for disbursing the scholarship. Based on the rules derived from the tree, the system determines the class (status) to which an applicant belongs: Granted or Not Granted. Applicants classified as Granted successfully obtain the scholarship, while those classified as Not Granted are unsuccessful in the scheme. An algorithm that classifies applicants according to the rules from the tree-based classification was also developed. The tree-based classification is adopted because of its efficiency, effectiveness, and easy-to-comprehend features. The system was tested with data from the National Information Technology Development Agency (NITDA) Abuja, a parastatal of the Federal Ministry of Communication Technology mandated to develop and regulate information technology in Nigeria. The system was found to work according to the specification. It is therefore recommended for all scholarship disbursement organizations.
Keywords: classification, data mining, decision tree, scholarship
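Once a decision tree has been trained, the rules read off it reduce to nested conditions; the attributes and thresholds below are hypothetical illustrations, not the actual NITDA criteria:

```python
def classify_applicant(applicant):
    """Assign Granted / Not Granted by walking rules read off a decision tree.

    The attributes (gpa, income, has_publication) and cut-offs are invented
    for illustration; a real tree would be induced from historical data.
    """
    if applicant["gpa"] >= 3.5:
        # High achievers: the scheme still targets lower-income households
        return "Granted" if applicant["income"] < 50000 else "Not Granted"
    if applicant["gpa"] >= 3.0 and applicant["has_publication"]:
        return "Granted"
    return "Not Granted"

status = classify_applicant({"gpa": 3.6, "income": 30000, "has_publication": False})
```

Encoding the tree as explicit conditions is what makes this approach easy to comprehend: each leaf of the tree corresponds to one human-readable rule.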
Procedia PDF Downloads 376
25569 Relationship between Functional Properties and Supramolecular Structure of the Poly(Trimethylene 2,5-Furanoate) Based Multiblock Copolymers with Aliphatic Polyethers or Aliphatic Polyesters
Authors: S. Paszkiewicz, A. Zubkiewicz, A. Szymczyk, D. Pawlikowska, I. Irska, E. Piesowicz, A. Linares, T. A. Ezquerra
Abstract:
Over the last century, the world has become increasingly dependent on oil as its main source of chemicals and energy. Driven largely by the strong economic growth of India and China, demand for oil is expected to increase significantly in the coming years. This growth in demand, combined with diminishing reserves, will require the development of new, sustainable sources of fuels and bulk chemicals. Biomass is an attractive alternative feedstock, as it is a widely available carbon source apart from oil and coal. Nowadays, academic and industrial research in the field of polymer materials is strongly oriented towards bio-based alternatives to petroleum-derived plastics with enhanced properties for advanced applications. In this context, 2,5-furandicarboxylic acid (FDCA), a biomass-based chemical product derived from lignocellulose, is one of the highest-potential bio-based building blocks for polymers and the first candidate to replace petro-derived terephthalic acid. FDCA has been identified as one of the top 12 chemicals of the future, which may be used as a platform chemical for the synthesis of biomass-based polyesters. The aim of this study is to synthesize and characterize multiblock copolymers containing rigid segments of poly(trimethylene 2,5-furanoate) (PTF) and soft segments either of poly(tetramethylene oxide) (PTMO), with excellent elastic properties, or of the aliphatic polyester polycaprolactone (PCL). Two series of PTF-based copolymers, i.e., PTF-block-PTMO-T and PTF-block-PCL-T, with different contents of flexible segments, were synthesized by means of a two-step melt polycondensation process and characterized by various methods. The rigid PTF segments, as well as the flexible PTMO or PCL ones, were randomly distributed along the chain. On the basis of 1H NMR, SAXS, WAXS, DSC, and DMTA results, one can conclude that the two phases were thermodynamically immiscible and that the phase transition temperatures varied with the composition of the copolymer. 
The copolymers containing 25, 35, and 45 wt.% of flexible segments (PTMO) exhibited elastomeric characteristics. Moreover, the temperatures corresponding to 5%, 25%, 50%, and 90% mass loss, as well as the tensile modulus, decrease with increasing content of aliphatic polyether or aliphatic polyester in the composition.
Keywords: furan based polymers, multiblock copolymers, supramolecular structure, functional properties
Procedia PDF Downloads 129