Search results for: Data Collete Bob-Manuel
21835 Content Analysis and Attitude of Thai Students towards Thai Series “Hormones: Season 2”
Authors: Siriporn Meenanan
Abstract:
The objective of this study is to investigate the attitude of Thai students towards the Thai series "Hormones the Series Season 2". The study was conducted as quantitative research, and questionnaires were used to collect data from a sample group of 400 people. Descriptive statistics were used in the data analysis. The findings reveal that most participants have positive comments regarding the series. They strongly agreed that the series reflects the way of life and problems of teenagers in Thailand. Hence, the participants believe that if adults have a chance to watch the series, they will gain a better understanding of teenagers. In addition, the participants also agreed that the contents of the series are appropriate and satisfying, as the contents of "Hormones the Series Season 2" will raise awareness among teens, who can use it as a guide to prevent problems that might arise during their teenage years.
Keywords: content analysis, attitude, Thai series, Hormones the Series
Procedia PDF Downloads 229
21834 Using Collaborative Pictures to Understand Student Experience
Authors: Tessa Berg, Emma Guion Akdag
Abstract:
Summative feedback forms are used in academia to gather data on course quality and student understanding. In these forms, students answer a series of questions about the course they are soon to finish. Feedback forms are notorious for being homogenised and limiting, and thus the data captured are often neutral and lacking in tacit emotional responses. This paper contrasts student feedback forms with collaborative drawing. We analyse 19 pictures drawn by international students on a pre-sessional course. Through visuals, we present an approach that enables a holistic level of student understanding. Visuals communicate irrespective of possible language, cultural and educational barriers. This paper sought to discover whether the pictures mirrored the feedback given on a typical feedback form. Findings indicate a considerable difference between the two approaches, and we therefore highlight the value of collaborative drawing as a complementary resource to aid the understanding of student experience.
Keywords: feedback forms, visualisation, student experience, collaborative drawing
Procedia PDF Downloads 345
21833 Health Trajectory Clustering Using Deep Belief Networks
Authors: Farshid Hajati, Federico Girosi, Shima Ghassempour
Abstract:
We present a Deep Belief Network (DBN) method for clustering health trajectories. A DBN is a deep architecture that consists of a stack of Restricted Boltzmann Machines (RBMs). In a deep architecture, each layer learns more complex features than the layers before it. The proposed method relies on the DBN for clustering without using the backpropagation learning algorithm. The proposed DBN performs better than a conventional deep neural network owing to the initialization of the connecting weights. We use the Contrastive Divergence (CD) method to train the RBMs, which increases the performance of the network. The performance of the proposed method is evaluated extensively on the Health and Retirement Study (HRS) database. The University of Michigan Health and Retirement Study is a nationally representative longitudinal study that has surveyed more than 27,000 elderly and near-elderly Americans since its inception in 1992. Participants are interviewed every two years, and data are collected on physical and mental health, insurance coverage, financial status, family support systems, labor market status, and retirement planning. The dataset is publicly available, and we use the RAND HRS version L, which is an easy-to-use and cleaned-up version of the data. The sample data set contains 268 trajectories, each of length 10; the trajectories exclude patients who died during follow-up and represent 10 different interviews of living patients. Compared to state-of-the-art benchmarks, the experimental results show the effectiveness and superiority of the proposed method in clustering health trajectories.
Keywords: health trajectory, clustering, deep learning, DBN
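To make the greedy, backprop-free training that the abstract describes concrete, here is a minimal numpy sketch of an RBM stack trained with CD-1. It is an illustration only, not the authors' implementation; the layer sizes, learning rate, and the suggestion to cluster the top-layer features (e.g. with k-means) are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """One Restricted Boltzmann Machine layer, trained with CD-1."""
    def __init__(self, n_visible, n_hidden, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible bias
        self.b_h = np.zeros(n_hidden)    # hidden bias
        self.lr, self.rng = lr, rng

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        ph0 = self.hidden_probs(v0)                        # positive phase
        h0 = (self.rng.random(ph0.shape) < ph0).astype(float)
        v1 = self.visible_probs(h0)                        # one Gibbs step
        ph1 = self.hidden_probs(v1)
        # Contrastive-divergence updates
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

def train_dbn(data, layer_sizes, epochs=20):
    """Greedy layer-wise training: each RBM learns from the hidden
    activations of the layer below, with no backpropagation."""
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_step(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)   # features fed to the next layer
    return rbms, x                # x: deep features, e.g. input to k-means
```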
Procedia PDF Downloads 369
21832 Harnessing Emerging Creative Technology for Knowledge Discovery of Multiwavelength Datasets
Authors: Basiru Amuneni
Abstract:
Astronomy is one domain experiencing a rise in data. Traditional tools for data management have been employed in the quest for knowledge discovery; however, these traditional tools become limited in the face of big data. One means of maximizing knowledge discovery for big data is the use of scientific visualisation. The aim of this work is to explore the possibilities offered by the emerging creative technologies of Virtual Reality (VR) systems and game engines to visualize multiwavelength datasets. Game engines are primarily used for developing video games; however, their advanced graphics can be exploited for scientific visualization, which provides a means to graphically illustrate scientific data to ease human comprehension. Modern astronomy is now in the era of multiwavelength data, where a single galaxy, for example, is captured by telescopes several times and at different electromagnetic wavelengths to give a more comprehensive picture of its physical characteristics. Visualising this in an immersive environment would be more intuitive and natural for an observer. This work presents a standalone VR application that accesses galaxy FITS files. The application was built using the Unity Game Engine for the graphics underpinning and the OpenXR API for the VR infrastructure. The work used a methodology known as Design Science Research (DSR), which entails the act of 'using design as a research method or technique'. The key stages of the galaxy modelling pipeline are FITS data preparation, galaxy modelling, Unity 3D visualisation, and VR display. The FITS data format cannot be read by the Unity Game Engine directly, so a DLL (CSHARPFITS) that provides native support for reading and writing FITS files was used. The galaxy modeller uses an approach that integrates cleaned FITS image pixels into the graphics pipeline of the Unity Game Engine. The cleaned FITS images are input to the galaxy modelling phase, which has a pre-processing script that extracts pixel values and galaxy world positions and colour-maps the FITS image pixels. The user can visualise galaxies in different light bands, control the blend of an image with similar images from different sources, or fuse images for a holistic view. The framework will allow users to build tools that realise complex workflows for public outreach, and possibly scientific work, with increased scalability, near-real-time interactivity, and ease of access. The application is presented in an immersive environment and can use any commercially available headset built on the OpenXR API. The user can select galaxies in the scene, teleport to a galaxy, pan, zoom in and out, and change the colour gradients of the galaxy. The findings and design lessons learnt in the implementation of different use cases will contribute to the development and design of game-based visualisation tools in immersive environments by enabling informed decisions to be made.
Keywords: astronomy, visualisation, multiwavelength dataset, virtual reality
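The pre-processing stage described above (extracting pixel values, world positions, and colour-mapped intensities from a FITS image) can be illustrated outside Unity. The sketch below uses Python with astropy rather than the CSHARPFITS DLL the project actually uses, and assumes a simple 2-D image in the primary HDU; the threshold, percentile clip, and colour map are illustrative choices.

```python
import numpy as np
from astropy.io import fits
from astropy.wcs import WCS
from matplotlib import cm

def fits_to_point_cloud(path, percentile=99.5):
    """Extract pixel intensities, sky positions and colours from one band."""
    with fits.open(path) as hdul:
        data = hdul[0].data.astype(float)   # assumes a 2-D primary HDU
        wcs = WCS(hdul[0].header)
    # Clean: clip extreme values and normalise intensities to [0, 1]
    vmax = np.nanpercentile(data, percentile)
    norm = np.clip(np.nan_to_num(data) / vmax, 0.0, 1.0)
    ys, xs = np.nonzero(norm > 0.05)            # drop faint background pixels
    world = wcs.pixel_to_world_values(xs, ys)   # sky coordinates per pixel
    colours = cm.inferno(norm[ys, xs])          # colour-map the intensities
    return np.column_stack(world), norm[ys, xs], colours
```

In the actual pipeline the equivalent arrays would be handed to the Unity graphics pipeline as vertex positions and vertex colours.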
Procedia PDF Downloads 91
21831 Validation of a Fluid-Structure Interaction Model of an Aortic Dissection versus a Bench Top Model
Authors: K. Khanafer
Abstract:
The aim of this investigation was to validate the fluid-structure interaction (FSI) model of a type B aortic dissection against our experimental results from a bench-top model. Another objective was to study the relationship between the size of a septectomy that increases the outflow of the false lumen and its effect on the pressure differential between the true lumen and the false lumen. FSI analysis based on Galerkin's formulation was used in this investigation to study the flow pattern and hemodynamics within a flexible type B aortic dissection model using boundary conditions from our experimental data. The numerical results of our model were verified against the experimental data for various tear sizes and locations. Thus, CFD tools have a potential role in evaluating different scenarios and aortic dissection configurations.
Keywords: aortic dissection, fluid-structure interaction, in vitro model, numerical
Procedia PDF Downloads 271
21830 Development of a PM2.5 Forecasting System in Seoul, South Korea Using Chemical Transport Modeling and ConvLSTM-DNN
Authors: Ji-Seok Koo, Hee-Yong Kwon, Hui-Young Yun, Kyung-Hui Wang, Youn-Seo Koo
Abstract:
This paper presents a forecasting system for PM2.5 levels in Seoul, South Korea, leveraging a combination of chemical transport modeling and ConvLSTM-DNN machine learning technology. Exposure to PM2.5 has known detrimental impacts on public health, making its prediction crucial for establishing preventive measures. Existing forecasting models, like the Community Multiscale Air Quality (CMAQ) and Weather Research and Forecasting (WRF) models, are hindered by their reliance on uncertain input data, such as anthropogenic emissions and meteorological patterns, as well as certain intrinsic model limitations. The system we have developed specifically addresses these issues by integrating machine learning and using carefully selected input features that account for local and distant sources of PM2.5. In South Korea, the PM2.5 concentration is greatly influenced by both local emissions and long-range transport from China, and our model effectively captures these spatial and temporal dynamics. Our PM2.5 prediction system combines the strengths of the advanced hybrid machine learning algorithms ConvLSTM and DNN to improve upon the limitations of the traditional CMAQ model. Data used in the system include forecasted information from the CMAQ and WRF models, along with actual PM2.5 concentration and weather variable data from monitoring stations in China and South Korea. The system was implemented specifically for Seoul's PM2.5 forecasting.
Keywords: PM2.5 forecast, machine learning, ConvLSTM, DNN
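A hybrid ConvLSTM-DNN of the kind described can be sketched in Keras as follows. All shapes (a 24-hour input history, a 32x32 grid, 6 gridded channels, 10 station-level features, a 24-hour forecast horizon) are illustrative assumptions, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Inputs: a short history of gridded fields (CMAQ/WRF forecasts plus
# observed PM2.5 and weather) and a vector of station-level predictors.
grid_in = layers.Input(shape=(24, 32, 32, 6))   # (hours, lat, lon, channels)
site_in = layers.Input(shape=(10,))             # station-level features

# ConvLSTM captures the joint spatial-temporal structure of the fields.
x = layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                      return_sequences=False)(grid_in)
x = layers.GlobalAveragePooling2D()(x)

# The DNN head fuses the learned spatial summary with local predictors.
x = layers.Concatenate()([x, site_in])
x = layers.Dense(64, activation="relu")(x)
out = layers.Dense(24)(x)                       # next 24 h of PM2.5

model = Model([grid_in, site_in], out)
model.compile(optimizer="adam", loss="mse")
model.summary()
```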
Procedia PDF Downloads 54
21829 Revealing the Potential of Geotourism and Geoheritage of Gedangsari Area, Yogyakarta
Authors: Cecilia Jatu, Adventino
Abstract:
Gedangsari is located in Gunungkidul, Yogyakarta Province, and meets several criteria for designation as a new geosite. The research area lies in the southern mountain zone of Java and is composed of five rock formations of Oligocene to Middle Miocene age. The purpose of this study is to reveal the geotourism and geoheritage potential to be proposed as a new geosite and to make a geosite map of Gedangsari. The research method used is descriptive data collection, which includes quantitative geological data collection, geotourism, and heritage sites, supported by petrographic analysis, geological structure, geological mapping, and SWOT analysis. The geological data show that Gedangsari consists of igneous rock (intrusions), pyroclastic rock, and sedimentary rock. These conditions have produced varied and distinctive geomorphological landforms. Geotourism sites in Gedangsari include the Luweng Sampang Canyon, the Gedangsari Bouma Sequence, the Watugajah Columnar Joint, the Gedangsari Marine Fan Sediment, and the Tegalrejo Waterfall. There is also Tegalrejo Village, which can be considered a geoheritage site because of its culture and traditional batik cloth. According to the results of the SWOT analysis, the Gedangsari geosite must be developed and appropriately promoted in order to raise its profile. The development of the geosite area will have a significant impact, improving the economic growth of the surrounding community, and can be used by the government as base information for sustainable development. In addition, an educational map of the geological conditions and geotourism locations of the Gedangsari geosite can increase people's knowledge about Gedangsari.
Keywords: Gedangsari, geoheritage, geotourism, geosite
Procedia PDF Downloads 122
21828 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour
Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling
Abstract:
Digital Twin (DT) technology is a new technology that appeared in the early 21st century. A DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy in predicting the damage severity, while the deep learning algorithms were found to be useful for estimating the location of damage of small severity.
Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model
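The offline/online split the abstract describes rests on projecting the full-order FE system onto a low-dimensional basis. A generic sketch of one common reduced-basis construction (POD via SVD of offline snapshots, followed by Galerkin projection online) is shown below; the paper does not specify its exact RB algorithm, so this is an assumed variant.

```python
import numpy as np

def build_reduced_basis(snapshots, tol=1e-6):
    """Offline stage. snapshots: (n_dof, n_snapshots) matrix of FE
    solutions collected over loads/parameters of interest."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]                  # basis V: n_dof x r, with r << n_dof

def solve_online(K, f, V):
    """Online stage: Galerkin projection replaces the n x n FE solve
    with an r x r solve, which is what speeds up real-time analysis."""
    K_r = V.T @ K @ V
    f_r = V.T @ f
    u_r = np.linalg.solve(K_r, f_r)
    return V @ u_r                   # reconstruct the full-order field
```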
Procedia PDF Downloads 99
21827 Fostering Resilience in Early Adolescents: A Canadian Evaluation of the HEROES Program
Authors: Patricia L. Fontanilla, David Nordstokke
Abstract:
Introduction: Today's children and youth face increasing social and behavioural challenges, leading to delays in social development and greater mental health needs. Early adolescents (aged 9 to 14) are experiencing a rise in mental health symptoms and diagnoses. This study examines the impact of HEROES, a social-emotional learning (SEL) program, on resilience and academic outcomes in early adolescents. The HEROES program is designed to enhance resilience, the ability to adapt and thrive in the face of adversity, equipping youth to navigate developmental transitions and challenges. This study's objective was to evaluate the program's long-term effectiveness by measuring changes in resilience and academic resilience across 10 months. Methodology: This study collected data from 21 middle school students (grades 7 to 9) in a rural Canadian school. Quantitative data were gathered at four intervals: pre-intervention, post-intervention, and at 2- and 4-month follow-ups. Data were analyzed with linear mixed models (LMM). Results: Findings showed statistically significant increases in academic resilience over time and significant increases in resilience from pre-intervention to 2 and 4 months later. Limitations include a small sample size, which may affect generalizability. Conclusion: The HEROES program shows promise in increasing resilience and academic resilience among early adolescents through SEL skill development.
Keywords: academic resilience, early adolescence, resilience, SEL, social-emotional learning program
Procedia PDF Downloads 11
21826 An Inverse Approach for Determining Creep Properties from a Miniature Thin Plate Specimen under Bending
Authors: Yang Zheng, Wei Sun
Abstract:
This paper describes a new approach that can be used to relate the experimental creep deformation data obtained from a miniaturized thin-plate bending specimen test to the corresponding uniaxial data, based on an inverse application of the reference stress method. The geometry of the thin plate is fully defined by the span of the support, l, the width, b, and the thickness, d. Firstly, analytical solutions for the steady-state, load-line creep deformation rate of the thin plates for a Norton power law under plane stress (b → 0) and plane strain (b → ∞) conditions were obtained, from which it can be seen that the load-line deformation rate of the thin plate under plane-stress conditions is much higher than that under plane-strain conditions. Since analytical solutions are not available for plates with arbitrary b-values, finite element (FE) analyses are used to obtain them. Based on the FE results obtained for various b/l ratios and creep exponents, n, as well as the analytical solutions under plane stress and plane strain conditions, approximate numerical solutions for the deformation rate are obtained by curve fitting. Using these solutions, a reference stress method is utilised to establish the conversion relationships between the applied load and the equivalent uniaxial stress, and between the creep deformations of the thin plate and the equivalent uniaxial creep strains. Finally, the accuracy of the empirical solution was assessed by using a set of 'theoretical' experimental data.
Keywords: bending, creep, thin plate, materials engineering
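Schematically, the conversion the abstract builds can be written as follows, assuming Norton-law creep; here η and β stand for the dimensionless conversion factors obtained from the FE curve fits (their exact forms below are illustrative, not the paper's values):

```latex
\dot{\varepsilon}^{c} = B\,\sigma^{n}, \qquad
\sigma_{\mathrm{ref}} = \eta\,\frac{P\,l}{b\,d^{2}}, \qquad
\dot{\Delta}^{ss} = \beta\,l\,B\,\sigma_{\mathrm{ref}}^{\,n}
```

Inverting these relations maps a measured load P and steady-state load-line rate to an equivalent uniaxial stress and creep strain rate pair, which is the "inverse" step that recovers uniaxial creep properties from the miniature bending test.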
Procedia PDF Downloads 474
21825 Analyzing the Commentator Network Within the French YouTube Environment
Authors: Kurt Maxwell Kusterer, Sylvain Mignot, Annick Vignes
Abstract:
To the best of our knowledge, YouTube is the largest video hosting platform in the world. A high number of creators, viewers, subscribers and commentators act in this specific ecosystem, which generates huge sums of money. Views, subscribers, and comments help to increase the popularity of content creators. The most popular creators are sponsored by brands and participate in marketing campaigns, and for a few of them this becomes a financially rewarding profession. This is made possible through the YouTube Partner Program, which shares revenue among creators based on their popularity. We believe the role of comments in increasing popularity deserves emphasis. In what follows, YouTube is considered as a bipartite network between videos and commentators. Analyzing a detailed data set focused on French YouTubers, we consider each comment as a link between a commentator and a video. Our research question asks which features of a video give it the highest probability of being commented on. Following on from this question, how can we use these features to predict the action of an agent in commenting on one video instead of another, considering the characteristics of the commentators, videos, topics, channels, and recommendations? We expect to see that the videos of more popular channels generate higher viewer engagement and thus are more frequently commented on. The interest lies in discovering features which have not classically been considered as markers for popularity on the platform. A quick view of our data set shows that 96% of commentators comment only once on a given video. Thus, we study a non-weighted bipartite network between commentators and videos built on the sub-sample of the 96% of unique comments. A link exists between two nodes when a commentator makes a comment on a video. We run an Exponential Random Graph Model (ERGM) approach to evaluate which characteristics influence the probability of commenting on a video. The creation of a link will be explained in terms of common video features, such as duration, quality, number of likes, number of views, etc. Our data cover the period 2020-2021 and focus on the French YouTube environment. From this set of 391,588 videos, we extract the channels which can be monetized according to YouTube regulations (channels with at least 1,000 subscribers and more than 4,000 hours of viewing time during the last twelve months). In the end, we have a data set of 128,462 videos across 4,093 channels. Based on these videos, we have a data set of 1,032,771 unique commentators, with a mean of 2 comments per commentator, a minimum of 1 comment each, and a maximum of 584 comments.
Keywords: YouTube, social networks, economics, consumer behaviour
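Constructing the non-weighted bipartite commentator-video network is straightforward; a sketch is below. ERGMs themselves are usually fitted with the R statnet/ergm packages, so the regression at the end is only a crude stand-in for the ERGM link-probability analysis, and the file and column names are hypothetical.

```python
import networkx as nx
import pandas as pd
import statsmodels.api as sm

# One row per unique (commentator, video) comment; names are hypothetical.
comments = pd.read_csv("comments.csv")
videos = pd.read_csv("videos.csv").set_index("video_id")

# Bipartite graph: commentators on one side, videos on the other.
B = nx.Graph()
B.add_nodes_from(comments["user_id"].unique(), bipartite=0)
B.add_nodes_from(videos.index, bipartite=1)
B.add_edges_from(comments[["user_id", "video_id"]].itertuples(index=False))

# Degree of a video = number of distinct commentators it attracted.
videos["n_commentators"] = [B.degree(v) if v in B else 0 for v in videos.index]

# Crude proxy check of which video features attract comments (not an ERGM).
X = sm.add_constant(videos[["duration", "n_likes", "n_views"]])
print(sm.OLS(videos["n_commentators"], X).fit().summary())
```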
Procedia PDF Downloads 68
21824 Production Increase of C-Central Wells, Bahr Essalam, Libya
Authors: Emed Krekshi, Walid Ben Husein
Abstract:
The Bahr Essalam gas-condensate field is located off the Libyan coast and is currently being produced by Mellitah Oil and Gas (MOG). Gas and condensate are produced from the Bahr Essalam reservoir through a mixture of platform and subsea wells, with the subsea wells being gathered at the western manifolds and delivered to the Sabratha platform via a 22-inch pipeline. Gas is gathered and dehydrated on the Sabratha platform and then delivered to the Mellitah gas plant via an existing 36-inch gas export pipeline. The condensate separated on the Sabratha platform is delivered to the Mellitah gas plant via an existing 10-inch export pipeline. The Bahr Essalam Phase II project includes two production wells (CC16 and CC17) at C-Central A connected to the Sabratha platform via a new 10.9 km long 10"/14" production pipeline. Production rates from CC16 and CC17 have exceeded the maximum planned rate of 40 MMSCFD per well. A hydrothermal analysis was conducted to review and verify the input data, focusing on the variation of the flowing wellhead pressure as a function of flow rate, and to compare the available input data against the previous design input data to determine the extent of change. The steady-state and transient simulations performed with OLGA yielded coherent results and confirmed the possibility of achieving flow rates of up to 60 MMSCFD per well without exceeding the design temperatures, pressures, and velocities.
Keywords: Bahr Essalam, Mellitah Oil and Gas, production flow rates, steady and transient
Procedia PDF Downloads 58
21823 The Effect of Change Communication towards Commitment to Change through the Role of Organizational Trust
Authors: Enno R. Farahzehan, Wustari L. Mangundjaya
Abstract:
Organizational change is necessary to develop innovation and to compete with other competitors. Organizational changes are also made to defend the existence of the organization itself. Success in implementing organizational change depends on a variety of factors, one of which is the individuals (employees) who carry out the changes. Employees must have the willingness and ability to carry out the changes, and they must also have a commitment to change for organizational change to succeed. This study aims to examine the effect of change communication on commitment to change through the role of organizational trust. The respondents of this study were employees who work in organizations which have been or are currently undergoing organizational changes. The data were collected using the Change Communication, Commitment to Change, and Organizational Trust Inventory. The data were analyzed using regression. The results showed that there is an effect of change communication on commitment to change, which is stronger when mediated by organizational trust. This paper will contribute to the knowledge and implications of organizational change by showing that change communication can affect commitment to change among employees if there is trust in the organization.
Keywords: change communication, commitment to change, organizational trust, organizational change
Procedia PDF Downloads 342
21822 The Classification Accuracy of Finance Data through Holder Functions
Authors: Yeliz Karaca, Carlo Cattani
Abstract:
This study focuses on the local Holder exponent as a measure of function regularity for time series related to finance data. The attributes of a finance dataset covering 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 different continents (Asia, Europe, Australia, North America and South America) have been examined. These countries are the ones most affected by the attributes related to financial development, covering the period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the financial development of the countries identified. Our method comprises the following stages: (a) among the multifractal methods and Brownian motion Holder regularity functions (polynomial, exponential), significant and self-similar attributes were identified; (b) the significant and self-similar attributes were applied to Artificial Neural Network (ANN) algorithms (Feed Forward Back Propagation (FFBP) and Cascade Forward Back Propagation (CFBP)); (c) the classification accuracy outcomes were compared with respect to the attributes that affect the countries' financial development. This study reveals, through the application of ANN algorithms, how the most significant attributes are identified within the relevant dataset via the Holder functions (polynomial and exponential).
Keywords: artificial neural networks, finance data, Holder regularity, multifractals
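A pointwise Holder exponent can be estimated in several ways; one simple oscillation-based estimator (reading the exponent off the slope of log-oscillation versus log-scale) is sketched below. The choice of radii is an assumption, and this is a generic estimator rather than the study's exact procedure.

```python
import numpy as np

def local_holder(x, t, radii=(2, 4, 8, 16, 32)):
    """Estimate the pointwise Holder exponent of series x at index t
    from the scaling of local oscillations: osc_r ~ C * r**alpha."""
    osc = []
    for r in radii:
        w = x[max(0, t - r): t + r + 1]
        osc.append(max(w.max() - w.min(), 1e-12))   # local oscillation
    slope, _ = np.polyfit(np.log(radii), np.log(osc), 1)
    return slope   # alpha(t): smaller values mean a rougher series

# Sanity check: a rough (Brownian-like) series should give a smaller
# exponent than a smooth one.
rng = np.random.default_rng(1)
rough = np.cumsum(rng.standard_normal(512))   # alpha near 0.5
smooth = np.sin(np.linspace(0, 8, 512))       # alpha near 1
print(local_holder(rough, 256), local_holder(smooth, 256))
```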
Procedia PDF Downloads 246
21821 3D Property Modelling of the Lower Acacus Reservoir, Ghadames Basin, Libya
Authors: Aimen Saleh
Abstract:
The Silurian Lower Acacus sandstone is one of the main reservoirs in northwest Libya. Our aim in this study is to gain a robust understanding of the hydrocarbon potential and distribution in the area. To date, the depositional environment of the Lower Acacus reservoir remains open to discussion and contradiction. Hence, building a three-dimensional (3D) property model is one way to support the analysis and description of the reservoir, its properties and characterizations, and is of great value to this project. The 3D model integrates different data sets, incorporating well log data, petrophysical reservoir properties and seismic data. The finalized depositional environment model of the Lower Acacus concludes that the area lies in a deltaic transitional depositional setting, ranging from a wave-dominated to a tide-dominated delta type. This interpretation was carried out through a series of steps of model generation, core description and Formation Microresistivity Image (FMI) tool interpretation. Analysis of the core data shows that the Lower Acacus layers display a strong effect of tidal energy, with traces imprinted in different types of sedimentary structures, for example, cross-bedding such as herringbone structures and wavy and flaser bedding. In spite of the recognition of some minor marine transgression events in the area, the coarsening-upward cycles of sand and shale layers in the Lower Acacus demonstrate the presence of a major regressive phase of the sea level. Consequently, we produced a final package for this model as a complementary set of facies distribution, porosity and oil presence. It also records the petroleum system and the process of hydrocarbon migration and accumulation. Finally, this model suggests that the area can be outlined into three main segments of hydrocarbon potential, which can be a textbook guide for future exploration and production strategies in the area.
Keywords: Acacus, Ghadames, Libya, Silurian
Procedia PDF Downloads 143
21820 Variations in Heat and Cold Waves over Southern India
Authors: Amit G. Dhorde
Abstract:
It is now well established that global surface air temperatures have increased significantly during the period that followed the industrial revolution. One of the main predictions of climate change is that the occurrence of extreme weather events will increase in the future. In many regions of the world, high-temperature extremes have already started occurring with rising frequency. The main objective of the present study is to understand spatial and temporal changes in days with heat and cold wave conditions over southern India. The study area includes the region of India that lies to the south of the Tropic of Cancer. To fulfill the objective, daily maximum and minimum temperature data for 80 stations were collected for the period 1969-2006 from the National Data Center of the India Meteorological Department. After assessing the homogeneity of the data, 62 stations were finally selected for the study. Heat and cold waves were classified as slight, moderate and severe based on the criteria given by the India Meteorological Department. For every year, the number of days experiencing heat and cold wave conditions was computed. These data were analyzed with linear regression to detect any existing trend. Further, the time period was divided into four decades to investigate the decadal frequency of the occurrence of heat and cold waves. The results revealed that the average annual temperature over southern India shows an increasing trend, which signifies warming over this area. Further, slight cold waves during the winter season have been decreasing at the majority of the stations, and the moderate cold waves show a similar pattern at the majority of the stations. This is an indication of warming winters over the region. Besides this analysis, other extreme indices were also analyzed, such as extremely hot days, hot days, very cold nights, and cold nights. This analysis revealed that nights are becoming warmer and days are getting warmer over some regions too.
Keywords: heat wave, cold wave, southern India, decadal frequency
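The trend-detection step amounts to regressing the annual count of heat- or cold-wave days on the year and testing the slope. A minimal sketch, with synthetic values standing in for a station's observed counts so that it runs standalone:

```python
import numpy as np
from scipy import stats

years = np.arange(1969, 2007)
# Replace with the observed annual heat-wave-day counts for a station;
# the synthetic series below is illustrative only.
rng = np.random.default_rng(0)
hw_days = 5 + 0.08 * (years - years[0]) + rng.poisson(2, years.size)

res = stats.linregress(years, hw_days)
print(f"trend: {res.slope:+.3f} days/year, p-value: {res.pvalue:.4f}")
if res.pvalue < 0.05:
    print("the station shows a statistically significant trend (5% level)")
```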
Procedia PDF Downloads 128
21819 Study of the Effect of Inclusion of TiO2 in Active Flux on Submerged Arc Welding of Low Carbon Mild Steel Plate and Parametric Optimization of the Process by Using DEA Based Bat Algorithm
Authors: Sheetal Kumar Parwar, J. Deb Barma, A. Majumder
Abstract:
Submerged arc welding is a very complex process. It is a very efficient and high-performance welding process. In the present study, an attempt has been made to reduce welding distortion by increasing the amount of oxide flux through TiO2 addition in the submerged arc welding process. Care has been taken to avoid an excessive amount of the added agent in order to attain significant results. A Data Envelopment Analysis (DEA) based BAT algorithm is used for parametric optimization, in which DEA is used to convert multiple response parameters into a single response parameter. The present study also helps to establish the effectiveness of the addition of TiO2 to the active flux during the submerged arc welding process.
Keywords: BAT algorithm, design of experiment, optimization, submerged arc welding
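The DEA step, collapsing several weld responses into one score that the bat algorithm can then optimise, can be sketched with an input-oriented CCR model solved as a linear program. This is a generic DEA formulation, not the authors' exact setup, and the example inputs/outputs are invented:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y):
    """Input-oriented CCR DEA. X: (n, m) inputs, Y: (n, s) outputs.
    Returns an efficiency score in (0, 1] per experimental run,
    collapsing the multiple responses into a single value."""
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for k in range(n):
        c = np.concatenate([-Y[k], np.zeros(m)])            # maximise u.y_k
        A_ub = np.hstack([Y, -X])                           # u.y_j <= v.x_j
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[k]])[None]    # v.x_k = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (s + m))
        scores[k] = -res.fun
    return scores

# Invented example: four weld runs, one input to keep small (heat input)
# and two responses to maximise (strength, penetration). The DEA score is
# the single objective a metaheuristic such as the bat algorithm optimises.
X = np.array([[1.2], [1.0], [1.5], [0.9]])
Y = np.array([[410, 4.1], [395, 3.6], [430, 4.4], [370, 3.2]])
print(dea_efficiency(X, Y))
```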
Procedia PDF Downloads 639
21818 Introducing a Proper Total Quality Management Model for Libraries
Authors: Alireza Shahraki, Kaveh Keshmiry Zadeh
Abstract:
Total quality management (TQM) in libraries is of particular importance because high-quality libraries can facilitate the sustained development process in countries. This study was conducted to examine the feasibility of implementing total quality management in the libraries of Sistan and Baluchestan and to provide an appropriate model for this purpose. All of the officials and employees of the Sistan and Baluchestan libraries (23 individuals) constitute the population of the study. The data gathering tool is a questionnaire designed based on ISO 9000. The data extracted from the questionnaires were analyzed using SPSS software. The results indicate that the highest degree of conformance to the 8 principles of ISO 9000 is attributed to the principle of 'users' (69.9%) and the lowest degree is associated with 'decision making based on facts' (39.1%). Moreover, significant relationships were observed among the items (1 and 3), (2 and 5), (2 and 7), (3 and 5), (4 and 5), (4 and 7), (4 and 8), (5 and 7), and (7 and 8). According to the research findings, it can generally be said that it is not currently feasible to implement TQM in the libraries of Sistan and Baluchestan.
Keywords: quality management, total quality, university libraries, libraries management
Procedia PDF Downloads 340
21817 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines a method for developing a surrogate life cycle model based on fuzzy logic, using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering to define the membership functions, and (3) the Adaptive Neuro-Fuzzy Inference System (ANFIS), a combination of fuzzy inference and artificial neural networks. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were consequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it can generate questionable solutions, e.g., negative GWP values for the solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of ANFIS models depends highly on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point at which increasing the input values no longer improves the GWP and LCOE. In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that can be used for prediction. In the PV case study, conventional FIS generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result than those generated by the life cycle methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that can be used during the initial design of a system.
Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
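A conventional Mamdani-type FIS of the kind compared in the study can be sketched with scikit-fuzzy. The universes, membership partitions, and rules below are illustrative assumptions, not the paper's calibrated system; note that automf(3) names its three sets 'poor', 'average', and 'good' along the universe, so gwp["good"] here denotes the high-GWP end.

```python
import numpy as np
from skfuzzy import control as ctrl

# Assumed universes of discourse for two inputs and one output.
irradiation = ctrl.Antecedent(np.arange(800, 2601, 50), "irradiation")  # kWh/m2/yr
efficiency = ctrl.Antecedent(np.arange(10, 25.1, 0.5), "efficiency")    # %
gwp = ctrl.Consequent(np.arange(10, 101, 1), "gwp")                     # g CO2-eq/kWh

# Three triangular membership functions per variable: poor/average/good.
irradiation.automf(3)
efficiency.automf(3)
gwp.automf(3)

rules = [
    # Low irradiation and low efficiency -> high GWP per kWh.
    ctrl.Rule(irradiation["poor"] & efficiency["poor"], gwp["good"]),
    # High irradiation and high efficiency -> low GWP per kWh.
    ctrl.Rule(irradiation["good"] & efficiency["good"], gwp["poor"]),
    ctrl.Rule(irradiation["average"], gwp["average"]),
]

system = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
system.input["irradiation"] = 1700
system.input["efficiency"] = 18
system.compute()
print(system.output["gwp"])   # defuzzified GWP estimate
```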
Procedia PDF Downloads 164
21816 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain
Authors: Zachary Blanks, Solomon Sonya
Abstract:
Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to support terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats have a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in hopes of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks that is both easy to train and performs well when compared to other models. In this research, we demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (logistic regression, support vector machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research groups at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. This research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered from Ugandan rainforest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predicting the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using stochastic gradient boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire season, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous prediction schedules by entire seasons.
Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection
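The pipeline of random-forest multiple imputation followed by stochastic gradient boosting can be sketched with scikit-learn. Synthetic data stands in for the ranger patrol records; setting subsample below 1 is what makes the boosting stochastic.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor, GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the patrol data: 8 covariates, binary poaching label.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)
X_train, y_train = X[:1600].copy(), y[:1600]
X_test, y_test = X[1600:], y[1600:]
X_train[rng.random(X_train.shape) < 0.2] = np.nan   # missingness in training years only

# Random-forest-based iterative imputation of the missing covariates.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, n_jobs=-1, random_state=0),
    max_iter=5, random_state=0)
X_train_filled = imputer.fit_transform(X_train)

# subsample < 1 turns this into *stochastic* gradient boosting.
clf = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05,
                                 subsample=0.8, random_state=0)
clf.fit(X_train_filled, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```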
Procedia PDF Downloads 292
21815 Modelling High-Frequency Crude Oil Dynamics Using Affine and Non-Affine Jump-Diffusion Models
Authors: Katja Ignatieva, Patrick Wong
Abstract:
We investigate the dynamics of high-frequency energy prices, including crude oil and electricity prices. The returns of the underlying quantities are modelled using various parametric models, such as stochastic volatility with correlated jumps (SVCJ), as well as non-parametric alternatives, which are purely data-driven and do not require specification of the drift or the diffusion coefficient function. Using different statistical criteria, we investigate the performance of the considered parametric and non-parametric models in their ability to forecast price series and volatilities. Our models incorporate possible seasonalities in the underlying dynamics and utilise advanced estimation techniques for the dynamics of energy prices.
Keywords: stochastic volatility, affine jump-diffusion models, high frequency data, model specification, Markov chain Monte Carlo
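An SVCJ price path can be simulated with a simple Euler scheme, which also lays out the model's ingredients: correlated Brownian drivers, a mean-reverting variance, and contemporaneous jumps in returns and variance. All parameter values below are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

def simulate_svcj(T=1.0, n=24 * 365, s0=100.0, v0=0.04,
                  mu=0.05, kappa=3.0, theta=0.04, sigma_v=0.4, rho=-0.6,
                  lam=20.0, mu_s=-0.02, sig_s=0.05, mu_v=0.02, seed=0):
    """Euler scheme for SVCJ: stochastic volatility with contemporaneous
    jumps in returns (normal) and variance (exponential)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    s = np.empty(n + 1); v = np.empty(n + 1)
    s[0], v[0] = np.log(s0), v0
    for t in range(n):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()
        jump = rng.random() < lam * dt                   # Poisson arrival
        jr = rng.normal(mu_s, sig_s) if jump else 0.0    # return jump
        jv = rng.exponential(mu_v) if jump else 0.0      # variance jump
        vp = max(v[t], 0.0)                              # full truncation
        s[t + 1] = s[t] + (mu - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1 + jr
        v[t + 1] = v[t] + kappa * (theta - vp) * dt \
                   + sigma_v * np.sqrt(vp * dt) * z2 + jv
    return np.exp(s), v

prices, variance = simulate_svcj()
print(prices[-1], variance.mean())
```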
Procedia PDF Downloads 104
21814 Educational Framework for Coaches on Injury Prevention in Adolescent Team Sports
Authors: Chantell Gouws, Lourens Millard, Anne Naude, Jan-Wessel Meyer, Brandon Stuwart Shaw, Ina Shaw
Abstract:
Background: Millions of South African youths participate in team sports, with netball and rugby being two of the largest worldwide. This increased participation and professionalism have resulted in an increase in the number of musculoskeletal injuries. Objective: This study examined the extent to which sport coaching knowledge translates into the prevention of injuries in adolescents participating in netball and rugby. Methods: Thirty-four South African sports coaches participated in the study. Eighteen netball coaches and 16 rugby coaches with varying levels of coaching experience were selected to participate. An adapted version of Nash and Sproule's questionnaire was used to investigate the coaches' knowledge with regard to sport-specific common injuries, injury prevention, fitness/conditioning, individual technique development, training programs, mental training, and preparation of players. The analysis of data was carried out using a number of different techniques outlined by Nash and Sproule (2012), determined by the type of data: descriptive statistics provided the statistical analysis, quantitative data were used to determine the educational framework and knowledge of sports coaches on injury prevention, and numerical data were obtained through questions on sports injuries as well as coaches' sports knowledge levels. Participants' knowledge was measured using a standardized scoring system. Results: For coaches with 0-4 years of netball coaching experience, 76.4% had knowledge and experience and 33.3% appropriate first aid knowledge, while for those with 9-12 and 13-16 years, 100% had knowledge and experience and first aid knowledge. For 0-4 years of rugby coaching experience, 59.1% had knowledge and experience and 71% the appropriate first aid knowledge; for 17-20 years, 100% had knowledge and experience and first aid knowledge, while for 25 years or more, 45.5% had knowledge and experience. In netball, 90% of injuries were ankle injuries, followed by 70% for knee, 50% for shoulder, 20% for lower leg, and 15% for finger injuries. In rugby, 81% of the injuries occurred at the knee, followed by 50% for the shoulder, 40% for the ankle, 31% for the head and neck, and 25% for hamstring injuries. Six hours of training resulted in a 13% chance of injury in netball and a 32% chance in rugby. For 10 hours of training, the injury prevalence was 10% in netball and 17% in rugby, while 15 hours resulted in an injury incidence of 58% in netball players and 25% in rugby players. Conclusion: This study highlights the need for coaches to improve their knowledge in relation to injuries and injury prevention, along with factors that act as preventative measures and promote players' well-being.
Keywords: musculoskeletal injury, sport coaching, sport trauma
Procedia PDF Downloads 161
21813 Tourists' Perception of the Osun Osogbo Festival in Osogbo, Osun State, Nigeria
Authors: Yina Donald Orga
Abstract:
The Osun Osogbo festival is one of the biggest art festivals in Nigeria, with over 235,518 tourist visits in 2014. The purpose of this study is to generate data on tourists' perception of the Osun Osogbo Festival in Osogbo, Osun State, Nigeria. Based on the population of 199,860 tourist visits to the Osun Osogbo festival in 2013, the Krejcie and Morgan sample size table was used to select 768 tourists/respondents. Likert-scale questionnaires were used to elicit data from the respondents. Descriptive statistics were used to describe the characteristics of the respondents and analyse the tourists' perception of the festival. The findings from the data analysed suggest that the trend of domestic and international tourist visits to the festival over the past ten years showed a consistent increase from 2004, except in 2007 and 2008, and continued to increase up to 2013. This is an indication that tourists are satisfied with the traditional, historical and authentic features of the festival. The study also revealed that tourists are not satisfied with the number of toilets at the Osun Sacred Grove, the crowd control of visitors during the festival, or the medical personnel available to cater for visitors during the festival. In view of the findings of the study, the following recommendations are suggested: provision of more toilets at the Osun Sacred Grove; recruitment of festival guides by the Osogbo Heritage Council to help control the huge crowd at the festival; and engagement of adequate medical personnel by the Government of the State of Osun, in conjunction with the Red Cross Society, to cater for the medical needs of visitors at the festival.
Keywords: festival, perception, positive, tourists
Procedia PDF Downloads 206
21812 End-to-End Process to Automate a Batch Application
Authors: Nagmani Lnu
Abstract:
Often, quality engineering refers to testing applications that have either a User Interface (UI) or an Application Programming Interface (API). We often find mature test practices, standards, and automation for UI or API testing. However, another kind of application is present in almost all industries that deal with data in bulk; it is often handled through something called a batch application. This is primarily an offline application that companies develop to process large data sets, often involving multiple business rules. The challenge becomes more prominent when we try to automate batch testing. This paper describes the approach taken to test a batch application from the financial industry for the payment settlement process (a critical use case in all kinds of FinTech companies), resulting in 100% test automation in test creation and test execution. One can follow this approach for any other batch use case to achieve higher efficiency in the testing process.
Keywords: batch testing, batch test automation, batch test strategy, payments testing, payments settlement testing
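The end-to-end pattern (generate a deterministic input, run the batch job, then assert the business rules on its output) can be sketched as a pytest test. The settlement-batch command, file formats, and business rule below are placeholders for whatever entry point and schema the real job uses.

```python
import csv
import subprocess
from pathlib import Path

def run_settlement_batch(input_file: Path, output_file: Path) -> None:
    """Invoke the batch job under test; the CLI name is a placeholder."""
    subprocess.run(["settlement-batch", "--in", str(input_file),
                    "--out", str(output_file)], check=True)

def test_settlement_totals(tmp_path):
    # 1. Generate a deterministic input data set.
    infile, outfile = tmp_path / "payments.csv", tmp_path / "settled.csv"
    rows = [("TX1", "100.00"), ("TX2", "250.50"), ("TX3", "-50.00")]
    with infile.open("w", newline="") as f:
        csv.writer(f).writerows([("tx_id", "amount"), *rows])

    # 2. Execute the batch application end to end.
    run_settlement_batch(infile, outfile)

    # 3. Validate the output against the business rules.
    with outfile.open() as f:
        settled = {r["tx_id"]: float(r["amount"]) for r in csv.DictReader(f)}
    assert set(settled) == {"TX1", "TX2", "TX3"}   # every payment settled
    assert sum(settled.values()) == 300.50         # net settlement total
```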
Procedia PDF Downloads 60
21811 Improving the Logistic System to Secure an Effective Food Fish Supply Chain in Indonesia
Authors: Atikah Nurhayati, Asep A. Handaka
Abstract:
Indonesia is a major world fish producer, able to feed not only its own citizens but also people elsewhere in the world. Currently, the total annual production is about 11 million tons and is expected to double by the year 2050. Given this potential, fishery has been an important part of the national food security system in Indonesia. Despite such potential, a big challenge faces Indonesians in making fish a reliable food source, more specifically a source of protein intake. The long geographic distance between the fish production centers and the concentrations of consumers has prevented an effective supply chain from producers to consumers and therefore demands a good logistic system. This paper is based on our research, which aimed at analyzing the fish supply chain and suggesting relevant improvements to the chain. The research was conducted in 2016 in selected locations on Java Island, where intensive transactions in fishery commodities occur. The data used in this research comprise secondary time series data from reports on production and distribution, and primary data regarding distribution aspects, collected through interviews with 100 purposively selected respondents representing fishers, traders and processors. The data were analyzed following the supply chain management framework and processed with logistic regression and validity tests. The main findings of the research are as follows. Firstly, it was found that an improperly managed connectivity and logistic chain is the main cause of insecure availability and affordability for consumers. Secondly, the lack of quality of most local processed products is a major obstacle to improving affordability and connectivity. The paper concludes with a number of recommended strategies to tackle the problem, including rationalization of the length of the existing supply chain, intensification of processing activities, and improvement of distribution infrastructure and facilities.
Keywords: fishery, food security, logistic, supply chain
Procedia PDF Downloads 241
21810 Hand Gesture-Based Emotion Identification Using Flex Sensors
Authors: S. Ali, R. Yunus, A. Arif, Y. Ayaz, M. Baber Sial, R. Asif, N. Naseer, M. Jawad Khan
Abstract:
In this study, we propose a gesture-to-emotion recognition method using flex sensors mounted on the metacarpophalangeal joints. The flex sensors are fixed in a wearable glove, and the data from the glove are sent to a PC over Wi-Fi. Four gestures: finger pointing, thumbs up, fist open, and fist close, were performed by five subjects. Each gesture is categorized into a sad, happy, or excited class based on the velocity and acceleration of the hand movement. Seventeen inspectors observed the emotions and hand gestures of the five subjects, and the emotional state based on the investigators' assessment was compared with that derived from the acquired movement speed data. Overall, we achieved 77% accurate results. Therefore, the proposed design can be used for emotional state detection applications.
Keywords: emotion identification, emotion models, gesture recognition, user perception
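The velocity/acceleration-based categorisation can be sketched as follows. The sampling interval and the thresholds are illustrative assumptions, since the study calibrates these empirically against the inspectors' ratings.

```python
import numpy as np

def classify_emotion(flex_series, dt=0.02):
    """flex_series: (n_samples, 4) flex readings from the four MCP joints,
    sampled every dt seconds. Classify the gesture's emotional tone from
    how fast and how abruptly the hand moved."""
    velocity = np.gradient(flex_series, dt, axis=0)
    accel = np.gradient(velocity, dt, axis=0)
    v, a = np.abs(velocity).mean(), np.abs(accel).mean()
    # Thresholds below are placeholders, not the paper's calibrated values.
    if v > 80 and a > 400:
        return "excited"   # fast, jerky movement
    if v > 30:
        return "happy"     # moderately brisk movement
    return "sad"           # slow movement
```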
Procedia PDF Downloads 285
21809 Causal Inference Engine between Continuous Emission Monitoring System and Air Pollution Forecast Modeling
Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng
Abstract:
This paper develops a data-driven model to address the causality between the Continuous Emission Monitoring System (CEMS, operated by the Environmental Protection Administration, Taiwan) in industrial factories and the air quality of the surrounding environment. Compared to the heavy burden of traditional numerical models for regional weather and air pollution simulation, the lightweight burden of the proposed model can provide hourly forecasting from current observations of weather, air pollution and emissions from factories. The observation data include wind speed, wind direction, relative humidity, temperature and others. The observations can be collected in real time from the Open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations and 140 CEMS-equipped quantitative industrial factories. This study completed a causal inference engine and produces an air pollution forecast for the next 12 hours related to local industrial factories. The pollution forecasts are produced hourly with a grid resolution of 1 km x 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The procedures elaborated to generate the forecasts comprise data recalibration, outlier elimination, kriging interpolation, and particle tracking with random walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission (Total Suspension Emission, TSP) sensors and the forecasted air pollution map, this study also discloses the conversion mechanism between TSP and PM2.5/PM10 for different regional and industrial characteristics, according to long-term data observation and calibration. These different time-series qualitative and quantitative data successfully realise a practicable cloud-based causal inference engine for factory management control. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to suppress their operations and reduce emissions in advance.
Keywords: continuous emission monitoring system, total suspension particulates, causal inference, air pollution forecast, IoT
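The diffusion-advection mechanism via particle tracking and random walk can be sketched in two dimensions: each particle is advected by the (kriged) wind field and takes a Gaussian random step whose size follows from the diffusivity. All values below are illustrative, and a constant wind stands in for the interpolated field.

```python
import numpy as np

def advect_diffuse(sources, wind_u, wind_v, D=50.0, dt=60.0, steps=720,
                   n_particles=10_000, seed=0):
    """2-D Lagrangian particle tracking.
    sources: (k, 2) stack emission coordinates in metres.
    wind_u, wind_v: callables returning wind components (m/s) at positions.
    D: turbulent diffusivity (m^2/s)."""
    rng = np.random.default_rng(seed)
    p = sources[rng.integers(len(sources), size=n_particles)].astype(float)
    sigma = np.sqrt(2.0 * D * dt)       # random-walk step from diffusivity
    for _ in range(steps):              # e.g. 720 x 60 s = 12-hour horizon
        p[:, 0] += wind_u(p) * dt + rng.normal(0.0, sigma, n_particles)
        p[:, 1] += wind_v(p) * dt + rng.normal(0.0, sigma, n_particles)
    return p   # histogram onto a 1 km x 1 km grid to get concentrations

# Usage with a uniform 3 m/s westerly wind standing in for kriged fields:
plume = advect_diffuse(np.array([[0.0, 0.0]]),
                       wind_u=lambda p: 3.0, wind_v=lambda p: 0.0)
print(plume.mean(axis=0))   # plume centroid after 12 hours
```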
Procedia PDF Downloads 87
21808 Artificial Intelligence-Based Approaches for Task Offloading, Resource Allocation and Service Placement of Internet of Things Applications: State of the Art
Authors: Fatima Z. Cherhabil, Mammar Sedrati, Sonia-Sabrina Bendib
Abstract:
In order to support the continued growth and critical latency requirements of IoT applications, and to overcome various obstacles of traditional data centers, mobile edge computing (MEC) has emerged as a promising solution that extends cloud data processing and decision-making to edge devices. By adopting a MEC structure, IoT applications can be executed locally, on an edge server, on different fog nodes, or in distant cloud data centers. However, we are often faced with optimizing conflicting criteria, such as minimizing the energy consumption of the limited local capabilities (in terms of CPU, RAM, storage, bandwidth) of mobile edge devices while maintaining high performance (reducing response time, increasing throughput and service availability) at the same time. Achieving one goal may affect the other, making task offloading (TO), resource allocation (RA), and service placement (SP) complex processes. Studying the trade-off between conflicting criteria is a nontrivial multi-objective optimization problem. This paper provides a survey of the different recent TO, SP, and RA multi-objective optimization (MOO) approaches used in edge computing environments, particularly artificial intelligence (AI) ones, to satisfy the various objectives, constraints, and dynamic conditions related to IoT applications.
Keywords: mobile edge computing, multi-objective optimization, artificial intelligence approaches, task offloading, resource allocation, service placement
Procedia PDF Downloads 115
21807 Adopting Data Science and Citizen Science to Explore the Development of an African Indigenous Agricultural Knowledge Platform
Authors: Steven Sam, Ximena Schmidt, Hugh Dickinson, Jens Jensen
Abstract:
The goal of this study is to explore the potential of data science and citizen science approaches to develop an interactive, digital, open infrastructure that pulls together African indigenous agriculture and food systems data from multiple sources, making them accessible and reusable for policy, research and practice in modern food production efforts. The World Bank has recognised that African Indigenous Knowledge (AIK) is innovative and unique among local and subsistence smallholder farmers and that it is central to sustainable food production and to enhancing biodiversity and natural resources in many poor, rural societies. AIK refers to tacit knowledge held in different languages, cultures and skills, passed down from generation to generation by word of mouth. AIK is a key driver of food production, preservation, and consumption for more than 80% of citizens in Africa and can therefore assist modern efforts to reduce food insecurity and hunger. However, the documentation and dissemination of AIK remain a big challenge confronting librarians and other information professionals in Africa, and there is a risk of losing AIK owing to urban migration, modernisation, land grabbing, and the emergence of relatively small-scale commercial farming businesses. There is also a clear disconnect between AIK and scientific knowledge and modern efforts for sustainable food production. The study combines data science and citizen science approaches through active community participation to generate and share AIK, facilitating learning and promoting knowledge relevant for policy intervention and sustainable food production through a curated digital platform based on FAIR principles. The study adopts key informant interviews along with a participatory photo and video elicitation approach, in which farmers are given digital devices (mobile phones) to record and document every practice involving agriculture, food production, processing, and consumption by traditional means. Data collected are analysed using the UK Science and Technology Facilities Council's proven methodology of citizen science (Zooniverse) and data science. Outcomes are presented in participatory stakeholder workshops, where the researchers outline plans for creating the platform and developing the knowledge-sharing standard framework and copyright agreement. Overall, the study shows that learning from AIK, by investigating what local communities know and have, can improve understanding of food production and consumption, in particular in times of stress or shocks affecting the food systems and communities. Thus, the platform can be useful for local populations, researchers, and policy-makers, and it could lead to transformative innovation in the food system, creating a fundamental shift in the way the North supports sustainable, modern food production efforts in Africa.
Keywords: African indigenous agriculture knowledge, citizen science, data science, sustainable food production, traditional food system
Procedia PDF Downloads 82
21806 Energy Efficiency Analysis of Crossover Technologies in Industrial Applications
Authors: W. Schellong
Abstract:
Industry accounts for one-third of global final energy demand. Crossover technologies (e.g., motors, pumps, process heat, and air conditioning) play an important role in improving energy efficiency. These technologies are used in many applications independent of the production branch. Electrical power in particular is used by drives, pumps, compressors, and lighting. The paper demonstrates the algorithm of energy analysis through selected case studies of typical industrial processes. Energy analysis represents an essential part of energy management systems (EMS). In general, process control systems (PCS) can support an EMS: they provide information about the production process, and they organize maintenance actions. Combining these tools into an integrated process allows the development of an energy-critical-equipment strategy. Thus, asset and energy management can use the same common data to improve energy efficiency.
Keywords: crossover technologies, data management, energy analysis, energy efficiency, process control
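For the pump and fan crossover technologies, a standard building block of such an energy analysis is the affinity-law estimate of variable-speed-drive savings: shaft power scales with the cube of speed, so running at partial flow on a drive beats throttling at full speed. A small sketch, deliberately idealized (it ignores drive losses and static head), with invented example figures:

```python
def vsd_saving(p_rated_kw, flow_fraction, hours_per_year, price_per_kwh):
    """Affinity laws: P2 = P1 * (n2/n1)**3, and flow is proportional to
    speed, so partial flow on a variable-speed drive cuts shaft power
    roughly with the cube of the flow fraction."""
    p_vsd = p_rated_kw * flow_fraction ** 3
    saving_kwh = (p_rated_kw - p_vsd) * hours_per_year
    return saving_kwh, saving_kwh * price_per_kwh

# Example: a 75 kW pump run at 70% flow for 6000 h/yr at 0.15 EUR/kWh.
kwh, eur = vsd_saving(75, 0.7, 6000, 0.15)
print(f"saving: {kwh:,.0f} kWh/yr ({eur:,.0f} EUR/yr)")
```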
Procedia PDF Downloads 210